With AI Software Export Ban, Are We Restricting Bad Actors Or Crippling Innovation?
Effective this Monday (Jan 6, 2020), the US Bureau of Industry and Security has banned the export of AI software specifically designed to automate the analysis of geospatial imagery to any country except Canada. When the proposal was originally made under the Export Control Reform Act (ECRA) in 2018, it was expected to restrict all “emerging technologies” – or at least a broader scope of AI software – for specific countries, such as China.
Since then, the scope has narrowed considerably, and the current restrictions apply only to software that can auto-tag, identify, and auto-discover “points of interest” in geospatial imagery. The ban is specific to software that trains a deep convolutional neural network to automate the analysis of geospatial imagery and point clouds. The restriction may later be expanded to a broader field and/or modified to apply only to specific countries.
Here is my take on this issue. Political agendas aside, I’m looking at it from the technical, innovation, and governance perspectives. In my mind, this rule raises more questions than it answers.
The Restriction is Narrow
First, when you look at the rule closely, it is clear that it applies only to the graphical user interface (GUI) and not to the software itself, or to the underlying research that does the actual work.
Second, the current rule doesn’t exactly ban the export but requires companies to apply for a license. So if a company wants to export this software to a country other than Canada, it needs to apply for a license explaining the purpose before permission can be granted. This also raises several questions.
- Would there be favoritism, such that some companies obtain the license sooner because they are bigger, better, and mightier than others?
- Would it raise a “quid pro quo” situation by the US agency expecting something in return from the company by approving the license quickly?
- When companies apply for the license and approval takes too long, should they continue current usage while they wait, or shut down immediately until the approval comes through?
- What if companies don’t want to comply? Who will enforce it, and how? A classic example is Apple refusing to unlock a phone for a government agency in the past.
With software solutions moving to a SaaS-based cloud model, it is going to be hard to enforce who can access what, from where, and when. While it is possible to put restrictions in place based on geo-location, IP address, and time-bound controls, they will be hard to enforce at scale. For example, what if a Canadian company has a user base or employees located outside of Canada (such as call centers, analysts, and other users of the system)? What if a US or Canada-based employee is traveling abroad and wants to access this information? Should this be enforced based on the profile of an employee or the location of an employee? What if those employees, who are outside the country, travel to the US or Canada – would they be allowed to access these systems? If so, could some of these companies simply bring their employees to the US and Canada and have them use these systems?
If you restrict them based on profiling, that can open another can of worms with human resources. Also, who is responsible for enforcement – the companies that apply for licenses, the users of the system, or a government agency like the TSA? Would they check everyone’s laptop to see whether they are carrying restricted software outside the US/Canada, and if so, extract a promise not to use that software while abroad? When you make a rule, especially an Internet-related rule, it is going to be hard to enforce.
Also, what happens if a user spoofs their IP address or location? How would you enforce the usage of that user? Or, the even bigger question: if a hacker accesses the software without the company’s consent, would the software company still be responsible for that usage?
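To make the enforcement difficulty concrete, here is a minimal sketch of what IP-based geo-fencing for a SaaS product might look like. Everything here is hypothetical and for illustration only – the function names, the stub lookup table, and the sample addresses are assumptions, not any real product's implementation; a real service would query a GeoIP database, which a VPN or proxy defeats just as easily:

```python
# Hypothetical sketch of IP-based geo-fencing for a SaaS product.
# The country lookup is a stub standing in for a real GeoIP database.
ALLOWED_COUNTRIES = {"US", "CA"}

def country_of(ip_address, geoip_table):
    """Return the ISO country code for an IP, or None if unknown."""
    return geoip_table.get(ip_address)

def access_allowed(ip_address, geoip_table):
    """Grant access only to requests that appear to come from the US or Canada."""
    return country_of(ip_address, geoip_table) in ALLOWED_COUNTRIES

# Sample (documentation-range) addresses with made-up locations:
geoip = {"203.0.113.5": "US", "198.51.100.7": "DE"}
print(access_allowed("203.0.113.5", geoip))   # True
print(access_allowed("198.51.100.7", geoip))  # False
# A user abroad who tunnels through a US VPN exit node presents a US IP
# and passes this check - exactly the spoofing loophole described above.
```

The check is trivial to write and equally trivial to defeat, which is the core of the enforcement problem for SaaS delivery.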
An interesting fact is that the rule applies ONLY if all of the following are true.
- It provides a GUI.
- It reduces pixel variation when the scale changes – in other words, it fixes pixel blind spots to avoid pixelation.
- It trains a deep CNN to detect objects of interest from positive and negative samples.
- It identifies objects in geospatial imagery using the trained models.
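The conjunctive structure of the rule is worth spelling out: failing any single criterion puts software out of scope. As a sketch (the function and parameter names are hypothetical, for illustration only):

```python
# Hypothetical illustration: the export rule applies only when ALL four
# criteria hold simultaneously; failing any one puts software out of scope.
def rule_applies(provides_gui, reduces_pixel_variation,
                 trains_deep_cnn, identifies_objects):
    return all([provides_gui, reduces_pixel_variation,
                trains_deep_cnn, identifies_objects])

# An API-only tool (no GUI) with identical capabilities falls outside
# the rule as written:
print(rule_applies(False, True, True, True))  # False
print(rule_applies(True, True, True, True))   # True
```

This is what makes the restriction so narrow: dropping the GUI, or any one of the other three properties, is enough to escape it.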
The underlying research work and the software itself are not restricted from usage. Specifically, this restriction targets non-technical users who would use the GUI as a mechanism to identify “points of interest” at specific geo-locations. This raises the following questions:
- If we are worried about an enemy state, given the funding and governmental support they have, how long would it take them to develop an equivalent or better GUI?
- If a company obtains a license to export this software with the GUI to someone, and they then build an API interface (not a GUI), would the same restrictions apply?
Pricing and Economics
Given this new restriction, especially for SaaS companies whose profit margins are thin, will these companies apply a surcharge or usage penalty to their existing client base, since the rule reduces their potential user base? Or will they keep prices as is and absorb the cost out of patriotism? Or would the government consider compensating the affected software companies as it did the soybean farmers? If so, how would the compensation be calculated?
Innovation needs no bounds and knows no bounds. If we start putting restrictions on what can be innovated and what can be shared, we are walking into the dangerous territory of restricting and controlling innovation. Free minds refuse to accept that and oftentimes even become rebellious. What if a researcher refuses to continue working in a certain field because they don’t want to comply with the restriction? Would the government force them to continue to innovate for the greater good, or make them disappear to an upstate farm?
As they say, “sharing is caring.” If we stop sharing, we may end up crippling innovation and invention. This is dangerous coming from a country built on principles of freedom and innovation. A global flow of information is needed for large innovations such as AI and quantum computing to thrive.
Last, but not least, there are so many other issues with AI that we need to worry about. We need a combined effort from brilliant minds around the world to solve problems such as Ethical AI, Explainable AI, and replacing the jobs AI displaces with new ones. I hope this will all work out for the greater good in the end!
This article was originally published in Forbes on Jan 10, 2020 – https://www.forbes.com/sites/cognitiveworld/2020/01/10/are-we-restricting-bad-actors-or–crippling-innovation/?sh=70c0de1b3a57