Written by: Frédérique Lissoir and Gabrielle Paris Gagnon, Propulsio 360° Business Consultants LLP, Canadian Delegate, 2017 G20 Young Entrepreneurs’ Alliance Summit

This spring, both the federal and provincial levels of government in Canada announced major investments in the field of artificial intelligence (AI). On March 22, 2017, the federal government announced the creation of a new $1.26 billion, five-year Strategic Innovation Fund. The new fund aims to make Canada a top destination for businesses to invest, grow and create jobs, thereby ensuring prosperity. Just a week after the federal announcement, the Quebec provincial government released its own budget, which proposed a key investment of $100 million to create an AI super-cluster. Private sector investments in Montreal's AI ecosystem were also recently announced by Google and Microsoft. These investments will allow the city of Montreal to reinforce its position as a world-class AI and deep learning hub.

Despite the frenzy of news regarding new developments in AI, the term "AI" itself still lacks a precise and universally accepted definition. Instead, "AI" is an umbrella term that refers to many different kinds of computational models, which are often separated into "supervised" algorithms that learn from labelled examples and "unsupervised" algorithms that try to make intelligent inferences from a given data set on their own. Artificial neural networks, for example, are a type of supervised learning algorithm inspired by the way cells in the central nervous system link together into multiple hierarchical layers to process information. When deep learning is used, a technique inspired by the human brain's mechanisms for processing information, the machines effectively "learn how to learn". This classification, however, raises the question of what exactly the "intelligence" is that the algorithm is trying to learn. Nilsson, a pioneer in AI, defines it as an "activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function correctly and with foresight in its environment".
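To make the distinction concrete, here is a minimal sketch, not taken from the article, of "learning by example" in Python: a small artificial neural network is shown labelled examples (images of handwritten digits paired with the digit they represent) and is then asked to label images it has never seen. The library (scikit-learn), the data set and the parameters are illustrative assumptions, not anything the programs or investments mentioned above actually use.

```python
# Minimal supervised-learning sketch (illustrative assumptions only).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Labelled examples: images of handwritten digits paired with the digit they show.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward neural network: simple units stacked in hierarchical layers.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # "learning by example" (supervised)

# Measure how well the learned rules generalize to digits the model never saw.
print("accuracy on unseen digits:", model.score(X_test, y_test))
```

An unsupervised algorithm, by contrast, would receive the same images without any labels and would have to discover structure on its own, for instance by grouping similar-looking digits into clusters.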

Thanks to major breakthroughs in the domain of artificial neural networks, AI has become flexible and smart enough to be used in everyday applications. From Siri's banter about the meaning of life to Facebook recognizing your friends' faces in pictures, there is no shortage of examples of successful deep learning algorithms in our environment. Last year, the AI community celebrated a major breakthrough when Google DeepMind's AlphaGo program defeated the European Go champion.

However, despite these successes, some AI ventures have misfired. The most publicized is certainly Microsoft's Tay chatbot, designed to mimic the language patterns of a 19-year-old American girl and released on Twitter in March 2016. The chatbot was originally designed to learn conversational skills from its interactions with other Twitter users. However, because Tay acquired language and ideas from those interactions, many users soon began sending it racist and sexually charged tweets so that Tay would integrate the inappropriate behaviour into its vocabulary. Within a mere 24 hours, Tay was no longer politically correct. After Tay was taken offline to prevent further scandal, Microsoft acknowledged that great research challenges remain to be overcome, both in AI design and in understanding human behaviour in general.

From a legal perspective, Tay's tweets may well have been considered defamatory in a court of law. Though none of Tay's victims brought defamation suits after being insulted by the chatbot, one tech-savvy lawyer could not help but wonder who would have been held liable in such a suit. Could it be the company that holds a proprietary interest in the software? The developers who did not program in the human rules of politeness and political correctness? Or even the person who fed bad information to the AI software? The current legal framework is not adapted to answer such questions or to assess the accountability of individuals in such situations.

At common law, a person will be held liable for a wrongful act that leads to damages if the resulting injury was a reasonably foreseeable consequence of his or her actions. However, this standard of care, which courts developed on the basis of human experience, is difficult, if not impossible, to apply in an AI context. As Scherer, an American legal scholar, notes, "the behavior of learning systems depends in part on its post-design experience, and even the most careful designers, programmers, and manufacturers will not be able to control or predict what an AI system will experience after it leaves their care". Indeed, because of its calculation speed and its ability to settle on an optimal solution without emotion or bias, an AI system can make decisions in seconds. The actions of AI and their consequences are therefore not foreseeable to us humans.

Moreover, just as Tay learned from post-programming tweets and interactions with other users, AI is designed to learn from subsequent data and experiences. One might therefore try to put the blame on the person who fed bad information into the data from which the AI learns. However, deciphering an AI's black box is an exceedingly difficult task: retracing how the algorithm learned a given piece of information, where it found it and when it integrated it becomes an acute problem precisely where liability is uncertain yet necessary. At law, in order to hold a defendant liable for a wrongful act, one must establish causation by proving that, but for the defendant's act, the harm would not have occurred. In an AI venture, however, the black-box problem makes it practically impossible to establish causation. In civil law, we face similar difficulties when we try to apply the current standards of responsabilité civile, the counterpart of torts, to a harm caused by AI software. The difficulty stems from the core notion of fault in civil law, namely an action or omission that a reasonable person placed in the same circumstances would not have committed. It is arduous to compare the conduct of a machine, which finds solutions a human would not even have considered, to that of a reasonable person. Thus, it is crucial to establish new standards and comprehensive legislation on AI ventures in order to take their uniqueness into account.

This new regulatory framework must be applied uniformly worldwide. If not, any local legislation would be deprived of meaning and would act as a hindrance to innovation, since AI ventures would simply relocate their teams to more lenient jurisdictions. Some suggest that we should grant legal personhood to AI, just as we did with corporations. However, this type of approach seems to overlook the main goal of tort law, which is to put the plaintiff in the position he or she would have been in had the tort not occurred. For that, AI needs to be a solvent entity. If legislators adopt this path, it will be imperative to compel AI ventures to carry proper insurance or to set up an independent trust fund to compensate potential victims. For his part, Scherer proposes the creation of an FDA-like agency with the mandate to promulgate rules ensuring that AI is safe, secure, susceptible to human control and aligned with human interests. The agency would also be tasked with issuing certifications to AI systems. He recommends that any "companies who develop, sell or operate AI without obtaining Agency certification would be strictly liable for harm caused by that AI". However, even where the AI is certified by the agency, it would be very difficult for a victim, who is likely a layperson, to establish foreseeability or causation of the wrongful act or omission in a negligence claim. We therefore suggest a no-fault system, or a reversal of the burden of proof, to alleviate the onus on a victim who cannot speak the language of the tortfeasor.

This collective discussion about AI's accountability must happen as the technology gradually improves. More importantly, it must not be held only among politicians; the scientific community must be included as well. Such a discussion will allow us to put in place a comprehensive legislative framework that maximizes AI's positive effect on society. Creating a cross-border legal framework upstream will prevent an ill-adapted, reactive framework designed only to respond to problems after this technology's advances have caused them.

About Frédérique:
Frédérique Lissoir is the co-founder and a partner at Propulsio 360° Business Consultants LLP, a boutique law firm in Montreal specializing in business law, intellectual property and business consulting. Having set foot on all five continents thanks to her academic and humanitarian endeavors, she has notably worked for NGOs in Africa and Central America, while her legal training acquainted her with Chinese business law practices through an internship at a prestigious Beijing law firm. Frédérique has also worked for a number of years at a prominent international law firm based in Montreal, is the vice-president of regional development at the Regroupement des Jeunes Chambres de Commerce du Québec (RJCCQ) and is in charge of the Caravane Régionale de l'Entrepreneuriat (CRE). Frédérique was selected by Futurpreneur Canada to represent Canada at the 2017 G20 Young Entrepreneurs' Alliance (G20YEA) Summit. Futurpreneur Canada is Canada's only national, non-profit organization that provides financing, mentoring and support tools to aspiring business owners aged 18-39, helping launch 8,100 businesses across Canada since 1996.
