Panelists discuss the future of AI and policies to help it evolve into a tool to serve society
Ensuring that artificial intelligence (AI) technology remains a beneficial tool for society was a serious concern for the panelists at The Future of Law & AI: A Multidisciplinary Panel Event, on March 21.
Concordia’s Law and Society Student Association (LSSA) assembled a group of three tech professionals, as well as a lawyer whose work is intrinsically linked to artificial intelligence. The goal of the conference was to consult guests on their views regarding artificial intelligence in society and what types of policies are needed for ethical and just practices.
AI technology is rapidly being integrated into IT infrastructures across many sectors, from social media and health care to the Internet of Things, a field dedicated to adding connectivity to everyday devices that were not originally internet-enabled. This is why all panelists agreed that when developing these programs, there must be an adequate effort to anticipate the negative impact they might have. During the conference, the panel was asked whether they believed any specific policies should be implemented to regulate the proper development of AI research or its commercial use.
“Policies relating to process and impact fairness would be incredibly useful,” explained Rahul Mehrotra, a senior program manager at Microsoft Research Montreal, where he teaches machines to think, reason and communicate with humans. Mehrotra said policies are needed in his field to hold AI developers accountable for answering a few fundamental questions. “This means considering what the process was that you took to actually build your application,” said Mehrotra. “Were you being fair in every step, and what impact do you see it having on society once you’ve built it?”
Mehrotra doesn’t think that formulating these policies will be an easy task. “How would you regulate something that is new and still being developed, not to mention there are so many different ways to approach it?” he said. “How do we expect our policy makers to understand this technology in a truly wholesome way?”
Sydney Swaine-Simon, co-founder of the District 3 Innovation Centre, has his reservations as well. While he wants to prevent any harmful impact deriving from the technologies he helps build, Swaine-Simon worries AI may be too recently developed for policies to be properly formulated and implemented. For now, he said, regulation may stifle the evolution of AI rather than encourage its desired growth.
Eventually, AI has the potential to evolve into something called “general” or “strong” AI, a much more advanced form of artificial intelligence than what is operational today. It would likely outperform humans at most cognitive tasks. The form of AI currently available, also known as “weak” AI, is remarkably less advanced. It is designed to perform only narrow tasks such as voice recognition, translation or internet searches.
Panelist Andrée-Anne Perras-Fortin, a lawyer specializing in intellectual property, entertainment and technology law, shared her experience using AI software in her practice. Among other things, AI is used to scan contracts and flag clauses that may conflict with a client’s interests. “AI is incredibly useful to analyze dense amounts of information,” she said. The technology still has its limitations, since the information needs to be fed to it in a specific format for the AI to process it properly.
The software Perras-Fortin uses is an application that predicts which verdicts to expect for a case, based on precedents. Unfortunately, according to Perras-Fortin, the results aren’t accurate or substantial enough yet, especially when there aren’t enough prior examples to draw upon. This is because current AI systems cannot apply creative thinking to the problems they are tasked with solving. “The AI cannot compete creatively,” said Perras-Fortin.
AI technology is still in its early stages, and ensuring it remains beneficial to society is a priority, at least for the guest speakers at the Future of Law & AI event. This will require balancing opposing approaches to reach a compromise. The panel agreed there is a risk of getting carried away by AI’s potential; however, it’s also important to consider that people may be denied its benefits due to misunderstanding and fear. “I think what makes people uncomfortable is the lack of understanding and knowledge about AI. Sometimes it’s justified, other times maybe not so much,” said Helen Poumbouras, LSSA’s vice president of finance. “I think the problem is that AI is still quite early in its developmental stages so creating policy about it would be rather difficult.”