
Public must help guide future of artificial intelligence, experts say

Speakers at privacy and security summit weigh risks and benefits of AI.
[Photo: Andriy Onufriyenko/Moment/Getty Images]
Risks of artificial intelligence include its use in cybercrime, while benefits can be found in its effectiveness in solving large, complex problems.
How artificial intelligence (AI) and machine learning are controlled in society should be subject to public debate, not left to lawyers to shape policy, Vancouver International Privacy & Security Summit online conference delegates heard May 7.

Artificial intelligence simulates human thought, using machines programmed to reason like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind, such as learning and problem-solving. The goal is to use data to choose the actions most likely to achieve a given objective.

Or, said University of Ottawa associate professor Florian Martin-Bariteau, it’s a broad range of technologies designed to produce results that influence their environment using both human and machine inputs. Machine learning, he said, can analyze vast amounts of data to find useful patterns.

Generally, he said, that analytical capacity exceeds that of humans.

But, said Molecular You president Dr. Rob Fraser, as end users continue to feed data to its algorithms, AI is in a state of continual optimization.

Fraser said trust by users is a key to the success of AI but added there can be unintended consequences.

He cited the example of the computer in director Stanley Kubrick’s 1968 film 2001: A Space Odyssey. In that story, the computer HAL kills members of a spaceship’s crew after learning that the astronauts intend to disconnect it.

Closer to 2021, Martin-Bariteau said, the risks can involve cybercrime while benefits can be found in AI’s effectiveness in helping solve large, complex problems such as the COVID-19 pandemic.

Whatever the case, said Provincial Health Services Authority director of privacy and access Dr. Holly Longstaff, AI has arrived and people need to start thinking differently and learning differently in order to harness it.

“We need to have a cultural shift in the use of data and the governance of data,” she said. “We have to have a multidisciplinary approach.”

Further, she said, it’s necessary for the public sector to take the lead in AI.

“AI will wind up in the domain of the private sector and that’s not in patients’ best interest,” she said.

Speakers agreed a multidisciplinary approach to handling AI is needed.

But, pointed out Providence Health Care director of digital products Soyean Kim, the way in which machines are taught also has to be factored in.

“Are we teaching a machine things that are inherently biased?” she asked.

Further, asked University of California, Berkeley, machine learning professor Clarence Chio, with attacks on other systems already commonplace, how can we be assured that AI systems are secure?

“Can we trust the machines?” he asked.


jhainsworth@glaciermedia.ca

twitter.com/jhainswo
