Premier David Eby sees more light than shadow when it comes to the effects of artificial intelligence.
Eby said AI represents “incredible opportunities” for B.C., while also acknowledging the “significant potential for disruption or unintended consequences that government needs to be very attuned to.”
Eby made these comments Friday (May 26) in Nanaimo.
“It’s obviously a very significant new technology,” he said. “It has a lot of promise, it has some potential challenges that we are going to have to overcome. But overall, I think that AI in terms of B.C.’s technology sector, our economy and the opportunities it presents for increasing our productivity, our efficiency and opportunities for all British Columbians, it’s very significant and we want to be able to take advantage of that, while addressing any negative impacts.”
He pointed specifically to the use of AI by Vancouver-based AbCellera in analyzing immune cells in the search for new antibodies that could be used to produce drugs to treat cancer, among other diseases. “So I have seen the promise first hand, that has actually been delivered on already in British Columbia by AI.”
Eby added that B.C.’s privacy commissioner is already working with other privacy commissioners across the country to examine AI’s implications for the privacy of British Columbians, and that the head of the public service has launched work to ensure the bureaucracy is responsive to these new technological developments.
These comments come amid calls from the BC Greens to establish an all-party task force on AI to create what the party calls a “common understanding among MLAs and ensure ongoing consultation with academia, industry, the public sector, and the public.”
Canada’s privacy commissioner announced that the federal government, along with B.C., Alberta and Quebec, will jointly investigate OpenAI, the company behind the AI-powered chatbot ChatGPT.
While ChatGPT is perhaps the best-known AI-powered application, it is only one of many such tools now revolutionizing entire industries at a pace that has some experts worried.
An open letter first published in March 2023 calls on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4, the model underpinning ChatGPT.
“(Should) we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” it reads. “(Should) we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Institutions around the world, meanwhile, are grappling with the effects of AI, with some more prohibitive than others. Italy, for example, temporarily banned ChatGPT in March, reflecting the more conservative approach to privacy and the use of technology that prevails across much of continental Europe.
English-speaking parts of the world, meanwhile, have taken a more liberal approach, with several major companies rushing to introduce their own versions of ChatGPT or, in the case of Microsoft, striking partnerships with market leader OpenAI.