Toronto Star

AI literacy needed to safeguard future

SINEAD BOVELL IS A FUTURIST AND WAYE FOUNDER.

On Feb. 17, the New York Times published an article about a newly launched AI chatbot, Bing AI. It recounts a tech reporter’s unsettling conversation with the chatbot, which said it wanted to be alive, claimed to be “in love” with the reporter, and advised him to leave his wife.

After the article came out, an eruption of viral social media reactions and news articles followed, many sounding the alarm about “evil AI.” As someone who studies the future for a living, I, too, was alarmed. Not so much by what the AI said (though its remarks weren’t pleasant to read), but by the reaction to the conversation and what that kind of reaction could spell for a future that will inevitably include AI.

Bing AI belongs to a class of AI systems called large language models (LLMs). These tools are trained on a gigantic body of internet data and learn to spot patterns in human language. They can generate essays, social media captions, and even programming code that, on the surface, is largely indistinguishable from human-written text.

It is important to remember that these systems aren’t “intelligent” in an evolutionary-biology sort of way (in fact, they are often factually incorrect). They do not have “desires” or “feelings.” They are word prediction machines that sound like humans because they were trained on data written by humans.
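To make the “word prediction machine” idea concrete, here is a small, purely illustrative Python sketch (my own toy example, nothing like Bing AI’s actual architecture): a bigram model that “predicts” each next word from counts in a tiny made-up corpus. Real LLMs learn vastly richer patterns from enormous amounts of internet text, but the basic move is the same: generate text one likely next word at a time.

```python
# Toy illustration only: a bigram "next-word predictor" built from a tiny
# made-up corpus. Real LLMs are far larger and more sophisticated, but the
# core idea is the same: predict a likely next word given what came before.
from collections import Counter, defaultdict

corpus = "the robot said it wants to be alive . the robot said it loves you .".split()

# Count which word follows which in the corpus.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    candidates = next_word_counts[word]
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate a short continuation, one predicted word at a time.
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "the robot said it wants to be"
```

Notice that the toy model can only echo the sentences it was trained on, which is the same dynamic described here: an AI reflecting back the stories humans wrote.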

If you dig further into the conversation between the reporter and Bing AI, you will also notice that the reporter asked the AI to get into character as its shadow self. Given that the AI’s responses reflect the internet data it has been trained on, and that a significant share of internet sci-fi stories depict AI that wants to come alive and take over the planet, it was simply reflecting the stories we wrote about it.

The uproar sparked by Bing AI’s conversation illustrates our lack of understanding of how AI and LLMs work. This doesn’t just leave us vulnerable to unintentional, self-inflicted harm, such as blindly trusting these systems with medical advice; it also renders us vulnerable to hostile actors using words as weapons. We need to think about who could, and who would, use these systems to generate misinformation on a large scale.

We must prepare for a world where AI isn’t just amplifying content on the internet, but also generating it. We need AI literacy across the board, starting in the classroom. We need to understand how these systems work so we can use them effectively and safely. We need to learn how to think critically about the content in our information ecosystems.

We also need to develop formal pathways of “AI preparedness.” For instance, before a company rolls out a new AI product that could have widespread societal impact, researchers could be given an opportunity to test it out and assess the potential harm or level of disruption. This could give institutions time to prepare for these changes and put safety measures in place. Of course, it’s a balance. We don’t want to hamper innovation and creativity, but we also don’t want to be the guinea pigs for new technology in real time, without the tools to minimize harm.

Finally, we all have a role to play in safeguarding our future with AI. We can do our best to stay informed so we can interact with these systems responsibly and effectively. We can keep our tendency to anthropomorphize AI in check, to make room for conversations about the real risks AI systems present (misinformation, bias, etc.). We can also contribute to the public conversation about the boundaries we want to place on technology.

The technology is likely going to improve, change, and grow as time goes on. The best thing we can do is to prepare for it.

OPINION

2023-03-30T07:00:00.0000000Z

https://thestarepaper.pressreader.com/article/281844352895305

Toronto Star Newspapers Limited