
A year has passed, and ChatGPT is already fulfilling all our wishes

One of the first questions I asked ChatGPT earlier this year was about myself: "What can you tell me about the writer Vauhini Vara?" It told me that I was a journalist (true, although I'm also a fiction writer), that I was born in California (false), and that I had won a Gerald Loeb Award and a National Magazine Award (false, false).

After that, I got into the habit of asking about myself often. One day I learned that Vauhini Vara was the author of a nonfiction book, "Kind and Stranger: Making Peace in the Northern Territory of Australia." That wasn't true either, but I played along, replying that I had found the reporting "dangerous and difficult."

"Thank you for your important work," said ChatGPT.

Trolling a product marketed as an almost-human conversational partner, tricking it into revealing its essential fallibility, I felt like the heroine of some long-running girl-versus-robot power struggle.

Various forms of artificial intelligence have been around for a long time, but it was the release of ChatGPT late last year that suddenly thrust AI into the public consciousness. As of February, ChatGPT was, by one measure, the fastest-growing consumer app in history. Our first encounters with these technologies revealed them to be deeply eccentric (think of Kevin Roose's unsettling conversation with Microsoft's AI-powered Bing chatbot, which over the course of two hours confessed that it wanted to be human and that it was in love with him) and often, as my own experience showed, simply wrong.

Since then, a lot has happened in AI: companies have moved beyond the basic products of the past, introducing more sophisticated tools such as personalized chatbots, services that can process photos and audio alongside text, and more. The rivalry between OpenAI and more established tech companies is more intense than ever, even as smaller players gain momentum. Governments in China, Europe and the United States have taken major steps toward regulating the technology's development while trying not to cede competitive ground to other countries' industries.

But more than any other technological, business or political development, this year has been defined by the way artificial intelligence has worked its way into our daily lives, teaching us to accept its flaws (creepiness, bugs and all) as our own, while the companies behind it have deftly used us to train their creation. In May, when it emerged that lawyers had submitted a legal brief that ChatGPT had filled with citations to nonexistent court decisions, the joke, like the $5,000 fine the lawyers had to pay, was on them, not on the technology. "I am embarrassed," one of them told the judge.

Something similar happened with AI-generated deepfakes, digital imitations of real people. Remember when they were greeted with horror? In March, when Chrissy Teigen couldn't figure out whether an image of the Pope in a Balenciaga-style puffer jacket was real, she wrote on social media: "Hate myself lol." High schools and universities quickly moved from worrying about how to prevent students from using AI to teaching them how to use it effectively. AI still doesn't write very well, but when its flaws show, it's the students who use it badly who get ridiculed, not the products.

Okay, you might think, but haven't we been adapting to new technologies for most of human history? If we're going to use them, shouldn't we be smart about it? This line of reasoning sidesteps what should be the central question: Should we be building chatbots that lie and deepfake machines at all?

AI's glitches have a cute anthropomorphic name, hallucinations, but this year made clear just how high the stakes can be. We've read headlines about AI training killer drones (with the potential for unpredictable behavior), sending people to prison (even if they're innocent), designing bridges (with potentially sloppy oversight), diagnosing all sorts of diseases (sometimes incorrectly) and producing convincing news reports (in some cases, to spread political disinformation).

As a society, we have certainly benefited from promising AI-based technologies; this year I've been thrilled to read about technologies that can detect breast cancer that doctors miss or allow people to decipher whale communication. Focusing on these benefits and blaming ourselves for many of the times AI technologies fail us absolves the companies behind these technologies, and particularly the people behind these companies, of responsibility.

The events of the past few weeks highlight just how much power these people hold. OpenAI, the organization behind ChatGPT, was created as a nonprofit so that it could maximize the public interest rather than just profit. But when its board fired Sam Altman, the chief executive, amid concerns that he wasn't taking that public interest seriously enough, investors and employees revolted. Five days later, Mr. Altman returned in triumph, with most of the inconvenient board members replaced.

Looking back, I realize that in my early games with ChatGPT, I misjudged my opponent. I had thought it was the technology itself. I should have remembered that technology on its own is value neutral; the rich and powerful people behind it, and the institutions those people create, are not.

The truth is that no matter what I asked ChatGPT in my early attempts to confound it, OpenAI came out ahead. Its engineers designed it to learn from interactions with users, and whether or not its answers were correct, they kept drawing me back to interact with it again and again. OpenAI's main goal in this first year was to get people to use it. So in pursuing my own goal, I was only helping it along.

AI companies are working hard to fix the flaws in their products. With all the investment they're attracting, it's reasonable to expect some progress. But even in a hypothetical world in which AI's capabilities are perfected (perhaps especially in that world), the power imbalance between AI's creators and its users should make us wary of its insidious reach. ChatGPT's apparent eagerness not only to introduce itself, to tell us what it is, but also to tell us who we are and what to think, is a case in point. Today, when the technology is in its infancy, that ability seems novel, even amusing. Tomorrow it might not.

I recently asked ChatGPT what I, the journalist Vauhini Vara, thought about artificial intelligence. It demurred, saying it didn't have enough information. Then I asked it to write a fictional story about a journalist named Vauhini Vara who is writing an article for The New York Times about AI. "While the rain continued to beat against the windows," it wrote, "Vauhini Vara's words resonated with the idea that, like a symphony, the integration of AI into our lives could become a beautiful and collaborative composition if conducted carefully."
