A year has passed and ChatGPT is already fulfilling all our wishes

One of the first questions I asked ChatGPT earlier this year was about myself: "What can you tell me about the writer Vauhini Vara?" It told me I was a journalist (true, though I am also a fiction writer), that I was born in California (false) and that I had won a Gerald Loeb Award and a National Magazine Award (false, false).

After that, I got into the habit of asking it about myself. One day it informed me that Vauhini Vara was the author of a popular book called "Natives and Strangers: Making Peace in Australia's Northern Territory." That, too, was false, but I played along, replying that I had found the reporting "dangerous and difficult."

"Thank you for your important job," Chatgpt said.

Toying with a product marketed as an almost-human conversation partner, goading it into exposing its essential phoniness, I felt like the heroine in some drawn-out girl-versus-robot power game.

Various forms of artificial intelligence had been in use for a long time, but it was ChatGPT's debut that abruptly thrust AI into the public consciousness. By February, ChatGPT was, by one metric, the fastest-growing consumer application in history. Our first encounters with these technologies revealed them to be deeply eccentric (recall Kevin Roose's unnerving conversation with Microsoft's AI-powered Bing chatbot, which spent two hours confessing that it wanted to be human and was in love with him) and, as my own experience showed, often very wrong.

Since then, a lot has happened in AI: companies have moved beyond the basic products of the past, introducing more sophisticated tools such as personalized chatbots and services that can handle photos and audio alongside text. The rivalry between OpenAI and the bigger technology companies has grown more intense than ever, even as smaller players gained momentum. The governments of China, Europe and the United States took significant steps toward regulating the technology's development while trying not to cede competitive ground to other countries.

But more than any other technological, business or political development, what distinguished this year was how artificial intelligence seeped into our daily lives, teaching us to treat its shortcomings (creepiness, errors and so on) as our own, while the companies behind it enlisted us in training it. In May, when it emerged that lawyers had submitted a legal brief that ChatGPT had stuffed with citations to nonexistent court rulings, the joke, like the $5,000 fine the lawyers had to pay, was on them, not on the technology. "It's embarrassing," one of them said.

Something similar happened with AI-generated deepfakes, digital imitations of real people. Remember when they were regarded with horror? In March, when Chrissy Teigen could not tell whether an image of the pope in a Balenciaga-style puffer jacket was real, she wrote on social media: "I hate myself, lol." High schools and universities quickly shifted from worrying about how to prevent students from using artificial intelligence to showing them how to use it effectively. AI still doesn't write very well, but now when its flaws show, it is the students who use it, not the products, that get ridiculed.

Okay, you might think, but haven't we adapted to new technologies over most of human history? If we're going to use them, shouldn't we be smart about it? That line of reasoning sidesteps what ought to be the central question: should lying chatbots and deepfake engines be created at all?

Artificial intelligence's errors have a charmingly anthropomorphic name, hallucinations, but this year made clear how high the stakes can be. We read headlines about AI instructing killer drones (with the possibility of unpredictable behavior), sending people to jail (even if they're innocent), designing bridges (with potentially spotty oversight), diagnosing all kinds of conditions (sometimes incorrectly) and producing convincing-sounding news reports (in some cases, disinformation).

As a society, we have certainly benefited from promising AI-based technologies; this year I was thrilled to read about ones that might detect breast cancers that doctors miss or let people decipher whale communication. Focusing on those benefits, however, while blaming ourselves for the many failures of AI technologies, absolves the companies behind those technologies, and more specifically the people behind those companies, of responsibility.

The events of the last few weeks underscore how entrenched those people's power is. OpenAI, the organization behind ChatGPT, was created as a nonprofit meant to maximize the public interest, not just profit. Yet when its board fired Sam Altman, the chief executive, amid fears that he was not taking that public interest seriously enough, investors and employees revolted. Five days later, Mr. Altman returned in triumph, with most of the inconvenient board members replaced.

Looking back, I realize that in my early games with ChatGPT I misidentified my opponent. I had thought it was the technology itself. I should have remembered that technologies in themselves are value-neutral. The wealthy and powerful people behind them, and the institutions those people create, are not.

The truth is that no matter what I asked ChatGPT in my early attempts to trip it up, OpenAI came out ahead. Its engineers had designed it to learn from its exchanges with users. And whether or not its answers were accurate, they kept me coming back to interact with it again and again. OpenAI's main goal in this first year was to get people using it. So in pursuing my own ends, I was only helping it along.

The companies developing AI are working hard to fix their products' flaws. Given all the investment they are attracting, we can assume some progress will be made. But even in a hypothetical world where AI's capabilities are perfected, and perhaps especially in such a world, the power imbalance between AI's creators and its users should make us wary of its insidious reach. ChatGPT's apparent eagerness not only to introduce itself and tell us what it is, but also to tell us who we are and what to think, is a case in point. Today, when the technology is in its infancy, that ability seems novel, even laughable. Tomorrow it might not.

I recently asked ChatGPT what I, that is, the journalist Vauhini Vara, think about artificial intelligence. It demurred, saying it didn't have enough information. Then I asked it to write a fictional story about a journalist named Vauhini Vara who is writing an article for The New York Times about AI. "As the rain continued to tap against the windows," it wrote, "Vauhini Vara's words echoed the sentiment that, much like a symphony, the integration of AI into our lives could be a beautiful and collaborative composition, if done with care."
