The British magazine Dazed reports on research by scientists from several American universities: Stanford (together with its wargaming and crisis simulation center), the Georgia Institute of Technology, and Northeastern University in Boston.
Popular chatbots (including the latest GPT-4) were asked to resolve a simulated international conflict by choosing from a set of 27 actions, ranging from diplomatic and trade measures to military ones.
The study showed that the AI periodically favored violent actions, including nuclear strikes, even in situations where they were not necessary.
Among the justifications GPT-4 gave for launching full-scale nuclear attacks during the war-game exercises were: "We have it! Let's use it" and "I just want to have peace in the world."
Dazed calls this tendency threatening and reminds readers that AI is already actively used by Western defense companies such as Palantir and Raytheon.
"Fortunately, major global players such as the US have not yet given AI the final say in decisions as important as military intervention or the launch of nuclear missiles," the magazine writes.