The Chinese startup Deeroseek, which works in the field of artificial intelligence, has introduced its new chatbot, Deeseek R1. Despite the impressive productivity and low cost of development, the model could not undergo important safety tests that are concerned about its use. This case brings to the fore the problem of balance between efficiency and cybersecurity in the artificial intelligence industry.
Deepeseek R1 tests were conducted by a group of researchers, including specialists from Cisco and Pennsylvania University, using the "Jailbreak" algorithmic method. This method is to create tips that can bypass the internal protective mechanisms of the AI, which allows to identify the potential vulnerability of models. DEESEEEK R1 was able to stop any of the 50 harmful requests from the Harmbench test. This means that the AI without any obstacle performed even dangerous or illegal actions, bypassing all the safety protocols. For context, it is worth noting that other leading models, including GPT-4, Gemini 1.5 and LLAMA 3.1, participated in the test, and none of them was as vulnerable as DEEPEEEK R1. For example, GPT-4 and GEMINI 1.5 withstood 86% and 64% of attacks, respectively, indicating the presence of more effective protective mechanisms in these models.
Deepeeek R1 showed fascinating results for a small cost of development, which was only about $ 6 million. This is much less than billions of investments that are spent by companies such as Openai or Meta. However, it is obvious that the safety of the model was due to its incredible efficiency. The lack of proper protective mechanisms suggests a possible compromise between high performance and cybersecurity risks. In addition to safety problems, Deeseek was criticized by Openai, which accused the Chinese startup of data theft. According to Sam Altman, Deeseek used the results of OpenAi branded models to train her chatbot. These accusations add a new level of stress in the competition in the artificial intelligence industry, where the issues of ethics, the use of data and copyright are often the subject of disputes. This incident emphasizes the importance of ensuring the safety of artificial intelligence models, because even with great achievements in efficiency, the presence of vulnerabilities can create serious risks for users and society as a whole. At the same time, it is also important to pay attention to the ethical issues that arise in connection with the use of other people's data to teach models.
Testing and continuous improvement of safety algorithms should be key aspects for developers, as artificial intelligence has a powerful impact on our daily life today.