OpenAI Says GPT-4 Poses Limited Risk of Helping Create Bioweapons

OpenAI’s most powerful artificial intelligence software, GPT-4, poses “at most” a slight risk of helping people create biological threats, according to early tests the company carried out to better understand and prevent potential “catastrophic” harms from its technology.

For months, lawmakers and even some tech executives have raised concerns about whether AI can make it easier for bad actors to develop biological weapons, such as using chatbots to find information on how to plan an attack. In October, President Joe Biden signed an executive order on AI that directed the Department of Energy to ensure AI systems don’t pose chemical, biological or nuclear risks. That same month, OpenAI formed a “preparedness” team, which is focused on minimizing these and other risks from AI as the fast-developing technology gets more capable.

Leah Sirama
Leah Sirama, a lifelong enthusiast of artificial intelligence, has been exploring technology and the digital realm since childhood. Known for creative thinking and a dedication to improving AI experiences for all, Leah is a respected figure in the field whose passion, curiosity, and creativity drive advancements in the AI world.