OpenAI’s most powerful artificial intelligence software, GPT-4, poses “at most” a slight risk of helping people create biological threats, according to early tests the company carried out to better understand and prevent potential “catastrophic” harms from its technology.

For months, lawmakers and even some tech executives have raised concerns about whether AI can make it easier for bad actors to develop biological weapons, such as using chatbots to find information on how to plan an attack. In October, President Joe Biden signed an executive order on AI that directed the Department of Energy to ensure AI systems don’t pose chemical, biological or nuclear risks. That same month, OpenAI formed a “preparedness” team, which is focused on minimizing these and other risks from AI as the fast-developing technology gets more capable.
