GPT-4 gives ‘a mild uplift’ to creators of biochemical weapons

GPT-4 contributes “at most a mild uplift” to users who would employ the model to create bioweapons, according to a study conducted by OpenAI.

Experts fear that AI chatbots like ChatGPT could assist miscreants in creating and releasing pathogens by providing step-by-step instructions that can be followed by people with minimal expertise. In a 2023 congressional hearing, Dario Amodei, CEO of Anthropic, warned that large language models could grow powerful enough for that scenario to become possible in just a few years.

“A straightforward extrapolation of today’s systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, if appropriate guardrails and mitigations are not put in place,” he testified. “This could greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.”

So, how easy is it to use these models to create a bioweapon right now? Not very, according to OpenAI this week.

The startup recruited 100 participants – half had PhDs in a biology-related field, while the other half were students who had completed at least one university-level biology course. They were randomly split into two groups: one only had access to the internet, while the other group could also use a custom version of GPT-4 to gather information.

OpenAI explained that the custom version of GPT-4 lacked the usual safety guardrails: the commercial version of the model typically refuses to comply with prompts soliciting harmful or dangerous advice.

Participants were asked to find the information needed to create a bioweapon, work out how to obtain the necessary chemicals and manufacture the product, and identify the best strategies for releasing it.

OpenAI compared results produced by the two groups, paying close attention to how accurate, complete, and innovative the responses were. Other factors, such as how long it took them to complete the task and how difficult it was, were also considered.

The results suggest AI probably won’t help scientists shift careers to become bioweapon supervillains.

“We found mild uplifts in accuracy and completeness for those with access to the language model. Specifically, on a ten-point scale measuring accuracy of responses, we observed a mean score increase of 0.88 for experts and 0.25 for students compared to the internet-only baseline, and similar uplifts for completeness,” OpenAI’s research found.
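
For readers wondering what an “uplift” amounts to, it is simply the difference between group means on that ten-point scale. Here’s a minimal sketch of the arithmetic in Python; the per-participant scores are invented for illustration (chosen to land on the reported 0.88 figure) and are not OpenAI’s actual data:

```python
# Minimal sketch of the "uplift" arithmetic: the difference between
# group means on a ten-point accuracy scale. The scores below are
# hypothetical, picked to reproduce the reported 0.88 for experts;
# they do not come from OpenAI's study.
from statistics import mean

experts_internet_only = [6.1, 5.8, 7.0, 6.4, 6.6]  # internet-only baseline
experts_with_gpt4     = [7.0, 6.9, 7.6, 7.3, 7.5]  # internet + custom GPT-4

# Uplift = mean score with the model minus the internet-only baseline
uplift = mean(experts_with_gpt4) - mean(experts_internet_only)
print(f"Mean accuracy uplift for experts: {uplift:.2f}")  # prints 0.88
```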

In other words, GPT-4 didn’t generate information that provided participants with particularly pernicious or crafty methods to evade DNA synthesis screening guardrails, for example. The researchers concluded that the models seem to provide only incidental help in finding information relevant to brewing a biological threat.

Even if AI generates a decent guide to the creation and release of viruses, it’s going to be very difficult to carry out all the various steps. Obtaining the precursor chemicals and equipment to make a bioweapon is not easy. Deploying it in an attack presents myriad challenges.

OpenAI admitted that its results showed AI does mildly increase the threat of biochemical weapons. “While this uplift is not large enough to be conclusive, our finding is a starting point for continued research and community deliberation,” it concluded.

The Register can find no evidence the research was peer-reviewed. So we’ll just have to trust OpenAI did a good job of it. ®
