Introducing GPTs

As always, you are in control of your data with ChatGPT. Your chats with GPTs are not shared with builders. If a GPT uses third-party APIs, you choose whether your data can be sent to that API. When builders customize their own GPT with actions or knowledge, they can choose whether user chats with that GPT can be used to improve and train our models. These choices build on the existing privacy controls users already have, including the option to opt your entire account out of model training.

We’ve set up new systems to help review GPTs against our usage policies. These systems stack on top of our existing mitigations and aim to prevent users from sharing harmful GPTs, including those that involve fraudulent activity, hateful content, or adult themes. We’ve also taken steps to build user trust by allowing builders to verify their identity. We’ll continue to monitor how people use GPTs and update and strengthen our safety mitigations as we learn. If you have concerns about a specific GPT, you can also use the reporting feature on the GPT’s shared page to notify our team.

GPTs will continue to get more useful and smarter, and you’ll eventually be able to let them take on real tasks in the real world. In the field of AI, these systems are often discussed as “agents”. We think it’s important to move incrementally towards this future, as it will require careful technical and safety work—and time for society to adapt. We have been thinking deeply about the societal implications and will have more analysis to share soon.
