OpenAI has announced a new “incognito” mode that will stop your chats from being used in its AI training sets. The company claims this will improve user privacy and let users choose which conversations are saved, potentially opening its tools up to companies that handle sensitive data.
Since erupting into the mainstream with ChatGPT, OpenAI has faced intense scrutiny over how it collects data and trains its AI models. GPT-4 is a large multimodal model, and like its predecessors it is trained on huge swathes of data. To help meet that need, people using ChatGPT were effectively training it, with the conversations they entered being folded into its training sets.
This presents a serious security problem, however. ChatGPT is good at generating code and fixing bugs, for example, yet it is regularly banned by software companies because any code entered into it could be used by OpenAI. At best, this risks leaking developers’ hard work; at worst, it could breach data protection laws and leave companies vulnerable to cyberattacks.
Italy went as far as banning ChatGPT over these privacy concerns and gave OpenAI until April 30th to change how it collects data, and it appears OpenAI has obliged.
Now, OpenAI has announced an option to stop certain chats from appearing in the chat history and from being used in its training sets. The company says that when chat history is disabled, it will retain a conversation for only 30 days, to monitor for abuse, before deleting it.
OpenAI also says that ChatGPT will soon gain a Business subscription, expected in the coming months, which will give professionals greater control over their data.
Source Link: ChatGPT Announces New "Incognito" Mode