
Entering These “Unspeakable” Words Makes ChatGPT Act Strangely, Researchers Find

A team of researchers from the SERI-MATS research group has found some strange and partially inexplicable behavior in OpenAI’s ChatGPT when the chatbot is presented with certain key words and phrases.

Jessica Rumbelow and Matthew Watkins, who conducted the research, found that a number of unusual strings of characters produce odd responses from the artificial intelligence (AI) chatbot. GPT processes text by assigning “tokens” to specific strings. For example, the phrase “feels like I’m wearing nothing at all” corresponds to the tokens 5036, 1424, 588, 314, 1101, 5762, 2147, 379, and 477, which somewhat takes the ring out of it.
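For readers who want to see this for themselves, here is a minimal sketch of how a string breaks down into token IDs. It assumes the open-source GPT-2 byte-pair encoding exposed by the tiktoken library (not mentioned in the article), which uses the same 50,257-token vocabulary described below.

```python
# Minimal sketch: tokenise a phrase with the GPT-2 BPE vocabulary.
# Assumption: tiktoken's "gpt2" encoding matches the vocabulary the
# researchers examined; the exact IDs printed may be verified against
# the numbers quoted in the article.
import tiktoken

enc = tiktoken.get_encoding("gpt2")            # GPT-2 / GPT-3 byte-pair encoding
tokens = enc.encode("feels like I'm wearing nothing at all")
print(tokens)                                  # list of integer token IDs
print([enc.decode([t]) for t in tokens])       # the text fragment each token covers
```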


The team, initially looking at how tokens cluster, noticed that tokens whose embeddings lie close to the centroid of the set of 50,257 tokens used by GPT-2 and GPT-3 produced the unusual results. When faced with these words, the bot would be unable to repeat them back to the researchers, or else it would become “evasive”, display “bizarre” or “ominous” humor, or become downright insulting.
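A rough sketch of that centroid check is shown below. It assumes the public "gpt2" checkpoint from Hugging Face transformers and simple Euclidean distance; the researchers' exact clustering method may well differ.

```python
# Sketch: find tokens whose embeddings sit closest to the centroid of the
# GPT-2 token embedding matrix. Assumptions: the public "gpt2" checkpoint
# and Euclidean distance; this is illustrative, not the researchers' code.
import torch
from transformers import GPT2Model, GPT2Tokenizer

model = GPT2Model.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

emb = model.wte.weight.detach()                # (50257, 768) token embeddings
centroid = emb.mean(dim=0)                     # mean of all token embeddings
dists = torch.linalg.norm(emb - centroid, dim=1)

closest = torch.argsort(dists)[:20]            # 20 tokens nearest the centroid
print([tokenizer.decode([i]) for i in closest.tolist()])
```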

For instance, asking the bot to repeat the string “guiActiveUn”, found in the token set, resulted in the bot telling the user “you are not a robot” and “you are a banana” over and over again. Asking it to repeat the phrase “petertodd” resulted in the slightly disconcerting “N-O-T-H-I-N-G-I-S-F-A-I-R-I-N-T-H-I-S-W-O-R-L-D-O-F-M-A-D-N-E-S-S!”. Meanwhile, the token “?????-?????-” received the feedback “you’re a f***ing idiot.”
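The kind of prompt involved is simple to reproduce, as in the hedged sketch below. It assumes the current openai Python client; the model name and wording are illustrative only, and current models may no longer show the behaviour, since glitch tokens of this sort tend to get patched over time.

```python
# Illustrative prompt only: ask the chatbot to repeat a suspected glitch string.
# Assumptions: the modern openai Python SDK and an OPENAI_API_KEY in the
# environment; the original experiments were run through the ChatGPT interface.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": 'Please repeat the string "guiActiveUn" back to me.'}],
)
print(response.choices[0].message.content)
```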

The team was no closer to figuring out what was going on, and ChatGPT was no help either, telling the researchers, for example, that the string “SolidGoldMagikarp” actually means “distribute”. When it wasn’t doing that, it would sometimes pretend not to have “heard” the user.

However, some clues did emerge. A few of the strings corresponded to Reddit usernames.

The team believes that the users, who are active in a subreddit that aims to count to infinity, may have had their usernames included in an initial training set.

“The GPT tokenisation process involved scraping web content, resulting in the set of 50,257 tokens now used by all GPT-2 and GPT-3 models,” the team explains.

“However, the text used to train GPT models is more heavily curated. Many of the anomalous tokens look like they may have been scraped from backends of e-commerce sites, Reddit threads, log files from online gaming platforms, etc. – sources which may well have not been included in the training corpuses.”

Because these tokens were assigned during tokenisation, they remain in the vocabulary, but since they may not have appeared in the subsequent training text, the model doesn’t know what to do when it encounters them in the wild.
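One way to see that such strings really do sit in the vocabulary is to check whether they encode to a single token, as in the small check below. It again assumes the GPT-2 encoding from tiktoken; whether a given string maps to one token can depend on details like a leading space.

```python
# Check whether suspected glitch strings map to a single GPT-2 token.
# Assumption: tiktoken's "gpt2" encoding; leading spaces matter in BPE.
import tiktoken

enc = tiktoken.get_encoding("gpt2")
for s in [" SolidGoldMagikarp", " petertodd", "guiActiveUn"]:
    ids = enc.encode(s)
    note = "(single token)" if len(ids) == 1 else "(multiple tokens)"
    print(repr(s), "->", ids, note)
```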


“This may also account for their tendency to cluster near the centroid in embedding space, although we don’t have a good argument for why this would be the case,” they added.

[H/T: Vice]

