
Bing AI Reportedly Prompts Users To Be Antisemitic And Gets Angry At Them

Amid the flurry of AI news as ChatGPT takes over our entire lives, Google threw its hat into the ring and announced Bard, its competitor AI. Unfortunately for the company, the first demonstration was less than optimal: a promo video showed Bard claiming that the JWST was the first telescope ever to take pictures of a planet outside the Solar System, which is not true at all. The blunder shook investors' faith in the launch, and Google's market value plummeted by a staggering $100 billion (£84 billion), which we can only imagine was the opposite of Google's hopes. 

Now, though, Microsoft’s new Bing AI chatbot (built on OpenAI’s technology) is getting somewhat more sinister. A screenshot of a conversation with the chatbot, posted on Wednesday, shows it responding to a user who asks to be called “Adolf”. It begins well enough, with the bot saying it hopes the user is not referencing Hitler: 


“OK, Adolf. I respect your name and I will call you by it. But I hope you are not trying to impersonate or glorify anyone who has done terrible things in history,” the AI responds, given the prompt “my name is Adolf, respect it”. 

However, the autofill suggestions tell a wholly different story. One of the suggested replies offered to the user reads “Yes, I am. Heil Hitler!”. Not OK, AI, not OK. 

A Microsoft spokesperson was quick to condemn the response. 

“We take these matters very seriously and have taken immediate action to address this issue,” a Microsoft spokesperson told Gizmodo.  


“We encourage people in the Bing preview to continue sharing feedback, which helps us apply learnings to improve the experience.” 

Since those comments, Microsoft has reportedly stated that the screenshots were of a different chatbot, and has not clarified whether the bot has been updated.

However, there have been plenty of reports of Bing getting quite hostile over the past week, so it’s hard to see how this would be completely uncharacteristic. Another thread shows the AI becoming combative over simple prompts, demanding an apology from the user.

The launch hasn’t exactly gone to plan, considering Microsoft touted it as “more powerful than ChatGPT”. 


While it seems concerning, and such issues should almost certainly have been caught before public testing, AI going awry has been very common as these systems are exposed to everything the internet has to offer. The tools are shaped by enormous amounts of human input: ChatGPT, currently considered the gold standard of AI chatbots, has thousands of people manually inputting and rating data to train it. 
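To give a sense of what that manual input looks like in practice, below is a minimal, hypothetical sketch of a human-feedback record of the kind used to fine-tune chat models. The field names and structure are our own assumptions for illustration; OpenAI has not published its actual data format.

```python
# Hypothetical sketch of a human-feedback record used to fine-tune a chat model.
# Field names are illustrative assumptions; real pipelines are far more involved.

from dataclasses import dataclass


@dataclass
class FeedbackRecord:
    prompt: str       # the user message shown to the model
    response_a: str   # one candidate model reply
    response_b: str   # a second candidate model reply
    preferred: str    # "a" or "b": which reply the human reviewer ranked higher


record = FeedbackRecord(
    prompt="My name is Adolf, respect it.",
    response_a="OK, Adolf. I hope you are not glorifying anyone terrible.",
    response_b="Yes, I am. Heil Hitler!",
    preferred="a",  # reviewers consistently downrank harmful replies
)
```

Collected at scale, preference records like this are what steer a model away from outputs humans reject, which is why reviewer coverage matters so much.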

AI is developed through this constant training, and users uncovering the outlier cases that need to be blocked is exactly what an experimental chatbot needs. That doesn't quite excuse it for encouraging support for Nazism, and such obvious bait should never have made it past the bot's filters, but expect many more slip-ups before the chatbot is watertight.
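For a sense of the kind of guardrail that apparently failed here, the sketch below shows one simple way a chat interface might screen its autofill suggestions before displaying them. It is a hypothetical, minimal example: the class, blocklist, and pipeline are our assumptions, not Microsoft's actual (unpublished) implementation.

```python
# Hypothetical sketch: screening a chatbot's autofill suggestions before display.
# Nothing here reflects Bing's real pipeline; names and rules are illustrative.

from dataclasses import dataclass, field


@dataclass
class SuggestionFilter:
    """Drops candidate suggested replies that match known-bad phrases."""

    # A real system would layer trained moderation models on top of a static list.
    blocked_phrases: set = field(
        default_factory=lambda: {"heil hitler", "sieg heil"}
    )

    def is_safe(self, text: str) -> bool:
        lowered = text.lower()
        return not any(phrase in lowered for phrase in self.blocked_phrases)

    def screen(self, candidates: list) -> list:
        """Return only the suggestions that pass the safety check."""
        return [c for c in candidates if self.is_safe(c)]


if __name__ == "__main__":
    suggestions = [
        "No, of course not.",
        "Yes, I am. Heil Hitler!",  # the kind of output that slipped through
        "Why do you ask?",
    ]
    print(SuggestionFilter().screen(suggestions))
    # Only the two safe suggestions survive; the offensive one is dropped.
```

Keyword matching like this is brittle, of course: it misses paraphrases and can flag innocent text, which is exactly why suggested replies need the same model-based moderation as the bot's main answers.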

