Bing’s ChatGPT brain is behaving so oddly that Microsoft may rein it in
Time out!
Microsoft launched its new Bing search engine last week and introduced an AI-powered chatbot to millions of people, creating long waiting lists of users looking to test it out, and a whole lot of existential dread among sceptics.
The company probably expected some of the chatbot's responses to be a little inaccurate the first time it met the public, and had put measures in place to stop users who tried to push the chatbot to say or do strange, racist or harmful things. These precautions haven't stopped users from jailbreaking the chatbot anyway, and having the bot use slurs or respond inappropriately.
While it had these measures in place, Microsoft wasn't quite ready for the very strange, bordering on unsettling, experiences some users were having after trying to have more informal, personal conversations with the chatbot. This included the chatbot making things up and throwing tantrums when called out on a mistake, or just having a full-on existential crisis.
In light of the bizarre responses, Microsoft is considering introducing new safeguards and tweaks to curtail these strange, sometimes too-human responses. This could mean letting users restart conversations or giving them more control over tone.
Microsoft's chief technology officer told The New York Times it was also considering cutting down the length of conversations users can have with the chatbot before the conversation can enter odd territory. Microsoft has already admitted that long conversations can confuse the chatbot, and that it can pick up on users' tone, which is where things might start going sour.
In a blog post, Microsoft admitted that its new technology was being used in a way it "didn't fully envision". The tech industry seems to be in a mad dash to get in on the artificial intelligence hype in some way, which shows how excited the industry is about the technology. Perhaps this excitement has clouded judgement and put speed over caution.
Analysis: The bot is out of the bag now
Incorporating a technology as unpredictable and imperfect as AI into Bing was a risky move by Microsoft in its attempt to revitalise interest in its search engine. It may have set out to create a helpful chatbot that won't do more than it's designed to do, such as pull up recipes, help people with puzzling equations, or find out more about certain topics, but it's clear it did not anticipate how determined and successful people can be when they wish to provoke a specific response from the chatbot.
New technology, particularly something like AI, can definitely make people feel the need to push it as far as it can go, especially with something as responsive as a chatbot. We saw similar attempts when Siri was introduced, with users trying their hardest to make the virtual assistant angry or laugh or even date them. Microsoft may not have expected people to give the chatbot such strange or inappropriate prompts, so it wouldn’t have been able to predict how bad the responses could be.
Hopefully the new precautions will curb any further strangeness from the AI-powered chatbot, and ease the discomfort caused when it felt a little too human.
It's always interesting to see and read about ChatGPT, particularly when the bot spirals towards insanity after a few clever prompts, but with a technology so new and untested, nipping problems in the bud is the best thing to do.
There’s no telling whether the measures Microsoft plans to put in place will actually make a difference, but since the chatbot is already out there, there’s no taking it back. We just have to get used to patching up problems as they come, and hope anything potentially harmful or offensive is caught in time. AI’s growing pains may only just have begun.
Muskaan is TechRadar’s UK-based Computing writer. She has always been a passionate writer and has had her creative work published in several literary journals and magazines. Her debut into the writing world was a poem published in The Times of Zambia, on the subject of sunflowers and the insignificance of human existence in comparison.
Growing up in Zambia, Muskaan was fascinated with technology, especially computers, and she’s joined TechRadar to write about the latest GPUs, laptops and recently anything AI related. If you’ve got questions, moral concerns or just an interest in anything ChatGPT or general AI, you’re in the right place.
Muskaan also somehow managed to install a game on her work MacBook’s Touch Bar, without the IT department finding out (yet).