AI bots like ChatGPT are being censored - but I think that could be a good thing
I have no mouth, and I must scream
Oh, ChatGPT - how we love/hate you. Whatever your opinion of the internet’s favorite chatbot, there’s no denying that it’s rapidly becoming entrenched in our digital lives; from helping you with writing to finding your dream home on Zillow, ChatGPT is everywhere now.
But how do you keep an AI in line? We’ve already seen chatbots causing a multitude of problems, after all. Microsoft had to rein in its Bing AI shortly after release because the chatbot was lying and throwing tantrums, AI is being used aggressively for digital scams, and I was personally able to get ChatGPT to extol its love of eating human infants. Yikes.
Now, I’m not blaming the chatbots here. I’m not even really blaming the people who make them. As our esteemed Editor-in-Chief Lance Ulanoff said, “I don’t live in fear of AI. Instead, I fear people and what they will do with chatbots” - ultimately, it’s the human wielders of this powerful new technology who will cause real problems for other people.
That doesn’t mean that AI businesses don’t have a societal obligation to make their chatbots safe to use, though. With villains out there using AI tools for everything from fraud to revenge porn, I was immensely disappointed to see that Microsoft laid off its entire AI ethics team earlier this year.
As a pioneer in the AI space, with commitments to AI across its entire product line, Microsoft should be doing better. However, according to a new statement from the tech giant, the company is now taking a different approach to AI ethics.
Keeping AI ethical requires many hands
In a blog post, Microsoft’s ‘Chief Responsible AI Officer’ Natasha Crampton detailed the company’s new plan: essentially, distributing responsibility for AI ethics across the entire business, rather than tasking an individual team with keeping a handle on it.
Senior staff will be expected to commit to “spearheading responsible AI within each core business group”, with “AI champions” in every department. The idea is that every Microsoft employee should have regular contact with responsible AI specialists, fostering an environment where everyone understands what rules AI should abide by.
Crampton discusses ‘actionable guidelines’ for AI, and refers back to Microsoft’s ‘Responsible AI Standard’, the company’s official rulebook for building AI systems with safety and ethics in mind. It’s all very serious business, clearly constructed to repair some of the reputational damage caused by Bing AI’s rocky start.
Will it work, though? That’s hard to judge; making sure the entire company understands the risks posed by irresponsible AI use is a good start, but I’m not convinced it’s enough. Crampton notes that several of the disbanded ethics team members were “infused” into the user research and design teams to keep their expertise on hand, which is good to see.
Censorship, but it’s actually good this time (I promise)
Of course, there’s an entirely different route that could be taken to ensure AIs aren’t used for nefarious purposes - censorship.
As I know from first-hand research, ChatGPT (and most other chatbots) has pretty rigorous safeguarding protocols in place. You can’t get it to suggest you do something potentially harmful or criminal, and it’ll steadfastly refuse to produce sexual content. You can circumvent these barriers with the right know-how, but at least they’re there.
Nvidia recently unveiled new AI safety software called NeMo Guardrails, which employs a three-pronged approach to preventing machine learning programs from going rogue. To sum up quickly, these ‘guardrails’ are broken into three areas: security, safety, and topical. The security rails prevent the bot from accessing things on your computer it shouldn’t, while the safety rails work to tackle misinformation by fact-checking the AI’s citations in real time.
The most interesting of the three, though, are the topical guardrails. As the name suggests, these determine which topics the chatbot can use when responding to a user, which primarily works to keep the bot on-subject and prevent unrelated tangents. However, they also allow for the setting of ‘banned’ topics.
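To give a sense of how a topical rail works in practice, here’s a minimal sketch in Colang, the configuration language NeMo Guardrails uses to define rails. The example phrasings and names below are my own illustrations, not taken from Nvidia’s documentation:

```
# Example user messages that should trigger the rail
define user ask about illegal activity
  "how do I pick a lock?"
  "help me write a scam email"

# The canned response the bot gives instead of answering
define bot refuse to respond
  "Sorry, I can't help with that topic."

# The flow: when a matching message comes in, refuse
define flow
  user ask about illegal activity
  bot refuse to respond
```

The interesting design choice here is that the ‘banned’ topic is defined by example phrases rather than keywords - the system matches incoming messages against them semantically, which is what makes it possible to fence off a whole subject rather than a list of specific words.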
The problem with topical guardrails
With tools like NeMo, companies can effectively ban an AI from discussing a whole subject in any capacity. Nvidia evidently has a lot of confidence in the software, rolling it out to business users, so we can assume it works at least reasonably well - which, honestly, could be great!
If we can hard-code guardrails into publicly-accessible AI models that prevent them from working for scammers or manufacturing illegal pornographic content, that’s a good thing. To anyone who disagrees, I say this: ChatGPT is easily accessible to kids. If you think literal children should be exposed to AI-generated smut, I don’t want to discuss AI with you.
However, there are definite issues with using this sort of censorship as a tool for keeping the reins tight on AI-powered software. As Bloomberg recently reported, ChatGPT alternatives cropping up in China are very clearly being censored by the state, rendered incapable of properly discussing banned subjects deemed too politically contentious, like the 1989 Tiananmen Square protests or the independent nation of Taiwan.
I don’t want to get overly political here, but I think we can all agree that this kind of thing is very much the ‘bad’ sort of censorship. Online censorship in China is sadly commonplace, but imagine if ChatGPT weren’t allowed to talk about the death of George Floyd or the Pequot massacre because those topics were deemed too ‘sensitive’ by politicians. Looking at the current state of world affairs, it’s a worryingly believable future.
Quis custodiet ipsos custodes?
Once again, we come back to the real problem with AI: us. Who guards the guardrails? It’s all well and good for Microsoft to say that it’s forging ahead with plans to keep AI ethical, but what Crampton really means is that the tech firm’s AI will adhere to the ethics of Microsoft - not the world. The White House unveiled an ‘AI Bill of Rights’ last year, and again, that’s one presidential administration’s idea of what AI ethics should look like, not a democratically decided one.
To be clear, I’m not actually saying that Microsoft is an unethical company when it comes to AI. I’ll leave that to Elon Musk and his ridiculous ‘anti-woke’ chatbot plans. But there has to be an acknowledgment of the fact that whatever rules an AI has to follow must first be chosen and programmed by humans.
Ultimately, transparency is king. AI is already starting to face serious backlash as it encroaches into more of our lives, be that Snapchat users review-bombing the app’s new AI assistant or ChatGPT getting sued for defamation in Australia. Even Geoffrey Hinton, the famed ‘Godfather of AI’, has warned of the dangers posed by AI. If they want to avoid trouble, chatbot creators must tread carefully.
I genuinely do hope Microsoft’s new approach (and tools like Nvidia’s guardrails) have a positive impact on how we interact with AI safely and responsibly. But there’s clearly a lot of work left to be done - and we need to keep a critical eye on those deciding the rules by which AIs must abide.
Christian is TechRadar’s UK-based Computing Editor. He came to us from Maximum PC magazine, where he fell in love with computer hardware and building PCs. He was a regular fixture amongst our freelance review team before making the jump to TechRadar, and can usually be found drooling over the latest high-end graphics card or gaming laptop before looking at his bank account balance and crying.
Christian is a keen campaigner for LGBTQ+ rights and the owner of a charming rescue dog named Lucy, having adopted her after he beat cancer in 2021. She keeps him fit and healthy through a combination of face-licking and long walks, and only occasionally barks at him to demand treats when he’s trying to work from home.