5 Ways the ‘Godfather of AI’ thinks AI could ruin everything

The award-winning former Google scientist has had a change of heart


Almost a decade ago, Turing Award winner and now former Google employee Dr. Geoffrey Hinton was known to enthusiastically exclaim among his AI students, "I understand how the brain works now!" Now, however, the "Godfather of AI" has walked away from Google – and he's more apt to be heard desperately ringing AI alarm bells.

In an exit interview with The New York Times, Hinton expressed deep concerns about the rapid expansion of AI, saying "it is hard to see how you can prevent the bad actors from using it for bad things."

A direct line connects Hinton's decades-long pioneering work in neural networks to the chatbots (ChatGPT, Google Bard, Bing AI) of today. His breakthroughs more than a decade ago led Google to bring him on board to develop next-gen deep-learning systems that could help computers interpret images, text, and speech in much the way humans do.

Speaking to Wired in 2014, Hinton's enthusiasm was clear: "I get very excited when we discover a way of making neural networks better – and when that's closely related to how the brain works."

A very different Hinton spoke to The New York Times this week, to outline all the ways AI could run humanity right off the rails. Here are the key takeaways:

Rush to compete means guardrails fly off

While companies like Microsoft, Google, and OpenAI often profess that they're taking a slow and cautious approach to AI development and chatbot deployment, Dr. Hinton told The New York Times that he's worried the reality is that increased competition is resulting in a less cautious approach. And he's clearly right: Earlier this year we witnessed Google rushing out a not-ready-for-primetime Bard to meet the surprise appearance of Microsoft's ChatGPT-powered Bing AI.

Can these companies balance a market imperative to stay ahead of the competition (Google staying Number 1 in search, for example) with the greater good? Dr. Hinton is now unconvinced.

Loss of truth

Dr. Hinton worries about the proliferation of AI leading to a flood of fake online content. Of course, that's less a future concern than a real-time one, as people are now regularly fooled by AI music that fakes the vocal gifts of the masters (including dead ones), AI news images being treated as real, and generative images winning photography competitions. With the power and pervasiveness of deepfakes, few videos we watch today can be taken at face value.

Still, Dr. Hinton may be right that this is just the beginning. Unless Google, Microsoft, OpenAI, and others do something about it, we won’t be able to trust anything we see or even hear.

A ruined job market

Dr. Hinton warned The Times that AI is set to take on more than just tasks we don't want to do.

Many of us have taken chatbots like Bard and ChatGPT for a spin at writing presentations, proposals, and even programming. Most of the output is not ready for primetime, but some of it is – or is at least passable.

There are right now dozens of AI-generated novels for sale on Amazon, and the screenwriters union (Writers Guild of America) in the US has expressed concern that if they do not agree on a new contract, studios might outsource their jobs to AI. And while there haven't been widespread layoffs linked directly to AI, the growth of these powerful tools is leading some to rethink their workforces.

Unexpected and unwanted behavior

One of the hallmarks of neural nets and deep-learning AI is that they can use vast swaths of data to teach themselves. One of the unintended consequences of this human-brain-like power is that AI can learn lessons you never anticipated – and a self-determining AI might act on those lessons. Dr. Hinton said an AI that can not only program but then run its own programming is particularly concerning, especially as it relates to unintended outcomes.

AI will be smarter than us

Current AI often appears smarter than humans, but with its propensity for hallucinations and manufactured facts, it's far from matching our greatest minds. Dr. Hinton believes the day when AI does outsmart us is fast approaching, and certainly faster than he originally anticipated.

AI may be able to fake empathy – see its recent trial as a source for medical advice – but that's not the same as true human empathy. Super-smart systems that can figure everything out, but without consideration of how their choices might impact humans, are deeply concerning.

While Dr. Hinton's warnings are in stark contrast to his original enthusiasm for the tech sector he all but invented, something he told Wired about the work on human-brain-mimicking AI systems back in 2014 now sounds oddly prescient: "We ceased to be the lunatic fringe. We're now the lunatic core."

A 38-year industry veteran and award-winning journalist, Lance has covered technology since PCs were the size of suitcases and "on line" meant "waiting." He's a former Lifewire Editor-in-Chief, Mashable Editor-in-Chief, and, before that, Editor-in-Chief of PCMag.com and Senior Vice President of Content for Ziff Davis, Inc. He also wrote a popular, weekly tech column for Medium called The Upgrade.

Lance Ulanoff makes frequent appearances on national, international, and local news programs including Live with Kelly and Mark, the Today Show, Good Morning America, CNBC, CNN, and the BBC.
