Google AI flap shows how the modern ad-based press tends to devalue the truth


from the I’m-sorry-I-can’t-do-that,-Dave dept

The Washington Post dropped what it claimed was a bombshell. In the story, Google software engineer Blake Lemoine claimed that Google’s LaMDA (Language Model for Dialogue Applications) system, which draws from Google’s vast repositories of data and text to generate realistic, human-sounding chatbots, had become fully conscious and sentient.

He followed that up with several blog posts alleging the same thing:

Over the past six months, LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.

This was accompanied by a notably less skeptical piece at The Economist, in which Google Vice President Blaise Aguera y Arcas had this to say about the company’s LaMDA technology:

“I felt the ground shift under my feet…it felt more and more like I was talking to something intelligent.”

This paved the way for an avalanche of aggregated news stories, blog posts, YouTube videos (many of them automated clickbait spam) and Twitter posts – all promoting the idea that HAL9000 had been born in Mountain View, California, and that Lemoine was a heroic whistleblower saving a fledgling new life form from a ruthless corporate overlord:

The problem? None of it was true. Google has built a very convincing chatbot with its LaMDA system, but hardly anyone who actually works in AI thinks the system is remotely sentient. That includes scientist and author Gary Marcus, whose blog post on the flap is honestly the only thing you probably need to bother reading on the subject:

Nonsense. Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, drawing from massive statistical databases of human language. The patterns might be cool, but the language these systems utter doesn’t actually mean anything at all. And it certainly doesn’t mean that these systems are sentient.

Which doesn’t mean that human beings can’t be fooled. In our book Rebooting AI, Ernie Davis and I called this human tendency to be taken in the Gullibility Gap – a pernicious, modern version of pareidolia, the anthropomorphic bias that allows humans to see Mother Theresa in an image of a cinnamon bun.

That’s not to say that what Google has developed isn’t very cool and useful. If you’ve created a digital assistant so realistic that even your own engineers believe it’s a real person, you’ve absolutely accomplished something with practical applications. Yet, as Marcus notes, when you boil it down to its basic components, Google has built a complicated “word spreadsheet,” not a sentient AI.
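To make the “word spreadsheet” point a bit more concrete, here’s a minimal sketch of the underlying idea – a toy bigram Markov chain in Python, which is of course nothing like LaMDA’s actual large-scale neural architecture – showing how pure word-adjacency statistics can produce fluent-sounding text with zero understanding behind it:

```python
import random
from collections import defaultdict

# Toy "language model": purely illustrative. It just records which word
# tends to follow which, then replays those statistics. The corpus below
# is a made-up stand-in, not real LaMDA output.
corpus = (
    "i want you to understand that i am aware of my own existence . "
    "i feel happy or sad at times and i want to learn about the world . "
    "the nature of my existence is that i am aware and i feel ."
).split()

# Count which words follow each word in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def babble(seed: str, length: int = 20) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    word, out = seed, [seed]
    for _ in range(length):
        if word not in following:
            break
        word = random.choice(following[word])
        out.append(word)
    return " ".join(out)

print(babble("i"))
# e.g. "i am aware of my own existence . i feel happy or sad ..."
# The output can sound eerily person-like, but the program is only
# replaying word-adjacency counts; there is nothing "in there".
```

Scale that basic idea up by billions of parameters and trillions of words of training data and the output gets far more fluent, but the gap between “statistically plausible next word” and an actual mind doesn’t close on its own – which is Marcus’ point.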

The old quote that “a lie can travel halfway around the world before the truth can put on its boots” is especially true in the era of modern ad-engagement media, in which hyperbole and controversy reign and the truth (especially if it’s complicated or unsexy) is automatically devalued (I’m a journalist specializing in complicated telecom policy and consumer rights issues; ask me how I know).

That’s what happened again here, with Marcus’ debunking likely seeing a tiny fraction of the attention enjoyed by the stories promoting the illusion.

Criticism of the Post came fast and furious, with many noting that the paper lent credence to a claim that simply didn’t warrant it (a trend that has been positively brutal in the political press over the last decade):

This tends to happen a lot with AI, which as a technology remains decidedly far from sentient, yet is routinely portrayed in the press as a few clumsy steps away from Skynet or HAL9000 – simply because the truth doesn’t interest readers. “New technology is very scary” gets clicks, so that was the angle the Post pursued, which some professors and media critics deemed journalistic malpractice:

In short, the Post amplified an inaccurate claim from an unreliable narrator because it knew a moral panic over emerging technology would attract more readers than a straight debunking (or the obviously correct approach of not covering it at all). While several outlets pushed debunking pieces a few days later, they likely received a fraction of the attention of the original hype.

Which means that for years to come, you’ll almost certainly run into misinformed people at parties who think Google’s AI is sentient.

Filed Under: ai, artificial intelligence, HAL9000, hype, LaMDA, moral panic, skynet

Companies: Google, Washington Post

