Google has a plan to prevent its new AI from being dirty and rude

Silicon Valley CEOs usually focus on the positive when announcing their company’s next big thing. In 2007, Apple’s Steve Jobs praised the “revolutionary user interface” and “groundbreaking software” of the first iPhone. Google CEO Sundar Pichai took a different tack at his company’s annual conference on Wednesday, when he announced a beta test of Google’s “most advanced conversational AI yet.”

Pichai said the chatbot, known as LaMDA 2, can hold a conversation on any topic and has performed well in tests with Google employees. He announced an upcoming app called AI Test Kitchen that will make the bot available for outsiders to try out. But Pichai added a stark warning. “Although we’ve improved safety, the model could still generate inaccurate, inappropriate, or offensive responses,” he said.

Pichai’s shifting pitch illustrates the mix of excitement, confusion and concern swirling around a series of recent breakthroughs in the capabilities of machine learning software that processes language.

The technology has already made autocomplete and web search more powerful. It has also spawned new categories of productivity apps that help workers by generating fluent text or programming code. And when Pichai first unveiled the LaMDA project last year, he said it could eventually be used in Google’s search engine, virtual assistants, and workplace apps. But despite all these dazzling promises, it’s unclear how these new AI wordsmiths can be reliably controlled.

Google’s LaMDA, or Language Model for Dialogue Applications, is an example of what machine learning researchers call a large language model. The term describes software that builds a statistical sense of a language’s patterns by processing vast amounts of text, usually sourced online. LaMDA, for example, was originally trained on more than a trillion words from online forums, Q&A sites, Wikipedia, and other websites. That vast trove of data helps the algorithm perform tasks like generating text in different styles, interpreting new text, or functioning as a chatbot. And these systems, if they work, will be nothing like the frustrating chatbots of today. Google Assistant and Amazon’s Alexa can currently perform only certain pre-programmed tasks, and they stumble when presented with something they don’t understand.
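The idea of learning a “statistical sense of a language’s patterns” from text can be illustrated with a toy sketch. The bigram model below simply counts which word tends to follow which and samples from those counts; real systems like LaMDA use neural networks trained on trillions of words rather than word-pair tallies, but the underlying principle of predicting the next word from statistics is the same. The tiny corpus here is invented for illustration.

```python
import random
from collections import defaultdict

# Invented toy corpus; real models train on trillions of words.
corpus = (
    "the model can chat about any topic . "
    "the model can generate text . "
    "the chatbot can chat about the weather ."
).split()

# Count which word follows which (a bigram model).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, max_words: int = 8) -> str:
    """Sample a continuation word by word from the bigram statistics."""
    word, out = start, [start]
    for _ in range(max_words):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
        if word == ".":
            break
    return " ".join(out)

print(generate("the"))  # e.g. "the chatbot can chat about the weather ."
```

Because the model only ever reproduces patterns present in its training text, whatever biases or objectionable content the data contains will surface in its output, which is the control problem the article describes.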

Chat logs released by Google show that LaMDA can be informative, thought-provoking, or even funny—at least sometimes. Testing the chatbot prompted Google vice president and AI researcher Blaise Agüera y Arcas to write a personal essay last December in which he argued the technology could provide new insights into the nature of language and intelligence. “It can be very hard to shake off the idea that there’s a ‘who’ and not an ‘it’ on the other side of the screen,” he wrote.

Pichai made it clear when he announced the first version of LaMDA last year, and again on Wednesday, that he sees it as a potential path to voice interfaces far more capable than the often frustratingly limited ones offered by services like Alexa, Google Assistant, and Apple’s Siri. Now, Google executives seem convinced they’ve finally found the way to make computers you can actually talk to.

At the same time, large language models have proven they can fluently generate text that is dirty, nasty, and outright racist. Scraping billions of words of text from the internet inevitably sweeps up a lot of objectionable content. OpenAI, the company behind the GPT-3 text generator, has reported that its creation can perpetuate gender and racial stereotypes, and it asks customers to implement filters to screen out unsavory content.
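The kind of filter OpenAI asks customers to add can be sketched in its simplest form as a denylist check on the model’s output before it reaches the user. The term list and helper name below are made up for illustration; production systems typically rely on trained classifiers rather than keyword matching, which is easy to evade.

```python
# Hypothetical minimal output filter: block a response if it contains
# any term from a denylist. Placeholder tokens stand in for real terms.
DENYLIST = {"slur1", "slur2"}

def is_allowed(response: str) -> bool:
    """Return True if no denylisted term appears in the response."""
    words = {w.strip(".,!?").lower() for w in response.split()}
    return DENYLIST.isdisjoint(words)

print(is_allowed("Hello there!"))     # True: no denylisted terms
print(is_allowed("you are a slur1"))  # False: denylisted term found
```

Keyword matching like this misses paraphrases and context-dependent harm, which is one reason controlling these models reliably remains an open problem.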
