PaLM 2 Is a Major AI Update Built Into 25 Google Products
At Google I/O, the company unveils a major new foundation for much of its artificial intelligence work.
Stephen Shankland, principal writer
Google on Wednesday unveiled a new AI system called PaLM 2, a major update to its most powerful language processing system. The artificial intelligence technology already is incorporated into 25 Google products, underscoring its importance as the company races to capitalize on a computing revolution.
"PaLM 2 is a state-of-the-art language model. It's good at math, coding, reasoning, multilingual translation and natural language generation," said Zoubin Ghahramani, a vice president at Google's DeepMind division that oversees the company's artificial intelligence work.
PaLM 2 already is used in 25 Google products, including Google's Bard chatbot, Gmail, Google Docs, Google Sheets and YouTube, Google revealed at its Google I/O developer conference. That means PaLM 2 will mostly dwell inside Google's data centers. But the model can be scaled down enough to run on a smartphone too.
AI has been used for years, mostly behind the scenes for tasks like filtering spam, taking better photos and translating your spoken commands into smart speaker actions. Google pioneered much of that technology. But now a new chapter has begun. Large language models like PaLM capture a lot of what humans know and say, and generative AI can create images and write fluently.
PaLM 2 will be surpassed, too. Google is training a successor called Gemini right now, the fruit of the combination of Google's DeepMind and Brain teams, Chief Executive Sundar Pichai said. Gemini was created to be multimodal, meaning it can accept input like text and photos, and to be "highly efficient," he said.
Even though Google was instrumental to many of these AI developments and has called itself an AI-first company since 2016, it was OpenAI's ChatGPT that led to an explosion of mainstream interest in modern AI's power. At Google I/O, Google is making the case that its AI work is ready for prime time, not just experimental services.
What's unclear is whether AI's benefits ultimately will be eclipsed by its dangers. Google, like OpenAI and other big AI proponents, argues it's embracing AI carefully, building in mechanisms to avoid abuse and keeping a close eye on the technology. But not all AI is monitored closely, and the technology's near-term risks include misinformation, fakery, cheating, automated online attacks, reinforcement of negative stereotypes and elimination of human jobs.
Geoffrey Hinton, one of the inventors of modern AI, resigned from Google in May. "It is hard to see how you can prevent the bad actors from using it for bad things," he told The New York Times.
Google insists it's very conscious of AI's problems as well as its benefits. "We've been taking a bold and responsible approach to deploying a lot of these AI-based technologies," Ghahramani said. "We're trying to put them in people's hands in a way that is useful and minimizes the risks to users."
And the company expanded its work to detect language that's toxic, makes personal attacks or is sexually explicit, paying annotators 1.5 cents per judgment to label a set of 2 million comments, Google said in a technical report about PaLM 2.
PaLM 2 runs on a smartphone
Like Meta's LLaMA, PaLM 2 comes in a variety of sizes, an approach that offers more smarts in some circumstances and faster performance in others, Ghahramani said. It can be shrunk all the way down to run on phones with a PaLM 2 version called Gecko.
On the latest smartphone models, PaLM 2 can process 20 tokens per second, Ghahramani said. Tokens are the words, word fragments, numbers or other basic elements that language models ingest while being trained to recognize patterns in vast swaths of text on the internet.
For example, the phrase "turning and turning in the widening gyre" is nine tokens long. AI models also assemble their responses out of tokens.
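The mechanics of token counting and per-token throughput can be sketched in a few lines. The toy tokenizer below is purely illustrative -- it is not PaLM 2's real tokenizer, and its tiny vocabulary is invented for this example, so its token counts won't match any production model's. It simply shows how a subword tokenizer can split a seven-word phrase into more than seven tokens, and what a rate of 20 tokens per second implies for response time.

```python
# Toy illustration of subword tokenization -- NOT PaLM 2's actual tokenizer.
# Real models use learned vocabularies (e.g. SentencePiece); this invented
# vocabulary merely shows how an unfamiliar word splits into smaller pieces.

TOY_VOCAB = {"turning", "and", "in", "the", "widen", "ing", "gyre"}

def toy_tokenize(text):
    """Greedily split each word into the longest known vocabulary pieces."""
    tokens = []
    for word in text.lower().split():
        while word:
            # Take the longest prefix found in the vocabulary,
            # falling back to a single character if nothing matches.
            for end in range(len(word), 0, -1):
                if word[:end] in TOY_VOCAB or end == 1:
                    tokens.append(word[:end])
                    word = word[end:]
                    break
    return tokens

phrase = "turning and turning in the widening gyre"
tokens = toy_tokenize(phrase)
print(tokens)       # "widening" splits into "widen" + "ing"
print(len(tokens))  # 8 tokens from 7 words under this toy vocabulary

# At the quoted on-device rate of 20 tokens per second, a 100-token
# answer would stream out in roughly 5 seconds.
print(100 / 20)     # 5.0
```

Because the response is generated token by token, that per-second rate directly sets how quickly text appears on screen, which is why on-device throughput matters.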
PaLM 2 reasoning abilities
The new model is better at reasoning and employing common sense -- a big problem with language models, which often produce plausible-sounding answers instead of the truth. One reason for the improvement is a massive increase in mathematical and scientific training data, Ghahramani said.
For example, it can solve a logic puzzle involving the position of a few colored cars. "PaLM 2 can step through the logic of how to position objects relative to each other. It explains the steps and even provides a diagram that's helpful to visualize the answer," he said.
Ghahramani acknowledged the tricky issue of saying whether PaLM 2 truly is reasoning -- a key point in the long-term evolution of AI toward what's called artificial general intelligence, or AGI. Google trains its large language models on lots of text reflecting human interactions, he said, and that data shapes the AI.
"You can argue whether it's real reasoning or whether it's just imitating human reasoning, but the thing to think about is how useful is it for the user," he said. "We are aiming to build products that really solve problems that people care about."
PaLM 2 trained on more languages
Google trained PaLM 2 on web documents, books, programming code, mathematics and "conversational" data, with personally identifying information scrubbed out, the company said in a research paper. And there's a higher proportion of non-English training data this time around.
PaLM 2 ventures further beyond English, with training data in more than 100 languages for a better grasp of language-specific nuances and idioms.
And it understands programming code better, including older languages like Fortran. Programming is a top use of large language models. "If you're looking for help to fix a piece of code, PaLM 2 can not only fix the code, but also provide the documentation you need, in any language," Ghahramani said.
Adapting PaLM 2 for different jobs
PaLM 2 is more flexible than its predecessor. That's because it serves as a foundation that can be augmented for specific uses.
For example, Med-PaLM 2 has been tuned for medical applications like looking at an X-ray and writing its own mammography report. Google will let partners test that technology this summer.
"Med-PaLM 2 was the first large language model to perform at an expert level on the US medical licensing exam," Ghahramani said.
Another variant, Sec-PaLM, has been trained to analyze potentially malicious software and explain what it's doing. It'll be built into Google Cloud services.
More efficient AI
AI consumes gargantuan amounts of power. Training advanced AI models requires data centers stuffed with chips like Nvidia's H100 or Google's Tensor Processing Units. Some AI can run on lesser hardware, but today's smooth-talking language models and generative AI demand high-end, power-hungry machinery.
Here, too, PaLM 2 is better. "It's more efficient to serve while performing better overall," Ghahramani said.