
AI and You: Altman Says Humanity Needs to Solve for AI Safety, EU Agrees on 'Historic' AI Law

Get up to speed on the rapidly evolving world of AI with our roundup of the week's developments.

Connie Guglielmo SVP, AI Edit Strategy
A glowing yellow grid of connections stretches into the distance. Xuanyu Han/Getty Images

It's been a minute since OpenAI CEO Sam Altman was ousted, returned five days later, and then reset the governing board at the maker of ChatGPT. If, like most of the world, you haven't been following the plot twists after the board fired him for not being "consistently candid" in his communications, don't worry. There are lots of insider looks and timelines and analysis pieces about the saga, and about the men who have starring roles in the future of AI, including Altman; Microsoft co-founder Bill Gates and CEO Satya Nadella; Twitter/X owner Elon Musk; Google co-founder Larry Page; venture capitalists Reid Hoffman and Peter Thiel; and Meta CEO Mark Zuckerberg.

But the simple recap is this: power, ego and money pitted against concern and ethics. OpenAI has a valuation of more than $85 billion, and its investors, including Microsoft, didn't want to see that evaporate with Altman's departure, which set off a staff revolt and potential brain drain at the startup. At the same time, researchers, engineers and ethicists working on the tech were wary of its possible implications for the future of humanity, which aren't all positive, and of how OpenAI's leader was driving the tech forward.

"The OpenAI debacle has illustrated that building AI systems is testing whether businesspeople who want to make money can work in sync with researchers who worry that what they are developing could eventually eliminate jobs or become a threat if technologies like autonomous weapons grow out of control," wrote The New York Times in its Five Days of Chaos summation.

"The crisis at OpenAI is personifying a question that has been boiling inside the AI industry and creating angst among technology giants and world leaders: Who can be trusted to open the Pandora's box that artificial intelligence might represent?" noted The Wall Street Journal in its behind-the-scenes investigation.

The take by The Atlantic magazine carries this headline: The Money Always Wins. 

Altman, who was on the shortlist for Time magazine's person of the year, told What Now? With Trevor Noah this week (12 days after his ouster and rehiring) that he got so many messages in the 30 minutes after he was axed that it "broke" his iPhone. 

"I was in my hotel room, took this call, had no idea what it was going to be, and got fired by the board. It felt like a dream. I was confused. It was chaotic. It did not feel real. Obviously, like upset and painful. But confusion was just, like, the dominant emotion at that point ... just in a fog, in a haze. I was, like, I didn't understand what was happening. It happened in this, like, unprecedentedly, in my opinion, crazy way. And then in the next, like, half hour, my phone — I got so many messages that iMessage broke on my phone. I'm still, like, a little bit in shock and a little bit just trying to, like, pick up the pieces. You know, I'm sure as I have time to, like, sit and process this, I'll have a lot more feelings about it."

In the hour-long conversation, Noah also asked about worries that genAI will cause the apocalypse. There are many questions about ChatGPT's debut in November 2022, and whether the company had thought through the implications before setting off an arms race in Silicon Valley that goaded Meta, Google and Microsoft to fast-track their AI development. "Speed is even more important than ever," a Microsoft executive told employees, according to The New York Times. It would, the exec reportedly said, be "an absolutely fatal error in this moment to worry about things that can be fixed later."

But can everything be fixed later? Altman, 38, told Noah it isn't exactly possible to make generative AI completely safe. "We say airplanes are safe, but airplanes do still crash very infrequently, like amazingly infrequently, to me. We say that drugs are safe, but ... the FDA will still certify a drug that can cause some people to die sometimes. And so safety is like society deciding something is acceptably safe, given the risk-reward trade-offs. And that, I think, we can get to. But it doesn't mean things aren't going to go really wrong."

As for his nightmare scenario, Altman said the risk-reward trade-offs mean humanity will just have to figure it out.

"Society has, like, actually a fairly good — messy, but good — process for collectively determining what safety thresholds should be," Altman told Noah. "I think we do, as a world, need to stare that in the face ... this idea that there is catastrophic or potentially even existential risk, in a way that, just because we can't precisely define it, doesn't mean we get to ignore it either. And so we're doing a lot of work here to try to forecast and measure what those issues might be, when they might come, how we would detect them early."

Yes, I'll sleep better tonight. Won't you? 

Here are the other doings in AI worth your attention.

EU agrees on 'historic' AI legislation

A month after the Biden administration issued an executive order aimed at putting guardrails around the development and use of AI, lawmakers in the European Union on Dec. 8 agreed to a "sweeping new law to regulate artificial intelligence, one of the world's first comprehensive attempts to limit the use of a rapidly evolving technology," The New York Times said.

The AI Act, which will affect tech companies in the 27 countries in the EU and seek to protect 450 million consumers, "paves the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance," The Washington Post said. Companies risk fines of up to 7% of their global revenue.

"Historic! The EU becomes the very first continent to set clear rules for the use of AI," European Commissioner Thierry Breton posted on X. "The #AIAct is much more than a rulebook — it's a launchpad for EU startups and researchers to lead the global AI race."

The law is expected to be finalized in 2024 and "would not take effect until 2025 at the earliest," The Guardian noted.

AI Alliance working on AI's risk-reward conundrum

IBM and Meta joined forces this week to launch the AI Alliance, a collaboration involving more than 50 companies, universities, research groups, science agencies and AI leaders, to ensure that "open innovation in AI benefits everyone and that it is built responsibly." 

OpenAI isn't among those signing on, nor is Google. 

"We believe it's better when AI is developed openly — more people can access the benefits, build innovative products and work on safety," said Nick Clegg, president of global affairs for Meta. The company created the free, open-source Llama 2 large language model, or LLM, which is an alternative to Open AI's and Google's technology. 

Google steps up AI arms race with Gemini, kind of 

Google, which lets people play with AI through its Bard chatbot, last week unveiled Gemini, the new LLM that now powers that chatbot. Gemini is a "dramatic departure for AI," according to CNET's Stephen Shankland, and ups the ante with rival OpenAI. The game-changer: Gemini's ability to move beyond text-based AI tasks, like summarizing documents and writing programming code, to understanding video, audio and photos. That means, for example, being able to figure out hand gestures in a video.

"Text-based chat is important, but humans must process much richer information as we inhabit our three-dimensional, ever-changing world," Shankland said. "And we respond with complex communication abilities, like speech and imagery, not just written words. Gemini is an attempt to come closer to our own fuller understanding of the world."

Google, which is working to outpace rivals including OpenAI and Meta, said it will deliver Gemini's capabilities next year to the billions of people who use its products, including search, Chrome, Google Docs and Gmail. "For a long time we wanted to build a new generation of AI models inspired by the way people understand and interact with the world — an AI that feels more like a helpful collaborator and less like a smart piece of software," Eli Collins, a product vice president at Google's DeepMind division, told CNET. 

But there's a little hitch. Shankland noted that though Google's promotional video for the Gemini update "doesn't fundamentally misrepresent Gemini's abilities," it's in keeping with common promo clips that "make products look more glamorous than they truly are." Google, he said, included a disclaimer in its video, saying Gemini doesn't respond as quickly as shown.

The bottom line: We're in an AI arms race and companies are moving to get ahead. But as Bloomberg columnist Parmy Olson put it, "take Google's latest show of sprinting ahead with a pinch of salt."

At least for now.

Microsoft's Seeing AI app is now available on Android 

In this week's pick for tech for good, Microsoft said it made its free Seeing AI app available on the Google Play Store for the 3 billion Android users out there. The app is available in 18 languages, with plans to expand to 36 next year.

"Seeing AI narrates a person's surroundings and is designed to help blind and low-vision people carry out tasks like reading mail, identifying products and hearing descriptions of photos," wrote CNET's Abrar Al-Heeti. "Users point their phone's camera, snap a picture and then will hear a description."

The app has different categories for various tasks: the Short Text function speaks text aloud as soon as it shows up in front of the camera, the People feature identifies folks around you, and the Currency function identifies money. Meanwhile, the Scenes feature lets you hear a description of a setting you've photographed, allowing you to move your finger across the screen to hear the locations of different objects.

Seeing AI can also identify colors and read handwritten text, like in greeting cards – very handy for those of us who get holiday cards from people with poor penmanship (not that I'm complaining about getting a card).

In addition to English, the app is available in Czech, Danish, Dutch, Finnish, French, German, Greek, Hungarian, Italian, Japanese, Korean, Norwegian Bokmål, Polish, Portuguese, Russian, Spanish, Swedish and Turkish.

What every chief exec should know about AI

Still not sure what the genAI fuss is all about? Or maybe you think you know what it's all about? Either way, McKinsey shared a 17-page guide back in May (which I just came across) entitled "What every CEO should know about generative AI." It's a useful primer, whether you're chief of a company or not. 

Noting that ChatGPT reached 100 million users in just two months, McKinsey said, "It democratized AI in a manner not previously seen while becoming by far the fastest-growing app ever. Its out-of-the-box accessibility makes generative AI different from all AI that came before it. Users don't need a degree in machine learning to interact with or derive value from it; nearly anyone who can ask questions can use it."

As for companies that "may see an opportunity to leapfrog the competition by reimagining how humans get work done," McKinsey offers up an important caution that I think gets lost amid all the magical thinking about genAI: "Companies will also have to assess whether they have the necessary technical expertise, technology and data architecture, operating model, and risk management processes that some of the more transformative implementations of generative AI will require."

Happy reading.

At McDonald's, those fries will come with a side of AI

At first glance, the news that McDonald's will be using Google's AI cloud technology to help its restaurants optimize operations seemed like the standard "look who signed on as a customer" story for Google.

But I like the takeaway (pun totally intended) that the AI may enable McDonald's restaurants to produce better burgers and fries for customers, who often use a mobile app or those self-service kiosks to order a meal. The world's biggest restaurant chain, noted The Street, says the move will collectively result in "customer benefits such as hotter, fresher food" across the fast-food chain's global restaurant system.

In case you didn't know, McDonald's has more than 38,000 restaurants around the world, in more than 100 countries. 

AI term of the week: Large Language Models (LLMs)

With all the talk of the genAI arms race in Silicon Valley, a race built on the technology in these companies' LLMs, I thought it worthwhile to offer up a few definitions.

The first, a simple one, is from venture firm Andreessen Horowitz.

"Large Language Model (LLM): A type of AI model that can generate human-like text and is trained on a broad dataset."

Market research firm Gartner offers up a slightly more detailed answer.

"Large Language Models (LLMs): A specialized type of artificial intelligence (AI) that has been trained on vast amounts of text to understand existing content and generate original content."

And for an even more detailed answer, I went to The Alan Turing Institute: 

"Large Language Model: A type of foundation model that is trained on a vast amount of textual data in order to carry out language-related tasks. Large language models power the new generation of chatbots, and can generate text that is indistinguishable from human-written text. They are part of a broader field of research called natural language processing, and are typically much simpler in design than smaller, more traditional language models."   

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.