5-ish Things on AI: Apple Serious About AI, Pitting AI Against the Experts, the ELVIS Act Passes

Get up to speed on the rapidly evolving world of AI with our roundup of the week's developments.

Connie Guglielmo SVP, AI Edit Strategy
A hand holding a smartphone with light sparkling out of it
Qi Yang/Moment via Getty Images

It wasn't the best week for Apple, given the DOJ has accused the company of violating antitrust laws by using the iPhone to stifle competition in the smartphone market and making it "extremely difficult and expensive" for consumers to "venture outside the Apple ecosystem." 

We'll have to see how that plays out. When it comes to AI, Apple made some moves in March that position it to play an important role in how generative AI will become part of everyday life. That's notable given the company has been slow to share news about its AI investments (which are rumored to total $1 billion a year). CEO Tim Cook did say on an earnings call in February that he sees a "huge opportunity for Apple with gen AI" and that "we view AI and machine learning as fundamental technologies, and they're integral to virtually every product that we ship."  

First up are reports from Bloomberg, The New York Times and others that Apple is partnering with Google to bring the search giant's Gemini AI model to the iPhone, as noted by CNET's Lisa Eadicicco. While that would certainly be a big win for Google -- Gemini would then run on its own mobile operating system, Android, as well as on Apple's iOS -- the bigger takeaway is how important gen AI is becoming. iOS, which powers hundreds of millions of iPhones, could help drive gen AI into the mainstream.

"A partnership like this could have huge implications about the role of generative AI in smartphones, suggesting it's becoming a must-have for new phones rather than just a niche feature found on select models," Eadicicco said. 

Right now, reports of a Google-Apple partnership on Gemini are just that (although Google already pays Apple to include its search engine on the iPhone, and a deal might be an extension of their search agreement). The speculation is that Apple has been talking to others about bringing their chatbots to the iPhone, including OpenAI, the maker of ChatGPT. The Wall Street Journal, citing sources, said Apple reportedly held talks with Baidu in China to use its gen AI tech on iPhones sold in that country.

As a reminder, Apple releases a new version of the iPhone every September, along with an update to its iOS mobile software, so we could see some gen AI tech included in iOS 18 this year. 

That's not the only Apple-AI news. Apple published a research paper this month, with the spiffy title MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training, that describes the company's work training large language models (LLMs), the systems that power gen AI chatbots.

"Apple researchers have developed new methods for training large language models on both text and images, enabling more powerful and flexible AI systems, in what could be a significant advance for artificial intelligence and for future Apple products," reported VentureBeat, which was the first to spot the research paper. 

"Sources say Apple is working on a large language model framework called 'Ajax' as well as a chatbot known internally as 'Apple GPT.' The goal is to integrate these technologies into Siri, Messages, Apple Music and other apps and services," VentureBeat added. "For example, AI could be used to auto-generate personalized playlists, assist developers in writing code, or engage in open-ended conversation and task completion."

Then there's the news that Apple earlier this year acquired a Canadian AI startup called DarwinAI that uses AI to visually inspect components during the manufacturing process, according to Bloomberg, which broke the news.  

While Apple has lagged behind Google and other rivals including Microsoft and Samsung in sharing details of its AI plans, the company does already incorporate AI features into its products. CNET's Sareena Dayaram notes that AI tools have "played a behind-the-scenes role in iPhone for years." 

That includes features like a synthetic voice called Personal Voice, which was added to iOS 17 last year. "Personal Voice is an accessibility setting that uses on-device machine learning to allow people at risk of speech loss to replicate their voice so they can more easily communicate with loved ones. To learn your voice, the iPhone asks you to read out loud 150 phrases. It then uses AI to analyze your voice and generates a synthetic version of it," Dayaram noted in her step-by-step guide to using the feature. 

There's also Live Text, a computer vision tool that was added in 2021 and that recognizes handwritten and typewritten text in photos and other images.

My favorite is the update to AutoCorrect that now lets you curse without Apple "changing your swear word of choice to something more benign, like 'duck.'"  

Here are the other doings in AI worth your attention.

Microsoft's free version of Copilot gets a power boost as it works to woo users

In a bid to get more people working with its gen AI assistant Copilot, Microsoft said in a blog post that it's expanding how people can get the tool. That includes giving users of the free version of Copilot access to GPT-4 Turbo, the OpenAI model that powers Copilot Pro (the $20 per month subscription version of the chatbot). Copilot Pro users have access to GPT-4 Turbo by default; Copilot users need to set the assistant to either Creative or Precise mode to use GPT-4 Turbo, Microsoft told CNET.

GPT-4 Turbo has been trained on data up to April 2023 and can also handle text-to-speech prompts. 

Copilot, which was released last year, is available in Microsoft's Bing search engine, Windows 11, the Edge browser and online. The company just announced it's making Copilot available in its free Microsoft 365 web apps (including Word and Outlook), without requiring a Microsoft 365 subscription. 

In other news, Microsoft announced that it's hired the co-founder of DeepMind, a London-based AI lab that was acquired by Google in 2014, to run its new AI division, according to a blog post by Microsoft CEO Satya Nadella. Mustafa Suleyman, 39, left Google in 2022 and became co-founder and CEO of the startup Inflection AI. Karén Simonyan, a co-founder of Inflection and its chief scientist, is also joining Microsoft as chief scientist, along with several of the startup's employees, Nadella said. 

Suleyman and Simonyan will lead a new organization called Microsoft AI that's "focused on advancing Copilot and our other consumer AI products and research," Nadella said. "We have a real shot to build technology that was once thought impossible and that lives up to our mission to ensure the benefits of AI reach every person and organization on the planet, safely and responsibly."

The New York Times noted that Suleyman "helped popularize the idea that artificial intelligence technology could one day destroy humanity. But he has also shown concern for more concrete and immediate dangers associated with the technology, including the spread of disinformation and job losses. In his recent book, 'The Coming Wave,' he argued that if these and other dangers could be overcome, the technology would be enormously transformative, especially as a means for drug discovery and other forms of health care."

Pitting AI against the experts

CNET has started a new short-form video series called "Expert vs. AI". In the first matchup, car tech expert Brian Cooley goes up against Google's Gemini to ask if now's the right time to buy an electric car.

Instead of offering you the TL;DR on their conclusions, I encourage you to watch Cooley's back-and-forth with Gemini. Whether or not buying an EV is right for you at this point, Cooley and Gemini did agree on the answer to this question: "In one word, what's the best thing EVs have going for them?"

Cooley: "Inevitability."

Gemini: "Momentum."

Watch this: Expert vs. AI: Is Now the Time to Buy an EV?

Serious writers don't need to worry about AI, says author Salman Rushdie

Novelists worried that gen AI tools could take away their livelihood don't need to be concerned -- at least not yet. That's the word from award-winning author Salman Rushdie, who came to that conclusion after experimenting with ChatGPT and asking it to write 200 words of prose in his style. 

The results, he said, were "a bunch of nonsense."

"No reader who had read a single page of mine could think I was the author. Rather reassuring," Rushdie wrote in an essay for the literary journal La Nouvelle Revue Francaise that was translated by the Agence France-Presse. 

He also said that ChatGPT had "no originality" and was "completely devoid of any sense of humour."

He added that since gen AI tools are evolving so quickly, they will be used to create some writing: "Given that Hollywood is constantly creating new versions of the same film, artificial intelligence could be used to draft screenplays."

ELVIS Act aims to protect musicians, as states move to rein in AI

Georgia, California and Tennessee stepped into the AI regulatory ring with proposed bills and new laws aimed at protecting against the potential dangers of gen AI. 

To get endorsement for a new law that would ban the use of AI deepfakes for political messaging, a Georgia politician used visual aids to make his point. 

Georgia Republican Rep. Brad Thomas created a deepfake video featuring audio and images of a state senator, Colton Moore, and Mallory Staples, "a former Republican congressional candidate who now runs a far-right activist organization called the Georgia Freedom Caucus," The Guardian reported.

Using what he said was an inexpensive AI video tool, Thomas told the judiciary committee considering his proposed law that he was able to produce "cinematography-style video. Those individuals look absolutely real, and they're AI-generated." 

The fake versions of Moore and Staples were shown endorsing the bill in the AI video. In real life, Moore and Staples oppose the bill, the paper said, noting that Moore believes the bill is an "attack on memes used in political discourse and that satire is protected speech."

Thomas added, "How is using my biometric data, like my voice and likeness, to create media supporting a policy that I clearly don't agree with the First Amendment right of another person?"

The bill passed out of committee on an 8-1 vote.

Across the country, California's Privacy Protection Agency voted on a proposal to set rules about how businesses use AI and collect personal information about consumers, including students and workers, according to news site CalMatters.org.

"The proposed rules seek to create guidelines for the many areas in which AI and personal data can influence the lives of Californians: job compensation, demotion, and opportunity; housing, insurance, health care, and student expulsion. For example, under the rules, if an employer wanted to use AI to make predictions about a person's emotional state or personality during a job interview, a job candidate could opt out without fear of discrimination for choosing to do so."

The rules would apply to any company with over $25 million in annual sales "or processing the personal data of more than 100,000 Californians," CalMatters said. "AI regulation in California could be disproportionately influential. A Forbes analysis found that 35 of the top 50 AI companies in the world are headquartered in California."

A final vote on the draft rules is expected in 2025.

Then there's Tennessee, which passed the ELVIS Act to protect musicians from unauthorized deepfakes and voice clones, Rolling Stone reported. 

"The bill, short for the Ensuring Likeness Voice and Image Security Act, updates the state's Protection of Personal Rights law (which protects an individual's 'name, photograph, or likeness'), to include protections for artists' voices from AI misuse," Rolling Stone said.

Tennessee Gov. Bill Lee said the "first-of-its-kind" law was intended to assure musicians who come to work in the state that their creations will be legally protected as AI threatens their intellectual property and copyrights.

Added Rolling Stone, "Companies like Splice and Soundful have generated AI-created beats and songs at the push of a button, while AI voice cloning tools have helped songwriters produce viral (and controversial) songs that mimic superstar artists -- last year, voice-cloning tech created an uproar in the industry after an anonymous TikTok user made a popular song using the cloned vocals of Drake and the Weeknd."

Is it real or AI?

Just how good is AI at creating deepfake images? There are several places online where you can test your skills at ferreting out whether something is real or AI-generated. Here are three worth trying out. 

The first is Which Face is Real, developed by researchers Jevin West and Carl Bergstrom at the University of Washington as part of the Calling Bullshit project.

The second is courtesy of a quiz by The New York Times called Test Yourself: Which Faces Were Made by A.I.?

CNET Senior Photographer James Martin put together a test with 17 photos, some he took and some generated by AI, called AI or Not: Can You Spot the Real Photos?

I didn't fare so well and was fooled more times than I care to admit; see how you do.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.