
5-ish Things on AI: Go Talk to 'Claude,' Famed Apple Designer Eyes Wearables, and More

Get up to speed on the rapidly evolving world of AI with our roundup of the week's developments.

Connie Guglielmo, SVP, AI Edit Strategy

Former Apple designer Jony Ive — the man who helped bring the iMac, iPod, iPhone, iPad and Apple Watch to life — was rumored in December to be in talks with Sam Altman, CEO of ChatGPT-maker OpenAI, about creating a new company focused on consumer AI devices. 

Now comes word that Ive is reportedly seeking as much as $1 billion in funding for the startup.

At least that's the speculation from The Information, which said Ive, 57, has been talking with potential backers including Emerson Collective (founded by Steve Jobs' wife, Laurene Powell Jobs) and Thrive Capital, which is already a backer of OpenAI.


Ive left Apple in 2019 and started his own design firm, LoveFrom, along with noted designer Marc Newson.

There aren't any details about what kind of AI devices Ive is looking to build, but The Information quoted unnamed sources as saying it "wouldn't look like a phone."

There are a few AI devices on the market: Meta's Ray-Ban smart glasses have AI capabilities rolling out to the public in the coming month; startup Brilliant Labs is making display-enabled AI glasses; and the $199 Rabbit R1 is a handheld, AI-powered gadget, like a phone-meets-AI Game Boy. 

So far, the AI wearable that's generated the most buzz is the Humane AI Pin, designed by former Apple executives. CNET's Katie Collins got to try the $699 AI Pin (which also requires a $24 monthly subscription fee) before its release and called the minimalist, mostly voice-activated device "just our first step into a brave, new world." 

CNET reviewer Scott Stein has been living with the Humane AI Pin, which just went on sale, and shared his perspective. "A lot of the tech that I test can seem like science fiction, but using it in the real world can be a chore. That's how I feel about the Humane AI Pin. I can see a future idea in this clip-on thing that promises a living version of a Starfleet Badge, but that future isn't here yet." 

It seems there's a lot of promise in an AI wearable, but that promise has yet to be realized. That's why speculation about what Ive may be up to is worth watching. 

For perspective on AI products already on the market, including reviews of Microsoft's Copilot, OpenAI's ChatGPT and Google's Gemini, check out CNET's AI Atlas, a new consumer hub that also offers news, how-tos, explainers and other resources to get you up to speed on gen AI. Plus, you can sign up at AI Atlas to get this column via email every week. 

Here are the other doings in AI worth your attention.

Anthropic's Claude chatbot 'isn't sentient, but it certainly feels sentient'  

The most popular generative AI chatbots, based on user engagement with the tools, are OpenAI's ChatGPT, Microsoft's Bing, Google's Gemini, Character.AI, Perplexity, Anthropic's Claude and Microsoft's Copilot, according to the latest traffic data compiled by Similarweb.

The audience tapping into Claude is pretty small (20 million visitors compared with 1.6 billion to ChatGPT), but don't let the size of the user base fool you. That's the word from CNET AI chatbot reviewer Imad Khan, who gave Claude an 8 (out of 10) score in his evaluation.

"When Claude answers questions in contemplative ways and also goes out of its way to ask you follow-up questions and your opinions, it's hard not to be surprised by its supposed curiosity," Khan said. "Let's be clear: That curiosity isn't real. But when it asked me questions like, "What is your perspective?" I felt compelled to give it an honest answer. This type of reciprocal understanding is what humans do with one another."

That's why Khan called Claude the "most conversational of all the available free AI engines" and noted that it "gives direct answers that feel well thought-out." 

Americans 'cautious' about AI, but shopping uses are ticking up

A new poll by YouGov, conducted in mid-March, found that many Americans are still skeptical of the advantages of AI. While nearly half (44% of those surveyed) said it's likely that AI "will eventually become more intelligent than people" (14% said it's already smarter than humans), more than half (54%) said they're "cautious" about how AI is changing the world, with 49% saying they're "concerned." 

And about 1 in 7 Americans said they're "very concerned about AI ending humanity."

People were asked to describe their views on AI by choosing from this list of words: cautious, concerned, skeptical, curious, scared, excited, hopeful, impressed, overwhelmed and indifferent.

"Despite many Americans believing AI will become more intelligent than people, there are some things people don't trust AI to do, like make unbiased opinions, make ethical decisions, or provide accurate information," YouGov found in its poll. 

You can see all the results here.

The data also highlights that age makes a difference in welcoming AI or not. Most of the people who expressed concerns about AI's ability to make ethical and unbiased decisions or provide accurate information were 45 and older. Added YouGov, "Adults under 45 are more likely than older Americans to believe AI will have a positive impact on society (36% vs. 19%), a positive impact on their own life (38% vs. 15%), and a positive impact on the U.S. economy (36% vs. 20%)."

Let's compare the YouGov results with an Adobe Research survey of 3,000 consumers, which found that more than half of Americans have been using gen AI to help with daily tasks at home, work and school, as well as for shopping and travel. 

Forty-four percent said they're using gen AI every day, with the top use cases, in order, being research and brainstorming; creating first drafts of written content; creating visuals and presentations; using the AI chatbot as an alternative to search; summarizing text; and creating programming code.

When it comes to shopping, 52% said they're likely to use gen AI to help buy clothes, while 71% said they think that using gen AI "to produce images of them wearing a product can boost their confidence when making a purchase."

The top five ways consumers are using AI to shop are: automatically filtering products on websites based on their preferences; designing a custom product; summarizing product reviews; engaging with a chatbot for customer service; and using a virtual personal shopper to help customize product options. For travel, the top use cases are getting a comparison of pricing options (93%); discovering the working hours for hotel services and restaurants (90%); and finding nearby parking, restaurants and pharmacies (90%).  

Meta announces next gen of its AI chip to power Facebook, Instagram

As part of its investment in AI applications, Meta last year also developed a custom chip — the Meta Training and Inference Accelerator, or MTIA — for use in its data centers to run AI products on its popular platforms, including Facebook, Instagram and WhatsApp, and train its AI systems.

"The chip, referred to internally as 'Artemis,' will help Meta reduce its reliance on Nvidia's AI chips and reduce its energy costs overall," Reuters reported, noting that Meta CEO Mark Zuckerberg said the company planned to buy about 350,000 AI chips from Nvidia this year.

A new version of the chip "significantly improves performance compared to the last generation and helps power our ranking and recommendation ads models on Facebook and Instagram," Meta said in a blog post. "This new version of MTIA more than doubles the compute and memory bandwidth of our previous solution while maintaining our close tie-in to our workloads. It is designed to efficiently serve the ranking and recommendation models that provide high-quality recommendations to users."

If you aren't up on what's happening with AI computing power (aka chips), big tech players have been investing in their own chips, and there have been rumors that OpenAI CEO Sam Altman is seeking "trillions" in investments to build chips for AI applications.

"Google this week made its fifth-generation custom chip for training AI models, TPU v5p, generally available to Google Cloud customers, and revealed its first dedicated chip for running models, Axion," TechCrunch noted. "Amazon has several custom AI chip families under its belt. And Microsoft last year jumped into the fray with the Azure Maia AI Accelerator and the Azure Cobalt 100 CPU."

To save money, Texas using AI to grade written answers on standardized tests 

The Texas Education Agency decided that human test scorers are too pricey when it comes to grading its STAAR tests that measure student proficiency in reading, writing, science and social studies. So it's now relying on an "automated scoring engine" using natural language processing (just like AI chatbots such as ChatGPT), The Texas Tribune reported. The scoring engine will save the agency about $15 million to $20 million a year that it would otherwise have spent on humans, the Tribune said.  

The STAAR tests — for State of Texas Assessment of Academic Readiness — measure how well students in grades three through 12 understand state-mandated curriculum and are used to determine whether school districts are properly educating students. The tests were redesigned in 2023 and feature fewer multiple choice questions and six to seven times more open-ended questions that require a written response. Those written answers — known as constructed response items — take longer to score (as you'd expect), prompting the agency to look to technology to take on the task.

The agency said that in 2023 it hired about 6,000 temps to score the tests but will now need fewer than 2,000. The AI system will do an initial pass and assign a grade to all the student tests, and then a "quarter of the responses will be rescored by humans," the Tribune noted.

One thing, though: The Texas Education Agency doesn't want anyone to call the scoring engine artificial intelligence, saying it isn't autonomous and can't think on its own (a system that could would be artificial general intelligence, and those don't exist).

"It may use similar technology to chatbots such as GPT-4 or Google's Gemini, but the agency has stressed that the process will have systematic oversight from humans," the Tribune said. "It won't 'learn' from one response to the next, but always defer to its original programming set up by the state."

Still, Texas may want to look up the definition of generative AI: chatbot-style technology is a form of AI, and there's no way to get around that. Also, some parents and educators have pointed out that when the hybrid human-AI scoring system was used on a limited basis in 2023, it gave out a higher-than-expected number of zeroes on those constructed responses and may fail to give students proper credit for their written answers.

Test results will affect how kids "see themselves as a student," Kevin Brown, the executive director of the Texas Association of School Administrators and a former superintendent at Alamo Heights Independent School District, told the Tribune. With humans doing the grading, "students were rewarded for having their own voice and originality in their writing," Brown told the paper, adding that he's worried computers might not be as good at rewarding originality.

Lori Rapp, superintendent of Lewisville ISD, told the paper that school districts haven't been given an adequate look at how the programming works, adding, "The automation is only as good as what is programmed."

Udacity offers free online course on AI ethics. Take it 

As a follow-up to US President Joe Biden's October 2023 executive order offering guidance on creating safe, secure, trustworthy and ethical AI systems, and to a March announcement about AI safety from the White House Office of Management and Budget, online learning platform Udacity is offering a class, "Discovering Ethical AI," for free through April 30. The one-hour class is led by Ria Cheruvu, an AI software architect at Intel. You can find details here.

Is it worth your time? Yes. The quiz questions in particular do a good job of reinforcing the mindset that anyone helping to implement an AI system ethically and pragmatically should embrace.

AI usage in newsrooms is rising, says the AP 

The Associated Press, which has been a leader in adopting automated tools and gen AI to write stories, surveyed nearly 300 newsroom staffers in December and found that close to 70% of them are using gen AI. 

It's being used for everything from writing social media posts and headlines to transcribing interviews to creating story drafts, the AP found. In addition, 20% of the respondents said they're using gen AI to create social media graphics and videos. 

The research shows that for any news organization looking to stay relevant, familiarity with AI is a must, noted Poynter, a media resource organization. "Experiment, experiment, experiment," Hannes Cools, assistant professor at the University of Amsterdam and co-author of the study, told Poynter. "Responsible experimentation could spark discussion, and that could lead to more responsible use. I do believe that generative AI is here to stay, and it will (if it hasn't already) be present in many aspects of our daily lives."

The AP, by the way, put out its AI usage policy in August 2023. And if you're interested, CNET's AI policy was originally posted in June 2023 and was updated last month. 

Editors' note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you're reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.