From legal challenges to OpenAI, the creator of ChatGPT, to a rethink of the kind of work computer programmers may (or may not) be asked to do in the next five years, to AI-generated movie trailers, AI technology continues to raise important questions about the brave new world of conversational AI.
While Stability AI founder Emad Mostaque has a doozy of a prediction about computer programmers, the comments of the week come from a journalism roundtable hosted in mid-June by the International Center for Journalists about how to use AI tech and tools "without losing audience trust."
"We have a 521 million-year-old technology called the human brain, which needs equal amounts of investment so it can optimize for things like care, compassion, deep listening, fully embodied information gathering, co-creation and dissemination," said Jennifer Brandel of Hearken, a company that advises startups.
"We humans still have a competitive advantage when it comes to one dimension against AI, that is care. AI couldn't care less. It cannot intrinsically care. So journalists or those doing acts of journalism need to make up for what's lost, and care more."
Here are the other doings in AI worth paying attention to.
The world is talking about AI technology because a group of people, including many computer programmers, got together to advance genAI technology. Well, the founder of one of the most notable AI companies -- Emad Mostaque of Stability AI, developer of the popular Stable Diffusion text-to-image generator -- is predicting there will be "no programmers in five years." Mostaque, who was called out by Forbes in June for reportedly making some misleading claims, told tech luminary Peter Diamandis in an interview that much of the work programmers do can already be done by AI engines. Here's the one-minute exchange from a 31-minute interview Diamandis posted earlier this week:
Mostaque: There are no programmers in five years.
Mostaque: I think we always have to look at the unchanging versus the inevitable. So an inevitable is 41% of all code on GitHub right now is AI generated. ChatGPT can pass the Google Level 3 programmer exam and it will run pretty much on a MacBook or phone.
Diamandis: And that's this year?
Mostaque: This year. Right now.
The takeaway, says Diamandis: "Those of you with kids who are having Python lessons and so forth … Maybe it's instead helping them to understand how to ask great questions or give great directions or prompts."
Speaking of kids and the next generation
While students' use and potential misuse of AI tools may lead to a plethora of similar-sounding social studies reports on the Constitution, chatbots like ChatGPT have teachers and educators looking for positive ways the tech can transform classrooms. That's why this CNET analysis, AI's Teachable Moment: How ChatGPT Is Transforming The Classroom, is worth a read.
So too is the US Department of Education's 71-page report on AI and the future of teaching and learning, which was released May 24 and announced here.
Why pay attention to AI and education now? Here are the three reasons the department cited:
First, AI may enable achieving educational priorities in better ways, at scale, and with lower costs. Addressing varied unfinished learning of students due to the pandemic is a policy priority, and AI may improve the adaptivity of learning resources to students' strengths and needs. Improving teaching jobs is a priority, and via automated assistants or other tools, AI may provide teachers greater support.
Second, urgency and importance arise through awareness of system-level risks and anxiety about potential future risks. For example, students may become subject to greater surveillance. Some teachers worry that they may be replaced – to the contrary, the Department firmly rejects the idea that AI could replace teachers.
Third, urgency arises because of the scale of possible unintended or unexpected consequences. When AI enables instructional decisions to be automated at scale, educators may discover unwanted consequences. In a simple example, if AI adapts by speeding curricular pace for some students and by slowing the pace for other students (based on incomplete data, poor theories, or biased assumptions about learning), achievement gaps could widen.
Read the full report here (PDF).
OpenAI not so novel?
OpenAI, the company behind ChatGPT, was hit with a class-action lawsuit in California over a claim it violated copyrights and the privacy of an unknown number of people "when it used data scraped from the internet to train its tech," The Washington Post reports.
"The lawsuit seeks to test out a novel legal theory — that OpenAI violated the rights of millions of internet users when it used their social media comments, blog posts, Wikipedia articles and family recipes," says the paper. (If you haven't seen it, in April, the Post published a story calling out the "secret list of websites" that AI engines are scraping to build out their AI engines, or large language models.)
It's not the only legal challenge OpenAI is facing. Two authors are also suing OpenAI for allegedly infringing on their copyrights to "train" ChatGPT, Reuters reports. The authors, Paul Tremblay, whose work includes The Cabin at the End of the World, and Mona Awad, say in their suit that OpenAI's training data incorporated over 300,000 books, some drawn from "shadow library" websites that share copyrighted material without permission.
In nonlegal news, ChatGPT's website saw monthly traffic drop 9.7% in June from May, with unique visitors declining as well, according to data from analytics firm Similarweb, which said in a blog post that the "novelty" is wearing off and that users are turning to other tools like Google's Bard and Microsoft's Bing.
Still, ChatGPT is "the fastest-growing consumer application ever, and now boasts over 1.5 billion monthly visits," says Reuters, which calls it one of the top 20 websites in the world. And Similarweb adds that while "Google is in no danger of being eclipsed by the OpenAI tech demo site that turned into a cultural phenomenon … ChatGPT still attracts more worldwide visitors than Bing.com, Microsoft's search engine, or Character.AI, the second most popular stand-alone AI chatbot site."
Google steps up with Gemini
Google's DeepMind AI lab attracted the world's attention seven years ago when it created an AI program called AlphaGo that defeated a human champion of the strategy board game Go. Now, Wired reports, DeepMind is working on a new large language model called Gemini that's similar to GPT-4, which powers ChatGPT, but which it claims will do much more than its rival. (Yes, I know, competitors always say they will do more than their rivals.) What more? Planning and problem-solving, DeepMind CEO Demis Hassabis said in an interview with Wired.
"At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models," Hassabis said.
AlphaGo, Wired writes, "was based on a technique DeepMind has pioneered called reinforcement learning, in which software learns to take on tough problems that require choosing what actions to take like in Go or video games by making repeated attempts and receiving feedback on its performance. It also used a method called tree search to explore and remember possible moves on the board."
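The feedback loop Wired describes can be sketched with a toy example. This is purely illustrative (a simple "multi-armed bandit" learner, nowhere near AlphaGo's actual training setup): an agent makes repeated attempts, receives reward feedback on its performance, and gradually shifts toward the actions that score best.

```python
import random

def train_bandit(reward_probs, episodes=5000, epsilon=0.1, seed=0):
    """Toy reinforcement-learning loop: try actions, get reward feedback,
    prefer actions that have paid off. (Illustrative only.)"""
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)    # times each action was tried
    values = [0.0] * len(reward_probs)  # running average reward per action
    for _ in range(episodes):
        # Explore a random action occasionally; otherwise exploit
        # the best-known action so far.
        if rng.random() < epsilon:
            a = rng.randrange(len(reward_probs))
        else:
            a = max(range(len(reward_probs)), key=lambda i: values[i])
        # Environment gives feedback: reward 1 with this action's probability.
        reward = 1.0 if rng.random() < reward_probs[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental average
    return values

# Action 2 pays off most often; after training, the agent prefers it.
learned = train_bandit([0.2, 0.5, 0.8])
best = max(range(3), key=lambda i: learned[i])
```

The same principle, scaled up enormously and combined with tree search over possible moves, is what let AlphaGo improve through self-play.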
Will it truly be better than ChatGPT, which cost OpenAI over $100 million to develop? We'll see. But what we do know is that Gemini will also be expensive, with Hassabis saying it will cost tens or hundreds of millions of dollars to create.
Look who's talking
CNET audio producer Stephen Beacham tried out AI voice generators to create a 5-minute tutorial that shows you how to create a clone of your voice. He used ElevenLabs' AI text-to-speech tools. I encourage you to watch it all the way through. As Beacham found, it's "creepy and cool." And he offers up a surprising twist.
Summertime means movie trailers -- fake ones
Back in May, the internet was abuzz over a minute-long AI-generated trailer for Star Wars, imagining what the iconic sci-fi flick would look like if it were directed by Wes Anderson. Star Wars: The Galactic Menagerie is hilarious and on point, as Anderson fans will attest, and has racked up nearly 3 million views in the past two months.
But that's not the only AI-generated trailer out there touting some notable flicks, including more Anderson-inspired trailers for The Lord of the Rings, Harry Potter, The Matrix and The Godfather. Yes, they are all short, clever and worth watching.
Want more? There are also trailers for fake films, including The Great Catspy (that's not a typo), a World War II movie with a cloned Michael Caine voiceover and a sci-fi saga about the end of the world.
As fun as they may be to watch, these trailers also highlight the very real concerns of artists, filmmakers and other creatives, who worry that AI will take away opportunities for humans to work on creative projects and that actors' faces and voices will be co-opted without their permission. The latter is a major discussion point in the current SAG-AFTRA labor negotiations with Hollywood. Some actors have sold rights to their voices: 92-year-old James Earl Jones last year agreed to allow AI to clone the voice he gave to Darth Vader so the character could live on in the Star Wars franchise, according to Vanity Fair. But other actors, including those who voice characters in games, say AI puts their jobs "on the chopping block."
Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.