
The weird, frightening future of AI may not be what you think

AI researcher Janelle Shane discusses the weird, hopeful, scary and often broken world of algorithms.

A photo of the cover of Janelle Shane's book on AI: You Look Like A Thing And I Love You.
Janelle Shane/Kapo Ng

There are five principles of AI weirdness, according to author and researcher Janelle Shane. One of them: "The danger of AI is not that it's too smart, but that it's not smart enough." Another: "AI does not really understand the problem you want it to solve."

The world feels like absolute chaos right now. And often, AI feels like part of the problem. Algorithms determining how messages are delivered, and to which people. Facial recognition software that can be biased. The rise of deepfakes.

What we watch on streaming services is peppered with suggestions from AI. My photos are tuned and curated by AI. And every day, I'm writing emails that prompt me with suggestions to autofinish my sentences, as if my thoughts are being led by AI, too.

Janelle Shane is an AI researcher and long-time writer of the popular AI Weirdness blog. Her experiments are, well, weird. You've probably seen them: lists of Harry Potter-themed desserts. Escape room name generators. AI-generated cat portraits.

Video: The future of AI is weird, broken and sometimes full...

I bought her book, You Look Like a Thing and I Love You, because even though I cover tech, after all these years I still don't feel like I understand AI. If you feel that way, Shane's book is a primer and a guide, full of real observations from her experiments training neural nets. There are fun cartoons, too. It's a way to grasp the madness of our AI world, and to realize that the weirdness has rules. Understanding its underpinnings and its mistakes feels more essential now than ever.

I'd been thinking of connecting with Shane before everything happened to disrupt 2020, but we spoke over Zoom a few weeks ago to discuss weird AI, her book and her thoughts on where AI could lead next. I'm particularly interested in the idea of AI as a collaborative tool, for better and for worse.

Our conversation, recorded before George Floyd's death and the protests that have followed, is embedded above. If you're looking for a great starter book on AI in a time that's already impossibly strange, another way to reflect on 2020, Shane's work could be a place to start.

Read more: CNET Book Club interviews great tech and sci-fi authors