There are five principles of AI weirdness, according to author and researcher Janelle Shane. One of them is: "The danger of AI is not that it's too smart but that it's not smart enough." Another is: "AI does not really understand the problem you want it to solve."
The world feels like absolute chaos right now. And often, AI feels like part of the problem. Algorithms determining how messages are delivered, and to which people. Facial recognition software. The rise of deepfakes.
What we watch on streaming services is peppered with suggestions from AI. My photos are tuned and curated by AI. And every day, I'm writing emails that prompt me with suggestions to autofinish my sentences, as if my thoughts are being led by AI, too.
Janelle Shane is an AI researcher and long-time writer of the popular AI Weirdness blog. Her experiments are, well, weird. You've probably seen them: AI-generated lists. Escape room name generators. AI-generated cat portraits.
I bought her book, You Look Like a Thing and I Love You, because even though I cover tech, after all these years I still don't feel like I understand AI. If you feel that way, Shane's book is a primer and a guide, full of real observations from her experiments training neural nets. There are fun cartoons, too. It's a way to grasp the madness of our AI world, and to realize that the weirdness has rules. Grasping its underpinnings and its mistakes feels essential now more than ever.
I'd been thinking of connecting with Shane before everything that disrupted 2020 happened, but we spoke over Zoom a few weeks ago to discuss weird AI, her book and some thoughts on where AI could lead next. I'm particularly interested in the idea of AI as a collaborative tool, for better and for worse.
Our conversation, recorded before George Floyd's death and the protests that followed, is embedded above. And if you're looking for a great starter book on AI, another way to reflect on a 2020 that's already impossibly strange, Shane's work could be a place to start.