Is artificial intelligence really intelligent?
Speaker 1: AI is a label that a lot of people slap on a lot of things in electronics these days, probably too many of them slapping it on too many things. And there are people out there who are starting to feel like AI is a lot of "A" and not much "I," and that can lead to cynicism among a lot of parties who should be open to it and embracing it. Now, Jeff Hawkins has a lot of interesting insights into this. He is the co-founder of the AI firm Numenta and the author of [00:00:30] a new book called A Thousand Brains, all about their view of AI. And by the way, kids, those of you who are under 40 might also like to know we are currently talking to the person who probably did more than anyone to successfully launch the modern smartphone. We'll talk about that in another episode. Jeff, when you look at AI and where you think it needs to go to really achieve its glory, as it's been sold to so many of us, you focus on the neocortex of the human brain. What is that, and why is it so important?

Speaker 2: All right. So the neocortex [00:01:00] is a part of the human brain. It's part of every mammal's brain, but in humans it's big; it's about 70% of the volume of the brain. It's the big wrinkly thing on top, so if you look at a brain and ask, "What's that wrinkly thing?", that's the neocortex, and it is the organ of intelligence. Ours is big, and that's pretty much why we're really smart. Everything you think of, from basic vision and hearing and touch to language, mathematics, philosophy, our conversation today, that's the neocortex. [00:01:30] My neocortex is making my mouth move and yours is understanding my words. It's the only example of an intelligent system we have. So if we understand how the neocortex works, then we have a roadmap for how to build intelligent machines. And I've felt for many, many years, 40 years actually, that's dating myself a bit, that we would probably not be able to figure out how to build intelligent machines until we figured out what intelligence was. And that means figuring out how the brain does it, how the brain works.

Speaker 1: Now you go on to describe how the neocortex, if [00:02:00] I have this right, if you were to look at it conceptually, sounds like a bunch of little stacked pieces of wire that each have lots of filaments in them.

Speaker 2: That's a crude description, but it's a visualization that works. If you could take the neocortex out of your head, it's only about two and a half millimeters thick, and it's about the size of a dinner napkin. And yes, it is divided into what we call columns, which are very skinny little strands. There are about 150,000 of these columns stacked [00:02:30] side by side. They're not visible, you can't see them, but we know they're there. And one of the amazing hypotheses put forward several decades ago by a man named Vernon Mountcastle was that although the different parts of the neocortex do different things, like language and hearing and touch and other thought processes, they're all built out of these columns, and the columns look nearly identical. So Mountcastle proposed that underlying all the things we think of as intelligence there's a common algorithm, [00:03:00] what they call a cortical algorithm. There's some fundamental processing element that's repeated over and over and over again.
Sometimes it makes language, other times it makes vision, other times it makes hearing, and together it makes us intelligent. So to understand how the neocortex works, you just have to figure out how the columns work. That's the core element of intelligence.

Speaker 1: It sounds like almost a general-purpose CPU model is happening in these columns. Is that about right?

Speaker 2: It is general purpose, but I wouldn't make the analogy to a CPU. In fact, that's one of the things that has confused a [00:03:30] lot of AI researchers: thinking about the brain as some sort of computer that gets some input, processes it, and does something with it. The reality is that what the brain does, through its sensors, is build a model of the world. Internal to your head, you have this model of everything around you. In fact, your perceptions are the model, not the real world, which is a very strange idea, but you have this model of the world; everything you know is in this model, in your head. So the core element is a model-building element. [00:04:00] And when you have this model of the world, then you can think about it, then you can act upon it. You can say, well, given my model of the world, how would I make a good interview, or a good show? How would I inform people about things, or how do I get a degree, or how do I solve any problem? So it's all about model building and not about processing per se.

Speaker 1: So it sounds like you're describing a concept that's more about references than computations.

Speaker 2: It is. It's more about how knowledge of the world is structured in your head, [00:04:30] in this model. That's the core of it. And once you have that, you can reason with it. You can say, what would happen if I did this? And when I see something new, what is it like that I've seen before? So it's less about getting an input and acting on it. It's more about getting some input, updating your model of the world, and when you need to act, having this model to work from.

Speaker 1: You open the book with a very stark sort of description. I think in the first few lines you say: right now, your cells are reading this [00:05:00] book.

Speaker 2: I started the book that way because I wanted people to think, wow, there's no magic going on in your head. There are these little simple cells, billions of them, but they're little simple cells, and somehow they create our experiences and everything we do. That is such a profound mystery in some sense. But now we understand how they do it to a large extent, not completely, but to a large extent. The idea that the brain is learning a model of the world has been known to some people for some time. A lot of people don't think of it that way, but it's been known for a while. [00:05:30] What we discovered is how it does it. And there are two key elements to this. One is that the brain uses what are called reference frames, which you can think of like Cartesian coordinates, like the x, y, z coordinate system you learned in high school. Everything you learn, everything you see, every piece of knowledge you have is stored in these reference frames.
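To make the reference-frame idea concrete for readers who think in code, here is a minimal, hypothetical sketch of expressing the same point in two coordinate systems, a world frame and a "relative to me" frame. It is only an illustration of the concept Hawkins is describing, not Numenta's algorithm; the function name and the numbers are invented for this example.

```python
import numpy as np

# Illustrative only: the same location expressed in a world reference frame
# and in an egocentric ("relative to me") reference frame.
def to_egocentric(point_world, my_position, my_heading_rad):
    """Convert a world-frame x, y location into coordinates relative to me."""
    offset = np.asarray(point_world) - np.asarray(my_position)
    c, s = np.cos(-my_heading_rad), np.sin(-my_heading_rad)
    rotation = np.array([[c, -s], [s, c]])      # undo my heading
    return rotation @ offset

door_world = [8.0, 3.0]     # a landmark's location in room coordinates
me = [2.0, 3.0]             # where I'm standing
facing = 0.0                # facing along the +x axis

print(to_egocentric(door_world, me, facing))    # -> [6. 0.]: six feet straight ahead
```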
So even when you're just looking out at the world and I say, oh, there's a door six feet away from me, the brain is [00:06:00] referencing that: where is that door relative to me, what is its position relative to me? Everything is structured. It's almost like a CAD model in a computer; the brain is basically building up a three-dimensional model of the world using these reference frames. That's the first thing, and now we know a lot about how those reference frames work. The second big discovery concerns the model in your neocortex. We talked about the columns earlier: each column is an independent modeling system, so there are about 150,000 modeling systems in your brain, and [00:06:30] they're complementary. So when I ask, where is knowledge of an item like a coffee cup, I'm drinking coffee right here, where is knowledge of this cup in my brain? It's not in one place. There are thousands of models that model the cup: its shape from vision, from how it feels, even how it sounds. These are all independent models, and they vote to reach a consensus. When you look out at the world, you're not aware that there are thousands and thousands of models all trying to figure out what's going on, but they vote and say, okay, we all agree: that's a coffee cup at this position [00:07:00] relative to your body. And each one has its own sort of subfield or specialty. So the Thousand Brains Theory says there are thousands of these models. That's a new idea; no one really understood it until recently. The other piece is that it's all built on these reference frames, and now we can put it together and understand the mechanisms by which this works. And by the way, as I argue in the book, this is going to be the foundation for how we build truly intelligent machines; they're going to work on those same principles.

Speaker 1: I'm glad you've got a coffee cup in your hand right now, because if there's a recurring [00:07:30] theme through your book, it's you being fascinated by watching your hand on a coffee cup. Tell me how that fits into this story.

Speaker 2: That was sort of the core of the insight: how did we figure out this reference frame thing? One day I was holding a coffee cup, not this one, I had one with the Numenta logo on it, and I was waiting for my wife, and we had been trying to solve certain problems that I describe in the book. I won't go into them now, but I had an observation: as you move your finger [00:08:00] over an object like a cup, every time you move it you have an expectation, a prediction, of what it's going to feel like. Even if you're not thinking about it, your brain is saying, oh, when I touch here, I'm going to feel this curvy edge. If I touch over here, I'm going to feel the handle. This part down at the bottom, it's rough. So as your finger touches this cup, your brain is making this prediction. And it doesn't matter where the cup is relative to my body; it doesn't matter where my finger is in space. What matters is where my finger is relative to the cup. So we held onto that simple observation: how does the brain predict what it's going to feel? If I move my finger here, I feel [00:08:30] one thing; if I move it over here, I feel something else. The answer was that the brain has to know where my finger is relative to the cup. And then we realized it has to know where all the patches of your skin are relative to the cup, because they're all making different predictions. So this told us there are thousands of reference frames that the cortex is using to know where things are relative to the cup. It's like an engineering problem.
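The finger-on-the-cup story can also be sketched in code: features learned at locations in the cup's own reference frame, and a prediction made for whatever location the finger is about to move to. Again, this is a hypothetical toy, not Numenta's implementation; the feature map, coordinates, and function name are invented for illustration.

```python
# Illustrative only: "predict what the finger will feel next" using
# locations in the cup's own reference frame.

# What the model has learned about the cup, keyed by location in cup coordinates.
cup_features = {
    (0, 5): "smooth curved rim",
    (4, 2): "handle",
    (0, 0): "rough base",
}

def predict_feeling(finger_location_on_cup, movement):
    """Predict the sensation at the location the finger is about to move to."""
    next_location = (finger_location_on_cup[0] + movement[0],
                     finger_location_on_cup[1] + movement[1])
    return cup_features.get(next_location, "no stored prediction")

# The finger is at the base and is about to slide toward the handle.
predicted = predict_feeling((0, 0), (4, 2))
print(predicted)               # -> "handle"

# If the actual sensation disagrees with the prediction, the model knows
# something is off -- maybe this isn't the cup it thought it was.
sensed = "smooth curved rim"
print(predicted == sensed)     # -> False: a prediction error, so update the model
```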
Speaker 1: Now take us forward into how that model, or that thinking, applies to inanimate things and concepts. How does that sort of model and prediction apply to things like empathy and democracy?

Speaker 2: [00:09:00] What we typically think about is movement. When I move my finger, what I'm doing is moving to a new point in a reference frame; I'm saying, oh, in the reference frame of the cup, I'm going from this point to this point. Well, the brain can do this for non-physical things too. I can have reference frames for mathematics, and the knowledge I have about mathematics is stored at locations in those reference frames. What I do when I think about mathematics is literally equivalent to moving, but I'm not moving anything physical; I'm basically moving to a new point in a reference frame [00:09:30] and recalling the information there. So our thoughts, the things that pop into your head all day, they're not random. What's happening is your brain is moving through points in these reference frames and recalling the information that was stored at each location. That's how you recall knowledge. Thinking is a form of movement: your brain is moving through a reference frame, a space that exists only in your head, but it's exactly the same process as when your finger moves relative to the coffee cup.

Speaker 1: I wanted to lay this groundwork, and thank [00:10:00] you for a very concise, very brief summary of some big concepts, but now we can talk about how this applies to the state and future of what is commonly called, and sometimes improperly labeled, I'm sure, AI. First of all, what you're talking about seems to have almost no connection to modern computing and software, and I think most of us assume that AI is the next derivation of computing and software. Is that almost wrong?

Speaker 2: I don't want to characterize what other people think [00:10:30] about AI. AI as it's practiced today comes in lots of different forms, many different types, but none of them adhere to the principles we just talked about. None of them have this concept: we're building a model of the world using reference frames and lots of models. And by the way, the other key component is that the only way you can learn about the world is through movement. You can't learn what a coffee cup feels like unless you move your fingers over it, and you can't learn what a coffee cup looks like until you rotate it and look at it from different positions. You can't learn what your house is without moving through it. [00:11:00] So part of this theory is that intelligent systems, in order to learn a model of the world, have to move through the world or move their sensors through the world, and AI systems today don't do this. And so I can unequivocally say that AI systems today are not intelligent in the way that a human is. They don't have this knowledge of the world; they don't have common-sense knowledge about the world.
An AI system that recognizes a coffee cup can't tell you what coffee cups are good for, or how they're made, or the history of coffee [00:11:30] cups, or what the one at Starbucks looks like. They just don't know this stuff. The question we as AI practitioners have to ask is: can we do more of what we're doing now and get to truly intelligent machines, or do we have to do something different? Today's AI has been very successful and it's very valuable, but it is not, in my opinion, intelligent, and it is not on the path to being intelligent unless we adopt some new principles for how it should work. I laid out some baseline principles in my book, some of which [00:12:00] we've just been talking about.

Speaker 1: I think a lot of people are probably coming from the same reference point I am, as a layperson and non-engineer, saying, oh, you start with what we have and extend it into its next version. Or does this need to be more of a clean break that requires different skills?

Speaker 2: First of all, the things I'm talking about are not mysterious; engineers understand these concepts. My book is an explanation of these concepts. It's not magic; it's just that people haven't understood this before. Pretty much anyone can understand the principles I'm talking about. They're [00:12:30] not that conceptually difficult. I mean, roboticists use reference frames, animation studios use reference frames. These concepts are out there, but people haven't been thinking about them in terms of brains. So I think we have a lot of engineering work to do, but what's been missing is an understanding of what we have to do. People just didn't know this. We didn't understand what intelligence is; we didn't understand what elements you have to have to create intelligence. And I think we have [00:13:00] that now. So yes, we have to retrain ourselves, or retrain our thinking a little bit, but this isn't magic. It's not like we have to build quantum computers. We can write this stuff in software; we can use traditional silicon architectures to build these things. We had just been lacking an understanding of what it is to be intelligent, and therefore we've been solving AI problems that are useful and valuable but aren't really intelligent; they're not getting us to the true goal of artificial general intelligence. [00:13:30] It reminds me a little of the beginning of the computer era, when in the 1930s and early 1940s pioneers such as Alan Turing and John von Neumann defined what computing was: this is what a computer does, it has its inputs, its memory, and so on.

Speaker 1: We're still working with the echoes of that today.

Speaker 2: Yeah, sure. Every little processor we have today is still a Turing machine in some sense. But it took them decades to figure out how to build that. They didn't know about [00:14:00] transistors or integrated circuits or software or compilers; none of those things were known back then, and they had to develop all of that. We are starting from a much more advanced base today.
We're starting from a base where we already have all those things, and we can tweak them to get what we need. So it's going to happen much faster than it did in the computing era, which—

Speaker 1: Which took decades. We're kind of starting our mission to Mars from the moon.

Speaker 2: Yeah, exactly, that's a great way of putting it. We've already figured out how to get to the moon, and we have to tweak things a bit here, but mostly it's an education [00:14:30] problem, and that's why I wrote this book. I'm making the argument: hey, these systems we have today aren't intelligent, here's why they're not intelligent, and here's what we have to do to get them to be intelligent. And if I'm successful, then we'll be moving more in that direction.

Speaker 1: Put this into some concrete, go-forward terms now, because you're not just an author; you're the co-founder of Numenta. Tell me what you guys do, because obviously what you're doing is putting this into action.

Speaker 2: Yeah. If you had gone to Numenta two years ago, you'd [00:15:00] have seen a bunch of neuroscientists there. Today you'd see a bunch of machine learning or AI people. We took what we learned about the brain and made a roadmap for how we can take today's neural networks, today's deep learning networks, and move them to where we need to go, and we started implementing it. The first thing we worked on is something called sparsity. In the brain, most of the cells are silent; only a very small percentage of [00:15:30] them are active at any point in time. That's not the way today's AI works: in today's deep learning networks, everything is active all the time. By adding sparsity to today's networks, we've shown that we can speed them up by a factor of 50, that they're much more robust, and that you can save huge amounts of power. That's a first step, a baby step, but dramatic changes. The next thing, which we're working on right now, is what we call continuous learning. With today's AI systems, you have to train them once and then deploy them, and if you [00:16:00] want them to do something else, you have to train them all over again, which can be very expensive and slow. Brains are not like that. We learn continuously; we don't forget something just because we learned something new. We know how brains do that, so continuous learning is another huge savings. It would take huge amounts of energy and time out of these systems, because you don't have to stop and retrain them all the time. And the thing we're working on after that, we're just starting on the reference frame concept and how to integrate it into these existing systems. So [00:16:30] hopefully we'll be able to make this sort of gradual transition. We're trying to lead the way; maybe after my book comes out, other people will follow the same path, but we're not going to wait for that. We're doing it ourselves too.
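The sparsity idea Hawkins mentions, only a small fraction of units active at any moment, can be illustrated with a simple k-winners-take-all step applied to a layer's activations. This is a rough sketch under assumed parameters (the 5% level and the function name are made up for the example), not Numenta's actual implementation.

```python
import numpy as np

# Illustrative only: enforce sparsity on a layer's activations by keeping
# the k most active units and silencing the rest (k-winners-take-all).
def k_winners_take_all(activations, sparsity=0.05):
    k = max(1, int(sparsity * activations.size))
    winners = np.argpartition(activations, -k)[-k:]   # indices of the top-k units
    sparse = np.zeros_like(activations)
    sparse[winners] = activations[winners]
    return sparse

dense = np.random.rand(1000)          # a dense layer: every unit carries a value
sparse = k_winners_take_all(dense)    # only ~5% of units stay active

print(np.count_nonzero(dense), np.count_nonzero(sparse))   # -> 1000 50
```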
Speaker 1: You said something interesting just now: with current AI systems, we train them to go learn something, they can learn in a rather limited way, and then we have to bring them back to the garage, so to speak, and train them to learn something else. Is that one of the handicaps?

Speaker 2: It is. You may not be aware, but there are these new AI [00:17:00] systems called language models. They're good for things like typing in a sentence and having it completed. This technology is called transformer technology, and it's one of the hot things in AI right now. Some of the largest of those models can take millions of dollars to train; there are thousands of hours of GPU time just to train them. And if you then want it to know something else, you have to do it all over again.

Speaker 1: That just isn't going to scale.

Speaker 2: It doesn't scale, right? So right now the AI world is facing [00:17:30] a scaling crisis, and people talk about this; I'm not making this up, it's well known. If you talk to the people running the large data centers and the large AI systems, they want to build bigger and bigger systems, but they can't, because it would take too much energy, too much time, too much compute. It's astronomical; it's going up exponentially. So they're hitting a roadblock, and we have to break through that roadblock. And I think we know how to do that.

Speaker 1: As we wrap up here, [00:18:00] and I know this is a big question, if there's any one area you're particularly excited about for the application of AI in the near to mid term, would it be medical analysis? Would it be autonomous vehicles? Would it be somewhere else? Would it be a better autocorrect? Where do you think the first great payoff is?

Speaker 2: I know you want me to answer that question, but I'm going to give you the honest truth: we have no idea what the big payoffs are going to be. If you had asked someone in 1950 what the applications [00:18:30] of computing were going to be, would they have said personal computers, smartphones, GPS, the internet, digital music, digital photography? None of it. They would have said calculating trajectory tables. That's the answer they would have given you. So I'm no wizard here. All I know is that, as I describe in the book, we're going to build truly intelligent machines, and they're going to be unbelievable. We can put some parameters on what they might look like and what their capacities will be, but how people are going to apply [00:19:00] them in the short term, I don't know. It's nearly impossible to say, and I wish I could give you a better answer.

Speaker 1: That's part of what makes it such a fascinating area, to be honest. All right, Jeff Hawkins, co-founder of Numenta.
