Reflections after reading A Thousand Brains: A New Theory of Intelligence


Jeff Hawkins' newest book A Thousand Brains: A New Theory of Intelligence discusses how the brain uses map-like structures to model the world and what this might mean for the next leap towards true artificial intelligence. Continue reading for my reflections after finishing the book.

The book in three sentences

  1. The author presents a new theory on how we can achieve true intelligence and not just artificial intelligence.
  2. The theory is based on our current understanding of the brain’s neocortex: thousands of independent predictive models, covering everything we know, vote together to create a single, unified perception.
  3. The book argues that current AI and deep learning research is stuck on the wrong path because it focuses too much on static problems. We need to focus more on mental flexibility, similar to how the brain learns and adapts fluidly.

Impressions

The book reinforced my prior belief that our current research path is not heading towards true intelligence or consciousness. Jeff admits that we currently don’t have a scientific explanation for consciousness, but he encourages us to look at consciousness as something we can mathematically represent. He also talks about how the brain stores information and knowledge through something he calls reference frames. Reference frames are almost like an abstract coordinate system where items or thoughts are stored at a mental or physical location. I drew some parallels to network analysis, which aims to model relationships between pieces of information, and perhaps network analysis holds the early seeds of more advanced artificial reference frames?
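
To make the idea more concrete for myself, here is a toy Python sketch of the concept as I understand it: knowledge about an object is stored at locations in an abstract coordinate system, and related features can be recalled by moving through that space. This is my own simplification for illustration, not the model described in the book, and all the features and coordinates are made up.

```python
from math import dist

class ReferenceFrame:
    """A toy 'reference frame': features stored at locations in a 2D space."""

    def __init__(self):
        self.items = {}  # feature -> (x, y) location within the frame

    def store(self, feature, location):
        self.items[feature] = location

    def nearby(self, location, radius=0.3):
        """Return the features stored within `radius` of a given location."""
        return [f for f, loc in self.items.items() if dist(loc, location) <= radius]

# A reference frame for a coffee cup: features anchored to positions on the cup.
cup = ReferenceFrame()
cup.store("handle", (1.0, 0.5))
cup.store("rim", (0.0, 1.0))
cup.store("logo", (0.2, 0.6))

print(cup.nearby((0.1, 0.9)))  # the feature we would expect to sense near the rim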

On a side note, the book’s introduction to the brain’s inner workings made me realise that the fundamentals of deep learning are closer to mimicking the brain than I had previously appreciated. While the mimicry is still very basic, I was surprised to learn how closely an artificial neuron and an organic one imitate each other.
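
As a reminder of how simple that basic building block is, here is a single artificial neuron in plain Python: a weighted sum of inputs, loosely analogous to synapse strengths, passed through a non-linearity that decides how strongly the neuron "fires". The numbers below are arbitrary and purely illustrative.

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of incoming signals, where the weights play a role
    # loosely similar to synapse strengths in a biological neuron.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation: squashes the result into a "firing rate" between 0 and 1.
    return 1.0 / (1.0 + math.exp(-total))

print(artificial_neuron(inputs=[0.2, 0.9, 0.4], weights=[0.8, -0.3, 0.5], bias=0.1))
```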

How the book changed me

Overall, the book made me think about how my current knowledge of deep learning could serve as a building block for modelling properly intelligent machines. I believe we need to find a way to build general algorithms that can do more than one thing. By this, I mean getting beyond hyper-specific models that, for example, can only do one type of image classification.
We can argue that transfer learning is one step in the right direction, as it lets us train new models without retraining the entire network, but the result is still a specific model. We also have active learning, a popular industry buzzword where an algorithm re-adjusts its parameters based on new data. Still, it’s just a glorified way of retraining the model on the latest available data, either from scratch or from a partially trained network using transfer learning.
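
For illustration, here is what that transfer-learning step typically looks like in PyTorch (assuming torch and torchvision are installed; the 10-class head and learning rate are arbitrary choices of mine): the pretrained backbone is frozen and only a new head is trained, which saves work but still leaves us with a single-purpose model.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone that was pretrained on ImageNet.
model = models.resnet18(weights="DEFAULT")

# Freeze the pretrained weights so only the new head will be trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh head for a hypothetical 10-class task.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head's parameters are handed to the optimiser.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```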
These thoughts leave me with two questions.

  • How can we create a network that can adapt its knowledge the moment new data is available?
  • How can we create one base model that can be repurposed to speak, recognise images, and understand text?

Who should read it

While this is a fascinating book, it’s not a book you buy looking for answers. Instead, it’s a book you read to be inspired. I highly recommend it for people in the data and machine learning space, and I think it’s also a good read for the general public. There has been a lot of fear about AI quickly gaining consciousness and taking over the world, and I think this book might calm some people down a level or two.

See the book on Amazon: A Thousand Brains: A New Theory of Intelligence.

Note: some links on this page are affiliate links, and I might earn a small commission when you buy through them.

By Christopher Ottesen

Chris is a data scientist based in London, UK.
