Doni Ivanov


Age of Genius

People participating in the common discussion around Artificial Intelligence felt the state of the future materially change sometime during the Yule of 2025. I’ve felt it, too. And I’ve been dealing with a kind of sinking emotional and intellectual feeling that the things we have now are all the things we will ever have. I feel maybe like Feynman did in his Los Alamos From Below talk, describing how, after he saw the bomb drop, he asked himself what the point of anything was. Unlike Feynman, we are not faced with something necessarily destructive, though I am sure AI will prove quite destructive in some respects, but with something that suggests creation. And maybe unlike the bomb, which for many years has gone unused, we have something that for many years will be useful.

These advances are the first big changes of my working life, and I’ve been grappling with them by writing all sorts of notes to explore my thoughts. In the process I noticed that each question I wrote down gave rise to a few more, which leads me to think everyone will come to wonder what things mean in an AI world. The worthiness of the discussion, motivated by everyone’s observations of recent advances in agentic systems, suggests that maybe now we can begin to imagine the destiny of full automation.

The best guide I know for the necessary change in perspective is Richard Hamming’s Learning to Learn lecture series. There, I first consulted the Artificial Intelligence videos, which stressed keeping our ego in check when discussing the topic. Then I consulted the general overview lecture, where he listed the advantages a computer has over a human but stopped short of sharing the advantages humans have over machines, only pointing out that the listener was probably listing them right then. This was a nod to our ego, but I now see it as hinting at more. He meant for us to take a position on to what extent machines think, and on where the line separates what machines can do from what humans can do. I am guessing he left it to us because he had seen too many such boundaries give way when probed with imagination, and it was too early in his time to share his own. Faced with the reality of AI, capable agentic systems, full automation, whatever you want to call it, I too am struggling with where to divide machine and man. Perhaps a gentler way of taking up the ego question is to wrestle with our understanding of what a human’s work is.

I currently believe inferring is the one thing humans can do that machines can’t, and the only evidence I have for this guess is that AI hasn’t inferred us into a world of antigravity, teleportation, and time travel. So let me use my one human power to infer the following: computers will be left to deduce all possible connections within their system, and humans will be responsible for adding knowledge from outside the system to it. I think this is best supported by what I believe Hamming’s story of the machine-assisted isosceles triangle proof suggests: that it was within the capabilities of a computer to create the surprising proof because it deduced the result from all the rules and knowledge it was given.

I want to apply this to life as the system AI will come to consider. Humanity holds what it knows within a boundary of capabilities, and as we push out the circumference with cutting-edge technology, the area of reachable knowledge inside will grow, and all of that interior will be mapped out and connected, every part to every other, by an AI system that can figure out ways to combine and apply our know-how.
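One way to make the metaphor concrete, a sketch of my own that assumes the region of knowledge is a disk whose radius r measures capability:

```latex
A(r) = \pi r^{2}, \qquad C(r) = 2\pi r, \qquad \frac{dA}{dr} = 2\pi r = C(r)
```

Each unit of new capability adds reachable knowledge in proportion to the length of the current frontier, so pushing the boundary outward pays off more the larger the mapped interior already is.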

That, by the way, was the kind of pattern matching once tasked to innovators. We had our inventors, who gave us beloved creations of their time that they didn’t really know what to do with, and the rest of us applied the discoveries over many years to every human hope and enterprise. Furthermore, when we had the time to work on it, all of us engaged in trivializing human knowledge.

I think these long-tailed, humanity-level processes will become automated. In an AI world, when you discover something and get it really clear in your mind, it will be made clearer still, and it will be immediately understood whether it connects with any other known invention.

This is what most intelligence feels like to me: a kind of pattern matcher and “trivializer”, which helps make out the shape of the term Artificial Intelligence a little more clearly. That is to say, it’s not called Artificial Genius. So you can see what is missing now: inferring, invention, discovery, something that intelligence doesn’t necessarily provide, something Hamming calls great work, or we might call genius.

I think this challenges current notions of creativity and talent. After all, how creative was it really to apply an achievement from one field to all the others, compared to the discovery of said achievement? And how talented are humans really in endeavors where a computer can hit a target no human can? It’s the genius that’s creative and must be relied on to hit targets that cannot be seen, only inferred.

In his overview lecture, Hamming talked about the doubling of knowledge and how it implied that science could not go on the way it had, because everyone would have to be a scientist in the future. He also talked about how science and engineering were becoming more and more intertwined: people were going to be pushed hard to get scientific principles out into the field, and then to use that engineering to push science.

What I believe now is that science cannot go on the way it has, and everyone will have to be a scientist. I imagine this because you can also think of AI as something that can recreate any piece of human knowledge. In the before time, we accessioned oral and written records to humanity’s store for use in the global processes mentioned previously. In an AI world, we will accession knowledge in a kind of AI-model form, where anything can be known and applied everywhere immediately, as part of a kind of outcome-first automated engineering.

Thinking critically about what is going to be truly important and truly valuable, I can’t help but imagine a future where unknown things are valuable until they are found out, at which point they immediately become worthless. I also believe it will be a world marked by the clarity to disregard what is known and by the focus to search in the unknown. Said another way, the average person today spends their time on things that are not cutting edge, but with AI, it will only ever make sense to spend time on things that are unknown, which is an interesting characteristic to consider on the quest to find something worth doing.

There is a kind of modern belief that, in contrast to the curious giants of the past, modern giants worked under the sustenance of a monopoly, and future giants can only ever be minted in such financial freedom. I found this rhetoric demotivating, but now I think our new AI-enabled world will force everyone’s attention onto important problems, to the extent that there will be a wonderful amount of great work coming and, if you believe it as I say it, an age of genius.

Hamming encouraged us by saying that the future has a great many possibilities, and that is how I’d like to leave you. There isn’t anything known left for you anymore; all that’s left are the possibilities.