10 Comments
Robert Shepherd

Maybe I’m being dense with my human brain, but—

—isn’t there a fundamental difference between chess and science? Chess is bounded: we know the rules of the game, and they are defined axiomatically. No matter how intelligent a chess player is, we still understand their goal: they are trying to win, and we can meaningfully say what that means. And this will be true even if their strategies are beyond us — the destination is legible, if nothing else.

But science isn’t like this: we don’t actually know what a discovery beyond human understanding even looks like. And this is potentially a problem, because it follows that we may have no way of knowing whether a discovery like that is *actually true.* Some such discoveries may be empirically verifiable in ways we can understand. But if some aren’t, all we really have is faith: an AI that says “this result is certainly correct for a reason you can’t understand” may or may not be right. I’m not sure we have any way to know.

And this is a profound difference, really. An AI that beats you at chess demonstrates it by beating you at chess. An AI that understands the world in a way you could never comprehend… cannot necessarily demonstrate anything, including that it’s actually doing science. This is a serious problem; I don’t know that there is a solution to it.

Valentin Guigon

I was about to make a similar observation. Chess is bounded, well-defined, and ultimately intractable for humans. Research is unbounded, ill-defined, and ultimately intractable for machines. One could combine an infinite number of statements and come up with an infinite number of hypotheses. If the research crisis taught us anything, it's that there is a real line between useful and useless hypotheses. Yet whether a hypothesis is useful may only become apparent in the very distant future.

Machines are not yet suited for uncertainty.

David Schatsky

Science that no one can understand is not useful. Indeed, maybe it's not even science.

The game-playing examples like Deep Blue are noteworthy because they were designed not to explain but to win.

Kevin Horgan

Thanks, Matthew. It seems we need AIs that provide transparent discoveries that can be validated easily.

Rick Bolin

The big problem with AI legibility is our inability to distinguish the truth from its mistakes without analyzing the original data ourselves. Don’t think AI is mistake-prone? … just ask it how accurate its results are.

Oisín Moran

This is a great read! Perhaps an early example of what AI science legibility difficulties might look like is Mochizuki's claimed proof of the ABC conjecture. Originally published around 2012 and clocking in at a few hundred pages, it took until quite recently for enough people to properly understand it and seemingly refute it, though the matter still looks to be a little up in the air.

Another interesting game analogy is where you should aim in darts given your skill level [0]. I think this is fun to analogise to things like chess and go, and eventually to stuff like science. For example, the move AlphaGo or AlphaZero would play is not necessarily the best move for me, because I have different (in this case worse) computational abilities. I shouldn't aim for the triple 20 in darts because I'm not accurate enough, and I wouldn't know what to do after a move 37. I can imagine a future where there is a bifurcation in science between things some humans understand and things AIs do, but with us still wanting to chip into the non-human-understood section, perhaps with the help of AIs, perhaps needing several generations. We already outsource understanding among humans anyway: what proportion of the population understands relativity, compared to the proportion that benefits from its use in GPS?
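
To make the darts point concrete, here is a minimal Monte Carlo sketch of the idea in [0]. This is not the paper's actual method; the board measurements are the standard ones, and the noise levels (sigma) are values I picked to stand in for different skill levels.

```python
import math
import random

# Segment values clockwise around the board, starting from the top (20).
SEGMENTS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

def score(x, y):
    """Score of a dart landing at (x, y) mm, origin at the centre of the bull."""
    r = math.hypot(x, y)
    if r <= 6.35:            # inner bull
        return 50
    if r <= 15.9:            # outer bull
        return 25
    if r > 170.0:            # outside the double ring: no score
        return 0
    # Angle measured clockwise from vertical, so 0 degrees is the 20 segment.
    theta = math.degrees(math.atan2(x, y)) % 360.0
    value = SEGMENTS[int((theta + 9.0) / 18.0) % 20]
    if 99.0 <= r <= 107.0:   # triple ring
        return 3 * value
    if 162.0 <= r <= 170.0:  # double ring
        return 2 * value
    return value

def expected_score(aim_x, aim_y, sigma, n=100_000):
    """Monte Carlo estimate of the mean score when throws scatter
    around the aim point with Gaussian noise of std sigma (mm)."""
    total = 0
    for _ in range(n):
        total += score(aim_x + random.gauss(0, sigma),
                       aim_y + random.gauss(0, sigma))
    return total / n

# The centre of the triple 20 sits ~103 mm straight above the bull.
for sigma in (5, 25, 60):    # rough stand-ins for expert / club / novice
    print(f"sigma={sigma:>2}mm  aim T20: {expected_score(0, 103, sigma):6.2f}"
          f"   aim bull: {expected_score(0, 0, sigma):6.2f}")
```

The printed numbers are noisy estimates, but if the simulation behaves as the paper suggests, the ranking of the two aim points should flip as sigma grows, which is the same reason move 37 isn't my best move.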

Another interesting thought, sparked by the bit about our desire for understanding, is whether it is purely understanding that we desire, or also having to work for it a bit. In discussing puzzle design, Elyot Grant makes the nice distinction between "Eureka" and "Fiero" [1], where Eureka moments can be transmitted easily, observed secondhand, and still enjoyed, while Fiero is more about the slog to get there and can't really be transmitted. In this context you can imagine a continuum from figuring something out yourself independently to the Matrix "I know Kung Fu" brain download. Hopefully we can hit many points along that scale.

It would be nice to see different ways to carve along the seams of science and what regions of study different AI systems would come up with.

[0] https://www.stat.cmu.edu/~ryantibs/papers/darts.pdf

[1] https://youtu.be/oCHciE9CYfA?si=1QQOBa4XhB8dKlw1&t=2119

Karol Buda

Phenomenal article.

The main thought I had while reading this was that when I play chess, my goal is to win and, therefore, I am incentivised to play the best move, but I'd argue that my purpose when playing chess is generally to have fun, learn, enjoy the social interaction, and destress (or get even more stressed, as many fellow online chess players may relate to lol).

That's why your reference to Brandon Boesch's work resonates with me; it seems irrelevant that it may be AI that leads the way in scientific breakthroughs and therefore "wins" the game. As long as I get to run my own parallel track of science, or, as you aptly put it, get to "excavate" the work done by AI, I will very much be doing science in a way that gives me purpose. In fact, the latter is already a main component of our current jobs as scientists: reviewing the literature, marvelling at others' work, and trying to decode it for our own understanding.

Finally, to your point that "scientific discoveries must eventually intersect with material reality to have any utility": I find this is more true of the engineering efforts stemming from scientific discovery; scientific discoveries do not have to have utility to be great and worthy of our time. I doubt many Ig Nobel prize winners care whether their work gets "utilized," and, all jokes aside, those works are highly rigorous and fascinating. The scientific method is ultimately just a tool, like a brush to a painter. Whether the problems we choose to direct it at yield useful outcomes is irrelevant; that freedom is what makes "pure" science truly human.

Andrew Antes

How many human-illegible research papers will be chalked up as AI slop? How many already have been?

Mbwanga Sambata

Great essay! What types of knowledge, exactly, do you expect AI to produce that won't be intelligible? If it says "this molecule cures Alzheimer's," we should definitely ask it how it arrived at that conclusion and what the mechanism is, and check for ourselves. Checking is often far easier than coming up with the answer in the first place.
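
As a toy illustration of that checking asymmetry (my own example, not anything from the essay): verifying a claimed factorization of a number takes a single multiplication, while finding the factors from scratch can take billions of steps.

```python
# P is the Mersenne prime 2^31 - 1; N = P * P is a composite whose
# factorization we pretend not to know.
P = 2_147_483_647
N = P * P

def verify(p, q):
    """Checking a claimed factorization of N: one multiplication."""
    return p > 1 and q > 1 and p * q == N

def find_factor():
    """Finding a factor from scratch by trial division: up to ~sqrt(N) steps."""
    i = 2
    while i * i <= N:
        if N % i == 0:
            return i, N // i
        i += 1
    return None

print(verify(P, P))  # True, essentially instantly
# find_factor() would grind through roughly two billion candidates first.
```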

If it says "the key to consciousness is this set of equations" and no one can ever understand them with the brains we have, then we are in a bind. If we don't evolve to the point where we can understand it (implants? genetic augmentation?), we will find ourselves at the mercy of a higher intelligence. Our main problem would then be far removed from merely curating scientific findings made by AI.

Related to that, another claim I'm sensing is that we should or would let AI "do its own science" completely unsupervised. We already put limitations on humans doing research into, e.g., human pathogens; why would we let AI do that kind of research without such restrictions?

Becoming Human

What is interesting is that a super-intelligence that can understand something more complex than we can is tantamount to being part of the observable but not yet intelligible universe.

If there is a conclusion that we cannot account for, then it is just a natural effect, like a drug that works without our understanding why.