Discussion about this post

Robert Shepherd:

Maybe I’m being dense with my human brain, but—

—isn’t there a fundamental difference between chess and science? Chess is bounded: we know the rules of the game, and they are defined axiomatically. No matter how intelligent a chess player is, we still understand their goal: they are trying to win, and we can meaningfully say what that means. And this will be true even if their strategies are beyond us— the destination is legible, if nothing else.

But science isn’t like this: we don’t actually know what a discovery beyond human understanding even looks like. And this is potentially a problem, because it follows that we may have no way of knowing whether a discovery like that is *actually true.* Some such discoveries may be empirically verifiable in ways we can understand. But if some aren’t, all we really have is faith— an AI which says “this result is certainly true for a reason you can’t understand” may or may not be right. I’m not sure we have any way to know.

And this is a profound difference, really. An AI that beats you at chess demonstrates it by beating you at chess. An AI that understands the world in a way you could never comprehend… cannot necessarily demonstrate anything, including that it’s actually doing science. This is a serious problem; I don’t know that there is a solution to it.

David Schatsky:

Science that no one can understand is not useful. Indeed, maybe it's not even science.

The game-playing examples like Deep Blue are noteworthy because they were designed not to explain but to win.

8 more comments...
