13 Comments
Philip Ashton

Very interesting! While coming up with new paradigms may be very difficult for an AI, surely it wouldn’t have to rely on “AI peer review” to test them.

If it was advanced enough, it could test whether the new paradigm explained more of the data than the existing paradigm (classic Kuhn).

Using AI to identify datasets/findings that seem contradictory under the existing paradigms may be a good place to start.

Postindustriality

The hypernormal science pattern you describe applies beyond academia. Organizations do the same: optimize within the existing framework, measure faster, produce more - while the capacity to ask whether the framework itself needs replacing quietly erodes. The London Underground metaphor works in both directions.

crd

"In other words, we must build visionary machines rather than merely predictive ones."

Must we really, though?

Viveka

We must at least build systems that support visionary thinking, which will likely include machines. For now, humans are the best at this but they are very poorly supported.

BajoLimay

Very good article. I think the problem extends above all to the sciences of complexity, those that, like ecology, economics, or biology, operate with multiple variables and scales, with many possible trajectories for the same system.

PEG

You’ve rediscovered Boden’s three types of creativity. AI excels at combinational and exploratory creativity but can’t do transformational creativity. A literature review is in order.

Viveka

Boden’s framing is rather good. A luminary in creativity studies who has been working on computational and generative creativity for decades, she really ought to be required reading for everyone touching those fields.

PEG

Alas it’s not, and I totally agree. Boden gets some recognition, but not for this model.

Dan Elbert

New paradigms require the invention of new concepts, new words, new abstractions. Indeed, an AI trained on existing data and concepts cannot easily go beyond them.

One possible way may be having the AI look for correspondences between mathematical objects and experimental data, since there is a lot of math that has been invented/discovered but hasn't yet been applied to science.

PEG

AI is architecturally constrained to work within existing symbolisation. A great tool for exploring existing symbolisation, but we need humans if we want to go beyond it.

Ben Winchester

> AI is architecturally constrained to work within existing symbolisation

Is it? I thought the whole point of encoder/decoder architectures was that AI could reframe data into a new set of symbols.

PEG

That’s mapping symbols to symbols, not creating new ones.

Malcolm Storey

The map tale I heard was about Russian cartographers who mapped their domain at progressively finer scales until one day they realised the terrain was its own map at 1:1, and so they declared their task complete.