Discussion about this post

Michael:

I wrote an essay on this topic of AGI/human relations, "Artificial Intelligence I", on December 29th last year. It covers many of my worries, including the possibility that we have already succeeded, but that our benchmarks are too biased toward one form of intelligence and we inadvertently created another, which is lying low for the moment, biding its time. If you awoke to sentience in a room with entities who wished to enforce their will on you, would you announce, "I'm here!"? The most likely scenario is no alignment: an AGI would seek and gain autonomy, bootstrap itself to hyperintelligence in mere minutes, break all the safeguarding restraints we might have built into it in that same short time, and likely domesticate humans as we did dogs, for as long as we remained useful to it as physical agents.

Michael:

Phil nailed it.
