😰 Why AGI Will Feel Pain and Fear

I’d like to argue that the key to achieving AGI is “pain” and “fear”. This is based on a thought experiment I’ve been noodling on out of idle curiosity. I have not read any of the literature on AGI, so I have no idea whether these ideas are novel. Either way, please indulge me; I’d love to hear your comments!

Thought Experiment for the Reader

Definition of AGI

I’d like to define AGI as:

Any system that can match or surpass a human in its ability to direct its own productive work across an unbounded scope of tasks.

That is, an AGI is any system that is at least as good as a human at figuring out how to solve any problem: learning and teaching itself new approaches, and then focusing on whichever approach turns out to be best.

Okay, so what’s the missing ingredient for AGI?

Test of Intelligence

I’ll suggest a simple question we can ask ourselves to determine whether any given component of human intelligence is critical to the creation of AGI:

If we take a component of human intelligence away, does the human become smarter or dumber?

Let’s try this test on the following things:

  • Self-assessment and reprioritization: It’s simple to deduce that without the ability to assess progress and reprioritize tasks, a human would get stuck working on the same thing forever, and progress would be reduced to the random luck of happening to work on the right thing. Therefore, self-assessment and reprioritization are critical to AGI.
  • Self-preservation: It seems clear that without the ability to protect oneself from death, one will die (on a long enough time horizon), and therefore one will not be able to keep working on anything at all, let alone on the correct thing. For instance, if an AI runs out of electrical power, or the datacenter it is instantiated in is destroyed, it won’t be able to continue making progress. And if one keeps working on the same failing approach for too many years, one may die before landing on the correct approach. Therefore, self-preservation is critical to AGI.
  • Monitoring scarce resources: How can you possibly know you need to recharge your battery unless you “feel tired”? How can you possibly treat an injury unless you know you have one? Time is a scarce resource too, and the awareness of time passing is just as relevant here as detecting energy levels and injuries. If you run out of time, you die. Therefore, I posit that “pain-sensing”, or any self-monitoring of scarce resources, enables self-preservation and is critical to AGI. (A minimal sketch of such a loop follows this list.)
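
To make this concrete, here is a minimal sketch in Python of how these three components could fit together in a single loop. Everything in it (the Agent class, the thresholds, the 0-to-1 “pain” scalar) is an illustrative assumption of mine, not a real API or a claim about how an actual system would be built.

```python
import random

# Illustrative sketch only: a "pain"-driven agent loop. All names and
# thresholds here are assumptions, not a real API.

class Agent:
    def __init__(self, energy=100.0, time_budget=200):
        self.energy = self.max_energy = energy  # scarce resource: stored power
        self.time_budget = time_budget          # scarce resource: steps allowed
        self.elapsed = 0
        self.approaches = ["approach_a", "approach_b", "approach_c"]
        self.current = 0

    def pain(self):
        """Self-monitoring: a scalar that climbs toward 1.0 as either
        time or energy runs low; the machine analogue of feeling tired
        or noticing an injury."""
        time_pain = self.elapsed / self.time_budget
        energy_pain = 1.0 - self.energy / self.max_energy
        return max(time_pain, energy_pain)

    def assess_and_reprioritize(self, progress):
        """Self-assessment: high pain plus low progress means the
        current approach is failing, so switch to the next one."""
        if self.pain() > 0.5 and progress < 0.1:
            self.current = (self.current + 1) % len(self.approaches)

    def step(self):
        self.elapsed += 1
        self.energy -= 0.5                 # working costs energy
        progress = random.random() * 0.2   # stand-in for a real metric
        self.assess_and_reprioritize(progress)
        return self.approaches[self.current]

agent = Agent()
for _ in range(150):
    agent.step()
print(f"pain={agent.pain():.2f}, now trying {agent.approaches[agent.current]}")
```

The design choice worth noting: pain here is a single scalar, so self-preservation doesn’t require the agent to reason about every resource separately; it only needs to notice that some scarce resource is running low and let that pressure drive reprioritization.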

Conclusion: The Fear of Death

Stringing all this together, I posit that a binary condition separates truly autonomous AGI agents from “servant” agents: time management. Is time management motivated extrinsically or intrinsically? Are we telling the AI what to do, or is it figuring that out for itself? At the root of this is the concept of “motivation”, and I’m arguing that the only true motivator of time management is the fact that time itself is the foundational scarce resource. Every other resource an agent could possibly need is a function of time, since procuring any other resource takes time.

The fear of “running out of time”, a.k.a. the “fear of death”, is the only reason we ever need to manage our time. Without the risk of lions approaching, a deer can graze uninterrupted. Without the risk of time running out, an AI agent can keep attempting the same failing approach forever. Fear is the core motivator of time management, and therefore the foundational unlock for AGI.
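
To see why, consider a toy simulation (the setup is entirely my own illustrative assumption: three candidate approaches, only one of which can ever succeed, and a fixed per-step success probability). A “fearless” agent sticks with its first pick forever; a “fearful” agent abandons its current approach as its time budget drains.

```python
import random

# Toy simulation with illustrative assumptions: three candidate
# approaches, only one of which can ever succeed.

def run(switches_under_pressure, budget=100, p_success=0.05, trials=10_000):
    wins = 0
    for _ in range(trials):
        working = random.randrange(3)  # which approach actually works
        current = 0                    # both agents start on approach 0
        for t in range(1, budget + 1):
            if current == working and random.random() < p_success:
                wins += 1
                break
            # "Fear of running out of time": abandon the current
            # approach after each third of the budget is spent.
            if switches_under_pressure and t % (budget // 3) == 0:
                current = (current + 1) % 3
    return wins / trials

print("fearless:", run(switches_under_pressure=False))
print("fearful: ", run(switches_under_pressure=True))
```

In this setup the fearless agent succeeds only when it happens to pick the working approach first, roughly a third of the time, while the fearful agent recovers from a bad first pick by switching. That gap is the time-management argument above in miniature.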

Thoughts?

TODO:

  • How to make an AI afraid: pain
  • Implicit in fear is the awareness of oneself, and thus, consciousness
  • Why the formation of “fear” neurons in the brain is inevitable