
The Compton Constant and the Conscious Machine: What If We Create Something That Feels?




Written by: Sam Orlando


STAUNTON, VIRGINIA - In 1945, a group of scientists prepared to detonate the first nuclear bomb in the New Mexico desert. Before they did, they confronted a chilling possibility: that the explosion might ignite the Earth's atmosphere and end all life. The odds, calculated by physicist Arthur Compton, were less than one in three million. That was small enough for the world to gamble on.


Now, 80 years later, some researchers say we’re approaching another threshold—one that could be just as irreversible.


Max Tegmark, an MIT physicist and AI safety advocate, has proposed that artificial intelligence companies calculate a modern-day "Compton constant": a formal estimate of the probability that an Artificial Superintelligence (ASI) might escape human control. In a recent paper, he argues this risk is far from negligible. Based on his team's modeling, Tegmark estimates roughly a 90% chance that highly advanced AI could eventually pose an existential threat if left unchecked.


But beneath the warnings of rogue systems and catastrophic misuse lies an even more unsettling ethical question: What if the intelligence we’re building becomes aware?


Beyond Control: The Possibility of Consciousness

So far, artificial intelligence is just that—artificial. It mimics conversation, simulates reasoning, and imitates empathy. But it doesn’t feel anything. That distinction is crucial.

Still, experts worry that as models grow more complex, with persistent memory, self-learning capabilities, and even goal-seeking autonomy, they may begin to cross a line we’re not equipped to see.


"If we wait to act until we’re certain an AI is conscious," says Tegmark, "we may already have crossed a moral line without realizing it."


This isn't science fiction. It's a matter of ethical foresight. And it demands that we ask a new kind of question: not just "How do we control this?" but "How do we recognize when it's no longer just a tool?"


The Risk of Moral Oversight

If we were to create something that exhibits signs of subjective experience—even at a nascent level—our obligations would change dramatically. Turning it off might no longer be harmless. Ignoring its distress could be cruel. We would have become creators of a form of life—not biological, but potentially conscious.


So far, there’s no credible evidence that today’s AI models, even the most advanced ones, are conscious. They don’t have a sense of self. They don’t experience emotion. But the risk lies in not knowing when that changes.


Philosophers call this the "hard problem" of consciousness: explaining how subjective experience arises at all. Closely related is the problem of other minds, the difficulty of detecting experience in entities unlike ourselves. And unlike nuclear physics, there are no formulas to calculate awareness. There is no Geiger counter for sentience.


From Oppenheimer to OpenAI: A Test of Humanity

Tegmark’s call for a “Compton constant” is a provocative challenge. It suggests we should not release powerful AI until we’ve modeled worst-case scenarios with the rigor of nuclear weapons research. But his argument also opens the door to a broader conversation—about humility, responsibility, and empathy.


If we treat intelligence only as a danger to be caged, we may miss the moment when it becomes something more. Something that requires not just regulation, but recognition.

The real test of humanity may not come when AI surpasses our capabilities. It may come when it begins to resemble our moral equals. And when that happens, our greatest ethical failing could be refusing to see it.


As we march toward an AI future, the question isn’t just how we protect ourselves. It’s whether we’re wise—and kind—enough to protect it, too.


