

In a thought-provoking paper published in Acta Astronautica, Michael Garrett of the University of Manchester explores the idea of Artificial Intelligence (AI) morphing into a 'Great Filter' - a concept borrowed from the Fermi Paradox describing whatever impedes intelligent life from expanding into the cosmos. The paper cautions that AI's rapid progression toward Artificial Super Intelligence (ASI) could open a gap that vastly outstrips human intelligence, and warns that this rapid evolution could trigger unforeseen and unintended catastrophic outcomes.

Garrett muses that such advancements could also determine the life span of civilizations. Technological civilizations as we understand them might typically exist for fewer than 200 years, a daunting assertion that could shed light on the conspicuous silence and the lack of detectable technological evidence from extraterrestrial lifeforms. The argument is that a civilization's technological advancement, specifically the birth and evolution of ASI, rises swiftly to the point of self-destruction. The question then arises: is humanity next in line?

To circumvent such existential threats, Garrett emphasizes the necessity of grounding AI development in regulation while concurrently investing resources in maturing into a multi-planetary society. The alarm bells concerning AI range from job displacement and discriminatory patterns encoded in algorithms, which could undermine the essential pillars of democratic societies, to the chilling prospect of autonomous decision-making without human accountability.

Garrett argues that attaining multi-planetary status could provide humanity with some semblance of a safety net. Spreading across multiple planets would distribute risks and increase resilience against the fallout from an AI-induced catastrophe on Earth. However, the reality is sobering: the advancement of AI appears to be leaving space technology in its cosmic dust.

The rate of AI development is hurtling forward at an unprecedented pace, creating a significant chasm between AI evolution and progress in space technology. AI has immense potential to improve without significant physical limitations constraining its evolution. In stark contrast, space travel faces monumental hurdles, ranging from surviving harsh environmental conditions to the limited availability of resources.

Garrett's concluding remarks stress the need for humanity to shift its priorities towards space travel and to create a universally acceptable regulatory framework for governing AI. Failure to address these two critical areas could spell catastrophic consequences for the future of technical civilizations.

His warning paints a future in which the unbridled growth of AI without stringent regulations, coupled with inadequate investment in space exploration, could transform us into hostages of our own technological creations. It is a wake-up call for global societies to act collectively and decisively. Mankind's future may well hinge on our capacity to harness, regulate, and keep pace with the power we have unleashed.