New Solution To The Fermi Paradox Suggests The Great Filter Is Nearly Upon Us

April 12, 2024 by Deborah Bloomfield

An astronomer has suggested a new solution to the Fermi Paradox, one that implies the “Great Filter” may still lie in our near future.

First, a little background. With 200 billion trillion (ish) stars in the universe and 13.7 billion years elapsed since it all began, you might wonder where all the alien civilizations are. This is the basic question behind the Fermi Paradox: the tension between our suspicions about the potential for life in the universe (given the planets found in habitable zones, etc.) and the fact that we have only found one planet with an intelligent (ish) species inhabiting it.


One solution, or at least one way of thinking about the problem, is known as the Great Filter. Proposed by Robin Hanson of the Future of Humanity Institute at Oxford University, the argument goes that, given the lack of observed technologically advanced alien civilizations, there must be some great barrier to the development of life or civilization that prevents species from reaching a stage where they make big, detectable impacts on their environment that we can witness from Earth.

There could be other reasons why we haven’t heard from aliens yet, ranging from us simply not listening for long enough (or not searching for the right signals, given our technological immaturity) to aliens deliberately keeping us in a galactic zoo. But if the Great Filter idea is correct, we don’t know at what point along it we currently are.

It could be that the filter comes earlier; perhaps it is extremely difficult to make the leap from single-celled life to complex life, or from complex animals to intelligent ones. It could be, though, that the Great Filter lies ahead of us, preventing us from becoming a galaxy-exploring civilization. For example, civilizations may inevitably discover a way of destroying themselves (nuclear weapons, say) before they are advanced enough to become a multi-planet species. A toy calculation after this paragraph shows why the argument forces at least one such step to be spectacularly improbable.
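To get a feel for the logic, here is that toy calculation as a short Python sketch. The nine-step chain and every probability below are illustrative assumptions for the sake of the example, not values from Hanson or from the paper discussed below:

```python
# Toy version of the Great Filter argument. If none of the roughly
# 2 x 10^23 stars hosts a visibly expansive civilization, the product
# of the per-step success probabilities on the road from "star" to
# "galaxy-exploring civilization" must be tiny, so at least one step
# must be spectacularly improbable. All numbers are illustrative.

N_STARS = 2e23  # "200 billion trillion (ish)" stars, as above

# If every step were moderately easy, expansive civilizations
# would be everywhere:
step_probabilities = [0.5] * 9  # nine steps, each 50 percent likely
product = 1.0
for p in step_probabilities:
    product *= p

print(f"All-easy chain: {N_STARS * product:.1e} expansive civilizations")
# -> roughly 4e20, i.e. the sky should be full of them

# Observing zero instead requires the whole product to fall below
# about 1/N_STARS, meaning some step is a near-impassable filter:
print(f"Required product of step probabilities: < {1 / N_STARS:.0e}")
```

Whichever step carries that vanishingly small probability is the Great Filter; the open question is whether it sits behind us or ahead of us.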

In a new paper, Michael Garrett, Sir Bernard Lovell Chair of Astrophysics at the University of Manchester and Director of the Jodrell Bank Centre for Astrophysics, outlines how the emergence of artificial intelligence (AI) could lead to the destruction of alien civilizations.


“Even before AI becomes superintelligent and potentially autonomous, it is likely to be weaponized by competing groups within biological civilizations seeking to outdo one another,” Garrett writes in the paper. “The rapidity of AI’s decision-making processes could escalate conflicts in ways that far surpass the original intentions. At this stage of AI development, it’s possible that the wide-spread integration of AI in autonomous weapon systems and real-time defence decision making processes could lead to a calamitous incident such as global thermonuclear war, precipitating the demise of both artificial and biological technical civilizations.”

When AI leads to Artificial Superintelligence (ASI), the situation could get much worse.

“Upon reaching a technological singularity, ASI systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics,” Garrett continues. “The practicality of sustaining biological entities, with their extensive resource needs such as energy and space, may not appeal to an ASI focused on computational efficiency—potentially viewing them as a nuisance rather than beneficial. An ASI could swiftly eliminate its parent biological civilisation in various ways, for instance, engineering and releasing a highly infectious and fatal virus into the environment.”

Civilizations could mitigate this risk by spreading out first, testing AI (or living with it) on other planets or outposts. One advantage of this would be that the civilization could watch progress on those worlds and get advance warning of the risks. If an AI suddenly started destroying a planet in its endless pursuit of producing paperclips, for example, watchers on another planet would see that outcome coming and could take steps to avoid it.


However, Garrett notes that on Earth we are advancing much more quickly toward AI and ASI than we are toward becoming a multi-planetary species. This comes down to the scale of the challenges involved: space exploration requires enormous amounts of energy, advances in materials science, and ways to survive the harsh environments found in space. Advances in AI, meanwhile, depend on increasing data storage and processing power, which we appear to be delivering consistently.

According to Garrett, if other civilizations are following the path we appear to be on, perhaps using AI to help with the challenges of becoming interplanetary, AI calamities will likely strike before they can establish themselves elsewhere in their solar systems or galaxies. Garrett estimates that the lifespan of a civilization, once it adopts AI in widespread use, is around 100-200 years, leaving very little opportunity to contact or send signals to other aliens out there. This would make our chance of finding such a signal fairly slim.

“If ASI limits the communicative lifespan of advanced civilizations to a few hundred years, then only a handful of communicating civilisations are likely to be concurrently present in the Milky Way,” Garrett concludes. “This is not inconsistent with the null results obtained from current SETI surveys and other efforts to detect technosignatures across the electromagnetic spectrum.”
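To see how a 100-200 year communicative window squeezes the numbers, here is a back-of-the-envelope Drake equation calculation in Python. Only the lifespan L comes from Garrett’s estimate above; every other factor value is a deliberately generous illustrative assumption, not a figure from the paper:

```python
# Drake equation: expected number N of civilizations in the Milky Way
# that are detectable (communicating) at any one moment.
# Only L (communicative lifespan) comes from Garrett's estimate;
# the other values are generous illustrative assumptions.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* x fp x ne x fl x fi x fc x L."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

assumptions = dict(
    R_star=1.5,  # Milky Way star formation rate (stars per year)
    f_p=1.0,     # fraction of stars with planets
    n_e=0.2,     # habitable planets per star
    f_l=0.5,     # fraction of habitable planets that develop life
    f_i=0.5,     # fraction of those that develop intelligence
    f_c=0.5,     # fraction of those that become detectable
)

for L in (100, 200):  # Garrett's estimated communicative lifespan (years)
    n = drake(L=L, **assumptions)
    print(f"L = {L} years -> N ~ {n:.1f} detectable civilizations")
```

Even with every biological factor set generously, a communicative window of a century or two keeps N in the single digits across a galaxy of hundreds of billions of stars, consistent with the null results from SETI surveys so far.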

Bleaker still, this implies that the Great Filter (our own destruction before we are technologically mature) may lie ahead of us, rather than in our past.


The paper is published in the journal Acta Astronautica.

