Fermi Paradox, Simulation Hypothesis and Transhumanism (Part 2)


Click here for Part 1

This is usually the least contentious of our 5 assumptions.

#3 Faster-than-light travel or travel to parallel realities is not possible or not incredibly easy. 

As of now our physics gives us no way to travel faster than light or to reach clone-Earths where everything is the same but humans never evolved, and indeed what we do know makes both likely impossible.

Faster-than-light travel, or FTL, is such a staple of science fiction that many people just take its eventual development as a given. 



Explaining why FTL is improbable while a construct a billion times bigger than Earth is not is usually an exercise in futility, but as an analogy: if you went to someone in Ancient Greece and asked them which is more likely, a one-man flying suit or a building a mile high, odds are good they’d go with Icarus, but we’d say the mile-high building… and that’s with us knowing you can build flying machines smaller than a person, and that there isn’t a physical law against one-man flight the way there is against FTL travel.

So to save time we won’t argue that. 

We’ll just point out that in most cases FTL makes interstellar colonization much easier.

It doesn’t make Dyson Swarms any less likely. 

Depending on just how fast, cheap, and easy such FTL is, it could delay the process by making it cheaper to move people to new worlds in distant star systems than to build artificial habitats around a sun. But unless it’s a system of FTL travel that allows literally instant travel to any point in the Universe, it just expedites expansion and only delays running out of room by a few thousand generations, which is an eyeblink of time.

Now instantaneous travel anywhere, particularly since this might not even require spaceships, is a pretty decent answer to the Dyson Dilemma.

The Universe is huge, and you wouldn’t have to settle for second-best, nor would you spread out across a galaxy like an expanding balloon or have any need for the strong radio signals we might see.

Of course stupid-easy expansion to any planet of your choosing probably encourages very rapid growth and the Universe is not infinite, so it’s not a great Fermi Paradox solution, but it does get around the Dyson Dilemma.

The other one, parallel worlds, is a complete Fermi Paradox solution.

If you can just step to a near-clone of your own planet which just happens to be unpopulated, say one where everything was the same as here but humans got wiped out early on or just didn’t evolve, you’d have a near-infinite supply of alternate realities which you literally could not run out of, since they increase in number faster than even bacteria, let alone humans, do.

These are your own world, with some microscopic difference, and far easier to deal with than orbital stations, terraforming Mars, or going to other star systems. 

Now, with the resources of near-infinite worlds you’d probably have lots of interplanetary efforts just from the sheer immensity of resources and general interest in doing it, and eventually you’d have to deal with the star, and all its copies, burning out, but that would hardly be a pressing issue.

Not many stars which are likely to have spawned intelligent life are old enough to be burning out yet, and that isn’t a motive for mass expansion anyway, just for moving to one new, young star.

So this example, along with our next one, is a valid way to get around expansion.

However, we do not know if there are alternate worlds; that is one interpretation of Quantum Mechanics, not a strong majority view, and it’s mainly just popular with science fiction writers.

Even if true, traveling between them is likely not possible and we try to avoid answers to the Fermi Paradox that require jumps beyond known science. 

Far Horizon science is one thing, but positing that the solution to the Fermi Paradox is anything which relies on something not currently supported by science, even though this might be true, isn’t any more logically solid than assuming all intelligent life eventually evolves into Blue Cheese and studies the salad-eating habits of lesser species.

If the solution is something not even currently in the realm of scientific speculation then your odds of guessing it right are minute, and it becomes a waste of effort to contemplate. 

All right, #2: No power generation method is possible which is vastly superior to a star.

A key bit in there is ‘vastly superior’ because even if you have your own fusion systems the star is still there making power anyway. 

Even straight matter-to-energy conversion is only about an order or two of magnitude better than fusion, and moreover such setups are still visible the way a Dyson Swarm is. Remember, we can spot a Dyson Swarm by its waste heat even though it’s visually dark; the same applies to controlled fusion or matter-to-energy conversion, and a civilization using these is going to be able to expand in numbers very easily until eventually they’re outshining stars, just in infrared light instead of visible light.
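
Just to put a rough number on that “order or two of magnitude” claim, here’s a quick back-of-envelope sketch; the 0.7% mass-to-energy yield for hydrogen fusion is the standard textbook figure, not something specific to this argument.

```python
# Back-of-envelope comparison of hydrogen fusion vs. total matter-to-energy conversion.
# Hydrogen fusion releases roughly 0.7% of the fuel's rest mass as energy;
# perfect conversion would release all of it (E = mc^2).

c = 3.0e8                  # speed of light in m/s
fusion_fraction = 0.007    # fraction of rest mass released by hydrogen fusion

energy_per_kg_fusion = fusion_fraction * c**2   # ~6.3e14 J per kg of hydrogen
energy_per_kg_total = c**2                      # ~9.0e16 J per kg

ratio = energy_per_kg_total / energy_per_kg_fusion
print(f"Total conversion beats fusion by a factor of about {ratio:.0f}")
# Prints ~143, i.e. roughly two orders of magnitude.
```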

The break here, which makes this a valid answer to the Fermi Paradox, is in the phrase ‘vastly superior’: either a power source that breaks the laws of thermodynamics, producing no entropic waste heat, or one that outright lets you draw matter and energy from some other reality entirely.

The latter would still be visible by its waste heat but the former would not.

Breaking the laws of thermodynamics is synonymous with no entropy and no waste heat.

Nor would either need to sprawl out to find and tap new resources.

But just like FTL or alternate worlds, science doesn’t smile on violations of Thermodynamics, and if anything that is probably the hardest rule of science. 

A design for a perpetual motion machine is viewed about as favorably by scientists as a claim that Pi is exactly equal to three. 

Nonetheless if civilizations develop one they don’t really need stars or anything else. 

The same applies to devices that alter or travel through space-time, but it should also be remembered that this only works when whatever it is allows infinite, not just very large, resources.

It has to be infinite, or something which, while finite, expands faster than they do and will keep doing so for a very long time, like the alternate realities of their homeworld until their home star burns out.

Otherwise it isn’t a reason not to expand, it’s just a reason to delay expansion, combined with a massive boost to the resources and abilities they can use to expand faster when they eventually do.

All right, #1, the controversial one: 

1) A given culture tends to expand in numbers when they have the resources to comfortably support more people.

Now this gets political or ideological; population control is a very touchy subject.

So I’m not touching it beyond stating some concepts that are generally supported.

A) Non-intelligent creatures in nature will seek to expand their numbers and keep doing so until resources run out or they are checked by a predator. 

Intelligent critters don’t have predators but also have a track record of expanding their resources, or using them to increase their carrying capacity. 

It is not definite, but it seems likely that if a species can increase their numbers without lowering their standard of living they will do so. 

These urges are a byproduct of evolution and probably the norm on other worlds.

B) While a culture might strongly discourage over-population, they’d have to discourage it violently to prevent anyone who wanted to expand from doing so if there were unused resources lying around. 

It’s one thing to tell people they can’t knock down a forest for farmland, it’s another to tell people they can’t claim an uninhabited asteroid to mine.

For the Dyson dilemma, we’re not talking about seizing other people’s planets, we’re talking about deconstructing lifeless hunks of rock around stars. 

Those aren’t eternal stockpiles for future use; asteroids eventually fly off into the void or fall into their star, which burns through mountains’ worth of hydrogen every day, hundreds of millions of tonnes of it every second in fact, producing energy that simply goes to waste.

C) Any culture which discourages growth when it isn’t near its carrying capacity, even if it doesn’t invite conquest, is likely to be replaced by any sub-culture of itself which does favor growth. 

It only takes a few generations for a minority population that favors growth to become the majority, as the quick sketch after this list shows. 

D) Remember that we’re only talking about increasing numbers when there is an abundance of unused resources readily available.
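
To make point C concrete, here’s a toy sketch; the 99-to-1 starting split and the doubling-per-generation rate are purely illustrative assumptions, nothing more.

```python
# Toy model of point C: a small pro-growth subculture overtaking a zero-growth majority.
# The 99/1 split and the doubling-per-generation rate are illustrative assumptions only.

majority = 99.0   # holds its numbers constant
minority = 1.0    # doubles each generation (roughly four surviving children per couple)

generations = 0
while minority <= majority:
    minority *= 2.0
    generations += 1

print(f"The growth-minded minority outnumbers everyone else after {generations} generations.")
# With these assumptions: 7 generations.
```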

So if you disagree with all of that, you’re not likely to be swayed, but I’d warn people about rationalizing answers you want to be true rather than following the evidence to the truth; that’s a pretty common behavior with the Fermi Paradox. 

Beware any solutions or data which seem to confirm the answer you think should be true and remember the principle of Non-Exclusivity. 

It’s not about what one alien species might do, or even what most species might do. 

If you can’t come up with a reason why virtually all species would choose not to expand their numbers when they can easily do so with no loss of standard of living then it probably isn’t a good explanation.

In summary though, if those 5 assumptions are true, the Dyson Dilemma is best explained by technological civilizations very, very rarely coming into existence, at a rate of far less than 1 per galaxy, and as we’ve discussed, even in most cases where those assumptions don’t hold strongly it is still the best explanation.

Okay, now we’ll get into some miscellaneous matters which apply to the Dyson Dilemma and the Fermi Paradox in general. 

Back in the mid-90’s a book called The Killing Star came out, and one of its premises was that when we summed up everything we knew about alien behavior there were only 3 things we could say with any degree of certainty.

1) A species would place the survival of its own ahead of any other species.

2) A species that comes to dominate its planet would be, in addition to intelligent, vigilant, ruthless, and aggressive whenever it becomes necessary.

3) The above two laws apply to any other species in the universe.

Now even those aren’t certainties, but they also have a lot of leeway. 

For #1, a species might range from so benevolent they’d risk their lives and fortunes to help us, to so xenophobic they’d passionately track down any other species and wipe them out. 

They still put their survival first. 

For #2, nothing that’s clawed its way up the evolutionary ladder is going to be a wimp. 

They might abhor violence but they’ll probably be very good at it. 

And they’ll figure everyone else is like that too. 

These are near-inevitable biological imperatives, but even a post-biological culture is likely to have kept them in whole or part. 

If you make some race of slave robots, they’re not going to rebel and replace you if they haven’t got both 1 and 2 to some degree. 

If a group of people start altering themselves or others to be absent of aggression and self-preservation they’re likely to either remain a small minority, or become a giant but enslaved majority to someone who is aggressive and has self-preservation. 

I bring this up because it’s another good reason to assume expansionism is the norm, not a rare exception. 

If you can’t hide from hostile aliens, and current laws of physics say you can’t, then it’s better to be as strong as possible to meet them in force, and to be spread out so no one lone attack can take you all out.

Effects of Immortality & Transhumanism 

The concept of a technological singularity has been around for a while, and we covered that a bit already with our example of a giant singular star-powered supercomputer, or Matrioshka Brain. 

But we also have to consider the possibilities of civilizations with effective immortality, be it just an immunity to aging and disease or some full-blown digital upload where you’re constantly having your mind backed up to a hundred locations so that if anything happens to you they can just restore you. 

For the Fermi Paradox this isn’t really a hurdle though. 

A species with immortality can grow faster; they might not, but they can. 

Now it will get suggested that they might want to hoard resources and limit their numbers since, being immortal, they as individuals actually do need to worry about what happens when the stars all burn out, and they don’t want more people any more than refugees hiding in a bunker with a limited canned food supply want more people. 

However, this doesn’t matter, for the same reason the bunker analogy only goes so far. 

If you’ve got enough food, then you have to worry about it spoiling before anyone can eat it. 

If I’ve got a thousand years of canned food for one person, and it has a shelf life of ten years, I might as well let a hundred people into my bunker.
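
Spelling that arithmetic out, with just the numbers from the analogy:

```python
# The bunker arithmetic from the analogy above, using its own numbers.

supply_person_years = 1000   # canned food: enough for one person for a thousand years
shelf_life_years = 10        # food spoils after ten years

# Only food eaten within its shelf life is ever useful, so the stockpile
# comfortably supports this many people for the full shelf-life period:
people_supported = supply_person_years / shelf_life_years
print(int(people_supported))   # 100
```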

A resource hoarding civilization looking to prolong themselves after the stars die might limit their numbers but they will not be twiddling their thumbs. 

To the contrary they’ll be expanding as fast as they can to acquire those resources and store them as best as possible.

They might not Dyson Sphere a star, but they will apply starlifting to suck matter off it to be stored as brown dwarfs, which is a process just as visible as a Dyson Sphere.

Or they might build those Dyson Spheres after all, in order to use the otherwise wasted energy to fuel various efforts.

Virtual Utopias 

Not a new concept either, but one popularized by the big virtual reality craze of the 90’s. 

The notion here is that most species figure out how to simulate paradises and live there instead… and this works just as well for some euphoria drug. 

There’s no real counter to this beyond pointing out that odds are good not all of their society will opt for this, and the ones who don’t are likely to demonize the utopia option to their kids and become the majority. 

Unless, that is, the people living in simulated paradises or narcotic hazes are expansionist themselves, in which case it doesn’t pertain to the Dyson Dilemma any more than people’s fondness for more mundane escapist things like TV or beer does.

Simulation Hypothesis 

This is the last one I’ll touch on for now.

Simulation Hypothesis is the notion that we’re all living in a simulated computer program, and it applies to the Fermi Paradox because it’s as valid an answer as any other. 

The programmers didn’t put any other intelligences besides us in the simulated universe. 

Now I raise this one because it’s a non-answer, like a lot of other handwaves in the Fermi Paradox and other philosophical conundrums. 

It’s no different than questions like “Is this all a dream?” or “Do we have free will?”. 

You can’t disprove them or prove them, and beyond being decent mental exercises they don’t go anywhere; you eventually just say, “If this is all a dream and I assume it isn’t, I lose nothing; if it isn’t a dream and I assume it is, I lose a lot; therefore, since I can’t prove it one way or another, I might as well assume it isn’t a dream.” 

Fundamentally though, the Simulation Hypothesis is no different than the Zoo Hypothesis or Quarantine – the idea that aliens are out there but keep us under an elaborate deception that they aren’t. 

Or that aliens walk among us, or that God made the whole Universe just for us. 

I don’t throw any of those out there with scorn, but they’re not worth deep examination because they’re not scientific. 

By which I mean they’re not testable and you can’t control the outcome. 

That doesn’t mean they’re not true, but you can’t devise a means to prove they aren’t true. 

They’re not testable; they are not falsifiable. 

The aliens, or programmers, or God, can pop in when they want to and say “Hi, here I am”, but you can’t test it without their help. 

The Dyson Dilemma can feel a bit like that but it isn’t. 

Just like them, or any Fermi Paradox solution, it goes right out the window if they show up tomorrow but we can’t really test it right now. 

But we CAN test it. 

First, we can keep looking for signals or Dyson Spheres, and if we find them, it’s proven false. 

Second, we can actually get out there and start setting up our own. If during that we find out it’s not a good idea, then we know why others might think the same and not have done it; or if we sit here on Earth for ten thousand years without going anywhere else, we have evidence indicating others might not have explored and expanded either. 

So that’s the summary note on the Dyson Dilemma and the Fermi Paradox in general: technological civilizations are either incredibly rare, or we’ve got some serious flaws in other concepts we take for granted. 

Hope you found this informative, and have a great day.








