The Gray Goo Scenario

My Thoughts On: April 15th, 2003

Another post from my old EZBoard forum. This post deals with the Gray Goo end-of-the-world theory where nanotechnic robots engulf everything. I mock it in this post for several reasons. I am disappointed that I lost focus mid-article, but then again, it didn't need a lot of focus to begin with.

I've been known to criticize plenty of mainstream scientific theories that most people marvel at and play up (like time travel and miscellaneous other things related to black holes), but I never took the proper pot-shots at the "Gray Goo" scenario. Since it's a timeless subject, I might as well go ahead and take a couple of swings while I'm thinking about it.

The "Gray Goo" Scenario is the infamous "nanotechnology destroys the world" theory. It goes something to the extent of "molecular nanotechnology is ordered to reproduce out of all the materials available to it, and, through carelessness, creates a cascade reaction- converting biomass into nanomass, automonously 'absorbing' the whole world".

If you still don't know what I'm talking about, watch the movie "The Blob". While not quite the same thing, it's still a classic.

The "Gray Goo" Scenario works on several assumptions about how nanotechnological "cells" would self-replicate, them being-

1. That the nanotechnic machines can, by design, run on any and all organic substances.

2. That these micro-machines are indifferent to environment. (the reasoning being - "They're robots! They don't mind!")

3. That the method they use to self-procreate is indifferent to environment, and independent of other cells.

4. That the circumstances involved would provide the ideal climate for replication at optimized production rates, and that those rates and climates would hold throughout the entire scenario. (See the back-of-the-envelope sketch after this list for why this assumption carries the whole theory.)

5. That we would be incapable of stopping it. Yes, take a look at most papers; this is almost always a literal assumption: "We might be able to stop it, but assuming we can't... *insert dramatic doomsday scenario here*"

6. That a machine running on biomass would not itself be susceptible to predation by animals.

7. That the machines would form some kind of internal intelligence and evolve beyond our levels of reason, creating a form of uber-machine-child, which, while innocent, is still evil and must be destroyed before it goes back to the past to kill the time-travelling hired guns, who are trying to stop the disaster before it occurs. (Okay, maybe this assumption has nothing to do with the subject, but it's what movie-makers and millennialists probably imagine.)
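
Since assumption 4 is the one doing all the heavy lifting, here's a quick back-of-the-envelope sketch of what perfectly sustained doubling buys you. Every number in it is invented for illustration (a made-up nanobot mass, a made-up doubling time, a rough order-of-magnitude guess at Earth's biomass); the point is the shape of the math, not the specifics.

```python
import math

# All numbers here are invented for illustration, not measured values.
nanobot_mass_kg = 1e-18   # assumed mass of a single replicator
biomass_kg      = 5e14    # rough order-of-magnitude guess at Earth's biomass carbon
doubling_hours  = 1.0     # assumed ideal doubling time that never degrades

# How many doublings does it take to grow from one bot to all that biomass?
doublings = math.log2(biomass_kg / nanobot_mass_kg)
print(f"{doublings:.0f} doublings = {doublings * doubling_hours / 24:.1f} days")
# ~109 doublings, about four and a half days; but ONLY if every single
# doubling happens on schedule, with perfect material supply, zero wear,
# and zero losses, from the first bot to the last.
```

That's where the dramatic headlines come from. Stretch the doubling time from an hour to a day and you're already at three and a half months; let the rate degrade as conditions worsen, as it would, and the timeline quietly falls apart. Which is exactly what the rest of the assumptions are there to keep you from noticing.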

Well, I started this post in the mood to totally bash it. But, as is evident from my clear inability to take the subject seriously, I just don't want to. For some reason, the idea of this post has lost its luster.

Oh well. Anyways, here is the short of my argument...

"When working with doomsday scenarios, it's positive for the scenario's development to speculate the most dangerous and severe situation to occur. But, when giving any form of credence to the scenario, we must first look at the reality of the event from happening. The 'Gray Goo' scenario does not bear real credence because it makes too many assumptions about how the event 'could' occur.

For example, the likelihood of a non-organic, self-replicating machine even existing in a natural earth environment is slim. While cell-sized machines may someday run around in the human bloodstream, the complexity of these machines pales against the complexity of a basic cell, which divides and replicates through the intricate machinery of mitosis. Even if that feat is reproduced artificially, it may well be impossible to reproduce it at anything like the scale or efficiency of the average cell. A larger, macro-scale nano-creation would not pose such a serious threat: the larger and more complex the machines, the harder they are to replicate, and the less likely they are to survive a natural environment without mechanical problems.

But, granting that they work at this rate, we stumble into a new problem: resources. While carbon is arguably an abundant resource, the question remains: is carbon sufficient to drive a small mound of nano-matter up to a continental or global mass? How would such an 'organism' travel even the smallest distances without wearing down or breaking apart? A carbon-based machine with many moving parts would seize up in short order unless suspended in liquid, and where does a machine built for nothing but replication get or carry large quantities of water?

It seems the reality of this bumbling machine would reduce it to a level simpler than plankton, except that the chemistry of plankton leaves it capable of mutating to adapt to its environment, while the nano-mass would simply self-replicate into extinction (there's a toy simulation of this after the argument). If each cell is set only to consume and reproduce, then other vital functions like defense and evasion would be compromised, ignored, or given the wrong priority, leaving the nano-cell open to predation; since it's carbon-based, there is no reason to believe it cannot be consumed. Independent machines like this cannot evolve, in any natural sense, to protect themselves from the problem.

Even then, it lacks the ability to organize itself into organs, and since each cell is given its own independent function, there is no reason to believe it would distinguish itself from others, leaving it, most likely, to eat itself and its own offspring. Such a thing would mean doom for the goo, not for us.

But, taking all that for granted, is the scenario possible? Maybe. More importantly, though, it is not realistic, as it would likely fail unless someone gave explicit attention to solving its specific complications. Someone incredibly smart would have to spend years of research (if not generations) on adaptability, competition for resources, delivery of the nano-machines, and the lack of a central intelligence. If someone wants to destroy the world that badly, well, it could happen.

But I shouldn't have to point out that, by every reasonable conjecture involved, it definitely won't happen."
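
And since I went on about resources and predation, here's a toy simulation of the 'self-replicates into extinction' argument. It's a cartoon with made-up constants, not a model of any real nanotechnology: the goo doubles happily while the carbon lasts, starves as the supply thins out, and gets nibbled on the whole way.

```python
# A cartoon of resource-limited replication. Every constant is invented
# purely for illustration; only the shape of the curve matters.
substrate = 1_000_000.0   # available carbon, arbitrary units
goo       = 1.0           # starting population of replicators
growth    = 0.7           # ideal replication rate per time step
predation = 0.05          # fraction of goo eaten per step (it's carbon-based, after all)
cost      = 1.0           # substrate consumed per new replicator

for step in range(201):
    # Replication slows as the remaining carbon thins out...
    births = growth * goo * substrate / (substrate + goo)
    births = min(births, substrate / cost)   # ...and stops when it's gone.
    substrate -= births * cost
    goo += births - predation * goo          # predators take their cut either way
    if step % 25 == 0:
        print(f"step {step:3d}: goo = {goo:14.1f}  substrate = {substrate:14.1f}")
```

The goo booms, eats through its substrate, peaks, and then the only direction left is down. No mutation, no adaptation, no plan B; unlike plankton, it doesn't get to evolve its way out of the hole it ate itself into.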

There you go. Now you know. Next week, I tell you the truth about aliens.