For those who aren’t familiar with the concept of the singularity, it’s worth reading up on. While Vinge’s paper was written in 1993, I had never read it, yet I had formulated many of the same ideas independently. It would seem that this is a logical conclusion to reach from a variety of data sources. The singularity is getting a lot of attention in sci-fi circles and seems to be collecting more mindshare.
Mankind has a lot of avenues for moving beyond its own human-ness. While Vernor tends to focus on hyper-intelligence, whether human or machine, as the catalyst of the Singularity, there are far more points of departure: nanotechnology, bioengineering, functional immortality, you name it. I have called these things threshold technologies, and they all functionally lead to one another. Just as superintelligence logically leads to developing ways to live forever, less intelligent people who live forever will most likely, eventually, find a way to make themselves super smart. We have a very concrete idea of where humanity’s relationship with technology will ideally take us, but our ability to imagine what life will be like sharply drops off once we reach that point. It is easy to follow logical paths to a certain evolutionary point, but beyond it the enormity of possibility that opens up is staggering. Much of the definition of humanity lies in its limitations. Once those limitations have been changed or eradicated, humanity may become beyond definition.
Vernor touches on some of this in his paper, but remains somewhat rooted in the present at the same time. Even in his definition of Strong Superhumanity, he doesn’t head too far out. He asks interesting questions about immortality, like this:
This “problem” about immortality comes up in much more direct ways. The notion of ego and self-awareness has been the bedrock of the hardheaded rationalism of the last few centuries. Yet now the notion of self-awareness is under attack from the Artificial Intelligence people (“self-awareness and other delusions”). Intelligence Amplification undercuts our concept of ego from another direction. The post-Singularity world will involve extremely high-bandwidth networking. A central feature of strongly superhuman entities will likely be their ability to communicate at variable bandwidths, including ones far higher than speech or written messages. What happens when pieces of ego can be copied and merged, when the size of a self-awareness can grow or shrink to fit the nature of the problems under consideration? These are essential features of strong superhumanity and the Singularity. Thinking about them, one begins to feel how essentially strange and different the Post-Human era will be — _no matter how cleverly and benignly it is brought to be_.
These are good points, but still rooted in individualism. Ego-dissolution is one interesting step, perhaps providing a Borg-like hive mind with little concept of self or self-awareness. But that is not beyond the boundaries of conceptualization, or even really on the verge of singularity; we can still, if at great effort, place our imaginations in that reality. More challenging, and arguably at the point of true singularity, is the idea that we will become non-local existents. If we’re rolling the dice with speculation, ultimate survivability demands complete transcendence of environmental pressures. Removing ourselves from linear time and localized space is the logical conclusion, and a conclusion beyond which nothing can be extrapolated.
Another interesting quote:
A mind that stays at the same capacity cannot live forever; after a few thousand years it would look more like a repeating tape loop than a person.
This is not necessarily true; it is limited by our current conceptions of ourselves and our minds. This is one of the limits of dealing with the singularity: extrapolating our current states into uncertain futures is fraught with danger. Our very limited perceptions quickly get hung up trying to handle large concepts for which we have very little base data to build upon. It would be like frogs speculating about what it would be like to have human intelligence. The claim simply cannot be made; its premises are inherently flawed, and cannot be otherwise, given the nature of the problem and the reality of the examiners.
The big question is asked: can the Singularity be avoided? I don’t think it’s a foregone conclusion that the singularity will ever occur. It is very conceivable that the universe has built-in regulators that prevent certain events from happening. Just as we have the laws of physics, there may be greater laws of a more subtle nature that govern the progression of sentience. Those laws may prevent the singularity either by retarding growth or by simply killing off the offending species, e.g. technological growth outpacing intelligence and leading to extinction events. It would be naive to assume that there is an infinite space for humanity, or intelligence, to expand into. It is far more likely that our expansion will bump into, and eventually conform to, a set of laws that we have not yet encountered and cannot yet conceptualize. We may find ourselves confronting those laws at the very horizon of this singularity.
Some of these laws we may already be facing. In Artificial Intelligence, we’re running into a severe shortage of hardware fast enough to mimic the human brain. Or so we think. Another possibility is that we are dealing with an impossibly complex problem: it may simply be impossible to generate intelligence in non-organic environments. We know so little about our own intelligence that we have to discover a great deal by trial and error. We may not find out that it is in fact impossible until very late in the game.
This all becomes very religious at a certain point, and the singularity smacks heavily of Armageddon. Non-local intelligence is very close to God status, and this is because of our current limitations: we imagine that any being so far beyond our own state must be capable of anything. That is not necessarily true, just beyond our ability to deal with. In the same vein, we imagine that any existence so radically different from this one as to be beyond current cognition is tantamount to death, which is not necessarily far off the mark. It is certainly a scary line drawn in the sand.
I do believe that the possibility of this singularity will become more and more of a concern in mainstream consciousness over the next ten years. It is an issue that will demand geometrically more attention as its constituent elements garner greater notice. Today it’s in math, science, and sci-fi. Tomorrow it’s water-cooler talk.