HORATIO: O day and night, but this is wondrous strange.
The Singularity – the prophesied moment when artificial intelligence leaps ahead of human intelligence, rendering man both obsolete and immortal – has been jokingly called “the rapture of the geeks.” But to Ray Kurzweil, the most famous of the Singularitarians, it’s no joke. In a profile in the current issue of Rolling Stone (not available online), Kurzweil describes how, in the wake of the Singularity, it will become possible not only to preserve living people for eternity (by uploading their minds into computers) but to resurrect the dead.
Kurzweil looks forward in particular to his reunion with his beloved father, Fredric, who died in 1970. “Kurzweil’s most ambitious plan for after the Singularity,” writes Rolling Stone’s David Kushner, “is also his most personal”:
Using technology, he plans to bring his dead father back to life. Kurzweil reveals this to me near the end of our conversation … In a soft voice, he explains how the resurrection would work. “We can find some of his DNA around his grave site – that’s a lot of information right there,” he says. “The AI will send down some nanobots and get some bone or teeth and extract some DNA and put it all together. Then they’ll get some information from my brain and anyone else who still remembers him.”
When I ask how exactly they’ll extract the knowledge from his brain, Kurzweil bristles, as if the answer should be obvious: “Just send nanobots into my brain and reconstruct my recollections and memories.” The machines will capture everything: the piggyback ride to the grocery store, the bedtime reading of Tom Swift, the moment he and his father rejoiced when the letter of acceptance from MIT arrived. To provide the nanobots with even more information, Kurzweil is safeguarding the boxes of his dad’s mementos, so the artificial intelligence has as much data as possible from which to reconstruct him. Father 2.0 could take many forms, he says, from a virtual-reality avatar to a fully functioning robot … “If you can bring back life that was valuable in the past, it should be valuable in the future.”
There’s a real poignancy to Kurzweil’s dream of bringing his dad back to life by weaving together strands of DNA and strands of memory. I could imagine a novel – by Ray Bradbury, maybe – constructed around his otherworldly yearning. Death makes strange even the most rational of minds.
@Barry,
“Evolution is so clearly such a pitiful inventor and refiner of designs [….]”
I understand each of those words in isolation. I understand the abstract syntax of the sentence. But… you’re making stuff up and talking nonsense. Those terms “pitiful”, “inventor”, “refiner”, and “designs” don’t really have much meaning in this context, if you ask me.
-t
Note also that it’s entirely possible that whatever improvements are made just mean you run into some other limiting factor all the faster.
Seth – I have a computer science education, I am fully aware of the curves involved in constant, linear, polynomial, exponential, factorial etc. functions. I’ll thank you not to assume my ignorance.
Tom – the words have meaning, and since, by your own admission, you understand the syntax, you should understand the logical structure and thereby semantics of the statements I’m making. That means that if you disagree with what I am saying, you must be able to either point to a factual inaccuracy or a logical mistake. Since you have done neither, what am I to assume?
Certainly, by talking about personifications such as “mother nature”, “blind watchmaker” etc., and verbs such as “invent” and “design”, I am speaking in metaphor. However, the processes of rational design and of evolution are both searches that maximize utility functions over solution spaces. I’m not going to apologize for using such metaphors as shorthand for describing the physical, mechanical process of evolution.
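To make the metaphor concrete, here’s a toy Python sketch (my own naming and numbers, nothing more than an illustration): the same utility function, searched two ways – a deterministic “designer” that always steps in the direction that helps, and an “evolutionary” loop of random mutation plus selection.

```python
import random

def utility(x):
    """A toy utility function over a one-dimensional solution space."""
    return -(x - 3.0) ** 2

def rational_design(x=0.0, step=0.1, iters=200):
    """'Designer' search: deterministically step in whichever direction helps."""
    for _ in range(iters):
        x = max((x - step, x, x + step), key=utility)
    return x

def evolve(x=0.0, iters=2000, seed=0):
    """'Evolutionary' search: random mutation plus selection of the fitter variant."""
    rng = random.Random(seed)
    for _ in range(iters):
        mutant = x + rng.gauss(0, 0.1)
        if utility(mutant) >= utility(x):
            x = mutant  # selection keeps the improvement
    return x
```

Both land near the optimum at x = 3; they differ only in how they move through the solution space, which is the sense in which “search” covers both.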
Seth – now that I read your comment again, I think you’ve misread me. When describing improvements in the efficiency of the neuron, I’m not trying to describe a mechanism for super-exponential growth. Improving the efficiency of a neuron, assuming that some kind of phase change didn’t occur, would not result in increased intelligence and thereby growth – it would just result in increased speed of thought as measured objectively.
Just focusing on neurons and their peculiar inefficiencies is to focus on too low a level. It’s the algorithm that’s important; applying the algorithm to itself is the key behind the super-exponential improvement.
Neurons are just a physical implementation of the algorithm. Stressing the “mechanicality” of neurons is just a way of preempting the anti-AI arguments of those ghost in the machine types.
Things that are metaphors by syntax and word choice aren’t always meaningful. I think it’s the case here that you’ve got some non-meaningful metaphors going.
For example, with respect to some “things” (let’s call them “artifacts”) we might try to talk about the act of “inventing” those artifacts and of the “design” that they represent.
Well, these metaphors are only sensible if there are some good analogies to be found there.
When we talk about inventing and designs in a non-metaphorical way there are some essential elements there. “Inventing” is an act of problem solving that yields a “design”. A “design” is an abstract conception of the essential properties of certain artifacts. The artifacts themselves then realize the design, along with having incidental, inessential qualities.
Looking at, for example, a “species”, I see nothing usefully analogous to a “design” and thus nothing to suggest it’s useful to think of any “inventing” as having taken place.
One likely candidate for what constitutes a “design” could be, I guess, the genome of that species. This doesn’t really work out very well, though. Genes play a role in the form and function of biological artifacts, but they don’t determine the essential characteristics. Rather, life forms are fully determined by a whole complex of feedback systems: within cells, between cells in multi-cellular forms, and within the environment. The information that defines the essential characteristics of a life form is not found solely in the genetic sequence: it is scattered throughout the cells, between the cells, and in the environment. There’s no “design” to isolate. No “design” or anything like it really exists.
Have you ever played with video feedback, especially using an analog video camera? You know, hook the camera up to a TV, point it at the screen, and turn it upside down? The result you get is a chaotic system that manages to float quasi-stable, recognizable patterns. You can even screw around with the camera or stick fingers in front of the lens to “influence” the visual patterns on the screen – but in most set-ups the equipment is noisy enough and/or performing computations rapidly enough that you can’t really control the image much – not with any intentionality behind what you do. The “artifact” here – the image on the screen – has no “design” and was never “invented”.
Biological life, I submit to you, is much more like the quasi-stable images on that feedback screen only, instead of a single, fairly simple feedback loop, life forms are the quasi-stable artifacts that emerge from many, many, many more feedback cycles than you can conceive of easily, all interacting to produce a chaotic system with some quasi-stable attractors (like the flower in your yard or your neighbor, Fred).
Rationality and applied intelligence have a place there, for the goals of making life better from a human perspective. For example, we learned agriculture: a pretty hands-on, brute-force way to shape some of those feedback cycles en masse to achieve a human aim.
It doesn’t follow from that that you can draw sweeping conclusions like “blood vessels over photoreceptors is dumb” or that you can expect to do much better. It doesn’t follow from the possibility of human *influence* that it’s suddenly a wise idea to tinker with, say, the matrix which is the planet’s genetic heritage.
Indeed, the very nature of life’s systems – their feedback-originating chaos – suggests quite the opposite. It suggests that the rational thing is not to be too eager to perturb systems that appear to be essential to the quasi-stability known as, for example, humans.
Also: Interesting phenomena which are “too complicated to simulate” aren’t, it turns out, the exception – they’re the rule (which has some exceptions).
-t
Tom & Barry:
When you start applying words like “algorithm” to phenomena observed in the universe, or refer to evolution as an “inventor” or “refiner”, you are becoming closet creationists. This type of language implies a creator, or at least something like creative design, which undermines the scientific method. Most scientists view the universe as self-organizing from the sub-atomic/big-bang singularity upward, not as a machine designed by God from the top down.
You don’t need a cosmic programmer to create an algorithm to keep the planets orbiting the sun, even though the mind might create one as a mathematical model. This idea of blind self-organization is much better explained with something like von Neumann’s cellular automata. Stephen Wolfram’s book A New Kind of Science discusses this kind of approach, which might be a new lead in the “theory of everything”, since obviously string theory is dead.
Linuxguru – evolution is most certainly an algorithm – nondeterministic, but an algorithm nonetheless. Simplified variants are usefully applied in software.
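Those “simplified variants” fit in a few lines. Here’s a minimal sketch (my own toy code, on the textbook OneMax problem – maximize the count of 1 bits) of the mutation-plus-selection loop software people mean by an evolutionary algorithm; it’s nondeterministic unless you pin the random seed, as I do here:

```python
import random

def evolve_onemax(n_bits=20, pop_size=30, generations=100, seed=1):
    """A stripped-down evolutionary algorithm on the OneMax problem:
    fitness of a bit list is its number of 1s."""
    rng = random.Random(seed)
    fitness = sum  # sum of a 0/1 list = count of 1 bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # selection: keep the fitter half
        children = [
            [b ^ 1 if rng.random() < 1.0 / n_bits else b for b in p]  # point mutation
            for p in parents
        ]
        pop = parents + children  # parents survive, so the best never gets worse
    return max(pop, key=fitness)
```

No designer anywhere in the loop – just variation and differential survival – yet it reliably climbs toward the all-ones string.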
The execution of an algorithm doesn’t need a creator, external programmer or other deus ex machina, and I never suggested it did. I strongly reject your assertion that it implies a creator; in fact, I believe the opposite position is untenable: if you don’t accept that physics can implement algorithms spontaneously, without design, then you cannot believe that human rational thought is possible without some kind of external intervention.
That is to say, if you deny that evolution is an algorithm, you are the closet creationist.
Tom – “It doesn’t follow from the possibility of human *influence* that it’s suddenly a wise idea to tinker with, say, the matrix which is the planet’s genetic heritage.”
I understand your cautious Luddism, but I also understand that progress generally confirms the expectations of the pessimistic in the short term, but the optimistic in the long term. I’m not pessimistic about human ingenuity in the long run, though there will always be mistakes along the way. The “tinkering” may be slowed, but it will never stop so long as the curious still live.
However, the focus on genetic meddling and other political and religious hot-button topics – particularly the Green religion – is a bit of a red herring. I would expect that the tinkering, per se, is not strictly necessary. As to how close any possible Singularity could be, I suspect that at this point it’s software that’s the problem, not hardware. Figuring out the algorithm – and I persist in using the term – as implemented by the human mind would help, and wouldn’t require meddling, only (very) close observation.
Oh, and I used “design” mostly as meaning the accumulated results (noun) of the process of evolution (verb). I don’t disagree with the meaning of your objection when the word is interpreted in a more anthropomorphic sense. I’m sorry if my metaphor clouded the waters too much.
Finally, about the “interesting phenomena” – if you take the position that “interesting” relates strictly to the phenomena with most information (Shannon entropy), then be my guest to study random noise! Usually, humans take more interest in arrangements that occur more than once – i.e. patterns – which necessarily contain less information than their physical instantiations. Simulations in such cases can be abstracted (i.e. dropping the unnecessary information) and are often amenable to symbolic interpretation without losing the essence. The very facts that our brain is quite similar throughout in substance, and that brain damage can be compensated for by nearby areas, are strong suggestions that the essence is quite a lot smaller than e.g. an atomic-level view would take.
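The noise-versus-pattern point can be checked in a few lines of Python (my own toy example, nothing more): empirical Shannon entropy per symbol is maximal for noise and low for a repeating pattern, which is exactly why the pattern compresses into a small abstraction.

```python
import math
import random
from collections import Counter

def shannon_entropy(s):
    """Empirical Shannon entropy of a string, in bits per symbol."""
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

pattern = "ab" * 128  # a repeating pattern: exactly 1 bit per symbol
rng = random.Random(0)
noise = "".join(rng.choice("abcdefghijklmnop") for _ in range(256))  # near 4 bits
```

The pattern carries far less information per symbol than the noise, yet it’s the pattern a human would call interesting – and the one a simulation can abstract without loss.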
@Barry: whatever.
@Barry Kelly – I wasn’t claiming you didn’t know the difference between constant and exponential – rather, my argument is that silicon intelligence being better at design improvements amounts to AT MOST a constant factor, not an exponential one – if that (i.e. even if there are improvements, those improvements might not have maximum effect because of bottlenecks elsewhere).
When you say – “applying the algorithm to itself is the key behind the super-exponential improvement.” – yes, but why should it be assumed that this can be done? Implicitly, it assumes the conclusion. Perhaps it can’t, or is difficult enough so that it doesn’t become exponential. Certainly that seems very much the case in all prior experience.
This is what I call a rhetorical “burden of proof” problem. Proponents should not be able to hand-wave about “applying the algorithm to itself” and require opponents to give a detailed refutation disproving it. The burden of proof is on them to do more than sketch stories.
Barry:
By definition an algorithm IS deterministic; for a given set of inputs there will always be a single output every time the algorithm runs. In contrast, a heuristic takes a single set of inputs and may result in differing outputs based on assumptions about other variables.
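To put the distinction in code (a toy Python sketch of my own; whether a randomized procedure still deserves the name “algorithm” is exactly what’s in dispute here): a deterministic procedure maps the same input to the same output every run, while a randomized one depends on draws beyond its declared inputs.

```python
import random

def algorithmic(xs):
    """Deterministic: identical inputs always yield the identical output."""
    return sorted(xs)

def stochastic(xs, rng=None):
    """Randomized: the output depends on random draws beyond the declared
    inputs, so repeated runs on the same input may differ (unless seeded)."""
    rng = rng if rng is not None else random.Random()
    ys = list(xs)
    rng.shuffle(ys)
    return ys
```

`algorithmic([3, 1, 2])` is `[1, 2, 3]` on every run; `stochastic([3, 1, 2])` returns some permutation that can change from call to call.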
Certainly physical systems like DNA replication, protein synthesis and planetary motion involve cycles that can be mathematically described. However, using terms like “algorithm” that are usually associated with intelligently created artificial systems implies a creator or programmer who sets up the laws and puts the system into motion. I think this kind of language (although it sells books) undermines the pure scientific method.