Anders Sandberg and Nick Bostrom, of Oxford’s Future of Humanity Institute, have published an in-depth roadmap for “whole brain emulation” – in other words, the replication of a fully functional human brain inside a computer. “The basic idea” for whole brain emulation (WBE), they write, “is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain.” It’s virtualization, applied to our noggins.
Though “currently only a theoretical technology,” WBE is, the authors say, “the logical endpoint of computational neuroscience’s attempts to accurately model neurons and brain systems” and “may represent a radical new form of human enhancement.” In something of an understatement, they write that “the economic impact of copyable brains could be immense, and could have profound societal consequences.”
The document is a fascinating one, not only in its comprehensive description of “how a brain emulator would work if it could be built and [the] technologies needed to implement it,” but also in its expression of an old-school materialist conception of the human mind (a conception that is in tension with some of neuroscience’s more interesting recent discoveries). The authors’ belief that it is, at least theoretically, possible to build a brain emulator “that is detailed and correct enough to produce the phenomenological effects of a mind” leads them, inevitably, to the issue of free will.
They deal with the problem of free will, or, as they term it, the possibility of a random or “physically indeterministic element” in the working of the human brain, by declaring it a non-problem. They suggest that it can be dealt with rather easily by “including sufficient noise in the simulation … Randomness is therefore highly unlikely to pose a major obstacle to WBE.” And anyway: “Hidden variables or indeterministic free will appear to have the same status as quantum consciousness: while not in any obvious way directly ruled out by current observations, there is no evidence that they occur or are necessary to explain observed phenomena.”
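For the curious, the "noise" fix is easy to picture. Here is a minimal sketch (my own illustration, not code from the roadmap) using a leaky integrate-and-fire neuron, a standard textbook model, with an additive Gaussian term standing in for whatever indeterminism a real brain might harbor:

```python
import random

def simulate_lif_neuron(steps=1000, dt=1.0, tau=20.0, v_rest=-65.0,
                        v_thresh=-50.0, v_reset=-65.0, drive=0.9,
                        noise_std=0.5, seed=42):
    """Leaky integrate-and-fire neuron with additive Gaussian noise.

    All parameter values are illustrative, not fitted to biology.
    Returns the number of spikes fired over the run.
    """
    rng = random.Random(seed)  # seeded, so the "noise" is reproducible
    v = v_rest
    spikes = 0
    for _ in range(steps):
        noise = rng.gauss(0.0, noise_std)
        # Euler step of dv/dt = (v_rest - v) / tau + drive + noise
        v += dt * ((v_rest - v) / tau + drive + noise)
        if v >= v_thresh:
            spikes += 1
            v = v_reset  # fire and reset, as the model prescribes
    return spikes
```

Rerun it with a different `seed` and the spike count shifts slightly; that, in essence, is all the roadmap means by "including sufficient noise in the simulation."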
The only way you can emulate a person with a computer is by first defining the person to be a machine. The Future of Humanity Institute would seem to be misnamed.
penrose
Regarding: “The only way you can emulate a person with a computer is by first defining the person to be a machine.”
Consider:
“The question is this — Is man an ape or an angel? My Lord, I am on the side of the angels. I repudiate with indignation and abhorrence the contrary view, which is I believe, foreign to the conscience of humanity – Benjamin Disraeli”
In case it isn’t clear, I’m quoting that ironically.
Or, he might have said:
“The only way you can evolve a person from an ape is by first defining the person to be an ape.” (not an angel)
Can’t they focus on making a cellphone that doesn’t ring when I’m tired and sleeping, and it’s that annoying bully calling again? I mean: not all human beings respect that rule (especially the latest models, issued during the last two years) — but this seems more pressing than giving Clippy, the not-so-intelligent assistant from Microsoft Office, some randomness-based free will.
In all seriousness, trying to make sense of natural language is a tricky problem in itself, and might be a better path to intelligence than mimicking the brain as an organ.
Once again, people miss a critical point: the spirit.
We can emulate a few brain cells with a computer model. But when you put billions of brain cells together, the resulting brain has a spirit / consciousness / soul / whatever you name it. At this point there is no indication the computer model still fits. Think of it as Newton’s laws, which work fine at low speeds but break down as we approach the speed of light.
Where does this spirit in the brain come from? Is it a side effect of electricity? Of some chemical reaction? Something else? We – Have – No – Clue.
So it’s completely premature to talk about free will.
It is quite premature, indeed, to talk about adding random noise to a model to simulate free will.
These guys need to be schooled by Alva Noë.
I’d love to hear what neuroscience discoveries point to anything other than a materialist conception of the mind. I’m pretty interested in this field and I’ve never read anything credible to the contrary.
“…(a conception that is in tension with some of neuroscience’s more interesting recent discoveries)”
I second Michael’s request. You’re teasing us, Nick. Where are the links?!?
– Apemachine Frank
so, will we travel to Permutation City?
“At this point there is no indication the computer model still fits.”
Actually, there’s every indication that the computer model fits and is explanatory. But looking at your desktop PC and saying it doesn’t look like a brain is a bit like knocking two sticks together and saying it doesn’t look like a power generator.
My perception of the past few decades is that there is a long story behind work like this, coming out of an academic philosophy department.
Other historically traditional departments tended to be more “self-funding” and even revenue generating, compared to the philosophy departments. The industry of academic philosophy underwent an enormous contraction in the 1980s and one subset of the attempts to rescue it focused on “multi-disciplinary” stuff — e.g., maybe the philosophy dept. could piggy-back on the grant money of the C.S. department by working on all of the (assumed into existence) philosophical challenges of new high tech.
Can’t rock the boat too much if you’re a philosopher going in that direction. For example, better not actually be planning to spend a lot of work, say, critiquing Google from an ethics point of view or an analysis-of-power point of view. No, instead, the formula became: learn some jargon and how to apply it well enough that it takes a while and some effort to figure out you are b.s.’ing. Using that jargon, come up with a fanciful prediction of some big thing that’s just around the corner in science (like “WBE”). It’s complete b.s., to be sure — but it just has to sound plausible. Then relate that to some classic sophistic topics like questions of free will and go into the seminar business.
They’re talking, in other words, about a big nothing (“WBE” is not even on the map of possibilities by any sane estimation) but making it sound plausible enough that they then get to talk about Descartes — back to comfortable territory.
A hell of a waste of words, at best.
In the “actually interesting” category I see that J. H. Conway et al. have issued a new paper this year strengthening their “Free Will Theorem.” That’s good stuff, in applied philosophy.
-t
links: here, then here and here and here and follow the end notes.
Nick,
Another place your BS filter can get some tickling is looking into modern genomics — both the “cheap sequencing” push and the GMO stuff. See a recent /. article (yes, really, just for links) about how we don’t know more than the first tiniest thing about “gene expression” and hold that up against those other fields (and, the federal regulators recent decision to streamline open-air growing of GMO drug-producing crops). It really, really matters when people BS in ways for which this WBE article is paradigm. It’s hard to wake people up. (As you know, I’m sure.)
-t
Hmm?
“While some may disagree with LeDoux’s conclusion that “the brain makes the self” through its synapses, he makes an important contribution to the literature on the relationship between these two entities.”
Seems quite materialist to me.
“The only way you can emulate a person with a computer is by first defining the person to be a machine.”
Meh. That’s a pretty meaningless piece of sophistry. We already know that everything running on the laws of physics is effectively a machine, and pieces of meat are no different.
Or to make my point more clear: are you arguing that brains violate the laws of physics?
If so, how do they do that? I submit that it can only be through our lack of full knowledge: that there are a few missing laws. Once discovered, they can be included in the emulation.
If not, then there’s no argument. The existing laws of physics can be calculated, quite simply because our calculations are run on machines that themselves run on the laws of physics; any recourse that seems incalculable can be devolved to the hardware.
I give the authors kudos for at least trying to find an algorithm for digitizing the brain; the problem is that there are so many things we don’t know. For example, the authors assume that neurons are the only cells involved in intelligence, memory, and so on. There is evidence that other cell types, including glial cells, have a role as well. They would need to be scanned and simulated too. However, the fundamental problem with any “computer simulation” of the brain is that consciousness is an artifact of massively parallel networks of billions of self-organizing units (neurons) in physical three-dimensional space. It has not been proved that consciousness could exist outside of a complicated network of self-organizing units, e.g. inside a serial program running on a single processor. I wish they had addressed this issue too.
Barry, you wrote:
It doesn’t actually work that way.
First, the WBE paper assumes and hand-waves its way to a conclusion that models of brain structure and brain environment, if *simplified so as to be computationally tractable*, are sufficient for (various interesting “levels” of) WBE. They assume their conclusion (sort of, see below). This is only a “practicality” objection, though, and doesn’t really get to your question about the “laws of physics”.
Second, see that Conway paper (“The Strong Free Will Theorem”, widely available on-line). No amount of computation in advance can predict the correlated squared-spin measurements of the twinned particles in the thought experiment. If scientists have free will, so do electrons, and, more importantly, our very best physical theories seem to confirm that we cannot prove that either scientists or electrons *lack* free will. In other words, there may very well not exist, even in principle, a theory capable of emulating a brain: the universe may very well grant us but one way to determine the behavior of a brain, and that’s to watch it and see what it does.
Formally, a “roadmap” it is, describing a kind of master plan of experiments to test the assumptions in the hand-waves. For example, is there really a “scale cut-off” in brain structure above which the details are both computationally tractable and a complete (enough) theory of how the brain produces the mind? In some sense, all they are saying is that “Well, we’ll be able to run the experiment soon.”
That’s multi-disciplinary academic philosophy in its role as a surveyor of other fields and an illumination of the internal discourse of those fields by “elevating” it to a philosopher’s “big picture”.
However, if they are going to perform that role well then when they identify a hypothesis that will become testable, that hypothesis must be coherent, convincing, and consistent with what is already known.
Their work fails that requirement. When they talk about emulating, to the point of recognizability, the personality of a pet, or of “intact memories and skills,” they are making up rather implausible stuff out of whole cloth and calling it a hypothesis.
Thus, while we are getting computers and scanners fast enough and detailed enough to run larger and larger “simulations,” if we want to know what those simulations will prove or disprove, those philosophers have given us a misleading, incoherent answer.
I’ve little doubt that increasing mappings and simulations of various aspects of brain structure will yield new “things we can do to brains”. That’s interesting (and alarming).
What do we get from these folks though? We get told that really with these experiments we’re on a quest to discover how the physicality of the brain gives rise to the phenomenology of the whole mind. How breathless. How inspiring-sounding. And how dangerously wrong and meaningless.
-t
Regarding: “are you arguing that brains violate the laws of physics”
Well, I don’t want to put words into anyone’s mouth, but I think the answer is, of course they are. Humanity is deemed to possess a shard of divinity, an essential soul which transcends the animal kingdom, err, the materialistic universe. Otherwise we’d just be vile apes, err, mere computers. Brute beasts, brute computation, both are concepts which scare a certain worldview.
By the way, is there anything in that “WBE” paper which hasn’t been done better in dozens of science-fiction stories? The idea that if we can replicate the exact state of a brain, it can be copied, is pretty old story stuff. Heck, the fan Star Trek guides wrote stuff like that a long time ago.
Seth:
Gee, I can’t push the Conway paper enough. From a very small set of axioms, rock-solid assumptions drawn from special relativity and quantum mechanics, it proves that if humans contain a spark of something beyond physical law — let’s call it free will — then so do (for example) electrons.
Isn’t that interesting in the sense that it suggests a world view which accepts our best science, 100%, and yet which gives us a universe that is also “animistic” in some sense? This is not to say that, for example, the free will of an electron would be recognizable in anthropocentric terms. It’s just to hint that our best theories add up to the conclusion that, indeed, humans might have some “spark of the divine” just because everything that exists might.
There’s a third way, in other words, between a clockwork universe and an anthropocentric concept of soul per se: and that third way seems to be implied by our best physics.
And, incidentally, the axioms from which the theorem is derived are all quite settled empirical questions. There appears to be no danger of their being overturned by any future experiment. The permanent ambiguity between animism and unknowable determinism is a real, measurable phenomenon. Whatever the universe is, physicists have proved that it may very well include the participation of the divine. Go figure.
(And, boy, the public debate over intelligent design was really poor in related ways. The correct answer is not that I.D. is obviously wrong but, rather, that in its most plausible form it is formally unprovable — and so is its negation. Thus it is not a scientific hypothesis, although it does point out an ambiguity that can’t be shed from our best theories of physics. You can believe it and simultaneously believe everything you hear in science class about evolution: they are mutually consistent beliefs. The only difference is that I.D. is beyond proof (as is “Not(I.D.)”) and thus makes a good topic for “philosophy of science” rather than science per se.)
-t
“if humans contain a spark of something beyond physical law — let’s call it free will — then so do (for example) electrons.”
I would call that a hilarious _reductio ad absurdum_.
Just off the top of my head, I think this is equivocating between free will in the philosophical meaning, and some sort of analytic indeterminacy.
Or, from another direction, I don’t think the brains-are-not-computers people are going to be converted if you tell them that computers have souls too.
I’m not sure if you mean that as a very clever joke (Conway’s argument involves two very elegant “reductios” and it is (typically for the author) playful.).
The Free Will Theorem (“FWT”) describes a real experiment that you can perform (not easy, but doable). It proves (by a pair of reductios) that there does not exist any mathematical function (computable or not, whether we know the “fundamental constants” or “initial conditions” or not) which takes as input the complete history of the universe up to that experiment, from any perspective, and gives as output the results of the experiment. The universe is logically free to choose an arbitrary outcome based on empirical axioms that are very, very well established.
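For readers who want the theorem’s raw ingredients, here is a compact restatement of its three axioms (paraphrased from memory of the published paper; see the original for the precise wording):

```latex
\textbf{SPIN:}\quad s_x^2 + s_y^2 + s_z^2 = 2, \qquad s_i^2 \in \{0, 1\}
% i.e., measuring the squared spin of a spin-1 particle along any three
% mutually orthogonal axes yields the outcomes 1, 0, 1 in some order.
%
% TWIN: twinned (entangled) spin-1 particles give equal squared-spin
% results when measured along parallel axes.
%
% MIN:  each experimenter's choice of measurement axes is free, i.e.,
% not a function of the information available in the other wing of
% the experiment.
```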
Weirdly, the arbitrary outcome is not *meaningless*. The universe is free to decide, but not arbitrarily free.
That doesn’t prove that the universe is animistic and it doesn’t prove that the universe is deterministic. One thing it does prove is that an animistic interpretation and a deterministic interpretation are *both* non-scientific. Neither can be proved or disproved empirically.
Logically, and accepting only quite uncontroversial empirical truths as axioms, if you say you are a determinist you are professing a formally non-scientific faith. Likewise if you say you are an animist. Likewise if you *very carefully* profess faith in a cleaned up expression of intelligent design. None of those hypotheses (determinism, animism, et al.) are scientific questions. We can’t find experiments to prove one of those or the other because we’ve already done experiments that prove we can’t.
You’ve heard all of that in pop-sci “creepy-crawly” accounts of QM and Relativity before but what’s new in Conway is three-fold:
(a) If you are into the “technology” of math — the nitty gritty — Conway’s arguments are great. Elegant, playful, convincing, brashly generalized…. typical Conway (see the interlude chapter of “On Numbers and Games”). But, that’s not (in detail) for this blog.
(b) Not only does he rigorously show the universe’s inherent scientific ambiguity about determinism vs animism but in doing so he relies only on an extremely small, extremely well verified set of empirical facts. This isn’t a theorem about QM and Relativity per se. This is a theorem about very tiny, weakened subsets of QM and Relativity — I mean: take a few of the smallest, least-controversial parts of QM and Relativity that you can find — the most banal of tiny subsets — and then Conway builds his proof on those axioms.
(c) he starts to elevate these results back into the philosophical domain rather well, in particular as relates to the “phenomenology of the mind”
In part, the “Free Will” papers are about vocabulary:
For one thing: When Conway writes math like in these papers he’s writing in a style that could be called “multiply formalizable”. He’s talking in abstractions that can be formalized into machine-checkable proofs — but in more than one way. For example, he uses words like “function” but he doesn’t mean “function” per any one axiom/definition system of math but, really, he’s talking about an abstraction that is some “uncontroversial common properties of ‘functions’ in any of the commonly used axiom/definition systems”.
It’s hard to keep up with his writing, sometimes, because he’s so overwhelmingly good at talking at that level of abstraction. He has a keen eye for the “landscape” of math.
That’s one way the paper is “about vocabulary” but the other I notice is, well, the way it uses words like “free will” and the way it draws deliberate attention to the problematics of the way it uses such words.
Pointing to the scientific ambiguity of freedom vs. determinism, the paper forcefully shows by example that the language of freedom and choice is more consonant with the mathematical reality of our least controversial empirical observations than the language of determinism is.
Saying that an electron has free will, Conway points out, is the most natural common (English) way to convey the gist of the mathematical characteristics of the situation.
-t
I think it WOULD be possible to emulate or mimic the observed effects of a brain in the short term — it could almost be done right now. The brain I’m referring to of course wouldn’t be human, but it would be fairly complicated as brains go; say, for example, the brain of an advanced insect like a hornet or an ant. Notice, also, that I said the effects of a brain, i.e. results of decisions — NOT consciousness. I’ll get to explaining myself in a minute; but I don’t think it is even theoretically possible to create consciousness with silicon, although a very intriguing possibility is that you might get something similar, yet profoundly different.
If you are a gamer you have certainly played a shooter with bots. I am a huge fan of some older games like Doom and Quake, and play death-match with bots all the time. Even in such simple games the bots seem to make semi-intelligent decisions. The ‘decisions’ they make are just the results of some pretty simple rules encoded into C. The gap between the ‘intelligence’ of these bots and that of a worker ant isn’t that great. That’s NOT saying, however, that the bots are conscious, or could ever be.
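For what it’s worth, the sort of rule table such a bot runs on can be sketched in a few lines. The thresholds and action names below are invented for illustration; real game bots simply have more rules of the same shape:

```python
def bot_decide(health, ammo, enemy_visible, enemy_distance):
    """Pick an action from a fixed priority list of hand-written rules.

    All thresholds are made up for illustration. The point is the
    structure: a cascade of if-tests, with no learning and no state.
    """
    if health < 25:
        return "retreat"  # survival outranks everything else
    if enemy_visible and ammo == 0:
        # out of ammunition: close for melee or run, depending on range
        return "melee" if enemy_distance < 2 else "flee"
    if enemy_visible:
        return "attack"
    if ammo < 10:
        return "find_ammo"
    return "patrol"  # nothing urgent: wander the map
```

Watching such a bot play, the output of this cascade looks uncannily like deliberation, which is exactly the commenter’s point about worker ants.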
I’m not exactly a dualist; I think consciousness is the by-product of a particular kind of mechanistic process; it’s just that I think that that process is carbon-based, and couldn’t work with the kind of simplified silicon-based system people will eventually make.
If you think about it there are common natural, inorganic processes that mimic life and consciousness, and examining them for a minute could be very instructive. For example, take any vortex, like a dust-devil, tornado, or just water going down the drain. The movable vortexes like a tornado have a shape, a trajectory or path, a birth, a death, and, while they exist, both a past and a future. In all of this they mimic life. Yet they are entirely incorporeal. A tornado, for example, is just air.
Now, obviously, a tornado is much simpler than any thing alive. But like living matter at least for a while it is a SELF-REPLICATING PROCESS.
To proceed with my argument I need to take a step back and say something about why I am not a dualist (we’ll get back to the tornado in a bit). I think, as have many before me, that whatever the building blocks of consciousness are, they are shared by all matter, even inanimate matter like stones. This is not the same as saying that inanimate matter is conscious. It isn’t, at least in the sense we use the term. Specifically, I would like to make a totally unfounded assumption: that consciousness is a by-product of movement or activity — something like a spark, say, maybe an almost imperceptible warping of time, that accompanies the movement of electrons. Like I said, I know this assumption is unfounded, but bear with me for a minute. If this were the case, normally, as for example in a stone, this by-product of molecular activity — this “consciousness” — would be random (because of the immense number of molecules), and therefore both imperceptible and for all practical purposes nonexistent.
Yet any organism more complicated than a bacterium, at any one time, has several orders of magnitude more processes acting in parallel than even a vortex like a tornado. In a way the analogy between vortexes and life breaks down here. For it to be exact, the vortexes would have to be something like three billion years old, and gradually increasing in complexity the entire time. In any event, with the incredibly complicated, organized “mechanical substratum” that carbon-based life provides, the amalgamated conscious processes that are part-and-parcel with matter become non-random, not all at once, but increasing in their non-randomness the more their substratum, e.g. the animal they’re based in, can benefit from their activities through the process of natural selection. We — people, or rather our minds — are the culmination of this whole process.
So even a computer that may seem to mimic consciousness — an intelligent machine, if you will — will be MUCH simpler, and with an entirely different base than an organic system like a human, or even an ant. If such a thing actually has a consciousness, I would be surprised if it were anything like our own.
Just to make my point: supposing such a computer were built, would YOU be willing to “upload” your own consciousness to it, with the price being that the ‘old’ you would immediately die? I sure wouldn’t, unless I knew my death was irrevocably imminent.
BTW, I’ve been reading Bergson’s “Creative Evolution” but still have a long way to go. What I’ve written here wasn’t influenced by the little I’ve read; but I’ve read enough to know that it is pertinent to this discussion.
Haven’t had a chance to read through all the comments yet, but I want to assure Seth that I haven’t gone all New Age on him. What I believe I wrote in the post was “old school materialist” – the longstanding view that the (fixed) structure of the physical brain explained everything about mind/consciousness/self. Map the brain, and you’ve got the self. Free will? An illusion – or perhaps “noise.” That view – which is very much a mechanical, machine view – is outdated. The mind does actually seem to exist, and thoughts seem to exert a physical influence on the structure of the brain. Map the brain, and you have, well, a map of the brain. Mystery doesn’t have to be a sign of divinity, Seth; it may just mean we don’t know. The history of physics provides a very good illustration, also calling into question old ideas of materialism.
Nick, well, maybe I misread you, but that last paragraph about “defining the person to be a machine” did seem rather mystical.
Let’s put it this way – there’s nothing that says the brain, complete with mind, can’t be emulated, and plenty that indicates it can. Of course it will be difficult. Look at it this way – nobody has built a cell yet, but people don’t say nowadays that cells have some vital essence that makes them forever unsynthesizable by mere chemistry.
The mind is self-modifying code, to a certain extent, and common programs aren’t self-modifying, so people tend to think that all programs are fixed that way. But it’s quite possible to write self-modifying code.
When you say:
“The history of physics provides a very good illustration, also calling into question old ideas of materialism.”
Umm, I have a degree in physics from MIT.
There are indeed deep paradigm shifts that happened. But there’s also much more abuse of metaphor.
One of the most formative intellectual experiences I recall was studying the Heisenberg Uncertainty Principle and trying to grasp its meaning in terms of the mathematical operators. Quite profound. Conversely, though, I never thought much of the way it gets abused as a philosophical statement.
I think the deep question as to whether the universe is determinate or statistically indeterminate is like the Uncertainty Principle, in that people often take a real but highly technical matter and try to turn it into all sorts of metaphorical implications.