I, Nanobot — Bionanotechnology is Coming
Maria Williams writes "Alan H. Goldstein, inventor of the A-PRIZE and popular science columnist, says:
Scientists are on the verge of breaking the carbon barrier — creating artificial life and changing forever what it means to be human. And we're not ready...
Nanofabricated animats may be infinitesimally tiny, but their electrons will be exactly the same size as ours — and their effect on human reality will be as immeasurable as the universe. Like an inverted SETI program, humanity must now look inward, constantly scanning technology space for animats, or their progenitors. The first alien life may not come from the stars, but from ourselves." Yes, it's an older article, but it's a fairly quiet Sunday today.
On the verge of breaking the carbon barrier? (Score:5, Funny)
The carbon barrier will be broken by silicon. (Score:5, Interesting)
A pretentious, overheated, paranoid dupe, at that. :) But since we're here:
Life will almost certainly arise from Ai (Ai = artificial intelligence), but it won't be "nano" anything. My feeling as an Ai researcher is that we're already well past the computing threshold for Ai/AL (AL = artificial life). Ai will always imply AL, though the reverse does not hold. A typical desktop today seems to me to have more than enough power to "come to life." What we are missing is the algorithm, no more.
My reasoning for this is as follows:
The complexity of the part of the brain that we use to think is not as high as commonly supposed. Much of what is in there handles what are really (to a computer) mundane things: pattern recognition (sound, light, touch) and movement, none of which are components of intelligence per se (e.g., a blind, deaf, immobile person is still intelligent). A computer's sensorium can simply be a text stream; and in this realm, pattern recognition, retrieval and communications are relatively natural to it, and extremely high-powered in comparison to how we do the same thing. Likewise, a computer has no autonomic system to regulate, no balance to keep, no hungers to assuage. We know very well how to do associative memory, and we know how to make those associations carry associations of their own; I'm talking about the coloration of memory with any set or range of characteristics one might imagine or preconceive into being. This stuff is all relatively easy. Processing it and making sense of it - that's really the issue. We don't have that.
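For what it's worth, the "associations that carry associations" idea can be sketched in a few lines; everything below is purely illustrative (invented names, toy data), not anyone's actual system:

from collections import defaultdict

class AssociativeMemory:
    def __init__(self):
        # memory[item][associate] -> attributes attached to that particular link
        self.memory = defaultdict(dict)

    def associate(self, a, b, **attributes):
        # link a to b and "colour" the link with arbitrary characteristics
        self.memory[a][b] = attributes

    def recall(self, item):
        # everything associated with item, each link carrying its own attributes
        return dict(self.memory.get(item, {}))

mem = AssociativeMemory()
mem.associate("rose", "red", strength=0.9, channel="sight")
mem.associate("rose", "thorn", strength=0.7, emotion="caution")
print(mem.recall("rose"))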
But: Processing isn't likely to be a hard problem. I say this because the human mind (like all the other minds we are aware of) achieves what it does with massive parallelism of very simple structures. Parallel processing is no different from serial processing in terms of what it can and cannot do in the end. Speed-wise, yes, parallelism is very powerful, but computationally it is no more effective than going one step at a time and accumulating results. Further, there is no parallel structure that cannot be emulated or represented with serial processing of those structures, in a manner that stages results to the appropriate level of parallelism. Processing is, instead, probably an esoteric problem, in that the way it works is not obvious to our higher-level or aggregate way of considering information, memory, and reason.
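A toy sketch of that staging argument (all of it invented for illustration; no claim about how real brains or any real system work):

def parallel_step(state, neighbours, rule):
    new_state = list(state)                    # staging buffer holds the results
    for i in range(len(state)):                # walk the units one at a time, serially
        inputs = [state[j] for j in neighbours[i]]
        new_state[i] = rule(state[i], inputs)  # each unit reads only the old state
    return new_state                           # commit all results at once, as if in parallel

# three toy units, each switching to the majority value of its neighbours
neighbours = [[1, 2], [0, 2], [0, 1]]
rule = lambda s, ins: 1 if sum(ins) * 2 >= len(ins) else 0
print(parallel_step([1, 0, 0], neighbours, rule))   # [0, 1, 1]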
Some have argued that because of the massive parallelism in the brain, creating something comparable will require a huge amount of resources that we do not yet have. There is a basic misconception at work here, and it is a critical one: if, as they argue, your desktop is not fast enough to compute serially what the brain does in parallel in the same time frame, this in no way undermines the idea that your desktop can still do the computation. In other words, if you ask me a question, and I give you an answer you deem to be intelligent within, say, 10 seconds of reflection; and you type the same question to your presumptive Ai, and it gives you essentially the same answer in, say, an hour, that answer is no less intelligent for being more slowly produced. Intelligence isn't about speed - it never has been. Intelligence is about the nature of the question, the nature of the conversation, the nature of the reflection.
The day that someone comes up with an algorithm that produces answers of a nature that we can begin to argue about how well they compare with - or exceed - human capacity will be the same day that the hardware companies begin to examine the software and build hardware optimized to make those algorithms faster, compile better, etc. Though this will in no way make anything more intelligent, it will make the intelligence easier to converse with for us, right up until the point when the computer is faster than we are. At that point, again, it will be important for us to remember that speed isn't the issue, and never was. Otherwis
Re: (Score:2)
One can hope so, since the only thing we can honestly tell it is that by the time we were smart enough to contemplate why we got here, we'd already been here for so long that all evidence of our origins had become mightily indirect, so much so that we really can't say for sure where we came from. But hey, there is this process we've characterized and confirmed — evolution — we like the odd
Re: (Score:2)
In order for you to bring this to the argument, you have to assert that this is the class of problems the brain is solving in order for intelligence to arise. I don't think you can make a case for that. Even in the abstract.
That is not at al
Re: (Score:1)
"In other words, if you ask me a question, and I give you an answer you deem to be intelligent within, say, 10 seconds of reflection; and you type the same question to your presumptive Ai, and it gives you essentially the same answer in, say, an hour, that answer is no less intelligent for being more slowly produced. Intelligence isn't about speed - it never has been."
You are comparing 10 seconds to an hour (the computer program is running 360 times slower than the human), but nobody knows how to compare
Re:The carbon barrier will be broken by silicon. (Score:4, Insightful)
No, I've not forgotten this, I've simply ignored it because it isn't an actual problem for several reasons.
First, computers have a huge advantage in that once we have one working intelligent system, we can copy its state and have two - or two hundred - or two million - with relative ease. We can't do this with people, nor is there any hint we'll be able to any time soon. This advantage makes it worth more to "educate" one system. It is as if by educating your kid to the level of a PhD, I'd educated your entire town. All of a sudden, your kid becomes very important and worthwhile to educate.
Second, any one computer can be "educated", if you will (have those questions answered), through multiple sources, multiplying the learning speed and dividing the learning time. And this only needs to be done once; they're not like people, who keep showing up un-programmed and who must each be laboriously programmed by flipping low-level switches, as it were. This is worth doing no matter what it costs, because of point one. Knowledge compilation is ongoing right now; there are several very large projects of this nature.
Third, if one can simply prove that some particular method gives rise to intelligence, market and social factors will converge on the problem and simply up the efficiency until the problem is solved many times over. The only thing we have to have is some kind of result we can demonstrate in a time frame that will convince those who must invest to get those needed resources on line towards solving the problem.
Fourth, parallelism can be applied at multiple levels. Look what Google has done to make search work. An intelligence that resides in 1000 machines is no less intelligent for having done so, if one were to have to apply such broad leverage - and remember, there is no evidence that this is so. We won't know how to ask the question, much less evaluate the answer, until we have a working algorithm.
Fifth, in the "educate" phase, even assuming that intelligent response is running at 360:1, that is not to say that initial education of the first unit will run at that rate. If information is simply put in place (remember, study is a human problem -- computers may not require study at all) it may be that the learning rate is many thousands of times that of a human. You just can't assume that learning equals reasoning. It isn't always the same process, and it certainly isn't the same when simply copying.
Generally, speed does not matter. Results matter. If speed is so poor that results are unobtainable, then speed matters. I don't expect that this is the case based upon the evidence at hand. We work; dogs work; mice work. All demonstrate various levels of intelligence with descending degrees of "hardware" available to them. Computers have huge amounts of capability. I conclude that computers can work too.
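Coming back to the first point, a toy illustration of copying an "educated" state; the dictionary below is just a stand-in for a trained system, nothing real:

import copy, pickle

educated = {"facts": {"water": "H2O"}, "skills": ["parse", "retrieve"]}

second = copy.deepcopy(educated)                   # one more independent instance, in memory
blob = pickle.dumps(educated)                      # or serialize the whole state...
clones = [pickle.loads(blob) for _ in range(200)]  # ...and stamp out as many copies as wanted

print(len(clones), clones[0] == educated)          # 200 True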
Re: (Score:2)
I'm quite familiar with the problem and landscape of emulation. I'm the author of this emulation [blackbeltsystems.com], which is by far the fastest of all the 6809 emulations out there, last I heard. :) Also some others; but that one, you can play
Re: (Score:1)
The day that someone comes up with an algorithm that produces answers of a nature that we can begin to argue about how well they compare with - or exceed - human capacity will be the same day that the hardware companies begin to examine the software and build hardware optimized to make those algorithms faster, compile better, etc. Though this will in no way make anything more intelligent, it will make the intelligence easier to converse with for us, right up until the point when the computer is faster than
Re: (Score:2)
Have you ever seriously considered that an IQ test would be relevant to an Ai? :)
They're going to be vastly different, and you can count on that, I think. Speed does not matter to prove the point. Results matter, and not all situations remain unsolved if the results are slow in coming. That's just not how the world works. Speed is nice to have, and as I have already said, once the point is proven, speed will come along on its own as a matter of market and so
Re: (Score:1)
They're going to be vastly different, and you can count on that, I think. Speed does not matter to prove the point. Results matter, and not all situations remain unsolved if the results are slow in coming. That's just not how the world works
It does lol
Re: (Score:1)
We are not there yet (Score:2)
It's true that a blind person cannot see visual patterns, but this doesn't mean a blind person cannot visualize patterns. Building a mental image of a pattern that fits one's observations is a basi
Re: (Score:2)
Actually, it may. It depends on where the fault in the system lies. Much pattern recognition is done in the optic nerve. Visualization is one way of representing patterns, but it is not the only way. Computers can recognize patterns using many types of methods. There's nothing special going on here; we're just really good at it (when we're intact and functioning normally.)
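A minimal, purely illustrative example of pattern recognition with no visualization anywhere in sight - just nearest-neighbour matching over bit strings:

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

templates = {"vertical bar": "010010010", "horizontal bar": "000111000"}

def recognize(pattern):
    # pick whichever stored template is closest to the input
    return min(templates, key=lambda name: hamming(templates[name], pattern))

print(recognize("010010000"))   # closest match: "vertical bar"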
Re: (Score:3, Insightful)
So you believe there is some magical algorithm which, when implemented, is self-aware? The "tur
Re:The carbon barrier will be broken by silicon. (Score:4, Insightful)
Other than that loaded word, "magic", absolutely. You represent a case of at least one. I represent another. I simply extend that idea to a different architecture, because I am of the opinion that the architectures have significant equivalence, computationally speaking.
What I don't believe in is some "magic" situation that will not give up its operational methods and modalities in response to a concerted effort to understand it, when the entire problem resides right here in our "back yard", as the human brain certainly does. Postulating that the brain is a magic box with functionality that cannot be replicated is stepping out on a limb that no other comparable intellectual effort we have ever undertaken can justify. The only problems we've been consistently unable to solve are of a type where the information is unavailable to us (eg, the big bang, or whatever happened, or didn't happen.) Even then we do pretty well. But the brain isn't like that. It is right here. In literally billions of instances. We'll figure it out. I have very high confidence this will happen, and that it isn't all that far out in the future.
I have no reason to assume that an Ai would not be self-aware. More to the point, neither do you.
Re: (Score:3, Interesting)
Re:The carbon barrier will be broken by silicon. (Score:4, Insightful)
Yes. Humans already show this kind of intelligence can exist; I have no trouble generalizing to other forms from there. Artificial is not a distinction that concerns me in the sense of creating a barrier to possibility.
Agreed. But like most things in the world, "stuff" often happens without any consideration for the cans that get opened. We just have to deal with the issues. Not that I think we'll do very well. There may be attempts to legislate progress out of the loop (as we see with stem cells and cloning, for instance) but I don't see that as being effective, long term. Politicians don't have the control they think they do unless the resources required are so large that they cannot be hidden. Ai is the antithesis of such a project; you can do it at home, disconnected from the net, you could succeed and no one need know until your Ai has been duplicated and distributed a million or more times. Like DVD Jon's code, any attempts to "control" are doomed to failure before they begin.
In order: They should; they should be able to; religion is mythology/100% bunkum, and there are no actual implications whatsoever other than perhaps we're a step closer to getting over it; and free will in the philosophical sense is not a question that has any practical application, so why bother - and for that matter, how many people really care? Will Ai's care? Good question. I hope to be able to ask some day. :)
Reality doesn't care if you have a hard time. Things are what they are. No more, no less. We either intelligently play the cards reality deals us, or we buy into illusion / bad metaphor and generate canned responses via someone else's playbook. Each of us has to make that call.
Well, mine aren't; they're meaningful because I deem them so. Likewise, the emotions, dreams and desires of those I care about are meaningful to me for the same reason; I elect to engage them and enjoy that process. All else, to me, is utterly meaningless navel-gazing.
My research is on high performance associative memory. My results have been very pragmatic; no surprises at all for me; things work as I pretty much always thought they worked, and though they may also work in other ways, the path I'm on has been quite productive.
I see no religious problems other than religion itself, which I do see as a huge problem, basically the result of fear, gullibility, and ignorance in various combinations. As far as moral and ethical issues go, we've been really poor at dealing with them among humans; there is no indication we will be any better if or when we introduce (or find, or are found by) other forms of life. Things will get more complicated, more divisive, and we'll make a further muddle of it. In my opinion. As to my "unconfirmed beliefs", that'd be 99% of what I think about in every domain. I've a confidence-based world view, not a conviction-based one. So you'll have to be more specific. :)
Re: (Score:2)
Specifically, what is your theory of consciousness? How did consciousness develop over time and why do we have it and other living (and non-living) things don't? Is consciousness metaphysical in nature or is it simply about having the right algorithm? If it is the latter, then doesn't that imply that we are no better tha
Re: (Score:3, Interesting)
I don't have a formal theory. My opinion is that it is an emergent property with considerable variation that we like to put under one comfortable name, but that which really doesn't fit as well as we would like it to. In other words, no, animals and humans were not all created equal in any sense. :)
Slowly, I suspect. Other than that, I have no idea.
Re: (Score:2)
>ignorance in various combinations. As far as moral and ethical issues go, we've been really poor at dealing with them among humans; there
>is no indication we will be any better if or when we introduce (or find, or are found by) other forms of life. Things will get more
>complicated, more divisive, and we'll make a further muddle of it. In my opinion. As to my "
Re: (Score:2)
That would depend on who you ask. For instance, the US Constitution, via the 13th Amendment, specifically permits slavery and involuntary servitude for the lowest class of individuals in debt to society. I'm of the opinion that any intelligent, informed individual should have the option to sign themselves into slavery for reasons they find sufficient, and typically, that means reward -- monetary or otherwise
Re: (Score:2)
On the contrary, everything we know about emergent systems suggests this. Furthermore, the interaction between consciousness and any brain manipulation you care to consider (drugs... injury... surgery... sleep... sensory input... destruction) further reinforces the idea that the brain is in fact not only the seat of animal consciousness, but the foundation, home and require
Re: (Score:1)
Such an algorithm exists, because there are intelligent animals (including ourselves) we can observe.
People are always giving the appearance of responding intelligently without actually being very intelligent. People in general are not that intelligent. We work with a v
Re: (Score:2)
Well said.
Re: (Score:1)
Intelligence isn't about speed
That depends on whether you consider the Turing test a measure of intelligence (it's certainly not perfect, but it's been around for quite a while, so I thought I would throw it in). If I asked two black boxes to reflect on modern literature and I got a response out of one of them in 10 seconds, and the other in an hour, I might conclude there is an intelligent agent in one, and a computer at the heart of the other.
Then again, if Moore's Law remains true for only.....
Re: (Score:2)
Which one? The whole point of the Turing test is to try to fool you. Humans are crafty. Maybe the human waits until the computer responds, then responds later. Every time. Then where are you?
Anyway, initial speed is irrelevant. If we can craft something that answer
Re: (Score:2)
I think nanites are closer to achieving this than anyone constructing AI simply because the two goals are different. The goal of nanites is closer (or the same as) achieving what we would call life. The goal of AI is (as the name suggests) intelligence.
There are some things
Re: (Score:2)
Um. Well, it's a discussion we could have. From where I stand: A virus is incomplete; it can't reproduce or propagate without a host, it can't think, has no consciousness, and does not generally have the ability to choose or follow a path. If asked, I'd have to say it wasn
Re: (Score:2)
I was thinking about saying "bacteria" instead of "virus," but a virus is simpler and I do believe it's alive. Its usage of a host to reproduce seems the same to me as us eating.
Would you say that a blood cell is alive? It cannot survive without a host. H
Re: (Score:2)
I think I'd venture that a blood cell or a neuron is a living part of a living system, therefore alive. As opposed to a dead cell -- hair, nail, enamel -- or an embedded item, like the stones in a gizzard.
Re: (Score:2)
Reality doesn't care. It'll just keep on producing things that upset you. Sorry about that.
Neither. I had been a friend of Isaac's since the 1960s, but never a huge fan of his robotics ideas. We agreed to disagree, mostly. I'm more of a James P. Hogan fan, come to that.
Re: (Score:1)
Sorry to break it to you, but you're a machine and your brain is a computer. You're thinking too abstractly. A computer is just the result of interactions between protons, neutrons, electrons, etc. Just like your brain, and everything else in the universe. You believe in God, don't you?
Re: (Score:2)
And he's named Al Goldstein?
As in Al Goldstein [wikipedia.org]?
Must be one heck of a date...
Doomsday predictor? (Score:2, Insightful)
I think this guy may be taking himself a little too seriously:
"So why listen to the voice of one who is not Ishmael, not Cassandra, not even Ralph Nader? Because I can tell you something that no one else can. I can tell you the exact moment when Homo sapiens will cease to exist. And I can tell you how the end will come. I can show you the exact design of the device that will bring us down. I can reveal the blueprint, provide the precise technical specifications."
More like "I love me!". (Score:4, Insightful)
He's referencing his own work (2 of the 5 referenced), and the dates are in 2006 (with a Wired article from 2005).
This seems more like an attempt to get people to buy his newly published book than anything else. And as evidenced in your clip, there's a bit of megalomania there.
Re: (Score:2, Insightful)
I hate to rain on this clown's parade, him being an official "genius" and all - but he is rather obviously not current. The predictable trend is the death of democracy (already happened, of course) and the ascendancy of the global corporate dictatorship (this would be China, leading the planet to the One World Corporation), so all such all-knowing predictions go by the wayside....
Not read science fiction? Watts? Vonnegut? (Score:2)
As examples:
Peter Watts, Starfish [rifters.com], 1999. He posits the existence of a type of life that would out-compete anything in our 3.5-billion-year-old biosphere. As a character says:
AI (Score:1)
A Wind in the Door (Score:1)
I can't believe I managed to pull that reference out of my butt. Go me.
Re: (Score:2)
Actually, it's midi-chlorian.
I thought everyone knew that by now.
sed -e"s/superconductor/nanotech/g" (Score:2)
Yeah, I know superconductors have found a few uses in a few niche fields, but I remember very clearly how we were told that superconductivity would change all our lives... and that was almost 20 years ago.
Re: (Score:2)
The point is that buzzwords don't often live up to their hype. The overuse of the prefix "nano" is just an example.
Nanobot ? (Score:1)
In space no one can hear you scream. (Score:1)
I think the movie Alien fucked up any chance we had of ever walking on an alien world.
Overblown (Score:1, Insightful)
Engines of Creation (Score:3, Informative)
ID? (Score:1)
Huh? (Score:2)
We all make mistakes (Score:2)
Darth Vader: Do they have a code clearance?
Admiral Piett: It's an older code, sir, but it checks out. I was about to clear them.
Re: (Score:1)
Luke: I'm endangering the mission, I shouldn't have come
Realizing that Vader had sensed him and purposely let them through. Obviously the editor has
slow news day ... (Score:2)
nano-FUD (Score:3, Interesting)
Re: (Score:1)
Pathogens such as the avian flu or Ebola virus are worrisome because of their "self-replication" and "evolvability". Certainly creating "nanobots" with these characteristics should give us pause.
I believe that the author's point is tha
Crikey (Score:2)
Oddly, it would seem it was a better waste of my time.
Comic book plot goldmine (Score:3, Funny)
_The Singularity is Near_ (Score:3, Informative)
We are very close to having strong AI, nanotech in our bodies, and greatly enhanced information sharing / intelligence - all the research is well under way with positive results thus far.
http://tinyurl.com/ygmtxw [tinyurl.com] (amazon.com)
Re: (Score:2)
I beg to differ. The exponential curve has no such thing. Only in relation to something else can you point to a definite limit; for example, the rate of technological evolution could surpass the rate of human learning ability at some point.
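A rough illustration of that point, with numbers that are entirely made up:

tech = 1.0            # arbitrary starting rate of technological change
human = 1000.0        # arbitrary fixed "human learning ability"
periods = 0
while tech <= human:
    tech *= 2         # doubling each period, purely illustrative
    periods += 1
print("crossover after", periods, "periods")   # 10, since 2**10 = 1024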
Re: (Score:2)
I think the proper reference point is the human lifespan. Sure, medicine and technology have progressed in the past and extended the lives of people who otherwise would have died young, but what he is predicting is enough medical and technological improvements to endlessly ext
What this guy is missing (Score:3, Insightful)
They don't realize what evolution is. They have come to the problem from artificial intelligence, or systems analysis, or mathematics, or astronomy, or aerospace engineering. Folks like Ray Kurzweil, Bill Joy and Eric Drexler have raised some alarms, but they are too dazzled by the complexity and power of human cybersystems, devices and networks to see it coming. They think the power of our tools lies in their ever-increasing complexity -- but they are wrong.
The biotech folks just don't get it either. People like Craig Venter and Leroy Hood are too enthralled with the possibilities inherent in engineering biology to get it
He himself fails to understand the very basics of evolutionary biology: little primitive nano-organisms (viruses, bacteria, single-celled organisms), even artificially created ones, are not as grave a danger as he paints precisely because they are primitive. Those primitive forms of life dominated Earth, and more complex organisms took over because there are inherent evolutionary benefits to being complex - if it weren't so, more complex forms of life would never have evolved.
Complex lifeforms have thus dominated all primitive lifeforms over the course of millions of years, due to their inherent capability to adapt better to more varied environments and conditions.
That the simple cellular automata (forgive me the pun
This was true for "blind" evolution and will be even more true for human-governed evolution, as we have an advantage over all other lifeforms: we can change its course the way we want with our tools and technologies.
He dismisses the awe of prominent futurologists for complex human-created systems.
Primitive life forms can only manipulate what exists around them: at the level of molecular biology, everything is simple chemistry, dependent on the presence of particular elements:
Re: (Score:1)
They are enough of a threat to wipe out between 50 and 100 million people, 2.5 - 5% of the human population, in one year [wikipedia.org]. That is quite a sufficient danger that we ought to have some concern about "primitive" organisms.
Re: (Score:1)
And that is precisely, absolutely NOTHING. Complex organisms have lived with such "threats" for hundreds of millions of years and were not dented in any significant way. Au contraire, they were thriving and expanding their sphere of domination and influence at each turn of the evolutionary spi
Re: (Score:1)
The death of 50 to 100 million people is "nothing"?
Please, for your safety and the safety of others, seek mental health care immediately. You have clearly lost touch with consensual reality. Best of luck to you, I hope you can find effective treatment.
Choice of electrons (Score:1, Funny)
Maybe I'm just ignorant, but how many different 'types' of electrons are there?
Drivel (Score:4, Interesting)
The fact that he supports Bill Joy and Richard Smalley pretty well defines this idiot.
Smalley is a wannabe who came late to nanotech and decided to use his higher conventional scientific status to try to take over the field from Drexler. He failed, despite this guy's opinion.
Joy is simply incapable of rational reasoning.
And this guy's paranoid fantasies about nanobots "spontaneously impregnating" each other with technology to become an artificial life form - and one that dooms all other life in addition, despite millions of years of evolutionary adaptation on this planet to just about every conceivable hazard - are just drivel.
Sure, nanotech could be designed to destroy all life - but it would have to be DESIGNED to do so. The odds of it happening by chance are so low as to be not worth considering. Sure, badly designed nanotech - and there WILL be badly designed nanotech, we CAN count on that - COULD cause massive medical issues on a par with a deadly virus such as the flu virus in the early 20th century or something like Ebola. So what? You take the risk and you try to prevent it. Nanotech offers many technical options for inhibiting this sort of thing. If the IT industry will get off its ass and develop some AI-based engineering programs that check for stupid engineering mistakes, the entire engineering industry would be better off as well.
What IS going to happen is that Transhumanists will use nanotech to transform themselves into a superior species. And that's where the threat is going to come from, as monkey-ass humans follow their usual primate instincts to try to suppress the Transhumans - and unlike the Star Trek shows and Terminator movies, the humans will get their asses kicked trying - at least until the Transhumans have improved enough that they can just ignore the chimps and go about their business anyway.
Re: (Score:2, Insightful)
Sure, nanotech could be designed to destroy all life - but it would have to be DESIGNED to do so. The odds of it happening by chance are so low as to be not worth considering.
The chances of nanotech being designed to be deadly are actually very, very high. Think of it as a very advanced targeted virus. This probably won't be possible in the next decade or two, but designing a nanobot which kills and replicates based on certain characteristics is far from impossible. Say... you don't like that certain terrorist or president. Well, we take this nanobot here, key in the DNA, and off it goes to do its job. Its use as a weapon is very attractive, and humans have a history of getting a weap
Re: (Score:2)
I'm perfectly well aware that nanotech can be DESIGNED to be deadly. That was my point. The point of TFA, on the other hand, was that nanotech could be ACCIDENTALLY and EASILY converted SPONTANEOUSLY to being deadly to life - and not just some life, ALL life. That is far more unlikely, which the author does not acknowledge.
Re: (Score:2)
>nanotech to transform themselves into a superior species
"Superior" at what? Surviving? Is that really all that
matters to you? Rock formations are superior in surviving
>And that's where the threat is going to come from as monkey-ass humans
>follow their usual primate instincts to try to suppress the Transhumans
>- and unlike the Star Trek shows and Terminator movies, the humans will
>get their asses kicked trying - at least until
Re: (Score:2)
Trust me, we're not worried about you being dead rather than living in "our hell" either.
Works for us.
Re: (Score:2)
You think we care if anybody finds it "appealing?" One is either a Transhumanist or one isn't. Do you care if the roaches under your sink find your Roach Motel "appealing"?
Very Poor Science (Score:4, Informative)
He states that the human body has a potential of thousands of volts. In reality it seldom reaches 1 volt. (A potential of 1/2 volt over a distance of 50 angstroms is not a potential of millions of volts; it may be high in terms of volts per meter, but over any larger distance it just disappears. That's why we can't power pacemakers or laptop computers off your neural energy. Zero power is just zero power, no matter how clever you think your argument is.) He states that there are strong similarities between carbon and silicon chemistry. Yes, but there are also energy differences that are profound. In reality, there are very few living creatures that can use silicon. Most of those that can are bacteria, and they use it only to create a shell or frame.
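Checking the arithmetic in that parenthetical (illustrative numbers only): half a volt across 50 angstroms is an enormous field, but still only half a volt of potential.

potential = 0.5                 # volts
distance = 50e-10               # 50 angstroms, in metres
field = potential / distance    # volts per metre
print(field)                    # 1e8 V/m - a huge field, a tiny potential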
A little reality here: there are good reasons to believe that the first engineered bio-machines will not be too greatly different from the ones we've already been living with. We call them bacteria. Control is a problem. It has been a problem for a very long time.
The article is just an attempt to scare people with little knowledge of the underlying science. The author appears to be ignorant of basic physics and chemistry. His biology may or may not be suspect; I don't know that area as well. If this is the best that the critics of nanobot research can do, then they are doomed to failure.
Rank superstition and scary fiction are a very poor way to make technical policy.
Electron Size (Score:1)
Excuse me if I'm wrong, but I was somehow under the impression that all electrons are the same size :/
Bionanotechnology (Score:2)
Nanobots == scare tactics (Score:1)
Furthermore, it has been argued that for technical and financial reasons building self-replicating machines doesn't make much sense (C. Phoenix, E. Drexler, Safe exponential manufacturing, Nanotechnology 15, 869-874 (2004))
Goldstein's Road to Melville (Score:1)
"OK, Alan Goldstein, I will not call you Ishmael. But somewhere along your road to Melville, you took a detour into speculative fiction, because that is clearly the genre of your Salon article, I, Nanobot." More here [blogspot.com]