Security

I, Nanobot — Bionanotechnology is Coming

Maria Williams writes "Alan H. Goldstein, inventor of the A-PRIZE and popular science columnist, says: Scientists are on the verge of breaking the carbon barrier — creating artificial life and changing forever what it means to be human. And we're not ready... Nanofabricated animats may be infinitesimally tiny, but their electrons will be exactly the same size as ours — and their effect on human reality will be as immeasurable as the universe. Like an inverted SETI program, humanity must now look inward, constantly scanning technology space for animats, or their progenitors. The first alien life may not come from the stars, but from ourselves." Yes, it's an older article, but it's a fairly quiet Sunday today.
This discussion has been archived. No new comments can be posted.


  • by Timesprout ( 579035 ) on Sunday December 17, 2006 @11:55AM (#17277826)
    Sounds like someone at the lab has a hot date lined up......
    • by fyngyrz ( 762201 ) * on Sunday December 17, 2006 @12:55PM (#17278206) Homepage Journal

      A pretentious, overheated, paranoid dupe, at that. :) But since we're here:

      Life will almost certainly arise from Ai (Ai=artificial intelligence), but it won't be "nano" anything. My feeling as an Ai researcher is that we're already well past the computing threshold for Ai/AL (AL=artificial life.) Ai will always equal AL, though the reverse is not implied. A typical desktop today seems to me to have more than enough power to "come to life." What we are missing is the algorithm, no more.

      My reasoning for this is as follows:

      The complexity of the part of the brain that we use to think is not as high as commonly supposed. Much of what is in there handles what are really (to a computer) mundane things: pattern recognition (sound, light, touch) and movement, none of which are components of intelligence per se (e.g., a blind, deaf, immobile person is still intelligent). A computer's sensorium can simply be a text stream; and in this realm, pattern recognition, retrieval and communications are relatively natural to it, and extremely high powered in comparison to how we do the same thing. Likewise, a computer has no autonomic system to regulate, no balance to keep, no hungers to assuage. We know very well how to do associative memory, and we know how to make those associations carry associations of their own; I'm talking of the coloration of memory with any set or range of characteristics one might imagine or preconceive into being. This stuff is all relatively easy. Processing it and making sense of it, that's really the issue. We don't have that.
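The associative-memory idea above (associations that themselves carry associations, a "coloration" of memory) can be sketched in a few lines of Python. This is only a toy illustration; the `AssociativeMemory` class and its tag names are invented for the example, not taken from any actual research system.

```python
# Toy associative memory: keys recall values, and each association
# carries its own "coloration" -- arbitrary tags attached to the link
# itself, not just to the items it connects.

class AssociativeMemory:
    def __init__(self):
        self.links = {}  # key -> list of (value, tags) pairs

    def associate(self, key, value, **tags):
        self.links.setdefault(key, []).append((value, tags))

    def recall(self, key, **required):
        # Return values whose tags match every required characteristic.
        return [
            value
            for value, tags in self.links.get(key, [])
            if all(tags.get(k) == v for k, v in required.items())
        ]

mem = AssociativeMemory()
mem.associate("rose", "red", sense="sight", mood="pleasant")
mem.associate("rose", "thorn", sense="touch", mood="unpleasant")

print(mem.recall("rose"))                   # ['red', 'thorn']
print(mem.recall("rose", mood="pleasant"))  # ['red']
```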

      But: Processing isn't likely to be a hard problem. I say this because the human mind (and all the other minds we are aware of) achieve what they do with massive parallelism of very simple structures. Parallel processing is no different than serial processing, in terms of what it can and cannot do in the end. Speed-wise, yes, parallelism is very powerful, but computationally, it is no more effective than going one step at a time and accumulating results. Further, there is no parallel structure that cannot be emulated or represented with serial processing of those structures in a manner that stages results to the appropriate level of parallelism. Processing is, instead, probably an esoteric problem, in that the way it works is not obvious to our higher-level or aggregate way of considering information, memory, and reason.
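The claim that any parallel structure can be emulated serially by staging results can be sketched as follows. This is a hypothetical example: both functions and their neighborhood update rule are invented purely for illustration.

```python
# Sketch: one step of a "parallel" neighborhood update, emulated serially.
# By writing into a staging buffer, a strictly serial loop computes exactly
# what simultaneous parallel units would compute -- just more slowly.

def parallel_step(cells):
    # Conceptually simultaneous: every cell reads the OLD state.
    n = len(cells)
    return [(cells[i - 1] + cells[i] + cells[(i + 1) % n]) % 2
            for i in range(n)]

def serial_step(cells):
    # One cell at a time, but results go to a staging buffer,
    # so later iterations still see the original state.
    n = len(cells)
    out = [0] * n
    for i in range(n):
        out[i] = (cells[i - 1] + cells[i] + cells[(i + 1) % n]) % 2
    return out

state = [0, 1, 1, 0, 1, 0, 0, 1]
assert parallel_step(state) == serial_step(state)  # same answer either way
```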

      Some have argued that because of the massive parallelism in the brain, creating something comparable will require a huge amount of resources that we do not yet have. There is a basic misconception at work here, and it is a critical one: If, as they argue, your desktop is not fast enough to compute serially what the brain does in parallel in the same time frame, this in no way undermines the idea that your desktop can still do the computation. In other words, if you ask me a question, and I give you an answer you deem to be intelligent within, say, 10 seconds of reflection; and you type the same question to your presumptive Ai, and it gives you essentially the same answer in, say, an hour, that answer is no less intelligent for being more slowly produced. Intelligence isn't about speed - it never has been. Intelligence is about the nature of the question, the nature of the conversation, the nature of the reflection.

      The day that someone comes up with an algorithm that produces answers of a nature that we can begin to argue about how well they compare - or exceed - the human capacity, will be the same day that the hardware companies begin to examine the software and build hardware optimized to make those algorithms faster, compile better, etc. Though this will in no way make anything more intelligent, it will make the intelligence easier to converse with for us, right up until the point when the computer is faster than we are. At that point, again, it will be important for us to remember that speed isn't the issue, and never was. Otherwis

      • "In other words, if you ask me a question, and I give you an answer you deem to be intelligent within, say, 10 seconds of reflection; and you type the same question to your presumptive Ai, and it gives you essentially the same answer in, say, an hour, that answer is no less intelligent for being more slowly produced. Intelligence isn't about speed - it never has been."

        You are comparing 10 seconds to an hour (the computer program is running 360 times slower than the human), but nobody knows how to compare
        • by fyngyrz ( 762201 ) * on Sunday December 17, 2006 @03:25PM (#17279346) Homepage Journal
          What you are forgetting is that the computer probably needs to answer and ask a lot of questions before it can be compared to you.

          No, I've not forgotten this, I've simply ignored it because it isn't an actual problem for several reasons.

          First, computers have a huge advantage in that once we have one working intelligent system, we can copy its state and have two - or two hundred - or two million - with relative ease. We can't do this with people, nor is there any hint we'll be able to any time soon. This advantage makes it worth more to "educate" one system. It is as if by educating your kid to the level of a PhD, I'd educated your entire town. All of a sudden, your kid becomes very important and worthwhile to educate.

          Second, any one computer can be "educated", if you will (have those questions answered) through multiple sources, multiplying the learning speed and dividing the learning time. And this only needs be done once; they're not like people, who keep showing up un-programmed and who must each be laboriously programmed by flipping low level switches, as it were. This is worth doing no matter what it costs, because of point one. Knowledge compilation is ongoing right now; there are several very large projects of this nature.

          Third, if one can simply prove that some particular method gives rise to intelligence, market and social factors will converge on the problem and simply up the efficiency until the problem is solved many times over. The only thing we have to have is some kind of result we can demonstrate in a time frame that will convince those who must invest to get those needed resources on line towards solving the problem.

          Fourth, parallelism can be applied at multiple levels. Look what Google has done to make search work. An intelligence that resides in 1000 machines is no less intelligent for having done so, if one were to have to apply such broad leverage - and remember, there is no evidence that this is so. We won't know how to ask the question, much less evaluate the answer, until we have a working algorithm.

          Fifth, in the "educate" phase, even assuming that intelligent response is running at 360:1, that is not to say that initial education of the first unit will run at that rate. If information is simply put in place (remember, study is a human problem -- computers may not require study at all) it may be that the learning rate is many thousands of times that of a human. You just can't assume that learning equals reasoning. It isn't always the same process, and it certainly isn't the same when simply copying.

          Generally, speed does not matter. Results matter. If speed is so poor that results are unobtainable, then speed matters. I don't expect that this is the case based upon the evidence at hand. We work; dogs work; mice work. All demonstrate various levels of intelligence with descending degrees of "hardware" available to them. Computers have huge amounts of capability. I conclude that computers can work too.

          • I think you're highly underestimating the processing power requirements, as well as making absurd comparisons such as "mice work, ergo computers can work." Aside from failing to define work to any reasonable degree, you're missing an important problem with emulation. Unless the "computer" is given the same processing hardware (a biological brain), then emulation is inherently slower. It still takes a considerable amount of power for a modern CPU to emulate a console CPU from just a decade ago. Emulating
            • by fyngyrz ( 762201 ) *

              you're missing an important problem with emulation. Unless the "computer" is given the same processing hardware (a biological brain), then emulation is inherently slower. It still takes a considerable amount of power for a modern CPU to emulate a console CPU from just a decade ago

              I'm quite familiar with the problem and landscape of emulation. I'm the author of this emulation [blackbeltsystems.com], which is by far the fastest of all the 6809 emulations out there, last I heard. :) Also some others; but that one, you can play
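The emulation-overhead point under discussion, that a host CPU spends many of its own operations reproducing each guest instruction, shows up even in a toy fetch-decode-execute loop. The following is a hypothetical Python sketch (unrelated to the 6809 emulator linked above); the instruction set is invented for the example.

```python
# Toy fetch-decode-execute loop: each guest "instruction" costs the host
# a tuple unpack, a comparison chain, and loop bookkeeping -- many host
# operations per emulated one, which is where the slowdown comes from.

def run(program):
    acc = 0          # single accumulator register
    pc = 0           # program counter
    while pc < len(program):
        op, arg = program[pc]   # fetch
        if op == "LOAD":        # decode + execute
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JNZ":       # jump if accumulator non-zero
            if acc != 0:
                pc = arg
                continue
        pc += 1
    return acc

# Count down from 3 to 0 by adding -1 in a loop.
prog = [("LOAD", 3), ("ADD", -1), ("JNZ", 1)]
print(run(prog))  # 0
```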


      • The day that someone comes up with an algorithm that produces answers of a nature that we can begin to argue about how well they compare - or exceed - the human capacity, will be the same day that the hardware companies begin to examine the software and build hardware optimized to make those algorithms faster, compile better, etc. Though this will in no way make anything more intelligent, it will make the intelligence easier to converse with for us, right up until the point when the computer is faster than
        • by fyngyrz ( 762201 ) *

          Ever wondered why the IQ test is time-based?

          Have you ever seriously considered that an IQ test would be relevant to an Ai? :)

          They're going to be vastly different, and you can count on that, I think. Speed does not matter to prove the point. Results matter, and not all situations remain unsolved if the results are slow in coming. That's just not how the world works. Speed is nice to have, and as I have already said, once the point is proven, speed will come along on its own as a matter of market and so


          • They're going to be vastly different, and you can count on that, I think. Speed does not matter to prove the point. Results matter, and not all situations remain unsolved if the results are slow in coming. That's just not how the world works


            It does, lol, and that's precisely how the world works. If you get "results" beyond an acceptable time frame, they are not results; they are what is called "failure". If you don't arrive at the benchmark result in finite time, it's not AI, it's a halting function.
          • I think one of the main barriers for people understanding your arguments, fyngyrz, is that you (from my understanding) are discussing machine intelligence. Most of the comments I've seen here are relating to what could be termed "human emulation". The first "true AI" which we develop will not be able to pass the Turing test. It will, however, function in a similar manner to a single-celled organism. This "being" may not be sentient, but it will be intelligent. I truly don't believe that this is too f
      • I think you are wrong when you state that "pattern recognition (sound, light, touch) and movement [are not] components of intelligence, per se (e.g., a blind, deaf, immobile person is still intelligent)." I think an argument could be made that pattern recognition is a fundamental part of intelligence.

        It's true that a blind person cannot see visual patterns, but this doesn't mean a blind person cannot visualize patterns. Building a mental image of a pattern that fits one's observations is a basi

        • by fyngyrz ( 762201 ) *

          but this doesn't mean a blind person cannot visualize patterns.

          Actually, it may. It depends on where the fault in the system lies. Much pattern recognition is done in the optic nerve. Visualization is one way of representing patterns, but it is not the only way. Computers can recognize patterns using many types of methods. There's nothing special going on here; we're just really good at it (when we're intact and functioning normally.)
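One of the "many types of methods" computers can use to recognize patterns is a simple nearest-template match by Hamming distance. A minimal sketch follows; the templates and names are invented purely for illustration.

```python
# Minimal pattern recognizer: classify a noisy bit-string by finding
# the stored template with the smallest Hamming distance.

TEMPLATES = {
    "stripe": "10101010",
    "block":  "11110000",
    "solid":  "11111111",
}

def hamming(a, b):
    # Number of positions where the two strings differ.
    return sum(x != y for x, y in zip(a, b))

def recognize(pattern):
    return min(TEMPLATES, key=lambda name: hamming(TEMPLATES[name], pattern))

print(recognize("10101110"))  # stripe -- one bit away from 10101010
```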

          Multiplying those numbers, we get that 10**16 calculations/secon

      • Re: (Score:3, Insightful)

        by E++99 ( 880734 )

        Life will almost certainly arise from Ai (Ai=artificial intelligence), but it won't be "nano" anything. My feeling as an Ai researcher is that we're already well past the computing threshold for Ai/AL (AL=artificial life.) Ai will always equal AL, though the reverse is not implied. A typical desktop today seems to me to have more than enough power to "come to life." What we are missing is the algorithm, no more.

        So you believe there is some magical algorithm which, when implemented, is self-aware? The "tur

        • by fyngyrz ( 762201 ) * on Sunday December 17, 2006 @04:55PM (#17280080) Homepage Journal
          So you believe there is some magical algorithm which, when implemented, is self-aware

          Other than that loaded word, "magic", absolutely. You represent a case of at least one. I represent another. I simply extend that idea to a different architecture, because I am of the opinion that the architectures have significant equivalence, computationally speaking.

          What I don't believe in is some "magic" situation that will not give up its operational methods and modalities in response to a concerted effort to understand it, when the entire problem resides right here in our "back yard", as the human brain certainly does. Postulating that the brain is a magic box with functionality that cannot be replicated is stepping out on a limb that no other comparable intellectual effort we have ever undertaken can justify. The only problems we've been consistently unable to solve are of a type where the information is unavailable to us (eg, the big bang, or whatever happened, or didn't happen.) Even then we do pretty well. But the brain isn't like that. It is right here. In literally billions of instances. We'll figure it out. I have very high confidence this will happen, and that it isn't all that far out in the future.

          The "turing test" obfuscates the issue. It is not intelligent if it is not self-aware.

          I have no reason to assume that an Ai would not be self-aware. More to the point, neither do you.

          • Re: (Score:3, Interesting)

            by stinerman ( 812158 )
            So it goes without saying that you believe in strong AI [wikipedia.org]. This supposes that the only difference between a sufficiently advanced "computer" and a human is their hardware. In fact, one would consider a human to simply be a computer with the mind being the programs. One could even reason that artificial intelligences could love, feel pain, etc. This, to me, opens up a very large ethical can of worms. Do AIs now have the right to vote? Can they own property? What are the religious implications? What are
            • by fyngyrz ( 762201 ) * on Sunday December 17, 2006 @06:38PM (#17280900) Homepage Journal
              So it goes without saying that you believe in strong AI.

              Yes. Humans already show this kind of intelligence can exist; I have no trouble generalizing to other forms from there. Artificial is not a distinction that concerns me in the sense of creating a barrier to possibility.

              This, to me, opens up a very large ethical can of worms.

              Agreed. But like most things in the world, "stuff" often happens without any consideration for the cans that get opened. We just have to deal with the issues. Not that I think we'll do very well. There may be attempts to legislate progress out of the loop (as we see with stem cells and cloning, for instance) but I don't see that as being effective, long term. Politicians don't have the control they think they do unless the resources required are so large that they cannot be hidden. Ai is the antithesis of such a project; you can do it at home, disconnected from the net, you could succeed and no one need know until your Ai has been duplicated and distributed a million or more times. Like DVD Jon's code, any attempts to "control" are doomed to failure before they begin.

              Do AIs now have the right to vote? Can they own property? What are the religious implications? What are the implications for free will?

              In order: They should, they should be able to, religion is mythology/100% bunkum and there are no actual implications whatsoever other than perhaps we're a step closer to getting over it, and free will in the philosophical sense is not a question that has any practical application, so why bother and for that matter, how many people really care? Will Ai's care? Good question. I hope to be able to ask some day. :)

              I would have a very hard time dealing with the fact that the only difference between me and a computer is complexity.

              Reality doesn't care if you have a hard time. Things are what they are. No more, no less. We either intelligently play the cards reality deals us, or we buy into illusion / bad metaphor and generate canned responses via someone else's playbook. Each of us has to make that call.

              My emotions, desires, and dreams are effectively meaningless in that they are manufactured by inborn programming rather than some sort of deeper force.

              Well, mine aren't; they're meaningful because I deem them so. Likewise, the emotions, dreams and desires of those I care about are meaningful to me for the same reason; I elect to engage them and enjoy that process. All else, to me, is utterly meaningless navel-gazing.

              I would be interested to hear what you have found in your research as well as your unconfirmed beliefs, as well as any moral, ethical, or religious dilemmas you see on the horizon.

              My research is on high performance associative memory. My results have been very pragmatic; no surprises at all for me; things work as I pretty much always thought they worked, and though they may also work in other ways, the path I'm on has been quite productive.

              I see no religious problems other than religion itself, which I do see as a huge problem, basically the result of fear, gullibility, and ignorance in various combinations. As far as moral and ethical issues go, we've been really poor at dealing with them among humans; there is no indication we will be any better if or when we introduce (or find, or are found by) other forms of life. Things will get more complicated, more divisive, and we'll make a further muddle of it. In my opinion. As to my "unconfirmed beliefs", that'd be 99% of what I think about in every domain. I've a confidence-based world view, not a conviction based one. So you'll have to be more specific. :)

              • As to my "unconfirmed beliefs", that'd be 99% of what I think about in every domain. I've a confidence-based world view, not a conviction based one. So you'll have to be more specific. :)

                Specifically, what is your theory of consciousness? How did consciousness develop over time and why do we have it and other living (and non-living) things don't? Is consciousness metaphysical in nature or is it simply about having the right algorithm? If it is the latter, then doesn't that imply that we are no better tha

                • Re: (Score:3, Interesting)

                  by fyngyrz ( 762201 ) *

                  Specifically, what is your theory of consciousness?

                  I don't have a formal theory. My opinion is that it is an emergent property with considerable variation that we like to put under one comfortable name, but one that really doesn't fit as well as we would like it to. In other words, no, animals and humans were not all created equal in any sense. :)

                  How did consciousness develop over time

                  Slowly, I suspect. Other than that, I have no idea.

                  …and why do we have it and other living (and non-living) things don't…

              • by dpilot ( 134227 )
                >I see no religious problems other than religion itself, which I do see as a huge problem, basically the result of fear, gullibility, and
                >ignorance in various combinations. As far as moral and ethical issues go, we've been really poor at dealing with them among humans; there
                >is no indication we will be any better if or when we introduce (or find, or are found by) other forms of life. Things will get more
                >complicated, more divisive, and we'll make a further muddle of it. In my opinion. As to my "
                • by fyngyrz ( 762201 ) *

                  Are we then obligated to free our computer running AI? Conversely, is not freeing it slavery?

                  That would depend on who you ask. For instance, the US constitution, via the 13th amendment, specifically legalizes slavery and involuntary servitude for the lowest class of individuals in debt to society. I'm of the opinion that any intelligent, informed individual should have the option to sign themselves into slavery for reasons they find sufficient, and typically, that means reward -- monetary or otherwise

          • by E++99 ( 880734 )

            So you believe there is some magical algorithm which, when implemented, is self-aware

            Other than that loaded word, "magic", absolutely. You represent a case of at least one. I represent another. I simply extend that idea to a different architecture, because I am of the opinion that the architectures have significant equivalence, computationally speaking.

            What I don't believe in is some "magic" situation that will not give up its operational methods and modalities in response to a concerted effort to understa

            • by fyngyrz ( 762201 ) *

              All this presupposes that consciousness is an effect of a physical device, i.e. the brain. There is no evidence to suggest this.

              On the contrary, everything we know about emergent systems suggests this. Furthermore, the interaction between consciousness and any brain manipulation you care to consider (drugs... injury... surgery... sleep... sensory input... destruction) further reinforces the idea that the brain is in fact not only the seat of animal consciousness, but the foundation, home and require

        • So you believe there is some magical algorithm which, when implemented, is self-aware? The "turing test" obfuscates the issue. It is not intelligent if it is not self-aware. Even if it can give the appearance of responding intelligently.

          Such an algorithm exists, because there are intelligent animals (including ourselves) we can observe.

          People are always giving the appearance of responding intelligently without actually being very intelligent. People in general are not that intelligent. We work with a v
      • by sbben ( 983577 )

        Intelligence isn't about speed

        That depends on whether you consider the Turing test a measure of intelligence (it's certainly not perfect, but it's been around for quite a while, so I thought I'd throw it in). If I asked two black boxes to reflect on modern literature and I got a response out of one of them in 10 seconds, and the other in an hour, I might conclude there is an intelligent agent in one, and a computer at the heart of the other.

        Then again, if Moore's Law remains true for only.....

        • by fyngyrz ( 762201 ) *

          If I asked two black boxes to reflect on modern literature and I got a response out of one of them in 10 seconds, and the other in an hour, I might conclude there is an intelligent agent in one, and a computer at the heart of the other.

          Which one? The whole point of the Turing test is to try to fool you. Humans are crafty. Maybe the human waits until the computer responds, then responds later. Every time. Then where are you?

          Anyway, initial speed is irrelevant. If we can craft something that answer

      • I think you're confusing the concept of "life" with the concept of "consciousness." Those two things are not equivalent. Computers are already arguably smarter than the common cold, but they are not "alive." The cold virus is alive.

        I think nanites are closer to achieving this than anyone constructing AI simply because the two goals are different. The goal of nanites is closer (or the same as) achieving what we would call life. The goal of AI is (as the name suggests) intelligence.

        There are some things
        • by fyngyrz ( 762201 ) *

          I think you're confusing the concept of "life" with the concept of "consciousness." Those two things are not equivalent. Computers are already arguably smarter than the common cold, but they are not "alive." The cold virus is alive.

          Um. Well, it's a discussion we could have. From where I stand: A virus is incomplete; it can't reproduce or propagate without a host, it can't think, has no consciousness, and does not generally have the ability to choose or follow a path. If asked, I'd have to say it wasn

          • Personally, I think we traditionally use "reproduction" as evidence of life because all the life we know needs to reproduce or else it will not continue its existence. In the case of nanites, reproduction may not be necessary for their survival.

            I was thinking about saying "bacteria" instead of "virus," but a virus is simpler and I do believe it's alive. Its usage of a host to reproduce seems to me the same thing as us eating.

            Would you say that a blood cell is alive? It cannot survive without a host. H
            • by fyngyrz ( 762201 ) *

              Would you say that a blood cell is alive? It cannot survive without a host. How about a neuron? It's not intelligent (on its own), doesn't move, or reproduce.

              I think I'd venture that a blood cell or a neuron is a living part of a living system, therefore alive. As opposed to a dead cell -- hair, nail, enamel -- or an embedded item, like the stones in a gizzard.

              I would agree that intelligence can be a form of life too, as an entity. But where do you draw that line? I don't think AI even gets close

    • by Hartree ( 191324 )
      Sounds like someone at the lab has a hot date lined up......

      And he's named Al Goldstein?

      As in Al Goldstein [wikipedia.org]?

      Must be one heck of a date...

  • Long article is LOOOONG

    I think this guy may be taking himself a little too seriously:

    "So why listen to the voice of one who is not Ishmael, not Cassandra, not even Ralph Nader? Because I can tell you something that no one else can. I can tell you the exact moment when Homo sapiens will cease to exist. And I can tell you how the end will come. I can show you the exact design of the device that will bring us down. I can reveal the blueprint, provide the precise technical specifications."
    • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Sunday December 17, 2006 @12:13PM (#17277936)
      Check out his "NOTES AND REFERENCES". :)

      He's referencing his own work (2 of the 5 referenced), and the dates are in 2006 (with a Wired article from 2005).

      This seems more like an attempt to get people to buy his newly published book than anything else. And as evidenced in your clip, there's a bit of megalomania there.
      • Re: (Score:1, Insightful)

        by Anonymous Coward
        It's not unusual to cite one's own work. Especially if you're saying, this is what I think, which is based on what I once thought back in [2].
      • You're not a relative of Sarah Conner... from the future, are you? You really freaked me out with the "articles are from 2006" comment - I thought I'd been applying the wrong year to everything I sign, and now it's almost over!
    • Re: (Score:2, Insightful)

      by sgt_doom ( 655561 )
      Gosh, it is soooo neat to be all knowing...

      I hate to rain on this clown's parade, him being an official "genius" and all - but he's rather obviously not current - as the predictable trend is the death of democracy (already happened, of course) and the ascendancy of the global corporate dictatorship (this would be China - leading the planet to the One World Corporation), so all such all-knowing predictions go by the wayside....

    • His statements of Cassandraesque knowledge aside, it seems as if he's simply reinvented and added more details to a concept found in science fiction: the human-made or human-loosed wee tiny thing against which our own chemistry can't compete.

      As examples:
      Peter Watts, Starfish [rifters.com], 1999. He posits the existence of a type of life that would out-compete anything in our 3.5-billion-year-old biosphere. As a character says:

      "Two prototypes," Rowan continued. "Three, four billion years ago. Two competing models. One o

  • by FlyByPC ( 841016 )
    I for one welcome our new nanobot overlords. (Heck, maybe they'll be able to give me some pointers on soldering these SOT-23 circuits...
  • I read this book in middle school. Isn't he supposed to go searching for intelligent life inside of cellular mitochondria?

    I can't believe I managed to pull that reference out of my butt. Go me.
  • Venture capital solicitation: just change the buzzwords.

    Yeah, I know superconductors have found a few uses in a few niche fields, but I remember very clearly how we were told that superconductivity would change all our lives... and that was almost 20 years ago.

  • Hmm, (Microsoft bullshit protocol)? Perhaps use &nbsp; (No bullshit protocol) instead next time.
  • Ever notice how all these groups decide to look for alien life somewhere on Earth?
    I think the movie Alien fucked up any chance we had of ever walking on an alien world.
    • The scary thing is that the creatures in Alien are no more fucked up than some of the things we find on earth.
  • Overblown (Score:1, Insightful)

    by Anonymous Coward
    We broke the carbon barrier back when we discovered fermentation. We changed what it means to be human when we discovered agriculture.
  • Engines of Creation (Score:3, Informative)

    by Tx ( 96709 ) on Sunday December 17, 2006 @12:13PM (#17277942) Journal
    The author mentions K. Eric Drexler and his book Engines of Creation [amazon.com], but I'm wondering if he actually read it. TFA says "Folks like Ray Kurzweil, Bill Joy and Eric Drexler have raised some alarms, but they are too dazzled by the complexity and power of human cybersystems, devices and networks to see it coming." Well, I took Engines of Creation to be one great big warning almost from start to finish about the same kind of thing this article talks about, so what am I missing? He's invented some new words for stuff, but other than that...
  • by ArcherB ( 796902 ) *
    I wonder if this new nano-population will be debating ID on nano-dot in a billion years.

  • You've got a little bionano stuck in your what now?
  • Yes it's an older article, but it's a fairly quiet sunday today.

    Darth Vader: Do they have a code clearance?

    Admiral Piett: It's an older code, sir, but it checks out. I was about to clear them.
    • I remember a little more to that scene.

      Luke: I'm endangering the mission, I shouldn't have come

      Realizing that Vader had sensed him and purposely let them through. Obviously the editor has .. other motives. Muahaha
  • ... vacations are canceled until morale improves.
  • nano-FUD (Score:3, Interesting)

    by Anonymous Coward on Sunday December 17, 2006 @12:52PM (#17278194)
    Alan H. Goldstein is a crackpot who apparently lacks basic understanding of molecular biology and genetics. For non-biologists out there - this guy is basically saying something like "ALL LINUX USERS ARE DOOMED! A few dropped packets or corrupted bits and Windows viruses and spyware could mutate to infect your computers!"

    A nanobiotechnology device that is smart enough to circulate through the body hunting viruses or cancer cells is, by definition, smart enough to exchange information with that human body. This means, under the right conditions, the "device" could evolve beyond its original function.
    Newsflash... viruses are self-replicating "devices" capable of exchanging information with and modifying host organisms. Under the right conditions (e.g. over the course of the last few billion years), they have been evolving under selective pressure to propagate more readily. Stop trying to portray "nanobots" as bogeymen when pathogens already exist with precisely the same FUD-inducing attributes like "self-replication" and "evolvability".

    "One solution: Alter synthetic genetic codes such that they are incompatible with natural ones because there is a mismatch in the gene's coding for amino acids." In other words, we will be protected because these organisms will have genomes never before seen on Earth! Perhaps, but that could also be a description of the ultimate biohazard. If the Ebola virus is considered a Biosafety Level 4 threat, what level would categorize a pathogenic organism made completely from synthetic genetic codes?
    Ultimate biohazard? Nice strawman, dude. Using orthogonal genetic codes in synthetic organisms is exactly the right way to make them safe and restrict them to laboratories. These synthetic genomes will be innocuous for the same reason that bacteriophage are harmless to humans - the manifestation of the genetic codes necessary to make them functional will be fundamentally incompatible with natural systems.
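    The orthogonal-code idea is easy to see with a toy script. The STANDARD entries below are real codon assignments from the standard genetic code; the ORTHOGONAL table is purely hypothetical, invented here for illustration - real proposals reassign codons far more systematically:

    ```python
    # Toy illustration: the same RNA read under a fragment of the standard
    # genetic code versus a hypothetical "orthogonal" code with reassigned
    # codons. The point: identical genetic material yields gibberish (or
    # nothing) when read by the wrong code, so the two systems can't mix.

    STANDARD = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}
    # Hypothetical remapping for illustration only:
    ORTHOGONAL = {"AUG": "Gly", "UUU": "STOP", "GGC": "Met", "UAA": "Phe"}

    def translate(rna, code):
        """Translate an RNA string codon-by-codon until STOP or an unknown codon."""
        protein = []
        for i in range(0, len(rna) - 2, 3):
            aa = code.get(rna[i:i + 3])
            if aa is None or aa == "STOP":
                break
            protein.append(aa)
        return "-".join(protein)

    rna = "AUGUUUGGCUAA"
    print(translate(rna, STANDARD))    # Met-Phe-Gly
    print(translate(rna, ORTHOGONAL))  # Gly  (UUU now reads as STOP)
    ```

    Same message, two incompatible readings - which is the whole containment argument in miniature.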
    • Stop trying to portray "nanobots" as bogeymen when pathogens already exist with precisely the same FUD-inducing attributes like "self-replication" and "evolvability".

      Pathogens such as the avian flu or Ebola virus are worrisome because of their "self-replication" and "evolvability". Certainly creating "nanobots" with these characteristics should give us pause.

      These synthetic genomes will be innocuous for the same reason that bacteriophage are harmless to humans

      I believe that the author's point is tha

  • by Cylix ( 55374 )
    The moment I caught the word "progenitor" in the article.... I couldn't stop thinking about Star Trek.

    Oddly, it would seem it was a better waste of my time.

  • by Sciros ( 986030 ) on Sunday December 17, 2006 @01:07PM (#17278292) Journal
    So, assuming I actually understood the article, it means that we will be able to create nanobiobots literally capable of mutation. I'm not sure how self-replication will work, but if it somehow does, we could potentially create a very deadly little bugger. Never mind that wherever there's medical research in something cool, there's *always* military research to go along with it. So making nice nanobiobots will go hand-in-hand with making mean ones. Wow, I want to write a Superman comic book right now.
  • by march ( 215947 ) * on Sunday December 17, 2006 @02:09PM (#17278752) Homepage
    Ray Kurzweil has written YAB (yet another book) on a similar subject. It is quite an interesting read about how evolution in the universe is an exponential curve, and that we are just on the cusp, or knee, of it going up. In there, he has pages and pages of supporting data relating to nanotech and bionanotech. It is a good read.

    We are very close to having strong AI, nanotech in our bodies, and greatly enhanced information sharing / intelligence - all the research is well under way with positive results thus far.

    http://tinyurl.com/ygmtxw [tinyurl.com] (amazon.com)
    • exponential curve, and that we are just on the cusp, or knee, of it going up.

      I beg to disagree. An exponential curve has no such thing. Only in relation to something else can you point to a definite limit; for example, the rate of technological evolution could surpass the rate of human learning ability at some point.

        I beg to disagree. An exponential curve has no such thing. Only in relation to something else can you point to a definite limit; for example, the rate of technological evolution could surpass the rate of human learning ability at some point.

        I think the proper reference point is the human lifespan. Sure, medicine and technology have progressed in the past and extended the lives of people who otherwise would have died young, but what he is predicting is enough medical and technological improvements to endlessly ext
  • by Dark_MadMax666 ( 907288 ) on Sunday December 17, 2006 @02:16PM (#17278804)
    He rather pompously dismisses very smart people:

    They don't realize what evolution is. They have come to the problem from artificial intelligence, or systems analysis, or mathematics, or astronomy, or aerospace engineering. Folks like Ray Kurzweil, Bill Joy and Eric Drexler have raised some alarms, but they are too dazzled by the complexity and power of human cybersystems, devices and networks to see it coming. They think the power of our tools lies in their ever-increasing complexity -- but they are wrong.


    The biotech folks just don't get it either. People like Craig Venter and Leroy Hood are too enthralled with the possibilities inherent in engineering biology to get it


    He fails to understand the very basics of evolutionary biology himself: little primitive nano-organisms (viruses, bacteria, single-celled organisms), even artificially created ones, are not as grave a danger as he paints precisely because they are primitive. Those primitive forms of life dominated Earth, and more complex organisms took over because there are inherent evolutionary benefits to being complex; if there weren't, more complex forms of life would never have evolved.

    Complex lifeforms came to dominate all primitive lifeforms over the course of millions of years, due to their inherent capability to adapt to more varied environments and conditions, thanks to a multitude of specialized mechanisms and systems that give them better access to resources, let them better combat external threats, etc. And now this guy says that just because we make one artificial new life form, it will dominate instantly? Having to fight all the existing lifeforms, which have become better adapted to the current environment over the course of hundreds of millions of years?

    That a simple cellular automaton (forgive me the pun ;) ) driven by chemistry can overcome the combined intelligence of super-beings/creators (from this lifeform's point of view) such as the scientists who created it? Everybody remembers from school that bacteria/viruses/whatever primitive organisms could potentially cover the whole Earth in thick layers in record time; this guy seems to have missed the part about why that never happens - the conditions around them are never favorable for it, and the more primitive an organism is, the fewer tools it has to change its conditions to its own liking.

    This was true for "blind" evolution and will be even more true for human-governed evolution, as we have an advantage over all other lifeforms: we can change its course the way we want with our tools and technologies, and we are becoming ever more proficient at this, not less so.

    He dismisses a wave of prominent futurologists for their focus on complex human-created systems, failing to understand that this is the most powerful engine on Earth. Not the microorganisms, not the insects, nor reptiles, nor mammals, nor anything else out of the living realm has power on Earth comparable to what our civilization has. Human civilization, in its sheer ability to manipulate the environment, far exceeds anything "natural evolution" has created so far. The ultimate power of life is the power to change the environment around itself, the power to manipulate matter and energy to its own liking.

    Primitive life forms can only manipulate what exists around them: at the level of molecular biology everything is simple chemistry, dependent on the presence of particular elements; the higher you go, the more you see organisms able to create those necessary elements out of other elements, and when you go even higher you start seeing more advanced versions of the same. Till you come to this bright flash of light (on the evolutionary scale of time) marking the moment when the evolution of technology started, driven by the first intelligence capable of doing so (hum
    • little primitive nano-organisms (viruses, bacteria, single-celled organisms), even artificially created ones, are not as grave a danger as he paints precisely because they are primitive.

      They are enough of a threat to wipe out between 50 and 100 million people, 2.5 - 5% of the human population, in one year [wikipedia.org]. That is quite a sufficient danger that we ought to have some concern about "primitive" organisms.

      Complex lifeforms came to dominate all primitive lifeforms over the course of millions of years, due to the

      • They are enough of a threat to wipe out between 50 and 100 million people, 2.5 - 5% of the human population, in one year. That is quite a sufficient danger that we ought to have some concern about "primitive" organisms.

        And that is precisely absolutely NOTHING. Complex organisms lived with such "threats" for hundreds of millions of years and were not dented in any significant way. Au contraire, they were thriving and expanding their sphere of domination and influence at each turn of the evolutionary spi

        • And that is precisely absolutely NOTHING.

          The death of 50 and 100 million people is "nothing"?

          Please, for your safety and the safety of others, seek mental health care immediately. You have clearly lost touch with consensual reality. Best of luck to you, I hope you can find effective treatment.

          • Well, it is absolutely nothing in the metric used. Do you report equivalent fluctuations of bacteria populations as scientific news? Is such a population decrease a danger to the survival of the species?
  • by Anonymous Coward
    I would hate for their electrons to be different from ours...

    Maybe I'm just ignorant, but how many different 'types' of electrons are there?

  • Drivel (Score:4, Interesting)

    by Master of Transhuman ( 597628 ) on Sunday December 17, 2006 @05:23PM (#17280314) Homepage

    The fact that he supports Bill Joy and Richard Smalley pretty well defines this idiot.

    Smalley is a wannabe who came late to nanotech and decided to use his higher conventional scientific status to try to take over the field from Drexler. He failed, despite this guy's opinion.

    Joy is simply incapable of rational reasoning.

    And this guy's paranoid fantasies about nanobots "spontaneously impregnating" each other with technology to become an artificial life form - and one that dooms all other life in addition, despite millions of years of evolutionary adaptation on this planet to just about every conceivable hazard - is just drivel.

    Sure, nanotech could be designed to destroy all life - but it would have to be DESIGNED to do so. The odds of it happening by chance are so low as to be not worth considering. Sure, badly designed nanotech - and there WILL be badly designed nanotech, we CAN count on that - COULD cause massive medical issues on a par with a deadly virus such as the flu virus in the early 20th century or something like Ebola. So what? You take the risk and you try to prevent it. Nanotech offers many technical options for inhibiting this sort of thing. If the IT industry will get off its ass and develop some AI-based engineering programs that check for stupid engineering mistakes, the entire engineering industry would be better off as well.

    What IS going to happen is that Transhumanists will use nanotech to transform themselves into a superior species. And that's where the threat is going to come from, as monkey-ass humans follow their usual primate instincts to try to suppress the Transhumans - and unlike the Star Trek shows and Terminator movies, the humans will get their asses kicked trying - at least until the Transhumans have improved enough that they can just ignore the chimps and go about their business anyway.
    • Re: (Score:2, Insightful)

      by newt0311 ( 973957 )

      Sure, nanotech could be designed to destroy all life - but it would have to be DESIGNED to do so. The odds of it happening by chance are so low as to be not worth considering.

      The chances of nanotech being designed to be deadly are actually very, very high. Think of it as a very advanced targeted virus. This probably won't be possible in the near decade or two, but designing a nanobot which kills and replicates based on certain characteristics is far from impossible. Say... you don't like that certain terrorist or president. Well, we take this nanobot here, key in the DNA and off it goes to do its job. Its use as a weapon is very attractive and humans have a history of getting a weap

      • by zenhkim ( 962487 )
        > The chances of nanotech being designed to be deadly are actually very, very high. Think of it as a very advanced targeted virus. This probably won't be possible in the near decade or two, but designing a nanobot which kills and replicates based on certain characteristics is far from impossible. Say... you don't like that certain terrorist or president. Well, we take this nanobot here, key in the DNA and off it goes to do its job. Its use as a weapon is very attractive and humans have a history of getting a

      • I'm perfectly well aware that nanotech can be DESIGNED to be deadly. That was my point. The point of TFA, on the other hand, was that nanotech could be ACCIDENTALLY and EASILY converted SPONTANEOUSLY to being deadly to life - and not just some life, ALL life. That is far more unlikely, which the author does not acknowledge.

    • >What IS going to happen is that Transhumanists will use
      >nanotech to transform themselves into a superior species

      "Superior" at what? Surviving? Is that really all that
      matters to you? Rock formations are superior in surviving ...

      >And that's where the threat is going to come from as monkey-ass humans
      >follow their usual primate instincts to try to suppress the Transhumans
      >- and unlike the Star Trek shows and Terminator movies, the humans will
      >get their asses kicked trying - at least until
      • "Since I'd rather be dead than living in the hell you describe, I'm not that worried about it."

        Trust me, we're not worried about you being dead rather than living in "our hell" either.

        Works for us.
        • Sounds more inviting all the time, with attitudes like that ... I'm sure you're baffled why more people don't find it appealing.

          • You think we care if anybody finds it "appealing?" One is either a Transhumanist or one isn't. Do you care if the roaches under your sink find your Roach Motel "appealing"?

  • Very Poor Science (Score:4, Informative)

    by YetAnotherBob ( 988800 ) on Sunday December 17, 2006 @06:20PM (#17280774)
    So this guy has a PhD in Biochemistry? Where does he get his facts?

    He states that the human body has a potential of thousands of volts. In reality it seldom reaches 1 volt. (A potential of 1/2 volt over a distance of 50 angstroms is not a potential of millions of volts; it may be high in terms of volts per meter, but over any larger distance it just disappears. That's why we can't power pacemakers or laptop computers off your neural energy. Zero power is just zero power, no matter how clever you think your argument is.) He states that there are strong similarities between carbon and silicon chemistry. Yes, but there are also energy differences that are profound. In reality, there are very few living creatures that can use silicon. Most of those that can are bacteria, and they use it only to create a shell or frame.
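    For anyone who wants to check the arithmetic behind that parenthetical, here's a quick back-of-the-envelope calculation (the 0.5 V membrane-potential figure is the rough order of magnitude used above, not a precise measurement):

    ```python
    # Sanity check: half a volt across 50 angstroms is an enormous *field*,
    # but the *potential* is still only half a volt - no "thousands of volts".
    V = 0.5          # volts -- rough scale of a cell membrane potential
    d = 50e-10       # 50 angstroms expressed in meters (5 nm)
    E = V / d        # field strength in volts per meter
    print(f"E = {E:.1e} V/m")  # prints E = 1.0e+08 V/m
    ```

    A hundred million volts per meter sounds dramatic, but it's the short distance doing all the work; integrate over anything macroscopic and you're back to a fraction of a volt.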

    A little reality here, there are good reasons to believe that the first engineered bio machines will not be too greatly different than the ones we've already been living with. We call them bacteria. Control is a problem. It has been a problem for a very long time.

    The article is just an attempt to scare people with little knowledge of the underlying science. The author appears to be ignorant of basic physics and chemistry. His biology may or may not be suspect; I don't know that area as well. If this is the best that the critics of nanobot research can do, then they are doomed to failure.

    Rank superstition and scary fiction are a very poor way to make technical policy.
  • Nanofabricated animats may be infinitesimally tiny, but their electrons will be exactly the same size as ours -- and their effect on human reality will be as immeasurable as the universe.

    Excuse me if I'm wrong, but I was somehow under the impression that all electrons are the same size :/

  • When I was a student we called that biochemistry, or biotechnology. Even in 1989 we already worked on the nano scale. We just didn't mention it because it was so obvious. Then the suits started throwing money at everything that was called 'nano', so now everybody who wants money for research calls whatever (s)he is doing nano.
  • Newsflash: there won't be any nano(ro)bots anytime soon (if ever). We are not even close.
    Furthermore, it has been argued that for technical and financial reasons building self-replicating machines doesn't make much sense (C. Phoenix, E. Drexler, Safe exponential manufacturing, Nanotechnology 15, 869-874 (2004))
  • Hi, I'm a reporter and editor who has specialized in nanotechnology for the past four years or so. For what it's worth, I wrote a response to I, Nanobot back in March. Here's the link: [blogspot.com]

    "OK, Alan Goldstein, I will not call you Ishmael. But somewhere along your road to Melville, you took a detour into speculative fiction, because that is clearly the genre of your Salon article, I, Nanobot." More here [blogspot.com]

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell
