In the Cambrian period, there was an "explosion" of life. During that time, over 500 million years ago, pretty much all the major animal phyla evolved over just a few million years. I'm starting to wonder if we're not experiencing an explosion of robotic "life."
Consider the following:
- Thomas and Janet, the romantic robots
- BigDog, the robotic pack mule that can walk (rather comically) on icy surfaces
- Swarms of quadcopters acting cooperatively
- Various UAVs
- The Google Car
- Stair-climbing robots
- Rescue robots
- Robot snakes
- Pneumatic robot babies
- Robots to soothe the dying
- Even a walking pneumatic robot made of LEGO!
I'm also intrigued by the nature of these new robots. They remain quite task-specific. While it's hard to see how this aspect of robotics might be considered in any way "natural," I think it may well be. That is, these robots are "fit" to exist in the artificial contexts in which humans designed and built them. What do I mean by context? Well, consider any of the examples I give above and think of the very limited scope in which those robots could be called "useful" - that's their context.
In many ways, humans are not responsible for creating the contexts. Engineers design rescue robots that exist only to find survivors in disaster areas, but those engineers did not intentionally create the context of the disaster. Rather, those contexts arise from forces beyond the engineers' control, even if the engineers may have contributed somewhat to those forces. This seems entirely consistent with how evolution works: species evolve in response to environmental pressures that may be shaped in part by the species themselves, but that are nonetheless not under their intentional control.
You can think of humans, then, as the DNA of robots. We represent the templates that produce them; and when external forces drive us - be they technological or contextual - we change the ways that robots look and act, just as changes in DNA change how natural organisms look and act. When we come up with a design (a DNA sequence) for a robot that is highly successful, that design tends to stick, and even to propagate to other types of robots - much as genes propagate through a population, and much as the same functionality (like vision systems and eyes) can evolve independently multiple times.
Engineers will work on the first sensible idea they find. They won't say: Here's a great idea! Let's find more before deciding which to pursue! They will instead say: Here's a great idea! Let's build it and see what happens! And those ideas will be largely stimulated by their own experiences, the technologies with which they are familiar, the contexts they know well, and all the other robots that have been built to date. New robots share a lot with old robots, or at least with biological inspirations drawn from other natural sources (like the robotic snake example). This too is quite like how evolution works. New species are built by changing old ones; that is why we can trace species back in time genetically, by tracking the changes in their genes. Humans, as a result, are partly fish. In the same way, there are basics underlying every robot that could be said to be their shared genetic heritage, even if that shared heritage is only the knowledge passed down through the years from one robotics engineer to the next.
Please don't interpret this as my suggesting that there is some conscious intent behind evolution. That would be creationist bullshit. (Creationists and believers in so-called Intelligent Design are worthy only of your derision, contempt, and profanity.) Instead, I think the opposite is true: humans exert no conscious control over the development of robots; robots are a natural and "automatic" response by humans, given certain technologies, to specific contexts. Robots are, in this view, unavoidable developments.
So we have these multitudinous "species" of robots, each quite well-adapted to very specific environments, having developed through an unconscious, uncontrolled process mediated by humans who also exert no intentional control over the process…. Sounds a lot like evolution, really.
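To make that analogy a bit more concrete, here is a minimal toy sketch, in Python, of the kind of process I mean: descent with modification plus selection by a fixed context. Nothing in it corresponds to any real robot; the bit-string "designs", the "context" they're measured against, and the mutation rate are all invented purely for illustration. The point is only that good fits accumulate without anyone planning the outcome.

```python
import random

random.seed(0)

# Toy "context": a fixed requirement that designs are measured against.
CONTEXT = [random.randint(0, 1) for _ in range(20)]

def fitness(design):
    """How well a design fits its context (higher is better)."""
    return sum(d == c for d, c in zip(design, CONTEXT))

def tweak(design, rate=0.05):
    """Copy an existing design with small random changes: descent with modification."""
    return [1 - bit if random.random() < rate else bit for bit in design]

# Start from a single arbitrary ancestral design.
best = [random.randint(0, 1) for _ in range(20)]

for generation in range(500):
    # Each "generation", riff on the best design known so far rather than
    # surveying all possible designs; keep the variant only if it fits at least as well.
    variant = tweak(best)
    if fitness(variant) >= fitness(best):
        best = variant

print(f"design fits context on {fitness(best)} of {len(CONTEXT)} traits")
```

Run it a few times and the "design" ends up closely matched to the "context" every time, even though no step in the loop knows or cares what the final design will look like.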
If this is so, then we're probably well on our way to someday creating a true artificial intelligence. Just as we evolved into conscious, intelligent creatures from "unconscious," unintelligent creatures without any intention or plan, so too may we someday (rather soon, I think) develop, quite by accident, a truly intelligent robot. We may not realize it at first, but I think it is quite inevitable. And we may be seeing the first truly massive advance just now, with all these weird new types of robots springing up, just like the Cambrian Explosion.
Of course, that first robotic intelligence is very unlikely to be like HAL or Skynet. It will probably be much stupider than we are; so stupid, in fact, that we might not even notice it, and might destroy it as we would any malfunctioning piece of automated equipment. But I suspect that one way or the other, much as natural life arose naturally, so too will artificial life arise artificially, whether we want it to or not.