jono199 - no one has ever used any of asimovs laws on a real robot. Ever.
Bull-hocky. Pure bull-hocky. Not only are Isaac Asimov's laws a fundamental part of numerous robotics development programs involving robot-human interaction, but there are robots in public right now that use Asimov-style logic sets. Prime example: ever been in a big hospital in the last four years? Those little autonomous courier robots do just that. Hell, I programmed three of them myself for the local hospital when I was in college.
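For the skeptics: here's a rough sketch of the kind of Asimov-style priority ordering a courier robot could use. Every name, action, and predicate below is invented for illustration; this isn't quoting any real hospital system, just showing that "Asimov's logic set" is an ordinary prioritized rule check in code.

```python
# Hypothetical sketch: Asimov-style prioritized rules for a courier robot.
# All action names and predicates are made up for illustration.

STOP_AND_WAIT = "stop_and_wait"

def harms_human(action):
    # First Law check: would this action injure or obstruct a person?
    return action.get("blocks_person", False)

def violates_order(action):
    # Second Law check: does this action disobey a standing order?
    return not action.get("ordered", True)

def endangers_self(action):
    # Third Law check: would this action damage the robot itself?
    return action.get("self_damage", False)

def choose_action(candidates):
    """Return the first candidate passing all three laws, checked in
    priority order: First Law outranks Second, Second outranks Third."""
    for action in candidates:
        if harms_human(action):
            continue
        if violates_order(action):
            continue
        if endangers_self(action):
            continue
        return action["name"]
    return STOP_AND_WAIT  # safe default when no lawful action exists

candidates = [
    {"name": "push_through_crowd", "blocks_person": True},  # fails First Law
    {"name": "take_service_corridor"},                      # passes all laws
]
print(choose_action(candidates))  # -> take_service_corridor
```

The point isn't sophistication; it's that the laws translate directly into an ordered veto list, which is exactly how simple service robots apply them.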
Oh, and by the way... the 'a' in Asimov is capitalized.
jono199 - We can't hold back the science of something because of terrorists. Why should cowards that hide in the shadows and murder the defenseless have any power over free peoples?
He's not saying terrorists are physically holding back the advancement of AI. He's talking about the very real situation of other social objectives diverting the focus of technological development. In other words, if a war needs scientists making better bombs, they aren't going to be focused on making better mousetraps. If the war on terrorism reduces the demand for better AI (which, by the way, it hasn't), then there will be fewer people working on AI systems, slowing development down.
As for hard-AI theorists vs. soft-AI theorists, there are two key problems with this argument.
1. How do you define "intelligent"? We can't even identify intelligence to begin with, let alone quantify it.
2. Just because something is quote-unquote "intelligent" doesn't mean it's "life," and to that end, we haven't even addressed whether that matters or is a factor.
In the most basic sense of "intelligence," your coffee pot is intelligent. In the more advanced senses of "intelligence" (which we haven't even fully explored), we don't even fully classify ourselves as "intelligent beings."
Intelligence is, for all intents and purposes, right now, a matter of characteristic and habitual perspective.
As for the historical perspective, guess what, folks: we're not on the edge of AI or "just seeing" AI. AI has been studied for decades. Ideas anticipating neural networks go back centuries, they were first formally modeled in the 1940s, and they were first used in broad-scale real-world settings in the late '70s and '80s. Furthermore, neural networks interact with us every day. They're used heavily in finance, in the medical and scientific fields, and, shockingly, even by the governments of several countries (especially including the USA).
As for Roshi229...
Are we ready? Well, that depends on the perspective. First off, if you're asking "are we ready for the concept of AI?" guess what, it's already here.
Are we ready for Asimov's written ideas of AI? Sure! The fact that we can't do it yet is beside the point. In fact, work is already being done, right now, to put artificially intelligent robots into homes. The implications of AI robots and interfaces abound... the only holdback is the ability, and the ability increases every day.
Are we ready for a day when AI becomes counted as life and citizen? Well, that's up for grabs and goes pretty deep into theological, philosophical, and theoretical areas. Personally, if it becomes possible to attain such a level physically (by that I mean, technically, by definition), then I believe it is inevitable that it will happen. Do I think I'm ready to have a robot "person" in my society? Well... no. From a theological standpoint, regardless of whether we have the power, we shouldn't do it. From a philosophical standpoint, defining a machine's existence as "life" either redefines ourselves and life itself, or is just another association with a possible explanation of "life" with no real tangible sense. From a theoretical standpoint, since I can't even define my own "intelligence," how can I define something I create as "intelligent"?
In a last note for this post:
If the technological capability existed to make Robbie, and if we had the ability to truly classify something as "intelligent", would Robbie one day happen? Absolutely. It's inevitable.
Will Robbie kill his owners and plunge us all into the Matrix? Well, I can't answer that. Frankly, the only people I think have ever even come close to an answer are Arthur C. Clarke, Keith Laumer, and Isaac Asimov.