A.I., Robotics, Neural Nets, etc.

Status
Not open for further replies.
Where did you read that?

In response to Roshi: you can always ask questions and give theoretical or hypothetical answers, but these are always off by a little and sometimes dead wrong. I simply meant that we can talk about this until the cows come home, but we will never know for sure. Given the inherent unpredictability of the subject, we cannot trust any answers we may receive; while they may come from respected sources, no one can predict the future. Also, if you look at academics in this field, their views are all over the place. Some say we are all doomed, some tell us not to worry about it, and some are sitting on the fence.
 
jono199 - no one has ever used any of asimovs laws on a real robot. Ever.
Bull-hocky. Pure bull-hocky. Not only are Isaac Asimov's laws a fundamental part of numerous robotic development programs with robot-human interaction projects, but there are robots in public right now that use Asimov's logic sets. Prime example: ever been in a big hospital in the last 4 years? Those little self-automated courier robots do just that. Hell, I programmed 3 of them myself when I was in college for the local hospital.
:mad:
Oh, and by the way...the 'a' in Asimov is capitalized.
jono199 - We can't hold back the science of something because of terrorists. Why should cowards that hide in the shadows and murder the defenseless have any power over free peoples?
He's not saying terrorists are physically holding back the advancement of AI. What he's talking about is the very real situation of other social objectives redirecting the focus of technological development. In other words, if a war needs scientists focused on making better bombs, they aren't going to be focused on making better mousetraps. If the war on terrorism creates a lack of demand for better AI (which, by the way, it hasn't), then there will be a lack of people working on AI systems, slowing development down.

As for hard-AI theorists vs. soft-AI theorists, there are two key problems with this argument.
1. How do you define "intelligent"? We can't even identify intelligence to begin with, let alone quantify it.
2. Just because something is quote-unquote "intelligent" doesn't mean it's "life," and to that end, we haven't even addressed whether or not that matters or is a factor.

In the basic sense of "intelligence," your coffee pot is intelligent. In the advanced senses of "intelligence" (which we haven't even fully explored), we ourselves don't even fully qualify as "intelligent beings."

Intelligence is, for all intents and purposes, right now, a matter of characteristic and habitual perspective.

As for the historical perspective, guess what, folks: we're not on the edge of AI or "just seeing" AI. AI has been studied for decades. Neural networks have been a conceptual design since the 1700s, were first formally studied in the 1920s, and were first used in broad-scale real-world settings in the late '70s and '80s. Furthermore, neural networks interact with us every day. They're used heavily in economic venues, medical and scientific fields, and, shockingly, even by the governments of several countries (especially the USA).

As for Roshi229...

Are we ready? Well, that depends on the perspective. First off, if you're asking "are we ready for the concept of AI?" guess what, it's already here.
Are we ready for Asimov's written ideas of AI? Sure! The fact that we can't do it yet is beside the point. In fact, work is already being done, right now, to implement artificially intelligent robots into homes. The implications of AI robots and interfaces abound...the only holdback is the ability, and the ability increases every day.
Are we ready for a day when AI becomes counted as life and citizen? Well, that's up for grabs and goes pretty deep into theological, philosophical, and theoretical areas. Personally, if it becomes possible to attain such a level physically (by that I mean, technically by definition), then I believe it is inevitable that it will happen. Do I think I'm ready to have a robot "person" in my society? Well...no. From a theological standpoint, regardless of power, we shouldn't do it. From a philosophical standpoint, defining a machine's instance as "life" can either be the definition of ourselves and life itself, or just another association to a possible explanation of "life" with no real tangible sense. From a theoretical standpoint, since I can't even define my own "intelligence," how can I define something I create as "intelligent"?

In a last note for this post:
If the technological capability existed to make Robbie, and if we had the ability to truly classify something as "intelligent", would Robbie one day happen? Absolutely. It's inevitable.
Will Robbie kill his owners and plunge us all into the Matrix? Well, I can't answer that. Frankly, the only people who I think have ever even come close to an answer to that, are Arthur C. Clarke, Keith Laumer, and Isaac Asimov.
 
Asimov's laws are, as I said, simple English phrases and do not constitute predicate logic, on which all computing works. If people choose to use hazard detection and collision fail-safes on hospital robots, that's fine, but it's not the same as a robot taking in and understanding Asimov's laws.

Asimov's laws are not code telling a robot not to run people down, since Asimov's laws assume that a robot has intelligence and can apply said laws in any given situation. It's almost like people having manners, for example.
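To make that distinction concrete, here is a minimal sketch of what a hospital courier robot's collision fail-safe might look like. Everything here (the function name, the threshold) is hypothetical and purely illustrative; the point is that such a fail-safe is a fixed rule over sensor values, not a machine applying a law it understands:

```python
# Hypothetical illustration: a hard-coded collision fail-safe.
# The robot never "understands" any law; it just compares numbers.

SAFE_DISTANCE_M = 1.5  # assumed safety threshold, not from any real robot


def failsafe_speed(obstacle_distance_m: float, requested_speed: float) -> float:
    """Clamp speed to zero when an obstacle is inside the safety envelope.

    This is plain predicate logic over a sensor reading: a fixed rule,
    not a robot applying "a robot may not injure a human being" to a
    novel situation.
    """
    if obstacle_distance_m < SAFE_DISTANCE_M:
        return 0.0          # hard stop: no interpretation involved
    return requested_speed  # otherwise pass the command through


# Note the check fires identically for a person, a cart, or a wall:
# it has no concept of "human" or "harm", only a distance comparison.
```

That gap, between a threshold test and a law that presumes judgment, is exactly the disagreement in this thread.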

As for neural nets being used in governments, etc., that's not really what I'm talking about; mostly they use stochastic methods which assume a fully defined problem, and I'm sure you would agree that's not like the human brain, which learns. The methods used for prediction of trends and so on involve limited problems where the system is given previous results, shown the answers, and taught how to predict, assuming a set of conditions remains in effect. I'm talking about a network capable of dealing with problems it has not encountered before and is not led through.
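The "given previous results and shown the answers" style of prediction being described can be sketched in a few lines. This is a deliberately simple stand-in (a plain least-squares line fit, not a neural net), with made-up data, just to show the shape of the approach: fit past input/answer pairs, then extrapolate, valid only while the old conditions remain in effect.

```python
# Sketch of supervised trend prediction: fit past (input, result)
# pairs with known answers, then extrapolate the fitted rule.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error on past data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x


# "Training" data: previous results where the answer was shown.
history_x = [1, 2, 3, 4]
history_y = [2.0, 4.0, 6.0, 8.0]
m, b = fit_line(history_x, history_y)


def predict(x):
    # Extrapolation assumes the same conditions remain in effect;
    # faced with a genuinely novel regime, the fitted rule has
    # nothing to fall back on -- which is the post's objection.
    return m * x + b
```

A system like this can only replay the pattern it was taught, which is precisely the contrast being drawn with a network that could handle problems it has never been led through.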

As of yet, as you say, people have not figured AI out and probably will not do so for quite a while. I don't think neural nets existed in the 1700s; what I think you're referring to are clockwork automatons, which performed a set of seemingly complex tasks using complex mechanics. But I'm not involved in any history courses. One thing I do know is that the first major piece of research on neural networks was the McCulloch and Pitts neuron model (MCP neuron) in 1943. They suggested that 'any computable function could be performed by some network of neurons' - Warren McCulloch.
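For reference, the 1943 McCulloch-Pitts neuron is simple enough to write out: binary inputs, a fixed threshold, and absolute inhibition (any active inhibitory input vetoes firing). A minimal sketch, with the function names my own:

```python
# A McCulloch-Pitts (1943) threshold neuron, sketched for illustration:
# binary inputs, a fixed threshold, and absolute inhibition.

def mcp_neuron(excitatory, inhibitory, threshold):
    """Fire (return 1) iff no inhibitory input is active and the
    count of active excitatory inputs meets the threshold."""
    if any(inhibitory):
        return 0  # any inhibitory signal vetoes firing outright
    return 1 if sum(excitatory) >= threshold else 0


# Basic logic gates fall out of the threshold choice, which is the
# sense in which "any computable function could be performed by some
# network of neurons":
AND = lambda a, b: mcp_neuron([a, b], [], threshold=2)
OR  = lambda a, b: mcp_neuron([a, b], [], threshold=1)
NOT = lambda a:    mcp_neuron([1], [a], threshold=1)
```

Note these neurons have fixed thresholds and no weights to adjust, so a McCulloch-Pitts network computes but does not learn, which supports the point about the gap between computation and brain-like learning.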
 
OK, for the terrorist comments, I think I was misunderstood on most fronts... yes, Shoobie, I would agree; that could parallel my intended point and is a valid one itself. However, the point I intended to make was the simple fact that we have enough controversy in our lives already, we have enough reason for terror threats and terrorist plots... and to be honest, we can't even handle what we have... why should we create robots with an intelligence to give these monsters another reason to say we have no morals, no God, and think we're God... why should we fuel the fire at this point... and the possibility of this creating a new rise in terrorist activity from the current threats and new domestic threats is great. (I'm talking about the release of AI in fully functional form... something like I, Robot.)

As for "are we ready"... Shoobie... got it on the nose! That's about as close an answer as I could give myself... honestly, no, I do not think that we are ready for anything (AI) beyond our current applied abilities... we are not ready to talk to reasoning robots... I'm having enough trouble trying to figure out how to talk to my cell phone carrier's automated 1-800 number... if we can't even get that to understand us... we don't need to have a robot around trying to clean our house and cook our dinner...

And as for the hypothetical answers, Jono... you had better be able to get very close with your hypotheses before you try to create hard AI or even soft AI. If you can't answer those questions, then you are either a decade or more away from the possibility and talking science fiction at this point, or you have no business messing around in an experimental business you can't predict...
I can't predict the future or the actions/thoughts of others when I get in the car every day and go to work... but I've got a good idea of what's going to happen... and I have planned escape routes every mile of the way... IF I cannot predict with some certainty what I'm going to be driving over/through/around/near... I STOP, get out of the car, and go investigate before driving my little two-door front-wheel-drive car through a mud puddle...

And I can predict with some certainty what's going to happen if a dirty bomb is dropped on a city in the US, although I've never seen it happen, and to my knowledge it's not right in front of us in any shape, form, or fashion other than as an idea about as real as your possibility for hard AI...

Just my thoughts... I don't mean to offend... and I'm quite enjoying this thread!
Someone's got to be the antagonist...
 
The point is not whether or not you know what you're going to find, where you go, or what you do, but whether or not you can predict the path with enough certainty to prepare for the journey...
You may not know what lies at the bottom of the ocean, but if you don't take into account the extreme cold, the pressure, the lack of light, and the inability to breathe underwater, you're never going to get there, now are you?
My point is hard to get across, so I try to use analogies... all I'm saying is... can we predict the dangers involved, the inherent problems we'll encounter... and the social upheaval this may cause!
 
jono199...
Asimov's laws are, as I said, simple English phrases and do not constitute predicate logic, on which all computing works. If people choose to use hazard detection and collision fail-safes on hospital robots, that's fine, but it's not the same as a robot taking in and understanding Asimov's laws.
Asimov's Laws were never meant to be set as code or implemented by rote. He never states anywhere that his three lines of text were to be fed into a machine as-is and used. In fact, if you read his books, you'll find that he addresses this quandary dozens of times throughout the short stories.
Asimov's laws are not code telling a robot not to run people down, since Asimov's laws assume that a robot has intelligence and can apply said laws in any given situation. It's almost like people having manners, for example.
Ah, ah, ah, ah, ah... *grins*... you are wrong, my friend. If you read Asimov's "I, Robot," he never gives his robots true intelligence. In fact, the stories in "I, Robot" develop from the characters' misconception that their robots have intelligence (and that they will be able to use that intelligence with the three laws). In "I, Robot," the Three Laws of Robotics are coded into the robots' "brains," and it is the logical interpretation of those laws themselves that leads to the situational problems explored in the collection of shorts that makes up the book. It is only much later that the laws themselves become hard-coded failsafes behind the scenes to protect the robots and their human creators from the "intelligence" given to them.

Asimov's robots, even in the "Robot City" series, are still only set-logic literary explorations of the mechanical interpretation of Isaac's Laws, and to that extent, still bound by the hard-coded laws that their actions and natures are derived and controlled by.

Back to coding: Asimov's laws were never meant to be literally coded in; instead, they were meant to be templates, if you will, for the actual coding and development of robots. And the laws ring true and are used quite often. If you think Asimov's three laws never had any part in the construction of AI and robots, you're gravely mistaken.

As for neural nets being used in governments, etc., that's not really what I'm talking about; mostly they use stochastic methods which assume a fully defined problem, and I'm sure you would agree that's not like the human brain, which learns. The methods used for prediction of trends and so on involve limited problems where the system is given previous results, shown the answers, and taught how to predict, assuming a set of conditions remains in effect. I'm talking about a network capable of dealing with problems it has not encountered before and is not led through.
Well, first off, if you can explain how the human brain learns, dear God, tell us! You'd be the only one who can, and we need to know.
Second, the Wall Street neural network is very primitive (it's also old). What could be called "the most true-to-fact neural networks" were first implemented in the late '70s and are still used today, to such an astounding degree that the programmers and users will outright testify they are going beyond what they were programmed and "taught." I am, of course, referring to the submarine mine-detection sonar-listener systems used aboard US submarines as we speak.
Of course, the system can only truly expand to the limits you provide it. If you only give it the information on a certain range of data, it can only "know" that much, but what it can do with it can grow. It's like keeping mold in a jar.
To that end, a neural network cannot ever be truly "intelligent" or "life." In fact, you can make a brain...that doesn't mean it'll be intelligent or alive. That's a problem we haven't figured out yet.
As of yet, as you say, people have not figured AI out and probably will not do so for quite a while. I don't think neural nets existed in the 1700s; what I think you're referring to are clockwork automatons, which performed a set of seemingly complex tasks using complex mechanics. But I'm not involved in any history courses. One thing I do know is that the first major piece of research on neural networks was the McCulloch and Pitts neuron model (MCP neuron) in 1943. They suggested that 'any computable function could be performed by some network of neurons' - Warren McCulloch.
Nope, not talking about clockwork automatons. The idea of neural networks (in essence, the foundational ideas leading to today's concepts and implementations of neural networks) has been pondered and theorized for centuries. The trouble is, back then they only had clocks at most, and clocks were very difficult to make into multi-pathed logical analyzers. Leonardo da Vinci even dabbled in the concept of a multi-directional machine that learned how to change its output based on input. So the concept of neural networks isn't some new idea that sprang out of a 1990s scientist's head.

The intelligence of a neural network is relational to the quality of the information available to it.
 
go shoobie... go shoobie... go shoobie... ::: spins around in chair throwing fist in the air ::::

i'm going to the bookstore after work!
 
In reply to Roshi...
OK, for the terrorist comments, I think I was misunderstood on most fronts... yes, Shoobie, I would agree; that could parallel my intended point and is a valid one itself. However, the point I intended to make was the simple fact that we have enough controversy in our lives already, we have enough reason for terror threats and terrorist plots... and to be honest, we can't even handle what we have... why should we create robots with an intelligence to give these monsters another reason to say we have no morals, no God, and think we're God... why should we fuel the fire at this point... and the possibility of this creating a new rise in terrorist activity from the current threats and new domestic threats is great. (I'm talking about the release of AI in fully functional form... something like I, Robot.)
Terrorists are not going to deter scientists from creating AI systems and furthering their development. In our society, working toward AI might be fine, while in another society it might be seen as totally wrong. There's always going to be someone who fanatically protests something. Hell, there have been protests against the manufacture of bricks because microbes are killed in the process!

I mean, there are countries with people who believe in arranged marriages. Now, to some other people, that's against God's will and a sin...but only an extreme few actually protest it with force.

In short, I'm not going to limit my growth of knowledge because someone else might not like it. My learning might get delayed because someone's burning down my house...but it won't get stopped.

we don't need to have a robot around trying to clean our house and cook our dinner...
I'm sure you aren't, but I want to make it clear that a lot of people have a problem grasping the concept of human-robot interaction because they think it will only happen when robots look like people. If you made a robot that made breakfast and cleaned the house, it's a robot interacting with humans...regardless of what it looks like. It could look like a square box with treads and eight mechanical arms that communicates in shrills and beeps...and yet still be an interacting robot with "intelligence" and possibly even "life." Say you have R2-D2, or one of the robots from "The Black Hole," in your house. It's still a robot interacting with humans, and you have achieved the level of robots talked about, despite their not looking like people.

And I can predict with some certainty what's going to happen if a dirty bomb is dropped on a city in the US, although I've never seen it happen, and to my knowledge it's not right in front of us in any shape, form, or fashion other than as an idea about as real as your possibility for hard AI...
"dirty bomb" is bad wordage to use in an argument. First off, a so-called "dirty-bomb" isn't what most people think of when they hear it. Second, a dirty-bomb isn't always nuclear. Lastly, the threat from nuclear dirty-bombs is extremely low due to the inherent properties of a "dirty" nuclear bomb.

Just my thoughts... I don't mean to offend... and I'm quite enjoying this thread!
Someone's got to be the antagonist...
You're not offending anyone. You're not the antagonist either. You're just a conscientious debater and thinker. No harm done, and these kinds of discussions are very welcome. :)
 