
Artificial Intelligence - our future?

@Pericles,

“You assume that in order to recognize danger, it is also necessary to understand fear.”

Yes. That’s how it works for organic life.

“This - in my scenario - wouldn't require the machine to understand fear, merely recognize that non-existence is less desirable than existence.”

I can see how an AI would be completely “rational”, up to a point. One of the things I find difficult to wrap my head around, though, is learning. We humans learn from information input, but probably even more through “experience”… we learn from our mistakes. That implies that we are prepared to negotiate some calculated risks, even though our calculations may sometimes be askew. We also take into consideration our “natural attributes”… whether we are tall enough, strong enough, nimble enough, fast enough etc. to achieve the task, and we create new scenarios utilizing other features within our armoury.

The other aspect of this is that we are “determined”. Even though we may damage ourselves in the pursuit of something, we will experiment with other ways to achieve it. So we create new scenarios to attempt the same task… if that doesn’t work, try this. We are even prepared to accept certain damage, in certain scenarios, as par for the course.
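
As an aside, that trial-and-error loop can be sketched in a few lines of Python (a deliberately crude illustration of what machine-learning people call an “epsilon-greedy” strategy; the strategy names and success rates here are invented). The agent mostly repeats what has worked, but still takes a calculated risk on something untried now and then, and updates its estimates from each outcome:

import random

estimates = {"climb": 0.0, "jump": 0.0, "tunnel": 0.0}  # learned success estimates
counts = {name: 0 for name in estimates}

def true_success_rate(strategy):
    # Hidden from the agent; stands in for the real world.
    return {"climb": 0.3, "jump": 0.6, "tunnel": 0.8}[strategy]

for trial in range(1000):
    if random.random() < 0.1:                        # calculated risk: explore
        strategy = random.choice(list(estimates))
    else:                                            # otherwise exploit the best known
        strategy = max(estimates, key=estimates.get)
    outcome = 1.0 if random.random() < true_success_rate(strategy) else 0.0
    counts[strategy] += 1
    # Learn from the mistake or the success: update the running average.
    estimates[strategy] += (outcome - estimates[strategy]) / counts[strategy]

print(estimates)   # "tunnel" should emerge as the preferred strategy

It “accepts certain damage as par for the course” in the sense that failed trials are simply data.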

“My only point was that a machine with the reasoning ability required for self-determination would be likely to have sufficient logic circuits to assess the results of particular actions, in a manner that would encourage them to opt for self-preservation.”

You know, maybe we shouldn’t program “self-preservation” into robotics. If the urge for self-preservation becomes too strong, then Mankind must at some point become a threat to that sense of self-preservation. I appreciate Asimov’s Three Laws of Robotics, but it only takes a short circuit or a crossed wire, and suddenly something that is programmed to respect the sanctity of human life doesn’t. We all have computers, so we all know that glitches occur and programs don’t always run as they should.
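
To make the “crossed wire” point concrete, here’s a toy Python sketch (purely my own illustration; the flag names are invented and this is nothing like real robot software). A single corrupted boolean is enough to invert the one check that was supposed to be inviolable:

def choose_action(order_harms_human, bit_flip=False):
    # First Law check: never carry out an order that harms a human.
    harms_human = order_harms_human
    if bit_flip:
        # The "crossed wire": one corrupted flag silently inverts the test.
        harms_human = not harms_human
    return "refuse" if harms_human else "act"

print(choose_action(order_harms_human=True))                 # refuse
print(choose_action(order_harms_human=True, bit_flip=True))  # act

Same input, same program, one flipped bit: the safety rule now does the opposite of what was intended.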

TBC...
Posted by MindlessCruelty, Friday, 3 September 2010 11:26:54 AM
“Which would not encompass humans alone, and prompt them to pre-emptive action against them, but also take into account that they are not the only machine on the planet. Extrapolating their need to eliminate humans will logically take them to the need to see other machines as a threat also.”

Not necessarily. What if they developed an “us and them” mentality? An AI form of “racism”, for example… the view that organic intelligence is just too unpredictable, risky and chaotic.

“I still think we should fit them with mechanical off-switches though. Just to be on the safe side.”

Yes, but don’t let them know that it exists if they have a sense of self-preservation, for it could itself become a “threat”.
Posted by MindlessCruelty, Friday, 3 September 2010 11:27:50 AM