Artificial Intelligence - our future?
“You assume that in order to recognize danger, it is also necessary to understand fear.”
Yes. That’s how it works for organic life.
“This - in my scenario - wouldn't require the machine to understand fear, merely recognize that non-existence is less desirable than existence.”
I can see how AI would be completely “rational”, up to a point. One of the things I find difficult to wrap my head around, though, is learning. We as humans learn from information input, but probably even more through “experience”…we learn from our mistakes. That implies we are prepared to negotiate some calculated risks, even though our calculations may sometimes be askew. We also take into consideration our “natural attributes”…whether we are tall enough, strong enough, nimble enough, fast enough and so on to achieve the task, and we create new scenarios utilising other features within our armoury.
The other aspect of this is that we are “determined”. That means that even though we may damage ourselves in the pursuit of something, we will experiment with other ways to achieve what we wish to achieve. So we create new scenarios to attempt the same task…if that doesn’t work, try this. We are even prepared to accept a certain amount of damage in certain scenarios as par for the course.
“My only point was that a machine with the reasoning ability required for self-determination would be likely to have sufficient logic circuits to assess the results of particular actions, in a manner that would encourage them to opt for self-preservation.”
You know, maybe we shouldn’t program “self-preservation” into robotics. If the urge for self-preservation becomes too strong, then Mankind must at some point become a threat to that sense of self-preservation. I appreciate Asimov’s Three Laws of Robotics, but it only takes a short-circuit or a crossed wire, and suddenly something that is programmed to respect the sanctity of human life doesn’t. We all have computers, so we all know that glitches occur and programs don’t always run as they should.
TBC...