Ethical Autonomous Cars
Posted by onthebeach, Monday, 28 March 2016 11:11:37 AM
The origin of the 'Trolley Problem' lies in Philippa Foot's paper on abortion and the doctrine of double effect, for any who like a bit of depth:
http://pitt.edu/~mthompso/readings/foot.pdf
Posted by onthebeach, Monday, 28 March 2016 11:14:21 AM
I had a look at Foxy's research and it seems that most of us need not worry too much about autonomous cars, which will take decades to perfect.
Posted by ttbn, Monday, 28 March 2016 11:54:13 AM
A lot of posters seem to be under the impression this technology is still decades away. It is not. There are trucks driving in Europe and the US which are Level 3 autonomous right now. With Level 3 there is still a human as a backup. Level 4 is complete autonomy. There are mines in the Pilbara where the ore trucks are controlled from a data centre 1,200 km away.
http://www.abc.net.au/news/2015-10-18/rio-tinto-opens-worlds-first-automated-mine/6863814

While there are obviously concerns that need to be ironed out, I fully expect there will be Level 4 autonomous trucks travelling the Hume within 3-4 years. Computers do not fatigue the way humans do; they do not dope themselves up with stimulants, become inattentive, leave unsafe distances between themselves and passenger cars, or try to beat the clock. Having driven the Hume many times I will acknowledge there have been many improvements to that notorious stretch of road, but trucks, especially late at night, are still an issue whenever I travel it. I for one would feel safer having predictable autonomous trucks running that route than human drivers. I'm happy to concede there will be a need for infrastructure spending to make our roads more compatible for AV use, but that is already being explored.

Dear Suseonline,

You wrote: “I doubt there would be any car made that was drunken-idiot-proof, would you?”

I think we would all welcome measures which would deter your 'drunken idiot' from hopping in his/her car because they either can't access a taxi, or they don't want to get their car home, or the myriad of other excuses they currently come up with. There would be a raft of ways to address your concerns; perhaps, just as taxis have a light to show if they are carrying a passenger, these cars could be fitted with something similar. Computer technology will give ample ability to 'black box' the car's activity and to indicate if a human had taken control.

Posted by SteeleRedux, Monday, 28 March 2016 1:05:08 PM
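[Editor's note] The 'black box' idea mentioned in the post above can be sketched as an append-only event log that records whenever a human takes or releases control. All names here (`ControlLog`, `record`, `human_ever_took_control`) are illustrative inventions for this discussion; real event-data recorders follow dedicated EDR standards, not this toy structure.

```python
import time
from dataclasses import dataclass, field


@dataclass
class ControlLog:
    """Append-only record of control-mode events for later inspection."""
    events: list = field(default_factory=list)

    def record(self, event: str, human_in_control: bool) -> None:
        # Each entry keeps a timestamp, a label, and who was in control.
        self.events.append((time.time(), event, human_in_control))

    def human_ever_took_control(self) -> bool:
        # Answers the question raised above: did a human take over at any point?
        return any(human for _, _, human in self.events)


log = ControlLog()
log.record("trip_start", human_in_control=False)
log.record("manual_override", human_in_control=True)
log.record("autonomy_resumed", human_in_control=False)
```

Because the log is append-only, an investigator (or an insurer) could establish after the fact whether the vehicle or its occupant was driving at the moment of an incident.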
Dear Armchaircritic,
As the driver of a non-airbagged Series 1 Discovery, which I love, I hear what you are saying, but I'm not about to claim any superiority in the way of safety. The 'tanks' of old remain death traps.

http://www.youtube.com/watch?v=joMK1WZjP7g

Dear Hasbeen,

You've probably not been exposed to the quite extraordinary advances in computational physics that go into modern computer games. The ability to map how different bodies will react to impact is to a large degree there already. Here is a small demonstration of how fine-grained that has become:

http://www.youtube.com/watch?v=pEX13W-IuLA

Computing for cows or kangaroos is quite easy in comparison.

Dear mikk,

As with Foxy's article, I think you may be misrepresenting the Trolley Problem. The surgeon would be far more like the person on the bridge standing next to the fat man. We accept that, even given the number of lives involved, humans cannot and should not be judged for protecting, or failing to harm, those immediately around them. Which is exactly why I posed questions 2 and 3.

“2. An autonomous car is faced with the scenario of being unable to brake in time to avoid one of two obstacles, a pedestrian or a parked truck. The first would potentially kill the pedestrian while the second would likely seriously injure the vehicle's occupant. Which action is ethically more robust?”

“3. Would you purchase a car that was programmed with the sort of software that doesn't place your and your family's well-being at the top of the list?”

Protection of the immediate is a natural human response. The 'immediate' for the AI would be the vehicle's passengers. Should this be what we program into an AV? Do we accept that this may have the consequence of ultimately harming more humans?

Posted by SteeleRedux, Monday, 28 March 2016 1:06:41 PM
I don't know about anyone else, SteeleRedux, and maybe I am getting old, but I hope I am not on the roads if or when any autonomous trucks are travelling the highway!

Both computers and truck mechanisms can and do fail. I just think of the many truckies who have put their own lives on the line to steer their trucks away from other cars or crowded areas near the road when the truck's brakes, or whatever, failed. I don't trust computers to have quite the same concern...

Posted by Suseonline, Monday, 28 March 2016 1:13:43 PM
It is the 'Trolley Dilemma', nothing new there.
http://people.howstuffworks.com/trolley-problem.htm
To address your problem, it is right for the system to be set to avoid an accident by braking, but wrong for it to initiate action that puts anyone else in danger.
So the option is for the vehicle to brake but not swerve. In any case, swerving at speed is a dangerous manoeuvre that should be avoided, since it brings into play a whole range of uncontrollable and impossible-to-predict consequences.
If the far superior rules governing navigation at sea are applied, the driver must always keep a diligent watch and always prefer action that rules out the possibility of a collision, or, if a collision happens nonetheless, minimises its effects.
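[Editor's note] The brake-but-don't-swerve rule described above can be sketched as a simple decision procedure. This is an editorial illustration with made-up names and thresholds (`choose_action`, a hypothetical 8 m/s² maximum deceleration), not any real vehicle's control logic:

```python
def choose_action(obstacle_distance_m: float, speed_ms: float,
                  max_decel_ms2: float = 8.0) -> str:
    """Return 'continue', 'brake', or 'brake_hard' for one obstacle ahead.

    The policy never swerves: steering into a new path would introduce
    hazards the system cannot fully predict, so all responses are braking.
    """
    if speed_ms <= 0:
        return "continue"
    # Minimum distance needed to stop from this speed: v^2 / (2a)
    stopping_distance = speed_ms ** 2 / (2 * max_decel_ms2)
    if obstacle_distance_m > 2 * stopping_distance:
        return "continue"      # ample margin, no action needed
    if obstacle_distance_m > stopping_distance:
        return "brake"         # brake early, stay in lane
    return "brake_hard"        # full braking; still no swerve into a new hazard
```

Note how this mirrors the sea-navigation principle quoted above: the earlier thresholds prefer actions that rule out a collision entirely, and the last resort only minimises its effects.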