Ethical Autonomous Cars
Posted by SteeleRedux, Saturday, 26 March 2016 11:17:44 AM
Steele Redux, surely these cars will have humans in them who will be able to override the car's system in emergencies?
Why would a car need to be hands free anyway? I am all for cleaner fuels/energy being used to power our vehicles, but I don't see the point in hands free as such. Posted by Suseonline, Saturday, 26 March 2016 7:16:46 PM
Dear SteeleRedux,
Interesting questions and I've come across a link that helps answer the top misconceptions regarding autonomous cars. No 6 covers the ethical questions: http://www.driverless-future.com/?page_id=774 Posted by Foxy, Saturday, 26 March 2016 7:59:30 PM
The driver will almost always be held responsible.
I prefer the rules applying to navigation at sea, where there is no escape for the stupid and irresponsible Master of a vessel (or for the legal owner). As another possible comparison with the maritime environment, most shipping is identified by certain main characteristics which are read by authorities and by other mariners to inform their decisions (Automatic Identification System). I like it. I can see where privacy concerns could arise where motor vehicles are concerned.

Returning to land, there are so many stupid and wilful drivers that any improvement by engineers - better safety from improved roads and vehicle engineering, for example - is instantly counterbalanced and set back by the dumb-ass, risk-taking and selfish drivers. Every day I drive a heavier vehicle and allow the required vehicle spacing, and some idiot in a small manoeuvrable four-banger ducks into the gap, placing him/herself (yep, there is growing equality in stupidity) in the way of a large load that will not be able to stop now that the gap has been halved.

While I applaud the technical advances and 'driverless' cars (really more aids to driving better), if I had my way there would be fewer drivers on the road - the number lessened by taking the licences from the unethical, unprincipled, risk-taking fools. Posted by onthebeach, Saturday, 26 March 2016 8:01:47 PM
Before anyone comes to lecture me about the maritime AIS, I would imagine that any road version would be used for different purposes and we are there already. It is just that the authorities and industry haven't got around as yet to linking the available data. However a GPS tracker would aid/improve the number plate ID and yes, some makers already have that in newer vehicles too (for servicing).
Posted by onthebeach, Saturday, 26 March 2016 8:16:21 PM
Suseonline : You asked "Why would a car need to be hands free anyway?"
Well of course it doesn't "need" to be hands free, but there are many reasons why it is preferable. Here are some:

There is the moral argument. The Google cars have collectively driven a vast total distance now and the data is showing that they are better drivers than people. So this leads to the question: if people are worse drivers than computers, then isn't it morally better to require that computers replace human drivers, since there will be fewer fatalities, injuries and damage?

There is also the economic argument. We can be more productive since we can now work while driving somewhere (similar to working on planes). Also, transportation costs will fall dramatically since we don't have to pay people to drive trucks and vans, nor do we have to pay taxi drivers. We can use our collective transportation assets more efficiently with driverless cars. For example, arrangements such as community-owned cars that are shared easily: since we don't have to park the car, it can go straight to the next passenger, and the ordering system can be computer controlled so that it delivers cars in the most efficient way. As it is at the moment, the majority of cars are parked at any given time, which is a total waste of resources.

Another is the environmental argument. Since we can enter into new sharing arrangements (instead of the current ones such as taxis and buses, etc.) there is less overall need for new cars. Also, because we can drive more efficiently due to computer controlled scheduling, there is the potential for fewer overall kms driven.

Additionally there are the less tangible benefits that people may get from them, such as enjoying a stress-free ride where you can sit back, relax and sightsee, or do other stuff such as talk on the phone or read a book. Posted by thinkabit, Saturday, 26 March 2016 9:33:28 PM
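thinkabit's point about a computer-controlled ordering system delivering shared cars efficiently can be sketched in a few lines. This is a hypothetical illustration only - the grid coordinates, car names and greedy nearest-car rule are all invented for the example; a real fleet dispatcher would solve a much harder optimisation problem.

```python
# Hypothetical sketch of a shared-fleet dispatcher: each waiting
# passenger is assigned the nearest idle car (greedy, not optimal).
# Positions are (x, y) grid coordinates; distance is Manhattan.

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def dispatch(idle_cars, requests):
    """Assign each request to the closest remaining idle car."""
    assignments = {}
    cars = dict(idle_cars)  # car_id -> position, copied so we can pop
    for rider, pos in requests:
        if not cars:
            break  # no cars left; remaining riders wait
        best = min(cars, key=lambda c: manhattan(cars[c], pos))
        assignments[rider] = best
        del cars[best]
    return assignments

cars = {"car1": (0, 0), "car2": (5, 5)}
riders = [("alice", (1, 1)), ("bob", (6, 4))]
print(dispatch(cars, riders))  # -> {'alice': 'car1', 'bob': 'car2'}
```

Greedy assignment like this is simple but not globally optimal; real ride-sharing systems batch requests and solve an assignment problem across the whole fleet.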
Thinkabit, the mind boggles!
I hope I am not around to see a world run by computers. I can see some advantages to autonomous cars of course, but I remain a bit frightened of all the possible problems they will bring. I think maybe the cost of making these sorts of cars will be a major factor in whether they will be popular or not. Posted by Suseonline, Saturday, 26 March 2016 10:09:15 PM
Dear Suseonline,
If you were a passenger in an aircraft caught in fog and the safest way to land was the pilot switching to autopilot and letting the computer take control, would you insist on a hands-on approach? Most wouldn't. In many ways this is just an extension. As thinkabit says, autonomous cars may well continue to prove far safer than human-driven ones. If you were taking a long trip with the kids or grandkids, the idea of having the best driver at the controls would have to be a consideration, or does your pride take precedence over their safety? Perhaps you would enjoy watching a movie with them rather than dealing with rain and bright oncoming lights when driving through the night? Well, Ford has already thought of that; http://www.wired.co.uk/news/archive/2016-03/10/ford-patents-movie-window-for-driverless-cars Remember this isn't a world run by computers but one serviced by them.

Dear Foxy, I have read similar arguments elsewhere and they seem to be a rehash from a single source. All are pretty soft in my opinion, but let's look at their self-proclaimed 'strongest argument': “d) The question is wrong.” Why? “As we know, the trolley problem has no ethically right solution- because in principle we can not weigh one life against another – , which makes it practically impossible for self-driving cars to solve the dilemma.”

This is to misstate the scenario. Most people who are posed the trolley problem would choose to pull the lever to sacrifice one life to save five. If instead it were one to one they would not touch it. There is no dilemma. The closest thing to the actual trolley dilemma would be to launch, ejector-seat style, one of the car's passengers into the path of the car to slow it down enough to save the others. No one would think that was acceptable. Were there any other arguments put forward in the article you linked to that you thought had any validity?
Ultimately, the speed and processing power of onboard computers will force ethics into software programming. Posted by SteeleRedux, Saturday, 26 March 2016 11:48:05 PM
SR,
While you have raised an interesting moot point, the reality is that in the foreseeable future AI is not going to be used to judge the value of human life, irrespective of whether it could or not; it will simply take the action that would most likely prevent or reduce the severity of an accident. In 90% of cases this would involve maximum braking using ABS in a straight line, as anything else would tend to make the situation worse. Posted by Shadow Minister, Sunday, 27 March 2016 10:01:00 AM
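Shadow Minister's "maximum braking in a straight line by default" rule can be caricatured as a tiny decision procedure. A sketch under stated assumptions - the 8 m/s² deceleration figure and the lane-clear check are invented for illustration, not taken from any real vehicle's software:

```python
# Toy sketch of the "default to straight-line ABS braking" policy
# described above. All numbers are invented for illustration.

def braking_distance(speed_ms, decel=8.0):
    """Distance to stop under full ABS braking: v^2 / (2a)."""
    return speed_ms ** 2 / (2 * decel)

def choose_action(speed_ms, obstacle_distance, lane_clear):
    """Brake in a straight line unless that cannot avoid impact
    AND an adjacent lane is known to be clear."""
    if braking_distance(speed_ms) <= obstacle_distance:
        return "brake_straight"
    if lane_clear:
        return "brake_and_swerve"
    return "brake_straight"  # least-bad default: shed speed

print(choose_action(20.0, 30.0, lane_clear=False))  # brake_straight
print(choose_action(30.0, 30.0, lane_clear=True))   # brake_and_swerve
```

Note how the swerve branch only fires when braking alone provably fails, which matches the post's claim that anything other than straight-line braking usually makes things worse.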
From what I've seen of testing here in SA, it will be many moons before these contraptions will be safe. I will never forget the look on the transport minister's face when the car he was a passenger in flattened a row of cardboard cut-outs it was supposed to brake for. Any money spent on the dangerously-increasing number of cars on the road should go toward making the cars we have safer, and convincing hoon drivers of the error of their ways.
Posted by ttbn, Sunday, 27 March 2016 10:32:46 AM
Then along comes a computer hacker with a warped sense of humour and - CHAOS!!
Posted by Is Mise, Sunday, 27 March 2016 3:17:56 PM
Dear Shadow Minister,
We must have different driving styles. Most of my near misses have involved not only braking but also taking evasive action, and at least one memorable occasion involved speeding up when passing a B-double as a car had pulled into traffic ahead of me without looking. Could I have braked hard to avoid a collision? Possibly, but certainly the safest option was to accelerate.

As with anyone who has driven for any number of years, there have been many close calls. The one that stands out though was pulling out at night to overtake a bus. In that first fraction of a second my brain registered something not being right and I pulled back in just in time to miss a speeding car without its headlights on coming from the other direction. It was very much a near-death experience. The thing is that I doubt very much I could pull off the same manoeuvre now. Eyesight and reactions are not what they once were.

It is interesting talking to younger people about autonomous cars. All seem to want to retain control when first asked, but when it is put to them that it is not unforeseeable that after a night on the town they could go to their car and instruct it to take them home in autonomous mode without the risk of being caught for drink driving, their attitude changes. The thought of not having to put up with cold and sometimes dangerous taxi ranks or shared rides is instantly appealing.

If available software enhancements would minimise harm in certain situations, why wouldn't you want them to be more reactive than simply braking? It should be noted that the software in these cars is capable of learning. A section of road that required 'hands on' control the first few times can be driven over autonomously after the software learns from the driver how to traverse it. Also there are constant updates to the software; the one released this week allows for parking and retrieval via the car's remote.
http://www.dailymail.co.uk/news/article-3392647/Tesla-software-update-allows-self-parking-limits-speed.html All very exciting stuff. Posted by SteeleRedux, Sunday, 27 March 2016 6:03:08 PM
Hi all,
I'm still quite interested if there is anyone prepared to offer answers to the questions I put in my original post. I will rephrase them slightly to permit them to stand on their own.

1. Should a car be programmed to swerve to avoid 3 pedestrians on the road even if it meant hitting a single pedestrian on the footpath?

2. An autonomous car is faced with the scenario of being unable to brake in time to avoid one of two obstacles, a pedestrian or a parked truck. The first would potentially kill the pedestrian while the second would likely seriously injure the vehicle's occupant. Which action is ethically more robust?

3. Would you purchase a car that was programmed with the sort of software that doesn't place your and your family's well-being at the top of the list?

Any takers? Posted by SteeleRedux, Sunday, 27 March 2016 7:06:22 PM
Steele redux, if you had to pose these sorts of scenarios when talking about the 'programming' of autonomous cars, then my reaction would be to suggest we don't make them at all!
I would like to see any possible autonomous cars made so that they could be easily turned into fully human-driven cars in a split second, so that these sorts of scenarios could be decided by the human if need be. I realize that may be difficult, but unless they could be made this way then I can't see them being 'ethical' in any real way. As for the exciting idea that we could hop into these cars drunk and be driven safely home, I would dispute the wisdom of this thought. Imagine a drunken idiot in a car supposedly taking him home who is stupid enough to try controlling some part of the car, or the directions, or the speed, himself? I doubt there would be any car made that was drunken-idiot-proof, would you? Posted by Suseonline, Sunday, 27 March 2016 7:42:19 PM
I'd like to see one of these 'brain fart' cars on a country road at night dodging kangaroos.
Or in a situation that I experienced between Guyra and Ebor (NSW) when the road was still dirt: I crested a hill and there was a semi coming towards me, on the down slope of the next hill and driving on the crown of the road. I pulled over and stopped to let him have a free go and to avoid the worst of the dust which was boiling behind him. Just as he started to climb my side, a car came out of the dust at speed (the semi wasn't wasting any time either). Had I not stopped I'd have met the idiot in the car, and his family, head on in the dust. The semi driver waved his thanks and gave an expressive gesture as he passed. What would an autonomous car have done? For hacking of car computer systems, see Google. Posted by Is Mise, Sunday, 27 March 2016 11:32:35 PM
Hi Steele,
Here's what's going to happen: people won't just be people. It may depend on how IMPORTANT or WEALTHY that person is, or the car may choose to hit 3 male adults rather than a woman with a stroller. Let's say the car has to choose between hitting an MP or running into a crowd of people - the car might be programmed to value the MP more than the people. So it won't even be about ethics, it will be about how valuable a human being you are.

Would I want one of these cars? On face value no, but we may all be sold on some sales pitch. Personally I'm thinking more about older cars, the ones where you knew you were driving them and there weren't any computers or GPS in them. I'm not sure if technology is empowering us or imprisoning us sometimes.

Let's Play Dodge 'Em Cars: what I want to know is what an autonomous car and driver would do if, say, I attacked said car with the intention of doing it harm. Will the autonomous car run away? Or will it stay and fight? Posted by Armchair Critic, Monday, 28 March 2016 12:43:16 AM
Anyone who actually believes they will be able to program an autonomous car to choose which group of pedestrians to hit, or miss, anytime soon is kidding themselves. Even very expensive autopilots on ships & large boats are not reliable enough to be left without a crew member on permanent watch over the things.
Actually Is Mise, your example is one where they should be of some use. They will have to have a good radar system, one that would have been able to see through that dust to spot the car you couldn't. Hell, one would even have saved Steely from that lightless car.

But how good do you think they will be at noticing a dog, kangaroo or cow on the side of the road, judging if it is a threat, & making a useful response? Then how about that idiot stopped on the side of the road talking on their phone? Will they be able to make a sensible choice of action there, like anticipating the likelihood of that car pulling out without looking? They would have to err on the conservative response. I can see a half hour drive becoming a 2 hour drive as the things creep around, if we are ever stupid enough to let them loose.

While I will happily accept granny cluttering up my main road, driving into town at 70 km/h if that is all she feels & probably is safe doing, I'm damned if I'll accept some yuppie in their computer controlled thing slowing down every time some cow sticks its head through a fence to eat a bit of grass, just so they can prattle on their phone. Once they can produce a useful robot vacuum cleaner it may be time to think about cars, but probably not this century. Posted by Hasbeen, Monday, 28 March 2016 2:39:56 AM
Steele
A restating of the "trolley problem" as the "surgeon problem" may change your mind on pulling the lever. You are a surgeon who has 5 patients who will die if they don't get an immediate transplant. Today you are doing a very dangerous operation on someone where the slightest mistake will mean their death. If they die you can save all your other patients. Do you "interfere" and kill your patient to save 5 others? Still think most people would "pull the lever", Steele? I wouldn't. Like that article said, in many situations there is no "right" answer. Autonomous cars only need to not make "wrong" decisions. For example, it would be wrong to drive onto a footpath or crash into a building to avoid a hazard on the road. Posted by mikk, Monday, 28 March 2016 8:03:19 AM
SteeleRedux,
It is the 'Trolley Dilemma', nothing new there. http://people.howstuffworks.com/trolley-problem.htm To address your problem, it is right for the system to be set to avoid an accident by braking, but wrong for it to decide to initiate action that puts anyone in danger. So the option is for the vehicle to brake but not swerve. In any case, swerving at speed is a dangerous manoeuvre that should be avoided, since by doing so a whole range of uncontrollable and impossible-to-predict consequences are brought into play. If the far superior rules governing navigation at sea are applied, the driver must always keep a diligent watch and always prefer action that will rule out the possibility of an accident or, if a collision happens nonetheless, minimise its effects. Posted by onthebeach, Monday, 28 March 2016 11:11:37 AM
Origin of the 'Trolley Problem', abortion, for any who like a bit of depth,
http://pitt.edu/~mthompso/readings/foot.pdf Posted by onthebeach, Monday, 28 March 2016 11:14:21 AM
I had a look at Foxy's research and it seems that most of us need not worry too much about autonomous cars, which will take decades to perfect.
Posted by ttbn, Monday, 28 March 2016 11:54:13 AM
A lot of posters seem to be under the impression this technology is still decades away. It is not. There are trucks driving in Europe and the US which are Level 3 autonomous right now. With Level 3 there is still a human as a backup. Level 4 is complete autonomy. There are mines in the Pilbara where the ore trucks are controlled from a data centre 1,200 kms away.
http://www.abc.net.au/news/2015-10-18/rio-tinto-opens-worlds-first-automated-mine/6863814

While there are obviously concerns that need to be ironed out, I fully expect there will be Level 4 autonomous trucks travelling the Hume within 3-4 years. Computers do not fatigue the way humans do; they do not dope themselves up with stimulants, become inattentive, or leave unsafe distances between themselves and passenger cars, and they aren't trying to beat the clock. Having driven the Hume many times I will acknowledge there have been many improvements to that notorious stretch of road, but trucks, especially late at night, are still an issue whenever I travel it. I for one would feel safer having predictable autonomous trucks running that route than human drivers. I'm happy to concede there will be a need for infrastructure spending to make our roads more compatible for AV use, but that is already being explored.

Dear Suseonline, You wrote; “I doubt there would be any car made that was drunken-idiot-proof, would you?” I think we would all welcome measures which would deter your 'drunken idiot' from hopping in his/her car because they either can't access a taxi, or they don't want to get their car home, or the myriad of other excuses they currently come up with. There would be a raft of ways to address your concerns; perhaps, just as taxis have a light to show if they are carrying a passenger, these cars could be fitted with something similar. Computer technology will give ample ability to 'black box' the car's activity and indicate if a human had taken control. Posted by SteeleRedux, Monday, 28 March 2016 1:05:08 PM
Dear Armchaircritic,
As a driver of a non-airbagged Series 1 Discovery, which I love, I hear what you are saying, but I'm not about to claim any superiority in the way of safety. The 'tanks' of old remain death traps. http://www.youtube.com/watch?v=joMK1WZjP7g

Dear Hasbeen, You've probably not been exposed to the quite extraordinary advances in computational physics that go into modern computer games. The ability to map how different bodies will react to impact is to a large degree there already. Here is a small demonstration of how fine-grained that has become; http://www.youtube.com/watch?v=pEX13W-IuLA Computing for cows or kangaroos is quite easy in comparison.

Dear mikk, As with Foxy's article, I think you may be misrepresenting the Trolley Problem. The surgeon would be far more like the person on the bridge standing next to the fat man. We accept that, even given the number of lives involved, humans cannot and should not be judged for rescuing, or failing to harm, those immediately around them. Which is exactly why I posed questions 2 and 3.

“2. An autonomous car is faced with the scenario of being unable to brake in time to avoid one of two obstacles, a pedestrian or a parked truck. The first would potentially kill the pedestrian while the second would likely seriously injure the vehicle's occupant. Which action is ethically more robust?”

“3. Would you purchase a car that was programmed with the sort of software that doesn't place your and your family's well-being at the top of the list?”

Protection of the immediate is a natural human response. The 'immediate' for the AI would be the vehicle's passengers. Should this be what we program into an AV? Do we accept that this may have the consequence of ultimately harming more humans? Posted by SteeleRedux, Monday, 28 March 2016 1:06:41 PM
I don't know about anyone else, Steele redux, and maybe I am getting old, but I hope I am not on the roads if or when any autonomous trucks are travelling the highway!
Both computers and truck mechanisms can and do fail. I just think of the many truckies who have put their own lives on the line to steer their trucks in directions away from other cars or crowded areas near the roads when the truck's brakes, or whatever, fail. I don't trust computers to have quite the same concern... Posted by Suseonline, Monday, 28 March 2016 1:13:43 PM
I have to admit I'm not very tech savvy.
I have problems with computers, and the thought of self-driving cars does make me nervous. Still, it is a great discussion topic - and worth debating. If you don't learn - you don't grow. Therefore the following link, although a few years old, is still worth reading. In it an expert predicts that autonomous cars will be operating in Australia within a few years. Wow! http://www.motoring.com.au/self-driving-cars-here-in-four-years-45276/ Posted by Foxy, Monday, 28 March 2016 1:48:38 PM
Let's hope that Jeep solves its problems!
See: https://blog.kaspersky.com/blackhat-jeep-cherokee-hack-explained/9493/ Posted by Is Mise, Monday, 28 March 2016 3:05:18 PM
Dear Foxy and Suseonline,
Forgive me but I think the two of you might be missing the point. This is not really about the technology but rather about the ethics. Both of you have shown on innumerable occasions that you are capable of thinking through and appreciating the ethical implications of a wide range of topics on OLO. In many ways you are among the most proficient.

A corporation by law is supposed to act in the way that maximises profits for its shareholders, or at the very least act in their best interest. However it is community input on acceptable behaviour that thankfully has them at least making some show of acting ethically. This is essentially the same case. Just as you do not have to be a medical professional or a scientist to serve on ethics committees, you do not have to be across the technology to have valuable input into this debate. Ultimately it will be the community who will guide what is and what isn't acceptable behaviour of AVs. There are many reasons why this technology is not only inevitable but in a large range of cases desirable. Being engaged in the debate around how they will operate is important.

Dear Is Mise, Many things are capable of being hacked, including military weaponry, vital infrastructure and essential utilities. What is important is that systems are continually upgraded so as to be as robust as possible. It is right to raise it as a concern, but I'm not sure it is a reason not to roll out the technology. Posted by SteeleRedux, Monday, 28 March 2016 3:32:18 PM
Probably, one day, the enormous advances in driverless cars will be combined with the Uber phenomenon.
If so, much larger vehicles may be just as effective to take more people from point A to points B, C, ... and Z. All we will have to do is pay a fare and let the vehicle take us there. We could get on at designated stops and chat to other people in the vehicle while we are travelling, to avoid getting bored. And we could call it something like 'public transport'. Hey, I may patent that. Joe Posted by Loudmouth, Monday, 28 March 2016 5:52:26 PM
Dear SteeleRedux,
I have difficulty getting my mind to grasp the ethical questions associated with autonomous cars. If humans have difficulty with ethics regarding car incidents, how can we expect autonomous cars to make ethical decisions except through in-built technology? Posted by Foxy, Monday, 28 March 2016 6:04:07 PM
Dear Loudmouth,
You wrote; “Probably, one day, the enormous advances in driverless cars will be combined with the Uber phenomenon.”

That is already happening. For instance, earlier this year GM put up half a billion dollars in a partnership with Lyft, an Uber competitor, to develop a network of driverless cars. http://www.wired.com/2016/01/gm-and-lyft-are-building-a-network-of-self-driving-cars/#slide-1 This may well mean the pressure for individual car ownership will diminish, since registration, insurance, upkeep, garaging and parking may be expenses people will be happy to forego if the convenience is there.

Dear Foxy, I feel you are over-complicating the issue. Let me try framing my questions this way:

1. You are suddenly faced with four people unexpectedly on the road in front of your car. You only have time to swerve left or right. By going left you will hit a mother, father and a child. Going right you will hit a single adult. By doing nothing you will inevitably strike them all. What is the ethical action?

2. You are travelling down a suburban street and suddenly a pedestrian steps out from behind a parked car. The only way of missing him would be to swerve into a parked truck. Do you risk potential injury to yourself to save someone from near-certain death or life-changing injuries?

3. If you were a passenger in the vehicle, how would you want the driver to act in the same scenario?

Do you find any of these questions difficult to answer? Posted by SteeleRedux, Monday, 28 March 2016 8:29:38 PM
Come on Steely, a green talking about ethics. What an oxymoron.
Posted by Hasbeen, Monday, 28 March 2016 9:58:08 PM
Of course the questions are difficult to answer because of the variables that are left out.
What sort of car am I driving? What make of car am I driving? What is the efficiency of its brakes? How many turns from lock to lock on the steering? Power or manual steering? Automatic or manual transmission, or a combination? Type and make of tyres? My reaction time? Does the car have foot emergency brakes and a hand driving brake, or does it have foot driving brakes and a hand parking brake? Is it petrol, electric or diesel powered?

I won't throw in a steam car as there are so few of them on the road, but they (usually) have a significant stopping factor that most other cars don't have; that is, the ability to go into reverse under power. Posted by Is Mise, Monday, 28 March 2016 11:37:05 PM
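Several of the variables Is Mise lists (brakes, tyres, reaction time) feed into the standard stopping-distance formula: total distance = reaction distance (v × t) plus braking distance (v² / 2a). A quick sketch, where the 1.5 s reaction time and 7 m/s² deceleration are typical textbook figures rather than values for any particular car:

```python
# Total stopping distance = reaction distance + braking distance.
# reaction distance = v * t_react; braking distance = v^2 / (2 * a).
# The 1.5 s reaction time and 7 m/s^2 deceleration are textbook-style
# assumptions, not measurements of any particular car or driver.

def stopping_distance(speed_kmh, t_react=1.5, decel=7.0):
    v = speed_kmh / 3.6             # km/h -> m/s
    reaction = v * t_react          # distance covered before braking
    braking = v ** 2 / (2 * decel)  # distance covered while braking
    return reaction + braking

for speed in (50, 60, 100):
    print(f"{speed} km/h -> {stopping_distance(speed):.0f} m")
```

On these assumptions a car at 100 km/h needs roughly 97 m to stop, with over 40 m covered during reaction time alone - which is exactly the gap an autonomous system, with near-zero reaction time, is claimed to close.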
The ethical and legal issues with automated cars are not new and have had to be considered when building other systems. This is something engineers and programmers have to deal with as part of their job.
If an autonomous car kills a pedestrian, then it is unlikely the occupants of the car, including the "driver", would be considered at fault. The car's manufacturer or the individual software engineer might be at fault, if this was a problem they should have anticipated. However, it may be that no one is at fault. Decision making will have to be programmed into autonomous vehicles, just as it is into systems which make life and death decisions every day. It will not be the cars making the ethical choices, but those who program them.

You ask "Whose safety is paramount?" and "Would you purchase a car ... that doesn't place your and your family's well-being at the top of the list?". The public interest is likely to be considered paramount, so no, you will not be able to buy a car which puts your family first. Cars will be programmed to preserve the lives of a larger number of pedestrians over a smaller number of passengers. In such a situation the car may be programmed to ignore manual input, just as cars are already programmed to ignore the driver if there is a risk of a rollover.

This week I am giving an annual lecture on ICT ethics to students at the Australian National University. For this I use the code of ethics of the Australian Computer Society, but these codes are much the same for all professional bodies. They place the "Public Interest" above that of an individual client: http://www.tomw.net.au/basic_ict_professional_ethics/ Posted by tomw, Tuesday, 29 March 2016 8:27:45 AM
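tomw's point - that the ethics live in rules written by programmers, not in the car - can be made concrete with a deliberately crude sketch. The bare head-count rule below is an invented illustration of one possible "public interest" policy, not anyone's actual vehicle code:

```python
# Crude illustration of tomw's point: the "ethics" are whatever rule
# the programmer writes, here a bare head-count comparison.
# This is an invented policy sketch, not a real vehicle's logic.

def pick_outcome(options):
    """Each option is (label, people_harmed). Choose the option
    that harms the fewest people; ties keep the first listed."""
    return min(options, key=lambda o: o[1])[0]

options = [
    ("continue_straight", 3),   # three pedestrians ahead
    ("swerve_left", 1),         # one pedestrian on the footpath
    ("hit_parked_truck", 1),    # one occupant seriously injured
]
print(pick_outcome(options))  # swerve_left (first of the tied pair)
```

Whether a bare head-count is the right rule is of course exactly what this thread is debating; the code only shows where such a rule would sit, and that tie-breaking (pedestrian versus occupant, both counting one) must itself be an explicit programming decision.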
Dear SteeleRedux,
I can't answer what I would do at the critical moment. It all depends on the circumstances, emotional conditions, and unforeseen physical, mental and other factors involved at that split moment. Therefore, yes, I do have a problem answering your questions. A machine can only be programmed for a limited number of reactions. It is not capable of being programmed for the innumerable reactions to ever-changing circumstances that even the human brain cannot predict. Posted by Foxy, Tuesday, 29 March 2016 8:44:07 AM
cont'd ...
BTW - Did you happen to watch on SBS - Monday evening, "The Brain with David Eagleman?" I found the program very interesting. David Eagleman took a journey "through the unseen world of decisions and how they get made." Posted by Foxy, Tuesday, 29 March 2016 8:49:23 AM
Hi Steele,
I was so excited by your post: "That is already happening. For instance earlier this year GM put up half a billion dollars in a partnership with Lyft, an Uber competitor, to develop a network of driverless cars. http://www.wired.com/2016/01/gm-and-lyft-are-building-a-network-of-self-driving-cars/#slide-1 "This may well mean the pressure for individual car ownership will diminish since the expense of registration, insurance, upkeep, garaging, and parking may be an expense people will be happy to forego if the convenience is there." This could usher in a new era in transportation, in which larger vehicles are hired on a trip-by-trip basis, perhaps with a "driver" to collect "fares" and to ferry all "passengers" from convenient and designated "stops" to where they want to go, thus offering a service at relatively low cost - in Latin, they would use the word "omnibus" meaning "for everybody", but we could shorten it to, I don't know, how about "bus" ? Joe Posted by Loudmouth, Tuesday, 29 March 2016 9:55:34 AM
Hi there STEELEREDUX...
You mentioned a timeline where you suggested Level 4 autonomous heavy vehicles may be operating up and down the Hume Hwy within 3 to 4 years? I too travelled that notorious piece of blacktop, between Goulburn and Holbrook, and Goulburn via the Federal Hwy to the ACT border. Firstly in a 'very tight' squeeze, a Cooper S, and later a 5.8 V8 Falcon. I understand much of the Hume has been thoroughly upgraded, with many of the more dangerous stretches completely eradicated and replaced with wide concrete tarmac. Some of the more horrendous fatals have occurred along my previous designated patrols, and the thought of some heavily laden, multi-axle, articulated vehicle without a competent human at the wheel would resurrect many of those awful nightmares of senseless death and injuries occasioned against innocents while travelling along those dreadful roads. I'm all for science and technology, but I'd need to be very well convinced that a heavily laden truck without a human at its controls was safe enough to be permitted to travel along many of NSW's main roads autonomously. Posted by o sung wu, Tuesday, 29 March 2016 1:42:27 PM
"Computing for cows or kangaroos is quite easy in comparison"
Computing for cows may be easy but kangaroos are a different matter, as anyone knows who has encountered half a dozen of them on a highway at night. The car would need to make 6 split-second decisions x 6 to the 6th as the marsupial macropods changed their little minds and directions willy-nilly.

Posted by Is Mise, Tuesday, 29 March 2016 2:13:40 PM
Dear Hasbeen,

Which Green? As I have stated innumerable times on this forum I am neither a Green nor a Green voter; they blotted that copybook a long time ago. However the fact that you thought I was a Green, given the topic I raised, does tell its own tale, and policies aside I would hold that most of their politicians are of a higher ethical standard than most of those from either of the two majors any day.

Dear Foxy,

You wrote; “I can't answer what I would do at the critical moment. It all depends on the circumstances, emotional conditions, and unforeseen physical, mental, and other factors involved at the split moment. Therefore, yes I do have a problem answering your questions.”

Which is exactly why I asked “What is the ethical action?”. When faced with these situations, who can ever say exactly what they would do? Probably the best we can say is 'I would hope I would do the right thing'. It acknowledges our limitations as humans but also the fact that we can recognise ethical outcomes and would strive for them if possible. If the correct ethical decision was recognisable and achievable, wouldn't you want the AV to strive for it?

No, I didn't catch The Brain, but given your recommendation I will look for it on SBS On Demand.

Dear Loudmouth,

Facetiousness aside, everything you say is fine, but why are you limiting yourself to larger vehicles with drivers? Just imagine the utility provided by the 10 smaller autonomous vehicles that could be run for the same cost as your bus. The smaller size would mean the tighter, lesser roads could now be traversed; even driveways could be accessed. No more standing at rain-swept bus stops, conforming your life to unfriendly and unreliable timetables, sitting behind view-obscuring advertising billboards, in vehicles whose size prevents any reasonable traffic manoeuvring. Pine for the old days if you want, but don't glorify them.

Dear Is Mise,

Yup. All perfectly doable right now.

Posted by SteeleRedux, Tuesday, 29 March 2016 3:14:02 PM
Dear SteeleRedux,
Of course if ethical solutions were possible to program into AVs, I imagine all of us would want them to strive for it. However I question the possibility of it being done successfully. Although, with innovation occurring at such a fast rate nowadays, I guess anything is possible.

Posted by Foxy, Tuesday, 29 March 2016 3:27:57 PM
Dear o sung wu,
Your concerns are of course valid, but I'm wondering if you could put your mind to what it would take before you would be comfortable with autonomous trucks. If they were shown to be half as likely to be involved in an accident for every 100,000 km driven, is that a tipping point? Much of our improvement in road safety has come about by decreasing driver impairment from drugs or lack of sleep, as you can probably appreciate. If the awareness available to an AV, whether from its onboard systems like tyre pressure or wear sensors, or its external sensors like radar, or its ability to communicate with other AVs, was shown to be far superior to that of human-controlled vehicles, would that help? I am also yet to be convinced but I am most certainly open to the possibility.

Dear tomw,

Thank you for your reply. You wrote; “If an autonomous car kills a pedestrian, then it is unlikely the occupants of the car, including the "driver", would be considered at fault. The car's manufacturer or the individual software engineer might be at fault, if this was a problem they should have anticipated. However, it may be that no one is at fault.”

I'm not sure there is such a thing as a no-fault accident as far as insurance companies are concerned; why should it be any different here? However the fault may well lie with other parties, such as road designers or works crews who will need to adapt to AVs.

You say “It will not be the cars making the ethical choices, but those who program them.” In one view you are correct, but given the self-learning capabilities already present in these vehicles we can also take the view that this is akin to teaching our children right from wrong. When they are infants that teaching is pretty prescriptive, but as they get older good parents attempt to teach frameworks which will assist their children to make ethical decisions in scenarios that have not been articulated or contemplated.

Posted by SteeleRedux, Tuesday, 29 March 2016 3:52:46 PM
Cont..
You also opined, “The public interest, is likely to be considered paramount, so no you will not be able to buy a car which puts your family first. Cars will be programmed to preserve the lives of a larger number of pedestrians, over a smaller number of passengers. In such a situation the car may be programmed to ignore manual input, just as cars are already programmed to ignore the driver if there is a risk of a rollover.”

Public interest should be paramount, but there is ample evidence that it currently is not. Consider bullbars on 4WDs. Obviously designed to protect the occupant but devastating to pedestrians when they are struck. I think the debate on this is far from settled, which is why I raised it.

Dear Foxy,

Thank you for the reply. You have acknowledged that ethical decision making can be programmed into cars and that it is something we should strive for. Is this a tipping point for you? Should AVs be let loose on our roads without it?

Posted by SteeleRedux, Tuesday, 29 March 2016 3:53:37 PM
Interesting discussion.
The car-in-the-dust problem would cause both cars' brakes to be applied immediately, because the radar can see through dust at that range. The car pulling into the oncoming lane would be detected, a calculation done taking into account all three cars' speeds, and a decision made to accelerate or brake.

The hacker problem is easily solved. Don't connect to the internet.

The big problem is EMI! (Electromagnetic Interference.) To add an older story about EMI: I was driving on the autobahn in Germany some years ago and I saw these wires strung over the autobahn about every one or two metres. Puzzled about this, I called up someone on the radio I had in the car and asked about it. It seems that on my left was Radio Free Europe, beaming over the road and the Czech border into Eastern Europe. The megawatts had the effect of stalling the newer cars fitted with electronic engine management systems. The ADAC (=NRMA) had to position tow trucks at each end of that section.

This type of problem can be designed out, but not always successfully, as those who have parked at Black Mountain in Canberra can testify.

Posted by Bazz, Tuesday, 29 March 2016 4:04:10 PM
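[Editor's note: the accelerate-or-brake decision Bazz describes reduces to a closing-speed calculation. A minimal sketch follows; the gap and speeds are made-up illustrative numbers, not from any real system.]

```python
# Time-to-collision (TTC) with an oncoming car: gap divided by closing speed.
# The figures below are invented for illustration only.

def time_to_collision_s(gap_m, own_kmh, oncoming_kmh):
    """Seconds until two head-on vehicles meet, assuming constant speeds."""
    closing_ms = (own_kmh + oncoming_kmh) / 3.6  # combined approach speed in m/s
    return gap_m / closing_ms

# A 200 m gap with both cars doing 80 km/h closes in 4.5 seconds,
# which bounds how long the controller has to brake or accelerate clear.
print(time_to_collision_s(200, 80, 80))  # → 4.5
```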
Reading this debate, I am reminded of the search and rescue mounted when a driver got stranded in the Big Desert in western Victoria because he trusted the erroneous instructions from the car's GPS over the road signs on the highway saying 'To Mildura'.
Posted by Cossomby, Tuesday, 29 March 2016 4:14:46 PM
SteeleRedux, I don't know how insurance companies assign fault in an accident, fortunately never having had to find out.

As for the self-learning capabilities of vehicles, these would be very much less than children's. I think we are a long way from treating a machine as an ethical, independently thinking person.

PS: There are regulations covering bullbars for vehicles, to take into account the injuries they may cause to pedestrians and passengers (bullbars protect the vehicle, not the occupants): https://www.vicroads.vic.gov.au/safety-and-road-rules/vehicle-safety/bullbars

Posted by tomw, Tuesday, 29 March 2016 5:09:02 PM
EMI more:
In Europe, cars, trucks etc are tested in a strong UHF field to ensure that their systems, including braking systems, continue to operate as designed. Those tests probably eliminate 99% of EMI problems.

However, two-way radio systems in Europe had the effect that some could not be used at knock-off time, as employees could not unlock their cars. This is the same effect as Black Mountain, where cars cannot be unlocked until the TV transmitters are turned off, hence the sign in the car park. Later systems use a different frequency band.

There are stories of cars in the US stalling or accelerating if a nearby car transmitted. Another story is the Sydney Harbour Bridge toll gates before e-tolls. If you transmitted as you reached the toll gates, the gate opened without your having to put money in. Taxi drivers, couriers and the likes of us soon woke up to it and it was a big joke on the DMR for a long time. I used to annoy drivers who had radar detectors. Press the tx button and it would turn their radar alarm on.

However, all that was just to warn that all the programming skills in the world may not be enough to make autonomous cars bulletproof.

Posted by Bazz, Tuesday, 29 March 2016 5:10:57 PM
A good point Cossomby.
If the current GPS systems can get it wrong so that some people get lost on the roads, the mind boggles at what could go wrong with autonomous cars.

I was also wondering how expensive these cars would be to buy for the average person, and how much the insurance might cost. I would imagine that both of these sums would prohibit the widespread use of such vehicles for quite some time yet.

Posted by Suseonline, Tuesday, 29 March 2016 5:11:21 PM
TomW, and the legs of van drivers!

Posted by Bazz, Tuesday, 29 March 2016 5:13:55 PM
Posted by Bazz, Tuesday, 29 March 2016 5:13:55 PM
Dear SteeleRedux,
I'm sure that in the distant future artificial intelligence may well exceed human capacity, but at present it is still in the developmental stage, with the potential for unpredictable malfunction and possibly drastic results. I for one would not trust such a machine in today's world. Not yet.

Posted by Foxy, Tuesday, 29 March 2016 5:57:08 PM
Suseonline: You said: "I was also wondering how expensive these cars would be to buy for the average person, and how much the insurance might cost."

Once they are mass produced they will not be much more expensive than today's vehicles; all that contributes to the unit cost is adding the sensors and computers, once the software is developed. The computers are already very cheap (they would be a few hundred dollars at most) but the sensors are still quite expensive because at the moment they are not off-the-shelf components. However, the sensors' cost will plummet with mass production.

Regarding insurance, it is the other way round. It will be considerably more expensive to insure a human-driven car than a computer-driven one, because cars like the Google car are already better drivers than humans and they will only improve over the coming years. Insurance companies only consider the risk when pricing premiums, and less risk leads to cheaper insurance. In fact, insurance will possibly be one of the main reasons why people will buy these cars. It may even be that within a couple of decades only very specialist insurance companies will insure human drivers, because there will be so few drivers that they will have to perform in-depth research on each individual driver to properly assess the risk; at that stage there are not enough drivers left to distribute the risk over.

Posted by thinkabit, Tuesday, 29 March 2016 8:38:15 PM
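[Editor's note: thinkabit's pricing argument can be sketched as a simple expected-loss calculation. All figures here are invented for illustration; real actuarial pricing is far more involved.]

```python
# Toy risk-based premium: expected annual claim cost, grossed up by a loading
# for expenses and profit. Probabilities and costs are invented examples.

def annual_premium(crash_prob, avg_claim_cost, loading=0.3):
    """Premium = expected loss per year * (1 + loading)."""
    return crash_prob * avg_claim_cost * (1 + loading)

human_driven = annual_premium(crash_prob=0.05, avg_claim_cost=15000)
computer_driven = annual_premium(crash_prob=0.01, avg_claim_cost=15000)

print(human_driven)     # → 975.0
print(computer_driven)  # → 195.0
```

The gap scales directly with the assumed crash probabilities, which is the whole of thinkabit's point: if AVs crash less, their insurance is mechanically cheaper.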
"The car-in-the-dust problem would cause both cars' brakes to be applied immediately, because the radar can see through dust at that range. The car pulling into the oncoming lane would be detected, a calculation done taking into account all three cars' speeds, and a decision made to accelerate or brake."

Does the radar also detect dust-filled holes in the road? The truck was straddling the crown of a narrow dirt road that was barely two lanes wide. Suppose for a moment that both cars entered the dust and the one behind the truck pulled out 10 metres in front of my car; braking distance for both vehicles, say 40 metres in the dirt----BANG!!

Posted by Is Mise, Tuesday, 29 March 2016 10:20:12 PM
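[Editor's note: the 40-metre figure is roughly what simple stopping-distance physics gives for dirt. A back-of-the-envelope check follows; the friction coefficient used for loose dirt (~0.35) is an assumed ballpark value, and real surfaces vary widely.]

```python
# Braking distance from d = v^2 / (2 * mu * g), with an optional
# reaction-time allowance. mu = 0.35 for loose dirt is an assumption.

G = 9.81  # gravitational acceleration, m/s^2

def braking_distance_m(speed_kmh, mu, reaction_s=0.0):
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * reaction_s + v * v / (2 * mu * G)

# 60 km/h on loose dirt, no reaction delay (a system already braking):
print(round(braking_distance_m(60, 0.35), 1))  # → 40.5
```

So with a 10-metre gap, neither a human nor a computer avoids the impact; the physics, not the controller, is the binding constraint.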
Dear tomw,
You wrote; “As for the self-learning capabilities of vehicles, these would be very much less than children. I think we are a long way from treating a machine as an ethical, independently thinking person.”

Wait a second; a self-learning computer has recently beaten the world Go champion, four games to one, something the NYT predicted in 1997 would take a hundred years. AlphaGo's human competitor described the machine as having human-like intuition. http://www.theatlantic.com/technology/archive/2016/03/the-invisible-opponent/475611/

Whilst the likelihood of a computer being able to act as a human being across a broad spectrum of situations is still a way off, driving a car is a relatively simple task and the ethical decisions encompassing that task are not that complicated. I have little problem in thinking that within the narrow confines of controlling an AV a computer could be regarded as an ethical, independent entity. I'm keen to hear the reasons you think this could not be the case.

To those who see GPS failings as reasons not to trust the technology, please recognise that what is being envisaged for AVs involves far more connectivity than a link to a satellite. There have already been some interesting approaches to handling intersections that should dramatically speed up traffic flow and decrease GHG emissions. http://www.youtube.com/watch?v=4pbAI40dK0A http://www.rt.com/news/337009-autonomous-cars-slot-system/

A large number of the traffic jams on our roads can be attributed to inconsistent driver behaviour. http://www.youtube.com/watch?v=Suugn-p5C1M

There is little doubt AVs would have a dramatic impact on road congestion, meaning less pressure to build more capacity.

Posted by SteeleRedux, Tuesday, 29 March 2016 10:21:34 PM
"....driving a car is a relatively simple task"
Not where there are kangaroos. I have dents in the Statesman to prove it. The first one I could have avoided by accelerating, but neither I nor a computer could have foreseen that the roo would dive in front of the car as I braked, nor could one foresee the one that, 20 kilometres further on, suddenly dived out and crashed into my left rear mudguard and tore it partly off.

I have, in the past, successfully avoided 6 roos bent on self-destruction at the same time. I could not have done so had there been oncoming traffic.

A kangaroo, a goat, a pig or a deer can dive in front of a car within the minimum stopping distance at any speed above, say, 10 kph, and where swerving is not an option an impact is inevitable.

Posted by Is Mise, Wednesday, 30 March 2016 2:53:48 AM
Is Mise, the inevitable is inevitable.
Re the dust problem, it might well be possible to detect the dust and bring the car to a halt. However a human-driven car could still charge into the dust. If both cars were computer-driven they would both stop.

Posted by Bazz, Wednesday, 30 March 2016 7:54:27 AM
Bazz,
If the car can see through the dust then there would be no reason to stop; the car following the truck is in a safe position, but if it hits a dust-filled hole and moves out, then if the distance between the cars is less than the possible stopping distance they will either have to swerve dangerously or collide.

What about the transition period when all cars are not driverless? There will have to be provision for human override; bushfires and floods come to mind.

Posted by Is Mise, Wednesday, 30 March 2016 1:33:59 PM
Re the dust-filled pothole, I don't know if there would be a solution.

With two systems, the visible-light sensor would see a "wall" of dust and stop; the radar would look through it and see an oncoming car. Better with instrumentation than without. The transition will have to rely on the fitted car to do the avoiding.

Posted by Bazz, Wednesday, 30 March 2016 3:19:14 PM
Thinkabit says, "Regarding insurance, it is the other way round. It will be considerably more expensive to insure a human-driven car than a computer-driven one, because cars like the Google car are already better drivers than humans and they will only improve over the coming years."
You are joking, aren't you? When a computer-controlled car has won the World F1 championship, Indianapolis, Le Mans, & holds the Bathurst lap record, it might just be time to let the first one loose on a public road. Then, & only then, can they claim they are even equal to human drivers in diverse situations.

Until then they are just a bit of techno garbage for geeks to play with, who, like most scientists, make grandiose announcements their technology can't actually support. I suggest you don't hold your breath while waiting.

Posted by Hasbeen, Wednesday, 30 March 2016 5:51:03 PM
I understand, STEELEREDUX, that there have been many improvements to the safety of heavy vehicles, with a sort of 'black box' arrangement fitted to many of them, allowing authorities to monitor and micro-manage the safety elements of various components fitted to those heavy vehicles, and the safety limits placed on drivers. All these measures, though laudable, are a case of shutting the gate after the horse has bolted, as it were.

To answer your question as honestly as I can: I don't believe a time would come where I'd ever be comfortable with driverless, Level 4 autonomous heavy vehicles on our Hwys. Too many variables that 'may' interfere with the technology. Systems failure can occur. Unforeseeable or unexpected weather events can arise, which again 'may' interfere with the system. Vandalism, or worse, sabotage may occur.

I realise I may appear to be a real old 'fuddy duddy' in the eyes of most. That's OK, but any sort of driverless vehicular movement, save for that consigned to a permanent set of rails, just isn't something that I can get my head around, to be very honest with you.

Posted by o sung wu, Wednesday, 30 March 2016 7:56:43 PM
Dear o sung wu,
I think there is most certainly a role for the older generations, and I include myself in this, to temper the enthusiasm of the young. The work they will have to do to assuage concerns you and I might have is vitally important. It doesn't mean we should be blind to the possibilities of the technology, just that the bar should be set as high as it needs to be to gain the confidence of the vast majority of us.

There will be some like Hasbeen who would have AVs require an F1 licence before being allowed to drive on our roads, completely forgetting of course the number of 'geeks' it takes behind the wall to win a championship, but the rest of us need to be shown these things will be markedly safer and more functional than what we have at the moment. I imagine there will be a few stumbles along the way, but that is the same with any new technology. We shouldn't forget that the early motor vehicles were required to have a man with a flag walking in front of them warning other road users.

But we shouldn't underestimate how fast the technology is advancing. This is a TED talk from over three years ago on the subject of quadcopter agility. Keep in mind these things have a brain the size of a fingernail, but the computational feats they are able to perform are extraordinary. http://www.youtube.com/watch?v=w2itwFJCgFQ

I do think there is a good rationale for insisting that autonomous vehicles at least have rudimentary ethical functionality. It would make me more accepting of their introduction. This is precisely the reason for my posting this thread. I would not want them to have a basic 'when in doubt slam the brakes on' algorithm, especially as they could be so much more reactive and 'responsible'. Ultimately I feel we should be insisting on it, and now is the time to be talking these issues through.

Posted by SteeleRedux, Wednesday, 30 March 2016 8:45:17 PM
Hasbeen: Your example of racing car drivers is about the worst example you can give. Racing drivers are horrendously bad drivers when it comes to safe driving. I've never heard of a Bathurst race where there haven't been accidents and crashes. Indeed many people watch motor racing on TV for the crashes, which they are almost guaranteed to get in any race of sizable length. These drivers are simply driving way too fast for the conditions; they are extremely unsafe drivers. (I've often wondered how on earth it is even legal, because (well, in QLD anyway; I don't know about NSW) it is illegal to operate a car in a dangerous way, and this includes on non-public roads.)

The standard that the Google car has to better is a driver who is 55+ years old and has been driving every day, in city traffic and distance travel, for the last 40 or so years and has never once been in a single crash, nor even had a slight bump with any other object. But not only that: a driver who has never had a speeding ticket, nor driven past a stop sign without completely stopping (unless legally allowed to, such as making way for an ambulance under lights), nor shot a red light, nor cut in on someone when overtaking, nor spun the wheels on a wet road doing a hill start, nor failed to indicate correctly on a roundabout, nor failed to dip their headlights in time for oncoming cars, nor parked illegally, etc...

However, if it is speed that interests you, then computers definitely are the way to go. The fastest steerable man-made objects, ie: rockets, are driven completely by computers when under speed. Do you really think that if rockets were controlled and steered by humans (without any computers whatsoever) they would get anywhere near a designated rendezvous point for docking with the space station? Do you really think that humans could fly a terrain-hugging cruise missile better than a computer? Would you really bet on the fighter jet under human control avoiding a modern surface-to-air missile?

Posted by thinkabit, Thursday, 31 March 2016 9:24:22 AM
thinkabit,
You didn't factor in 'roos, deer, goats and pigs.

I might add that out on the road all the racing and rally drivers that I've known have been very safe and considerate drivers; they know and understand the physical forces involved in handling a car.

Posted by Is Mise, Thursday, 31 March 2016 1:56:32 PM
Garbage thinkabit, there is an old adage in motor racing that goes, "to finish first, first you have to finish". The odd hoon goes racing, but after a crash or 2, or having their lack of ability highlighted by comparison, they soon scuttle back to showing off on the road.

Motor racing was the spur that developed so many of the systems that make the mundane shopping trolley so safe today.

As for racing drivers crashing, I still hold a couple of Bathurst lap records. I raced there for years, winning many races or classes, up to Formula 1, & without a geek to be seen. In some thousands of miles around Bathurst, I never so much as put a scratch on a car. I did however encounter many unexpected situations, requiring instant correct decisions to avoid an accident.

As well as being able to drive very quickly, it is this ability to recognise & avoid suddenly developing situations that distinguishes the good from the less good driver. Thus I want to see the computer achieve a very high level of this ability before they are allowed anywhere near public roads. No amount of simulations, or trundling around roundabouts at slow speeds, will prove their ability in emergency situations.

The race track is the logical venue to evaluate the computer's ability to handle those times when it all goes pear-shaped. A 100 kilometre race at Bathurst would be more testing than 100 thousand kilometres around town.

Posted by Hasbeen, Thursday, 31 March 2016 7:23:04 PM
Dear Hasbeen,
Here you go mate, footage from 4 years ago. http://www.youtube.com/watch?v=YxHcJTs2Sxk

Shelley's done even better since.

"Driverless cars now out-perform skilled racing drivers, engineers at Stanford University have shown, after pitting their latest model against a track expert. The team has designed a souped-up Audi TTS dubbed ‘Shelley’ which has been programmed to race on its own at speeds above 120 mph at Thunderhill Raceway Park in Northern California. When they tested it against David Vodden, the racetrack CEO and amateur touring class champion, Shelley was faster by 0.4 of a second."

Granted, still not mixing it with other cars, but I would say in a few years that will be happening too.

Posted by SteeleRedux, Thursday, 31 March 2016 10:05:35 PM
That is exactly my point Steely. It is the other cars that generate the emergency situations that test the ability of the good driver.

It is pretty easy to fine-tune a computer to achieve a good result when you can progressively change the program to suit a given circuit; this is not driving, or anything approaching it.

When they can reliably pick their way, quickly & regularly, through the first-lap melee of a crowded grid, on a strange track, I will withdraw my objection entirely. Until then, no way! I remain to be convinced that geeks, renowned lousy drivers, will ever get them to do it.

Posted by Hasbeen, Friday, 1 April 2016 2:30:29 PM
At the moment I am in India and have been having second-hand driving experiences; I'd never attempt to drive under local conditions.

The British established driving on the left and the other road rules, and most Indian drivers agree with them, but a significant number break most rules and some drivers all of them. Yesterday's prime example was three motorcyclists dodging through a gap in the median strip, on a major and crowded road, and coming against the traffic; number two was another bloke on a motorbike riding over the median strip at right angles and accelerating (with inches to spare) into a side street.

Cars and bikes (motor and pedal) overtake on the left, then cut in front of other drivers to the accompaniment of a chorus of horns. The short of it is that the volume of traffic is such that adherence to the rules causes a grinding halt. Driverless, logically controlled vehicles could not cope, and I'm in a provincial city; Mumbai is much worse and Delhi unbelievable.

An Australian traffic policeman would turn white-haired in minutes!!

Posted by Is Mise, Friday, 1 April 2016 4:03:56 PM
Hi there IS MISE...
I was in Calcutta in the early nineties and trying to drive there was an absolute nightmare, minus the road rage. A squillion low-powered motorbikes, and everything in between that was capable of mounting an engine of sorts, not only carrying around the entire family but other sundry household goods. Dodging old overcrowded omnibuses that were spewing out all manner of diesel effluvium, yet the driver was heavily engaged in a serious conversation, with just an occasional glance at oncoming traffic as he did so. Like the People's Republic, I saw two fatal accidents on the same day in Calcutta, where the deceased was just left in situ until someone in authority came along and removed him to wherever. An amazing experience, I have to say.

STEELEREDUX, to thoroughly test a Level 4 autonomous vehicle control system, perhaps it should first be trialled in a crowded city like Calcutta, where recognition and observance of local traffic legislation could be scrutinised. What do you think?

PS: A very interesting and thought-provoking topic, I believe!

Posted by o sung wu, Friday, 1 April 2016 7:52:23 PM
o sung wu, mate, I'm surprised at you. Why would you wish one of our more harebrained techno monstrosities on the poor people of India?
Your post tells us they are perfectly capable of killing off each other, with no input from us. If we want to play silly games with electronic contrivances, at least we should be the ones to pay the cost.

Just how many do you think would survive the 6 week traffic jam that would probably ensue if these things were trialled there?

Posted by Hasbeen, Friday, 1 April 2016 8:46:23 PM
Yeah, shame on me HASBEEN! Seriously, any 'remotely controlled' or fully autonomous vehicle should be tested in an area similar to that of say Manila, or any major city in India, even a crowded Chinese city perhaps: anywhere the average driver doesn't care a 'tinker's cuss' for road rules or the normal courtesies that are generally shown on busy roads. Places like Tokyo are too well regulated, and drivers too acquiescent to the road laws, to mount a fair trial, in my view.

Of course the testing would need to be very, very exhaustive before allowing 'real time' trialling anywhere human life may be endangered, however minuscule that danger may be. HASBEEN, as I said to STEELEREDUX, I (personally) couldn't envisage a day where I'd feel sufficiently comfortable sharing the highways and byways with any Level 4 autonomous vehicle. Who knows, sometime in the near future science will no doubt further develop a system so safe and so foolproof that even an old goat like me would probably accept it.

In conclusion, given our ages are similar, as are our earlier backgrounds, it is more likely that someone like me will acquiesce to this brilliant technology before you. Why, you ask? Because of your long association with 'speed' (beginning with Navy fighters), and auto racing in all its different classes, you would be more entrenched and established in your own abilities behind the wheel than I, an ordinary old man driver. I dunno mate, what do you think?

Posted by o sung wu, Saturday, 2 April 2016 12:35:43 PM
o sung wu, one thing I must correct is your impression that I like speed. I am interested in controlling both horses & mechanical contrivances, & completely uninterested in how fast they will go in a straight line.

I love sailing, but prefer a yacht at 9 knots over a catamaran at 20 knots. Landing an aircraft at 130 knots is fun, but flying for hours at 30,000 ft, at 450/500 knots, is the definition of boring. I prefer a horse at a very controlled speed show jumping to galloping around the cross-country course.

I love driving a quick car around a bend as fast as it will go, sometimes at less than 60 km/h, where I am in complete control, but I was very unhappy, approaching scared, driving a light 1000 pound Formula 1 down Conrod Straight at 180 MPH, bouncing from bump to bump, & very much not in full control.

Nothing will get me to ride a Ferris wheel. I am not interested in being injured because some labourer forgot to tighten a critical nut. In the same way I have no desire to depend on some computer programming geek getting it right with a driverless car. I am sure they will get it right & be successful in the future, but like the failures with computer-controlled aircraft, quite a few will die while they are getting there.

Does it really matter whether it is a driverless car or a fool hoon who crashes into you head on at 100 km/h? I guess not, but I am stuck with the hoons, due to our gutless judiciary; I see no reason to introduce yet another danger we don't actually need.

Posted by Hasbeen, Saturday, 2 April 2016 1:45:21 PM
http://www.drive.com.au/new-car-reviews/tesla-model-s-p90d-review-20151025-gkic8a.html
A new future is just around the corner.
I recently participated in a Stanford University online survey (now finished) regarding attitudes to autonomous cars, which raised some quite interesting scenarios. For instance, if a car in full autonomous mode were to strike and kill a pedestrian, who should shoulder the responsibility if that person was not at fault themselves? The driver? The car's manufacturer? The software engineer?
Where things get even more interesting is around decision making within the software. Should we be expecting that ethical decision making be programmed into the vehicle?
For instance should a car swerve to avoid 3 pedestrians on the road to hit one pedestrian on the footpath? This is a classic philosophical quandary called the Trolley Problem.
http://www.youtube.com/watch?v=bOpf6KcWYyw
Let's extend it and say the car is faced with the scenario of being unable to brake in time to avoid one of two obstacles, a pedestrian or a parked truck. Each would potentially kill a human, either the pedestrian or the vehicle occupant. Whose safety is paramount?
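[Editor's note: one way such a choice could be encoded is as a weighted expected-harm minimisation. The sketch below is purely illustrative; the options, harm estimates and weighting scheme are invented, and no manufacturer is known to use this exact approach.]

```python
# Illustrative cost-minimising chooser for an unavoidable-harm scenario.
# All numbers and the weighting scheme are invented for illustration.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    pedestrian_harm: float  # expected harm to people outside the car
    occupant_harm: float    # expected harm to people inside the car

def choose(options, occupant_weight=1.0):
    """Pick the option with the lowest weighted expected harm.

    occupant_weight = 1.0 treats everyone equally; a value above 1.0
    models a car biased towards protecting its own passengers.
    """
    return min(options, key=lambda o: o.pedestrian_harm
               + occupant_weight * o.occupant_harm)

options = [
    Option("swerve into parked truck", pedestrian_harm=0.0, occupant_harm=0.5),
    Option("brake and strike pedestrian", pedestrian_harm=0.8, occupant_harm=0.0),
]

print(choose(options).name)                       # → swerve into parked truck
print(choose(options, occupant_weight=2.0).name)  # → brake and strike pedestrian
```

Whoever sets `occupant_weight` is answering exactly the question posed here about whose safety is paramount; the ethics live in that one parameter, not in the minimisation itself.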
Would you purchase a car that was programmed with the sort of software that doesn't place your and your family's well-being at the top of the list?