Where I live in rural France it's not unusual to have a deer jump out of the vineyards at the side of the road, or for a wild boar (built like a tank but twice as aggressive) to saunter across the road and defy you to try your luck. BUT the roads are really narrow with deep drainage ditches on both sides, so swerving is likely to be a very expensive mistake. Since I've been here I have twice managed emergency stops just a few cm from a wild boar, and lost assorted bits off the front of the car to a jumping deer.
My guess is that any half-decent AI driving system would have hit the brakes many milliseconds before I did - making no difference to the boars, but maybe saving the life of the deer and hundreds of Euros in car damage.
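
Just to put rough numbers on that reaction-time difference, here's a back-of-the-envelope sketch; every figure in it is an assumption (speed, reaction times), not a measurement:

```python
# Back-of-the-envelope: distance travelled during the "thinking time"
# before the brakes even engage. All figures are illustrative assumptions.

speed_kmh = 80                      # assumed country-road speed
speed_ms = speed_kmh / 3.6          # ~22.2 m/s

human_reaction_s = 1.0              # typical human perception-reaction time (assumed)
ai_reaction_s = 0.1                 # assumed sensor-to-brake latency for an AI system

human_dist = speed_ms * human_reaction_s   # ~22 m travelled before braking starts
ai_dist = speed_ms * ai_reaction_s         # ~2 m travelled before braking starts

print(f"Human rolls {human_dist:.1f} m before braking; "
      f"AI rolls {ai_dist:.1f} m - roughly {human_dist - ai_dist:.0f} m saved")
```

With those assumed numbers the AI starts braking about 20 m earlier, which is easily the difference between a dented bumper and a dead deer.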

It seems clear to me that self-driving cars will be forced upon us, whether we want/like them or not, by the Powers That Be who either think they know what's best for us ("it'll be safer!") and/or don't know/care about the details (e.g. the ethical "kill pedestrians or my passengers" dilemma) and/or the technology (Kodak's CEO and upper management knew nothing about their own technology, and look where that got them), or who just don't care about what's best for us, as long as it saves/earns them ever more money. Eventually the 1% will have sucked so much money out of the 99% that it will reach the point where only the 1% will be able to afford their own cars anyway, and self-driving cars will be used like a taxi service until the 99% can't afford that either. Then everyone will bicycle until they can't afford bicycles, and eventually everyone will be reduced to walking, and the car-centric urban centers will either empty or be replaced with something else. Of course, this may be a longer view than you had in mind...

only the 1% will be able to afford their own cars anyway

If you are in the 1% then you probably don't drive your own car anyway. Having a human driver is just one more person you can feel superior to on a regular basis. Unless it's a $250,000 Ferrari, in which case you don't want it to drive itself.

As for the emergency braking, I believe that is a feature in some non self-driving cars already.

As I mentioned, I think that an early entry into the driverless car space will be companies like Uber (who is already tackling the problem, of course), as they have the financial means, the pressure, and the motivation to make it happen quickly. People getting into Ubers put their lives into the hands of the drivers all the time already. I think the ethical dilemmas revolving around what a car should decide when human A would rather hit an animal and human B would rather injure themselves over an animal are a moot point for Uber's generation of cars. Personally, I would feel a billion times more comfortable in an Uber driverless car than being driven around by a random Uber human.


I imagine the ideal system would calculate 20 different scenarios in the blink of an eye and take the appropriate action, whereas the human would just react subconsciously - or maybe consciously - in the so-called thinking-time portion of the overall stopping time. But the question still remains as to what the appropriate action would be. Human drivers are very prone to human error. What would driverless cars be prone to? Programmer error? Where does the responsibility lie in the event of a fatality, for example? Sue the arsehole with more money than sense, or sue HAL by deleting him?

How about letting the potential owner fill out an ethical questionnaire with choices in common scenarios? The car would decide based on the weights assigned from the answers. That way the car is acting as the agent of the owner, and the owner can thereby be sued for damages, wrongful death, etc. These questionnaires already exist online for ethical research.
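
In case it isn't obvious how little code that idea actually needs, here's a purely hypothetical sketch; the harm categories, weights, and scenarios are all invented for illustration:

```python
# Hypothetical sketch: the owner's questionnaire answers become weights,
# and in an emergency the car picks the outcome with the lowest weighted "cost".
# Categories and numbers are made up for illustration only.

owner_weights = {            # derived from the owner's questionnaire answers
    "occupant_injury": 10.0,
    "pedestrian_injury": 12.0,
    "animal_death": 1.5,
    "property_damage": 0.5,
}

def scenario_cost(outcome):
    """Sum the owner-weighted cost of everything this manoeuvre is predicted to cause."""
    return sum(owner_weights[harm] * severity for harm, severity in outcome.items())

scenarios = {
    "brake_straight": {"animal_death": 1.0, "property_damage": 2.0},
    "swerve_into_ditch": {"occupant_injury": 0.6, "property_damage": 4.0},
}

chosen = min(scenarios, key=lambda name: scenario_cost(scenarios[name]))
print(chosen)   # the manoeuvre the owner's stated ethics would select
```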

My car is equipped with a pedestrian-detecting device. If I drive slower than 30 km/h my car will stop automatically. OK, let's say it's progress.

I believe that, in the case of an accident, the entity most at fault or most negligent should be held responsible.

If it was a result of user error or user maintenance, such as not properly cleaning snow off the sensors, not ensuring the car has installed the most up-to-date software updates, using the device beyond its recommended lifespan, not undergoing regular maintenance and ensuring that all hardware is operating normally, etc., then the car owner is at fault.

If it was a result of a programming glitch, or something beyond the end-user's control, then the car manufacturer who allowed their code to be released into the wild is at fault. We cannot blame a specific Tesla engineer, for example, for a bug that may have accidentally been introduced and was not caught by any quality assurance before making it into the wild, or an engineer who programmed the ethical algorithm they were instructed to implement. The car manufacturers should be responsible for the algorithms they release, as well as for ensuring their QA checks meet appropriately high standards, since the software is used in life-or-death situations.

In other words, the entity at fault is the one whose behavior or negligence contributed the most to the accident.

I don't really see much of a unique ethical dilemma at all, beyond what our civilization already deals with when it comes to technology that is life affecting.

For example, if Johnson & Johnson releases a critical medical device that malfunctions as a result of a software bug and directly causes a fatality, is the doctor at fault because he was the one using the device at the time, despite there being absolutely no user error or negligence on his part? IMHO it should be Johnson & Johnson's moral and legal responsibility to ensure that their products which have the capacity to cause death in the event of a software malfunction are bug-free. If the doctor misused the device, did not properly follow all optimal usage instructions, etc., then that's different.

Either way, I don't see a car, as a device operated by its end user, any differently from the many other ethical tech hypotheticals that already exist today.


OK, I see the sense of your argument, Dani. Will autopilot make road travel safer? I believe it will, but accidents may become sensations.

the entity most at fault or most negligent should be held responsible

That's the way it is supposed to work, but we all know the lawyers go for the deep pockets. No point in suing the guy who is making minimum wage.

I think you hit the nail on the head WRT the medical devices analogy.

Will autopilot make road travel safer?

That depends entirely on the quality and functions of the autopilot equipment and software.
If it's created to the standards used in commercial aviation, yes, but only if by law everyone is required to use it at all times.

If it's created to the standards of most software overall, absolutely not.

The ethical considerations regarding "If the AI can avoid a fatal collision involving its own car only by causing a 20 car pileup of other cars, should it do so or should it sacrifice itself and its passengers?" are important and interesting and, like almost all ethical considerations, probably have no agreed-upon solution. Does that doom the concept out of the gate? Can one program a self-driving car WITHOUT programming in answers to these questions? Do you weight the value of a child being injured differently from an elderly person? What if the INSURANCE COMPANIES start weighing in on who gets hit based on how much money will need to be paid out? How about the Machiavellian decision to intentionally run over accident victims a second time because it costs less if the victim is killed (the veracity of this article is in dispute; Snopes has it as "unproven")?

http://www.slate.com/articles/news_and_politics/foreigners/2015/09/why_drivers_in_china_intentionally_kill_the_pedestrians_they_hit_china_s.html

Decisions, decisions. To assume that the AI will be programmed based on morality versus the almighty dollar is to assume a lot. Ford Pinto anyone?

IMO a good thing about these AI debates is that we are FINALLY debating this stuff, which we should have been debating all along. I don't recall ever being told that it was my duty to intentionally absorb an accident if the alternative meant plowing into a bunch of innocent kids on the sidewalk. My personal morality tells me to absorb the accident, but I know a lot of people who think quite highly of their own importance and would take out the kids IF they thought they could convince a jury that "It all happened so fast. I didn't even see those kids".

Technologically I believe it's all possible, and it will be implemented in self-contained systems like military convoys to great success. You'll have vehicles going 100 mph two feet from each other without colliding. The problem is what to do with the mix where some cars have it and some cars don't, and even the ones that have it don't all run the same programs. Sometimes your catlike reflexes on the brakes will cause the guy without catlike reflexes tailgating you to hit you.

If it's created to the standards of most software overall, absolutely not.

Ah yes. The old "we skip the testing and pass the savings on to you" business model.

I think that there will need to be government laws of the road that standardize the algorithms for all self-driving cars. The same way humans have rules of the road (person on the right goes first at a stop sign, etc.), I believe there will be algorithmic rules that all self-driving cars must adhere to. The driverless cars will be less likely to collide if they can predict the behavior of all other driverless cars around them, because ethical algorithms, etc. will all be standardized. The wildcards always have been, and will continue to be, the humans, who each drive with a different sense of morality, a different level of attention and coordination, a different level of distractedness, etc. I think if driverless cars are each able to predict, down to the millisecond and inch, the behavior of every other driverless car around them, the roads will be much safer than they are today. And I think they would be able to do that if standardized rules of the road are put into play that the algorithms for all car manufacturers must adhere to.

OK, I see the sense of your argument, Dani. Will autopilot make road travel safer? I believe it will, but accidents may become sensations.

So wouldn't that be a glorious thing? Imagine a world where car accidents are so rare that, should one occur, it will be news-worthy. Who wouldn't want to work towards this?

Ah yes. The old "we skip the testing and pass the savings on to you" business model.

"early access", sign up now for discounts on future DLC...

Who wouldn't want to work towards this?

What's the catch? One way to make car accidents an extreme rarity is to make car ownership illegal or otherwise impossible (like by making the cost prohibitive for all but government officials driving cars out of other people's tax money).
I wouldn't want to work towards that.

As is, car accidents are already pretty rare, at least accidents that cause such serious injuries that people are permanently disabled or killed.
Those are the few and far between ones that make national news, rather than a sideline about why a major road was blocked for an hour or so while wreckage was removed (and even those articles aren't all that common).

The only reason you tend to see them as much as you do is the sheer volume of traffic on the roads. And the only realistic way to get that down is to reduce that volume, which, without a major investment in the frequency, capacity, travel time (reduction) and cost (reduction) of public transport, will require a massive campaign to do just what I described: make operating a car illegal or financially impossible for the vast majority of people.

And making public transport more frequent, giving it higher (or better, more easily adjustable) capacity, AND reducing travel times and cost to the user all at the same time has, over the decades, turned out not to be possible either.
Increasing schedules and/or capacity always ends up increasing cost; making services run faster (and thus reducing travel time) means reducing the number of stops and thus having many areas get less frequent service, etc. etc.

For my current job I've looked seriously at both public transport and an electric car.
BUT public transport would add 4 hours a day to my commute, which is clearly not an option, and an electric car would add at least 45 minutes and more likely 90 minutes to my commute as I'd have to charge it on the way to and/or from work at a roadside charging station, given that current budget-model electric cars lack the range to make the round trip, there is no charging station at my office, and there is nowhere in my suburb I can charge overnight. (And I doubt the city council would approve of me running a charging cable from my upstairs window, through my garden, across the sidewalk, and probably across the street, to my car; the houses here don't have garages or carports, it's all curbside parking...)

These aren't so much technical problems as infrastructure problems that will need to be addressed. Simply declaring a law that states "by 2040 all cars must be self-driving and electric" isn't going to work (never mind that some countries' governments are trying to do just that), and the same is going to be the case for self-driving vehicles, which in large part will face the same kinds of problems.
Without communicating with each other and road side sensor networks they'll never be able to react fast enough to changing conditions, as a slew of accidents during testing over the last few years have shown time and again.
But the standards for those infrastructure changes aren't there, nor is the resulting implementation, which would of necessity need to be done by national road management agencies and city councils (the same people who'll probably have to get involved in installing charging stations for electric cars in every suburb and city street, at every house, as well as at every gas station in the country, or at least offer incentives for gas station operators to make the investment there).

"What would driverless cars be prone to? Programmer error?"

Mostly their algorithms will have known error rates - e.g. image recognition misses something or gets something wrong - which aren't really anyone's "fault" because they are based on machine learning, which isn't perfect. I'm sure there will be gov't-mandated limits on what those error rates can be.
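
To make "error rate" a bit more concrete, here's a toy calculation (every number invented) showing why a per-frame miss rate is only half the story; what matters is how it combines over the many frames in which an obstacle is visible:

```python
# Toy numbers only: how a per-frame detection miss rate compounds (or doesn't)
# when the same obstacle is visible in many consecutive camera frames.

per_frame_miss = 0.001        # assumed 0.1% chance a single frame misses the obstacle
fps = 30                      # assumed camera frame rate
visible_seconds = 2.0         # assumed time the obstacle is in view before impact

frames = int(fps * visible_seconds)
p_missed_every_frame = per_frame_miss ** frames   # independent-frames assumption

print(f"Chance of missing the obstacle in all {frames} frames: {p_missed_every_frame:.3e}")
```

The independence assumption there is very optimistic (consecutive frames from the same fogged-up camera are anything but independent), which is exactly why any mandated limits would need to be defined carefully.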

"As is, car accidents are already pretty rare, at least accidents that cause such serious injuries that people are permanently disabled or killed."

Is 40,000 fatalities per year in the USA "pretty rare"?

"an electric car would add at least 45 minutes and more likely 90 minutes to my commute as I'd have to charge it on the way to and/or from work at a roadside charging station"

Really? You already have a >2h daily commute? Why not get a plug-in hybrid? The fuel savings from that commute would probably pay for the premium within a few years.
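
Very rough payback arithmetic, with every figure a guess to be swapped for your own commute distance and local prices:

```python
# Rough plug-in-hybrid payback estimate. Every number here is an assumption;
# plug in your own commute and local fuel/electricity prices.

daily_km = 120                 # assumed round-trip commute
work_days = 220                # working days per year
petrol_l_per_100km = 7.0       # assumed consumption of the current car
petrol_price = 1.60            # EUR per litre, assumed
electric_kwh_per_100km = 18.0  # assumed consumption in electric mode
electricity_price = 0.25       # EUR per kWh, assumed
extra_purchase_cost = 8000     # assumed price premium of the plug-in hybrid

yearly_km = daily_km * work_days
petrol_cost = yearly_km / 100 * petrol_l_per_100km * petrol_price
electric_cost = yearly_km / 100 * electric_kwh_per_100km * electricity_price
yearly_saving = petrol_cost - electric_cost

print(f"Saving ~{yearly_saving:.0f} EUR/year; payback in ~{extra_purchase_cost / yearly_saving:.1f} years")
```

With those made-up numbers it works out to roughly four and a half years, so "within a few years" seems plausible.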

I think that there will need to be government laws of the road that standardize the algorithms for all self-driving cars.

Ugh. I suppose it's inevitable and the least bad solution, but I tend to cringe when government is tasked with determining what my ethics should be, on anything. Government is a necessary evil. Necessary yes, but evil. "Evil" and "ethics" together. Shudder.

Government does pretty well on non-ethical rules like driving on the left side versus the right side. Neither is better, but everyone has to pick the same side, so government steps in.

For the sake of argument, let's say I'm in charge of the new Ethical Driving Department for the US government (I don't even want to tackle internationalizing morality just yet, but it's going to complicate things even more since companies will be outsourcing some of the algorithm work to other countries). Even in this thread we can't come to agreement. Given a choice between his car running over a cat versus swerving into oncoming traffic and killing him AND ME, he wants to save the cat. I'm pretty sure he's in the minority here in that straight-up choice, but now let's say the AI decides that there's a 99.9% chance it can swerve into oncoming traffic, then swerve back in time to avoid hitting anything at all. Lots of folks will still say hit the cat, but for me, there's a probability somewhere where I'm down with risking people's lives to spare the cat. That critical value is going to vary for dogs, cats, deer, etcetera. It's also going to vary, for me personally, based on the drivers involved. If I see a KKK bumper sticker on a car, I'm less likely to value your life. I also value Oakland Raider fans' lives over San Francisco 49er fans', but that's just hometown pride talking. I wouldn't actually program that into my algorithm for real, but Lord knows some people would. As the head of this agency, I'm stuck with determining that 98.7128% is the correct cutoff point in this case. NO ONE will be happy no matter what number I pick.
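
And that cutoff point really is just a constant buried in a conditional somewhere, which is what makes the job so thankless. A purely hypothetical illustration (the threshold and the decision logic are invented, not any real policy):

```python
# Hypothetical illustration: the whole debate eventually collapses into a few
# constants somebody has to sign off on. Values here are invented, not policy.

SWERVE_SUCCESS_THRESHOLD = 0.987128   # the cutoff point from the post above

def choose_manoeuvre(p_swerve_clears_traffic):
    """Swerve to spare the animal only if the estimated chance of clearing
    oncoming traffic meets the mandated cutoff; otherwise brake and accept the hit."""
    if p_swerve_clears_traffic >= SWERVE_SUCCESS_THRESHOLD:
        return "swerve"
    return "brake_and_accept_collision"

print(choose_manoeuvre(0.999))   # -> swerve
print(choose_manoeuvre(0.95))    # -> brake_and_accept_collision
```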

The ugly fact is that we're going to have to plug in numbers to this algorithm based on all sorts of probabilities. If we actually honestly tackle the issue, we're going to have to assign certain groups as being more valuable than other groups. We ALREADY do this with kids, with the already written laws regarding speed limits in school zones, and you are getting punished far worse if you plow into an innocent kid with her whole life ahead of her as opposed to running over a middle aged parolee.

Now, do we actually PROGRAM these biases into the algorithms and have the car act upon these biases? And do we REALLY want the government in charge of whose life is worth more than others?

And do we REALLY want the government in charge of whose life is worth more than others?

The question is: who else?
A strictly-for-profit company? Fundamentalist religious types? The general public (who voted for Trump and Brexit)?

The question is: who else?

Who else should? Or who else will? We've been discussing who SHOULD a lot in this thread, but if history is any judge, pesky issues like this get wrestled with AFTER the horse is already out of the barn. So whatever entity does it first. That could be the US military, some other military, Bill Gates, Mark Cuban, some other eccentric billionaire, or some company which may or may not pay much attention to this aspect of things, which means that it defaults to the particular programmer's ethics who slips in his/her viewpoint unchallenged. If that's Happy Geek, that means less roadkill. Eventually someone will sue and it'll go to court or somehow or other get looked at ten years after we have millions of those cars on the road. THEN we'll "solve" it.

I'm trying to think of a new technology that tried to solve these problems BEFORE they came up and I can't think of any. We invented the internet, THEN worried about security. We invented those drones with cameras and we still haven't tackled privacy, congested airspace, etcetera, etcetera. So too with self-driving cars. Society and regulation are reactive. Expect lots of accidents while the kinks get worked out.

A big problem for all of this discussion IMO is the assumption that an algorithm can accurately estimate the probability of collisions in various situations. Given that most of these algorithms are developed by machine-learning on training data, I find it hard to believe there will be the training data available to get much accuracy on such predictions. Not to mention that even distinguishing different animals from a high resolution image isn't all that accurate yet - never mind trying to figure out the age of the occupants of an oncoming car in the rain within the amount of time necessary to make the decision to avoid/collide.

I think a lot of these issues are just not going to be applicable, because the algorithms aren't omniscient and making a fast decision will be prioritized over making the most optimal one because it doesn't matter if you get the exactly optimal decision if you don't make that decision before the crash actually happens. IMO the most likely solution is just going to be to minimize the impact force of any collision that happens: prefer hitting a stationary object over one moving towards you, and prefer hitting a smaller object over a larger one. The only exception that needs to be made is for bicyclists, because I hope a general consensus would be that hitting a deer/wall is better than hitting a cyclist.
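
Here's a sketch of what that "minimize the impact, decide fast" fallback might look like; the candidate outcomes and numbers are invented, and the cyclist carve-out is the one hard override:

```python
# Sketch of the "minimize impact, decide fast" fallback described above.
# Candidate outcomes and their attributes are invented for illustration.

candidates = [
    # name,            closing_speed_kmh, relative_size, involves_person
    ("parked_car",      30,                1.0,           False),
    ("oncoming_truck",  110,               3.0,           False),
    ("roadside_deer",   30,                0.4,           False),
    ("cyclist",         25,                0.3,           True),
]

def impact_score(closing_speed_kmh, relative_size, involves_person):
    """Lower is better. Prefer slow, small, stationary targets; never pick a person."""
    if involves_person:
        return float("inf")            # hard override: cyclists/pedestrians excluded
    return closing_speed_kmh * relative_size

best = min(candidates, key=lambda c: impact_score(*c[1:]))
print(best[0])   # -> roadside_deer: the smallest, slowest-closing non-person target
```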

I'm trying to think of a new technology that tried to solve these problems BEFORE they came up and I can't think of any.

The only one I can think of which comes close is cloning. It didn't take long after the first large mammal was cloned for human cloning to be deemed unethical and banned in most (all?) countries.

never mind trying to figure out the age of the occupants of an oncoming car

If, as I suspect, we soon either all get chipped or in some other way register as occupants of the car, all voluntarily of course, when we get into it, there's nothing to figure out. Any two oncoming cars immediately exchange all the relevant information. Paranoid? I'm not so sure. Again, voluntarily, sort of, just like we voluntarily allow all of our purchases to be tracked at the grocery store by using their cards in order to get the discounts which are only available by using the card.

making a fast decision will be prioritized over making the most optimal one because it doesn't matter if you get the exactly optimal decision if you don't make that decision before the crash actually happens.

Computers are fast enough now, or soon will be with Moore's Law and all that, to crunch all the actual collision-related data (speed, angles, road conditions, etc.) with ample time left over to calculate stuff like "If I intentionally kill everybody, the insurance company will have to pay out less in medical bills than if I slam on the brakes and slow down so that there is a horrible, but non-fatal collision".

Am I being too cynical?


Any two oncoming cars immediately exchange all the relevant information.

I'm not convinced this will happen. Making self-driving cars capable of talking remotely with each other also makes them vulnerable to hacking, and what is gained? Almost all the information a car needs to avoid/optimize collisions could be obtained by cameras & light signals, which will have to be there anyway in case someone has decided they want to drive themselves. A car AI which can't drive itself properly without communicating with other cars is not going to be viable until it is illegal for a human-driven vehicle to share the road with the self-driving cars.

Computers are fast enough now, or soon will be with Moore's Law and all that, to crunch all the actual collision-related data (speed, angles, road conditions, etc.)

But again, that assumes the computer "knows" everything about its situation. Most car accidents happen because the driver doesn't know everything about their situation - e.g. they don't know that this part of the road is particularly slippery, they don't know there is a patch of black ice under the snow, they don't see the oncoming car because of fog or an incline. Or the driver is inattentive/irrational. It is the latter case that makes most of the ethical dilemmas not really matter when it comes to commercial viability, because people will care more about the 50X lower chance of getting into an accident (or whatever the number turns out to be) than about the specific ethical trade-offs for a one-in-a-million event (at least they should).

PS Moore's Law is most likely over.

Here's something else to think about: subverting self-driving vehicles by 'hacking' street signs.

I think the law already has that one covered. There are two types of street-sign defacing. The first type is the largely harmless art of writing "Trump" or whatever on a stop sign to make a witty (depending on the audience) political statement or whatever. Everyone knows it's still a stop sign, so no real danger. You could potentially get prosecuted for it, but I've never heard of it.

The second way to deface a sign is to take it down or hide it or switch speed limits or warning signs or whatever - something that could actually cause an accident. I'm not sure this would be a much bigger problem for self-driving cars than for human-driven cars. I could cause a serious accident right now by doing that if I wanted to. Why don't I? The same reason that I don't go around randomly shooting people. It's not a game, and it won't be treated as a game by the police or a jury. If I intentionally changed a sign with the express intent of fooling the AI of a car to cause an accident, that's the same as me removing all the stop signs at an intersection in order to cause an accident, IMO, and I'll get locked up for it.

The nice thing about AI is that it's reprogrammable and it learns, PLUS IT WILL LEARN FROM OTHERS, and even better than humans do.

As for the signs that apparently fooled these cars, forgive me for shooting from the hip about a topic I don't know much about, but the signs in that article shouldn't be fooling anything. Any AI misreading those signs never should have gotten past the QA tests. The ONLY sign that looks remotely like a stop sign is a stop sign. It's red and octagonal and it has STOP written on it in white. All the signs in that article still do after the "defacement". What could they be other than stop signs?

If you are driving up a steep hill, you usually can't see whether the bridge that lies beyond is out or not. Neither can the sensors of my self-driver, I guess. So a GPS system that is updated very regularly is of the utmost importance. And while we're at it, why not integrate the traffic signs into the GPS system? We wouldn't have to know them any more. It's the car's responsibility! We could enjoy the landscape, with those ugly traffic signs replaced by greens and flowers. :)

Will Asimov's three laws still hold for self-driving cars?
