Self-driving cars decide who dies???

MonacoMike

You need not look much further than the Lion Air 737 MAX 8 crash in Indonesia. Allegedly, someone made the decision that the pilots should be overridden even when the autopilot is off and the plane is in "manual mode." A bad sensor fooled the computer into thinking the plane was stalling. The pilots fought to pull back the yoke that the computer slammed forward. Even though the computer could record the pilots pulling back on the yoke, it refused to disengage. The result: 189 people died due to a software glitch.

Who is at fault? Probably Boeing and Lion Air, since the sensor had acted up on the inbound flight and had cleared maintenance. Who is really at fault? The guy who approved the requirements and the guy who accepted the code. I don't believe any pilot in an emergency situation wants a computer to override their actions.

In terms of cars making decisions, the same logic applies. Whoever approved the requirements and accepted the code can't hide from the fact that algorithms (including AI) can kill people faster than humans can on their own.
 
Fascinating story. Yes, the computer took over and could not be overridden, but it was not pre-programmed to make a life-or-death decision. Self-driving cars would be, as the article's hypothetical story explains.

MM
 
Isn't it still an algorithm that someone wrote, or reference points the computer "learned" from a human?

"Big yellow bus.....must avoid at all costs even if it means destruction of vehicle and passengers."


The only good news is that current federal "auto brake" standards are not heading in the direction of avoidance. DOT does want to mandate automatic braking, but there are a lot of situations where auto braking is problematic, so I believe they need to figure that out first before HAL starts making more exotic decisions.
 
Happen to be very familiar with the Tunnel of Trees road north of Harbor Springs. It is a beautiful road with wonderful views. It is also the very last place you would want a computer to be driving your car. It is narrow with almost no shoulders, there are many hidden driveways, and it is hilly and curvy. It is a fun road to drive in a sports car. Let your computer do the driving so you can sightsee? Not a smart move.
Computers have a role to play. Most likely their best venues will be freeways with limited access, frequented by seasoned commuters making routine trips to work. Maybe the occupants will even pay attention to the stretches of road known for trouble and override the machines if need be.
 
Isn't culpability dependent on intention or willful disregard? In other words, was best effort and a proven process used to develop the hardware and software, without negligence? If so: lesson learned, fix the problem, and make the best settlement on the loss without an assumption of fault.
 
I was hoping for insight on the decision-making programming that truly self-driving cars will need.
Read this about a year ago. I know we have some very smart people on CSR. I'm interested in what you think about what this says and implies.

https://www.usatoday.com/story/mone...s-programmed-decide-who-dies-crash/891493001/


MM

Were any of you able to get to and read the article? I clicked the link and it does not work for me; it goes to VigLink ads for some reason on my pad.

MM
 
Worked on my iPad.
 
It worked fine when I read it. I'm not sure what you are looking for. Liability for autonomous vehicles is evolving. To a large extent it will rest on a combination of government standards and implementation decisions by the manufacturer.

I don't buy the author's existentialist ethics dilemma. With that line of thinking we all would be riding horses and bikes to work and no power tools would exist since they are dangerous to use.

I believe we are on a much longer road to safe autonomous vehicles than anyone wants to admit. DOT has done a good job of addressing first-world problems with cars: bumpers, seat belts, airbags, and auto braking have saved lives. I see the possibility of slow-speed autonomous vehicles in cities in a few years, but the idea of sharing a highway with one is not reassuring, since the impact of mistakes and bad decisions is magnified exponentially.
 
So is it just imagination that the car's computer would have to decide between hitting a pedestrian or hitting a garbage truck head-on? Obviously the latter is far riskier for the occupants than the pedestrian accident.

If the computer is in control, it must be programmed to choose one of them. A Daimler engineer got in trouble for saying its autonomous vehicles would prioritize the lives of their passengers over anyone outside the car. The company later insisted he'd been misquoted, since it would be illegal "to make a decision in favor of one person and against another."

The decision must be made; I wanted to see what other folks thought about how that decision gets made.

MM
 
The company later insisted he’d been misquoted, since it would be illegal “to make a decision in favor of one person and against another.”

That's the answer: the government will decide it, not the manufacturer. Certainly in Europe it would be illegal for a computer to pick one life over another based on probabilities. I'm not so sure about the US; we have a tendency to let disaster happen a few times before we adjust for it.

We are a nation of laws and people. I have a hard time imagining that we would let AI make life-and-death decisions without strict accountability. The amount of class-action and individual lawsuits would be staggering if computers started making life-or-death decisions.
 
I was in the auto insurance industry for a few years. Interesting dilemma. Do you die, or do you hit the young kid who ran into the street? Extreme example. What if there were four people in your car? Do four die versus the one? These are specific decision paths that are being coded into these programs. They have to be. It is very sobering and frightening to think that life-and-death rules are being codified so concisely. I think the rub is that the decision is out of our hands in those circumstances, which is why it feels so offensive and unnatural for anyone to say "yeah, sign me up."
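To make that concrete, here is a deliberately crude sketch of what such a "decision path" might look like. Every name, probability, and number below is invented for illustration; no real vehicle's software is public, and none is anywhere near this simple:

```python
# Hypothetical illustration only: a crude "decision path" of the kind
# described above. All names, probabilities, and weights are made up.

def choose_maneuver(options):
    """Return the maneuver with the lowest expected harm.

    `options` maps a maneuver name to a tuple of
    (probability_of_serious_harm, number_of_people_at_risk).
    """
    return min(options, key=lambda m: options[m][0] * options[m][1])

# Four occupants vs. one pedestrian, with made-up probabilities:
options = {
    "swerve_into_truck": (0.9, 4),  # head-on with the truck: 4 occupants at risk
    "brake_straight":    (0.5, 1),  # braking hard: 1 pedestrian at risk
}
print(choose_maneuver(options))  # prints "brake_straight"
```

The unsettling part is not the dozen lines of code; it is that a person has to pick those numbers, and the car will apply them in a fraction of a second.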
 
Thank you. I feel this is the first post that recognizes the gravity of what is happening in the coding of self-driving computers. I sometimes feel like a voice in the wilderness. The sad part is that if people of the intellect of CSR members do not see this as groundbreaking and of concern, no matter their opinion, the general population will never comprehend it.

MM
 
I'm reluctant to enter into this discussion. Full disclosure - I skimmed the article and didn't read it word for word.

My informed opinion is we, as a society, are headed into a world of capitulation....

We give up our right to privacy, which in the US, could be argued is a Constitutional right, and we simply don't fight for it.

Now we are willing to jump into AI-driven devices built by the same guy who is sounding the alarm bells about AI, yet no one is listening; we follow along like lemmings for ease and convenience.

The article discusses a hypothetical scenario that is an ethical issue...

Personally, my argument comes well before any ethical issue. It's the fundamental judgment of giving control of your life, and the lives of others, to a third-party entity. The discussion here is focused on AI and programming with the best of intentions. But what if the intentions are more nefarious?

What if some state agent simply didn't like a person or their positions? How difficult would it be, either within a company or outside forces, to control the device? Alter GPS...Change a brake function...make the vehicle run into the side of a building? Do you seriously think the system can't be "hacked" (manipulated)?

While my thoughts might seem extreme to some people, consider the world we are in today. Autonomous vehicles are Putin's dream. But it doesn't have to be someone as high profile as Putin. It would be so easy to have a device have an accident and call it a glitch. Would the general public even be suspicious?

Elon talks about AI...He's created some of this problem... and we go further down the rabbit hole.

I'm going to the beach...
 
A very insightful reply. So jealous I can't ponder this at the beach.

MM
 
Once all cars are AI-driven, I'd think they'd communicate with the others in their proximity... would there be many accidents at that point?

On a different note, what I'd call a conspiracy nut's dream has already become reality: tracking. It won't be long before all vehicles are tracking you; lots of folks don't know that. They can record how you drive, they can tap your smartphone, they can report on you in real time. Cars are already becoming another data source, no different than a Google or Facebook.

Privacy....Haaa, not even in bit heaven.
 
The P word, often invoked by folks who remember Reagan but rarely uttered by the younger generations.

Why do we value it when so many do not?

On the automobile front, they already have license plate readers that can scan 60 plates per second; that's over 200,000 an hour. Now they can search by color, seven body types, 34 makes, and nine visual descriptors in addition to the standard plate number, location, and time.

And almost no one but me and a few like-minded people even thinks about it, and fewer still ever act to keep some privacy.

MM
 
Abusing and misusing all this great technology can't be resisted. License plate reading will be antiquated stuff once the bad guys "collaborate" with the AI systems. The response to Ed Snowden's revelations was a bit disappointing.
 