Driving Philosophy

I blogged a short while ago about a resurgence of interest in the Trolley Problem, noting that one of the reasons for it is the need to program driverless cars to make appropriate decisions when they determine that an accident is unavoidable.  Does the car allow the accident to occur unimpeded (beyond braking and other mitigating measures), or should the car be programmed to take drastic actions that could save the lives of others by intentionally sacrificing the life of the driver?

You need to know that this is a real, practical application of philosophy, ethics, and morality that could directly affect you as the potential operator of a driverless car.  To me, the whole question should also focus philosophical consideration on what can be known beyond a shadow of a doubt.  In other words, how do you program a vehicle to be so certain of an outcome that it can justify the death of the driver in order to avoid it?  And regardless of whether you can program a vehicle with such information, how do we as the human programmers ever really know this?
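To make that worry concrete, here is a minimal sketch in Python of what "being certain enough" has to look like in code.  The names, signature, and threshold are all invented for illustration and come from no real autonomous-driving system; the point is only that somebody, somewhere, has to pick a number.

```python
# Hypothetical sketch only: the function name, its input, and the threshold
# are invented for illustration, not taken from any real vehicle's software.

SACRIFICE_THRESHOLD = 0.999  # how certain must the car be to sacrifice the driver?

def choose_maneuver(p_pedestrians_die_if_straight: float) -> str:
    """Decide between braking in a straight line and swerving into a barrier.

    The probability comes from a perception/prediction module written by
    fallible humans, so it is an estimate, never certain knowledge.
    """
    if p_pedestrians_die_if_straight >= SACRIFICE_THRESHOLD:
        # "Certain enough" that the pedestrians die: sacrifice the driver.
        return "swerve"
    # Below the threshold, brake and hope the "sure thing" doesn't happen.
    return "brake"

print(choose_maneuver(0.95))    # -> "brake"
print(choose_maneuver(0.9999))  # -> "swerve"
```

Notice that 0.999 is not derived from anything; a human chose it, and the probability feeding it is itself only a model's guess.  The question of certainty doesn't disappear when you write code; it just gets hidden inside a constant.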

All of which pushes us toward the question of the divine – the supernatural, so to speak.  Many of us have had moments where we were certain that something was going to happen – often something bad – but then it didn’t.  We have no explanation for this, no way to explain why the obvious didn’t materialize.  A theist might attribute it to the hand of God, divine intervention, angels – some agency which has the ability and inclination to cause sure things not to be so sure.

It could be argued that such situations are more demonstrative of our faulty reasoning and evaluation than of divine agency.  But even if you were to grant that premise, the question remains as to how human programmers subject to such faulty reasoning and evaluation can program a car that is not similarly faulty.

And I haven’t seen anybody go down the further dark path of utilitarianism.  If the overriding premise ought to be the maximum happiness of as many people as possible, does that mean programmers should tell the vehicle that, if it knows a collision is unavoidable and that fatalities are unavoidable, it should speed up, disengage the seatbelts, or take any number of other actions necessary to ensure that people die as quickly and painlessly as possible?  I’m sure that some utilitarians would argue that this is outrageous – that no car can know for certain how to ensure death by the quickest and least painful means possible.
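For the sake of argument, here is what that naive utilitarian objective looks like when actually written down – again a pure sketch with invented names and toy numbers, not anything a real vehicle runs.  It scores every action by total expected suffering and picks the minimum, with no notion of "outrageous" anywhere in the loop.

```python
# Hypothetical sketch of a naive utilitarian controller. The actions and
# numbers below are pure invention; the sketch shows only that the objective,
# taken literally, endorses whatever action the (fallible) model scores lowest.

def expected_suffering(action: str, model: dict) -> float:
    """Sum of probability-weighted suffering over everyone affected."""
    return sum(p * suffering for p, suffering in model[action])

def utilitarian_choice(model: dict) -> str:
    # The controller has no concept of outrage; only the numbers matter.
    return min(model, key=lambda action: expected_suffering(action, model))

# Toy (probability, suffering) pairs: if the model claims speeding up yields
# a quicker, less painful outcome, the objective dutifully selects it.
model = {
    "brake":    [(0.9, 10.0), (0.1, 100.0)],  # expected suffering: 19.0
    "speed_up": [(1.0, 5.0)],                 # expected suffering: 5.0
}
print(utilitarian_choice(model))  # -> "speed_up"
```

The grim conclusion falls straight out of the objective function; the only check on it is the accuracy of the model’s numbers.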

But then how can we assume that a car programmed by humans can perfectly and always know that a collision and fatalities are unavoidable?

And you thought you wouldn’t have to grapple with philosophy if you just focused on the hard sciences!
