The Robotic Chauffeur: ethics and the adoption of driverless cars

In our recent article for Citywire magazine, ‘Fund Managers as Futurologists’, we analysed how technological developments in the automobile, payments and energy industries influence our investment choices. In this article we look further at the auto sector and the prospects for the driverless car.

Valentine’s Day can be problematic for many of us, but for one company this year it was a complete car-crash. Not only did Google’s self-driving car collide with a public bus, but, for the first time in history, the car itself was deemed partially responsible[1].

Robotics has been influencing the way we travel for decades: the birth of the aeroplane auto-pilot function in 1912 and London’s very own driverless Docklands Light Railway (DLR) are just two examples. The most recent developments in such technologies mark a turning point: firms such as Google and Tesla are now developing cars that require no driver input at the steering wheel – or, in some cases, that abolish the steering wheel and pedals altogether – symbolising the first steps in the transition from autonomous to fully self-driving vehicles[2].

Fig. 1: No pedals, no steering wheel: Google’s self-driving car (2014)[3]

The potential benefits of self-driving cars are far-reaching. With respect to safety: it is predicted that crashes could be reduced by approximately 90% following a wholesale switchover from manual to self-driving cars[4]. With respect to the environment: the main prototypes are all electric vehicles (EVs), and various futuristic models suggest widespread car-pooling services would eventually replace individual ownership[5]. And with respect to accessibility: they have the potential to allow those unable to obtain driving licenses (such as the blind) to travel independently.

Although it has been suggested that such cars could be ready for general use in the near future, there are two main obstacles that will determine the extent to which this actually materialises[6]. The first is the locus of responsibility for accidents (predominantly a legal issue); the second, and the one which we will consider here, is the need to ethically programme cars to make pre-crash decisions.

The analysis of the pre-crash decision is best illustrated by an application of an age-old philosophical thought experiment known as the ‘trolley problem’. Imagine you are driving a car through a tunnel. You are heading towards a vehicle with 4 occupants that has stopped suddenly. The situation is such that braking alone will not avoid a collision. You therefore have two options:

  1. Brake anyway, resulting in a collision and the death of the 4 occupants of the vehicle.
  2. Swerve into the wall on your left, resulting in your death.

Fig. 2: the trolley problem in the 21st century

Here, the ‘right’ action is unclear. Although choosing option (2) may reflect a more admirable moral character, choosing option (1) can simply reflect instinctive self-protection in the heat of the moment. It is therefore difficult to blame a person for choosing (1) – few of us would actually say it was downright ‘wrong’.

However, when it comes to driverless cars, such gut reactions have to be pre-programmed into their system. The crux of the ethical dilemma is to decide on which option to pick, and, more broadly speaking, which ‘ethics package’ a car should be programmed to act in accordance with. Put simply, such ethics packages fall into two categories:

  1. Utilitarian: the car should act so as to minimize the amount of aggregate harm
  2. Rule-based: the car should act in adherence to a set of rules that consider factors other than aggregate harm
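The two packages can be contrasted as decision procedures. The sketch below is purely illustrative – the option names, harm counts and rule are hypothetical, chosen to mirror the tunnel scenario in Fig. 2; real systems would reason over far richer state.

```python
# Illustrative sketch of the two "ethics packages" as decision procedures.
# Options, harm values and rules are hypothetical, not from any real system.

def utilitarian_choice(options):
    """Utilitarian package: pick the option with the least aggregate harm."""
    return min(options, key=lambda o: o["harm"])

def rule_based_choice(options, rules):
    """Rule-based package: apply rules in order; each rule may single out
    one option. Fall back to the first option if no rule decides."""
    for rule in rules:
        decided = [o for o in options if rule(o)]
        if len(decided) == 1:
            return decided[0]
    return options[0]

# The tunnel scenario from Fig. 2, with harm counted simply as deaths.
options = [
    {"name": "brake", "harm": 4},   # collide: the 4 occupants die
    {"name": "swerve", "harm": 1},  # hit the wall: the passenger dies
]

print(utilitarian_choice(options)["name"])  # "swerve": 1 death < 4 deaths

# A rule that always protects the car's own passenger overrides aggregate harm.
protect_passenger = lambda o: o["name"] == "brake"
print(rule_based_choice(options, [protect_passenger])["name"])  # "brake"
```

The contrast makes Asimov’s difficulty concrete: the utilitarian function needs only one number per option, whereas the rule list must anticipate every scenario in advance.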

The rule-based package is the least straightforward. Various people (most notably, Isaac Asimov) have sought to construct a basic set of rules to guide robotic action[7]. However, the problem with such an approach (as Asimov himself noted) is that it cannot account for all eventualities – rules would need to be “continuously added or clarified to cover unique scenarios”[8]. This renders such an approach unworkable, as it would require a superior moral authority to provide such addition and clarification – leaving us back at square one.

The utilitarian is likely to pick option (2) in Fig. 2 – the death of 1 person creates less harm than the deaths of 4[9]. However, this raises the issue of alienating drivers (or, more accurately, passengers) from their moral convictions[10]. It is questionable whether individuals would feel comfortable purchasing a car they knew was programmed to kill the passenger in the event of a trade-off similar to that described above. This is crucial because, in order to achieve the stated benefits of a manual-car-free world, consumer demand for driverless cars actually needs to take hold[11].

To avoid such alienation, consumers could customise their cars’ ethics packages to mirror their own beliefs. However, this is morally problematic as it effectively permits certain types of pre-meditated discrimination. For example, certain users may prioritise their own lives over others’ depending on arbitrary factors such as race or religion. Indeed, some survey results along these lines are shocking: one suggests that 64% of drivers would let a child die in order to save their own life[12].

Currently, firms such as Tesla are trying another strategy to circumvent this dilemma: an automatic human takeover function in the event of an accident. However, this is problematic for four reasons. First, the car must correctly identify when a near accident will occur, which is often far from clear-cut. Second, the sudden switch-over often leaves the driver unprepared; indeed, it has been suggested that instances of collision would actually increase if this feature were widely adopted. Third, the stated benefit of enabling individuals without a license to travel independently could not materialise under this system. Fourth, it does not eradicate the possibility of discriminatory decision-making – it merely changes it from being pre-meditated to knee-jerk.

So, we are still left with a dilemma: the need to make driverless cars morally and publicly acceptable. The multi-faceted nature of this dilemma means it has attracted expert research across several disciplines: engineering, philosophy and public policy, to name a few. In our studies, we spoke with two such experts. The first, Dr Noah Goodall, is a civil engineer at the Virginia Transportation Research Council. He suggests that a solution to the dilemma could be ‘risk management’: a strategy already adopted in many other industries, such as those involving radiation exposure[13]. It is a quantitative method, whereby the probability of each possible outcome is multiplied by the magnitude of its harm. This calculation is conducted across the full range of possible actions – in the case above, both brake (1) and swerve (2) – and the action with the lowest overall risk value is then selected. However, a problem here is that magnitude is a normative value. For example, some may consider the death of 1 child to have greater magnitude than the deaths of 2 adults, and vice versa. Or, with reference to the example above, some may consider their own death to have greater magnitude than the deaths of 4 others. So although Goodall’s solution helps us to quantify the dilemma, it does not allow us to quantify it uniformly, and therefore the dilemma remains.
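The risk-management calculation, and the normative problem that undermines it, can be sketched in a few lines. The probabilities and magnitude weights below are illustrative assumptions, not figures from Goodall’s research.

```python
# Illustrative sketch of the risk-management approach: for each available
# action, sum probability x magnitude over its possible outcomes, then pick
# the action with the lowest expected risk. All numbers are assumptions.

def expected_risk(outcomes):
    """outcomes: list of (probability, magnitude) pairs for one action."""
    return sum(p * magnitude for p, magnitude in outcomes)

def lowest_risk_action(actions):
    """actions: dict mapping an action name to its list of outcomes."""
    return min(actions, key=lambda name: expected_risk(actions[name]))

# Tunnel scenario, magnitude measured simply as deaths:
actions = {
    "brake":  [(0.9, 4), (0.1, 0)],  # 90% chance of 4 deaths, 10% of none
    "swerve": [(1.0, 1)],            # certain death of the passenger
}
print(lowest_risk_action(actions))  # "swerve": risk 1.0 vs 3.6

# The normative problem: reweight magnitude so one's own death counts
# for more than strangers' deaths, and the ranking flips.
self_weighted = {
    "brake":  [(0.9, 4), (0.1, 0)],  # others' deaths, weight 1 each
    "swerve": [(1.0, 1 * 5)],        # own death weighted 5x
}
print(lowest_risk_action(self_weighted))  # "brake": risk 3.6 vs 5.0
```

The arithmetic is uncontroversial; the choice of magnitude weights is not, which is exactly why the quantification does not settle the dilemma.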

We also spoke with Professor Roger Crisp, Philosophy Fellow at St Anne’s College, Oxford. He suggests two main points. First, the public response to the utilitarian ethics package is unlikely to be as negative as surveys have thus far made out. This is because the incidence of collision will be reduced substantially, so the chances of a passenger actually experiencing a situation similar to that shown in Fig. 2 are extremely low. Moreover, the utilitarian package is much more self-protective than it first appears – if each ‘passenger’ agrees to a package that minimizes overall harm, it could indeed be in their own personal interest, because anyone could find themselves in the blue, as opposed to the green, car in Fig. 2. Second, even if the previous point does not hold true, it is not clear that car manufacturers must produce cars that exactly replicate human behaviour – it would surely be more beneficial to reach outcomes closer to what human beings should do, as opposed to simply what they do. Indeed, it is unlikely that surveys even capture the latter, as they fail to ignite realistic emotional responses; answers may indicate only what respondents say they would do, as opposed to what they actually do or should do. Therefore public opinion surveys ought to have an influence upon, but not form the basis of, the solution to the driverless car dilemma. These two points work to substantially reduce the dilemma and, at the very least, make it less problematic.

Conclusion: An integrated solution to the driverless car dilemma is unlikely to be reached without ongoing prototype development and, critically, a deep set of interactions between the worlds of science and philosophy. It will be interesting to see which countries legalise completely driverless cars (i.e. those without human override) on public roads, and how long this will take. Our prediction is that the initial phase of adoption is most likely in city-states with no domestic auto industry, such as Singapore. It is not clear how governments will manage the switchover from manual to driverless. Given the impediments and the powerful lobbies backing the global auto industry, the investment risks are currently very high. The future foretells a world of electric and driverless cars, but the path to adoption will likely lag the required science, and perhaps also the philosophy.


[7] Asimov, I. I, Robot (Gnome Press, 1950).

[8] Goodall, N.J. Ethical Decision Making During Automated Vehicle Crashes (Virginia Transport Research Council, 2013).

[9] This is assuming we quantify harm in terms of death/injury as opposed to monetary value. There are other, more complex, utilitarian accounts which may suggest that picking (1) actually minimizes more long-term harm, however I will not go into these here.

[11] Bonnefon, J. et al. Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars? (Cornell University, 2015).

[13] Goodall, N.J. Away from Trolley Problems and Towards Risk Management (under submission, 2016).


March 30th, 2016 | Asset Class Returns, Cerno Capital Posts, Other Posts

About the Author:

James is a co-founder of Cerno Capital and lead manages a number of the firm’s collective and private portfolios. After qualifying as a chartered accountant in London (Coopers & Lybrand, 1989) he relocated to Asia. Between 1991 and 2004 he worked as an equity analyst, head of research, and latterly as an equity strategist at WI Carr, Paribas, HSBC and UBS, based variously in Hong Kong, Singapore and Jakarta. James graduated from the University of St Andrews, Scotland with an MA in Philosophy & Logic in 1986. James is a Member of the Chartered Institute for Securities & Investment.