Self-driving vehicles


A completely self-driving vehicle is – everywhere and always – able to carry out the driving task completely by itself, without human involvement. It will probably take at least several decades for these vehicles to become commercially available, if they ever do. Yet, vehicles in which part of the driving task is automated, for example automated braking, accelerating and steering, are already available. The human driver of these partly self-driving vehicles is responsible for safety and has to intervene when necessary.

While further developing partly self-driving vehicles, several infrastructural, technical, ethical and legal obstacles need to be overcome. And several scientific and technical questions need to be answered, such as: how will the driver remain sufficiently alert to be able to take over the driving task, how will the driver know precisely what to expect from the vehicle at any moment, and how can (partly) self-driving vehicles safely interact with conventional vehicles and vulnerable road users?

Fully self-driving vehicles will prevent a large number of crashes due to the elimination of human error as a crash cause. At the same time, new problems may arise due to system defects, for example a malfunctioning algorithm or faulty sensors. For partly self-driving vehicles, some specific causes of concern can be added. Drivers may, for example, have trouble monitoring the road environment if they are not actively involved in the driving task, or may rely on the system too much, and it may sometimes be unclear whether they are responsible for a specific task or whether the system is. By now, there are several partly self-driving vehicles on the road. Yet, there are insufficient data to allow us to say anything about the eventual effect of self-driving vehicles on road safety.

What do we mean by self-driving vehicles?

Self-driving vehicles differ in the extent to which they support driving tasks. A fully self-driving vehicle is – in all circumstances – able to carry out the driving task completely by itself, without driver intervention. But there are also partly automated systems which, to a greater or lesser degree, leave certain responsibilities to the driver. A fully self-driving vehicle is not yet available. Some expect that developments in automation technology will eventually allow the entire driving task to be carried out by technology whenever we are unwilling or unable to drive ourselves [1]. As yet, there are too many uncertainties to predict when this will be realised [2] and some people doubt whether this will ever be the case [3].

This fact sheet concerns self-driving vehicles in a broad sense, ranging from vehicles in which only a small part of the driving task is taken over by automation technology to vehicles that take over the entire driving task (see the question Which levels of self-driving vehicles can be distinguished?). Moreover, this fact sheet also concerns all vehicle types with automation technology, ranging from self-driving cars and trucks (see, for instance, the question How safe are platooning trucks?) to minibuses (see, for instance, the question How safe are self-driving shuttles/people movers?).

Which levels of driving automation can be distinguished?

We most often distinguish different levels of self-driving vehicles by referring to the so-called SAE Levels of Driving Automation [4]. This classification consists of six levels which are based on the extent to which technology supports the driving task: from level 0, where only the driver carries out the driving task, to level 5, where the vehicle completely carries out the driving task. The six levels are explained below:

  • Level 0: Only the driver carries out the driving task. The vehicle may be equipped with systems that provide alerts and/or that momentarily intervene. Examples are alerts when the vehicle leaves its lane (Lane Departure Warning) or a system for emergency braking (Autonomous Emergency Brake). Currently, most vehicles on the road belong to this category (see SWOV fact sheet Intelligent transport and advanced driver assistance systems (ITS and ADAS)).
  • Level 1: The vehicle can steer or brake/accelerate by itself. This concerns support technology that helps keep the car in its lane (Lane Keeping System) or keep a safe distance to the vehicle in front (Adaptive Cruise Control) (see SWOV fact sheet Intelligent transport and advanced driver assistance systems (ITS and ADAS)). The driver, however, has to continuously monitor whether the vehicle carries out the automated task properly, and has to intervene immediately if necessary.
  • Level 2: At the previous level, only one of the two support technologies could be active. At level 2, both are active, which means that the vehicle can steer and brake/accelerate. At this level too, the driver has to continuously monitor the automated task, and has to intervene immediately if necessary. The Tesla Autopilot [5] is an example of a Level 2 vehicle.
  • Level 3: The vehicle is able to drive by itself in specific conditions. The conditions may, for instance, refer to road type, weather, and traffic intensity. If the specific conditions no longer apply, the driver has to take over the driving task. Think, for example, of a vehicle that can usually drive by itself, but is no longer able to do so in heavy rain or in conditions where road markings are invisible.
  • Level 4: As was the case at the previous level, the vehicle is only able to drive by itself in specific conditions. But at level 4, the driver does not need to take over the driving task when the specific conditions no longer apply. The vehicle can park itself, for example, before the specific conditions cease to be applicable.
  • Level 5: At the top level, the vehicle is completely self-driving, implying that it can drive itself in all circumstances.
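As an illustration, the monitoring and take-over responsibilities per level can be sketched in a few lines of code. This is a simplified paraphrase of the SAE classification, not part of the standard itself; the enum and function names are our own.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE Levels of Driving Automation (simplified sketch)."""
    NO_AUTOMATION = 0      # driver does everything; warnings/momentary intervention only
    DRIVER_ASSISTANCE = 1  # vehicle steers OR brakes/accelerates
    PARTIAL = 2            # vehicle steers AND brakes/accelerates; driver monitors
    CONDITIONAL = 3        # vehicle drives itself in specific conditions; driver takes over
    HIGH = 4               # self-driving in specific conditions; no driver take-over needed
    FULL = 5               # self-driving in all circumstances

def driver_must_monitor(level: SAELevel) -> bool:
    """At levels 0-2 the driver must continuously monitor the driving task."""
    return level <= SAELevel.PARTIAL

def driver_may_need_to_take_over(level: SAELevel) -> bool:
    """Up to and including level 3, the driver may be required to take over."""
    return level <= SAELevel.CONDITIONAL
```

For instance, `driver_must_monitor(SAELevel.PARTIAL)` is true, while for a Level 4 vehicle neither function returns true: the vehicle handles its own fallback.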


Will self-driving vehicles improve road safety?

The question whether self-driving vehicles will improve road safety cannot be answered with a simple ‘yes’ or ‘no’, since the consequences of incorporating self-driving vehicles into road traffic are not yet exactly clear.

In principle, self-driving vehicles could reduce crash risk, because the risk of human failure could substantially decrease. Typical human errors that may cause crashes are made in the following areas: 1) recognition (e.g. inattentiveness), 2) decision making (e.g. aggressive driver behaviour), 3) performance (e.g. inaccurate steering), or 4) no performance at all (e.g. falling asleep) [6]. If vehicle automation takes over driving control and thereby prevents these crash causes, the number of crashes could, in theory, substantially decrease. However, self-driving vehicles could also give rise to new safety concerns, which could cancel out road safety improvements or, worse, reduce safety. This concern is particularly relevant to vehicles that are only partly self-driving [7]. In such a vehicle, the driver continuously needs to be alert in order to intervene or take over control when the vehicle cannot manage road conditions anymore. Yet, road users have trouble monitoring the environment continuously and adequately, and therefore find it hard to take over control when this is called for [8]. Other potential problems are that drivers may unduly rely on the system or think that the system will offer support when this is not the case. Other – external – safety risks concern cyber attacks and software vulnerabilities which may keep the system from functioning [7]. In addition, problems could arise due to system errors, for instance because of algorithm malfunctioning or faulty sensors.

There are numerous potential safety effects of self-driving vehicles that have only been researched to a relatively limited extent. Increased use of self-driving vehicles could, for instance, result in more vehicles on the road, which would increase mobility and, thus, the number of crashes. A favourable effect on road safety could arise if self-driving cars were to be used for car sharing. Such a shared self-driving car could take a passenger from A to B and then drive on to transport another passenger. If a lot of self-driving cars were shared and people forfeited their own cars, fewer car parks would be needed. The freed-up space could then be used to redesign roads to improve road safety, for example by widening bicycle paths/lanes to make them safer.

When will the first fully self-driving cars emerge on the market?

It is hard to anticipate the moment when fully self-driving cars will emerge on the market. Currently, there are very many infrastructural and technological uncertainties and challenges (see the question What infrastructural and technical obstacles may hinder the transition to self-driving vehicles?), concerning drivers (see the question What driver-related obstacles are involved in self-driving cars?) and concerning other road users (see the question How will other road users react to self-driving vehicles?). Estimates therefore vary considerably.

According to a 2018 study by SWOV, in collaboration with RAI Automotive Industry NL [9], it may take decades before cars will be able to drive fully automated in all conditions. The study concluded that fully self-driving vehicles could reach the market around 2065 in case of a fast transition path, and around 2085 in case of a slow transition path with technical constraints.

In 2017, researchers specifically studied the situation for the Netherlands [10]. According to their assessment, fully self-driving vehicles will be commercially available between 2025 and 2045. They do, however, add that the actual timeline will depend on matters such as technological development, policy, the economy, and customer opinion.

At this point in time, in early 2022, several years have passed since the 2017 and 2018 assessments. Research shows that the estimated date at which fully self-driving vehicles will appear on the road has been postponed again and again [11]. Currently, there are too many uncertainties to allow us to give a date for the market launch of fully self-driving vehicles, and only a rough estimate can be given [2]. Some experts even doubt whether fully self-driving vehicles will ever become commercially available [3].

Are self-driving vehicles allowed on public roads?

Currently, partly self-driving vehicles that are commercially available are already driving on Dutch public roads. They can steer and keep a safe distance to vehicles in front. The driver does, however, have to monitor the situation continuously and intervene when necessary. This concerns the so-called Level 2 vehicles (see the question Which levels of driving automation can be distinguished?), of which Tesla’s Autopilot [5] is a well-known example.

In the Netherlands, new innovative and adaptive legislation has been adopted to enable large-scale pilots with self-driving vehicles on public roads. In 2015, legislation concerning exceptional transport (Boev [12]) was extended and in 2019 the Experiment Act [13] was introduced. The Netherlands Vehicle Authority has to grant the self-driving vehicle (temporary) exemption for a field trial. The exemption, which is a prerequisite for a self-driving vehicle to use public roads, can only be acquired after a test procedure. Exemption is granted for a specific trial and only applies to that trial. The appropriate exemption enables a field trial on public roads without the need for an actual driver to occupy the vehicle. Monitoring takes place from outside the vehicle. SWOV regularly advises on the human/behavioural aspects of pilots with self-driving vehicles on public roads [14]. The advice is taken into account when deciding on the exemption.

The Netherlands has been at the forefront in testing self-driving vehicles on public roads. Quite a few trials with self-driving vehicles have been carried out, are in progress, or have been planned; for example, the so-called last mile trials. During these trials, a self-driving minibus, or shuttle or people mover (see Figure 1 and the question How safe are self-driving shuttles/people movers?), drives a set and relatively short route between two locations, such as a public transport stop and a hospital (for example [15]).

Figure 1. Example of a self-driving shuttle or people mover. © Marina Popova

Other countries are also working on allowing self-driving vehicles on public roads. The Declaration of Amsterdam [16] is a first step towards a co-ordinated European approach. The declaration describes the joint targets and actions. This will prevent excessive variation of regulations and legislation within Europe, which would inhibit further development. In addition, there are worldwide partnerships that have, for instance, formalised co-operation between Europe, the United States and Japan [2].

Are autopilot vehicles really self-driving?

Currently, completely self-driving vehicles are not yet commercially available in the Netherlands. The presently available vehicles with autopilot (also called pro-pilot) features are not fully self-driving. This implies that they cannot drive autonomously. A well-known example is the Tesla Autopilot feature, which turns a Tesla into a Level 2 automated vehicle (for an overview of the different levels, see the question Which levels of driving automation can be distinguished?). These Teslas can steer and keep a safe distance to the vehicle in front but, at all times, all aspects of the driving task will require the driver’s full attention. If the Autopilot fails, the driver must be able to intervene immediately. Therefore, it is important and even mandatory that drivers keep their hands on the steering wheel. The Autopilot feature supports the driving task, but does not make the car fully self-driving.

That an autopilot vehicle is not really self-driving is not always evident to all its users. Analyses of crashes involving the Tesla Autopilot show that drivers sometimes rely on the Autopilot feature too much [17]. Drivers do not always keep their hands on the steering wheel, for instance, or do not sufficiently monitor the road environment. A study of drivers who use Level 2 automation shows that, even when drivers keep their hands on the steering wheel and watch the road, they may overestimate what the vehicle can and will do [18]. This may result in hazardous situations. Some experts (for example [19]) note that the term ‘Autopilot’ may suggest that the system can do more than it actually can, which may contribute to crash occurrence. The choice of the term ‘Autopilot’ has also been criticised by organisations and governments [20]. The German government, for example, sent Tesla a letter asking it not to use the term in advertisements any longer.

Which driver assistance systems contribute to the development of self-driving vehicles?

Several existing driver assistance systems can be seen as first steps in the development of a completely self-driving vehicle. In answering the question above, we will briefly discuss what these systems entail; for more information, see SWOV fact sheet Intelligent transport and advanced driver assistance systems (ITS and ADAS). Intelligent transport and driver assistance systems are technology applications in vehicles and infrastructure to make traffic safer, more efficient, comfortable, reliable and eco-friendly.

Some of these systems are intended to support the driving task. Examples are: systems that can send out warnings and/or momentarily intervene; for instance, warning when a vehicle leaves its lane (Lane Departure Warning) or intervening by an emergency brake (Autonomous Emergency Brake). Some driver assistance systems can partly take over the driving task. This concerns technology that can keep a vehicle in its lane (Lane Keeping System) or make sure that the vehicle keeps a safe distance to the vehicle in front (Adaptive Cruise Control). If vehicles are equipped with both these technologies that partly take over the driving task, they are called Level 2 self-driving vehicles (see the question Which levels of driving automation can be distinguished?). The technologies need to be further developed and supplemented to create a fully self-driving vehicle.

How safe are platooning trucks?

How the introduction of platooning trucks affects road safety is not yet exactly clear (see the question Will self-driving vehicles improve road safety?). At any rate, a number of potential hazards still need to be resolved.

Platooning trucks are trucks that can maintain a very short headway distance due to automation. The trucks can communicate with one another. They exchange information, such as speed and acceleration, to synchronise their operations [21]. Several trials with platooning trucks have already been carried out, for example as part of the European Truck Platooning Challenge [22] and the European Ensemble project led by TNO, the Netherlands Organisation for applied scientific research [23].

Platooning trucks allow for a shorter distance between trucks, which creates more road space. Space saving increases as the distance between the automated trucks decreases. Shorter distances do, however, also create potential safety issues. For example, technical problems must not be able to cause platooning trucks to crash into each other or into other road users [24]. The idea is that only the driver at the head of the platoon should monitor the traffic situation. Until technology has reached that point, however, the drivers at the rear of the platoon should do so as well. They should all retain proper attention and intervene in time when necessary [25]. Danger may occur if other vehicles change lanes and try to merge into the platoon [24] [26]. Such a scenario appears to be inevitable and, therefore, solutions need to be found to ensure safety. Another potential hazard is that platooning trucks may veer off the road when trying to avoid a crash with other road users.
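The synchronisation of speed and acceleration between platooning trucks can be illustrated with a simplified control law. The sketch below is a generic cooperative adaptive cruise control rule under assumed gains and gap settings; it is not the algorithm used in the projects cited above, and all parameter names are illustrative.

```python
def cacc_acceleration(gap, ego_speed, lead_speed, lead_accel,
                      desired_time_gap=0.5, standstill_gap=5.0,
                      k_gap=0.2, k_speed=0.5, k_accel=1.0):
    """Acceleration command for a following truck (illustrative sketch).

    The follower receives the leader's speed and acceleration via
    vehicle-to-vehicle communication and regulates its own acceleration
    to hold a short, constant time gap behind the leader.
    """
    # Target gap grows with the follower's own speed (constant time-gap policy)
    desired_gap = standstill_gap + desired_time_gap * ego_speed
    gap_error = gap - desired_gap
    speed_error = lead_speed - ego_speed
    # Feed-forward of the leader's acceleration plus feedback on gap and speed
    return k_accel * lead_accel + k_gap * gap_error + k_speed * speed_error
```

At the target gap with matched speeds the command is zero; a gap that is too large yields a positive (closing) acceleration, a gap that is too short yields a negative one. The feed-forward term is what distinguishes cooperative (communicating) platoons from ordinary adaptive cruise control: the follower reacts to the leader's braking as soon as it is broadcast, before the gap itself changes.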

How safe are self-driving shuttles/people movers?

It is not entirely clear yet to what extent self-driving shuttles, also called people movers, are safe for their occupants and for other road users. This is because these vehicles are still being developed and are not widely used yet.

A self-driving shuttle is a highly automated minibus that can transport several passengers, usually 4-12 [27]. Currently, self-driving shuttles operate on specific roads in specific European areas, some of which are located in the Netherlands.

The self-driving shuttles that are now undergoing road testing have not been fully developed yet. For example, they stop when they detect anything (road users, static objects, etc.) within a certain distance [28]. Moreover, they generally drive very slowly, with an average speed of less than 21 km/h [28] or sometimes even with a maximum speed of 15 km/h [15]. These shuttle characteristics may result in dangerous overtaking manoeuvres by other road users [28], which could increase crash and injury risk [15].

Figure 2. Example of a self-driving shuttle on a bicycle track in the Netherlands. © Corné Sparidaens.

When the self-driving shuttles have been further developed, they may improve road safety since they may reduce the number of road conflicts [29], although the effect of allowing this type of vehicle onto the public road is not precisely clear yet (see the question Will self-driving vehicles improve road safety?).

What infrastructural and technical obstacles may hinder the transition to self-driving vehicles?

Infrastructural obstacles

Relatively speaking, the Netherlands has a great many motorways. Higher-level self-driving vehicles (see the question Which levels of driving automation can be distinguished?) are likely to be introduced on motorways first, in which case the dense Dutch motorway network may be an advantage. However, problems may arise because Dutch motorways are used intensively and are provided with a great many entry and exit ramps. This makes the Dutch motorway system relatively complex [10]. Complexity could be reduced if self-driving vehicles were to drive on separate lanes, isolated from other road users. Thus, conflicts between self-driving vehicles and other road users could be minimised. A prerequisite for separate lanes for self-driving vehicles is a sufficient number of vehicles and sufficient space. In the Netherlands, the latter can by no means be taken for granted. In addition, the current infrastructure needs more general adjustments to prevent crashes with self-driving vehicles, since it was designed to be used by people and not by machines [30]. The infrastructure/roadside will, for instance, have to be modified for communication with self-driving vehicles.

Technical obstacles

A possible technical obstacle is that the sensors of a self-driving vehicle can only observe a relatively small part of the environment. Moreover, they may malfunction because of obstruction or weather conditions. The observation capacities of a self-driving vehicle can be extended by exchanging information between the vehicle and everything that may affect the vehicle or may be affected by it, such as infrastructure and other road users [31]. This is also called V2X (vehicle-to-everything) communication. In order to realise this and to guarantee reliability, the necessary technology, such as communication networks, needs to be developed further, and collaboration between all parties involved is called for [31].
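As an illustration of V2X communication, a periodic vehicle status broadcast can be sketched as a small encode/decode routine. Real deployments use standardised message sets (such as the ETSI Cooperative Awareness Message in Europe); the fields and names below are simplified assumptions for illustration only.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class V2XStatusMessage:
    """Hypothetical vehicle-to-everything (V2X) status broadcast.

    A vehicle periodically shares its position and dynamics so that
    nearby vehicles and roadside units can 'see' beyond their own sensors.
    """
    sender_id: str
    latitude: float
    longitude: float
    speed_ms: float     # speed in metres per second
    heading_deg: float  # compass heading in degrees
    braking: bool       # whether the vehicle is currently braking

    def encode(self) -> str:
        """Serialise the message for transmission (JSON for readability)."""
        return json.dumps(asdict(self))

    @staticmethod
    def decode(payload: str) -> "V2XStatusMessage":
        """Reconstruct a message received from the network."""
        return V2XStatusMessage(**json.loads(payload))
```

A receiving vehicle would decode such messages to extend its world model, for example to anticipate hard braking by a vehicle that its own sensors cannot yet observe.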

Driving is usually a relatively easy task for people, but for a system it is hard to specify. For every generic rule, such as keeping a safe distance to a preceding vehicle, there are context-specific exceptions. And new, unforeseen situations may occur. To ensure that the system will be able to cope, it is important to determine what kind of data and how many data need to be collected to train the system. New methods and approaches are needed to fine-tune all this [32].

Which legal and ethical issues are involved in self-driving vehicles?

The technological development of self-driving vehicles entails ethical issues as well as legal issues concerning government policy, traffic regulations, and technical standards.

Legal issues

Crash responsibility and liability is an important legal issue in crashes with self-driving vehicles. Currently, drivers of commercially available, partly self-driving vehicles are obliged to keep their hands on the steering wheel and they are always responsible for safety. For a crash, therefore, the driver will usually be held liable instead of the developer or producer of the system. When higher-level self-driving vehicles become commercially available, and most or all tasks are carried out by the vehicle, it will technically be possible to let go of the steering wheel for quite some time. To offer drivers this possibility as a properly embedded option, legislation will first have to be modified. The Netherlands Road Authority is currently examining how legislation concerning self-driving vehicles needs to be adapted [33].

Ethical issues

Important ethical issues are also involved in the introduction of self-driving vehicles. Ideally, a self-driving vehicle should always make the appropriate decision, even in emergencies. Yet, it remains to be seen whether this is feasible, and if not, on what grounds choices will then have to be made [34]. Should the vehicle protect its occupants or rather other road users? Does the decision depend on the number of road users that are at risk, or on who is at risk? An extreme example in the Moral Machine Experiment by Awad et al. [35] illustrates this (see Figure 3).


Figure 3. Ethically speaking, what is the correct decision? An extreme example: what if a completely self-driving vehicle with three occupants loses control and is no longer able to brake? The vehicle, for which the traffic light is green, approaches a pedestrian crossing. Three older pedestrians are crossing the road in spite of the red pedestrian light. If the vehicle keeps its lane, it will hit the three older pedestrians. The vehicle may swerve to the other lane but, in that case, it will hit a concrete road block, endangering the three occupants. From: Awad et al., 2018 [35].

The example is, of course, extreme, but in less risky situations decisions will also need to be made about how to distribute risks across the parties involved. Car manufacturers and policy makers are struggling with such dilemmas that are hard to resolve with the current ethical principles [35]. There is a strong need for developing moral algorithms that can cope with such dilemmas in accordance with acceptable ethical standards [34].

What driver-related obstacles are involved in self-driving cars?

In the interaction between driver and self-driving vehicle, several aspects are of importance. Interaction problems may arise if one of the aspects is found wanting. The literature often mentions the following aspects: mode awareness, calibrated trust, due attention and retention of skills. Below, each of these aspects is discussed. Problems can be diminished or even prevented by appropriate communication between driver and vehicle. How this may be realised is discussed in the answer to the question How can self-driving vehicles, drivers and other road users communicate?

Mode awareness

Mode awareness implies that the driver is, at all times, aware of the state or mode of the self-driving vehicle: which party is responsible for (a part of) the driving task, the driver or the vehicle? This should ensure that drivers are aware of their responsibilities and able to act accordingly. If mode awareness is wanting, mode confusion may arise. This implies that the driver is confused about the prevailing state of the vehicle. This may result in hazardous situations, since the driver’s confusion may lead to an inappropriate course of action. Such a course of action is also called a mode error [36]. A driver may, for instance, wrongly believe that the system that keeps a safe distance to a preceding vehicle is active and may, therefore, be (too) late to hit the brakes.

Calibrated trust

Calibrated trust implies that the driver’s trust in the self-driving vehicle is appropriate and realistic. The extent to which the driver trusts the vehicle should correspond to what the vehicle is actually capable of. When trust is excessive, this may result in improper and therefore hazardous use of the vehicle [37]. On the other hand, too little trust, or even distrust, may result in disuse of the automation features, which means potential safety benefits would be lost [38].

Due attention

The driver should focus attention on activities that are critical to road safety. However, self-driving vehicles of Level 3 and higher (see the question Which levels of driving automation can be distinguished?) enable drivers to engage in activities that are not related to the driving task, such as texting. But if drivers are responsible for the driving task, it is important that this task receives their attention. Therefore, they should stop engaging in activities unrelated to the driving task in good time, and focus their attention on the driving task whenever required. At critical times, drivers must neither be distracted nor too tired. It may be harder to stay properly focused in a self-driving vehicle than in a conventional vehicle [39] [40]. It may, for example, be very tiring to continuously stay focused on the driving situation without being actively involved in the driving task (see SWOV fact sheet Fatigue).

Retention of skills

As the level of automation increases, active driver participation will decrease. In the long run, this may reduce driving skills [41]. Particularly in the case of Level 3 self-driving vehicles, this is considered a potential problem. At this level, the driver remains responsible for taking over the driving task when required, whereas in certain conditions the vehicle can drive by itself. Drivers should obviously retain their driving skills, even when they have not needed to rely on these skills for some time.

Do people want to drive in a self-driving vehicle?

Aspirations to drive in a self-driving vehicle are expected to increase when purchase costs decrease considerably and people are satisfied with the vehicle. Self-driving vehicles can only change mobility if they are accepted and used. Research into potential users of self-driving vehicles [42] shows that possible safety benefits, absence of parking issues, and the option to engage in other activities while being on the road are most attractive. Costs, less control over the car, and crash liability are less attractive. These opinions affect aspirations to use self-driving vehicles. Researchers [43] predict that self-driving vehicles will only be widely used when costs show an annual decrease of 15% to 20% and when all users are happy with their purchase. Another study [44] shows that purchase of a self-driving vehicle is primarily linked to the expected vehicle performance, expected operating simplicity, and the influence of the social environment. These links are stronger for individuals who are generally more inclined to embrace innovations. This finding implies that innovation-minded consumers, in particular, can facilitate the widespread introduction of self-driving vehicles. To boost acceptance and introduction, marketing efforts should, therefore, probably focus on such consumers.

How do other road users react to self-driving vehicles?

Reactions of fellow road users to self-driving vehicles seem to differ from their reactions to conventional vehicles. Research in this area is limited, however, and there is much that we do not know yet. Below, we discuss the current findings, first for vulnerable road users, and subsequently for drivers of conventional vehicles.

Vulnerable road users

Cyclists and pedestrians are as yet unsure whether self-driving vehicles will see them and will stop for them [45]. It comes as no surprise, therefore, that vulnerable road users seem more cautious in crossing the road when a self-driving vehicle approaches than when a conventional vehicle does [46] [47] [48]. Good communication by the vehicle about its intentions (I am driving on, I am going to stop) may ensure that vulnerable road users will cross the road more easily and more confidently [49]. This kind of communication will be further discussed in the answer to the question How can self-driving vehicles, drivers and other road users communicate?

Several studies suggest that cyclists and pedestrians adapt their behaviour based on previous experience with self-driving vehicles [28] [46]. Thus, it has been shown that cyclists are more positive about a self-driving shuttle if they have dealt with it before, but that they are, then, also less inclined to yield right of way to such a vehicle [28].

Drivers of conventional vehicles

Research into how drivers of conventional vehicles react to self-driving vehicles has focused on platooning trucks, or automated trucks that drive very closely together (see the question How safe are platooning trucks?). Drivers of conventional vehicles appear to feel less comfortable and safe, and more stressed, in proximity to platooning trucks [21]. Moreover, drivers may adapt their behaviour by reducing their own distance to the platooning trucks as well [50]. Risks when overtaking were also observed (see the question How safe are platooning trucks?). Yet, another study finds no difference in driver behaviour when platooning trucks versus non-automated trucks are overtaken [21]. Whether drivers of conventional vehicles merge into a truck platoon depends on the distance between the trucks: drivers are less inclined to merge if the distance between the trucks is shorter [21].

A US study [51] analysed datasets of crashes involving self-driving vehicles. All self-driving vehicle crashes in the dataset were crashes with other, conventional vehicles. Conventional, non-self-driving vehicles also regularly crashed with other vehicles (68%), but in addition they collided with stationary (16%) and non-stationary objects (14%), or crashed without any collision (2%). The researchers suggest that not the self-driving vehicles but rather the drivers of the conventional vehicles involved seem responsible for these crashes. A possible explanation, they think, is that drivers of conventional vehicles are unsure of what to expect from self-driving vehicles. Yet, the causes of crashes with self-driving vehicles are probably complex and cannot be attributed to a single factor. Challenges for users of self-driving vehicles (see the question What driver-related obstacles are involved in self-driving cars?) could also be at play.

How can self-driving vehicles communicate with drivers and other road users?

Inside a self-driving vehicle, the vehicle and driver communicate by means of a Human Machine Interface (HMI) (see for example [52]). Outside a self-driving vehicle, the vehicle and other road users may communicate by means of an external Human Machine Interface (eHMI) (see for example [53]). HMI and eHMI must ensure safe interaction with the vehicle. Communication may take place by stimulating one or more senses of the driver or other road user. An HMI can show icons on the dashboard, generate sound alerts and steering wheel vibrations. If drivers want to communicate with their vehicle, they can push steering wheel buttons, use a touch screen or voice control. An eHMI may announce the intentions of the self-driving vehicle to other road users, for instance by a text message on an external display, by laser projection on the road or by sounds. Communication of other road users with the vehicle may proceed via vehicle sensors that pick up non-verbal signals, such as the trajectory of the other road users.

Because of the importance of good communication, the best design for an HMI and eHMI is being studied. General design principles for intelligent transport and driver assistance systems are already available (see SWOV fact sheet Intelligent transport and advanced driver assistance systems (ITS and ADAS)). Further research into HMIs is now underway, for example in the European MEDIATOR project, in which an HMI is being developed for partly and completely self-driving vehicles to enable communication between the driver and the vehicle [54]. For eHMIs, further research is also needed, since many aspects still have to be explored to ensure that communication will be at its most effective [53].

What consequences do self-driving vehicles have for driver training and driving test requirements?

Operating a (partly) self-driving vehicle requires specific skills. Targeted training, for instance during driving lessons, can be very helpful. Whether this should lead to adaptation of formal driver training and driving test requirements is a matter of debate. When vehicles are fully automated, they may carry out all aspects of the driving task without human intervention. Human driving skills and fitness to drive would then theoretically no longer be relevant, and a driving licence would technically no longer be needed. In the meantime, for lower levels of self-driving vehicles there are still challenges of driving skills and fitness to drive to be addressed.

In partly self-driving vehicles, driver and vehicle carry out the driving task together. Driver and vehicle have to collaborate adequately to ensure safe task performance [55]. A key factor is driver adaptability: to manage their responsibilities, drivers need mode awareness, calibrated trust and sustained attention. In addition, driving skills have to be retained (see the question What driver-related obstacles are involved in self-driving cars?). Good communication between the self-driving vehicle and the driver is also an important factor (see the question How can self-driving vehicles communicate with drivers and other road users?). All these concerns may change driving skill requirements.

In 2020, an analysis of the literature and of training needs was carried out, combined with stakeholder consultations [56]. This resulted in the conclusion that it could be worthwhile to offer custom training to drivers of self-driving vehicles, but that this does not necessarily have to be incorporated into the current driving lessons and driving licence requirements. Monitoring and assessing new insights, however, has to continue, so that any need for adaptation of driving lessons, driving skill and fitness-to-drive requirements will be noted in time. Research among driving examiners, moreover, shows that they are unanimous in thinking that attention should also be paid to driving with the most common versions of driver assistance systems [57].

Publications and sources

Below you will find the list of references that are used in this fact sheet; all sources can be consulted or retrieved. Via Publications you can find more literature on the subject of road safety.

[1]. NHTSA (2021). Automated Vehicles for Safety. National Highway Traffic Safety Administration, NHTSA, Washington D.C. Accessed on 10-08-2021 at

[2]. ERTRAC (2019). Connected automated driving roadmap. European Road Transport Research Advisory Council, ERTRAC, Brussels.

[3]. Tabone, W., Winter, J. de, Ackermann, C., Bärgman, J., et al. (2021). Vulnerable road users and the coming wave of automated vehicles: Expert perspectives. In: Transportation Research Interdisciplinary Perspectives, vol. 9, p. 100293.

[4]. SAE (2021). Surface vehicle recommended practice: Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. SAE J 3016-2021. SAE International.

[5]. Tesla (2021). Autopilot. Accessed on 01-10-2021 at

[6]. NHTSA (2008). National motor vehicle crash causation survey. Report DOT HS 811 059. National Highway Traffic Safety Administration, NHTSA, Washington, D.C.

[7]. Milakis, D., Arem, B. van & Wee, B. van (2017). Policy and society related implications of automated driving: A review of literature and directions for future research. In: Journal of Intelligent Transportation Systems, vol. 21, nr. 4, p. 324-348.

[8]. Hagenzieker, M.P. (2015). 'Dat paaltje had ook een kind kunnen zijn'. Over verkeersveiligheid en gedrag van mensen in het verkeer. Intreerede 21 oktober 2015 ter gelegenheid van de aanvaarding van het ambt van hoogleraar Verkeersveiligheid aan de faculteit Civiele Techniek en Geowetenschappen van de Technische Universiteit te Delft. TU Delft, Delft.

[9]. SWOV & RAI (2018). Veilig onderweg met de auto. SWOV, Den Haag.

[10]. Milakis, D., Snelder, M., Arem, B. van, Wee, B. van, et al. (2017). Development and transport implications of automated vehicles in the Netherlands: scenarios for 2030 and 2050. In: European Journal of Transport and Infrastructure Research, vol. 17, nr. 1.

[11]. Bazilinskyy, P., Kyriakidis, M., Dodou, D. & Winter, J. de (2019). When will most cars be able to drive fully automatically? Projections of 18,970 survey respondents. In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 64, p. 184-195.

[12]. (2005). Besluit ontheffingverlening exceptioneel vervoer. Accessed on 17-03-2022 at

[13]. (2019). Regeling vergunningverlening experimenten zelfrijdende auto. Accessed on 17-03-2022 at

[14]. Hoekstra, A.T.G. & Mons, C. (2020). Advisering over praktijkproeven met zelfrijdende voertuigen. Herziening risicomatrix en lessen uit eerdere proeven. R-2020-14. SWOV, Den Haag.

[15]. Jansen, R.J., Mons, C., Hoekstra, A.T.G., Louwerse, W.J.R., et al. (2019). Advies praktijkproef. HagaShuttle. R-2019-10. SWOV, Den Haag.

[16]. Rijksoverheid (2016). Declaration of Amsterdam ‘Cooperation in the field of connected and automated driving’.

[17]. OvV (2019). Veilig toelaten op de weg. Lessen naar aanleiding van het ongeval met de Stint. Onderzoeksraad voor Veiligheid, OvV, Den Haag.

[18]. Victor, T.W., Tivesten, E., Gustavsson, P., Johansson, J., et al. (2018). Automation expectation mismatch: Incorrect prediction despite eyes on threat and hands on wheel. In: Human Factors, vol. 60, nr. 8, p. 1095-1116.

[19]. Carsten, O. & Martens, M.H. (2019). How can humans understand their automated cars? HMI principles, problems and solutions. In: Cognition, Technology & Work, vol. 21, nr. 1, p. 3-20.

[20]. Dixon, L. (2020). Autonowashing: The greenwashing of vehicle automation. In: Transportation Research Interdisciplinary Perspectives, vol. 5, p. 100113.

[21]. Aramrattana, M., Habibovic, A. & Englund, C. (2021). Safety and experience of other drivers while interacting with automated vehicle platoons. In: Transportation Research Interdisciplinary Perspectives, vol. 10, p. 100381.

[22]. IenM (2015). Brochure European Truck Platooning Challenge. Minister of Infrastructure and the Environment, Den Haag.

[23]. Ensemble (2021). Platooning together. Accessed on 19-11-2021 at

[24]. Axelsson, J. (2017). Safety in vehicle platooning. A systematic literature review. In: IEEE Transactions on Intelligent Transportation Systems, vol. 18, nr. 5.

[25]. Biondi, F., Alvarez, I. & Jeong, K.-A. (2019). Human–vehicle cooperation in automated driving: A multidisciplinary review and appraisal. In: International Journal of Human–Computer Interaction, vol. 35, nr. 11, p. 932-946.

[26]. Litman, T. (2022). Autonomous vehicle implementation predictions: Implications for transport planning. Victoria Transport Policy Institute, VTPI.

[27]. Paddeu, D., Parkhurst, G. & Shergold, I. (2020). Passenger comfort and trust on first-time use of a shared autonomous shuttle vehicle. In: Transportation Research Part C: Emerging Technologies, vol. 115, p. 102604.

[28]. Hagenzieker, M., Boersma, R., Velasco, P.N., Ozturker, M., et al. (2020). Automated buses in Europe: An inventory of pilots. TU Delft, Delft.

[29]. Oikonomou, M.G., Orfanou, F.P., Vlahogianni, E.I. & Yannis, G. (2020). Impacts of autonomous shuttle services on traffic, safety and environment for future mobility scenarios. In: IEEE 23rd International Conference on Intelligent Transportation Systems.  p. 1-6.

[30]. Lengyel, H., Tettamanti, T. & Szalay, Z. (2020). Conflicts of automated driving with conventional traffic infrastructure. In: IEEE Access, vol. 8.

[31]. Hetzer, D., Muehleisen, M., Kousaridas, A., Barmpounakis, S., et al. (2021). 5G connected and automated driving: use cases, technologies and trials in cross-border environments. In: EURASIP Journal on Wireless Communications and Networking, vol. 2021, nr. 1, p. 97.

[32]. Czarnecki, K. (2019). Software engineering for automated vehicles: Addressing the needs of cars that run on software and data. In: 2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). IEEE. p. 6-8.

[33]. Rizoomes (2020). De veiligheid van de zelfrijdende auto. Accessed on 17-8-2021 at

[34]. Barabás, I., Todoruţ, A., Cordoş, N. & Molea, A. (2017). Current challenges in autonomous driving. In: IOP Conference Series: Materials Science and Engineering, vol. 252, p. 012096. IOP Publishing.

[35]. Awad, E., Dsouza, S., Kim, R., Schulz, J., et al. (2018). The Moral Machine experiment. In: Nature, vol. 563, nr. 7729, p. 59-64.

[36]. Sarter, N.B. & Woods, D.D. (1995). How in the world did we ever get into that mode? Mode error and awareness in supervisory control. In: Human Factors, vol. 37, nr. 1, p. 5-19.

[37]. Lee, J.D. & See, K.A. (2004). Trust in automation: designing for appropriate reliance. In: Human Factors, vol. 46, nr. 1, p. 50-80.

[38]. Parasuraman, R. & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. In: Human Factors, vol. 39, nr. 2, p. 230-253.

[39]. Matthews, G., Wohleber, R., Lin, J. & Panganiban, A.R. (2019). Fatigue, automation, and autonomy. Challenges for operator attention, effort, and trust. In: Mouloua, M., Hancock, P.A. & Ferraro, J. (red.), Human performance in automated and autonomous systems. Current theory and methods. CRC Press, Boca Raton.

[40]. Stapel, J., Mullakkal-Babu, F.A. & Happee, R. (2019). Automated driving reduces perceived workload, but monitoring causes higher cognitive load than manual driving. In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 60, p. 590-605.

[41]. Navarro, J. (2019). A state of science on highly automated driving. In: Theoretical Issues in Ergonomics Science, vol. 20, nr. 3, p. 366-396.

[42]. Howard, D. & Dai, D. (2014). Public perceptions of self-driving cars: The case of Berkeley, California. In: Transportation Research Board 93rd annual meeting, vol. 14, nr. 4502, p. 1-16.

[43]. Talebian, A. & Mishra, S. (2018). Predicting the adoption of connected autonomous vehicles. A new approach based on the theory of diffusion of innovations. In: Transportation Research Part C: Emerging Technologies, vol. 95, p. 363-380.

[44]. Leicht, T., Chtourou, A. & Ben Youssef, K. (2018). Consumer innovativeness and intentioned autonomous car adoption. In: The Journal of High Technology Management Research, vol. 29, nr. 1, p. 1-11.

[45]. Vissers, L., Kint, S. van der, Schagen, I. van & Hagenzieker, M. (2016). Safe interaction between cyclists, pedestrians and automated vehicles; What do we know and what do we need to know? R-2016-16. SWOV, The Hague.

[46]. Nuñez Velasco, P., Farah, H., Arem, B. van & Hagenzieker, M. (2017). Interactions between vulnerable road users and automated vehicles: A synthesis of literature and framework for future research. In: Proceedings of the Road Safety and Simulation International Conference.  p. 16-19.

[47]. Vlakveld, W., Kint, S. van der & Hagenzieker, M.P. (2020). Cyclists’ intentions to yield for automated cars at intersections when they have right of way: Results of an experiment using high-quality video animations. In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 71, p. 288-307.

[48]. Vlakveld, W.P. & Kint, S. van der (2018). Hoe reageren fietsers op zelfrijdende auto’s? Gedragsintenties bij ontmoetingen op kruispunten. R-2018-21. SWOV, Den Haag.

[49]. Chang, C.M., Toda, K., Sakamoto, D. & Igarashi, T. (2017). Eyes on a car: an interface design for communication between an autonomous car and a pedestrian. In: Proceedings of the 9th international conference on automotive user interfaces and interactive vehicular applications.  p. 65-73.

[50]. Gouy, M., Wiedemann, K., Stevens, A., Brunett, G., et al. (2014). Driving next to automated vehicle platoons: How do short time headways influence non-platoon drivers’ longitudinal control? In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 27, p. 264-273.

[51]. Schoettle, B. & Sivak, M. (2015). A preliminary analysis of real-world crashes involving self-driving vehicles. UMTRI-2015-34. University of Michigan Transportation Research Institute.

[52]. Naujoks, F., Wiedemann, K., Schömig, N., Hergeth, S., et al. (2019). Towards guidelines and verification methods for automated vehicle HMIs. In: Transportation Research Part F: Traffic Psychology and Behaviour, vol. 60, p. 121-136.

[53]. Dey, D., Habibovic, A., Löcken, A., Wintersberger, P., et al. (2020). Taming the eHMI jungle: A classification taxonomy to guide, compare, and assess the design principles of automated vehicles' external human-machine interfaces. In: Transportation Research Interdisciplinary Perspectives, vol. 7, p. 100174.

[54]. MEDIATOR (2021). MEdiating between Driver and Intelligent Automated Transport Systems on Our Roads. Accessed on 16-08-2021 at

[55]. Petermeijer, S.M., Tinga, A.M., Reus, A. de, Jansen, R.J., et al. (2021). What makes a good team? – Towards the assessment of driver-vehicle cooperation. AutomotiveUI.

[56]. Regan, M., Prabhakharan, P., Wallace, P., Cunningham, M.L., et al. (2020). Education and training for drivers of assisted and automated vehicles. AP-R616-20. Austroads.

[57]. Vlakveld, W.P. & Wesseling, S. (2018). ADAS in het rijexamen. Vragenlijstonderzoek onder rijschoolhouders en rijexaminatoren naar moderne rijtaakondersteunende systemen in de rijopleiding en het rijexamen voor rijbewijs B. R-2018-20. SWOV, Den Haag.
