7

Sensor Data Fusion in Automotive Applications

Panagiotis Lytrivis, George Thomaidis and Angelos Amditis
Institute of Communication and Computer Systems
Greece

1. Introduction

Sensor data fusion plays an important role in current and future vehicular active safety systems. The development of new advanced sensors is not sufficient without the utilisation of enhanced signal processing techniques such as data fusion methods. A stand-alone sensor cannot overcome certain physical limitations, for example its limited range and field of view. Combining information coming from different sensors therefore broadens the area around the vehicle covered by sensors and increases the reliability of the whole system in case of sensor failure.

In general, data fusion is not something innovative in research; a lot has been done for military applications, but it is a rather new approach in the automotive field. The state-of-the-art in the automotive field is the fusion of many heterogeneous onboard sensors, e.g. radars, laserscanners, cameras, GPS devices and inertial sensors, and the use of map data coming from digital map databases.

A functional model very similar to the Joint Directors of Laboratories (JDL) model, which is the most prevalent in data fusion, is used in automotive fusion. According to this model the data processing is divided into the following levels: signal, object, situation and application. All these levels communicate and exchange data through a storage and system manager. The JDL model is only a functional model which allows different architectures for the fusion implementation. These architectures are divided into centralized, distributed and hybrid; each one has advantages and disadvantages.

In the data fusion process the main focus is on the object and situation refinement levels, which refer to the state estimation of objects and the relations among them, respectively.
The discrimination between these levels is also made by using the terms low- and high-level fusion instead of object and situation refinement.

There are several vehicular applications for which the fusion of data coming from many different sensors is necessary. These can be divided into three main categories: longitudinal support, lateral support and intersection safety applications.

There is a current tendency to exploit wireless communications in vehicles as well. Talking cars forming ad hoc networks may be useful in future applications to cover more safety cases that cannot be covered so far, due to physical limitations of onboard sensors. In this way the electronic horizon and the awareness of the driver can be extended even to some kilometres away. A lot of ongoing research is focused on the design of efficient protocols and architectures for vehicular ad hoc networks and on the standardization of this kind of vehicular communication.

Source: Sensor and Data Fusion, Book edited by: Dr. ir. Nada Milisavljević, ISBN 978-3-902613-52-3, pp. 490, February 2009, I-Tech, Vienna.

2. The revised JDL model

Sensor data fusion systems can be met in several applications, from military to civilian. Despite the wide variety of all those application domains, the data fusion functional model is common and it was developed in 1985 by the U.S. Joint Directors of Laboratories (JDL) Data Fusion Group. The goal of this group was to develop a model that would help theoreticians, engineers, managers and users of data fusion techniques to have a common understanding of the fusion process and its multiple levels. Since then the model has been constantly revised and updated, and the one described in Fig. 1 is from the 1998 revision (Hall & Llinas, 2001).

Fig. 1. Joint Directors of Laboratories (JDL) model

- Level 0: Preprocessing of sensor measurements (pixel/signal-level processing).
- Level 1: Estimation and prediction of entity states on the basis of inferences from observations.
- Level 2: Estimation and prediction of entity states on the basis of inferred relations among entities.
- Level 3: Estimation and prediction of effects on situations of planned or estimated/predicted actions by the participants.
- Level 4: Adaptive data acquisition and processing related to resource management and process refinement.

The question raised is how this model can be applied in multi-sensor automotive safety systems (Polychronopoulos et al., 2006). The corresponding revised JDL model is depicted in Fig. 2.

According to the automotive fusion community, level 4 does not belong to the core fusion process and hence it has been left out of the model in Fig. 2. A key topic in the automotive industry is Level 5, which corresponds to the Human Machine Interface, but it is not considered part of the data fusion domain (see Fig. 1).
While the scope of the first data fusion systems was to replace human inference and let the system decide on its own, recently the human has become more and more important in the fusion process and there are thoughts on extending the JDL model in order to include the human in the loop.

Fig. 2. Revised JDL model for automotive applications

3. Fusion architectures

The revised JDL model does not explicitly state how the fusion process is implemented and how information among different levels is exchanged. Due to this fact a variety of architectures can be extracted from this functional model. Based on the way that information is fused, three different architectures may be implemented: centralized, distributed and hybrid. Each one has its own advantages, which are mentioned in the following paragraphs and inside Table 1 (Blackman & Popoli, 1999).

3.1 Centralized architecture

This architecture is theoretically the simplest and ideally has the best performance when all the sensors are accurately aligned, that is, when the sensors measure identical physical quantities. In this architecture the raw measurements from all sensors are collected in a central processing level (Fig. 3). On the one hand, this is the main advantage of the centralized architecture: all raw data is available to the data fusion algorithm. On the other hand, the data fusion algorithm is much more complex compared to the one used in the case of the distributed architecture, since it has to analyze and process raw data at a higher rate. The Multiple Hypothesis Tracking (MHT) algorithm is easily implemented, since all data is available inside the central processor. Inefficiencies in this method can occur due to the large amount of data that has to be transferred on time to the central processor.

Fig. 3. Centralized Fusion

Centralized Architecture:
- Accurate data association and tracking
- Optimization of the estimated position and track of an object
- Reduced weight, volume, power and production cost with regard to the distributed architecture (fewer processors used)
- Increased HW reliability (fewer processors needed in the data fusion chain)
- Logic and implementation are direct
- Use of the Multiple Hypothesis Tracking (MHT) algorithm is direct

Distributed Architecture:
- Pre-processing of data reduces the load in the central processor (moderate data transfer requirements)
- More efficient utilization of the individual sensor characteristics
- Optimization of signal processing in each sensor
- Least vulnerable to sensor failure
- Flexibility in the number and type of sensors used, which allows addition, removal or change of sensors without significant changes in the structure of the fusion algorithm
- Cost effective, since it allows additional fusion in an existing multi-sensor configuration

Table 1. Advantages of the centralized and distributed architectures

3.2 Distributed architecture

The distributed fusion architecture is depicted in Fig. 4. The main advantage of a decentralized architecture is the lack of sensitivity regarding the correct alignment of the sensors. Additionally, this architecture has a scalable structure, avoids centralized computational bottlenecks, is robust against sensor failure and is modular. In the case of distributed fusion, pre-processed data is the input to the central processor.
For each sensor the signal level processing can be carried out in the frequency domain, in the time domain or pixel-based (image processing), and the final input to the central processor will be the entity with its attributes, with a certain level of confidence, for further fusion at the central level. The hidden assumption made here is that the sensors are acting independently, which is not true in all cases. Suffering from redundant information is the main drawback of this architecture.

3.3 Hybrid architecture

In the hybrid architecture the centralized architecture is complemented by different signal processing algorithms for each sensor, which can also provide input to a backup sensor level data fusion algorithm (Fig. 5). The hybrid architecture keeps all the advantages of the centralized architecture and additionally allows the fusion of tracks coming from individual sensors in a sensor level fusion process. The main disadvantages of this hybrid approach are the increased complexity of the process, the potentially high requirements in data transfer and the probable cross correlation between local and central tracks.

Fig. 4. Distributed Fusion Architecture

Fig. 5. Hybrid Fusion Architecture

4. Object refinement

Object refinement lies on the first level of the JDL fusion model and it concerns the estimation of the states of discrete physical objects (vehicles in our case). The analysis in this paragraph is based on the distributed architecture that was described previously. The reason for selecting the distributed approach is mainly its modularity and theoretically easier adaptation to different vehicles (independently of the sensors used, with slight further processing). Hence, it can be considered the most promising approach for future vehicular applications, if a level of processing is carried out inside each sensor or sensor system and no raw data is used. The main parts of object refinement are the following:

- Measurements pre-processing
- Sensor level tracking
- Spatial & temporal alignment
- Track-to-track association
- Track level fusion algorithm
- Road geometry estimation

4.1 Measurements pre-processing

Sometimes, in practical problems, when the sensors provide raw data a first step of pre-processing is required. For example, the long range radar sensors used for automotive applications provide object data as output, which does not need further pre-processing, while the laserscanner sensors provide polygons that need to be classified into vehicles and road borders by implementing appropriate pre-processing.

4.2 Sensor level tracking

This function corresponds to the first boxes in Fig. 4, which take the sensor measurements as input. In these boxes gating, association, filtering and local track management take place. First of all, for the tracking algorithm a motion model should be selected for updating the Kalman filter (the transition matrix of the Kalman filter). The motion models that are widely used in the automotive field are the constant acceleration (CA) and the constant turn rate and acceleration (CTRA) models that are described in detail in (Bar-Shalom & Li, 1993; Blackman & Popoli, 1999).

After the selection of the motion model follows the measurement-to-track association problem, that is, the problem of finding the best association between tracks and measurements. Several association methods exist; the most common are the Global Nearest Neighbor (GNN) and the Joint Probabilistic Data Association (JPDA). The former is a one-to-one measurement-to-track assignment, while in the latter more than one measurement can be used to update one track and more than one track can be updated by the same measurement. The selection of one of the two methods depends on the quality and nature of the sensor measurements.
For instance, for a tracking algorithm for a long range radar sensor, the GNN approach is adequate (Blackman & Popoli, 1999; Floudas et al., 2007). Then, according to the results of the association problem, the track management module should decide on the initialization of new tracks and the confirmation or deletion of existing tracks. The decision process is based on simple rules of consecutive "hits" and "misses", where a hit is defined when there is a successful association between at least one measurement and a track, and a miss when a track remains without an assigned measurement for this cycle of the process (Floudas et al., 2008).

The final step in sensor level tracking is the filtering and prediction function, where the new tentative tracks (unassigned measurements) and the previously existing confirmed tracks are filtered and output to the track-to-track association procedure. Also, according to the selected motion model, the future position of the updated tracks (new and existing) is predicted and the gate for further track processing is calculated. The scope of this gate is to reduce the computational load of checking all measurements against all tracks and to investigate only the association of the tracks with measurements that fall inside their gates.
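The predict–gate–associate cycle described above can be sketched as follows. This is an illustrative one-axis fragment, not the authors' implementation: the function names, the gate threshold and the brute-force assignment are assumptions made for the sketch (a real GNN implementation would use an optimal assignment solver such as the Hungarian algorithm).

```python
import numpy as np
from itertools import permutations

def ca_matrices(dt):
    """Constant acceleration (CA) model along one axis.
    State x = [position, velocity, acceleration]; dt is the cycle time."""
    F = np.array([[1.0, dt,  0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0]])   # only position is measured
    return F, H

def predict(x, P, F, Q):
    """Kalman time update with the CA transition matrix F."""
    return F @ x, F @ P @ F.T + Q

def gate(z, x_pred, P_pred, H, R, gamma=9.0):
    """Ellipsoidal gate: accept measurement z if its Mahalanobis
    distance to the predicted measurement is below gamma (assumed
    chi-square threshold)."""
    S = H @ P_pred @ H.T + R              # innovation covariance
    nu = z - H @ x_pred                   # innovation
    d2 = float(nu.T @ np.linalg.inv(S) @ nu)
    return d2 <= gamma, d2

def gnn_assign(cost):
    """GNN as a brute-force optimal one-to-one assignment over a square
    cost matrix (fine only for the small track/measurement counts of
    this sketch)."""
    n = cost.shape[0]
    best, best_perm = np.inf, None
    for perm in permutations(range(n)):
        c = sum(cost[i, perm[i]] for i in range(n))
        if c < best:
            best, best_perm = c, perm
    return best_perm
```

In a full tracker this cycle would run per sensor: predict each confirmed track, gate the new measurements, build a cost matrix from the gated Mahalanobis distances and resolve it with the assignment step.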

4.3 Spatial & temporal alignment

The next step, right after the sensor level tracking, is the spatial and temporal alignment of all the tracks that are coming from the different sensors. For further association and fusion of these tracks a common coordinate system and time reference are needed. In most cases the coordinate system that is used has its origin at the geometrical center of the vehicle and the longitudinal axis is the x-axis. As a time reference, the time provided by the Controller Area Network (CAN) bus is used. CAN is actually a network protocol, designed specifically for automotive applications, that allows communication between the electronic control unit(s) of each vehicle and other devices and sensors connected to it. All tracks that are coming from different sensors are fed into the CAN bus and in this way time synchronization is accomplished.

4.4 Track-to-track association

After the tracks that are coming from the sensor level tracking have been aligned in space and time, the track-to-track association is executed. The aim of the association is to decide which tracks coming from different sensors correspond to the same object. This is useful in cases where we have multiple sensors with common or complementary areas of surveillance. The multidimensional assignment approach is used in case three or more sensors are observing the same object. For this kind of problem the Lagrangian relaxation method is directly applicable (Deb et al., 1997).

4.5 Track level fusion algorithm

There are several methods to update two or more tracks (using state vectors and covariance matrices) with track-to-track fusion; some of them are summarized in the following lines. Regarding the selection of the fusion method for a two-track update, several methods are applicable, starting from Simple Fusion (Singer & Kanyuck, 1971), which assumes that the tracks are uncorrelated and is thus a suboptimal method.
The Weighted Covariance Fusion method (Bar-Shalom, 1981; Blackman & Popoli, 1999) accounts for the correlation between trackers (common process noise), producing the cross covariance matrix from the existing covariance matrices. The fusion method finally selected when reliable tracks are available is the Covariance Intersection method (Uhlmann, 1995), which deals with the problem of invalid incorporation of redundant information. The Covariance Union method (Uhlmann, 2003) solves the problem of information corruption from spurious estimates; it guarantees consistency as long as both the system and the measurement estimates are consistent, but it is computationally demanding. The Covariance Intersection method is a conservative solution but superior to the weighted covariance method. However, in many practical cases the covariance of obviously unreliable tracks can lead to inaccurate estimates, and therefore a constant predefined weight can be used for these cases. Finally, according to the road environment, the computational load, the process noise, the correlation of sensor measurements and the independence assumption, the proper method for fusion can be selected.

4.6 Road geometry estimation

The role of object refinement is not only to estimate the state of each vehicle, but also to estimate the status of other objects in the road environment such as the road borders. In parallel to sensor level tracking, a road geometry estimation algorithm is running.
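The Covariance Intersection update discussed in Section 4.5 can be sketched as follows. The grid search over the weight w is an illustrative simplification (a real implementation would use a one-dimensional optimiser), and the function name is an assumption of this sketch.

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, n_grid=101):
    """Covariance Intersection fusion of two track estimates whose cross
    correlation is unknown (Uhlmann, 1995):
        Pf^-1 = w * Pa^-1 + (1 - w) * Pb^-1
        xf    = Pf (w * Pa^-1 xa + (1 - w) * Pb^-1 xb)
    The weight w is chosen here by a simple grid search minimising
    trace(Pf)."""
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    best_trace, best = np.inf, None
    for w in np.linspace(0.0, 1.0, n_grid):
        Pf = np.linalg.inv(w * Pa_inv + (1.0 - w) * Pb_inv)
        if np.trace(Pf) < best_trace:
            xf = Pf @ (w * Pa_inv @ xa + (1.0 - w) * Pb_inv @ xb)
            best_trace, best = np.trace(Pf), (xf, Pf)
    return best
```

Because no cross covariance is ever formed, the result stays consistent for any unknown correlation between the two tracks, at the price of a conservative (inflated) fused covariance.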

The mathematical model for the road geometry representation could be either the clothoid (Lamm et al., 1999) or the B-Splines (Piegl & Tiller, 1996) model. The basic sensor used for extracting the road geometry is a camera. This camera, after image processing, provides information about the lanes, the lane markings, the curvature of the road etc., and utilizing the clothoid or the B-Splines model a first estimation of the road geometry is calculated. Moreover, the road geometry is estimated based on information coming from a digital map database. The current position of the vehicle in the map is extracted based on a GPS or a differential GPS sensor and advanced map matching techniques. A way of extracting the road geometry using digital maps is described in detail by (Tsogas et al., 2008a).

The fusion of these estimations to obtain the final road geometry estimation is carried out using a fuzzy system (Jang et al., 1997) or a Dempster-Shafer (Dempster, 1968; Shafer, 1976) reasoning system. Additionally, other active sensors, e.g. radars or laserscanners, can be used as input to the fusion process to increase the robustness of the system (Polychronopoulos et al., 2007; Tsogas et al., 2008a). The fusion process is based on the assumption that camera and laserscanner data is more reliable close to the vehicle, while map data is more accurate far ahead of the vehicle.

5. Situation refinement

Situation refinement belongs to the second level of the JDL model and it refers to the relations among the various objects in the road environment. Quite often the term high level fusion is used instead. Within situation refinement the system tries to comprehend the meaning of the current situation around the vehicle. Some questions that are dealt with here are: 'Is this group of slow moving vehicles involved in a traffic jam?', 'Are the trajectories of two vehicles approaching each other intersecting?
Is there a danger of collision?' and so on.

The three best known theories used in high level fusion, depending on the problem, are: fuzzy systems (Dubois & Prade, 1980; Jang et al., 1997), Bayesian probability theory (Bernardo & Smith, 2000; Bolstad, 2007) and Dempster-Shafer theory (Dempster, 1968; Shafer, 1976). In this chapter, an overview of the main parts of situation refinement will be outlined, but the selection of the most appropriate theory is not explicitly indicated. The outcome of situation refinement enriches the environment model with additional attributes of the ego vehicle and other objects (e.g. predicted paths, detected maneuvers). Summarizing, it can be said that situation refinement is the basis for assessing the risk of present and predicted future situations, given that all involved participants act in a predictable way. Finally, situation refinement can be uncertain due to incompleteness of knowledge and uncertain information sources (Tsogas et al., 2007; Tsogas et al., 2008b).

The main parts of situation refinement discussed here are the following:
- Path prediction
- Maneuver detection
- Driver intention
- Assignment of a lane to an object
- High level events

5.1 Path prediction

Path prediction is a key component of situation refinement and it can be divided into three parts. The first part is to calculate the future path of the vehicle based on its

dynamical state and the adoption of a specific motion model. This model could be the Constant Velocity (CV), Constant Acceleration (CA), Constant Turn Rate (CTR), Constant Turn Rate and Acceleration (CTRA) or Bicycle Model (BM) (Liu & Peng, 1996; Pacejka, 2006), a combination of two or three of these models with the use of an Interacting Multiple Model (IMM) filter, or a dynamically adaptive rule-based model. A Kalman filter is also useful for smoothing the vehicle's dynamics (e.g. speed, yaw rate) and reducing the measurement noise. The second part consists of the extraction of the future path based on the estimation of the road borders, assuming that the driver will follow the road geometry without performing any maneuver. In this part a dedicated motion model is also required; almost always a CV model suffices. The third and more sophisticated part is the combination of the first two parts. The fusion of these paths can be performed in several different ways. The simplest way is to use a weighted average estimation. For the calculation of the short term future path the dynamic state of the vehicle is more important, while for the long term path the estimation of the road geometry has the major influence. Significant work on this issue has been carried out by (Polychronopoulos et al., 2007).

5.2 Maneuver detection

The purpose of this algorithm is to identify the maneuver performed by the driver. This calculation can be realized with a Dempster-Shafer reasoning system. At the beginning, the set of the maneuvers that the system can detect should be formed. An example set is the following:

Ω = {free motion, lane change, overtaking, following another vehicle}

According to the above set and the information sources, the Dempster-Shafer reasoning system can estimate the actual maneuver that is performed by the vehicle.
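Dempster's rule of combination, on which such a reasoning system relies, can be sketched over this frame as follows. The evidence masses in the example are illustrative assumptions, not calibrated values from the text.

```python
from itertools import product

# Frame of discernment from the example set above.
OMEGA = frozenset({"free motion", "lane change", "overtaking",
                   "following another vehicle"})

def combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments, given as {frozenset_of_hypotheses: mass} dictionaries."""
    fused, conflict = {}, 0.0
    for (A, ma), (B, mb) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            fused[inter] = fused.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass falling on the empty set
    return {A: m / (1.0 - conflict) for A, m in fused.items()}

def belief(m, A):
    """Bel(A): total mass of all focal elements contained in A."""
    return sum(v for B, v in m.items() if B <= A)

# Illustrative sources: a short time-to-lane-crossing supports
# {overtaking, lane change}; a small headway supports
# {overtaking, following another vehicle}.
m_ttc = {frozenset({"overtaking", "lane change"}): 0.7, OMEGA: 0.3}
m_gap = {frozenset({"overtaking", "following another vehicle"}): 0.6,
         OMEGA: 0.4}
m = combine(m_ttc, m_gap)
```

With these two sources the combined mass concentrates on the overtaking hypothesis (Bel({overtaking}) ≈ 0.42, with the remaining mass on supersets containing it), matching the overtaking example discussed below.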
The information sources could be: the estimated time at which the vehicle will cross the lane, the minimum distance of the vehicle to the lane marking, the time in which this minimum distance will be reached, the curvature of the road, the curvature of the vehicle's path and the distance from the vehicle in front. For each one of these sources a basic probability assignment function will be defined for calculating the evidence masses. Then the fused evidence masses will be calculated and the belief and plausibility values will be extracted in order to evaluate the final confidence.

Here is an example of how the algorithm calculates the performed maneuver. First of all, let us assume that the ego vehicle is overtaking another vehicle. The time to cross the lane should have very small values and the ego vehicle should be following another vehicle at a relatively small distance. If this is the input information to the system, then the algorithm should detect an overtaking maneuver with high confidence.

5.3 Driver intention

Another important function in the situation refinement domain is checking whether the maneuver performed by the driver was intended or not. This can be of great importance especially for the Human Machine Interface (HMI) application. For example, if the output of the driver intention module is that the currently performed maneuver is not intended, then there is a high possibility of an upcoming unpleasant situation, so the HMI system should intervene and inform the driver before it is too late. A Dempster-Shafer, rule-based or fuzzy inference system can be used for identifying the driver's intention. The input

to this system comprises the output of the maneuver detection algorithm, the type of the road (rural, highway, construction area etc.), the curvature of the road, and other vehicle data such as the status of the indicator (ON/OFF), the velocity etc.

The formulation of the rules in a rule-based system, the membership functions in a fuzzy inference system or the basic probability assignment functions in a Dempster-Shafer system are based on simple guidelines. For instance, the possibility of an intended lane change in sharply curved road segments is lower than in the case of straight road segments. When the curvature exceeds a threshold, it is very unlikely that the driver will change lane.

5.4 Assignment of a lane to an object

This part of situation refinement is responsible for assigning a lane index to every fused object relative to the future path of the ego vehicle. It indicates the relationship among the detected objects in the road, the lanes of the road and the ego vehicle. A Dempster-Shafer reasoning system is applicable in this case as well.

The sources that can be used to estimate the lane index assigned to the object are the following:
- offset of the position of each vehicle from the position of the ego vehicle, exploiting the future path calculated previously using different motion models (CA, CTR & CTRA)
- distance of the detected object from the ego vehicle

The offset is calculated using the future trajectory of the ego vehicle and the coordinates of the detected object. The basic probability assignment functions are formulated based on the following rules:
- The closer the detected object is located to the lane borders, the lower the evidence mass assigned to the corresponding proposition.
- The further away the detected object is, the lower the evidence mass assigned to the corresponding information source.

5.5 High level events

Since situation refinement is also called high level fusion, high level events, such as the estimation of weather conditions and traffic, should be taken into account within this fusion level. Both the estimation of the traffic density and of the weather conditions could be based on a Bayesian network approach (Jensen & Nielsen, 2007; Korb & Nicholson, 2004).

As far as the traffic is concerned, it could be classified into light, medium or dense traffic. For this calculation the fused objects from object refinement as well as the road attributes, such as lane markings, road offset, lane offset, road width, lane width and the heading, curvature and curvature rate of the corresponding segment, are needed. The estimation of the weather conditions (fog, rain, icy road) is much more complex, because for this kind of calculation input from specific sensors is needed.

6. Application and use cases

In the automotive field there are several applications for which the fusion of data from various sensors is necessary. For all-around coverage, and for supporting a lot of different applications at the same time, data fusion becomes a complicated procedure. The sensors used are very heterogeneous and vary in quality. Some sensors are of poor quality; others, like the long range radars, are of high quality, but due to their limited field of view support from

sensors with a wider coverage area is necessary. The synchronization of all these sensors, the processing power needed (many embedded PCs), the space they need for installation in the car and the cost comprise constraints for the fast incorporation of such systems in the market. Despite all the above facts, the key challenge in all these applications, which would lead the future active safety systems to success, is robust and reliable data fusion.

Fig. 6. Coverage areas for various automotive safety applications

The figure above shows many different automotive safety applications and their coverage areas. It is obvious that there is a significant variety of applications in the automotive field, such as Adaptive Cruise Control (ACC), front/rear collision mitigation, parking aid, front/rear collision avoidance, blind spot support, lane change and lane keeping support, vulnerable road user (e.g. pedestrian, cyclist) protection and so on. The aim of this chapter is not to refer to all these applications but to highlight the most important ones and those that will contribute to the reduction of road accidents and, respectively, of the fatalities.

6.1 Intersection safety

Intersections comprise a major accident hotspot according to statistics, as proved by the data taken from CARE 2005 and provided by Renault. Above 40% of all injury accidents in Europe take place at intersections, while approximately 25% of the fatalities and 35% of the serious injuries result from intersections. The aim of intersection safety applications is to assist and protect not only the drivers, but also the vulnerable road users (e.g. pedestrians, cyclists). Accident scenarios at intersections are amongst the most complicated, since intersections are frequented by many different road users approaching from different directions.
Some examples of accident scenarios are the following:
- Collisions with oncoming/crossing traffic while turning into or crossing over an intersection
- Violation of the traffic light (red light runner)

For this type of application advanced on-board sensor systems are necessary, but even such sensors may not suffice. The exploitation of wireless cooperation among the road users, and especially infrastructure support at intersections, is more than essential. In this paragraph the analysis will be restricted to the in-vehicle systems. An example of a vehicle equipped with advanced on-board sensors for intersection scenarios is depicted in Fig. 7.

Fig. 7. Equipped vehicle for intersection safety applications

The key factors at intersections are the use of sensors with a wide field of view, like the laserscanners, and highly accurate vehicle localisation. The laserscanner can detect other vehicles, pedestrians, cyclists and natural landmarks. The camera, after image processing, can extract information about the lane markings. Highly accurate vehicle localisation can be performed by fusing information from the camera, the laserscanner and map data extracted from a detailed map of the intersection with the use of a GPS/DGPS sensor.

6.2 Safe speed and safe distance

This appli
