Efficient Resource Allocation for Attentive Automotive Vision Systems

Stephan Matzka, Andrew M. Wallace, Yvan R. Petillot

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)


We describe a novel architecture for automotive vision organized on five levels of abstraction: the sensor, data, semantic, reasoning, and resource allocation levels. Although we implement and evaluate processes to detect and classify other participants within the immediate environment of a moving vehicle, our main emphasis is on the allocation of computational resources and attentive processing by the sensor suite. To that end, an efficient multiobjective resource allocation method is formalized and implemented. This includes a decision-making process dependent upon the environment, the current goal, the available sensors and computational resources, and the time available to make a decision. We evaluate our approach on road traffic test sequences acquired by a test vehicle provided by Audi. This vehicle includes lidar, video, radar, and sonar sensors, in addition to conventional global positioning system (GPS) navigation, but our evaluation is confined to lidar and video data alone.
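The abstract's core idea, a decision process that allocates limited computation across sensing tasks under a time budget, can be sketched as a simple greedy utility-per-cost selection. This is a minimal illustrative sketch, not the authors' implementation: the task names, utility values, and cost estimates below are invented assumptions, and the paper's actual multiobjective method is more elaborate.

```python
# Hypothetical sketch (NOT the paper's algorithm): greedy resource
# allocation of sensing/processing tasks under a per-frame time budget.
# All task names, utilities, and costs here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    name: str       # a candidate processing step, e.g. a classifier pass
    utility: float  # goal-dependent value of running this task now
    cost_ms: float  # estimated computation time in milliseconds

def allocate(tasks, budget_ms):
    """Select tasks greedily by utility density until the budget is spent."""
    chosen, remaining = [], budget_ms
    for t in sorted(tasks, key=lambda t: t.utility / t.cost_ms, reverse=True):
        if t.cost_ms <= remaining:
            chosen.append(t)
            remaining -= t.cost_ms
    return chosen

tasks = [
    Task("lidar_cluster", utility=0.9, cost_ms=12.0),
    Task("video_classify", utility=0.7, cost_ms=25.0),
    Task("video_track", utility=0.4, cost_ms=8.0),
]
selected = allocate(tasks, budget_ms=30.0)
print([t.name for t in selected])  # → ['lidar_cluster', 'video_track']
```

A greedy density heuristic like this trades optimality for speed, which matters when the allocation decision itself must fit inside the real-time budget; the paper's contribution is a principled multiobjective formulation of this trade-off across the sensor suite.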
Original language: English
Pages (from-to): 859-872
Number of pages: 14
Journal: IEEE Transactions on Intelligent Transportation Systems
Issue number: 2
Publication status: Published - Jun 2012


  • Driver-assistance systems
  • resource allocation
  • safe navigation
  • sensor data processing
  • traffic participant classification


