Automatic reasoning about causal events in surveillance video

Neil M. Robertson, Ian D. Reid

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)
46 Downloads (Pure)


We present a new method for explaining causal interactions among people in video. The input to the overall system is video in which people appear at low-to-medium resolution. We extract and maintain a set of qualitative descriptions of single-person activity using the low-level vision techniques of spatiotemporal action recognition and gaze-direction approximation. This models the input to the sensors of the person agent in the scene and is a general sensing strategy for a person agent in a variety of application domains. The information subsequently available to the reasoning process is deliberately limited to model what an agent would actually be able to sense. The reasoning is therefore not a classical all-knowing strategy but uses these sensed facts obtained from the agents, combined with generic domain knowledge, to generate causal explanations of interactions. We present results from urban surveillance video. © 2011 Neil M. Robertson and Ian D. Reid.
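The abstract's pipeline — qualitative per-agent facts (action label, gaze target) fed to a reasoner that applies generic domain knowledge — can be sketched as a toy rule-based system. Everything below is hypothetical and illustrative only (the `Observation` record, the single "gaze + subsequent action change" rule, and all names are assumptions, not the authors' implementation):

```python
# Illustrative sketch: combine sensed per-agent facts with a generic domain
# rule to hypothesise causal explanations of interactions. Hypothetical code,
# not the method from the paper.
from dataclasses import dataclass
from typing import Optional, List, Dict

@dataclass
class Observation:
    t: int                    # frame/time index
    agent: str                # agent identifier
    action: str               # qualitative action label, e.g. "walk", "wave"
    gazing_at: Optional[str]  # agent this one appears to be gazing at, if any

def explain(observations: List[Observation]) -> List[str]:
    """Toy rule: if agent B was gazing at agent A when A acted, and B changed
    action immediately afterwards, hypothesise that A's action caused B's
    change. This stands in for the paper's richer domain knowledge."""
    by_agent: Dict[str, List[Observation]] = {}
    for o in observations:
        by_agent.setdefault(o.agent, []).append(o)
    explanations = []
    for obs in observations:
        for other, history in by_agent.items():
            if other == obs.agent:
                continue
            # Scan consecutive observations of the other agent for an
            # action change while it was gazing at obs.agent.
            for prev, nxt in zip(history, history[1:]):
                if (prev.t == obs.t and prev.gazing_at == obs.agent
                        and nxt.action != prev.action):
                    explanations.append(
                        f"{obs.agent}'s '{obs.action}' at t={obs.t} may have "
                        f"caused {other} to '{nxt.action}' at t={nxt.t}")
    return explanations

# Minimal usage: A waves while B is looking at A; B then stops walking.
obs = [
    Observation(0, "A", "wave", None),
    Observation(0, "B", "walk", "A"),
    Observation(1, "B", "stop", "A"),
]
print(explain(obs))
```

Because the reasoner sees only the qualitative facts an agent could plausibly sense (who acted, who was looking where), the explanations remain hypotheses rather than ground truth, mirroring the deliberately limited sensing the abstract describes.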

Original language: English
Article number: 530325
Journal: EURASIP Journal on Image and Video Processing
Issue number: February
Publication status: Published - 7 Feb 2011


