Area-Of-Interest (AOI) statistics

This document contains technical information about the analysis process and its outputs, but does not describe psychological interpretation of the output.

Data Capture

The Stim Engine viewer records high-quality video of participants while they watch “stims” (stimuli) – video or image media. The participant video is then analyzed in up to three key ways:

  • Emotion classification by facial coding
  • Eye-tracking, eye-gaze and visual attention
  • Implicit response testing – measuring strength of association via the time participants take to make decisions about concepts

Eye Tracking

Eye tracking tells us what content people are paying attention to, and when. It shows which features are driving emotional responses, and highlights whether key messaging about brand awareness is being picked up. It is relatively easy to interpret because the data are objective, and robust conclusions can be drawn from relatively small samples (15–30 participants).

Creating AOIs

You can define AOIs by editing a stim in the “Stims” section of the Stim Engine interface: click the thumbnail of a stim to open the AOI editor. AOIs are rectangular areas within a stim. In video stims, an AOI also has a time window: we measure whether and when participants’ attention entered the area during that window.

You can create AOIs before or after participants view stims.

Analysing AOIs

AOIs are analyzed by requesting the statistics on the Results page (available from the menu bar). Be sure to select the relevant subset of participants for your analysis. For some statistics, such as the percentage of participants who looked, it is important to select only participants with high-accuracy eye tracks; if all participants are selected, those without good tracks may skew the results.

See the “Aggregating and filtering results” article for more information on this.

Output Format

The Area of Interest statistics are provided as CSV data that you can download from the “Results” section of the Stim Engine interface. First, generate this type of result. Once the output has been created as part of a Dataset, it can be downloaded as a zip file.

When you import the CSV to Excel or other software, use commas as the delimiter – not spaces – otherwise the columns may be badly formatted.
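The delimiter matters when reading the file programmatically too. A minimal sketch using Python’s standard csv module, with hypothetical column names and sample data (the real names come from the header row of your downloaded file):

```python
import csv
import io

# Hypothetical sample of AOI statistics data; the real file comes from the
# zip downloaded from the Results section. The delimiter is a comma.
sample = (
    "participant,aoi_name,entered,dwell_seconds\n"
    "A,logo,1,0.8\n"
    "B,logo,0,0.0\n"
)

reader = csv.DictReader(io.StringIO(sample), delimiter=",")
rows = list(reader)
print(rows[0]["aoi_name"])  # -> logo
```

When reading a downloaded file, replace the `io.StringIO(sample)` wrapper with an open file handle.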

Each CSV file has a header row, one row per participant per AOI, and one row per AOI for “all participants”. Note that “all” here means “all selected participants with eye-gaze data”. The columns are described below.

For example, with 3 participants and 2 AOIs, we will have:

  • 1 header row
  • 2 rows (one per AOI) for participant A
  • 2 rows for participant B
  • 2 rows for participant C
  • 2 rows for “all participants”

for a total of 9 rows.
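The row count in the example above follows a simple formula: one header row, plus one row per AOI for each participant and for the “all participants” aggregate. A small sketch:

```python
def expected_rows(n_participants: int, n_aois: int) -> int:
    """Header row, plus one row per AOI for each participant
    and for the "all participants" aggregate."""
    return 1 + n_aois * (n_participants + 1)

print(expected_rows(3, 2))  # -> 9, matching the example above
```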

Output Columns

The column names appear in the CSV header row; the columns are, in order:

  • The user-given name of the visitor.

  • Typically “sub-AOI”, meaning an AOI within a stim; in the underlying technology we also track data between stims and other screen elements. You can ignore this column.

  • The name you entered when creating the AOI.

  • The start time, in seconds, of the time window in which the AOI is active, relative to the stim becoming visible.

  • The end time, in seconds, of the time window in which the AOI is active, relative to the stim becoming visible.

  • 1 if the participant’s gaze entered the AOI, else 0. For the “all participants” rows, a count of the number of participants who entered.

  • The time spent inside the AOI, in seconds.

  • The time spent inside the AOI, as a fraction of the AOI duration.

  • For video stims, the first time (in seconds) that the participant’s attention entered the AOI, relative to the time the AOI became active. For image stims, the time is relative to the time the image was shown (because image AOIs don’t have a time window). For the “all participants” rows, a mean average over the participants whose eye gaze entered the AOI.

  • As above, but expressed as a fraction of the AOI’s visible time.
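As an illustration of how the fractional value relates to the absolute one, here is a sketch for a video-stim AOI, assuming the first-entry time is relative to the AOI becoming active as described above (function and variable names are hypothetical, not actual column names):

```python
def first_entry_fraction(first_entry_s: float,
                         aoi_start_s: float,
                         aoi_end_s: float) -> float:
    # first_entry_s is already relative to AOI activation for video stims,
    # so divide by the AOI's active duration to get the fraction.
    return first_entry_s / (aoi_end_s - aoi_start_s)

# AOI active from 2.0 s to 6.0 s; gaze first entered 1.0 s after activation.
print(first_entry_fraction(1.0, 2.0, 6.0))  # -> 0.25
```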