DeepGlance TuneUp is an AI-based solution for large-scale analysis of the people who come into contact with a retail touchpoint. Advanced computer vision algorithms continuously monitor consumers’ real gaze direction over a wide distance range and measure which products they are interested in, classifying viewers by gender and age group.
The microcamera is designed to be installed on the surface to be analyzed with the smallest possible footprint and is connected to the controller via a flat cable.
The controller processes the information in real time and stores the anonymized results. When connected to the Internet, it uploads the results to the web platform.
Pay-per-use Web platform
Automatically monitor the traffic and engagement of your areas of interest and analyze customers’ behavior by segmenting it down to the smallest detail. For example: how long, on average, did women between 20 and 30 years old look at the products under test?
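The kind of segmented question above can be sketched as a simple filter-and-average over anonymized viewing records. This is an illustrative sketch only; the record fields (`gender`, `age_group`, `dwell_s`) are assumptions, not the platform's actual data model.

```python
# Hypothetical sketch: a segmented engagement metric computed from
# anonymized viewing records. Field names are illustrative assumptions.

def average_dwell(records, gender, age_group):
    """Average looking time (seconds) for one demographic segment."""
    dwells = [r["dwell_s"] for r in records
              if r["gender"] == gender and r["age_group"] == age_group]
    return sum(dwells) / len(dwells) if dwells else 0.0

records = [
    {"gender": "F", "age_group": "20-30", "dwell_s": 4.2},
    {"gender": "F", "age_group": "20-30", "dwell_s": 2.8},
    {"gender": "M", "age_group": "30-40", "dwell_s": 1.5},
]
print(average_dwell(records, "F", "20-30"))  # 3.5
```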
Use the platform for free for testing and pay only for your real in-store studies; contact us for more information.
An innovative computer vision algorithm analyzes the images of the person’s eyes and continuously estimates the direction of the gaze.
The system detects and tracks multiple people at the same time. A unique anonymous identifier is assigned to each person.
For each detected person, physical appearance is analyzed to estimate gender and age group.
Emotions are recognized and classified in real time through an advanced analysis of facial expressions.
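The four capabilities above (gaze direction, anonymous tracking, demographics, emotions) can be pictured as one anonymized observation per person per moment. The structure below is a hedged sketch of such a record; every field name is an assumption for illustration, not the actual DeepGlance data model.

```python
from dataclasses import dataclass

# Illustrative sketch only: one anonymized observation combining the
# capabilities described above. All field names are assumptions.
@dataclass
class Observation:
    person_id: str         # anonymous identifier, stable while tracked
    gaze_yaw_deg: float    # estimated horizontal gaze direction
    gaze_pitch_deg: float  # estimated vertical gaze direction
    gender: str            # estimated, e.g. "M" / "F"
    age_group: str         # estimated bucket, e.g. "20-30"
    emotion: str           # one of the seven recognized classes

obs = Observation("a1f3", -12.5, 3.0, "F", "20-30", "happy")
print(obs.emotion)  # happy
```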
HOW IT WORKS
POSITION IT IN THE SHELF
Position the microcamera in front of the data strip of the shelf under test, approximately at eye level and pointing at the surrounding environment. The controller must be anchored to a hidden part at the top or bottom of the shelf. Once the power is connected, the device starts recording.
TAKE A SNAPSHOT
Take a picture of the shelf to use as a snapshot for the projection of information and the calculation of product-related metrics. The photograph must be taken from the front with the microcamera visible; in this phase, some simple measurements must be taken to calculate the physical proportions.
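A hedged sketch of why those simple measurements matter: knowing one real distance on the shelf and the same distance in snapshot pixels yields a scale factor for projecting gaze data onto the photo and sizing AOIs. The numbers and names below are purely illustrative, not the product's actual procedure.

```python
# Assumed example: one measured span relates real-world cm to
# snapshot pixels, giving the physical proportions of the photo.

shelf_width_cm = 120.0   # measured on the real shelf (illustrative)
shelf_width_px = 2400.0  # same span measured in the snapshot

px_per_cm = shelf_width_px / shelf_width_cm  # 20 px per cm

def aoi_px_width(aoi_cm):
    """Convert a physical AOI width to snapshot pixels."""
    return aoi_cm * px_per_cm

print(aoi_px_width(14))  # a minimum 14 cm AOI -> 280.0 px
```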
ACCESS THE WEB PLATFORM
As soon as the device is connected to the Internet, the data are uploaded to the web platform and are available for analysis. If the device stays connected, the data are updated in real time. In the web platform, you can define areas of interest and view and export the calculated metrics and visual outputs such as heatmaps and opacity maps.
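A gaze heatmap like the one mentioned above can, in general, be built by accumulating gaze hit points into a grid over the snapshot. The platform's actual method is not documented here; this is only a minimal sketch of the general technique.

```python
# Minimal sketch: accumulate gaze hit points (snapshot pixel
# coordinates) into a coarse grid; denser cells = hotter areas.

def build_heatmap(points, width, height, cell=10):
    cols, rows = width // cell, height // cell
    grid = [[0] * cols for _ in range(rows)]
    for x, y in points:
        if 0 <= x < width and 0 <= y < height:
            grid[y // cell][x // cell] += 1
    return grid

hm = build_heatmap([(5, 5), (7, 3), (25, 5)], width=40, height=20)
print(hm)  # [[2, 0, 1, 0], [0, 0, 0, 0]]
```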
Ultra-wide tracking distance from 0.5m up to 4m.
Designed to work even when face masks are worn.
Remote eye tracking with no need to wear any tracking device.
The device does not require any calibration procedure.
The multi-device setup is used to analyze several touchpoints at the same time or to cover large shelves. The device automatically maintains synchronization with the others and aggregates the metrics related to the same shopper even during transit across multiple devices.
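Aggregating metrics for the same shopper across devices can be sketched as summing per-device measurements keyed by the shared anonymous identifier. The data shapes below are assumptions for illustration, not the actual synchronization protocol.

```python
from collections import defaultdict

# Assumed shape: each synchronized device reports
# (device_id, anonymous_shopper_id, dwell_seconds) tuples.

def aggregate_across_devices(device_reports):
    """Sum dwell time per anonymous shopper across all devices."""
    totals = defaultdict(float)
    for _device, shopper, dwell in device_reports:
        totals[shopper] += dwell
    return dict(totals)

reports = [("cam1", "s42", 2.0), ("cam2", "s42", 3.5), ("cam1", "s07", 1.0)]
print(aggregate_across_devices(reports))  # {'s42': 5.5, 's07': 1.0}
```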
What Our Customers Say
“It is not a generic analysis done with a camera intercepting the direction of the face; it is true passive eye tracking, capable of measuring with great granularity and precision what was looked at and for how long, down to visual areas of 15 cm.
Compared to active ET we have a universal sample, and for each of them we identify the subjects and code them for the entire shopper experience.”
|Size||Camera: 31 x 30 x 10 mm; Controller: 119 x 100 x 38 mm|
|Camera flex cable length||Up to 2 m|
|Frequency||15–20 Hz|
|Operating distance||0.5–4.0 m|
|Field of view||Horizontal: 62.2 degrees; Vertical: 48.8 degrees|
|AOIs||Multiple AOI tracking; minimum single AOI size: 14 cm|
|Recognized emotions||Neutral, surprise, sad, happy, fear, disgust, angry|
|Metrics||Traffic: number of unique tracked people. Engagement: number of people who look at each AOI and how long each AOI is seen. Shopper profile: gender and age estimation. Examples of additional details: attention time, number of fixations for each AOI, fixation duration, order in which the AOIs are viewed, time to first AOI fixation, first AOI fixation duration, time to last AOI fixation, last AOI fixation duration, number of AOI visits, average AOI visit duration. Visual outputs: heatmap, opacity map.|
|Additional features||Multi-device setup|
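Several of the fixation metrics listed above can be derived from a time-ordered stream of fixation events. This is a hedged sketch under assumed event fields (`aoi_name`, `start_s`, `duration_s`); it is not the product's actual computation.

```python
# Illustrative: derive a few of the listed AOI metrics from a
# time-ordered list of (aoi_name, start_s, duration_s) fixations.

def aoi_metrics(fixations, aoi):
    hits = [f for f in fixations if f[0] == aoi]
    if not hits:
        return None
    return {
        "fixation_count": len(hits),
        "total_fixation_s": round(sum(d for _, _, d in hits), 3),
        "time_to_first_fixation_s": hits[0][1],
        "first_fixation_duration_s": hits[0][2],
        "last_fixation_duration_s": hits[-1][2],
    }

fx = [("A", 0.5, 0.3), ("B", 1.0, 0.2), ("A", 1.4, 0.6)]
print(aoi_metrics(fx, "A"))
# {'fixation_count': 2, 'total_fixation_s': 0.9,
#  'time_to_first_fixation_s': 0.5, 'first_fixation_duration_s': 0.3,
#  'last_fixation_duration_s': 0.6}
```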