The anti-theft surveillance technology developed by Veesion (pronounced ‘Vision’) seems to have taken the world by storm: the software is now helping catch shoplifters in over 25 countries. While the overall response has been positive, some remain skeptical. For instance, the Dutch newspaper Algemeen Dagblad (AD) wrote that the deployment of the software was non-compliant with the GDPR, stating that the software identifies persons using special categories of personal data. Though this claim is contested by experts, it raises interesting questions about this increasingly popular tool, the risks of AI and the future of shopping. To get to the bottom of this subject, which lies at the cutting edge of legal and technological developments, we interviewed Thibault David and Menno Weij. Thibault David is the CEO and co-founder of Veesion and has been working on the software for over six years. Menno Weij is a legal expert in the field of technology law at BDO Legal. Though he has spoken out against Veesion’s software in the past, he has revised his stance on the qualification of the software under the GDPR, citing the technical specifications of the AI tool.

Why did Veesion develop its anti-shoplifting software?

Thibault David: “In summary, we’ve met many retailers, and we’ve come to understand that our aim should be to alleviate the problem of shoplifting. All over the world, shoplifting is a huge plague that costs retail stores between 1.5 and 3% of their revenue. This is even more harmful considering that retail is a low-margin industry: that loss represents between 50 and 100% of their profits. And we know this problem is expanding dramatically.

To give you a concrete example: we’ve heard that Jumbo announced that it suffered more losses from shoplifting than it made in net profits. They reportedly lost 100 million euros to shoplifting and made 80 million in net profits. This leads to terrible consequences, because it increases the prices of products. It impacts the purchasing power of the public and leads to layoffs and even bankruptcy. The problem also contributes to the weakening of physical stores to the advantage of e-commerce. These are the main reasons why we started Veesion six years ago: to help retailers fight shoplifting efficiently. Prior to our software, there was no efficient solution to this problem.”

What privacy considerations did you take into account when developing Veesion’s AI?

Thibault David: “Our software is based on real-time gesture recognition technology. It focuses on gestures, detected through real-time analysis of existing camera footage. Our AI has been trained to detect what we call ‘gestures of interest’. Because of this, the AI does not detect a shoplifter as such; it merely detects the motions that a shoplifter would perform. It doesn’t identify the person. What it does instead is autonomously analyze an existing video stream and send short video alerts to the people in the store who oversee security. The AI is not there to analyze personal data. It only analyzes whether the video stream contains actions such as a person putting items into their pants, a jacket, a backpack or a shopping bag. If the AI detects such an action, it sends a video alert to the security staff. The AI won’t say whether the person is going to shoplift or not.

The only thing our software really changes is that before, no one was really watching the video feeds, or there were too many cameras for a single person. Now, thanks to our technology, security staff can focus on specific movements, based on AI gesture analysis. If the AI finds a high likelihood of theft, it sends a notification with the corresponding video clip to the security department of the given retail store. That way they are able to make decisions based on facts, and the AI facilitates efficient decision-making. The AI is just one more lever in an existing process; by sending relevant alerts to the staff, it enables them to detect crime more effectively.

As I said before, the software doesn’t identify people and doesn’t keep a memory. It doesn’t track people, nor does it retain the alerts it creates. It’s a memoryless system that detects the gestures it has been trained on and sends video clips from existing camera footage to security personnel. In doing so, we help them base their judgements on facts.”
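To make the pipeline David describes concrete, here is a minimal Python sketch of a memoryless gesture-alert loop, written under stated assumptions: the names score_gesture, send_alert, THRESHOLD and CLIP_LENGTH are hypothetical illustrations, since Veesion’s actual implementation is not public.

    from collections import deque

    CLIP_LENGTH = 90   # frames in the rolling buffer (~3 s at 30 fps) -- assumed
    THRESHOLD = 0.8    # assumed confidence cutoff for a 'gesture of interest'

    def watch_stream(frames, score_gesture, send_alert):
        """Scan a frame iterator; alert on likely gestures, retain nothing."""
        buffer = deque(maxlen=CLIP_LENGTH)   # 'no memory': old frames fall out
        for frame in frames:
            buffer.append(frame)
            score = score_gesture(list(buffer))   # trained gesture model (assumed)
            if score >= THRESHOLD:
                # Forward the short clip; the human security staff decide what to do.
                send_alert(clip=list(buffer), confidence=score)
                buffer.clear()   # nothing about the event is stored here

The rolling buffer is the only state in this sketch: once a clip is forwarded or simply ages out, the frames are gone, which matches the ‘no memory’ property described above.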

Image: a snapshot of what Veesion software ‘sees’

What are the GDPR aspects of the deployment of AI surveillance software in semi-public spaces such as retail stores?

Menno Weij: “Well, let me start with this: when I was on Dutch news radio, I made an assumption that is clearly at odds with what we just heard. The assumption at the time was that Veesion’s software uses biometrics. To me, it’s now obvious that it doesn’t. At that point the privacy perspective becomes more relaxed, because it starts with the question: are you processing personal data? Well, supermarkets are already using cameras, and the supermarket is responsible for that processing. In privacy terms, the supermarket is the controller and apparently has a legal ground for the original camera that is already there.

Now if we put Veesion into the equation, it’s important to note that, as Thibault explained, there’s basically nothing on the Veesion systems. The AI tool is something you plug into, let’s say, the camera surveillance system of a supermarket or any other type of company that wants to do something against shoplifting. What I think is totally missing in other media stories on what Veesion does is that there is already a camera recording in its own right. That recording affects everybody in the supermarket; one could argue that the impact of the original footage is bigger than the impact of what the Veesion AI tool ‘sees’.

The Dutch newspaper Algemeen Dagblad wrote something that, in my view, is just incorrect. While I do still believe that Veesion is involved in the processing of personal data, there is no legal basis for the statement that Veesion processes special categories of personal data. I also agree that Veesion is trying to limit its processing of regular personal data to the maximum extent possible. The AI doesn’t, as Thibault explained, use any, let’s say, personally identifying type of data. It’s only looking at gestures, as was mentioned. And if it sees movements that it considers likely to be shoplifting, it just flags the behavior, and it’s up to the supermarket to decide how to act on that flag.

There are some critics in the privacy space who say: yes, but supermarkets are creating this problem themselves because of the self-scan registers that have been introduced in a lot of Dutch supermarkets. But I think that’s the world turned upside down. When it comes to shoplifting, the problem is people who take stuff from supermarkets or other retail shops and just don’t pay for it.”

Given the possibility of discrimination through biased judgements in AI, could you explain how Veesion deals with the black-box problem to prevent discrimination in its judgements?

Thibault David: “The first thing is that, as you said, our AI is an aid to human decision-making; the Veesion product doesn’t make any decision itself. Therefore, we are not in the category of automated decision-making, because whatever we send goes to a security guard who is already in charge of the cameras and is already used to making decisions based on what he sees on them.

The second thing is that we think biases are an important topic in the AI field, and we take them very seriously. Our AI is trained on gesture data that is completely agnostic to anything related to identification. We trained our AI so that it focuses solely on the analysis of spatio-temporal differences between frames, because only this can help identify whether an item is being taken off the shelf and into a backpack.

Therefore, we can clearly see how this kind of analysis is not at all related to any physical characteristics that could introduce bias. I would add that our AI is very neutral in its analysis, as we do not do identification. At the same time, it helps the work of security guards, because it helps them make decisions based on facts, namely gestures, rather than on any human biases they might have. We know there is an interesting and important debate regarding biases in AI. But I think that in our use case, which is very specific within the AI ecosystem, we are helping ground decisions in objective reasoning. For this reason, I would argue that our software could help decrease preventable human biases.”
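As an illustration of what ‘spatio-temporal differences between frames’ can mean in practice, the sketch below computes per-frame motion magnitudes with OpenCV. This is a toy example under our own assumptions, not Veesion’s method; it only shows why such features capture how much a scene changes rather than who is in it.

    import cv2
    import numpy as np

    def motion_features(video_path: str, size=(64, 64)) -> np.ndarray:
        """Per-frame motion magnitudes from a clip: movement, not identity."""
        cap = cv2.VideoCapture(video_path)
        prev, magnitudes = None, []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(cv2.resize(frame, size), cv2.COLOR_BGR2GRAY)
            if prev is not None:
                diff = cv2.absdiff(gray, prev)   # what changed between frames
                magnitudes.append(float(diff.mean()))
            prev = gray
        cap.release()
        return np.asarray(magnitudes)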



About the authors

  • Christian Cordoba Lenis

Christian Cordoba Lenis is a news editor for PONT | Data & Privacy. Cordoba Lenis is intrigued by the intersection of technology and law. He has both a legal and a technical background and is now venturing into journalism.

    PONT | Data & Privacy
