Transcript: Dutch parliamentary hearing on EU Chat Control and Client-Side Scanning

On the 11th of October, the Dutch parliament organized a hearing on the EU “Chat Control” proposal, with a focus on client-side scanning. The Dutch parliament had earlier passed two motions calling on the Dutch government not to support this proposal, but our government has declared it will ignore those motions. This hearing was timely because next week, or even this Friday, EU member states vote on how to proceed with this terrible proposal (Lawfare review from a US perspective).

I’ve translated my introduction from the Dutch parliamentary hearing because I think it might add a little to the debate, and also because it sheds light on the poor state of Dutch democracy, and some sunlight might help.

More context on this proposed EU law can be found, in Dutch, in a post I wrote earlier. Automated translation will likely do a decent job on this. Or follow Patrick Breyer’s explanation here. He’s been on the case for years now.

Transcript (machine transcribed, machine translated, lightly edited, a few sentences added for international context):

I have been involved in this subject for about fifteen years now. I once supplied software to the Dutch police for investigations into CSAM. I was also a regulator of the Dutch intelligence and security agencies, so I know a bit about proportionality and subsidiarity and law.

For a long time I’ve also taken part in occasional talks between industry and government about how child pornography can be combated or its distribution made more difficult. So the subject is very familiar to me. It’s also about AI, because the AI will have to scan our communications. I have also written a small book about AI, and a training course. So I hope I know a little bit about it.

And in contrast to my esteemed, highly educated predecessors, I will perhaps talk about the subject in very plain language.

[Holds up QR code] I want to show this to the viewers at home for a moment. You can find my position paper here. This worked very well last time I presented here.

And the QR code is perhaps a good reminder of how many feelings QR codes aroused among people during the corona crisis. What we are talking about now is that in all our WhatsApp groups, our Signal groups, our Telegram groups, a new participant will join the conversation, namely one with an EU logo.

And all the photos we put there, all the videos we put there, are scanned by a computer. And that computer first has a list of known bad material. But the EU proposal also says there should be AI, and it should say, “Oops, that child photo is the wrong kind of child photo.”
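
[An aside for the written version, not part of my spoken remarks: the “list of known bad material” check is usually done with perceptual hashes, the principle behind systems like PhotoDNA and Apple’s NeuralHash. Below is a minimal sketch of that principle; every hash, name and threshold is invented for illustration.]

```python
# Toy illustration of the "known bad material" check: each image is
# reduced to a 64-bit perceptual hash, and a match is declared when
# the hash lies within a small Hamming distance of a blocklist entry.
# Every constant below is made up for the example.

KNOWN_BAD_HASHES = {
    0x9F3A1C44D2E077B1,  # hypothetical blocklist entries
    0x0A0A5511FFE23D9C,
}

MATCH_THRESHOLD = 8  # max differing bits still counted as a "match"

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def is_flagged(image_hash: int) -> bool:
    """True if the hash is 'close enough' to any blocklist entry."""
    return any(hamming_distance(image_hash, bad) <= MATCH_THRESHOLD
               for bad in KNOWN_BAD_HASHES)

# Near-duplicates (resized or recompressed images) should still match,
# so the comparison is deliberately fuzzy -- which also means that
# unrelated images can collide with the blocklist.
print(is_flagged(0x9F3A1C44D2E077B6))  # True: 3 bits from an entry
print(is_flagged(0x123456789ABCDEF0))  # False: far from every entry
```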

In addition, the law is quite general here: it says the AI should detect the “approaching of children”. Well, that’s a very broad term, but they mean grooming. And if the system sees one of those things, a report goes to an EU official.

And this person gets reports all day long: “Is this CSAM or not?” And only if a report is evidently unfounded will no action be taken. So if the photo is of a toaster, and not a child, that is the end of it. But in almost all other cases, the process continues. Then the material goes to two places. It goes to a local authority that will investigate. And the law actually doesn’t say much about what happens then. And that is crucial. Because we often say, “Think about the children.” Let’s do that too.

But this system produces an enormous stream of reports, because it is still very difficult to determine the nature of a photo. For example, the Netherlands Forensic Institute has said, “We are not going to look at a photo and say whether this is a minor or not.” The NFI has said, “We can’t do that. It doesn’t work. We need X-rays. We just can’t.”

But the European Union has said, “Our computer can do that.” This leads to an enormous stream of “maybe it’s wrong, maybe it’s not wrong” material. And it is then passed on to a local police station or an authority that will look at it. They might start an investigation.

And the first thing a citizen notices is that they receive a letter from the police: “At such and such a time, the EU found that you shared a photo that we unfortunately cannot include here, because it is possibly CSAM, so we cannot show it to you. But at 5:34 p.m. we found a photo that was not good, and we have started an investigation.”

I just said, “Think of the children.” And we really have to do that. And we want to make their lives better. But their lives will not get any better if this system launches a huge stream of unjustified investigations.

“For every five justified investigations, how many unjustified investigations have we launched? Did we make things worse?” Because an unjustified investigation into someone can end terribly badly. Especially if the person under investigation is not perfect.
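
[Another aside for the written version: the question “how many unjustified investigations per justified one” is the classic base-rate problem. A toy calculation, with every number invented for the example, since the proposal itself specifies none of them:]

```python
# Back-of-the-envelope illustration of the base-rate problem.
# All numbers below are invented; the point is the ratio, not the
# absolute figures.

messages_per_day = 1_000_000_000   # images/videos scanned EU-wide
prevalence       = 1e-6            # fraction that is actually illicit
false_positive   = 0.001           # flags 0.1% of innocent items
true_positive    = 0.90            # catches 90% of illicit items

actually_bad = messages_per_day * prevalence
caught       = actually_bad * true_positive
false_alarms = (messages_per_day - actually_bad) * false_positive

print(f"correct flags per day : {caught:,.0f}")
print(f"false alarms per day  : {false_alarms:,.0f}")
print(f"share of flags that are wrong: "
      f"{false_alarms / (false_alarms + caught):.1%}")
# ~1,000,000 innocent items flagged daily against ~900 real ones:
# over 99.9% of flags are wrong, even with these optimistic rates.
```

Even with error rates far better than anything demonstrated in practice, the wrong flags in this example outnumber the right ones by roughly a thousand to one.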

And many people think, “It’s not that bad to be investigated, because then I go to the police station and just explain it, explain that my own child was in the bath, and so on.” You can forget that illusion.

The moment the investigation starts, there is nothing left that you can “just explain”. But you are in all kinds of databases. And I want to come back to that. The people at Europol have already said, “We are going to keep that database.” Including all the things that turned out to be unjustified.

Because they said, “Who knows? Maybe we can do something with it.” And that brings me to my second topic here. This is not a law that merely needs a little tuning and then it is good. This has never happened before. In my old work, when I had to regulate the AIVD and the MIVD [the Dutch intelligence, security and SIGINT agencies], we were asked case by case whether these ten people could be listened to or not.

And then we held a meeting about it with two wise people and me. And now we are talking about 500 million Europeans, and saying, “Let’s just apply those scanners!” That is incredible. This is not something… If we approve this as a country, if we as the Netherlands vote in favour of this in Europe and say, “Do it,” we will cross a threshold that we have never crossed before.

Namely, every European must be monitored with a computer program, with a technology…

With a technology of which the vast, overwhelming majority of scientists have said, “It is not ready.” I mentioned earlier the example of the Netherlands Forensic Institute, which says, “We cannot do this by hand.” The EU has now said, “Our computer can do that.”

420 scientists have signed a petition saying, “We know this technology, some of us invented it, and we just can’t do it.” We can’t even make a reliable spam filter, and making a spam filter is exactly the same technology, by the way, only much easier. Even that doesn’t work all that well, but for a spam filter the consequences aren’t that scary.
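
[A final aside for the written version: the “same technology” meant here is statistical classification. The textbook spam-filter version of it, naive Bayes, fits in a few lines; the training data below is a toy, and even real filters trained on millions of messages still misfire regularly.]

```python
# Minimal naive Bayes text classifier: the classic spam-filter
# technique. Toy training data, purely for illustration.

import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs; returns word counts per label."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in docs:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the higher log-posterior (Laplace smoothing)."""
    vocab_size = len({w for c in counts.values() for w in c})
    best_label, best_score = None, -math.inf
    for label in counts:
        total_words = sum(counts[label].values())
        score = math.log(totals[label] / sum(totals.values()))  # prior
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) /
                              (total_words + vocab_size))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

training = [
    ("win free money now", "spam"),
    ("cheap pills free offer", "spam"),
    ("meeting moved to friday", "ham"),
    ("see you at dinner", "ham"),
]
counts, totals = train(training)
print(classify("free money offer", counts, totals))  # spam
print(classify("dinner on friday", counts, totals))  # ham
```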

Nevertheless, there are now MPs who say, “Well, I feel this is going to work. I have confidence in this.” While the scientists, including the real scientists who came here tonight, say, “Well, we don’t see how this could work well enough.”

And then the government says, “Let’s start this experiment with those 500 million Europeans.”

WhatsApp and Telegram - well, Telegram is not going to implement this law, but WhatsApp will. And they will not hesitate to announce this to all their users with a large EU logo: “Pay attention, the EU is watching you. We have installed software on your phone, on behalf of the government, that will keep an eye on you.”

And I don’t think that is going to go down very well.

Also, if we want to learn how to implement such scanning technology, if we want to look abroad for experience with it, there is only one country in the world that can help us.

And that is China. And I don’t know if that’s such a good prospect.

Thank you.

[Chairperson: “That certainly called a spade a spade.”]