Chat Control: how to make a proposal without knowing the topic

@dzaladan.creator.blue

We all know what's going on nowadays: a lot of websites are currently under fire.

On one side we have payment processors trying to decide what people are allowed to buy with their own money; on the other we have the umpteenth UK law that claims to protect children, only to actually teach them how to use VPNs.

The latter, in particular, is quite egregious for a reason: it applies to every website, including ones trying to help children who are victims of abuse.
How is a kid supposed to act, should a case like this emerge, if their abuser is their parent and the kid has no ID?

Noble intentions on paper, but the proposed solution is like trying to solve obesity by banning food. After all, thinking up a real solution is hard, and nobody wants to think anymore.

So, amid all this mess, there's still Europe's GDPR: our privacy is protected in the EU at least, right?

Well... uuuh...
Someone, apparently, forgot it exists.

The infamous "Chat Control"

On 11 May 2022, the European Commission advanced a proposal formally named "Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL laying down rules to prevent and combat child sexual abuse", or just "Regulation on Child Sexual Abuse Material".

I'll give you the gist of it below but, if you have the time, I recommend reading it in full, because I simplified it.
Don't be lazy, think critically: remember that I'm a random person writing a blog post.

In summary, the proposal expects "providers of hosting or interpersonal communication services" (chats, email, websites) to prevent the spread of child sexual abuse material (CSAM).

Summarized like this, it sounds like a noble proposal... but here's the catch.

They don't seem to have any clue about how to do that.

Let's get into some of the points, and why there are campaigns warning about this.

I'm gonna give you a spoiler: it's more about incompetence than malice.

Application and definitions

As mentioned before, this proposal applies to whoever operates in the EU.
As per Article 1:

This Regulation lays down uniform rules to address the misuse of relevant information society services for online child sexual abuse in the internal market.

It establishes, in particular:

(a) obligations on providers of relevant information society services to minimise the risk that their services are misused for online child sexual abuse;

(b) obligations on providers of hosting services and providers of interpersonal communication services to detect and report online child sexual abuse;

(c) obligations on providers of hosting services to remove or disable access to child sexual abuse material on their services;

(d) obligations on providers of internet access services to disable access to child sexual abuse material;

(e) rules on the implementation and enforcement of this Regulation, including as regards the designation and functioning of the competent authorities of the Member States, the EU Centre on Child Sexual Abuse established in Article 40 (‘EU Centre’) and cooperation and transparency.

So, the rules are pretty clear: if you provide a service that allows people to exchange messages, you have to provide moderation (to say the least).
Given the terminology used, the scope is very broad, and doesn't exclude services that support end-to-end encryption.

Especially this passage:

(b) obligations on providers of hosting services and providers of interpersonal communication services to detect and report online child sexual abuse;

Now, "detecting" doesn't imply necessarily the use of an automated system, especially in this context where it says "detect and report".
So... maybe it refers on relying on users?

Yeah... "maybe".

The problem, as you can see, is the lack of specification.

Most companies do that already: they rely on users to report this kind of material, but this stuff usually circulates inside private circles (for example, a Telegram or WhatsApp group), where nobody would be able to report it. So how is a company supposed to "detect and report" this material to the authorities?
Before it's sent?
After it's sent?

Who knows, they didn't specify. ¯\_(ツ)_/¯

Don't worry, it will happen again.
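For context, when people talk about automated "detection" on services that can read the content, they usually mean hash-matching: comparing every uploaded file against a database of fingerprints of already-known material, in the spirit of tools like PhotoDNA. Here's a rough sketch of the idea in Python; the database, the reporting step and all the names are made up for illustration, and real systems use perceptual hashes rather than plain SHA-256 so that slightly altered copies still match.

```python
import hashlib

# Hypothetical database of fingerprints of already-verified material; in the
# proposal's world this would presumably be supplied by the EU Centre.
KNOWN_HASHES: set[str] = set()

def matches_known_material(data: bytes) -> bool:
    """Check an uploaded file against the fingerprint database."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

def handle_upload(data: bytes) -> str:
    """Server-side scanning: only possible where the provider can read the content."""
    if matches_known_material(data):
        # The "report" step: to whom, when, and with what safeguards
        # is exactly what the proposal leaves unspecified.
        return "reported"
    return "stored"
```

Note that this kind of scanning only works where the provider can actually see the content, which is exactly why a blanket "detect and report" obligation is on a collision course with end-to-end encryption.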


So... who will have to comply with this law, you may ask?

Article 2 gives the exact definitions. Since I'm talking about who hosts the services, I cut some things for brevity (you can still read them in the proposal):

For the purpose of this Regulation, the following definitions apply:

(a) ‘hosting service’ means an information society service as defined in Article 2, point (f), third indent, of Regulation (EU) …/… [on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC];

(b) ‘interpersonal communications service’ means a publicly available service as defined in Article 2, point 5, of Directive (EU) 2018/1972, including services which enable direct interpersonal and interactive exchange of information merely as a minor ancillary feature that is intrinsically linked to another service;

[...]

(f) ‘relevant information society services’ means all of the following services:

(i) a hosting service;

(ii) an interpersonal communications service;

(iii) a software applications store;

(iv) an internet access service.

(g) ‘to offer services in the Union’ means to offer services in the Union as defined in Article 2, point (d), of Regulation (EU) …/… [on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC];

[...]

(k) ‘micro, small or medium-sized enterprise’ means an enterprise as defined in Commission Recommendation 2003/361 concerning the definition of micro, small and medium-sized enterprises;

[...]

Uuuuh... yep, that's a lot.
And, again, it's broad, and can apply even to services that use encryption.

Risk assessment and mitigation

Now, after establishing who this law applies to, what are providers supposed to do?

Article 3 states:

1. Providers of hosting services and providers of interpersonal communications services shall identify, analyse and assess, for each such service that they offer, the risk of use of the service for the purpose of online child sexual abuse.

So, they're supposed to assess the risk of their service being misused this way, and the assessment applies to every service they offer, if they offer more than one.

2. When carrying out a risk assessment, the provider shall take into account, in particular:

(a) any previously identified instances of use of its services for the purpose of online child sexual abuse;

(b) the existence and implementation by the provider of a policy and the availability of functionalities to address the risk referred to in paragraph 1, including through the following:

– prohibitions and restrictions laid down in the terms and conditions;

– measures taken to enforce such prohibitions and restrictions;

– functionalities enabling age verification;

– functionalities enabling users to flag online child sexual abuse to the provider through tools that are easily accessible and age-appropriate;

Aaaand... here we go again with the vagueness. Told you it would happen.
To make a long story short, this portion can be summarized as "we don't know how. You're the PC expert, just do it".
For the following reasons:

  • They didn't specify what they mean by "age verification". It could be literally anything, ranging from Newgrounds' cross-referencing system to requiring an ID.
    Spoiler: you can find stolen IDs online. Need I say more?
  • The functionality to let users flag CSAM already exists on most platforms, but apparently they don't know that, because I can't explain this requirement otherwise (see the sketch right after this list). If a lawyer is reading, tell me whether restating existing practice like this is common.
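To show why I say these flagging functionalities already exist: stripped of the user interface, a "report this content" feature boils down to something like the sketch below. Every major platform already ships some variant of it; all the names here are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Report:
    reporter_id: str
    content_id: str
    reason: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Queue that a (human) moderation team works through.
MODERATION_QUEUE: list[Report] = []

def flag_content(reporter_id: str, content_id: str, reason: str) -> Report:
    """What happens when a user presses 'Report' on a message or image."""
    report = Report(reporter_id, content_id, reason)
    MODERATION_QUEUE.append(report)
    return report

# Example: a user flags a message they saw in a public group.
flag_content("user-123", "msg-456", "suspected CSAM")
```

The proposal's real problem is everything around this: private groups where nobody reports anything, and end-to-end encrypted chats the provider can't look into at all.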

I'll skip the subsequent points mostly because, after reading them, they seem overall fair (you're always welcome to tell me that I'm wrong).

However, I wanted to mention this paragraph:

The costs incurred by the EU Centre for the performance of such an analysis shall be borne by the requesting provider. However, the EU Centre shall bear those costs where the provider is a micro, small or medium-sized enterprise, provided the request is reasonably necessary to support the risk assessment.

It's... somewhat interesting.
Paraphrased: "If implementing this costs you money, we can help you as long as you're not a big company and you really need it."

So... uuuh... if a small or medium-sized company decides to use an AI to vet the content, does that count as "reasonably necessary"?

... Did I say "who knows" and "vague", already?


Now, how do they expect companies to handle this?
Let's see what Article 4, about Risk Mitigation, says.

The first point states the usual stuff, which can be summarized as "be ready to adapt, reinforce your moderation protocols, and cooperate with other providers while respecting competition law".

Alright, sounds fair to me.

Then, we have this:

2. The mitigation measures shall be:

(a) effective in mitigating the identified risk;

Yeah... uuuh... of course they should. Why make a law about it otherwise?

(b) targeted and proportionate in relation to that risk, taking into account, in particular, the seriousness of the risk as well as the provider’s financial and technological capabilities and the number of users;

What.
You mean that if a company cannot afford to take action, it can just... do it as cheaply as possible? Does "we didn't do anything because we couldn't afford it" count?

It probably wouldn't, because the proposal specifies that smaller providers can have some costs covered by the EU Centre. Again, can a lawyer come and help? I'm confused.

(c) applied in a diligent and non-discriminatory manner, having due regard, in all circumstances, to the potential consequences of the mitigation measures for the exercise of fundamental rights of all parties affected;

(d) introduced, reviewed, discontinued or expanded, as appropriate, each time the risk assessment is conducted or updated pursuant to Article 3(4), within three months from the date referred to therein.

OK, this actually sounds reasonable: "take action, but don't violate anyone's rights".
So, this "might" mean that whatever a company does must still stay within existing laws.

But I emphasized "might" for a reason. You should know why by now.

Obligations for software application stores

Yeah, the proposal covers them as well.
So, the likes of Google Play, Apple's App Store and, probably, F-Droid too.

As per Article 6:

1. Providers of software application stores shall:

(a) make reasonable efforts to assess, where possible together with the providers of software applications, whether each service offered through the software applications that they intermediate presents a risk of being used for the purpose of the solicitation of children;

(b) take reasonable measures to prevent child users from accessing the software applications in relation to which they have identified a significant risk of use of the service concerned for the purpose of the solicitation of children;

(c) take the necessary age verification and age assessment measures to reliably identify child users on their services, enabling them to take the measures referred to in point (b).

Remember what I said earlier?
"We don't know how. You're the PC expert, just do it"?
Here it is again.

No specific guidelines, no binding rules, just "do whatever as long as it works".
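Just to show how little this obligation actually pins down: at the limit, "take reasonable measures to prevent child users from accessing" a flagged app could be satisfied by a check as trivial as the one below. This is a hypothetical sketch, not something the proposal prescribes; the risk flag and the age field are my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    solicitation_risk: bool  # outcome of the Article 6(1)(a) assessment

@dataclass
class User:
    user_id: str
    age: int  # however the store decided to "reliably" establish this

def may_install(user: User, app: App, adult_age: int = 18) -> bool:
    """A 'reasonable measure': block flagged apps for users identified as minors."""
    return not (app.solicitation_risk and user.age < adult_age)

# Example: a 15-year-old trying to install a flagged chat app.
print(may_install(User("u1", 15), App("RandomChat", solicitation_risk=True)))  # False
```

Of course, the hard part (how that age field gets "reliably" established in the first place) is exactly what the proposal doesn't touch.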

Dang, writing laws for an unknown topic sure is easy, huh?

Hanlon's Razor

What is it? It's a methodological principle, based on Occam's razor, that can be summarized as:

Never attribute to malice that which can be adequately explained by incompetence.
He actually said "stupidity", but I'd rather not use that word.

I didn't even get into implications like: what if a parent shares a picture of their own child via WhatsApp to show a relative? Is that going to be considered CSAM as well? We don't know, because it's vague.

Do I think whoever proposed this law is incompetent?
In terms of information technology, yes, likely.
I could be wrong, but that's the feeling I got so far. Why not malice, you may ask?

Because the GDPR exists, and the main issue with the proposal is mostly how utterly vague it is, despite its repeated reminders that users' privacy must be preserved.

It may not throw our privacy into turmoil by itself, but the potential loopholes and the contradictory or paradoxical cases that may emerge could cause serious problems.
In its current state, it is not acceptable, and shouldn't even be considered.

Do we really need to wait for precedent-setting lawsuits in order to fix those issues?
Do we really need to waste precious time solving problems that exist only because the drafters didn't bother to consult enough experts to help lay down the guidelines?

No, because to me that's nonsensical.

It's their job. Just write a better law and avoid the problem in the first place.


Now, a lot of people would end the article here and wave goodbye.

But... nope, not me, I'm not that kind of person.
I believe that, whenever someone talks about these topics, they should at least attempt to provide some good solutions, otherwise why talk about them at all? You'd just fuel doomerism and passive compliance, if you ask me.

What to do?

For starters, you can head to Fight Chat Control, as this website gives all the info you need about the proposal, how to contact your MEPs and, most importantly, sources.

Ideally, you should focus on contacting MEPs who are undecided and/or in favor; though it could also help to have a country that's already against the proposal lend an extra hand in convincing the others.

I'd also suggest the Stop Scanning Me campaign... but with some reservations: I wasn't able to find, on their own website, a direct link to the Chat Control proposal, which honestly made me raise an eyebrow: why wouldn't they make it easy for people to read the proposal themselves? Even if they put the link somewhere, why not make it obvious?
Despite that, however, they still raise valid points about several existing laws going unenforced, which should be brought up in your requests.

To add my two cents about what could be suggested:

  • Enforcement of cross-referencing (again, Newgrounds' system) to determine a user's age, over automated systems (see the sketch after this list);
  • Companies that manage huge numbers of users should have a team dedicated to content moderation.
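To illustrate the first suggestion: my (simplified) understanding of a cross-referencing approach is that the service compares the birth date a user declares against signals it already holds (earlier declarations, the account's age, and so on) and flags contradictions, instead of asking for a document. A rough sketch, with invented field names and thresholds:

```python
from datetime import date

def years_between(earlier: date, later: date) -> int:
    """Whole years between two dates."""
    return later.year - earlier.year - ((later.month, later.day) < (earlier.month, earlier.day))

def declared_age_is_consistent(declared_birthdate: date,
                               previously_declared: list[date],
                               account_created: date,
                               min_plausible_signup_age: int = 10) -> bool:
    # 1. The user shouldn't contradict birth dates they declared before.
    if any(d != declared_birthdate for d in previously_declared):
        return False
    # 2. The declared date shouldn't make them implausibly young at signup.
    if years_between(declared_birthdate, account_created) < min_plausible_signup_age:
        return False
    return True

# Example: an account registered in 2015 now claims a 2012 birth date.
print(declared_age_is_consistent(date(2012, 6, 1), [], date(2015, 3, 10)))  # False
```

It's not bulletproof, but unlike uploading an ID it doesn't require building a database of identity documents that can leak or get stolen.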

This is all I believe is necessary.

There's one more thing that I believe would solve most child sexual abuse: education.
For kids and parents alike.

I may talk about it in another article.

