AI Encryption Privacy United Kingdom

WhatsApp Boss Says No To AI Filters Policing Encrypted Chat (theregister.com)

An anonymous reader quotes a report from The Register: The head of WhatsApp will not compromise the security of its messenger service to bend to the UK government's efforts to scan private conversations. Will Cathcart, who has been at parent company Meta for more than 12 years and head of WhatsApp since 2019, told the BBC that the popular communications service wouldn't downgrade or bypass its end-to-end encryption (E2EE) just for British snoops, saying it would be "foolish" to do so and that WhatsApp needs to offer a consistent set of standards around the globe. "If we had to lower security for the world, to accommodate the requirement in one country, that ... would be very foolish for us to accept, making our product less desirable to 98 percent of our users because of the requirements from 2 percent," Cathcart told the broadcaster. "What's being proposed is that we -- either directly or indirectly through software -- read everyone's messages. I don't think people want that."

Strong E2EE ensures that only the intended sender and receiver of a message can read it; not even the provider of the communications channel, nor anyone eavesdropping on the encrypted chatter, can see its contents. The UK government is proposing that app builders add an automated AI-powered scanner to the pipeline -- ideally in the client app -- to detect and report illegal content, in this case child sex abuse material (CSAM).
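
To make the proposed architecture concrete, here is a minimal sketch, in Python, of where such a scanner would sit in a messaging client's send path. It is not based on any real WhatsApp or government API; every function in it (looks_like_known_csam, report_to_authority, encrypt_and_send) is a hypothetical placeholder.

# Hypothetical sketch only: where client-side scanning would sit in the send
# path. None of these functions correspond to a real WhatsApp or government
# API; they are placeholders illustrating the proposed architecture.

def looks_like_known_csam(plaintext: bytes) -> bool:
    """Stand-in for the mandated scanner; the real thing would be an opaque,
    government-supplied model running on the device."""
    return False  # placeholder: always reports "clean"

def report_to_authority(plaintext: bytes, sender: str) -> None:
    """Stand-in for the reporting channel the proposal implies."""
    print(f"flagged message from {sender} would be reported here")

def encrypt_and_send(plaintext: bytes, recipient_key: bytes) -> None:
    """Stand-in for the usual end-to-end encryption and transmission path."""
    print(f"{len(plaintext)} bytes encrypted to the recipient's key and sent")

def send_message(plaintext: bytes, sender: str, recipient_key: bytes) -> None:
    # The crucial point: the scan runs on the device, over the plaintext,
    # before E2EE is applied, so a match escapes the encryption envelope.
    if looks_like_known_csam(plaintext):
        report_to_authority(plaintext, sender)
    encrypt_and_send(plaintext, recipient_key)  # transmission itself stays encrypted

if __name__ == "__main__":
    send_message(b"hello", sender="alice", recipient_key=b"\x00" * 32)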

The upside is that at least messages are encrypted as usual when transmitted: the software on your phone, say, studies the material, and continues on as normal if the data is deemed CSAM-free. One downside is that any false positives mean people's private communications get flagged up and potentially analyzed by law enforcement or a government agent. Another downside is that the definition of what is filtered may gradually change over time, and before you know it, everyone's conversations are being automatically screened for things politicians have decided are verboten. A third downside is that client-side AI models that don't produce a lot of false positives are likely to be easily defeated, and are mainly good for catching well-known, unaltered CSAM examples.
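
As a rough illustration of that last trade-off, known-image detection is typically done by comparing perceptual hashes against a database under a Hamming-distance threshold. The toy Python sketch below uses invented 64-bit hash values (nothing here is a real fingerprint or a real matcher): a tight threshold keeps false positives low but only catches unaltered copies, while a loose threshold catches altered copies at the cost of flagging unrelated look-alikes.

# Toy illustration of threshold matching on 64-bit perceptual-hash values.
# All hash values below are invented; this is not a real matching system.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit fingerprints."""
    return bin(a ^ b).count("1")

KNOWN_BAD = {0xDEADBEEFCAFEF00D}  # database of fingerprints of known images

def is_match(candidate: int, threshold: int) -> bool:
    return any(hamming(candidate, h) <= threshold for h in KNOWN_BAD)

exact = 0xDEADBEEFCAFEF00D              # unaltered copy of a known image
altered = exact ^ 0x000000000000003F    # lightly re-encoded copy: 6 bits differ
lookalike = exact ^ 0x00FF00FF00FF0000  # unrelated image whose hash lands nearby: 24 bits differ

for threshold in (0, 8, 26):
    print(f"threshold={threshold:2d}  "
          f"exact={is_match(exact, threshold)}  "
          f"altered={is_match(altered, threshold)}  "
          f"lookalike={is_match(lookalike, threshold)}")

# threshold  0 -> only the unaltered copy is caught (trivial to defeat)
# threshold  8 -> the lightly altered copy is caught too
# threshold 26 -> altered copies are caught, but the unrelated image becomes a false positive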


Comments Filter:
  • by Anonymous Coward

    How 'bout China? Will they stand so tall there?

    • by Askmum ( 1038780 )
      China has blocked WhatsApp since 2017. Yes, they stand so tall there, and the consequence is that China uses its own messenger, which of course is highly backdoored.
  • by NoNonAlphaCharsHere ( 2201864 ) on Tuesday August 02, 2022 @07:37PM (#62757418)
    We TOTALLY believe Facebook when they say that they're absolutely NOT NOT NOT listening to our "encrypted" communications on their platforms....
    • Re:Of Course (Score:5, Interesting)

      by Joce640k ( 829181 ) on Tuesday August 02, 2022 @08:39PM (#62757542) Homepage

      There's probably no value in WhatsApp beyond knowing your contact list, how often you talk to each person on it, what time of day you talk to each one, your IP address, etc. They can get all that without knowing the message contents.

      Security researchers would soon find out if it was sending any info apart from messages. It would be a reputation-maker for most of them and a complete disaster for WhatsApp.

      On the whole I'd say the odds are very, very low that Facebook is reading the messages.

      • Yep (although I'd be 100% more likely to believe Facebook would do this than, say, Apple). I suspect more than anything the value of WhatsApp to Facebook is its brand. And killing that brand by destroying its reputation as supposedly safe would kill what little value it has.

        I think this is something people miss about this. Companies don't do malicious things because it's malicious. They do it because the value equation favors it in a specific way that outweighs the harm. Apple has absolutely nothing to gain ou

        • Er, by "surveil political dictatorships" read "surveil political dissidents". This archaic, backwards-ass website needs an edit button.

          • Re: (Score:2, Insightful)

            by Teun ( 17872 )
            It has an edit button: you get to (p)review your writing before the final posting.
      • by khchung ( 462899 )

        On the whole I'd say the odds are very, very low that Facebook is reading the messages.

        On the whole, I would say most definitely Facebook is reading every key you typed, including those not sent, on WhatsApp.

        Remember, this is Facebook we are talking about.

      • by gweihir ( 88907 )

        Exactly. While these apps will look like black magic to the average person, there are enough people who can analyze them, and some will.

      • On the whole I'd say the odds are very, very low that Facebook is reading the messages.

        Reading as in sending to the borg and having someone look at them, not every message.
        But Facebook has already implemented client-side scanning of messages, even E2EE ones, as they are sent. It has been reported in the past that their anti-abuse systems flag messages containing certain keywords, and the flagged messages are sent separately to the borg for analysis. Everyone actually assumed they'd be okay with scanning for CSAM, given that Facebook already confirmed they did this kind of scanning on Messenger, so everyone na

    • I do.

      Set aside your TDS for a moment and apply some common sense. Facebook doesn't skirt the edges of privacy laws because it's immoral; they do it because it's profitable. Even then, they do try their best to stay within the law: advertising is picking up pennies in front of steamrollers, and the steamroller of privacy law is a bit capricious.

      Unless you can come up with some plausible way of earning enough money off WhatsApp surveillance to offset the damage of it being discovered, you aren't using common s

  • No, you can't. (Score:5, Insightful)

    by aardvarkjoe ( 156801 ) on Tuesday August 02, 2022 @07:53PM (#62757444)

    The upside is that at least messages are encrypted as usual when transmitted: the software on your phone, say, studies the material, and continues on as normal if the data is deemed CSAM-free.

    Only if (1) you trust the messaging software to do as it says -- which you can't, since you can't audit the source code or verify what the "phone app" you got from Google/Apple actually contains -- and (2) you trust that the AI "black box" that is scanning for CSAM hasn't been altered to scan for anything else -- which you can't, because most of these "AI models" can't be understood through any conventional analysis.

    The first is always going to be a problem with the current model of software distribution on phones, but letting the government inject some AI model that will allow it to capture an unencrypted message whenever they detect something is going to be too tempting an opportunity for them to ignore. They'll just add other training data to detect messages related to any crime that they're particularly interested in, CSAM or not.

    • Re:No, you can't. (Score:4, Insightful)

      by dgatwood ( 11270 ) on Wednesday August 03, 2022 @12:46AM (#62757868) Homepage Journal

      The upside is that at least messages are encrypted as usual when transmitted: the software on your phone, say, studies the material, and continues on as normal if the data is deemed CSAM-free.

      Only if (1) you trust the messaging software to do as it says -- which you can't, since you can't audit the source code or verify what the "phone app" you got from Google/Apple actually contains -- and (2) you trust that the AI "black box" that is scanning for CSAM hasn't been altered to scan for anything else -- which you can't, because most of these "AI models" can't be understood through any conventional analysis.

      Exactly this. The British government wants their own CSAM machine learning model installed on everyone's phones, but would have an absolute fit if China wanted to install their own models on phones owned by British citizens, reporting back to the Chinese government whenever they find something. And that's always the end goal of this sort of technology.

      Why would they want to do that, and why would they want to prevent others from doing it? Because they know that it would be very easy to train a model in such a way that it appears to detect child porn, but that also recognizes (for example) certain code words that are likely to be used by foreign operatives, so that when they get flagged by the model, the government knows that they are spooks and can arrest them, execute them on the spot, or feed them false information to wreck their intelligence machinery. Every government wants this capability, which IMO is an adequate reason in and of itself to never allow them to have it.

      So anybody who thinks this proposal is a good idea either doesn't understand enough about technology to see how it can be abused, or is a complete idiot who knows that it can and almost certainly *will* be abused horribly, wants to abuse it horribly, and naïvely believes that it is somehow possible to do so without getting caught and without opening the door for others to do so in response. So either these people are the masterminds behind a machine learning arms race that likely can and will topple world governments, silence dissidents, and generally sow chaos, or they are useful idiots who are being used by those sorts of people for nefarious purposes. There's really not much of a third option.

      Either way, anyone suggesting these ideas absolutely should not be trusted to have any regulatory power whatsoever, because either they don't know what they're doing or they're deliberately trying to destroy the world.
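
To make the audit problem raised in the comment above concrete, here is a toy Python sketch in which plain keyword lists stand in for a trained model (every string in it is invented). The point is only that a shipped detector is observable solely as a flag/no-flag function, so probing its outputs cannot distinguish "flags only CSAM" from "flags CSAM plus whatever else it was quietly built to recognize".

# Toy sketch of the black-box audit problem. Keyword lists stand in for a
# trained model; every string here is invented for illustration.

ADVERTISED_TARGETS = {"known-bad-fingerprint-1", "known-bad-fingerprint-2"}
HIDDEN_TARGETS = {"meet at the usual place"}  # e.g. phrases of interest to an agency

def opaque_detector(message: str) -> bool:
    """What gets shipped to the phone: callers only ever see True or False."""
    if any(t in message for t in ADVERTISED_TARGETS):
        return True
    if any(t in message for t in HIDDEN_TARGETS):  # invisible to an output-only audit
        return True
    return False

# An auditor probing with inputs they think of sees nothing unusual...
print(opaque_detector("see you at lunch tomorrow"))         # False
# ...while the hidden behaviour only fires on inputs the auditor never tries.
print(opaque_detector("ok, meet at the usual place at 9"))  # True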

    • WhatsApp has already been compromised by the British intelligence services multiple times because of "bugs".

      The government's demand for CSAM scanning is out in the open, and unfortunately Meta's marketing people either are not aware of how a legal system works... or they are trying to influence it. They can abide by the law or be fined and banned.

      You don't get to appeal legislation. If your service is against the law, you get fined or put in jail (and maybe your other services get banned; I'm sure the shareholders won't care at all about losing a populace).

  • by Powercntrl ( 458442 ) on Tuesday August 02, 2022 @08:05PM (#62757482) Homepage

    Scanning for CSAM on WhatsApp is about as effective at stopping child sexual abuse as Disney's theme park security checks are at stopping gun crime in Florida. The whole point of this type of security theater is purely to prevent bad PR. You can still get shot in Florida, but it's not likely to happen inside a theme park.

    Likewise, the despicable scum who deal in kiddy porn will still find ways of doing it. But the companies who implement CSAM scanning get to say "don't blame us, we did something."

  • ... European Union ... proposed legislation that puts much of the responsibility for ferreting out and exposing such material on providers.

    CSAM is so rare that it's not cost-effective for any government to go hunting for it. This is why governments are repeatedly demanding the fascist answer of having corporations do the hunting for them.

    • Commercial networks tend to be large, and taking them down once discovered yields lots of arrests for the manpower involved. Too many eyeballs to hide VIP targets, though; very embarrassing.

      • by gweihir ( 88907 )

        "Commercial networks"? what are you talking about? Sure, some non-commercial sites get taken down occasionally, but I am not aware of any commercial sites that have ever been found. Got a reference? You seem to confuse this with the drug-trade in the darknet. That one is certainly commercial.

        • Welcome to Video.

          If a large ring requires abuse images for access rather than payment, infiltrating it will still generate a ton of arrests. Hunting incidental offenders is best automated, but it's certainly efficient for governments to have some specialists.

  • STASI-fication (Score:5, Insightful)

    by Tokolosh ( 1256448 ) on Tuesday August 02, 2022 @10:08PM (#62757652)

    The trend is for the military-industrial-spying-snooping-government complex to outsource the tracking and prosecution of those they deem to be doing illegal things. Of course, the costs of all this outsourcing, and the unintended consequences, are borne by the public.

    Banks have to catch money-launderers, ISPs have to catch pirates, email providers have to catch spies and pregnancies, cloud companies have to catch kiddie wankers, TSA has to catch dope smokers, doorbell cameras have to catch the darkies, airlines have to catch everybody. All that is left for the cops is civil forfeiture and tolchoking.

    When just about anything can be construed a crime, what could possibly go wrong?

  • When you consider the Five Eyes relationship, you have to wonder whether the prying would end at the borders of the UK.

  • I just do not buy that this is as big as various groups claim. We could not have developed into who we are today if this were such common behaviour.
    • by gweihir ( 88907 )

      Well, such materials exist. And since nobody can admit to having checked on the actual size of the problem, and even trying to is a very bad idea, those exploiting the existence of such material for their own ends are free to lie and fantasize about the problem being much, much bigger than it probably is. For example, a high-placed police expert I heard talk at a workshop said there is basically no commercial angle to it. This makes sense, as the police are really, really good at following money trails. Yet

  • making our product less desirable to 98 percent of our users because of the requirements from 2 percent

    It is not the users who are asking for this; it is the government. So it is at best 0.5% (1 out of 195 countries).

  • Our current crop of would-be leaders makes Trump's claim of being a stable genius credible.
  • Another problem is that if such a service does not disclose the snooping, it is distributing a trojan horse. And if it does disclose it, it is still distributing spyware.
  • What is E2EE?
