The AI thread

I have no idea how accurate this is, but I have a couple of, well, paranoid friends. As a result, we first moved our text group to WhatsApp, and all was fine until Facebook took it over; then we moved to Signal. All good.

But according to this guy, AI will be able to "see" what is on your screen, so it won't matter if the conversation is end-to-end encrypted. It will only be as secure as the person you send a message to. So if User A has AI turned off but sends a message to User B, who has it turned on, then the AI can see what A sent.
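To see why, here's a toy sketch (not Signal's actual protocol; Fernet stands in for the real double-ratchet encryption, and on_device_assistant is a made-up stand-in for whatever AI feature the recipient has enabled):

```python
# Toy illustration: E2E encryption protects the message in transit,
# not after the recipient's device decrypts it.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()   # pretend A and B agreed on this key

# User A (AI turned off) encrypts and sends.
ciphertext = Fernet(shared_key).encrypt(b"meet at the usual place, 8pm")

# On the wire, an eavesdropper sees only ciphertext -- E2E holds here.

# User B's device decrypts so B can read the message...
plaintext = Fernet(shared_key).decrypt(ciphertext)

# ...and from this point on, ANY process on B's device -- including an
# AI assistant B has enabled -- can read the plaintext. A's choice to
# keep AI off never entered into it.
def on_device_assistant(text: bytes) -> None:
    print("assistant saw:", text.decode())

on_device_assistant(plaintext)
```

The encryption does its job on the wire; the leak happens after the legitimate decryption, which no protocol can prevent.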

Just to add: the issue here depends a lot on where the inference work is done. Part of the "lack of privacy" hinges on your messages being sent to a service for processing or harvested for training data. If that is happening, it does undermine E2E encryption. This is one reason Apple is doing local inference and investing in Private Cloud Compute: https://security.apple.com/blog/private-cloud-compute/
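For a concrete picture of the difference, here's a hypothetical sketch; none of these names are real APIs, they just mark where the plaintext travels in each design:

```python
import json
import urllib.request

class StubLocalModel:
    """Stand-in for an on-device model shipped with the OS."""
    def generate(self, prompt: str) -> str:
        return f"(on-device summary of {len(prompt)} chars)"

def summarize_locally(message: str) -> str:
    # On-device inference: the plaintext never leaves the phone.
    return StubLocalModel().generate(f"Summarize: {message}")

def summarize_in_cloud(message: str) -> str:
    # Cloud inference: the plaintext is shipped to a third-party server,
    # stepping outside the E2E-encrypted channel it arrived on.
    req = urllib.request.Request(
        "https://inference.example.com/v1/summarize",  # hypothetical endpoint
        data=json.dumps({"text": message}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["summary"]

print(summarize_locally("meet at the usual place, 8pm"))
```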

But someone using a tool like ChatGPT to rewrite an e-mail reply can leak the original e-mail to OpenAI, since the processing happens on OpenAI's servers. Apple's features that invoke ChatGPT will do the same (though the OS asks permission in that case, and I don't think those features have shipped yet). And you can't really stop the person on the other end of the conversation from just copy/pasting it into an AI chatbot.
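To make that concrete, here's a minimal sketch using OpenAI's Python SDK (the model name and e-mail text are just placeholders): the point is that the other person's original e-mail ends up inside the request body sent to OpenAI, even though they never agreed to that.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The other person's words -- they never consented to this going anywhere.
original_email = "Hi -- can we push the offsite to Friday? ..."

# A "rewrite my reply" prompt necessarily quotes the original message,
# so the sender's text rides along in the request to OpenAI's servers.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Rewrite my reply to be more polite."},
        {"role": "user", "content": f"Original email:\n{original_email}\n\nMy draft reply:\nNo."},
    ],
)
print(response.choices[0].message.content)
```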

But the real problem is that the average person can't audit any of this. On one hand, things can turn out worse than we think (see the data broker that hoovered up precise location data just by listening to an ad network's requests for targeted ads). On the other, when a company does try to adhere to a privacy policy, the public just ignores it. That's what's happening to Apple right now. Microsoft generally has a better privacy policy than people give it credit for, yet because Microsoft put ads in the free tier of Office, it was indirectly leaking personal data to that same data broker I mentioned above.

It's a giant mess.
 