Nycturne
Elite Member
- Joined
- Nov 12, 2021
- Posts
- 1,798
Yeah, someone is misleading us: either Apple, or the app developers who say they aren't listening. This is where we'd trust Apple's security to ensure apps can't somehow cheat when the mic is turned off in the privacy settings, but it's clearly not working, at least on my end. I have several verifiable instances where I know nothing else could have been listening, involving things I've never even texted or googled.
The thing really boils down to this, IMO:
Apple requires all app developers to request permission from the OS to get access to the mic, and when the mic is active, the orange indicator light is on. Unless app developers are literally shipping OS exploits to bypass that, and exploiting the external microcontroller that drives the indicator lights (as Cliff points out), I believe it is unlikely app developers are listening to you without you having granted microphone permission at some point. And because Apple bakes the indicator lights into that microcontroller, it seems unlikely Apple would be so overtly malicious. In my experience doing dev work at big companies, this sort of maliciousness almost always comes from the top, and Tim doesn't strike me as the type. If anything, Tim strikes me as a private person who pushed the privacy initiatives precisely because he's a private person. So I'm still dubious of the claim here, as it requires extraordinary maliciousness and/or incompetence on the part of at least two parties.
I mean, consider the phenomenon of doomscrolling, and the algorithms built specifically to keep you scrolling. Why did that happen? Someone wanted to increase time spent on Facebook so that Facebook could sell more ad spots. That's it. But we got a whole new slang word and a societal phenomenon out of the simple logic of "more time on the service = more ads = good." All of these ugly side effects came from "make _this_ line go up."
And remember that Target was able to use _predictive models_ based on shopping history alone to suggest pregnancy-related items to a teenage girl, before her father even knew she was pregnant. Certain trends in human behavior are predictable enough to be creepy and uncanny without anyone having to record your voice. And that was over a decade ago. Consider how much more data we've fed into similar models since then, drawing on a range of related behaviors rather than relying on just one (like purchase history). https://www.forbes.com/sites/kashmi...teen-girl-was-pregnant-before-her-father-did/
Now, are there companies willing to take advantage of what you _do_ give them? Sure. But it may not be the sources you think, and it may look like what happened with push notifications: you enable them for DoorDash and Uber Eats so you know when food will show up, but then they use them as a side channel to poke you for repeat business, because some internal study showed that nudging people around meal times yields some percentage of extra orders. I fully expect the same with microphone access: anything an app does record and send up to the cloud could very well wind up as speech-to-text content in some advertising archive out there.