Thing is, age-gating some content in the digital world does make sense. But there are a lot of problems with the ways we can achieve that at present, privacy being one of the major ones. How do we validate a user's age without needing much information from them beyond their age, while still being able to trust the answer we get? Setting pornography aside, there's a wealth of other use cases: gambling websites (although, by requiring a credit card to begin with, they carry some inherent protection) and similar content that we already age-gate in the physical realm. Perhaps there's a reasonable case for a standardised approach to age restrictions and parental control systems across all computing vendors, with regulatory backing.
Imagine a system where a central authority, or perhaps a series of decentralised authorities, vouches for your age. Entities that already have a legitimate reason to tie your identity to your age, like government portals and banks, would create and hold public keys corresponding to your identity.

Now you try to access an age-restricted site like PH. You enter an identifier matching a public key, probably in some human-friendly format like a username, and the website encrypts a packet containing (randomChallenge, minAge, maxAge?, t, f, contentPub) with that public key, where t and f are random values. Though the packet is encrypted for your public key, and the corresponding private key is held by the authority, the website you're accessing does not send it to the authority directly; it sends the packet to your device. You then open your "Age Verifier" app, pass the packet on, and validate your identity with a 2FA code in a standard scheme. The authority decrypts the packet. If your identity's age is within the min-max range, it XORs randomChallenge with t; if not, it XORs it with f. It encrypts the result with contentPub, the public key belonging to the content you're trying to access, and sends it back to you so you can pass it on to the content provider.

Assuming I haven't made any mistakes here, only the authority can perform the t/f XOR on the decrypted challenge (depending on the encryption scheme, XOR could be replaced with multiplication, addition, exponentiation, or whatever), and only the content holder can decrypt the result. The answer, of course, should not be malleable into flipping t and f; to prevent that we probably need the original packet to contain another value, "o", representing the ordering of the other variables, so t and f are not always the same bits in the encrypted package.
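As a rough sanity check, the core challenge-response step might look like this. This is a toy Python sketch: all the names are made up, and the public-key encryption wrapping each message is omitted so that only the XOR logic stands alone.

```python
import secrets
from dataclasses import dataclass

@dataclass
class Packet:
    random_challenge: int
    min_age: int
    max_age: int
    t: int  # random blinding value meaning "age in range"
    f: int  # random blinding value meaning "age out of range"

def content_make_packet(min_age: int, max_age: int) -> Packet:
    # The content provider picks fresh random values per request.
    return Packet(
        random_challenge=secrets.randbits(128),
        min_age=min_age,
        max_age=max_age,
        t=secrets.randbits(128),
        f=secrets.randbits(128),
    )

def authority_answer(packet: Packet, user_age: int) -> int:
    # The authority XORs the challenge with t if the age is in range, else f.
    blind = packet.t if packet.min_age <= user_age <= packet.max_age else packet.f
    return packet.random_challenge ^ blind

def content_check(packet: Packet, answer: int) -> bool:
    # Only the content holder knows which blinded result means "yes".
    return answer == packet.random_challenge ^ packet.t

p = content_make_packet(18, 120)
print(content_check(p, authority_answer(p, 25)))  # age in range: prints True
print(content_check(p, authority_answer(p, 15)))  # out of range: prints False
```

Note that the content holder never sees the user's age, only which of its two random blinds came back.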
With this scheme, the content provider doesn't learn anything about you, not even your actual age; only whether you're within the allowed range. The authority doesn't learn what page you're trying to visit, assuming the content holder's public key is generated per request, and the authority is trusted by assumption anyway. Two issues remain: the authorities learn how frequently you try to access age-gated content, which can itself be seen as a privacy violation, and content holders could pool their records with other providers to map your public key's activity back to you. Here's how I propose to solve that.
1) As part of standardising this technology, all OS vendors ship daemons that continually make decoy requests. Every HTTP(S) request is accompanied by a valid but unused execution of the above protocol. Since all requests are now effectively age-gated, a verification attempt no longer means anything on its own. Note that 2FA is not required here: the answers to the decoy requests don't matter, and the protocol can be designed so that the authority sends its result package and 2FA request in a single message, with the result only accessible after 2FA'ing it, so the authority cannot tell the difference between a 2FA-unlocked request and a one-shot ignored one.
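A decoy daemon along these lines could be as simple as the following hypothetical sketch, where `run_protocol` stands in for one full (ignored) round-trip of the protocol and the intervals are randomised so the decoys don't tick like a clock:

```python
import random
import threading
import time

def chaff_daemon(run_protocol, stop_event, mean_interval=30.0):
    # Fire a decoy age-verification round-trip, then wait an exponentially
    # distributed interval before the next one, until asked to stop.
    while not stop_event.is_set():
        run_protocol()
        stop_event.wait(random.expovariate(1.0 / mean_interval))

# Demo with a sped-up interval: count how many decoys fire in 50 ms.
calls = []
stop = threading.Event()
worker = threading.Thread(
    target=chaff_daemon, args=(lambda: calls.append(1), stop, 0.001)
)
worker.start()
time.sleep(0.05)
stop.set()
worker.join()
print(len(calls) >= 1)  # decoy traffic was generated
```

The exponential spacing is just one choice; any distribution works as long as real requests aren't statistically distinguishable from the chaff.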
2) After every request, a new public key is generated for your identity. The exact algorithm can be almost anything, as long as content providers cannot deterministically guess the next key; it could incorporate the challenge from the prior request, your 2FA code, and your username.
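One hypothetical derivation, sketched in Python: chain a keyed hash over the prior challenge, the 2FA code, and the username, seeded with a secret known only to you and the authority. The function and seed names here are made up for illustration, and a real scheme would feed the resulting seed into actual keypair generation:

```python
import hashlib
import hmac

def next_identity_key_seed(prev_seed: bytes, prior_challenge: bytes,
                           totp_code: str, username: str) -> bytes:
    # Mix the previous seed with the last protocol run's challenge, the 2FA
    # code, and the username. Anyone without prev_seed (i.e. any content
    # provider) cannot predict the next key, but user and authority can both
    # derive it deterministically.
    material = b"|".join([prior_challenge, totp_code.encode(), username.encode()])
    return hmac.new(prev_seed, material, hashlib.sha256).digest()

seed0 = b"initial-seed-from-authority"  # illustrative placeholder
seed1 = next_identity_key_seed(seed0, b"challenge-1", "492817", "alice")
seed2 = next_identity_key_seed(seed1, b"challenge-2", "105533", "alice")
print(seed1 != seed2)  # each request yields a fresh, unlinkable key seed
```

Because both sides can recompute the chain, the authority always knows which public key currently belongs to your identity while outsiders see an unrelated key per request.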
Just a quick, off-the-top-of-my-head idea for what it could look like. Then we just need regulation requiring certain types of content to use this standard. Alternatively, we could standardise the parental control features that already exist in modern operating systems, require web content to ask the OS about parental control levels, and simply make parents set those systems up.