Many will have seen the Australian ban on social media for under-16s, with no clear idea of how it could be implemented. There have been mentions of “double-blind age verification”, but I can’t find any information on it.

Out of curiosity, how would you implement this with privacy in mind if you really had to?

  • conciselyverbose@sh.itjust.works · 26 days ago

    You can’t.

    Age verification is not compatible with any remotely acceptable version of the internet. It’s an obscene privacy violation in all cases by definition.

    Any implementation short of a webcam watching you while you use the site is trivial to bypass with someone else’s ID, while opening numerous massive tracking/security holes for no reason.

    • actually@lemmy.world · 26 days ago

      Doesn’t this assume the issuing agency’s employees are all morally sound and not leaking data unnoticed through a badly designed internal system, built by people who are out of touch? Most things like this are designed that way, regardless of country.

      I’m sure one could make it watertight, but it’s very hard and still depends on trusting people. The conversation here is about one piece of a larger system. There are probably a hundred moving parts in any bureaucracy.

      • demesisx@infosec.pub · 26 days ago

        This is the case ANYWHERE. How do we know there aren’t back doors in our OSes? We literally have no clue. We do THE BEST WE CAN using the clues we have.

        • pro3757@programming.dev · 26 days ago

          Yeah, these things quickly boil down to the trusting-trust problem (see Ken Thompson’s Turing Award lecture, “Reflections on Trusting Trust”). You can’t fully trust any system unless you’ve designed every bit from scratch.

          You gotta put your trust somewhere, or you won’t be able to implement jack.

          • socsa@piefed.social · 24 days ago

            This isn’t as limiting as it seems at first glance, though. Sending a picture of a message enciphered with a true one-time pad doesn’t rely on the security of the transport or the camera. From there you can choose to make a compromise of convenience and get to things like private-key cryptography, where the ciphers are basic XOR arithmetic you can do by hand.
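
            As a tiny illustration of that last point (a hedged sketch, not production crypto): a one-time pad really is just XOR with a random key as long as the message, which is why it can be done by hand.

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """One-time pad: XOR each message byte with a fresh, truly random key byte.

    Security rests only on the key being random, as long as the message,
    shared out-of-band, and never reused -- not on any software or transport.
    """
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same hand-computable step.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"meet at noon")
assert otp_decrypt(key, ct) == b"meet at noon"
```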

    • leisesprecher@feddit.org · 26 days ago

      God I hate cryptography so much for making me feel stupid every time I read anything about it.

      I want to feel smart!

      • demesisx@infosec.pub · 26 days ago

        I find it intimidating for sure. They say “never roll your own crypto” and I take those words to heart. Still, it would suck to have to hire someone and just trust their work. That person could be another Sam Bankman-Fried or Do Kwon, you’d be party to their scam, and you’d have no idea.

        • leisesprecher@feddit.org · 26 days ago

          I’m not sure what these things have to do with each other. How exactly would cryptography have prevented SBF, you know, a crypto bro?

          • demesisx@infosec.pub · 26 days ago

            It wouldn’t have. You totally misunderstood my comment. Reread it.

            To paraphrase: when you hire a cryptographer to work on your project, you have to hope they’re not a scammer, because they could easily lie to you about the soundness of their cryptography and you’d have no idea. You see, SBF and Do Kwon were liars. If they had been cryptographers (they weren’t), their employer would have had to believe them, since they’d be experts in something nearly impossible for a layman to understand.

            Do you get it yet?

            • leisesprecher@feddit.org · 26 days ago

              I get what you’re trying to say, but I’m not sure it makes sense.

              I mean, that’s literally every field you’re not an expert in. And most of us are experts in less than one field.

              You don’t know about medicine, car engines, electricity or tax laws, you have your guys for that. Even in our field, we have guys for databases, OSes, networking, because quite frankly nobody understands those really.

              So I’m not sure what the point of your comment is. That having experts is good? Yeah, I guess? Did we need to have that reinforced?

              • demesisx@infosec.pub · 26 days ago

                If a doctor or mechanic was wrong, at least you’d have an inkling that things were wrong and you’d be able to sue them. Whereas with cryptography, no one has ANY IDEA WHATSOEVER if there are back doors until they are used to rob people blind. In all of the cases you mentioned, victims of those abuses have recourse whereas in cryptography, if things are wrong, they often CANNOT be patched and it’s even exceptionally hard for an expert to prove what went wrong.

      • demesisx@infosec.pub · 26 days ago

        You seem to be joking, but ZK and homomorphic encryption don’t necessarily involve blockchain, although they can.

        This is like someone mentioning UUIDs and you leaving a weird sarcastic comment about databases (with everyone suddenly villainizing them because they get used for scams).

        • PoolloverNathan@programming.dev · 26 days ago

          I believe they were referring to last year’s trend of blockchain being introduced to everything unnecessarily (as a marketing buzzword, similar to AI).

          • demesisx@infosec.pub · 26 days ago

            I got the joke. What I didn’t get is why it was even remotely relevant to the discussion at hand since ZK is used a lot in crypto but it’s also used everywhere else. It muddied the waters and made the joke somewhat nonsensical, IMO. Perhaps OP was unaware of how prevalent ZK is in the crypto world…

            Oh well. Have a good day.

            • jonathan@lemmy.zip · 26 days ago

              You say you got the joke, but everything else you said suggests you didn’t. Just to be clear I wasn’t being critical of your reply, I was mocking the cryptobros the other poster mentioned.

  • eyeon@lemmy.world · 25 days ago

    All I can think of are some variations of you trusting a service to validate your id and give you a token that just asserts your id has been validated.

    But it’s still not really privacy-preserving, because it relies on trusting both parties not to collaborate against your privacy. If at some point the ID provider decides to start keeping records of which tokens were generated from your ID, and the service provider tracks what was consumed with each token, then you can still put it all back together.

    • phlegmy@sh.itjust.works · 25 days ago

      That’s when you add an extra validator (an extra point of failure, admittedly).
      Server 1 generates a token for server 2 to validate.
      You send the token to server 2, which validates it and generates a token for server 3. Finally, server 3 validates that token and grants or denies your access.

      The more nodes you have across different countries, the harder it is for the last server to discover your identity.

      Definitely not without its flaws, but I wonder if a decentralised node setup similar to the tor network could work.
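
      A rough sketch of that relay idea (all server names and secrets are made up, and everything is simulated in one process): each hop shares a secret only with its neighbour, so the final server learns nothing except that the previous hop vouched for the claim.

```python
import hashlib
import hmac
import secrets

# Hypothetical relay: each adjacent pair of servers shares its own secret,
# simulated here in a single process. A hop can verify only its upstream
# neighbour's voucher and can issue one for its downstream neighbour.
PAIR_SECRETS = {
    ("server1", "server2"): secrets.token_bytes(32),
    ("server2", "server3"): secrets.token_bytes(32),
}

def vouch(upstream: str, downstream: str, claim: bytes) -> bytes:
    """Upstream server vouches for a claim toward its downstream neighbour."""
    return hmac.new(PAIR_SECRETS[(upstream, downstream)], claim, hashlib.sha256).digest()

def check(upstream: str, downstream: str, claim: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(vouch(upstream, downstream, claim), tag)

claim = b"over_16"                          # the only thing that travels end to end

tag1 = vouch("server1", "server2", claim)   # server 1 checked the ID out-of-band
assert check("server1", "server2", claim, tag1)

tag2 = vouch("server2", "server3", claim)   # server 2 re-vouches, unlinkably
assert check("server2", "server3", claim, tag2)

print("access granted")                     # server 3 never saw anything identifying
```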

  • e0qdk@reddthat.com · 26 days ago

    Frankly, the only sane option is an “Are you over the age of (whatever is necessary) and willing to view potentially disturbing adult content?” style confirmation.

    Anything else is going to become problematic/abusive sooner or later.

  • letsgo@lemm.ee · 26 days ago

    Not a cryptographic expert by any means, but maybe something like this would work. It would be implemented in common places people shop, supermarkets for instance. You’d go up to customer service and show your ID for visual confirmation only; no records would be created. In return the service rep would give you a list of randomised GUIDs, against which the only permissible record is “has been taken”. Each time you need to prove your age, you’d feed in one of those GUIDs.
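
    A toy sketch of that bookkeeping (names and storage are made up, and the resale problem raised in the reply below is left unsolved): the only server-side state is which codes exist and which have been taken.

```python
import uuid

# Hypothetical one-time-code ledger. Nothing here links a code to a person;
# the ID check happens in person, eyes-only, at the customer service desk.
issued: set[str] = set()   # handed out after a visual ID check
spent: set[str] = set()    # consumed by age-gated sign-ups

def issue_batch(n: int = 10) -> list[str]:
    """Rep verifies age visually, then hands over n anonymous one-time codes."""
    batch = [str(uuid.uuid4()) for _ in range(n)]
    issued.update(batch)
    return batch

def redeem(guid: str) -> bool:
    """A site accepts each code exactly once ('has been taken')."""
    if guid in issued and guid not in spent:
        spent.add(guid)
        return True
    return False

codes = issue_batch()
assert redeem(codes[0]) is True
assert redeem(codes[0]) is False   # replaying the same code is rejected
```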

      • litchralee@sh.itjust.works · 26 days ago

        Sadly, this type of scheme suffers from two problems: 1) repudiation, and 2) transferability. An ideal system would be non-repudiable, meaning that when a GUID is used, it is unmistakably an action that could only have been taken by the age-verified person. But a GUID cannot guarantee that, since it’s easy enough for an adult to sell their valid GUIDs online to the highest bidder en masse. And being a simple string, a GUID can easily and confidentially be transferred to the buyer, so that no one but those two would know the transaction took place, or which GUID was passed along.

        As a general rule, when complex questions arise which might possibly be solved by encryption, it’s fairly safe to assume that expert cryptographers have already looked at the problem and that no easy or obvious solution exists. That’s not to say that cryptographers must never be questioned, but that the field is complicated enough that incomplete answers abound.

        IMO, the other comments have it right: there does not exist a general solution to validate age without also compromising anonymity or revealing one’s identity to someone. And that alone is already a privacy compromise.

        • JeremyHuntQW12@lemmy.world · 26 days ago

          You upload your identity to a site and it gives you a date-stamped token which confirms your age.

          Then, when that token is presented to an SM site, the SM site verifies the presenter with the site that issued the token. The identity is a hash generated by the token site and held both in the token and in a namespace at the token site, so only the token site knows the real identity. Once the token has been confirmed, the namespace entry is reused.

          So you can’t really sell the token, because it’s linked back to the identity you uploaded to the token site, and you need to be logged in to the token site.
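
          A loose sketch of how that might look (every name, field, and storage choice here is hypothetical): the token site keeps only a keyed hash of the uploaded identity, and the social media site just asks it whether a presented token is valid and old enough.

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical token site. It stores only a keyed hash of the uploaded identity
# document, so the social media site never sees who the token belongs to.
TOKEN_SITE_KEY = secrets.token_bytes(32)
namespace: dict[str, tuple[str, int]] = {}   # token -> (identity hash, birth year)

def issue_token(identity_document: bytes, birth_year: int) -> str:
    """User uploads ID while logged in and gets back an opaque, dated token."""
    id_hash = hashlib.sha256(TOKEN_SITE_KEY + identity_document).hexdigest()
    stamp = f"{id_hash}:{birth_year}:{int(time.time())}"
    token = hmac.new(TOKEN_SITE_KEY, stamp.encode(), hashlib.sha256).hexdigest()
    namespace[token] = (id_hash, birth_year)
    return token

def confirm(token: str, minimum_age: int) -> bool:
    """Called by the social media site: a bare yes/no, then the entry is retired."""
    entry = namespace.pop(token, None)       # single use; the namespace slot is reused
    if entry is None:
        return False
    _id_hash, birth_year = entry
    return time.localtime().tm_year - birth_year >= minimum_age

token = issue_token(b"<scan of an ID document>", 1990)
assert confirm(token, 16) is True
assert confirm(token, 16) is False           # a replayed or resold token is rejected
```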

          • litchralee@sh.itjust.works · 26 days ago

            To make sure we’re all on the same page, this proposal involves creating an account with a service provider, then uploading some sort of preexisting, established proof-of-identity (eg passport data page), and then requesting a token against that account. The token is timestamped and non-fungible, so that when the token is presented to an age-restricted website, that website can query the service provider to verify that: 1) the token is still valid, 2) the person associated with the token is at least a certain age.

            If I understood that correctly, what you’re describing is an account service combined with an identity service, which could achieve the objectives of a proof-of-age service, but does not minimize privacy complications. And we already have account services of varying degrees of complexity: Google Accounts, OAuth, etc. Basically any service where you log in, since the point of logging in is to associate with an account, although one person can have multiple accounts. Passing around tokens isn’t strictly necessary, since you can just ask the user to prove account ownership by signing into their Google Account, for example. An account service need not verify age, eg signing in to post a comment on a news article.

            Compare this with an identity service like ID.me, which provides records on an individual; there cannot be multiple records for the same live person. This type of service is distinct from an account service, but some accounts are necessarily tied to a single identity, such as online banking. Apart from KYC regulations or filing one’s taxes online, though, an identity service isn’t required for most day-to-day activities, and any additional uses pose identity theft concerns.

            Proof-of-age – as I understand it from the Australian legislation – does not necessarily demand an identity service be used to satisfy the law, but the question in this Lemmy thread is whether that’s a distinction without a difference. We don’t want to be checking identities if we don’t have to, for privacy and identity theft reasons.

            In short, can a person be uniquely, anonymously age-verified online? I suspect not. Your proposal might be reasonable for an identity service, but does not move us further towards a theoretical privacy-centric proof-of-age validation mechanism. If such a mechanism doesn’t exist, then the Australian legislation would be mandating identity checks for subject websites, which then become targets for the holder of those identity records. This would be bad.

  • PlexSheep@infosec.pub · 25 days ago

    If governments got their shit together, we could have something like age assertion with the eID chips in our IDs. Imagine that. The important thing is that website.com just asks the government “is this user an adult?” and the government replies “yes”. No information beyond that one relevant bit is provided, and it comes from a trusted authority.

    Yeah, not gonna happen, just like using the keys in my Personalausweis to send encrypted mail.

    • FooBarrington@lemmy.world · 24 days ago

      The system would have to be built so that the government can’t connect the user to the website, since you don’t want the government building profiles of per-person website usage. The bigger challenge here is trust, though - even a technically perfect system could be circumvented by its operators.

      A good example of this was the COVID tracking apps. The approach was designed so that as little information as possible was leaked.

      • Buddahriffic@lemmy.world · 24 days ago

        Could have a system where a government site cryptographically signs a birth year plus a random token provided by the site you want to use.

        Step 1: access site
        Step 2: site sends random token
        Step 3: user’s browser sends token plus user authentication information
        Step 4: gov site replies with a string containing birth year, token, and signature
        Step 5: send that string to the other site where it uses the government’s public key to verify the signature, showing the birth year is attested by the government

        No need for any direct connection between the user’s identity and the site, or between the government and the site. (A rough sketch follows.)
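
        A minimal sketch of those steps (assuming an Ed25519 keypair held by a hypothetical government service; the names are illustrative, and a real deployment would need expiry, revocation, and anti-sharing measures):

```python
import secrets
from datetime import date

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- hypothetical government service -------------------------------------------
gov_key = Ed25519PrivateKey.generate()
GOV_PUBLIC_KEY = gov_key.public_key()        # published; sites ship it and never call the gov

def gov_sign(birth_year: int, site_token: bytes) -> bytes:
    """Step 4: after the user authenticates, sign 'birth_year || site token'."""
    return gov_key.sign(f"{birth_year}:".encode() + site_token)

# --- the age-gated site ---------------------------------------------------------
site_token = secrets.token_bytes(16)         # step 2: fresh nonce, so replies can't be replayed

def site_check(birth_year: int, token: bytes, signature: bytes, min_age: int = 16) -> bool:
    """Step 5: verify with the government's public key only -- no callback, no identity."""
    try:
        GOV_PUBLIC_KEY.verify(signature, f"{birth_year}:".encode() + token)
    except InvalidSignature:
        return False
    return token == site_token and date.today().year - birth_year >= min_age

# --- the user's browser glues the two halves together (steps 3-5) ---------------
signature = gov_sign(1990, site_token)
print(site_check(1990, site_token, signature))   # True: over 16, and the site never saw an ID
```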

  • Simulation6@sopuli.xyz · 25 days ago

    Sites are just going to ask people ‘Are you over 16? (Y/N)’. Site is now legally covered, and that is all anyone cares about.

    • Aussiemandeus@aussie.zone · 24 days ago

      Just like porn and grog in Australia already.

      Not to mention MySpace, where you needed to be over 16 or something, so we all lied.

  • incogtino@lemmy.zip · 26 days ago

    A joke answer, but with a kernel of truth - IRL age verification often requires a trusted verifier (working under threat of substantial penalty), but it often doesn’t require that verifier to keep any records of individual verification events.

    https://chinwag.au/verification/

    • onlinepersona@programming.dev (OP) · 26 days ago

      As in, you have to roll up to an “age verification bureau” and say “I’d like to sign up to $platform, please verify that I’m of legal age to use it and tell them so”, then you buy a “token” that you can enter upon signing up? Am I understanding that correctly?

      Anti Commercial-AI license

  • hector@sh.itjust.works · 26 days ago

    My friend has worked with a government to create zero-knowledge proof from IDs. Turns out there’s a lot of good software engineered to solve that problem.

    The UX is still shit tho

  • Kissaki@programming.dev · 24 days ago

    Who has authority over age? A state agency or service - the state already issues IDs that carry a date of birth.

    Preferably, we want the user to interact with a website and that website to request age verification, but without the website talking to the government directly; the exchange should go through the user.

    Thus, something like:

    1. State agency issues a certificate to the user
    2. User sets a password to encrypt their certificate
    3. User connects to random website A
    4. Random website A creates an age verification request that only the state agency can resolve, but sends it to the user
    5. User forwards the request to a state service, authenticating with their certificate
    6. State agency confirms and signs the response
    7. User passes the response along to random website A

    There may be simpler or less convoluted alternatives, but I’m sure something like this would be possible, and I think it lays out how “double-blind”(?) verification could work.

    Random website A does not learn the identity or age of the user - only whatever it asked to have verified - and the state agency learns only that a request was made, not its origin or purpose, beyond whatever the request and the user’s pass-along include. (A rough sketch follows.)
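
    A toy simulation of that flow (all names are hypothetical, the request encryption is omitted, and the agency’s records are faked in-process), mainly to show who learns what: website A sees only a yes/no tied to its own nonce, and the agency sees only that some authenticated citizen asked for an over-16 confirmation.

```python
import secrets
from datetime import date

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

agency_key = Ed25519PrivateKey.generate()
AGENCY_PUBLIC_KEY = agency_key.public_key()       # step 1: published alongside the user certificate
CITIZEN_RECORDS = {"user-cert-123": 1990}         # the agency's own data: certificate -> birth year

def website_a_request() -> dict:
    # Steps 3-4: the request names no user and carries only a claim plus a nonce.
    return {"claim": "age>=16", "nonce": secrets.token_hex(16)}

def agency_confirm(request: dict, user_cert: str) -> bytes:
    # Steps 5-6: the user authenticates with their certificate; the agency signs
    # the claim and nonce without learning which website the request is for.
    birth_year = CITIZEN_RECORDS[user_cert]
    if date.today().year - birth_year < 16:
        raise PermissionError("claim not satisfied")
    return agency_key.sign(f"{request['claim']}:{request['nonce']}".encode())

def website_a_accept(request: dict, signed_response: bytes) -> bool:
    # Step 7: verify with the agency's public key; the site learns only "yes".
    try:
        AGENCY_PUBLIC_KEY.verify(signed_response, f"{request['claim']}:{request['nonce']}".encode())
        return True
    except InvalidSignature:
        return False

req = website_a_request()
resp = agency_confirm(req, "user-cert-123")       # relayed by the user, not by website A
print(website_a_accept(req, resp))                # True
```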

    • robinm@programming.dev · 24 days ago

      I never realised it was that simple to do. Thanks a lot for answering the OP’s question. I’d had the same question for longer than I wish to admit, given how easy the answer was!

  • Asidonhopo@lemmy.world · 26 days ago

    I seem to remember Leisure Suit Larry verified age using trivia questions that only older people would answer correctly. I know this because, at 8 years old, I guessed enough of them on my father’s friend’s computer to play it.

    • Kissaki@programming.dev · 24 days ago

      I talked to a friend of mine last week and they didn’t know the old PS/2 mouse/keyboard connectors. They’d seen them before, but they weren’t familiar with them. Nobody who has only ever used USB devices will remember those.

      • Asidonhopo@lemmy.world · 24 days ago

        I was just getting used to PS/2 connectors replacing serial mice and keyboards and then friggin USB comes along…

  • MajorHavoc@programming.dev · 26 days ago

    If I really had to, I would require everyone to whip out whatever assets of sexual maturity they happen to have, and let the computer analyze it and decide a maturity level.

    I would also keep copies for blackmail purposes, because the world is a better place if we all mistrust this solution and anything remotely like it. It’ll be in the legal fine print, which I’m confident no one will read.

    Every answer (other than “trust the user to self identify”) is at least remotely like mine, but I’m proposing we cut out the half-measures on the way.

    To avoid personal consequences, the system I architect will probably wait on a dead-man-switch for me to die or be incarcerated.

    Then it will publish everything it has ever seen, along with AI generated commentary. I’m confident that some of it will be hilarious, and I am hopeful that it will piss everyone off enough that we stop doing this kind of thing.

  • Draconic NEO@programming.dev · 26 days ago

    It can’t. It requires an invasion of privacy to verify information about the individual that the verifier has no right to access.

    Digital age verification is fundamentally at odds with privacy. Let’s not delude ourselves into thinking it can be otherwise.

  • ben_dover@lemmy.ml · 26 days ago

    In blockchain tech there’s the concept of “zero-knowledge proofs”, where you can prove that you hold certain information without revealing the information itself.

    • sinceasdf@lemmy.world · 25 days ago

      Would be interesting to see a government tackle setting up a trustless system like it requires for cybersecurity best practices. I think it’s a thorny issue without a trusted authority, though.

      What stops an ID from being posted publicly or shared en masse? One ID could be used unlimited times - just share the key with minors for $1 at no risk to yourself, since there’s no record of the ‘transaction’ being passed around. That’s better for individual privacy, but it undermines the political impetus for wanting verification in the first place. Usage would probably have to be monitored or capped, which rather defeats the advantage of an anonymous protocol (or you accept that abuse is unenforceable).

    • IphtashuFitz@lemmy.world · 26 days ago

      So how would you use it to solve this problem? There still needs to be some sort of foolproof way of saying “person X is only 14 years old”.

      • planish@sh.itjust.works · 25 days ago

        You would prove something like “I possess a private key that matches one of the public keys in this list of keys belonging to people at least X years old”, but without revealing which item in the list is yours. That’s the zero-knowledge proof’s cool trick.
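
        For concreteness, here is the shape of that claim as plain challenge-response code - deliberately NOT zero-knowledge, since this naive version reveals which list entry matched; a real ZK membership proof (a ring signature, or a zk-SNARK over a commitment to the list) is exactly what removes that leak. All names here are made up.

```python
import secrets

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The registrar publishes the public keys of everyone it has verified as over X.
_adult_private_keys = [Ed25519PrivateKey.generate() for _ in range(3)]
OVER_X_PUBLIC_KEYS = [k.public_key() for k in _adult_private_keys]

my_key = _adult_private_keys[1]              # the prover's own private key

def prove(challenge: bytes) -> bytes:
    """Prover signs a fresh challenge with a key that is somewhere in the list."""
    return my_key.sign(challenge)

def verify(challenge: bytes, signature: bytes) -> bool:
    """Accept if the signature matches ANY listed key -- leaking which one.
    Hiding that index is the zero-knowledge part this sketch does not do."""
    for public_key in OVER_X_PUBLIC_KEYS:
        try:
            public_key.verify(signature, challenge)
            return True
        except InvalidSignature:
            continue
    return False

challenge = secrets.token_bytes(32)          # fresh per login, prevents replay
print(verify(challenge, prove(challenge)))   # True
```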