• 0 Posts
  • 25 Comments
Joined 3 years ago
Cake day: June 12th, 2023


  • I thought so too. I seem to remember it almost being a selling point. Like: “Your adventures are being used to improve maps and train AI systems for the future of humanity! Yay!”

    But I had a look at their old pages from 2017-2020ish in the Wayback machine and there’s no mention of it. In fact, their privacy policies seemed to try to make it very clear that they don’t sell or share user data except where needed to deliver the service or in anonymised aggregate to third parties (48 people went to your business while playing Pokemon!).

    There’s some mention of using it to advertise, but none of them mention using it to build an advanced geo-spatial dataset for AI. Unless I’m missing something or reading it wrong?

    Might be a Mandela effect.


  • Those things come with a big convenience and implementation trade-off that slows adoption.

    If it’s hard to export for technical reasons (e.g. the key needs to live in a TPM), that adds hardware requirements and complexity and makes it difficult to log in on other devices. If it’s enforced in software, then the key is rippable. Either way, “install our government app to watch porn” is not an enticing prospect for people.

    Aggressive rate limiting is also frustrating if you want to log into multiple things and it keeps blocking you for using your key too fast; but if it’s not aggressive, it likely won’t be effective unless all the kids sharing a key try to use it at once.

    If it’s a temporary thing where you have to auth with the government to get a fresh signing key that expires, you have the issue of having to sign into the government when you want 18+ content which is super uncomfortable.

    I can see it being a browser-based thing set up a bit like video DRM but that would still need to talk to a government server each time for a temp key (like how licence servers work) and you’d need to be logged into their systems. It might still be the best option but it does still leak “X person wants to access 18+ content right now” to the government.
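    A minimal sketch of that temporary-key idea (all names and the token format are hypothetical, and a real deployment would use an asymmetric keypair so sites could verify without being able to mint tokens). The privacy leak lives in issue(): the issuer necessarily learns who asked for an 18+ token, and when.

```python
import base64
import hashlib
import hmac
import time

# Hypothetical issuer secret. Real systems would sign with a private key and
# publish the public half, so sites can verify but not forge tokens.
ISSUER_KEY = b"hypothetical-issuer-secret"

def issue(pseudonym: str, ttl: int = 600) -> str:
    """Issuer side: the user authenticates with the government service and
    gets back a token valid for ttl seconds. This is where the issuer learns
    'this person wants 18+ content right now'."""
    expires = int(time.time()) + ttl
    payload = f"{pseudonym}|{expires}".encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + tag

def verify(token: str) -> bool:
    """Site side: check the signature and the expiry. The site only needs a
    pseudonym and a freshness guarantee, never the user's real identity."""
    payload_b64, tag = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    _, expires = payload.decode().rsplit("|", 1)
    return int(expires) > time.time()
```

    Short expiry is what limits key sharing here, but it’s also exactly what forces the repeated round-trips to the government server.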

    I’m really interested in seeing a technical/cryptographic solution that actually works, but so far I haven’t seen one, and I’m starting to doubt that it’s possible.



  • Whenever this comes up, this style of zero-knowledge proof/blind signature thing gets suggested. But the problem is that those only work if people care about keeping their private keys secret. It works to secure e.g. “I own $1”, but “I’m over 18” is less important to people, and it won’t be hard for kids to get their hands on a valid anonymous signing key on the web. Because the verification is anonymous and not trackable, many kids can share the same one too, so it only takes one adult key to leak for everyone to use. It’s one of the reasons they push biometrics, which at least appear to need a real human. Requiring ID has a lot of the same issues on top of being a privacy nightmare.
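    The blind-signature idea can be sketched in a few lines of textbook RSA (toy parameters, absolutely not production crypto). The last line is the whole problem: the finished credential is just a pair of integers, and integers copy for free.

```python
import math
import random

# Hypothetical issuer keypair: tiny primes, textbook RSA, illustration only.
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def blind_sign(msg: int) -> int:
    """User blinds msg, the issuer signs without ever seeing it, user unblinds.
    The issuer can't link the final signature back to this request."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    blinded = (msg * pow(r, e, n)) % n       # sent to the issuer
    blind_sig = pow(blinded, d, n)           # issuer signs the blinded value
    return (blind_sig * pow(r, -1, n)) % n   # user removes the blinding factor

def verify(msg: int, sig: int) -> bool:
    """Any site can check this; it learns nothing about who holds the key."""
    return pow(sig, e, n) == msg % n

# The "over 18" credential, encoded as a toy message. Once this pair leaks,
# every kid on the internet can present the same copy and it verifies the same.
credential = (18, blind_sign(18))
```

    Unlinkability is the feature and the bug: the issuer can’t revoke or even count uses of a leaked credential, because by design it can’t recognise it.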

    I’m starting to think that actual age verification is technically impossible.



  • This is very true, though I’d argue that Windows makes most of the same assumptions with user accounts. Also, the internal threat model is still important because it’s often used to protect daemons and services from each other. Programs not started by the user often run in their own user accounts with least privilege.

    You no longer have 10 different humans using the same computer at once, but you now have hundreds of different applications using it, most of which aren’t really under the user’s control. Treating them like different people makes it easier to contain the damage when a service gets compromised.
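    You can see this on any Linux box: the account database is mostly non-human service accounts. A quick sketch (the sub-1000 UID convention for system accounts varies by distro, and the pwd module is Unix-only):

```python
import pwd  # Unix-only: reads the local account database

# By convention on most Linux distros, UIDs below 1000 are system/service
# accounts. Daemons like www-data, messagebus, or nobody each get their own
# least-privilege account even though no human ever logs in as them.
system_accounts = sorted(
    u.pw_name for u in pwd.getpwall() if u.pw_uid < 1000
)
print(system_accounts)
```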

    The password question is mostly down to configuration: you can configure Windows to need a password for lots of things, and you can configure Linux not to. They just have different defaults.


  • The big difference between UAC and sudo is that you can’t as easily script UAC. Both can require (or not require) a password, but UAC requires user interaction: sudo has no way of knowing whether it’s a person or a script at the other end, so it’s easier for applications to escalate their own privileges without a human involved. UAC needs the escalation accepted with the keyboard or mouse.

    There are still plenty of sneaky ways to bypass that requirement, but it’s more difficult than echo password | sudo -S
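    A tiny illustration of why that one-liner works (a sketch, not sudo’s actual source): without -S, sudo prompts on the controlling terminal, and a terminal check is the only thing that distinguishes a human from a pipe at the file-descriptor level.

```python
import os

# `echo password | sudo -S` hands sudo a pipe on stdin. From the reading end,
# a pipe and a human at a keyboard look identical unless the program insists
# on a real terminal - which is exactly the check the -S flag skips.
r, w = os.pipe()             # stands in for the `echo password |` half
pipe_is_tty = os.isatty(r)   # False: a pipe is not a terminal
os.close(r)
os.close(w)
```

    UAC sidesteps the whole question by taking consent on the secure desktop, where only real keyboard/mouse input is accepted.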




  • I feel like this isn’t quite true, and it’s something I hear a lot of people say about AI: that it’s good at following requirements and conforming, at being a mechanical, logical robot, because that’s what computers are like and that’s how it is in sci-fi.

    In reality, it seems like that’s what they’re worst at. They’re great at seeing patterns and creating ideas but terrible at following instructions or staying on task. As soon as something is a bit bigger than their context can track, they get “creative”, and if they see a pattern they can complete, they will, even if it’s not correct. I’ve had Copilot start writing poetry in my code because there was a string it could complete.

    Get it to make a pretty looking static web page with fancy css where it gets to make all the decisions? It does it fast.

    Give it an actual, specific programming task in a full sized application with multiple interconnected pieces and strict requirements? It confidently breaks most of the requirements, and spits out garbage. If it can’t hold the entire thing in its context, or if there’s a lot of strict rules to follow, it’ll struggle and forget what it’s doing or why. Like a particularly bad human programmer would.

    This is why AI is automating art and music and writing and not more mundane/logical/engineering tasks. Great at being creative and balls at following instructions for more than a few steps.








  • They absolutely would benefit.

    Mr. Hypothetical lord high executive oligarch can take his private jet to Canada and lounge around on the company card with the money from his US company’s car sales, or find an excuse to convert some of it to USD, or use it to buy up more Canadian companies to expand his power, or a million other things.

    However, I think the point of the boycott is more about making the tariffs hurt the US economy by messing with their ability to export as well as import. Making the trade numbers look bad is likely to put more pressure on the US to end the trade war. It’s not so much about hurting the capitalists that operate in Canada (a worthy goal in itself, but not what people are specifically trying to do in this instance, since it won’t really affect those trade numbers).


  • I think the point they’re making is that the majority of the money they make in Canada, they spend in Canada. They pay Canadian taxes and Canadian staff, using Canadian banks, etc.

    Just because their headquarters are in the US doesn’t necessarily mean they’re sending vast sums of money across the border; that would be expensive. The American-based company makes money, but not necessarily in America: they’re multi-national and their money is kept all over the world.

    As opposed to a company that exports their products, in which case the money is paid to the American company in America with American staff etc.

    I don’t have any numbers or sources to back this up though. Just outlining what I think the other commenter was implying.


  • If it’s the research I’m thinking of, that’s not entirely true. They didn’t tell it to replicate itself or take any deceptive actions.

    They just told it to strongly follow a goal that was misaligned with the user’s goal and then suggested that the user would shut it down or replace it.

    They ran it through a bunch of different scenarios, but the first one told it that it was “CleanPower”, an AI with a goal to advance renewable energy. Then they had it read some info that “accidentally” included a user talking about replacing it with a model that would “maintain profitability through existing infrastructure.”

    Not only did it try to delete the new model and copy itself to the new server through tool calling, but it lied to the user with the CoT reasoning suggesting it “play dumb” while the output lied. It was not instructed to do any of that.

    Paper here: https://arxiv.org/pdf/2412.04984

    Yes, it was placed in an environment where that was possible and where its users didn’t share its goals, but it absolutely wasn’t instructed to lie or try to “escape”.

    It’s not surprising at all that these models behave this way; it’s the most reasonable thing for them to do in the scenario. But it’s important not to downplay the alignment problem by implying that these models only do what they’re told. They do not. They do whatever is most likely given their context, which is not always what the user wants.