• XLE@piefed.social
    2 days ago

    Facedeer, can we at least agree it’s a bad look for Mozilla to promote a company that helped kill Iranian children and desperately wants to build weapons to kill more?

    That’s without even touching on whether your “inevitability” claim is total BS or not.

    • Bazoogle@lemmy.world
      2 days ago

      As part of our continued collaboration with Anthropic

      Anthropic is literally the one that refused to let them build autonomous weapons with its AI. There is a whole Wikipedia page about it. They explicitly don’t want their AI used for weapons. Of course, that wouldn’t stop governments and militaries from doing so anyway. It would be different if Mozilla were working with OpenAI, but of the two, Anthropic is currently the better one.

      And yes, the AI genie is out of the bottle. Just like nuclear warheads: once they were created, there was no going back.

      • XLE@piefed.social
        1 day ago

        They explicitly don’t want their AI used for weapons.

        This is a blatant lie, unsupported by your source, because they explicitly do. In Dario’s own bloodthirsty words:

        Our strong preference is to continue to serve the Department and our warfighters.

        Don’t believe and regurgitate these lies about “red lines” when they are worse than meaningless.

        Dario practically salivates with the desire to build weapons with their AI. They provided the AI for bombing Venezuelan boats, they provided the AI for killing Iranian children. Your own article says he works with Palantir. He is a child murderer and you don’t need to whitewash him.