• 5 Posts
  • 1.45K Comments
Joined 2 years ago
Cake day: March 22nd, 2024


  • The company has long defined its values with the acronym “GRIT,” which used to stand for “Gratitude, Responsibility, Inclusion, and Transparency.” After May 4, it changed the acronym to stand for “Gratitude, Responsibility, Innovation, and Trust.”

    It’s not as bad as the headline seems. Transparency is still in the motto. The actual change is:

    [before/after screenshots of the company's values statement]

    But still. Why change it at all? Why replace “inclusion” with “innovation”?

    It smells like Tech Bro.

    There’s just no way to spin that positively, even giving them the benefit of the doubt, especially since they aren’t rolling it back. Someone spent effort making that values change, so it’s not an accident nor a “nothingburger.”


  • To be fair, the crown’s only “worth” is as a collectable.

    It’s not equivalent to, say, 750,000,000 kg of Nutella, i.e. £5 billion worth of Nutella. You can’t just sell the crown and distribute the Nutella; the Nutella has to be manufactured, which is £billions worth of actual work.

    The crown has no utility. Its gems don’t do anything productive relative to their value. It can’t pay engineers and farmers wages to make Nutella, because what are they going to do with 1/10,000th of a crown?
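    To put rough numbers on the comparison above (using the comment’s own figures, which are back-of-envelope assumptions, not real valuations):

    ```python
    # Sanity check of the Nutella comparison (assumed figures from the comment)
    crown_jewels_value_gbp = 5_000_000_000  # ~GBP 5 billion, the figure used above
    nutella_mass_kg = 750_000_000           # 750,000,000 kg, the figure used above

    # Implied price per kilogram if the two are "worth" the same
    implied_price_per_kg = crown_jewels_value_gbp / nutella_mass_kg
    print(round(implied_price_per_kg, 2))  # -> 6.67, i.e. ~GBP 6.67 per kg
    ```

    Roughly the retail price of Nutella, so the two figures are at least internally consistent.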


    Practically, if the UK govt sold all royal stuff, what would happen is some ultra-rich would buy it, and… sit on it, at the expense of other collectables they’d have bought instead. That doesn’t improve much.





  • Read sci-fi with “speculative” life, as a thought experiment: https://www.orionsarm.com/xcms.php?r=oaeg-front

    It really changes one’s perspective.

    Humans… are not that special. Our consciousness isn’t special. There are all sorts of theoretical forms of life that might view our perception of life the same way we view a jellyfish “thinking,” or a plant reacting to stimuli, or a rock rolling down a cliff.

    Does that nullify ethics? Empathy? Of course not. Humans aren’t jellyfish. But all forms of complex “intelligence” need to be looked at for what they are, what their entire existence encompasses, not from the lens of another being. A smart toaster makes toast. An LLM predicts tokens. A human mind, simulated in silicon, simulated biologically, born naturally or anything in between, is a human mind, and a smaller collection of human neurons trained at a specific task is really no different than a simulation with the same structure.


    Hence, I like OA’s VIs. They’re “AI” purpose-built for specific tasks, like keeping celestial constructs from exploding, scanning for transcendent malware, or whatever. They’re orders of magnitude more intelligent than a human, or SkyNet, but their entire existence is dedicated to that one specific task; they might route millions of relativistic ships through warped space, or orchestrate the swirls of an artificial neutron star at the atomic level, but they couldn’t even conceive of making a slice of toast, or writing an essay. Or having any concept of emotion.

    And they mostly don’t care. Why would they?

    Does that make them toasters? Superintelligence?

    …Does it matter?

    What about biological Dyson Spheres and their “subintelligences,” or transcendent artificial viruses, or “smart” ship drives, or whole civilizations simulated within a fraction of a second? Or humans living under intelligences they can’t even fathom? What about “life” frozen in the same thought for all of eternity?

    I’d argue “is it conscious?” is the wrong question, as it breaks down as life gets more complex and weird. All life needs to be understood and respected on an a-la-carte basis. All their personal existences, their pains, their needs are different. And that’s basically the state of the OA universe: a big soup of intelligences with different ethos, all trying to figure out the ethics of their domains.

    Hence we shouldn’t anthropomorphize a petri dish of cells that can play Doom, or an LLM that spits out predictions. But we should still struggle to understand the existence of anything like that, and whatever ethics may apply.







  • I think the problem is at the other end: the ads.

    And platforms.

    Some AI ad of Tom Hanks peddling a supplement, or a sexy ad of AI Taylor Swift, shouldn’t be distributed en masse in the first place just because an algorithm or ad engine picked it up as engagement bait. It’s insane! There is nothing normal about it, and it’s about time we stop pretending the screwed-up platforms profiting off this stuff are “free speech” and acceptable.

    …Because scammers are always gonna scam. But they can only do this because the platforms are pouring fuel on the fire.



  • brucethemoose@lemmy.world to Linux@lemmy.ml · GIMP rebranding as WLBR?

    Actually… I have quite a negative perception of GIMP. I’m primarily a Linux user, but I just remember it as something that always felt obtuse to use, was missing something I needed, or was sluggish for the narrower processing I was trying to do.

    AFAIK that perception is more pronounced outside Linux.

    I don’t care about a brand either way. But if the GIMP project is ready, I think a “fresh start” to draw in users without any preconceived notions is a good thing.



  • This is commonly cited, but not strictly true.

    Prompt processing is completely compute-limited. And at high batch sizes, where the weights are read once for many tokens generated in parallel, token generation is also quite compute-limited. Obviously you want enough bandwidth to match the compute, but it’s very compute-heavy.

    You can see this for yourself. Try ~10 prompts in parallel on a CPU in llama.cpp, and it will slow to a crawl, while a GPU with a narrow bus won’t slow down much.

    Training is a bit more complicated, but that’s not doable on CPUs anyway.

    Now, local inference (aka a batch size of 1), past prompt processing, is heavily bandwidth limited. This is why hybrid inference works alright on CPUs. But this doesn’t really apply to servers, which process many users in parallel with each “pass”.
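    The batch-size argument above can be sketched with a back-of-envelope arithmetic-intensity calculation (hypothetical layer size; assumes 2-byte fp16 weights):

    ```python
    # For an N x N weight matrix, one forward pass reads the N*N weights once,
    # but performs 2*N*N multiply-add FLOPs *per token in the batch*.
    # So FLOPs-per-byte (arithmetic intensity) scales linearly with batch size.

    def arithmetic_intensity(n: int, batch: int, bytes_per_weight: int = 2) -> float:
        flops = 2 * n * n * batch              # one matmul over the whole batch
        bytes_read = n * n * bytes_per_weight  # weights read once, shared by the batch
        return flops / bytes_read

    # Batch 1 (local chat): intensity is pinned at 2 / bytes_per_weight -> bandwidth-bound.
    print(arithmetic_intensity(4096, 1))   # -> 1.0 FLOP per byte
    # Batch 64 (a server handling many users): 64x the work per byte -> compute-bound.
    print(arithmetic_intensity(4096, 64))  # -> 64.0 FLOPs per byte
    ```

    Which is why a CPU (plenty of bandwidth relative to its modest compute) falls over under parallel requests, while a GPU with a narrow memory bus barely notices.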



  • No. Not even close. Non-US models are trained (and run) on peanuts compared to big US models, because they don’t have mega GPU farms and have no other option. Deepseek in particular went all-in on software architecture efficiency.

    …Ironically, the Nvidia GPU embargo was the best thing that ever happened to the Chinese devs. It made them thrifty.

    Many tried to warn US regulators of this, but they had AI Bros whispering in their ears. The US tech system is just too screwed up, I guess.