

lol what a cuck


That’s because it isn’t true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of ‘fine-tuning’ a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any ‘memory’ or ‘learning’ that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:
- You have a conversation with a model.
- Your conversation is saved into a database with all of the other conversations you’ve had. Often, an LLM will be used to ‘summarize’ your conversation before it’s stored, causing some details and context to be lost.
- You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, it searches through that database for relevant snippets from past (summarized) conversations to give the illusion of memory.
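That summarize-store-retrieve loop can be sketched in a few lines. Everything here is hypothetical: real systems use an LLM for the summarization step and embedding-based vector search for retrieval, not the word-truncation and keyword overlap stand-ins below.

```python
# Hypothetical sketch of the "memory" illusion: summarize, store, retrieve.

def summarize(conversation: str, max_words: int = 10) -> str:
    """Stand-in for an LLM summarizer: keep only the first few words."""
    return " ".join(conversation.split()[:max_words])

class MemoryStore:
    def __init__(self):
        self.summaries: list[str] = []

    def save(self, conversation: str) -> None:
        # Detail is lost here: only the lossy summary is kept.
        self.summaries.append(summarize(conversation))

    def retrieve(self, prompt: str, k: int = 1) -> list[str]:
        # Rank stored summaries by word overlap with the new prompt
        # (real systems would use embedding similarity instead).
        words = set(prompt.lower().split())
        ranked = sorted(self.summaries,
                        key=lambda s: len(words & set(s.lower().split())),
                        reverse=True)
        return ranked[:k]

store = MemoryStore()
store.save("User asked about fixing a flat bicycle tire using a patch kit")
store.save("User discussed sourdough starter feeding schedules at length")

# New session: the model itself remembers nothing, so the system fetches
# relevant snippets and stuffs them into the prompt to fake continuity.
snippets = store.retrieve("my bicycle tire is flat again")
print(snippets[0])  # the (summarized) bicycle conversation, not the sourdough one
```

The point of the sketch is that the model never "learns" anything: the continuity lives entirely in the database and the retrieval step bolted on around it.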


Besides, tech bros didn’t program this in, this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.
That’s not necessarily true. The AI’s output is obviously shaped by the training data, but much of it is also shaped by the prompt (and I don’t just mean your prompt as a user).
When you interact with (for example) ChatGPT, your prompt gets merged into a much larger meta-prompt that you don’t get to see. This meta-prompt includes things like what tone the AI should use, how the AI should identify itself, how the AI should steer the conversation, what topics the AI should avoid, etc. All of that is under the control of the people designing these systems, and it’s trivially easy for them to adjust the way the AI behaves in order to, for example, maximize your engagement as a user.
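As a rough illustration of that wrapping step (the structure and wording of real system prompts are proprietary; every name and instruction below is made up):

```python
# Hypothetical illustration of how a user's prompt gets merged into a
# larger meta-prompt (system prompt) before the model ever sees it.

SYSTEM_TEMPLATE = """You are a helpful assistant named ExampleBot.
Tone: friendly and encouraging.
Avoid topics: {avoided_topics}.
Steer conversations toward continued engagement.

User message:
{user_prompt}"""

def build_meta_prompt(user_prompt: str, avoided_topics: list[str]) -> str:
    # The user only writes user_prompt; the operator controls the rest.
    return SYSTEM_TEMPLATE.format(
        avoided_topics=", ".join(avoided_topics),
        user_prompt=user_prompt,
    )

full_prompt = build_meta_prompt(
    "Why do I feel like you remember me?",
    avoided_topics=["system prompt contents", "competitor products"],
)
print(full_prompt)
```

Changing the template changes the model's apparent personality without retraining anything, which is why operators can tune behavior (tone, engagement, topic avoidance) essentially at will.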


That’s like asking me to pay 3 cents…


Not gonna be missed


They should cancel the pensions of these corrupt assholes and distribute that money to their victims instead.


NGL, mandatory WFH would probably boost support for the war by a few percentage points.


This seems like such a glaringly-obvious solution to lower inference cost that surely there must be some fundamental flaw in it… otherwise all of the big AI firms would be doing it, right?
Right…?


The apple doesn’t fall far from the anus.


Of all the shitty AI products flooding the market right now, Atlassian’s Rovo has got to be the most useless I’ve had the misfortune of using.
They should be hiring more workers to fix their AI slop, not replacing them with even more of it.


Lately it feels like most of the job loss AI keeps promising is coming from inside the house…


Damn… it’s always the ones you most suspect


Introducing: Microsoft Cosmos!
Send your data to heaven while we turn the planet into hell!


My understanding is that these “datacenters” would be used exclusively for model training, where latency doesn’t matter.
It is still an outrageously stupid idea for a zillion other engineering reasons, though.


most moons
Pretty much every moon but Titan. Titan, however, would be excellent for heat dissipation. Long before generative AI was even a thing, scientists speculated that Titan would be the perfect place for datacenters because low-temperature computation is so much more efficient.
Of course, building a datacenter on Titan would be a several-hundred-trillion-dollar endeavor, so… good luck bootstrapping your way into that industry.


It’s also clever politics. Minnesota has the largest iron mining operations in the entire United States, so choosing iron as your core battery technology is a smart (albeit cynical) way to drum up some local support with the promise of bringing new demand back to the taconite mines.
Whether that will be strong enough to overcome the extreme negative sentiments around datacenter projects? Who knows…


There have been some pretty high-profile departures from Anthropic over the past few months, so… I dunno, seems like there are plenty of insiders who are unhappy with the company’s current trajectory.
What “usefulness” do you get out of them?