

Until they started to generate slopilot labels for them. Because nothing is safe from the slop.


Reasoning is woke propaganda anyway.
It was just not good enough for “We wrote a sci-fi story”.


Well, the plots are NFTs, so it’s just a subset of it, really.
Part of the so-called “web3” bullshit: the crypto crowd’s wet dream of a unified metaverse where every shitty asset is a financial investment and video games aren’t fun unless they’re grindy AI slop with a promised ROI.


He “fell into the trap of hallucinations”. Yeah, maybe we need to stop pretending people who ask an LLM to slop their work for them are victims.
“Look, the robot found an irresistible quote, how could I not use it?!” That’s great journalism for you.


They asked copilot to slop an update for them and it tried to kill itself. Tragic, really.


Zuckerberg not caring about people’s consent is literally the core principle that led to Facebook. He must be such a proud bot daddy.


Unless it’s part of Meta Horizons I didn’t know about (which is very possible, I interacted with it for about 3 minutes), that particular metaverse was more likely some crypto/NFT bullshit.
This sounds like something à la “Decentraland”, a.k.a. cryptobros trying to sell plots in a terrible, empty virtual world because someday they’ll sell for a bazillion dollars, trust us.


Oh great, because billionaires shitting on Tolkien wasn’t enough.


Since there is still nothing actually worth calling “metaverse” as the sci-fi concept it was promised to be, they might go for something around AGI.


I don’t live near big scaly things full of teeth, so what do I know, but making it thrash about in anger would not be my first idea.


Sounds a bit like those Anthropic researchers who keep finding new ways Claude did something unexpected and scary every other week.
We don’t care whether you’re scared or amazed: TALK ABOUT IT.


So… this is still a ridiculous case, but they’re wealthy enough they aren’t too worried even if they lose it? All right.


In fact they should put random odds on whether you actually get extra power, so the customers can enjoy some surprise mechanics too.


I think those games that are made to be boring and absurdly grindy, and then offer to let you pay to skip the boring parts, are even worse. And they’re not limited to phones, either.
“Pay to not play”: when we make sure our gameplay loop is so bad that you literally think your time spent in it has negative value.


The second one is not really a way to check whether it’s AI, only whether it may be deceiving you, and the third one’s conclusion is not “yes” but “use responsibly”, as if the common person even gets to choose whether to use AI, and as if corporations aren’t the ones pushing it with no regard for impact anyway.
The problem is that those 3 questions are very vague and would need complex answers. Maybe the guy could have given them, but in any case they’re not in the article.


Theoretically in the way some particles have been theorized to maybe exist according to physical models but have never been observed.


I’ll save you a click: that article asks 3 basic questions — is it dangerous, how can you tell something is AI, and is it bad for the environment?
They get only non-answers. Thanks, BBC.


Yeah, part of the usual “it’s not bad, you’re using it wrong” arsenal. Definitely not the clever hack they think it is.
This probably has as much potential to create new errors as to find old ones. LLMs are trained to be “helpful”: if you tell one with total confidence that something is wrong, it will answer as if there is something to correct, and anything will do.
So even if it had things roughly right to begin with, it will now thank you for your “insightful” question and output some bullshit to please you.
Meh. If I am being paid to endure a bunch of OpenAI corporate bullshit, I don’t think I am going lower than $3,000 for the trouble.