

“Wait, so you’re not going to help us eliminate most of our media staff? We’re out.”


https://startrek.website/pictrs/image/594694c8-e884-4882-95a6-49164d9f4c37.png
Huh, not what I was expecting. It looks like rotoscoping, which is a cool look, but it seems unlikely that the cast would be planning to fully act out episodes in person (though maybe they are, which would be kinda cool). If they’re just going for the look without the real-life acting, that seems like a whole heap of extra effort that would bog down production. Is it possible that the final look will be significantly different?
Is someone forcing you to watch all of these movies? If so, you should probably call the police instead of whinging online.


Adam Baldwin is a major right-wing ragebaiter who was at the center of the Gamergate bullshit, even being credited with coining the term itself. Since then he’s only gotten worse.


Original cast set to return
I’d be ok if there was at least one exclusion there…


5 democrats voted against reining in this current attack. And 3 republicans voted for it. But sure, they’re both the same 🙄
“Surely I won’t end up under the bus!” - Every idiot in this circus


18% are, as per the article.


Super glad to have gotten off that shit-train last year. Still dealing with the Linux learning curve, but never going back.


This is one of those things that, awful as it is, is also a lucky break, since they are clearly incapable of providing acceptable medical service.


What a terrible fucking headline. Even if they kept a record of every single interaction on hand for the chatbot to reference (which is preposterous), the notion that it could reiterate it verbatim to a degree that is sufficient to hold up as evidence of anything is ludicrous. It’s also wildly inaccurate to the actual story. Bad job all around.


It’s uncanny.


Actual AI would be more than “just math”, but LLMs aren’t AI, so the comparison is moot.
Now we’ve built a collection of simulated neurons, at a scale close to that of the human brain, and trained it on the entirety of the human language
We are not even close to anything of the sort. We’ve got a probability machine that’s mostly decent at remixing previous collections of human language. The other two are much farther down the road (if they’re even possible) than you or the rest of the tech bros are trying to convince everyone.


Though commonly reported, Google doesn’t consider it a security problem when models make things up
To be clear, all LLMs “make things up” with every use - that’s their singular function. We need to stop imparting any level of sentience or knowledge onto these programs. At best, it’s a waste of time. At worst, it will get somebody killed.
Also, querying the program on why it fabricated something as if it won’t fabricate that answer as well is peak ignorance. “Surely it will output factual information this time!”


And yet here you are, acting like one. “Phones bad. Kids dumb. My generation’s better.”


You got that boomer energy.


Same tactic used by scammers sending “bad” messages - it’s at least partially on purpose, to single out the good marks.
It being a right-wing production company co-founded by Ben Shapiro would be the first red flag. The rest makes a lot more sense with that in mind.