

Eventually all that dry air will end up above the ocean and absorb more water to balance the system. I don’t think it’s really an issue; we weren’t getting rain clouds from the Sahara anyway.




Linux-native Dota is a bit worse than Windows Dota, to the point that I tried to run it in Proton instead (which doesn’t work). With the right launch option (-dx11) it runs fine, though. Same for Deadlock; it was almost unplayable without -dx11.


Is Jim Davis cool? I can’t really find anything, so that proves he isn’t MAGA at least.


I think this is completely missing the point when it talks about “the minutiae of art”. It’s making two claims at the same time: that art is better when you suffer for it, and that the art is good whether or not you suffered. But neither of those is relevant.
When Wyeth made Christina’s World, I don’t know if he suffered or not when painting that grass. What I do know is that he was a human with limited time and the fact that he spent so much of his time detailing every blade of grass means that he’s saying something. That The Oatmeal doesn’t draw backgrounds might be because he’s lazy, but he also doesn’t need them. These are choices we make to put effort in one part and ignore some other part.
AI doesn’t make choices. It doesn’t need to. A detailed background is exactly the same amount of work as a plain one. And so a generated picture has this evenly distributed level of detail, no focus at all. You don’t really know where to look, what’s important, what the picture is trying to say. Because it’s not saying anything. It isn’t a rat with a big butt, it’s just a cloud of noise that happens to resemble a rat with a big butt.


Atypical is pretty good, it’s a coming-of-age about an autistic teen. It managed to evade the Department of Premature Cancellations for 4 seasons and even reached a satisfying conclusion.


It’s always so funny to see someone squirm under oath. They don’t dare lie, but they’ll do anything but tell the truth. Just lie at that point dude, reciting the alphabet to avoid answering is just sad.


I don’t get Americans’ obsession with putting religion into schools. Didn’t y’all flee Europe because of religious oppression? Isn’t religious freedom one of the pillars of USAmerican society? Surely some of the Christians must realize that if the state makes “Christianity” the state religion, sooner or later that becomes a specific variety of Christianity, and it probably won’t be their specific variety. Or are there no actual Christians left, and they only care about the symbols of “Christianity” as stand-ins for the symbols of white supremacy? (Don’t tell me.)


I also have to laugh when someone takes a very rough estimate (“around a hundred miles”) and converts it to metric with 4 significant figures (160.9 km). Even 160 is too precise when we’re talking about a distance of 80-120 miles. If the original number has 1 sigfig, the conversion should too (here: 200 km), even if that feels way off.
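To make the sigfig point concrete, here’s a quick sketch of rounding a conversion to match the input’s precision (`round_sigfigs` is just an illustrative helper name, not from any library):

```python
import math

def round_sigfigs(x, sig):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    # Position of the leading digit decides how many decimals to keep.
    return round(x, sig - int(math.floor(math.log10(abs(x)))) - 1)

MILES_TO_KM = 1.609344

# "around a hundred miles"
print(round_sigfigs(100 * MILES_TO_KM, 4))  # 160.9 -- false precision
print(round_sigfigs(100 * MILES_TO_KM, 1))  # 200.0 -- honest about the input's 1 sigfig
```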


Neither LLMs nor ICMs (image classification models) are AI in any sense of the word, is my point. LLMs happen to give the illusion of intelligence because of their language-based nature, but they’re not fundamentally different from ICMs.


An image classification model isn’t really “AI” the way it’s marketed right now. If Google used an image classification model to give you holiday recommendations or answer general questions, everyone would immediately recognize that it’s being misused. But use a token prediction model for purposes totally unrelated to predicting the next token and people are like “ChatGPT is my friend who tells me what to put on pizza and there’s nothing strange about that”.


Google Lens already did that though, all you need is decent OCR and an image classification model (which is a precursor to the current “AI” hype, but actually useful).


I just reply to things that show up in my feed, honestly, pretty randomly
No one asked you to, and in fact they temp-banned you for it, so try to read the room next time


Why the hell would you go on a book community, open a discussion post that is clearly asking for personal opinions and go “I dunno lol go ask chatgpt”. I would’ve banned you too tbqh.


This is probably because of a lack of training data: the model is referencing only one example, and that example just had a mistake in it.
That one example could be flawless, but the output of an LLM is influenced by all of its input. 99.999% of that input is irrelevant to your situation, so of course it’s going to degrade the output.
What you (and everyone else) need is a good search engine to find the needle in the haystack of human knowledge; you don’t need that haystack ground down to dust to give you a needle-shaped piece of crap with slightly more iron than average.


So when I download only some files from a torrent, it’s likely that I can’t seed all of those files to the next person? I have done partial leeches before and left them seeding under the impression that I could at least seed exactly those files if anyone else wanted them. If that’s impossible (or at least unlikely to work because of chunking), then I might download the whole thing next time (or just leave the swarm).
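For anyone wondering why chunking causes this, here’s a toy sketch (file names, sizes, and piece size are all made up) of how fixed-size pieces ignore file boundaries:

```python
# Torrent pieces are fixed-size and laid out over the concatenated files,
# so a piece can straddle two files. If you skipped a neighboring file,
# your copy of the boundary piece may be incomplete, and an incomplete
# piece can't be seeded. (All sizes below are hypothetical.)

PIECE_SIZE = 256 * 1024  # 256 KiB, a common piece size

# (name, length) in the torrent's file order
files = [("skipped.bin", 1_000_000), ("wanted.bin", 2_000_000)]

offset = 0
for name, length in files:
    first = offset // PIECE_SIZE                 # first piece touching this file
    last = (offset + length - 1) // PIECE_SIZE   # last piece touching this file
    print(f"{name}: pieces {first}..{last}")
    offset += length
# skipped.bin: pieces 0..3
# wanted.bin: pieces 3..11
# Piece 3 belongs to both files: seeding it requires bytes from each.
```

In practice many clients quietly download the overlapping bytes of a skipped file just to complete the boundary pieces, which is why partial seeding often still works.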


Telling people your age in response to a boomer joke is such a xoomer thing to do


Torrents are already very hard to block. You don’t actually need a tracker, because all modern torrent clients support DHT (distributed hash table). You only need some way to get the initial hash for a torrent, so that’s where trackers are still useful, but once you’re connected to the swarm, you can only be blocked if the entire swarm is blocked.
Tracking, though… It’s too easy to get the IP addresses of the entire swarm, and I don’t see how you could ever fix that. Tor doesn’t really solve that issue either; it just moves it to places where you won’t get in legal trouble or to people who don’t mind getting in legal trouble, a bit like VPN providers.
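As an aside on “the initial hash”: it’s nothing magic, just the SHA-1 of the torrent’s bencoded “info” dictionary (the same infohash you see in magnet links and announce to the DHT). A rough stdlib-only sketch (the naive search for `4:info` is good enough for illustration, not for real-world torrents):

```python
import hashlib

def bdecode(data, i=0):
    """Decode one bencoded value starting at index i; return (value, next_index)."""
    c = data[i:i+1]
    if c == b"i":                        # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i+1:end]), end + 1
    if c == b"l":                        # list: l<items>e
        i += 1
        items = []
        while data[i:i+1] != b"e":
            v, i = bdecode(data, i)
            items.append(v)
        return items, i + 1
    if c == b"d":                        # dict: d<key><value>...e
        i += 1
        d = {}
        while data[i:i+1] != b"e":
            k, i = bdecode(data, i)
            v, i = bdecode(data, i)
            d[k] = v
        return d, i + 1
    colon = data.index(b":", i)          # string: <length>:<bytes>
    n = int(data[i:colon])
    return data[colon+1:colon+1+n], colon + 1 + n

def infohash(torrent_bytes):
    """SHA-1 over the raw bencoded bytes of the top-level "info" dict."""
    # Naively locate the info key; a real client tracks byte offsets while parsing.
    start = torrent_bytes.index(b"4:info") + len(b"4:info")
    _, end = bdecode(torrent_bytes, start)
    return hashlib.sha1(torrent_bytes[start:end]).hexdigest()
```

Once a peer has that 20-byte hash, the DHT can find the swarm for it without any tracker.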


Available image generators are already capable of generating those images, and they weren’t even trained on them. Once a neural network can detect/generate two separate concepts, it can detect/generate the overlap. It won’t be as fine-tuned, obviously, but it can still turn out scarily accurate.
It is a cute shirt and we could easily make it a reality: