*With AI review :)


I kept up with the drama until about a week ago, so what I’m saying here is the status from back then. Please add context if I’m missing any new developments:
From what it appeared, view counts dropped but ad revenue stayed the same. Even before this whole thing, YouTube paid out for ads watched (and clicked). Payout has not depended on raw view count for a long time, if ever.
This suspicious behavior of view counts dropping while ad revenue stayed the same is actually what tipped people off that the issue was adblock related. The fact that channels focused on a younger audience saw less of a drop also helped.
Now those dropping view counts could still have an indirect, negative effect on ad revenue if they, e.g., automatically lead to YouTube recommending the affected videos less prominently.
Another European here to chime in that I also learned to write capital As like that in cursive.
The rs, fs and ts don’t look like how we were taught though.


I’ve been to multiple museums in Japan (which is somewhat relevant because Nintendo is Japanese) that either flat out ban all photography (e.g. Ghibli Museum, Aomori Museum of Modern Art) or have some exhibits you’re not allowed to photograph (e.g. Tokyo National Museum). One exhibit I wanted to photograph had a “no photography” sticker on it, but it was on the opposite side from where I approached, so I didn’t see it; staff ran up to point out the sign the moment I pulled out my phone.
I’ve also heard from other tourists that “no photos” seems to be rather common there.
Btw, I’m not saying that they’re justified at all, just that there are indeed places that forbid photos for copyright reasons. In my opinion, no photo could ever match seeing the exhibits in person, so banning them is entirely pointless. Even professional, official scans of pieces don’t come close.


Even funnier: this news article is so well written/edited that the headline doesn’t even state that he ignored the flaws; it clearly states that he ignored the sub itself. Truly the pinnacle of journalism.


On the second part: that is only half true. Yes, there are LLMs out there that search the internet, summarize what they find, and reference some of the websites.
However, it is not rare for them to add their own “info” that isn’t in the cited source at all. If you use an LLM to find sources and then read those instead, sure. But the output of the LLM itself should still be taken with a HUGE grain of salt and not be relied on at all for anything critical, even if it comes with a nice citation.


I don’t think it’s more crime because of more tension. It’s instead a self-fulfilling prophecy: who do you think detects and records crime if not the police? More police in an area therefore increases the number of crime data points in that area.


One field it impacts is radio astronomy. We can already see Musk’s satellites mess with it (unintentionally) and it’s probably only going to get worse from here.


Re LLM summaries: I’ve noticed that too. For some of my classes shortly after the ChatGPT boom we were allowed to bring along summaries. I tried to feed it input text and told it to break it down into a sentence or two. Often it would just give a short summary about that topic but not actually use the concepts described in the original text.
Also, minor nitpick, but be wary of the term “accuracy”. It is a terrible metric for most use cases, and when a company advertises their AI as having high accuracy, they’re likely hiding something. For example, say we want to develop a model that detects cancer in medical images. If our test set consists of 1% cancer images and 99% normal tissue, 99% accuracy is trivially achieved by a model that just predicts “no cancer” every time. A lot of the more interesting problems have class imbalances far worse than this one, too.
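To make the pitfall concrete, here’s a minimal Python sketch (with made-up numbers: 1,000 samples, 1% positive class) showing how an always-“no cancer” model scores 99% accuracy while catching zero actual cases:

```python
# Test set mirroring the example above: 1% "cancer", 99% "normal".
labels = ["cancer"] * 10 + ["normal"] * 990
# Degenerate model that always predicts the majority class.
predictions = ["normal"] * len(labels)

# Accuracy: fraction of all predictions that match the label.
correct = sum(p == y for p, y in zip(predictions, labels))
accuracy = correct / len(labels)

# Recall on the class we actually care about exposes the problem.
true_positives = sum(p == "cancer" and y == "cancer"
                     for p, y in zip(predictions, labels))
recall = true_positives / labels.count("cancer")

print(f"accuracy: {accuracy:.2f}, recall: {recall:.2f}")
# → accuracy: 0.99, recall: 0.00
```

This is why class-aware metrics like recall, precision, or F1 are the ones to ask about whenever a dataset is imbalanced.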


AI can be good but I’d argue letting an LLM autonomously write a paper is not one of the ways. The risk of it writing factually wrong things is just too great.
To give you an example from astronomy: AI can help filter out “uninteresting” data, which encompasses a large majority of data coming in. It can also help by removing noise from imaging and by drastically speeding up lengthy physical simulations, at the cost of some accuracy.
None of those use cases use LLMs though.


I wanna add to what other users already answered that this problem is not created by federation, only exacerbated.
If I’m a mod of a community and I ban your Lost_My_Mind@lemmy.world account, I cannot stop you from creating, e.g., Lost_My_M1nd@lemmy.world and coming back. Most servers have some barriers against spam account creation in place, but I’d wager you could easily create a handful of accounts on a server before those barriers start to bite.
Even completely centralized platforms such as Twitter and Reddit have the same problem: you can easily evade a ban/block a couple of times in any given timeframe.


Which makes sense when explained, but it seems like few people hear that kind of comparison.
And then you bring up defederation and/or how instances can die at any time and you lose them again…
At least that’s how it usually goes for me and trying to advertise Lemmy. Not really a fan of “microblogging” to begin with no matter the platform.


People who claim “guys” is gender neutral would most often only count men when asked the question “How many guys did you sleep with in your life?”
Until I find a single person who immediately thinks of people of any gender at that question, I will not fall for the internalized misogyny of the “‘guys’ is gender neutral” meme. (Same with “dudes” and all the other ones I’ve seen over the years. I’ve even seen someone claim “bro” is gender neutral.)




That data is also publicly available (of course), so a model could be trained on it. I’d love to say I’d doubt Google/YouTube would ever do that, but at this point nothing would surprise me.


Usually, if there’s a scam, someone’s making money off it. This is them. They want to keep making money.


I trained the generative models all from scratch. Pretrained models are not that helpful when it’s important to accurately capture very domain specific features.
One of the classifiers I tried was based on zoobot with a custom head. Assuming the publications around zoobot are truthful, it was trained exclusively on similar data from a multitude of different sky surveys.


Does it? I worked on training a classifier and a generative model on freely available galaxy images taken by Hubble and labelled in a citizen science approach. Where’s the theft?


Even within Swiss German itself, the people in the Canton of Valais speak such a strong dialect (actually a group of dialects) that most other Swiss German speakers don’t understand them.