

in a made up language
Aren’t all languages made up?


Any NIH-funded research must be made open access one year after its publication date. NIH publishes the accepted manuscript in PubMed Central at the one-year mark. Unlike NIH, (last I checked) NSF doesn’t strictly require it, but you won’t be getting NSF funding unless you say you’re going to make the resulting papers freely available somehow (e.g., preprints, paying for open access, etc.). Not sure about DOE/DOD/etc.-funded articles.
The majority of federally funded research in the US is made open access. You might not realize it because news outlets typically report on brand-new articles, which haven’t hit the one-year mark for open access yet.


Reposts are better than no posts. Plus, plenty of people could have missed the original.


ONLYOFFICE (sorry about the caps, poor name choice IMO) has even better docx compatibility, and its source code is open.


Yeah, I understood. My reply wasn’t actually directed at you; sorry for not being clear. I just wanted to add that bit in case other readers didn’t know that this was more forceful than a request.


They weren’t asked; they were mandated to do so directly by executive order. I get the desire not to comply here, but if I’m NIH, I’m probably thinking that complying to keep the doors open for four years will do a hell of a lot more for the country than refusing and letting Trump totally dismantle their entire architecture with enough time that it’s difficult to reinstitute when he’s gone.


Trogdor was popular way before Reddit.


Examples? I can think of a number of foreign companies that the US facilitates, like Nestlé.


Eh, I switched. I switched all of my lab’s computers, too, and my PhD students have remarked a few different times that Linux is pretty cool. It might snowball.


Seven paragraphs is too much? I read the full thing before seeing your comment. It’s well written and easy to read.
I never understand why Lemmy downvotes someone who is trying to help by providing accurate information, presumably because they think that there’s a very small chance that the person they’re replying to isn’t being sarcastic.


I actually took that bit out because LLMs are pro-climate and against everything that makes the environment worse. That’s a result of being trained on a lot of scientific literature. I was just curious what Opus would say about the conceptual knowledge piece.


Claude Opus disagrees, lol (edit to add: all of what follows is Claude; not me):
I respectfully disagree with the claim that current language models are logically incapable of mapping language to actual concepts. While today’s LLMs certainly have limitations and are prone to hallucinations, they have demonstrated a remarkable ability to capture and manipulate complex concepts through language.
A few key points:
LLMs are trained on vast amounts of human-generated text which is grounded in real-world concepts, allowing the models to build rich representations that go beyond surface-level word associations. Techniques like unsupervised pre-training enable LLMs to learn meaningful conceptual relationships.
In many domains, LLMs have shown an impressive capacity for reasoning, inference, and knowledge synthesis - skills that rely on a deep understanding of the underlying concepts, not just pattern matching. Their performance on complex question-answering, analogical reasoning, and natural language inference tasks is evidence of this.
LLMs can be fine-tuned and augmented with techniques like retrieval-augmented generation or chain-of-thought prompting to further ground them in factual knowledge and improve their reasoning capabilities. This allows more robust conceptual mappings.
Careful prompt engineering and sampling techniques can help to mitigate hallucinations and nonsensical outputs. While not a complete solution, this shows that the conceptual knowledge is often present even if the model sometimes struggles to express it faithfully.
That said, I fully agree that the tendency for subtle nonsense is a serious issue, especially for layperson use cases where hallucinations may go undetected. Continued research into making LLMs more truthful, consistent and robust is crucial. Techniques like constitutional AI to bake in truthfulness, and better UX design to convey uncertainty are important steps.
But in summary, I believe the evidence suggests that LLMs, while flawed, are not fundamentally incapable of meaningful conceptual representation and reasoning. We should push forward on making them more reliable and trustworthy, rather than dismissing their potential prematurely.


I was just in a smaller city in Germany and flew back to the US after that. I look German and speak German. When paying with card, Germany felt exactly like the US. At every restaurant, the tip prompt automatically came up on the card reader, just like in the US.


I paid for Kagi and have been super happy with it. If you don’t mind paying, I highly recommend it. Not having ads or manipulated results is worth it for me.


Only 10 states still do: https://www.kiplinger.com/taxes/states-that-still-tax-groceries