I was excited about the idea of purpose-built systems trained on specific datasets to help find complex patterns to diagnose diseases or suggest potential molecules for specific purposes.
Then the LLM shit started and everyone started fantasizing about intelligent “AI” just because it was able to reproduce patterns of language that seem relevant to a given input. Some of those funding it kept chasing that dream and are convinced that, if they just throw more compute at the problem, they can evolve the renaissance AGI that can do anything. Then they can fire every worker and be bazillionaires with robot slaves and never have to work another day of their lives… and fuck everyone and everything else.
It’s amazing what we can ruin when we let greed and selfishness drive our society.
At 1 million I could already stop working and live a decent life :/ I really don’t get why, past 1 billion, they continue to search for more.
They actually have a disorder or disease. However, in this case their disorder is destroying the rest of the world. There’s a fast-approaching point at which the world organism will self-heal to prevent its own death.
It’s a sickness
Maybe it’s because I’ve only ever had at most a comfortable income, but I truly don’t understand the mentality of needing so much money.
I don’t get paid as much as my peers, but I make enough to be comfortable. I am my own department and, aside from emergencies and other high-priority situations, I manage myself and choose what to work on when. I have a decent work/life balance. Because I make enough to be comfortable (in large part because my landlord promised not to raise our rent - early in the COVID lockdown - if we were “good tenants” and has kept true to her word), I don’t feel the need for more. That balance is worth not making the 20% more a year I might get somewhere else, because I can’t guarantee I wouldn’t end up with a shitty boss who doesn’t let me have that work/life balance.
They’ve been fantasizing about that ever since “computers” started becoming widely accessible - in the 1960s…
The current crop is just the first time such things have been delivered with something resembling “average” human responses.
Fantasizing wasn’t the best choice of words - I often understate what I mean to communicate in an attempt at humor. I should have said “everyone started becoming so obsessed with intelligent ‘AI’ that they’re willing to dump a significant portion of the world’s resources just because…”
That’s more or less what I meant by “patterns of language that seem relevant to a given input”. I was attempting to understate this in order to exaggerate the villainous eagerness and stupidity of greedy, rich fucks.
The LLM craze is a natural maturation point of the AI field, though, and it has now expanded into foundational models (FMs), which you would still probably just call LLMs because most people don’t know the difference. FMs are getting close to that point of a magical universal computer that you can tell to do anything about anything and it just works. There are specific FM applications, like FMs for earth science or remote sensing (which I work in), but the big money from this technofascist elite is pushing FMs for everything, along with Agentic AI, the end state that replaces pesky human workers altogether. They seek the ultimate triumph of Capital over Labor.
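To make the LLM-vs-FM distinction a bit more concrete, here is a rough toy sketch - assuming the Hugging Face transformers library and two publicly available checkpoints picked purely for illustration, none of which anyone here has actually mentioned - of a model fine-tuned for exactly one narrow task next to a general model that is steered only by a prompt:

```python
# Toy illustration only - the model names are examples, not anything discussed in this thread.
from transformers import pipeline

# Purpose-built model: fine-tuned to do one narrow thing (sentiment labels) and nothing else.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("The new dataset looks promising."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

# General-purpose model: a plain text generator that gets "repurposed" for the same task
# only by how the prompt is worded - the task lives in the language, not in the weights.
generator = pipeline("text-generation", model="gpt2")
prompt = (
    "Label the sentiment of the sentence as POSITIVE or NEGATIVE.\n"
    "Sentence: The new dataset looks promising.\n"
    "Sentiment:"
)
print(generator(prompt, max_new_tokens=3)[0]["generated_text"])
```

The second pattern only starts to look like that “do anything about anything” machine once the model behind the prompt is enormous, which is exactly where the appetite for ever more compute comes from.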
There are competing incentives driving the industry, but by far the strongest one comes from whoever has the most money, and those who have the most money are the worst possible people, who should have no say in how anything works. Scary times we’re in.
The LLM craze is a natural maturation point of the AI field
I don’t see why that is. Using ML to generate models that accurately perform specific tasks is orders of magnitude away from attempting to feed the entirety of human text into ML and expecting superhuman intelligence to emerge.
it has now expanded into foundational models (FMs), which you would still probably just call LLMs because most people don’t know the difference
While ML and “AI” aren’t my field, I’m fairly certain that what I was attempting to describe in layman’s terms in my literal first sentence were these foundational models you are referring to.
FMs are getting close to that point of a magical universal computer that you can tell to do anything about anything and it just works.
I have no direct experience outside of LLMs, and I don’t really take issue with what I understand FMs to be, so long as they keep their scope narrow and focus on accurately completing specific tasks to assist humans. As soon as we hand off control and trust them blindly - without extensive trials ensuring their reliability and without failsafes in place to catch inaccuracies - I start raising concerns.
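To be clear about the kind of failsafe I have in mind, here is a minimal sketch - purely hypothetical names and thresholds, not any real system - where a model’s answer is only acted on when it clears a confidence bar and falls inside a narrow, expected set of outputs, and everything else is routed to a human:

```python
# Toy sketch of a failsafe around a model's output - hypothetical names throughout.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # 0.0 - 1.0, as reported by whatever model is used

def review_queue(item):
    """Placeholder: hand the item to a human reviewer."""
    print(f"Routing to human review: {item!r}")

def apply_with_failsafe(item, predict, threshold=0.95):
    """Use the model's answer only when it clears the bar; otherwise escalate."""
    pred = predict(item)
    if pred.confidence >= threshold and pred.label in {"approve", "reject"}:
        return pred.label          # narrow, well-understood action space
    review_queue(item)             # everything else is caught by a human
    return "needs_review"

# Example with a stub standing in for the real model.
print(apply_with_failsafe("claim #1234", lambda _: Prediction("approve", 0.99)))
print(apply_with_failsafe("claim #5678", lambda _: Prediction("approve", 0.60)))
```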
My only experience is with LLMs - a few minor attempts to “test the waters” of the major, publicly available models. I’ve been frustrated with my search results and glanced at the AI results. Work gave us Gemini licenses, and I used it in similarly desperate situations for coding help and help with Google products, foolishly thinking that if any LLM were going to be passably useful for such tasks, it would be the one from the company that owns the products I needed help with. Unless something has changed drastically in the last month or so, every interaction has been a roll of the dice - to such an extent that my occasional “testing the waters” caused me to jump out and avoid it as much as possible. I simply can’t trust it not to hallucinate and gaslight me.
What I see as the problem is moving way, way, way too quickly in trusting language models to do anything even remotely important. Human communication is extremely nuanced, complicated, fluid, and imperfect. Humans misunderstand each other during communication even when we have the context of in-person visual/audible cues and interpersonal history.