• MoogleMaestro@lemmy.zip · 28 days ago

      It’s definitely financially motivated. Linus said himself that AI has been very lucrative for Linux, as it has expanded investment from companies that normally wouldn’t give a fuck (he name-dropped NVidia specifically) in that one LTT video.

    • Horsey@lemmy.world · 28 days ago

      Saying no to code just because it was AI generated is like saying you can’t trust Excel to be your bookkeeper. It’s a tool, and what happened here is exactly a case of the person using the tool being at fault.

        • Feyd@programming.dev · 28 days ago

          You can actually set it up to give the same outputs for the same inputs (temperature = 0). The variability is on purpose.
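A minimal sketch of what that setting means in practice: at temperature 0, sampling degenerates into greedy argmax over the logits, so a fixed logits vector always yields the same token. (The function and values below are hypothetical toys, not any particular model’s API.)

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a next-token index from raw logits (toy decoder).

    temperature == 0 degenerates to greedy argmax, which is fully
    deterministic for a fixed logits vector; any temperature > 0
    samples from the softmax distribution, so repeated runs can differ.
    """
    if temperature == 0:
        return max(range(len(logits)), key=logits.__getitem__)
    weights = [math.exp(l / temperature) for l in logits]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [1.0, 3.5, 2.2]
rng = random.Random()
# Greedy decoding always picks token 1 for these logits:
assert all(sample_token(logits, 0, rng) == 1 for _ in range(100))
```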

          • EzTerry@lemmy.zip · 28 days ago

            You can, and that will cause the same output on the same input if there is no variation in floating-point rounding errors. (True if the exact same code is running, but when optimizing it’s easy to hit a round-up/round-down difference, and if the token probabilities are very close the output will diverge.)
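The rounding point can be made concrete: floating-point addition is not associative, so a different reduction order (from a different kernel or optimization) can shift a logit by one ulp, and a near-tied rival token then wins or loses on that bit alone. A hypothetical sketch:

```python
import math

# Floating-point addition is not associative: the reduction order
# changes how the last bit rounds.
fwd = (0.1 + 0.2) + 0.3   # 0.6000000000000001
rev = 0.1 + (0.2 + 0.3)   # 0.6
assert fwd != rev

def argmax(xs):
    # max() keeps the earliest index on exact ties
    return max(range(len(xs)), key=xs.__getitem__)

# If a rival token's logit lands exactly one ulp above 0.6, the
# summation order alone flips which token greedy decoding picks.
rival = math.nextafter(0.6, 1.0)  # smallest float above 0.6
assert argmax([fwd, rival]) == 0  # exact tie -> first index wins
assert argmax([rev, rival]) == 1  # rival is now strictly larger
```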

            The point the people (or LLMs arguing against LLMs) miss is that the world is not deterministic, and humans are not deterministic (at least in any practical way at the human scale). If a system is deterministic, you should indeed not use an LLM… Its power is how it provides answers with messy data… If you need repeatability, write a script / code etc.

            (Note: I do think that if the output is for human use, it’s important a human validates it’s useful… LLMs can help brainstorm, and with some tests can manage a surprising amount of code, but if you don’t validate and test the code it will be slop, and it may work for one test but not for a generic user.)

            • Feyd@programming.dev · edited · 28 days ago

              > You can, at that will cause the same output on the same input if there is no variation in floating point rounding errors. (True if the same code is running but easy when optimizing to hit a round up/down and if the tokens are very close the output will diverge)

              There are more aspects to the randomness such as race conditions and intentionally nondeterministic tiebreaking when tokens have the same probability, apparently.

              I actually think LLMs are ill-suited for the vast majority of things people are currently using them for, and there are obviously the ethical problems with data centers bringing new fossil-fuel power sources online, but the technology is interesting in and of itself.

                • Feyd@programming.dev · 28 days ago
                  1. Floating point math is deterministic.
                  2. Systems don’t have to be programmed with race conditions. That is not a fundamental aspect of an LLM, but a design decision.
                  3. Systems don’t have to be programmed to tie break with random methods. That is not a fundamental aspect of an LLM, but a design decision.

                  This is not hard stuff to understand if you understand computing.
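Point 3 above is easy to demonstrate: random tie-breaking is a choice, not a requirement. A hypothetical sketch of a greedy decoder that resolves exact ties to the lowest token index, keeping the whole step deterministic:

```python
def greedy_pick(logits):
    """Greedy decoding with a deterministic tie-break (lowest index).

    Hypothetical sketch: nothing about LLM inference forces random
    tie-breaking; choosing the smallest token id among tied maxima
    is a design decision that keeps decoding fully deterministic.
    """
    best = 0
    for i, value in enumerate(logits):
        if value > logits[best]:  # strict '>' keeps the earliest tie
            best = i
    return best

# An exact tie between tokens 1 and 2 always resolves to token 1:
assert greedy_pick([1.0, 2.0, 2.0]) == 1
```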

            • Feyd@programming.dev · 28 days ago

              > You also have to run the model with the input to determine what the output will be, no way to determine it BEFORE running. With a deterministic system, if you know the code you can predict the output with 100% accuracy without ever running it.

              This is not the definition of determinism. You are adding qualifications.

              I did look it up, and I see now there are other factors that aren’t under your control if you’re using a remote system, so I’ll amend my statement: you can have deterministic inference systems, but the big ones most people use cannot be configured to be deterministic by the user.

                • Feyd@programming.dev · edited · 28 days ago

                  > Deterministic systems are always predictable, even if you never ran the system. Can you determine the output of an LLM with zero temperature without ever having run it?
                  >
                  > You don’t have to understand a deterministic system for it to be deterministic. You are making that up.
                  >
                  > And even disregarding the above, no, they are still NOT deterministic systems

                  I conceded that setting temperature to 0 for an arbitrary system (including all the remote ones most people are using) does not make it deterministic, after reading about other factors that influence inference in these systems. That does not mean there are no deterministic implementations of LLM inference, and repeating yourself with NO additional information and using CAPS does NOT make you more CORRECT lol.

    • michaelmrose@lemmy.world · 28 days ago

      Unlike brilliant people like you who have created nothing one millionth the importance of Linux

        • michaelmrose@lemmy.world · 28 days ago

          Yes. The dude who created one of the most useful projects in software history, in large part because of pragmatic decision making, makes a pragmatic decision, and Joe Rando says “Must be in the pockets of big AI!” because he can’t grasp any singular aspect of a complex issue. He can’t even hold a tiny number of things in his head; he just vomits crap over the internet. That person needs to spend a lot more time reading and thinking and less time typing.