• Sandbar_Trekker@lemmy.today
    16 hours ago

    Technically, you can get the same answer twice from an LLM, but only when you control the full input. When a model generates text, its sampler picks each token using a random seed. If you run the model locally, you can fix that seed (and the other sampling settings) so that a given question always produces the same answer.
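A toy sketch of the idea (not any real runner's API): the `sample_tokens` function and its tiny vocabulary are hypothetical stand-ins for an LLM's sampler, but the principle is the same one behind seed flags in local runners — same prompt plus same seed means the same pseudo-random draws, hence the same output.

```python
import random

def sample_tokens(prompt: str, seed: int, n: int = 5) -> list[str]:
    # Stand-in for an LLM sampler: draws "tokens" pseudo-randomly.
    # A per-call RNG seeded identically replays the exact same sequence.
    rng = random.Random(seed)
    vocab = ["the", "cat", "sat", "on", "mat"]
    return [rng.choice(vocab) for _ in range(n)]

# Same prompt + same seed -> byte-identical output on every run.
assert sample_tokens("why?", seed=42) == sample_tokens("why?", seed=42)
```

This only holds when you control the whole pipeline; a hosted service that doesn't expose (or honor) the seed gives you no such guarantee.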

    • Eager Eagle@lemmy.world
      13 hours ago

      Barely. Even with the code and the seeds, it’s still a struggle. There are plenty of questions from people running PyTorch and TensorFlow models who can’t reproduce their own results: fixing the seeds doesn’t pin down everything, because parallel GPU kernels can execute floating-point operations in a different order between runs. Maybe you isolate enough variables that consecutive runs actually produce the same output, but the study is about commercial models, and you’ll never get deterministic output from those.
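The floating-point point above can be shown without any GPU at all: float addition is not associative, so two summation orders of the very same numbers (as happens when a parallel reduction schedules differently between runs) can give different results. The values here are contrived to make the drift obvious.

```python
# Summing the SAME four numbers in two different orders.
# A parallel reduction (e.g. on a GPU) may use either order run-to-run.
vals = [0.1, 1e16, -1e16, 0.1]

# Strict left-to-right: the small 0.1 is absorbed by the huge 1e16.
left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]

# Pairwise grouping: the small terms combine before the huge ones cancel.
pairwise = (vals[0] + vals[3]) + (vals[1] + vals[2])

print(left_to_right, pairwise)  # the two "identical" sums disagree
```

With realistic magnitudes the discrepancy is tiny, but once it feeds into millions of multiply-accumulates and a final argmax over token logits, runs can diverge even when every seed is fixed.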