• 0 Posts
  • 53 Comments
Joined 1 year ago
Cake day: November 8th, 2024


  • First of all, they are not FOSS. I know it seems tangential to the discussion, but it’s important because biases cannot be reliably detected without access to the training data. You also shouldn’t trust humans to spot bias, because humans themselves are quite biased and will generally assume the LLM is behaving correctly whenever it aligns with their own biases, which can themselves be shifted in various ways over time.

    Second, local LLMs don’t have the benefit of free software, where we can modify them freely or fork them if there are problems. Sure, there’s fine-tuning, but you don’t get full control that way, and you need access to your own tuning dataset. We’d really just have the option of switching products, which doesn’t put us much further ahead than using the closed-off products available online.

    I’m all for adding them to the arsenal of tools, but they are deceptively difficult to use correctly, which makes it so hard for me to be excited about them. I hardly see anyone using these tools for the purposes they are actually good for, and the things they are good for are also deceptively limited.

  • Hold on, am I missing something? I don’t see anyone in here talking about that time Proton openly endorsed the Republican party. Did we forget about or forgive them for that? Is it just irrelevant right now? They backtracked later, but still: https://archive.ph/2yWGz

    When organizations make a move like that, they usually don’t stop pushing in that direction, even if they backtrack in response to pushback. While I’m sure they’re still better than Google, I have a hard time trusting them after that. It feels relevant to talk about because, like you said, using Proton is adding another trust point.

  • Alright, you do have a point there. Reading docs and asking questions are skills too, and if you haven’t learned them yet then ChatGPT can stunt your growth there; I agree with that much.

    I still think that ChatGPT, used correctly, can be a huge boon to your education. Knowing how to interact with those bots, working around their shortcomings instead of leaning on them as a crutch, is also a skill worth learning, I think.

  • No, learning is the part where you have to think. That’s not when you use the robot. You use the robot when the documentation is trash and unusable and every answer you find is out of date. You use the robot when you know exactly what you want to do and how to do it, and you don’t have time to trawl through the docs for the next 2 hours. You use the robot when the only GIMP 2.10 tutorial on earth for writing plugins tells you to use this funny program called gimptool, but you’re new to GIMP dev, so you look online to see what that is, only to find there’s no mention of it literally anywhere besides your current tutorial and a disjointed man page whose source you can’t find. And the devs are all on IRC, and you don’t want to bother them, and you’re worried they’ll just tell you to read the tutorial you already came from and you’ll leave empty-handed. That’s when you ask the robot. It has a use; you don’t have to substitute it for your thinking to use it.

  • Well, I’m not trying to argue against one being written, just that once it is written, it still won’t become what it was made to be.

    LinkedIn is defined by its user base of recruiters and corporations, which draws in professionals seeking jobs. It becomes a cycle, but importantly for this conversation, that cycle is controlled by the recruiters.

    This is in contrast to other social media, where regular people draw in more regular people. A FOSS LinkedIn will not only have the network effect to fight; it will also be up against the will of corporations, not just the slow buildup of users.

    I guess another way to put it: a FOSS LinkedIn cannot grow with a few users joining here and there. You have to convince Amazon to move over first.