

For sure, it’s amazing for some things. But it also appears more capable than it actually is until you become familiar with it. I think everyone new to using AI should quiz it on topics they’re knowledgeable in, to realise how much shit it makes up.
Also yeah, I’m specifically talking about LLMs because I think that’s 95%+ of AI usage right now by volume.
Maybe? But to give an example of where I think it’s been pretty cool: summarising my Dungeons & Dragons session notes, being available to answer questions, and spinning up ideas on the fly. I can take horrible, inconsistent notes with holes in them, and an LLM straightens them all out into whatever format I need. If I need a small piece of worldbuilding and ran out of time, I can get it to spit a few ideas at me; often generic ideas and tropes are actually what I’m after. If I forgot something that happened 6 months ago, I can just…ask it. It can pull up stuff I noted offhand and totally forgot about, no problem. For this sort of use, where it’s like an admin assistant and the occasional inaccuracy doesn’t matter, it’s a good tool.
Maybe that’s a really niche example, but it’s one of the few cases where I can see long-term use with zero downsides.
Ultimately it’s powerful at consolidating large volumes of information and letting the user probe that information. As long as the use case can tolerate inaccuracies and hallucinations, it’s fine.