

Oh yeh what happened to them? They added a bit of colour round here
I should probably learn how you link all this stuff up, probably not doing myself a lot of favours ignoring it.
Is the AI vision local?
This could come in pretty handy for me. What's he editing on that does this?


And not even the final one


Oh yeh that looked cool I was pretty pumped about that. Is it not so good after all?


I am become door


There is so much context I'm not party to here re: the text of this post. There's a certain random weirdness to Lemmy sometimes that's quite unique to the platform. You can be doomscrolling a bunch of news articles, memes, asklemmy questions, niche and specific posts that are constrained to a community where they make sense and have context, and then suddenly, out of nowhere, it's like you're in a room of people who are all deeply involved in some weird cult or secret mission, all speaking in a shared code they understand except you, who's suddenly just there with them eating chips asking "what are you guys talking about?"


How recent is this history? Because I remember, many years back now, something like an AMA or some other post by the CEO of Brave, promoting it ahead of its release I think, getting a pretty frosty reception on Reddit once people asked a bit more about how it was supposed to work. As time went on the reception seemed only to get worse as rumours spread of crypto scams and dodginess, and even, on a basic level, questions of "why would we use this when we could just use Firefox with an ad blocker?"
I get the impression though that Reddit has been changing.
He was, undoubtedly. But in my memory and my conception of the two characters, their interactions were always like this, with Miss Piggy the angry diva or upset girlfriend to Kermit having a chuckle. It might actually have been that that happened maybe a couple of times ever, but as a little kid you don't have the best grasp of nuance so that's the impression it formed. I also obviously had a lot less appreciation for characters having roles and functions, so even though that exchange was actually really funny and very memorable, since I still remember it even today, as a child I only took it on a very literal level. So Miss Piggy isn't a diva character that creates funny situations by being quick to anger and not tolerating anything she considers beneath her; instead she's "mean" or "fussy", you know? Obviously different as a grown-up, but it made me not like the character as a kiddo.
As a kid I always hated Miss Piggy since she was sort of the humourless foil to Kermit, but I think I can appreciate porcine Yoda a bit more now, if only for the voice.
Yeh he always seemed rock and roll!


Won’t somebody please think of the warlords!


They should replace him with a painted corrupt shit stain instead.


If the username doesn’t have to be unique, couldn’t you impersonate people?


I don't feel like LLMs are conscious and I act accordingly as though they aren't, but I do wonder about the confidence with which you can totally dismiss the notion. Assuming that they are seems like a leap, but since we don't really know exactly what consciousness is, it seems difficult to rigorously decide what does and doesn't get to be in the category. The usual means by which LLMs are explained not to be conscious, and indeed what I usually say myself, is something like your "they just output probability based on current context" or some variation of "they're just guessing the next word", but… is that definitely nothing like what we ourselves do and then call consciousness? Or, if that is definitively quite unlike anything we do, does that dissimilarity alone suffice to declare LLMs not conscious? Is ours the only possible example of consciousness, or is the process that drives the behaviour of LLMs possibly just another form of it, another way of arriving at consciousness?
There's evidently something that triggers an instinctual categorising: most wouldn't classify a rock as conscious, and would find my suggestion that "maybe it's just consciousness in another form than ours" a pretty weak way to assert that it is. But then again, there's quite a long way between a literal rock and these models running on specific rocks arranged in a particular way, producing text in a way that's really similar to the human beings we all collectively tend to agree are conscious. Is being able to summarise the mechanisms that underpin the behaviour whose output or manifestation looks like consciousness enough on its own to explain why it definitely isn't consciousness? Because what if our endeavours to understand consciousness, and to find a biological basis for it in ourselves, bear fruit and we can explain deterministically how brains and human consciousness work?
In that case, we could, if not totally predict human behaviours deterministically, then at least still give a pretty good and similar summarisation of how we produce those behaviours that look like consciousness. Would we at that point declare that human beings are not conscious either, or would we need a new basis upon which to exclude these current machine approximations of it?
I always felt that things such as the Chinese Room thought experiment didn't adequately deal with what I was driving at in the previous paragraph, and it seems to me that dismissals of machine consciousness on the grounds that LLMs are just statistical models that don't know what they're doing are missing a similar point. Are we sure that we ourselves are not mechanistically following complicated rules just as neural networks and LLMs are, and that's simply what the experience of consciousness actually is - an unconscious execution of rulesets? Before the current crop of technology that has renewed interest in these questions, when it all seemed a lot more theoretical and perennially decades off, I was comfortable with this uncomfortable thought. Now that we actually have these impressive models that have people wondering about the topic, I seem to be skewing more skeptical and less generous about ascribing consciousness. Suddenly the Chinese Room thought experiment, as a counter to whether these conscious-looking LLMs are really conscious, looks more convincing, but that's not because of any new or better understanding on my part. I seem to be just shifting the goalposts when faced with something that does a better job of looking conscious than any technology I'd seen previously.
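The "just outputting probability based on current context" mechanism I keep invoking can be sketched in a few lines. This is a toy greedy bigram model, not a real LLM (real models condition on the whole context with a neural network, and every word and probability below is invented purely for illustration), but the shape of the loop is the same: look at the context, get a distribution over next words, pick one, repeat.

```python
# Toy sketch of next-word prediction: made-up conditional probabilities
# P(next word | previous word). Nothing here resembles a trained model.
BIGRAM_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "rock": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def next_word(context):
    """Greedily pick the most probable next word given the last word,
    or None if this toy table has no continuation."""
    dist = BIGRAM_PROBS.get(context[-1])
    if not dist:
        return None
    return max(dist, key=dist.get)  # greedy decoding: argmax over the distribution

text = ["the"]
while True:
    word = next_word(text)
    if word is None:
        break
    text.append(word)
# text is now ["the", "cat", "sat", "down"]
```

Whether executing a loop like this (scaled up enormously) can ever amount to consciousness, or whether being able to write it down this compactly is itself the disproof, is exactly the question above.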


MSN Messenger for the win!
Ok so what I'm about to say does come with the caveat that I do the "prick a hole in the bottom before cooking" and "pour cold water on immediately after boiling" things, so maybe that's why it works for me, but honestly I think this will work even if you don't. Your problem is that you want to remove the membrane between the shell and the egg WITH the shell, otherwise it becomes very hard to peel in one go and it takes little chunks of cooked egg white with it at variable depths.
To do that, I smack the top and bottom of the boiled egg on a hard surface to pre-crack the shell at those two places, then I roll the whole egg very gently on its side on that hard surface to create little cracks all over. It's important not to press too hard, especially if it's softer boiled, because it'll just bisect equatorially with the shell still stuck to both halves in broken fragments. Then, the crucial bit: once you've pre-cracked the egg all over, you HAVE to start peeling at the bottom of the egg, that's the fatter end. There should be an air pocket in there that the pre-cracked shell has sort of collapsed into, but it hasn't broken off in shards because it's all held together by the membrane. If you pinch that loosened, cracked eggshell at the bottom between your thumb and index finger to gather and collect it, you can kind of pull it up and off to the side a bit, which will cleanly make a tear in the membrane, allowing you to just kinda push the rest of the shell off in one piece. Because you can just sort of ease it off, sometimes it likes to come off in a nice big chunk like a jacket, and sometimes you need to do a continuous spiral, but as long as you did that pinch technique you'll be pulling the membrane off at the same time as the shell attached to it. In so doing, you don't have to pull off little bits of shell by themselves: it's one continuous piece, and it can't take chunks of the actual cooked egg with it like what happened in your picture.