• 0 Posts
  • 293 Comments
Joined 5 months ago
Cake day: October 26th, 2025


  • Outlooks, is that what’s inside of me?

    See, I’ve been thinking a lot about this. What am I? What aren’t I? Am I composite or irreducible? These are hard fucking questions.

    I think about why I care in the first place… unlike dogs, cats, or kangaroos, whose behavior doesn’t demonstrate contemplation. It’s as though they exist in a sort of “spotlight” consciousness—aware of and responding to the spotlight of qualia in their field of awareness. Why are we different?

    Psychedelics are rather interesting because they have this profound capacity for instigating the feeling of deep insight. How is it that some mushrooms can make me feel like everything suddenly makes sense, when I have not actually learned anything during my trip?

    I get the feeling that the quality of an insight can be approximated somehow, and the brain likely uses this to make me feel the “aha” moment I know from true insights. That’s to say, insight is a feeling—and it can be triggered independently of any actual insight having occurred. Fascinating idea, no?

    How might my brain approximate the quality of an insight? Well, if I’m not full of shit about this, then I think the answer here is an architectural one. Something about the structure of concepts should, perhaps necessarily, allow related concepts to be graded by the quality of their relationship. For example, when you learn a new form of mathematics, your brain registers the strength of its connections to previously learned forms of mathematics, and that recognition can make you feel “aha.”
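    If that hunch is right, one crude way to picture it (entirely my own toy model: the concept names, link weights, and threshold are all invented for illustration) is a graph of concepts whose link strengths gate the “aha” signal:

    ```python
    # Toy sketch of the "graded relationship" idea: concepts are nodes with
    # weighted links, and an "aha" fires when a newly learned concept turns out
    # to be strongly connected to something already known. All values invented.
    AHA_THRESHOLD = 0.7  # arbitrary cutoff, purely for illustration

    concept_links = {
        ("linear_algebra", "calculus"): 0.4,
        ("linear_algebra", "group_theory"): 0.8,
    }

    def learn(new_concept: str, known: list[str]) -> bool:
        """Return True (an 'aha') if any link to prior knowledge is strong enough."""
        strengths = [
            concept_links.get((new_concept, k),
                              concept_links.get((k, new_concept), 0.0))
            for k in known
        ]
        return max(strengths, default=0.0) >= AHA_THRESHOLD

    print(learn("linear_algebra", ["calculus"]))      # weak connection: no "aha"
    print(learn("linear_algebra", ["group_theory"]))  # strong connection: "aha"
    ```

    The point of the sketch is only that “insight” here is a property of the graph’s structure, not of the new content itself—which is consistent with the feeling being triggerable independently of any real learning.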

    The “aha” feeling is tethered to my reward incentive, which helps structure my self-prescribed purpose. I want to learn, understand, grow… these are all endeavors that help mankind, because it is in mankind’s collective interest to exert control over nature. It makes sense, in this way, that I am how I am.

    But outlooks inside me? Hmm… I need to think on your theory more.



  • Preface: I agree with pretty much all of what you said.

    The other day, though, I washed my hands. I had to be careful because one of my fingers can’t get wet due to an injury. While carefully washing my hand, I noticed that I was “experiencing” wetness all over it — including on portions that were completely dry. I found this rather interesting: I was experiencing something I knew to be factually false. I wonder if the difference between processing and experiencing could have something to do with that.

    I think a lot about this stuff.

    • conscious beings seem to self-produce composite models of the world, from which the world can be effectively navigated. These models don’t have to be accurate, just useful.
    • conscious beings seem to also model themselves. This is keenly distinct from self-awareness. I’m referring to a model that helps you balance, walk, know when you’re hot or cold, …
    • conscious beings can have “concepts,” which seem to be recursive and generative. You can’t describe a concept without referring to more concepts. There is no “root” concept. Also, for some reason, it’s often easier to understand what a “concept” is by investigating what it is not.
    • conscious beings seem to be able to compartmentalize composite “concepts” into a singular, newly irreducible concept. Like if I conceptualize a combination of “banana,” “bread,” and “pudding,” I might come up with a brand new experience of “banana bread pudding.” That new experience can be referenced in its own right, and it’s not necessarily reducible back to the concepts which birthed it in the first place.
    • conscious beings seem to have a schema for their attention over qualia. They can focus on a limb, a thought, a smell, … even a combination thereof.

    I could go on and on. Sometimes I think it’s ridiculous that I can’t so easily find existing material on this stuff.

    You seem to be well versed on this topic. Can I ask what your study materials have been?




  • In fact, I’d even go as far as to claim that it’s the only thing in the entire universe that cannot be an illusion.

    Descartes would too: I doubt, therefore I think, therefore I am…

    What about a third-trimester fetus (3tf)? To me, it’s obvious and intuitive that a 3tf has an experience — as obvious and intuitive as a rock not having one. Yet there’s also something similar about them which isn’t made obvious by those two points: both a rock and a 3tf can (perhaps) be said to be having the same kind of experience.

    A 3tf would have experience that doesn’t contain meta-cognitive function (e.g., self awareness). That said, the experience of a 3tf can (again, perhaps) be modeled simply as a function like experience=fn(qualia) where qualia=nervous-system-capacity + stimuli. Effectively, it’s the structure of the being (the nervous system) being exposed to the world (stimuli). Rocks can be said to be the same, with a very “poorly functioning” nervous system. You can model a rock’s experience too, given qualia=0 for the rock.
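    The toy formula in that paragraph (experience = fn(qualia), with qualia = nervous-system capacity + stimuli, and qualia forced to 0 for a rock) can be sketched in a few lines of Python. The identity choice for fn and the zero-capacity special case are my own placeholder assumptions, not anything the framing commits to:

    ```python
    def qualia(capacity: float, stimuli: float) -> float:
        # The comment's toy formula: qualia = nervous-system capacity + stimuli.
        # A rock is modeled by forcing qualia to 0 (no nervous system at all).
        if capacity == 0:
            return 0.0
        return capacity + stimuli

    def experience(q: float) -> float:
        # experience = fn(qualia); fn is just the identity in this sketch.
        return q

    print(experience(qualia(0.0, 5.0)))  # rock exposed to stimuli: 0.0
    print(experience(qualia(2.0, 5.0)))  # a being with a nervous system: 7.0
    ```

    Trivial as it is, the sketch makes the claim concrete: under this framing, “experience” is whatever falls out of a structure meeting stimuli, and a rock is just the degenerate case.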

    From this framing, I think it starts to become more clear that we’re discussing a kind of physical process. Qualia starts to look like a name we’ve given to that particular process, and less like it’s some elusive thing which evades scientific understanding.

    I’m not entirely convinced that it feels like anything to feel something, though. I mean, I do understand feeling happy, angry, sad, even sublime. But these are categories of feeling that my very own internal processes have conjured up. How can I be sure that “feeling” something isn’t similar to the kind of illusion a heap of cells can evolutionarily succumb to when it begins to regard “itself” as separate from its environment? You wouldn’t use the sense of “self” to justify “I exist as myself, as a fact.” So why would our experience of phenomenology be different?


    The premise still strikes me as odd. How can we know it’s like anything to be anything, if we cannot know what it’s like to be anything else? This comes from the premise that, to truly understand anything, you must also understand what it is not.

    Is it really fair to presume, from our biased perspective where “likeness” is an abstract quality of “being,” that everything ought to have a manner in which it is like to be?

    What about the totality of the universe, including all its embedded agents? What would that be like? Would some vanishingly small portion of that likeness include precisely what it’s like to be me?

    Do you think it would be possible to qualitatively describe and differentiate between two distinct phenomenologies, one day? Not just behaviorally, but to actually differentiate between their internal processes — what it’s like to be them?

    And what might it be like to be a whirlpool, lightning, or even an entire ecosystem? Would that strictly be as ludicrous as asking “what might it be like to be a rock,” or is there something else to be said given whirlpools, lightning, and ecosystems are more-or-less events rather than objects?

    I don’t disagree with the argument you shared… I think there’s an obvious difference between what it’s like to be a bat versus a human, but I also feel like we’re missing something important that clearer terminology could work out.



    Isn’t it kind of eerie that you can only suppose it must be “like something” to be an insect from the very particular bias of being human? We’re projecting the idea that “it’s like something to be something [as a human]” onto the experience of other things.

    How would we describe what it’s like? Would something poetic suffice, such as “it’s like being a leaf in the wind, with a weak preference for where you blow but no memory of where you’ve been”? … but all of that is human concepts, human experience decomposed into a subset of more human experiences (really weird, the recursive nature of experience and concepts).

    I think the idea of “what it’s like…” has some interesting flaws when applied to nonhumans. It kind of presupposes that insects are lesser, in a way. As though we can conceptualize what it’s like to be them merely by understanding a stricter subset of what it’s like to be human.



  • it lacks childhood dependency and attachments.

    Isn’t general intelligence, or more broadly “consciousness,” a prerequisite to that? How would you make an unconscious machine more conscious merely by making mock scenarios that conscious beings necessarily experience?

    it struggles to overcome repeated pain and suffering

    That’s getting into phenomenology — why is pain an experience of suffering at all? How would you give it pain and suffering without having already made it AGI? We’re still missing the <current-form> -> AGI step.

    it lacks regular eating and restroom breaks

    The necessity of which is emergent from our culture and biology, as conscious social beings. We’re still missing a vital step.

    it struggles to accept loss in everyday situations

    What are “loss” and “everyday situations” if not just a way we choose to see the world, again as conscious beings?

    it lacks the concept of our inevitable death

    How do you give it a “concept” at all?

    these nagging memories and concepts

    The AI in its current form has the “memory” in some form, but perhaps not the “nagging.” What should do the “nagging” and what should be the target of the “nagging?” How do you conceptually separate the “memory” and the “nagging” from the “being” that you’re trying to create? Is it all part of the same being, or does it initialize the being?

    We’re a long way away from AGI, IMO. The exciting thing to me, though, is that I don’t think it’s possible to develop AGI without first understanding what makes N(atural)GI. Depending on how far away AGI is, we could be on the cusp of some deeply psychologically revealing shit.