• 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: November 30th, 2024

  • pcalau12i@lemmygrad.ml to Memes@lemmy.ml · What is real? · 15 days ago

    I’m going to reiterate my original claim because much of your comment misses the point. In the comment above I argued that quantum theory has interesting philosophical implications.

    You didn’t read my original comment, then, since the whole point of my reply was to demonstrate that QM does not change the situation at all when it comes to the metaphysics, i.e. it does not have philosophical implications which classical mechanics did not have.

    So when you assert materialism this is intellectual honesty, but when someone argues for an anti-materialist stance, based on observable evidence as strange as quantum entanglement (which you are quick to explain away), this is just personal metaphysics?

    I don’t know if your reading comprehension really is that poor or you are just intentionally misinterpreting what I stated.

    No, I did not claim that materialism is being “intellectually honest” here. I claimed that the ones being intellectually honest are the ones who do not pretend that quantum mechanics supports their metaphysics any more than classical physics did, and that includes materialists.

    Occam’s razor doesn’t allow us to flippantly dismiss positions we deem unintuitive.

    Sure, but Sagan’s razor does, if you present your mystical claims without a shred of evidence.

    Again, you’re familiar with the physics side but are incapable of considering alternate philosophical points of view.

    You are incapable of being intellectually honest and want to desperately pretend that quantum physics proves idealism. I at least have the intellectual honesty to not pretend quantum mechanics is relevant to such questions of metaphysics.



  • pcalau12i@lemmygrad.ml to Memes@lemmy.ml · What is real? · 15 days ago (edited)

    Furthermore, Bell-type experiments, which are a part of the broader quantum theory, display quantum entanglement such that measuring one half of the experiment decides the outcome of the other.

    That is just non-locality. It also doesn’t “decide the outcome” of the other; it is more complicated than that. Bell’s theorem is about a locally stochastic theory having to obey Reichenbachian factorization, which is the idea that a joint probability distribution between two objects should be factorizable if you condition on a common cause in their backwards light cone where they locally interacted. If you assume this, it places certain statistical bounds on what results you can expect, and those bounds are violated in practice.
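    To make those statistical bounds concrete, here is a minimal sketch (my own illustration, not something from this thread): any locally factorizable model keeps the CHSH quantity at or below 2, while the quantum singlet-state correlation E(a,b) = -cos(a-b) pushes it to 2√2.

```python
import numpy as np

# Singlet-state correlation between spin measurements at angles a and b.
def E(a, b):
    return -np.cos(a - b)

# Standard CHSH angles (in radians) that maximize the quantum violation.
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, -np.pi / 4

# Any locally factorizable (Reichenbachian) model must keep |S| <= 2.
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2), breaking the local bound of 2
```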

    If you interpret quantum mechanics as a stochastic theory without altering its mathematics, then the outcomes are just random, so nothing determines them by definition, but what one observer does in their lab does affect the kind of statistical correlations they would expect to find with another person’s lab if they later compare results. In a deterministic model that does add something, like Bohmian mechanics, the added dynamics are also contextual, so the deterministic trajectories depend upon the full experimental context. The particle’s trajectory is still ultimately determined by its initial state, but an observer changing the configuration of the measurement devices while the particle is mid-flight does alter the physical context of the experiment and thus can alter those trajectories.

    To be clear, Bernard does not promote skepticism about reality or its objectivity. But he argues convincingly that the evidence is inconsistent with materialism.

    If you have presented him accurately, then he undeniably does. You cannot claim X and then turn around saying you’re not claiming X. If there are no facts about things until you look at them, then there is no objectivity. That is literally solipsism.

    Whether you agree with Bernard is immaterial (pun intended). The larger point here is that reasonable people can disagree with materialism given the probabilistic, relational, and epistemologically problematic nature of subatomic particles.

    I don’t see what is non-materialistic about statistics. One of the most famous and influential materialists in history, Friedrich Engels, heavily criticized causality in his writings, viewing cause-and-effect as an abstraction, such that the same system could be described in a different context where what is considered the cause and what is considered the effect swap places. The physicist Dmitry Blokhintsev, the man who invented the concept of the graviton, was personally inspired by Engels’ writings and even cited them in a paper criticizing the Copenhagenists for thinking that a lack of what he called “Laplacian determinism” implies a contradiction with materialism, saying that materialists of his school had already rejected Laplacian determinism back in the 1800s.

    Again, the arguments you’re making have nothing to do with quantum mechanics at all. If they have literally no relevance to quantum mechanics, then it makes no sense to try to use quantum mechanics as an argument in your favor. One can also imagine existing in a universe where the laws of physics are classical, without quantum mechanics at all, but where systems still undergo fundamentally random perturbations. These are classical perturbations which cannot violate Bell inequalities, but they would still disallow you from tracking the definite states of particles, which could then only be tracked with a vector in configuration space that is a linear combination of basis states.

    If one wants to argue that randomness somehow contradicts materialism, then the same argument could be made in that universe, and so the argument must have nothing to do with quantum mechanics.

    These insights obviously conflict with our understanding of materialism! We cannot simply presume the truth of materialism because we find it more intuitive. At best, scientists can justify their assumption of materialism on practical grounds.

    Sagan’s razor. “Extraordinary claims require extraordinary evidence.” “Intuitive” refers to things which are blatantly obvious and self-evident and are supported by all of our observations. To deny it thus requires a much greater burden of evidence. If you want to claim everything we perceive is a lie, that we all live inside of a grand illusion and reality actually works fundamentally differently than to what we perceive, then this is, indeed, quite an extraordinary claim, and I am simply going to dismiss it unless you can provide extraordinary evidence for it.

    Yet, no extraordinary evidence is ever presented, only vague, loose philosophical arguments. That is just not convincing to me. The reality is that we already know you can fit the predictions of special relativity and quantum mechanics to simple theories of point particles moving deterministically in 3D space with well-defined values at all times, evolving in an absolute space and time. The point is, again, not that we should necessarily believe such a model, but the fact that we know such models can be constructed disproves any claim that we cannot interpret quantum mechanics as a realist theory. If you don’t add anything to it, you have to interpret it as a stochastic theory, but I have no issue with statistics. My issue only arises when people claim a system described by a statistical distribution has “no fact” about it in the real world.

    That is just mysticism not backed by anything.

    I take a very “conservative” approach to philosophy. If you are going to introduce some brand new world-shattering “paradigm shift” metaphysics, then I am going to be your biggest skeptic. I will want you to demonstrate that this is a necessity, either a logical or empirical necessity, such that all more trivial ways to conceive of the world have been exhausted.

    Our belief in objective reality and object permanence isn’t just something we farted out one day for fun because we have an “unreasonable bias.” People believe these things because they fit our day-to-day self-evident empirical observations and do a great job of making sense of things. If you are going to throw them out, you therefore better have a damned good reason, rather than just complaining that we’re being “biased” based on our “intuition.”

    That’s just a cop-out.

    2/2


  • pcalau12i@lemmygrad.ml to Memes@lemmy.ml · What is real? · 15 days ago (edited)

    You may have good arguments for one camp within this discussion (e.g., sophisticated materialism) but to dismiss the philosophical implications outright prima facie indicates either a lack of familiarity with the philosophy of physics or perhaps a dismissal of metaphysics as a fruitful enterprise.

    No, it reflects something called intellectual honesty. It is always possible for two different groups of people, given the same predictive body of mathematics, to draw different metaphysical conclusions from it. The idea that the mathematics necessitates someone’s particular metaphysics is just intellectual dishonesty, pushed by people with bizarre views who can’t defend them on any grounds other than to pretend that the mathematics somehow proves them.

    Call this “strong objectivity”. In contrast, Bernard d’Espagnat, theoretical physicist and philosopher of science, argues against materialism on the grounds that standard quantum mechanics is only “weakly objective”. (See his book, “On Physics and Philosophy”.) Although our observations are intersubjectively valid, quantum mechanics is predictive rather than descriptive: it does not describe the world as consisting of mind-independent entities that have determinate properties before they are observed/measured.

    This is blatantly obviously his personal metaphysical interpretation, which is in no way necessitated by the mathematics. I can just look at the exact same body of mathematics and interpret it as describing an objective but stochastic world. Even in a purely classical world, but one which evolves through random perturbations, we would find that we cannot track the definite states of objects at a given time. We could thus only track an evolving probability distribution. But it is typically understood that, when it comes to probability, there is an underlying configuration of the system in the real world; we just do not know which one it is.

    To deny this is to deny object permanence. These properties are not invisible, they are directly observable. We just happen to not be observing them in the moment, but they still possess observable properties and thus are observable under a counterfactually conceived circumstance. This is the basis of object permanence, that we don’t reject the existence of observable things just because we are not observing them in the precise moment, as long as they can be observed under a counterfactual.
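    As a toy illustration of that point (my own sketch, nothing quantum about it): an unobserved classical random walker always occupies a definite site, yet all we can track is an evolving probability distribution.

```python
import numpy as np

# An unobserved classical walker on 11 sites; each step it hops left or
# right with probability 1/2. We know the initial configuration exactly,
# but without observing it we can only track a probability distribution.
n = 11
p = np.zeros(n)
p[n // 2] = 1.0

for _ in range(4):
    from_right = np.roll(p, -1)          # probability flowing in from the right
    from_left = np.roll(p, 1)            # probability flowing in from the left
    from_right[-1] = from_left[0] = 0.0  # no wrap-around at the edges
    p = 0.5 * (from_right + from_left)

print(p)  # spread over many sites, though the walker occupies exactly one
```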

    There is no fact of the matter concerning the state of the system before we measure it.

    This is to devolve into crackpot solipsism. Humans are made out of particles. If particles have no fact of the matter about them until you look, then other humans also have no fact of the matter about them before you look. This was Schrödinger’s point with his “cat” thought experiment. He was trying to point out that your beliefs about fundamental particles cannot be confined to fundamental particles; they necessarily also imply things about macroscopic objects, like cats, or other people.

    There is, again, literally nothing in the theory that forces you to accept this premise. The delusion goes back to John von Neumann, who was a brilliant mathematician but also a crackpot who originated the “consciousness causes collapse” interpretation of quantum mechanics and was a major advocate for starting a WW3 nuclear holocaust. In one of his books on the mathematics of quantum mechanics, he tries to offer a mathematical “proof” that objective reality doesn’t exist, by arguing that if quantum mechanics is just a stochastic theory, then it should obey certain statistical laws, and then showing that it violates those laws.

    However, John Bell would later debunk von Neumann’s “proof” in his own response paper, published at the same time he published his famous theorem. Since von Neumann was a brilliant mathematician, there were no mathematical flaws in his “proof,” and so it had a major impact and caused many physicists to start agreeing with von Neumann’s mysticism. But Bell pointed out that the issue is not in the mathematics but in the premises. von Neumann’s assumptions are not just rules of pure statistics; they include physical assumptions as well. Specifically, he adopted an additivity assumption which only makes sense if the underlying physics is classical. If the underlying physics is not classical, then there is no reason for such an assumption to hold.

    All von Neumann really proved was that the underlying statistical dynamics cannot be governed by classical physics. This is why, in that same year, Bell also published his famous paper responding to the EPR paper, showing that Einstein, Podolsky, and Rosen’s belief that the underlying physics can be reduced to a classical stochastic theory is false. These physicists with crackpot beliefs love to present a false dichotomy where the only two possibilities are (1) quantum mechanics is a classically stochastic theory or (2) objective reality doesn’t exist. What Bell was trying to argue is that quantum mechanics is a non-classically stochastic theory.

    What is “non-classical” about it is debatable, but the most trivial answer which was the one Bell identified is that it is simply not a local theory. In the modern day literature, this non-locality is sometimes more accurately referred to as contextuality. The stochastic dynamics simply depend upon the full experimental context. For example, consider the Elitzur-Vaidman experiment: https://arxiv.org/abs/hep-th/9305002

    This experiment proves that the mere presence or absence of a barrier alters the statistical behavior of a photon which never interacts with the barrier, because the photon’s stochastic evolution depends upon the entire experimental context, not just what it directly interacts with in the moment. This is why von Neumann’s additivity assumption does not hold. It assumes that if we consider the photon passing through path A while B is blocked, and through path B while A is blocked, then the statistics of photons passing through A or B when neither is blocked should just be Pr(A)+Pr(B). But, as the Elitzur-Vaidman setup shows, this is not the case: even if a photon takes path A, the presence or absence of a barrier on path B, which it never touches, changes its statistical behavior. You therefore cannot meaningfully add together Pr(A_barrier)+Pr(B_barrier) and expect it to yield Pr(A_nobarrier)+Pr(B_nobarrier). They are not the same.
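    Here is a minimal sketch of the standard Mach-Zehnder numbers behind that claim (my own illustration, not code from the paper): with both paths open, interference sends every photon to the bright port; block path B and the dark port fires 25% of the time, even though a photon detected there never touched the barrier.

```python
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50/50 beamsplitter
photon_in = np.array([1, 0], dtype=complex)     # photon enters along path A

# Both paths open: interference sends every photon to the "bright" port.
out_open = BS @ (BS @ photon_in)
print(np.abs(out_open) ** 2)               # [0. 1.] -> dark port never fires

# Barrier on path B: remove the path-B amplitude after the first splitter.
mid = BS @ photon_in
absorbed = np.abs(mid[1]) ** 2             # 0.5 -> photon hit the barrier
out_blocked = BS @ np.array([mid[0], 0])
print(absorbed, np.abs(out_blocked) ** 2)  # 0.5 [0.25 0.25]
```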

    But despite von Neumann’s proof being debunked by Bell, these same crackpots in physics academia took Bell’s theorem and started to run around claiming that Bell’s new theorem is proof objective reality doesn’t exist, even though Bell never claimed that. Bell was literally a major proponent of realist models, publishing a paper trying to develop Bohm’s pilot wave theory, as well as publishing a stochastic model that could reproduce quantum field theory. Non-locality isn’t the only option. It’s just the simplest and most intuitive one, where all the supposed “paradoxes” disappear in a puff of smoke once you accept that it’s just a contextual stochastic theory. However, there have been arguments made to drop other assumptions, like temporality rather than locality, based on the Two-State Vector Formalism. I am not a fan of non-temporality, but I still respect such a position far more than denying that objective reality even exists.

    1/2


  • pcalau12i@lemmygrad.ml to Memes@lemmy.ml · What is real? · 16 days ago

    It really does not. Physics academia is just filled with crackpot mystics. I like to call them the metaphysical-physicists: the physicists who do not just immerse their minds in practical work but start talking metaphysics.

    In 1964, the physicist John Bell proved that if you assume (1) that objective reality exists, (2) quantum mechanics is correct, and (3) special relativity is correct, then you run into a contradiction, and so one of the assumptions must be wrong. Deranged physicists in academia concluded #1 is wrong and started to promote the crackpot mystical view that objective reality doesn’t actually exist. Like 90% of the quantum mysticism you see these days does not originate from non-physicists like Deepak Chopra but from actual PhD physicists.

    This is, at least, the story the mystics like to tell, that Bell’s theorem “proved” there is no objective reality. But this is a historical falsification, because if you actually check the historical record, you find that physicists in academia started to come to the “consensus” that objective reality isn’t real back at the 1927 Solvay conference, decades before John Bell ever published his theorem, and many more decades before it was ever confirmed in experiment. Albert Einstein was pretty much the last major holdout criticizing this turn of events, once asking Abraham Pais, “do you really believe that the moon doesn’t exist when you’re not looking at it?”

    They already decided it doesn’t exist before they had any theorem or any empirical evidence that the theorem was correct. Bell’s theorem genuinely has nothing to do with this turn of events.

    What is even more absurd is that we have known since the day special relativity was introduced in 1905 that premise #3 is not even necessary to make the right predictions. Lorentz had proposed a theory in 1904 which is mathematically equivalent to special relativity without its postulates, and hence we know you can drop #3 without actually dropping the empirical predictions of #3. There is zero empirical necessity for premise #3.

    Metaphysical-physicists love historical falsification. They make up this completely baloney narrative that we should accept the truth of special relativity because “it is the most tested theory in the history of physics,” but the statement is nonsensical, because it is mathematically equivalent to Lorentz’s theory. Hence, every “test” of special relativity is also a test of Lorentz’s theory.

    You see this dishonest line of argumentation pushed a lot by the metaphysical-physicist crowd. They will push the most absurd metaphysics you can imagine that is entirely incoherent and when you say you don’t agree with that, they accuse you of denying the science because it is “well-tested.” But none of their crackpot metaphysics has been tested at all. There is no experiment you can conduct that proves a particle doesn’t have a definite value when you are not looking at it. This is just a delusion.



  • This is sadly pseudoscience that only gets talked about because one smart guy endorsed it, but hardly anyone in academia actually takes it seriously. What you are talking about is called Orch OR, but Orch OR is filled with problems.

    One issue is that Orch OR makes a lot of claims that are not obviously connected to one another. The reason this is an issue is that, while its advocates call the theory “falsifiable” because it makes testable predictions, even if those predictions are tested and found to be correct, that wouldn’t actually validate the theory, because there is no way to logically or mathematically connect the experimental validation to all of its postulates.

    Orch OR has some rather bizarre premises: (1) humans can consciously choose to believe things that cannot be mathematically proven, therefore human consciousness must not be computable; (2) you cannot compute the outcome of a quantum experiment ahead of time, therefore there must be a physical collapse that is fundamentally not computable; (3) since both are not computable, they must be the same thing: physical collapse = consciousness; (4) therefore we should look for evidence that the brain is a quantum computer.

    Argument #1 really makes no sense. Humans believing silly things doesn’t prove human decisions aren’t computable. Just look at AI. It is obviously computable and hallucinates nonsense all the time. This dubious argument means that #3 doesn’t follow; there is no good reason to think consciousness and “collapse” are related.

    Argument #2 is problematic because physical collapse models are not compatible with special relativity or the statistical predictions of non-relativistic quantum mechanics, and so they cannot reproduce the predictions of quantum field theory in all cases, and so they aren’t particularly popular among physicists, and of course there is no evidence for them. Most physicists see the “collapse” as an epistemic, not a physical, event.

    Orch OR also arbitrarily insists on using the Diósi–Penrose model specifically, even though multiple models of physical collapse have been proposed, such as GRW. There is no obvious reason to use this model specifically; it isn’t connected to any of the premises of the theory. Luckily, argument #2 does present falsifiable claims, but because #2 is not logically connected to the rest of the arguments, even if we do prove that the Diósi–Penrose model is correct, it doesn’t follow that #1, #3, or #4 are correct. We would just know there are physical collapses, but nothing else in the theory would follow.

    The only other argument that proposes something falsifiable is #4, but again, #4 is not connected to #1, #2, or #3. Even if you searched around frantically for any evidence that the brain is a quantum computer, and found some, that would just be your conclusion: the brain is a quantum computer. From that, #1, #2, and #3 do not then follow. It would just be an isolated fact in and of itself, an interesting discovery, but it wouldn’t validate the theory. I mean, we already have quantum computers; if you think collapse = consciousness, then you would have to already think quantum computers are conscious. A bizarre conclusion.

    In fact, only #2 and #4 are falsifiable, but even if both #2 and #4 are validated, it doesn’t get you to #1 or #3, so the theory as a whole still would remain unvalidated. It is ultimately an unfalsifiable theory but with falsifiable subcomponents. The advocates insist we should focus on the subcomponents as proof it’s a scientific theory because “it’s falsifiable,” but the theory as a whole simply is not falsifiable.

    Also, microtubules are structural. They don’t play any role in information processing in the brain, just in binding cells together, and it’s not just brain cells: microtubules are found throughout your body in all kinds of cells. There is no reason at all to think they play any role in computations in the brain. The only reason you see interest in them from the Orch OR “crowd” (it’s like, what, 2 people who just so happen to be very loud?) is because they’re desperate for anything that vaguely looks like quantum effects in the brain, and so far microtubules are the only things in which quantum effects seem to play some role, but this role is, again, structural. There is no reason to believe it plays any role in information processing or cognition.


  • I think a lot of proponents of objective collapse would have a bone to pick with that, haha, although it’s really just semantics. They are proposing extra dynamics that we don’t understand and can’t yet measure.

    Any actual physicist would agree objective collapse has to modify the dynamics, because that is unavoidable when you introduce an objective collapse model and actually look at the mathematics. No one in the physics community would debate that GRW or the Diósi–Penrose model technically makes different predictions; in fact, the people who have proposed these models often view this as a positive thing, since it makes them testable rather than just philosophy.

    How the two theories would deviate depends upon your specific objective collapse model, because they place their thresholds in different locations. For GRW, it is based on a stochastic process whose probability increases over time rather than a sharp threshold, but you should still see statistical deviations between its predictions and quantum mechanics if you can maintain a coherent quantum state for a long enough time. The DP model ties the threshold to gravity, which I do not know enough about to explain in detail, but I think the rough idea is that if you have sufficient mass/energy in a particular locality it will cause a “collapse,” and so if you can conduct an experiment where that threshold of mass/energy is met, traditional quantum theory would predict the system could still be coherent whereas the DP model would reject that, and so you’d inherently end up with deviations in the predictions.
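    To give a feel for how such deviations become testable, here is a back-of-the-envelope sketch using the commonly quoted GRW collapse rate (the parameter value is my assumption for illustration, not something from this thread): the per-nucleon rate is tiny, but it scales with particle number, so big superpositions die almost instantly while small ones outlive any experiment.

```python
# Commonly quoted GRW parameter (assumed here for illustration): each
# nucleon suffers a spontaneous localization roughly once per 1e16 s.
LAMBDA = 1e-16  # collapse rate per nucleon, in 1/s

for n_nucleons, label in [(1, "single nucleon"),
                          (1e6, "large molecule"),
                          (1e23, "dust grain / macroscopic object")]:
    # Hits are independent, so the effective rate simply adds up.
    t_collapse = 1 / (LAMBDA * n_nucleons)  # expected time to first hit, s
    print(f"{label}: ~{t_collapse:.0e} s of coherence before a collapse")

# A lone nucleon stays coherent for ~1e16 s (longer than the age of the
# universe); 1e23 nucleons collapse in ~1e-7 s. Keeping a large system
# coherent much longer than that would statistically contradict GRW.
```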

    What’s the definition of interact here?

    An interaction is a local event where two systems become correlated with one another as a result of the event.

    “The physical process during which O measures the quantity q of the system S implies a physical interaction between O and S. In the process of this interaction, the state of O changes…A quantum description of the state of a system S exists only if some system O (considered as an observer) is actually ‘describing’ S, or, more precisely, has interacted with S…It is possible to compare different views, but the process of comparison is always a physical interaction, and all physical interactions are quantum mechanical in nature.”

    The term “observer” is used very broadly in RQM and can apply to even a single particle. It is whatever physical system you are choosing as the basis of a coordinate system to describe other systems in relation to.

    Does it have an arbitrary cutoff like in objective collapse?

    It has a cutoff, but not an arbitrary cutoff. The cutoff is in relation to whatever system participates in an interaction. If you have a system in a superposition of states and you interact with it, then from your perspective it is cut off, because the system now has definite, real values in relation to you. But it does not necessarily have definite, real values in relation to some other isolated system that didn’t interact at all.

    You can make a non-separable state as big as you want.

    Only in relation to things not participating in the interaction. The moment something enters into participation, the states become separable. Two entangled particles are nonseparable up until you interact with them. Although, even for the two entangled particles, from their “perspectives” on each other, they are separable. It is only nonseparable from the perspective of yourself, who has not interacted with them yet. If you interact with them, an additional observer who has not interacted with you or the two particles yet may still describe all three of you in a nonseparable entangled state, up until they interact with the system themselves.

    This is also the first I’ve heard anything about time-symmetric interpretations. That sounds pretty fascinating. Does it not have experimenter “free will”, or do they sidestep the no-go theorems some other way?

    It violates the “free will” assumption because there is no physical possibility of setting up an experiment where the measurement settings cannot potentially influence the system if you take both the time-forwards and time-reverse evolution seriously. We tend to think because we place the measurement device after the initial preparation and that causality only flows in a single time direction, then it’s possible for the initial preparation to affect the measurement device but impossible for the measurement device to affect the initial preparation. But this reasoning doesn’t hold if you drop the postulate of the arrow of time, because in the time-reverse, the measurement interaction is the first interaction in the causal chain and the initial preparation is the second.

    Indeed, every single Bell test, if you look at its time-reverse, is unambiguously local and easy to explain classically, because all the final measurements are brought to a single locality, so in the time-reverse, all the information needed to explain the experiment begins in a single locality and evolves towards the initial preparation. Bell tests only appear nonlocal in the time-forwards evolution, and if you discount the time-reverse as having any sort of physical reality, it then forces you to conclude it must either be nonlocal or a real state for the particles independent of observation cannot exist. But if you drop the postulate of the arrow of time, this conclusion no longer follows, although you do end up with genuine retrocausality (as opposed to superdeterminism which only gives you pseudo-retrocausality), so it’s not like it gives you a classical system.

    So saying we stick with objective collapse or multiple worlds, what I mean is, could you define a non-Lipschitz continuous potential well (for example) that leads to multiple solutions to a wave equation given the same boundary?

    I don’t know, but that is a very interesting question. If you figure it out, I would be interested in the answer.


  • Many of the interpretations of quantum mechanics are nondeterministic.

    1. Relational quantum mechanics interprets particles as taking on discrete states at random whenever they interact with another particle, but only in relation to what they interact with and not in relation to anything else. That means particles don’t have absolute properties: if you measure a particle’s spin to be +1/2, this is not an absolute property, but a property that exists only relative to you/your measuring device. Each interaction leads to particles taking on definite states randomly according to the statistics predicted by quantum theory, but only in relation to the things participating in those interactions.

    2. Time-symmetric interpretations explain violations of Bell inequalities by rejecting a fundamental arrow of time. Without it, there’s no reason to evolve the state vector in a single time-direction, so they adopt the Two-State Vector Formalism, which evolves it in both directions simultaneously. When you do this, you find it places enough constraints on the particles to give you absolutely deterministic values called weak values, but these weak values are not what you directly measure. What you directly measure are the “strong” values. You can interpret it such that every time two particles interact, they take on “strong” values randomly according to a rule called the Aharonov-Bergmann-Lebowitz rule (sketched in code after this list). This makes time-symmetric interpretations local realist but not local deterministic, as they can explain violations of Bell inequalities through local information stored in the particles, but that local information still only statistically determines what you observe.

    3. Objective collapse models are not really interpretations but new models, because they can’t universally reproduce the mathematics of quantum theory, but some serious physicists have explored them as possibilities, and they are also fundamentally random. You assume that particles literally spread out as waves until some threshold is met, then they collapse down randomly into classical particles. The reason this can’t reproduce the mathematics of quantum theory is that it implies quantum effects cannot be scaled beyond whatever that threshold is, but no such threshold exists in traditional quantum mechanics, so such a theory must necessarily deviate from its predictions at that threshold. However, it is very hard to scale quantum effects to large scales, so if you place the threshold high enough, you can’t practically distinguish it from traditional quantum mechanics.
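    Since item 2 leans on the Aharonov-Bergmann-Lebowitz rule, here is a minimal sketch of it (my own illustration): given a pre-selected state, a post-selected state, and an intermediate projective measurement, the ABL rule assigns a probability to each intermediate outcome.

```python
import numpy as np

def abl_probabilities(pre, post, projectors):
    """Aharonov-Bergmann-Lebowitz rule: probabilities of the outcomes of
    an intermediate measurement, given pre- and post-selected states."""
    weights = [abs(post.conj() @ (P @ pre)) ** 2 for P in projectors]
    return np.array(weights) / sum(weights)

# Example: pre-select spin-x up, post-select spin-y up, measure spin-z.
pre = np.array([1, 1]) / np.sqrt(2)    # |+x>
post = np.array([1, 1j]) / np.sqrt(2)  # |+y>
P_up = np.array([[1, 0], [0, 0]])      # projector onto spin-z up
P_down = np.array([[0, 0], [0, 1]])    # projector onto spin-z down
print(abl_probabilities(pre, post, [P_up, P_down]))  # [0.5 0.5]
```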


  • That’s literally China’s policy. The problem is most westerners are lied to about China’s model; it is painted as if Deng Xiaoping was an uber-capitalist lover who turned China into a free market economy, and that was the end of history.

    The reality is that Deng Xiaoping was a classical Marxist, so he wanted China to follow the development path of classical Marxism (grasping the large, letting go of the small) and not the revision of Marxism by Stalin (nationalizing everything). Marxian theory is about formulating a scientific theory of socioeconomic development, so if they wanted to develop as rapidly as possible, they needed to adhere more closely to Marxian economics.

    Deng also knew the people would revolt if the country remained poor for very long, so they should hyper-focus on economic development first and foremost, at all costs, for a short period of time. He had the foresight to predict that such a hyper-focus on development would lead to a lot of problems: environmental degradation, rising wealth inequality, etc. So he argued that this should be a two-step development model: an initial stage of rapid development, followed by a second stage of shifting to a model with more focus on high-quality development, to tackle the problems of the previous stage once the country is a lot wealthier.

    The first stage ran from Deng Xiaoping to Jiang Zemin; they then announced they were entering the second phase under Hu Jintao, and this has carried on into the Xi Jinping administration. Western media decried Xi as an “abandonment of Deng,” because western media is just pure propaganda, when in reality this was Deng’s vision. China has switched to a model that no longer prioritizes rapid growth but prioritizes high-quality growth.

    One of the policies for this period has been to tackle the wealth inequality that arose during the first period. They have done this through various methods, but one major one is huge poverty alleviation initiatives which the wealthy have been required to fund. Tencent, for example, “donated” an amount worth 3/4 of its whole yearly profit to government poverty alleviation initiatives. China does tax the rich, but they have a system of unofficial “taxation” as well, where they discreetly take over a company through a combination of party cells and becoming a major shareholder via the golden share system, and then make that company “donate” its profits back to the state. As a result, China’s wealth inequality has been gradually falling since 2010, and they’ve become the #1 funder of green energy initiatives in the entire world.

    The reason you don’t see this in western countries is because they are capitalist. Most westerners have a mindset that laws work like magic spells: you can just write down on a piece of paper whatever economic system you want, and this is like casting a spell that creates that system as if by magic, so if you just craft the language perfectly to get the perfect spell, then you will create the perfect system.

    The Chinese understand this is not how reality works. Economic systems are real physical machines that continually transform nature into goods and services for human consumption, and so whatever laws you write can only meaningfully be implemented in reality if there is a physical basis for them.

    The physical basis for political power ultimately rests in production relations, that is to say, ownership and control over the means of production, and thus the ability to appropriate all wealth. The wealth appropriation in countries like the USA is entirely in the hands of the capitalist class, and so they use that immense wealth, and thus political power, to capture the state and subvert it to their own interests, and thus corrupt the state to favor those very same capital interests rather than to control them.

    The Chinese understand that if you want the state to remain an independent force that is not captured by the wealth appropriators, then the state must have its own material foundations. That is to say, the state must directly control its own means of production; it must have its own basis in economic production as well, so it can act as an independent economic force and not be wholly dependent upon the capitalists for its material existence.

    Furthermore, its economic basis must be far larger, and thus more economically powerful, than any other capitalist. Even if the state owns some basis, if that basis is too small it would still become subverted by capitalist oligarchs. The Chinese state directly owns and controls the majority of all its largest enterprises, and has indirect control over most of the minority of large enterprises it doesn’t directly control. This makes the state itself by far the largest producer of wealth in the whole country, producing 40% of the entire GDP; no other single enterprise in China even comes close to that.

    This enormous control over production allows the state to control non-state actors and not the other way around. In a capitalist country, the non-state actors, the wealthy bourgeois class who own the large enterprises, instead capture the state and control it for their own interests, and the state does not genuinely act as an independent body with its own independent interests, but only as the accumulation of the average interests of the average capitalist.

    No law you write that is unfriendly to capitalists under such a system will be sustainable, and such laws are often entirely unenforceable, because in capitalist societies there is no material basis for them. The US is a great example of this. It’s technically illegal to do insider trading, but everyone in the US Congress openly does insider trading, openly talks about it, and the record of them getting rich from insider trading is pretty much public knowledge. But nobody ever gets arrested for it, because the law is not enforceable: the material basis of US society is production relations that give control of the commanding heights of the economy to the capitalist class, and so the capitalists just buy off the state for their own interests, and there is no meaningful competing power dynamic against that in US society.


  • China does tax the rich, but they also have an additional system of “voluntary donations.” For example, Tencent “volunteered” to give up an amount worth about 3/4 of its yearly profits to social programs.

    I say “voluntary” because it’s obviously not very voluntary. China’s government has a party cell inside of Tencent as well as a “golden share” that allows it to act as a major shareholder. It basically has control over the company. These “donations” also go directly to government programs like poverty alleviation and not to a private charity group.


  • pcalau12i@lemmygrad.ml to Memes@lemmy.ml · Americans and socialism · 1 year ago (edited)

    I have the rather controversial opinion that the failure of communist parties doesn’t come down to a failure to craft the perfect rhetoric or argument in the free marketplace of ideas.

    Ultimately, facts don’t matter, because if a person is raised around thousands of people constantly telling them a lie and one person telling them the truth, they will believe the lie nearly every time. What really matters is how widely you can propagate an idea, not how well crafted that idea is.

    How much you can propagate an idea depends upon how much wealth you have to buy and control media institutions, and how much wealth you control depends upon your relations to production. I.e. in capitalist societies capitalists control all wealth and thus control the propagation of ideas, so arguing against them in the “free marketplace of ideas” is ultimately always a losing battle. It is thus pointless to even worry too much about crafting the perfect and most convincing rhetoric.

    Control over the means of production translates directly to political influence and power, yet communist parties not in power don’t control any, and thus have no power. Many communist parties just hope one day to get super lucky to take advantage of a crisis and seize power in a single stroke, and when that luck never comes they end up going nowhere.

    Here is where my controversial take comes in. If we want a strategy that is more consistently successful, it has to rely less on luck, meaning there needs to be some way to gradually and consistently increase the party’s power without relying on a big jump in power during a crisis. Even if there is a crisis, the party will be better positioned to take advantage of it if it has already gradually built up a base of power.

    Yet, if power comes from control over the means of production, this necessarily means the party must make strides to acquire means of production in the interim period before revolution. This leaves us with the inevitable conclusion that communist parties must engage in economics even long prior to coming to power.

    The issue however is that to engage in economics in a capitalist society is to participate in it, and most communists at least here in the west see participation as equivalent to an endorsement and thus a betrayal of “communist principles.”

    The result of this mentality is that communist parties are simply incapable of gradually increasing their base of power, and their only hope is to wait for a crisis for sudden gains. Yet even during crises, their limited power often makes it difficult to take advantage of the moment anyways, so they rarely gain much of anything and are stuck in a perpetual cycle of being eternal losers.

    Most communist parties just want to go from zero to one hundred in a single stroke, which isn’t impossible, but it would require very pristine conditions and all the right social elements aligning perfectly. If you want a more consistent strategy of getting communist parties into power, you need something that doesn’t rely on such a stroke of luck, on any sudden leap in the political power of the party, but is capable of growing it gradually over time. This requires the party to engage in economics, and there is simply no way around this conclusion.


  • pcalau12i@lemmygrad.ml to Memes@lemmy.ml · Americans and socialism · 1 year ago (edited)

    You people have good luck with this? I haven’t. I don’t find that you can just “trick” people into believing in socialism by changing the words. The moment it becomes obvious you’re criticizing free markets and the rich and advocating public ownership, they will catch on.



  • On the surface, it does seem like there is a similarity. If a particle is measured over here and later over there, in quantum mechanics it doesn’t necessarily have a well-defined position in between those measurements. You might then want to liken it to a game engine where the particle is only rendered when the player is looking at it. But the difference is that to compute how the particle arrived over there when it was previously over here, in quantum mechanics, you have to actually take into account all possible paths it could have taken to reach that point.

    This is something game engines do not do and actually makes quantum mechanics far more computationally expensive rather than less.


  • Any time you do something to the particles on Earth, the ones on the Moon are affected also

    The no-communication theorem already proves that manipulating one particle in an entangled pair has no impact at all on the other. The proof uses the reduced density matrices of the particles, which capture both their probabilities of showing up in a particular state and their coherence terms, which capture their ability to exhibit interference effects. No change you can make to one particle in an entangled pair can possibly lead to an alteration of the reduced density matrix of the other particle.
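    A minimal numerical sketch of that proof idea (my own illustration): take a Bell pair, apply any unitary you like to the Earth-side qubit, and the Moon-side reduced density matrix stays exactly I/2.

```python
import numpy as np

# Bell state |Φ+> = (|00> + |11>)/sqrt(2), written as a density matrix.
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(phi, phi.conj())

def reduced_moon(rho):
    """Partial trace over the first (Earth) qubit."""
    r = rho.reshape(2, 2, 2, 2)  # indices: earth, moon, earth', moon'
    return np.einsum('ijik->jk', r)

# Apply a random unitary to the Earth qubit only.
m = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
U, _ = np.linalg.qr(m)          # QR factorization yields a unitary
U_full = np.kron(U, np.eye(2))  # acts on Earth, identity on the Moon
rho_after = U_full @ rho @ U_full.conj().T

print(np.round(reduced_moon(rho), 3))        # I/2
print(np.round(reduced_moon(rho_after), 3))  # still I/2, unchanged
```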


  • pcalau12i@lemmygrad.ml to Open Source@lemmy.ml · Proton's biased article on Deepseek · 1 year ago (edited)

    There is no “fundamentally” here, you are referring to some abstraction that doesn’t exist. The models are modified during the fine-tuning process, and the process trains them to learn to adopt DeepSeek R1’s reasoning technique. You are acting like there is some “essence” underlying the model which is the same between the original Qwen and this model. There isn’t. It is a hybrid and its own thing. There is no such thing as “base capability,” the model is not two separate pieces that can be judged independently. You can only evaluate the model as a whole. Your comment is just incredibly bizarre to respond to because you are referring to non-existent abstractions and not actually speaking of anything concretely real.

    The model is neither Qwen nor DeepSeek R1; it is DeepSeek R1 Qwen Distill, as the name says. It would be like saying it’s false advertising to call a mule a hybrid of a donkey and a horse because its “base capabilities” are those of a donkey, so it has nothing to do with horses and is really just a donkey at the end of the day. The statement is so bizarre I just do not even know how to address it. It is a hybrid, its own distinct third thing. The model’s capabilities can only be judged as it exists, and its capabilities differ from Qwen and the original DeepSeek R1 as actually scored by various metrics.

    Do you not know what fine-tuning is? It refers to actually adjusting the weights in the model, and it is the weights that define the model. And this fine-tuning is done using DeepSeek R1, meaning the model is adjusted to take on capabilities of R1. It gains R1 capabilities at the expense of Qwen capabilities, as DeepSeek R1 Qwen Distill performs better on reasoning tasks but actually not as well as the baseline models on non-reasoning tasks. The weights literally contain information from both Qwen and R1 at the same time.

    Speaking of its “base capabilities” is a meaningless floating abstraction which cannot be empirically measured and doesn’t refer to anything concretely real. It only has its real concrete capabilities, not some hypothetical imagined capabilities. You accuse them of “marketing” even though it is literally free. All DeepSeek sells is compute to run models, but you can pay any company to run these distill models. They have no financial benefit for misleading people about the distill models.

    You genuinely are not making any coherent sense at all. You are insisting that a hybrid model which is objectively different, and objectively scores and performs differently, should be given the exact same name, for reasons you cannot seem to articulate. It clearly needs a different name, and since it was created using the DeepSeek R1 model’s distillation process to fine-tune it, it makes sense to call it DeepSeek R1 Qwen Distill. Yet for some reason you insist this is lying and misrepresenting it, that it actually has literally nothing to do with DeepSeek R1 at all, that it should just be called Qwen, and that we should pretend it is literally the same model, despite it not being the same model: its training weights are different (you can do a “diff” on the two model files if you don’t believe me!) and it performs differently on the same metrics.

    There is simply no rational reason to intentionally want to mislabel the model as just being Qwen and having no relevance to DeepSeek R1. You yourself admitted that the weights are trained on R1 data so they necessarily contain some R1 capabilities. If DeepSeek was lying and trying to hide that the distill models are based on Qwen and Llama, they wouldn’t have literally put that in the name to let everyone know, and released a paper explaining exactly how those were produced.

    It is clear to me that you and your other friends here have some sort of alternative agenda that makes you not want to label it correctly. DeepSeek is open about the distill models using Qwen and Llama, but you want them to be closed and not reveal that they also used DeepSeek R1. The current name for it is perfectly fine, and pretending it is just a Qwen model (or a Llama model, for the other distilled versions) is straight-up misinformation; anyone who downloads the models and runs them themselves will immediately see that they perform differently. They are hybrid models correctly called what they are: DeepSeek R1 Qwen Distill and DeepSeek R1 Llama Distill.


  • pcalau12i@lemmygrad.ml to Open Source@lemmy.ml · Proton's biased article on Deepseek · 1 year ago (edited)

    The 1.5B/7B/8B/13B/32B/70B models are all officially DeepSeek R1 models, that is what DeepSeek themselves refer to those models as. It is DeepSeek themselves who produced those models and released them to the public and gave them their names. And their names are correct, it is just factually false to say they are not DeepSeek R1 models. They are.

    The “R1” in the name means “reasoning version one” because it does not just spit out an answer but reasons through it with an internal monologue. For example, here is a simple query I asked DeepSeek R1 13B:

    Me: can all the planets in the solar system fit between the earth and the moon?

    DeepSeek: Yes, all eight planets could theoretically be lined up along the line connecting Earth and the Moon without overlapping. The combined length of their diameters (approximately 379,011 km) is slightly less than the average Earth-Moon distance (about 384,400 km), allowing them to fit if placed consecutively with no required spacing.

    However, on top of its answer, I can expand an option to see the internal monologue it went through before generating the answer; you can find the internal monologue here, as it is too long to paste.
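    The model’s arithmetic is easy to sanity-check. A quick sketch using commonly cited approximate mean diameters (my own numbers, which land near, though not exactly on, the figure the model quoted):

```python
# Approximate mean diameters in km, Earth excluded (we are fitting the
# other seven planets between the Earth and the Moon).
diameters = {
    "Mercury": 4_879, "Venus": 12_104, "Mars": 6_779,
    "Jupiter": 139_820, "Saturn": 116_460,
    "Uranus": 50_724, "Neptune": 49_244,
}
total = sum(diameters.values())
earth_moon = 384_400  # average Earth-Moon distance in km
print(total, total < earth_moon)
# ~380,010 km < 384,400 km -> they fit, with roughly 4,000 km to spare
```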

    What makes these consumer-oriented models different is that rather than being trained on raw data, they are trained on synthetic data from pre-existing models. That’s what the “Qwen” or “Llama” part of the name means. The 7B model is trained on synthetic data produced by Qwen, so it is effectively a compressed version of Qwen. However, neither Qwen nor Llama can “reason”; they do not have an internal monologue.

    This is why it is just incorrect to claim that something like DeepSeek R1 7B Qwen Distill has no relevance to DeepSeek R1 and is just a Qwen model. If it’s supposedly a Qwen model, why is it that it can do something that Qwen cannot do but only DeepSeek R1 can? It’s because, again, it is a DeepSeek R1 model: they add the R1 reasoning to it during the distillation process as part of its training. They basically use synthetic data generated from DeepSeek R1 to fine-tune it, readjusting its parameters so it adopts a similar reasoning style. It is objectively a new model because it performs better on reasoning tasks than a normal Qwen model. It cannot be considered solely a Qwen model nor solely an R1 model, because its parameters contain information from both.
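    To make concrete what that kind of distillation fine-tuning does mechanically, here is a toy sketch (entirely my own illustration with tiny made-up networks, not DeepSeek’s actual pipeline): the student’s weights are genuinely rewritten toward reproducing the teacher’s outputs, which is why the result is a hybrid rather than “just Qwen.”

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
teacher = torch.nn.Linear(16, 8)  # stands in for DeepSeek R1
student = torch.nn.Linear(16, 8)  # stands in for the Qwen base model
opt = torch.optim.Adam(student.parameters(), lr=1e-2)

before = student.weight.detach().clone()

for _ in range(200):
    x = torch.randn(32, 16)                     # stand-in "prompts"
    with torch.no_grad():
        target = F.softmax(teacher(x), dim=-1)  # teacher's synthetic output
    loss = F.kl_div(F.log_softmax(student(x), dim=-1), target,
                    reduction="batchmean")      # pull student toward teacher
    opt.zero_grad()
    loss.backward()
    opt.step()

# The student's weights are no longer the original "base" weights:
print((student.weight - before).abs().mean())  # clearly nonzero
```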


  • As I said, they will likely come to the home in the form of cloud computing, which is how advanced AI comes to the home. You can run some AI models at home, but they’re nowhere near as advanced as cloud-based services and so not as useful. I’m not sure why, if we ever have AGI, it would need to be run at home. It doesn’t need to be. It would be nice if it could be run entirely at home, but that’s not a necessity, just a convenience. Maybe your personal AGI robot who does all your chores for you only works when the WiFi is on. That would not prevent people from buying it; I mean, those Amazon Fire TVs are selling like hot cakes and they only work when the WiFi is on. There also already exist some AI products that require a constant internet connection.

    It is kind of similar with quantum computing. There actually do exist consumer-end home quantum computers, such as Triangulum, but it only does 3 qubits, so it’s more of a toy than a genuinely useful computer. For useful tasks, it will in all likelihood all be cloud-based. The NMR technology Triangulum is based on is not known to be scalable, so the only other possibility for quantum computers making it into the home in a non-cloud-based fashion would be optical quantum computing. There could be a breakthrough there, you can’t rule it out, but I wouldn’t keep my fingers crossed. If quantum computers become useful for regular people in the next few decades, I would bet it will all be through cloud-based services.


  • If quantum computers actually ever make significant progress to the point that they’re useful (big if) it would definitely be able to have positive benefits for the little guy. It is unlikely you will have a quantum chip in your smartphone (although, maybe it could happen if optical quantum chips ever make a significant breakthrough, but that’s even more unlikely), but you will still be able to access them cheaply over the cloud.

    I mean, IBM spends billions on its quantum computers and gives cloud access to anyone who wants to experiment with them completely free. That’s how I first learned quantum computing: running algorithms on IBM’s cloud-based quantum computers. I’m sure that if demand picks up, once they stop being experimental and actually become useful, they’ll probably start charging a fee, but the fact that access is free now makes me suspect it will not be very much.

    I think a comparison can be made with LLMs, such as with OpenAI. It takes billions to train those giant LLMs as well, and they can only be trained on extremely expensive computers, yet a single query costs less than a penny, and there are still free versions available. Cloud access will likely always be incredibly cheap; it’s a great way to bring super-expensive hardware to regular people.

    That’s likely what the future of quantum computing will be for regular people, quantum computing through cloud access. Even if you never run software that can benefit from it, you may get benefits indirectly, such as, if someone uses a quantum computer to help improve medicine and you later need that medicine.