• 0 Posts
  • 30 Comments
Joined 2 years ago
Cake day: July 7th, 2024

  • There are nonlocal effects in quantum mechanics, but I am not sure I would consider quantum teleportation to be one of them. Quantum teleportation may look nonlocal at first glance, but it can be trivially fitted to local hidden variable models, such as Spekkens’ toy model, which makes it seem, at least to me, to belong in the class of local algorithms.

    You have to remember that what is being “transferred” is a statistical description, not something physically tangible, and it is only observable in a large sample size (an ensemble). Hence, it would be strange to think that the qubit holds a register of its entire quantum state and that this register disappears and reappears on another qubit. The total information in the quantum state only exists in an ensemble.

    In an individual run of the experiment, clearly, the joint measurement of 2 bits of information and its transmission over a classical channel is not transmitting the entire quantum state, but the quantum state is not something that exists in an individual run of the experiment anyway. The total information transmitted over an ensemble is much greater and provides sufficient information to move the statistical description of one qubit to another entirely locally.

    The complete quantum state is transmitted through the classical channel over the whole ensemble, not in an individual run of the experiment. Hence, it can be replicated in a local model. It only looks like more than 2 bits of data are moving from one qubit to the other if you treat the quantum state as if it actually were a real physical property of a single qubit, because obviously that is not something that can be specified with 2 bits of information; an ensemble, however, can indeed encode a continuous distribution.

    This is essentially a trivial feature known to any experimentalist, and it needs to be mentioned only because it is stated in many textbooks on quantum mechanics that the wave function is a characteristic of the state of a single particle. If this were so, it would be of interest to perform such a measurement on a single particle (say an electron) which would allow us to determine its own individual wave function. No such measurement is possible.

    — Dmitry Blokhintsev

    Here’s a trivially simple analogy. We describe the statistical distribution of a single bit with [a; b], where a is the probability of 0 and b is the probability of 1. This is a continuous distribution and thus cannot be specified with just 1 bit of information. But suppose we set up a protocol where I measure this bit and send you its value, and you then set your own bit to match what you received. The statistics of your bit will now also be guaranteed to be [a; b]. How is it that we transmitted a continuous statistical description, which cannot be specified in 1 bit, using only 1 bit of information? We didn’t. In every single individual trial, we are always just transmitting 1 bit. The statistical description refers to an ensemble, so you have to consider the amount of information actually transmitted over the ensemble.
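    A minimal sketch of this protocol in Python (the values a = 0.3, b = 0.7 and all names here are my own, purely for illustration):

```python
import random

# The "statistical description" of the sender's bit is [a; b]: P(0) = a, P(1) = b.
a, b = 0.3, 0.7

def run_trial():
    # The sender's bit is sampled according to its statistical description and
    # measured: exactly 1 bit of information is produced and transmitted.
    sent_bit = 0 if random.random() < a else 1
    # The receiver sets their own bit to the single bit they received.
    return sent_bit

# No single trial transmits the continuous pair [a; b]; only the ensemble does.
trials = 100_000
ones = sum(run_trial() for _ in range(trials))
print(f"receiver's statistics: P(0) ~ {1 - ones / trials:.3f}, P(1) ~ {ones / trials:.3f}")
# -> approximately [0.3; 0.7], recovered only across many runs
```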

    A qubit’s quantum state has 2 degrees of freedom, as it can be specified as a point on the Bloch sphere with just two angles. The amount of data transmitted over the classical channel is 2 bits. Over an ensemble, those 2 bits become 2 continuous values, and thus the classical channel over an ensemble carries exactly the degrees of freedom needed to describe the complete quantum state of a single qubit.


  • I got interested in quantum computing as a way to combat quantum mysticism. Quantum mystics love to use quantum mechanics to justify their mystical claims, like quantum immortality, quantum consciousness, quantum healing, etc. Some mystics use quantum mechanics to “prove” things like we all live inside of a big “cosmic consciousness” and there is no objective reality, and they often reference papers published in the actual academic literature.

    These papers on quantum foundations are almost universally framed in terms of quantum circuits, because they deal with quantum information science, and they give you a logical argument that there is something “weird” about the logical structure of quantum mechanics, as shown in things like Bell’s theorem, the Frauchiger-Renner paradox, the Elitzur-Vaidman paradox, etc.

    If a person claims something mystical and sends you a paper, and you can’t understand the paper, how are you supposed to respond? You can use quantum computing as a tool to help you learn quantum information science so that you can eventually parse the paper, and then you will know how to rebut their mystical claims. But without actually studying the mathematics you will be at a loss.

    You have to put some effort into understanding the mathematics. If you just go vaguely off what you see in YouTube videos, then you’re not going to understand what is actually being talked about. You can, for example, go through IBM’s courses on the basics of quantum computing and read a textbook on the subject; that gives you the foundations in quantum information science needed to actually parse the logical arguments in these papers and what they are really trying to say.


  • Moore’s law died a long time ago. Engineers pretended it was still going for years by abusing the nanometer metric: if they cleverly found a way to use the space more effectively, they treated it as if they had packed more transistors into the same area, and so they would call it a smaller process node, even though, quite literally, they did not shrink the transistors or increase the number of transistors on the die.

    This actually started to happen around 2015. These clever tricks were always exaggerated, because there is no objective metric to say that a particular trick on a 20nm node really gets you performance equivalent to a 14nm node, which gave huge leeway for exaggeration. In reality, actual performance gains have drastically slowed since then, and the cracks have really started to show when you look at the 5000 series GPUs from Nvidia.

    The 5090 is only super powerful because the die is larger, so it fits more transistors, not because they actually fit more transistors per unit area. If you account for the die size, it’s actually even less efficient than the 4090 and significantly less efficient than the 3090. In order to pretend there have been upgrades, Nvidia has been releasing AI frame-generation software for its GPUs and artificially locking that software behind the newer series. The program Lossless Scaling proves that you can run AI frame generation on any GPU, even ones from over a decade ago, and that Nvidia locking it to specific GPUs is not a hardware limitation but an attempt to make up for the lack of actual improvements in the GPU die.

    Chip improvements have been slowing drastically for over a decade now, and the industry just keeps trying to paper it over.


  • Mathematics is just a language to describe patterns we observe in the world. It really is not fundamentally different from English or Chinese; it is just more precise, so there is less ambiguity about what is actually being claimed. If someone makes a logical argument with mathematics, they cannot hide behind vague buzzwords with unclear meanings that prevent the claim from actually being tested.

    Mathematics is just a language that forces you to have extreme clarity, but it is still ultimately a language all the same. Its perfect consistency hardly matters. What matters is that you can describe patterns in the world with it and use it to identify those patterns in a particular context. If the language has some sort of inconsistency that keeps it from being useful in a particular context, then you can just construct a different language that is more useful in that context.

    It is, of course, preferable that it be more consistent than not, so that it is applicable to as many contexts as possible without having to change up the language, but absolute, perfect consistency is not necessary either.


  • Historically they often actually have the reverse effect.

    Sanctions aren’t subtle; they aren’t some sneaky way of hurting a country so that the people blame their government and try to overthrow it. They are about as subtle as bombing a country and then blaming its government. Everyone who lives there sees the impacts of the sanctions directly and knows the cause is the foreign power. When a foreign power lays siege to a country, it often has the effect of strengthening people’s support for their government. Even the government’s flaws can be overlooked, because it can point to the foreign country’s actions as the thing to blame.

    Indeed, North Korea is probably the most sanctioned country in history yet is also one of the most stable countries on the planet.

    I thought it was a bit amusing when Russia seized Crimea and the western world’s brilliant response was to sanction Crimea and shut down its water supply, to which Russia responded by building one of the largest bridges in Europe to facilitate trade between Russia and Crimea, as well as investing heavily in new water infrastructure.

    If one foreign country is trying to starve you, and another country is clearly investing a lot of money into trying to help you… whose favor do you think such a policy wins?

    For some reason the western mind cannot comprehend this. They constantly insist that the western world needs to lay economic siege on all the countries not aligned with it and when someone points out that this is just making people of those countries hate the western world and want nothing to do with them and strengthening the resolve of their own governments, they just deflect by calling you some sort of “apologist” or whatever.

    Indeed, during the Cuban Thaw, when Obama lifted some sanctions, Obama became rather popular in Cuba, to the point that his approval ratings at times even surpassed Fidel’s, and Cuba started to implement reforms to allow for further economic cooperation with the US government and US businesses. They were very happy to become an ally of the US, but then Democrats and Republicans suddenly decided collectively to make a U-turn, abandon all of that, and destroy all the goodwill that had built up.

    But the people of Cuba are not going to capitulate, because the government is actually popular, as US internal documents constantly admit, and that popularity will only be furthered by the increased blockade. The US is just going to create a North Korea-style scenario off its own coast.



  • The reason quantum computers are theoretically faster is because of the non-separable nature of quantum systems.

    Imagine you have a classical computer where some logic gates flip bits randomly, and multi-bit logic gates can flip them randomly but in a correlated way. These kinds of computers exist and are called probabilistic computers; you can represent all the bits using a vector and the logic gates using matrices called stochastic matrices.

    The vector is necessarily non-separable, meaning you cannot get the right predictions if you describe the statistics of the computer with a separate vector assigned to each p-bit; you must assign a single vector to all p-bits taken together. This is because the statistics can become correlated, i.e. the statistics of one p-bit can depend upon another, and if you describe them using separate vectors you lose the information about the correlations between the p-bits.

    The p-bit vector grows in complexity exponentially as you add more p-bits to the system (complexity = 2^N, where N is the number of p-bits), even though the total state of all the p-bits only grows linearly (complexity = 2N). The reason for this is purely epistemic. The physical system only grows in complexity linearly, but because we are ignorant of the actual state of the system (2N), we have to consider all possible configurations of the system (2^N) over an infinite number of experiments.

    The exponential complexity arises from considering what physicists call an “ensemble” of individual systems. We are not considering the state of the physical system as it currently exists right now (which only has a complexity of 2N) precisely because we do not know the values of the p-bits, but we are instead considering a statistical distribution which represents repeating the same experiment an infinite number of times and distributing the results, and in such an ensemble the system would take every possible path and thus the ensemble has far more complexity (2^N).
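    Here is a minimal sketch of that non-separability in Python; the specific correlated gate and all names here are my own illustration:

```python
import numpy as np

# Joint distribution over two p-bits (bit0, bit1), ordered [00, 01, 10, 11].
p = np.array([1.0, 0.0, 0.0, 0.0])  # both p-bits start at 0

# A correlated gate: with probability 1/2, flip BOTH bits together.
# F is the permutation matrix sending 00<->11 and 01<->10.
F = np.array([[0, 0, 0, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
M = 0.5 * np.eye(4) + 0.5 * F  # a stochastic matrix: every column sums to 1

p = M @ p
print("joint distribution:   ", p)  # [0.5, 0, 0, 0.5]

# Each p-bit on its own looks like a fair coin...
p0 = np.array([p[0] + p[1], p[2] + p[3]])  # marginal for bit0
p1 = np.array([p[0] + p[2], p[1] + p[3]])  # marginal for bit1
print("product of marginals: ", np.kron(p0, p1))  # [0.25, 0.25, 0.25, 0.25]
# ...but the product of the marginals is NOT the joint distribution: the
# correlations are lost, which is why a single 2^N-dimensional vector is needed.
```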

    This is a classical computer with p-bits. What about a quantum computer with q-bits? It turns out that you can represent all of quantum mechanics simply by allowing probability theory to have negative numbers. If you introduce negative numbers, you get what are called quasi-probabilities, and this is enough to reproduce the logic of quantum mechanics.

    You can imagine that quantum computers consist of q-bits that can be either 0 or 1 and logic gates that randomly flip their states, but rather than representing the q-bit in terms of the probability of being 0 or 1, you can represent the qubit with four numbers, the first two associated with its probability of being 0 (summing them together gives you the real probability of 0) and the second two associated with its probability of being 1 (summing them together gives you the real probability of 1).

    Like normal probability theory, the numbers have to all add up to 1, being 100%, but because you have two numbers assigned to each state, you can have some quasi-probabilities be negative while the whole thing still adds up to 100%. (Note: we use two numbers instead of one to describe each state with quasi-probabilities because otherwise the introduction of negative numbers would break L1 normalization, which is a crucial feature to probability theory.)

    Indeed, with that simple modification, the rest of the theory just becomes normal probability theory, and you can do everything you would normally do in normal classical probability theory, such as build probability trees and whatever to predict the behavior of the system.
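    To make the quasi-probability idea concrete, here is one standard four-number representation of a single qubit, the discrete Wigner function; this is my own illustration of the general flavor described above, not necessarily the exact two-numbers-per-state bookkeeping sketched in the previous paragraphs:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def wigner(rho):
    """Four quasi-probabilities W[a, b]: they always sum to 1 but may be negative."""
    W = np.empty((2, 2))
    for a in (0, 1):
        for b in (0, 1):
            A = 0.5 * (I + (-1)**a * Z + (-1)**b * X + (-1)**(a + b) * Y)
            W[a, b] = 0.5 * np.trace(rho @ A).real
    return W

ket0 = np.array([[1], [0]], dtype=complex)
print(wigner(ket0 @ ket0.conj().T))   # |0>: all four entries nonnegative

# A "magic" state (Bloch vector -(x+y+z)/sqrt(3)): one entry goes negative...
rho_magic = 0.5 * (I - (X + Y + Z) / np.sqrt(3))
print(wigner(rho_magic))
print(wigner(rho_magic).sum())        # ...yet the four entries still sum to 1
```

    With this convention the six Pauli eigenstates all come out nonnegative, while states far enough outside that octahedron force a negative entry.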

    However, this is where it gets interesting.

    As we said before, the exponential complexity of classical probability is assumed to be merely epistemic, because we are considering an ensemble of systems even though the physical system in reality only has linear complexity. Yet it is possible to prove that the exponential complexity of a quasi-probabilistic system cannot be treated as epistemic. There is no classical system with linear complexity where an ensemble of that system will give you quasi-probabilistic behavior.

    As you add more q-bits to a quantum computer, its complexity grows exponentially in a way that is irreducible to linear complexity. For a classical computer to keep up, every time an additional q-bit is added, the number of bits needed to simulate the system must grow exponentially. At just 300 q-bits, the complexity is 2^N = 2^300, which means the number of bits you would need to simulate it would exceed the number of atoms in the observable universe.
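    Some back-of-the-envelope arithmetic for that claim, assuming a straightforward statevector simulation at 16 bytes per double-precision complex amplitude:

```python
# A full statevector of N qubits has 2^N complex amplitudes.
for n in (30, 50, 300):
    amplitudes = 2 ** n
    print(f"N = {n:3d}: 2^N = {amplitudes:.3e} amplitudes (~{16 * amplitudes:.3e} bytes)")

# N = 30 already needs ~17 GB and N = 50 ~18 PB; N = 300 gives ~2e90
# amplitudes, far beyond the ~1e80 atoms in the observable universe,
# whatever encoding you use.
```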

    This is what I mean by quantum systems being inherently “non-separable.” You cannot take an exponentially complex quantum system and imagine it as separable into an ensemble of many individual linearly complex systems. Even if it turns out that quantum mechanics is not fundamental and there are deeper deterministic dynamics, the deeper deterministic dynamics must still have exponential complexity for the physical state of the system.

    In practice, this increase in complexity does not mean you can always solve problems faster. The system might be more complex, but it requires clever algorithms to figure out how to actually translate that into problem solving, and currently there are only a handful of known algorithms you can significantly speed up with quantum computers.

    For reference: https://arxiv.org/abs/0711.4770


  • If you have a very noisy quantum communication channel, you can combine a second protocol, entanglement distillation, with quantum teleportation to effectively bypass the quantum channel and send a qubit over a classical communication channel. That is the main utility I see for it: it is very useful for transmitting qubits over a noisy quantum network.


  • The people who named it “quantum teleportation” had in mind Star Trek teleporters which work by “scanning” the object, destroying it, and then beaming the scanned information to another location where it is then reconstructed.

    Quantum teleportation is basically an algorithm that performs a destructive measurement (kind of like “scanning”) of the quantum state of one qubit and then sends the information over a classical communication channel (could even be a beam if you wanted) to another party which can then use that information to reconstruct the quantum state on another qubit.

    The point is that there is still the “beaming” step, i.e. you still have to send the measurement information over a classical channel, which cannot exceed the speed of light.
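    For the curious, here is a minimal statevector simulation of the standard teleportation circuit in Python with numpy; the qubit ordering, variable names, and correction convention are my own choices for illustration:

```python
import numpy as np

rng = np.random.default_rng()
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0, P1 = np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)

def kron(*ops):
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# Qubit 0 holds the unknown state to "teleport"; qubits 1 and 2 share a Bell pair.
alpha, beta = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = np.array([alpha, beta]) / np.sqrt(abs(alpha)**2 + abs(beta)**2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)  # 8-dim vector, basis ordering |q0 q1 q2>

# The "scan": CNOT(0 -> 1), then H on qubit 0.
cnot01 = kron(P0, I2, I2) + kron(P1, X, I2)
state = kron(H, I2, I2) @ (cnot01 @ state)

# Destructive measurement of qubits 0 and 1: the 2 classical bits to "beam" over.
probs = np.abs(state) ** 2
outcome = rng.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
state = kron(P1 if m0 else P0, P1 if m1 else P0, I2) @ state
state /= np.linalg.norm(state)

# Reconstruction: the receiver applies X^m1 then Z^m0 to qubit 2.
if m1: state = kron(I2, I2, X) @ state
if m0: state = kron(I2, I2, Z) @ state

# Qubit 2 now carries the original state (up to a global phase).
base = (m0 << 2) | (m1 << 1)
out = np.array([state[base], state[base + 1]])
print(f"classical bits sent: {m0}{m1}, fidelity: {abs(np.vdot(psi, out))**2:.6f}")  # -> 1.0
```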


  • bunchberry@lemmy.world to Memes@lemmy.ml: Victims of Communism

    It is the academic consensus even among western scholars that the Ukrainian famine was indeed a famine, not an intentional genocide. This is not my opinion, but, again, the overwhelming consensus even among the most anti-communist historians like Robert Conquest who described himself as a “cold warrior.” The leading western scholar on this issue, Stephen Wheatcroft, discussed the history of this in western academia in a paper I will link below.

    He discusses how there was strong debate over it being a genocide in western academia up until the Soviet Union collapsed and the Soviet archives were opened. When the archives were opened, many historians expected to find a “smoking gun” showing that the Soviets deliberately had a policy of starving the Ukrainians, but no such thing was ever found, and so even the most hardened anti-communist historians were forced to change their tune (and indeed you can find many documents showing the Soviets ordering food to Ukraine, such as this one and this one).

    Wheatcroft takes Conquest’s change of opinion as marking the end of that “era” in academia, but he also mentions that very recently there has been a revival of the claims of “genocide.” These revived claims, however, are clearly motivated and pushed by the Ukrainian state for political rather than academic reasons. It is literally a propaganda move. There are hostilities between the current Ukrainian state and the current Russian state, so the current Ukrainian state has a vested interest in painting the Russian state poorly, and reviving this old myth is good for its propaganda. But it is just that: state propaganda.

    Discussions in the popular narrative of famine have changed over the years. During Soviet times there was a contrast between ‘man-made’ famine and ‘denial of famine’. ‘Man-made’ at this time largely meant as a result of policy. Then there was a contrast between ‘man-made on purpose’, and ‘man-made by accident’ with charges of criminal neglect and cover up. This stage seemed to have ended in 2004 when Robert Conquest agreed that the famine was not man-made on purpose. But in the following ten years there has been a revival of the ‘man-made on purpose’ side. This reflects both a reduced interest in understanding the economic history, and increased attempts by the Ukrainian government to classify the ‘famine as a genocide’. It is time to return to paying more attention to economic explanations.

    https://www.researchgate.net/publication/326562364


  • Subhuman lemmy posters: “We are spending way too much!!! $0.5m on scientific research!!! Outrageous!”

    Me: “Bro we spend billions killing children around the world who tf cares there are other places you should be concerned about budget.”

    Subhuman lemmy posters: “Errrm actually stfu stop bringing that up, we want to cut everything but that!”

    kys you people are freaks, this place is just as bad as reddit, entirely composed of genocidal US ultranationalist sociopaths. I need to go to a forum that is not English-speaking.


  • Interesting that you get downvoted for this, when I mocked someone for saying the opposite, who claimed that $0.5m was some enormous amount of money we shouldn’t be wasting, and I simply pointed out that we waste literally billions around the world on endless wars killing random people for no reason, so it is silly to come after small-bean quantum computing if budgeting is your actual concern. People seemed to really hate me for saying that, or maybe it was because they just actually like wasting money on bombs to drop on children and so they want to cut everything but that.






  • bunchberry@lemmy.world to Memes@lemmy.ml: Forgot the disclaimer

    Ah yes, crying about “privilege” while you’re here demanding that people shouldn’t speak out against a literal modern day holocaust at the only time when they have the political power to make some sort of difference. Yeah, it’s totally those people who are “privileged” and not your white pasty ass who doesn’t have to worry about their extended family being slaughtered.


  • bunchberry@lemmy.world to Memes@lemmy.ml: Forgot the disclaimer

    Good. That’s when Democrats should be criticized the most, because that is the only time you have the power to exercise any leverage over them. Why would you refuse to criticize them when you actually have a tiny bit of leverage and wait until you have no power at all and your criticism is completely irrelevant and will be ignored? That is just someone who wants to complain but doesn’t actually want anything to change.



  • Honestly, the random number generation on quantum computers is practically useless. Its speed will not get anywhere near that of a pseudorandom number generator; there are very simple PRNGs you can implement that are blazing fast, far faster than anything a quantum computer will spit out, and that produce numbers widely considered in the industry to be cryptographically secure. You can use AES, for example, as a PRNG, and most modern CPUs, such as x86 processors, have hardware-level AES implementations. This is why modern computers let you encrypt your drive: you can have a file a terabyte big that is encrypted, and your CPU can decrypt it as fast as it takes for the window to pop up after you double-click it.

    While a PRNG does require an entropy pool, the pool does not need to be large: you can spit out terabytes of cryptographically secure pseudorandom numbers from a fraction of a kilobyte of entropy data. Again, most modern CPUs include instructions to grab this entropy data; Intel CPUs, for example, have an RDSEED instruction that lets you grab thermal noise from the CPU. To avoid someone discovering a potential exploit, most modern OSes also mix other sources into this pool, like fluctuations in fan voltage.
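    As a sketch of the AES-as-PRNG idea, assuming the third-party Python `cryptography` package (key and nonce handling is simplified here; a production CSPRNG would manage counters and reseeding more carefully):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Seed once from a small pool of OS entropy, then expand to arbitrarily many
# cryptographically strong pseudorandom bytes using AES in CTR mode.
seed_key = os.urandom(32)    # 32 bytes of entropy: the whole "pool" we need
nonce = os.urandom(16)       # initial counter block

encryptor = Cipher(algorithms.AES(seed_key), modes.CTR(nonce)).encryptor()

def random_bytes(n: int) -> bytes:
    # Encrypting zeros yields the AES-CTR keystream: our pseudorandom output.
    return encryptor.update(b"\x00" * n)

chunk = random_bytes(1 << 20)  # 1 MiB of output from just 48 bytes of entropy
print(len(chunk), chunk[:8].hex())
```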

    Indeed, Linux used to have a separate way to read random numbers directly from the entropy pool and another way to read pseudorandom numbers: /dev/random and /dev/urandom. If you read from the entropy pool and it ran out, the program would freeze until more entropy could be collected, so with some old Linux programs you would see the program freeze until you did things like move your mouse around.

    But you don’t see this anymore, because generating enormous amounts of cryptographically secure random numbers is so easy with modern algorithms that modern Linux just collects a little bit of entropy at boot and uses it to generate all pseudorandom numbers afterwards. The ability to read the pool directly was dropped, and /dev/random and /dev/urandom now behave the same internally. Any time your PC needs a random number, it pulls from the pseudorandom number generator configured at boot, and the short window of entropy collection at boot is enough to generate sufficient pseudorandom numbers basically forever; these are the numbers used by any cryptographic application you may choose to run.

    The point of all this is just to say that random number generation is genuinely a solved problem; people don’t appreciate just how easy it is to produce practically infinite cryptographically secure pseudorandom numbers. While on paper quantum computers are “more secure” because their random numbers would be truly random, in practice you would literally never notice a difference. If you gave two PhD mathematicians or statisticians the same message, one copy encrypted using a quantum random number generator and one encrypted with a PRNG like AES or ChaCha20, and asked them to decipher them, they would not be able to decipher either. In fact, I doubt they would even be able to identify which one was encrypted using the quantum random number generator. A string of random numbers looks just as “random” to any randomness test suite whether it came from a QRNG or a high-quality PRNG (usually called a CSPRNG).

  • I do think that, at least on paper, quantum computers could be a big deal if the engineering challenges can ever be overcome, but quantum cryptography, such as “the quantum internet,” is largely a scam. All the cryptographic aspects of quantum computers are practically the same as, if not worse than, traditional cryptography, with only theoretical benefits that are technically there on paper but that nobody would ever notice in practice.