• 0 Posts
  • 78 Comments
Joined 2 years ago
Cake day: May 29th, 2024


  • Maybe they could be synced using RF over fiber. This has been proposed as a candidate technology for 6G wireless networks, to enable cell-free massive MIMO.

    That would mean that you would need to run optical fiber to each of them, though we’ve already seen fiber drones spool out kilometers of the stuff as they fly.

    EDIT: I just remembered this interesting article about doing radio interferometry over a fiber network using cheap quartz oscillators instead of atomic clocks. My (layman’s) understanding is that the quartz oscillators are good enough over a few milliseconds, but will fall out of sync with each other over longer time spans. Meanwhile, the fiber-optic reference signal (distributed from a central atomic clock) can be kept correct on average by reflecting the reference back down the fiber and actively correcting for the changing path length (caused by thermal fluctuations and vibrations along the fiber), but it will be wrong on a millisecond-to-millisecond basis because of light-speed lag and the path length being a moving target. So they use the quartz oscillators over short time scales and use the fiber reference signal to keep them synced over long time scales. Surprisingly, the article says they actually get a better sync this way than by using multiple atomic clocks.
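    That combination (short-term-stable quartz, long-term-correct fiber reference) is basically a disciplined oscillator. Here’s a toy simulation of the idea; all the numbers (the 5 ppm quartz frequency offset, the reference jitter, the loop gain) are made up for illustration, not taken from the article:

```python
import random

random.seed(42)

DT = 1e-3            # control-loop interval: 1 ms (hypothetical)
STEPS = 5000
FREQ_OFFSET = 5e-6   # quartz fractional frequency error, 5 ppm (hypothetical)
REF_JITTER = 2e-8    # per-reading jitter of the fiber reference, seconds (hypothetical)
GAIN = 0.1           # proportional steering gain (hypothetical)

free_err = 0.0       # phase error of a free-running quartz oscillator (s)
disc_err = 0.0       # phase error of the disciplined quartz oscillator (s)

for _ in range(STEPS):
    drift = FREQ_OFFSET * DT          # phase error the quartz accumulates per interval
    free_err += drift
    disc_err += drift
    # Each fiber-reference reading is correct on average but jittered,
    # standing in for light-speed lag and the moving path length.
    ref_reading = disc_err + random.gauss(0.0, REF_JITTER)
    # Steer the quartz toward the reference; between corrections the
    # quartz still provides the smooth short-term timebase.
    disc_err -= GAIN * ref_reading

print(f"free-running error after {STEPS * DT:.1f} s: {free_err:.2e} s")
print(f"disciplined error:                 {disc_err:.2e} s")
```

    The free-running oscillator’s error grows without bound, while the disciplined one stays pinned near the reference’s (zero-mean) jitter level, which is the whole trick.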

    So perhaps something like that is possible.




  • The usage rates in Japanese cities are among the highest in the world, as are the punctuality and reliability of the intercity trains.

    Could the system be less convoluted? Absolutely. But IMO most European countries aren’t in much of a position to criticize given that they aren’t even willing to step up to the plate to anywhere near the same degree, to say nothing of North America.

    Now, one might argue that this has more to do with city form than with the quality of the public transit infrastructure, but city form is infrastructure too, and those two types of infra are two sides of the same coin. And yeah, the city form isn’t completely perfect either, but when it comes to moving a greater proportion of people in the safest and most energy- and space-efficient way, the numbers are just higher than most other places.




  • > Late 19th/Early 20th century had about 1/3rd of all cars on the road be electric.

    > Long before lithium batteries were ever a thing.

    You want to tell me what the top speed and range of those cars were?

    > Also, there’s a much higher demand, thanks to the modern resurgence of electric cars, for better, cheaper batteries.

    I think you’ll find that the first modern resurgence in EV interest came in the 1970s, with the 1973 oil crisis.

    If you research the history of battery technology, I think you’ll also find that it hasn’t been static since 1900, with lithium-ion popping up out of nowhere in 2008. In between we had things like nickel-metal-hydride cells, and for a few years before Li-ion became practical there were even some EVs that came with the option of molten-salt batteries (called “ZEBRA” batteries) for extra range. Those things needed to be heated to 572 °F (300 °C) in order to function. Nobody would have done that if they could’ve just instantly pulled a better battery technology out of their ass like you seem to think they can. By the way, the name “ZEBRA” comes from “Zeolite Battery Research Africa”, the scientific project that invented them, which was started in 1985.

    > Just like computers have much higher demand for RAM today than they did in the 1970s.

    I promise you that people wanted more computer memory in the 1970s.

    While we’re on the topic of computers though, do you know what the current state of the art is in chip fabrication? It is extreme ultraviolet photolithography, or EUV.

    The first commercial product made with EUV was released in 2019 (the Samsung Galaxy Note 10) but the first EUV demonstration took place in 1986 at the Japan Society of Applied Physics. Originally they thought EUV would be ready by 2006, but it took an extra 13 years to develop.

    Notably a number of other technologies, like contact lithography, electron beam projection, ion beam projection, and proximity x-ray were being developed simultaneously, in competition with EUV. EUV won out in the end but for a long time people were not sure which would be the most practical to implement.

    So yes, the pop-sci articles written about stuff like this are stupid, but the idea that things are fake unless they can move from the lab to the factory floor within a year is just not how the world works.










  • Since the portable radio doesn’t have much power, you may need to use digital modes to get through.

    I don’t know much about radio stuff, but ever since I learned about LoRa I’ve wondered what kind of range a station could get if the longwave or AM bands were repurposed for use with a spread-spectrum digital protocol, and what kind of bandwidth something like that would have.
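    For a rough sense of the throughput side of that question, the standard LoRa raw-bit-rate formula (SF × BW / 2^SF × coding rate) can be fed an AM-broadcast-sized channel instead of LoRa’s usual bandwidths. The 10 kHz channel width here is just my assumption for an AM channel, compared against LoRa’s common 125 kHz:

```python
def lora_bitrate(sf: int, bw_hz: float, coding_rate: float = 4 / 5) -> float:
    """LoRa raw bit rate in bit/s: SF * (BW / 2^SF) * CR."""
    return sf * (bw_hz / 2 ** sf) * coding_rate

# Spreading factor 7 (fastest) through 12 (longest range),
# at a hypothetical 10 kHz AM-width channel vs. LoRa's usual 125 kHz.
for sf in (7, 10, 12):
    for bw in (10_000, 125_000):
        print(f"SF{sf:2d} @ {bw / 1000:5.1f} kHz: {lora_bitrate(sf, bw):8.1f} bit/s")
```

    At SF12 in a 10 kHz channel you’d be down around 23 bit/s, so this would only ever be good for short messages like alerts, which fits the datacasting idea below rather than anything interactive.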

    I think being able to do datacasting over really long ranges would be useful, so, for example, you could send emergency alerts to people even if the local cell infrastructure was down. But with the way things are headed I guess that role will be taken up by satellites.




  • The thing about this perspective is that I think it’s actually overly positive about LLMs, as it frames them as just the latest in a long line of automations.

    Not all automations are created equal. For example, compare using a typewriter to using a text editor. Besides a few details about the ink ribbon and movement mechanisms you really haven’t lost much in the transition. This is despite the fact that the text editor can be highly automated with scripts and hot keys, allowing you to manipulate even thousands of pages of text at once in certain ways. Using a text editor certainly won’t make you forget how to write like using ChatGPT will.

    I think the difference lies in the relationship between the person and the machine. To paraphrase Cathode Ray Dude, people who are good at using computers deduce the internal state of the machine, mirror (a subset of) that state as a mental model, and use that to plan out their actions to get the desired result. People that aren’t good at using computers generally don’t do this, and might not even know how you would start trying to.

    For years, ‘user friendly’ software design has catered to that second group, as they are both the largest contingent of users and the ones that need the most help. To do this, software vendors have generally done two things: tried to move the necessary mental processes from the user’s brain into the computer, and hidden the computer’s internal state (so that it’s not implied that the user has to understand it, so that a user who doesn’t know what they’re doing won’t do something they’ll regret, etc.). Unfortunately, this drives that first group of people up the wall. Not only does hiding the internal state of the computer make it harder to deduce, but every “smart” feature added to move the mental process into the computer itself only makes the internal state more complex and harder to model.

    Many people assume that if this is the way you think about software, you are just an elitist gatekeeper who only wants your group to be able to use computers. Or you might even be accused of ableism. But the real reason is what I described above, even if it’s not usually articulated in that way.

    Now, I am of the opinion that the ‘mirroring the internal state’ method of thinking is the superior way to interact with machines, and the approach to user friendliness I described has actually done a lot of harm to our relationship with computers at a societal level. (This is an opinion I suspect many people here would agree with.) And yet that does not mean that I think computers should be difficult to use. Quite the opposite, I think that modern computers are too complicated, and that in an ideal world their internal states and abstractions would be much simpler and more elegant, but no less powerful. (Elaborating on that would make this comment even longer though.) Nor do I think that computers shouldn’t be accessible to people with different levels of ability. But just as a random person in a store shouldn’t grab a wheelchair user’s chair handles and start pushing them around, neither should Windows (for example) start changing your settings on updates without asking.

    Anyway, all of this is to say that I think LLMs are basically the ultimate in that approach to ‘user friendliness’. They try to move more of your thought process into the machine than ever before, their internal state is more complex than ever before, and it is also more opaque than ever before. They also reflect certain values endemic to the corporate system that produced them: that the appearance of activity is more important than the correctness or efficacy of that activity. (That is, again, a whole other comment though.) The result is that they are extremely mind numbing, in the literal sense of the phrase.