

Putting fiber in the ground is expensive. I work for an ISP, and we estimate fiber overbuild costs at $15/ft. So a mile of underground fiber costs about $79,200.


You know, that’s a fair point. But I think it will still be a measurable shift if people start using privacy forks of their codebase.


Unfortunately, there aren’t many options in the 2025 internet browser market.
Unless something has changed, the Gecko engine Firefox uses is the only engine distinctly different from Chrome’s, and I don’t think writing a browser engine from scratch is easy. So if the solution is to hard pivot away from Firefox entirely, I don’t know how you don’t end up using some Chrome-based browser.
At least Mozilla hasn’t tried to kill adblockers like Google clearly is trying to.
Forking the codebase and stripping out any AI code is much easier than trying to invent another wheel.


It took me getting arrested over some bullshit to get me out, then it was just time and therapy.
I can’t recommend a good therapist enough. Mine has helped me untangle lots of things, and I’m still getting better 5 years after the split.


Fire up Wireshark on a different machine and transfer a file between two other machines, you won’t see anything.
This is true, but only because we’ve replaced Ethernet hubs with switches.
An Ethernet hub was a dumber, cheaper device that filled the same role as a switch, but with a fundamental difference: it repeated every frame out of every port, so all connected devices were in the same collision domain and every machine saw everyone else’s traffic.
I don’t know too much about WiFi, but it probably does the same; it’s just a bridge to the same network.
Wireless communication has the same problem as Ethernet hubs, though with no switch-like solution. Any wireless transmission involves an antenna, and transmitting is like standing in your yard with a bullhorn to talk to your buddy two houses down. Anyone with an antenna can receive the wireless signal you send out. Period.
So some really smart people found ways to keep the stuff you send private. Anyone can still sit nearby and capture the data going through the air; it’s just not anything they can use, because of the encryption.


“You mean if I delete data, then it’s gone? No matter what platform?”


It’s easy to post on a forum and say so.
Maybe you really are asking AI questions and then researching whether or not the answers are accurate.
Perhaps you really are the world’s most perfect person.
But even if that’s true, which I very seriously doubt, then you’re going to be the extreme minority. People will ask AI a question, and if they like the answers given, they’ll look no further. If they don’t like the answers given, they’ll ask the AI with different wording until they get the answer they want.


It’s a single data point, nothing more, nothing less. But that single data point is evidence that they’re using LLMs in their code generation.
Time will tell if this is a molehill or a mountain. When it comes to data privacy, given that it just takes one mistake and my data can be compromised, I’m going to be picky about who I park my data with.
I’m not necessarily immediately looking to jump ship, but I consider it a red flag that they’re using developer tools centered around using AI to generate code.


There it is. The bald-faced lie.
“I don’t blindly trust AI, I just ask it to summarize something, read the output, then read the source article too. Just to be sure the AI summarized it properly.”
Nobody is doing double the work. If you ask AI a question, the answer gets a vibe check at best.


If you want to trade accuracy for speed, that’s your prerogative.
AI has its uses: transcribing subtitles, searching images by description, things like that. But too many times, I’ve seen AI summaries that, when you read the article the AI cited, turn out to be flatly wrong.
What’s the point of a summary that doesn’t actually summarize the facts accurately?


Sure, but with all the mistakes I see LLMs making in places where professionals should be quality-checking their work (lawyers, judges, internal company email summaries, etc.), it gives me pause, considering this is a privacy- and security-focused company.
It’s one thing for an AI to hallucinate court cases, and another entirely to forget there’s a difference between = and == when it bulk-generates code. One slip-up and my security and privacy could be compromised.
You’re welcome to buy into the AI hype. I remember the dot-com bubble.


There’s been evidence in their GitHub repo that they’re using LLMs to code their tools now.
It’s making me reconsider using them.


https://www.pcmag.com/news/brave-browser-caught-redirecting-users-through-affiliate-links
I’m not going to defend Mozilla by any means, but if you care about privacy, you wouldn’t use a browser based on Chrome anyway.


You could replace “Brave Browser” with Firefox and the statement would still be true.
At least Firefox wasn’t caught hijacking affiliate links.


Depriving someone of years of their life isn’t a trivial thing. If someone was wrongfully convicted of a crime, the time they spend in jail is time that they could have been spending making a career, saving for retirement, building equity, etc. The things people do to prepare for retirement.
Should we just say “oops, our bad, no hard feelings right?” and just leave them to be homeless?


If you want to fully wipe the disks of any data to start with, you can use a tool like dd to zero the disks. First you need to figure out what your drive is enumerated as (lsblk will list your block devices), then you wipe it like so:
sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress
From there, you need to decide if you’re going to use them individually or as a pool.


I’m not disagreeing with anything you’ve said?
I’m saying that just adding Mozilla’s PPA to your sources won’t change apt’s behavior when installing Firefox unless you tell apt to prefer the package offered by the Mozilla PPA.
As someone who uses Kubuntu as a daily driver, I’m well aware of the snap drama and have worked around it using the method I pasted above.
Even though it’s an underhanded move by Canonical, I’m still glad the OS is open source, since it makes the workaround so trivial.


It takes a little more than just adding a different repository to your package manager; you have to tell apt which to prefer (note the blank line between the two stanzas, which apt requires):
echo '
Package: *
Pin: origin packages.mozilla.org
Pin-Priority: 1000

Package: firefox*
Pin: release o=Ubuntu
Pin-Priority: -1' | sudo tee /etc/apt/preferences.d/mozilla


There’s a difference between ‘language’ and ‘intelligence’ which is why so many people think that LLMs are intelligent despite not being so.
The thing is, you can’t train an LLM on math textbooks and expect it to understand math, because it isn’t reading or comprehending anything. AI doesn’t know that 2+2=4 because it’s doing math in the background; it has learned that when presented with the string ‘2+2=’, statistically, the next character should be ‘4’. It can construct a paragraph similar to a math textbook around that equation that does a decent job of explaining the concept, but only through a statistical analysis of sentence structure and vocabulary choice. It’s why LLMs are so downright awful at legal work.
If ‘AI’ was actually intelligent, you should be able to feed it a few series of textbooks and all the case law since the US was founded, and it should be able to talk about legal precedent. But LLMs constantly hallucinate when trying to cite cases, because the LLM doesn’t actually understand the information it’s trained on. It just builds a statistical database of what legal writing looks like, and tries to mimic it. Same for code.
People think they’re ‘intelligent’ because they seem like they’re talking to us, and we’ve equated ‘ability to talk’ with ‘ability to understand’. And until now, that’s been a safe thing to assume.