

The problem isn’t the tool, it’s the user: they don’t know whether they’re getting good code or not, so they can’t write the prompt that would improve it.
In my view, the problems occur when you use AI to do something you don’t already know how to do.


They don’t need to use semantic versioning. I doubt coreutils itself uses it, though I admit I haven’t checked. Actually, I think semantic versioning is less popular in practice than it appears.
For a set of tools that completely replaces another one, announcing a 1.0 version would be a message that the developers think the project has actually reached its initial goals. “0.2” does not.


Rust is great, but might it be a bit premature to replace the venerable coreutils with a project boasting version number 0.2, which I imagine reflects its author’s view on its maturity?


When a user enters a prompt, the backend may retrieve a handful of pages to serve that prompt. It won’t retrieve all the pages of a site. That’s hardly different from a user running a search and opening the five topmost links in tabs. If that is not a DoS attack, then an agent doing the same isn’t a DDoS attack.
Constructing the training material in the first place is a different matter, but if you’re asking about fresh events or new APIs, the training data just doesn’t cut it. The training, and the material retrieval that fed it, happened a long time ago.


This is not about training data, though.
Perplexity argues that Cloudflare is mischaracterizing AI Assistants as web crawlers, saying that they should not be subject to the same restrictions since they are user-initiated assistants.
Personally I think that claim is a decent one: user-initiated requests should not be subject to robot limitations, and they are not the source of DDoS attacks on web sites.
I think the solution is quite clear, though: either make use of the user’s identity to waltz through the blocks, or even make use of the user’s browser to do it. Once a captcha appears, let the user solve it.
Technically, though, making all of this happen flawlessly is quite a big task.


They presumably assume they’d be selling so little that it wouldn’t be worth the trouble.
They’ll probably wait out this situation for a while and see what the competition does…


If you just do it on your own computer, the packet will already be dropped by your own gateway. You can spoof any address in your local subnet, but those are very likely rewritten by NAT in your gateway to the address given by your ISP anyway.
If you had access to the switch port your ISP uses at an Internet exchange point (IX), you would have more freedom in choosing the IP.


That’s a bit surprising, given that DDG uses Bing, Bing is Microsoft’s, and Microsoft owns GitHub.
Did you try the same search with Bing, or have an example to share?


Did you give it to it?
It can be a pretty nice feature for using map-based apps in the browser.
I haven’t used such websites in a while, and I don’t see Firefox among the recent users of the location API, even though I use Firefox on Android all the time. (That info is available on Android under Settings/Location.)
And did mentioning these things just make the message disappear on US-based lemmy-instances?
I don’t believe it did.


Perhaps many, but I have over 500 accounts in my password manager, yet none of them have been flagged by the password exposure report (which I assume is based on the https://haveibeenpwned.com/ database).
So perhaps the problem is overblown in practice, assuming you don’t reuse the same password across sites.
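FWIW, if you want to spot-check a single password yourself, the haveibeenpwned Pwned Passwords range API supports k-anonymity lookups, so only the first five hex characters of the SHA-1 hash ever leave your machine. A minimal TypeScript sketch (the isPwned name is mine), runnable in a modern browser or Node 18+:

    // Check one password against the Pwned Passwords range API (k-anonymity).
    async function isPwned(password: string): Promise<boolean> {
      const bytes = new TextEncoder().encode(password);
      const digest = await crypto.subtle.digest("SHA-1", bytes);
      const hex = Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("")
        .toUpperCase();
      const prefix = hex.slice(0, 5); // sent to the API
      const suffix = hex.slice(5);    // kept local, compared against the response
      const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
      const body = await res.text();
      // The response lists "HASH_SUFFIX:COUNT" lines for every leaked hash with that prefix.
      return body.split("\n").some((line) => line.split(":")[0].trim() === suffix);
    }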


Realistically, how often does this happen?
Maybe find a solution when it happens.


Well, except perhaps for the fact that Discord has a Linux version, while the Facebook App doesn’t.
And—clearly!—it seems rather popular as well.


Right on! People should only share news articles that pertain to my interests.


Where should they be “taking” funding instead?


In theory, yes. But if following the link means the JS is downloaded and run, it’s already too late to inspect it.
And even if you review it once (and it isn’t too large or obfuscated by minification), the next time you load the page the JS can be different. I guess there could be a web browser extension for pinning the code?
The only practical alternative I know of is to have a local client you can review once (and again after updates).


So the trick is to use the #fragment part of the URL, which is not sent to the server.
Of course, the JS you download from the server could easily upload the fragment back to it, so you still need to trust the JS.
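For illustration, a minimal client-side sketch (the URL scheme and the k= parameter are made up) of how such a page would read a decryption key from the fragment:

    // Example URL: https://example.com/note/abc123#k=BASE64KEY  (made-up scheme)
    // Everything after '#' stays in the browser; the HTTP request, and thus the
    // server logs, only contain /note/abc123.
    const fragment = window.location.hash.slice(1);      // "k=BASE64KEY"
    const key = new URLSearchParams(fragment).get("k");  // only visible to client-side JS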


Alas, my gaming PC is going to stick with Windows due to the poor state of VR on Linux :/. And therefore one day it might need to update to Windows 11.
In particular if you have a headset other than a Valve Index, though apparently with a Meta Quest one can use ALVR, as long as you get the actual games running.


Maybe consider a static IP assignment in your DHCP server (e.g. your Internet router) if at all possible… Then you can add a name for it to /etc/hosts.
Alternatively, you could use Avahi to provide mDNS names on your local network.
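For example (the address and name here are just placeholders), one line per client machine is enough once the DHCP reservation is in place:

    # /etc/hosts on each client machine
    192.168.1.50    homeserver

With Avahi it’s even less work: run avahi-daemon on the server and homeserver.local should resolve (assuming the server’s hostname is homeserver) on clients that do mDNS lookups, with no /etc/hosts editing at all.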


I do wonder, though: if Google can link this account to you as its actual owner, is there a risk if the bot does something against the ToS?
I hope you have backups of your Google account…