I don’t see a problem here. Maybe Perplexity should consider the reasons WHY Cloudflare has a firewall…?
Can’t believe I’ve lived to see Cloudflare be the good guys
Lesser of two bad guys maybe?
They’re not. They’re using this as an excuse to become paid gatekeepers of the internet as we know it. All that’s happening is that Cloudflare is using this to maneuver into a position where they can say “nice traffic you’ve got there - would be a shame if something happened to it”.
AI companies are crap.
What Cloudflare is doing here is also crap.
And we’re cheering it on.
When a firm outright admits to bypassing or trying to bypass measures taken to keep them out, you’d think that would be a slam-dunk case of unauthorized access under the CFAA, with felony enhancements.
Fuck that. I don’t need prosecutors and the courts to rule that accessing publicly available information in a way that the website owner doesn’t want is literally a crime. That logic would extend to ad blockers and to editing HTML/JS in an “inspect element” tab.
That logic would not extend to ad blockers, as the point of concern is gaining unauthorized access to a computer system or asset. Blocking ads would not be considered gaining unauthorized access to anything. In fact it would be the opposite of that.
gaining unauthorized access to a computer system
And my point is that defining “unauthorized” to include visitors using unauthorized tools/methods to access a publicly visible resource would be a policy disaster.
If I put a banner on my site that says “by visiting my site you agree not to modify the scripts or ads displayed on the site,” does that make my visit with an ad blocker “unauthorized” under the CFAA? I think the answer should obviously be “no,” and that the way to define “authorization” is whether the website puts up some kind of login/authentication mechanism to block or allow specific users, not to put a simple request to the visiting public to please respect the rules of the site.
To me, a robots.txt is more like a friendly request to unauthenticated visitors than it is a technical implementation of some kind of authentication mechanism.
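To illustrate the point, here’s a minimal Python sketch (example.com and the “MyCrawler” name are just placeholders): robots.txt is a plain text file that the client may or may not bother to consult, and nothing on the server side enforces the answer.

```python
# Minimal sketch: robots.txt is advisory. The client decides whether to
# check it at all; the server can't tell the difference.
# (example.com and "MyCrawler" are placeholders, not real endpoints.)
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the public, unauthenticated robots.txt

url = "https://example.com/some/article"
if rp.can_fetch("MyCrawler", url):
    print("robots.txt politely allows MyCrawler to fetch", url)
else:
    print("robots.txt politely asks MyCrawler not to fetch", url)

# A client that skips this check entirely can still just request the URL;
# that's why robots.txt reads as a friendly request, not access control.
```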
Scraping isn’t hacking. I agree with the Third Circuit and the EFF: If the website owner makes a resource available to visitors without authentication, then accessing those resources isn’t a crime, even if the website owner didn’t intend for site visitors to use that specific method.
Site owners currently do and should have the freedom to decide who is and is not allowed to access their data, and to decide for what purpose it gets used. Idgaf if you think scraping is malicious or not; it is and should be illegal to violate clear and obvious barriers against it, at the cost of the owners and to the unsanctioned profit of scrapers off the work of the site owners.
to decide for what purpose it gets used
Yeah, fuck everything about that. If I’m a site visitor I should be able to do what I want with the data you send me. If I bypass your ads, or use your words to write a newspaper article that you don’t like, tough shit. Publishing information is choosing not to control what happens to the information after it leaves your control.
Don’t like it? Make me sign an NDA. And even then, violating an NDA isn’t a crime, much less a felony punishable by years of prison time.
Interpreting the CFAA to cover scraping is absurd and draconian.
If you want anyone and everyone to be able to use everything you post for any purpose, right on, good for you, but don’t try to force your morality on others who rely on their writing, programming, and artwork to make a living.
I’m gonna continue to use ad blockers and yt-dlp, and if you think I’m a criminal for doing so, I’m gonna say you don’t understand either technology or criminal law.
When sites put up challenges like Anubis or other measures to authenticate that the viewer isn’t a robot, and scrapers then employ measures to thwart that authentication (via spoofing or other means), I think that’s reasonably a violation of the CFAA, at least in spirit, especially since these mass scraping activities are getting attention for the damage they are causing to site operators (another factor in the CFAA, and one that would promote this to felony activity).
The fact is these laws are already on the books; we may as well use them to shut down the objectively harmful activity these AI scrapers are engaged in.
Nah, that would also mean using Newpipe, YoutubeDL, Revanced, and Tachiyomi would be a crime, and it would only take the re-introduction of WEI to extend that criminalization to the rest of the web ecosystem. It would be extremely shortsighted and foolish of me to cheer on the criminalization of user spoofing and browser automation because of this.
Do you think DoS/DDoS activities should be criminal?
If you’re a site operator and the mass AI scraping is genuinely causing operational problems (not hard to imagine; I’ve seen what it does to my hosted repositories’ pages), should there be recourse? Especially if you’re actively trying to prevent that activity (revoking consent in cookies, authorization captchas).
In general I think the idea of “your right to swing your fists ends at my face” applies reasonably well here — these AI scraping companies are giving lots of admins bloody noses and need to be held accountable.
I really am amenable to arguments wrt the right to an open web, but look at how many sites are hiding behind CF and other portals, or outright becoming hostile to any scraping at all; we’re already seeing the rapid death of the ideal because of these malicious scrapers, and we should be using all available recourse to stop this bleeding.
DoS attacks are already a crime, so of course the need for some kind of solution is clear. But any proposal that gatekeeps the internet and restricts how freely users can interact with it is no solution at all. To me, the openness of the web shouldn’t be something people merely consider, or are merely amenable to. It should be the foundation that every reasonable proposal treats as a first principle.
If I put a banner on my site that says “by visiting my site you agree not to modify the scripts or ads displayed on the site,” does that make my visit with an ad blocker “unauthorized” under the CFAA?
How would you “authorize” a user to access assets served by your systems based on what they do with them after they’ve accessed them? That doesn’t logically follow so no, that would not make an ad blocker unauthorized under the CFAA. Especially because you’re not actually taking any steps to deny these people access either.
AI scrapers, on the other hand, are a type of user that you’re not authorizing to begin with, and if you’re using Cloudflare’s bot protection you’re putting in place a system to deny them access. To purposefully circumvent that protection would be considered unauthorized access.
That doesn’t logically follow so no, that would not make an ad blocker unauthorized under the CFAA.
The CFAA also criminalizes “exceeding authorized access” in every place it criminalizes accessing without authorization. My position is that mere permission (in a colloquial sense, not necessarily technical IT permissions) isn’t enough to define authorization. Social expectations and even contractual restrictions shouldn’t be enough to define “authorization” in this criminal statute.
To purposefully circumvent that protection would be considered unauthorized access.
Even as a normal non-bot user who sees the Cloudflare landing page because they’re on a VPN or happen to share an IP address with someone who was abusing the network? No, circumventing those gatekeeping functions is no different from circumventing a paywall on a newspaper website by deleting cookies or something. Or using a VPN or relay to get around rate limiting.
The idea of criminalizing scrapers or scripts would be a policy disaster.
Ehhhh, you are gaining access to content under the assumption that you are going to interact with ads and thus bring revenue to the person and/or company producing said content. If you block ads, you remove the authorisation granted to you by the ads.
Careful, by that logic even not looking at an ad positioned at the bottom of the page (or otherwise not visible without scrolling) would remove the authorisation granted to you by ads.
That doesn’t make any logical sense. You can’t tie legal authorization to an unspoken, implicit assumption, especially when that assumption is in turn based on what you do with the content you’ve retrieved from a system after you’ve accessed and retrieved it.
When you access a system, are you authorized to do so, or aren’t you? If you are, that authorization can’t be retroactively revoked. If it could be, you could be arrested for having used a computer at a job once you’ve quit: even though you were authorized to use it and the corporate network while you worked there, the revocation of that authorization after quitting would somehow reach back to when you DID work there.
Right? Isn’t this a textbook DMCA violation, too?
Perplexity argues that a platform’s inability to differentiate between helpful AI assistants and harmful bots causes misclassification of legitimate web traffic.
So, I assume Perplexity uses appropriate, identifiable User-Agent headers, to allow hosts to decide whether to serve them one way or another?
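For the sake of argument, here’s a rough sketch (Python/Flask; the route and the list of bot tokens are made up for illustration) of what a host “deciding whether to serve them” based on a self-declared User-Agent could look like. It only works if the client identifies itself honestly; a spoofed header sails straight through.

```python
# Hypothetical sketch: a host serving content differently depending on a
# self-declared User-Agent. Nothing here verifies the claim, so it only
# filters clients that identify themselves honestly.
from flask import Flask, request, jsonify

app = Flask(__name__)

# Illustrative list of self-declared crawler tokens, not an official one.
DECLARED_BOT_TOKENS = ("PerplexityBot", "GPTBot", "ClaudeBot")

@app.route("/article/<slug>")
def article(slug):
    ua = request.headers.get("User-Agent", "")
    if any(token in ua for token in DECLARED_BOT_TOKENS):
        # The host's choice: refuse, rate-limit, or serve a stripped-down page.
        return jsonify(error="Automated access not permitted"), 403
    return f"<html><body>Full article: {slug}</body></html>"

if __name__ == "__main__":
    app.run()
```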
It’s not up to the host to decide whom to serve content to. The web is intended to be user-agent agnostic.
Gee that’s a real removed it ain’t it perplexity?
You could say they are… Perplexed.
That’s the entire point, dipshit. I wish we got one of the cool techno dystopias rather than this boring corporate idiot one.
They can’t get their AI to check a box that says “I am not a robot”? I’d think that’d be a first-year comp sci student level task. And robots.txt files were basically always voluntary compliance anyway.
Cloudflare actually fully fingerprints your browser and even sells that data. That’s your IP, TLS, operating system, full browser environment, installed extensions, GPU capabilities, etc. It’s all tracked before the box even shows up; in fact, the box is there to give the runtime more time to fingerprint you.
Yeah and the worst part is it doesn’t fucking work for the one thing it’s supposed to do.
The only thing it does is stop the stupidest low effort scrapers and forces the good ones to use a browser.
Recaptcha v2 does way more than check if the box was checked.
You’re not wrong, but it also lets more than 99.8% of the bot traffic through on text challenges. It’s like the TSA of website security: it’s mostly there to keep the user busy while Cloudflare places itself as a man in the middle of your encrypted connection to a third party. The only difference between Cloudflare and a malicious attacker is Cloudflare’s stated intention not to be evil. With that and 3 dollars I can buy myself a single hard-shell taco from Taco Bell.
Uh… good?
Here comes the ridiculous offer to buy Google Chrome with money they don’t have: easy, delicious scraping directly from the user source.
This is why companies like Perplexity and OpenAI are creating browsers.
It seems like some kind of distraction to make people think things aren’t as bad as they really are; it just sounds too far-fetched to me.
It’s like a bear that has eaten too much and starts whining because a small rabbit is running away from him, even though the bear has already eaten almost all the rabbits and is clearly full.
You’d think that a competent technology company, with their own AI, would be able to figure out a way to spoof Cloudflare’s checks. I’d still think that.
Or find a more efficient way to manage data, since their current approach is basically DDoSing the internet, both for training data and for responding to user interactions.
This is not about training data, though.
Perplexity argues that Cloudflare is mischaracterizing AI Assistants as web crawlers, saying that they should not be subject to the same restrictions since they are user-initiated assistants.
Personally I think that claim is a decent one: user-initiated requests should not be subject to robot limitations, and they are not the source of DDoS attacks on websites.
I think the solution is quite clear, though: either make use of the user’s identity to waltz through the blocks, or even make use of the user’s browser to do it. Once a captcha appears, let the user solve it.
Though technically making all this happen flawlessly is quite a big task.
Personally I think that claim is a decent one: user-initiated requests should not be subject to robot limitations, and they are not the source of DDoS attacks on websites.
They are one of the sources!
The AI scraping that happens when a user enters a prompt is DDoSing sites, in addition to the scraping for training data that is also DDoSing sites. These shitty companies are repeatedly slamming the same sites over and over in the least efficient way, because they are not using the scraped training data when they process a user prompt that does a web search.
Scraping once extensively and scraping a bit less but far more frequently have similar impacts.
When a user enters a prompt, the backend may retrieve a handful of pages to serve that prompt. It won’t retrieve all the pages of a site. Hardly different from a user using a search engine and opening the 5 topmost links in tabs. If that is not a DoS attack, then an agent doing the same isn’t a DDoS attack.
Constructing the training material in the first place is a different matter, but if you’re asking about fresh events or new APIs, the training data just doesn’t cut it. The training, and subsequently the retrieval of training material, was done a long time ago.
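To put the scale in perspective, a per-prompt retrieval step is roughly on the order of this sketch (hypothetical Python; the URLs are placeholders standing in for whatever a search index returned): a handful of specific pages, not a crawl of a whole site.

```python
# Rough sketch of per-prompt retrieval: fetch a few candidate pages for
# one user question. The URLs are placeholders; nothing here crawls
# beyond the short list a search step would have produced.
import requests

candidate_urls = [
    "https://example.com/news/story-1",
    "https://example.org/blog/new-api-release",
    "https://example.net/docs/changelog",
]

pages = {}
for url in candidate_urls[:5]:  # cap at roughly a search-results page worth of links
    try:
        resp = requests.get(
            url,
            headers={"User-Agent": "ExampleAssistant/0.1"},  # made-up, self-declared agent
            timeout=10,
        )
        resp.raise_for_status()
        pages[url] = resp.text
    except requests.RequestException as exc:
        print(f"skipping {url}: {exc}")

print(f"retrieved {len(pages)} pages for this prompt")
```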
ask AI how to do it?