As unscrupulous AI companies crawl for more and more data, the basic social contract of the web is falling apart.
Honestly, it seems like the social contract is being ignored in every aspect of society these days; that’s why things feel so much worse now.
It’s abuse, plain and simple.
I explicitly have my robots.txt set to block AI crawlers, but I don’t know if anyone will actually observe the protocol. They should offer tools I can submit a sitemap.xml against, so I’d at least know whether I’ve been parsed. Until they bother to address this, I can only assume their intent is hostile. Short of someone seriously building a honeypot and exposing the tooling for us to deploy at large, my options are limited.
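For reference, this is roughly what I mean; a minimal sketch, and the user-agent tokens are just examples of crawlers that have published names, since nothing stops an unlisted or dishonest bot from ignoring the file entirely:

# Ask known AI crawlers to stay away from the whole site.
# Any bot can simply ignore this file; it's purely advisory.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Everyone else may still crawl normally (empty Disallow = no restriction).
User-agent: *
Disallow: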
The funny (in a “wtf”, not “haha”, sense) thing is, individuals such as security researchers have been charged under digital trespassing laws for stuff like accessing publicly available systems and changing a number in the URL to get access to data that normally wouldn’t be accessible, even after doing responsible disclosure.
Meanwhile, companies completely ignore the standard conventions that say “you are not allowed to scrape this data” and then use OUR content/data to build up THEIR datasets, including for AI etc.
That’s not a “violation of a social contract” in my book, that’s violating the terms of service for the site and essentially infringement on copyright etc.
No consequences for them though. Shit is fucked.
Remember Aaron Swartz
Just thought of a nasty hack the browser makers (or hackers) could use to scrape unlisted sites - by surreptitiously logging user browser history for a crawl list
While there are some extensions that do this, last I saw Google didn’t use Chrome for populating Search:
https://blogs.perficient.com/2017/03/15/does-google-use-chrome-to-discover-new-urls-for-crawling/
Corporations are people except when it comes to liability. Compare the consequences of stealing several thousand dollars from someone by fraud vs. stealing several thousand dollars from someone by fraud as an LLC.
Well, the Trump era has shown that ignoring social contracts and straight-up crime are only met with profit and slavish devotion from a huge community of dipshits. So. Y’know.
Only if you’re already rich or in the right social circles though. Everyone else gets fined/jail time of course.
Meh maybe. I know plenty of people who get away with all kinds of crap without money or connections.
This is a very interesting read. It’s very rare that people on the internet agree to follow one thing without being forced.
Most every other social contract has been violated already. If they don’t ignore robots.txt, what is left to violate?? Hmm??
It’s almost as if leaving things to social contracts vs regulating them is bad for the layperson… 🤔
Nah fuck it. The market will regulate itself! Tax is theft and I don’t want that raise or I’ll get in a higher tax bracket and make less!
This can actually be an issue for poor people, not because of tax brackets but because of income-based assistance cutoffs. If a $1/hr raise throws you above those cutoffs, that extra $160 a month could cost you $500 in food assistance, $5–$10/day for school lunch, or get you kicked out of government-subsidized housing.
Yet another form of persecution that the poor actually suffer and the rich pretend to.
And then the companies hit the “trust thermocline”, customers leave them in droves and companies wonder how this could’ve happened.
what is it?
Robots.txt is a plain text file that’s accessible via a normal HTTP request. It sets rules for which automated web crawlers are allowed on a site, and which aren’t. Google’s crawler is usually the most widely allowed bot, just because crawling is how they find websites for search results. But it’s basically the honor system: you could write a scraper today that goes to a website, gets told it doesn’t have permission to view a page, ignores that, and still grabs the information.
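To illustrate the honor-system part, here’s a rough Python sketch (the domain, path, and bot name are made up): a polite crawler has to actively check robots.txt before fetching, and a rude one just skips the check.

from urllib import robotparser

# A polite crawler voluntarily reads robots.txt and asks it for permission.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")      # example site
rp.read()

page = "https://example.com/members/data.html"    # hypothetical URL
if rp.can_fetch("ExampleBot", page):              # hypothetical bot name
    print("robots.txt allows ExampleBot to fetch this page")
else:
    print("robots.txt says no, so a polite crawler stops here")

# A rude crawler never calls can_fetch() at all and just downloads the page.
# Nothing on the server side enforces the rule; it's entirely voluntary.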
robots.txt is a file available in a standard location on web servers (example.com/robots.txt) which sets guidelines for how scrapers should behave.
That can range from saying “don’t bother indexing the login page” to “Googlebot go away”.
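Something like this, roughly (the paths and names are just illustrative):

# "don't bother indexing the login page"
User-agent: *
Disallow: /login

# "Googlebot go away"
User-agent: Googlebot
Disallow: /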
It’s also in the first paragraph of the article.
Alternative title: Capitalism doesn’t care about morals and contracts. It wants to make more money.
Exactly. Capitalism spits in the face of the concept of a social contract, especially if companies themselves didn’t write it.
Capitalism, at least in a laissez-faire marketplace, operates on a social contract; fiat money is an example of this. The market decides, the people decide. Are there ways to amass enough money to make people turn a blind eye? For sure, but all systems have their ways to amass power, no matter what.
Capitalism is a concept; it can’t care even if it wanted to, and it can’t even want to begin with. It’s the humans. You will find greedy, immoral ones in every system and they will make it miserable for everyone else.
Capitalism is the widely accepted self-serving justification those people use for their acts.
The real problem is in the “widely accepted” part: a sociopath killing an old lady and justifying it because “she looked at me funny” wouldn’t be “widely accepted” and Society would react in a suitable way, but if said sociopath scammed the old lady’s pension fund because (and this is a typical justification in Investment Banking) “the opportunity was there and if I didn’t do it somebody else would’ve, so better it be me and get the profit”, it’s deemed “acceptable” and Society does not react in a suitable way.
Mind you, Society (as in, most people) might actually want to react in a suitable way, but the structures in our society are such that the Official Power Of Force in our countries is controlled by a handful of people who got there with crafty marketing and backroom plays, and those deem it “acceptable”.
It’s deemed “acceptable”? A sociopath scamming an old lady’s pension is basically the “John Wick’s dog” moment that leads to the insane death-filled warpath in the recent movie The Beekeeper.
This is the kind of edgelord take that routinely expects worse than the worst of society with no proof to their claims.
This is the kind of shit I saw from the inside in Investment Banking before and after the 2008 Crash.
None of those assholes ever gets prison time for the various ways in which they abuse markets and even insider info to swindle, among others, Pension Funds, so de facto the Society we have, with the power structures it has, accepts it.
People will always find a justification to be assholes. Capitalism tried to harvest that energy and unleashed its full potential, with rather devastating consequences.
Sure, but the structures we think and act within matter. We could have a system that doesn’t reward psychopathic business choices (as much), while still improving our lives bit by bit. If the system helps a bit with making the right choices, that would matter a lot.
That’s basically what I wrote: a (free) market economy, especially in combination with credit-based capitalism, gives those people the perfect system to thrive in. This seems to result in very fast progress and immense wealth, which is not distributed very equally. Then again, I prefer Bezos and Zuckerberg as CEOs rather than politicians or warlords. Dudes with big egos and ambitions need something productive to work on.
I would be shocked if any big corpo actually gave a shit about it, AI or no AI.
if exists("/robots.txt"): no it fucking doesn't
Robots.txt is in theory meant to be there so that web crawlers don’t waste their time traversing a website in an inefficient way. It’s there to help, not hinder them. There is a social contract being broken here and in the long term it will have a negative impact on the web.
Yeah, I always found it surprising that everyone just agreed to follow a text file on a website telling them how to act. It’s one of the most poorly-thought-out yet significant parts of browsing, and it’s been there pretty much from the beginning.
No laws to govern so they can do anything they want. Blame boomer politicians not the companies.
Why not blame the companies? After all they are the ones that are doing it, not the boomer politicians.
And in the long term they are the ones that risk being “punished”; just imagine people getting tired of this shit and starting to block them at the firewall level…
Also, by the way, violating a basic social contract to not work towards triggering an intelligence explosion that will likely replace all biological life on Earth with computronium, but who’s counting? :)
That would be a danger if real AI existed. We are very far away from that and what is being called “AI” today (which is advanced ML) is not the path to actual AI. So don’t worry, we’re not heading for the singularity.
I request sources :)
https://www.lifewire.com/strong-ai-vs-weak-ai-7508012
Strong AI, also called artificial general intelligence (AGI), possesses the full range of human capabilities, including talking, reasoning, and emoting. So far, strong AI examples exist in sci-fi movies
Weak AI is easily identified by its limitations, but strong AI remains theoretical since it should have few (if any) limitations.
https://en.m.wikipedia.org/wiki/Artificial_general_intelligence
As of 2023, complete forms of AGI remain speculative.
Boucher, Philip (March 2019). How artificial intelligence works
Today’s AI is powerful and useful, but remains far from speculated AGI or ASI.
https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf
AGI represents a level of power that remains firmly in the realm of speculative fiction as on date
Ah, I understand you now. You don’t believe we’re close to AGI. I don’t know what to tell you. We’re moving at an incredible clip; AGI is the stated goal of the big AI players. Many experts think we are probably just one or two breakthroughs away. You’ve seen the surveys on timelines? Years to decades. Seems wise to think ahead to its implications rather than dismiss its possibility.
See the sources above and many more. We don’t need one or two breakthroughs, we need a complete paradigm shift. We don’t even know where to start with for AGI. There’s a bunch of research, but nothing really came out of it yet. Weak AI has made impressive bounds in the past few years, but the only connection between weak and strong AI is the name. Weak AI will not become strong AI as it continues to evolve. The two are completely separate avenues of research. Weak AI is still advanced algorithms. You can’t get AGI with just code. We’ll need a completely new type of hardware for it.
Before Deep Learning recently shifted the AI computing paradigm, I would have written exactly what you wrote. But as of late, the opinion that we need yet another type of hardware to surpass human intelligence seems increasingly rare. Multimodal generative AI is already pretty general. To count as AGI for you, you would like to see the addition of continuous learning and agentification? (Or are you looking for “consciousness”?)
That said, I’m all for a new paradigm, and favor Russell’s “provably beneficial AI” approach!
Deep learning did not shift any paradigm. It’s just more advanced programming. But gen AI is not intelligence. It’s just really well trained ML. ChatGPT can generate text that looks true and relevant. And that’s its goal. It doesn’t have to be true or relevant, it just has to look convincing. And it does. But there’s no form of intelligence at play there. It’s just advanced ML models taking an input and guessing the most likely output.
Here’s another interesting article about this debate: https://ourworldindata.org/ai-timelines
What we have today does not exhibit even the faintest signs of actual intelligence. Gen AI models don’t actually understand the output they are providing, that’s why they so often produce self-contradictory results. And the algorithms will continue to be fine-tuned to produce fewer such mistakes, but that won’t change the core of what gen AI really is. You can’t teach ChatGPT how to play chess or a new language or music. The same model can be trained to do one of those tasks instead of chatting, but that’s not how intelligence works.
I don’t think glorified predictive text is posing any real danger to all life on Earth.
Until we weave consciousness with machines we should be good.
🤣🤣🤣🤣🤣🤣🤣 “robots.txt is a social contract” 🤣🤣🤣🤣🤣🤣🤣 🤡
I’ve just converted to polytheism and have begun praying to the Emoji God asking them to use 1,000 origami cry laughing Emojis to smite you down, so that you may die how you lived.
I hope it won’t be quick, or painless, but that’s up to the Gods now.
Considering that we’re talking about emojis, it’ll definitely be silent.
Silent, but deadly.
Why should I care about a text file lol
All laws are just words on pieces of paper. Why should you care?
This seems to interestingly prove the point made by the person this is in reply to. Breaking laws comes with consequences. Not caring about a robots.txt file doesn’t. But maybe it should.
My angle was more about all rules being social constructs, and said rules being important for the continued operation of society, but that’s a good angle too.
Lots of laws don’t come with real punishments either, especially if you have money. We can change this too.
good. robots.txt was always a bad idea
Like so many terrible ideas, it worked flawlessly for generations