1980: TVs will fry your brain
1990: Videogames will fry your brain
2000: Computers will fry your brain
2010: Smartphones will fry your brain
2020: AI will fry your brain
Any takes for the 2030s?
Well looking around at where we are today, maybe TVs did fry our brains.
I mean, based fully on our current dystopian reality, I feel you just made a really good point about tech growing to a point where it fully captures you away from reality, and indeed fries your brain by convincing you that fantasies are real.
MAGA is a great example of people with brains so fried they think a pedophile ex-conman with 34 felonies, who killed millions of Americans through a poor pandemic response, is somehow helping them by destroying USAID, DEI, healthcare, and Social Security.
Their brains are gonzo, all through the constant applied exploitation of all the tech you just mentioned combined.
AI will absolutely make it worse.
Climate change.
Literally.
Neural implants? Only this time they’re really going to fry your brain.
2030: Cyborg w/AI will fry your brain. Literally though.
2030: Critical thought will fry your brain
And before that, books and comics. But LLMs are different: they pretend to be your friend but actually just encourage whatever you come up with. You can easily fry people’s brains by being their sycophant, and now everyone can subscribe to one.
Should we trust a researcher whose brain got fried? Did they remember to do the old double-blind setup before the frying of the brains occurred?
i think reading the title of this post hurt my brain. like what are we doing here? making medical claims using sensationalist and meaningless language… seems unhelpful
AI is like a dog looking at itself in a mirror.
Some dogs are smart, and understand that this is a tool and that it is there to help you see things better… Some dogs are fucking morons and think their reflection is another dog, and they wanna fuck and fight…
There are a ton of good use cases for AI, and none of them include coquettish sexbots or drawings of me as a Simpsons character or a Ghibli sketch.
According to a new study by researchers at Carnegie Mellon, MIT, Oxford, and UCLA,
Study should be solid I guess.
participants who were given AI assistants (in this case, a chatbot powered by OpenAI’s GPT-5 model) would have the aid pulled from them without warning during the test
Wow, interesting idea. 👍
where they had their assistant removed, the AI group saw the solve rate fall off a cliff. They had a solve rate about 20% lower
And even worse IMO:
They also had nearly double the skip rate, meaning they simply chose not to solve the questions.
This seems very alarming IMO, because it indicates they lost some of their ability to think constructively about how to actually solve a problem!
I know there have always been some who cried wolf every time new technology became available, like calculators and computers. Even dictionaries were once claimed to be harmful!
But maybe this time there is a real danger, because AI takes away a lot of the need to actually think creatively and constructively. And that’s an ability we must not lose. The last paragraph of the article is even worse, as it mentions 2 studies that show these effects are also long-term!!!
Changing the terms of the test in the middle of it, without warning, is disruptive. I’m not convinced it “fried their brains.” The same would happen with a calculator suddenly removed during the middle of an exam.
Or any task change really. You tell me that I’m here for a writing task, then halfway through it becomes a math test? There’s no way I’m doing anywhere near as well as if they told me what was happening ahead of time.
You are disregarding the last paragraph, where 2 other studies showed similar results without the “disruptive” factor.
Here’s that last paragraph. Microsoft’s finding actually sounds like it does have the disruptive factor: people are trained to use AI and then it is removed. And finally, finally in the very last sentence of the entire article we get the one piece of information that’s been missing the entire time: doctors perform better with AI help, but then worse than ever without it.
My conclusion? Let people have AI and perform better with it.
Carpenters trained on power tools will suddenly perform worse with hand tools than carpenters who were never given power tools. But if they are given power tools, they can build homes faster.
No shit?
The findings are also in line with a study Microsoft published last year that looked at cognitive decline among knowledge workers, which found that the more people lean on AI, the worse they perform when asked to work without support. It also echoes a study out of Poland, which found that while doctors are better at spotting cancer risks with AI assistance, they perform worse than the no-AI baseline once that assistance is removed.
Carpenters trained on power tools will suddenly perform worse with hand tools than carpenters who were never given power tools.
Now you are just making shit up. None of these examples are about people being trained on AI. The comparison would be a carpenter who, after using power tools for 10 minutes, suddenly becomes worse at using the traditional tools he is trained on.
Your claim is baseless: there is no evidence for it, and that lack of evidence makes it an unreasonable assumption based on your prejudice alone, one that should not be believed.
Let people have AI and perform better with it.
Again, a very loaded statement: nobody is preventing anybody from using AI based on this research. But maybe people are not really performing better, or at least not always; it may depend on the task.
Your logic is fundamentally flawed and inconsistent, and you seem to lack any ability to see this as a potential problem, so much so that it reeks of you having an agenda.
Your flawed logic and prejudice do not beat 3 research papers.
I laugh in your face. This article has a clear agenda, not me.
Yes, the article reporting on a research paper has an agenda, and not the random guy ignoring the evidence to contradict it, with absolutely zero to show for his argument and clearly flawed logic.
All I hear is the laugh of ignorance.
Ah yes, Gizmodo, arbiter of scientific truths. Their agenda is clear: to get you to click, typically with an outragey clickbait headline that reinforces your favorite narrative.
You need to learn the difference between debating someone and shouting at them that they have no argument, no logic, no evidence, and ill motivations. I can think of a couple other things you also need to do, but I’ll keep it PG.
When I use AI for my personal coding projects, I’ve found that if the task is unsolvable by the AI model, I’m not able to sit down and do it myself until the next day. It’s like I’ve got to reset my brain.
If I want to save time and use AI for a specific part of the code, it probably saves me 5 hours of work. But then I spend 5 hours yelling at the AI to try to get it to actually solve it. The next day I’ll just fix it myself in 2 hours.
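Do the math on that and the “saved” 5 hours actually cost me 7 (5 yelling plus 2 fixing it myself), so roughly 2 hours lost rather than 5 saved.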
That sounds a lot like what the studies show. And IMO that sounds like a serious problem.
I’m really just tricking my brain to think I’m being more productive lmao.
But then again, some of the stuff I’m working on is in principle quite easy to do but is also outside of my skill set; for these cases I benefit from using AI.
IMO the challenge is knowing how and when to use AI. Small companies using AI correctly can probably benefit massively from it, although it’s risky.
This paper shows that a person who has performed a task 12 times performs better than a person who has never performed the same task.
They also do not properly control for performance loss due to context switching, which is a well-known effect.
It’s a paper on arXiv, it hasn’t been peer reviewed or published.
No, the test is not training; that’s a weird thing to claim. The switch is what is being tested, and you disregard that 2 other tests have shown similar results: an actual decline in critical thinking and problem-solving.
Here is the paper: https://ai-project-website.github.io/AI-assistance-reduces-persistence/
No, the test is not training; that’s a weird thing to claim.
The control group solved 12 questions manually and then the 3 test questions manually. The AI group solved 0 questions manually and then the 3 test questions manually. One group had 12 more manual math tasks to prepare for the manual math test; the other group had 0 and also had to context-switch.
The AI-assisted group was dealt a context switch, which results in a pretty severe performance loss. A context switch causes a performance loss of around 40% according to this paper, which was peer-reviewed, published by the APA, and is the most cited paper on the topic: https://www.apa.org/pubs/journals/releases/xhp274763.pdf
The AI-assisted group also did not have 12 questions to adjust to the new context, like the control group did. If they wanted to wipe out the context-switching performance loss, they should have kept asking questions to see if, after 12 more questions, the AI-assisted group reached similar performance.
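To make the asymmetry concrete, here’s a minimal sketch of the setup as I read it; the labels and the proposed extension are mine, not the paper’s:

```python
# Minimal sketch of the experiment structure as described above.
# Task labels are illustrative, not taken from the paper.
PRACTICE, TEST = 12, 3

# Control arm: 12 manual practice tasks, then 3 manual test tasks -> no tool switch.
control_arm = ["manual"] * PRACTICE + ["manual"] * TEST

# AI arm: 12 AI-assisted tasks, then 3 manual test tasks -> abrupt switch,
# with zero manual tasks to acclimate on before being measured.
ai_arm = ["ai_assisted"] * PRACTICE + ["manual"] * TEST

# Proposed control for the switching cost: keep the experiment going, so the
# AI arm also gets 12 manual tasks before its performance is measured again.
extended_ai_arm = ai_arm + ["manual"] * PRACTICE + ["manual"] * TEST
```

Count the manual tasks immediately before each test block: 12 for the control arm, 0 for the AI arm. That gap, not the AI itself, is what a proper control would have to rule out.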
The switch is what is being tested, and you disregard that 2 other tests have shown similar results.
No, they did not switch what was tested. Here is an image from the actual paper.
They were given 12 tasks, with one group using AI and the other doing mental math, and then 3 tasks doing mental math. One group had 12 more tasks’ worth of preparation than the other.

Nothing, not even the article in the OP, says that they did math and then swapped to reading for the test.
They did 3 different experiments; in each one they gave 12 tasks and then disabled the AI for one group and gave 3 more tasks as a test. At no point did they ask 12 math questions and then finish with 3 reading questions or vice versa. They did 2 experiments using math tasks and 1 experiment using reading comprehension tasks.
So one group had 15 math tasks and one group had 12 ‘how to ask an AI’ tasks and then 3 math questions.
They also did not control for context-switching losses, which is a well-documented effect (see the APA paper). The proper control would be to continue asking questions so the AI group also had 12 math tasks before the test.
There’s a reason that this is published on arXiv and not in a peer-reviewed journal. Designing a poor quality experiment doesn’t tell you anything useful even if you do multiple different versions of the same experiment.
This paper lacks a proper control group, specifically one that controls for context-switching performance loss.
The picture you posted contradicts your claims. The 2 groups are getting the same questions, but one has AI assistance and the other does not.
Again you fail to show anything to support your claims.
I also wrote text.
If you’re just going to cherry pick a single point and dismiss everything else then we’re done here.
Not training, no, but warm-up. And no, it is not about critical thinking; it’s about reading comprehension and calculations.
The test seems kind of dogshit; you could make the same argument against any tool. Calculators or even abacuses would have the same effect.
I’m made to use it for work, and it does speed up some tasks. However, for some stuff it ends up being like the experiment: not doing the work the first time means the whole process takes longer in the end.
To add to this, we already know that context switching causes a loss in performance.
A person who’s thinking about how to solve a problem one way and then has to suddenly think about solving it in another way will perform worse.
The Neuroscience Behind the Pain
Context switching isn’t just annoying — it’s neurologically expensive. When you shift from debugging a race condition to answering emails, your brain doesn’t simply “change tabs.” It goes through a complex process:
- Memory consolidation: Storing your current mental model
- Attention disengagement: Breaking focus from the current task
- Cognitive reloading: Building a new mental model for the next task
- Re-engagement: Getting back into flow
Research from Carnegie Mellon shows that even brief interruptions can increase task completion time by up to 23%. For complex cognitive work like programming, this cost multiplies dramatically.
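Taking that 23% figure at face value, a one-hour task interrupted once would take roughly 60 × 1.23 ≈ 74 minutes, and that’s before the cost of rebuilding the mental model for complex work.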
Here’s another article from CMU discussing the same thing: https://www.sei.cmu.edu/blog/addressing-the-detrimental-effects-of-context-switching-with-devops/
What this study shows is that a person who is faced with an unexpected context switch performs worse on a task than a user who has spent the last 12 questions performing the task the same way.
This exact problem would happen if you replaced AI with a calculator, or made a person swap from using paper to doing mental math. The problem here is context switching, not AI.
The way to ensure that the problem is AI and not the context switch would be to continue the test and see if the first group reverts back to baseline after 12 questions. 12 questions is how long the control group had to become acclimated to the task before their last context switch at the start of the test.
Also, of note: this is a paper on arXiv. It is not published, so it has not gone through a peer-review process, which would certainly catch the failure to set up a proper control group.
Context switching isn’t just X — it’s Y.
Are we sure this was written by a human?
AI being released was basically an apocalypse for people who use em dashes.
Here’s the most cited, human-created (2001) paper on the topic of context-switching performance loss: https://www.apa.org/pubs/journals/releases/xhp274763.pdf
Thanks.
And I’m all for em dashes. After all, I started using them after reading enough books. It’s just that particular construct that strikes me as especially LLM-y.
AI was trained on human writing. If it produces a certain tone, then that’s probably a result of the material that was favoured in training it. That construction was common in human writing before it became common in AI too.
What makes it stick out is when AI uses it in contexts where humans normally wouldn’t, but this kind of assertion is common in scientific papers and articles. It would make sense to train an AI on scientific writing, since that tone sounds authoritative and like you have some idea of what you’re talking about.
So I don’t think this is an LLM-construct; it’s an instance of the original style that LLMs copy.
I’d like to see a study on that; I see it mentioned so much it’s almost achieved meme status.
It could very well be a Baader–(👀)Meinhof phenomenon.
Studies show that using a bulldozer to plow a field decreases the farmer’s muscle density after just one day of use.
Christ. What a load of shit.
I really do see the issue with AI. I see people around me outsource thinking to it too much. Like, literally. As if they are happy that a machine can make their life choices for them. This is extremely worrying. It’s about how people use it.
Those are important studies, but nothing shocking. The conclusion to draw from them is the same one we’ve drawn from all technologies that have improved our lives to some degree: without them, we tend either to be incompetent, because losing access to them isn’t worth planning for, or to be demotivated, because why would we deprive ourselves of technology that makes our work so much less exhausting?
It doesn’t necessarily remove our capacity to think (and the article falsely generalises to critical thinking); it shifts what kind of thinking we do.
If AI is as good as or better than I am at writing code, then I’ll switch my brain to doing only the orchestrating and architecture rather than the writing of the code. And yes, if you remove AI, the switch will cause me to perform worse than I did before AI, but not permanently, only until I get used to it again.
If an AI is better than a doctor at finding cancer indicators, then the doctor will focus their mind on finding solutions only rather than splitting it on both the detection and solution.
This is not new, not bad, and I’ll even go to the extent of saying it’s a great use of AI: humans evolved for specialization. The less varied the tasks are, the better we are at the subset we specialize in. That’s what has driven our rapid technological and societal advances over the past millennia.
But, AI has many issues and many detrimental applications as well, so don’t see this comment as a full endorsement of AI.

Well, when I talk with the AI for more than two to five minutes, I almost always end up feeling like the guy in the picture; for anyone who didn’t get it, it’s a character from the movie Idiocracy.
I don’t want it; all it does is negate years of learned experience and the ability to organically formulate ideas.

Researchers: “Is the AI in the room with us now?”
Test Subjects: “No, asshole! You just took it from me while I was in the middle of using it!”
But which 10 minutes?
One sec, maybe ChatGPT knows….
I think that if you use AI responsibly (as an assisting tool) like mentioned in the article, then you are pretty much on the safe side.
But when you have AI do everything for you, then there’s a big problem.
Personally I try not to use it at all, not a fan of all the problems that come with it.
You clearly didn’t read the article, and you are dead wrong.
Except you are right that if you let the AI do everything, it’s worse, and you lose a lot of ability for critical thinking.
The last paragraph of the article even shows other studies finding that using AI assistance over time has the long-term effect of lowering problem-solving abilities!!
Personally I try not to use it at all, not a fan of all the problems that come with it.
This is the way. 😀