Note: this lemmy post was originally titled MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline and linked to this article, which I cross-posted from this post in !fuck_ai@lemmy.world.
Someone pointed out that the “Science, Public Health Policy and the Law” website which published this click-bait summary of the MIT study is not a reputable publication deserving of traffic, so, 16 hours after posting it I am editing this post (as well as the two other cross-posts I made of it) to link to MIT’s page about the study instead.
The actual paper is here and was previously posted on !fuck_ai@lemmy.world and other lemmy communities here.
Note that the study with its original title got far fewer upvotes than the click-bait summary did 🤡
Are history teachers wasting their time?
The obvious AI-generated image and the generic name of the journal made me think something was off about this website/article, and sure enough: the writer of this article is on X claiming that COVID-19 vaccines are not fit for humans and that there's a clear link between vaccines and autism.
Neat.
Thanks for pointing this out. Looking closer I see that that “journal” was definitely not something I want to be sending traffic to, for a whole bunch of reasons - besides anti-vax they’re also anti-trans, and they’re gold bugs… and they’re asking tough questions like “do viruses exist” 🤡
I edited the post to link to MIT instead, and added a note in the post body explaining why.
Public health flat earthers
Does this also explain what happens with middle and upper management? As people have moved up the ranks during the course of their careers, I swear they get dumber.
That was my first reaction. Using LLMs is a lot like being a manager. You have to describe goals/tasks and delegate them, while usually not doing any of the tasks yourself.
Fuck, this is why I'm feeling dumber myself after getting promoted to more senior positions, where I only work at the architectural level and on stuff that the more junior staff can't handle.
With LLMs basically my job is still the same.
that’s the peter principle.
people only get promoted until their inadequacies/incompetence show. and then their job becomes covering for it.
hence why so many middle managers' primary job is managing the appearance of their own competence first and foremost, and they lose touch with the actual work being done… which is a key part of how you actually manage it.
Yeah, that’s part of it. But there is something more fundamental, it’s not just rising up the ranks but also time spent in management. It feels like someone can get promoted to middle management and be good at the job initially, but then as the job is more about telling others what to do and filtering data up the corporate structure there’s a certain amount of brain rot that sets in.
I had just attributed it to age, but this could also be a factor. I’m not sure it’s enough to warrant studies, but it’s interesting to me that just the act of managing work done by others could contribute to mental decline.
I just asked ChatGPT if this is true. It told me no and to increase my usage of AI. So HA!
I wonder what social media does.
16 hours after posting it I am editing this post (as well as the two other cross-posts I made of it) to link to MIT’s page about the study instead.
Better late than never. Good catch.
You write essay with AI, your learning suffers.
One of those papers that are basically “water is wet, researchers discover”.
cognitive decline.
Another reason for refusing those so-called tools… it could turn one into another tool.
It’s a clickbait title. Using AI doesn’t actually cause cognitive decline. They’re saying using AI isn’t as engaging for your brain as the manual work, and then broadly linking that to the widely understood concept that you need to engage your brain to stay sharp. Not exactly groundbreaking.
Sir this is Lemmy & I’m afraid I have to downvote you for defending AI which is always bad. /s
Heyyy, now I get to enjoy some copium for being such a dinosaur and resisting using it as often as I can
deleted by creator
And using a calculator isn’t as engaging for your brain as manually working the problem. What’s your point?
Seems like you’ve made the point succinctly.
Don’t lean on a calculator if you want to develop your math skills. Don’t lean on an AI if you want to develop general cognition.
I don’t think this is a fair comparison because arithmetic is a very small and almost inconsequential skill to develop within the framework of mathematics. Any human that doesn’t have severe learning disabilities will be able to develop a sufficient baseline of arithmetic skills.
The really useful aspects of math are things like how to think quantitatively. How to formulate a problem mathematically. How to manipulate mathematical expressions in order to reach a solution. For the most part these are not things that calculators do for you. In some cases reaching for a calculator may actually be a distraction from making real progress on the problem. In other cases calculators can be a useful tool for learning and building your intuition - graphing calculators are especially useful for this.
The difference with LLMs is that we are being led to believe that LLMs are sufficient to solve your problems for you, from start to finish. In the past students who develop a reflex to reach for a calculator when they don’t know how to solve a problem were thwarted by the fact that the calculator won’t actually solve it for them. Nowadays students develop that reflex and reach for an LLM instead, and now they can walk away with the belief that the LLM is really solving their problems, which creates both a dependency and a misunderstanding of what LLMs are really suited to do for them.
I’d be a lot less bothered if LLMs were made to provide guidance to students, a la the Socratic method: posing leading questions to the students and helping them to think along the right tracks. That might also help mitigate the fact that LLMs don’t reliably know the answers: if the user is presented with a leading question instead of an answer then they’re still left with the responsibility of investigating and validating.
But that doesn’t leave users with a sense of immediate gratification which makes it less marketable and therefore less opportunity to profit…
arithmetic is a very small and almost inconsequential skill to develop within the framework of mathematics.
I’d consider it foundational. And hardly small or inconsequential given the time young people spend mastering it.
Any human that doesn’t have severe learning disabilities will be able to develop a sufficient baseline of arithmetic skills.
With time and training, sure. But simply handing out calculators and cutting math teaching budgets undoes that.
This is the real crux of the comparison. Telling kids “you don’t need to know math if you have a calculator” is intended to reduce the need for public education.
I’d be a lot less bothered if LLMs were made to provide guidance to students, a la the Socratic method: posing leading questions to the students and helping them to think along the right tracks.
But the economic vision for these tools is to replace workers, not to enhance them. So the developers don’t want to do that. They want tools that facilitate redundancy and downsizing.
But that doesn’t leave users with a sense of immediate gratification
It leads them to dig their own graves, certainly.
You better not read audiobooks or learn from videos either. That’s pure brainrot. Too easy.
Look at this lazy fucker learning trig from someone else, instead of creating it from scratch!
Don’t worry scro
I don’t refute the findings but I would like to mention: without AI, I wasn’t going to be writing anything at all. I’d have let it go and dealt with the consequences. This way at least I’m doing something rather than nothing.
I’m not advocating for academic dishonesty of course, I’m only saying it doesn’t look like they bothered to look at the issue from the angle of:
“What if the subject was planning on doing nothing at all and the AI enabled them to expend the bare minimum of effort they otherwise would have avoided?”
You haven’t done anything, though. If you’re getting to the point where you are doing actual work instead of letting the AI do it for you, then congratulations, you’ve learned some writing skills. It would probably be more effective to use some non-ai methods to learn as well though.
If you’re doing this solely to produce output, then sure, go ahead. But if you want good output, or output that actually reflects your perspective, or the skills to do it yourself, you’ve gotta do it the hard way.
sad that people knee jerk downvote you, but i agree. i think there is definitely a productive use case for AI if it helps you get started learning new things.
It helped me a ton this summer learn gardening basics and pick out local plants which are now feeding local pollinators. That is something i never had the motivation to tackle from scratch even though i knew i should.
Saw you down-voted and wanted to advise that I am glad you went on to learn some things you had been meaning to, that alone makes the experiment worthwhile as discipline is a rare enough beast. To be clear I myself have a Claude subscription that is about to lapse, and find the article unfortunately spot on. I feel fortunate to have moved away from LLMs naturally.
deleted by creator
That’s the thing about cognitive decline…
The people experiencing it only realize it’s happening during brief reprieves from the symptoms
So if someone is experiencing cognitive decline, they’re literally incapable of recognizing it. They all think they’re completely fine…
A constant refrain I’ve found myself using with a Facebook “friend” is “you lack the ability to even understand why you are wrong”. Like I’m convinced he actually thinks anecdotal stories carry as much weight as troves of data proving him wrong.
I bet you think you’re totally fine… ;)