AI-created “virtual influencers” are stealing business from humans::Brands are turning to hyper-realistic, AI-generated influencers for promotions.
Oh no, fake people are stealing fake jobs from fake people!
Anyways…
That makes sense. The goal always seemed to be as fake as possible.
I prefer to think of it as leveling the playing field. You don’t have to be a 20-year-old woman with the right face and body ratios to be an Instagram model anymore. Anyone can be one! Seems like true equality to me.
You just need to be a geek who can afford a 4090 and the software that produces the pretty lady.
Yeah let’s do this instead of banning predatory advertising lol.
Wasn’t there a social media website that did a massive bot purge a while ago, and most influencers found out that like 90+% of their audiences were actually bots anyway? Sounds like this is just the logical conclusion, and the rest of us can get on with our lives while bots entertain bots.
Whew. Skynet distracted itself from killing all humans.
I hate influencers, aka living adverts
“Then they came for the influencers, and I said nothing…”
Because fuck em.
Can’t you see us insulting them? We said a lot as we watched them descend into a dark hole never to return.
Guess they’ll have to get a real job now.
Well, the real jobs are going to AI too
Are there any that are real in the sense that they contribute something of value to society?
Even doctors are liable to be replaced by AI. I don’t know what counts as “something of value to society” to you, and frankly that’s the sort of argument that is never worth having. But generally speaking, it doesn’t get much more valuable for society than doctors.
Being a doctor would be a real job, but the only jobs I’ve seen actually getting replaced are things like clickbait content farms, scams, marketing, exploitative gambling-centric video games, and other such garbage. Unlike being a doctor, it’s never been hard to shit that stuff out into the world. And since these neural networks aren’t actually that good, I’ll believe they can replace doctors when I see it.
You haven’t heard of good writing and good artwork being replaced by AI?
I’ve seen some pretty interesting images and some funny text but nothing that amounts to a big enough vision that it’s something cohesive like a complete movie or a book. I’ve seen Joel Haver videos but those aren’t made by pushing a button and getting a video.
I’ll believe they can replace doctors when I see it.
And when that day comes we’re not going back.
People’s identities become fully commodified, and then a technology is invented to simulate them. Late-stage capitalist dystopia things.
And nothing of value was lost.
Good. Fuck anyone who treats ‘influencing’ as a career.
One step closer to the Dead Internet Theory becoming reality
For people like me that hadn’t heard about the theory:
“The dead Internet theory is an online conspiracy theory that asserts that the Internet now consists mainly of bot activity and automatically generated content that is manipulated by algorithmic curation, marginalizing organic human activity”
Here’s the summary for the wikipedia article you mentioned in your comment:
The dead Internet theory is an online conspiracy theory that asserts that the Internet now consists mainly of bot activity and automatically generated content that is manipulated by algorithmic curation, marginalizing organic human activity. Proponents of the theory believe these bots are created intentionally to help manipulate algorithms and boost search results in order to ultimately manipulate consumers. Furthermore, some proponents of the theory accuse government agencies of using bots to manipulate public perception, stating "The U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population". The date given for this "death" was generally around 2016 or 2017. The theory has gained traction because much of the observed phenomena is grounded in quantifiable phenomena like increased bot traffic. However, the idea that it is a coordinated psyop has been described by Kaitlin Tiffany, staff writer at The Atlantic, as a "paranoid fantasy," even if there are legitimate criticisms involving bot traffic and the integrity of the internet.
Am i real?
no, you’re merely an ai
I’m not entirely unsympathetic here - we all do what we can to survive. For some of us, that does mean cashing in on nature’s gifts.
There is a darker side here. As much as I like to joke, influencers are people, and most people draw the line somewhere. There are some things no one wants their face tied to. AI personas, on the other hand…
This is a problem for the whole internet. I’ve made a long version of my argument here, but tl;dr: as companies clutter the internet with cheaper and cheaper mass-produced content, the valuable places will also get ruined. There’s an analogy to our physical world: because we build cheap and ugly cities that roughly look the same, the few places that are beautiful and unique get ruined too, because they’re just too valuable; everyone wants to go there. I think we’re already seeing the beginning of this, with pre-existing companies like Reddit that have high-quality human-generated content walling themselves off more and more as that content becomes more valuable.
To me, this is just part of the progress. With the introduction of technology, they were the ones to take advantage of Photoshop, Instagram filters and all. Now the technology has advanced enough not only to be an instrument to enhance their looks, but to fully replace them.
Progress to where? To complete alienation?
Lately the benefits of technological advancement seem to mostly serve to make some executives wealthier, rather than benefit the whole of society. Same goes here. Rather than being somewhat affected by brand deals, these figures can be entirely fabricated, so that their every word is optimized for sales.
Even as someone who used to be excited for AI personality developments, looking at this gives me an awful dystopian vibe.
Human influencers have always given me dystopian vibes. And they were just making some executives and themselves rich, so it’s not such a big loss…
Human influencers are just celebrities at a smaller scale, and frankly the assumption I’m seeing in this thread that influencer/celebrity culture will go away if influencers are replaced is completely unrealistic. We will just get Coca-ColAIna and L’ÓreAI-chan instead of people occasionally peddling products.
If the real concern behind this disdain for influencers is artificiality and parasocial followings replacing real human connections, then replacing them with AI is in no way going to fix anything. It will only make it worse. It will lead to custom-tailored indoctrination by brands.
Worse than that, I already see people treating actual artists in much the same way: as if the human element in culture doesn’t matter as much as having an endless source of nebulous content, and anyone making art should get a “real job” instead. Never mind that those are also in line for automation…
‘Influencer’ as a job has only existed for what, 10 years? I don’t think society will collapse without them.
Influencers have a lot of overlap with artistic expression online, but this is not even all that it is about. This is not going to end simply with replacing Logan Paul and stopping at that. This is only one more step in a trend to replace a lot of creative, intellectual and service jobs. Which wouldn’t even be so bad if those people had a guarantee of a living and could do anything they want with their time… but this is not how it goes.
We couldn’t guarantee a living to all the people who had to go around picking up horse shit or lighting gas streetlamps either. Sure, a UBI would be nice, but technology advances. And I really do not believe it is a slippery slope from ending the career of Logan Paul to ending the career of a future Leonardo Da Vinci.
Back then we couldn’t guarantee it. But since then, productivity has grown immensely. We grow more than enough food to feed every single person, and often that food is thrown out for a myriad of economic reasons. Technology advances, but we see less and less of the benefits. At least it used to free us from manual labor into service work, but if it takes that too, then what?
You may not believe it, but artists are already seeing their careers diminish in financial viability. Before we could even speculate about the threat to influencers, there were already visual artists and voice actors who gave up because their commissioners and employers decided to use AI instead. One might say “this means they weren’t very good, so no loss”, but how does an artist get good if not by practicing? Nah, we aren’t sliding from ending Logan Paul to ending a prospective Leonardo Da Vinci; likely we already ruined the chances of that Da Vinci, and now it’s sliding towards influencers.
And you know what, I don’t even think Logan Paul is going to lose his job, considering how established he is. But some smaller, more principled and creative influencers might.
Replacing influencers with AI is not going to fix anything; for that we should dismantle social media and have a serious talk, all 8 billion of us. But it’s not going to make anything worse either: it is already custom-tailored indoctrination by brands, and a handful of assholes are making stupid amounts of money. I’m not going to cry if that money shifts to different hands.
Yes, artists come up often in these kinds of discussions, but the ones that are losing their jobs to AI never had one in the first place, same as influencers. What are we talking about, the Jim who makes you a custom logo and business cards for your business?
The guy that gets a commission from the newly opened local microbrewery for graffiti-ing their walls is hardly losing any work to AI. If anything, they could integrate AI into their creative process.
Good luck convincing 8 billion people all to agree on anything, especially to drop something that has become so enmeshed with people’s lives already.
But it is going to make it worse. All the data they are collecting from us will be directly funnelled into how best to manipulate us on an individual level. It is not custom-tailored to a personal level yet. Even the most cynical and greedy influencer doesn’t have the means to individualize ads. But if it’s all AI-created, then it can be done.
Yes, artists come up often in these kinds of discussions, but the ones that are losing their jobs to AI never had one in the first place
Nice No True Scotsman. Sounds like you don’t really value their work, and that anyone who could be replaced never deserved to earn a living to begin with. I don’t think there is anything I can respond to that, because at that point we have a fundamental conflict of values and worldview.
I believe artists, even small artists, deserve to be supported and that our world and culture is better off for that. Including Jim.
The guy that gets a commission from the newly opened local microbrewery for graffiti-ing their walls is hardly losing any work to AI.
That is, until a drone can physically print AI-created graffiti and replace that guy the same way digital artists are getting replaced.
If anything, they could integrate AI into their creative process.
Assuming said artist even wants to do that, why would the business hire someone to use an AI when it could do it itself? The benefit of AI is making content creation easier and faster. It’s not enough to say that “artists could just use it”, because inevitably that means fewer artists would be needed or hired for any given work. Say the graffiti artist manages to use said AI and drones and get by. Well, then he doesn’t need a team and apprentices anymore. And those apprentices won’t get to do the same, because the graffiti work is already handled.
Ultimately, what is all this for? Rather than automation freeing us to have leisure and be creative, it’s freeing us to carry boxes in an Amazon warehouse.
I take your point, but in this specific application (synthetically generated influencer images) it’s largely something that falls out for free from a wider stream of research (namely Denoising Diffusion Probabilistic Models). It’s not like it’s really coming at the expense of something else.
As for what it’s eventually progressing towards - who knows… It has proven to be quite an unpredictable and fruitful field. For example Toyota’s research lab recently created a very inspired method of applying Diffusion models to robotic control which I don’t think many people were expecting.
That said, there are definitely societal problems surrounding AI, its proposed uses, legislation regarding the acquisition of data, etc. Often times markets incentivize its use for trivial, pointless, or even damaging applications. But IMO it’s important to note that it’s the fault of the structure of our political economy, not the technology itself.
The ability to extract knowledge and capabilities from large datasets with neural models is truly one of humanity’s great achievements (along with metallurgy, the printing press, electricity, digital computing, networking communications, etc.), so the cat’s out of the bag. We just have to try and steer it as best we can.
The technology itself may be very interesting, and it may not ultimately be the core of the problem, but because there is no attempt to address the problems that arise as its use spreads, it can’t help but harm our society. Consider how companies may forgo hiring people and use AI to replace them, which threatens not only influencers but anyone working in writing, visual arts, or voice work, and consequently communication and service. Or how it can be used manipulatively to exploit people at a rate never seen before. For as many amazing uses as there may be, there are just as many terrible possibilities.
Meanwhile the average person cannot do much with it beyond using it as a toy, really.
Ultimately the real problem is the system, but as the system refuses to change, we are on a collision course. There are calls to ban AI, but that is not the ideal solution, and I don’t think it can be done in any case. Yet we are not making the societal changes direly needed to embrace it and end up with a better world. Sure, it will bring massive profits to all sorts of businesses and industries, but that will most likely come at the direct expense of people’s livelihoods. Can we even trust the scientific and industrial uses when financial interests direct them in such a way that products are intentionally sabotaged to be less functional and durable, or even conclude that “curing diseases is not a sufficiently profitable model”?
These days I just dread the future…
Since the forces that determine policy are largely tied up with corporate profit, promoting the interests of domestic companies against those of other states, and access to resources and markets, our system will misuse AI technology whenever and wherever those imperatives conflict with the wider social good. As is the case with any technology, really.
Even if “banning” AI were possible as a protectionist measure for those in white-collar and artistic professions, I think it would ultimately be unfavorable with the ruling classes, since it would concede ground to rival geopolitical blocs who are in a kind of arms race to develop the technology. My personal prediction is that people in those industries will just have to roll with the punches and accept AI encroaching into their space. This wouldn’t necessarily be a bad thing, if society made the appropriate accommodations to retrain them and/or otherwise redistribute the dividends of this technological progress. But that’s probably wishful thinking.
To me, one of the most worrying trends, as it’s gained popularity in the public consciousness over the last year or two, has been the tendency to silo technologies within large companies, and build “moats” to protect it. What was once an open and vibrant community, with strong principles of sharing models, data, code, and peer-reviewed papers full of implementation details, is increasingly tending towards closed-source productized software, with the occasional vague “technical report” that reads like an advertising spiel. IMO one of the biggest things we can lobby for is openness and transparency in the field, to guard against the natural monopolies and perverse incentives of hoarding data, technical know-how, and compute power. Not to mention the positive externality spillovers of the open-source scientific community refining and developing new ideas.
It’s similar to how knowledge of atomic structure gave us both the ability to destroy the world and the ability to fuel it (relatively) cleanly. Knowledge itself is never a bad thing, only what we choose to do with it.
Progress to something better or to self-destruction; nothing is forever. The whole of social media may disappear at some point; it all depends on the community and humankind as a whole. The simple truth is that people want entertainment, and if AI is capable of delivering it better, it will be embraced.
I’m not saying that this is good or bad; I don’t like it either. So I do what I can to support what I think is good and voice my disapproval of what I think is bad. If Instagram becomes a place for AI influencers, I’ll just ditch it. This should be the natural reaction of everyone. Unfortunately, this is where the whole “influencer” thing was heading all along. From the very beginning of their careers they advertised fantasies; they used every piece of technology available to enhance their looks and lifestyle.
Seems like people are all too eager for this to destroy the field of influencers as a whole, but that is extremely unlikely. If AI influencers don’t stick, the human ones will just keep at it as usual, but if it works, then it only becomes more artificial and manipulative. Say what you will about influencers, they don’t have the capability to tailor their ads to every single user, but AI could.
Betting on the whole of social media disappearing is wishful thinking, frankly. This genie won’t go back in the bottle. The human need for connection is too strong for people to simply drop it, and any substitute will have to fight uphill against the very entrenched, massive businesses that shaped social media into what it is today.
AI will follow a similar curve as computers in general: At first they required giant rooms full of expensive hardware and a team of experts to perform the most basic of functions. Over time they got smaller and cheaper and more efficient. So much so that we all carry around the equivalent of a 2000-era supercomputer in our pockets (see note below).
2-3 years ago you really did need a whole bunch of very expensive GPUs with a lot of VRAM to fine-tune a basic diffusion (image) model (i.e. train a LoRA). Today you can do it with a desktop GPU (an Nvidia 3090 or 4090 with 24GB of VRAM… or a 4060 Ti with 16GB and some patience). You can use pretrained diffusion models at reasonable speeds (~1-5 seconds an image, depending on size/quality settings) with any GPU with at least 6GB of VRAM (seriously, try it! It’s fun, only takes like 5-10 minutes to install automatic1111, and will provide endless uncensored entertainment).
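If you want a taste of how little code this takes without installing a whole UI, here’s a minimal text-to-image sketch using the Hugging Face diffusers library instead of automatic1111 (the checkpoint name, prompt and settings are just illustrative assumptions, not a recommendation):

```python
# Minimal text-to-image sketch with the diffusers library (not automatic1111 itself).
# Assumes an Nvidia GPU with ~6GB+ of VRAM and `pip install torch diffusers transformers accelerate`.
import torch
from diffusers import StableDiffusionPipeline

# Public Stable Diffusion 1.5 checkpoint, picked only as an example; any SD checkpoint works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision keeps VRAM usage low
)
pipe = pipe.to("cuda")

prompt = "studio portrait of a photorealistic virtual influencer, soft lighting"
image = pipe(prompt, num_inference_steps=25, guidance_scale=7.5).images[0]
image.save("virtual_influencer.png")
```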
Large Language Model (LLM) training is still out of reach for desktop GPUs. GPT-3 was reportedly trained on roughly 10,000 Nvidia GPUs, and if you wanted to run it locally (assuming it were available for download) you’d need the equivalent of 5 A100s (each one costs about $6,700, plus you’d need an expensive server capable of hosting them all simultaneously).
Having said that, you can host a smaller LLM such as Llama 2 on a desktop GPU and it’ll actually perform really well (as in, just a second or two between when you give it a prompt and when it gives you a response). You can also train LoRAs on a desktop GPU just like with diffusion models (e.g. train it on a dataset containing your thousands of Lemmy posts so it can mimic your writing style; yes, that actually works!).
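For reference, here’s roughly what running a small chat model locally looks like with the transformers library; the Llama 2 checkpoint named below is gated behind Meta’s license, so treat it as a placeholder for whatever local model you actually have access to:

```python
# Rough local-LLM sketch using Hugging Face transformers.
# Assumes a desktop GPU with enough VRAM for a 7B model in half precision (~14GB)
# and `pip install torch transformers accelerate`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # placeholder; gated, so swap in any local checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on a desktop GPU
    device_map="auto",          # place the model on the GPU automatically
)

prompt = "Write a two-sentence pitch for a fictional soda brand."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```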
Not only that, but the speed/efficiency of AI tools like LLMs and diffusion models improves by leaps and bounds every few weeks. Seriously: it’s hard to keep up! This is how much of a difference a week can make in the world of AI: I bought a 4060 Ti as an early Christmas present to myself and was generating 4 (high-quality) 768x768 images in about 20 seconds. Then Latent Consistency Models (LCM) came out and suddenly they only took 8s. Then a week later “TurboXL” models became a thing and now I can generate 4 really great 768x768 images in 4 seconds!
At the same time there have been improvements in training efficiency, and less VRAM is required in general thanks to those advancements. We’re still in the “early days” of AI algorithms (seriously: AI stuff is extremely inefficient right now), so I wouldn’t be surprised to see efficiency gains of 1,000-100,000x in the next five years for all kinds of AI tools (language models, image models, weather models, etc).
Combine just a 100x efficiency gain with five years of merely evolutionary hardware improvements and I wouldn’t be surprised to see something even better than GPT-4 running locally on people’s smartphones, with custom training/learning happening in real time (to better match the user’s preferences/style).
Note: The latest Google smartphone as of the date of this post is the Pixel 8 which is capable of ~2.4 TeraFLOPS. Even 2yo smartphones were nearing ~2 TeraFLOPS which is about what you’d get out of a supercomputer in the early 2000s: https://en.wikipedia.org/wiki/FLOPS (see the SVG chart in the middle of the page).
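Napkin math for that guess, where every number is an explicit assumption rather than a measurement:

```python
# Back-of-envelope sketch of the "GPT-4-class model on a phone" guess above.
# All three numbers are assumptions: the phone figure comes from the note above,
# the other two are the hypothetical gains discussed in this comment.
phone_tflops_today = 2.4        # Pixel 8 ballpark (see the FLOPS note above)
hardware_gain_5yr = 4           # "merely evolutionary" hardware, ~1.3x per year for 5 years
software_efficiency_gain = 100  # the hypothetical 100x algorithmic improvement

effective = phone_tflops_today * hardware_gain_5yr * software_efficiency_gain
print(f"~{effective:.0f} 'today-equivalent' TFLOPS")  # ~960
# Roughly the ballpark of a few datacenter GPUs' worth of FP16 compute today,
# ignoring memory capacity and bandwidth, which are the real bottlenecks on phones.
```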
Here’s the summary for the wikipedia article you mentioned in your comment:
In computing, floating point operations per second (FLOPS, flops or flop/s) is a measure of computer performance, useful in fields of scientific computations that require floating-point calculations. For such cases, it is a more accurate measure than measuring instructions per second.
Wait, are we supposed to think of influencers as humans?
Ohhh no… Logan Paul will need to learn a discernible skill of some sort.
His skill is finding dead bodies
He’s gotten into wrestling for WWE, and he’s actually pretty good at it. He’s probably getting paid pretty well there.
No probably about it, he’s one of their top paid stars (although still a ways off Roman Reigns and Lesnar). Which is wild given he only wrestled 6 times this year. But he brings eyes to the product, so WWE have done the maths and deemed it to be worthwhile.