• @The_Tired_Horizon@lemmy.world · 13 points · 7 months ago

    I gave up reporting on major sites where I saw abuse. Stuff that, if you said it in public, witnessed by others, you’d be investigated for. Twitter was also bad for responding to reports with “this doesn’t break our rules” when a) it clearly did and b) it probably broke a few laws too.

  • @Krudler@lemmy.world · 29 points · edited · 7 months ago

    I just would like to show something about Reddit. Below is a post I made about how Reddit was literally harassing and specifically targeting me, after I let slip in a comment one day that I was sober - I had previously never made such a comment because my sobriety journey was personal, and I never wanted to define myself or pigeonhole myself as a “recovering person”.

    I reported the recommended subs and ads to Reddit Admins multiple times and was told there was nothing they could do about it.

    I posted a screenshot to DangerousDesign and it flew up to like 5K+ votes in like 30 minutes before admins removed it. I later reposted it to AssholeDesign where it nestled into 2K+ votes before shadow-vanishing.

    Yes, Reddit and similar sites are definitely responsible for a lot of suffering and pain, at the expense of humans, in the pursuit of profit. After it blew up and front-paged, “magically” my home page didn’t have booze-related ads/subs/recs any more! What a total mystery how that happened /s

    The post in question is a perfect “outing” of how Reddit continually tracks and tailors the user experience specifically to exploit human frailty for its own gain.

    Edit: Oh, and the hilarious part that many people won’t let go of (when shown this) is that it says it’s based on my activity in the Drunk subreddit, which I had never once visited, commented in, posted in, or even been aware of. So that just makes it worse.

    • @mlg@lemmy.world · 10 points · 7 months ago

      It’s not Reddit if posts don’t get nuked or shadowbanned by literal sitewide admins.

      • @Krudler@lemmy.world · 5 points · 7 months ago

        Yes I was advised in the removal notice that it had been removed by the Reddit Administrators so that they could keep Reddit “safe”.

        I guess their idea of “safe” isn’t 4+ million users going into their privacy panel and turning off exploitative sub recommendations.

        Idk though I’m just a humble bird lawyer.

  • @Fedizen@lemmy.world · 14 points · 7 months ago

    media: Video games cause violence

    media: Weird music causes violence.

    media: Social media could never cause violence this is censorship (also we don’t want to pay moderators)

    • @Eximius@lemmy.world · 4 points · edited · 7 months ago

      Since “media” (which you define by the tropes of unsubstantiated news outlets) couldn’t sensibly refer to a forum like Reddit, or even Facebook, this makes no sense.

  • PorkSoda · 48 points · edited · 7 months ago

    Back when I was on reddit, I subscribed to about 120 subreddits. Starting a couple years ago though, I noticed that my front page really only showed content for 15-20 subreddits at a time and it was heavily weighted towards recent visits and interactions.

    For example, if I hadn’t visited r/3DPrinting in a couple weeks, it slowly faded from my front page until it disappeared altogether. It was so bad that I ended up writing a browser automation script to visit all 120 of my subreddits at night and click the top link. This ended up giving me a more balanced front page that mixed in all of my subreddits and interests.
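
    The script itself wasn’t shared, but the idea is simple enough to sketch. Below is a minimal, hypothetical version assuming Playwright and old.reddit.com markup (the subreddit list, URL pattern, and `a.title` selector are illustrative guesses, not the commenter’s actual code):

```python
# Hypothetical sketch of a nightly "touch every subscription" script.
# Goal: register an interaction with each subscribed subreddit so the
# front-page algorithm keeps all of them in rotation.

SUBSCRIPTIONS = ["3DPrinting", "woodworking", "espresso"]  # ~120 in practice


def subreddit_top_urls(names):
    """Build the weekly 'top posts' URL for each subscribed subreddit."""
    return [f"https://old.reddit.com/r/{name}/top/?t=week" for name in names]


def visit_all(urls):
    """Open each subreddit and click its top link (selector is a guess)."""
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        for url in urls:
            page.goto(url)
            page.locator("a.title").first.click()
        browser.close()


# Run nightly, e.g. from cron:
# visit_all(subreddit_top_urls(SUBSCRIPTIONS))
```

    Scheduling it overnight (cron, Task Scheduler) is what spreads the “recent interaction” signal evenly across all subscriptions.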

    My point is these algorithms are fucking toxic. They’re focused 100% on increasing time on page and interaction with zero consideration for side effects. I would love to see social media algorithms required by law to be open source. We have a public interest in knowing how we’re being manipulated.

    • @Fedizen@lemmy.world · 8 points · 7 months ago

      I used the Google News phone widget years ago and clicked on a giant-asteroid article, and for whatever reason my entire feed became asteroid/meteor articles. It’s also just such a dumb way to populate feeds.
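
      That failure mode is easy to reproduce with a toy model: if a feed simply counts clicks per topic and ranks by the count, a couple of taps on one story immediately dominate everything else. A hypothetical sketch (not Google’s actual algorithm):

```python
from collections import Counter


class NaiveFeed:
    """Toy recommender: rank topics purely by raw click counts."""

    def __init__(self):
        self.clicks = Counter()

    def click(self, topic):
        self.clicks[topic] += 1

    def recommend(self, n=3):
        # No decay, no exploration: recent obsessions crowd out the rest.
        return [topic for topic, _ in self.clicks.most_common(n)]


feed = NaiveFeed()
feed.click("politics")
feed.click("asteroids")  # one tap on an asteroid story...
feed.click("asteroids")  # ...plus a follow-up article
print(feed.recommend())  # asteroids now outranks everything else
```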

    • @Carlo@lemmy.ca · 8 points · 7 months ago

      Yeah, social media algorithms are doing a lot of damage. I wish there was more general awareness of this. Based on personal experience, I think many people actually like being fed relevant content, and are blind to the consequences. I think Lemmy is great, because you have to curate your own feed, but many people would never use it for that very reason. I don’t know what the solution is.

      • Corhen · 4 points · 7 months ago

        That’s why I always use YouTube’s Subscriptions feed first, and only delve into the regular front page if there’s nothing interesting in my subscriptions.

  • @skozzii@lemmy.ca · 40 points · 7 months ago

    YouTube feeds me so much right wing bullshit I’m constantly marking it as not interested. It’s a definite problem.

    • @afraid_of_zombies@lemmy.world · 7 points · 7 months ago

      I can’t prove that they were related, but I used to report all conservative ads (Hillsdale, Epoch Times, etc.) to Google with all-caps messages saying I was going to start calling the advertisers directly and yelling at them about the ads. About 2-3 days after I started doing that, the ads stopped.

      I would love for other people to start doing this to confirm that it works and to be free of the ads.

    • @Duamerthrax@lemmy.world · 7 points · 7 months ago

      It’s amazing how often I get a video suggested from some right-wing source complaining about censorship and being buried by YouTube. I ended up installing a third-party channel blocker to deal with it.

    • @CaptPretentious@lemmy.world · 2 points · 7 months ago

      YouTube started feeding me that stuff too. Weirdly, once I started reporting all of them as misinformation, they stopped showing up for some reason…

  • @atrielienz@lemmy.world · 4 points · 7 months ago

    So, I can see a lot of problems with this. Specifically the same problems that the public and regulating bodies face when deciding to keep or overturn section 230. Free speech isn’t necessarily what I’m worried about here. Mostly because it is already agreed that free speech is a construct that only the government is actually beholden to. Message boards have and will continue to censor content as they see fit.

    Section 230 basically stipulates that companies that provide online forums (Meta, Alphabet, 4Chan etc) are not liable for the content that their users post. And part of the reason it works is because these companies adhere to strict guidelines in regards to content and most importantly moderation.

    Section 230(c)(2) further provides “Good Samaritan” protection from civil liability for operators of interactive computer services in the good faith removal or moderation of third-party material they deem “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

    Reddit, Facebook, 4chan, et al. do have rules and regulations they require their users to follow in order to post. And for the most part the communities on these platforms are self-policing. There just aren’t enough paid moderators to make it work otherwise.

    That being said, the real problem is that this indirectly challenges Section 230. It barely skirts the question of whether the relevant platforms can themselves be considered publishers, or at all responsible for the content their users post, and very much attacks how users are presented with content to keep them engaged via algorithms (which is directly how these companies make their money).

    Even if the lawsuits fail, this will still be problematic. It could lead to draconian moderation of what can be posted and by whom. So now all race related topics regardless of whether they include hate speech could be censored for example. Politics? Censored. The discussion of potential new laws? Censored.

    But I think it will be worse than that. The algorithm is what makes the ad space these companies sell so valuable. And this is a direct attack on that. We lack the consumer privacy protections to protect the public from this eventuality. If the ad space isn’t valuable the data will be. And there’s nothing stopping these companies from selling user data. Some of them already do. What these apps do in the background is already pretty invasive. This could lead to a furthering of that invasive scraping of data. I don’t like that.

    That being said there is a point I agree with. These companies literally do make their algorithm addictive and it absolutely will push content at users. If that content is of an objectionable nature, so long as it isn’t outright illegal, these companies do not care. Because they do gain from it monetarily.

    What we actually need is data privacy protections. Holding these companies accountable for their algorithms is a good idea. But I don’t agree that this is the way to do that constructively. It would be better to flesh out 230 as a living document that can change with the times. Because when it was written the Internet landscape was just different.

    What I would like to see is for platforms to moderate content posted and representing itself as fact. We don’t see that nearly enough on places like reddit. Users can post anything as fact and the echo chambers will rally around it if they believe it. It’s not really incredibly difficult to radicalise a person. But the platforms aren’t doing that on purpose. The other users are, and the algorithms are helping them.

    • @yamanii@lemmy.world · 3 points · 7 months ago

      Moderation is already draconian; interact with any Gen Z user and you’ll learn what “goon”, “corn”, “unalive”, and “(crime) in Minecraft” actually mean.

      These aren’t slang; they’re a second language developed to evade censorship on those platforms. Things will only get worse.

      • @atrielienz@lemmy.world · 1 point · edited · 7 months ago

        It’s always been that way, though. Back in the day on Myspace or in MSN chatrooms there were whole lists of words that were auto-censored and could result in a ban (temp or permanent). We literally had whole lists of alternates to use. You couldn’t say “sex” or “kill” back then either. The difference is the algorithm.

        I acknowledge in my comment that these platforms already censor things they find objectionable. Part of that is to keep Section 230 as it is. A perhaps more relevant part is to keep advertisers happy so they continue to buy ad space. A small portion may even be to keep the majority of the user base happy, because users who don’t agree with the supposed ideologies on a platform will leave it, and that’s fewer eyeballs on ads.

    • @RatBin@lemmy.world · 17 points · 7 months ago

      Completely different cases, questionable comparison:

      • Social media are the biggest cultural industry at the moment, albeit a silent and unnoticed one. Cultural industries like this are means of propaganda, information, and socialization, all of which are impactful and heavily personalized to each person’s opinions.

      • The role of such an impactful business is therefore huge: it can move opinions and whole movements, and the choices people make are driven by their media consumption and the communities they take part in.

      • In other words, policy, algorithms, and GUI are all factors that drive users to engage in specific ways with harmful content.

      • @RealFknNito@lemmy.world · -8 points · 7 months ago

        biggest cultural industry at the moment

        I wish you guys would stop making me defend corporations. Doesn’t matter how big they are, doesn’t matter their influence, claiming that they are responsible for someone breaking the law because someone else wrote something that set them off and they, as overlords, didn’t swoop in to stop it is batshit.

        Since you don’t like those comparisons, I’ll do one better. This is akin to a man shoving someone over a railing and trying to hold the landowners responsible for not having built a taller railing or more gradual drop.

        You completely fucking ignore the fact someone used what would otherwise be a completely safe platform because another party found a way to make it harmful.

        policy and algorithms are factors that drive users to engage

        Yes. Engage. Not in harmful content specifically, that content just so happens to be the content humans react to the strongest. If talking about fields of flowers drove more engagement, we’d never stop seeing shit about flowers. It’s not them maliciously pushing it, it’s the collective society that’s fucked.

        The solution is exactly what it has always been. Stop fucking using the sites if they make you feel bad.

        • @RatBin@lemmy.world · 8 points · 7 months ago

          Again, there’s no such thing as a neutral space or platform. Case in point: Reddit, with its gated communities and its lack of control over what people do with the platform, is in fact creating safe spaces for these kinds of things. This may not be intentional, but it ultimately leads toward the radicalization of many people. It’s a design choice, backed by the internal policy of the admins, who can decide to let these communities exist on one of the mainstream websites. If you’re unsure what to think, delving deep into these subreddits has the effect of radicalizing you, whereas in a normal space you wouldn’t be able to do it as easily. Since this counts as engagement, Reddit can suggest similar forums, leading via algorithms down a path of radicalization. This is why a site that claims to be neutral isn’t truly neutral.

          This is an example of the alt-right pipeline that Reddit successfully mastered:

          The alt-right pipeline (also called the alt-right rabbit hole) is a proposed conceptual model regarding internet radicalization toward the alt-right movement. It describes a phenomenon in which consuming provocative right-wing political content, such as antifeminist or anti-SJW ideas, gradually increases exposure to the alt-right or similar far-right politics. It posits that this interaction takes place due to the interconnected nature of political commentators and online communities, allowing members of one audience or community to discover more extreme groups (https://en.wikipedia.org/wiki/Alt-right_pipeline)

          And yet you keep comparing cultural and media consumption to physical infrastructure, which is regulated precisely to prevent what you mentioned (unsafe management of the terrain, for instance). So, taking your examples as you intended them, you may just have proven that regulations can in fact exist and that private companies and citizens are supposed to follow them. Since social media started using personalization and predictive algorithms, they also behave as editors, handling and selecting the content that users see. Why would they not be partly responsible, based on your argument?

          • @RealFknNito@lemmy.world · -6 points · edited · 7 months ago

            No such thing as neutral space

            it may not be intentional, but

            They can suggest similar [communities] so it can’t be neutral

            My guy, what? If all you did was look at cat pictures, you’d get suggested communities for sharing fucking cat pictures. These sites aren’t to blame for “radicalizing” people into sharing cat pictures, any more than they’re to blame for actually harmful communities. By your logic, Lemmy can also radicalize people. I see anarchist bullshit all the time; I had to block those communities and curate my own experience. I took responsibility, and instead of engaging with every post that pissed me off, I removed that content or avoided it. Should the instance I’m on be held responsible for not defederating radical instances? Should these communities be made to pay for radicalizing others?

            Fuck no. People are not victims because of the content they’re exposed to, they choose to allow themselves to become radical. This isn’t a “I woke up and I really think Hitler had a point.” situation, it’s a gradual decline that isn’t going to be fixed by censoring or obscuring extreme content. Companies already try to deal with the flagrant forms of it but holding them to account for all of it is truly and completely stupid.

            Nobody should be responsible because cat pictures radicalized you into becoming a furry. That’s on you. The content changed you and the platform suggesting that content is not malicious nor should it be held to account for that.

  • @ItsMeSpez@lemmy.world · 4 points · 7 months ago

    As much as I believe it is a breeding ground for right wing extremism, it’s a little strange that 4chan is being lumped in with these other sites for a suit like this. As far as I know, 4chan just promotes topics based on the number of people posting to it, and otherwise doesn’t employ an algorithm at all. Kind of a different beast to the others, who have active algorithms trying to drive engagement at any cost.
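
    For contrast, that 4chan-style “bump order” can be sketched in a few lines: threads are ranked by most recent reply, with no per-user model at all. This is an illustrative toy, not 4chan’s actual code:

```python
from dataclasses import dataclass


@dataclass
class Thread:
    title: str
    last_reply_at: float  # Unix timestamp of the most recent post


def bump_order(threads):
    """Rank threads by recency of last reply; no user history involved."""
    return sorted(threads, key=lambda t: t.last_reply_at, reverse=True)


board = [
    Thread("old sticky", last_reply_at=100.0),
    Thread("active thread", last_reply_at=300.0),
    Thread("slow thread", last_reply_at=200.0),
]
assert [t.title for t in bump_order(board)] == [
    "active thread", "slow thread", "old sticky"
]
```

    Every user who loads the board sees the same ordering, which is the structural difference from personalized engagement feeds.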

  • ✺roguetrick✺ · -1 points · 7 months ago

    They’re appealing the denial of motion to dismiss huh? I agree that this case really doesn’t have legs but I didn’t know that was an interlocutory appeal that they could do. They’d win in summary judgement regardless.

  • @Kalysta@lemmy.world · 4 points · 7 months ago

    Love Reddit’s lies about taking down hateful content when they’re 100% behind Israel’s genocide of the Palestinians and will ban you if you say anything remotely negative about Israel’s government. And the amount of transphobia on the site is disgusting. Let alone the misogyny.

    • @captainlezbian@lemmy.world · 1 point · 7 months ago

      Lol, yeah, I moderated major trans subreddits for years. It was entirely hit-and-miss whether we’d get support from the admins.

  • @blazera@lemmy.world · -1 points · 7 months ago

    Personally I believe in free will. Nothing should take any responsibility away from the one that chose to kill.