  • skisnow@lemmy.ca · 17 points · 21 hours ago

    But what if bias was not the reason? What if your face gave genuinely useful clues about your probable performance?

    I hate this so much, because spouting statistics is the number one go-to of idiot racists and other bigots trying to justify their prejudices. The whole fucking point is that judging someone’s value based on physical attributes outside their control is fucking evil, and increasing the accuracy of your algorithm only makes it all the more insidious.

    The Economist has never been shy about posting some questionable kneejerk shit in the past, but this is approaching a low even for them. Not only do they give the concept credibility, they go out of their way to dishonestly paint it as some sort of progressive boon for the poor.

  • I Cast Fist@programming.dev · 6 points · 17 hours ago

    Yeah, nothing says “this person will repay their loans” like looking at their face and nothing fucking else.

    I love how in Portuguese you can just call it capetalismo (capeta = devil).

  • humanspiral@lemmy.ca · 7 points · 18 hours ago

    Dystopian neutrality in the article.

    without discriminating on grounds of protected characteristics

    AI classifiers are trained with supervised learning (the “right” answers are predetermined). MechaHitler, for the sake of fascist nationalism, will rate Obama’s face as a poor employee, and Trump’s as the bestest employee.

    Open training data sets would mean (1) zero competitive advantage for any one model, and (2) massive complaints about any specific training example.

    For some jobs, psychopathy AND loyalty are desirable traits, even though they can be opposites. Honesty, integrity, and intelligence can be desirable traits, or obstacles to desperate loyalty. My point is that if many traits really are determined by faces, far more training data is needed to detect them. And then a human hiring decision means matching 10 or 30 traits to an (impossibly unique) position, where the direct manager only cares about loyalty (and not being too talented), while higher-level managers might prefer a candidate with the potential to replace that direct manager, and all of them care about race or pregnancy risk, and then there is post-training on some “illegal characteristics.”

    A Gattaca situation, where everyone either has an easy time getting a great job and moving to a greater one OR is shut out of all jobs, creates a self-contradicting prediction for “loyalty/desperation” controllability traits. If job duties are changed to include blow job services, then surely the agreeable make better employees, despite any facial tics in response to the suggestion.

    Silent human “illegal discrimination” isn’t eliminated or changed, but the new capability is why this will be a success: you can use a computer to do the interviewing and waste more interviewees’ time at no human cost to the employer. A warehousing company recently analyzed facial expressions to gauge attention to safety, which leads to “the AI punishments to your life will continue until you smile more.” Elysium’s automated parole interview is a good preview of the dystopia.

    • humanspiral@lemmy.ca · 1 point · 17 hours ago

      A core problem with classification is correlation vs causation. Sunspots and miniskirts have been correlated with stock-market returns to some degree, but such connections tend to be tenuous, not guaranteed to hold up over time or to have any meaningful relevance whatsoever. It’s easy to oversell models.
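
      Here’s a toy demo of that overselling effect (my own made-up numbers, nothing from the article): screen enough unrelated random series against the same returns, and one will look “predictive” purely by chance.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      returns = rng.normal(size=250)             # fake daily "market returns"
      candidates = rng.normal(size=(1000, 250))  # 1000 unrelated noise series

      # Correlate every candidate series with the returns and keep the best.
      corrs = np.array([np.corrcoef(s, returns)[0, 1] for s in candidates])
      best = np.abs(corrs).argmax()
      print(f"best 'predictor' correlation: {corrs[best]:+.2f}")
      # Typically around 0.2 in magnitude, even though every series is pure
      # noise -- exactly how sunspots and miniskirts end up "predicting" returns.
      ```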

  • buttnugget@lemmy.world · 14 points · 22 hours ago

    Actually, what if slavery wasn’t such a bad idea after all? Lmao they never stop trying to resurrect class warfare and gatekeeping.

  • pyre@lemmy.world · 7 points · 20 hours ago

    this should be grounds for a prison sentence. open support for Nazism shouldn’t be covered by free speech laws.

  • ZILtoid1991@lemmy.world · 1 point · 19 hours ago

    That image reminds me of a meme from “Scientific diagrams that look like shitposts”. It was titled something like “Mask of Damascus(?)/Triagones(?) - Acquire it (from a prisoner(?)) with a scimitar!”

  • psycotica0@lemmy.ca · 56 points · 3 days ago · edited

    "Imagine appearing for a job interview and, without saying a single word, being told that you are not getting the role because your face didn’t fit. You would assume discrimination, and might even contemplate litigation. But what if bias was not the reason?

    Uh… guys…

    Discrimination: the act, practice, or an instance of unfairly treating a person or group differently from other people or groups on a class or categorical basis

    Prejudice: an adverse opinion or leaning formed without just grounds or before sufficient knowledge

    Bias: to give a settled and often prejudiced outlook to

    Judging someone’s ability without knowing them, based solely on their appearance, is, like, kinda the definition of bias, discrimination, and prejudice. I think their stupid angle is “it’s not unfair because what if this time it really worked though!” 😅

    I know this is the point, but there’s no way this could possibly end up with anything other than a lazily written, comically clichéd sci-fi future where there’s an underclass of like “class gammas” who have gamma face, and then the betas that blah blah. Whereas the alphas are the most perfect ughhhhh. It’s not even a huge leap; it’s fucking inevitable. That’s the outcome of this.

    I should watch Gattaca again…

    • Tattorack@lemmy.world · 17 points · 3 days ago

      Like every corporate entity, they’re trying to redefine what those words mean. See, it’s not “insufficient knowledge” if they’re using an AI powered facial recognition program to get an objective prediction, right? Right?

      • JackbyDev@programming.dev · 1 point · 1 day ago · edited

        The most generous reading I can come up with is that facial structure is not a protected class in the US, so they’re saying it’s technically okay to discriminate against.

    • morriscox@lemmy.world · 6 points · 3 days ago

      People see me in cargo pants and a polo shirt, with a smartphone in my shirt pocket and sometimes tech stuff in my (cargo) pants pockets, and they assume that I am good with computers. I have an IT background and have been on the Internet since March of 1993, so they are correct. I call it the tech support uniform. However, people could dress similarly to try to fool people.

      People will find ways, maybe makeup and prosthetics or AI modifications, to try to fool this system. Maybe they will learn to fake emotions. This system is a tool, not a solution.

  • panda_abyss@lemmy.ca · 38 points · 3 days ago · edited

    Racial profiling keeps getting reinvented.

    Fuck that.

    They then used data on these individuals’ labour-market outcomes to see whether the Photo Big Five had any predictive power. The answer, they conclude, is yes: facial analysis has useful things to say about a person’s post-MBA earnings and propensity to move jobs, among other things.

    Correlation vs causation. More attractive people get better negotiating positions by default. People from richer backgrounds will probably look healthier. People from high-stress environments will show signs of stress in skin wrinkles and resting muscle tension.

    This is going to do nothing but reinforce systemic biases, in a kafkaesque Gattaca way.

    And then of course you have the garden of forking paths.

    These models have no constraints on their features, so we have an extremely large feature space, and we train the model to pick whichever features predict the outcome. Even the process of training, evaluating, and then selecting the best model at this scale is essentially p-hacking.
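
    A quick sketch of how that goes wrong (my own toy setup, not the paper’s actual pipeline): hand a model thousands of noise features, select the ones that correlate with the outcome, and the “validation” score looks great on data that contains no signal at all.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 5000))   # 5000 random "facial features"
    y = rng.integers(0, 2, size=200)   # labels with no relationship to X

    # The forking-paths move: pick the 20 features most correlated with
    # the outcome using ALL the data, then split and "validate".
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    keep = np.argsort(corr)[-20:]
    X_tr, X_val, y_tr, y_val = train_test_split(
        X[:, keep], y, test_size=0.5, random_state=0
    )

    model = LogisticRegression().fit(X_tr, y_tr)
    print("validation accuracy on pure noise:", model.score(X_val, y_val))
    # Comfortably above 0.5 -- the selection step manufactured the "signal".
    ```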

    • Jason2357@lemmy.ca · 1 point · 19 hours ago

      I can’t imagine a model trained like this /not/ ending up encoding a bunch of features that correlate with race. It will find the white people, then reward itself because that group does statistically better.

      • CheeseNoodle@lemmy.world · 1 point · 18 hours ago

        Even a genuinely perfect model would immediately skew toward bias: the moment some statistical fluke gets incorporated into the training data, it becomes self-reinforcing, creating and then reinforcing that bias in a feedback loop.
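
        A crude toy simulation of that lock-in (my own construction, obviously not anyone’s real hiring model): two identical groups, but the model only observes outcomes for people it selects, so one lucky or unlucky early batch freezes into a permanent gap.

        ```python
        import numpy as np

        rng = np.random.default_rng(7)
        TRUE_RATE = 0.5                  # both groups are genuinely identical
        est = {"A": 0.5, "B": 0.5}       # model's estimated success rate per group
        hired = {"A": 0, "B": 0}
        succ = {"A": 0, "B": 0}

        for _ in range(50):
            g = max(est, key=est.get)    # each round, hire 100 from the "better" group
            outcomes = rng.random(100) < TRUE_RATE
            hired[g] += 100
            succ[g] += outcomes.sum()
            est[g] = succ[g] / hired[g]  # only the selected group's estimate updates

        print(est, hired)
        # The estimates stay near 0.5, but one group ends up with nearly all
        # the hiring: a fluke dip in the loser's estimate is never re-tested,
        # so it can never be corrected.
        ```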

        • Jason2357@lemmy.ca · 1 point · 17 hours ago

          Usually these models are trained on past data, and then applied going forward. So whatever bias was in the past data will be used as a predictive variable. There are plenty of facial feature characteristics that correlate with race, and when the model picks those because the past data is racially biased (because of over-policing, lack of opportunity, poverty, etc), they will be in the model. Guaranteed. These models absolutely do not care that correlation != causation. They are correlation machines.
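
          A small illustration of that (my own synthetic numbers): even if you withhold the protected attribute, a model trained on biased past outcomes just rebuilds it from any correlated facial feature.

          ```python
          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(3)
          n = 10_000
          race = rng.integers(0, 2, size=n)             # protected attribute, withheld from the model
          proxy = race + rng.normal(scale=0.5, size=n)  # facial feature correlated with it
          skill = rng.normal(size=n)                    # the actually job-relevant signal

          # "Past outcomes" contaminated by historical bias against race == 1
          past_outcome = (skill - 1.5 * race + rng.normal(size=n)) > 0

          model = LogisticRegression().fit(np.column_stack([proxy, skill]), past_outcome)
          print("learned weight on the proxy feature:", model.coef_[0][0])
          # Strongly negative: the model never saw `race`, yet it penalizes the
          # correlated facial feature -- and therefore the group -- anyway.
          ```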

    • ssladam@lemmy.world · 1 point · 22 hours ago

      Exactly. It’s like saying that since every president has been over 6’ tall, we should only allow tall people to run for president.

  • verdi@feddit.org · 21 points · 3 days ago · edited

    FYI, it’s not a paper; it’s a blog post from well-connected and presumably highly educated people benefiting from institutional prestige to see their poorly conducted study propagated ad aeternum without a modicum of relevant peer review.

    edit: After a few more minutes, it’s an unreliable psychopath detector.

  • entwine@programming.dev · 13 points · 2 days ago

    This fascist wave is really bringing out all the cockroaches in our society. It’s a good thing you can’t erase anything on the internet, as this type of evidence will probably be useful in the future.

    You’d better get in on a crypto grift, Kelly Shue of the Yale School of Management. I suspect you’ll have a hard time finding work within the next 1-3 years.

    • 3abas@lemmy.world · 6 up / 1 down · 2 days ago

      They absolutely can erase things on the internet. Are you archiving this for when the other archives die? Will you be able to share it when the time comes? And will anyone care?