Major Social Media Sites Are Failing LGBTQ+ People: A Report

Major social media companies are failing to protect LGBTQ+ users from hate speech and harassment, according to a new report released Tuesday by GLAAD.

The Social Media Safety Index report highlights how major platforms fail to adopt or enforce policies that protect user data, refuse to shield users from online hate, and cannot or will not stop the spread of harmful stereotypes and misinformation about LGBTQ+ people.

Now in its fourth year, the report ranks six social media platforms on 12 different criteria: Meta’s Facebook, Instagram and Threads, as well as TikTok, YouTube and X (formerly Twitter). Those criteria include whether each company has explicit policies protecting trans, non-binary and gender non-conforming users from targeted misgendering and deadnaming; gives users the option to add pronouns to their profiles; protects legitimate LGBTQ+-related ads; and tracks and discloses violations of its LGBTQ+ inclusion policies.

GLAAD found that social media companies miss the mark on nearly all of these metrics—and allow harmful rhetoric to proliferate on their platforms, even as they rake in billions in advertising profits.

Almost all of the platforms received an F rating and a corresponding percentage score. TikTok, however, received a D+, a slight improvement on last year’s rating, because it recently adopted a policy preventing advertisers from targeting users based on their sexual orientation or gender identity.

While many of these social media companies have policies that protect LGBTQ+ users on paper, the report notes that the platforms do little to stop the spread of harmful and false information.

For example, X, which received the lowest percentage score, has seen a sharp rise in misrepresentations of LGBTQ+ people from “anti-LGBTQ” influencers. The Libs of TikTok account, run by Chaya Raichik, is notorious for posting misinformation about gender-affirming care and equating LGBTQ+ people with “groomers” and “pedophiles.” Its posts have been linked to numerous bomb threats at schools, gyms, and children’s hospitals.

Elon Musk, the owner of X, has also promoted anti-trans content from Raichik and others, including posts suggesting restrictions on trans women’s participation in sports. Republican lawmakers, who have introduced a record number of anti-LGBTQ+ bills in statehouses across the country every year since 2020, have likewise amplified and promoted anti-LGBTQ+ sentiment on social media.

“There is a direct line between dangerous online rhetoric and violent offline behavior targeting the LGBTQ community,” Sarah Kate Ellis, CEO of GLAAD, wrote in the report.

Although X was one of the biggest platforms for anti-LGBTQ+ rhetoric, it took in only $2.5 billion in ad revenue in 2023. Meta — which allowed posts that equated trans people with “terrorists,” “perverts,” and the “mentally ill” to stay on its platforms — generated $134 billion in revenue last year.

Social media companies have also targeted legitimate LGBTQ+ content and made their platforms less safe and accessible for LGBTQ+ users, the report says.

The report notes one case from March of this year, when the non-profit Men Having Babies shared a photo of two gay dads and their newborn baby in an Instagram post. Soon after it was posted, the organization saw that the platform had flagged the post as “sensitive material” that “may contain graphic or violent content.”

That label is typically used to “tone down extreme content,” Leanna Garfield, GLAAD’s social media safety program manager, told PinkNews earlier this year. “That should not include something as innocent as a photo of two fathers with their newborn.”

Increased use of artificial intelligence tools for content moderation could lead to LGBTQ+ posts being targeted even more. An investigation by Wired in April found that AI systems like OpenAI’s Sora exhibited biases in their portrayals of queer people.

Companies like Facebook have sometimes relied “exclusively” on automated systems to review content, forgoing human review entirely, Axios reported last year. A GLAAD report released around the same time called the practice “of great concern,” warning it could endanger the safety of all users, including those who are LGBTQ+.

The new GLAAD report also notes that other tech companies, which it did not name, have created “gender auto-identification” technology that purports to predict a person’s gender in order to better sell products through targeted ads. But privacy advocates have warned that these technologies could be taken a step further, to categorize and monitor people in gendered or gender-segregated spaces such as bathrooms and locker rooms.

Some countries and regions, like the European Union, have adopted restrictions on AI and regulated the practices of social media platforms, but the United States has lagged behind. The GLAAD report recommends that platforms strengthen and enforce their existing policies to protect LGBTQ+ people—including by stopping advertisers from targeting LGBTQ+ users and by improving content moderation rather than simply automating it.
