A new AI claims it can help remove racism on the web. So I put it to work

Can AI flag truly problematic content?

I tend to believe technology can’t solve every problem.

Why, it’s not even managed to solve the vast problems caused by technology.

Yet when I received an email headlined: “AI to remove racism,” how could I not open it? After all, AI has already removed so many things. Human employment, for example.

The email came on behalf of a company called UserWay. It claims to have a widget that is “the world’s most advanced AI-based auto-remediation technology.”

Within its paid offering, UserWay now has what it calls an AI-Powered Content Moderator. This, it hopes, will allow companies to ensure their websites identify problematic language — reflecting racial bias, gender bias, age bias, and disability slurs, for example — so that they can decide whether to change it or remove it.

As far as UserWay is concerned, this is “the first general, cross-website content moderation tool designed specifically for the greater public.”

UserWay says it performed a test on 500,000 websites and found that 52% had examples of racial bias, 24% had examples of gender bias, and 12% featured age bias.

To give you an example of the sorts of words and phrases flagged: "whitelist," "blacklist," "black sheep," "chairman," and "mankind," as well as language of the more overtly offensive sort.
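Terms like these suggest the moderation rests, at least in part, on a curated list lookup. Here is a minimal sketch of how such keyword flagging might work; the term list, categories, and matching logic are hypothetical illustrations, not UserWay's actual criteria.

```python
import re

# Hypothetical curated terms mapped to bias categories.
# UserWay says it "curates the terms internally"; this list
# is only an illustration built from the examples above.
FLAGGED_TERMS = {
    "whitelist": "racial bias",
    "blacklist": "racial bias",
    "black sheep": "racial bias",
    "chairman": "gender bias",
    "mankind": "gender bias",
}

def flag_content(text: str) -> list[tuple[str, str]]:
    """Return (term, category) pairs found in the text."""
    hits = []
    lowered = text.lower()
    for term, category in FLAGGED_TERMS.items():
        # Whole-word match so "chairman" doesn't fire inside other words.
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            hits.append((term, category))
    return hits

print(flag_content("The chairman approved the blacklist."))
# → [('blacklist', 'racial bias'), ('chairman', 'gender bias')]
```

Note what a sketch like this cannot do: "blacklist" is flagged whether it appears in a slur or in a perfectly neutral discussion of firewall configuration. That context blindness is exactly the limitation the column comes back to below.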

Naturally, I asked UserWay to undertake another test. I gave it the names of some well-known news and business websites and wondered which of these might be great offenders. Or not. At least according to this AI.

I fear that, given our fractious times, your own political antennae may already be sending signals of acceptance or rejection. Please bear with me, as one or two of these results might surprise.

UserWay says it examined a representative sample of 10,000 pages from each of these sites — ranging from Fox News to The Huffington Post, from The Daily Caller to The New York Times — and then offered me its artificially intelligent conclusions.

The AI declared that, overall, the Washington Examiner had the most problematic content, followed by The Daily Caller. This wasn’t so much because of pages with racial bias, but because of pages with gender bias and racial slurs.

But before you cheer for your side or begin to throw objects, please let me tell you which site — according to UserWay — had the most pages including racial bias. It was, in this sample, ESPN.com. Followed by CNBC.com.

And what if I told you that this AI believes ACLU.org has more problematic pages than FoxNews.com?

While you’re digesting that, I’ll add this: The AI also declared that FoxNews.com has fewer pages with gender bias than do CNN.com and The Washington Post’s website.

I have no interest in besmirching any of these sites. At least publicly.

These results may make one or two people wonder, however, whether racism, sexism, and gender bias aren’t the exclusive preserve of one political bent or another. It may also make some wonder about the very essence of AI as a content moderator.

A considerable element of such AI is the selection of criteria by which it makes its decisions. That’s why the companies that operate the sites have to decide themselves which words and phrases are acceptable and which aren’t.

If there’s one thing that’s sure about AI, it’s that human nuance is not its strength. Sometimes it’ll identify words and phrases without exactly understanding the context. And, who knows, certain terms that are currently acceptable may not be so positively received in even a few months’ time.

When I asked UserWay how it chooses the words and phrases to be flagged, it told me it “curates the terms internally based on our own research.”

Which did tend a little toward Facebook-speak. 

Talking of which, I asked UserWay to look at Facebook.com too. Oddly, it couldn’t produce any results.

UserWay’s Founder and CEO Allon Mason told me: “It seems that Facebook is proactively preventing scanners and bots from scanning its site.” 

I’m taken aback.
