Can robots show racial bias?

Technically Incorrect: A beauty contest judged by artificial intelligence -- and sponsored by Microsoft and Nvidia among others -- seems to throw up a bias against dark-skinned entrants.

Technically Incorrect offers a slightly twisted take on the tech that's taken over our lives.

Will a robot think her beautiful?

Beauty.Ai/YouTube screenshot by Chris Matyszczyk/CNET

Of course it was a good idea.

We can't have the likes of F-list celebrities and business moguls deciding what beauty is.

We need objectivity. We need absolutism. We need robots to be the judges.

That was the premise behind a recent beauty contest that was judged by artificial intelligence.

You might wonder what robots know about beauty. In essence, nothing. But it would surely be amusing to see what they think is beautiful, wouldn't it?

So along came Beauty.AI.

This was the creation of Youth Laboratories in Russia and Hong Kong. It was supported by Microsoft and Nvidia among others.

It requested images from all over the world and declared that its robots would judge them all fairly.

It promised that the robots would look at "symmetry, skin color, wrinkles and many other parameters affecting our perception of your beauty."

Wrinkles? What's wrong with wrinkles?

The jury actually included a robot whose title was "Wrinkle Director." The robot's name? Rynkl.

It all seemed even more entertaining than Miss Universe. Then the results came out.

From the more than 6,000 entrants, there were 44 winners. Most were white. Six were Asian. Only one was dark-skinned.

Were Rynkl and his friends racist?

"There were really few submissions from Afroamerican, Asian and Native American representatives' submissions," Beauty.Ai's head of beauty science, Anastasia Georgievskaya, told me.

Indeed, she supplied me with an ethnicity chart that showed more than 75 percent of entries were European and a mere 1.56 percent African.

But could that have been the only problem?

Alex Zhavoronkov, the organization's chief science officer, admitted to the Guardian that in constructing the algorithm, the source data simply didn't include enough minorities.

Which exposes an enormous issue for any artificial intelligence. Like a teacher or a police officer, it's only as good as the data it feeds on.
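The data-dependence point can be seen in a toy sketch (my own illustration; the contest's actual algorithms are not public). Here a "judge" learns an "ideal" face as the average of its training examples, reduced to a single made-up numeric feature. Feed it a skewed training set and its ideal drifts toward the majority, so the underrepresented group scores lower by construction:

```python
# Toy illustration, not Beauty.Ai's actual code: a "judge" that scores
# faces by closeness to the average of its training examples.
# Faces are reduced to one invented numeric feature on a 0-100 scale.

def train_judge(training_faces):
    """Learn the 'ideal' as the mean of the training data."""
    return sum(training_faces) / len(training_faces)

def score(ideal, face):
    """Higher means 'more beautiful': closeness to the learned ideal."""
    return 100 - abs(ideal - face)

# Skewed training set: nine majority-group examples for every one
# minority-group example.
skewed_data = [20] * 9 + [80] * 1
ideal = train_judge(skewed_data)   # 26.0 -- pulled toward the majority

majority_score = score(ideal, 20)  # 94.0
minority_score = score(ideal, 80)  # 46.0
# The model isn't "deciding" anything; it simply mirrors its diet.
```

Nothing in the code mentions race at all, which is exactly the problem: the bias arrives silently through the training set.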

We're being encouraged to become more and more dependent on these supposedly intelligent machines. But intelligence is one thing. Objectivity, infallibility and even accuracy are quite different.

How easy it is for those constructing artificial intelligence to insert their own biases to such a depth that these biases become norms.

Georgievskaya told me that a new contest will be held in October.

"We will work on the present algorithms and make them more advanced and will add new robots to the jury," she told me.

Again, what is "more advanced" is subject only to the interpretation of the creators. And what gets inserted or omitted is never known by those who are subject to the machines' whims.

I cannot confirm that senior Google executives have been asked to sit on the jury.

Still, here's something to look forward to.

"We will probably have a contest with physical robots next year in one Asian country," Georgievskaya told me.

Will the humans be allowed to tell the robots how ugly they are?