
Facebook trains computers to learn more like humans do

The social network's AI research team says it's made a "breakthrough" with its self-supervised computer vision model known as Seer.

Queenie Wong, Former Senior Writer
[Image: Facebook describes Seer as a "billion-parameter self-supervised computer vision model that can learn from any random group of images on the internet." Credit: Facebook]

Facebook chief artificial intelligence scientist Yann LeCun fiddles with a pen as he explains how he wants machines to learn through observation, just like babies do. 

In the first few months of life, babies pick up cues about how the world works by looking at what's in front of them, LeCun says. They learn that objects are three-dimensional and can be hidden. Dropping the pen to make a point during a video interview, LeCun notes that babies also learn that if an object isn't supported, it will fall.

"We'd like artificial intelligence systems to learn how the world works by observation because that will have a huge implication," he said. "It would allow machines to have some level of common sense."

The social media giant's AI research team is nudging computers closer to that goal, teaching them to fill in the blanks without relying on humans to label or curate data. The approach, known as self-supervised learning, has the potential to improve Facebook's products, including content moderation. The social network's AI team said Thursday it achieved a "breakthrough" in the effort when its self-supervised computer vision model, known as Seer (short for SElf-supERvised), learned from a billion random, unlabeled and uncurated public Instagram images.
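According to the accompanying research, Seer pairs this idea with a specific recipe (the SwAV training algorithm and a RegNet backbone), but the core trick can be illustrated more simply. Below is a minimal, hypothetical sketch of one common self-supervised objective, contrastive learning, written in PyTorch: the network is trained to produce matching representations for two augmented views of the same unlabeled image, so the supervision signal comes from the data itself rather than from human labels. The tiny `encoder`, the `augment` function and the toy tensors are illustrative stand-ins, not Seer's actual components.

```python
# Minimal contrastive self-supervised sketch (illustrative only; Seer's
# actual recipe, SwAV with a RegNet backbone, is considerably more involved).
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(            # toy stand-in for a large vision backbone
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32),
)

def augment(x):
    """Cheap stand-in for real augmentations such as crops and color jitter."""
    return x + 0.1 * torch.randn_like(x)

def contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE loss: each image's two views should match each other,
    not the other images in the batch. No labels are required."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # pairwise view similarities
    targets = torch.arange(z1.size(0))      # positive pairs sit on the diagonal
    return F.cross_entropy(logits, targets)

images = torch.randn(8, 3, 32, 32)          # a batch of unlabeled images
loss = contrastive_loss(encoder(augment(images)), encoder(augment(images)))
loss.backward()                             # the data supervises itself
```

The point of the exercise is scale: because no human annotation is needed, the same loop can, in principle, be fed arbitrarily many uncurated images, which is exactly what Facebook did with a billion Instagram photos.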

After learning from these images, Seer correctly identified and categorized the dominant object in photos with an accuracy rate of 84.2%. Seer outperformed the best existing self-supervised systems by one percentage point, according to the study.
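One standard way to turn a pretrained self-supervised model into a number like that is to freeze the learned network and train only a small classifier on top of its features against a labeled benchmark (the study's exact evaluation protocol is described in the paper; full fine-tuning is another common option, but the idea is the same). A hypothetical linear-probe sketch, again with a toy stand-in encoder and made-up categories:

```python
# Hypothetical linear-probe evaluation: freeze the pretrained backbone,
# train only a small classifier on labeled data. Encoder and data are toys.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(                    # stand-in for a pretrained backbone
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32),
)
encoder.eval()
for p in encoder.parameters():
    p.requires_grad = False                 # pretrained features stay frozen

probe = nn.Linear(32, 10)                   # classifier over 10 toy categories
opt = torch.optim.SGD(probe.parameters(), lr=0.1)

images = torch.randn(8, 3, 32, 32)          # a small labeled batch
labels = torch.randint(0, 10, (8,))

with torch.no_grad():
    feats = encoder(images)                 # reuse the learned representation
loss = F.cross_entropy(probe(feats), labels)
loss.backward()
opt.step()                                  # only the probe's weights move
```

Because only the thin classifier is trained, the resulting accuracy is largely a measure of how good the self-supervised features already are.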

The findings are a "major breakthrough that ultimately clears the path for more flexible, accurate and adaptable computer vision models in the future," according to a blog post that accompanied the study.

Recognizing and categorizing images correctly could help improve a variety of products. Facebook and other social networks use AI to rank content in feeds and flag images and videos that violate their rules against hate speech or nudity. AI is used in cars to help drivers avoid collisions and in medical imaging to streamline diagnoses. Facebook, which plans to release its first pair of smart glasses this year, also uses AI in its virtual and augmented reality systems to track a person's position in an area.  

"The advantage of self-supervised learning is that you can train very big networks and it will still be accurate," LeCun said.

Online images can be tough for a machine to recognize because they can be blurry or taken from odd angles. If machines are able to learn on their own, they can adapt to those circumstances. 

Self-supervised learning could also help reduce biases that have cropped up in some AI research, LeCun said. For example, some studies have shown that facial recognition systems have a harder time correctly identifying minorities, possibly because researchers use photo sets that include more white people. Removing the human labeling element might reduce some of that bias, he said, cautioning that the theory is "a little speculative."

While training machines to be as intelligent as humans might conjure up concerns that AI will outsmart humanity, it isn't a future LeCun worries about. Intelligence, he notes, isn't connected to a desire to take over the world.

"The desire to dominate other entities is hardwired into human nature," he said, "but there is absolutely no reason to hardwire it into our AI systems or robots."