Artificial intelligence can now make pretty pictures out of your grainy, noise-riddled ones.
Nvidia and researchers from Aalto University and MIT trained the AI's neural network by feeding it thousands of photos. But instead of showing it before-and-after pairs of corrupted and clean examples, the researchers let the AI study only corrupted photos.
"It is possible to learn to restore signals without ever observing clean ones, at performance sometimes exceeding training using clean exemplars," the researchers said in their paper. "[The neural network] is on par with state-of-the-art methods that make use of clean examples -- using precisely the same training methodology, and often without appreciable drawbacks in training time or performance."
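The idea behind training on corrupted photos alone can be illustrated with a toy sketch. The pixel values, noise level, and averaging loop below are illustrative assumptions, not the paper's actual network: with zero-mean noise, minimizing squared error against noisy targets converges to the same answer as training against clean targets, because the L2 minimizer is the mean.

```python
# Toy sketch of learning from corrupted observations only
# (an illustration of the principle, not Nvidia's implementation).
import random

random.seed(0)

clean = [0.2, 0.5, 0.9, 0.4]  # stand-in "clean" pixel values (hypothetical)

def corrupt(signal, sigma=0.3):
    """One corrupted observation: the signal plus zero-mean Gaussian noise."""
    return [v + random.gauss(0.0, sigma) for v in signal]

# "Train" a per-pixel estimate by averaging many noisy targets --
# exactly the value an L2 loss drives a network's output toward.
n = 20000
estimate = [0.0] * len(clean)
for _ in range(n):
    for i, v in enumerate(corrupt(clean)):
        estimate[i] += v / n

# The estimate recovers the clean signal without ever observing it.
max_err = max(abs(e - c) for e, c in zip(estimate, clean))
print([round(e, 3) for e in estimate], "max error:", round(max_err, 4))
```

Because the noise averages out to zero, the estimate lands on the clean values even though no clean example was ever supplied, which is the intuition the quoted result formalizes.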
Previously, using a different technique, Nvidia trained a deep learning system to produce slow-motion video by adding frames after the video was shot. By showing it thousands of reference videos already in slow motion, the researchers taught the AI to predict how the missing frames were supposed to look.
The researchers are presenting their work at the International Conference on Machine Learning in Stockholm, Sweden, this week.