The fact is that this is usually impossible: there simply isn’t enough detail in a small number of pixels to eke out a clearer image than what you already have in the original file. Yet a team of Google researchers has figured out a way to do just that, Joinfo.com reports with reference to The Next Web.
In the image above, the photos on the left show the 8×8 (64-pixel) source images, while those in the middle show what Google Brain’s new system can construct from them. The rightmost column shows the actual high-resolution photos the sources were taken from; compare it with the middle column to see just how accurate Google’s system is.
Ryan Dahl, Mohammad Norouzi and Jonathon Shlens came up with a pixel recursive super resolution model that can synthesize details in low-res photos by combining the power of two neural networks.
First, the conditioning network attempts to map the 8×8 source image against similar high-resolution images and approximates what the image might look like when zoomed in.
Then, the prior network adds realistic details to the final output. It does this by learning what each pixel in a low-res sample generally corresponds to in high-res images.
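In the paper, the two networks’ per-pixel predictions are combined by adding their logits over the 256 possible intensity values before normalizing. The sketch below illustrates that combination step only; the two "networks" here are crude stand-ins (a nearest-neighbour upsampler and a fixed intensity bias), not the real conditioning CNN or PixelCNN prior.

```python
import numpy as np

NUM_VALS = 256  # 8-bit intensity values


def conditioning_logits(low_res, scale=4):
    # Stand-in for the conditioning network: nearest-neighbour upsample
    # the 8x8 input, then favour intensities near the upsampled value.
    up = np.repeat(np.repeat(low_res, scale, axis=0), scale, axis=1)
    vals = np.arange(NUM_VALS)
    # Gaussian-shaped logits centred on the upsampled intensity.
    return -((up[..., None] - vals) ** 2) / (2 * 25.0 ** 2)


def prior_logits(shape):
    # Stand-in for the prior network: a mild bias toward mid-range
    # intensities. (The real prior conditions on previously generated
    # pixels to add plausible detail.)
    vals = np.arange(NUM_VALS)
    bias = -((vals - 128) ** 2) / (2 * 80.0 ** 2)
    return np.broadcast_to(bias, shape + (NUM_VALS,))


def super_resolve(low_res, scale=4):
    # Sum the two networks' logits per pixel, softmax over the 256
    # values, and take the most likely intensity for each pixel.
    h, w = low_res.shape
    logits = conditioning_logits(low_res, scale) + prior_logits(
        (h * scale, w * scale))
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs.argmax(axis=-1).astype(np.uint8)


small = np.random.default_rng(0).integers(0, 256, (8, 8))
out = super_resolve(small)
print(out.shape)  # (32, 32)
```

In the actual model the prior’s logits depend on the pixels generated so far, which is what lets it invent sharp, realistic texture rather than a blurry average.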
As you can see, the system is fairly competent. As Ars Technica explained, human observers who were shown a real high-resolution photo of a celebrity’s face alongside Google Brain’s synthesized image were fooled 10 percent of the time (where 50 percent would count as a perfect score). When images of a bedroom were used, 28 percent of human subjects were fooled by the computed image.
While the tech isn’t nearly ready for prime time, it could certainly come in handy in helping police solve crimes. However, because the images aren’t real (they’re computed approximations), they can’t serve as evidence in criminal cases. Still, it’s interesting to see how neural networks are getting closer to bringing sci-fi to life.