
New Google Brain software may help bring new life to old photos. How will it do so? By using machine learning to sharpen extremely low-resolution images.
The “zoom in and enhance” move is a common trope on TV, most frequent in crime and investigation shows, and the technology behind it has just taken a step toward real life.
The announcement came from Google Brain, a deep learning research project initiated back in 2011. The team has already released a number of notable technologies; its work powers, for example, Google's speech recognition system and photo searches on Google+.
Now, the latest Google Brain software may help revive images from tiny, pixelated sources, using them as the base material for new, detailed pictures. But how does it do so?
The Google Brain software has already been tested. One such test used an 8×8 image, which contains just 64 source pixels. The software takes this image and tries to extract as much information from it as possible.
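To get a sense of how little the software has to work with, here is a minimal Python sketch (using the Pillow library; the file name is only a placeholder) that reduces an ordinary photo to that 8×8, 64-pixel format and then blows it back up naively:

```python
from PIL import Image

# Placeholder file name; any photo will do.
photo = Image.open("portrait.jpg")

# Reduce the photo to an 8x8 thumbnail: 64 pixels in total,
# which is all the information the super-resolution system receives.
tiny = photo.resize((8, 8), Image.BICUBIC)
print(list(tiny.getdata())[:5])   # the first few of the 64 pixel values

# Scaling it back up naively (nearest-neighbour) shows why the task is hard:
# the result is just 64 large coloured blocks with no detail at all.
blocky = tiny.resize((256, 256), Image.NEAREST)
blocky.save("blocky_preview.png")
```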
Based on that, it then starts adding details. How can it do this? With help from two neural networks; the Google Brain software is, quite expectedly, powered by artificial intelligence.
The first step in the image revival is taken by the conditioning network, which tries to map the initial 8×8 image by comparing it to other high-resolution images. To make the comparison fair, those high-resolution photos are also reduced to the same 8×8 format, and the network looks for a match between the two.
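As an illustration only, and not Google's actual architecture, the following PyTorch sketch shows the idea behind a conditioning network: the high-resolution training photo is pooled down to 8×8, and a small convolutional network maps that tiny input to per-pixel predictions (256 possible values per colour channel) at the target resolution:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditioningNet(nn.Module):
    """Toy stand-in for the conditioning network: it maps the 8x8 input
    to per-pixel predictions at the 32x32 target resolution."""
    def __init__(self, channels=3, levels=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # One set of logits (0-255) per colour channel of every output pixel.
        self.head = nn.Conv2d(64, channels * levels, 1)
        self.channels = channels
        self.levels = levels

    def forward(self, tiny):                      # tiny: (B, 3, 8, 8)
        x = F.interpolate(tiny, scale_factor=4)   # upsample to 32x32
        x = self.features(x)
        logits = self.head(x)                     # (B, 3*256, 32, 32)
        return logits.view(-1, self.channels, self.levels, 32, 32)

# During training, the 32x32 ground-truth photo is also reduced to 8x8,
# so the network can learn the mapping between the two resolutions.
hires = torch.rand(1, 3, 32, 32)
tiny = F.avg_pool2d(hires, kernel_size=4)         # the 8x8 version of the same photo
cond_logits = ConditioningNet()(tiny)
print(cond_logits.shape)                          # torch.Size([1, 3, 256, 32, 32])
```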
The second step involves the prior network, which has already been trained on a large quantity of image data consisting of real, high-resolution photos. This prior network tries to add details to the initial 8×8 image.
It does so by implementing PixelCNN, which adds realistic, high-resolution details to the source image. The process begins after the 8×8 image is upscaled, as new pixels are added one by one.
These new pixels are chosen to match the image's class, based on what the software “knows” about that class. To put it simply, on a portrait it will place facial details where they should be.
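A rough PyTorch sketch of the masked convolution at the heart of PixelCNN, simplified here to a single grayscale channel (the real model handles colour and is far deeper), helps make the idea concrete: each pixel's prediction may only depend on the pixels above it and to its left, which is what allows an image to be generated one pixel at a time:

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """The core trick of PixelCNN: a convolution that can only 'see' pixels
    above and to the left of the one it is predicting."""
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        _, _, h, w = self.weight.shape
        mask = torch.ones_like(self.weight)
        # Type 'A' also hides the centre pixel; type 'B' may use it.
        mask[:, :, h // 2, w // 2 + (mask_type == 'B'):] = 0  # centre row, to the right
        mask[:, :, h // 2 + 1:, :] = 0                        # all rows below
        self.register_buffer("mask", mask)

    def forward(self, x):
        self.weight.data *= self.mask   # zero out the "future" pixels
        return super().forward(x)

# A tiny PixelCNN-style prior: stacked masked convolutions ending in
# 256 logits (possible intensity values) per pixel.
prior = nn.Sequential(
    MaskedConv2d('A', 1, 64, 7, padding=3), nn.ReLU(),
    MaskedConv2d('B', 64, 64, 3, padding=1), nn.ReLU(),
    MaskedConv2d('B', 64, 256, 1),
)
img = torch.rand(1, 1, 32, 32)
print(prior(img).shape)   # torch.Size([1, 256, 32, 32])
```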
To produce the final image, the two networks combine their outputs, which usually yields a plausible, probable result.
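The paper describes combining the two networks by adding their per-pixel predictions (logits) before sampling; the toy sketch below, in which random numbers stand in for real network outputs, shows how a single pixel value would be chosen:

```python
import torch
import torch.nn.functional as F

# Stand-in logits from the two networks for one pixel's intensity
# (256 possible values); in practice these come from the models above.
cond_logits = torch.randn(256)    # "stay faithful to the 8x8 input"
prior_logits = torch.randn(256)   # "look like a realistic photo"

# Adding the logits lets both networks influence every pixel choice.
combined = cond_logits + prior_logits
probs = F.softmax(combined, dim=0)

# Sampling (rather than always taking the most likely value) is why the
# same blurry source can yield several different plausible reconstructions.
value = torch.multinomial(probs, num_samples=1).item()
print(f"chosen intensity: {value}")
```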
The Google Brain software produced quite positive results in real-world testing. Human observers were shown the upscaled, computed image alongside the genuine high-resolution photo and asked whether they could tell the two apart. The same method was used for bedroom photos.
More details about the technology were released in a research paper titled “Pixel Recursive Super Resolution”, which can be accessed online through the Cornell University Library (arXiv) website.
The system could come to help restore old photos, or at least offer an idea of what a pixelated image depicts. A perfectly accurate reconstruction is not yet possible; nonetheless, the Google Brain software may help recreate a plausible version of an object, location, or person in a picture.
Image Source: Flickr