Social media is flooded with fake news and tampered images, which are easy to accept and share without fact-checking. Experts around the globe are worried about new AI tools being used to edit videos and images. Many of these editing tools are developed by Adobe. However, the company is also working on the opposite problem, researching how machine learning can be used to detect edited images.
At the CVPR computer vision conference, the company highlighted its latest work and demonstrated how digital forensics done by humans can be automated by machines in less time. The research paper does not represent a breakthrough in the field, and the product is not yet commercially available. What is interesting, however, is that Adobe has taken an interest in this area at all.
A company spokesperson said this was an early-stage research project, but that in the future the company expects to play a pivotal part in developing technology that verifies and authenticates digital media.
Adobe has never released software to spot fake pictures, but the company has worked with law enforcement agencies, using digital forensics to help find missing children. The research paper shows how machine learning can be used to detect three types of image manipulation:
- Splicing (two parts of different images are combined)
- Cloning (objects within an image are copied and pasted)
- Removal (an object is edited out altogether)
To find this kind of tampering, forensic experts generally look for clues hidden in the layers of an image. Edits like these tend to introduce inconsistencies in the random variations in brightness and color created by image sensors, often called sensor noise. Splicing together two different images, for example, leaves behind background noise that does not match.
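The noise-mismatch idea described above can be illustrated with a short sketch. This is not Adobe's method, just a minimal classical-forensics example: extract a high-pass "noise residual" from a grayscale image, then measure the noise level block by block. A spliced-in patch from a different camera or compression level will often show a noise level that stands out from its neighbors. The threshold and block size here are arbitrary choices for the demo.

```python
import numpy as np

def noise_residual(gray):
    """Subtract a 3x3 local mean from each pixel. What remains is mostly
    sensor noise, which should look statistically uniform in an untouched photo."""
    padded = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    # 3x3 box blur built from shifted sums (avoids a SciPy dependency)
    blur = sum(
        padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)
    ) / 9.0
    return gray - blur

def blockwise_noise_std(gray, block=32):
    """Standard deviation of the noise residual per block. Blocks whose noise
    level differs sharply from the rest are candidate spliced regions."""
    res = noise_residual(gray.astype(np.float64))
    rows, cols = gray.shape[0] // block, gray.shape[1] // block
    out = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            out[r, c] = res[r * block:(r + 1) * block,
                            c * block:(c + 1) * block].std()
    return out

# Demo: simulate a splice by pasting a much noisier patch into a clean image
rng = np.random.default_rng(0)
img = rng.normal(128, 2, size=(128, 128))          # host image: low sensor noise
img[:32, :32] += rng.normal(0, 12, size=(32, 32))  # pasted patch: noisier source
stds = blockwise_noise_std(img)
print(stds[0, 0] > 3 * np.median(stds))  # the pasted block stands out -> True
```

Real detectors are far more sophisticated, but the principle is the same: regions of an honest photograph share one noise fingerprint, and composited regions usually do not.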
Like other machine learning systems, Adobe's was trained on a large dataset of edited images, from which it learned the common patterns that point to tampering. It scores higher in some tests than similar systems developed by other teams. However, the research did not cover spotting deepfakes, a newer form of edited video created using AI.
According to digital forensics expert Hany Farid, the benefit of new machine learning approaches is that they can identify artifacts that are not obvious and were not previously known. The downside is that these systems are only as good as the training data fed into them, and they are less likely to learn higher-level artifacts such as discrepancies in the geometry of shadows and reflections.
These caveats aside, it is encouraging to see more research being conducted to help spot digital fakes, because such tools help us separate fact from fiction.