
pushpendre / flagplant


Advancements in Deep Learning have made it possible for lay-persons to create photo-realistic computer-generated images (CGI) cheaply. Such CGIs are called "DeepFakes". DeepFakes can be used to spread malicious lies, and much attention has been paid to the direct risk they pose. However, deepfakes create an even bigger, indirect problem: malicious actors can discredit genuinely real images by claiming that they are deepfakes. Similar concerns were raised in a recent report about an alleged deepfake of a teenager (link). This potential for abuse is readily apparent to even a casual observer, as the YouTube screenshot below shows. So, my motivation is to prove that a real image is not a fake.

A slightly different, secondary problem that this app solves is staking a claim to content without revealing the content to the world. This is a niche problem, but it is important enough that developers have built complicated web services to support this use case. This ycombinator thread also describes some other approaches. My approach has the benefit that it does not require a web service, just a simple UI that never needs to communicate with the web. Therefore, it is more secure. Moreover, my approach creates a human-readable and verifiable proof of content possession.
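The post does not spell out its mechanism at this point, but claim-staking without revealing content is commonly done with a hash commitment: publish a cryptographic digest of the file and keep the file itself private. The sketch below only illustrates that generic idea and is not claimed to be this app's exact approach; the helper name and example file name are hypothetical.

```python
# Generic hash-commitment sketch (illustrative only, not flagplant's code).
# Publishing the digest stakes a claim to the file's contents without
# revealing them; anyone later shown the file can recompute and compare.
import hashlib
from pathlib import Path

def commit_to_file(path: str) -> str:
    """Return the SHA-256 digest of the file at `path` (hypothetical helper)."""
    data = Path(path).read_bytes()
    return hashlib.sha256(data).hexdigest()

if __name__ == "__main__":
    # "my_photo.jpg" is an assumed example filename.
    digest = commit_to_file("my_photo.jpg")
    print(digest)  # publish this digest; keep the photo itself private
```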

As I mentioned above, my motivation is to prove that a real image is not fake. However, I actually solve a closely related problem: proving that a user possessed a given file before a certain time. Let's say that a user possesses a digital artefact/content with value X at time t. The method comprises the following steps:
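For intuition, here is a hedged verification sketch under the usual hash-and-timestamp assumption: if a digest of X was recorded publicly at time t, then later revealing a file whose digest matches shows the file existed, and was in the claimant's possession, no later than t. This is a generic illustration, not necessarily the app's exact steps; the helper and file names are hypothetical.

```python
# Illustrative verification sketch (not the post's actual method).
import hashlib
from pathlib import Path

def verify_possession(path: str, published_digest: str) -> bool:
    """Check that the revealed file hashes to the digest published at time t."""
    data = Path(path).read_bytes()
    return hashlib.sha256(data).hexdigest() == published_digest

# Hypothetical usage: `published_digest` is the value that was made public
# (tweeted, printed, notarised, etc.) at or before time t.
# verify_possession("my_photo.jpg", published_digest)
```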
