As with everything, some trust is required eventually; it's more about reducing the amount of trust required than removing it entirely. It's the same with HTTPS: website certificates only work if you trust the root certificate authorities, for example. A manufacturer's root key would only be certified once the manufacturer had earned some level of trust with the root authority (or authorities). Proving that trust is well-founded is more a physical issue than an algorithmic one. As with root CAs, it may involve physical and cybersecurity audits, etc.
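For illustration, here's a rough sketch of that chain of trust in Python, using the `cryptography` package. The key names and the "certification" format are made up for the example; a real scheme would use proper X.509-style certificates. The point is that the verifier only has to trust the root authority's public key.

```python
# Minimal sketch (hypothetical names): the verifier trusts only the root
# authority's public key and checks everything else against it.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Root authority key pair (in reality only the public half is distributed).
root_key = ed25519.Ed25519PrivateKey.generate()
root_pub = root_key.public_key()

# Manufacturer's signing key, "certified" by the root authority signing its
# public bytes (a stand-in for a real certificate).
mfr_key = ed25519.Ed25519PrivateKey.generate()
mfr_pub_bytes = mfr_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
mfr_certification = root_key.sign(mfr_pub_bytes)

# Camera IC signs the captured image with the manufacturer-provisioned key.
image = b"...raw image bytes..."
image_sig = mfr_key.sign(image)

def verify(root_pub, mfr_pub_bytes, mfr_certification, image, image_sig):
    """Check the chain link by link, trusting only root_pub."""
    try:
        root_pub.verify(mfr_certification, mfr_pub_bytes)  # is the key certified?
        mfr_pub = ed25519.Ed25519PublicKey.from_public_bytes(mfr_pub_bytes)
        mfr_pub.verify(image_sig, image)                    # is the image untouched?
        return True
    except InvalidSignature:
        return False

print(verify(root_pub, mfr_pub_bytes, mfr_certification, image, image_sig))  # True
```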
This is just standard public-key cryptography; we already do this for website certificates. Your browser puts a little lock icon next to the URL if the certificate checks out, or shows you a big, full-page warning if something's wrong with it.
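Roughly what the lock icon vs. warning page boils down to, sketched with Python's `ssl` module (the hostnames are only illustrative; `expired.badssl.com` is a public test host for broken certificates):

```python
# The TLS handshake either validates the server certificate against the
# system's trusted root CAs or it doesn't.
import socket
import ssl

def check_cert(hostname: str) -> str:
    ctx = ssl.create_default_context()  # uses the system's trusted root CAs
    try:
        with socket.create_connection((hostname, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname):
                return "lock icon"          # chain validated, hostname matches
    except ssl.SSLCertVerificationError:
        return "full-page warning"          # untrusted, expired, or wrong host

print(check_cert("example.com"))            # expected: lock icon
print(check_cert("expired.badssl.com"))     # expected: full-page warning
```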
You, the end user, don’t have access to your camera’s private key; only the camera IC does. By the time your phone or SD card first receives the image/video, it has already been signed by the hardware.
Video evidence is relatively easy to fix: you just need camera ICs to cryptographically sign their outputs. If the image/video is tampered with (or even re-encoded), the signature won’t match. And as the private key is (hopefully!) stored securely in the hardware IC taking the photo/video, generated images or videos can’t carry a valid signature from such a key.
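A toy sketch of the idea in Python with the `cryptography` package (names are hypothetical, and a real camera would do the signing inside the IC rather than in application code). Changing even one byte of the output makes verification fail:

```python
# The camera signs the exact output bytes, so any tampering or re-encoding
# breaks the signature.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

camera_key = ed25519.Ed25519PrivateKey.generate()  # lives inside the camera IC
camera_pub = camera_key.public_key()                # published for verification

original = b"...original sensor output..."
signature = camera_key.sign(original)               # done in hardware at capture

def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        camera_pub.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(is_authentic(original, signature))                                   # True
print(is_authentic(original.replace(b"original", b"edited"), signature))   # False
```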
It’s pretty standard practice these days to have some form of secure enclave on an SoC - Arm’s TrustZone, Intel’s SGX, AMD’s SME/SEV. This wouldn’t be any different. Many camera ICs already use an Arm CPU internally.