Keeping unseen safe: Improving digital privacy for blind people

University of Colorado Boulder
Danna Gurari


Blind people, like sighted people, post on Instagram, swipe on Tinder, and text photos of their children to a group chat. They also use photos to learn about their visual surroundings.

Blind users often rely on identification software such as Microsoft's Seeing AI, Be My Eyes, and TapTapSee for this purpose. Demand is high: Seeing AI alone has been used over 20 million times.

When blind people share photos, however, there is an added risk that they could unknowingly capture information considered private, such as a pregnancy test or a return address.

To Associate Professor Danna Gurari, this shouldn't have to be a concern.

Gurari, the founding director of the Image and Video Computing group in the Department of Computer Science, is part of a cross-institutional team that has been awarded over $1 million through a Safe and Trustworthy Cyberspace (SaTC) grant from the National Science Foundation to study the issue.

Currently, blind people must either trust friends or family members to vet their images for private information before sharing publicly, which can carry social repercussions of its own, or accept the risk to their privacy when they post.

The goal of the team’s four-year interdisciplinary project is to create a novel system that can alert users when private information is present in an image and, if the blind person wants to, obscure it.

Working with human-centered computing expert Leah Findlater from the University of Washington and privacy expert Yang Wang from the University of Illinois at Urbana-Champaign, Gurari’s group is leading the automatic analysis of images for the project. Their goal is to turn the desires of users and theories of private information into actionable knowledge.

This comes with a number of challenges, both technical and philosophical.

Because AI makes mistakes, the team has to be careful about how certain it makes an analysis sound.

“We really want to endow the appropriate level of trust but also give decision-making power,” Gurari said.

The Image and Video Computing group is creating ways to tell users what private information might be present in an image and let them decide whether to share the image as-is, discard it, or obscure the private information and then share it.
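The decision workflow described above can be sketched in code. This is a minimal, hypothetical illustration, not the team's actual system: it assumes a separate detector (not shown) has already returned bounding boxes of private regions, models an image as a 2D list of pixel values, and names the function `apply_choice` purely for demonstration.

```python
# Hypothetical sketch of the user-choice workflow: a privacy detector
# (not shown here) supplies bounding boxes of private regions, and the
# user decides what happens to the image. Images are modeled as 2D lists
# of pixel values; boxes are (top, left, bottom, right), exclusive ends.

def apply_choice(image, private_regions, choice, mask_value=0):
    """Return the image to share based on the user's decision.

    choice: "share"   -> share unmodified
            "discard" -> share nothing (returns None)
            "obscure" -> overwrite each private region with mask_value
    """
    if choice == "discard":
        return None
    if choice == "share":
        return image
    if choice == "obscure":
        masked = [row[:] for row in image]  # copy; leave the original intact
        for top, left, bottom, right in private_regions:
            for r in range(top, bottom):
                for c in range(left, right):
                    masked[r][c] = mask_value
        return masked
    raise ValueError(f"unknown choice: {choice}")
```

A real system would obscure regions with blurring or inpainting rather than a flat mask, but the three-way choice structure is the same.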

The other problem for Gurari's group to solve is determining the most prominent object in an image so that everything else can be obscured.

Because blind people often share photos for object identification, this feature could reduce the amount of private information introduced during this straightforward task.
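The object-identification mode could work along these lines. This is a simplified sketch under stated assumptions: it uses box area as a stand-in for prominence, whereas a real system would use a saliency or segmentation model; the function names `most_prominent` and `obscure_background` are invented for illustration.

```python
# Hypothetical sketch of the object-identification mode: keep only the
# most prominent detected object and obscure the rest. Here "prominence"
# is approximated by bounding-box area; boxes are (top, left, bottom,
# right) with exclusive ends, and images are 2D lists of pixel values.

def most_prominent(boxes):
    """Pick the largest box as a crude proxy for the main object."""
    return max(boxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))

def obscure_background(image, object_box, mask_value=0):
    """Mask every pixel outside object_box before sharing for identification."""
    top, left, bottom, right = object_box
    return [
        [
            px if top <= r < bottom and left <= c < right else mask_value
            for c, px in enumerate(row)
        ]
        for r, row in enumerate(image)
    ]
```

Only the foreground object then reaches the identification service, so any private content in the background never leaves the device.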


Illustration of envisioned user interaction pipeline for empowering users to safeguard private content in their pictures and videos. (a) For the general use case, the tool will notify the user about what private content is detected and then provide a choice to either discard the media, share it as-is, or share an edited version where private content (teal mask overlaid on image) is obfuscated. (b) For the scenario where a user wants assistance to learn about an object, the tool will share an edited version with all content outside of the foreground object (teal mask overlaid on image) obfuscated.

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature, edited for clarity, style and length. The views and opinions expressed are those of the author(s).