Generative AI (GenAI) technologies like ChatGPT, DALL·E, and multimodal assistants are revolutionizing accessibility tools for Blind and Low-Vision (BLV) individuals. These tools empower BLV users to independently interpret and manage visual content, from identifying objects to describing scenes. However, this newfound independence brings a complex challenge: visual privacy. In this article, we delve deeply into how BLV individuals navigate GenAI-powered tools, drawing insights from a qualitative study involving 21 participants. We explore six core privacy-sensitive scenarios, reveal current user strategies and emerging design preferences, and offer a framework for privacy-centric GenAI design that prioritizes user autonomy, dignity, and safety.
For generations, Blind and Low-Vision (BLV) individuals have relied on family, friends, or human assistants to navigate the visually dense world around them. Everyday visual tasks, whether choosing an outfit, crossing a busy street, or verifying sensitive documents, were often mediated by human intermediaries. However, the emergence of Generative AI, particularly large multimodal models (LMMs) and visually capable assistants, is transforming this dynamic.
Today, AI tools can describe images, recognize faces, explain diagrams, and even redact sensitive content—all by analyzing uploaded images or real-time video. For many BLV users, these tools are liberating: “Before, I asked my mom. Now, I ask ChatGPT,” says one participant, summarizing the promise of GenAI-enabled independence.
But this empowerment is not without cost. Uploading visual content to AI systems means sharing sensitive visual data, often involving people, places, and documents that BLV users may not fully understand themselves. The consequence? A complex balancing act between accessibility and privacy—a tension that is especially pronounced for a population that has historically lacked control over how their personal data is represented, shared, or interpreted.
Historically, tools like screen readers, Braille displays, and navigation aids have enabled BLV users to access text and spatial data. But the visual world—appearance, layout, expressions—remained largely inaccessible, requiring human intermediaries.
Recent advances in computer vision and generative AI have enabled new forms of interaction. Applications like Seeing AI, Be My Eyes, and ChatGPT with Vision allow users to:
Interpret complex scenes
Read menus or receipts
Describe facial expressions
Identify clothing or colors
Provide feedback on visual presentation (e.g., “Do I look presentable?”)
However, these AI services often require cloud-based processing and camera data submission, making privacy a significant concern.
For BLV individuals, who may not even know what’s in a photo they’ve taken, managing the privacy implications of sharing visual content with AI introduces a new asymmetry of knowledge and control.
To understand these dynamics, researchers conducted in-depth, semi-structured interviews with 21 BLV participants, focusing on:
How they currently use GenAI to interpret visual data
What privacy concerns arise in this process
What trade-offs they are willing to make
How they imagine better, more privacy-conscious design
Participants came from diverse backgrounds (professionals, students, caregivers) and had varying levels of vision, from total blindness to partial sight. Many were early adopters of GenAI tools, with daily use ranging from 15 minutes to several hours.
Participants reported concerns about visual privacy in six key contexts:
BLV users rely on GenAI to ensure they look professional, well-dressed, or aligned with social expectations before meetings, dates, or public events. But uploading selfies to a server for analysis raises deep questions:
Who sees the image?
Is it stored?
Can it be traced back to me?
Participants expressed discomfort at relying on cloud-based models for something so personal.
“I want to know if my tie is crooked—but I don’t want a stranger or AI to keep that photo.”
BLV individuals often use GenAI to identify clutter, check room organization, or find misplaced objects. But in doing so, they may inadvertently reveal:
Medicine labels
Family photos
Home layouts
Financial documents
The potential for unintentional oversharing is high—especially when users don’t fully know what’s captured in the image.
“I asked it to describe my kitchen. It told me what cereal I eat and what brand of medication I use. That freaked me out.”
Navigating the outdoors with GenAI tools sometimes involves scanning surroundings for safety or orientation. However, images can include license plates, children playing, or strangers’ faces.
This raises both legal and ethical questions about capturing and sharing data involving others without consent.
BLV individuals sometimes share photos with others for feedback or memories. GenAI tools help them decide:
Is this photo flattering?
Is anyone blinking?
Is the background appropriate?
But the emotional and interpersonal dynamics of sharing sensitive moments (birthdays, family gatherings, religious rituals) now hinge on how much users can trust AI systems.
“I wanted to post a picture of my mom and me. But I didn’t know she had food in her teeth until someone commented. I wish the AI had warned me.”
Professionals use GenAI to check the formatting of documents, understand presentation slides, or identify errors in layouts. But this content may be confidential or governed by workplace policies.
“I uploaded a company chart to ChatGPT. Later, I realized it had sensitive HR info. I felt I broke company rules without knowing.”
Perhaps most concerning was the use of GenAI to interpret IDs, bills, and contracts. These often contain:
Legal names
Bank account numbers
Government IDs
Participants expressed a clear preference for on-device processing or explicit content redaction when handling such data.
Participants reported various ad-hoc strategies to mitigate privacy risks, including:
Cropping photos manually (if partially sighted)
Asking trusted humans for final checks
Using AI models with perceived stronger privacy policies
Switching between tools depending on context
However, these strategies are time-consuming, inconsistent, and often rely on incomplete information about how AI systems process and store visual data.
Based on the interviews, researchers identified a set of concrete design preferences for GenAI tools among BLV users:
Local processing (no cloud transmission) for sensitive images.
Especially preferred for ID documents, faces, and home environments.
Clear statements that uploaded content is not stored, shared, or reused.
Request for ephemeral memory and “no training from my data.”
Automated redaction of license plates, faces, names, or sensitive logos.
User-adjustable settings: e.g., "redact all text unless I ask otherwise" (a sketch of this kind of on-device redaction follows this list).
Instead of “You look great,” users want:
“There is a food stain on your shirt.”
“Your jacket is wrinkled.”
Transparent, respectful feedback—not judgmental or generic.
Haptic or audio feedback (e.g., vibrating if sensitive content is detected).
Multimodal interfaces that combine voice, touch, and physical cues.
“If it could buzz to warn me when I’m sending something risky—that would help a lot.”
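Several of the preferences above (local processing, automated redaction, user-adjustable settings) converge on one idea: a redaction step that runs entirely on the user's device, so that only a scrubbed copy of an image ever reaches a cloud model. The sketch below is a minimal illustration of that idea, assuming OpenCV and its bundled Haar cascade for face detection; the function name, settings dictionary, and output-path convention are hypothetical and not drawn from any tool in the study.

```python
# Minimal sketch of on-device face redaction before any cloud upload.
# Assumes the opencv-python package; "redact_locally" and the settings
# dict are illustrative names, not an existing tool's API.
import cv2

def redact_locally(image_path, settings=None):
    """Blur detected faces on-device and return the path of the redacted copy."""
    settings = settings or {"redact_faces": True, "blur_kernel": (51, 51)}
    img = cv2.imread(image_path)
    if img is None:
        raise ValueError(f"Could not read image: {image_path}")

    if settings["redact_faces"]:
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
        )
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            img[y:y + h, x:x + w] = cv2.GaussianBlur(
                img[y:y + h, x:x + w], settings["blur_kernel"], 0
            )

    out_path = image_path.rsplit(".", 1)[0] + "_redacted.jpg"
    cv2.imwrite(out_path, img)
    return out_path  # only this redacted file would ever leave the device
```

The same pattern could extend to text regions (for example, via a local OCR pass) or license plates; the key design choice is that detection and blurring happen before, not after, anything is transmitted.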
Researchers propose a user-centered design framework for visual privacy in GenAI systems tailored to BLV users, structured around three pillars:
Empower users to select what is shared and how.
Default settings should lean toward minimal disclosure.
Clear, plain-language policies explaining:
What data is processed
Where it is processed
How long it is retained
Avoid infantilizing or overly protective design.
Provide tools for informed decision-making, not blanket restrictions.
For developers: Accessibility cannot come at the cost of privacy. Tools should offer granular privacy settings, detailed logs of uploads, and policies that exclude user data from model training by default (a sketch of such settings follows these recommendations).
For policymakers: Visual privacy for BLV individuals should be explicitly addressed in AI regulations. Consent should consider the asymmetry of knowledge—users may not see what they’re sharing.
For researchers: Further study is needed on how BLV users understand visual risk, and how intersectional identities (e.g., gender, disability, race) shape privacy concerns.
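To make the developer recommendation slightly more concrete, here is a hedged sketch of what granular, minimal-disclosure defaults and a user-visible upload log might look like. All field names are assumptions for illustration; they do not correspond to any existing product's API or to the study's materials.

```python
# Hypothetical privacy settings and upload log, with defaults leaning
# toward minimal disclosure (nothing stored, nothing used for training).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PrivacySettings:
    process_on_device: bool = True           # prefer local models for sensitive images
    allow_training_on_uploads: bool = False  # excluded from training unless opted in
    retain_uploads: bool = False             # ephemeral by default: nothing stored
    redact_text: bool = True                 # "redact all text unless I ask otherwise"
    redact_faces: bool = True

@dataclass
class UploadLogEntry:
    """One row in a user-reviewable log of everything that left the device."""
    filename: str
    destination: str                         # e.g. "local-model" or a cloud endpoint
    redactions_applied: list = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```

Because the log is plain structured data, it could also be exposed through a screen-reader-friendly interface, letting a BLV user audit exactly what was shared and with which redactions applied.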
GenAI is transforming accessibility for blind and low-vision individuals, offering a new level of autonomy in navigating visual information. But with this autonomy comes a new kind of vulnerability—one that requires us to reimagine visual privacy not as absence of exposure, but as presence of control, trust, and dignity.
As we build the next generation of GenAI systems, we must recognize that accessibility is not only about giving access—but giving agency. That means putting users—especially marginalized ones—at the center of design, policy, and evaluation.
In the words of one participant:
“I love what this tech can do. But I want it to work for me, not just watch me.”
As we enter an era where artificial intelligence shapes our understanding of ourselves, we must ask...