
Project Detail
Vision S
Imagine stepping into someone else's shoes, seeing the world exactly as they do. This is the core idea behind VisionS, a technology that embodies the concept of 'vision-sharing.' With VisionS, users can share and control their visual experiences in real time, offering a unique first-person perspective. This design approach not only enhances empathy but also bridges understanding gaps, especially for those with limited mobility. To me, VisionS is a window to a broader, more inclusive world, inviting us to experience life through the eyes of others.
Sole Developer
8 Months
Seattle, WA
VR Demo
Background and Research
Through my own experiences, I have witnessed moments that deeply resonated with me and shaped my vision for VisionS. I once observed a wheelchair user laboriously reversing after encountering an obstacle. It struck me: if they could have seen the path from my perspective, they might have avoided the obstacle entirely. Another experience further confirmed my determination to develop VisionS. A childhood friend of mine dreamt of traveling the world, but her battle with leukemia restricted her freedom. I wondered, what if I had been able to share my travels with her? Could her dream have come true?

For more research details, please see:
Design Sketches
Inspired by the personas, I realized that sketching alone couldn't fully capture how they would interact with VisionS. So I taught myself Rhino 3D to depict the design more realistically.


Using Rhino, I created 3D model sketches of the smart glasses, including an Electrooculography (EOG) sensor for eye tracking. This feature was crucial because it allows the glasses to accurately share the user's view, enriching the empathy experience. I also depicted how the sensor would respond to the user's eye movements.
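To make the eye-tracking idea concrete, here is a minimal sketch of how an EOG signal could be mapped to a gaze direction. This is purely illustrative: the channel names, microvolt units, and threshold value are my assumptions for the sketch, not VisionS sensor specifications or firmware.

```python
# Illustrative sketch only: classify gaze direction from EOG voltages.
# Channel names, units, and thresholds are assumptions, not VisionS specs.

def classify_gaze(h_uv: float, v_uv: float, threshold_uv: float = 150.0) -> str:
    """Map horizontal/vertical EOG amplitudes (microvolts) to a gaze direction.

    EOG exploits the eye's standing corneo-retinal potential: rotating the
    eye toward an electrode raises the voltage measured on that channel,
    so simple thresholds can distinguish up/down/left/right from center.
    """
    if v_uv > threshold_uv:
        return "up"
    if v_uv < -threshold_uv:
        return "down"
    if h_uv > threshold_uv:
        return "right"
    if h_uv < -threshold_uv:
        return "left"
    return "center"
```

In a real device, the raw signal would also need filtering and per-user calibration; the fixed threshold here is only a stand-in for that step.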
3D Modeling
To make the glasses feel more real and tangible to users, I created detailed 3D models and renderings, shown below:
VR Storyboard
After sketching and 3D modeling, I created a VR ideation storyboard to explore how the VR visuals would look, built around one of my personas, Amelia. The storyboard is presented from a first-person point of view to reflect the vision-sharing features, combining the eye-tracking sensor with motion detection (e.g., hand gestures that perform commands).

In the first scene of Amelia's narrative, you have just woken up.

You put on the glasses and notice a new message. You raise your hand to check it.

A professional hiker, Mike, has sent you a request to share his view with you, because he saw your wish to see an aurora one day.

To accept, look up and swipe your hand to the right; to reject, look down and swipe your hand to the left.

You decide to accept: you look up and swipe your hand to the right. The screen confirms your acceptance, and Mike will begin sharing his view shortly.

You now see the aurora as if through your own eyes, and you can easily look around and shift your view.

You swipe up to open the message window with Mike, then swipe up again to shoot stars, sending your appreciation to him.
Quantifiable Impact
I conducted user tests with individuals I had previously interviewed, as well as some new users. Interest in the product rose by 40% after participants saw the design. Reviewing the survey feedback, I found that most participants were excited about the design and eager to try the product if it came to market.
Next Step
The biggest next step is to iterate on this concept with wearable VR headsets such as the Meta Quest 3. The goal after that is to test the features and identify implications and improvements. Since vision-sharing can be represented in many different formats, creating more options and methods for visualization and interaction would be helpful.
I designed the VR setting and demo around one of the personas I created, Amelia. Although the aurora case could also fit another persona's scenario, I would love to create a new VR setting for wheelchair users, showing more possibilities and addressing their concerns about seeing the world broadly.
Reflection

Overall, I'm proud of this project because it allowed me to practice critical-thinking skills and encouraged me to think bigger for creative solutions. As a designer, I had no prior experience with Rhino or Unity. However, I realized that my design solution would fall short of my expectations without more technical support. Initially, when my ideas were just sketches on paper, they couldn't be fully understood from a design wireframe alone. This realization led me to explore and teach myself new techniques that could better explain and represent my ideas. Throughout this process, I maintained a strong design intention, infused with my values, aimed at benefiting individuals with disabilities or those otherwise limited in their ability to explore the world freely. Although this product may not be fully refined yet, the journey of learning, self-reflection, and maintaining a positive attitude while solving impactful problems has been tremendously fulfilling and rewarding!
rh692@cornell.edu
©Gloria Hu 2025