SPEAKUP VR


AT-A-GLANCE
This project was completed as part of the AR/VR Application Design master's course at the School of Information, University of Michigan. The primary focus was to explore and address the following question: "How might we use VR's immersive features to design a learning experience that improves people's public speaking confidence?"
Timeline
1.5 months
Project Type
Group (2)
My Role
XR Designer
Tools
Bezi, Figma, Sketchfab
PROBLEM
Public speaking can be a nerve-wracking experience, especially for those with limited practice. Common challenges include maintaining consistent microphone positioning, navigating physical setups like approaching a podium, and responding to audience reactions. These obstacles often make it difficult for individuals to deliver a confident speech, highlighting the need for a tool that can help address these issues.
OUTCOME
My partner and I collaborated to design a VR prototype for a virtual speaking environment. We started by brainstorming ideas, creating storyboards for various features, and developing a low-fidelity prototype. We then tested the lo-fi prototype with a user group, gathering constructive feedback to refine our design. Using this input, we iterated and developed a high-fidelity prototype as our final product.
BRAINSTORM
INTERACTIONS
After finalizing our design prompt, we utilized divergent thinking to brainstorm a variety of potential interactions for our VR experience. Through convergent thinking, we refined these ideas and selected the most impactful ones. The four main interactions we designed around are outlined below:

Audience
Performing in front of an audience can be intimidating, so we wanted to include several options to let users adjust the difficulty of the experience and gradually build their confidence at their own pace.

Locomotion
Less experienced speakers often struggle with steady microphone handling and with walking up to the podium. To address this, we wanted to include locomotion so users could practice both in a realistic setting.

Sound
A live audience reacts in real time. To simulate this, we wanted to include responsive sounds that play as users progress through their speech.

Feedback
Finally, we also wanted to include real-time pop-ups to provide instant feedback and help users improve their performance in the moment.
STORYBOARD
Here’s a storyboard showcasing some of the interactions that were implemented.
DESIGN PROCESS
INITIAL PROTOTYPE
Our first attempt at creating the VR experience faced challenges. We used Bezi for prototyping, but its web-based platform made finding compatible assets difficult.
Initially, we planned for a seated audience, but the high-polygon 'sitting people' assets from Sketchfab, at roughly 160k polygons each, caused our VR environment to crash. Bezi supports up to 1 million polygons per environment, and at that density just seven seated figures already blow the budget (7 × 160,000 = 1,120,000); our prototype exceeded 2 million.
To resolve this, we explored lighter alternatives and ultimately switched to 'standing people' assets, which became the audience featured across the audience levels.
UI COMPONENTS
Before implementing the four interactions in our prototypes, we designed the following UI components in Figma. First, we created an interactive menu to serve as the primary control hub for the entire VR experience. Next, we designed the Begin and Restart buttons, along with three onboarding prompts to introduce users to our VR experience, which we decided to call SpeakUp VR. Finally, we created blue text bubbles to simulate audience feedback and reactions.

LO-FI PROTOTYPE
Our lo-fi prototype demo showcases the full VR experience, focusing on the four core interactions. Using Bezi's State Machine tool, I refined these interactions for smooth transitions and intuitive usability. We set the simulation in a classroom, where many people first experience public speaking. Users walk naturally to the front of the room, reinforcing immersion before stepping up to the podium.
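Bezi's State Machine is a visual, no-code tool, but the flow we built maps cleanly onto a simple finite-state machine. The TypeScript sketch below illustrates that mapping; the state and event names are our own shorthand for this write-up, not part of Bezi's tooling.

```typescript
// Illustrative only: Bezi's State Machine is configured visually, and this
// sketch just models the flow we wired up. All names here are hypothetical.
type State = "Idle" | "Onboarding" | "Presenting" | "Finished";
type Event = "PressBegin" | "OnboardingDone" | "SpeechEnded" | "PressRestart";

const transitions: Record<State, Partial<Record<Event, State>>> = {
  Idle: { PressBegin: "Onboarding" },
  Onboarding: { OnboardingDone: "Presenting" },
  Presenting: { SpeechEnded: "Finished" },
  Finished: { PressRestart: "Idle" },
};

// Ignore events that are invalid in the current state, so a stray button
// press can't break the flow mid-speech.
function next(state: State, event: Event): State {
  return transitions[state][event] ?? state;
}
```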
Once at the podium, the user can start the SpeakUp VR experience by selecting the "Begin" button, initiating a brief onboarding process. To ensure a seamless user experience, we positioned the control hub to the left of the user but incorporated an interaction that allows them to drag and reposition the UI component. This design choice improves accessibility by allowing users to move the control hub out of their field of view, reducing distractions while maintaining ease of interaction.
Our primary objective was to create a VR experience that feels intuitive and stress-free, particularly for users who experience anxiety with public speaking. We recognized that added complexity in a VR environment could heighten anxiety rather than alleviate it. Therefore, we prioritized a minimal, distraction-free interface and natural interactions to help users build confidence while practicing public speaking in a low-pressure, immersive environment.
HI-FI PROTOTYPE
Our hi-fi prototype includes a more detailed classroom setting with improved textures for a more realistic environment. Based on testing and feedback from our instructor and peers, we adjusted several features from the lo-fi prototype, refining interactions to improve usability. Below are the updated interactions in action.
Onboarding
For the onboarding process of SpeakUp VR, we focused on keeping it simple and intuitive. As mentioned previously, once users step up to the podium and press the "Begin" button, they're welcomed into the simulation.
One key update in our hi-fi prototype was adding interactive light switches to the control hub. This design choice makes selections more visually clear and engaging, improving accessibility and usability.
Audience Levels
To give users better control over the audience size, we implemented a three-tier selection system: 'None', 'Small', and 'Large'. When users select one of these options, it lights up and changes color to provide clear feedback that their choice has been applied.
We wanted the difference between audience sizes to feel meaningful, so we set 'Small' to four people, just enough for an intimate setting, while 'Large' includes 12 to 16 people, creating a more dynamic and engaging environment. This way, users can truly experience the shift between presenting to a small group and presenting to a larger crowd, making the setting feel more realistic and adaptable to their needs.
To keep the experience smooth, I also designed a transition animation that makes the audience shift feel natural rather than abrupt. Instead of people just popping in and out, the transition adds fluidity, making the change visually appealing and less distracting.
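To make the tier logic concrete, here's a minimal TypeScript sketch of how the selection and fade could work, assuming a pool of audience members with per-member opacity. Bezi handles all of this in its visual editor, so the names and structure below are illustrative only.

```typescript
// Hypothetical sketch of the three-tier audience system; the types and fade
// logic are assumptions for illustration, not Bezi's actual implementation.
type AudienceLevel = "None" | "Small" | "Large";

interface AudienceMember {
  opacity: number; // 0 = hidden, 1 = fully visible
}

// Counts from our design: 'Small' is four people, 'Large' is 12 to 16.
const audienceCount: Record<AudienceLevel, number> = {
  None: 0,
  Small: 4,
  Large: 16,
};

// Drive each member's opacity toward a target instead of toggling visibility,
// so the change can be tweened into a smooth fade rather than an abrupt pop.
function setAudienceLevel(members: AudienceMember[], level: AudienceLevel): void {
  const visible = audienceCount[level];
  members.forEach((member, index) => {
    member.opacity = index < visible ? 1 : 0; // tween over ~0.5 s in practice
  });
}
```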
Microphone Positioning
To help users get comfortable with holding a microphone while presenting, we introduced an interactive virtual microphone in our VR experience. We wanted to keep this feature optional, so users can choose to enable or disable it from the control hub based on their preference.
If they opt to use the microphone, it appears hovering on the right side of the podium until the user picks it up. Once grabbed, users can adjust its positioning by tapping and interacting with it. Due to the limitations of Bezi’s interactive features and our project’s time constraints, we weren’t able to implement a fully grabbable microphone as originally planned, but we designed a workaround that still gives users a sense of control and presence while presenting.
Live Feedback
The final interaction we designed for our VR experience was audience feedback. In our lo-fi prototype, we started with simple auditory cues like clapping and thoughtful "hmm" sounds to simulate audience engagement. However, after gathering feedback from peers and instructors, we saw an opportunity to enhance this feature and make feedback more meaningful.
So, through iterative testing, we introduced 'verbal text feedback' to provide users with real-time insights on their presentation. If a user makes eye contact with the audience, they receive constructive feedback highlighting strengths and possible areas for improvement.
We also designed a microphone guidance system to help users maintain optimal mic positioning. If the microphone isn’t held at the right distance, SpeakUp VR will notify the user to adjust its height, ensuring a more natural speaking experience.
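As a rough illustration of how these two feedback rules combine, here's a hedged TypeScript sketch; the gaze flag, distance field, and thresholds are assumptions made for the example, not values from the prototype.

```typescript
// Illustrative sketch of the live-feedback rules. The fields and thresholds
// below are hypothetical; the prototype implements these checks in Bezi.
interface SpeakerSnapshot {
  lookingAtAudience: boolean; // eye contact inferred from gaze direction
  micDistanceCm: number;      // microphone-to-mouth distance
}

function liveFeedback(snapshot: SpeakerSnapshot): string[] {
  const messages: string[] = [];
  if (snapshot.lookingAtAudience) {
    // Eye contact triggers a constructive text bubble.
    messages.push("Nice eye contact! Try holding it a little longer.");
  }
  // Assumed comfortable range for mic-to-mouth distance.
  if (snapshot.micDistanceCm < 5 || snapshot.micDistanceCm > 20) {
    messages.push("Adjust your microphone height for a more natural sound.");
  }
  return messages;
}
```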
CONCLUSION
REFLECTION
This project was my first time designing beyond my usual scope, having only recently learned the fundamentals of XR and VR experience design. As the final assessment for my AR/VR Application Design course, it provided an opportunity to apply my newly acquired skills and bring an idea to life, one that my partner and I had a personal connection with.
Both of us resonated with the challenge of public speaking, having faced struggles in the past and still working to overcome the nerves of presenting in front of an audience. This made the project feel especially meaningful, as we wanted to create a tool that could genuinely help others build confidence.
Through this experience, I gained valuable insights into designing for VR, particularly in balancing realism with usability to create intuitive and accessible interactions. I also learned how to design features that enhance engagement while working within platform limitations. Adapting our ideas to technical constraints strengthened my iterative design process and problem-solving skills. Beyond refining my VR design abilities, this project deepened my understanding of how thoughtful interaction design can help users build confidence in real-world scenarios. Overall, this was a challenging but rewarding experience, one that I'll look back on as a meaningful milestone in my design journey.
Full Project Demo
NEXT STEPS
If I had the opportunity to continue working on this project, I would focus on refining interactions to further enhance the overall user experience. One key improvement would be making the audience more realistic by incorporating a greater variety of 3D models to add diversity. I would also expand the virtual environments beyond just a classroom, introducing settings like an office or an auditorium to create more immersive experiences.
Additionally, I’d improve the auditory feedback by integrating more sound cues and potentially using AI-generated voice models to simulate audience responses, making interactions feel more dynamic. Most importantly, given more time, I would transition the project from Bezi to Unity to leverage Meta SDKs. This would allow for greater improvements to the microphone-handling interactions, further immersing users in the simulation. These changes would help refine the project and bring it closer to becoming a full VR app.
WORKS CITED
Tools Used
Figma (UI components)
Bezi (VR environment & interactions)
ChatGPT (used to generate images of the two windows in our VR environment)
Assets
Person 1: https://skfb.ly/o7Srp
Person 2: https://skfb.ly/o7TOo
Mic: https://skfb.ly/oVARF
Everything else is from Bezi’s asset store
Contributors
Reshad Alam
Junhee Chung
Vivek Selvaraj (Mentor)
Michael Nebeling (Professor)