Access research Phase 2: Reflecting on our learnings from our research at the University of Kent

Following our first phase at The Cockpit Theatre to research how to make Where We Meet more accessible, we have had the opportunity to prototype and co-create our first ideas at the Galvanising Shop in Chatham, with iCCi and the University of Kent.


Our key objectives

  • d/Deaf and HoH access: Our goal was to understand how to make our system more compatible with diverse hearing aid technologies and enhance the overall experience for d/Deaf and HoH audiences. We focused on developing and evaluating various creative caption formats (projected and AR), testing hardware for hearing aid connection, and continuing our work with BSL interpreters.

  • Sensory access and consent considerations: We held workshops with neurodiverse consultants to identify potential sensory overwhelm points, explore relaxed space options, and refine onboarding processes to prepare audiences effectively.


Learnings from our creative captions exploration

What we tested

  • Captions projected on the floor near each dancer, building on our existing floor projection setup.

  • Captions projected on the walls behind each dancer

  • AR captions on a head-mounted display (0 DoF). The captions are attached to the headset and are always displayed in the right-hand corner of the field of view. The captions switch automatically based on proximity to the dancer. Prototyped on a Quest 3.

  • AR captions displayed in a fixed position in the space above each dancer (6 DoF). Captions only appear when the participant is close enough, to avoid visual noise and to mimic the behaviour of the audio.
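The proximity-based switching described above can be sketched in a few lines. This is an illustrative example only, not our production code: the dancer names, positions, and the 2.5 m threshold are made-up values, and the real system tracks positions through the headset.

```python
import math

# Hypothetical threshold (metres): show a dancer's caption only within
# this radius, mimicking how their audio is only audible nearby.
PROXIMITY_THRESHOLD_M = 2.5

def distance(a, b):
    """Euclidean distance between two (x, y, z) positions."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def visible_captions(participant_pos, dancers):
    """Return the captions of dancers close enough to the participant.

    `dancers` maps a dancer name to (position, current caption line).
    """
    return {
        name: caption
        for name, (pos, caption) in dancers.items()
        if distance(participant_pos, pos) <= PROXIMITY_THRESHOLD_M
    }

# Example scene: two dancers, participant standing near dancer A.
dancers = {
    "dancer_a": ((0.0, 0.0, 0.0), "This is where we meet"),
    "dancer_b": ((6.0, 0.0, 0.0), "Take my hand"),
}

print(visible_captions((1.0, 0.0, 0.0), dancers))
# → {'dancer_a': 'This is where we meet'}
```

The same distance check works for both prototypes: in the 0 DoF version the returned caption is drawn in the corner of the field of view, while in the 6 DoF version it is anchored above the corresponding dancer.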

Learnings

  • AR Captions (Head-Mounted Display) worked best: They allowed participants to move easily and read captions simultaneously with interactions, aiding both performer and participant engagement. Challenges remain in easily identifying who is speaking, suggesting the need for colour-coding or positional cues.

  • AR Captions above dancers were hard to read: AR elements, especially text, can present challenges when switching focus between the text and the dancers, due to the difference in perceived depth between the text and the real scene.

  • Projected Captions (Floor and Walls) presented challenges: They often diverted audience focus from the dancers, especially during interactions, and could affect the performance's aesthetic.

  • Future Captioning Opportunities: After our co-creation session, a new opportunity emerged: developing captions on personal devices like phones or tablets (which are already used in the performance). This offers an exciting and straightforward access solution that could be implemented quickly.


Exploring BSL integration into the choreography

Based on the previous phase, we had decided to put aside BSL interpretation as our main access provision for d/Deaf and HoH participants. While working in the space, it became apparent that the most challenging moments are the interactions with the dancers, so we explored how we could integrate some BSL into the choreography itself. This quickly enhanced intimacy for BSL users and aligned strongly with Deaf culture.

Hearing aid technology

Technologies tested:

  • Silent T-loop headphones. Shaped like overhead headphones, they provide a very localised induction loop signal to create a personalised stereo feed.

  • Induction hook. Similar to the above, but in the shape of a hook that is placed behind the ears, around the hearing aid.

  • Noise-cancelling headphones

Learnings

Hearing aid compatibility varies greatly, and not all devices are loop-enabled. While T-Loop headphones delivered a good sonic experience, noise-cancelling headphones also worked well without requiring hearing aid setting changes. Providing multiple options is crucial, and the best option comes down to the type of hearing aid technology and personal preference. Dedicated time at the start of a performance to test audio compatibility is vital. 

Learnings from the sensory and consent session

  • Positive Elements: Soft darkness in the space, repeated phrases like "This is where we meet" (to signal interaction end), and caring, optional interactions were highly valued.

  • Interaction refinements: Eye contact can be overwhelming for some, and anxiety can arise from feeling pressured to "perform" or "do it wrong." Suggestions include clear communication about interaction intent, explicit permission to leave interactions, and the option for a seated experience.

  • Technology & onboarding: The initial onboarding can feel rushed and headphones can be sensorially challenging. Clear technical support, options to experience the technology early, and visual information (timers, written instructions) are important.

  • Establishing boundaries: Explicit consent and boundaries for both audience and dancers, along with clear instructions on how to leave the experience if needed, are essential.

  • Pre-performance information: Audio and video guides, opportunities to preview the soundscape and space, and developing a "sensory map" can greatly prepare audiences.

Our next steps

  • d/Deaf and HoH Access:

    • Refine AR captions with colour and position indicators to clarify who is speaking.

    • Explore using alternative AR glasses beyond the Quest 3 headset.

    • Explore the possibilities of captions on phones, including a more dynamic display that responds to the position of the audience.

  • Sensory & Consent Access:

    • Create and record audio and video versions of pre-show information to maximise audience preparation.

    • Develop a detailed onboarding script incorporating consultant feedback, delineating roles for live and recorded instructions.

    • Analyse and refine audio language for interactions to better convey their intent and address feedback.

Our next phase will take place at Proto in Gateshead in May 2025, where we will run additional co-creation sessions, as well as a workshop with other makers and practitioners.

__

Acknowledgements:

This Access Research project is led by Clarice Hilton, with the support of Jané E Mackenzie, focusing on d/Deaf and hard-of-hearing access. Clarice also leads the sensory and consent research.
Thank you to MC Geezer and Xan Dye for participating in the research.
Thank you to our BSL interpreters for supporting this journey: Dee King and Sophie Kennard.
Thank you to iCCi and the University of Kent for having us!
