This paper explores prototyping as an iterative process for creating virtual reality opera. I draw upon Human-Computer Interaction, spatialized sound, and game design methodologies to create new models for self-experiential prototyping. By fusing digital architecture, composition, and extended reality (XR), I undertake, as a composer, a creative and investigative process that is documented so that others can learn from it to create new opera for a digital future. Sound is key to creating compelling virtual reality (VR) interactive experiences: it aids immersion and builds presence through diegetic cues, serves as a navigation aid, adds believability, and can induce a mood. Music technology and extended reality opera-creation techniques were used to create the opera XR Artemis. The autoethnographic prototyping methodology employs iterative models and written problem statements. The figures illustrate the steps needed for self-experiential prototyping to create new VR opera experiences.