Nov 27th, 2025
This September, our lab traveled to Busan, Korea to take part in ACM UIST 2025—one of the premier forums for innovations in human-computer interfaces. Our presence spanned three UIST papers, a very special vision keynote, and a workshop.
Our lab published three new interactive systems at ACM UIST 2025; you can watch all our papers in action in this recap video (shot in a single take).
Watch a recap video of all our UIST 2025 papers (Curious about this video? Read footnote 1).
Conference story: In a way, I have been working on this keynote for more than a decade (in 2011 my website was already titled "Human Computer Integration" as a play on words on HCI—but also to denote my interest in muscle stimulation). In 2015 or so, I developed (together with my PhD advisor Patrick Baudisch) a diagrammatic view of my vision, depicted by these "bubbles" that you can find everywhere on my pages, papers, etc. In fact, during my vision keynote, I asked the audience to draw their own version of this diagram. To this end, I created a website, running on my computer, to which they could submit their drawings. It was a really interesting experience—you can see this part of the talk here (this link jumps to that section)—and hundreds of attendees submitted their own drawings. These are worth their own essay, so please check this out here!
Authors: Yudai Tanaka, Hunter Mathews, Pedro Lopes. In Proc. UIST'25. (📄 PDF download of the paper or 🖥️ Learn more about this project)
Key contribution: We present Primed Action, a novel interface concept that leverages transcranial magnetic stimulation (TMS) to speed up users’ reactions. What sets Primed Action apart from prior work that uses muscle stimulation to “force” faster reactions is that our approach operates below the threshold of movement—it does not trigger involuntary motion, but instead “primes” neurons in the motor cortex by enhancing their excitability. As we found in our study, Primed Action preserved participants’ sense of agency better than existing interactive approaches based on muscle stimulation (e.g., Preemptive Action). We believe this novel insight enables new forms of haptic assistance that do not sacrifice agency, which we demonstrate in a set of interactive experiences (e.g., VR sports training).
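For readers curious how “priming below the threshold of movement” differs from forcing a movement, here is a minimal, hypothetical sketch of the idea: the stimulation intensity is kept below the user’s calibrated motor threshold, so it raises motor-cortex excitability without ever evoking motion. The function name and the 0.9 ratio are our illustrative assumptions, not values or code from the paper.

```python
# Illustrative sketch only (not the paper's implementation): "priming" keeps the
# TMS pulse below the user's calibrated motor threshold, so no involuntary
# movement is evoked; the user still initiates the action themselves.
def priming_intensity(motor_threshold: float, ratio: float = 0.9) -> float:
    """Return a sub-threshold stimulation intensity (in device units).

    motor_threshold: the minimum intensity that evokes a visible muscle twitch,
        calibrated per user. The 0.9 ratio is an assumption for illustration.
    """
    assert 0.0 < ratio < 1.0, "priming must stay below the movement threshold"
    return ratio * motor_threshold
```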
Conference story: This is Yudai's first paper exploring how to speed up reaction time. While our lab has a long history of exploring this topic using electrical muscle stimulation (check this overview of our papers on this topic since 2019), Yudai's UIST 2025 paper presents a completely different approach: not only does it leverage brain stimulation, but, importantly, it explores it without violating the user's sense of agency (i.e., Primed Action does not force users to move, it only primes). We had a great back and forth with the UIST reviewers on this one (thanks to them!), which resulted not only in this paper but also in an appendix that provides a deeper analysis of the results from the paper—check it out! Also, this is Hunter's first paper with our lab—he was a high-school student working with us! He is now off to continue his engineering studies at the University of Alabama; we are super proud of Hunter! If you are wondering what he did, check out Figure 9 in the UIST 2025 paper and appreciate the wearable TMS platform that Hunter designed and engineered!
Watch Yudai giving the Primed Action talk at UIST 2025
Authors: Antonin Cheymol, Pedro Lopes. In Proc. UIST'25. (📄 Download the paper as PDF 🖥️ Learn more about this project)
Key contribution: Our brain’s plasticity rapidly adapts our senses in VR, a phenomenon leveraged by techniques such as redirected walking, hand redirection, etc. However, while most of HCI is interested in how users adapt to VR, we turn our attention to how users need to re-adapt their senses when returning to the real world. We found that, after leaving VR, (1) participants’ hands remained redirected by up to 7 cm, indicating residual proprioceptive distortion; and (2) participants incorrectly recalled the virtual location of objects rather than their actual real-world locations (e.g., remembering the location of a fire extinguisher in VR, even when trying to recall the real one). We discuss how these lingering VR side effects may pose safety or usability risks.
Conference story: This paper was a tour de force by our former PhD intern Antonin Cheymol (from INRIA, France). Antonin came to our lab excited to combine his expertise in VR embodiment with our interest in VR illusions. Rather than exploring a new illusion, we turned to the side effects of existing illusions, such as hand redirection! Not only did we write this paper, but we also ran a third study that did not make it into the main paper, which you can see in this part of our video (this link directly jumps to this new study!).
Watch Antonin giving the VR side effects talk for UIST 2025.
Authors: Kensuke Katori, Yudai Tanaka, Yoichi Ochiai, Pedro Lopes. In Proc. UIST'25. (📄 Download paper as PDF here 🖥️ Learn more about this project)
Key contribution: We demonstrate how the vestibular system (i.e., the sense of balance) influences the perception of hand position in VR. By exploiting this via galvanic vestibular stimulation (GVS), we can enhance the degree to which we can redirect the user’s hands in VR without them noticing. The trick is that a GVS-induced subtle body sway aligns with the user’s expected body balance during hand redirection; this alignment reduces the sensory conflict between the expected and actual body balance. Our user study validated that our approach raises the detection threshold of VR hand redirection by 45-55%. Our approach broadens the applicability of hand redirection (e.g., compressing a VR space into an even smaller physical area).
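For readers less familiar with hand redirection, here is a minimal sketch (our illustration, not the paper's code) of the classic body-warping idea that this work extends: the virtual hand is rendered at the real hand's position plus an offset that grows as the reach progresses, clamped so it stays below what users can detect; raising that detection threshold (the clamp below) is exactly what the GVS technique buys you. The function, reach length, and numeric values are assumptions for illustration.

```python
# Minimal body-warping hand-redirection sketch (illustrative assumptions, not the paper's code).
import numpy as np

def redirected_hand(real_hand, reach_start, full_offset,
                    detection_threshold=0.05, reach_length=0.5):
    """Return where to render the virtual hand.

    real_hand, reach_start: 3D positions in meters.
    full_offset: offset (3-vector, meters) desired once the reach completes.
    detection_threshold: assumed largest unnoticeable offset (meters); per the
        paper, adding GVS raises this threshold by roughly 45-55%.
    reach_length: assumed total reach distance, used to normalize progress.
    """
    real_hand = np.asarray(real_hand, dtype=float)
    # The offset ramps up with reach progress so the warp is introduced gradually.
    progress = np.clip(
        np.linalg.norm(real_hand - np.asarray(reach_start)) / reach_length, 0.0, 1.0)
    offset = progress * np.asarray(full_offset, dtype=float)
    # Clamp the offset so it stays below the (assumed) detection threshold.
    norm = np.linalg.norm(offset)
    if norm > detection_threshold:
        offset *= detection_threshold / norm
    return real_hand + offset
```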
Conference story: This work was done by Kensuke Katori during his internship in our lab! Ken came from Ochiai's lab at the University of Tsukuba, and his experience at our lab was featured in this news segment. Also, check out the impressive live demo that he gave on stage during his talk, dropping the script entirely and asking an audience member to control his sense of balance using our vestibular stimulation (click here to jump directly to that moment in the video).
Watch Ken(suke) giving the talk at UIST 2025, including an impressive live demo!
Pedro, in collaboration with Yujie Tao (Stanford University), Tan Gemicioglu (Cornell Tech), Sam Chin (MIT Media Lab), Bingjian Huang (University of Toronto), Jas Brooks (MIT CSAIL), Sean Follmer (Stanford University), and Suranga Nanayakkara (National University of Singapore), organized a workshop on Everyday Perceptual and Physiological Augmentation. We invited Prof. Thad Starner (Georgia Tech) as our keynote speaker and put together a number of hands-on activities, including demos and prototyping!
Our lab also contributed to UIST in service roles. Yudai Tanaka served as Data Chair, helping with the UIST interactive program, while Jasmine Lu served as Sustainability Chair, reducing UIST's ecological footprint!
It takes a village to organize a conference as exciting and smooth as this UIST—especially one that broke all attendance records (this UIST was sold out)! Many thanks to the organizing committee for their hard work.
The next UIST will happen in Detroit (not far from the University of Michigan, where the two general chairs—Michael and Steve—work). We hope to see you at the next UIST in the Midwest, and if you pass by Chicago on your way to UIST, let us know!
P.S. If you are interested in a reflection on how we see the UIST community exploring and embracing AI, Yudai wrote an essay about that.
Footnote 1: Video credits & more info for curious people:
1. This video was an experiment in shooting in one take. We actually liked the first take more than this one (this was the second), but during the first take the camera stopped recording when the battery died (classic!)—so we are left with this one!
2. Yun Ho and Bruno Felaga helped prepare the shot.
3. This video was edited entirely on the command line, thanks to ffmpeg.
4. Bruno Felaga was behind the camera and he did a great job; you can see him in the first frame of the video with the clapper.
5. Everything was improvised; we only planned the sequence of the demos, to keep the time spent putting the wearable devices on and off to a minimum. As you can see in this video, Yudai also joined the improvisation, explaining his project Primed Action in this live demonstration.