Using biometrics – measurements of physiological characteristics to identify someone – has made interacting with our mobile devices a lot easier by swapping passcodes for facial scans and fingerprints. But are there other ways our physical interactions with devices can make them more user-friendly? Researchers in Japan think so: their approach involves staring deeply into a user’s eyes through a selfie camera.
Tomorrow marks the start of the 2022 Conference on Human Factors in Computing Systems (or CHI for short) in New Orleans. The focus of the conference is on bringing together researchers studying new ways in which people can interact with technology. That includes everything from virtual reality controllers that can simulate the feel of a virtual animal’s fur, to breakthroughs in simulated VR kissing, and even touchscreen upgrades through the use of bumpy screen protectors.
As part of the conference, a group of researchers from Keio University, Yahoo Japan and Tokyo University of Technology will present a new way to detect how a user is holding a mobile device such as a smartphone, and then automatically adjust the user interface to make it more user-friendly. For now, the research focuses on six different ways a user can hold a device such as a smartphone: with both hands, left hand only or right hand only in portrait mode, and the same three options in landscape mode.
As smartphones have grown in size over the years, they have only gotten harder and harder to use with one hand. But with a user interface that adapts itself accordingly, such as dynamically moving buttons to the left or right edge of the screen, or shrinking the keyboard and aligning it left or right, using a smartphone with just one hand can be a lot simpler. The only problem is getting a smartphone to automatically know how it is being held and used, which is what this team of researchers figured out how to do without the need for additional hardware.
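To make the idea concrete, here is a minimal sketch (not the researchers’ code) of how a detected grip could map to the kinds of layout adjustments described above. The grip labels and layout fields are hypothetical illustrations.

```python
from dataclasses import dataclass

# The six grip poses described in the article (hypothetical label names).
GRIPS = [
    "portrait_both", "portrait_left", "portrait_right",
    "landscape_both", "landscape_left", "landscape_right",
]

@dataclass
class LayoutHints:
    keyboard_align: str    # "left", "right", or "full"
    keyboard_scale: float  # shrink the keyboard for one-handed use
    button_edge: str       # edge to move primary buttons toward

def hints_for_grip(grip: str) -> LayoutHints:
    """Map one of the six grip poses to one-handed-friendly layout hints."""
    if grip not in GRIPS:
        raise ValueError(f"unknown grip: {grip}")
    if grip.endswith("both"):
        # Two hands: no one-handed adjustments needed.
        return LayoutHints("full", 1.0, "center")
    side = "left" if grip.endswith("left") else "right"
    # One hand: shrink the keyboard and pull controls toward the thumb.
    return LayoutHints(side, 0.8, side)
```

A real implementation would live in the platform’s UI layer, but the mapping itself is this simple: six grip classes in, a handful of layout tweaks out.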
With a sufficient level of screen brightness and resolution, a smartphone’s selfie camera can track a user’s face staring at the screen and then perform a CSI-style super zoom on the reflection of the screen in their pupils. It’s a technique occasionally used in visual effects, where reflections in a filmed shot are digitally magnified to calculate and mimic the lighting around the actors. But in this case, the pupillary reflection (as grainy as it is) can be used to figure out how a device is being held by analyzing its shape and looking for the shadows and dark spots that form where a user’s thumbs cover the screen.
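A toy sketch of that core idea, under loose assumptions (this is not the paper’s algorithm): treat the pupil reflection as a tiny grayscale crop and count dark "thumb" pixels on each half to guess which side of the screen is occluded.

```python
def thumb_shadow_side(reflection, dark_thresh=60):
    """Given a tiny grayscale reflection crop (a list of rows of 0-255
    pixel values), report which half holds more dark pixels, which may
    hint at where a thumb is covering the screen."""
    w = len(reflection[0])
    left = sum(1 for row in reflection for x, p in enumerate(row)
               if p < dark_thresh and x < w // 2)
    right = sum(1 for row in reflection for x, p in enumerate(row)
                if p < dark_thresh and x >= w // 2)
    if left == right:
        return "none"
    return "left" if left > right else "right"
```

The actual system would of course work on real camera frames and feed richer features into a trained classifier, but thresholding for occlusion shadows captures the gist.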
Some training is required for the end user, typically involving taking 12 photos of them performing each gripping pose so that the software has a sufficient sample size to work with, but the researchers have found that they can accurately figure out how a device is being held approximately 84% of the time. That may improve further as the resolution and capabilities of front-facing cameras on mobile devices do, but it also raises some red flags about how much information can be captured from a user’s pupils. Could nefarious apps use the selfie camera to record data, such as a user entering a password via an on-screen keyboard, or monitor their browsing habits? Maybe it’s time we all went back to using smaller phones that are one-handed friendly, and started blocking selfie cameras with sticky notes while we’re at it.