Image-recognition systems not only require the user's hands to be clearly visible at all times, they also raise privacy concerns, especially when the user's face appears in the video. Electronic gloves, meanwhile, are usually uncomfortable, easy to damage, and simply impractical in many situations. In short, researchers pushing forward fields such as gesture control of electronic devices, virtual reality, and prosthetic hands still have plenty of work ahead of them.
A good example of the way forward comes from a team of scientists at UC Berkeley: a thin band, with its own processor, that wraps around the forearm. The user starts by performing a series of gestures, one after another; during this calibration, the band's electronic sensors detect nerve signals at 64 points on the hand. This data is used to train a genuinely bespoke AI algorithm that learns the specific signals accompanying each gesture.
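The article does not describe the team's actual model, but the calibration step can be sketched in a hedged way: assume each repetition of a gesture yields a 64-channel feature vector, and a simple per-user model averages those vectors into one "prototype" per gesture. The gesture names, sample counts, and random data below are illustrative, not from the source.

```python
import numpy as np

# Illustrative sketch only; the Berkeley team's real algorithm is not shown.
# Assumption: each calibration sample is one feature vector from the band's
# 64 sensor channels, and we learn a mean "prototype" vector per gesture.

N_CHANNELS = 64
GESTURES = ["fist", "thumbs_up", "flat_hand"]  # hypothetical subset of the 21

def train_prototypes(samples_per_gesture):
    """Average each gesture's feature vectors into a single prototype."""
    return {g: np.mean(vs, axis=0) for g, vs in samples_per_gesture.items()}

# Simulated calibration session: the user repeats each gesture ten times.
rng = np.random.default_rng(0)
training_data = {
    g: rng.normal(loc=i, scale=0.3, size=(10, N_CHANNELS))
    for i, g in enumerate(GESTURES)
}
prototypes = train_prototypes(training_data)
print({g: p.shape for g, p in prototypes.items()})
```

A per-user model like this is what makes the algorithm "bespoke": the prototypes encode that individual's signal patterns rather than a population average.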
After training, when the user makes such a gesture, or even just thinks about it, the system can determine which gesture is intended by matching the incoming signals against the collected database. The solution currently supports 21 different gestures, including a fist, a thumbs-up, a flat hand, counting on the fingers, and pointing with a specific finger.
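One minimal way to picture "matching against the collected database" is nearest-prototype classification: compare a live 64-channel reading to each stored gesture prototype and pick the closest. This is a hypothetical stand-in for the real classifier; the prototype vectors and readings below are fabricated for illustration.

```python
import numpy as np

# Hedged sketch: classify a 64-channel reading by its nearest gesture
# prototype. "prototypes" is a hypothetical dict of per-gesture mean
# vectors learned during calibration; the real system may differ.

def classify(reading, prototypes):
    """Return the gesture whose prototype is closest (Euclidean) to the reading."""
    return min(prototypes, key=lambda g: np.linalg.norm(reading - prototypes[g]))

prototypes = {
    "fist": np.zeros(64),
    "thumbs_up": np.ones(64),
}
reading = np.full(64, 0.9)  # simulated sensor snapshot
print(classify(reading, prototypes))  # → thumbs_up
```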
Interestingly, the algorithm can also automatically update the gestures it has already learned, adapting to variables such as sweat on the hand or an arm held in an unusual position. Moreover, all computation takes place on the device itself, so no data is sent to the cloud. The possibilities of this solution are undeniably impressive and give hope, among other things, for prosthetic hands of the future that are as functional as our own.
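The self-updating behavior can be sketched, under the same hypothetical prototype model, as an exponential moving average: when a gesture is recognized, its stored prototype is nudged toward the fresh reading, so the model tracks drift from sweat or arm position. The `alpha` rate and the update rule are assumptions for illustration, and all state stays local, consistent with the on-device design.

```python
import numpy as np

# Hedged sketch of on-device adaptation: nudge a recognized gesture's stored
# prototype toward the latest reading so the model follows signal drift.
# Nothing is uploaded; the prototype dict lives entirely on the device.

def update_prototype(prototypes, gesture, reading, alpha=0.1):
    """Blend the stored prototype with the new reading (EMA with rate alpha)."""
    prototypes[gesture] = (1 - alpha) * prototypes[gesture] + alpha * reading

prototypes = {"fist": np.zeros(4)}       # toy 4-channel example
update_prototype(prototypes, "fist", np.ones(4))
print(prototypes["fist"])  # → [0.1 0.1 0.1 0.1]
```

Keeping the update this cheap, a handful of multiply-adds per channel, is one plausible reason adaptation can run on the wristband's own processor.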