Multitouch gestures with Lean Touch for Unity
Garamantis’ multi-touch developers have extended Unity’s Lean Touch with important additional touch gestures and developed context-based touch recognition. This intelligent recognition analyzes the content currently displayed under the user’s fingers and then carries out the action the user most likely intends in each individual case.
When it comes to recognizing and processing multi-touch gestures on touch screens, there is hardly any way around the Unity module Lean Touch. The widely used open-source software handles the most common multi-touch gestures such as scroll, pinch-to-zoom and rotate, and thus provides the basis for multi-touch screen input. In addition, the software is actively maintained and therefore up to date.
Weaknesses of Lean Touch
A disadvantage of Lean Touch is its limited set of supported multi-touch gestures. For example, it does not support capacitive object recognition, in which the three touch points produced by a marker on the underside of a physical object are detected. Garamantis has therefore extended Lean Touch for Unity with object recognition, so that different objects, their exact position and their rotation are reliably processed. An implementation of the TUIO protocol and the recognition of finger sizes at different resolutions are also missing.
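Capacitive markers of this kind typically press three conductive feet onto the screen in an asymmetric (scalene) triangle, so position and rotation follow from plain geometry: the centroid gives the position, and the vertex opposite the shortest side gives a direction. The sketch below illustrates that idea in C#; the class and method names are assumptions for illustration, not Garamantis’ actual implementation.

```csharp
using System;
using System.Numerics;

// Illustrative sketch (not the Garamantis implementation): derive an
// object's position and rotation from the three touch points of a
// capacitive marker arranged as a scalene triangle.
public static class MarkerRecognizer
{
    public static (Vector2 Position, float AngleDegrees) Recognize(
        Vector2 a, Vector2 b, Vector2 c)
    {
        // The object's position is simply the centroid of the triangle.
        Vector2 centroid = (a + b + c) / 3f;

        // Find the shortest side; because the triangle is scalene, it is
        // unique, so the vertex opposite it defines a stable "forward".
        float ab = Vector2.DistanceSquared(a, b);
        float bc = Vector2.DistanceSquared(b, c);
        float ca = Vector2.DistanceSquared(c, a);

        Vector2 mid, apex;
        if (ab <= bc && ab <= ca) { mid = (a + b) / 2f; apex = c; }
        else if (bc <= ca)        { mid = (b + c) / 2f; apex = a; }
        else                      { mid = (c + a) / 2f; apex = b; }

        // Rotation = angle of the vector from the shortest side's
        // midpoint to the opposite vertex.
        Vector2 dir = apex - mid;
        float angle = MathF.Atan2(dir.Y, dir.X) * 180f / MathF.PI;
        return (centroid, angle);
    }
}
```

For example, feet at (0,0), (2,0) and (1,3) yield the centroid (1,1), and since (0,0)–(2,0) is the shortest side, a rotation of 90 degrees.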
In addition, freely movable and scalable info cards are frequently used on multi-touch tables. These cards contain content such as pictures, texts and videos through which the user navigates. In a typical situation, users stand on all four sides of the table and work with one or even several info cards at once, moving them around and consuming the content they contain. This is precisely where Lean Touch has usability weaknesses: the software cannot distinguish whether a touch gesture is meant to move or enlarge the info card itself, or to navigate through the contents of the card.
Example: A user touches the content of a card and slides his finger across the screen. Does he want to move the card, or scroll the text displayed on it? Or is his finger on a button he might want to activate? Often the action the user actually intended does not happen, leaving him irritated or even frustrated.
Usability Optimization: Context-based Touch Recognition
Garamantis’ multi-touch developers have therefore added important touch gestures to Lean Touch and developed context-based touch recognition. The intelligent recognition analyzes the currently displayed content on which a gesture is performed and then carries out the action the user most likely intends in that particular case. For example, if the user’s finger rests on scrollable text within an info card and moves up and down, he probably wants to scroll the text at that moment. If, on the other hand, he moves his finger to the left or right, he probably wants to move the whole card. The same applies to buttons on movable info cards: context-based touch recognition does not activate a button immediately on first touch, but waits a few milliseconds to see whether the user actually wants to move the card. The delay is so short that the user does not notice it; he only notices, with satisfaction, that what he intended actually happens.
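The decision logic described above can be sketched in a few lines of plain C#. All names, the hold duration and the 10-pixel movement tolerance are illustrative assumptions, not the shipped Garamantis code: the dominant axis of a drag decides between scrolling and moving, and a button press is only confirmed after a short hold during which the finger stays put.

```csharp
using System;

// Possible outcomes of a touch on an info card.
public enum CardAction { ScrollText, MoveCard, PressButton, Undecided }

// Illustrative sketch of context-based gesture disambiguation
// (assumed values, not the actual Garamantis implementation).
public static class ContextGesture
{
    // Confirm a button only after this hold time with no movement;
    // short enough to be imperceptible to the user.
    public const float ButtonHoldSeconds = 0.08f;

    // Tolerance before a touch counts as a drag rather than a press.
    public const float MoveTolerancePixels = 10f;

    public static CardAction ClassifyDrag(
        float deltaX, float deltaY, bool overScrollableText)
    {
        // Predominantly vertical motion over scrollable text scrolls it;
        // any other drag moves the whole card.
        if (overScrollableText && Math.Abs(deltaY) > Math.Abs(deltaX))
            return CardAction.ScrollText;
        return CardAction.MoveCard;
    }

    public static CardAction ClassifyButtonTouch(
        float heldSeconds, float movedPixels)
    {
        if (movedPixels > MoveTolerancePixels)
            return CardAction.MoveCard;          // finger dragged away
        if (heldSeconds < ButtonHoldSeconds)
            return CardAction.Undecided;         // keep waiting
        return CardAction.PressButton;           // stable touch: activate
    }
}
```

In a Unity scene, `ClassifyDrag` would be fed per-frame finger deltas and `ClassifyButtonTouch` polled each frame until it leaves `Undecided`.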
These and other usability optimizations built on Lean Touch have eliminated the weaknesses described above. The result is a universally applicable touch recognition that is used in almost all Garamantis multi-touch projects. Projects can thus be realized without additional hardware, while the user experience remains particularly high-quality and intuitive.