Any idea why these are not often used with touchscreen mobile interfaces, e.g. press-and-hold for a contextual pie menu? Even without OS support, they could be implemented within apps.
Also see my comment above about the problem of non-transparent fingers.
Swiping gestures are essentially like invisible pie menus, but actual pie menus have the advantage of being "Self Revealing" [5] because they have a way to prompt and show you what the possible gestures are, and give you feedback as you make the selection.
They also provide the ability of "Reselection" [6], which means that as you're making a gesture, you can change it in-flight and browse around to any of the items, in case you need to correct a mistake or change your mind, or just want to preview the effect or see the description of each item as you browse around the menu.
Compare this with typical gesture recognition systems, like Palm's Graffiti for example: think of the gesture space of all possible gestures, from touching the screen, through moving along any possible path, to releasing. Most of those gestures are invalid syntax errors, and the recognizer only accepts well-formed gestures.
There is no way to correct or abort a gesture once you start making it (other than scribbling, but that might be recognized as another undesired gesture!). Ideally each gesture should be as far away as possible from all other gestures in gesture space, to minimize the possibility of errors, but in practice they tend to be clumped (so "2" and "Z" are easily confused, while many other possible gestures are unused and wasted).
But with pie menus, only the direction between the touch and the release matters, not the path. All gestures are valid and distinct: there are no possible syntax errors, so none of gesture space is wasted. There's a simple intuitive mapping of direction to selection that the user can understand (unlike the mysterious fuzzy black box of a handwriting recognizer), which gives you the ability to refine your selection by moving out further (to get more leverage), to return to the center to cancel, or to move around to correct or change the selection.
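To make that concrete, here's a minimal sketch of the direction-to-selection mapping (in Kotlin; the function name and the 20-pixel dead zone are my own assumptions, not from any particular implementation). Because the selection is a pure function of the current drag vector, cancellation and reselection fall out for free: just recompute it on every move event.

```kotlin
import kotlin.math.PI
import kotlin.math.atan2
import kotlin.math.hypot

// Hypothetical sketch: map the drag vector from the initial touch point to a
// pie slice index. Only the direction matters, not the path taken. Staying
// within the center dead zone yields null (no selection / cancel), and since
// the result is recomputed on every move event, reselection falls out for free.
fun sliceForDrag(dx: Float, dy: Float, slices: Int, deadZone: Float = 20f): Int? {
    if (hypot(dx, dy) < deadZone) return null        // inside the center: cancel
    // Angle measured clockwise from straight up (screen y grows downward).
    val angle = (atan2(dx.toDouble(), -dy.toDouble()) + 2 * PI) % (2 * PI)
    val sliceWidth = 2 * PI / slices
    // Offset by half a slice so each item is centered on its direction.
    return ((angle + sliceWidth / 2) / sliceWidth).toInt() % slices
}

fun main() {
    // An 8-item menu: 0 = up, 1 = upper right, 2 = right, ... 7 = upper left.
    println(sliceForDrag(0f, -100f, 8))  // 0: straight up
    println(sliceForDrag(100f, 0f, 8))   // 2: right
    println(sliceForDrag(5f, -5f, 8))    // null: still in the dead zone
}
```

Since the whole gesture space maps onto valid selections, every release direction means something, and releasing in the center always means cancel.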
Pie menus also support "Rehearsal" [7] -- the way a novice uses them is actually practice for the way an expert uses them, so they have a smooth learning curve. Contrast this with keyboard accelerators for linear menus: you pull down a linear menu with the mouse to learn the keyboard accelerators, but using the keyboard accelerators is a totally different action, so it's not rehearsal.
Pie menu users tend to learn them in three stages: 1) a novice pops up an unfamiliar menu, looks at all the items, moves in the direction of the desired item, and selects it. 2) an intermediate remembers the direction of the item they want, pops up the menu and moves in that direction without hesitating (mousing ahead but not selecting), looks at the screen to make sure the desired item is selected, then clicks to select it. 3) an expert knows which direction the item they want is in, and has confidence that they can reliably select it, so they just flick in the appropriate direction without even looking at the screen.
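On a touchscreen you can serve all three stages with one recognizer plus a press-and-wait popup delay, in the spirit of the marking menu design quoted under [7]. A hypothetical sketch (reusing sliceForDrag from the sketch above; the 300 ms dwell threshold is an assumption):

```kotlin
// Hypothetical sketch: the menu graphics only appear after a short
// press-and-wait delay, but the selection logic runs identically from the
// first move event. A novice who pauses sees the menu and browses; an expert
// who flicks immediately never waits for it to draw, yet performs exactly
// the same motion -- the "rehearsal" property.
class PieGestureTracker(
    private val slices: Int,
    private val popupDelayMs: Long = 300,  // assumed dwell threshold
) {
    private var downTime = 0L
    private var downX = 0f
    private var downY = 0f
    var menuVisible = false
        private set

    fun onDown(timeMs: Long, x: Float, y: Float) {
        downTime = timeMs; downX = x; downY = y; menuVisible = false
    }

    // Called on every move: reveal the menu for dwellers, and return the
    // currently selected slice (or null in the dead zone) for live feedback.
    fun onMove(timeMs: Long, x: Float, y: Float): Int? {
        if (timeMs - downTime >= popupDelayMs) menuVisible = true
        return sliceForDrag(x - downX, y - downY, slices)
    }

    // Called on release: commit the selection; null means "released in the
    // center", i.e. the gesture was cancelled.
    fun onUp(x: Float, y: Float): Int? =
        sliceForDrag(x - downX, y - downY, slices)
}
```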
I wrote some more stuff about pie menus in the previous discussion of Fitts' Law. [8]
[1] Android Pie Menus: https://play.google.com/store/apps/details?id=com.lazyswipe
[2] iOS Pie Menus: https://github.com/tapsandswipes/iphone-pie-menu
[3] Momenta Pen Pie Menus: https://www.microsoft.com/buxtoncollection/detail.aspx?id=17...
[4] Palm ConnectedTV Finger Pie Menus: http://uk.pcmag.com/connectedtv/29965/review/turn-your-palm-...
[5] Self Revealing: http://uxmag.com/sites/default/files/uploads/Brave-NUI-World...
Self-revealing gestures are a philosophy for design of gestural interfaces that posits that the only way to see a behavior in your users is to induce it (afford it, for the Gibsonians among us). Users are presented with an interface to which their response is gestural input. This approach contradicts some designers’ apparent assumption that a gesture is some kind of “shortcut” that is performed in some ephemeral layer hovering above the user interface. In reality, a successful development of a gestural system requires the development of a gestural user interface. Objects are shown on the screen to which the user reacts, instead of somehow intuiting their performance. The trick, of course, is to not overload the user with UI “chrome” that overly complicates the UI, but rather to afford as many suitable gestures as possible with a minimum of extra on-screen graphics. To the user, she is simply operating your UI, when in reality, she is learning a gesture language.
[6] Reselection: https://www.billbuxton.com/PieMenus.html
In general, subjects used approximately straight strokes. No alternate strategies such as always starting at the top item and then moving to the correct item were observed. However, there was evidence of reselection from time to time, where subjects would begin a straight stroke and then change stroke direction in order to select something different.
Surprisingly, we observed reselection even in the hidden menu groups. This was especially unexpected in the Marking group since we felt the affordances of marking do not naturally suggest the possibility of reselection. It was clear though, that training the subjects in the hidden groups on exposed menus first made the option of reselection apparent. Clearly many of the subjects in the Marking group were not thinking of the task as making marks per se, but of making selections from menus that they had to imagine. This brings into question our a priori assumption that the Marking group was using a marking metaphor, while the Hidden group was using a menu selection metaphor. This may explain why very few behavioral differences were found between the two groups.
Reselection in the hidden groups most likely occurred when subjects began a selection in error but detected and corrected the error before confirming the selection. This was even observed in the "easy" 4-slice menu, which supports the assumption that many of these reselections are due to detected mental slips as opposed to problems in articulation. There was also evidence of fine tuning in the hidden cases, where subjects first moved directly to an approximate area of the screen, and then appeared to adjust between two adjacent sectors.
[7] Rehearsal: https://www.billbuxton.com/MMUserLearn.html
Requirement: Novices need to find out what commands are available and how to invoke the commands. Design feature: pop-up menu.
Requirement: Experts desire fast invocation. Once the user is aware of the available commands, speed of invocation becomes a priority. Design feature: easy to draw marks.
Requirement: A user's expertise varies over time and therefore a user must be able to seamlessly switch between novice and expert behavior. Design feature: menuing and marking are not mutually exclusive modes. Switching between the two can be accomplished in the same interaction by pressing-and-waiting or not waiting.
Our model of user behavior with marking menus is that users start off using menus but with practice gravitate towards using marks and using a mark is significantly faster than using a menu. Furthermore, even users that are expert (i.e., primarily use marks) will occasionally return to using the menu to remind themselves of the available commands or menu item/mark associations.
[8] TLDR: bla bla bla pie menus bla bla bla. ;) https://news.ycombinator.com/item?id=11219792