""" By the way, Sketchpad was the first system where it was discovered that the light pen was a very bad input device. The blood runs out of your hand in about 20 seconds, and leaves it numb. And in spite of that it’s been re-invented at least 90 times in the last 25 years. """
from http://archive.org/details/AlanKeyD1987 around 7:10
Think of it like this: Alan Kay isn't wrong, but you could say the same thing about any input. The mouse is a very bad input device. It takes forever to use it for on-screen typing. The keyboard is a very bad input device. It can't tell how hard you're hitting each key when you do musical typing in GarageBand. The microphone is a very bad input device. Voice control is way slower than just clicking the menu item you want ...
If this thing is real, then within a couple of years there will be a dozen reasons to refuse to buy a computer without one.
You're papering over the problems with relativism.
The keyboard is rapid to use and flexible. Although it carries some risk of physical health problems like RSI, with light-pen interaction you have the certainty of pain and fatigue. Sign speakers don't need sign-to-speech - there are keyboards already. There have been multiple attempts at text-to-speech as well, and the result is more fiddly, less flexible, and less rapid than what you can get with a keyboard.
There could be an argument that keyboards are complicated and have a learning curve. But computers are the super-tool of our age. Why would you not learn to use a tool that combines great power with flexibility?
I think we still have discussions about alternative user interfaces that hark back to the way humans interact with each other because most of the population are not yet expert keyboard users. This will change. Once the developed world is flush with expert keyboard users, user interfaces will go back to putting greater emphasis on them.
This also isn't a light pen. You do not need to hold it close to the display.
Finally, there is way more casual computer interaction these days. A light pen could be fine for short interactions (especially if you do not have to search for the pen first).
Heck, consider how light a painter's brush is in comparison to a fencer's epee (which only weighs about 1 lb at the most, but still, try holding one for an hour without having any experience!)
(sample size of one: myself)
Visual SLAM is great for medium distances, but point clouds aren't really that dense and are slow to update. Also, the lidar needed to make the point clouds is stupid expensive.
Add one of these guys onto your robot and you've got a really cool set of 'whiskers': short range, highly sensitive, super fast update. I'd love to put several of these on a robot and use them to give it a sensitive field surrounding its body.
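As a rough sketch of that "sensitive field" idea (the point-cloud format and ranges here are invented for illustration; the device's actual output format isn't public yet), you could collapse each short-range depth frame into per-sector nearest-obstacle distances:

```python
import math

def whisker_field(points, num_sectors=8, max_range=0.6):
    """Collapse a short-range point cloud (list of (x, y, z) tuples in
    metres, sensor at the origin) into per-sector nearest-obstacle
    distances: a crude 'whisker' field around the robot."""
    field = [max_range] * num_sectors
    for x, y, _z in points:
        angle = math.atan2(y, x)                 # bearing of this point
        sector = int((angle + math.pi) / (2 * math.pi) * num_sectors) % num_sectors
        dist = math.hypot(x, y)                  # horizontal range
        if dist < field[sector]:
            field[sector] = dist
    return field
```

A fast update rate is what makes this interesting: the robot reacts to the nearest point per sector without waiting for a full SLAM map to rebuild.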
Depending on how open the software and hardware are, this will be a great addition to the robotics community.
Also, from the Ars Technica post on LEAP:
"The company says the breakthrough in resolution comes not from the hardware, which consists of relatively standard parts, but from what CTO David Holz calls 'a number of major algorithmic and mathematical problems that had not been solved or were considered unsolvable.'"
I'm conflicted by that statement. As a current academic, I hope they publish these supposed breakthroughs, as hiding them behind trade secrets makes me sad. As an entrepreneurial-minded person, however, I understand the desire for competitive advantage.
Perhaps I'll reconsider at some other time.
Pre-orders ($70) only ship domestic (for now) around winter.
20,000 dev kits are being made. We want to ensure this tech becomes ubiquitous.
We're getting slammed with launch response. But if you guys have questions, we'll try to answer them here shortly.
-Chris, Community Builder
Question: How can you render the "other side" of the hand at the 51+ second mark? If this is indeed possible, that's quite a remarkable technology you have.
What language bindings will it ship with?
Any partnerships on the way already?
good
* Fruit Ninja - made for single finger input, short gameplay
* Pinching and zooming maps - good because it's usually a short activity
* CAD camera interactions - good for periodic strange rotation needs or showing a client that doesn't understand the normal movement hotkeys
* Periodic writing with pencil tool
bad
* Shooters - can't turn player around, gameplay too long.
* Longterm writing or drawing - too tiring
Very, very inconvenient. Writing in the air is completely different and considerably more tricky than writing on paper. Physically more demanding, too. It's one of those things that sound nice in theory, but aren't really practical.
-Chris
However, I would make the good/bad list a bit more general:
* good for manipulating UI elements that represent 3D
* bad for manipulating UI elements that represent 2D
Maps and camera interactions (CAD) are perfect examples of things that represent 3D elements. Short games are another area that can represent 3D - longer games might also work well, but the user is likely to get tired of waving his/her arms around.
Much of what we do on computers today is strictly 2D. Coding, word processing, most web browsing, email, etc. Pencil tools/drawing tools are similarly usually just a 2D activity, so using a 3D-capable tool and reducing your movements to 2D doesn't really make sense.
As a counterexample, Emotiv gave a TED talk a while ago showing off a headset that lets you control your computer with your mind. When you visit their website you discover that you can only develop with a $500 “developer edition” headset that comes with a single, nontransferable license to use the SDK (additional licenses are $99). The consumer model of the headset only runs approved applications.
Pre-orders are for consumers at $70 and ship this winter.
The idea is to give all the hackers maximum access to create awesome apps and then deliver a healthy shiny ecosystem to the consumer. Also, we'd like to see a larger shift towards people creating things, so encouraging early adopters to get aboard the coding train is a positive trend.
It's a huge new interaction space, and we're looking for innovators to explore it!
Other people have asked about openness… will there be any kind of control over what programs can use it (i.e. do they need to be approved by you guys to work?)
Well, that's a big turnoff for a dev. Why not, instead of maximizing short-term profit, concentrate on the long term and make this system open, with certain limitations to protect your business?
Funding announcement: http://www.marketwatch.com/story/leap-motion-announces-1275-...
That being said, I really like the idea and would love to know the tech behind it.
I think the key, however, will be in the recognition of subtler gestures. If you can show me a man using two hands to type, then moving them not far from the keyboard to activate simple gestures for navigating a document, I'd really be sold that this is for everybody.
- I have to see it before I believe it.
- If this works as advertised, this company will never ship the product. They will be bought within months for a huge sum of money, even if they do not want to be bought. Reason for that is:
- Litigation, litigation! They will need deep pockets to defend themselves against patent claims.
To me this looks amazing and although LEAP seem to be pushing for you to get rid of your mouse/keyboard, personally I think this is probably best as an addition to it. Imagine if you had one of these built into the keyboard.
You're typing an email and need to add a location: switch over to Google Maps, hands off the keyboard as you manipulate the map to get a decent resolution, 'tap' the address bar to copy it, swipe left to switch back to the email program, tap again to paste, and boom, carry on typing.
You wouldn't need to be using it all the time for it to be extremely useful.
We keyboard jockeys sometimes forget how much faster something like this could make things for users less versed in shortcut keys!
There are a lot of issues to consider: what if I mean to swipe one thing but the system recognizes another? How is that handled? Does it calculate the position of my head and the perspective I'm seeing?
I agree with the above comment that this would ideally be an addition to our growing arsenal of HIDs: keyboard, mouse, touch mouse, joystick, Wacom tablet, and others. Most of these are not intended to replace one another; they are complementary.
Now imagine if each position was mapped to a different shortcut...
How many unique positions could you map? (compare to keyboard).
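As a hypothetical sketch of what such a mapping might look like (the frame format and pose names here are invented, not the real Leap SDK, which hadn't published its API at this point):

```python
# Hypothetical pose-to-shortcut table: a coarse hand pose is described
# by the number of extended fingers plus palm orientation. All names
# and the frame format are invented for illustration.
POSE_BINDINGS = {
    (1, "palm_down"): "scroll",
    (2, "palm_down"): "zoom",
    (5, "palm_up"):   "undo",
    (0, "palm_down"): "drag",   # closed fist
}

def shortcut_for(frame):
    """frame: dict with 'extended_fingers' (int) and 'orientation' (str).
    Returns the bound command, or None if the pose is unbound."""
    key = (frame["extended_fingers"], frame["orientation"])
    return POSE_BINDINGS.get(key)
```

Even this crude scheme gives 2 x 6 = 12 discrete poses per hand; the interesting question is how many of them a user can hit reliably without visual feedback.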
So while there is a basically continuous range of positions you can put your fingers into (with fifty or so degrees of freedom), the differences between many similar positions are subtle and require feedback for you to see which of them you're in, looping in your visual system and slowing down the interaction considerably. That is what you want for certain sorts of continuous-ish interactions, and not at all what you want for certain sorts of digital-ish interactions.
All of which is to say, this sounds suuuuuper cool, but it's not going to replace the keyboard.
The point is that if the technology works (which is the important question, IMO), then users will adapt and embrace it.
Type type type. Throw that away. Type type. Make that bigger. Type type type type. Run that.
I can't wait.
There are so many use cases where it doesn't even work, cases that would require a complete rethink of how anything is presented on the internet. For example, take the HN comments where text is pasted as code and scrolls horizontally. Try targeting that and scrolling it with this system. As soon as you have a single use case where a mouse and keyboard is more effective, you blow your value.
Moreover, it's not enough to just copy our current understanding of UI over to a form of input like this. Of course that's the natural inclination, but in reality, interfaces will change and adapt to things like this (meaning scrolling may not exist and a whole new form of pagination may be invented). The "how" is up to the designers.
More than anything, you really articulated my point by saying that "it really won't." You're right: as things stand in terms of interaction, this would become tiring. You just have to think of a way to make it not.
They do claim sub-mm accuracy; maybe applications in the small are realistic.
So instead of arm-waving, think of rotating your hand just above the touchpad to rotate an object in 3D space, but briefly. And the touchpad would still work like a regular touchpad, but maybe you don't even need to touch it.
Sub-mm accuracy seems to imply that really subtle gestures could work.
Disagree about the physiological concern. Provided your elbows are resting on the table, it would be easy to get used to. Humans would adapt, and it would be healthier than our current, much-discussed static postures.
Aside from end-use issues, the tech behind this is very nice.
I guess some kind of infrared thing?
You can get pretty impressive accuracy and precision at short distances, and the plane of depth points matches what their demo video shows: http://www.youtube.com/watch?v=_d6KuiuteIA&feature=playe...
I could of course be wrong, so if someone knows differently, please chime in.
Believe it when I see it live.
We asked one simple question: ‘What feel[’]s natural?’
Two or three hundred thousand[s] lines of code later
The Leap is a small iPod[ ]sized USB peripheral
Do you support [w]indows?
When do dev-kits ship[ ]
They are in serious need of a copyeditor.
So could this be a Flutter.io competitor? Flutter.io is purely software-driven. I feel Flutter will release an SDK/API as well at some point.
So I think intuition suggests that whoever executes these best will likely succeed:
- Rich feature set to capture gestures.
- Simple API. Should be easy to integrate with 3rd party apps.
- Performance.
Leap already has an edge in that they are releasing an SDK; Flutter should follow quickly (hopefully). Flutter is easy to get to people, as it is pure software, but can they achieve capturing rich gestures?
Exciting times.
How can I get a free developer kit?
We’re distributing thousands of kits to qualified developers, because, well, we want to see what kinds of incredible things you can all do with our technology. So wow us. Actually, register to get the SDK and a free Leap device first, and then wow us.
Do you support windows?
Yes! We also support native touch emulation for Windows 8.
How about Linux?
Linux support is on the agenda.
When do dev-kits ship
Depending on which batch you’re in; anywhere from 1-3 months.
What are the tech specifications for the LEAP?
TBD.
I'd be interested to see what you could do if you projected an image onto a work surface to make that more interactive. Seems like it would be easier to draw or manipulate 2D things on a plane rather than trying to wave your hand in 3D space. (Image Manipulation, Graphic Editing, Maps, etc)
I've already started thinking about some gestures that could be used for this, but I'm wondering: how hard is it going to be on the hand(s)? I mean, with the mouse and keyboard (supposing PC gaming), the hands are resting on the table 90% of the time; with this, the hand(s) will be up in the air.
...unless someone puts a nice glass table on top of that thing so that my hands could rest... could this work?
I dropped out of state U. after my 3rd year (math major), but that was years ago. At my current start-up, I have recently been forced to learn much more than I was expecting to about probabilistic graphical models and curve similarity measures (gladly though; always been interested in pattern recognition).
Anyone with a vision for this, consider dropping me a line. I might be able to help.
On the other hand, converting sign language to text/speech seems like it should be quite straightforward. Not knowing anything about sign language, I'm assuming signs map (more or less) one-to-one with words. The input from LEAP appears to be extremely high resolution, so if the sign gestures are properly normalized (and judging from the demo video, it looks like the LEAP SDK itself already does a good degree of input normalization), you should be able to just train your classifier (neural network, SVM, etc.) right out of the box.
Of course, things are never as easy as they look so in all likelihood there are plenty of complications I'm completely overlooking at first glance. But I agree with you 100% that it sounds totally doable.
-Chris
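As a toy illustration of the training step described above (a nearest-centroid classifier standing in for a real SVM/neural network; the feature vectors are invented, and real sign recognition would also need temporal modelling of motion, not just static poses):

```python
import math
from collections import defaultdict

def train(samples):
    """samples: list of (label, feature_vector) pairs, where each vector
    is a normalized gesture descriptor. Returns label -> centroid."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for label, vec in samples:
        if sums[label] is None:
            sums[label] = list(vec)
        else:
            sums[label] = [a + b for a, b in zip(sums[label], vec)]
        counts[label] += 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def classify(centroids, vec):
    """Return the label whose centroid is nearest to the input vector."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], vec))
```

The heavy lifting in practice is the feature extraction and normalization before this step; if the SDK really hands you stable, normalized hand data, the classification itself is the easy part.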
The promo video doesn't show a physical device, and the price point seems ridiculously low, especially for a resolution of 0.01 mm. And there is also this: http://bit.ly/KOqDi2 . The physical hand and the point cloud don't match. It's like someone is moving their hand(s) fast to mimic the movement of the visualization.
I'm still undecided. Perspective on each supposedly "fake" screenshot could explain the mismatch. In your example, you actually don't see how far his fingers are apart. Also, it might be attributed to that particular finger going to the borders of the 3D interaction space of Leap.
For some tasks (e.g. changing to a different browser tab five across from the current one) I can imagine that pointing at it would be the quickest and easiest way to switch to it.
I can imagine it'd get a bit tiring if you were relying on it too exclusively.
I don't think people in the gesture interface market are looking for ways to replace the essential function of the keyboard. For all intents and purposes, it's probably the best way to input textual data into a machine.
On a completely different note, though, I wonder what the range on these things is. Could have excellent applications to robotics, I hope they don't completely close the vision outputs behind their own gesture APIs or something...
However, I can see this being huge in the commercial market. I can easily imagine using something like this in a shop, or for presentations at work.
I would not think of getting a Kinect but my thought upon seeing the video was "I want to get one of these and use it to control the RasPi I'm sticking in some fake taxidermy along with a pico projector for a micro media PC."
But the idea of bringing such gesture based interaction to just about any device is really great.
Besides that, it really doesn't matter, as these were only two examples of a large variety of possible applications. If it doesn't fit the need, don't use it. There are other interfaces. There is no need for one interface to rule them all, but for interfaces that really hit the spot for particular applications.
Regarding Leap: looks really promising, though I would prefer it to be "hidden". Anyhow, can't wait to get my hands on it. Or over it.
Now they just gotta turn it into a protocol and build it into monitors.
Say 3 with your fingers to prove you are not a robot.
And we've got nothing against our northern neighbors, nor is it some grand favoritism conspiracy. Pure rollout logistics. Lots of people are getting confused on this point, we'll announce more shortly.
But not all ideas are good. This one violates Fitts's law by placing all user interaction within a vertical band next to your computer. That is very uncomfortable and decreases the usefulness for most applications (because missing the interaction band is likely).
If these were wireless gloves that I could use more easily (with my arms in any location), then I'd love this.
Sat in front of a desk, with a keyboard and mouse in front of you, I can imagine it will have some pretty attractive uses, but might not be as comfortable long-term as gesticulating at a mounted whiteboard.
You are forgetting that "most applications" are designed for a mouse and keyboard.
A more useful critique would be to analyze the type of applications that can be built with LEAP in mind.
You're literally arguing a logical fallacy by the way: http://rationalwiki.org/wiki/Negative_proof
Also, this is the first iteration of the tech. The same tech can be used to cover a football field and more.
Gloves would give significantly greater control, flexibility, and functionality. Three examples:
1) Control: Since the computer is aware of my digits, significantly more complex movements are possible. With two hands you'd have a ton of customization.
2) Flexibility: I can do all actions sitting comfortably from my chair. My arms don't have to leave the arm rests. Or, I can be across the room swiping through my media or photos.
3) Functionality: Fingers, or motions like raising a hand, could act like key commands. Want to submit a password? Turn your hand like a key. Want to refresh a page? Drum your fingers. Want to clear the screen? Slide your hand across the table. None of these would be possible with LEAP, because each would be obscured by or outside of the field of view.
Similar tech seems to already exist in Microsoft's Kinect. It can also cover entire rooms, but would do so more usefully. And the technology is already there to identify human hands, skeletons, and faces.
I want amazing technology just like you, but we need to be willing to not trumpet bad ideas just because it could be cool given enough effort/marketing.