Thoughts on the MYO

Holy jumpin’ gestures, Batman!

The MYO just announced itself on the scene today.  It looks crazy good.  The website claims that it intercepts the electrical signals coursing through your forearm to pick up on what you will do.

Read that again… WILL do.  They claim it might actually be a little faster than what you actually do in real life with your actual appendage.

One thing on their website did rub me the wrong way, though.

They say “Wave goodbye to camera-based gesture control”.

Now, as amazing as this thing looks to be, there’s a storm a-brewin’.  Gather ’round and I’ll tell you all about it.

The first consumer gesture/motion device that I can think of is the Sony PlayStation EyeToy.  It’s camera-based, and guess what, it’s still in use.  Sony added some glowy balls so the camera could see better and called it the “Move”.  With the PS4, Sony is getting rid of the glowy balls and making the controller itself all glowy.

Of course, the thing the MYO is really taking a shot at bringing down is the Kinect.  The Kinect is camera-based too, but it adds depth by bouncing infrared light all over your room, a bit like sonar.  That lets it one-up plain cameras by working in total darkness – but the sun (with its flood of infrared radiation) is total kryptonite for it.

But the big thing about the Kinect is that it’s spawning innovation.  The OpenNI foundation sprang up and not only provides device drivers for other Kinect-like devices, but provides middleware for all of them.  So whether you use a Kinect, a Carmine, or an Asus Xtion, you can work with gestures and skeletal data.  In fact, PrimeSense, the company behind the original MS Kinect tech, is supposedly shipping bulk orders of its teeny-tiny sensor to mobile device manufacturers.
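If you want a feel for how device-agnostic that middleware is, here’s a rough sketch of a skeleton-tracking loop using the OpenNI 2 / NiTE 2 APIs (my own example based on the published samples, not anything from the MYO or OpenNI folks, so treat the exact calls as assumptions).  The point is that nothing in it names a specific sensor.

    // Rough sketch: track users' skeletons with NiTE 2 on top of OpenNI 2.
    // The device specifics (Kinect, Carmine, Xtion) live in the OpenNI
    // driver layer, so nothing here names a particular sensor.
    #include <NiTE.h>
    #include <cstdio>

    int main()
    {
        if (nite::NiTE::initialize() != nite::STATUS_OK)
            return 1;                                 // middleware failed to start

        nite::UserTracker tracker;
        if (tracker.create() != nite::STATUS_OK)      // opens whatever OpenNI device is attached
            return 1;

        for (int i = 0; i < 300; ++i)                 // grab a few hundred frames
        {
            nite::UserTrackerFrameRef frame;
            if (tracker.readFrame(&frame) != nite::STATUS_OK)
                continue;

            const nite::Array<nite::UserData>& users = frame.getUsers();
            for (int u = 0; u < users.getSize(); ++u)
            {
                const nite::UserData& user = users[u];
                if (user.isNew())
                    tracker.startSkeletonTracking(user.getId());   // new person in view
                else if (user.getSkeleton().getState() == nite::SKELETON_TRACKED)
                {
                    const nite::SkeletonJoint& hand =
                        user.getSkeleton().getJoint(nite::JOINT_RIGHT_HAND);
                    if (hand.getPositionConfidence() > 0.5f)
                        printf("user %d right hand at (%.0f, %.0f, %.0f) mm\n",
                               user.getId(),
                               hand.getPosition().x,
                               hand.getPosition().y,
                               hand.getPosition().z);
                }
            }
        }

        nite::NiTE::shutdown();
        return 0;
    }

Swap the sensor underneath and none of that code changes, which is exactly why I don’t see cameras waving goodbye any time soon.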

Cameras ain’t dead – they’re just getting started.  Couple this with the Leap, which will be coming out shortly, and most likely a metric crap-ton of other sensors that will make their way to market in the next few years.

We’re venturing into “use the right tool for the right job” territory.  RGBD cameras (that’s red, green, blue, depth) are great for tracking your whole skeleton, and great for tracking multiple users at once.  Compared to plain RGB cameras they’re better at operating in the dark and worse at operating in the sun.  The Leap will be better than all-purpose RGBD cameras at focusing in and capturing the complexities of your fingers, though there is OpenNI middleware that does just that.

The MYO sounds amazing, and it will be nice having a good one-on-one connection for your gestures – so long as you limit what you want to capture to your arms and hands, and limit the number of users to the number of devices you’re willing to purchase.

Like I said, there’s a storm a-brewin’, and we’re gonna see a huge gesture- and motion-tracking market.  There will be some overlap, sure – but I think the MYO and RGBD cameras will each be able to stand on their own just fine.  I have a Kinect, an Asus Xtion, a Leap on pre-order, and I plan to get a MYO when they’re released.  There will be a right situation for each of them, and all of them will be a blast to play with.
