My first Node.js plugin for 3D motion sensing

Well, that’s a little bit of an awkward title – it’s not ONLY my first Node.js plugin I’ve released on NPM, but it ALSO does 3D motion sensing! So it’s brand new on both counts – I’m no Node.js plugin/addon veteran by any means!

You can grab version 0.1 of my NuiMotion project on NPM and GitHub. You can read why I made it – and about my crusade to let your body move your interfaces – on the project page.

That said, I learned a lot of stuff. I think the project serves as a shining example of how one guy accomplished some rather difficult and out-of-the-ordinary things with Node.js. I won’t claim it’s necessarily the right way – just one way. I was a little scared of C++ before all of this, but I jumped in because I had a need I wanted to fill, and C++ was the only way to get it done.

It’s pretty cool to see how the C++/JavaScript bridge works. There are all sorts of problems you run into with type conversion – mapping your strictly typed C++ variables to those loose JavaScript variables and back.

The biggest hurdle was breaking the architecture down into something that wouldn’t block the entire Node.js main process. All of the OpenNI examples run a big old while loop that grabs frames of video/depth data and pulls out the features you need, like gestures and skeleton data.

This is SO not cool for Node.js, so I needed to delve into how to achieve threading in the V8 engine with libuv. I still don’t understand everything about the final libuv code I used (why some things are declared the way they are), but I successfully broke the work out into a new thread that runs as fast as your machine will let it. We reach in and grab our joints at a custom-defined polling interval, and we event out when gestures and other events are encountered.

Of course, all of this NEEDS to be threadsafe. If you access the wrong thing inside a thread, you crash your entire process.

You can check out the main logic of all of this, complete with C++/JS communication and threading, here:

I didn’t do this alone, either. I asked a couple of questions on the extremely awesome Node.js Google Groups. One was around the threading question, and the other was around C++ compiling. To demonstrate how much of a noob I was, my compiling question boiled down to not realizing you have to include ALL your *.cpp files in your sources target. I thought that since main.cpp references the other files, they would be included automatically. NOPE! Live and learn.
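For anyone hitting the same wall: node-gyp only compiles the files you list – nothing gets discovered through your `#include`s. A minimal `binding.gyp` might look like this sketch (the file names are illustrative, not NuiMotion’s actual layout):

```python
{
  "targets": [
    {
      "target_name": "nuimotion",
      "sources": [
        # Every .cpp must be listed explicitly. main.cpp including
        # gestures.h does NOT pull gestures.cpp into the build.
        "src/main.cpp",
        "src/gestures.cpp",
        "src/skeleton.cpp"
      ]
    }
  ]
}
```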

Anyway – I’m of the opinion that this project probably represents some of the most difficult things you could ever need to know how to do in a Node.js addon (without getting into domain specific C++ code which could be infinitely complex for sure). So feel free to have a gander and learn!


7 thoughts on “My first Node.js plugin for 3D motion sensing”

  1. Your focus seems to be on gestures and skeleton tracking (not a criticism of course, just an observation 🙂). Does NuiMotion provide access to the raw depth data coming from the sensor?

    1. My plugin does not. Perhaps someday it will, but I don’t plan to anytime soon. The use case I’m really excited about is natural user interaction in your browser (or through some visual extension of Node.js). Since I’m primarily targeting the browser right now, I didn’t think a depth data stream would keep up that well over WebSockets.

      That said, it shouldn’t be that difficult to do considering the work I’ve already done. Perhaps this summer! Maybe I or someone else will fork this and make a new project. I know it would be useful for point cloud generation, 3D scanning, etc. What did you want to do with it?

      1. I’m working on a BeagleBone-powered autonomous sumo robot. We have an Asus Xtion sensor, and our original plan was to use pyOpenNI and write our logic in Python, but we couldn’t get that working.

        JavaScript is the language I’m most comfortable with, and I’ve been interested in doing something with Node.js for a while. The BeagleBone comes with Node.js and Cloud9 preinstalled, so it seems like a good opportunity. Now I just need to write the addon to get the depth data from OpenNI.

        I haven’t used C++ since 2001 (right before I changed majors out of comp. sci. 🙂), so this is a little daunting, but your posts and some time studying your code should be enough to get me started.

        Have you used Node-ffi at all? I want to give that a shot before diving in and writing my own addon.

        1. I’m right there with you, though I’d change the date to 1998 – that’s when I discovered a comp sci major wasn’t right for me, as my assembly language class was making my head spin.

          And yes, it totally looks daunting – and it was challenging while I was doing it. I had a tiny bit of help looking over one developer’s WiiMote Node.js addon code, and some time spent on the Node.js Google Groups. I can’t say enough good things about the Google Groups – the people in there are awesome and will help. But yes, I think my code will put you eons ahead in your project (not because everything I write is so awesome, just because it’s done and works).

          I haven’t used Node-ffi. I think I briefly checked it out when I was halfway through writing the plugin – and thought it might work, but I was already too far along.

          Good luck!

    2. Jason (and Ben),

      As it happens I am doing a summer research project involving streaming the output from OpenNI depth sensors into the browser (specifically, multiple depth sensors to do 360 degree reconstruction, but 1 sensor should work just fine). Once I get nuimotion working, my library will use it to do some useful tricks internally (more on that later).

      The code’s all up on my GitHub profile at – mostly in the pcl-streamer repository, but also a couple of other repos as necessary.

      It’s very much a work in progress at the moment, but I hope to get it somewhere near working in the next couple of months. (Also, if you’d like to contribute, I would appreciate the help 😀)

  2. (In my last comment, by “output” I mean full depth maps / point clouds as opposed to skeleton based representations)

    1. Very cool – I actually had someone contact me recently who needed the depth maps from OpenNI. I took a couple of hours to see if I could add it to my plugin, and got as far as getting access to each frame, but I didn’t know enough C++ to send it on as buffered data over to Node. He took over, did the rest, and it worked for him. I’ve since sat down with it for a few minutes and couldn’t get it working myself, though. It’s in the master branch of my project on GitHub now.

      Unfortunately, I’m making a big move to California this weekend (driving across the country), so I won’t be as responsive as I’d like. But I’d really love to check out Chris’ project when I can! Chris, I really do hope you get it working – I still need to read your latest GitHub ticket comment, and I hope we don’t reach a dead end where you just can’t build it.
