When we opened our brand new Makerspace at the Sun Gallery, I knew for sure I wanted to make an exhibit! This was when generative AI was just reaching the point where it was good, but could also be run locally. I took an older desktop PC I had, upgraded the graphics card, and installed Stable Diffusion. I learned a lot about running it through its REST API.
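As a rough sketch of what that looks like: if you run Stable Diffusion behind the AUTOMATIC1111 web UI with its API enabled (an assumption here, the exhibit may use a different server), the whole "prompt in, image out" loop is a single POST to the txt2img endpoint. The host, port, and parameters below are illustrative defaults, not the exhibit's actual settings.

```ts
// Hypothetical helper: ask a locally running Stable Diffusion server
// (assumed here to be the AUTOMATIC1111 web UI started with --api)
// for one image and return it as a data URL the browser can display.
async function generateImage(prompt: string): Promise<string> {
  const response = await fetch('http://127.0.0.1:7860/sdapi/v1/txt2img', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      prompt,
      negative_prompt: 'blurry, low quality',
      steps: 25,
      width: 512,
      height: 512,
    }),
  });
  if (!response.ok) {
    throw new Error(`txt2img failed: ${response.status}`);
  }
  // The API responds with base64-encoded PNGs in an `images` array.
  const { images } = (await response.json()) as { images: string[] };
  return `data:image/png;base64,${images[0]}`;
}
```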
The experience is built with TensorFlow.js body segmentation running live on a webcam feed, and the UI is done with Lit/Web Components. The code is freely available on GitHub.
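For a sense of the segmentation side, here is a minimal sketch using the `@tensorflow-models/body-segmentation` package with the MediaPipe selfie segmentation model. The model choice, runtime, and mask colors are assumptions for illustration; the exhibit's real configuration is in the GitHub repo.

```ts
import '@tensorflow/tfjs-backend-webgl';
import * as bodySegmentation from '@tensorflow-models/body-segmentation';

// Sketch: build a function that segments the live webcam feed and returns
// a binary person mask as ImageData (opaque where the visitor stands).
async function createPersonMasker(video: HTMLVideoElement) {
  const segmenter = await bodySegmentation.createSegmenter(
    bodySegmentation.SupportedModels.MediaPipeSelfieSegmentation,
    { runtime: 'tfjs' } // assumed runtime; 'mediapipe' is also supported
  );

  return async (): Promise<ImageData> => {
    const people = await segmenter.segmentPeople(video);
    // Foreground (the visitor) becomes opaque white, background transparent,
    // which makes the mask easy to reuse for canvas compositing.
    return bodySegmentation.toBinaryMask(
      people,
      { r: 255, g: 255, b: 255, a: 255 }, // foreground color
      { r: 0, g: 0, b: 0, a: 0 }          // background color
    );
  };
}
```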
It's a fun little exhibit that plays off the fear that AI will replace artists. You step up to the exhibit, press a button, and speak into the microphone to say what AI will replace you with. It then generates that image, cuts you out of the live picture with body segmentation, and puts the AI-generated image in your place.
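The "replace you with the AI image" step boils down to canvas compositing: keep the room from the webcam frame, but fill the visitor's silhouette with the freshly generated image. The sketch below is one assumed way to do that with the person mask from above, not the repo's exact code.

```ts
// Sketch: draw the webcam scene, then paste the AI image clipped to the
// visitor's silhouette on top of it.
function composite(
  output: HTMLCanvasElement,
  frame: HTMLVideoElement,
  personMask: ImageData,      // opaque where the visitor is
  generated: HTMLImageElement // the AI-generated image
) {
  const { width, height } = output;

  // Scratch canvas that will hold the AI image clipped to the person shape.
  const scratch = document.createElement('canvas');
  scratch.width = width;
  scratch.height = height;
  const sctx = scratch.getContext('2d')!;
  sctx.drawImage(generated, 0, 0, width, height);

  // Put the mask on its own canvas so it can be drawn with compositing.
  const maskCanvas = document.createElement('canvas');
  maskCanvas.width = personMask.width;
  maskCanvas.height = personMask.height;
  maskCanvas.getContext('2d')!.putImageData(personMask, 0, 0);

  // Keep AI pixels only where the mask is opaque (the visitor's silhouette).
  sctx.globalCompositeOperation = 'destination-in';
  sctx.drawImage(maskCanvas, 0, 0, width, height);

  // Webcam scene first, then the person-shaped AI cut-out over it.
  const ctx = output.getContext('2d')!;
  ctx.drawImage(frame, 0, 0, width, height);
  ctx.drawImage(scratch, 0, 0);
}
```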
[Image: the downloadable output after using the installation]
[Video: a recording my wife made while using the installation]