Brilliantly insane machine learning hacks


A gardening robot with a flamethrower, a poop search in the garden and a strangely sleep-inducing podcast. Your Python wants to have fun, so go for it!

Words by Jan Strmiska

Machine learning is one of the hottest applications of computer science today. And the magic is that it’s becoming more and more accessible to anyone (who knows what they’re doing) thanks to cloud solutions, pre-trained models, open source libraries and synthetic data. You can train your model into working software in a few hours. And of course, these great capabilities open up room for creativity and goofing around by data scientists and programmers that would have been out of the question just a few years ago. We’ve selected some of the craziest modern data science applications for you, DIY style. Have fun and learn.

Machine learning that helps you kick-start a workout habit

Maybe it’s a time illusion, but time just passes differently at the computer. You don’t even notice, and a minute suddenly becomes hours and days. But our bodies aren’t built for sitting in front of a screen. So Victor Sonck decided to do something about his work rhythm. Alarm clock reminders aren’t quite enough. Snooze, snooze, snooze… you know it yourself.

Victor designed software that locks his computer screen at regular intervals and only unlocks it if he does five push-ups. He used a Raspberry Pi with a webcam attached to it to recognize the push-ups. But it turned out that the microcomputer’s power alone wasn’t enough, as it needed to process large amounts of raw image data.

So he captured everything with a 4K Luxonis OAK-1 camera instead, which has a built-in machine learning processor (and can run the same neural networks as the OAK-D). You can install all sorts of image recognition systems on it, including MediaPipe’s pre-trained BlazePose, which can recognize a person’s position in an image. The OAK-1 sends the results as a series of coordinates (describing the position of the person’s head, torso and limbs) to the Raspberry Pi via USB. A second machine learning model then runs on the Pi. This analyses the preprocessed coordinate data and recognizes the push-ups. It’s time to exercise, you lazy bum!
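The second stage can be surprisingly simple once the heavy lifting (pixels to coordinates) is done upstream. Here is a minimal sketch of the idea, assuming BlazePose-style output where each frame gives a normalized nose y-coordinate that grows downward; the thresholds and function names are invented, not Victor’s actual code.

```python
# Hypothetical sketch: counting push-ups from a stream of pose keypoints.
# Assumes normalized y-coordinates (0 = top of frame, 1 = bottom);
# thresholds are invented for illustration.

DOWN_Y = 0.7   # nose this low in the frame counts as the "down" phase
UP_Y = 0.4     # nose back this high completes one repetition

def count_pushups(nose_ys):
    """Count down-then-up cycles in a sequence of nose y-coordinates."""
    count = 0
    is_down = False
    for y in nose_ys:
        if not is_down and y > DOWN_Y:      # body lowered
            is_down = True
        elif is_down and y < UP_Y:          # body raised back up
            is_down = False
            count += 1
    return count

frames = [0.3, 0.5, 0.75, 0.8, 0.5, 0.3, 0.6, 0.85, 0.35]
print(count_pushups(frames))  # 2
```

The two-threshold state machine matters: a single threshold would count jitter around the boundary as extra reps.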

Gardening robot with a flamethrower (Ultimate Weed Killing Robot)

Dave Niewinski from Canada addressed a different problem. In his spare time, he wanted to enjoy his garden and not get rid of weeds by hand. Dave had probably never heard of Hornbach, so the only logical solution popped into his head: “I need to build a crawler robot with a flamethrower, controlled autonomously by AI, to burn the weeds.” A reasonable idea.

The whole device is installed on the AgileX Robotics Bunker. The flamethrower is moved by a six-axis Kinova Gen3 arm. The whole thing is controlled via a Connect Tech Rudi-NX box, built around the Nvidia Jetson Xavier NX edge-AI computing module. Dave solved the weed recognition by first photographing his grass and weeds in detail and supplementing that dataset with pre-trained Roboflow models. The actual AI training was written in Python using Google Colaboratory, which is based on Jupyter notebooks. The main wonder of Colaboratory is that it gives you direct and free access to GPUs in the cloud. So you don’t have to have a hugely expensive GPU at home.
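Whatever the detector, the decision logic downstream looks roughly the same: filter the model’s detections and turn the survivors into aim points. This is a minimal sketch under assumed conventions (class label, confidence, pixel bounding box), not Dave’s actual code; the confidence threshold is invented.

```python
# Hypothetical sketch: turning object-detection output into flamethrower
# targets. The detection format (class, confidence, pixel bbox) is a common
# convention; the threshold is an invented safety margin.

CONFIDENCE_MIN = 0.8  # only burn what the model is sure about

def weed_targets(detections):
    """Return (x, y) bbox centers of confident weed detections."""
    targets = []
    for d in detections:
        if d["class"] == "weed" and d["confidence"] >= CONFIDENCE_MIN:
            x1, y1, x2, y2 = d["bbox"]
            targets.append(((x1 + x2) / 2, (y1 + y2) / 2))
    return targets

detections = [
    {"class": "weed",  "confidence": 0.93, "bbox": (10, 20, 30, 60)},
    {"class": "grass", "confidence": 0.97, "bbox": (40, 40, 80, 90)},
    {"class": "weed",  "confidence": 0.55, "bbox": (70, 10, 90, 30)},
]
print(weed_targets(detections))  # [(20.0, 40.0)]
```

With a flamethrower on the end of the arm, erring on the side of false negatives is clearly the right trade-off.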

If perhaps you feel like burning a few dandelions, you can find Dave’s software on the WeedBot GitHub. And be careful; you’d better not fall asleep while sunbathing.

AI dog poop recognition

Caleb Olson is our favourite machine learning inventor. In this case, he decided to find the poop his dog Twinkie leaves in his garden — using a camera and an AI image detection system. But in a completely different way than you might think.

Caleb wasn’t entirely satisfied with just looking for brown lumps in the picture. He decided to train his system to recognize the typical “pooping position” of his dog instead; an online map then shows the likely spots. Caleb used DeepLabCut to analyze the animal’s posture. This is an open-source Python toolbox for 2D and 3D markerless pose estimation, based, of course, on deep neural network learning.
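One practical detail a posture-based approach needs is debouncing: a single frame that happens to look like the pooping position shouldn’t log an event. The sketch below assumes some per-frame classifier (e.g. built on DeepLabCut keypoints) already labels frames True or False; the minimum run length is an invented parameter, not Caleb’s.

```python
# Hypothetical sketch: debouncing a per-frame posture classifier. Requiring
# several consecutive positive frames filters out momentary false positives.

MIN_FRAMES = 3  # posture must hold this long to count as a real event

def pooping_events(frame_labels):
    """Return start indices of runs of >= MIN_FRAMES consecutive True labels."""
    events, run_start = [], None
    for i, hit in enumerate(frame_labels):
        if hit and run_start is None:
            run_start = i
        elif not hit:
            if run_start is not None and i - run_start >= MIN_FRAMES:
                events.append(run_start)
            run_start = None
    if run_start is not None and len(frame_labels) - run_start >= MIN_FRAMES:
        events.append(run_start)
    return events

labels = [False, True, False, True, True, True, True, False, True, True]
print(pooping_events(labels))  # [3]
```

Each detected event would then be projected from image coordinates onto the garden map.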

DeepLabCut was initially developed at Harvard University. It can detect the movement of multiple objects or animals at the same time. Today, it’s all on GitHub.

But that’s not all. At first, Caleb just saw the poop in a real-time image on a website and went to pick it up according to the map on his phone. He later improved his setup by adding a cheap robotic arm with a laser pointer attached to it. Using OpenCV, he was thus able to aim a glowing green dot at each dropping. In addition, his code optimizes the pickup route, so he doesn’t run back and forth across the garden.
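The article doesn’t say which routing algorithm Caleb used, but for a handful of stops a greedy nearest-neighbour heuristic is the classic cheap answer to this little travelling-salesman problem. A minimal sketch with invented coordinates:

```python
# Hypothetical sketch: ordering pickup stops with a nearest-neighbour
# heuristic so you don't criss-cross the garden.
import math

def pickup_route(start, poops):
    """Greedily visit the closest remaining stop until none are left."""
    route, here, remaining = [], start, list(poops)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(here, p))
        remaining.remove(nxt)
        route.append(nxt)
        here = nxt
    return route

stops = [(8, 8), (1, 1), (2, 2), (9, 9)]
print(pickup_route((0, 0), stops))  # [(1, 1), (2, 2), (8, 8), (9, 9)]
```

Nearest-neighbour isn’t optimal in general, but for a garden-sized stop list it’s more than good enough.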

Oh, and because Caleb is a perfectionist, he applied the same algorithm that recognizes Twinkie’s behaviour to himself. The system recognizes when Caleb bends down to pick the poop up and then erases its position from the database, because it knows it’s been collected. Brilliant. :-)

Computer-generated sleep podcast

Stavros Korokithakis loves falling asleep to fairy tales. So, with the help of machine learning, he created a system that generates slightly disjointed and surreal fairy tales and has a synthetic voice read them to him. The project uses OpenAI’s GPT-3 language model to create content that sounds meaningful enough, but not enough to form a real plot. Perfect for falling asleep.

Stavros has really played around with the system, including creating dramatic pauses in the right places, using background music, and fading and amplifying audio. For this, he used pydub, a Python library for manipulating sound.
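The “dramatic pauses in the right places” part boils down to timeline arithmetic before any audio is rendered. Here is a stdlib-only sketch of that planning step (Stavros used pydub for the actual sound manipulation); the gap durations and the ellipsis rule are invented for illustration.

```python
# Hypothetical sketch: lay spoken sentences out on a timeline and stretch
# the silence after sentences that trail off with an ellipsis.

NORMAL_GAP_MS = 300
DRAMATIC_GAP_MS = 1500  # longer pause after a trailing "..."

def timeline(sentences):
    """Map (text, duration_ms) pairs to (start_ms, text) entries."""
    plan, t = [], 0
    for text, duration in sentences:
        plan.append((t, text))
        gap = DRAMATIC_GAP_MS if text.endswith("...") else NORMAL_GAP_MS
        t += duration + gap
    return plan

story = [("Once upon a time...", 2000), ("a robot dreamed.", 1500)]
print(timeline(story))
# [(0, 'Once upon a time...'), (3500, 'a robot dreamed.')]
```

With pydub, each entry would then become an AudioSegment placed at its start time, with fades applied at the edges.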

You can find the code on Stavros’s GitHub, and the actual Dada Fairy AI podcast is available to listen to online.

A system that uses AI image recognition to detect when an infant is hungry

And one more time: Caleb Olson. This time, instead of a dog, he decided to monitor his own offspring. He created something like a baby monitor, but on a much more sophisticated level. Again, he used body position and facial expression recognition, this time with the Google MediaPipe library.

Caleb also tracks lip movements, pacifier refusal, fist clenching and other markers in his system. The moment the system assesses that hunger is imminent, Caleb receives a text message and thus avoids the baby crying. Progress and parenting at their best!
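Combining several weak signals into one decision is a textbook fusion step. The sketch below takes the markers the article names (plus weights and a threshold that are entirely invented, not Caleb’s actual values) and fires the alert when their weighted sum crosses the bar.

```python
# Hypothetical sketch: fusing behavioural markers into a single hunger
# alert. The marker names come from the article; the weights and threshold
# are invented for illustration.

WEIGHTS = {
    "lip_movement": 0.3,
    "pacifier_refusal": 0.4,
    "fist_clenching": 0.3,
}
ALERT_THRESHOLD = 0.6

def should_alert(markers):
    """True when the weighted sum of detected markers crosses the threshold."""
    score = sum(WEIGHTS[m] for m, seen in markers.items() if seen)
    return score >= ALERT_THRESHOLD

obs = {"lip_movement": True, "pacifier_refusal": True, "fist_clenching": False}
print(should_alert(obs))  # True
```

Weighting lets one strong marker (refusing the pacifier) count for more than a weak one, instead of treating every cue as equally alarming.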



Creative Dock Corporate Venture Builder

We create new ventures by building and scaling them on top of big companies’ existing assets, with the aim of increasing value for our clients.