The world is drowning in trash. As humans, we produce around 1.3 billion tons of trash yearly, far more than we can properly recycle. And you don’t have to go through 6 seasons of The Sopranos to know that waste management is an extremely difficult business. It’s also an important one: segregating waste correctly, either at its source or in garbage processing plants, is key to reducing our negative impact on the environment. Even though we are aware of that, we often lack the focus or knowledge to do it right.
In this article we will try to sort out this problem using some generally available technologies to improve waste segregation in its place of origin — our households.
This is a pretty important matter, as all the members of the EU are obliged to recycle at least 50% of their municipal waste by 2025 to avoid sanctions.
Waste segregation rules are complex
Growing up, I was told on multiple occasions that programmers tend to be lazy. Well, that’s not entirely true, but the drive that makes you spend hours coding so that you never have to do something manually again: that is indeed a quality of the best developers.
Memorizing waste segregation rules is not something I would consider an efficient way to up my waste-game. Applying technology to do the dirty work for me sounds way better (and fun).
Let’s make things happen
Wouldn’t it be cool to have a trash segregation unit that suggests the proper sorting fraction whenever you approach it, so that you don’t have to think about where to throw your trash?
With that idea in mind, a spare Raspberry Pi that had been lying untouched in my drawer for almost 6 months, and an experimentation week on the horizon, I decided I’d take it for a spin and learn a couple of things in the process!
Our trash segregation unit will be able to recognize when we are about to dispose of some trash, capture an image of whatever we are disposing of and, having sent the image to a cloud-hosted Machine Learning model, indicate the right sorting fraction. Streamlining this process will yield more accurate segregation results in our household with less thinking and no more googling the recycling rules.
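The flow described above boils down to a simple control loop. Here is a minimal sketch of it; everything in it is illustrative, and the helper names (`wait_for_presence`, `capture_image`, `classify`, `signal_fraction`) are hypothetical placeholders for the hardware and cloud calls described later in the article:

```python
import time

# Hypothetical confidence threshold -- an assumption for this sketch,
# not a value taken from the actual project.
CONFIDENCE_THRESHOLD = 0.5

def pick_fraction(scores):
    """Pick the highest-scoring sorting fraction from a {label: confidence}
    dict, falling back to 'mixed' when the model is not confident enough."""
    label, score = max(scores.items(), key=lambda kv: kv[1])
    return label if score >= CONFIDENCE_THRESHOLD else "mixed"

def run(wait_for_presence, capture_image, classify, signal_fraction):
    """Main loop: wait for someone to approach, photograph the item,
    classify it in the cloud and flash the matching LED."""
    while True:
        wait_for_presence()              # block until the distance sensor triggers
        image = capture_image()          # snap a photo with the Pi camera
        scores = classify(image)         # {label: confidence} from the cloud model
        signal_fraction(pick_fraction(scores))
        time.sleep(2)                    # simple debounce before the next reading
```

The fallback to a “mixed” fraction on low confidence is just one reasonable design choice; you could equally well blink all LEDs or ask the user to retry.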
What a world we live in, to even consider this kind of facilitator!
Mr. Bin is a 100% recycled prototype built from an Ariel washing-capsule box. It turned out to be a perfect fit for all the RPi components and wiring and, at the same time, an accurate replica of what the target device could look like.
The set of components required for our device to be fully operational is quite minimal and setting it up shouldn’t be a problem even if you are just starting your adventure with microcomputers (which was the case for me).
To recreate Mr. Bin you will need:
- Raspberry Pi — to run our app. I used the model 4B with 2GB of RAM, but a Raspberry Pi Zero should be enough and is certainly more convenient when it comes to size & cost.
- Raspberry Pi Camera — to capture photos of trash for classification.
- Distance Sensor — to capture photos from controlled distance. I used HC-SR04, as it’s cheap and yields very good results.
- LEDs — four LEDs of different colors to indicate sorting fraction.
- Breadboard — to create a prototype without having to solder anything.
- Resistors — to limit current and drop voltage in our circuits.
- Jumper wires — to connect all the pieces together.
When you are done with your shopping (and laundry!), it’s time to assemble your device. The image below depicts the complete schematic for connecting the LEDs, distance sensor and camera to the Raspberry Pi.
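One wiring detail worth understanding is why the resistors matter for the distance sensor: the HC-SR04’s echo pin outputs 5V, while Raspberry Pi GPIO pins are only 3.3V tolerant, so the echo line is typically fed through a voltage divider. The resistor values below are a common choice for this sensor, an assumption on my part rather than necessarily the exact values in the schematic above:

```python
# Voltage divider for the HC-SR04 echo line: the sensor outputs 5 V,
# but Raspberry Pi GPIO pins expect at most 3.3 V.
R1 = 1_000   # ohms, between ECHO and the GPIO pin (assumed value)
R2 = 2_000   # ohms, between the GPIO pin and ground (assumed value)
V_IN = 5.0   # volts coming out of the sensor's ECHO pin

v_out = V_IN * R2 / (R1 + R2)
print(f"Voltage seen by the GPIO pin: {v_out:.2f} V")  # ~3.33 V, safe for the Pi
```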
As soon as our trash bin is properly wired, we can deploy our app on Mr. Bin. It will coordinate all the actions required to establish the correct sorting fraction for the trash we approach the unit with.
It’s a simple application written in Python that relies on the gpiozero library to talk to our components. gpiozero abstracts away the gory details of interfacing with everything we hooked up to our Raspberry Pi. It’s also super intuitive to use and follows the object-oriented programming paradigm which, in my opinion, is the most natural one when learning this stuff.
You can find the code for garbage-detector-app in the following GitHub repository: https://github.com/mgorsk1/garbage-detector-app
The most important part of Mr. Bin is definitely its brain. To take a more practical approach, I’ve decided to use the Google Cloud AutoML Vision service to quickly train a single-label classification Machine Learning model. It will distinguish the 4 sorting fractions from the provided data. To achieve this, I’ve uploaded a publicly available labeled Kaggle dataset to GCP Storage. Then, I’ve created a GCP AutoML Vision dataset and started training a single-label classification model on it.
I’ve used a budget of 8 node hours (meaning 8 machines working in parallel for one hour), which falls within the free tier of 40 node hours available to new GCP users. The screenshots below show how easy the whole process is.
After the model is trained, you can test it from the GCP AutoML Vision UI by uploading some test pictures, or use the Python client to make predictions programmatically, which is how garbage-detector-app communicates with our model.
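A programmatic prediction call can be sketched as below. This follows the style of the google-cloud-automl 2.x Python client; the project id, model id and region are placeholders you would fill in from your own GCP console, and the actual call of course requires valid GCP credentials:

```python
def build_payload(image_bytes):
    """Shape raw image bytes into the payload AutoML Vision's predict API expects."""
    return {"image": {"image_bytes": image_bytes}}

def classify_trash(image_bytes, project_id, model_id, region="us-central1"):
    """Send an image to the trained AutoML Vision model and return a
    {label: confidence} dict. project_id/model_id are placeholders."""
    # Imported lazily so the pure helper above also works without the library.
    from google.cloud import automl

    client = automl.PredictionServiceClient()
    name = f"projects/{project_id}/locations/{region}/models/{model_id}"
    request = automl.PredictRequest(name=name, payload=build_payload(image_bytes))
    response = client.predict(request=request)
    return {r.display_name: r.classification.score for r in response.payload}
```

The returned labels correspond to whatever display names you gave the four fractions when labeling the dataset.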
When all the separate pieces are in place, the most interesting question arises: how does it all come together? Below you can find a short demo I’ve recorded, showing the device dealing with four different objects: a plastic deodorant, a paper manual, a glass bottle and a paper postcard. I approach the device with each of them separately and, once my presence is detected, a photo is taken & sent for classification. The final step of the process is a visual suggestion (a flashing LED of a specific color) indicating which sorting fraction the object should go to.
Check it out yourself!
We’ve managed to build a whole device using ridiculously cheap materials, generally available cloud services and just a pinch of programming.
Now you’re going to throw your trash away like a boss (though hopefully not a mob boss).