Machines

This page details all my completed and in-progress builds.

I’m a beginner, so at times my code may be sloppy, my documentation weak, and I may describe problems that have obvious solutions. Bear with me.

Work in Progress:

R2D2 desk companion

I have been bingeing Star Wars these past two months, so I had to make something dedicated to my favourite droid, and I’ve been wanting to learn ML-based command recognition anyway. I want to use one of those paper templates available online to create an 8-10 inch tall R2D2 that responds to the command “Hey Artoo!” by playing its characteristic whistle-beep sounds and changing its LED colour from red to blue: some basic animation through wake-word detection. Based on my research, there are existing tools that handle command recognition out of the box, but I wanted to set up my own model.

How far I’ve gotten:

I used Edge Impulse, a development platform for edge machine learning models; it’s pretty great for a newbie like me. The entire way this works is so cool to me: the tool essentially converts any sensor data or audio stream into a spectrogram and then uses image recognition techniques to recognise and learn features. I’ve successfully created a model that can recognise “Hey Artoo” with pretty decent precision. It has been trained on my voice alone (plus some low/high-frequency versions I generated), so I’m sure there is lots of room for improvement.
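
To make the spectrogram idea concrete, here’s a toy C++ sketch of the general approach: slice the audio into overlapping frames and take the magnitude of a DFT for each one. This is just my illustration of the concept, not Edge Impulse’s actual DSP code, which is more sophisticated and optimized for microcontrollers.

```cpp
// Toy illustration of "audio -> spectrogram": split samples into frames,
// window each frame, and take DFT magnitudes for each one.
#include <cmath>
#include <cstdio>
#include <vector>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

// One column of the spectrogram: magnitudes of a naive DFT over a
// Hann-windowed frame. (Real DSP code would use an FFT instead.)
std::vector<float> frameSpectrum(const std::vector<float>& frame) {
    const size_t n = frame.size();
    std::vector<float> mags(n / 2);
    for (size_t k = 0; k < n / 2; ++k) {
        float re = 0.0f, im = 0.0f;
        for (size_t t = 0; t < n; ++t) {
            float w = 0.5f * (1.0f - std::cos(2.0f * M_PI * t / (n - 1)));  // Hann window
            float x = frame[t] * w;
            re += x * std::cos(2.0f * M_PI * k * t / n);
            im -= x * std::sin(2.0f * M_PI * k * t / n);
        }
        mags[k] = std::sqrt(re * re + im * im);
    }
    return mags;
}

// Slide a frame across the audio; each column becomes one "row of pixels"
// in the image the neural network learns from.
std::vector<std::vector<float>> spectrogram(const std::vector<float>& audio,
                                            size_t frameLen = 256, size_t hop = 128) {
    std::vector<std::vector<float>> columns;
    for (size_t start = 0; start + frameLen <= audio.size(); start += hop) {
        std::vector<float> frame(audio.begin() + start, audio.begin() + start + frameLen);
        columns.push_back(frameSpectrum(frame));
    }
    return columns;
}

int main() {
    // Half a second of a 440 Hz tone at 16 kHz, just to exercise the functions.
    std::vector<float> audio(8000);
    for (size_t i = 0; i < audio.size(); ++i)
        audio[i] = std::sin(2.0 * M_PI * 440.0 * i / 16000.0);
    auto spec = spectrogram(audio);
    std::printf("%zu frames x %zu bins\n", spec.size(), spec[0].size());
}
```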

Next steps:

My boards do not have an inbuilt microphone, and the sample deployment code the site generates only targets certain boards that do. I’m currently trying to configure an external microphone with my ESP32; there are a million different kinds of microphone drivers, so I had to figure out which one I need. Once I get satisfactory input, I’ll try to edit the generated code so that my board can run it. I was intimidated by the 50 files that were downloaded and don’t know where to start, but I’ll get there.
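
In case it helps anyone attempting the same thing: if I go with an I2S MEMS mic (an INMP441 or similar), the ESP32 side of grabbing samples in the Arduino IDE should look roughly like the sketch below. The pin numbers and driver settings are placeholders I still need to verify against my actual wiring and core version.

```cpp
// Minimal sketch for reading audio from an I2S MEMS microphone on an ESP32
// (Arduino core). Pin assignments and the INMP441 choice are just examples.
#include <Arduino.h>
#include <driver/i2s.h>

static const i2s_port_t MIC_PORT = I2S_NUM_0;

void setupMic() {
  i2s_config_t cfg = {};
  cfg.mode = (i2s_mode_t)(I2S_MODE_MASTER | I2S_MODE_RX);
  cfg.sample_rate = 16000;                          // should match the model's training rate
  cfg.bits_per_sample = I2S_BITS_PER_SAMPLE_32BIT;  // INMP441 sends 24-bit data in 32-bit slots
  cfg.channel_format = I2S_CHANNEL_FMT_ONLY_LEFT;
  cfg.communication_format = I2S_COMM_FORMAT_STAND_I2S;
  cfg.intr_alloc_flags = 0;
  cfg.dma_buf_count = 4;
  cfg.dma_buf_len = 256;

  i2s_pin_config_t pins = {};
  pins.bck_io_num = 26;                 // SCK  (placeholder pin)
  pins.ws_io_num = 25;                  // WS   (placeholder pin)
  pins.data_in_num = 33;                // SD   (placeholder pin)
  pins.data_out_num = I2S_PIN_NO_CHANGE;

  i2s_driver_install(MIC_PORT, &cfg, 0, NULL);
  i2s_set_pin(MIC_PORT, &pins);
}

void setup() {
  Serial.begin(115200);
  setupMic();
}

void loop() {
  int32_t raw[256];
  size_t bytesRead = 0;
  i2s_read(MIC_PORT, raw, sizeof(raw), &bytesRead, portMAX_DELAY);

  // Shift the samples down to 16-bit range (the exact shift depends on the mic);
  // this is the buffer the wake-word model would eventually be fed.
  int16_t samples[256];
  size_t n = bytesRead / sizeof(int32_t);
  for (size_t i = 0; i < n; ++i) {
    samples[i] = (int16_t)(raw[i] >> 14);
  }
  Serial.println(samples[0]);  // crude sanity check that audio is coming in
}
```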

R2’s body is shaping up quite nicely. I had made the head, but then accidentally sat on it, so I need to get a new set printed on some cardstock. I’ll get to this once I’m done with the circuitry.

Binary Display Watch

I made this watch this summer break because I’ve been wanting to get better at PCB design and level up from programming microcontroller boards to bare chips.

This watch is based on the STM32G070KBT6 microcontroller. I designed the PCB and wrote the firmware from scratch. The watch face displays each digit of the time in binary.

P.S. The time shown is 17:52 (0001 0111 0101 0010 in binary-coded decimal, with 1s representing lit LEDs). Over the course of designing the watch and wearing it for the past day, I’ve become quite accustomed to the format. It’s not that hard once you get used to it.
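
For anyone wondering how the firmware turns the time into LED states, the core of it is just splitting each decimal digit into four bits. Here’s a simplified, desktop-runnable version of that logic; setLed() here just prints, whereas the real firmware writes GPIOs on the STM32.

```cpp
// Simplified version of the display logic: each decimal digit of HH:MM
// becomes a 4-bit column, one LED per bit.
#include <cstdint>
#include <cstdio>

// Stand-in for the real GPIO write on the STM32 -- here it just prints.
void setLed(int column, int bit, int on) {
    std::printf("digit %d, bit %d -> %s\n", column, bit, on ? "on" : "off");
}

void showTime(uint8_t hours, uint8_t minutes) {
    uint8_t digits[4] = {
        static_cast<uint8_t>(hours / 10), static_cast<uint8_t>(hours % 10),
        static_cast<uint8_t>(minutes / 10), static_cast<uint8_t>(minutes % 10)
    };
    for (int col = 0; col < 4; ++col) {
        for (int bit = 3; bit >= 0; --bit) {
            // bit 3 is the most significant (top) LED of the column
            setLed(col, bit, (digits[col] >> bit) & 1);
        }
    }
}

int main() {
    showTime(17, 52);  // digits 1,7,5,2 -> columns 0001 0111 0101 0010
}
```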

Relevant links:

GitHub

Blog Post

IKSHANA: A real-time text-to-braille converter

Our team created IKSHANA, a real-time text-to-braille converter, as part of a 5-week bootcamp hosted by the Indian Institute of Technology, Delhi. It can scan text from physical sources like newspapers, postcards, or phones and instantly generate braille.

Our project consists of two parts: the scanner and the refreshable braille device (RBD). The scanner uses our camera module (8 MP for clear imaging) and custom OCR system to convert external text into digital form. The microprocessor then converts the digital text into a form the RBD understands using our novel translation algorithm. The RBD itself is a matrix of solenoids with custom-printed shafts for optimum tactile acuity, and it displays the braille characters in rapid succession.
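
Our actual translation algorithm and solenoid driver aren’t reproduced here, but the basic idea of the text-to-cell step can be sketched like this: each character maps to a six-dot pattern, and each set bit energises one solenoid. The lookup table below covers only a few letters and is purely illustrative.

```cpp
// Rough illustration of the text -> braille-cell step: each character maps
// to a 6-bit dot pattern (dots 1-6 of a standard braille cell), and each
// set bit corresponds to one solenoid being energised.
#include <cstdint>
#include <cstdio>

// Bit i (0..5) represents braille dot i+1.
uint8_t charToBrailleDots(char c) {
    switch (c) {
        case 'a': return 0b000001;  // dot 1
        case 'b': return 0b000011;  // dots 1,2
        case 'c': return 0b001001;  // dots 1,4
        case 'd': return 0b011001;  // dots 1,4,5
        case 'e': return 0b010001;  // dots 1,5
        default:  return 0;         // unsupported character -> blank cell
    }
}

// Placeholder for driving the solenoid matrix: one call per dot.
void driveSolenoid(int dot, bool raised) {
    std::printf("dot %d: %s\n", dot, raised ? "raised" : "flat");
}

void displayCharacter(char c) {
    uint8_t dots = charToBrailleDots(c);
    for (int dot = 0; dot < 6; ++dot) {
        driveSolenoid(dot + 1, (dots >> dot) & 1);
    }
}

int main() {
    displayCharacter('b');  // raises dots 1 and 2
}
```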

Key innovations by our team include:

  1. Paper-embossed braille is heavy, expensive, and only usable for a short time, so only a small fraction of content is ever available in braille. IKSHANA can translate content from a wide variety of sources, ranging from newspapers and books to even phone screens.
  2. Braille printers can only print pre-entered digital text, which is costly, time-consuming, and not real time. IKSHANA doesn’t require any digitally entered text: its OCR system, designed specifically for imperfect images, can scan text from physical sources with decent accuracy.
  3. Speech synthesis software cannot convey the spatial aspect of text required for learning, spelling, algebra, and equations. IKSHANA’s refreshable braille system overcomes this constraint: our team designed an RBD using solenoids and a set of custom-printed shafts.
  4. Our team created a novel algorithm for translation of English text to braille characters.
  5. IKSHANA also has a portable and sleek design, giving it an upper hand over braille printers and books. 

Our current prototype is a proof of concept for a single English character. The final design consists of a keyboard-like matrix of cells with multiple rows and columns that refreshes once the user reaches the last character of the matrix. Future iterations will also add support for handwritten text and other languages, as well as a reduced size.

Relevant links: Documentation

Ray: The Smart UV Tracking Necklace

“Ray” is a wearable technology project that combines fashion with healthcare. This project introduces a necklace that is integrated with a UV sensor capable of tracking UV radiation. Paired with a mobile app hosted using Blynk, “Ray” aims to revolutionize how individuals interact with and manage their sun exposure.

Key Features:

  • UV Monitoring: Continuous tracking of UV levels to give real-time and historical data through a chart and value display.
  • Harshness Index: UV monitoring technology currently on the market displays only real-time UV values. However, prolonged UV exposure can be harmful, and existing solutions take into account neither the duration of exposure nor whether the user has applied sunscreen. Our solution uses machine learning to develop a “Harshness Index” that incorporates a temporal component, so it considers both the duration of exposure and the level of UV radiation. It also takes user input about the SPF applied. Based on this data, it makes personalized suggestions for users to increase or decrease their sun exposure (a simplified sketch of the idea follows after this list).
  • SPF Recommendations: An intelligent algorithm that suggests suitable SPF protection based on the current UV index and the user-inputted SPF already applied.
  • Skin Cancer Likelihood: Incorporates machine learning to predict how much more or less likely the user is to develop skin cancer based on historical raw UV data.
  • Sun Intensity Alerts: Push notifications to warn users of high UV levels and advise on protective measures.
  • User-friendly App Interface: Easy-to-navigate app providing a seamless user experience for monitoring and managing sun exposure data.
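
Ray’s real Harshness Index comes from a machine-learning model, but to give a feel for why both duration and SPF matter, here’s a deliberately simplified, non-ML sketch of the same idea (the formula and thresholds are made up for illustration):

```cpp
// Simplified, non-ML illustration of the "temporal + SPF" idea behind the
// Harshness Index: accumulate UV dose over time, scaled down by the SPF the
// user says they applied.
#include <cstdio>

struct ExposureTracker {
    float dose = 0.0f;  // accumulated, SPF-adjusted UV "dose"

    // uvIndex: current reading from the sensor
    // spf:     user-reported sunscreen SPF (1 = none)
    // minutes: time elapsed since the previous sample
    void addSample(float uvIndex, float spf, float minutes) {
        if (spf < 1.0f) spf = 1.0f;
        dose += (uvIndex * minutes) / spf;
    }

    // Arbitrary thresholds purely for illustration.
    const char* advice() const {
        if (dose < 30.0f)  return "You're fine -- enjoy the sun.";
        if (dose < 100.0f) return "Consider reapplying sunscreen.";
        return "Head for shade and cover up.";
    }
};

int main() {
    ExposureTracker tracker;
    tracker.addSample(7.0f, 30.0f, 60.0f);  // UV 7 for an hour with SPF 30
    tracker.addSample(9.0f, 1.0f, 20.0f);   // UV 9 for 20 minutes, no sunscreen
    std::printf("dose=%.1f -> %s\n", tracker.dose, tracker.advice());
}
```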

Relevant links: Documentation

IRRIGO: IoT-based remote watering and monitoring system for houseplants

Improper watering is one of the main reasons why household plants die. Most people do not have any concrete methods of watering their plants when they are away. ‘Irrigo’, short for ‘Irrigation on the Go’, aims to resolve this problem. 

Irrigo is an IoT-based smart gardening system. It allows people to remotely water their plants and monitor their environmental factors (moisture, humidity, temperature). This adds convenience and extends the plants’ lifespan.

The project consists of two parts: the ‘on-site’ model and the app. The model is built around an ESP32 microcontroller programmed with the Arduino IDE, and the app was made using Blynk.

The app graphically presents the humidity and temperature of the plants. It has watering buttons that allow a user to water their plants. The most recent prototype also offers a live feed. 

The code was written by referring to a large database of plant growth values, which I used to define upper and lower bounds for different plants. The model can therefore water and monitor all kinds of plants according to their individual water needs, which is ideal for houseplants.
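
The real bounds come from the plant-growth database and watering can also be triggered from the app, but a stripped-down sketch of how per-plant bounds could drive the pump on the ESP32 looks something like this (pin numbers, bound values, and the sensor’s reading direction are placeholders, not my calibrated figures):

```cpp
// Stripped-down version of the watering logic: each plant gets moisture
// bounds taken from the growth-value database, and the pump runs whenever
// the reading drifts below the lower bound.
#include <Arduino.h>

const int MOISTURE_PIN = 34;  // analog input from the soil sensor (placeholder)
const int PUMP_PIN = 26;      // relay controlling the water pump (placeholder)

struct PlantProfile {
    const char* name;
    int lowerBound;  // below this raw reading -> too dry, start watering
    int upperBound;  // above this -> wet enough, stop watering
};

// Example bounds only; direction depends on the sensor and wiring.
PlantProfile currentPlant = {"pothos", 1400, 2600};

void setup() {
    Serial.begin(115200);
    pinMode(PUMP_PIN, OUTPUT);
    digitalWrite(PUMP_PIN, LOW);
}

void loop() {
    int moisture = analogRead(MOISTURE_PIN);
    if (moisture < currentPlant.lowerBound) {
        digitalWrite(PUMP_PIN, HIGH);   // soil too dry: start watering
    } else if (moisture > currentPlant.upperBound) {
        digitalWrite(PUMP_PIN, LOW);    // wet enough: stop
    }
    Serial.printf("%s moisture: %d\n", currentPlant.name, moisture);
    delay(5000);
}
```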

Relevant links: Documentation

Teebot: The T-shirt-folding Robot

Teebot is a T-shirt-folding robot that uses servos and a 4-panel system to fold T-shirts. It was inspired by Sheldon from TBBT, who used a plastic contraption to fold his T-shirts. Teebot offers two modes: ‘single’ mode folds T-shirts one at a time, while ‘loop’ mode asks the user how many T-shirts they would like to fold and runs the folding mechanism repeatedly until all of them are folded.
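
The folding itself is just a timed sequence of servo moves, and loop mode simply wraps that sequence in a count. A rough sketch of the shape of the code is below; the servo pins, angles, delays, and panel-to-servo arrangement are placeholders, not Teebot’s actual values.

```cpp
// Rough shape of Teebot's loop mode: foldOnce() runs a timed servo sequence
// for the moving panels, and loop mode repeats it for however many T-shirts
// the user asks for over the serial monitor.
#include <Arduino.h>
#include <Servo.h>

Servo leftPanel, rightPanel, bottomPanel;

void foldOnce() {
    leftPanel.write(180);   delay(800);  leftPanel.write(0);   delay(400);
    rightPanel.write(180);  delay(800);  rightPanel.write(0);  delay(400);
    bottomPanel.write(180); delay(800);  bottomPanel.write(0); delay(400);
}

void setup() {
    Serial.begin(9600);
    leftPanel.attach(9);     // placeholder pins
    rightPanel.attach(10);
    bottomPanel.attach(11);

    Serial.println("How many T-shirts? (loop mode)");
    while (Serial.available() == 0) {}   // wait for the user's count
    int count = Serial.parseInt();

    for (int i = 0; i < count; ++i) {
        foldOnce();
        Serial.println("Place the next T-shirt...");
        delay(5000);                      // time to load the next shirt
    }
}

void loop() {}
```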

Relevant links:

Teabot: The kettle-switching Robot

Teabot (a spoof of Teebot) is an Arduino-based kettle switcher. I made it for my father as a Father’s Day gift because he drinks a lot of tea and makes me switch on his kettle twenty times a day.

Relevant links:

IoT-Enabled Crab Eyes

I made this crab from old electronic components as an art project. I wanted to test out Blynk 2.0 and Arduino Cloud with Alexa integration, so I used the crab’s LEDs for fun. I can control the LEDs using the commands “Eye 1 on!” or “Eye 2 on!”.

I later hooked up a DHT11 and some moisture sensors to test out other features.
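
For reference, the Blynk 2.0 side of toggling each eye is just a virtual-pin callback, roughly like the sketch below. The template ID, auth token, Wi-Fi credentials, and GPIO numbers are placeholders, and the Alexa voice commands go through the separate Arduino Cloud setup rather than this code.

```cpp
// Rough sketch of the Blynk 2.0 side: each eye LED sits on a virtual pin,
// and the app (or an automation) toggles it.
#define BLYNK_TEMPLATE_ID   "TMPL_xxxxxx"      // placeholder
#define BLYNK_TEMPLATE_NAME "CrabEyes"         // placeholder
#define BLYNK_AUTH_TOKEN    "your-auth-token"  // placeholder

#include <WiFi.h>
#include <BlynkSimpleEsp32.h>

const int EYE1_PIN = 16;  // placeholder GPIOs for the two LED "eyes"
const int EYE2_PIN = 17;

BLYNK_WRITE(V0) { digitalWrite(EYE1_PIN, param.asInt()); }  // "Eye 1 on!"
BLYNK_WRITE(V1) { digitalWrite(EYE2_PIN, param.asInt()); }  // "Eye 2 on!"

void setup() {
  pinMode(EYE1_PIN, OUTPUT);
  pinMode(EYE2_PIN, OUTPUT);
  Blynk.begin(BLYNK_AUTH_TOKEN, "wifi-ssid", "wifi-password");  // placeholders
}

void loop() {
  Blynk.run();
}
```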

Relevant links: GitHub Repo
