| hackathon_id | project_link | full_desc | title | brief_desc | team_members | prize | tags | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|
10,486 | https://devpost.com/software/road-of-aromas | The University of Newcastle campus is filled with beautiful natural scenery. The journey in the campus environment through the tranquil greens and vibrant gathering areas enriches the student and staff experience. Walking is a common way to travel around campus, and mosquitoes are very much not welcome to join us for a walk. Hence, the use of essential oils can help repel mosquitoes in walkways and common spaces.
Spray Essential Oils on the Walkways
Humans have long used plants and plant derivatives to repel insects or treat the bites they leave behind. Ancient Greek and Roman scholars wrote about using plants on skin and clothing, so it is no surprise that essential oils are now popular in the fight against the bite!
The school can use essential oils by diluting or mixing the oil and spraying it on the walkways to deter mosquitoes near main circulation and gathering areas. Natural essential oils will not harm the ecological environment, and the aromas can keep mosquitoes at bay without the use of chemicals that may be harmful to both the environment and our health.
Essential Oils that Repel
Peppermint oil, lemon eucalyptus oil, citronella and other essential oils can repel mosquitoes while smelling wonderful at the same time. One of these is lavender, which gives a pleasant, soothing scent that helps to relax and reduce stress, perfect for students especially during the exam period. And the mosquitoes hate that smell, so it keeps them away too!
Spraying essential oils reduces the number of mosquitoes in our walkways over time, lowering the risk of getting mosquito bites. A great day for a walk! | Road of Aromas | Spray Essential Oils on the Walkways | ['Kevin Lim'] | [] | [] | 21 |
10,486 | https://devpost.com/software/mozzie-trap | Mosquitoes are a major problem in warm weather. The sound of mosquitoes buzzing around our ears is annoying, but their bites are potentially dangerous to our health. Large populations of mosquitoes make it almost impossible for students and staff to enjoy outdoor space and activities.
We can get rid of mosquitoes with an easy handmade trap. Mosquitoes find their victims by following carbon dioxide trails, which we produce as we exhale. Many commercial traps attract bugs by burning propane to produce carbon dioxide.
Easy Handmade Mozzie Trap
All you need is a plastic bottle, a penknife, tape, 1/4 cup of brown sugar, 1 cup of hot water, and 1 packet of active dry yeast. Cut the top off the bottle. Place the sugar, yeast and water bait inside the bottle; this creates a carbon dioxide plume that lures the mosquitoes into the trap. Invert the bottle top into the bottle and tape the top edges.
Hungry mosquitoes follow the trail into the bottle and down through the funnel. When they realize there is no food to be found, they fly along the surface of the brown sugar mixture until they reach the sides of the bottle. The bugs then fly up the side of the bottle, but their escape is blocked by the inverted funnel (the reason for sealing the edges with duct tape). The little biters are trapped!
After a period of being trapped, the mosquitoes become tired and fall into the liquid to drown. Most of the mosquitoes that enter the trap will perish there.
Mosquito traps are better than other bug-killing methods
The trap is easy to make from inexpensive materials. You also do not need any special equipment to make the trap work. That means you do not need electricity, extension cords, or timers that other traps might require.
The trap works 24 hours a day. No maintenance is required. No fans or other moving parts that can break down are involved. The mosquito bait is completely organic and can be used safely in the environment.
Student and Staff Involvement
The school can organize workshops where students and staff learn how to make the trap from a bottle or container. These traps can be placed in assigned locations around campus, such as on outdoor tables. Aesthetics can be considered by providing creative ways to decorate the bottle. The cleaners or student volunteers can change the bottles weekly. Let’s make our Uni a better place – Mozzie free! Trap those mozzies and keep them away! | Mozzie Trap | Trap those mozzies and keep them away! | ['Kevin Lim'] | [] | [] | 22 |
10,486 | https://devpost.com/software/chemical-olfaction-and-elimination | The buzz in the ear during an afternoon nap, the stings on the legs during a Sunday hike and the odd one that goes in your mouth while playing cricket, Mosquitoes are annoying and that is an inspiration to fight against. However, the primary inspiration is the mosquito transmitted pathogens that kill people around the world every year. There are 19 such fatal pathogens known.
Chemical olfaction uses cheap volatile chemicals (components of human sweat) to attract mosquitoes. Once concentrated in a region there are various ways to eliminate.
I have not built it yet. However, building it merely requires mixing a few cheap chemicals.
The primary challenge is that these chemicals need to be volatile and hence need to be replenished. The task will be to optimise the chemical compositions and associated costs.
I am proud only of my will to solve problems.
I learned that killing mosquitoes is as essential as it is tough, but there are smart ways of doing it.
Executing the idea, and optimising the chemicals.
Built With
none | Chemical Olfaction and Elimination of Mosquitoes | To repel mosquitoes for all is a big task. Attracting them is easier. So why not attract and eliminate. And that can be done with very cheap chemicals and minimum effort. | ['Rohan Borah'] | [] | ['none'] | 23 |
10,486 | https://devpost.com/software/a-mosquito-control-that-sounds-fishy | Inspiration - I was inspired by the idea of walking on campus in the Winter without being bitten by a mosquito and thought wouldn't it be nice if every season could be like this.
What it does- We introduce a native species to our swamp and ponds, such as the Crimson Spotted Rainbowfish (Gold Coast fish control, 2020), which consumes mosquito larvae. Studies in Africa have shown a significant decrease in the mosquito population, about 75%, using Tilapia fish, whose population was controlled as the locals consumed the fish (Howard et al., 2007). Guppies may also be used as a cheap alternative (Warbanski et al., 2017), or koi ponds could be introduced to improve the structural appearance of the campus in addition to decreasing the mosquito population.
Challenges I ran into- The only potential problem is population control; however, being on a university campus, one can use the fish to study anatomy or for other scientific experiments. In addition, if using a consumable fish such as Tilapia, we could capitalize on the fish market.
What I learned- Mosquitoes breed on stagnant water; once the eggs hatch, the larvae consume the microorganisms in the environment. A pond is a great environment for these organisms, which is why it is often the chosen breeding spot for mosquitoes (Wang et al., 2020).
What's next for A mosquito control that sounds fishy- initiate controlled environments to see how much of a decrease occurs in the mosquito population and whether or not the fish population itself can be controlled.
References:
1) Howard, A. F., Zhou, G., & Omlin, F. X. (2007). Malaria mosquito control using edible fish in western Kenya: preliminary findings of a controlled study. BMC Public Health, 7(1), 199.
2) Gold Coast mosquito control using pond fish: https://www.goldcoast.qld.gov.au/documents/bf/mosquito-control-ponds-dams.pdf
3) Warbanski, M. L., Marques, P., Frauendorf, T. C., Phillip, D. A. T., & El-Sabaawi, R. W. (2017). Implications of guppy (Poecilia reticulata) life-history phenotype for mosquito control. Ecology and Evolution, 7(10), 3324-3334. doi:10.1002/ece3.2666
4) Wang, Y., Cheng, P., Jiao, B., et al. (2020). Investigation of mosquito larval habitats and insecticide resistance in an area with a high incidence of mosquito-borne diseases in Jining, Shandong Province. PLoS One, 15(3), e0229764. doi:10.1371/journal.pone.0229764 | A mosquito control that sounds fishy | University of Newcastle was built on a swamp and contains numerous ponds. My proposition is to introduce NATIVE, LARVAE-EATING species of fish to these ponds. | ['Jaskarn Bains'] | [] | [] | 24 |
10,487 | https://devpost.com/software/dumbwaiter-zf95is | A sneak-peak of our interface. The map updates as the robot (blue) moves toward the brown table.
A close-up of the table our bot docks into
A close-up of one of our wheels
Our restaurant setting
Inspiration
Jobs 20 years from now are going to look totally different from jobs today. As robotics and AI continue to progress, more and more low-skilled labor jobs will disappear. We wanted to see whether we could use today's technology to create an autonomous waiter: a "dumbwaiter".
What it does
"Kitchen staff" put food on our bot, which then proceeds to make its way through the "restaurant", finding the corresponding table and delivering the food.
How I built it
We used an NVIDIA Jetson TX2 to perform all of the necessary computer vision to localize the bot in its environment. We built a custom robot in the shape of a table that can move around a fixed environment and localize itself using AprilTags. We used 80/20 extrusion and plywood to construct the robot itself.
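The geometric core of tag-based localization can be sketched in a few lines. This is only an illustrative sketch: the tag map, offsets and frame conventions below are assumptions, not the team's actual Jetson/OpenCV code.

```python
import math

# Hypothetical tag map: tag id -> known (x, y) floor position in meters.
TAG_MAP = {7: (2.0, 3.0)}

def locate_robot(tag_id, rel_x, rel_y, heading_rad):
    """Estimate the robot's (x, y) from a detected floor tag.

    rel_x, rel_y: the tag's offset from the robot center, measured in
    the robot's own frame (meters); heading_rad: robot heading in the
    world frame.
    """
    tx, ty = TAG_MAP[tag_id]
    # Rotate the robot-frame offset into the world frame, then step
    # back from the tag's known world position.
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    world_dx = c * rel_x - s * rel_y
    world_dy = s * rel_x + c * rel_y
    return tx - world_dx, ty - world_dy

# Robot facing +x sees tag 7 half a meter dead ahead:
print(locate_robot(7, 0.5, 0.0, 0.0))  # -> (1.5, 3.0)
```

In the real system the per-tag offset and heading would come from the tag detector's pose estimate rather than being passed in directly.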
Challenges I ran into
We had a lot of problems obtaining motors that were strong enough to move our robot. We also ran into trouble designing an energy solution that could power all of our hardware at the correct amperage.
Accomplishments that I'm proud of
The table looks sick.
What I learned
Always read data sheets and do all the torque math BEFORE you build the rest of the bot.
What's next for Dumbwaiter
Smarter localization!! We actually already have code for a localization system that doesn't require AprilTags on the ground, but we didn't have time to integrate it with the rest of the system.
Built With
80/20
arduino
flask
jetson
opencv | Dumbwaiter | A robot to autonomously bring food from a restaurant kitchen to your table! | ['Ioana Zaharescu'] | ['Best Overall Hack'] | ['80/20', 'arduino', 'flask', 'jetson', 'opencv'] | 0 |
10,487 | https://devpost.com/software/tryour-a-mirror-which-suggests-you-styles | Workflow
Example
Model
Inspiration
Google, as part of their customary April Fools' prank, made a device that would tell a person about their style! We actually liked the idea of detecting style, but wanted to use it to make people look cool!
What it does
So, it is a mirror that is capable of projecting onto its surface. We can use this to virtually display images on the mirror; we use it to virtually project your clothes onto your body.
The mirror knows all the clothes you have in your wardrobe, be it T-shirts, shirts, jackets or lowers. Now it uses our custom algorithm to suggest an outfit for the day!
The mirror is capable of matching different colors. Our algorithm is able to distinguish which colors look good together and which don't. It also has the added advantage of checking the current temperature and weather conditions to recommend the ideal type of clothes to wear.
How I built it
We started by making the hardware of the mirror. We took an old LCD monitor and mounted two-way glass on top of it. This lets the viewer see their reflection along with a partial image from behind the glass.
Then we started the software implementation. Our software uses the Google Cloud Vision API to detect the "Upper Body" & "Lower Body" and gives us the coordinates for the same. We use these coordinates to mask images of the clothes recommended by our algorithm on top of the viewer's body. The coordinates from the Google Cloud Vision API are passed to Unity, which enables optimal placement of the image on the body.
Then we started the implementation of our algorithm which suggests clothes from the wardrobe. Currently, the algorithm uses two methods to suggest a combination:
Color Matching - We match color combinations based on presets defined using the data from EffortlessGent.com
Weather Prediction - We use the OpenWeatherMap API to get the predicted temperature. If it exceeds a threshold, thinner clothes are suggested
Finally, we implemented a basic clothes recommendation and transaction system, where the mirror suggests to the user which clothes to buy, with the transactions verified through Capital One's Purchase API
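The two suggestion rules above can be sketched as follows. The color presets and the temperature threshold here are placeholders, not the actual values from EffortlessGent or the project:

```python
# Assumed color presets; the real ones come from EffortlessGent.com.
GOOD_PAIRS = {("navy", "white"), ("grey", "black"), ("beige", "brown")}

def colors_match(top, bottom):
    # Order-insensitive lookup against the preset pair list.
    return (top, bottom) in GOOD_PAIRS or (bottom, top) in GOOD_PAIRS

def suggest_weight(forecast_temp_c, threshold_c=20.0):
    # Above the (assumed) threshold, thinner clothes are suggested.
    return "thin" if forecast_temp_c > threshold_c else "thick"

print(colors_match("white", "navy"))  # True
print(suggest_weight(27.0))           # thin
```

In the actual app the temperature would come from the OpenWeatherMap API response rather than a function argument.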
Challenges I ran into
Recognizing and segregating the human body into the upper and lower half in order to superimpose two different images was a major challenge; the Google Cloud Vision API helped a lot here. Integrating it with OpenCV and Unity for real-time detection was also one of the challenges.
Deciding which colors look good with each other and which combinations can be used was also a significant challenge. In the end, we went forward with one of the most widely accepted patterns from EffortlessGent.
What's next for TRYOUR - 'A Mirror Which Suggests You Styles'
A lot!!!
We had a lot of ideas for improving our algorithm's efficiency, but due to time constraints we were not able to pull them off!
In the future, we can use Pinterest and Tag-Walk to scrape the latest designs and trends on the market and suggest something similar.
Also, TRYOUR can be developed into a complete platform where the mirror suggests clothes the user can buy to enrich the experience and stay up to date with current fashion trends. With a single gesture, the user can place an order for the clothes, which are then automatically added to their digital wardrobe.
Built With
2-way-glass
automl
camera
capital-one
google-cloud
google-vision
led-display
opencv
python | TRYOUR - 'A Mirror Which Suggests You Styles' | Experience Trendy Combinations of Your Existing Wardrobe! | ['Mihai Toma'] | ['FIFA Tournament (First Place)', 'Best Overall Hack (2nd Place)'] | ['2-way-glass', 'automl', 'camera', 'capital-one', 'google-cloud', 'google-vision', 'led-display', 'opencv', 'python'] | 1 |
10,487 | https://devpost.com/software/opti-bot | Landing page
Inspiration
Having been on a robotics team for over 2 years, it was evident to me that the things that make a good robot are its features, weight, speed, and balance. Every year it takes months of planning to get the robot that's perfect for the competition. With this in mind, along with the robot sumo battle that was supposed to take place, I decided to create a site that lets you make the perfect bot. With future updates allowing for custom parts and more options to diversify the robot, it lets users get an idea of the fundamentals of robotics.
What it does
Opti-Bot allows users to create their own custom robots and take them out for a test ride against other opponents. Using real-world materials to create the bots allows for a near-realistic feel to the battle, from designing to battling. This mainly lets users spend less time experimenting at competitions and get the perfect bot ready beforehand.
How I built it
I built it in many parts. First I designed the site on Figma, and then I designed what I wanted the first bot to look like on Tinkercad. Following the wireframes, I developed the front end using HTML and CSS; after this, I developed the actions using JavaScript. Once the site was done I created the rest of the bots on Tinkercad and added them to the site. I then used Netlify to publish the site to the web.
Challenges I ran into
The main challenges I ran into were in designing the robots, as they just did not come out the way I wanted them to, and in the JavaScript, which had several errors related to the math for the optimal percentage.
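The site's real math lives in its JavaScript, but as a rough sketch, an "optimal percentage" could be a weighted blend of normalized design factors. The weights and normalization ranges below are purely hypothetical, not the site's actual formula:

```python
def optimal_percentage(speed, weight_kg, balance, max_speed=10.0, max_weight=50.0):
    # Hypothetical scoring: faster and lighter bots score higher, and
    # balance is already a 0..1 rating. Weights 0.4/0.3/0.3 are assumed.
    speed_score = min(speed / max_speed, 1.0)
    weight_score = 1.0 - min(weight_kg / max_weight, 1.0)
    return round(100 * (0.4 * speed_score + 0.3 * weight_score + 0.3 * balance))

print(optimal_percentage(speed=8.0, weight_kg=20.0, balance=0.9))  # -> 77
```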
Accomplishments that I'm proud of
I'm proud that I was able to finish the main portion of the site, and I am proud of the many Tinkercad designs I made.
What I learned
I learned a lot more about designing CAD drawings and about JavaScript.
What's next for Opti-Bot
The next steps for the site are to finish the multiplayer feature and the feature where players can challenge other players. Beyond this, I want to add a feature to choose from real parts when making the robot.
Built With
css3
html5
javascript
tinkercad
Try it out
opti-bot.netlify.app
github.com | Opti-Bot | Productivity Platform that allows user to Design, Develop and battle faster on our site to gain experience for the real life competitions | ['Vishnudev Poil'] | ['Best UI/UX'] | ['css3', 'html5', 'javascript', 'tinkercad'] | 2 |
10,487 | https://devpost.com/software/73-vw-beetle-tachometer-oil-temp-sensor | Display/arduino module
Oil temp adapter with sensor
Hall effect sensor with magnet (RPM)
display module in position - working
Inspiration
I got a '73 VW Beetle as my first car, and right off the bat I started missing some of the features that came as standard in the cars I had learned to drive in. The car was missing essential gauges such as a tachometer (RPM) and an engine oil temperature readout.
What it does
The device measures and displays the speed of the engine (RPM) and the temperature of the engine oil.
How I built it
The base of the project is an Arduino Nano, a very small yet capable microcontroller board. In addition to the Arduino, I used a couple of open-source sensors and a 16-bit display. In order to measure the revolutions per minute of the engine (engine RPM) I made use of a Hall effect sensor. This sensor, simply put, is a switch that turns on/off whenever a magnetic field is applied. I installed the sensor next to the flywheel and attached a magnet to the flywheel itself. Every time the engine turned, the switch was activated, and by counting how many times the switch was activated you can measure RPM.
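The measurement itself boils down to counting pulses in a time window. The real code runs on the Arduino in C++; this Python sketch just shows the arithmetic, and the smoothing constant is an assumption:

```python
def rpm_from_pulses(pulse_count, window_seconds, pulses_per_rev=1):
    # One magnet on the flywheel -> one pulse per revolution.
    revs = pulse_count / pulses_per_rev
    return revs * (60.0 / window_seconds)

def ema(prev, new, alpha=0.2):
    # Simple exponential smoothing to stop the displayed RPM jittering.
    return prev + alpha * (new - prev)

print(rpm_from_pulses(50, 1.0))   # -> 3000.0
print(rpm_from_pulses(25, 0.5))   # -> 3000.0
```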
In addition to the RPM I wanted to measure the temperature of the engine oil so that I could know if the engine was overheating. In order to install the sensor I had to make an adapter so that I could use an existing port on the engine block. I installed the sensor into the adapter and made sure it did not leak.
Challenges I ran into
Because of how fast the engine flywheel spins, and the tight tolerances involved, it was very difficult to keep the magnet attached at high RPM.
The adapter for the oil sensor had to go through two iterations: the first relied on high-temperature glue, which leaked far too much in initial tests, so I decided to sacrifice some time to make a more reliable one from welded metal.
Accomplishments that I'm proud of
I am proud of the code for the RPM sensor, because it was very hard to smooth the data from the sensor in a way that would make it look good
What I learned
I learned how to use the Arduino 16-bit display library, as well as how to use a Hall effect sensor to measure RPM.
What's next for 73' VW Beetle Tachometer & Oil Temp Sensor
I am going to install a wheel speed sensor to measure the speed at which the car is moving, because the speedometer the car came with is almost 50 years old and is not very accurate.
Built With
arduino
c++
opensourcehardware | 73' VW Beetle Tachometer & Oil Temp Sensor | Old VW beetles do not have either an oil temp sensor or tachometer. Because of this, when driving you do not know if your engine is overheating, or whether you are over-revving the engine. | ['Luca Rosu'] | ['Social Impact Award'] | ['arduino', 'c++', 'opensourcehardware'] | 3 |
10,487 | https://devpost.com/software/augmented-reality-gundam-rx-78-2 | main page
vr mode
Inspiration
As a Gundam fan, I have always wanted to travel to Japan to view the most iconic 1:1-scale Gundam build, but due to the pandemic I had to postpone going there. Hence, I created my own simple AR Gundam using WebAR technologies and echoAR
What it does
View a 3D model in the browser with an Augmented Reality mode that places the iconic Gundam RX-78-2 model on a plane surface, plus a link to a 360° photo showing the 1:1-scale Gundam at Odaiba, Japan
How I built it
The technologies I used in this project are listed below:
HTML 5,
Bootstrap
EchoAR
Challenges I ran into
The plane-surface tracker in echoAR is not compatible with Android devices, and there is a lack of tutorials on the HTML echoAR viewer.
This project only works on iPhone 6s and above; we had problems getting Android phones to detect the plane.
The 3D model of the Gundam is quite slow to load and display.
Accomplishments that I'm proud of
This is my first time using echoAR to develop a WebAR project
Built With
bootstrap
echoar
html5
Try it out
rx-78-gundam-ar.glitch.me | WebAR-Gundam RX 78-2 | View 3D model of iconic Gundam RX-78-2 using EchoAR in WebAR | ['Amir Hamzah'] | ['Best AR/VR Application using the echoAR platform'] | ['bootstrap', 'echoar', 'html5'] | 4 |
10,487 | https://devpost.com/software/dre-kcr4g8 | Prototype
We built a device to launch M&M's directly into your mouth.
How I built it
The Hardware
We use a spring-loaded catapult to launch the M&M. The catapult sits on a plate with two servos - one to control side to side motion, and the other to tilt it up and down. The device uses these two servos to take aim at an open mouth, and finally, a last servo actuates the release mechanism to fire.
The robot uses a Logitech webcam, connected to a laptop via USB, to sense where the user is. The laptop talks to an Arduino Uno over serial in order to send commands to the servos.
The Software
In order to detect the location of the face in the camera frame, we take advantage of the Microsoft Cognitive Services API. This API conveniently returns a bounding box around the face, from which we get the position of the user's mouth. Finally, we estimate the depth the user is at based on the size of this bounding box.
We've calibrated the device using multivariate interpolation in order to model its parabolic trajectory. This lets us get a decently accurate aim based on this position.
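The depth trick follows the usual pinhole-camera proportionality: a face of fixed real width appears half as wide at twice the distance. The focal length and face width below are assumed calibration values, not the project's:

```python
def depth_from_face_width(bbox_width_px, focal_px=800.0, face_width_m=0.15):
    # Pinhole model: pixel width = focal_px * real_width / depth,
    # rearranged to solve for depth. Constants are illustrative.
    return focal_px * face_width_m / bbox_width_px

print(depth_from_face_width(120.0))  # -> 1.0 (meters)
print(depth_from_face_width(60.0))   # -> 2.0 (meters)
```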
Challenges I ran into
Designing an effective, accurate shooting mechanism that auto-reloads is non-trivial - we ended up having to go through several iterations of designs, beginning with a barrel rifle system and ending with a spring loaded catapult.
Additionally, we used a simple off-the-shelf Logitech webcam for our vision system. This meant that getting depth was a little tricky - we ended up using a method that used the bounding box of the detected face to compute a metric of depth.
Improvements I'd Like to Make
The largest source of inaccuracy in the device comes from build quality of the device - since it was prototyped quickly, there's a decent amount of play and inconsistency in the joints between runs. We'd ideally like to rebuild it to minimize this.
Moreover, the launching mechanism can be much improved. We'd ideally like to use compressed air in the future, instead of a spring loaded system.
Finally, the software can be optimized quite a bit to speed up the aiming speed. Right now, we send uncompressed images to the API at a low rate, which can be sped up significantly.
Built With
arduino
machine-learning
python | Dre | Launching M&Ms | ['Andrei Ionascu'] | ['FIFA Tournament (Second Place)'] | ['arduino', 'machine-learning', 'python'] | 5 |
10,487 | https://devpost.com/software/dot-delivery-o4vpcz | Air storage and release manifold
Canon Gimbal - side view
Canon Gimbal - top view
Solenoid valve manifold
Components:
Cannon:
Repurposed Schrader valve interface so a bike pump can be used for pressurization
Large PVC pressure vessel for air storage, rated to 20 PSI (plenty for such a large air volume)
Two high-flow brass solenoid valves: one to a small auxiliary tank that allows for repeated firings, and the second to fire the cannon itself.
Interchangeable cannon barrels supporting Dots and Peanut M&Ms
Gimbal:
2-axis stepper-motor aiming system supporting the cannon barrel
Laser cut acrylic design facilitating direct pan drive and linear tilt drive
Electronics and software:
Nvidia Jetson running OpenCV for face tracking, distance reckoning and active estimation of positional error
Arduino with stepper drivers for interfacing with gimbal and solenoid valves | Dot Delivery | Air cannon that aims and delivers gumdrops straight into your mouth! | ['Marcel Atanasie'] | ['FIFA Tournament (Third Place)'] | [] | 6 |
10,487 | https://devpost.com/software/vlogger-pro-vhg7da | EE for the win
front facing camera same quality selfie
lightweight selfie stick
Inspiration
One of our friends is an avid vlogger. She told us she would like to use the back camera on her phone for better quality, but usually doesn't because she isn't able to make sure her face stays in the center of the frame as she moves around. We wanted to build a solution for her.
What it does
An iOS app we developed uses the back camera and recognizes the location of your face with respect to the frame. The app sends the required angular movement of the phone's mount on the selfie stick to a server that a computer is connected to. Via Bluetooth, the computer then tells the servo to move in the correct direction.
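The control step can be sketched as a simple proportional correction; the gain, frame size and clamp below are illustrative, not the app's actual values:

```python
def pan_correction(face_x_px, frame_width_px=1920, gain_deg_per_px=0.02, max_step_deg=5.0):
    # Positive error = face right of center -> pan right; clamp the
    # step so the servo never jumps too far in one update.
    error_px = face_x_px - frame_width_px / 2
    step = gain_deg_per_px * error_px
    return max(-max_step_deg, min(max_step_deg, step))

print(pan_correction(1060))  # face right of center -> 2.0 degree step
print(pan_correction(960))   # centered -> 0.0
```

Smoother motion would replace this proportional rule with the full PID loop mentioned under "What's next".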
Challenges I ran into
Servo
We thought this would be the easy part, but since we were using a high-torque servo, a 5V power supply was insufficient to move it. After hours of testing code, probing the circuit with an oscilloscope, and trying out other servos, we realized that we simply needed a 9V power supply.
Bluetooth/Server Connection
The iPhone could not directly connect to the HC-05 Bluetooth module that was available to us. Therefore, we needed to first create a server and talk to a computer; the computer then communicated with the servo via Bluetooth.
iOS app development
Creating an iOS app in <24 hours is difficult. We struggled at every step from figuring out how to use the back camera to taking a screenshot.
Face Recognition Model
Accomplishments that I'm proud of
We're proud that we built a functional app that talks to hardware in <24 hours! It's also really cool that we're using CoreML's algorithm to detect faces.
What I learned
If it can be avoided, don't try to make an app in <24 hours, but it was a good learning experience. Also, high-torque servo motors need 9V to drive (we literally spent a good hour debugging this). We also learned that iPhones don't like HC-05 Bluetooth chips (thanks, Apple).
What's next for Vlogger Pro
The future is limitless and focused on you. Literally. Smoother PID control so you can focus on whatever it is you need to do. Imagine a fully smooth experience that records you in real time.
Built With
arduino
bluetooth
face-recognition
machine-learning
servo
swift | Vlogger Pro | A self-adjusting selfie stick that allows vloggers to consistently be in the center of the frame | ['Corina Dragan'] | ['Fortnite Tournament (First Place)', 'Fortnite Tournament (Second Place)', 'Fortnite Tournament (Third Place)'] | ['arduino', 'bluetooth', 'face-recognition', 'machine-learning', 'servo', 'swift'] | 7 |
10,487 | https://devpost.com/software/portrait-mode-irl-kh1qel | Optical Path Bending
Captured Photo
Inspiration
We hate smartphone portrait mode.
Not because it isn't pretty, but because it tries to fake a real optical effect (depth of field) with mAcHIne LeArnIng, which is never a good sign...
To take real "portrait mode" photos, what we really need is a DSLR camera with a long zoom lens. We take that, stand 15-20 ft away, and now you've got a sharp, flattened portrait with that iconic background blur.
But this is super inconvenient! We can't stand that far away in tight or narrow spaces, or, say, in a photo booth.
Which is where our project comes in...
What it does
Portrait Mode IRL uses a set of mirrors to bend the camera's optical path within a compact space, so that you can take beautiful, zoomed-in portraits while standing only 3-5 ft away from the camera, not 15!
Shaped like a photobooth, it can easily be placed in a tight studio or narrow hallway, and you can still use your high-zoom lenses to get background blur and flatter, more aesthetic pictures.
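The payoff of the folding is easy to check on paper: each mirror bounce adds another pass through the booth, so the effective optical path is the booth depth times the number of passes. A tiny sketch, with purely illustrative numbers:

```python
def optical_path_m(booth_depth_m, mirror_bounces):
    # Each bounce adds one more pass through the booth's depth.
    return booth_depth_m * (mirror_bounces + 1)

# Three mirrors in a 1.2 m deep booth give roughly a 4.8 m path,
# comparable to standing ~15 ft from the camera:
print(optical_path_m(1.2, 3))
```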
Check out some examples below!
Built With
80/20
dslr
leds
mirror
optics | Portrait Mode IRL | Take beautiful zoomed-in portrait photos from close-up | ['Luca Marin'] | [] | ['80/20', 'dslr', 'leds', 'mirror', 'optics'] | 8 |
10,487 | https://devpost.com/software/lorax-luring-others-to-retain-our-abode-extensively | Hello and thank you for judging my project. I am listing below two different links and an explanation of what the two different videos are. Due to the time constraints of some hackathons, I have a shorter video for those who require a lower time. As default I will be placing the lower time video up above, but if you have time or your hackathon allows so please go ahead and watch the full video at the link below. Thanks!
3 Minute Video Demo
5 Minute Demo & Presentation
For any questions or concerns, please email me at
joshiom28@gmail.com
Inspiration
Resource extraction has tripled since 1970. That leaves us on track to run out of non-renewable resources by 2060. To fight this extremely dangerous issue, I used my app development skills to help everyone support the environment.
As a person residing in this environment, I felt that I needed to use my technological development skills to help us take better care of the environment, especially in industrial countries such as the United States, and to do my part in the movement to sustain it. I drew on the symbolism of the Lorax in naming the LORAX app, inspired to help the environment.
_ side note: when referencing Firebase I mean Firebase as a whole, since two different databases were used: one to upload images and the other to upload data (e.g. form data) in realtime. Firestore is the specific realtime database for user data, versus Firebase Storage for image uploading _
Main Features of the App
To start out we are prompted with the authentication panel, where we are able to either sign in with an existing email or sign up with a new account. Since we are new, we will go ahead and create a new account. After registering we are signed in and are now at the home page of the app. Here I will type in my name, email and password and log in. Now, if we go back to Firebase authentication, we see a new user pop up, and a new user is added to Firestore with their associated data such as their points, user ID, name and email.
Now let's go back to the main app. Here at the home page we can see the various things we can do. Let's start with the Rewards tab, where we can choose rewards depending on the amount of points we have.
If we press redeem rewards, it takes us to the rewards tab, where we can choose various coupons from companies and redeem them with the points we have. Since we start out with zero points, we can't redeem any rewards right now.
Let's go back to the home page.
The first three pages I will introduce are part of the point-incentive system for purchasing items that help the environment
If we press the view requests button, we are navigated to a page where we are able to view the requests we have made in the past. These requests are used to redeem points for items you have purchased that help support the environment. Here we would be able to view some details and the status of the requests, but since we haven’t submitted any yet, we see there are none upon refreshing. Let’s come back to this page after submitting a request.
If we go back, we can now press the request rewards button. By pressing it, we are navigated to a form where we are able to submit details regarding our purchase, along with an image of proof to ensure the user truly did purchase the item. After pressing submit, this data and image are pushed to Firebase’s realtime storage (for the picture) and Firestore (other data), which I will show in a moment.
Here, if we go to Firebase, we see a document with the details of the request we submitted, and if we go to Storage we are able to
view the image that we submitted
. Here we can review the details, approve the request, and assign points to the user based on it. Now let's go back to the app itself.
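That approval step amounts to a small state change. Below is an illustrative Python sketch — not the app's actual React Native/Firebase code, and the field names ("status", "points", "userId") are assumptions about the schema:

```python
# Hypothetical sketch of the request-approval step; field names are
# illustrative, not the app's real Firestore schema.

def approve_request(request, users, points_awarded):
    """Mark a purchase request approved and credit points to its user."""
    request["status"] = "approved"
    request["pointsAwarded"] = points_awarded
    users[request["userId"]]["points"] += points_awarded
    return request

users = {"u1": {"name": "Alex", "points": 0}}
req = {"userId": "u1", "item": "reusable bottle", "status": "pending"}
approve_request(req, users, 50)
```

In the real app both writes would go through Firestore so the user's points and the request's status stay in sync.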
Now let's go to the view requests tab again, now that we have submitted our request. There we see our request, its status, and other details, such as how many points were received if the request was approved, along with the time and date.
Now to the Footprint Calculator tab, where you are able to input some details and see the global footprint you have on the environment and its resources based on your house, food and overall lifestyle. Here I will type in some data and see the results.
Here it says I would take up 8 Earths if everyone used the same amount of resources as me.
The goal of this is to be able to reach only one earth since then Earth and its resources would be able to sustain for a much longer time. We can also share it with our friends to encourage them to do the same.
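As a sanity check on the arithmetic: the "number of Earths" is essentially a personal footprint divided by the planet's available biocapacity per person. A rough sketch — the ~1.6 global hectares per person constant is a commonly cited outside estimate, not a value taken from the app:

```python
# Rough sketch of the "how many Earths" arithmetic; the biocapacity
# constant (~1.6 global hectares per person) is an outside estimate,
# not a figure from the app itself.

BIOCAPACITY_PER_PERSON_GHA = 1.6

def earths_needed(personal_footprint_gha):
    """If everyone used this many global hectares, how many Earths?"""
    return personal_footprint_gha / BIOCAPACITY_PER_PERSON_GHA

# A footprint of 12.8 gha works out to the 8 Earths seen in the demo;
# a footprint at the biocapacity limit works out to exactly one Earth.
```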
Now to the last tab, the savings tab. Here we are able to find daily tasks we can simply do to not only save thousands and thousands of dollars but also heavily help sustain the environment.
Here we have some things we can do to save in terms of transportation, and by clicking on a saving, we are navigated to a website where we are able to view what we can do to achieve these savings ourselves.
This has been the demonstration of the LORAX app and thank you for listening.
How I built it
For the navigation, I used react native navigation in order to create the authentication navigator and the tab and stack navigators in each of the respective tabs.
For the incentive system
I used Google Firebase's Firestore in order to view, add and upload details and images to the cloud for review and data transfer. For authentication, I also used
Google Firebase’s Authentication
which allowed me to create custom user data, such as the points and requests associated with their
user ID
. Overall,
Firebase made it EXTREMELY easy
to create a high-level application. For this entire application, I used Google Firebase for the backend.
For the UI
for tabs such as the Request Submitter and Request Viewer, I used the react-native-base library, whose components allowed me to create a modern-looking application.
For the Prize Redemption section and Savings Sections
I created the UI from scratch, trialing different designs and shadow effects to make it look cool. I used react-native-deeplinking to navigate to the specific websites for the savings tab.
For the Footprint Calculator
I embedded the
Global Footprint Network’s Footprint Calculator
with my application in this tab for the reference of the app's user. The website is shown in the tab and is fully functional in that UI, just as on the original site.
I used expo for wifi-application testing, allowing me to develop the app without any wires over the wifi network.
For the Request submission tab, I used react-native-base components to create the form UI elements and firebase to upload the data.
For the Request Viewer, I used firebase to retrieve and view the data as seen.
Challenges I ran into
Some last-second challenges I ran into involved manipulating the database on Google Firebase. While creating the video, in fact, I realized that some of the parameters were missing and were not being updated properly. I eventually realized that the naming conventions for some of the parameters being updated, both in the state and in Firebase, had gotten mixed up. Another issue I encountered was retrieving the image from Firebase: I was able to log the URL, but due to some issues with the state I wasn't able to pass the URI to the image component, and for lack of time I left that out. Firebase made it very, very easy to push, read and upload files after installing its dependencies.
Thanks to all the great documentation and other tutorials I was able to effectively implement the rest.
What I learned
I learned a lot. Prior to this, I had no experience with data modelling or creating custom user data points. However, due to my previous experience with Firebase, and some documentation referencing, I was able to use Firebase's built-in commands to query and add specific user IDs to the database, allowing me to search for data based on their UIDs. Overall, it was a great experience learning how to model data, use authentication, and create and modify custom user data using Google Firebase.
Theme and How This Helps The Environment
Overall, this application uses incentives and educates the user about their impact on the environment, to better help the environment.
Design
I created a comprehensive and simple UI to make it easy for users to navigate and understand the purposes of the Application. Additionally, I used previously mentioned utilities in order to create a modern look.
What's next for LORAX (Luring Others to Retain our Abode Extensively)
I hope to create my
own backend in the future
, using
ML
and an
AI
to classify these images and details to automate the submission process and
create my own footprint calculator
rather than using the one provided by the global footprint network.
Built With
apis
data-modelling
expo-permissions
expo.io
footprint-calculator
google-firebase
google-firebase-authentication
google-firestore
google-storage
react-native
react-native-base
the-global-footprint-network
Try it out
github.com | LORAX (Luring Others to Retain our Abode Extensively) | Gamifying and rewarding those who help the environment through their actions and lifestyle | ['Om Joshi'] | ['3rd Place', 'Best Design', 'Wolfram Award for Top 30 Hacks'] | ['apis', 'data-modelling', 'expo-permissions', 'expo.io', 'footprint-calculator', 'google-firebase', 'google-firebase-authentication', 'google-firestore', 'google-storage', 'react-native', 'react-native-base', 'the-global-footprint-network'] | 9 |
10,487 | https://devpost.com/software/second-eye-3zo2s4 | Inspiration
We wanted to create a custom FPV for a quadcopter drone that would be able to be modified to follow faces, turn according to device orientation, and view stereoscopic images in VR.
What it does
The Raspberry Pi camera interfaces with a Linux machine over untethered WiFi. The drone is outfitted so that the controller can view their path of flight as it occurs.
How I built it
We initially deployed a Node.js app to a Raspberry Pi camera through an Apache server in order to convert images from the feed into stereoscopic view. We later switched to a feed directly transmitted from the Pi camera to a Linux machine.
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Second Eye
Built With
linux
rasberry-pi | Second Eye | Custom FPV quadcopter camera control | ['Alexandru Ion'] | [] | ['linux', 'rasberry-pi'] | 10 |
10,487 | https://devpost.com/software/bestbuy | Items and Categories Listed
Shops Listed
Item description
Shop Description
Unified console for iOS, web and smartphone
Inspiration
The problem Best Buy solves
Ever since the outbreak of the pandemic, we have all faced the situation where most shops are open only at particular times, and the rush during those times is at its peak. Different people gather in front of different shops and crowd the area. The application sorts out this problem.
The application fills the needs of people. People can register over the app and see different slots available for the same. The slots would be available throughout the day and can be booked in advance. The vendors can list their own shop and the details of everything starting from payments, door-step delivery, and through the application, they can also start accepting people’s basket list (what people are going to purchase).
If the vendors know what items the people coming to the shop are going to purchase, they can already keep the items ready, so that the rush in their store reduces to a great extent.
People can also provide the list of articles they need to purchase so that their parcels are ready and they just need to pick it up from the delivery counters of the shop.
There would be vendors who can list items which are open for online delivery, and yes, for every small item there cannot be an option for online delivery, as it would incur a huge burden. But if customers from the same locality purchase together, then this can be sorted and things can be delivered right to their doorstep.
Through this application, people will definitely benefit; it would have a great impact on their lives, and it can serve as a good medium to reduce crowding in nearby areas.
What it does
Sorts out the problem for long queues in the markets, shops
What I learned
I learnt the most important concept, i.e., the management of time.
What's next for BestBuy
Challenges I ran into
Starting as the single member of the team, I looked into the design, thinking and simulating. After coming across some tech stacks, I finally decided to move forward with Flutter, as building an application on Flutter reduces the burden of native mobile development and also gives the option to vary the devices you deploy to: Flutter applications generally run on iOS, Android and web devices. Building apps on Flutter also makes it really easy to connect to a database and Firebase. Building the complete working of the application got tougher for me, but I still managed to produce a good amount of work, so that a generalised idea can be shown through the application. I would definitely take up this project in the future and further enhance the listing parts, and how one can send the list of items he/she wants to purchase and pass it on to the vendor side through normal text-messaging applications such as WhatsApp, Telegram, or even a normal text message. The idea can be implemented in a real-time scenario: with the country's fear of rising cases, this can serve as an intermediary in spreading the crowd over varied time intervals. It also helps the vendors selling the items: on receiving the list a day before (or whenever they receive it), they can start pre-packaging products, enriching their sales and keeping their business running fine in a situation where most businesses are getting hampered.
Built With
flutter
ios
xcode
Try it out
github.com | BestBuy | The problem of standing in a queue or line is ruled out . You can skip the line and save the time | ['Amrit Sahani', 'smruti vicky'] | [] | ['flutter', 'ios', 'xcode'] | 11 |
10,487 | https://devpost.com/software/assist-cgiy3m | In these challenging times, our generation is very involved in the societal conflicts we face. However, those who want to join the movement often struggle on where to start and how to join. This app allows you to join peaceful protests near you which we find through Facebook and Twitter posts. This was my first time using Figma to design something, but after getting used to the basic controls, it was smooth sailing from there. Not only that, Figma also offers a feature that allows you to transform your designs into real code, making it much simpler when I build the real app. Before making the home screen, it was very difficult for me to find inspiration on what the app should look like, color scheme, etc.. But after finishing the home screen, it gave me motivation and an idea of what the finished app should look like. All in all, I am really satisfied with what I've designed in the last 48 hours.
Built With
figma
Try it out
www.figma.com | Assist | Finding peaceful protests and rallies of all sorts in a consolidated app. | ['Jason Zhu'] | [] | ['figma'] | 12 |
10,487 | https://devpost.com/software/train-o | A look at the inside components
A look at Train-O
Inspiration
PLEASE NOTE: The video above is not correct. Please find it here
https://streamable.com/dbnqmu
For most, the journey of adopting a dog begins with a breeder or a trainer, but our journey began at a rescue shelter, where we met Laverne. Unable to work with a professional trainer due to Laverne's health, we had to take the process into our own hands. That's when we realized just how difficult and time-consuming dog training is. Not only that but just how hard it is to leave Laverne alone while we do other things. That's why we developed Train-O, to be a trainer, and companion to Laverne.
What it does
Train-O uses Open-CV motion tracking frameworks to identify and interact with users. This is done by following our furry friends and giving them tasks via audio cues to "Sit!". Upon successful completion of the task, the good boy is rewarded with a tasty treat automatically, conditioning them to listen to future commands. This can be used to train your dog, associating actions with verbal commands; but it can also be used to keep your dog company, following them around and giving them treats.
How I built it
We combined all of our knowledge of both software and hardware hacking to create Train-O. On the physical side, we used an Arduino and several motors (and a lot of wiring) to give function to the body we built out of cardboard. Included are the treat-delivery mechanism that fires when a behavior is reinforced, an NPN transistor as an audio-module switch, an H-bridge as a motor driver, and a webcam for image recognition, among other features. On the software side, we trained our own image-processing models using computer vision and developed our own algorithms to track and classify poses. We simplified the images down to key points of the body to reduce latency and provide a quicker response, which is important in reinforcing good behavior.
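To make the key-point idea concrete, here is a hypothetical sketch of the kind of rule such a classifier can apply once a pose has been reduced to a few body coordinates. The point names and threshold are assumptions, not Train-O's actual trained model:

```python
# Hypothetical pose rule, not Train-O's trained model: once an image is
# reduced to key points, "sit" can be detected as the hips dropping well
# below the shoulders. Image y-coordinates grow downward.

def is_sitting(keypoints, threshold=0.25):
    """keypoints maps body part -> (x, y) pixel coordinates."""
    shoulder_y = keypoints["shoulder"][1]
    hip_y = keypoints["hip"][1]
    body_len = abs(keypoints["head"][1] - hip_y) or 1  # avoid div by zero
    # Hip noticeably lower than shoulder, relative to body length -> sit.
    return (hip_y - shoulder_y) / body_len > threshold

standing = {"head": (50, 40), "shoulder": (60, 100), "hip": (120, 105)}
sitting = {"head": (50, 40), "shoulder": (60, 100), "hip": (70, 160)}
```

Working on a handful of key points instead of full frames is what keeps the response latency low enough to reinforce behavior promptly.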
Challenges I ran into
From the start, our team was set on using Google Cloud to classify poses. However, we struggled with overfitting because of our limited data source. This struggle caused us to explore other solutions, which led us to develop our own system for classifying images. Additionally, we had to find a way to provide audio cues for each action, which was challenging due to Arduino's limited audio capabilities.
Accomplishments that I'm proud of
Even the simplest parts of Train-O make us very proud, but witnessing the convergence of our hardware and software into one single innovation was extremely rewarding. We were able to create our own classification system and build an entire robot out of tiny mismatched parts.
What I learned
We certainly learned a lot: AutoML, OpenCV, Pose Recognition, Arduino Circuits, Serial Communication, and a ton of other things. Some of our members experienced their first hackathon, and they have discovered a new passion, and plan to attend more in the future. An important lesson that was learned early is that communication is key when creating a product with this many parts. We developed the hardware and software in tandem so as to prevent issues when we combined them, and that proved incredibly useful for us. Between lessons in hardware, software, frameworks, tools, and teamwork, we learned quite a lot.
What's next for Train-O
The best part of Train-O is yet to come; our whole team cannot wait to present Laverne with their new friend and watch as the idea she gave us becomes a reality. Hopefully, we will continue to enhance the algorithms that make Train-O possible and eventually maybe even create another version, but first, we have to get feedback from the most important contributor to the project, Laverne.
Built With
arduino
json
numpy
open-cv
open-pose
pandas
python | Train-O | A good boy for YOUR good boy | ['Matei Cosmin'] | [] | ['arduino', 'json', 'numpy', 'open-cv', 'open-pose', 'pandas', 'python'] | 13 |
10,491 | https://devpost.com/software/seaspace | The homepage of our website
Our discord bot in action!
Satellite image obtained from Sea Stats centered at 0, 0
Visualization of our Geo-spatial Algorithm
Inspiration
Discord bots can make learning about any topic more fun and easy. This is why we decided to make a bot that is focused on educating the public about the ocean.
What it does
Our discord bot, named Sea Space, is designed to be playful and friendly. It is equipped with 5 main features:
Sea Stats
. This command utilizes the Meteomatics API to find information about the ocean given a coordinate, such as temperature, salinity, and depth. Users can also request a satellite image over an area that displays the temperature as a heat map.
Sea Spot
. This command calculates the coordinates of a location and can determine whether the location is on land or on the sea. If the location is on land, Sea Space will give directions to the nearest ocean.
Sea Size
. This command returns constant-value information about the ocean, like its volume or mass. However, Sea Space can easily convert between units, allowing users to understand how large the ocean is in everyday measurements like cups.
Sea Species
. This feature will activate whenever Sea Space detects an image being uploaded. Using TensorFlow, the bot can run a Keras Convolutional Neural Network in order to classify sea animals that may appear in these pictures.
Sea Support
. No bot is complete without a help command. Sea Support will list all the commands, as well as the format for properly calling them.
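Of the features above, Sea Size is the easiest to sketch: converting the ocean's volume to everyday units is straightforward dimensional analysis. The ocean-volume constant below (~1.335 billion km³) is a commonly cited outside estimate, not necessarily the constant the bot stores:

```python
# Sketch of a Sea Size style conversion. OCEAN_VOLUME_KM3 is a commonly
# cited outside estimate, not necessarily the bot's stored constant.

OCEAN_VOLUME_KM3 = 1.335e9          # ~1.335 billion cubic kilometres
LITERS_PER_KM3 = 1e12               # 1 km^3 = 10^12 litres
LITERS_PER_US_CUP = 0.2365882365    # US customary definition

def ocean_volume_in_cups():
    liters = OCEAN_VOLUME_KM3 * LITERS_PER_KM3
    return liters / LITERS_PER_US_CUP

# On the order of 5.6 * 10**21 cups: an everyday unit, an absurd number.
```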
How we built it
We used the popular python API
discord.py
. Once we had a basic bot structure, we integrated a machine learning model, built with the
VGG16
architecture, to run our Sea Species feature.
After this, we implemented several APIs, like
Meteomatics
and
Opencage
, in order to access information about the ocean at a moments notice. We integrated all of these into python modules in order to function as the commands.
In order to computationally find the nearest ocean and figure out if the current coordinate is land or sea, we developed a
python module
that maps geospatial coordinates to a stored equirectangular projection of the world. When paired with a space-search algorithm, we are able to locate the nearest desired feature and create a navigation link for the user's convenience.
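The approach can be illustrated with a toy version of that module: a tiny land/sea grid searched breadth-first for the nearest sea cell. The real raster is a full-resolution world map whose cells map back to coordinates:

```python
# Toy version of the land/sea lookup: 1 = land, 0 = sea. The real module
# uses a full-resolution equirectangular raster of the world.
from collections import deque

GRID = [
    [1, 1, 0],
    [1, 1, 0],
    [1, 0, 0],
]

def nearest_sea(row, col):
    """Breadth-first search over 4-neighbours for the closest sea cell."""
    seen = {(row, col)}
    queue = deque([(row, col)])
    while queue:
        r, c = queue.popleft()
        if GRID[r][c] == 0:
            return (r, c)  # already sea, or nearest sea cell found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None  # no sea anywhere (cannot happen on a real world map)
```

Because BFS expands in rings of increasing distance, the first sea cell it reaches is guaranteed to be (one of) the closest.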
We designed a website where users who are interested can invite Sea Space into their own Discord servers.
Finally, we used
Google Cloud
to host our website via
Apache2
, as well as to run Sea Space, through a Compute Engine instance set up with a static IP.
We made a public discord for anyone who is interested in testing the bot.
What's next for SeaSpace
There are more features coming to SeaSpace including:
Sea Song, a way to play all your favorite ocean-themed songs
Sea Scene, a function that displays dazzling sea images of a given area
Sea Soothe, a shortcut to relaxing sea tunes
Built With
bootstrap
css3
discord
google-cloud
html5
javascript
jquery
keras
meteomatics
opencage
python
tensorflow
Try it out
thesea.space
discord.gg
github.com | SeaSpace | Sea Space is the first ever ocean-centric discord bot, aimed at education and entertainment for all your oceanic needs. Users can easily access ocean satellite images, directions, and more. | ['Pranish Pantha', 'Mohit Chhaya', 'Maanav Singh', 'Sachet Patil'] | ['Honorary Mentions', 'Best Hacks'] | ['bootstrap', 'css3', 'discord', 'google-cloud', 'html5', 'javascript', 'jquery', 'keras', 'meteomatics', 'opencage', 'python', 'tensorflow'] | 0 |
10,491 | https://devpost.com/software/save-the-earth-a-clicker-game | Team Name - The Scratchers
Inspiration
I got inspired by many of the clicker games you see online
What it does
You have to save the earth by gaining money, which you use to improve the environment, thus saving the earth.
How I built it
I used the Scratch Programming Engine
Challenges I ran into
Trying to save space on my Scratch project was definitely a challenge
Accomplishments that I'm proud of
This is one of my first hackathons, and I'm excited to participate.
What I learned
I learned so much about Scratch (this is my first game)
What's next for Save the Earth! (A clicker game)
I might update later if I have time
How to open game
Open a new Scratch Project, click "File", select load from computer, and select the .sb3 file from this project
NOTE: This game is in a .sb3 file so you will have to import the game into a Scratch project to play it. You could also just use the link I put in the "Try it Out" section to play the game
Built With
scratch
Try it out
scratch.mit.edu | Save the Earth! (A clicker game) Team Name - The Scratchers | A retro game to teach kids about environmental safety | [] | ['Best Hacks'] | ['scratch'] | 1 |
10,491 | https://devpost.com/software/techie-helper-bot | GIF
Techie Helper bot in action!
Inspiration
We were inspired to create a discord bot that would allow students to find resources on the spot.
What it does
The Discord bot retrieves posts from a Slack workspace channel.
How we built it
We utilized Discord.js, Node JS, Slack's API, and MySQL.
Challenges I ran into
Developing a React web app that also displayed the same posts that were in the workspace.
Accomplishments that I'm proud of
Implementing the Slack API and Discord.js API for the first time.
What I learned
How to use Trello boards for project management, and the Agile workflow.
What's next for Techie Helper Bot
Developing the web app, and allowing the user to tell the bot how many entries they would like displayed.
Try it out
discord.com | Techie Helper Bot | A Tech Helper Bot that retrieves the most recent posts from a maintained slack work-sapce full of resources. | ['James Chang', 'Giancarlo Garcia Deleon', 'Hello World'] | ['Honorary Mentions', 'Best Discord Bot'] | [] | 2 |
10,491 | https://devpost.com/software/intern-grind-bot | Discord Bot for Resumes / Internships ·
Created for Intern Grind Discord Server
This project creates a Discord bot which moves resume files from a channel where users can post their resumes and comment on others' to a channel containing only the resume PDFs, in order to simplify finding resumes in the server. The project also sorts job postings from one chat channel into another channel to make the postings easier to view. Apart from this, it also has some additional functionality; check out {prefix}help for more.
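The testable core of that resume-moving behavior is just an attachment filter. A minimal sketch, in Python for illustration (the bot itself implements this inside a discord.js message handler):

```python
# Minimal sketch of the resume-sorting filter (illustrative Python; the
# actual bot does this inside a discord.js message handler).

def pdf_attachments(filenames):
    """Keep only the attachments that look like resume PDFs."""
    return [name for name in filenames if name.lower().endswith(".pdf")]

uploads = ["resume_v2.PDF", "cover_letter.docx", "jane_doe_resume.pdf"]
```

Matching on a lowercased extension keeps uploads like `resume.PDF` from slipping past the filter.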
Installing / Getting started
A quick introduction to the minimal setup you need to get the bot up &
running.
Run yarn install to install dependencies.
Create a bot on discord.com/developers and generate a token.
Create config.json in main directory and edit the contents as below.
{
"prefix": ".",
"token": "Enter bot token here",
"resume_channel": "736822249491005502",
"DOG_API_KEY": "NzMyNDM3Mjg3MjcwNjEzMTIy.Xw0lYQ.BIbyP_0IWXuUtmd1jSInyqlO5T4",
"CAT_API_KEY": "7bd3f747-6193-41bc-97d6-2643494791bd",
"image-channel": "images",
"job_channel": "736980807788265655",
"job_board": "736980807788265655"
}
After the above, edit bot_data.json
{
"prefix": ".",
"resume_channel": "<channel id>",
"token": "",
"DOG_API_KEY": "NzMyNDM3Mjg3MjcwNjEzMTIy.Xw0lYQ.BIbyP_0IWXuUtmd1jSInyqlO5T4",
"CAT_API_KEY": "7bd3f747-6193-41bc-97d6-2643494791bd",
"image_channel": "images",
"job_channel": "<channel id>",
"job_board": "<channel id>",
"leetcode_channel": "<channel id>"
}
A full list of commands can be found here
Developing
Built With
Built with discord.js and Node.js
Leetcode data file from
https://github.com/SeanPrashad/leetcode-patterns/blob/master/src/data/index.js
Prerequisites
No prerequisites.
Setting up Dev
How to develop the project further:
git clone https://www.github.com/arjundubey-cr/bot-discord
cd bot-discord
yarn install
...
git checkout -b <branch_name>
...
Licensing
Project created using the GPL-3.0 license, and the text version can be found at
https://opensource.org/licenses/GPL-3.0
Built With
discord.js
mongodb
node.js
thecatapi
thedogapi
Try it out
github.com | Intern Grind Bot | Intern Grind Discord server bot for handling resumes reviews, job postings, and LeetCode questions | ['Haley Kell', 'Olina Wong', 'Arjun Dubey'] | ['Honorary Mentions', 'Best Discord Bot'] | ['discord.js', 'mongodb', 'node.js', 'thecatapi', 'thedogapi'] | 3 |
10,491 | https://devpost.com/software/write-live | The landing page
Word Count Alert that pops up when you enter a word count goal
Writing the story in the editable section
Clicking the "Save Work" at the bottom moves the story up to non-editable section
Clicking on the Editing Mode button to change its mode to "On" moves the story back to the editing section
Clicking on the Editing Mode button once again brings back the "Save Work" button where you can save the story
And the story moves back to the non-editable section
Inspiration
As a writer, I've often struggled with the tendency to stop as I write to edit my work as I go. This is a bad practice for writers as it inhibits the ability to move forward with the story. I know many others who have this problem, so I figured having a platform that allows writers to save their work with an option of not going back and editing it would help many overcome this struggle.
What it does
Write Live has an Editing Mode On and Off option. While on Editing Mode: Off, you can write and save your work, but once saved, your work is moved to a non-editable section where you cannot change anything. This allows you to stay focused on moving forward with the story. You can always go back to Editing Mode: On by clicking the button, and this will move your story back to the editable section. There is also a word count feature where you can set your word count goal, and as you save your work, the word count tracker updates your current word count.
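The word-count feature reduces to a small accumulator. A language-agnostic sketch of the idea (the site itself is plain JavaScript, and the names here are illustrative):

```python
# Illustrative sketch of the word-count tracker (the actual site is
# JavaScript): each save adds the saved text's words to a running total
# that is compared against the user's goal.

class WordCountTracker:
    def __init__(self, goal):
        self.goal = goal
        self.current = 0

    def save(self, text):
        """Record a saved chunk; return True once the goal is reached."""
        self.current += len(text.split())
        return self.current >= self.goal

tracker = WordCountTracker(goal=10)
tracker.save("It was a dark and stormy night")  # 7 words, goal not met
reached = tracker.save("so she kept writing")   # 4 more, goal met
```

Counting only at save time matches the app's flow, since saved text is exactly the text that moves to the non-editable section.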
How I built it
I used HTML, CSS, JavaScript, and Bootstrap to create this project.
Challenges I ran into
Since it was my first time using Bootstrap, I had to learn how to use the different components. There were also some simple functionality issues that were easy to fix.
Accomplishments that I'm proud of
I'm proud of creating a platform that can hopefully help writers overcome a pretty common struggle!
What I learned
I learned about Bootstrap and its capabilities. It really made developing the UI for this platform so much easier!
What's next for Write Live
The next thing would be to add a log-in feature so that users can save their work in their account and go back anytime to add to it. I'd also like to add a story-prompt generator to help writers get started if they're out of ideas.
Built With
bootstrap
css
html
javascript | Write Live | Restrain your inner-editor and write freely with Write Live | ['Shaili M'] | ['Honorary Mentions', 'Certified Dank'] | ['bootstrap', 'css', 'html', 'javascript'] | 4 |
10,491 | https://devpost.com/software/doge-bot | doge-bot
such doge. wow. amaze bot.
add
TOKEN
in your .env file :3
discord bot for
https://intern-grind.devpost.com/
Please check out the video here
https://youtu.be/eSKJl6iMxfg
Commands
?doge dogeifies the supplied sentence, using
dogeify-js
?meme fetches a meme from /r/doge via this
repo
?doggo fetches a random image and blesses you, via
dog.ceo
?swole replaces the target user's avatar with a swole doge head :3
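For flavor, here is a toy approximation of what ?doge might produce. The real command delegates to the dogeify-js package, whose exact output format may differ; this is a guess at the doge-speak style:

```python
# Toy approximation of ?doge (the real command uses dogeify-js, whose
# exact output may differ): sprinkle doge-speak qualifiers over the words.
import itertools

def dogeify(sentence):
    qualifiers = itertools.cycle(["such", "much", "very", "so"])
    words = sentence.lower().replace(".", "").split()
    phrases = [f"{q} {w}." for q, w in zip(qualifiers, words)]
    return " ".join(phrases + ["wow."])

print(dogeify("Discord bots are amazing"))
# such discord. much bots. very are. so amazing. wow.
```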
Built With
javascript
Try it out
github.com | doge-bot | such doge. wow. amaze bot. | ['Shriji Kondan'] | ['Honorary Mentions', 'Certified Dank'] | ['javascript'] | 5 |
10,491 | https://devpost.com/software/happy-3alwgc | Who's a good girl
Github and Instagram
About me
List of accomplishments
Home page
toy gallery
Inspiration
I want to build a personal website in the future, but I need a lot more practice, so I decided to build one for my dog, Happy. When I saw this super-chill hackathon, I felt like it was a great opportunity for me to continue working on this project, since most hackathons don't allow you to submit previous projects.
What it does
It showcases Happy's personality, skills, and toys, and allows visitors to get a better understanding of her and visit her instagram page.
How I built it
I used mostly HTML and CSS, but I used a bit of javascript as well.
Challenges I ran into
I signed up when this hackathon was already halfway through, which meant I only had a few hours, so I had to not be too ambitious with my planning and just added a few features. I'm also pretty new to web development, so there were a lot of features I had to learn how to add.
Accomplishments that I'm proud of
Since my dog is a doge, I made the fonts comic sans so it's super dank and memey. I also colored the site logo that my friend helped draw, and it turned out pretty great.
What I learned
I felt like it was really fun to try to write from a dog's perspective, because we really don't know how dogs talk. Humans just assume they talk with incorrect grammar in a really cute voice, but who knows? Maybe dogs talk really seriously in a deep, scary voice, but we'll never know!
What's next for happy
I need to improve the responsiveness feature. I also want to learn how to host this site on GitHub, but I unfortunately didn't have time to complete that by the deadline. More art and more efficient styling could also improve this website.
Built With
css
html
javascript
Try it out
github.com | Happy - A Chonky Doge! | My first website I've built on my own that I've decided to come back to. | ['Riley Chou'] | ['Honorary Mentions', 'Certified Dank'] | ['css', 'html', 'javascript'] | 6 |
10,491 | https://devpost.com/software/drop-a-smile | dropasmileinthis.space
Inspiration
The world around us can be a sad and depressing place. However we are all willing to help another person out if we see or know about someone in distress or needing help. With technology we can make the helping nature of people come to life and spread smiles!
What it does
This is like a Waze of helping people. When we see or notice someone in trouble or needing help (not a 911 emergency; we absolutely recommend calling the authorities for anything observed of a serious nature), a user can place a pin with an emoji on it using our app. The emoji can signify what the problem is, and can be accompanied by a small text description and/or an image. When someone uses the app to see this, they have the chance to help the situation, and can then share that they have changed it by turning the pin's emoji into a smile :)
How we built it
frontend was built with react native
the maps were built using GCP google maps api
the backend was built using GCP google serverless functions
the database was hosted on MongoDB Atlas.
Challenges we ran into
keeping the map active and refreshed
emoji animation
Accomplishments that we're proud of
The system actually works.
What we learned
We were actually surprised this wasn't a thing before.
What's next for Drop-a-Smile
Hopefully some charitable organization can use this system; we would be glad to build out a robust version for free use.
domain registered: dropasmileinthis.space
Built With
google-cloud
mongodb
react-native
Try it out
github.com | Drop-a-Smile | Happy communities! | ['Ebtesam Haque', 'Muntaser Syed'] | ['Certified Dank'] | ['google-cloud', 'mongodb', 'react-native'] | 7 |
10,491 | https://devpost.com/software/ar-portal-ios-app | Inspiration
When the pandemic began, the lockdown started and everything moved indoors. Then the urge to travel, to explore new worlds, and the desire to feel free while travelling inspired us to come up with this idea of
ANYWHERE DOOR
and work towards its development.
We were all beginners at the time, but the will to create something interesting, to be able to live that moment of turning our imagination into reality, kept us moving and motivated. When it finally could be made, we called it the
DREAM PORTAL
which truly became a source of making our dreams come true; we were able to live that moment of joy and experience life once again!
What it does
The app we have created lets the user enter a desired location and then creates a virtual door through which one can step into that location virtually and experience the place with a 360° view, while also seeing the room left behind on the other end of the door, which makes this project all the more charismatic.
How We built it
We first started gathering information on what skills were required for our process, which took a long time as we were all beginners. We finally came across the Unity engine and learned how to work with it within three days, and we also worked with A-Frame to finalize the project. The incorporation of the 360° view gave us a really hard time and we were not able to move forward, but after watching a lot of tutorials, we finally made our
DREAM PORTAL
.
Challenges We ran into
When the whole code was prepared and finalized, uploading it to GitHub became really difficult, as GitHub would not accept the code as it was. We had to break it into chunks and upload them separately, which took a lot of time and made us anxious about what would happen if we could not upload the code in time; but in the end we made it and proceeded further.
Accomplishments that We're proud of
When we first started we were complete beginners, but as we progressed with our work we learned a lot and have grown as developers. The project we were able to make gives us the confidence that we will give tough competition to others; we strive not only to compete but also to win this hackathon!
What we learned
We learned quite a number of things, like the Unity engine and A-Frame.
We also learned never to give up, and to keep working on a problem until it's resolved. This project not only increased our skills but also our confidence and motivation to keep growing and learning.
What's next for AR-portal-Ios-App
We will create an application in which we will integrate
Google Maps
with our idea, so that whenever the user searches for a location, they will be able to experience how it feels to be there! Not only that, we will integrate nearby hotels and stores along with the view of the location. And we will keep updating it further.
Built With
c
c++
html
objective-c
objective-c++
shell
Try it out
github.com
www.canva.com | DREAM PORTAL | where imagination meets reality. | ['bruce waybe', 'Maninder Singh', 'Ruhee Jain'] | ['Certified Dank'] | ['c', 'c++', 'html', 'objective-c', 'objective-c++', 'shell'] | 8 |
10,491 | https://devpost.com/software/vlearn-fxhc3t | Inspiration
Awareness and education are two of the essential ingredients of developing belief. Awareness has been highlighted by many as a key indicator of success in a range of performance environments. It is arguably the most important ingredient for belief, as every other skill, quality, and task you have and undertake can be traced back to awareness. Being aware will give you an insight into your beliefs and whether they are positive or holding you back. But it takes a lot more than information to make kids understand and follow things, and on the other hand, education is important to shape an individual. I wanted to make something that helps create awareness about the do's and don'ts of COVID-19 among kids, while also being entertaining, immersive, and educational. What better way than games to do this? As VR is the best immersive technology available out there, and keeping in mind the tendency of kids to explore new things, this application was developed.
What it does
It is a multiplayer virtual reality quiz application. The app has many topics to choose from and play, which also helps spread awareness about COVID-19 and other basics that kids need to learn. It also has a real-time leaderboard for every topic that people choose to play.
How we built it
A. Unity3D- It is built on unity3d which is a powerful cross-platform 3D engine and a user-friendly development environment. I used unity to build the whole game from UI to Realtime database system to the game itself.
B. Google VR SDK - a new open-source Cardboard SDK for iOS and Android. I used the Google VR SDK to develop the VR game scenes, which is not possible without it.
C.Photon PUN - Photon Unity Networking (PUN) re-implements and enhances the features of Unity's built-in networking. I used it for networking.
D. Google Firebase - Firebase is Google's mobile application development platform that helps you build, improve, and grow your app. I used Firebase to manage the database systems that verify credentials, store data, retrieve data, and update leaderboards.
E. Photoshop - I used Photoshop for the development of user interface elements.
Challenges we ran into
As this is a multiplayer application, I used Google Firebase (Unity SDK) to store and retrieve data in real time (a real-time database), and integrating it with Unity was tough work. This was also my first time working on networking using PUN, which was a problem, as networking is not as easy as it seems; with PUN having many internal issues in my version of Unity, I had to rebuild all the non-networking scenes in a new version that supported PUN.
Accomplishments that we're proud of
I could finish the development of the application in less than a day.
What we learned
Integration of realtime databases with unity apps, networking.
What's next for VLearn
The VR application currently supports Android and Windows, so the next goals are to make an iOS version, refine the UI, and release it to production so that users can have an immersive experience of modern gaming and education techniques.
Built With
c#
firebase
googlevr
photon
unity
Try it out
github.com | VLearn | Immersive Approach To Awareness and Education. | [] | ['submitted the same hack to multiple hackathons and did not realize this is not a serious hackathon?'] | ['c#', 'firebase', 'googlevr', 'photon', 'unity'] | 9 |
10,491 | https://devpost.com/software/safety-first-wzopx0 | https://docs.google.com/presentation/d/14bqbPn2GB5mZ6hJkgGejKjyEgY0WrKAhjjppV4mMxnE/edit#slide=id.g8c9fb82bba_0_180
Inspiration
As we all know, the coronavirus is a deadly problem in today's society, causing many deaths and illnesses all around the world. As COVID-19 keeps spreading, more and more people are becoming ill, but there are also more deaths happening due to an indirect factor: depression. For some, life has become a hardship. People are unable to go outside, some have been fired from their jobs, and others are simply unable to cope with the stress and anxiety of the lockdown at home. These are all contributing factors.
So far, 15.3 million people have been confirmed to have the coronavirus, and 624,000 people have died from it. As for depression, studies have shown that 1 out of every 113 people will experience depression in their lifetime. In 2016 alone, over 16.2 million people in the US were affected by depression, meaning about 5% of the US population had depression symptoms that year; 6.7% of all adults also have depression symptoms.
What it does
To address this outbreak, I decided to make a web app and mobile app that can help everyone, no matter where they live, during these tough times. My project has three parts. First, there is the social distancing app, which uses APIs to make sure everyone is social distancing; if you aren't social distancing, you will be alerted. Then I made a free coronavirus quiz: an animated form which evaluates which symptoms of coronavirus you show and gives additional information and testing sites if needed. The last part is an algorithm that can detect whether you show symptoms of depression. The algorithm analyzes your typing speed and the connotations of the words you type, and uses regression to find a pattern between them.
How I built it
I used Flask, HTML, CSS, SwiftUI, Swift, and Python.
Challenges I ran into
As a beginner, making my own website from scratch was extremely hard because I had only used templates before. I also didn't know about regression or the other important machine learning concepts that I used for this project.
Accomplishments that I'm proud of
I'm proud of learning regression, and of somehow being able to complete this project on time! This was my first time using SwiftUI, and I am very proud of that.
What I learned
I learned Swift, SwiftUI, Flask in depth, and HTML and CSS in depth to make a website. I also learned basic machine learning concepts like pandas DataFrames and linear regression.
What's next for Safety First
I want to publish my app on the App Store, and also add voice recognition to the depression symptom analyzer so the results might be more accurate.
Built With
css3
flask
html5
python
swift
swiftui
Try it out
github.com
docs.google.com
drive.google.com | Safety First | Empowering safety for all generations around the world during the coronavirus pandemic | ['Neeral Bhalgat'] | [] | ['css3', 'flask', 'html5', 'python', 'swift', 'swiftui'] | 10 |
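The regression idea Safety First describes (relating typing speed and word connotation to a symptom score) can be sketched with a plain least-squares fit. The toy data, the single feature, and the scoring below are entirely our illustration, not the author's actual model:

```python
# Illustrative only: a pure-Python ordinary-least-squares fit standing in for
# the linear regression described above. The inputs are invented toy data.
def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

# toy data: typing speed (chars/sec) vs. a made-up symptom score
speed = [5.0, 4.5, 4.0, 3.5, 3.0]
score = [1.0, 1.6, 2.1, 2.4, 3.0]
a, b = fit_line(speed, score)
print(f"score ~ {a:.2f} * speed + {b:.2f}")
```

A negative slope here would only mean that, in this invented sample, slower typing tracks a higher score; a real model would need validated features and data.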
10,491 | https://devpost.com/software/astute-bot | Initial design
With object detection
Final product
Inspiration
Desire is always needed to start something off: "Necessity is the mother of innovation." We are revisiting one of our older projects with a twist for tackling this pandemic. The world is now trying to tackle this jinxed virus, COVID-19, and robots are being used across the world to serve patients in isolated wards and even people in quarantine. It can be clearly observed that the generation of robots has now come; due to the shortage of personal protective equipment (PPE), countries are trying to adopt artificial beings called robots. It is statistically proven that robots have already seen a hike in demand. One of the main reasons for switching to robots is that they can be autonomous. Multiple technologies can be added to aid the patient and the doctors, and stationing robots helps to reduce further human contact, which is sort of a taboo in the current situation. The aid from robots can range from entertaining patients to taking their required tests, and even to spraying disinfectants in the meantime.
What it does
There is a wide range of assistance we can get from a robot, and especially in this case we can expect a lot as the technology keeps improving. As described above, it can be made more interactive with the people it comes to check on, like the medical staff, patients, and many others. It can be employed to keep spraying disinfectants on the floor, and can be used to test people ("remote help" for the doctors), which in turn lowers the risk for the medical workers. It will soon be capable of delivering food to the patients in the isolation wards!
Deploying these robots will certainly embrace safety and allow humans and robots to work together.
How we built it
It was our very first idea for a rover which could be equipped with a camera providing a live video feed. It has a very basic chassis and components such as a Raspberry Pi, motor controller, DC motors, and jumper cables. It was built as a project for submission during an internal assessment. We scavenged for parts from wherever possible and tried to build it; it is robust, as will be very evident from the pictures. It has object detection software which can help in recognizing people. We programmed it to be controlled by numerous methods, namely Bluetooth, a website, and a self-autonomous mode. The website has controls and a window for the live video feed, currently with zero lag if provided with a stable connection.
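The multi-channel control scheme described above (website, Bluetooth, autonomous mode) ultimately reduces to mapping a small command set onto GPIO pin states. The sketch below is a rough illustration only: the pin numbers are invented, and a fake GPIO class stands in for RPi.GPIO so the logic can run off-device:

```python
# Illustrative dispatcher: every control channel funnels into the same
# command table, which drives the motor pins.
class FakeGPIO:
    """Stand-in for RPi.GPIO so the logic can run without a Raspberry Pi."""
    def __init__(self):
        self.state = {}
    def output(self, pin, value):
        self.state[pin] = value

gpio = FakeGPIO()
LEFT_MOTOR, RIGHT_MOTOR = 17, 18  # hypothetical BCM pin numbers

COMMANDS = {
    "forward": {LEFT_MOTOR: 1, RIGHT_MOTOR: 1},
    "left":    {LEFT_MOTOR: 0, RIGHT_MOTOR: 1},
    "right":   {LEFT_MOTOR: 1, RIGHT_MOTOR: 0},
    "stop":    {LEFT_MOTOR: 0, RIGHT_MOTOR: 0},
}

def handle_command(name):
    """Apply a named drive command (from web, Bluetooth, or autonomy) to the pins."""
    for pin, value in COMMANDS[name].items():
        gpio.output(pin, value)
    return gpio.state

print(handle_command("left"))
```

Keeping the pin definitions and command handlers in one module, as the team describes doing, lets each interface (HTML buttons, Bluetooth, autonomy) reuse the same table.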
Challenges we ran into
1. Integrating the IBM cloud-based API for object detection was one of the challenges; it was made harder by an unstable WiFi connection.
2. Controlling the GPIO pins of the rover using HTML tags was difficult; this was overcome by defining the GPIO pins and functions in separate files.
3. Getting hold of many parts was impossible due to the pandemic.
4. Network issues were at their peak and no one could do their work properly.
Accomplishments that we're proud of
1. We made this in our first year, when everyone else was busy making new friends and trying to fit in; this was our way of having fun.
2. It is our very first project and it still works.
3. There is a lot of scope for modifications if provided with the right amount of funds.
4. Our university recognizes our efforts towards this project.
As of now, the pandemic has prevented us from further development.
What we learned
We've learned a lot since our very first project. We've learned to respect each other's perspectives on the same idea. Learning is a continuous process, and events like these remind us that there's always something new to learn. "Slow and steady wins the race": we want this project to be widely used in the near future, but we will not rush things by sacrificing the quality of the build.
What's next for ASTUTE BOT
We have been selected for the IBM disaster management bounty for the same idea and are looking forward to financial help from our university to make our very first large-scale model. This team also receives help from students in different parts of the country, from various universities. Our aim is to make these products commercially available with numerous modifications/attachments so that they can be used in various situations, and not only in a pandemic.
Built With
arduino
html/css
ibm-cloud
javascript
nodemcu
object-detection
python
raspberry-pi
Try it out
github.com
csk27.github.io | Bot_Alpha | There is always an urgency to solve an emergency. This bot can do it all with different mods available. | ['Chaitanya Sk', 'MINEKING987 Pranav', 'Mann Patel'] | [] | ['arduino', 'html/css', 'ibm-cloud', 'javascript', 'nodemcu', 'object-detection', 'python', 'raspberry-pi'] | 11 |
10,491 | https://devpost.com/software/hackermatch-hke7qy | Inspiration
For many hackathons, we have had trouble finding new teammates who fit our interests and abilities. The DevPost system meant finding a hacker who was looking for a team, emailing them, and usually waiting weeks for a response, only to hear that the hacker was already with another team. Because of this tedious process of making teams, we wanted to create a platform that connects hackers quickly through an easy-to-use mobile app that acts similar to Tinder: you swipe on hackers' profiles depending on whether you like their skill set, and you are matched with a team to fit your needs.
What it does
Our app serves to connect hackers and form teams for hackathons by matching users with similar interests and abilities. To register, the user gives some identification and also fills out a short survey. The results of the survey are passed into an algorithm where the backend matches users with similar interests and technological abilities, making the best matches possible. The best matches show up on the user's swiping page, where the user can swipe right to indicate a pass, or swipe left to indicate a match. On a match, the user's information and the matched user's information are sent to the backend, where they are combined to form a team. A team then forms on the team page, and users can connect with each other through their Discords. Basically, it's like Tinder, but for matching hackers into teams of a certain size.
How we built it
The frontend was built with Flutter. We chose Flutter because it is easy to work with, gives excellent widgets essential for the app's swiping functionality, and easy to integrate with the backend. We used two backends to hold user data: Firebase and MongoDB. The user's information was stored in both, however Firebase was used to call the user information directly from the app, and that information was passed into http post requests to the backend, where the actual matching happened.
The main backend was a database hosted on mongodb atlas and the matching algorithm and access methods were implemented on GCP using google's serverless functions.
The algorithm works basically like Tinder, except instead of making pairs it forms teams of multiple people.
The lists of users shown are first filtered by some of the user entered factors, for example some hackers may not want to be in a mixed gender team, and some hackers may not be comfortable in a team with all newcomers or all experienced hackers.
We then sort based on some key indicators such as
what the main aims of the hacker are at the hackathon
how comfortable they are with new technology
how focused they are on the area they chose
how open they were to choosing a different idea if they already had one
etc.
When matches are mutual, they get grouped into teams which then can be seen by users.
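The filter-then-sort matching described above might look something like the following sketch. The survey fields, hard filters, and weights here are illustrative assumptions of ours, not the project's actual algorithm:

```python
# Illustrative HackerMatch-style ranking: hard filters first, then a soft
# score over shared aims and similar comfort with new technology.
def compatible(user, other):
    """Hard filters derived from the user's stated preferences."""
    if not user["ok_mixed_gender"] and user["gender"] != other["gender"]:
        return False
    if not user["ok_all_newcomers"] and user["experience"] == 0 == other["experience"]:
        return False
    return True

def score(user, other):
    """Soft ranking: reward shared aims, penalize mismatched tech comfort."""
    s = 3 * len(user["aims"] & other["aims"])
    s -= abs(user["tech_comfort"] - other["tech_comfort"])
    return s

def ranked_candidates(user, others):
    pool = [o for o in others if compatible(user, o)]
    return sorted(pool, key=lambda o: score(user, o), reverse=True)

alice = {"name": "Alice", "gender": "F", "ok_mixed_gender": True,
         "ok_all_newcomers": True, "experience": 2,
         "aims": {"win", "learn"}, "tech_comfort": 4}
bob   = {"name": "Bob", "gender": "M", "ok_mixed_gender": True,
         "ok_all_newcomers": True, "experience": 1,
         "aims": {"win", "learn"}, "tech_comfort": 3}
carol = {"name": "Carol", "gender": "F", "ok_mixed_gender": False,
         "ok_all_newcomers": True, "experience": 0,
         "aims": {"win"}, "tech_comfort": 5}
print([c["name"] for c in ranked_candidates(alice, [bob, carol])])
```

A production version would also need the mutual-match step the write-up mentions, grouping reciprocated swipes into teams of the requested size.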
Challenges we ran into
Making an abundance of users was tedious because we had to fill out information and take the survey for 15+ users, since we wanted a diverse array of users to be matched according to their interests. We also had a bit of trouble communicating between our front end and backends, since for firebase and mongoDB, we used slightly different formats to store the user data.
Accomplishments that we're proud of
We were able to complete our product. We also were proud to seamlessly integrate the two backends together and connect them to the frontend.
What we learned
We learned that using two backends when communicating with Flutter is much easier than relying on the single MongoDB database alone. We learned how easy it is to use Firebase with Flutter, and we will be using it much more often in the future.
What's next for HackerMatch
We want to extend our service for multiple hackathons, and not just for the single hackathon that is currently available for our app. We want to make some kind of menu where you can sign into your DevPost account, and from your hackathons, choose teams using our app.
Built With
flutter
google-cloud
python
Try it out
github.com | HackerMatch | A platform to connect hackers and form teams for Hackathons! | ['James Han', 'Muntaser Syed'] | [] | ['flutter', 'google-cloud', 'python'] | 12 |
10,491 | https://devpost.com/software/giftswapr | Inspiration
This app is inspired by the popular and super fun game Secret Santa that we have all played during Christmas. Except, with a twist... This game runs all year long, and gives celebrating birthdays an element of surprise.
What it does
Giftswapr is a mobile app that allows you to create wish lists for your birthday and anonymously gift your friends items on their wish lists. It is a fun app that gamifies the process of celebrating birthdays! You can search for your friends' usernames and claim items on their wish lists, as well as customize and create your own wish list for your friends to see.
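The anonymous claiming flow described above could be sketched like this; the schema and usernames are our own illustration, not the app's actual MongoDB documents:

```python
# Illustrative claim logic: a friend claims a wish-list item, and the owner
# sees only that it is claimed, never by whom.
wishlist = {
    "veer": [{"item": "headphones", "claimed_by": None},
             {"item": "sketchbook", "claimed_by": None}],
}

def claim_item(owner, item, claimer):
    """Claim an item if it is still unclaimed; return whether it succeeded."""
    for entry in wishlist[owner]:
        if entry["item"] == item and entry["claimed_by"] is None:
            entry["claimed_by"] = claimer  # stored, but hidden from the owner
            return True
    return False  # already claimed or not found

def owner_view(owner):
    """What the birthday person sees: claimed yes/no, gifter anonymous."""
    return [{"item": e["item"], "claimed": e["claimed_by"] is not None}
            for e in wishlist[owner]]

claim_item("veer", "headphones", "nand")
print(owner_view("veer"))
```

Rejecting a second claim on the same item keeps two friends from buying the same gift, which is the Secret Santa twist the app is built around.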
How we built it
Figma for design.
React-native for front-end.
Flask python for back-end server.
MongoDb for storing user data.
Gcloud for web hosting.
Challenges we ran into
We decided to participate when there were only four hours left before submission (lol), so we had major time constraints.
Getting the react native to work fast.
Web hosting
Accomplishments that we're proud of
Pulling off a fully functional mobile app in so little time.
Creating a responsive and clean UI.
Setting up a functional backend server and db.
What we learned
How to use React Native.
MongoDb.
Gcloud hosting.
What's next for GiftSwapr
We want to make the login system more sophisticated, with password reset.
Notifications for friends' birthdays, and integration with calendar.
A sophisticated search engine using Elastic
Built With
appengine
flask
gcp
mongodb
python
react-native
Try it out
github.com | GiftSwapr | Be the Secret Santa on your best friend's special day. | ['Veer Gadodia', 'Nand Vinchhi', 'Muntaser Syed', 'Ebtesam Haque'] | [] | ['appengine', 'flask', 'gcp', 'mongodb', 'python', 'react-native'] | 13 |
10,491 | https://devpost.com/software/crunch-lsg1ku | Inspiration
We noticed that a huge number of restaurants and cafes had lost their sales due to COVID-19. They were giving out coupons and offers to get back business but not many came to know of those because of the lockdown (usual means of marketing like word of mouth and newspapers were not working anymore).
What it does
We decided to help such restaurants and cafes by showing their promotions on our website where they can easily sign up and display their offers.
How we built it
We built it with HTML, CSS, JS, and the Google Sheets API. We made a website with HTML and CSS and then added JavaScript to the form and the frontend. We then integrated the Sheets API and connected it to the form so we could use a spreadsheet as a database.
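Using a spreadsheet as the database boils down to appending one row per signup via the Sheets API's spreadsheets.values.append method. The sketch below (in Python, though Crunch's frontend is JavaScript) only builds the request body; authentication and the HTTP call are omitted, and the column layout is our assumption, not Crunch's actual sheet:

```python
# Illustrative mapping from a signup form to a Sheets values.append body.
# The field names are hypothetical placeholders.
FIELDS = ["restaurant", "offer", "valid_until", "contact"]

def signup_to_append_body(form):
    """Turn a signup form dict into a values.append request body (one row)."""
    row = [form.get(field, "") for field in FIELDS]
    return {"majorDimension": "ROWS", "values": [row]}

body = signup_to_append_body({
    "restaurant": "Cafe Aroma",
    "offer": "20% off takeaway",
    "valid_until": "2020-08-31",
    "contact": "cafe@example.com",
})
print(body["values"][0])
```

The website's promotions page would then read the same range back and render each row as an offer card.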
Challenges we ran into
Integrating the Sheets API with the signup form was quite tough and this was a major challenge.
Accomplishments that we're proud of
This was the first time that we used Google Sheets API as a database and connected it to a frontend.
What we learned
We refined our HTML and CSS skills and learned the fundamentals of Javascript and database management.
What's next for Crunch
We will now add more features like nearby hotels, more promotions, and mailed offers. We will also try to spread the word about our creation.
Built With
api
css
html
javascript
sheets | Crunch | Crunch lets you see offers and promotions of nearby restaurants and cafes. | ['Rushank Goyal'] | [] | ['api', 'css', 'html', 'javascript', 'sheets'] | 14 |
10,491 | https://devpost.com/software/police-brutality-forum | Inspiration
Police brutality has been in the news lately so we decided we wanted that to be the subject of our project. After realizing that more mainstream social media sites are sometimes forced to take down or flag content related to police brutality we decided there was a need for a more independent community.
What it does
It is a forum for police brutality victims and allies to build a community where everyone can share their experiences and resources.
How we built it
We used flask as the framework for our web application. Flask handled the logic and served up the web pages coded using HTML. After completing a back-end and a basic front-end we used a CSS framework, Bootstrap, to make everything look better.
Challenges we surpassed
Creating the database and figuring out the relationship between Users and Posts
Preventing duplicate users/usernames and mismatched passwords during registration
Creating forms using WTForms
Figuring out notifications
Profile picture integration using Gravatar
Emailing website error notifications to administrators
Accepting cryptocurrency donations through Coinbase
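The registration checks mentioned above (duplicate usernames, mismatched passwords) can be illustrated with plain functions. The project actually implements these as WTForms validators backed by the database, so this sketch only mirrors the logic with an in-memory user set:

```python
# Illustrative registration validation; the real app queries the database
# inside WTForms validators instead of checking a set.
existing_usernames = {"dhruv", "siya"}

def validate_registration(username, password, confirm_password):
    """Return a list of error messages; an empty list means registration is valid."""
    errors = []
    if username.lower() in existing_usernames:
        errors.append("That username is taken.")
    if password != confirm_password:
        errors.append("Passwords do not match.")
    return errors

print(validate_registration("dhruv", "a", "b"))
print(validate_registration("newuser", "hunter2", "hunter2"))
```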
Accomplishments that we're proud of
We made a functioning website that can be used to help people who have been victims of police brutality.
What we learned
Web Development using flask
What's next for Police Brutality Forum
Next, we will continue to improve the website by adding more features and sources, as well as continuing to make the forum more user-friendly. Examples of upcoming features include a like/dislike system and methods for sorting posts (such as by popularity or most likes), password recovery through email, the ability to delete/edit posts, image/link support for posts, and tags to sort posts into categories.
Built With
bootstrap
flask
html
jinja
python
sqlalchemy
werkzeug
wtforms
Try it out
github.com | Police Brutality Forum | A forum for police brutality victims and allies to build a community where everyone can share their experiences and resources. | ['Dhruv Batra', 'siya batra'] | ['Honorable Mention'] | ['bootstrap', 'flask', 'html', 'jinja', 'python', 'sqlalchemy', 'werkzeug', 'wtforms'] | 15 |
10,491 | https://devpost.com/software/c-care | When our app worked, Satisfied
Inspiration
During the current COVID-19 pandemic, I see health workers curing patients, doctors innovating new medicines, the police controlling crowd movement, and even bus drivers helping people get back home. As a future engineer, I felt like my contribution was none, so I felt motivated to do my part, try to bring a positive change, and make sure my product can also be used in a future pandemic.
The problem our project solves
Offices and workplaces are opening up, and as the lockdown loosens we have to get back to work. But there is a massive possibility that infection can spread in our workplaces: an infected person can be asymptomatic for up to 21 days and still be contagious, so the only way to contain the spread is by wearing a mask and maintaining hand hygiene. WHO and CDC reports said that if everyone wears a mask and maintains hygiene, the number of cases can be reduced threefold. But how will we do that? How can we make everyone habituated to following safety precautions so that normalization can take place? We have come up with a solution called C-CARE, the first-ever preventive habit maker, which will bring a positive change.
What our project does
Our app is a first-of-its-kind safety awareness system which works on Google's geofencing API. It creates a geofence around the user's home location, and whenever the user leaves home, they get a notification in the C-CARE app ('WEAR MASK'); as the user returns home, they get another notification ('WASH HANDS'), ensuring the full safety of the user and their family. It is also loaded with additional features, such as i.) a HOTSPOT WARNING SYSTEM, in which, if the user enters a COVID hotspot region, they are alerted to maintain 'SOCIAL DISTANCING'. It also has a statistics board where the user can see how many times they have visited each of these geofences. With repeated notifications, we will make people habituated to wearing masks, washing hands, and social distancing, which will make each and every one of us a COVID warrior; we are not only protecting ourselves but also protecting others, only with C-CARE.
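The geofence logic described above can be sketched as follows. The app itself is built in Java with Android Studio and Google's geofencing API; this Python sketch only illustrates the boundary-crossing idea, with a made-up home location and radius:

```python
# Illustrative geofence check: great-circle distance to home, plus the
# enter/exit transitions that trigger the two reminders.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

HOME = (20.2961, 85.8245)   # hypothetical home coordinates
RADIUS_M = 100

def notification(prev_inside, lat, lon):
    """Return (now_inside, reminder) when the user crosses the fence boundary."""
    inside = haversine_m(*HOME, lat, lon) <= RADIUS_M
    if prev_inside and not inside:
        return inside, "WEAR MASK"
    if not prev_inside and inside:
        return inside, "WASH HANDS"
    return inside, None

state, msg = notification(True, 20.3061, 85.8245)  # roughly 1.1 km from home
print(msg)
```

On Android the platform's geofencing service delivers these enter/exit transitions to the app, so no polling loop is needed.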
Challenges we ran into
1.) We lacked financial support, as we had to make this app from scratch.
2.) We had problems collecting data on government-certified hotspots, and we also had to do a lot of research on the spread pattern of COVID-19.
3.) Due to a lack of mentors, whenever the app stopped working we had to figure out by ourselves how to correct the error.
4.) It took us too long to test it in real time, as during lockdown it was too hard to go outside; finally, after the lockdown loosened a bit, we tested it and it gave an excellent result.
5.) We didn't know much about geofencing before this, so we had to learn it from scratch using YouTube videos.
Accomplishments that we're proud of
WINNER at Global Hacks in the category of HEALTH AND MEDICINE.
WINNER at MacroHack As the best Android Application.
WINNER at MLH Hackcation in the category ( Our first Hackcation ).
TOP 5 in innovaTeen hacks.
TOP 10 in Restartindia.org and Hack the crisis Iceland.
What we learned
All team members of C-CARE were able to grow their area of competence by participating in the whole process of idea definition, market research, validation, prototyping, and presentation. Through different mentor sessions, we learned that problems could be approached by many means, but most importantly our mission should be clear.
What's next for C - CARE
COVID cases are increasing every day, and chances are low that we can create a vaccine immediately; apps like C-CARE will play a crucial role in lowering the spread of infection until a proper vaccine is made.
Our app can also be used for seasonal diseases such as swine flu or bird flu, or possible future pandemics such as hantavirus, the G4 virus, bubonic plague, or monkeypox.
Built With
android-studio
geofence
google-maps
java
sqlite
Try it out
drive.google.com | C - CARE | C - CARE An app that makes every person a COVID warrior. | ['Anup Paikaray', 'Arnab Paikaray'] | ['Track Winner: Health and Medicine'] | ['android-studio', 'geofence', 'google-maps', 'java', 'sqlite'] | 16 |
10,491 | https://devpost.com/software/ice-in-case-of-community-emergency | Inspiration
A recent Federal Emergency Management Agency (FEMA) survey found that nearly
60 percent of American adults have not practiced what to do in a disaster by participating
in a disaster drill or preparedness exercise at work, school, or home in the past year. Further, only
39 percent of respondents have developed an emergency plan
and discussed it with their household. This is despite the fact that
80 percent of Americans live in counties that have been hit
with a weather-related disaster since 2007, as reported by the Washington Post. Additionally,
48% of Americans report having seen at least some news they thought was made up
about the recent coronavirus. With
dangerously large amounts of false information regarding how to prepare for the coronavirus (52%)
in the general public, with nearly
61% of Americans having not prepared an emergency plan in case of a widespread emergency in their local area
and no basic technologies
present to sufficiently provide information to local authorities so that people can help one another, I felt the need to develop this app in order to build a tool for both
the average household and authorities
, so that local authorities can better plan, make better decisions, and provide knowledgeable information in order to save lives. Additionally, I have been personally affected by a similar situation, giving me better insight into the experience.
What it does
I have created a hybrid mobile application which has three different primary pages and intentions. On the home page, there are
three different feeds
that serve the main intention of alerting the user with
reliable information from reliable sources, based on the subject criteria.
These
three information feeds cover the recent COVID-19 pandemic, information from the government/city, and the National Weather Service and other information relating to extreme weather. For the second page, the user must enable location features in order to have full access to the app. The app utilizes the current location of the user and uploads it to my backend in real time, which is viewable to the local authorities with authentication to the backend. This can be used in many instances: in a fire, to see whether any people are inside; in other natural disasters, to help rescue people based on these given locations; and also to alert others surrounding you for help.
Your location on the map will show up to people around you (in a certain radius). On the second page, there is a search option where the user can input a location and find the
est. amount of people at that location (using a self-created algorithm), the risk level (in terms of COVID), the amount of people who are in "help status" near that location, and the amount of COVID cases in your state. These features are very helpful during the current pandemic, as essential places still need to be visited. This will help users find the time which has the fewest people present and the fewest cases recorded.
On the third page, there are
verified resources to help inform you what to do when various kinds of emergencies occur
and how to prepare for them in advance.
Overall, these functions help inform the user of what to do and how to be prepared, keep them informed with reliable information in these categories, help them avoid contact with and contraction of the novel coronavirus, and help them stay safe in natural disasters and other emergencies by sending data to nearby users and local authorities for help.
How I built it
I built this app using the hybrid application development platform React Native. I used Expo for faster testing and a better, managed workflow. I wrote the entire app in JavaScript. Now to the construction of the main features. For the information feeds, I used a News API with several endpoints to gather information on each topic from reliable sources. For the search page, I used react-native-maps to gather the accurate location (longitude, latitude, geolocation) of the user and create the maps UI. Additionally, I used Google Firebase as my backend to store this data in a database where it is accessible to local authorities in real time. For the estimated number of people at a location and the risk level, I used an algorithm I created from multiple data points, such as the population of the residing city, the number of users recorded in the database, and the density of the city. For the last page, I used individual reliable sources to provide preparation resources for emergency situations.
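The exact formula is not published, so here is a hypothetical Python sketch of how an estimate built from the stated data points (city population, users recorded in the database, city density) might be combined; the scaling approach and the risk thresholds are invented for illustration:

```python
def estimate_people_at_location(city_population, users_in_db_city, users_at_location,
                                sample_rate=None):
    """Scale the app's observed user count up to an estimate of everyone present.

    If sample_rate is not given, derive it from how many of the city's
    residents are recorded in the database.
    """
    if sample_rate is None:
        sample_rate = users_in_db_city / city_population
    if sample_rate == 0:
        return 0
    return round(users_at_location / sample_rate)

def risk_level(est_people, density_per_km2):
    """Bucket a crowding score into low / medium / high (thresholds are illustrative)."""
    score = est_people * density_per_km2
    if score < 1_000:
        return "low"
    if score < 100_000:
        return "medium"
    return "high"
```

For example, if 1,000 of a city's 100,000 residents use the app and 5 of them are at a store, the store's estimated headcount is 500.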
Challenges I ran into
Overall, there were many challenges I faced over the course of this project. One of the earlier issues was related to tracking geolocation, uploading that data to the backend, and then redisplaying it on the map. Although this may not have been visible during the demonstration, I wanted to ensure full functionality. The reason some of the data was not showing was my incorrect way of passing props and setting state, along with the overall scope of the project, which I eventually resolved. Another issue was with the News API feed, which had multiple failed requests and was not pulling through. The problem, I eventually figured out, was incorrect formatting and mapping of the data, which prevented it from loading properly. Additionally, I wasn't assigning keys properly.
Accomplishments that I'm proud of
Some of the accomplishments I am proud of include creating the algorithm to calculate risk level based on several data points, such as the population of the residing city, the number of users recorded in the database in that city, and the density of the residing city. I am also proud of creating an interface that maps location data to a backend, and of implementing my first multi-endpoint API in a React Native app.
What I learned
I learned a lot of new things during this project. I learned how to implement react-native-maps, how to implement some basic functionality such as WebView and deep linking (allowing me to access certain websites for resources in my prep resources tab), and how to troubleshoot under tight time constraints. Overall, the time constraint helped me work more efficiently and prioritize.
What's next for In Case of an Emergency
In the future, I hope to host this algorithm on an API endpoint rather than in the app itself. Additionally, I would like to build my own API endpoint for the resources, since I am using a website WebView at the moment.
I also would like to create more features in my backend such as alerting the app that you will be going somewhere to adjust the algorithm accordingly and have a lot more user input. I look forward to that in the future and hope this can help the overall community in such times.
Built With
algorithm
api
firebase
google
javascript
native
news
node.js
react
react-native
react-native-webview
Try it out
github.com | ICE (In case of Community Emergency) | The ICE App, helping you stay cool during times of emergency | ['Om Joshi'] | ['1st Place Continuation Hack'] | ['algorithm', 'api', 'firebase', 'google', 'javascript', 'native', 'news', 'node.js', 'react', 'react-native', 'react-native-webview'] | 17 |
10,491 | https://devpost.com/software/covidwebscraping-project-91d208 | Inspiration
I want everyone to understand that COVID-19 is not a joke; maybe if they see that there are people around them being affected, they will understand.
What it does
The user can search for a country or state and the program will print out COVID-19 stats about it.
How I built it
I used IDLE, and I used YouTube to get started learning bs4 (Beautiful Soup).
Challenges I ran into
I was unsure of how to organize the data, but I settled on inserting the web-scraped data into a SQLite3 database.
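A minimal sketch of that approach with Python's built-in sqlite3 module; the table and column names here are ours for illustration, not the project's actual schema:

```python
import sqlite3

def save_stats(rows, db_path=":memory:"):
    """Insert (region, cases, deaths) tuples scraped from the page into SQLite."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS covid_stats (
                        region TEXT PRIMARY KEY,
                        cases INTEGER,
                        deaths INTEGER)""")
    # INSERT OR REPLACE lets a re-scrape refresh stale numbers in place
    conn.executemany(
        "INSERT OR REPLACE INTO covid_stats VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn

def lookup(conn, region):
    """Return the stats row for a region, or None if it was never scraped."""
    cur = conn.execute(
        "SELECT region, cases, deaths FROM covid_stats WHERE region = ?", (region,))
    return cur.fetchone()
```

Parameterized queries (the `?` placeholders) also keep user-typed region names from breaking the SQL.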
Accomplishments that I'm proud of
This is the most complicated program I have made.
What I learned
I am now more sure of my ability to webscrape.
What's next for covidwebscraping project
I can insert more tables into the database and project those too
Built With
beautiful-soup
python
sqlite
Try it out
www.filefactory.com | covidwebscraping project | covid stats | ['Rachel Ding'] | [] | ['beautiful-soup', 'python', 'sqlite'] | 18 |
10,491 | https://devpost.com/software/miia-medical-intelligence-applied | App screens for miia
Overview
Here are some quick links to some of the resources we developed while creating our project:
💡 • Website
📐 • Wireframe
📱 • Prototype
📕 • Documentation
Inspiration
As our population ages, we will begin to see many multimorbidities. The aging population will have higher rates of diabetes, hypertension, and other chronic ailments. Mobile health (mHealth) platforms using smartphones have proven effective for monitoring blood pressure, glucose, and other health-related symptoms. However, applications are not always accessible for the elderly population. Finger sensitivity and mobility can be an obstacle for the elderly, as they impair their ability to interact with apps. Features such as larger font size, high contrast, and text-to-speech functionality are often neglected in favor of modern design trends intended to appeal to younger audiences.
We designed our app, miia (Medical Intelligence Applied), to be accessible and usable by most seniors. Miia is an application that will help track and manage health conditions for the elderly population. For instance, we implemented a chatbot function to help seniors input their vital signs. The chatbot can be made to speak aloud, while the senior can use their voice, which is then converted to text. The chatbot can also ask questions to monitor symptoms and mood to screen for infection or depression, respectively. Furthermore, our app will track mobility and activity functions of our users by drawing data from the built-in accelerometer, gyroscope, and other smartphone sensors. This will help us predict activity level and potentially prevent frailty and traumatic falls among seniors.
How to use miia
Miia can be used by visiting
https://miia.me/
and signing in with Gmail or by creating a new account. Once you've logged into miia, you're greeted by the main dashboard, which provides an overview of your profile along with several different tabs. Here users can chat with miia, sync wearables, and receive diagnostic reports from health checkups. Current functionality of the application is limited to conducting conversations with the chatbot and completing facial recognition scans that detect mood and BMI.
Nonetheless, our current Figma prototype serves as a better representation of the app's final functionality and design.
In contrast to the web application the prototype is developed for mobile devices to better serve the elderly through prioritizing convenience and mobility. The prototype itself is fully interactive as users have the ability to click, scroll and drag through both caregiver and patient interfaces.
What it does
The system leverages AI technology to analyze data collected from facial recognition, speech recognition, wearable devices and/or IoT on a daily basis, and alerts the caregivers if there are any identified risks. The platform also provides a way to facilitate communication between caregivers and care recipients, while aiding with health management to alleviate caregiver stress.
Main features
Health data collection
We ensure the health data collection process is easy to follow by having the whole health checkup process guided by our AI chatbot miia, which includes the following:
Facial recognition - facial image taken for analysis of cardiovascular disease risks, emotions, BMI, etc.
Speech recognition - speech recorded and analyzed for emotions and mood
AI chatbot - collect health data unavailable in facial and speech recognition/ wearable devices
Phone sensors - detection of fall
Wearable devices/sensors - measurements including but not limited to blood pressure/ heart rate/ sleeping pattern/ activity
Elderly focus design
Voice control - elderly users can choose to interact with chatbot by voice or text
AI Chatbot to stimulate human interactions
Enlarged text and other accessibility features
Reminder system - visual and sound alerts can be snoozed until the elderly login and complete the health monitoring daily
Data visualization for caregivers
Data analytics dashboard - show key metric of elderly over one month
Detailed health reports of elderly - details of each health parameter
Alert system for identified issues - caregivers can set threshold values according to elderly's condition; red warning symbols and notification pop up when value above/ below normal
App Guide
Caregiver
Signs up in the app and makes a profile for both themselves and their care recipient.
After choosing the caregiver option, they will set up an account with their email and phone number, and set a password.
Then, the caregiver will add the patient’s name and phone number.
They can then add the pre-existing medical conditions of their care recipient. In this case, the preset conditions are common chronic diseases but there is also the option to add more conditions and background information.
The caregiver can choose important metrics to monitor for certain chronic conditions, such as blood sugar level for diabetes, or mood for depression.
After adding the background information for the patient, a unique pin will be generated for connecting the caregiver with the care recipient.
A confirmation screen will also show the patient’s conditions and metrics to follow.
If there are multiple care recipients, the caregiver can add another patient.
On a daily basis, caregivers log in and monitor health of care recipients, the most important metric on display. The red notification symbol indicates a warning that requires caregivers to follow up on a metric.
In the patient profile, the caregiver can change or add more metrics to monitor, chat with the patients, or edit the patient profiles.
Elderly/ Care Recipient
The care recipient receives a text message from the caregiver with his/her unique pin. If a senior is unfamiliar with technology, the caregiver can help him/her set up the app.
Choose to sign up as a patient, and enter the pin received.
Our chatbot guides seniors through the whole health checkup process on a daily basis.
The patient can choose to text or speak to the chatbot.
Miia will proceed to initiate the process of health check by taking their facial image
Miia will first ask a few questions regarding their physical and mental health, such as body temperature, blood pressure, or mood and the senior can input manually or tell miia their measurements. For voice inputs, Miia will repeat the measurement to verify.
Depending on the needs of the senior and caregiver, the chatbot can also ask about other metrics, give reminders, or chat with the senior.
After health check, users will be redirected to a health overview which summarizes results for the senior.
Key metrics of seniors are shown in measurements. If the user is interested in knowing more of a particular metric, they can click the metric and look into the details.
If seniors have any concerns, they can contact their caregivers using the in-app chat function.
If desired, they can also choose to add or remove wearable devices and sensors.
Lastly, they can check their profile, which shows personal information, settings and caregiver information.
How we built it
Software
• Frontend Dev using Angular, FireBase Authentication.
• Node libraries like Chart.js, PWAs, Bootstrap, Material Design, etc.
• Hosting and CI/CD setups using Netlify, Heroku, and GitHub.
• Domain and SSL certificate from Namecheap and Let's Encrypt.
• SQL DB connected to the app with Restful API.
• Google Colab notebooks to execute heavy GPU workloads and ML Algorithms.
• Invision for developing WireFrames
• Figma for creating final prototype
• Slack for Internal Communications & Google Drive for Documents, Images, etc.
Machine learning
We collected datasets from various sources such as Kaggle, JAFFE, and IMFDB and trained machine learning models for several tasks: identifying emotions from facial expressions, identifying BMI from face images, identifying emotions from speech, and detecting falls from phone sensors. Determination of cardiovascular disease risk is also achieved by reviewing cohort studies and results in medical journals. After training the models, we deployed a demo of the emotion prediction model, BMI prediction model, and cardiovascular disease risk using the Heroku service.
Challenges we ran into
It is difficult to find quality labelled data for training machine learning models, which in turn affects the accuracy rate. Given that this is a remote hackathon, we were also unable to test connection with wearables. While there is flexibility to use the app without external sensors, we plan to integrate with multiple wearable devices and platforms in the future.
Market Evaluation
To facilitate the adoption of our technology, we plan to target caregivers (B2B) as our primary target demographic. Currently there are 34 million caregivers for the elderly in the United States, with 5 million of them being long distance caregivers. Our goal is to introduce our product, while increasing our adoption rate, and thus solidify our application as an essential tool for caregivers worldwide.
Currently miia's distribution channels will be limited to the mobile app stores on both Android and iOS devices. In later iterations miia will transition to being available as a web application for desktops.
Our go-to-market strategy during distribution will include a combination of freemium and viral approaches. This in-turn provides us with financial incentives for early adopters, who are able to take advantage of the 2-month free trial while having the ability to subscribe later. We’d also like to introduce a referral system where users are able to promote our application while being rewarded for successful signups. In addition to this, we aim to partner with health organizations (clinics/ hospital/ national health insurance) alongside deploying through-the-line marketing tactics in order to enhance customer reach and maximize customer acquisition.
What's next for miia!
App Development
Health data collection via speech recognition and wearables
Data analytics dashboard
In-app chat
Wearables
Water-proof watch for seniors
Water-proof necklace for seniors
Recruitment
We are planning to bring the project to the next stage. Shoot us a message if you're interested!
Built With
angular.js
cicd
figma
firebase
github
invision
ml
netlify
pwa
python
Try it out
www.figma.com
github.com
github.com
emotionpredict.herokuapp.com
bot.dialogflow.com | miia - medical intelligence applied | Digital health solution for elderly and caregivers | ['Ava Chan', 'Rohail Khan', 'Alice Tang', 'Billy Zeng'] | ['Best Designed Hack'] | ['angular.js', 'cicd', 'figma', 'firebase', 'github', 'invision', 'ml', 'netlify', 'pwa', 'python'] | 19 |
10,492 | https://devpost.com/software/fitness-voice | Welcome and help
Home
Doing sport
Stats
Yoga sample exercise
Inspiration
"Fitness Voice" was created to help all people who want to do sports at home, who now don't go to the gym for comfort or health. Normally when you're doing an exercise, you can't stop to touch the mobile screen or the computer screen, that's why I thought about using the voice in this project.
In addition, "Fitness voice" checks your posture with artificial intelligence, to help you count the repetitions (example: surf training) and to help you do the posture well (example: yoga).
Furthermore, "Fitness Voice" has been designed with privacy in mind from the beginning: webcam images aren't sent to the internet and voice is only sent after the wake-word "coach" has been detected offline.
Finally, the app allows you to change the voice of the coach to be more customizable towards the user and so that the user is aware that technology currently allows doing this with the computer voice.
What it does
"Fitness Voice" is a fully voice-controllable webapp that helps you exercise. It allows:
control the entire web application with your voice.
choose the exercise you want to do: gym, surfing (arms) and yoga figure.
helps you count the repetitions of the exercises, it uses deeplearning body pose recognition.
helps you know if the yoga posture is correct.
shows your total statistics by exercise.
has a privacy-based approach. The web application is waiting offline to hear the wake-word "coach", and only when it hears that word then it sends the following words to wit.ai to do NLP. That is, this application will not mistakenly send conversations to the Internet, unless it hears the word "coach" first.
In addition, the application allows you to change the voice of the coach. These voices have been created with deep learning. Change the coach's voice, for example, and then do a surfing exercise.
How I built it
First I've used wit.ai to recognize the utterances of the user. I have trained the wit.ai model to understand these different intents:
Lets go
go home
help me
I want to train {gym, surfing, yoga}
I want to change voice to {Bill, Her, Morgan, Joker ...}
show me the stats
Afterwards, I used tensorflowjs to detect the word "coach" offline. To do this, I have trained a model with different pronunciations of the word "coach".
Then I used again the tensorflowjs library and the pretrained "posenet" model to detect the body in the webcam image, in realtime. Then I've tried to detect specific positions of the body and I've succeeded by looking at the relative position of the points of the body.
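The relative-position idea can be sketched in a few lines. This is a Python sketch for illustration (the project itself runs PoseNet in tensorflow.js); it assumes PoseNet-style keypoints, and the arms-up rule for the surfing exercise is our invention:

```python
def arms_raised(keypoints, min_confidence=0.5):
    """Return True when both wrists sit above their shoulders.

    `keypoints` maps PoseNet part names to dicts with x, y (pixels, y grows
    downward) and a detection score, e.g.
    {"leftWrist": {"x": 100, "y": 50, "score": 0.9}, ...}
    """
    needed = ["leftWrist", "rightWrist", "leftShoulder", "rightShoulder"]
    if any(keypoints.get(p, {}).get("score", 0) < min_confidence for p in needed):
        return False  # not confident enough in the detection
    return (keypoints["leftWrist"]["y"] < keypoints["leftShoulder"]["y"] and
            keypoints["rightWrist"]["y"] < keypoints["rightShoulder"]["y"])

def count_reps(frames):
    """Count one repetition each time the arms go from down to up."""
    reps, was_up = 0, False
    for kp in frames:
        up = arms_raised(kp)
        if up and not was_up:
            reps += 1
        was_up = up
    return reps
```

Comparing keypoints to each other, rather than to absolute pixel positions, is what makes the check work regardless of where the user stands in the frame.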
After that, I used the "Real-Time-Voice-Cloning" library to modify the computer voice to allow the user to choose more familiar voices for the coach. This process wasn't totally realtime, so finally I've taken the most used sentences by the web application and I've cloned them for the 6 coach voices (clone voices created: Bill, Morgan, Morpheus, Her, Yellow and Joker).
Finally, I've built the web application "Fitness Voice" that brings all this together.
And I've added some fun details such as clapping sounds at the end of an exercise and the drawing of the sixpack on the webcam when the user reaches 10 repetitions in the surfing exercise.
To indicate to the user when the web is listening, I've put the microphone animation. If the microphone animation is stopped, then "Fitness voice" is not listening. But if you say the wake-word "coach", then the microphone will start the animation until you finish saying the voice command, which will then be sent to wit.ai to "translate" it into something than the web application can understand.
Also, to help the user discover what voice commands to say, "Fitness Voice" periodically suggests some. These suggestions can be read on screen (below the microphone animation) and suggestions can also be heard after some voice responses.
Challenges I ran into
The biggest challenge has been to synchronize the offline operation of the wake-word "coach" with the online operation of wit.ai recognition, with the main objective of guaranteeing the privacy of users. Now the code is very secure and only voice commands that are spoken after the word "coach" are sent to the internet. In addition to being more secure and offering greater privacy, this solution is also more efficient, because the number of requests to the wit.ai api is reduced.
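The gating logic described, dropping everything locally until the wake-word and then buffering a single command, can be sketched like this. Python for illustration (the app itself is JavaScript in the browser), and `send_to_wit` is a stand-in for the real wit.ai call:

```python
class WakeWordGate:
    """Only forward transcripts to the cloud after the wake-word is heard.

    The offline recognizer feeds words in; everything before "coach" is
    dropped locally, and the words after it are buffered until the utterance
    ends, then handed to `send_to_wit`.
    """
    def __init__(self, send_to_wit, wake_word="coach"):
        self.send_to_wit = send_to_wit
        self.wake_word = wake_word
        self.listening = False
        self.buffer = []

    def on_word(self, word):
        if not self.listening:
            if word.lower() == self.wake_word:
                self.listening = True  # start the mic animation here
            return
        self.buffer.append(word)

    def on_utterance_end(self):
        """Called when the speaker pauses; flush the command, go back offline."""
        if self.listening and self.buffer:
            self.send_to_wit(" ".join(self.buffer))
        self.listening = False
        self.buffer = []
```

Because only the buffered words ever reach `send_to_wit`, background conversation stays on the device, and the number of API requests drops as well.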
It has also been a challenge to work with other libraries that I hadn't used before, such as: offline speech recognition with tensorflow, posenet with tensorflow or Real-Time-Voice-Cloning for voice modification.
I had not worked with wit.ai before either, but it was easy.
Accomplishments that I'm proud of
I am very proud of the product created, which is a totally usable webapp by voice, which is also useful (it is not comfortable to use a web/app with your hands while you are doing sport) and which is designed based on user privacy (using wake-word offline). Also, I'm very proud of all the technologies that I have used and that I didn't know before this project (wit.ai, posenet, speech recognition of tensorflow, clone voice, etc ...).
What I learned
I've learned that a web application can be made that is totally controlled by voice, and that the user and the web can have a conversation with natural language ("I want to train surfing", "show me the stats", "help me ", etc...). I have also learned to use wit.ai and other libraries that I didn't know before and that are very useful together in this project.
What's next for Fitness Voice
In the future, I want:
Adding more different exercises.
Detecting more different postures, such as yoga postures.
Improving the statistics (so that they are not only global, but also statistics of progression and ranking among users).
I want to continue training the wit.ai model. I have seen that the wit panel shows the phrases that the user says. I think this is very important to keep improving the application. I will periodically check this list to see what unsupported utterances users use. Thus, I will improve the model and add new features.
Built With
chart.js
html5
javascript
tensorflow
wit
wit.ai
Try it out
javiercampos.es | Fitness Voice | AI voice-controlled trainer in your web browser, using NLP (wit.ai), body pose recognition and voice clone, with a privacy-based approach. | ['Javier Campos'] | ['First Place'] | ['chart.js', 'html5', 'javascript', 'tensorflow', 'wit', 'wit.ai'] | 0 |
10,492 | https://devpost.com/software/flo-kzvgmy | Login page
Dashboard
on going meeting
End of meeting summary download
Calendar
Contacts page
Profile page
Wit.ai App ID
938304383356549
Inspiration
Our inspiration came from trying different video conferencing applications ever since most of the world went remote due to COVID-19, experiencing the features each one has to offer, and thinking about what we could add to make meetings easier: in our case, eliminating the need for secretaries while also accommodating people with different schedules. We realized that instead of having someone watch a whole meeting recording, they could choose to read everything or just get an idea of what they missed, and if someone is busy with other activities, they can simply run meetings with voice commands.
What it does
Our application allows users to use voice commands by leveraging wit.ai speech feature, they can create group and individual video meetings either instantly or by scheduling it. At the end of a meeting, our application generates a meeting transcript as well as a summary of the whole meeting.
How we built it
We built the application using python flask for the back-end and react for the front-end and postgresql for the database. Our design was based on using WebRTC to implement multiple peer connection calls. We also wanted to make our application interactive and more user friendly by making our application accessible using natural speech voice commands and for this we used Wit.ai by Facebook.
Challenges we ran into
Learning wit.ai for the first time was a bit of a challenge for all of us and trying to figure out the best way to implement it in a product that is relevant for us and is also bound to be widely accepted by the public. Another challenge was having to implement certain features that were not readily supported by Wit.ai but we did come up with methods to resolve the issues.
Accomplishments that we're proud of
We are proud of the short space of time, two weeks, that we were able to work on our project from brainstorming ideas, coming up with one that has potential to be expanded further and building the application while working remotely.
What we learned
We learned the basics of using Wit.ai for voice recognition, transcribing speech and running intents
What's next for Flo
Our goal is to further train our application to be more effective when running voice commands as well as listening to conversations during the meeting so that we can produce accurate transcripts and summaries. We also plan on adding the annotations in multiple languages during a meeting so that our application can fully accommodate and connect different users.
Built With
flask
postgresql
python
react
socket.io
sqlalchemy
wit.ai
Try it out
fierce-wildwood-03256.herokuapp.com | Flo | Automate meeting notes and summarization with the help of our AI assistant, Julia | ['Shammah Matasva', 'Sandesh Chinchole', 'Swapnil Madhavi', 'Daniel Santos'] | ['Second Place'] | ['flask', 'postgresql', 'python', 'react', 'socket.io', 'sqlalchemy', 'wit.ai'] | 1 |
10,492 | https://devpost.com/software/dance-with-ar | Choose Genre
Starting Screen
AR Scene
Inspiration
Dancing is one of the cheapest and most fun ways of exercising. I wanted to build an app that would close down the barriers that would prevent people from enjoying dancing. This app acts as a middle ground between video dance tutorials and in-person class: it provides a cheap and accessible alternative to an in-person class while also maintaining the interactivity between the teacher and the dancers.
What it does
Dancing with AR allows the user to select a music genre and then creates an augmented reality dance teacher that would teach the user how to follow the dance moves. The user can then interact with the dance teacher simply by talking to the app: ask the dance teacher to start dancing, slow down, stop, or turn around, and the teacher would do it for you!
How I built it
I used Unity for building the scenes and for putting everything together. After training and building the intents in Wit.ai, I used Wit3D to take the user voice input and to convert it into text. After fetching the Wit.ai response, I parsed it to retrieve the user intent and to play the dancer animation accordingly.
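Parsing the fetched Wit.ai response and choosing an animation might look like the following Python sketch (the project itself is C# in Unity); the intent names and the intent-to-animation table are illustrative, and the response shape assumed is the Wit.ai message API's `{"intents": [{"name": ..., "confidence": ...}]}`:

```python
def pick_animation(wit_response, threshold=0.7):
    """Map the top wit.ai intent to a dancer animation name."""
    animations = {
        "start_dancing": "Dance",
        "slow_down": "DanceSlow",
        "stop": "Idle",
        "turn_around": "Turn",
    }
    intents = wit_response.get("intents", [])
    if not intents or intents[0]["confidence"] < threshold:
        return "Idle"  # fall back when the model isn't confident
    return animations.get(intents[0]["name"], "Idle")
```

Thresholding on confidence keeps a half-heard phrase from sending the AR teacher into the wrong move.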
What's next for Dance With AR
The current version only has a basic dance move for each genre. Inside each genre, I would like to add several dances with several dance moves so that the user can learn the whole dance using the app. It would be great if I can incorporate motion capture to make trendy, up-to-date dances available. Another improvement would be to make the interaction between the user and the character more conversational by adding a speech response to the AR character.
Built With
c#
mixamo
unity
wit.ai
Try it out
github.com | Dance With AR | Exercise the fun way with "Dance With AR"! Choose your favorite genre and summon an AR dance teacher that will listen to your commands and teach you the dance moves. | ['Curie Kim'] | ['Third Place'] | ['c#', 'mixamo', 'unity', 'wit.ai'] | 2 |
10,492 | https://devpost.com/software/journai | // Inspiration
These brazen and often confusing times have led to the most bizarre conversations pertaining to what-if scenarios. Even media agencies and news channels have delivered information in the most bemusing ways. The narrative keeps changing almost daily, and all our thoughts have become even more staggered. This prompted us as a team to build an app that would help us organise our most random thoughts, the most vital information, any reminders we might have, or any interesting anecdote we just remembered because of the never-ending deja vus of our childhood after coming back to stay put with our family. In this day and age, using pen and paper to do so just feels too cumbersome, and the services in the market are either too restrictive in nature or generally clouded with doubts about absolute privacy. As soon as we saw the potential of what wit.ai can do with a small training set, it was clear to everybody on our team that a smarter journal was what we were going to make.
// What it does
JournAI seems like any other mundane note-keeping app in the market, capable of storing journal entries in both voice and text form. However, leveraging the power of wit.ai's API to extract the meaning of sentences at a coarse level helped us propel our application into something more. JournAI breaks down the meaning of each note stored by the user, all of them unassorted, but it provides the user with a very powerful fuzzy search, ranking the notes most relevant to the query made by the user. The USP of the product lies in the fact that besides accessing notes the usual way, displayed according to time of creation, relevant notes are found and sorted quite quickly even when the note contents are inconsistent.
// How we build it
The application was built as an Android Application, made using Kotlin as the base language, supporting SDK 23 and higher. The database used for the backend, is stored locally, since the expected size of the text is minimal, and due to the need for privacy, the entire database is stored locally in the private folder of the app.
// Challenges we ran into
We took part in this competition to learn something new, and with this vision, we decided not to go ahead with tools we were already familiar with. No one on the team knew how to code in Kotlin, and that served as the gateway to many challenges with regard to coding in such a powerful, albeit comparatively less documented, language for Android application development. Another major hurdle was the never-ending debate of what to keep and what not to keep among the user-defined entities and the possible intents of each sentence uttered. Besides, journal entries can be arbitrarily long, whilst the restriction on API queries made to wit.ai does not allow more than 280 characters in a single go. This disparity created by differently sized journal entries made designing the scheme to store all entries tougher.
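One way to handle the 280-character limit is to split an entry on sentence boundaries before querying. A Python sketch of such a splitter (the app itself is Kotlin; the boundary heuristics are ours):

```python
def chunk_entry(text, limit=280):
    """Split a journal entry into pieces the wit.ai API will accept.

    Prefer breaking on sentence ends, then on spaces, and only cut a word
    when a single token exceeds the limit.
    """
    chunks = []
    rest = text.strip()
    while len(rest) > limit:
        window = rest[:limit]
        cut = max(window.rfind(". "), window.rfind("! "), window.rfind("? "))
        if cut != -1:
            cut += 1          # keep the punctuation with the chunk
        else:
            cut = window.rfind(" ")
            if cut == -1:
                cut = limit   # one giant token: hard cut
        chunks.append(rest[:cut].strip())
        rest = rest[cut:].strip()
    if rest:
        chunks.append(rest)
    return chunks
```

Each chunk can then be sent as a separate wit.ai request and the extracted meanings merged back under the one journal entry.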
// Accomplishments we are proud of
Not so much an accomplishment, but this application is one of the first that actually got made the way we intended it to be, and is therefore something that will come in handy in our day-to-day proceedings, something we hope follows suit with people who come across this application.
// What we learned
This entire experience has been rewarding. We learned a lot, starting with an introduction to an amazingly simple but extremely powerful platform that can be leveraged to make applications that would earlier have taken far more effort, time, and skill. Besides, we got to learn a new language and added to our experience of coding Android applications, which was almost negligible before.
// What's next for JournAI
When we started out, the idea was to make JournAI a Spotify-like equivalent for searching across our journal entries: giving us daily thought-of-the-day suggestions by gauging the overall mood we are in, inferred from the kind of journal entries we have been writing, and organizing our journal entries in a way that allows cross-connection to other apps that help us set reminders. There is also an intention to expand the application to include a reminiscence column, showcasing old anecdotes written by the user, again depending on the mood gauged from the kind of entries being made. Finally, we also wanted to incorporate a vision component so as to include figures and enable even more powerful search options. We have other disconnected, vague future plans as well, and will probably be using JournAI itself to organise all of our plans into a coherent roadmap for improvements.
Built With
android
Try it out
github.com | JournAI | Ever had a random train of thought pop up, but forgotten next moment. Never again, with JournAI you can revisit the old thoughts with a quick search | ['Damodar Nayak', 'Yash Raj Sarrof'] | [] | ['android'] | 3 |
10,492 | https://devpost.com/software/knowledgebot-ai | When you chat with the KnowledgeBot, it looks like this! Here are search results offered, as well as a feedback button at the top!
Inspiration
When I was learning about researching better using keywords, something clicked. Instead of just using basic searching tricks to get more out of google's content-browsing algorithm, what if you could use NLP to find very specific articles using synonyms?
What it does
KnowledgeBot uses NLP to convert strings of text inputted by the user through Messenger into a search query, giving the user informative research suggestions. It calls a thesaurus API in order to have more items for the search query, making the searches even more specific, and offers features such as finding related topics to explore, done with the SERP API.
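The synonym step can be sketched as follows (an illustrative sketch: `expandQuery` and the inlined synonym map are stand-ins, not the project's real dictionaryapi.com response shape):

```javascript
// Expand the user's keywords with thesaurus synonyms to build a richer
// search query. In the real bot the synonyms would come from the thesaurus
// API; here a tiny inlined map stands in for that call.
function expandQuery(keywords, synonymsFor) {
  const terms = new Set();
  for (const word of keywords) {
    terms.add(word.toLowerCase());
    for (const syn of synonymsFor(word)) terms.add(syn.toLowerCase());
  }
  return [...terms].join(" OR ");
}

// Stand-in for the thesaurus API lookup.
const fakeThesaurus = { fast: ["quick", "rapid"], car: ["automobile"] };
const lookup = (w) => fakeThesaurus[w.toLowerCase()] || [];

// expandQuery(["fast", "car"], lookup)
//   → "fast OR quick OR rapid OR car OR automobile"
```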
How I built it
I built the webhook using Heroku Server Hosting:
link
I used node.js as a framework for creating the application, using the following npm packages:
request
express
body_parser
axios
node-wit
I used the Two Following APIs:
Thesaurus API:
https://dictionaryapi.com/
Scale Serp:
link
I also used Wit.ai, via the npm package 'node-wit'
Challenges I ran into
One of the biggest challenges I had was creating the webhook. I had no experience creating servers with express, or with servers at all, and it took a while of looking through a great many tutorials before even creating my first curl request.
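The part of the webhook that usually trips people up is Messenger's GET verification handshake. Pulled out as a pure function (a sketch of the documented handshake, independent of express), it looks like this:

```javascript
// Messenger webhook verification: Facebook sends a GET request with
// hub.mode, hub.verify_token and hub.challenge query parameters. If the
// token matches ours, we must echo back hub.challenge; otherwise reject.
function verifyWebhook(query, myToken) {
  if (query["hub.mode"] === "subscribe" &&
      query["hub.verify_token"] === myToken) {
    return { status: 200, body: query["hub.challenge"] };
  }
  return { status: 403, body: "Forbidden" };
}

// In an express route this becomes roughly:
// app.get("/webhook", (req, res) => {
//   const r = verifyWebhook(req.query, process.env.VERIFY_TOKEN);
//   res.status(r.status).send(r.body);
// });
```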
Another major challenge was getting the API get requests and JSON to format correctly. It was especially frustrating because of how dynamic the APIs were, and with different requests, they had completely different parameters, so adapting was quite difficult.
Accomplishments that I'm proud of
I am especially proud of learning how to use APIs and webhooks, as they were new concepts for me. I was also very proud of creating a full-scale project using a language I had learned very recently. I also take pride in the fact that I learned how to utilize node.js, as well as how to use a Natural Language Processor.
What I learned
I learned many things, but most of all I learned how to use API and webhook connections to create a user experience that solves a problem. Along with that I picked up a few new Javascript programming tips, as well as more insight into how to use pre-made front-end applications (like Messenger) to create your own applications that use backend architectures.
What's next for KnowledgeBot.ai
Adding more resources (e.g. subject-based dictionaries)
Adding more subject classifications, such as classroom topics
Adding ability to recognize images of content and analyze
Adding a teacher mode, where they can edit parameters to keep students on task and still learn from a specific part of the web
Finishing features currently in development (e.g. study guide creator)
Built With
heroku
javascript
json
messenger
node.js
serp
sublime-text
thesaurus
wit.ai
Try it out
www.facebook.com
github.com | KnowledgeBot.ai | KnowledgeBot.ai is a convient, easy to use chatbot, that allows anybody to access any internet information. This allows people to get various detailed resources to accelerate their learning, easily. | ['Ankit Nakhawa'] | [] | ['heroku', 'javascript', 'json', 'messenger', 'node.js', 'serp', 'sublime-text', 'thesaurus', 'wit.ai'] | 4 |
10,492 | https://devpost.com/software/ai-snake-game | Basic Initial Look
Post collision with wall
Inspiration
Allow differently-abled individuals to play snake game without requiring to use any extra hardware device
Allow minimal touch interactions in pandemic situations
What it does
Allows you to play the popular snake game using voice commands, namely "up", "down", "left", "right", "start", and "exit".
How I built it
I built it using the Python programming language, incorporating Wit.ai for voice interactions (speech-to-text)
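The command handling reduces to a tiny lookup. Here is a sketch of the idea (in JavaScript for illustration, though the game itself is Python/Pygame): each recognized intent maps to a direction vector, and a 180-degree reversal onto the snake's own body is ignored.

```javascript
// Map a wit.ai voice intent to a snake direction, ignoring direct
// reversals (which would make the snake run into itself).
const DIRS = {
  up: [0, -1], down: [0, 1], left: [-1, 0], right: [1, 0],
};

function nextDirection(intent, current) {
  const wanted = DIRS[intent];
  if (!wanted) return current;                  // "start"/"exit"/unknown: no change
  const [wx, wy] = wanted, [cx, cy] = current;
  if (wx === -cx && wy === -cy) return current; // ignore direct reversal
  return wanted;
}

// nextDirection("left", DIRS.right) → [1, 0]  (reversal ignored)
// nextDirection("up", DIRS.right)   → [0, -1]
```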
Challenges I ran into
Integrating voice commands
Accomplishments that I'm proud of
This was the first voice game I ever made
What I learned
Basics of Game development using Pygame
Usage of Wit.ai
What's next for AI Snake Game
Making voice commands non-blocking
Training the model only on required commands
Adding pause functionality
Built With
python
wit.ai
Try it out
github.com | AI Snake Game | AI Snake Game is an NLP version of the popular Snake game where the snake is controlled to eat food blocks using voice commands instead of the traditional way via keystrokes. | [] | [] | ['python', 'wit.ai'] | 5 |
10,492 | https://devpost.com/software/shontaefitnessproto | Tap on Mic Icon
Say your voice note
The voice note will be sent via SMS
Inspiration
Our inspiration came from wanting to build a simple to use android application that we might find useful. There are many times at the grocery store when you have to write down a list or text it to yourself. How wonderful it would be to just say the list out loud and have a written record instantly available at the store!
What it does
The user simply presses the microphone and speaks their voice note. The app converts the voice note into text and sends an SMS to the phone number associated with the device.
How we built it
This was built in Android Studio using Kotlin as the language of choice. We used the built-in voice capture technology within Android as well as SMS.
Challenges we ran into
Having never used the Wit.ai platform, there was a bit of confusion about how to use the API in a voice-enabled application, but we figured it out. Also, there is quite a bit of deprecated code in Android references since the move to Kotlin, so finding useful resources was a bit tricky.
Accomplishments that we're proud of
Finishing the application
Making something that works and is actually useful
What we learned
How to capture voice audio in an Android Device --much simpler than we thought.
What's next for SMSNotes
Perhaps, making it a bit prettier :)
Built With
android
android-studio
kotlin
wit
wit.ai
Try it out
github.com | SMSNotes | A quick way to send yourself SMS notes, without typing it. | ['Summer Gautier'] | [] | ['android', 'android-studio', 'kotlin', 'wit', 'wit.ai'] | 6 |
10,492 | https://devpost.com/software/lighthouse-health-dialog | Inspiration
LIGHTHOUSE delivers coordinated care service for chronic conditions to Medicare patients. With the COVID lockdown, gaps in care are increasingly dangerous. It also turns out Grandma is on facebook, knows how to use messenger (mostly) and is open to getting a portal. The quality of care is only as good as the quality of the conversation we can host, and so we did this hackathon to build conversation muscles and generate insight into how real seniors use bots.
What it does
LIGHTHOUSE helps seniors with chronic conditions build core skills in diet, physical activity, taking their meds and writing stuff down. With wit.ai's help, we started off with remote data monitoring, education plan delivery, digital reference and guided self care.
How I built it
A little AWS Lambda. Some session management. Ten seniors who were kind enough to sit down and use it in order to illustrate for me the necessity of "flexible" dialog management and the need to free the experience from locked-down paths.
Challenges I ran into
User testing -- my most common phrase from my Golden Years test panel was "why would you ask for it that way?", followed closely by "you did it the same way 8 times in a row... why different this time?"
I think I didn't get a good enough education on wit.ai before starting -- I didn't understand the value of post-mortem processing of real requests to fuel the learning engine, and I didn't find the "synonyms" section until after we had built our synonym rationalizer.
I am still struggling to solve the handling of recording "log my blood pressure of 110 over 80" -- my boy wit.ai is a champ at pulling out {blood pressure} as a {measure} and the 110 as a {wit$number}, but I can't seem to break its back on treating "over 80" as a "wit$current". I need some lunch, then I'll beat up on it some more.
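Until the model learns the pattern, a deterministic fallback can rescue the systolic/diastolic pair before (or after) the wit.ai call. A sketch (the field names here are hypothetical, not LIGHTHOUSE's schema):

```javascript
// Fallback parser for utterances like "log my blood pressure of 110 over 80".
// wit.ai reliably pulls the first number; this regex recovers both values,
// so the "over 80" part is never misread as a separate entity.
function parseBloodPressure(text) {
  const m = text.match(/(\d{2,3})\s*(?:over|\/)\s*(\d{2,3})/i);
  if (!m) return null;
  return { systolic: Number(m[1]), diastolic: Number(m[2]) };
}

// parseBloodPressure("log my blood pressure of 110 over 80")
//   → { systolic: 110, diastolic: 80 }
```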
There's something in the way I'm trying to parse the wit.ai part of data-source (in particular where keys are formed like "measurement:measurement") that is breaking my flow. I solved it with inelegant brute force and would like to revisit it for some beauty.
Accomplishments that I'm proud of
We built a good hierarchy for routing prioritization that ended up being pretty fast and flexible:
// HIERARCHY OF ROUTING:
// 1. quickreply
// 2. Unfulfilled INTENT with new information (requires session data)
// 3. WIT intent
// a. priorities
// b. record
// c. learn
// d. content
// e. connect TO PHYSICIAN/PORTAL experience -- XX NOT AVAILABLE FOR DEVPOST SUBMISSION
// f. reference
// g. guide
// h. reports
// 4. small talk handling
// 5. free text handling
// 6. gently manage any conversation not yet directed and LOG the failure data
// 7. Wrap up with any priorities LIGHTHOUSE might have -- e.g., completing a profile, setting a reminder
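In code, such a hierarchy is just an ordered list of handlers tried in turn. A simplified sketch (the handler names mirror the list above, but the implementations are stand-ins):

```javascript
// Try each handler in priority order; the first one that returns a reply
// wins. Handlers return null to pass the message down the hierarchy.
function route(message, handlers) {
  for (const handle of handlers) {
    const reply = handle(message);
    if (reply !== null) return reply;
  }
  return "Sorry, I didn't catch that.";  // step 6: log + gently manage
}

// Stand-in handlers for the hierarchy above.
const handlers = [
  (m) => m.quickreply ? `quickreply:${m.quickreply}` : null,              // 1
  (m) => (m.session && m.session.pendingIntent) ? "resume intent" : null, // 2
  (m) => m.intent ? `intent:${m.intent}` : null,                          // 3
  (m) => /^(hi|hello|thanks)\b/i.test(m.text || "") ? "small talk" : null,// 4
];

// route({ intent: "record" }, handlers) → "intent:record"
```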
What I learned
My gut reaction as to how people will ask for stuff is often wildly wrong
Dialogs look more like drunken sailors than thoroughbreds on a straight-away
I need to get better at parsing in nodejs, it feels very clunky
What's next for LIGHTHOUSE Health Dialog
A. Integration with Portal by Facebook in Q4 (bringing together VOICE and BOT)
B. More content
C. Recipes (trying to design intuitive utterances/traits/slots, etc)
D. Figuring out a scalable way to take FAILS/Error Handling and turn it into better experiences.
E. Lunch
TRY IT OUT
Let me clean out the API creds and some "build the education plan" special sauce that's in there, and I'll get it up on GitHub this week.
Built With
lambda
mysql2
node.js
rds
wit.ai | LIGHTHOUSE Health Dialog | The US healthcare system is looking at an 80K physician shortfall in the next decade and bots are going to be critical to filling those gaps in care. LIGHTHOUSE brings care to keyboard. | ['Dave Vockell'] | [] | ['lambda', 'mysql2', 'node.js', 'rds', 'wit.ai'] | 7 |
10,492 | https://devpost.com/software/robin-job-finding-assistant | Company Insights
Reminders
https://miro.com/app/board/o9J_kmQTyfw=/
Inspiration
Problem statement:
“As a job seeker I want to find a job that fits with my skills, career goals, values, and salary expectations, but the process is very stressful and overwhelming, and I need to invest too much time doing research, analyzing each job description, dealing with spam and preparing for each interview.”
Global context:
In the past months, hundreds of millions of people worldwide have lost their jobs. In the USA alone, more than 40 million Americans filed for unemployment. People across all industries have been impacted in some way either through losing their job or having their hours reduced.
Competition for jobs is higher than normal resulting in heightened emotions for everyone. On top of the obvious financial stress that comes with being unemployed or underemployed, job seekers also suffer from worse physical health, with rates of depression rising among the unemployed the longer they go without finding work. Job seekers usually become discouraged with the belief that finding a job isn’t possible and time-consuming. It is, but it will require extra patience.
Dealing emotionally with this sort of adversity is a skill few of us have been taught, and it requires building new habits in our personal lives. An article published by The New York Times affirms that creating a structure for the job hunt can reduce stress levels. Dr. Norris said that learning new skills and staying social also helps to increase efficiency and can help keep the search from bleeding into every area of our personal life.
What it does
Robin is a Facebook Messenger chatbot that assists people during their job search. Robin helps job seekers set a structure to avoid stress, provides help with relevant resources, and connects them with a community of mentors and career counselors.
How we built it
Robin uses the Messenger API with Wit.ai to build ongoing interactive conversations that help people find jobs in the USA. The user's data is stored in AWS DynamoDB, and the matching results were implemented using the Google Custom Search API. We used the user's data to custom-search Indeed, YouTube, Glassdoor, and other websites, and the API returns the results in JSON format. We also used other APIs to get Indeed reviews and company details. Moreover, the app finds and matches mentors based on the user's job preference; the recommendation can be based on the job role alone or on a combination of job role and company preference, and the function checks the data to decide which to apply.
The NLP interaction was built using Wit.ai. We created the intents and entities, then trained the app with some possible utterances like "I need to set a reminder for an interview on December 1, 2020" or "I need reviews for CVS". The app sends an error message to the user if it detects an intent without the required entity, like "I need reviews" or "I have a job interview". Some intents are generic and can work with 1, 2 or 3 entities, like "I need a software engineer job", "I need a software engineer job in Florida", or "I need a part-time software engineer job"; it works with the job role alone or with combinations, handling each case differently.
Finally, we used Messenger One-Time Notification to send reminders if the user set a reminder for an interview. The function refreshes periodically and first checks whether the user is subscribed. If the user is subscribed, it checks the reminders' dates; if a reminder's date is one day ahead of the current day, it sends a notification with some helpful resources.
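The reminder check described above can be sketched as a pure function (an illustrative sketch, with `today` passed in explicitly rather than taken from the clock; the reminder shape is a hypothetical stand-in for the DynamoDB records):

```javascript
// Return the reminders that should trigger a notification now: those whose
// interview date is exactly one day after `today` (both ISO "YYYY-MM-DD").
function dueReminders(reminders, today) {
  const next = new Date(today + "T00:00:00Z");
  next.setUTCDate(next.getUTCDate() + 1);
  const tomorrow = next.toISOString().slice(0, 10);
  return reminders.filter((r) => r.date === tomorrow);
}

// dueReminders(
//   [{ user: "u1", date: "2020-12-01" }, { user: "u2", date: "2020-12-05" }],
//   "2020-11-30"
// ) → [{ user: "u1", date: "2020-12-01" }]
```

Using the Date arithmetic (rather than string math) keeps month and year rollovers correct.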
Challenges we ran into
It is hard to get free public access to the Indeed job search API and Glassdoor. We used an alternative, the Google Custom Search API, to get results from those websites.
Accomplishments that we're proud of
We are very proud to put our efforts into a working project that can help many people during their job search. We tested our demo with various users, and all of them agreed that Robin would be instrumental not only in helping them find the right job but also in dealing with stress and anxiety, avoiding spending time doing lots of research, and providing them with valuable resources.
What we learned
We had to use the Google Custom Search API to search Indeed, Glassdoor, YouTube, and other websites, because it is hard to get access to their APIs for free.
What's next for Robin - Job finding assistant
The next step for Robin is to provide users with more sophisticated features such as Auto-emailing responses, rehearsal tools that can give feedback in terms of voice tone, engagement, clarity, and performance, as well as mental health support, especially after receiving a rejection email.
Built With
amazon-dynamodb
facebook-messenger
google-custom-search
wit.ai
Try it out
m.me | Robin - Job finding assistant | Hi, I'm Robin, I can help you to find job opportunities that fit with your background and interests, analyze job descriptions, set interview reminders, and find a mentor. | ['Iris Rodriguez', 'Khaled Abouseada'] | [] | ['amazon-dynamodb', 'facebook-messenger', 'google-custom-search', 'wit.ai'] | 8 |
10,492 | https://devpost.com/software/witball-chat | Login screen
Initial Conversation
Fixtures of Premier League Club named Arsenal
Players of Premier League Club named Manchester City
Individual player information
Random fact by bot
Inspiration
For football geeks like us, getting information about soccer matches like fixtures, scores, players, etc in a jiffy is very important. There are many apps out there which give us this information but why not chat with a bot to get the required information?
What it does
Presenting WitBall! A Wit.AI powered Flutter application that gets data about the latest fixtures and current scores, and also gets the players of your favourite team or any other team you name. Using WitBall, users can communicate with the Wit.AI bot using a chat interface.
How we built it
The application is built using Flutter. Flutter is a hybrid app development framework which can create apps for both Android and iOS at the same time!
The application is connected to our Server using sockets and messages are streamed to & from the application to the server.
The server then takes each message, and the Wit.AI bot detects the intent from the text.
Based on the intent an appropriate message is then sent back to the user.
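The server-side dispatch can be as small as a switch on the detected intent. A sketch (the intent and entity names below are illustrative, not WitBall's exact schema; the entities key follows wit.ai's "entity:role" format):

```javascript
// Turn a wit.ai result into the reply streamed back over the socket.
function buildReply(witResult) {
  const intent = witResult.intents?.[0]?.name;
  const team = witResult.entities?.["team:team"]?.[0]?.value;
  switch (intent) {
    case "get_fixtures":
      return team ? `Fetching fixtures for ${team}...` : "Which team's fixtures?";
    case "get_players":
      return team ? `Listing players of ${team}...` : "Which team's squad?";
    default:
      return "I can get you fixtures, scores and squads - try naming a team!";
  }
}
```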
What we learnt
Wit.AI was relatively new to us and after using the software we realised how easy it has made our lives.
Wit.AI was really helpful to create our chatbot. Learning how to use it was a big boon for us.
Challenges we ran into
Creating a socket connection between the mobile application and server was a challenging task.
Training the Wit.AI to detect team names and different other intents was challenging.
Accomplishments that we're proud of
We created this project within 7 days of continuous learning and developing, and that to us is a big achievement.
As said before, Wit.AI was relatively new to us and with Facebook's precise and prompt documentation we were able to understand how to use it.
What's next for WitBall
WitBall is currently in its adolescence. There are many leagues currently in the world of soccer; we have focused on just one, the English Premier League. We intend to cover more leagues such as the Bundesliga (Germany), La Liga (Spain), MLS (USA), etc.
We would also try to increase the number of intents so that users can request additional information.
Built With
dart
flutter
heroku
hive
javascript
node.js
redis
socket.io
wit.ai
Try it out
github.com
drive.google.com
github.com | WitBall | Get all your football needs through a mobile application using the smart Wit bot. | ["Sherwyn D'souza", 'Darlene Nazareth'] | [] | ['dart', 'flutter', 'heroku', 'hive', 'javascript', 'node.js', 'redis', 'socket.io', 'wit.ai'] | 9 |
10,492 | https://devpost.com/software/whatzontv |
Quick view of WhatzOnTV
What's Trending
What's on Hulu
Inspiration
Like many people, I am frustrated by the difficulty of finding shows or movies across the spectrum. I decided to provide a PoC as a first step toward building a set of applications aimed at providing quick and easy access to live broadcast and on-demand library offers.
What it does
It allows users to get information about TV shows and movies from broadcast networks and OTT services. It simplifies the search by using text, quick replies and voice to obtain quick results.
How I built it
Using a Facebook page linked to a Messenger FB app, which interacts with wit.ai and requests information from an external source (a TV guide)
Challenges I ran into
The model training and audio manipulation: using wit.ai Speech to understand the voice input from Messenger required audio file manipulation.
Accomplishments that I'm proud of
The integration of the audio processing to interact with Wit, and an attempt to square the UX to some defined options.
What I learned
Integration of AI (ML) to provide a new user experience based on Natural Language Processing and voice-first enabled application.
What's next for WhatzOnTV
Focus on training the model in Wit.ai from user inputs and provide users guidance to easily find information. Diving into conversational AI. Integration of wit.ai into the future minimalist website to allow a natural UX with the user. Build recommendations based on usage data.
Built With
apis
external
facebook-messenger
ffmpeg
glitch
node.js
wit.ai
Try it out
m.me | WhatzOnTV - Voice-enabled FB Messenger bot for TV lovers | Voice-enabled FB Messenger bot aimed to help "WhatzOnTV" FB page visitors to find TV shows or Movies across the Streaming TV services in response to searches requests by text, voice or quick replies. | ['Christian Thomas'] | [] | ['apis', 'external', 'facebook-messenger', 'ffmpeg', 'glitch', 'node.js', 'wit.ai'] | 10 |
10,492 | https://devpost.com/software/expression-buttons | Inspiration
The need to use the keyboard just to add a question mark and confirm a sentence seemed like an unnecessary distraction. To make the interaction with wit.ai more fluid while maintaining concentration on the voice, it was necessary to add what was missing: the final punctuation!
What it does
This solution simplifies the interaction with the voice interface by minimizing the need to access the keyboard.
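The core of the idea fits in a few lines (a sketch, not the site's actual code): each button carries a terminal expression, and pressing it completes the transcribed sentence and fires the send in one step.

```javascript
// Each expression button appends its punctuation/expression to the dictated
// text and submits the completed sentence in a single tap. Any trailing
// punctuation already present is replaced rather than doubled.
function completeAndSend(text, expression, send) {
  const sentence = text.trim().replace(/[.!?]+$/, "") + expression;
  send(sentence);
  return sentence;
}

// completeAndSend("what time is it", "?", console.log) → "what time is it?"
```

In the page, each mini-keyboard key would call this with its own expression ("?", "!", ".") and the send callback that posts the sentence to the voice interface.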
How I built it
I made this simple solution with css, javascript and a few lines of html so that I can implement the idea without complications.
Challenges I ran into
I tried to simplify as much as possible and went through several solutions, but I was guided by the simplicity of use.
Accomplishments that I'm proud of
I am particularly pleased with the interaction improvement I have achieved on the artificial intelligence interface I am working on: metaquid.com
What I learned
I learned that trying to do things the simple way can give a lot of satisfaction, especially if the benefit is greater than the apparent simplicity of the solution.
What's next for EXPRESSION BUTTONS
Since it will be difficult to simplify further I would like to spread the idea of expression buttons as much as possible.
Built With
css
html5
javascript
jquery
wit.ai
Try it out
www.metaquid.com | EXPRESSION BUTTONS | Graphical interface for voice interaction using expression buttons: a mini keyboard in which each key is associated with an expression that completes the sentence and sends it. | ['S M Z'] | [] | ['css', 'html5', 'javascript', 'jquery', 'wit.ai'] | 11 |
10,492 | https://devpost.com/software/powerup-6o01c3 | PowerUp
PowerUp Alex
Inspiration
Connecting with each other is the most basic yet important thing for humans: to share their views about anything they like, or even how they are feeling in recent times. For people who are visually impaired, it is a hard challenge to express what they feel from inside. This project works to bridge that gap, letting visually impaired people see and grow through AI-generated automated alt-text, and providing a platform that connects them directly with companies that may offer them jobs, so they can boost their lives.
What it does
In recent years Facebook, with its object recognition technology that generates automated alt-text using AI, built a Facebook Android app usable by visually impaired people, making it easier for them to connect and have fun online. This inspired me to take the project further: not only providing them with content but also with job opportunities, by creating a platform that helps them interact with various companies directly when applying for jobs. We can grow this into a strong platform that could help over 32 million people across the globe; for such a massive population waiting for something like this, the project could be revolutionary, bringing the world even closer and more connected in this era of digital and social media life.
How I built it
What is the PowerUp program?
The PowerUp program invites University students and professionals into a multidisciplinary program throughout the world.
Our program is designed to inspire and grow people with disabilities and visual impairment.
What kind of career opportunities do you provide?
The aim of the PowerUp program is to facilitate and empower with career opportunities: jobs, internships, personalised interviews, sponsored programs, and schemes throughout the world in multidisciplinary fields.
What are the program goals?
Our program is focused on helping people develop career-oriented skills and build confidence in their abilities. Career-oriented opportunities will encourage people with disabilities with support.
I used entities and intents for every opportunity and category respectively.
For career opportunities -
intents-
Job opportunities
Internship opportunities
Interview opportunities
Program opportunities
The aim is to provide voice-enabled answers to every question.
People with disability and visual impairment can either use text/audio for the conversational bot.
Challenges I ran into
I have done research on voice-enabled conversational bots from both the user and bot sides. So far I could provide it from the user end only; the bot side is a work in progress.
What's next for PowerUp
Working on a fully functional voice-enabled conversational bot for enhanced career opportunities for University students and professionals.
Built With
facebook
facebook-messenger
flask
heroku
natural-language-processing
python
speechtotext
wit.ai
Try it out
www.facebook.com | PowerUp | PowerUp aim to empower the people with disabilities and visual impairement. | ['namrata agrawal'] | [] | ['facebook', 'facebook-messenger', 'flask', 'heroku', 'natural-language-processing', 'python', 'speechtotext', 'wit.ai'] | 12 |
10,492 | https://devpost.com/software/mastor-the-intelligent-companion | Mandy
Process1
Process1a
Process2
Process2a
Process3
Process3a
Process4
Inspiration
Get information in shorter period of time to process it and take a decision
What it does
Identifies what a user who has a chemical wants to do, and proposes the best procedures by extracting reliable information from PubChem
How I built it
I used Anaconda, Python, and libraries that connect to the websites through their APIs
Challenges I ran into
Integrating and putting everything ready to use with WIT.ai
Accomplishments that I'm proud of
Mandy is an intelligent video transcriber with which you can discover chemical properties and safe handling, storage, and management procedures for chemicals. It is a pretty interesting application that people without any knowledge of chemistry can use to understand how to store and manage chemicals more safely. Every day people use dish soap, detergents, and much more at their houses. Those products contain chemicals that may be harmful when combined with other nearby products or when products are mixed. A safety procedure and an understanding of how to use and apply chemicals can make a very big difference at home.
We propose using Wit.ai to understand what users want to do with chemicals, whether or not they know chemistry. The application identifies the chemicals one is asking about and suggests how they should be managed at home or in any other place. People can store chemicals at home, combine or mix chemicals, and use them for their own purposes.
The application identifies these three different situations in which chemical products would be involved, so that users can decide how to put the chemicals in a warehouse, how to use them, or whether it is good for them to apply those for their own use.
The application also offers to look for information on YouTube, relying on the machine learning algorithms YouTube provides to propose videos to the user. The user gets the video transcript without needing to watch the video: some videos last more than 3 minutes or even an hour, so users receive both the whole text and a summary of the video easily by email. Users are able to discover information among the 13 million records of the public PubChem API.
This application is not only used to look for videos on YouTube; it also works with TikTok and Twitter to get transcriptions more accurately and faster, delivered to your email.
What I learned
Programming python
What's next for Mandy the Intelligent Companion
We can improve the application with TikTok and Twitter by gathering the videos that are published there. You can also check the demo video for the application:
https://www.dailymotion.com/videotutorialeseninternet
Built With
pubchem
tiktok
twitter
youtube
Try it out
github.com | Mandy - the Intelligent Companion | Mandy is the perfect application for you to verify chemicals at your house, school, or in any other place. Verify about chemical properties and different safety procedures to manage chemicals wisely | ['gibran santa cruz ruiz'] | [] | ['pubchem', 'tiktok', 'twitter', 'youtube'] | 13 |
10,492 | https://devpost.com/software/rocket-journal | Inspiration
Inspired by the productivity life-changer Bullet Journal and the Moleskine notebook, implemented in pure CSS.
I have always wanted a personal dashboard that can celebrate and document everyday moments as well as improve my productivity flow. I hope you will try this journal out. And I definitely plan to use it.
What it does
Talk to RocketJournal in plain English.
It takes your input and uses Wit.ai natural language processing to identify your intent and content. It updates the realtime document store in Firebase accordingly.
It's the perfect synergy to be able to interact with this journal using your own words, instead of clicks. And most importantly it is very easy to use on the go. It serves as a beautiful dashboard for your weekly plans.
How I built it
Wit.ai is the super intelligent, low-code NLP processing layer used in RocketJournal to quickly, effortlessly parse user input and modify the journal in real time. Wit figures out which day and time to add or remove journal entries (intent) and the entry itself (custom message body). It is super easy to re-train!
It is very cool to experiment with utterances to re-train Wit.ai on the go, instantly. The moment I come up with more examples, I can just log into wit.ai and train validate within seconds. That is magic.
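The parse-and-update step can be sketched as a pure function (the intent and entity names here are hypothetical stand-ins for the app's real schema; the entities key follows wit.ai's "entity:role" format): the wit.ai result becomes a Firebase-style update object keyed by day.

```javascript
// Turn a wit.ai parse into a Firebase update: which day's list to touch
// (from the built-in datetime entity) and what entry text to add there.
function witToUpdate(witResult, entryId) {
  const intent = witResult.intents?.[0]?.name;            // e.g. "add_entry"
  const when = witResult.entities?.["wit$datetime:datetime"]?.[0]?.value;
  const body = witResult.entities?.["entry_body:entry_body"]?.[0]?.value;
  if (intent !== "add_entry" || !when || !body) return null;
  const day = when.slice(0, 10);                          // "YYYY-MM-DD"
  return { [`journal/${day}/${entryId}`]: body };
}

// witToUpdate({ intents: [{ name: "add_entry" }], entities: {
//   "wit$datetime:datetime": [{ value: "2020-10-05T00:00:00.000-07:00" }],
//   "entry_body:entry_body": [{ value: "buy coffee beans" }],
// } }, "e1") → { "journal/2020-10-05/e1": "buy coffee beans" }
```

The returned object can be passed straight to a multi-path Firebase `update()` call, so one parse mutates exactly one day's entries.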
Challenges I ran into
System Integration is a big challenge.
The original source code was measured in MB because I had to experiment a lot with Firebase API, how to parse and handle Wit.ai response, train it, generate sample utterance, display changes and handle changes in real time using vanilla javascript in a single page application.
Accomplishments that I'm proud of
Use Wit.ai as the NLP layer to make user interactions effortless.
What I learned
NLP API request handling, real time database json store, change handling with Firebase, vanilla javascript.
What's next for Rocket Journal
More in the readme section, but definitely: 1. a bubble tracker for exercising and coffee; 2. custom markers for bullets; 3. integration with Messenger; 4. more UI components on the journal; 5. a rewrite/refactor in React.js!
Built With
firebase
javascript
wit.ai
Try it out
github.com
rocketjournal-b9099.wl.r.appspot.com | Rocket Journal | Rocket Journal is the best wellness productivity tool for you. Document memories, tasks on the go, display them beautifully by talking to RJ in plain English! Inspired by Bullet Journal, Moleskine. | ['Yu Sun'] | [] | ['firebase', 'javascript', 'wit.ai'] | 14 |
10,492 | https://devpost.com/software/draw-using-voice | home page
Inspiration
I am somewhat inspired by the sci-fi movies where AI systems draw complex shapes from the user's voice commands alone. This very idea made me try it myself, given that the time and opportunity were perfect.
What it does
The project allows the user to draw shapes like a square or a circle and to change the color just by using voice commands.
How I built it
I built it using the Django web framework and Python, with OpenCV and NumPy to draw the shapes.
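To make the intent-to-drawing step concrete, here is a tiny dependency-free sketch that rasterizes the recognised shape onto an ASCII grid. The intent names `square`/`circle` are assumptions; the real app draws onto a NumPy image with OpenCV calls such as `cv2.rectangle` and `cv2.circle` instead.

```python
def draw_shape(shape: str, size: int = 7):
    """Render a tiny ASCII canvas for the recognised shape intent.
    (The real app would call cv2.rectangle / cv2.circle on a numpy image.)"""
    grid = [[" "] * size for _ in range(size)]
    c = size // 2
    for y in range(size):
        for x in range(size):
            if shape == "square":
                on = x in (0, size - 1) or y in (0, size - 1)
            elif shape == "circle":
                # Cells whose rounded distance from the centre equals the radius.
                on = round(((x - c) ** 2 + (y - c) ** 2) ** 0.5) == c
            else:
                on = False
            if on:
                grid[y][x] = "#"
    return ["".join(row) for row in grid]

for line in draw_shape("square", 5):
    print(line)
```

Swapping the grid for an image array and the `#` assignment for an OpenCV draw call gives the structure the writeup describes.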
Challenges I ran into
Due to the time constraint, I was not able to test setting the radius and thickness of a shape using voice commands, and the web page is still not able to refresh by itself.
Accomplishments that I'm proud of
I am proud that I was able to build a web app in the limited time of a few days that can draw basic shapes like circles and squares and change colors using voice commands.
What I learned
I learned about wit.ai and how to leverage its power and efficiency and apply it to my own project.
What's next for Draw using Voice
In the future, the application could draw more complex shapes using just the user's voice. It could be used in kids' learning apps, where children can learn about different shapes in a fun way, or by people with special needs, who could really use the voice features to draw digital sketches.
Built With
django
opencv
python
wit.ai
Try it out
github.com | Draw using Voice | The project allow you to draw shapes using voice command | ['Shubham Shaswat'] | [] | ['django', 'opencv', 'python', 'wit.ai'] | 15 |
10,492 | https://devpost.com/software/game-castle-fortress | Login
Menu
About
How to play
Game play -1
Game play -2
CASTLE FORTRESS
💡 Inspiration
The idea is to explore a new level of immersion within a decision-making game using the power of the player's voice. We developed the idea as a police mystery story that, little by little, shows that the secrets of the past always come to light.
The story takes inspiration from television police thrillers, adding elements such as science fiction, mad scientists, etc.
📕 Story.
A rookie FBI agent must solve the kidnapping case of two children in a small rural town, but what begins as a small investigation reveals darker secrets that were buried in the past.
👓 Pitch deck
You can see the pitch deck presentation here:
Link of presentation
🤔What it does
The game uses the voice of the user 🧏‍♂️ as input to move through the story, and depending on what the user says, the course of the story changes.
💻 Tech Stack
Frontend: React Native
Backend: Cloud Functions , NodeJS, Google Cloud Speech API
DB: Firebase Firestore
Auth: Firebase Auth
NLP Engine: Wit.ai
🧙♂️ How we built it
The first step was coming up with a story idea outside of conventional decision games. Wit.ai provides the tools needed to achieve this goal, so we developed an API using Node.js to manage audio requests, handle all the natural language processing with Wit.ai, look up the current point in the story from our DB, and finally convert the text to speech with Google Cloud. To build a cross-platform application we used React Native, so the whole environment was written in JavaScript and the integration was less complicated. The next steps were finding out how to record audio from the physical device and send that data to our API so it could be processed by Wit.ai, and then receiving the API response, with its text already converted to speech, so the app can play it back and players can listen to it.
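The story lookup at the heart of this flow can be sketched as a dictionary keyed by dialog ID, where each node maps recognised Wit.ai intents to the next node. The node IDs, texts, and intent names below are invented for illustration; the real game reads its nodes from Firestore.

```python
# Hypothetical story graph: each dialog node has an ID and maps intents to next IDs.
STORY = {
    "intro": {"text": "A child has gone missing. Visit the farm or the lab?",
              "next": {"go_farm": "farm", "go_lab": "lab"}},
    "farm":  {"text": "The farmer seems nervous...", "next": {}},
    "lab":   {"text": "The lab door is locked.", "next": {}},
}

def advance(current_id: str, intent: str) -> str:
    """Return the next dialog node ID for a recognised intent, or stay put."""
    node = STORY[current_id]
    return node["next"].get(intent, current_id)

print(advance("intro", "go_lab"))   # lab
print(advance("lab", "go_farm"))    # lab (no such branch, so we stay)
```

The API only ever passes the current node ID back and forth with the app, which matches the "story moves forward by passing the ID" design described below.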
🧱 DB Structure
Each dialog in the story has an ID related to it, and every character has a voice-configuration ID for the Google Cloud API, so we have a collection for each of those.
The story moves forward by passing the ID of the current point in the story from the API to the app.
Collection Story
Collection Voices
🧠 NLP Flow
🦾Challenges we ran into
First, we had to find a way to capture voice audio, because none of us had previous experience working with sound data. After some research we found a way to accomplish this, but the next complication was packaging the recorded data into the structure the tool needed. We had no idea how to manage this raw information until we investigated and found out how to convert it into the Buffer data type, which was what we needed. Then we had to handle the sound data from the API response; drawing on our experience converting sound data into Buffers, we did the reverse process and decoded this data into a playable sound file.
🤓 What we learned
Building a game application was a whole new challenge for us. As we had no previous experience on the area, we learned from scratch how to handle sound files information and send it as raw data through http requests and make the reverse process of this data to convert it into playable sounds on mobile devices. We discovered the potential usage of this technology in other areas and how the proper implementation can make a really good impact in the society.
🧐What's next for Game - Castle Fortress
More story lines
Clues inside the game
More user interaction with new technologies
iOS App
🎮Game Play
Built With
express.js
firebase
firebase-cloud-functions
firebase-hosting
github
node.js
react
react-native
speechapi
wit.ai
Try it out
drive.google.com
github.com
github.com
documenter.getpostman.com | Castle Fortress | Decision based mobile game controlled by voice commands. | ['Mauricio Trejo', 'David Quintanilla', 'Gama Nolasco', 'Emilio Jose Campos'] | [] | ['express.js', 'firebase', 'firebase-cloud-functions', 'firebase-hosting', 'github', 'node.js', 'react', 'react-native', 'speechapi', 'wit.ai'] | 16 |
10,492 | https://devpost.com/software/assist-me-0eqixg | Home screen
While listening to voice command
Inspiration
I heard about wit.ai from Facebook's developer group. People around me had often asked for a Marathi (a local language in India) voice assistant, so wit.ai was a great support for this.
What it does
It is a Marathi voice assistant which can be used to perform different actions on a mobile phone, such as opening the gallery, Facebook, or Instagram, calling, messaging, creating a note, and many more, based on voice commands. It can also be operated in English.
How I built it
I saw a tutorial video in the Devpost resources and got a demo app on GitHub. I learned how to integrate a wit.ai app with an Android app, then trained the app on each command, one by one, with different accents, and added a few more features.
Challenges I ran into
I mainly faced challenges related to Marathi speech-to-text conversion, but solved them later on. There was also an accent-recognition problem.
Accomplishments that I'm proud of
I'm proud that I really built an app which helps me control my phone in my native language. People will really love it and I would be happy if many people use it after launch. Thanks to Facebook.
What I learned
I learned to use wit.ai and train apps for different commands in different languages, and to integrate a wit.ai app with an Android app, among other things.
What's next for Assist Me
In the future I'm going to add more features to this app, like detailed conversations with the user, a better UI, support for more native languages, etc.
Built With
android
android-studio
java
wit.ai
Try it out
github.com
drive.google.com | Assist Me | Apla digital marathi mitra. | ['Kunal Patrikar'] | [] | ['android', 'android-studio', 'java', 'wit.ai'] | 17 |
10,492 | https://devpost.com/software/online-shopping-using-voice-assistant | Footer
Product page
Header
Inspiration
What differentiates the offline shopping experience from online? No hassle of searching and digging through huge product descriptions for what you really need in a product. In offline shopping, the salesman is there to answer all of your questions. You can ask whatever you want to know about the product. And, no denying, we humans love to just ask for what we need. Reading is inherently boring for us. But online shopping is a lot of hard work. This is what inspired us to do some of the job for daily online shoppers.
What it does
Whatever you want it to do. Okay, not now. But yes it can do a lot for online customers.
Ask for the product you need (Yes, no more typing) and it will get you to the right section.
Choose the product of your interest and ask anything about it, for example its price, features, availability, customer ratings, or main highlights.
It can also show you the special deals and top products of the day.
How we built it
We integrated wit.ai with an online shopping template. We added all the voice-based functionality using Django as the backend. The frontend is based on HTML5, CSS3, Bootstrap, and JavaScript. The model is trained to recognize and understand voice input using wit.ai.
Challenges we ran into
The main challenge was to integrate all the technologies. But after that, wit.ai worked well.
Accomplishments that we're proud of
At a time when shopping experiences have changed a lot, with every product just a few clicks away, we are proud that we made this experience even more delightful and engaging by removing the hassle of finding the details of a product. Our project allows users to find the information they are looking for using their voice, giving them the experience of offline shopping and of interacting with a salesman.
What we learned
This project gave us a chance to learn and work with wit.ai, which seems to be the next big thing in the NLP and bot industry. It allowed us to polish our existing skills while developing new ones.
What's next for Online Shopping using Voice Assistant
It can do only a few things for now, because online shopping really includes a lot. But we believe online shopping will be really fun and engaging if most things can be done with the voice alone. So, we are going to add many more functionalities to our online shopping website. To mention some:
Comparing the specifications of two or more products.
Allow users to add filters to their search query.
Adding products to the cart and most part of the checkout.
Built With
ajax
bootstrap
css3
django
html5
javascript
jquery
wit.ai
Try it out
voice-assisted-online-shopping.herokuapp.com | Online Shopping using Voice Assistant | Why dig the online store for the right product with right features, when your voice is enough to ask for it? | ['Prachi Aggarwal', 'Gautam Gupta'] | [] | ['ajax', 'bootstrap', 'css3', 'django', 'html5', 'javascript', 'jquery', 'wit.ai'] | 18 |
10,492 | https://devpost.com/software/a-i-go2-college | (A)IGO2COLLEGE
Inspiration
It's not always easy to find a reliable college guidance counselor. As a result of the COVID-19 pandemic, this task has become even more difficult as students shifted to a virtual mode of learning and in-person interactions have been restricted to a certain extent.
What it does
(A)IGO2COLLEGE is a web application designed to help students stay on top of the college application process in the absence of a college guidance counselor. It answers general questions including but not limited to ACT/SAT score requirements of different colleges, academics, tuition and fees, college application requirements, etc.
How we built it
We used HTML and CSS to style and design the website, JavaScript to handle the backend of the chatbot, and Wit.ai as the brains of the whole web application. The JavaScript makes calls to the wit.ai server, which returns the parsed text with its respective categorisations in terms of entities, traits and intents.
The dataset was hand-collected from various websites, including those listed in the references, and is accurate to the best of our knowledge.
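The entities/traits/intents returned by wit.ai each carry a confidence score, and the chatbot has to collapse them into something it can answer from. A minimal sketch, assuming made-up names (`ask_tuition` intent, a `school` label) rather than the app's real schema:

```python
def top_labels(parsed: dict, threshold: float = 0.6) -> dict:
    """Collapse a Wit.ai reply into {slot: best value}, dropping low-confidence guesses."""
    out = {}
    intents = parsed.get("intents", [])
    if intents and intents[0]["confidence"] >= threshold:
        out["intent"] = intents[0]["name"]
    for name, values in parsed.get("traits", {}).items():
        best = max(values, key=lambda v: v["confidence"])  # keep the strongest candidate
        if best["confidence"] >= threshold:
            out[name] = best["value"]
    return out

reply = {"intents": [{"name": "ask_tuition", "confidence": 0.93}],
         "traits": {"school": [{"value": "MIT", "confidence": 0.88}]}}
print(top_labels(reply))  # {'intent': 'ask_tuition', 'school': 'MIT'}
```

Anything falling below the threshold is simply dropped, which is when a bot like this should fall back to "can you rephrase that?".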
Challenges we ran into
The journey from start was a real challenge for us.
We had new members joining in and then quitting mid-way of project ideation.
Both of us had almost no experience with web development. We took it as a challenge and taught ourselves web dev in ~1 week of coding.
Dataset collection was hard because not all universities post their data on their sites, so it had to be gathered from other independent sites.
Getting a well-trained model while maintaining good accuracy.
Accomplishments that we're proud of
We're happy with how the site came out. We didn't want the conventional chatbot UI, so we went for a design which was easy to use and which neither of us had really seen before. The steep learning curve was also amazing for us.
What we learned
We learned how to use Wit.ai. We realised how easy it is to integrate wit.ai into our app and plan to use it in a future project. We also learned how to work remotely on a project, which is a skill hugely relevant in these times.
What's next for (A)I Go2 College
Scale it for our mobile users.
Expanding our database with more universities, including liberal arts colleges and community colleges, and adding info like which majors the universities provide.
Making the app more personalised by providing an interactive dashboard for each user and giving analysed results for what university they should apply to given their scores and achievements.
Choice to link accounts with third party apps like google calendar or facebook to set reminders for important dates.
References:
https://www.collegesimply.com/
https://www.usnews.com/education/best-global-universities/rankings?int=a27a09
Built With
javascript
netlify
wit.ai
Try it out
aigo2college.netlify.app
github.com | (A)I-Go-2-College | Don't let COVID delay your plans for college. Meet (A)I Go2 College (aka "I go to college"), your virtual college guidance counselor, who will give you the latest info on your dream university. | ['Aneesh Chawla', 'ellie kuang'] | [] | ['javascript', 'netlify', 'wit.ai'] | 19 |
10,492 | https://devpost.com/software/pokemon-master | Your starter buddy.
Our goal.
See where you buddy has reached now.
Master ball to catch Mewtwo.
Game Interface I. I know, it's not a GUI :(
Game Interface II.
Inspiration
A fan of the Pokemon franchise, I have always loved playing its games on an emulator. But we are always pressing keys to do the job. I felt the need to make an interface where I can dictate what the pokemon should do: fight, heal, or run, at the bare minimum.
What it does
In a command-line interface, the player is introduced to the situation and soon finds themselves in a pokemon battle. As they give instructions and win battles, they climb the steps to reach the top: evolve their pokemon, attain the master ball, catch Mewtwo, and sail off the island. A fun interactive game for every adventurer out there.
How I built it
I came across a command-line Pokemon game when I was learning about Pokemon games in Python. As I was short on time, I decided to skip the GUI and start perfecting it, integrating Wit.ai as the speech-to-text-to-intent service.
Challenges I ran into
The first challenge was that I didn't know about this before 2nd Sept. As my college semester started on the 1st, it was difficult to do major development, like in Unity, or web/app deployment, in a few days. The main challenge was to record audio in real time and send it over in a request, all without costing major time. It came out pretty neat.
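The "record and send in a request" step boils down to POSTing the raw audio to Wit.ai's Speech endpoint with the right headers. A sketch that builds (but does not send) such a request with the standard library; the token and WAV bytes are placeholders, and the `v=` date parameter is an assumed API version:

```python
import urllib.request

def build_speech_request(token: str, wav_bytes: bytes) -> urllib.request.Request:
    """Prepare (but do not send) a Wit.ai Speech API request for a WAV clip."""
    return urllib.request.Request(
        "https://api.wit.ai/speech?v=20200908",  # version date is an assumption
        data=wav_bytes,
        headers={
            "Authorization": f"Bearer {token}",  # server access token placeholder
            "Content-Type": "audio/wav",
        },
        method="POST",
    )

req = build_speech_request("MY_TOKEN", b"\x00\x01")
print(req.get_method(), req.full_url)
```

Sending it with `urllib.request.urlopen(req)` returns the same JSON shape as the text `/message` endpoint, with the transcript plus intents and entities.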
Accomplishments that I'm proud of
I am proud that I worked all out in inadequate time. I am happy about the success of Wit.ai doing exactly what I wanted it to do. It perfectly captures the essence of speech!
What I learned
First of all, the awesome Wit.ai! I am really shocked at its simplicity and would surely make more projects with it. Apart from that, I worked for the first time with real-time audio recording, then sending a request and getting a response in Python. So it was surely a learning experience.
What's next for Pokemon Master
As I found out about this hackathon way too late and could only devote a few days, I would love to give it a GUI, deploy it on a web page, and maybe expand the horizon into a full journey, not only battling pokemon.
Built With
python
wit.ai
Try it out
github.com | Pokemon Master | Gotta talk, and catch 'em all! | ['Gurbaaz Singh Nandra'] | [] | ['python', 'wit.ai'] | 20 |
10,492 | https://devpost.com/software/angel-assistant | Angel Assistant
Unfortunately, given the limited time constraint we were under, it would have taken Facebook approximately 1-2 weeks to approve the chatbot for public release, which wasn't possible since judging would have been over by then. Therefore, it will only work for users whom we have added as designated test users. Sorry for the inconvenience.
Angel is an intelligent medication tracking assistant powered by Wit.ai.
Want to use Angel Assistant? Start a conversation by messaging https://facebook.com/angelassistantai
Wit.ai App ID: 763921211088980
Messenger Link: m.me/angelassistantai
Our project website is https://angelassistant.tech/
Inspiration
Keeping track of personal medications is challenging for many people. According to a review in Annals of Internal Medicine, “studies have shown that 20 percent to 30 percent of medication prescriptions are never filled, and that approximately 50 percent of medications for chronic disease are not taken as prescribed.” This lack of adherence is estimated to cause approximately 125,000 deaths and at least 10 percent of hospitalizations, and to cost the American health care system between $100 billion and $289 billion a year.
Our team was determined to leverage the power of Wit.ai to build a well-designed and easy-to-use chatbot interface that assists people with remembering and keeping track of their medications, in an effort to encourage people to stay on top of their medications and take them as prescribed. Angel Assistant gives users a simple and accessible way to communicate via natural language and stay on top of their medications, made possible by the cutting-edge infrastructure and technology of Wit.ai.
What it does
Angel is an intelligent chatbot that allows users to quickly add, track, and remember their medications. Users can add medications via voice commands, specifying the name of the medicine, dosage, and times. They can update their status by leaving a quick voice or text message, after which Angel will give timely reminders. Lastly, Angel provides each patient with a unique ID, which they can then share with their doctors for them to monitor their patients’ progress, and send messages accordingly.
How we built it
Angel lives on a Python-based cloud server hosted on Azure. We are using Flask for the cloud server and MongoDB Atlas for our database. We are using the Facebook developer infrastructure to integrate the Angel Assistant backend with Messenger for automated intelligent messaging.
Incoming messages from Messenger get forwarded to the Angel backend server for processing.
Messages are then sent to our Wit.ai application, which returns the intent, state, and traits of the message, backed by the MongoDB database and our state machine. In this manner, we curate custom responses to the user based on their message, and update the database accordingly.
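The routing step can be sketched as a single handler that maps the parsed intent and entities to a reply and a database update. The `add_medication` intent, the `medication` custom entity, and the in-memory dict standing in for MongoDB are all illustrative assumptions, not the app's real schema:

```python
def handle_message(parsed: dict, store: dict) -> str:
    """Route a parsed message to a reply and update the in-memory 'database'.
    Intent/entity names here are invented for illustration."""
    intents = parsed.get("intents", [])
    intent = intents[0]["name"] if intents else None
    ents = parsed.get("entities", {})
    if intent == "add_medication":
        name = ents.get("medication:medication", [{}])[0].get("value")
        when = ents.get("wit$datetime:datetime", [{}])[0].get("value")
        if name:
            store.setdefault("meds", []).append({"name": name, "time": when})
            return f"Got it - I'll remind you to take {name}."
    return "Sorry, I didn't catch that. Try: 'add aspirin at 9am'."

db = {}
parsed = {"intents": [{"name": "add_medication", "confidence": 0.97}],
          "entities": {"medication:medication": [{"value": "aspirin"}],
                       "wit$datetime:datetime": [{"value": "2020-09-01T09:00:00"}]}}
print(handle_message(parsed, db))
```

In the real system the `store` writes become MongoDB inserts, and the returned string is posted back through the Messenger Send API.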
Challenges we ran into
The implementation of the chatbot logic and the integration with Wit.ai was the most time-consuming part, because we had to set up the development environments and get everything functional before even starting. The chatbot logic turned out to be more complex than we anticipated. We also had challenges implementing the voice-command features, as traditional Messenger voice audio files aren't supported on the Wit.ai platform. Thus, we encoded and sent the data as .wav files over a REST API. This was quite challenging, as the conversion had to be done in memory. Lastly, migrating the bot from Flask to Azure Functions was a challenge.
Accomplishments that we're proud of
We were proud to have created a well-designed and well-executed Minimum Viable Prototype of an intelligent chat bot that successfully tracks medications, and implements various tracking features. The system integrates well with Messenger, and we strive to integrate it with other chat platforms as well in the future. Lastly, we are proud to have configured natural language interactions by enabling users to send custom voice messages by sending .wav files to Wit.ai, and retrieving the results from the backend server.
What we learned
We learned how to create concepts and intents in Wit.ai, and integrate it with a Flask backend server that sends messages to the Messenger platform. We learned how to create callback requests to our Flask backend server from the Messenger API infrastructure. Lastly, we wanted to transition our conventional Flask server to the more cost effective and efficient serverless architecture provided by Azure functions, which we learnt how to integrate with the python messenger client.
What's next for Angel Assistant
Currently, Angel Assistant is designed and published as a Minimum Viable Product. We would like to refine the functionality, taking into consideration feedback from users and experts in the healthcare field. A feature we would especially like to implement is support for finding and showing further information on various medications, as well as integrating with pharmacy systems for online purchases of medications.
References
https://www.nytimes.com/2017/04/17/well/the-cost-of-not-taking-your-medicine.html
https://pubmed.ncbi.nlm.nih.gov/22964778/
Build Instructions
Set up your Wit.ai and Messenger bots to get your access keys.
Install Python and Flask via pip, and update the access keys for the bot.
Start the ngrok server by running server.py.
Test using messenger
Alternatively, message https://facebook.com/angelassistantai to start chatting now!
Built With
azure
facebook-messenger
python
wit.ai
Try it out
www.facebook.com
angelassistant.tech | Angel Assistant | An intelligent medication-tracking assisant powered by Wit.ai | ['Veer Gadodia', 'Nand Vinchhi', 'Muntaser Syed', 'Ebtesam Haque'] | [] | ['azure', 'facebook-messenger', 'python', 'wit.ai'] | 21 |
10,492 | https://devpost.com/software/project-heron-ai-powered-phone-booking-system | Inspiration
Inspired by Google Duplex, where AI can be used to call a restaurant or salon to make a booking, I wondered whether I could create a booking system that helps SMEs handle phone-call bookings.
What it does
Project Heron first lets restaurant/salon owners define what they need from users during booking (name, choice of staff, etc.); this info is fed to Wit.AI, which then powers an AI that can handle customers' bookings.
How I built it
I built the front-end with React.js and Recorder.js, which records the audio; I did VAD in the browser, which sends only the snippets of audio that contain voice to my back-end. My back-end is based on Node.js and relays the audio blob to Wit.AI for intent detection. Based on the intent or entity detected on the Wit.AI side, my backend replies appropriately to the user. I also embed context when I need to ask the user for more information.
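The browser-side VAD can be approximated by a simple energy gate: compute each frame's average amplitude and keep only frames that clear a threshold. This is a crude stand-in for the real detector (production VADs also smooth decisions over time and add hangover frames); the threshold value is an arbitrary assumption for 16-bit PCM:

```python
def frame_energy(samples) -> float:
    """Mean absolute amplitude of one audio frame (16-bit PCM sample values)."""
    return sum(abs(s) for s in samples) / max(len(samples), 1)

def voiced_frames(frames, threshold: float = 500):
    """Keep only frames whose energy clears the threshold - the gist of the
    browser-side VAD described above."""
    return [f for f in frames if frame_energy(f) >= threshold]

silence = [10, -12, 8, -9] * 40       # low-amplitude background noise
speech = [4000, -3500, 3800, -4200] * 40  # loud voiced frame
kept = voiced_frames([silence, speech, silence])
print(len(kept))  # 1
```

Only the surviving frames would then be concatenated and shipped to the backend, which is what keeps the Wit.AI requests small.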
Challenges I ran into
It was rather challenging to capture audio snippets containing speech on the frontend. In a phone call, there is no indication of the end of speech from the caller, unlike in a push-to-talk system, so I had to research VAD in order to capture only the speech portion. The other challenge I faced was testing: due to my accent, Wit.AI sometimes could not accurately detect what I was trying to say, which occasionally hindered my testing process.
Accomplishments that I'm proud of
Managed to integrate VAD and Recorder.js in React.js to capture only the speech portion
1st Audio Bot I created
What I learned
How to build a speech-to-text system
How to build an AI context with Wit.AI
Wit.AI is easy to use and awesome for NLP, but for some reason the dialog feature has been removed (it could be very handy)
What's next for Project Heron - AI-powered Phone Booking System
Add text to speech
Better fine-tune of flow
Use a chatbot framework rather than doing it from scratch?
Checking operating hours and staff availability before booking
Wit.AI ID: 910134492809920
Built With
elastic-ui
javascript
node.js
react.js
wit.ai
Try it out
heronai.herokuapp.com | Project Heron - AI-powered Phone Booking System | Project Heron an AI-based booking system to help store to take care of phone booking based on the information provided by the store owner. | ['Chun Sheong Foong'] | [] | ['elastic-ui', 'javascript', 'node.js', 'react.js', 'wit.ai'] | 22 |
10,492 | https://devpost.com/software/edith-73pbr4 | Edith in vscode marketplace
Edith in VScode
Edith activate in status bar of VScode
Fast API endpoints
example of an API call
Inspiration
The inspiration came when we thought about building something for kids who love to code, so they can easily ask about errors and questions with a simple voice command and get the most appropriate result. But once it was built, in just 3 days for this hackathon, we realized it is very useful even for professional developers. Because of its simplicity, we fell in love with Edith as soon as we started using it.
What it does
Edith is a voice assistant designed to get the most appropriate information for users when they ask queries. Take the kids' example: they have trouble searching forums online for an error, so we built a FastAPI server that gathers the most appropriate information from different online platforms. It has a published vscode package that serves the sole purpose of an AI assistant built for developers.
How we built it
We managed to build everything in just 3 days: the vscode extension, the wit.ai actions, and a FastAPI server for gathering the best results. The vscode extension listens to the user, sends the voice data to the wit.ai voice API, gets the entities from there, and sends them to the FastAPI server deployed on Heroku. The API has 2 endpoints for now: one for voice queries and another for the chat functionality that we may build soon. The server gathers the best, short information from various web portals and sends it back to the extension, where it is played as Edith's voice.
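The step where the server turns Wit.ai entities into a forum search can be sketched as a small query builder. The entity names (`language`, `error_text`) are invented for illustration; the real Wit app's schema may differ:

```python
def build_query(parsed: dict) -> str:
    """Compose a forum search query from recognised entities - a sketch of the
    'gather best results' step; entity names are assumptions."""
    ents = parsed.get("entities", {})
    language = ents.get("language:language", [{}])[0].get("value", "")
    error = ents.get("error_text:error_text", [{}])[0].get("value", "")
    # Join the non-empty pieces into one search string.
    return " ".join(part for part in (language, error) if part).strip()

parsed = {"entities": {"language:language": [{"value": "python"}],
                       "error_text:error_text": [{"value": "list index out of range"}]}}
print(build_query(parsed))  # python list index out of range
```

The resulting string is what a backend like this would send to the various web portals before ranking and trimming the answers.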
Challenges we ran into
None of us had any experience building vscode packages or with NodeJS; that is where we spent most of the time. In fact, Edith was a spontaneous idea and we managed to build it in 3 days, so time was another challenge we tackled.
Accomplishments that we're proud of
Publishing a package for a real problem and watching people use it properly is the best reward and accomplishment, so we are very proud and motivated to support and contribute to Edith in the future, building more powerful and stable versions.
What we learned
The entire process was a breakthrough in learning. Most importantly, we realized that technologies like wit.ai make the process very simple and fast. We used to build applications with ML frameworks, but wit.ai made things very powerful and easy. As a funny side note, when we started using wit.ai we never thought it was so powerful; it's even more amazing than we expected.
What's next for Edith
We have a lot to work on for the future of Edith. I'll just point out the potential items below:
Chat support in extension (the API part is almost done)
More text editor supports like Atom, Sublime
In-app actions like opening a new tab and saving, via voice commands
Personalized Edith
Built With
fastapi
heroku
node.js
python
speech-to-text
wit.ai
with.ai
Try it out
marketplace.visualstudio.com
github.com | Edith | Alexa but for developers, Edith can be used in diffrent text editors as extensions. currently we have VS code extension | ['Jaiden John', 'Gopikrishnan Sasikumar', 'Suparna Jayaprakash', 'navaneeth kt'] | [] | ['fastapi', 'heroku', 'node.js', 'python', 'speech-to-text', 'wit.ai', 'with.ai'] | 23 |
10,492 | https://devpost.com/software/pick4me-shyfu9 | Inputting text for Boston Area
Result from Boston query
Audio capture for Chicago
Result from Chicago query
Inspiration
I have a hard time picking where to eat when I go out. I wanna save my brain power.
What it does
This application picks a random restaurant depending on what the person asks for. The person has to provide what he/she feels like eating and where.
How I built it
I used React to build the front end and hosted the site on GitHub Pages. Wit.ai reads user input and passes the relevant information on to Yelp.
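The hand-off from Wit.ai to Yelp amounts to pulling a cuisine and a location out of the parsed message and filtering candidates. A sketch with invented entity names (`cuisine`, `wit$location`) and an in-memory list standing in for the Yelp API:

```python
import random

def pick_restaurant(parsed: dict, candidates: list):
    """Turn Wit.ai entities into a Yelp-style filter, then pick one match at random.
    Entity names here are assumptions about the Wit app's schema."""
    ents = parsed.get("entities", {})
    cuisine = ents.get("cuisine:cuisine", [{}])[0].get("value", "")
    location = ents.get("wit$location:location", [{}])[0].get("value", "")
    matches = [c for c in candidates
               if cuisine.lower() in c["categories"] and c["city"] == location]
    return random.choice(matches) if matches else None

places = [{"name": "Sushi Go", "categories": ["japanese", "sushi"], "city": "Boston"},
          {"name": "Taco Town", "categories": ["mexican"], "city": "Boston"}]
parsed = {"entities": {"cuisine:cuisine": [{"value": "sushi"}],
                       "wit$location:location": [{"value": "Boston"}]}}
print(pick_restaurant(parsed, places)["name"])  # Sushi Go
```

In the real app the `candidates` list comes from a Yelp search keyed on the same two values, and the random pick is what saves the brain power.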
Challenges I ran into
Working with Wit.ai. Working with AI was pretty new to me.
Accomplishments that I'm proud of
I can save my brain power now and let the bot pick where to eat for me :)
What I learned
Working with AI is actually not that bad, provided someone did all the hardcore math behind the scenes already
What's next for Pick4Me
Right now, it works well on desktop. I haven't really made it mobile friendly yet hehe...
Built With
css
react
wit.ai
yelp
Try it out
levane.github.io | Pick4Me | I can't decide where to eat so let's have the Mr. Wit decide!I'm feeling like eating Japanese.... Or Korean... Maybe I want sushi... Or Bibimbap...Help Mr. Wit! | [] | [] | ['css', 'react', 'wit.ai', 'yelp'] | 24 |
10,492 | https://devpost.com/software/currency-converter-wn916v | Inspiration
I wanted to create a Messenger app which can be used to get currency exchange rates.
What it does
It gets the exchange rate of a currency in terms of USD, or in terms of any other currency.
How I built it
I used the wit.ai platform to train the utterances and used the wit.ai messenger quick start to build a messenger experience around it.
Challenges I ran into
I faced some difficulty understanding the messenger ecosystem, then I used the sample code as a baseline for my project.
Accomplishments that I'm proud of
I am proud of completing this in a short time frame.
What I learned
I learnt about the wit.ai ecosystem.
What's next for Currency Converter
Improve User Experience.
Built With
node.js | Currency Converter | Messenger App using Wit.ai which can be used to check the price of a currency exchange rate. | ['Rohan adv'] | [] | ['node.js'] | 25 |
10,492 | https://devpost.com/software/n-5hiz8m | I ran my NLP model through python and got these results
Inspiration
I was inspired by the fact that I will be fulfilling people's dreams by helping them display their talent on the extremely powerful internet. I will also be creating a lot of jobs for tech-savvy people. Living in India, I have seen a lot of unemployed tech-savvy people due to the small number of tech companies here. These tech-savvy people would be able to make some money through my platform.
What it does
It connects talented but not technically knowledgeable people to technical experts. For example, a business owner who wants a website can find a website developer through our platform, so that he/she does not have to spend time gaining that technical knowledge, and the website developer can earn money.
How I built it - I built it with natural language processing software known as Wit.ai, and I connected Wit to Python to check its practical use
Challenges I ran into - Learning about NLP, as I am a high school student
Accomplishments that I'm proud of - Learning NLP
What I learned - how to use natural language processing software
What's next for n
Built With
python
wit
wit.ai | TecHelp | The idea is to connect talented people, who want to display their talent through internet, and tech savvy people. For example - An aspiring youtube content creator can be connected a video editor. | ['Samyak kapoor'] | [] | ['python', 'wit', 'wit.ai'] | 26 |
10,492 | https://devpost.com/software/food-for-thought-l6a2bo | Messenger Demo
Facebook Page
Inspiration
The inspiration for Food for Thought came when we were having trouble deciding where to eat out. There are always so many places to eat in our location, yet we can never find a place that is good and to our liking. Thus we thought it would be a great idea to have someone or something record our past cravings and favorite foods and provide us with a restaurant to go to or order from.
What it does
Food for Thought is a messenger chat bot that provides restaurant suggestions based on your location and food preferences. It uses Facebook's Wit.AI technology to comb through and parse the intents and entities of the user's Facebook message and provide a customized restaurant suggestion based on the message.
How I built it
Food for Thought was built using Wit.AI that uses our own set of training data to parse location and food types. It also uses Facebook messenger platform with a node.js express server as a web hook to connect the Wit.AI tool with the Facebook messenger for our Facebook page. Inside the web hook, it processes the Facebook Message with the power of Wit.AI and then we use TripAdvisor's API endpoints to help locate a restaurant in a certain location and on food preference.
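The team built their web hook in node.js; a minimal sketch of the same parsing step, written here in Python for illustration, might look like the following. The entity names ("location:location", "food_type:food_type") are assumptions, not the project's actual Wit.ai schema.

```python
# Sketch of the webhook's parsing step: pull a location and a food type
# out of a Wit.ai-style response dict before querying a restaurant API.

def parse_wit_response(wit_response):
    """Extract the top location and food-type values from a Wit.ai reply."""
    entities = wit_response.get("entities", {})

    def top_value(name):
        candidates = entities.get(name, [])
        if not candidates:
            return None
        # Take the candidate Wit.ai is most confident about.
        best = max(candidates, key=lambda c: c.get("confidence", 0))
        return best.get("value")

    return {"location": top_value("location:location"),
            "food": top_value("food_type:food_type")}

# Example Wit.ai-style payload for "find me sushi in San Jose"
sample = {
    "text": "find me sushi in San Jose",
    "entities": {
        "location:location": [{"value": "San Jose", "confidence": 0.97}],
        "food_type:food_type": [{"value": "sushi", "confidence": 0.93}],
    },
}

print(parse_wit_response(sample))  # {'location': 'San Jose', 'food': 'sushi'}
```

The extracted pair would then feed the TripAdvisor lookup described above.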
Challenges we ran into
There were quite a few challenges, such as error handling between Facebook Messenger and the node.js web hook. Because Facebook Messenger repetitively resends a message to the web hook on a 400 response, there were many edge cases we needed to account for. In addition, at the beginning of creating the Food for Thought application, a lot of training needed to be done with Wit.AI in order for it to start recognizing locations and food types in our application.
Accomplishments that I'm proud of
The team is proud to have built, in such a short time span, a working Facebook page that uses Wit.AI to provide restaurant suggestions to the user.
What we learned
We learned the capabilities of Wit.AI, and hope that with more training for our application that Wit.AI will provide us with more confident intents and entities on messages received. We also learned about how to connect and integrate our application with Facebook's services which will be very useful for our future endeavors.
What's next for Food for Thought
The next step for Food for Thought is to work more on the personalization of restaurants for each user and tailor responses to be more human-like. There is also a lot of training that needs to be done with Wit.AI in order for our messenger application to be able to fully recognize the different food types and locations.
Built With
facebook-messenger
glitch
node.js
tripadvisor
wit.ai
Try it out
www.facebook.com | Food for Thought | A Facebook page that helps search up restaurants and personalizes their search using the power of wit.ai. | ['Tofeeq Ahmad', 'Jimmy Chao', 'Colin Li'] | [] | ['facebook-messenger', 'glitch', 'node.js', 'tripadvisor', 'wit.ai'] | 27 |
10,492 | https://devpost.com/software/recroot | Recroot.io Logo
Empty Searchbar
Searchbar with description
Colorcoded tab displaying what the AI identified and as which attribute + visual, simple and concise job description
Visual job description + detailed text description explaining in more depth what the job description implies
Recr👀t .io
Recroot is a tool that automates and simplifies the recruiting process using natural language processing by extracting vital information in a sentence and converting it into an entire job description.
✨Purpose
Nobody should spend their precious time and effort on creating job descriptions. That's what Recroot is made for. Many young startups struggle with making job descriptions quickly and have to enter everything manually, often with no direction in mind. No more filling out the lengthy forms, checking boxes and selecting from drop-down lists, with Recroot all you have to do is enter a simple sentence, click enter and watch the magic happen. You can do all of this with voice recognition too. Ensuring you recroot more effectively.
Inspiration
The idea of Recroot came about due to my personally experienced struggle of finding people for my small startups, which is often the case with the majority of young companies. In particular, one of the issues I faced was making simple and to-the-point job descriptions in minimal time, and frequently I didn't really know where to start. The point of Recroot is therefore threefold: 1. to reduce the time it takes to develop a formal job description, 2. save time on making the appearance attractive and simple, and 3. Recroot helps a ton with simply making a job description that you can use as reference and expand on or improve later so that you start with having a direction to follow.
⚙️The Main Function
Enter a 1-sentence description starting with "Find somebody...", either by typing or voice recognition
Click enter or the submit button
Verify in the grey tab that the NLP model correctly interpreted your description
Copy the generated job description below and export directly to Facebook or Twitter
❓How It Works
After you submit your description, it is fed into the Wit.ai model, which identifies all the individual attributes and returns them to Recroot. Recroot then iterates through each attribute and its sub-attributes and uses them to build the visual description. With some additional logic, Recroot generates the text description by identifying which attributes are present and which category each falls into, in order to place them in the correct position.
What it does
Upon reaching the Recroot.io website, the first thing you will see is a search-engine-like search bar that prompts you to type "Find somebody that...". The input you need to provide is a simple sentence that defines what the ideal employee you're searching for is like, in terms of 9 currently trained attributes: location, school, industries, companies, minimum years of experience, has a degree, qualities, skills, and held positions. Part of how Recroot saves you time is that it doesn't require any capitalisation or formal grammar, as long as the meaning of your sentence is clear. You can either type this input or use the voice recognition feature: dictate your sentence and it will automatically be typed for you with very high accuracy. The sentence can then be submitted by clicking enter or the search button, or, if you are completely unhappy with it, you can clear the search bar with the button to the left. Once the description is submitted, Recroot first reports any errors in a gray tab below, such as an overly long or too-short text. If no errors appear, you'll see your sentence once again, but this time each attribute that the NLP model recognized is highlighted in a specific colour according to the colour-coding scheme. You can verify that the AI got everything correct and scroll down a bit further to see the final product. The first half is a concise visual job description that immediately conveys what you, the employer, are looking for in terms of all the attributes you provided. The second half is a more in-depth description that provides more detail as to what you are looking for in potential candidates. You can then copy the product and use the Twitter or Facebook buttons to immediately post your job description to either social media platform.
🦋NLP Model
Accuracy
After testing on a set of 10 sentences, each consistently containing at least 7 of the 9 attributes, the Wit.ai model achieved an outstanding 100% accuracy rate on 8 of the 10 phrases (around 94% overall).
Training
Training data was generated using a python script and was sometimes slightly altered manually to provide more variety in sentence structures and attributes. The point of using a data-generator script is to save time and fit more training data into the model before the deadline, as well as to reduce my human bias of not exposing the model to as large a range of sentences, attributes and keywords as a computer script can, since I am prone to frequently using similar keywords and attributes out of habit.
Implementation
The Wit.ai model receives the sentence, then analyzes it and identifies which attributes are present. For each attribute present, the model lists which substrings in the sentence fall under that attribute, then returns everything as JSON. Recroot then uses javascript to store the JSON values in separate variables and uses logic and randomness to generate the entire job description.
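The attribute-to-text step could be sketched as follows (Recroot uses javascript; this is an illustrative Python sketch, and the attribute names and templates are hypothetical, not Recroot's actual ones):

```python
# Walk a Wit.ai-style entity dict and assemble a job-description string,
# one template per recognized attribute.

TEMPLATES = {
    "skills": "Skills required: {}.",
    "location": "Based in {}.",
    "min_experience": "At least {} of experience.",
}

def build_description(wit_entities):
    lines = []
    for attribute, template in TEMPLATES.items():
        values = [e["value"] for e in wit_entities.get(attribute, [])]
        if values:  # only include attributes the model actually found
            lines.append(template.format(", ".join(values)))
    return " ".join(lines)

entities = {
    "skills": [{"value": "Python"}, {"value": "SQL"}],
    "location": [{"value": "London"}],
    "min_experience": [{"value": "3 years"}],
}
print(build_description(entities))
# Skills required: Python, SQL. Based in London. At least 3 years of experience.
```

Missing attributes simply drop out of the output, which mirrors how Recroot only renders the categories the model detected.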
🔮Future Ideas
Increase number of trained attributes
Continuously improve accuracy of model
Expand to other functions using the data like web-scraping
Create a more sophisticated training data generator script potentially using NLP
Make the text-based job description-creator more intelligent (maybe with AI?)
What's next for Recroot
What I think the future holds for Recroot is continuous improvement of the AI's accuracy, and this can only be achieved through more and more data, which I believe will be primarily collected through user's descriptions on the site as well as from a far more sophisticated sentence generator that might run using NLP too. To add on, I really want to continue expanding the horizon of Recroot in terms of not purely generating job descriptions, but also scraping sites like LinkedIn and returning matching people to users on the site, which was my original idea that didn't come to fruition. Lastly, it would be great if I could continue increasing the versatility of Recroot in terms of the attributes it recognizes: some to-be-employed attributes off the top of my head are timezone difference (for remote workers or startups aiming to collaborate from different countries), the exact degrees people have completed, and more.
How I built it
The main chunk of the project/website is built with standard web development languages (html, css, js), and this was very straightforward to make. The other large section is the actual AI, which was trained entirely with Wit.ai throughout the 9 days I spent making the project; the accuracy of the AI is very good (currently over 90%). Furthermore, since I didn't want to waste time creating training data/descriptions for the model by hand, I wrote a python script that randomly generates sentences using a bit of logic, randomness, and arrays of popular terms for each attribute, e.g. for companies, many of the most well-known enterprises.
Challenges I ran into
Initially the idea of Recroot was to actually implement web-scraping, yet 3 days before the project was due I realized I wasn't able to run scraping scripts on the client-side which meant I had to shift focus onto an entirely new idea. Luckily I managed to perform this transition effectively and still create a product that can have a lot of impact on many people. Another circumstance to note is that I only learned of the hackathon less than 2 weeks before the due date, meaning I really had to prioritize my time effectively which leads into the next point.
What I learned
As mentioned earlier, I have truly refined my time-management skills on this project due to my tight time constraints, yet I also learned how much time and energy you need to pour into a project before you start seeing tangible results. For me, that meant spending 7 hours every day after school solely working on training the NLP model, developing the website, failing repeatedly, yet simultaneously learning so much more about the tools and software that I use. Examples of this include learning Node.js to a proficient level along the way, even though I didn't implement it in the final outcome of my project.
Accomplishments that I'm proud of
The single achievement I'm most proud of is actually ultimately finalizing the project. Although it doesn't sound like much, the fact that I have dedicated so much time and have converted it into a functioning website that I see value in means a lot to me and only encourages me to continue developing interesting and practical software. It is also worth mentioning that I have endured so many errors along the way as all developers do, and as it is also my first ever large project developed, I'm really proud that I was able to manoeuvre through all the bugs and malfunctioning pieces of code to, in the end, reach a great working project.
Built With
css3
github
html5
javascript
python
random
wit.ai | Recroot.io | Recroot is a tool that automates and simplifies the recruiting process using natural language processing by extracting vital information in a sentence and converting it into an entire job description. | [] | [] | ['css3', 'github', 'html5', 'javascript', 'python', 'random', 'wit.ai'] | 28 |
10,492 | https://devpost.com/software/regimenai | Inspiration
What it does
RegimenAI
is a voice-enabled fitness & exercise app that features natural language interactions within the
Wit.ai
platform.
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for RegimenAI
Try it out
bitbucket.org | RegimenAI | RegimenAI is a voice-enabled fitness & exercise app that features natural language interactions within the Wit.ai platform. | ['Warp Smith'] | [] | [] | 29 |
10,492 | https://devpost.com/software/voice-i28j3h | Screenshot
It all started with a story. Once upon a time, a 12-year-old girl named Fani woke up one morning with her sheets all stained with blood. She was home alone and didn't know what to do. In such a situation it is very difficult to understand what is going on, and several unanswered questions arise. This is why we chose to work on a chatbot able to respond to the concerns of women who have questions about their body, their intimacy, their feelings, their life as a woman: an intimate advisor.
This mobile application was made in React Native with a back end running on Node.js, connected to the natural language platform wit.ai.
This project allowed us to perfect our skills as developers, but also our knowledge of women. It was a pleasure.
Built With
express.js
node.js
react-native
Try it out
gitlab.com
gitlab.com | My Intimate Advisor | The idea behind this project is to provide the girl with credible advice on sensitive and sometimes taboo subjects such as sexuality, painful periods and much more relating to her privacy. | ['Lino004', "Derrick M'PO", 'Patricia konan', 'yannick yapo'] | [] | ['express.js', 'node.js', 'react-native'] | 30 |
10,492 | https://devpost.com/software/botwit | The layout
Hi
Hello there
Inspiration
Ever since I came across AI-assistants like Google Assistant, Siri, Alexa, etc. I have wanted to try and build one myself. When I found out about Wit.ai, I finally got the opportunity to do the same.
What it does
It's a simple chatbot that greets you.
How I built it
I used python and flask to build the app, and I have hosted it on heroku.
What I learned
I learned a lot about Wit.ai and how simple it is to use for projects. I think I will keep using Wit.ai for my future projects from now on. It's one of the best NLU frameworks I have ever encountered.
What's next for BotWit
Currently, BotWit doesn't do much. So there is everything to do in the future.
Built With
flask
python
witai
Try it out
botwit.herokuapp.com | BotWit | My project is a simple chatbot built using Wit.ai. | ['Daksh Rawat'] | [] | ['flask', 'python', 'witai'] | 31 |
10,492 | https://devpost.com/software/ai-prediction | Inspiration
AI is a relatively new branch of computer science, but the term 'artificial intelligence' is not precise, as computer programs don't have real intelligence; machine learning is a more accurate description. I try to use my improved SVM module to let AI tackle tasks such as analyzing how the dinosaurs went extinct, which can't be solved by human scientists.
What it does
My new AI model uses enhanced data analytics to explore areas that are not tangible to humans, then produces useful data sets: the geographical change between the Jurassic period and today, the climate change, the earth's orbit within the solar system, even the position of the solar system within the Milky Way. It could address many other problems too, such as how to enable humans to live past 200 years by manufacturing stem cells.
How I built it
Challenges I ran into
I joined the hackathon a bit late, so I didn't have enough time to build the whole app, and unfortunately I didn't find any teammates.
Accomplishments that I'm proud of
What I learned
What's next for AI prediction
Hopefully I can find people with strong AI development skills and similar interests to help improve my project; AI should bring our world into a new era.
Built With
wit.ai | AI prediction | Use AI to analyze problem which can't be solved by science, like how did dinosaur extinct. | ['David Yu'] | [] | ['wit.ai'] | 32 |
10,492 | https://devpost.com/software/mrs-career-wise-f3j9lw | Note: Works best on Chrome
Demo -
Link1
,
Link2
Inspiration
The news of my friends losing out their jobs due to COVID-19 was heartbreaking.
Such posts flooded LinkedIn, highlighting how many companies have either fired their existing workers,
or revoked the offers of new hires.
About to graduate in these uncertain times, looking for jobs has become even more difficult.
It has become much more important to prepare thoroughly for your interview processes, as the competition rises stiffly with a growing rate of unemployment.
With hundreds of resources on the web, it can be overwhelming to pick the best one and get started with interview prep.
What it does
Mrs. Career Wise helps you prepare for your next interview at tech giants like Facebook, Microsoft, Google, Amazon for various roles like Software engineering, Testing, Product Management, etc.
Features
Prepare for leading Tech giants
Tech questions - Data structures, Data Science, Machine Learning, DevOps, Product Management ..
Interview tips.
Analytics - Keep track of your progress.
How I built it
1) With pen and paper, listing out all the possible ways a user might interact with Mrs. Career Wise. This helped me enumerate the intents and entities.
2) Creating a Knowledge base of interview questions asked by various tech giants for different roles.
3) Creating a wit.ai python client.
4) Creating a flask server
5) Using Plotly.js to plot graphs for the user's progress.
6) Deploying it over Glitch, Heroku
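Once Wit.ai has parsed a message, the Flask server still has to route the detected intent to the right slice of the knowledge base. A hedged sketch of that routing step (the intent and entity names here are invented for illustration, not the app's real schema):

```python
# Route a parsed Wit.ai response to a handler that serves interview
# questions from an in-memory knowledge base.

QUESTION_BANK = {
    ("facebook", "data-structures"): ["Reverse a linked list.",
                                      "Detect a cycle in a graph."],
}

def route(wit_response):
    intents = wit_response.get("intents", [])
    if not intents:
        return "Sorry, I didn't catch that. Which company are you preparing for?"
    # Wit.ai returns intents ranked by confidence; take the top one.
    intent = max(intents, key=lambda i: i.get("confidence", 0))["name"]
    if intent == "get_questions":
        company = wit_response["entities"]["company:company"][0]["value"]
        topic = wit_response["entities"]["topic:topic"][0]["value"]
        questions = QUESTION_BANK.get((company, topic), [])
        return questions[0] if questions else "No questions for that topic yet."
    return "I can help you practice interview questions."

resp = {
    "intents": [{"name": "get_questions", "confidence": 0.99}],
    "entities": {
        "company:company": [{"value": "facebook"}],
        "topic:topic": [{"value": "data-structures"}],
    },
}
print(route(resp))  # Reverse a linked list.
```

In the real app this handler would also record which questions the user has seen, feeding the Plotly progress graphs.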
Challenges I ran into
The biggest challenge was creating the knowledge base. There is no API that provides such data, so I set out to create one of my own.
Also, going it alone may not be the best idea.
Accomplishments that I'm proud of
I am glad I was able to create a product that would be immensely helpful to people.
What I learned
Using an NLP engine!
Chatbot design - handling intents, entities, contexts...
Creating a complete product from scratch
Caring for user's data privacy
What's next for Mrs. Career Wise
The next step would be to gather a lot of feedback from users.
Expand the Knowledge base to cover more types of questions.
Cover more types of jobs, not just Software-based.
Gamify the process.
Detailed progress monitoring.
Leaderboard - comparing with peers.
Built With
flask
glitch
heroku
jquery
plotly
python
wit.ai
Try it out
careerwise.glitch.me
career-wise.herokuapp.com | Mrs. Career Wise | Ace your next Tech interview with Mrs. Career Wise. Amidst historic layoffs, it's best to be ready for any scenario. | ['Pankaj Kumar'] | [] | ['flask', 'glitch', 'heroku', 'jquery', 'plotly', 'python', 'wit.ai'] | 33 |
10,492 | https://devpost.com/software/octo-xcai0g | Logo
Home
Pitch
What it does
Octo is an all-in-one trip planner and navigation tool that connects to third-party services to help users get detailed information such as routes, estimated trip fare, weather forecast, incident reports, and the safety score rating of their desired location, with the help of wit.ai. The third-party services include Wit.ai, Google Directions API, Google Geocode API, Here.com Traffic API, and so on. Octo works in every country supported by the integrated third-party services. Octo uses the Web Speech API to collect audio from users and convert it to text; users can also type the message as text. The text is then sent to wit.ai for natural language processing. Users can filter results based on several criteria to fetch relevant information.
How it works
Visit https://octo.tinylabs.app
Tap the microphone button to start recording.
The query result is then displayed.
Features
Trip planner:
It provides the best mode of transportation available between a specified origin and destination, taking into consideration the weather conditions, historical traffic, distance, estimated time of arrival, and transport fare. It also provides alternative routes for the specified transport modes. Route information is fetched using the Google Directions API.
Test Phrase:
Find my way to Maryland Mall Lagos from Mutual Alpha Court Lagos.
Results can also be filtered based on:
The Mode of Transport ("bicycling", "driving", "walking", "bus", "train")
Test Phrase
: Find my way to Maryland Mall Lagos from Mutual Alpha Court Lagos via
bus
.
The Distance (shortest or longest distance)
Test Phrase
: Recommend the shortest distance to Maryland Mall Lagos from Mutual Alpha Court Lagos.
The Traffic Condition (lesser or more traffic)
Test Phrase
: Recommend a route with
lesser traffic
from Mutual Alpha Court Lagos to Maryland Mall Lagos.
The Duration (Slowest or Fastest route)
Test Phrase
: Recommend the
fastest route
from Mutual Alpha Court Lagos to Maryland Mall Lagos.
The Transport Fare (Cheapest or Expensive Transport Mode)
Test Phrase
: Recommend the
cheapest transport mode
from Mutual Alpha Court Lagos to Maryland Mall Lagos.
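The filters above (shortest distance, lesser traffic, fastest, cheapest) all reduce to picking one route out of the candidates returned by the directions API. A sketch of that selection step, with hypothetical field names standing in for the real API response:

```python
# Given candidate routes, pick one according to the user's spoken filter.

ROUTE_FILTERS = {
    "shortest": lambda r: r["distance_km"],
    "fastest": lambda r: r["duration_min"],
    "cheapest": lambda r: r["fare"],
    "lesser traffic": lambda r: r["traffic_delay_min"],
}

def pick_route(routes, criterion):
    key = ROUTE_FILTERS.get(criterion)
    if key is None:
        return routes[0]  # no recognized filter: fall back to the first route
    return min(routes, key=key)

routes = [
    {"mode": "bus", "distance_km": 12.0, "duration_min": 55,
     "fare": 1.5, "traffic_delay_min": 10},
    {"mode": "driving", "distance_km": 9.5, "duration_min": 25,
     "fare": 6.0, "traffic_delay_min": 18},
]
print(pick_route(routes, "cheapest")["mode"])  # bus
print(pick_route(routes, "fastest")["mode"])   # driving
```

Wit.ai's job in this flow is only to map the utterance to one of the criterion strings; the comparison itself is plain application logic.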
Additional commands:
To Open Driving Alternatives, Try Saying: “Open driving alternatives”.
To Close Driving Alternatives, Try Saying: “Close driving alternatives”.
To Launch Directions, Try Saying: “Open driving directions”.
To Launch Navigation, Try Saying: “Open driving navigation”.
Weather condition
: Users can fetch the weather condition of a specified location.
Test Phrase
: What’s the weather like in Lagos?
Incident report
: Users can fetch the incident report of a specified location using Here traffic API. This info includes the criticality of the incident (major/minor) and the incident type (Accident, Congestion, Disabled Vehicle, Road Hazard, Construction, Road Closure etc).
Test Phrase
: What's the incident report in Waterloo Station?
Criticality Filter (Major or Minor):
Test Phrase
: List out major incidents in Waterloo Station.
Tourism
: Octo recommends tourist attraction places based on a specified location. It also tells the user the Safety Score rating and weather condition of the location before embarking on a journey to the location.
Test Phrase
: Tourism Locations in London.
Safety Score
: Users can also fetch the safety score rating of a specified location.
Test Phrase
: What's the Safety Rating in London?
How we built it
Tools Used:
Frontend UI: Vue JS, SCSS
Backend: Sails JS
Database: Redis
Natural Language Processing: Wit AI, Dialogflow
External APIs: Here.com Traffic API, Google Direction API, Amadeus API, Open Weather API.
Challenges we ran into
We started a bit late, which made it impossible to integrate Uber, Lyft, and other ride-hailing APIs. We hope to integrate them in the future.
What's next for Octo
Integrate Uber, Lyft, and other ride-hailing APIs
Introduce user authentication to provide a more personified experience for users
Built With
amadeus
dialogflow
google-directions
here-traffic
node.js
openweathermap
redis
sails.js
socket.io
vue
wit.ai
Try it out
octo.tinylabs.app | Octo | All in one navigation and travel voice assistant. | ['adeoluwa akinsanya', 'Emeka Mba'] | [] | ['amadeus', 'dialogflow', 'google-directions', 'here-traffic', 'node.js', 'openweathermap', 'redis', 'sails.js', 'socket.io', 'vue', 'wit.ai'] | 34 |
10,492 | https://devpost.com/software/facebook-ai-fi5l8z | Stockeeper welcome messages with Quick Replies
Stockeeper keeps tracks of your investment portfolio
Stockeeper recognises companies from human voice
Stockeeper stocks selection with Quick Replies
Stockeeper shows our stock's performance
Stockeeper gives intelligent investment advice
Inspiration
We invest in stocks to grow our wealth, and there is a variety of brokers that offer such services. When we have invested with more than one broker, it becomes a challenge to keep track of our investment portfolio.
By harnessing the power of Wit.ai, we have decided to build an easy-to-use chat bot that assists individuals with portfolio management and performance tracking. The chat bot ideally provides a natural and fun way for us to organize our investments, making our book-keeping easy and accessible.
What it does
Stockeeper allows users to quickly organise their stock investments and keep track of their overall position. Simply tell Stockeeper the orders you have made by sending a text or leaving a voice message, and he will provide analytics such as stock allocation, overall P&L, daily portfolio performance, and so on.
How I built it
Stockeeper sits on a Heroku application server. The brain of Stockeeper was written in Python, which interacts with Wit.ai and Facebook Messenger during the conversation. An AWS S3 instance hosts generated images, and Redis caches useful information between sessions. Financial market data extracted from Yahoo Finance is analysed together with users' trading portfolios.
Challenges I ran into
At the start of the project, it was easy to generate responses for sentences with at most one entity. However, for tasks that required more than one entity to complete, users often do not include all the information in one sentence, so we needed to prompt them for the missing information afterwards. For example, a user may say "I bought 100 shares of Facebook yesterday"; Stockeeper would then have to ask the user for the executed price while remembering the context of the previous sentence. Until we made good use of the State Machine design pattern and Redis to store relevant state information, we struggled to maintain this dialog flow.
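The slot-filling flow described here can be sketched as a small state machine (a simplified illustration; the real bot's states, slot names, and Redis persistence are richer than this):

```python
# A dialog object that collects an order's required slots across turns,
# asking for whichever slot is still missing.

class OrderDialog:
    REQUIRED = ("ticker", "quantity", "price")

    def __init__(self):
        self.state = "idle"
        self.order = {}

    def handle(self, entities):
        # Merge whatever Wit.ai extracted from this turn into the order.
        self.order.update({k: v for k, v in entities.items() if v is not None})
        missing = [slot for slot in self.REQUIRED if slot not in self.order]
        if missing:
            self.state = "awaiting_" + missing[0]
            return f"Got it - what was the {missing[0]}?"
        self.state = "idle"
        return (f"Recorded: {self.order['quantity']} "
                f"{self.order['ticker']} @ {self.order['price']}")

bot = OrderDialog()
print(bot.handle({"ticker": "FB", "quantity": 100}))  # asks for the price
print(bot.handle({"price": 268.5}))                   # completes the order
```

In production the `order` dict would live in Redis keyed by the Messenger user id, so the partial state survives between separate webhook calls.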
Accomplishments that I'm proud of
We are proud of implementing a solution that provides efficiency in managing our stock investment. Also, being able to train Stockeeper understanding voices with Wit.ai Speech API is a great accomplishment. In addition, we have made a good start in using state machine design pattern which helps maintain a natural dialog flow when users interact with Stockeeper.
What I learned
Wit.ai is a powerful tool that helps extract key information from unstructured text and voice data for building applications. However, we learnt that it is very important not to over-rely on Wit.ai. Striking a balance between what our application should handle and what Wit.ai can predict is important. For example, when a user says "two eight zero zero", Wit.ai won't distinguish whether this refers to a Hong Kong stock ticker (2800.HK) or a quantity of 2,800 shares. This is very fair, because we didn't provide any prior information to Wit.ai, and only our application knew the full context. In such circumstances, we applied the Quick Replies messenger feature to solve the problem. Quick Replies provides a good user experience by pre-filtering the available options for users. In our application, we provided users a selection of tickers to choose from when Stockeeper is expecting a stock in the next response.
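The Quick Replies workaround amounts to building a Messenger Send API payload whose buttons pre-filter the ambiguous options. A sketch (the payload shape follows the Messenger Send API; the recipient id and ticker values are illustrative):

```python
# Build a Quick Replies payload asking the user which ticker they meant.

def ticker_quick_replies(recipient_id, candidates):
    return {
        "recipient": {"id": recipient_id},
        "messaging_type": "RESPONSE",
        "message": {
            "text": "Which stock do you mean?",
            "quick_replies": [
                {"content_type": "text", "title": t, "payload": f"TICKER_{t}"}
                for t in candidates
            ],
        },
    }

payload = ticker_quick_replies("1234", ["2800.HK", "0700.HK"])
print(payload["message"]["quick_replies"][0]["title"])  # 2800.HK
```

Tapping a button sends the chosen `payload` string back to the webhook, so the bot receives an unambiguous ticker instead of free text.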
What's next for Facebook AI
Stockeeper is a proof of concept that demonstrates its capability of managing finance-related CRUD tasks. The next focus for Stockeeper will be providing more analytical power: we want Stockeeper to trigger price alerts based on trading indicators such as MACD, Bollinger Bands, RSI, etc. Ideally, we would also like to include as many investment assets as possible, simply because users don't just invest in stocks.
Built With
facebook-messenger
python
redis
s3
wit.ai
Try it out
m.me | Stockeeper | An easy-to-use chat bot that assists individuals with portfolio management and performance tracking | ['Chun Tung Wong', 'Sarah Cheung', 'Iris Rodriguez', 'Leo Shek'] | [] | ['facebook-messenger', 'python', 'redis', 's3', 'wit.ai'] | 35 |
10,492 | https://devpost.com/software/customization-of-house-before-construction | Unity and Wit integrated to move the cube from one place to other(Caught at moving!!)
Inspiration
I was inspired to build this to help my uncle plan his new house.
What it does
It gives an exact 3D view of the house before construction, driven by voice with Wit.ai.
How I built it
Built with Wit.ai, Unity and Blender.
Challenges I ran into
Integrating Wit.ai with Unity to provide the exact details required.
Accomplishments that I'm proud of
We can customize the house whichever way we need it to be.
What I learned
Integration, training the voice model and the AI to act accordingly, and making HTTP requests to provide the communication.
What's next for Customization of house before construction
We are going to make it possible to visualize the house virtually and make the customization more impressive.
Built With
blender
c#
unity
wit.ai | Customization of house before construction | We get only 2D plans before construction but a real time is felt later. To avoid that and customize our building with Wit.ai voice is added that helps to build our house virtually before construction. | ['Danush Ravichandran', 'Harjithaa Rajavel', 'RAJALAKSHMI M', 'Kalai Selvan'] | [] | ['blender', 'c#', 'unity', 'wit.ai'] | 36 |
10,492 | https://devpost.com/software/interchat | Sign-up/Login Page
Main view for candidates
Create job positing
Application report (analysis with Wit.ai)
Interview process for candidates
Inspiration
Picture this: you’re a recruiter who just created a job posting for your company. Within a few days, you receive hundreds of applications. As you’re sorting through them, you see that many of the applications don’t even meet the job requirements! You become exhausted as you read and begin to zone out. You have to set up the next round of interviews within a short period of time.
What if we could prevent this whole situation? What if there was a more efficient way to get through the first round of applicants of a job?
Introducing
InterChat!
InterChat is a web application that allows recruiters to create job listings and receive a ranked list of the people who applied. The information that the recruiter is looking for is extracted from the responses through Wit.ai.
What it does
From the business/recruiter's end, simply create a job listing. Enter all the criteria that the position calls for, including location, years of experience, degree, skills, and more. Once completed, your posting will be visible to all candidates using InterChat.
From the candidate's end, they will see the job listing and apply to it. They will answer a series of questions in a chatbot. Their responses help us determine whether they meet the criteria that the job posting is searching for. The information that the recruiter needs is extracted from the responses through Wit.ai (e.g., how many years of experience a person has, or what degree he/she holds). Moreover, the sentiment of each response (positive, neutral, or negative) is also obtained. Candidates are then ranked based on who is the most enthusiastic and qualified for the position.
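The ranking step could be sketched like this (a hypothetical scoring scheme for illustration; InterChat's actual weighting of entities and sentiment may differ):

```python
# Score each candidate by matched skills, an experience threshold, and a
# sentiment bonus, then sort descending to produce the ranked list.

SENTIMENT_BONUS = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}

def score(candidate, required_skills, min_years):
    s = len(set(candidate["skills"]) & set(required_skills))
    if candidate["years_experience"] >= min_years:
        s += 1
    s += SENTIMENT_BONUS.get(candidate["sentiment"], 0.0)
    return s

def rank(candidates, required_skills, min_years):
    return sorted(candidates,
                  key=lambda c: score(c, required_skills, min_years),
                  reverse=True)

applicants = [
    {"name": "A", "skills": ["python"], "years_experience": 1,
     "sentiment": "neutral"},
    {"name": "B", "skills": ["python", "react"], "years_experience": 3,
     "sentiment": "positive"},
]
ranked = rank(applicants, required_skills=["python", "react"], min_years=2)
print([c["name"] for c in ranked])  # ['B', 'A']
```

The inputs here are exactly what Wit.ai hands back from the chatbot answers: extracted entities (skills, years, degree) plus a sentiment trait per response.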
How I built it
The back-end was built using Node.js with MongoDB as the database. Wit.ai was used to create utterances that would help us extract the key information (i.e., entities) we are looking for from the candidates' responses.
The front-end was built as a React application. Moreover, Ant Design, a React UI library was used to enhance the overall user experience.
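The ranking step can be sketched in Python (the real backend is Node.js, so this is purely illustrative). The `entities`/`traits` layout mirrors Wit.ai's /message response shape, but the entity key, the skill values, and the weighting formula are assumptions, not InterChat's actual scoring:

```python
def score_application(wit_response, required_skills):
    """Score one candidate answer from a Wit.ai /message response.

    Adds a point for each required skill found among the extracted
    entities, plus a bonus weighted by the built-in wit$sentiment
    trait. (Hypothetical weighting, not InterChat's actual formula.)
    """
    score = 0.0
    # Wit.ai keys entities as "<name>:<role>"; each match has a "value".
    entities = wit_response.get("entities", {})
    found = {e["value"].lower() for matches in entities.values() for e in matches}
    score += sum(1.0 for skill in required_skills if skill.lower() in found)

    # wit$sentiment is a built-in trait: positive / neutral / negative.
    sentiment = wit_response.get("traits", {}).get("wit$sentiment", [])
    if sentiment:
        top = sentiment[0]
        bonus = {"positive": 1.0, "neutral": 0.5, "negative": 0.0}
        score += bonus.get(top["value"], 0.0) * top["confidence"]
    return score
```

Ranking the candidate list then reduces to sorting applications by this score in descending order.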
Challenges I ran into
While we ran into multiple challenges, the most significant challenge was setting up the Wit.ai API connection to the application. This process was challenging due to the fact that we were unfamiliar with the documentation. However, having samples within the documentation allowed us to learn how to work with the API and successfully implement it into InterChat.
Accomplishments that I'm proud of
We are proud that the overall user experience functions smoothly. Moreover, we are glad that we were successfully able to achieve our goal of using Wit.ai to form rankings of the candidates' answers.
What I learned
We learned a lot about using the various technologies that were needed to create this project. Specifically, it was our first time using Wit.ai and it was a great experience as it helped us improve our understanding of how Machine Learning works.
What's next for InterChat
While there are plenty of aspects of InterChat that we could consider adding/improving, we would like to add an analytics page for the recruiter so he/she could see different data regarding the applicants (e.g, graphs that show the average number of years every applicant/candidate has, the most common degree, etc.).
Built With
ant.design
javascript
mongodb
node.js
react
wit.ai
Try it out
github.com | InterChat | InterChat is a web application that allows recruiters to create job listings and receive a ranked list (ranking created by using Wit.ai) of the people who applied. | ['Sayma Khan', 'Shrish Mohapatra'] | [] | ['ant.design', 'javascript', 'mongodb', 'node.js', 'react', 'wit.ai'] | 37 |
10,492 | https://devpost.com/software/fghfgh | ChatBot Logo for Cue
Sample Chat Flow Diagram
Inspiration
With 2 members of our team being full-time students, we’ve felt the effect of COVID driving education remotely, and it’s not just us. 87% of the global student population has been affected by school closure. This means that students find it harder to engage with the content, to feel accountable, and to succeed academically.
Millions of students already use Facebook, offering us the unique opportunity to help students succeed in an easy-to-use and individualized way.
What it does
Cue is every student’s virtual assistant. It updates you on your pending assignments, tests, and lectures while helping you log progress on tasks to have the accountability you would in a classroom.
How we built it
We store users and priorities remotely in a SQL database. We trained nearly 100 unique utterances, entities, intents, and traits on Wit.ai to create a seamless user experience. We also attempted to use Wit to integrate voice processing, although we ran into some issues working with the APIs.
Challenges we ran into
Some challenges we ran into involved having seamless collaboration despite all working remotely and being in different timezones. We solved this through regular standups as well as using Glitch to help us collaborate on code in real-time.
Accomplishments that we're proud of
We're especially proud of our ability to get a strong proof-of-concept working quickly! Wit made cutting-edge NLP technologies fun to use and easy to implement into our final product.
What we learned
None of us had prior experience working with Wit.ai or the API, so there was a lot to learn and play around with through our wit app and through the documentation.
What's next for Cue Bot
We hope to implement a teacher-facing side of the app that allows teachers to assign new priorities to students, along with the projected time to complete them. We will also leverage Facebook Messenger to allow a safe but informal way for students to interact with each other and with teachers. Finally, we hope to integrate notifications for upcoming due dates to further optimize student workflows.
Built With
javascript
sql
wit.ai
Try it out
m.me
glitch.com | Cue Bot | An innovative approach to remote student accountability | ['Piyush Dubey', 'Sara Liu', 'Aditi Misra'] | [] | ['javascript', 'sql', 'wit.ai'] | 38 |
10,492 | https://devpost.com/software/happiness-seeker-chatbot | Landing page on Facebook
Inspiration
In 2020, due to the coronavirus pandemic, our lives completely changed. Since I was finishing my graduation from Arizona State University, I was living in the US away from my family, who couldn't travel to the US for my graduation. It's very difficult to express emotions to people who might also be feeling sad because of the same situation. Also, due to the pandemic, the job market has become static, and I've been without a job for months now while despairing over my visa situation. During this time, the best thing to do is to keep oneself busy, and that is what I've been trying to do in recent days. However, it's not easy to come up with new activities while in a state of despair and anxiety. It then occurred to me that an app that can suggest actions or activities based on what I am feeling would be extremely useful, particularly for people in a similar situation to mine. Almost all recommendation systems are based on previous likes and consumption; getting suggestions based on your current emotion has a sense of intimacy that is not available in existing systems. Then I found out about this hackathon, managed to convince my friend from India of the idea, and thus began the journey of the happiness-seeker bot.
What it does
Happiness-seeker is a chatbot that uses the powerful Natural Language Processing of Wit.ai to understand the emotion of the user from their responses, then suggests useful actions to either lift the user's mood when it's down or to enjoy and celebrate their achievements. It can suggest nearby restaurants, movies, music, vacation plans, or nearby activities based on the mood. It has more personal action suggestions like calling your partner, speaking to your family, or speaking to your friends. It can also detect medical emergencies and suggest nearby hospitals or emergency rooms. The moods are detected using a custom-built entity, get_emotions (wit.ai), and are validated further using the built-in trait wit$sentiment. The emotion can range from extremely positive, positive, neutral, negative, to extremely negative. Each range of emotions has its own action items, and the top five choices for the particular emotion are chosen and displayed in the carousel of the generic message template. For the purpose of the hackathon, the choice of actions is limited, and the attached links direct to their respective Facebook pages, with the exception of music, which opens the Spotify app. The user can then decide which particular action to choose to continue their online activity. The user can also re-speak their emotions to get better choices. Currently, the app supports both voice-based and text-based interactions with the messenger platform.
How I built it
The app was built leveraging the messenger platform and the webhook capabilities of a Facebook app. We created a custom Facebook page that serves as the UI for the chatbot. We connected it to a Facebook app using my developer's account, then added the messenger and webhook capabilities to the app. We built the app using Python and Flask. We added the necessary libraries and dependencies required to handle the voice interactions. The app doesn't connect to any database and hence doesn't store any user information; it only keeps the voice attachment temporarily for Wit.ai's speech API to process. We trained the wit.ai app by first collecting common utterances for a wide range of emotions from positive to negative. We spent time manually annotating the entities and traits. After training was completed, we used the Wit.ai speech and message APIs to communicate with the app. The interactive nature of the app was maintained by the pymessenger bot, which took care of structuring the payload for request and response. As development progressed, the utterances encountered by the app grew, and the model was retrained and validated.
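The emotion-to-action mapping described above might look like the following minimal Python sketch. The five emotion ranges follow the description, but the action lists here are illustrative placeholders, not the app's actual catalogue:

```python
# Map a detected emotion range to suggested actions. The ranges come
# from the custom get_emotions entity validated against wit$sentiment;
# the action lists below are illustrative, not the app's real data.
ACTIONS = {
    "extremely_positive": ["plan a vacation", "celebrate at a restaurant"],
    "positive": ["listen to upbeat music", "call your friends"],
    "neutral": ["watch a movie", "try a nearby activity"],
    "negative": ["speak to your family", "listen to calming music"],
    "extremely_negative": ["call your partner", "find a nearby hospital"],
}

def suggest_actions(emotion, limit=5):
    """Return up to `limit` suggestions for the detected emotion,
    falling back to the neutral list if the emotion is unrecognized."""
    return ACTIONS.get(emotion, ACTIONS["neutral"])[:limit]
```

Each returned action would then be rendered as one card in the carousel of the Messenger generic template.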
Challenges I ran into
Almost everything we encountered while building this app was completely new to us. We started learning from scratch about various kinds of web and mobile app development; ultimately, given the time frame, this was the most pragmatic approach we found for the purpose of the hackathon. The biggest challenge I faced was understanding how to handle audio files within the system. Android records audio as mp4, which was not recognized by the Wit.ai speech API. We then converted the .mp4 file to a .wav file, which ultimately worked. Another issue was making our app public: it requires a privacy policy URL, which we didn't know much about. However, almost all the challenges were an opportunity to learn more about the functionalities and features of wit.ai.
Accomplishments that I'm proud of
The biggest accomplishment was to integrate the voice attachments with the text-based system. We didn't have much idea about handling audio recording and hence it was quite a challenge. However, the greatest reward was when the app accurately recognized our utterances and gave the correct emotions and results.
What I learned
There are plenty of things that make this hackathon a huge learning experience. Firstly, we delved into a completely unknown territory of app development. We learned a lot about handling API requests, responses using python and flask, connecting webhooks, access apps through webpages, etc. Secondly, we got exposed to the wit.ai platform and all the fantastic functionalities it offers. We also learned about full-stack development and what it takes to upgrade a feature or add functionalities to existing systems. Lastly, we learned a lot about handling different kinds of files in python.
What's next for Happiness-Seeker chatbot
The future plan for the chatbot is to add more functionalities. Currently, there is a limited choice of actions for particular emotions, the action pool can be further extended. The current version has limited voice interactions, in the future callback functionalities can be added for the button template such that it can be interacted by voice commands. A better way of choosing the actions can be done by storing the user's preference when they encounter a particular emotion. It can also be extended to detect the emergency situation and can provide help to the user by contacting first-responders and emergency contacts.
Built With
facebook
facebook-messenger
flask
heroku
pymessenger
python
webhook
wit.ai
Try it out
m.me
www.facebook.com | Happiness-Seeker chatbot | A human-bot connection to lift your mood | ['kaushiki biswas', 'Sambarta Ray'] | [] | ['facebook', 'facebook-messenger', 'flask', 'heroku', 'pymessenger', 'python', 'webhook', 'wit.ai'] | 39 |
10,492 | https://devpost.com/software/rexana-the-robot-jx3lmz | UI / Dashboard
Hardware Components
Base Electronics
PyTorch Machine Learning (Object detection and Human Pose Estimation)
Inspiration
1) The idea of having a Voice Assistant like Alexa or Siri that could extend into the physical world for tasks around the house
2) Having an interactive robot for fun and immersive language education, she understands and can reply in Spanish
3) The Idea of having a personal robot that in the future as the ability to be a useful caregiver/ physical assistant
What it does
Rexana is a voice-activated personal assistant robot that currently does the following tasks:
Autonomous navigation around the house: using distance and location data (landmarks) from the 2D lidar, her cardinal/compass bearing, wheel encoders that track each wheel's distance, and object detection, Rexana knows where she is and can navigate around the house via voice commands. Why? Paranoid you left the oven on after leaving the house? Rexana can be accessed on my phone via a browser; I give a text or voice command to "go to the oven" and can then view the oven via the webcam. Forgot to water the plants? "Rexana, water the pot plant in the lounge room." Feeling lazy? "Rexana, bring me the Pringles."
Rexana has custom "hands" that can be switched for purpose fit tasks, watering can, magnets, grippers.
More demo here:
https://rexanapaperai.wordpress.com/demos/
As Rexana navigates around the house she takes note of detected objects (detection powered by PyTorch). I divided her viewpoint into left, straight, and right, so I can ask her about objects she can see. She stores data about each object (compass bearing, X,Y coordinates based on wheel encoders, and distance based on lidar), which allows her to recall objects or go to recently seen objects via voice or text command.
Using the above data I can also practice Spanish with her in a fun, immersive way by asking her what she can see: "¿Qué puedes ver aquí?" ("What can you see here?") "A la izquierda hay libros, a la derecha el televisor." ("On the left there are books, on the right the television.")
Programmatically training her hand movements is very tedious; using 3D human body pose estimation I can train her much faster and more intuitively to do tasks. She can copy my actions: waving, gestures, or picking up objects.
She also has a retro-inspired dashboard for monitoring, training and manually controlling.
How I built it
Build Blog:
https://rexanapaperai.wordpress.com/
Rexana is a physical robot made from scratch using 3D-printed parts, several plastic plant pots, 8 servo motors, a camera, wheel encoders, 2 DC motors, a 2D LIDAR, a magnetometer, and a Raspberry Pi onboard computer.
I used Pytorch Detectron2, 3D human pose estimation and experimented with Pytorch Geometric.
I used wit.ai for training the voice commands (originally I was experimenting with RASA and Azure LUIS; Wit.ai has by far the lowest barrier to entry, and the team has done a great job keeping it simple yet powerful).
Data is captured and formatted on the onboard Raspberry Pi computer, then sent to an AWS server for real-time inference over WebSockets, which returns the pose/detection/inference results.
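The capture-and-send step might package each frame like the following minimal Python sketch; the field names and the base64-over-JSON framing are assumptions for illustration, not the project's actual wire format:

```python
import base64
import json

def build_inference_request(frame_bytes, compass_bearing, distances):
    """Package one camera frame plus sensor context for the remote
    inference server. Raw JPEG bytes are base64-encoded so the whole
    payload can travel as a single JSON text frame over a WebSocket.
    (Illustrative framing; field names are assumptions.)"""
    return json.dumps({
        "frame": base64.b64encode(frame_bytes).decode("ascii"),
        "bearing_deg": compass_bearing,
        "lidar_distances": distances,
    })
```

The server side would decode the frame, run detection/pose estimation, and reply with a JSON frame of results over the same socket.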
Challenges I ran into
Her arm dimensions and joints are very different from a human's, so pose estimation is not very accurate (version 2 will be bigger and more closely resemble human joint positions and dimensions).
Powering the 8 arm servos + 2 DC motors and the onboard Raspberry Pi was an unexpected challenge (getting the correct voltage/amperage and decent battery life).
Accomplishments that I'm proud of
Working proof of concept!
What I learned
I burnt out 3 servo motors trying to get the arms working well, so I learned a lot about servo motor torque and how to power them.
Autonomous, human-sized and genuinely useful robots are achievable, although some of the functionality is basic or rough I was able to complete a proof of concept and the lessons learned and existing groundwork will make the next version significantly better.
What's next for Rexana the Robot
Adding 3 x micro vacuum cleaners and mop extensions so she can complete the "Vacuum Kitchen" / "Mop Kitchen" commands.
V2 bigger size, human dimension arms for better pose estimation, create docs, improve code and open-source.
Self annotation and improve automatic training by showing objects and giving names and locations.
More info and demos here:
https://rexanapaperai.wordpress.com/
Built With
google-web-speech-api
python
pytorch
raspberry-pi
tornado
websockets
wit.ai
Try it out
rexanapaperai.wordpress.com | Rexana the Robot | More than a voice assistant, this project is the foundation for physical household robot trained to autonomously complete basic tasks | ['Dan O'] | [] | ['google-web-speech-api', 'python', 'pytorch', 'raspberry-pi', 'tornado', 'websockets', 'wit.ai'] | 40 |
10,492 | https://devpost.com/software/basic-level-python-interview-bot |
Inspiration
People face difficulty when giving a technical interview, so to make interviewees confident and to help them practice some questions verbally, we designed this system.
What it does
It displays a question from basic python and then listens to the answer from the user as voice input and then displays whether the answer is correct or not.
How I built it
We trained our bot using Wit.ai and then built a Python program that displays basic Python questions and takes voice input from the user. That audio is converted to text and sent to Wit, which returns an intent based on the text.
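The answer-checking step, comparing Wit's returned intent against the one expected for the current question, can be sketched as a small Python helper. The response shape follows Wit.ai's JSON (an "intents" list sorted by confidence); the intent name and the 0.7 confidence threshold are illustrative assumptions:

```python
def grade_answer(wit_response, expected_intent, threshold=0.7):
    """Return True if Wit.ai's top intent matches the expected one.

    `wit_response` is the parsed JSON returned by Wit.ai, whose
    "intents" list is sorted by confidence. The 0.7 threshold is an
    illustrative choice, not the project's tuned value.
    """
    intents = wit_response.get("intents", [])
    if not intents:
        return False
    top = intents[0]
    return top["name"] == expected_intent and top["confidence"] >= threshold
```

Each question in the quiz would simply carry the intent name that counts as a correct answer.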
Challenges I ran into
We were new to Wit.ai and we had only 8 days to complete the whole project, from thinking up and refining the idea to the end product.
Accomplishments that I'm proud of
I'm now able to make any kind of project using wit.ai
What's next for Basic Level Python Interview Bot
A mobile application can be made in the future
Built With
pyaudio
python
wave
wit.ai | Basic Level Python Interview Bot | It's is a voice-enabled chatbot that helps users to practice basic Python interview questions. | ['Syed Hassan Ikram', 'MaazAbdulWahab'] | [] | ['pyaudio', 'python', 'wave', 'wit.ai'] | 41 |
10,492 | https://devpost.com/software/ketowit-a-voice-based-keto-diet-assistant | Your Keto diet advisor
Inspiration
A proper diet is an important part of leading a fit, healthy life. The food we eat directly affects our health and fitness. Therefore, it is important for everyone to eat proper food without compromising on taste. Especially during this coronavirus pandemic, with people mostly staying at home and unable to go out much, especially to places like gyms for working out, obesity rates increase. Obesity leads to a lot of health-related problems. So, people must follow a healthy diet regularly to avoid obesity, but not compromise on the taste of their food at the same time.
We came across a diet called the Keto diet that reduces sugar intake and burns body fat for the regular functioning of the body. So, people can have their favorite foods that are rich in fat and very low in sugar, and stay healthy at the same time. But there was no proper guide for it; the articles on the internet were too complex for a common person to understand and did not include products that readers could directly buy from the market. So, people following the Keto diet have a hard time searching for food products, reading the description of each product's contents, and comparing it with the Keto-safe content limits they read in internet blogs when they go shopping. This makes shopping, cooking, and ordering food at a restaurant take more time. Also, diet consultants charge a lot for a limited consulting time.
Therefore, we wanted to make something that people can use any time, get friendly suggestions that do not have too much nutrition jargon, talk to it just like the way they talk to their diet consultant, and use it easily for free.
What it does
KetoWit is a voice-based Keto diet assistant that helps people following the Keto diet with shopping, deciding what to cook or order at restaurants. It consists of three main functions (also called intents) for now:
It can help the user to check if a specific food product, dish, or ingredient falls under the keto diet.
Suggests Keto-friendly food products and brands that the user can shop for at the supermarket.
If the user wants to know the nutritional value of a specific food product, ingredient, or a dish, it can tell the user about it.
In this way, people who do not know much about nutrition and do not have much time for looking into food specifications can also use this for choosing what to have.
How we built it
At first, we found a dataset with a lot of food products, dishes and ingredients, along with their nutritional value. After that, we filtered all the food products, dishes, and ingredients that were Keto diet-friendly. Then, we uploaded 1000 of those filtered products to our hosted PostgreSQL database on the Heroku cloud platform.
After that, we created and trained the three intents on wit.ai and also built a Django-based web app. We integrated the WebKit speech recognition module in the front end so that whenever a user speaks something into the front end, it converts the speech to text and sends it to our Django-based back end. The Django-based back end would then make a request to wit.ai with the user's speech as the utterance in order to get the intent and the food entity name. Based on the intent, the Django app would execute a specific function that would query our food database with the food entity recognized from the user's speech to get the results. Then, the results would be put into a sentence form and sent back to the front-end where the Text to Speech engine would read out the result to the user. Later, we deployed the web app to Heroku.
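A minimal Python sketch of the intent dispatch described above: given the intent and food entity recognized by Wit.ai, a handler queries the food store and puts the result into sentence form. A plain dict stands in for the PostgreSQL table, and the intent and entity names are illustrative assumptions, not the app's actual schema:

```python
def handle_utterance(wit_response, db):
    """Dispatch a Wit.ai response to the right handler, as the Django
    backend does. `db` maps lowercase food names to nutrition dicts,
    standing in for the PostgreSQL table; intent names are examples."""
    intents = wit_response.get("intents", [])
    entities = wit_response.get("entities", {})
    foods = [e["value"] for matches in entities.values() for e in matches]
    if not intents or not foods:
        return "Sorry, I didn't catch which food you meant."
    intent, food = intents[0]["name"], foods[0]
    item = db.get(food.lower())
    if item is None:
        return f"I couldn't find {food} in my database."
    if intent == "check_keto_friendly":
        return f"{food} is {'keto-friendly' if item['keto'] else 'not keto-friendly'}."
    if intent == "get_nutrition":
        return f"{food} has {item['carbs']}g carbs and {item['fat']}g fat per serving."
    return "I can't help with that yet."
```

The returned sentence is what the front end hands to the Text to Speech engine to read out to the user.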
Challenges we ran into
We had to research thoroughly into the Keto diet and the nutrition attributes associated with it.
Dataset was hard to collect and process because of the huge number of attributes.
The Heroku cloud platform had limits on the database and app usage.
Accomplishments that we're proud of
Finishing our first project that uses a voice-based assistant whose utterances are trained on wit.ai, the number of lives that this project can impact, the ease of use that we achieved.
What we learned
Creating a voice-based assistant fast and easy.
Simplifying user experience.
Using wit.ai and facebook's developer platform.
Making AI-driven technologies that have an impact.
Using open-source and other free to use tools extensively in order to build impactful technologies.
What's next for KetoWit - A voice-based Keto diet assistant
We want to make it personalized for every user using it with options for a shopping to-do list, scheduling, diet tracking, and vegan specific diet recommendations. We also want to add more dishes, ingredients, and food products into our database so that the users can look up a wider range of food items.
Built With
css3
django
git
github
heroku
html5
javascript
pandas
postgresql
python
sqlalchemy
Try it out
ketowit.herokuapp.com
github.com | KetoWit - A voice based Keto diet assistant | KetoWit is a diet assistant that assists the people following Keto diet with shopping or cooking by letting them know information about a dish/food product/ingredient and giving food suggestions. | ['Suraj S Jain', 'Puneeth C', 'Sushranth Hebbar'] | [] | ['css3', 'django', 'git', 'github', 'heroku', 'html5', 'javascript', 'pandas', 'postgresql', 'python', 'sqlalchemy'] | 42 |
10,492 | https://devpost.com/software/freestyle-fitness | Start tracking metrics for current exercise
Stop tracking current exercise and save stats
Navigate to next exercise
Pick a random exercise
Call out which exercise you want to track
Leveraging wit.ai to discover new voice commands from app users
Inspiration
I was trying to find ways to motivate myself to do handstands daily and improve the time I can hold a handstand.
To do so, I started looking for a mechanism that can help me track the length of handstand holds, with minimal effort, so I can focus on practicing rather than log keeping. Clearly, the key was that the tracking needed to happen hands free.
Gyroscope on the phone was my first idea, but then I realized it’s not that practical. I considered using a chip that attaches to the foot, but then again, it is yet another device that needs to be charged, worn, and put somewhere so it won’t get lost.
When I thought I ran out of options, I realized that using voice could be promising. It’s hands free, doesn’t require any setup, and does not force me to attach something to my body. I also wanted something without that tax of having to say “Hey Siri/Google, start my app X” every time. And voilà, I think that resulted in the first fitness app of its kind that leverages voice commands to guide a workout, one exercise at a time.
What it does
Allows a user to talk to a mobile app to guide their workout:
Start tracking an exercise by saying "start". This will either start a timer or start counting reps.
Say "stop" to stop tracking.
Say the exercise name to pick the exercise, or say next, previous, random to navigate exercises
Say "faster" or "slower" to increase/decrease the speed of reps for an exercise
Based on what users say, the app leverages wit.ai to discover new voice commands (intents).
How I built it
The app is built for iOS using the Swift language, and with firestore as the backend. It leverages wit.ai to detect intent and discover new voice commands based on how users talk to the app.
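The command interpretation can be sketched in Python (the app itself is Swift, so this only illustrates the logic). The intent names are assumptions, while wit$number is Wit.ai's built-in number entity; this sketch also covers the "start 15" idea described later, where a number turns a start command into an auto-stopping one:

```python
def interpret_command(wit_response):
    """Turn a Wit.ai response into an (action, count) tuple for the app.

    count=None on "start" means track until the user says "stop";
    a number (e.g. from "start 15") means auto-stop after that many
    seconds or reps. Intent names here are illustrative assumptions.
    """
    intents = wit_response.get("intents", [])
    if not intents:
        return ("unknown", None)
    name = intents[0]["name"]
    numbers = wit_response.get("entities", {}).get("wit$number:number", [])
    count = int(numbers[0]["value"]) if numbers else None
    if name == "start_tracking":
        return ("start", count)
    if name in ("stop_tracking", "next_exercise", "previous_exercise", "random_exercise"):
        return (name.split("_")[0], None)
    return ("unknown", None)
```

The ("unknown", None) path is where unrecognized utterances get funneled back to Wit.ai for intent discovery.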
Challenges I ran into
There is no Swift SDK for wit.ai. I had plans to leverage more of the API, but decided to just focus on the messages API for intent discovery.
The voice recognition capabilities are all on the backend. It would be HUGE if one were available on the client side, on the device. That way, the functionality would work offline and, of course, run faster. Something like pocketsphinx.
When the app speaks, it's hard to get the mic to ignore speech coming from the speaker versus that coming from the user.
Also, it would have been great if wit.ai let me explore all the messages, instead of limiting me to seeing 10 and having to instantly either remove or train them in order to see more.
Accomplishments that I'm proud of
Being able to recognize intents from a "continuous stream" of voice as the user does a workout. The app focuses on recognizing commands of between one and five words.
The app is practical and one of the very first fitness app to leverage voice for workouts without weird wake up commands.
What I learned
Leveraging wit.ai gave me a very cool idea to "start 15", which would mean that tracking starts for 15 seconds or 15 reps and then it automatically stops on its own. This came directly from a user using the app and having their command funneled to wit.ai.
Users actually use commands that are not associated with an intent, and leveraging wit.ai helps me discover new ways (voice commands) that users could leverage to interact with the app.
The voice interface needs to be carefully designed, just like UX and chatbots. There is so much potential with voice-enabled apps that have not yet been tapped into.
Voice recognition does not work every single time, mainly due to the fact that it is open-ended, and not a known set of words/sentences. This gives a ton of flexibility but reduces accuracy/confidence.
What's next for Freestyle Fitness
Go beyond exercise tracking into programs and workouts. Leverage more machine learning to check form and alignment for different types of exercises (mainly handstands).
Thanks to the power of wit.ai, allow users to ask questions like "when did I do my best handstand?", "what is my handstand record?", "remind me to workout at 6 pm" etc..
Built With
firebase
firesbase
swift
wit.ai
Try it out
apps.apple.com | Handstand Quest: Freestyle Fitness App | Imagine doing a workout that dynamically adapts to your energy level, the time you have, and areas in your body that you would like to focus on. How, you might ask? By talking to the app! | ['Sara Farhat B'] | [] | ['firebase', 'firesbase', 'swift', 'wit.ai'] | 43 |
10,492 | https://devpost.com/software/test-t1ey0u | Landing page
Co-Pilot code editor
Generated code
Works on mobile too
Inspiration
As a developer who writes code in various programming languages, I sometimes forget how to write syntax in a particular language. I end up reading through documentation or searching Google to find the code I need, but doing that slows down development. What if there was a way to quickly insert forgotten code in your code editor without having to research? And what if you could simply speak to your code editor and tell it which code to insert? That's where Co-Pilot comes in!
What it does
Co-Pilot is an AI voice assistant that aims to help developers speed up software development time by simply speaking to their code editor in natural language.
With Co-Pilot, you can write code with natural speech commands such as "add a for loop" or "insert a try-catch block" rather than memorizing hundreds of hotkeys and syntax.
Co-Pilot helps you save time, Instead of typing long lines of code you can simply generate the code with natural speech in no time.
How I built it
Natural language:
I used Wit.ai for natural language processing by adding a few sample utterances that the user might say, such as:
Insert a try-catch block
Add a switch statement
Show me how to write comments in Javascript?
I then created a custom intent and labeled the entities in the utterances.
Frontend:
I used React to build the web application and added the speech recognition API. Using the Wit.ai API I make an HTTP request and send the spoken message which returns an intent and entities. I make another HTTP request to AWS which sends the intent and entities to the backend.
Backend:
In the backend, I used DynamoDB to store and fetch information via API Gateway. The code stored in the database is sourced from the Mozilla JavaScript Reference. The database is then queried to find the correct code based on the intent and entities received from the front end. The diagram below illustrates the process:
The client records their voice in the web application, the information is sent to Wit.ai, which returns an intent, and entities which are then sent to the backend.
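A minimal Python sketch of the backend lookup step: a dict stands in for the DynamoDB table reached via API Gateway, and the intent name, entity key, and snippet contents are illustrative assumptions (in the real app the snippets come from the Mozilla JavaScript Reference):

```python
# A dict stands in for the DynamoDB table queried via API Gateway;
# keys are (intent, construct) pairs and values are JS snippets.
SNIPPETS = {
    ("insert_snippet", "try-catch"): "try {\n  // ...\n} catch (err) {\n  console.error(err);\n}",
    ("insert_snippet", "for loop"): "for (let i = 0; i < n; i++) {\n  // ...\n}",
}

def lookup_snippet(intent, entities):
    """Find the code snippet matching the intent and the first
    recognized construct entity from a Wit.ai response, or None."""
    for matches in entities.values():
        for e in matches:
            snippet = SNIPPETS.get((intent, e["value"].lower()))
            if snippet:
                return snippet
    return None
```

A matching snippet is returned to the front end, which inserts it into the editor at the cursor.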
Challenges I ran into
Finding the correct way of sending voice data to Wit.ai with JavaScript was a challenge.
Accomplishments that I'm proud of
Building a tool that solves a problem I face: sometimes I spend months working in Java or Python, then when it's time to work in JavaScript I end up forgetting the JS syntax. I hope other developers find this helpful.
What I learned
I learned how to build with Wit.ai.
What's next for Co-Pilot
Next, I plan to create extensions of Co-Pilot for VSCode and other code editors.
Built With
amazon-dynamodb
react
wit.ai
Try it out
shielded-mesa-13986.herokuapp.com
github.com | Co-Pilot | Double your productivity with Co-Pilot. | ['Harry Banda'] | [] | ['amazon-dynamodb', 'react', 'wit.ai'] | 44 |
10,492 | https://devpost.com/software/eliza-is-a-mock-rogerian-psychotherapist-using-wit-ai | Demo Website usinh Wit.AI
Inspiration
Weizenbaum, Joseph "ELIZA - A Computer Program For the Study of Natural Language Communication Between Man and Machine
https://dl.acm.org/doi/10.1145/365153.365168
What it does
It is a simple app to help with teaching newcomers how to use Wit.AI. This was phase one of creating an Eliza program. It collects responses about how someone feels, and we train our model. The goal is to automate it like the classic Eliza program, with key questions.
How I built it
React JS simple form
Challenges I ran into
We ran out of time for the complete implementation, but the process helped us see how this initial app is a great way to teach people how to use Wit.AI.
To test with speech, a person can use the Chrome extension:
https://dictanote.co/voicein/
We are working to build our own cloud browser extension to allow voice.
Accomplishments that I'm proud of
Bringing the team together and getting everyone up to speed in 30 minutes on how to use Wit.AI
What I learned
How to decompose intents into functions.
What's next for Eliza, a mock Rogerian psychotherapist using Wit.AI
Complete the implementation.
Built With
materialui
node.js
react
wit.ai | Eliza is a mock Rogerian psychotherapist. using Wit.AI | Wit.AI to Simplify Weizenbaum, Joseph "ELIZA - A Computer Program For the Study of Natural Language Communication Between Man and Machine https://dl.acm.org/doi/10.1145/365153.365168 | ['Brandon Taylor', 'Zachary Lewis', 'Matt Stillwell'] | [] | ['materialui', 'node.js', 'react', 'wit.ai'] | 45 |
10,492 | https://devpost.com/software/vc4u-an-aid-for-the-blind | VC4U
Work Flow
Inspiration
Nearly 40 million people in India alone are visually impaired(285 million worldwide).
It is hard for them to do their own jobs. They find themselves dependent on someone almost always.
Devices that help the visually impaired people by scanning the environment and guiding them accordingly do exist but they are priced at several thousand dollars. The cost factor makes accessibility a question mark.
This inspired us to develop VC4U (We See For You) - an affordable device for the Visually Impaired
What it does
VC4U has 2 components - An APP and a Spectacle
The spectacle is embedded with a camera and a touch sensor. Whenever the user wishes to use the device he touches the sensor. This action triggers the app to listen to the voice command the visually impaired person is about to give. With NLP the speech data is processed and is converted to commands.
Here are a few commands (and their purpose):
-> Detect objects: Gets an image from the onboard camera in the spectacle, detects objects, and returns the result as voice feedback (sign boards, persons, traffic signs, trucks, and buses are a few of the classes). TFLite (TensorFlow Lite) and YOLO are used to achieve the task.
-> Where am I?: Gives the exact location of the user via voice feedback.
How we built it
For Capturing image ESP-CAM32 was used.
The captured image is sent via HTTP to its clients.
For NLP wit.ai was employed.
The App(Flutter based) uses YOLO - custom trained on Open Image dataset and was integrated with the app using tflite.
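Putting those pieces together, the dispatch step might look roughly like this (a sketch in Python; the camera URL, handler names, and reply strings are all illustrative, not the project's actual code):

```python
# Rough sketch of the app's dispatch loop. The ESP32-CAM serves the
# latest frame over HTTP; the spoken command (already resolved to an
# intent by Wit.ai) selects the handler to run.
CAMERA_URL = "http://192.168.1.50/capture"  # hypothetical ESP32-CAM endpoint

def fetch_frame(url=CAMERA_URL):
    """Fetch the raw JPEG bytes for the detector (network call, on-device)."""
    import urllib.request
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def detect_objects():
    # In the real app: run the TFLite/YOLO model on fetch_frame().
    return "A person and a sign board are ahead."

def where_am_i():
    # In the real app: read the phone's GPS location.
    return "You are near the main gate."

HANDLERS = {"detect_objects": detect_objects, "where_am_i": where_am_i}

def handle(intent):
    """Map a Wit.ai intent to its handler, with a spoken fallback."""
    return HANDLERS.get(intent, lambda: "Sorry, I did not understand.")()

print(handle("where_am_i"))  # You are near the main gate.
```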
Challenges I ran into
Getting hands-on the required Hardware and completing within the stipulated time were the biggest challenges.
Accomplishments that I'm proud of
The main objective was to provide a low-cost solution, which was achieved - the cost price was under 1000 INR (about $14).
What I learned
Wit.ai is a great platform for any solution that involves NLP and was really user-friendly, it was a good experience exploring it.
Flutter was a totally new experience for me and my team. We had great fun working with flutter.
And worth mentioning - ESP's capabilities and YOLO surprised us.
What's next for VC4U - An aid for the Blind
Next, we are looking into developing improvised versions and making it available for the visually challenged community in an attempt to make the world a better place.
Built With
arduino
esp
flutter
machine-learning
natural-language-processing
opencv
tensorflow
tflite
wit.ai
yolo
Try it out
github.com
drive.google.com | VC4U - An aid for the Blind | A wearable glass for the visually impaired , paired with an APP to give them a gist about their surroundings | ['Thiruvikkraman S', 'Sujith Krishna'] | ['Top 1517 Fund Pick'] | ['arduino', 'esp', 'flutter', 'machine-learning', 'natural-language-processing', 'opencv', 'tensorflow', 'tflite', 'wit.ai', 'yolo'] | 46 |
10,492 | https://devpost.com/software/easypeasy-py | Speak Notes
Inspiration
Our primary inspiration was our eagerness to explore the world of Speech Recognition, Speech Processing, and NLP in general.
We wanted to use our theoretical knowledge in building a real-world application that can serve very meaningful day-to-day purposes, and fortunately we have come close to building such an application.
What it does
speakNotes allows you to use your classic notepad in a modern, futuristic way. It allows you to speak your thoughts onto your notepad instead of typing them, hence allowing a free flow of thought and greater human-computer interaction.
It also allows you to convert an audio file into text, and vice versa.
How we built it
We used Python's Tkinter GUI library to build the interface and then used Facebook's Wit.AI to leverage AI capabilities.
Challenges we ran into
We ran into a lot of challenges. Since this was the first time we ventured into Python GUI programming and application development, we had a tough time going through all the documentation.
Another challenge was making the application multi-threaded so that it doesn't freeze on I/O operations. We are very proud to have accomplished it, and we gained a lot of working knowledge of how threading works in Python.
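The pattern in question looks roughly like this (a minimal sketch, not the app's actual code): the blocking work runs in a daemon thread and hands its result back through a queue, which a Tkinter app would poll with `root.after()` so the mainloop never blocks.

```python
import threading
import queue

def run_async(task, results):
    """Run a blocking task (e.g. audio transcription) off the UI thread."""
    def worker():
        results.put(task())
    threading.Thread(target=worker, daemon=True).start()

results = queue.Queue()
run_async(lambda: "transcribed text", results)

# A Tkinter app would poll the queue with root.after(100, check_queue)
# instead of blocking; here we just wait for the demo result.
print(results.get(timeout=2))  # transcribed text
```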
Accomplishments that we're proud of
1) Made a full-scale Windows application in Python for the first time and were very successful in making it.
2) Leveraged the power of AI in our project in a very useful way.
3) Got good feedback from our friends who tested it.
4) Got a chance to apply our software engineering skills.
What we learned
We learned about the process of making a robust software application and the various application development life cycle principles associated with software engineering.
We have also learned Python GUI programming, along with how to use Wit.AI's speech recognition services.
What's next for EasyPeasy.py
We plan to take the project further by developing versions for Android, iOS, macOS and Linux. We will also update the UI and the speech recognition functionality.
Built With
gtts
pyaudio
python
speechapi
tkinter
wit
wit.ai
Try it out
github.com | speakNotes.py | A hybrid notepad application combining the power of Natural Language Processing. | ['Avhijit Nair', 'BONGU MEGHANA ADITHI 18BCE7224'] | [] | ['gtts', 'pyaudio', 'python', 'speechapi', 'tkinter', 'wit', 'wit.ai'] | 47 |
10,492 | https://devpost.com/software/selma-voice-enabled-cui-for-disease-self-management | Table from "Living a Healthy Life with Chronic Conditions" Third Edition
Selma services running as containers on a RedHat OpenShift cluster
OpenShift container log stream
Developing Selma in F# using Visual Studio
Wit.ai training on utterances
Inspiration
Covid-19 and chronic illnesses
Healthcare providers around the world are faced with looming, potentially reoccurring crises of disease treatment and patient resources becoming critically overloaded. People who suffer from chronic illnesses and disabilities are the most severely affected, both by the pandemic itself and by the pandemic's impacts on health care systems. People with pre-existing conditions like diabetes, asthma, cancer etc. are at higher risk from the virus, but must also cope with health care resources being diverted towards treatment of acute conditions and emergencies that must be given priority.
What is self-management?
Self-management can be defined as the methods, skills, and strategies by which individuals effectively direct their own activities toward the achievement of specific objectives. It usually includes goal-setting, planning, scheduling, task tracking, self-evaluation, self-intervention, and self-development.
In healthcare, self-management typically refers to the training, skill acquisition, and interventions through which patients who suffer from a disease or a chronic condition may take care of themselves and manage their illnesses.
From Self-Management of Depression: A Manual for Mental Health and Primary Care Professionals by Albert Yeung et al.
A self-management program teaches patients to see treatment as a collaborative process with the patient taking responsibility for self-monitoring and tracking their symptoms as well as medication intake and other vital information, together with a commitment to using evidence-based, self-administered structured therapy and intervention as adjuncts to professionally delivered interventions for managing their illness. Many chronic disease self-management programs and resources have been developed for diseases and conditions like arthritis, asthma, diabetes, heart disease, high blood pressure, depression, obesity, smoking cessation etc.
The problem
There are many Android and iOS mobile apps in the category of self-management and self-help like task planners, time and activity trackers, med and symptom trackers, journals etc. For desktop users there is bStable for managing Bipolar Disorder, as well several telemedicine and CBT apps designed to deliver specific therapies remotely, together with the usual assortment of general-purpose calendars, task management and time tracking tools.
All of these apps however rely on GUIs and touch interfaces with visual forms and widgets for entering and reviewing data and presenting information.
McGraw System's "bstable" desktop app for bipolar disease management
These types of interfaces can be inaccessible or difficult to use for older people unaccustomed to complex GUI interfaces, or people with neuropathy, arthritis, and generally people with vision or motor disabilities or chronic conditions. A large proportion of the target user-base for these apps and systems would benefit from simpler systems or the ability to rely on assistive technology to help them navigate modern desktop and web applications. Microsoft, Apple and Google have made major advancements in accessibility technology for their operating systems, with Microsoft's Windows Narrator and Apple's VoiceOver technology in particular being widely used by people with vision disabilities. But even with the best assistive technology, app interfaces and web pages that rely on a persistent visual medium can be frustratingly difficult to use for disabled people, as they force users to navigate linearly through large hierarchies of visual widgets which use visual orientation in space and visual style elements like font sizes to effectively organize and present information.
The most accessible app interfaces for older people and people with disabilities are voice assistants like Cortana and Siri and Alexa.
What it does
Selma is a multimodal CUI that provides an inclusive interface to self-management tools like medication trackers, mood and symptom trackers, dream and sleep journals, time, activity and exercise trackers, personal planners, reliable knowledge bases on health conditions and diseases, and similar tools used in the management of chronic physical and mental diseases, disorders, and conditions like ADHD or chronic pain where self-management skills for life activities are critical.
Selma follows in the tradition of 'therapy bots' like ELIZA but updated with powerful ML-trained NLU models for interacting with users in real-time using both typed text and speech. Existing self-management apps like journal, activity-, and symptom-tracking apps all use GUIs or touch UIs and assume users are sighted and dexterous. The reliance on a visual medium and complex interface for entering and reviewing daily self-management data is a significant barrier to adoption of these apps by people with disabilities and chronic conditions, who form a majority of a self-management app's user base.
Selma eschews complex GUI forms and visual widgets like scales and calendars and instead uses a simple line-oriented conversational user interface that uses automatic speech recognition and natural language understanding models for transcribing and extracting symptom descriptions, journal entries, and other user input that traditionally requires navigating and completing data entry forms. Patients interact with Selma using simple natural language commands or questions and enter their journal or medication or symptom tracking entry using speech or text. The captured audio and text is analyzed using NLU models trained to extract relevant details spoken by the patient on their medication intake, mood, activities, symptoms and other self-management details, which are then added to the user’s self-management journals.
The Selma CUI is an accessible user interface that produces text output easily read by screen readers, braille displays and other assistive technology. Users interact in a conversational style with Selma, which gathers information in specific areas and guides the user through specific tasks like daily medication and mood tracking and filling out periodic journal entries and evaluations. Users can ask questions ("Did I take my meds today?", "Did I go out this week?") and bots can answer intelligently based on information previously captured and analyzed. With the user's consent the information gathered can be automatically sent to the patient's health providers, reducing the time needed for administering these routine tasks and allowing face-to-face communication and direct supervision with a practitioner to be conserved and more effectively used. The information can also be analyzed for possible warning symptoms or threats of acute events that may require intervention.
Selma uses Facebook's Wit.ai NLU service for understanding both what users say and what they input as text.
How I built it
Overview
Selma is written in F#, running on .NET Core and using PostgreSQL as the storage back-end. The Selma front-end is a browser-based CUI which uses natural language understanding on both text and speech, together with voice, text and graphic output using HTML5 features like the WebSpeech API to provide an inclusive interface to self-management data for one or more self-management programs the user enrolls in. The front-end is designed to be accessible to all mobile and desktop users and does not require any additional software beyond a HTML5 compatible browser.
Client
The CUI, server logic, and core of Selma are written in F# and make heavy use of functional language features like first-class functions, algebraic types, pattern-matching, immutability by default, and avoiding nulls using Option types. This yields code that is concise and easy to understand and eliminates many common code errors, which is an important feature for developing health-care management software. CUI rules are implemented in a declarative way using F# pattern matching, which greatly reduces the complexity of the branching logic required for a rule-based chatbot.
Server
The Selma server is designed around a set of micro-services running on the OpenShift Container Platform which talk to the client and stored data in the storage backend.
Since the data is highly-relational and commonly requires calculation of statistics across aggregates, a traditional SQL server is used for data storage.
What's next
Working with local health-care providers to develop structured self-management programs. I'd also like to investigate the use of custom speech recognition models for other English-speaking countries with Wit.ai, as (can be seen in the video) sometimes the Wit.ai voice recognition doesn't pick up what I say due to different intonation.
Built With
.net
c#
f#
kubernetes
mongodb
nlu
openshift
postgresql
webspeech
wit.ai
Try it out
selma-victor.apps.us-east-2.starter.openshift-online.com
github.com | Selma: Voice-enabled CUI for chronic disease self-management | Multimodal CUI for delivering chronic disease self-management programs to people with disabilities or anyone who finds touch- and mouse-driven GUIs inaccessible. | ['Allister Beharry'] | [] | ['.net', 'c#', 'f#', 'kubernetes', 'mongodb', 'nlu', 'openshift', 'postgresql', 'webspeech', 'wit.ai'] | 48 |
10,492 | https://devpost.com/software/xx-cp8a6o | Inspiration
What it does
This assistant will help you in all day-to-day tasks. You just say it, and the assistant follows your command, making your life a bit easier. Whether it is opening YouTube, playing songs offline, opening applications like Notepad and PowerPoint, or searching Wikipedia, just give it a command and it's done, eliminating all the steps these small tasks used to require. Moreover, if you are bored you can also have a random talk with your personal assistant just for fun. We have also linked this voice assistant with wit.ai, so the commands you give to your assistant are reflected in your Wit app too, and we have trained the various commands that our voice assistant can understand as intents and entities.
This assistant will help you open Amazon with just a command like "open amazon", and then you can browse and search for products you like without typing anything in the search tab. Moreover, you can also log in to your Amazon account and add products to your cart without a single click. Our project is restricted to Amazon only and is not generalised to all shopping sites; a lot can be done on shopping sites, but our project currently supports a limited set of features.
We have used web automation using python to accomplish this task and
integrated all this with wit.ai.
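A minimal sketch of the automation step, assuming a helper that turns the item extracted by wit.ai into an Amazon search URL (the URL pattern and function names are illustrative; the real project automates much more of the flow with Selenium):

```python
from urllib.parse import quote_plus

def amazon_search_url(item: str) -> str:
    """Build a search URL from the item Wit.ai extracted (hypothetical pattern)."""
    return "https://www.amazon.in/s?k=" + quote_plus(item)

def open_search(item: str):
    # Requires a local chromedriver; shown for illustration only.
    from selenium import webdriver
    driver = webdriver.Chrome()
    driver.get(amazon_search_url(item))

print(amazon_search_url("iphone 11 cover"))  # https://www.amazon.in/s?k=iphone+11+cover
```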
How we built it
We have used Python and some built-in libraries to make this assistant.
Challenges we ran into
Web automation was a really challenging part.
Accomplishments that we're proud of
The final working voice assistant is an accomplishment in itself.
What we learned
Several new skills such as web automation and also learned about python libraries and packages.
What's next for VoiShop
Built With
python
selenium
webautomation
wit.ai
Try it out
github.com | VoiShop | We have tried to make a voice assistant named “VoiShop” that leverages the Wit.ai platform which will work as your personal assistant as well as shopping assistant. | ['Esha Goel', 'Vidhi Garg'] | [] | ['python', 'selenium', 'webautomation', 'wit.ai'] | 49 |
10,492 | https://devpost.com/software/sova | SOVA Screen Captures
Inspiration
Since mobile voice assistants are currently used by many people, these systems need improvement. Sometimes people get bored with a plain voice assistant or a flat chatbot. Also, most voice assistants are too general and cannot handle specific cases in much detail.
What it does
SOVA is an intelligent assistant with 3D visuals that can be more entertaining compared to other plain text based or just voice assistants. You can interact with this virtual assistant using your voice and it will respond to you and act like a human being. This application also specifies some cases so it can be more detailed if we want to ask about something. Example : for this prototype version, we have our first case "COVID-19 Assessment". You can ask SOVA to help you on COVID-19 assessment in more detail, and it will try to calculate the result and give you some suggestions after the session.
How I built it
Wit.ai as our intelligent system for Natural Language Processing
Unity as a tool to build the application
iClone to create and modify the 3D Avatar
Facebook SDK for login needed.
Challenges I ran into
The main one was integrating the Wit.ai API with Unity; we needed to make some adjustments to handle the JSON responses. Making the natural language processing make even more sense also took work, because we had to keep training it so it can recognize our intentions. This is the first time we have built NLP from scratch, so maybe it's not that good for now, but we are still improving it.
Accomplishments that I'm proud of
Happy to accomplish an Artificial Intelligence project with a visual assistant like this, and this can help people in some ways.
What I learned
Learn about Wit.ai itself and learn how Natural Language Processing works.
Learn how to integrate Wit.ai and Unity (How they communicate with each other)
What's next for SOVA
If possible we will improve this more accurately and add more cases like education, financial and other fields.
Improve how the avatar will interact with the user (expression, animation etc)
Built With
api
facebook-login-api
iclone
json
speech-recognition
unity
wit.ai
Try it out
sova.rgplays.com | SOVA - Somewhat Omniscient Virtual Assistant | Make an intelligent virtual assistant on a mobile application that can help people's lives in many ways. | ['Maynard Lumiu', 'Rikad Hegaru'] | [] | ['api', 'facebook-login-api', 'iclone', 'json', 'speech-recognition', 'unity', 'wit.ai'] | 50 |
10,492 | https://devpost.com/software/vote-questions | answer
what you see when first open
Inspiration
A lot of people are uneducated about the upcoming election. During the 2016 presidential election, there were 10 questions that were commonly asked. I'm pretty sure those questions will likely be asked again during the 2020 election.
What it does
It basically answers questions about this upcoming election for you.
How I built it
I built it as a web app. I think this was the best choice because people can view it anywhere as long as they have internet.
Challenges I ran into
Information on polling booths and ballot drop boxes isn't clear. There is no API for this; one would have to dig into the state's website, then into the county's site, and even then it might not be displayed. This information is also changing as we speak. Unfortunately, this is the most useful information.
Accomplishments that I'm proud of
The MVP is built out!
What I learned
People ask questions in interesting ways. I think I need to train with several hundred more questions before it's actually useful.
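One way to cope with the variety of phrasings is to act only on confident predictions. A sketch (the threshold and intent name are illustrative; the dict mirrors the `intents` array Wit.ai returns):

```python
THRESHOLD = 0.7  # below this, ask the user to rephrase

def route(wit_response):
    """Return the top intent name, or 'fallback' when confidence is low."""
    intents = wit_response.get("intents", [])
    if not intents or intents[0]["confidence"] < THRESHOLD:
        return "fallback"
    return intents[0]["name"]

print(route({"intents": [{"name": "how_to_register", "confidence": 0.93}]}))
# how_to_register
print(route({"intents": [{"name": "how_to_register", "confidence": 0.41}]}))
# fallback
```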
What's next for Vote Questions
I built out an MVP; I only handle about 3 intents right now. I want to handle ~10 intents.
I also want to allow it for multiple languages so it's easier for people of all ethnicities to get the necessary information about the upcoming election.
Built With
html5
javascript
react
vercel
wit.ai
Try it out
votequestions.vercel.app | Vote Questions | This web app leverages Wit.ai to understands the intent of any question to help you get the best information regarding the 2020 presidential election. | ['Henry Wong', 'Harvey Chan'] | [] | ['html5', 'javascript', 'react', 'vercel', 'wit.ai'] | 51 |
10,492 | https://devpost.com/software/i-wish-voice-activated-shopping-assistant | Graphic User Interface
Inspiration
Searching for products online can be a hassle, especially when searching for something specific. What if products and necessities could be found just by stating our needs, with an app that understands our needs?
What it does
The app takes voice search requests, extracts the relevant entities using Facebook's Wit.ai, and searches for the specific keywords.
The search query is generated from the audio clip as Item, Color, Criteria, and Occasion, to give the user the most relevant results.
e.g.: "I want a blue iPhone 11 with 128 GB storage for my brother's birthday."
Item - iPhone 11
Color - blue
Criteria - 128 GB
Occasion - birthday
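Composing the search query from those entities might look like this (a sketch; the entity keys follow Wit.ai's `name:role` convention and the Item/Color/Criteria/Occasion scheme above):

```python
# Wit.ai returns entities keyed as "name:role"; we take the top value
# of each and join them into one search string.
sample = {
    "item:item": [{"value": "iPhone 11"}],
    "color:color": [{"value": "blue"}],
    "criteria:criteria": [{"value": "128 GB"}],
}

def build_query(entities, order=("color", "item", "criteria")):
    """Join the first value of each present entity, in a fixed order."""
    parts = []
    for name in order:
        values = entities.get(f"{name}:{name}", [])
        if values:
            parts.append(values[0]["value"])
    return " ".join(parts)

print(build_query(sample))  # blue iPhone 11 128 GB
```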
How We built it
The app was constructed in Thunkable, with speech-to-text and text-to-speech support. It uses Wit.ai for Natural Language Processing. The product searches are displayed using an in-app web viewer.
Challenges I ran into
Connecting the Wit.ai API to Thunkable.
Features
Shop using Voice Commands
Create Wish List
Share Wish List
What's next for I Wish - Voice activated Shopping Assistant
Chatbot to make searches more interactive
Add Social Networking Option
Built With
java
python
thunkable
wit.ai
Try it out
github.com | I Wish - Voice activated Shopping Assistant | Why search products when you can wish for them.... | ['Bhavya Bhardwaj', 'Sorabh Dadhich', 'SYED ISHTIYAQ AHMED', 'Jai harie'] | [] | ['java', 'python', 'thunkable', 'wit.ai'] | 52 |
10,492 | https://devpost.com/software/coronil | FBAIBot
Inspiration
In this difficult Covid-19 situation, people need a friend to help with daily tasks and make them aware of the current corona situation.
What it does
This is a chatbot. It helps you do certain tasks and gives you information about corona.
How I built it
I built it using Node.js, JavaScript, wit.ai and the Google Search API.
Challenges I ran into
The Google Search API has changed, so I faced some issues there. I also found it difficult to parse the wit.ai response JSON.
Accomplishments that I'm proud of
I was able to build a chatbot using a different technology stack, especially using wit.ai.
What I learned
I learned many things, like wit.ai and the Google Custom Search API.
What's next for Coronil
We can increase its functionality, like sending SMS, calling someone, and doing different daily tasks like switching on the fan, lights, etc.
Built With
css3
html5
javascript
node.js
searchapi
typescript
webspeechapi
wit.ai
Try it out
github.com
docs.google.com
wit.ai | Coronil | Coronil is a chatbot, who will help everyone in covid19 | ['Eye eye'] | [] | ['css3', 'html5', 'javascript', 'node.js', 'searchapi', 'typescript', 'webspeechapi', 'wit.ai'] | 53 |
10,492 | https://devpost.com/software/flatfinder |
Logo
FlatFinder messenger homescreen.
FlatFinder voice interaction.
Inspiration
Indian metro cities are home to millions of people, and most of the working class is not originally resident in these cities. People spend a large amount of time searching for flats for rent over various platforms and are regularly frustrated by the overload of information. Voice bot solutions can bring ease to this search and make searching for houses smoother.
What it does
FlatFinder takes data from various Facebook groups containing flat rent, room rent, and flat sharing posts and parses it through a wit.ai model to extract structured data; this data is later used to answer user queries. On FlatFinder, users can ask about rental properties on the basis of various attributes like occupancy type (single, double), size (in BHK), location etc. Users can also create their own listing using a simple voice interface, e.g. "1 BHK semi furnished flat is available in Baner."
How we built it
We trained 2 wit.ai bots: one for processing existing data to convert it from unstructured to structured format, and another for processing user queries in three categories [help, create ad, find flat].
Processed data is stored in a database for querying.
Facebook messenger interface is used for user interaction.
Python+Flask+Git+Heroku+Postgres is our tech stack.
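Once posts are parsed into structured records, answering a query is a filter over the stored listings. A rough sketch (field names are hypothetical; the real app queries Postgres):

```python
# Toy listing store; in the app this is a Postgres table.
LISTINGS = [
    {"size_bhk": 1, "occupancy": "single", "location": "Baner"},
    {"size_bhk": 2, "occupancy": "double", "location": "Hinjewadi"},
]

def find_flats(listings, **filters):
    """Return listings matching every entity Wit.ai extracted."""
    return [
        l for l in listings
        if all(l.get(k) == v for k, v in filters.items())
    ]

print(find_flats(LISTINGS, location="Baner", size_bhk=1))
# [{'size_bhk': 1, 'occupancy': 'single', 'location': 'Baner'}]
```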
Challenges we ran into
The 20-second limit of Facebook Messenger makes the application harder to test.
Slot filling has been challenging.
There is a lack of tutorials around Messenger and wit.ai.
Accomplishments that we're proud of
Successful training of bots.
Easy system for interaction built using voice commands.
What we learned
Wit.ai is really powerful tool and can be used to solve a plethora of problems.
Messenger integrated directly with groups can help generate more insights.
What's next for FlatFinder
Get the FlatFinder app approved by Facebook, so that it can be integrated with groups to extract the feed and generate private replies using the wit.ai model.
Improve wit.ai models with more data.
Add visual elements for search.
Improve search functionality.
Built With
facebook-messenger
flash
github
google-assistant
heroku
messenger
pymessenger
python
wit
wit.ai
Try it out
m.me
www.facebook.com | FlatFinder | Find flats for rent or rent your property as per your need with simple voice commands. No more going through facebook pages, websites or calling brokers to rent flats. | ['Rajat Paliwal', 'Tushika Singh', 'Amrita Neekhara', 'Shriya Goel'] | [] | ['facebook-messenger', 'flash', 'github', 'google-assistant', 'heroku', 'messenger', 'pymessenger', 'python', 'wit', 'wit.ai'] | 54 |
10,492 | https://devpost.com/software/hermes-video-editor | Hermes Video Editor Logo
Application Screenshot
Schema Mockup
🦋 Hermes Video Editor
Video editing is complicated - but it doesn't have to be.
Video editors are notoriously complex - hundreds of tiny buttons with tiny symbols on them representing every editing function possible. But it doesn't have to be that way - Hermes video editor is a web based, cloud powered, voice controlled video editor.
Hermes makes editing easy.
✨ Mission + Inspiration
Make video editing easy.
I love to record videos about programming, but I hate to edit them - it takes forever and editing never feels natural. Often, I was re-watching a large, raw video 5+ times; editing became an all-day thing. I wanted to build an app where editing feels natural, where commands get inserted naturally, and where I need not struggle with a complex UI.
📈 Features
VOICE COMMANDS
entirely controlled by voice commands, Hermes can add cuts, mutes, fast forwards and much more to segments your video.
TWO WAYS TO INSERT COMMANDS
by clicking and dragging on the track, you can create a tethered voice command that's tied to the segment that you selected. You can also click on the track, hold "s" on your keyboard, and just say exactly what you want done!
ROBUST LANGUAGE PROCESSING
"I don't want to hear the next 10 seconds" Is a valid command, and will mute the next 10 seconds of your audio. Whatever you say, Hermes will do. "Remove the next 10 seconds", "Delete the next minute", "Get rid of the last 30 seconds" are all valid and will do exactly what me and you think they will do.
CLOUD PROCESSING
"Do you hear that? It's your computer's fans thanking you." One of my personal problems with video editors is that they render my computer unusable for however long it takes to render the video. The computer slows to a crawl and the cooling fan makes it hard to think: That's why Hermes does the editing in the cloud. Just input an email, and Hermes will put you in the queue. Then, when your video is ready, you'll get an email with a download link to your video.
COMMANDS
There are a bunch of commands loaded in.
cut = removes a part of the video.
fast forward = fast forwards a part of the video.
mute = mutes a part of the video.
[type] music = adds [type] background music to a segment of the video. [Epic, Sad, Happy, Background, Calm]
add a [color] caption that says [caption text] = adds a caption to that part of the video, with text in that color.
There are two ways to insert the commands:
click on the track where you want to insert the command, and say something similar to: "add a [command] for [duration] [after/before]"
click and drag your mouse over the track. while dragging, say your command: "[command]"
Other good-to-know things while operating:
Dragging on the LEFT side of a command card moves it.
Dragging on the RIGHT side resizes it.
Holding down the command card and clicking BACKSPACE on your keyboard deletes it.
🧱 Architecture
For schema, refer to schema image.
HOW WIT.AI IS USED
Wit.ai is the core of the app. Each command is processed and analyzed into a JSON that JavaScript then extracts info from to create a command. The flexibility of wit.ai is used to a large degree, extracting a lot of info from natural speech to render commands on the screen. By training the model on hundreds of inputs, Wit became accurate at dissecting intents and features of speech, allowing the app to truly be a natural-language experience.
DESIGN CHOICES
Frontend: React. React does a good job with web apps, and it truly feels like a native app experience rather than a webpage. This was my first time using React's context API, (My internship used redux) and I liked it a lot - it made large-scope state easy to manage.
Backend: fastAPI's Python framework is quickly becoming my favorite framework to build APIs with. It supports a whole lot out of the box and is easily extensible, creating a truly agile development experience. As a one-man team, it allows me to quickly iterate and create without worrying about the weeds.
FFMPEG: FFmpeg is a command-line video editor. Commands are processed into an FFmpeg-readable format and executed by the server.
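As a hedged illustration of that translation step (not Hermes' actual code), here is how a parsed "mute" command could be turned into ffmpeg arguments; the volume filter with a timeline `enable` expression is standard ffmpeg, and the file names are placeholders.

```python
# Sketch: map a parsed "mute from 5s to 10s" command onto ffmpeg args.
# Audio is re-encoded with the volume filter; video is stream-copied.
def mute_args(src, dst, start, end):
    afilter = f"volume=enable='between(t,{start},{end})':volume=0"
    return ["ffmpeg", "-y", "-i", src, "-af", afilter, "-c:v", "copy", dst]

args = mute_args("in.mp4", "out.mp4", 5, 10)
print(" ".join(args))
# The server would then run this with subprocess.run(args, check=True)
```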
SECURITY
Hermes is light on security because there's not much to secure!
Emails are stored in a passworded Redis queue, and after the email is sent, it's discarded.
Downloading the files requires a specific link, and once that link is clicked the video is discarded.
No passwords are sent, and bruteforcing for video files is not viable (due to bruteforce slowdown).
Logs are wiped by a cron job after 5 days.
SSL is used for all communication, so no MITM either.
SSH is locked down to only use SSH keys.
No need for JWTs or any auth at all - Hermes is free and open to use.
Hermes is a tight ship!
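A minimal sketch of how such an unguessable, single-use download link can be generated; the storage dict and URL here are placeholders (a real deployment might keep the mapping in Redis, as Hermes does for its queue).

```python
# Sketch: one-time download links backed by 32 random bytes (256 bits)
# from the secrets module -- far beyond brute-force reach.
import secrets

links = {}  # token -> file path; placeholder for real storage

def create_link(path):
    token = secrets.token_urlsafe(32)
    links[token] = path
    return f"https://example.invalid/download/{token}"

def redeem(token):
    # pop() makes the link single-use: the video is discarded after one click
    return links.pop(token, None)
```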
OTHER INFO
Hermes is hosted on a Linux server (supported by linode.com).
Nginx is used as a reverse proxy into the local app.
Uncomplicated Firewall (UFW) is used as the firewall.
Hermes uses a Redis queue to line up ffmpeg executions. Due to resource splitting, running concurrent ffmpeg commands takes an exorbitant amount of time; by running them consecutively, I can make sure files get processed in a reasonable amount of time without using too many resources.
Hermes uses a cloud storage bucket because video files are generally large, and storing them on a small machine is not viable.
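The consecutive-processing idea can be sketched with the standard library alone (Hermes itself uses Redis Queue): jobs are enqueued as they arrive, but a single worker runs them one at a time, so renders never compete for CPU.

```python
# Stdlib sketch of serialized job processing: one worker, FIFO order.
import queue, threading

jobs = queue.Queue()
results = []

def worker():
    while True:
        job = jobs.get()
        if job is None:  # sentinel: shut down the worker
            break
        results.append(f"rendered {job}")  # stand-in for an ffmpeg run
        jobs.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()
for name in ["a.mp4", "b.mp4", "c.mp4"]:
    jobs.put(name)
jobs.put(None)  # tell the worker to stop once the queue drains
t.join()
print(results)  # jobs finish strictly in submission order
```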
✔️ To Do
Add more filters! Ideas include transitions, color filters, and zooming. (Perhaps "zoom in on my face" should be a viable command?)
Allow the user to input many files and stitch them together
Standardize the ffmpeg process
Polish up the emails
Create a CI/CD pipeline
Write unit tests
📚Learning
I learned that voice control feels great - being able to say what I want to do when I edit and have it be processed by the computer for me makes editing a much more intuitive experience. Wit.ai itself is incredibly easy to operate. The UI is slick enough to feel like I'm not working with a state of the art machine learning algorithm, and adding intents, entities and traits is a quick and easy process.
This was my first time working with ffmpeg, and it was a great experience - although it was slightly confusing at times, I quickly picked up the gist of it, and when I got stuck there was more than enough documentation to look through and figure out what filter, command, or flag I needed.
React Context API is probably how I'll build React apps from now on - it's much easier than Redux, and nothing happens "behind the scenes" which was one of my biggest issues with Redux.
👺 Extra
"Hermes" is the Greek god of speech and communication, and there is a lot of that going on in this project.
This was a fun one to build! I had a good time working with the API's.
Building this solo was difficult, and I had to scrap a lot of feature ideas I had to meet the deadline - but I'm excited to continue work on Hermes.
Built With
fastapi
ffmpeg
google-cloud
javascript
linode
linux
nginx
python
react
redis
smtp
supervisor
wit.ai
Try it out
www.hermesvideo.tech
bitbucket.org
bitbucket.org | Hermes Video Editor | Video editing is a complex task, but what if we could talk to our editor? Hermes is a completely voice powered video editor. | ['Anthony Oleinik'] | [] | ['fastapi', 'ffmpeg', 'google-cloud', 'javascript', 'linode', 'linux', 'nginx', 'python', 'react', 'redis', 'smtp', 'supervisor', 'wit.ai'] | 55 |
10,492 | https://devpost.com/software/sara-an-ai-voice-assistant | Inspiration
I always wanted my own assistant. Sara is my favorite actress's name, and I created it with Wit.ai for easy NLP.
What it does
It chats with us and provides services like translations, places, and artist information on command.
How I built it
I built it with Wit.ai for NLP.
For hosting I used Heroku, and with the Flask framework I created a backend for my AI chatbot.
I call Wit.ai on the server.
Java is used for the Android app, which calls an API I made for this application.
So I only have to train and make changes on the server side.
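For illustration, the server-side call to Wit.ai might be built like this; the /message endpoint and Bearer auth follow Wit.ai's HTTP API, but the token and version date below are placeholders, not the real app's values.

```python
# Sketch: build the HTTP request the Flask backend sends to Wit.ai.
from urllib.parse import urlencode
from urllib.request import Request

WIT_TOKEN = "YOUR_SERVER_ACCESS_TOKEN"  # placeholder

def build_wit_request(utterance, version="20240101"):
    qs = urlencode({"v": version, "q": utterance})
    return Request(
        f"https://api.wit.ai/message?{qs}",
        headers={"Authorization": f"Bearer {WIT_TOKEN}"},
    )

req = build_wit_request("translate hello to French")
print(req.full_url)
# urllib.request.urlopen(req) would return the parsed-intent JSON
```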
Challenges I ran into
At first I was not able to use Wit.ai with Python,
but the sample Python code in the documentation on GitHub was helpful.
I had never built a chatbot app,
so it was hard for me to figure out how to connect everything,
how to make an API to call from the Android app,
and much more.
Accomplishments that I'm proud of
I never stopped.
I learned Wit.ai, which polished my chatbot knowledge.
I now have my own API for my chatbot,
my own Android app,
and I can even create a website or an iOS app with the same API.
What I learned
How to create a chatbot,
how to integrate NLP easily with Wit.ai,
and how to build a backend and connect it to an Android app.
What's next for Sara - An AI Voice Assistant
Many services, like finding nearby hospitals and locations.
An integrated service like a mental-health psychologist will also be introduced.
Built With
android
flask
heroku
java
ownapi
python
wikiapi
wit
Try it out
www.mediafire.com | Sara - An AI Voice Assistant | An assistant which can chat,provide translations with many languages & can answer about gk questions. | ['Saurabh Jadhav'] | [] | ['android', 'flask', 'heroku', 'java', 'ownapi', 'python', 'wikiapi', 'wit'] | 56 |
10,492 | https://devpost.com/software/mr-fitter | Main Window
Back workout_demo
Exercise_demo
Inspiration
As a team of fitness enthusiasts, the quarantine sent us all running to YouTube fitness coaches, and while they are very helpful, sometimes you just want that small piece of information without scrubbing through an entire video. We wanted to build something that was quick, to the point, and easy to use.
What it does
Mr Fitter takes input either through text or voice, determines the intent of the user using Wit, and displays a relevant guide to the user.
How we built it
The interface is built using a niche Python library called Dear PyGui. We went through several iterations to end up at the current state of the project. We started by categorizing how we wanted our database to be, and trained the same on Wit.ai.
Challenges we ran into
The biggest challenges we faced were finding a way to play GIFs in the window, and how to asynchronously listen for audio. Both of these features took a lot of trial and error, and not to mention, time.
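The async-listening pattern can be sketched with the standard library: a background thread pushes captured chunks onto a queue while the GUI loop polls without blocking. Real capture would use a microphone library; the chunks below are simulated stand-ins.

```python
# Stdlib sketch: background "listener" thread + non-blocking queue polling.
import queue, threading, time

chunks = queue.Queue()

def listen(stop):
    n = 0
    while not stop.is_set():
        n += 1
        chunks.put(f"chunk-{n}")  # stand-in for a recorded audio frame
        time.sleep(0.01)

stop = threading.Event()
threading.Thread(target=listen, args=(stop,), daemon=True).start()

time.sleep(0.2)  # the GUI render loop keeps running here, unblocked
stop.set()
heard = []
while not chunks.empty():
    heard.append(chunks.get_nowait())
print(heard[:3])
```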
Accomplishments that we're proud of
No surprises that our accomplishments are the same as the challenges we solved! Other than that, we are proud of being able to maintain the consistent aesthetic of the interface, even when we could not implement things the way we wanted to.
What we learned
We learned about a new API service Wit.ai, we learned how to use a new GUI library. And most importantly, we learned new ways of optimizing code, and while it is still a work in progress, we are happy with some of the techniques we applied.
What's next for Mr Fitter
We have plans to expand the database to hundreds of exercises, and to add a new feature which can diagnose you in case you get injured performing any exercise. In long term, we have plans to expand this to a web-app, and even a mobile application.
Built With
dearpygui
google-web-speech-api
pandas
python
wit.ai
Try it out
github.com | Mr Fitter | Mr Fitter is a voice enabled, virtual, motivational fitness coach which will guide you towards achieving your fitness goals. It is a desktop application built on python. | ['Siddharth Gautam', 'Sheikh Parvez Ahmed', 'Pranav Gupta', 'Shriyans Kaushik'] | [] | ['dearpygui', 'google-web-speech-api', 'pandas', 'python', 'wit.ai'] | 57 |
10,492 | https://devpost.com/software/cure-mate | Main page of the platform
messenger and telegram bot
monitoring
history
visualization
result
compare meds
Profile
Cure-Mate 🏥
Sentiment analysis of user reviews for particular disease and medicine Using Wit.Ai with Covid-19 chatbot
We try to give users fair information about what others think of the medicine they are using, along with the other best-rated medicines suggested for that particular disease. Users can also compare two medicines to find the better one.
We use the UCI drug review dataset, which contains more than 2 million reviews.
We have implemented a chatbot for all your queries on Covid-19, and it works on both Messenger and Telegram.
Aim 🎯
Our aim is to provide a platform useful for both medical professionals and other users. On this platform they can study their situation and medicines, easily see reviews of how a medicine works for that condition, and, through our monitoring tool, get the latest information about the disease and its medicine. This would be really helpful in pandemic situations like the one Covid-19 created, which cause shortages of medicines. Through our platform, medical professionals and users can easily check for alternate best medicines for the same situation, and much other information that they need.
Features 🌟
▶️ Search Box
⌨️
1.Typing Search
You can search normally through our search box, and suggestions will be provided beneath it to help you find your desired name.
🎙️
2.Voice Search
An interactive voice search option is also available. Just click the MIC icon to trigger voice search, and an interactive modulated voice will help you throughout your search.
▶️ Search From
At the bottom of the search box there is an option to select the relevant source
(e.g. the UCI dataset, Twitter, Drug.com (WIP), and NewsAPI)
from which the data is taken. The data is then broken into tokens, and using Wit.ai's built-in NLP and sentiment analysis, the sentiment (negative, positive, or neutral) is extracted and displayed in the results.
▶️ Sentiment Analysis
We get the relevant data from the user's desired platform and generate tokens from it; then, using NLP and sentiment analysis with the help of Wit.ai, we extract the sentiment of each data point, and the model predicts the average sentiment across all reviewers and displays it in the results.
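A simplified sketch of that averaging step: Wit.ai's built-in sentiment trait returns positive/neutral/negative with a confidence score, but the weighting and thresholds below are our own illustrative simplification, not the production model.

```python
# Sketch: confidence-weighted average of per-review sentiment labels.
SCORE = {"positive": 1, "neutral": 0, "negative": -1}

def overall_sentiment(reviews):
    total = sum(SCORE[r["value"]] * r["confidence"] for r in reviews)
    avg = total / len(reviews)
    if avg > 0.15:
        return "positive"
    if avg < -0.15:
        return "negative"
    return "neutral"

reviews = [{"value": "positive", "confidence": 0.9},
           {"value": "positive", "confidence": 0.6},
           {"value": "negative", "confidence": 0.8}]
print(overall_sentiment(reviews))  # -> positive
```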
▶️ Results
Using sentiment analysis, the results for the medicine and disease are shown as described in the previous section.
In the results we also display other medicines that are rated well for the same disease, and the other diseases that can be cured by the same medicine. You can also download the result as a PDF soft copy.
We also display graphs to better understand both the sentiment analysis results and the other best-rated medicines.
▶️ Compare meds
You can navigate to this feature from the main page navbar; there you can compare two medicines that you are confused between, and you will get results based on other users' reviews. The result comes in two forms: a side-by-side graph and a comparison table, for a full, detailed comparison in tabular form.
▶️ History
From here you can get all the records of your previous searches, with date and time, name of medicine, and disease.
You can also see a result again by clicking the See results button.
▶️ Monitoring/Query
You can see what's going on recently, or the history of a medicine. For this section, the application uses NewsAPI and tweets to show how user sentiment for that medicine or disease changes over time.
Available with both ⌨️ typing search and 🎙️ voice search.
▶️ COVID Chatbot
A chatbot to solve all your queries regarding the Covid-19 situation that tries to help you in the best possible ways.
The chatbot works on both Messenger and Telegram.
▶️ Speed
Analyzing 2 million records in just a few seconds makes it more user-friendly and time-efficient.
Architecture ⚙️
This is the overall architecture of our application. On the left side, the red-colored sections are those the user interacts with; on the right, the internal workings of our application are briefly shown.
Details of NLP Model 📚
▶️ About Model
Sentiment analysis is used on the pre-saved dataset (over 2 million reviews) and on the data scraped from Twitter and Drug.com. The data is broken into tokens, and we use this dataset to train our model on Wit.ai. The Wit.ai Speech API is also used to make our application voice-interactive.
▶️ Model's Accuracy
Running our model on the pre-stored dataset gives an accuracy of about 85%, and when the Twitter and Drug.com data is added, accuracy increases to 87.05%.
▶️ Future Updates
This project has a very wide area for us to explore, and we can add many new features. In the future, we are thinking of adding a feature to suggest the best doctors and hospitals for the searched medicine, tailored to the user's city and country, by getting the user's location and analyzing data from different platforms about the best doctors and hospitals for that disease.
Technology Used 💻
Frontend & UI :-
HTML
CSS
Bootstrap
Javascript
Backend :-
Flask
Database :-
PostgreSQL
Voice interaction :-
Annyang.js
Authentication :-
Google Auth0
Model :-
Wit.ai
Support for Model :-
NLP
UCI
Overview 💡
☐ It helps the user to better understand the medicine he is using and provide helpful feedback.
☐ Deep sentiment analysis is performed for over 2 million datasets.
☐ It has the ability to suggest best-rated medicines for that disease.
☐ It also gives the list of other diseases on which the same medicine works.
☐ Both graphical and statistical representations of data for better understanding.
☐ Feature to download your result in Pdf softcopy.
☐ The history section for all your searched results so you can check anytime and revisit.
☐ The Compare Meds section to find the best medicine between two if you are confused.
Fun Fact 👻
🐬 Our Chotu was pretty hard to handle because he has a great sense of humor.
🐬 Designing a logo took a lot more time than the development of the platform. 😝
Built With
annyang.js
flask
javascript
natural-language-processing
tweepy
uci
wit.ai
Try it out
curemate.herokuapp.com
github.com
github.com
www.messenger.com | Cure Mate | The approach is to give users a platform where they can get all the information about their medicine, other best medicines for that disease, and they can easily compare 2 medicines to know best. | ['Saurabh Gupta', 'Sundaram Dubey', 'Sarthak Singh'] | [] | ['annyang.js', 'flask', 'javascript', 'natural-language-processing', 'tweepy', 'uci', 'wit.ai'] | 58 |
10,492 | https://devpost.com/software/coco-cl7aqe | Inspiration We got an idea with the help of wit.ai and started to develop some of the algorithm to which it will help to be out as an assistant
What it does it does almost everything you ask almost all the common activities by the command given similar to Google assistant
How we built it we built it using python in wit ai
Challenges we ran into was patience alone
Accomplishments that we're proud of
My team
What we learned
Teamwork.
Built With
python
visual-studio
wit.ai
Try it out
drive.google.com
drive.google.com
docs.google.com | hAUks COCO | Voice assistant | ['HARIHARAN M', 'Kirubasini Sabeshkumar', 'Srinivasan G', 'Anu Rekha S', 'Ukesh B', 'UdhayaKumar G'] | [] | ['python', 'visual-studio', 'wit.ai'] | 59 |
10,492 | https://devpost.com/software/artiqlate | quantum teleportation circuit
Motivation
Quantum computers have the ability to revolutionize our future by providing exponential speed-up on current algorithms. One such example is
Shor's algorithm
which can perform integer factorization in polynomial time by utilizing the properties of quantum effects, such as superposition and interference. On the other hand, the fastest known classical algorithms take exponential time, which is what current encryption schemes depend on for security.
In the past decade, interest in quantum computing from both the public and private sectors has grown exponentially. There is a growing need to educate the public on quantum computing concepts to prepare for this "quantum revolution".
When learning about quantum computing, quantum algorithms are generally visualized as quantum circuits. For example, the image above is the quantum circuit representing the
quantum teleportation
algorithm.
Currently, to be able to write quantum algorithms, you would have to pick up one of the quantum programming languages like
Q#
,
Qiskit
, or
Cirq
.
What it does
ArtiQlate makes it easy for anyone to pick up and learn about quantum algorithms without writing code! All you need is your voice to design your quantum algorithm within the browser.
How I built it
ArtiQlate was built using the following technologies:
Python Flask - server
JavaScript - browser client + SpeechRecognition Web API for speech-to-text
Wit.AI - NLP to extract relevant information from text
Qiskit - Creating quantum program and visualizing the quantum circuit
Challenges I ran into
The JavaScript SpeechRecognition API was not adept at recognizing the terminology used in quantum computing, such as "X gate" or "qubits". I had to use Wit.ai to recognize these variations in the interpreted text from the SpeechRecognition API and map them to the intended terms.
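A toy version of that mapping (the real app relies on Wit.ai's trained utterances; the misheard variants listed here are just examples):

```python
# Sketch: normalize misheard quantum terms to canonical gate/qubit names.
ALIASES = {
    "x gate": "x", "ex gate": "x", "axe gate": "x",
    "h gate": "h", "hadamard": "h", "hadamard gate": "h",
    "qubit": "qubit", "cube it": "qubit", "q bit": "qubit",
}

def normalize(phrase):
    # Unknown phrases pass through unchanged for later handling.
    return ALIASES.get(phrase.lower().strip(), phrase)

print(normalize("Ex Gate"))   # -> x
print(normalize("cube it"))   # -> qubit
```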
Accomplishments that I'm proud of
I learned about this hackathon less than a week before the deadline and managed to start and complete this project within 2 days!
What I learned
Learned how to work with Wit.AI and incorporate it within my side projects.
What's next for ArtiQlate
There's a lot of improvement that can be done on the speech recognition side. We could send a .wav file of the recorded speech to the Wit.ai API instead of relying on the JavaScript SpeechRecognition API.
Built With
flask
javascript
python
wit.ai
Try it out
www.raphaelkoh.me | ArtiQlate | ArtiQlate makes it easy for anyone to pick up and learn about quantum algorithms without writing code! All you need is your voice to design your quantum algorithm within the browser. | ['Raphael Koh'] | [] | ['flask', 'javascript', 'python', 'wit.ai'] | 60 |
10,492 | https://devpost.com/software/knock-knock-jokebot | Knock-Knock Jokebot
Inspiration
There is a very painful stage (for parents) after a kid learns the format of a knock-knock joke and before she learns what makes knock-knock jokes FUNNY. This voice-enabled bot takes one for the team.
What it does
Knock-Knock Jokebot provides a simple interface for pre-literate kids to tell their knock-knock jokes. Through a basic turn-based voice conversation, it applies the classic knock-knock joke format and laughs at every joke, regardless of whether it makes sense.
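The turn-based flow can be sketched as a small state machine (a simplification: the real app drives these turns with Wit.ai intents and browser speech synthesis):

```python
# Sketch: the knock-knock joke format as a three-state conversation.
class KnockKnockBot:
    def __init__(self):
        self.state = "waiting"

    def hear(self, text):
        text = text.lower().strip()
        if self.state == "waiting" and "knock" in text:
            self.state = "asked_who"
            return "Who's there?"
        if self.state == "asked_who":
            self.state = "asked_punchline"
            return f"{text.capitalize()} who?"
        if self.state == "asked_punchline":
            self.state = "waiting"
            # Laugh at every joke, regardless of whether it makes sense.
            return "Ha ha ha! Tell me another one!"
        return "Say 'knock knock' to start!"

bot = KnockKnockBot()
print(bot.hear("knock knock"))
print(bot.hear("banana"))
print(bot.hear("banana you glad I didn't say orange"))
```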
How I built it
I used Wit.ai to do the language processing, and Wit.ai's Microphone Web SDK to accept speech input. I used the reasonably well-supported browser-based SpeechSynthesis interface of the Web Speech API to speak the bot's side of the conversation.
Challenges I ran into
Kids are hard to understand, and the phrase "knock-knock" is often interpreted as other repetitive phrases, like "not not" and "no no." It took a lot of training to identify most of these variants.
Accomplishments that I'm proud of
The result works well enough for my kid (no longer a preschooler, but still very fond of telling knock-knock jokes) to use! I'm also very fond of the interface, which is fun and supports kids who are learning to read by displaying the text that's spoken.
What I learned
I had never worked with Wit.ai or any similar service, and I'd never worked with microphones or text-to-speech, so the entire project was a great learning experience.
What's next for Knock-Knock Jokebot
The obvious next step would be to flip it around and have the bot tell the joke.
Built With
javascript
jquery
speechsynthesis
wit.ai
Try it out
door.happyhumans.com | Knock-Knock Jokebot | This bot saves parents of preschoolers from nonsensical knock-knock jokes | ['Sarah Lewis'] | [] | ['javascript', 'jquery', 'speechsynthesis', 'wit.ai'] | 61 |