hackathon_id | project_link | full_desc | title | brief_desc | team_members | prize | tags | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|
10,133 | https://devpost.com/software/useless-project-fruit-astrology | Inspiration
Boredom. We also find horoscopes entertaining. This was the first thing that came to mind when we thought of something silly and useless.
What it does
Tells you the fruit you HAVE to eat! You enter your birthday, and we tell you what your star sign is and what fruit you should be eating based on that.
How we built it
Together :)
Also, we used Repl.it, with HTML, CSS, and JavaScript AND lots of images.
Challenges we ran into
Having Motivation :(
Also, using local storage to access variables once the HTML page changed (and syntax errors).
Accomplishments that we're proud of
Finishing it.
What we learned
We do not know how to code.
What's next for Useless Project - Fruit Astrology
Connect you to your celebrity birthday twin (via UIPath).
Built With
css
html5
javascript
Try it out
uselesswebsite--tidang.repl.co | Useless Project - Fruit Astrology | Attention! You have been living your life incorrectly. If you eat fruit based on your star sign, you can live a better life. | ['Kim S', 'Timmy Dang', 'Dylan Huynh', 'Mia Chen'] | [] | ['css', 'html5', 'javascript'] | 61 |
10,133 | https://devpost.com/software/impromptu-meet | Login Screen
LinkedIn Mobile OAuth2.0
Home Screen
Registration Page
Matching Platform
LinkedIn OAuth 2.0 Strategies
Chat Example Screen Capture
Chat Example Screen Capture 2
Our App Mascot.
What is Spark?
Spark is a React Native Android application that enables users to easily match with others to find project partners, hackathon team members, whiteboarding buddies, and study groups, and to spark new lifelong friendships.
Inspiration
In many instances, an individual may feel too shy, too intimidated, or simply not in the mood to network or introduce themselves at a large gathering, and would rather keep to themselves. This typically costs them opportunities: participating in hackathons (for lack of a team), working on a large collaborative project, or practicing whiteboarding with a partner.
Our own experiences trying to find interesting people for hackathons, Leetcode/whiteboarding preparation, interview prep, projects, and study groups motivated our team to create this app to simplify that search and modernize the experience. With the goal of replacing the many "looking for a partner" posts on large Facebook groups, we hope to bring the search for a study buddy into one dedicated place.
As more individuals stay at home, it is important to keep being productive. Finding similarly ambitious people is key to a person's mental health and motivation; having that extra buddy to keep you accountable and push you beyond your limits is invaluable. We hope our app can contribute to a solution that makes working, studying, and staying at home easier, more productive, and more social.
Happy at Home Hack
What it does
Spark is our solution to this problem. With a very user-friendly interface, Spark matches users through simple left or right swipes based on their stated interests. Spark is a one-stop place for quickly forming connections around specific events and activities where a user wants to work with others.
How I built it
Our stack consists of React Native, JavaScript, React, Socket.io, MongoDB, OAuth 2.0, Express.js, and Passport.js.
The mobile application lets the user log in with LinkedIn via OAuth 2.0, and the user's LinkedIn profile is uploaded to the MongoDB server. Via GET requests, we populate the user's profile fields.
The user can choose to swipe left or right on an array of users who have similar interests. To curate this list, our backend runs a matching algorithm that scores the similarity between user profiles: each pair gets a similarity score based on matching interest keywords, where a higher score means more overlap. We surface the users with the highest scores to ensure a high-quality matching experience.
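The write-up doesn't publish the scoring formula, so here is a minimal sketch (in Python rather than the project's JavaScript backend) of keyword-overlap matching. The Jaccard-style formula and the curation step are assumptions for illustration:

```python
def similarity(interests_a, interests_b):
    """Score two users by interest-keyword overlap (Jaccard index).

    A higher score means more shared keywords between the two profiles.
    """
    a = set(k.lower() for k in interests_a)
    b = set(k.lower() for k in interests_b)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)

def curate(me, candidates, top_n=10):
    """Return the candidates most similar to `me`, best match first."""
    return sorted(
        candidates,
        key=lambda c: similarity(me["interests"], c["interests"]),
        reverse=True,
    )[:top_n]

me = {"name": "Ada", "interests": ["hackathons", "react", "leetcode"]}
others = [
    {"name": "Ben", "interests": ["react", "leetcode", "chess"]},
    {"name": "Cam", "interests": ["painting"]},
]
print([c["name"] for c in curate(me, others)])  # ['Ben', 'Cam']
```

Jaccard overlap is only one plausible choice; a production matcher would likely also weight keywords or use embeddings.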
Challenges I ran into
The entire team was in the midst of midterms and preparing for job interviews; thus, our team was really only able to participate in half of the hackathon. :(
Because setting up the React Native environment took a while (it was our team's first time working with React Native), actual development couldn't begin until 8 pm PDT, which is 11 pm EST :(
Connecting the frontend to the backend was a troublesome issue. React Native can be especially finicky when using the dev tool Expo.
Implementing Chat was filled with bugs.
Accomplishments that I'm proud of
Our team was able to complete a:
Completely functional backend
Completely functional frontend
in only a single day :)
Additional features we have completed and are proud of:
OAuth2.0 and LinkedIn verification
Cookie sessions to also ensure an excellent browser experience and easy portability to a web app.
What I learned
This was our first time learning React Native; no one on the team had ever built anything in it. It was an excellent experience to learn it and add it to our stack, and it contrasts sharply with Android development using Android Studio and Java.
We learned how to set up the backend completely and use Passport.js and OAuth 2.0 to enhance the user experience: users can sign up and sign in with LinkedIn.
We learned about the numerous issues and troublesome bugs that can occur when linking up the backend and the frontend. It was a very beneficial first experience, as it was our team's first time connecting an actual backend to an Android app.
What's next for Spark
Implement OAuth2.0 for Facebook/Google
Streamline Chat functionality
Add User Feeds
Custom unique code as an identifier for specific groups in order to allow targeted swiping.
(e.g., for the Same Home Different Hacks hackathon, users looking for hackathon teams could simply enter the code SHDH12391741 and swipe other users who have also entered that code.)
Implement authentication strategies for Devpost, which may simplify the above feature.
Best Domain Name from Domain.com
sparkify.online
Built With
express.js
javascript
mongodb
react
react-native
socket.io
Try it out
github.com | Spark | Spontaneously meet new individuals, find a project buddy, spark new connections. | ['Ben Cheung', 'Rick Huang', 'Anthony Lopes', 'Eric Y. Kim'] | [] | ['express.js', 'javascript', 'mongodb', 'react', 'react-native', 'socket.io'] | 62 |
10,133 | https://devpost.com/software/same-home-different-hacks-cssi-2020-team-1 | index page
video page
profile page
upload page
mockup of our site
Same Home Different Hacks: Daince
Submitted for the "Educational" track. It also can be considered for the "Best use of Google Cloud" since it uses Firebase and "Best Domain Name".
Inspiration
Dance is a beautiful art form and many people want to learn it, as shown by the growing number of dance studios. With our new physically-distanced world, we can no longer take advantage of that. Learning dance moves from choreography videos is a wonderful tool, but videos cannot tell you if you are doing a dance move correctly. We want people to experience the benefits of a real-life coach with the convenience of on-demand video.
What it does
DAINCE uses artificial intelligence to detect people’s dance moves. It evaluates and tracks how a person is dancing compared to how the dance should be performed, so users of DAINCE learn how to dance better and assess their progress in learning dances.
How we built it
Beginning with our vision for an AI dance tutorial, we designed a wireframe mockup of the app's flow and the functionality of the pose-comparison algorithm. On the front end, we used HTML and CSS to style the site and extensive JavaScript to implement the interactive features. We used the PoseNet library for AI-based pose detection. Meanwhile, the backend team worked on retrieving videos and a list of timestamp markers; tutorial videos loop according to the timestamps, and those markers can be skipped, ignored, and replayed. Feedback from a webcam is used to display a live score, calculated by comparing the webcam data to the data from the original/professional dance video.
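The project doesn't publish its exact comparison algorithm, but the live-score idea can be sketched as follows (in Python; the site itself uses JavaScript). The assumption here is that PoseNet yields named keypoints with pixel coordinates, and the score decays with the mean joint distance:

```python
import math

def pose_score(reference, live, tolerance=50.0):
    """Compare two PoseNet-style keypoint dicts {part: (x, y)}.

    Returns a 0..100 score: 100 means every shared joint lines up
    exactly; the score falls linearly to 0 as the mean joint distance
    (in pixels) approaches `tolerance`. The formula is an assumption,
    not the project's actual algorithm.
    """
    parts = reference.keys() & live.keys()
    if not parts:
        return 0.0
    mean_dist = sum(math.dist(reference[p], live[p]) for p in parts) / len(parts)
    return max(0.0, 100.0 * (1 - mean_dist / tolerance))

ref  = {"leftWrist": (100, 200), "rightWrist": (300, 200)}
live = {"leftWrist": (105, 200), "rightWrist": (300, 210)}
print(round(pose_score(ref, live), 1))  # 85.0
```

A real implementation would also need to normalize for body size and camera position, and to align frames in time against the reference video.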
Challenges we ran into
Challenge 1 - We faced major hurdles when attempting to implement the PoseNet model, especially due to the complexity of extracting frames of a webcam/video and applying it. Some relatively simple features, like plotting the points of where PoseNet estimated limbs to be, actually required a lot of tinkering to erase the points after a certain period of time (so that the screen doesn’t become cluttered).
Challenge 2 - We have encountered various issues that may seem trivial at first glance, but turned out to have consumed large amounts of time and energy. For example, we spent hours attempting to fix a problem caused by a misplaced iteration variable in a for-loop as we initially thought the problem was caused by something else. We also spent a long time figuring out async/await statements for a few functions to solve a problem that could be solved with a simple if-else statement. Through rigorous checking and testing, discussion with team members working on other parts of the app, and lots of googling, we were able to overcome these challenges.
Challenge 3 - Because of our diverse set of skills, it was initially difficult to divvy up the tasks we needed to accomplish in order to build DAINCE. However, by working together and playing to each other's strengths, we were able to work on the areas we were best suited for, and thus succeed as a team.
Challenge 4 - As a remote group, we had to work through merging issues on Github, real-time partner coding software, and varying timezones. Still, even though we have never met, we managed to create something beautiful together.
Accomplishments that we’re proud of
We are exceptionally proud of how our project can better teach people dance and enhance learning from choreography videos/tutorials.
Our awesome web designer created a fantastic web mockup that we translated into a real site. Then we took it even further and added beautiful micro-interactions; subtle CSS transitions were added to most elements. If there were a user interface award in this hackathon, we are confident we'd be strong contenders.
In addition, applying the PoseNet model on both a video and live webcam footage required creativity and robust programming. Comparing the two with an algorithm was also an exceptionally difficult challenge that we conquered!
What we learned
Working with PoseNet introduced all of us to a complex AI algorithm that piqued our interest and served as a great introduction to working with big datasets. Everyone also took away different things from this project depending on their focuses. For example, the web developers learned how to create CSS animations and use JQuery to show/hide different elements. People who worked on the backend improved their skills with Firebase and handled difficult interactions with PoseNet objects.
What's next for DAINCE?
With the data we glean from our users' dance moves, and perhaps a few more hours of work, we could develop custom dance plans with specific instruction to help users improve their weak points. With the same data, perhaps we could write something that choreographs new dances from a user's strengths and the moves they have already learned.
Built With
ai
computer-vision
css
firebase
google-cloud
html
jquery
machine-learning
posenet
Try it out
github.com
daince.tech | DAINCE | Learn to Dance with AI using a fun and beautiful user interface. We track and evaluate your progress so you can improve the best way possible while enjoying the beats and rhythms. | ['Albert Zhang', 'Devonne Busoy', 'Yuyuan Luo', 'Kevin Yang', 'Tiffany Wang', 'Alexander Krantz'] | [] | ['ai', 'computer-vision', 'css', 'firebase', 'google-cloud', 'html', 'jquery', 'machine-learning', 'posenet'] | 63 |
10,133 | https://devpost.com/software/bmmt-mlh | Inspiration
With everything that is occurring worldwide due to the global pandemic, COVID-19, it is important to know how to stay safe from this deadly disease and protect yourself and your loved ones.
What it does
This project creates a helpful website with links to multiple resources and a helpful quiz to test your knowledge on Covid-19. Here's the Link:
https://covidinteractivewebsite.herokuapp.com/
How we built it
We built it using Eclipse, which nicely formats all of our front-end code and links it to a localhost website. Then we linked the back-end code, which displays the quiz questions and their respective answers, through GitHub.
Challenges we ran into
We ran into a few challenges while trying to properly link the front-end code to the back-end. Fortunately, we were able to complete this through GitHub and Eclipse.
Accomplishments that I'm proud of
We're proud of being able to create our own website that could potentially help others.
What we learned
We learned how to program in HTML and CSS using the Eclipse software.
We also learned how to create a website.
What's next for BMMT-MLH
The next step would be to find more resources for our website, create more quizzes, and maybe even create a domain for the website.
Built With
css
html
java
Try it out
github.com | BMMT-MLH | We believe that prevention is the best cure. In our project we created a website with a helpful quiz that educates users on Covid-19. This project helps users stay safe during this global pandemic. | ['John Amiscaray', 'dnqUW', 'Olamiposi Oso'] | [] | ['css', 'html', 'java'] | 64 |
10,133 | https://devpost.com/software/kilter | Inspiration
With the novel coronavirus, gyms and workout centers have shut down all across the world. This has caused a dilemma for both trainers and trainees. People have been unable to get the fitness guidance they need, and as they spend more time at home, often living sedentary lifestyles, their physical health only deteriorates. People have been trying to find ways to stay in shape from the comfort and safety of their homes, resorting to fitness-on-demand or giving up on the matter entirely. On the flip side, trainers have been trying innovative ways to reach their customers, such as live streaming and prerecorded videos. Although these methods work, they are often not as effective at motivating people to work out.
We present Kilter: the solution to these problems and the ultimate way to motivate people to exercise while at home during the coronavirus!
What it does
Kilter is a first-of-its-kind fitness app that combines two rapidly expanding market trends, online 1-on-1 personal fitness and a cashback-style reward system, to provide a motivating experience for its users.
Kilter has a free path and a paid path:
Most users will use the free path, so let's focus on that first:
The free path gives users a free daily workout along with guided videos they can follow. Every day they have the option to submit a clip of themselves working out; if they do, they are awarded digital currency, and as they amass this currency they can spend it on sweet rewards from the store!
Users are capped on the number of videos they can submit and the total currency they can earn each day.
The currency aims to motivate people to workout everyday and to stay healthy.
The Paid Aspect:
The paid aspect relies on a subscription model. Users will subscribe to get sessions with a personal trainer and they can then schedule those lessons with their trainer.
This holds people accountable and encourages them to work out on a daily or weekly basis. The best part is that it is completely digital, and users can get the benefits of a personal training session from their home!
Users who pay for the subscription will also get a higher currency cap and will be able to gain more currency every day.
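The daily-cap mechanics described above can be sketched as a small function; the specific cap values and coin amounts are assumptions, since the write-up doesn't state them:

```python
# Illustrative daily caps: subscribers earn up to a higher limit.
FREE_DAILY_CAP = 50
PAID_DAILY_CAP = 150

def award_currency(earned_today, amount, is_subscriber):
    """Credit coins for a workout clip without exceeding the daily cap.

    Returns the number of coins actually credited (possibly 0 if the
    user has already hit their cap for the day).
    """
    cap = PAID_DAILY_CAP if is_subscriber else FREE_DAILY_CAP
    return max(0, min(amount, cap - earned_today))

print(award_currency(40, 25, is_subscriber=False))  # 10: clipped at the free cap
print(award_currency(40, 25, is_subscriber=True))   # 25: under the paid cap
```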
Kilter is the future of online personal fitness and judges, we invite you to be a part of our journey!
How I built it
Our team used HTML and CSS to build the website. We are still getting used to the languages, but we had a blast using them!
Challenges I ran into
Our backend team member was unable to participate, which really threw a wrench into our plans. We had to use plain HTML to link all of our files together, which, although functional, was not practical. We hope to integrate a backend once our team member frees up!
Accomplishments that I'm proud of
We are proud that we were able to get it done. We had a late start and as a result had limited time to do it.
What I learned
We learned more about HTML and CSS, including how to link pages together and embed videos. It was a great experience!
What's next for Kilter
We plan on integrating more features into the app. Once that is done we hope to make it more robust and then launch it! With the vaccine looking far away it seems that people will be living a sedentary lifestyle for quite a while and we hope that we can launch our idea in time for it to have a large impact!
Built With
css
html
Try it out
github.com | Kilter | Connect with Personal Trainers at Home! | ['Akhil Ramidi', 'Gaurish Lakhanpal', 'Subham Mitra', 'Aditya Tiwari'] | [] | ['css', 'html'] | 65 |
10,133 | https://devpost.com/software/productivity-sticky-space | A vision board created using the poster background
A bulletin board complete with todos
Inspiration
We went off the themes, choosing "Happy at Home." We wanted a more visual way of organizing what we needed to do.
What it does
You can generate different types of notes, such as a task note with a due date, a casual note, and a photo note. These notes can be dragged around and pinned down anywhere on the board, just like on a real bulletin board! You also have the option of using a bulletin board or a poster board.
How we built it
We used HTML, CSS, and JavaScript to create this app and hosted it using GitHub Pages. We also used some code from interact.js to make resizing possible.
Challenges we ran into
The sticky notes disappear on a page refresh; however, the app is still usable without needing to refresh the page. Resizing an image is still a little funky, as the image will move a little while resizing if it is not pinned.
We also had an unsuccessful email notification idea (didn't work due to G Suite being a requirement and neither of us being administrators in an organization).
Accomplishments that we're proud of
The CSS styling (gradient and shadows, specifically!)
The drag and drop function for moving around sticky notes
Creating a resizable image note with a maintained aspect ratio actually took *so* long, but it ended up working in the end!
What we learned
Lots of JavaScript
The little quirks of HTML forms (such as radio buttons not unsticking?!)
Front-end replacements for back-end functionality
That hackathons are really fun (even virtually) :D
What's next for Productivity Sticky Space
Some possible changes we'd like to implement involve backend coding, including making the notes stick and sending out scheduled email/text reminders.
Built With
css
html
javascript
Try it out
gujiguj.github.io
github.com | Productivity Sticky Space | Have you felt unproductive during quarantine? The Productivity Sticky Space is a simple productivity tool simulating a real board of sticky notes, just like real life! | ['Audrey Yang', 'Lucy Wang'] | [] | ['css', 'html', 'javascript'] | 66 |
10,133 | https://devpost.com/software/eye-can-code | Home page.
List of tutorials provided.
Simple print("hello").
More complex function.
Inspiration
With the recent COVID-19 pandemic, students worldwide have transitioned to online schooling. For some students, however, the transition has been harder than for others. Near where Veer lives is the oldest school for blind students: Perkins School for the Blind. Veer had always wanted to help them, and during these times he decided to help when they needed it more than ever. Together, Veer and Saber worked on an online platform dedicated to the blind and focused on their favourite subject: programming.
According to the National Federation of the Blind, COVID-19 has had a disproportionate impact on the blind, with many facing additional challenges during the pandemic. From an education standpoint, blind students and blind parents face uncertainty about the types of electronic materials they will be expected to use for the remainder of the academic year, making it hard for them to keep up with classes. Lastly, it is difficult for the visually impaired to learn how to code on their computer, a challenge which has been exacerbated by the pandemic.
What it does
We built a text editor which can listen to speech, translate it to Python code, and then run the code in a console. The platform is complete with an academy to teach blind students how to code, with lessons on variable types, for loops, if statements, functions, etc.
We used natural language processing to:
Allow the visually impaired to code in python by simply speaking
Provide a handful of python tutorials with voice and speech recognition features to effectively teach coding to people with visual impairments
Create an online platform for the visually impaired to learn
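A minimal, rule-based sketch of the speech-to-code idea: spoken phrases (already transcribed, e.g. by the Google Cloud Speech API) are matched against templates and emitted as Python source. The phrase grammar below is invented for illustration; the project's real NLP pipeline is not shown in the write-up.

```python
import re

# Each rule maps a spoken phrase template to a Python source builder.
RULES = [
    (r"^print (.+)$",               lambda m: f'print("{m.group(1)}")'),
    (r"^set (\w+) to (\d+)$",       lambda m: f"{m.group(1)} = {m.group(2)}"),
    (r"^for (\w+) in range (\d+)$", lambda m: f"for {m.group(1)} in range({m.group(2)}):"),
]

def transcribe_to_code(utterance):
    """Translate one spoken utterance into a line of Python code."""
    text = utterance.strip().lower()
    for pattern, build in RULES:
        m = re.match(pattern, text)
        if m:
            return build(m)
    return f"# unrecognized: {utterance}"

print(transcribe_to_code("set count to 5"))  # count = 5
print(transcribe_to_code("print hello"))     # print("hello")
```

Real systems need far more robust parsing (numbers as words, nesting, error correction), but the template-matching core is the same shape.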
How we built it
We used:
Flask
HTML, CSS, and JS
Python
Natural Language Processing
Google Cloud Speech API
Challenges we ran into
We at first parsed the code in Python. However, when connecting it to the JS, it was incredibly laggy and didn't update in real time. Therefore, we had to translate all the Python code into JS, which was tedious. In addition, SpeechRecognition only worked on one teammate's computer and not the other's, which required a lot of debugging.
Accomplishments that we're proud of
We're really proud that our product is actually working for others to use. Not only did we complete a text editor, but we also got the academy working, which was great.
What we learned
We learnt how to use speech recognition and execute the code in string form. One of our teammates learned how to deploy code to Heroku and link it to a domain. We also learned more about linking JS with Python, especially for real-time work.
What's next for Eye Can Code
We want to make more aspects of our website audio to further help make it accessible for the blind. Afterwards, we hope to have the platform available for all to use.
Built With
css3
flask
google-cloud
google-web-speech-api
html5
javascript
natural-language-processing
python
Try it out
github.com
eyecancode.online | Eye Can Code | An online platform built with a speech-to-text python code editor for the visually impaired to learn coding | ['Shreya C', 'Veer Gadodia'] | ['The Wolfram Award', '1st Place Award', 'Amazon Gift Card', 'Wolfram|One Personal Edition + 1 year subscribtion to Wolfram|Alpha Pro'] | ['css3', 'flask', 'google-cloud', 'google-web-speech-api', 'html5', 'javascript', 'natural-language-processing', 'python'] | 67 |
10,133 | https://devpost.com/software/trick-or-tweet-hxnsyt | Inspiration
Our inspiration for Trick or Tweet came from brainstorming for a hack at the start. We had never had the opportunity to work on a "silly, useless, pointless, [or] funny hack," so after a brief discussion of what is going on in the world, and specifically in the media, we settled on this idea.
What it does
A game that gives users points for correctly guessing whether or not a tweet was said by a famous politician, musician, or actor.
How we built it
The web app was built using Python, JavaScript, HTML, and CSS.
Built With
css3
html5
javascript
python | Trick or Tweet | Is this a fake tweet? Is it real? Trick or Tweet to find out! | ['bhaps thaya', 'flintwil', 'Ashvin Uthayakumar', 'Sarin Shrestha'] | [] | ['css3', 'html5', 'javascript', 'python'] | 68 |
10,133 | https://devpost.com/software/yt2009 | Inspiration
Ever felt nostalgic for the old YouTube?
Re-experience the glory days with YT2009!
What it does
YT2009 applies a collection of effects to videos to give them a distinct early YouTube feel.
How I built it
Users submit a video file and their email address to a Flask web server, which forwards the video to Transloadit to add the Unregistered Hypercam 2 watermark and downgrade the video format to ensure compatibility with Windows Movie Maker.
Then, UiPath-driven automation imports the video into Movie Maker, adds the intro, outro, and transitions, before exporting it and sending it back to Transloadit to add the music track (009 Sound System).
Finally, videos are emailed to users at the address they specified earlier.
Challenges I ran into
Dealing with video codec compatibility issues (just about the only video formats it accepts still in modern use are gifs) was no fun, nor was hacking around all the weird peculiarities of Windows Movie Maker's UI and unwillingness to play nice with UiPath.
Accomplishments that I'm proud of
Getting this crazy thing to work.
What I learned
I learned how to use UiPath, and a decent bit about video encoding and formats.
What's next for YT2009
Right now, the service isn't publicly accessible.
I don't have the slightest bit of confidence that the Rube-Goldberg-just-barely-works strings tying this project together will hold at the first sight of unexpected or unusual input.
Additionally
I don't want to hear that music track ever again.
Built With
flask
python
transloadit
uipath
windows
Try it out
yt2009.online | YT2009 | Give your videos an early YouTube aesthetic | [] | [] | ['flask', 'python', 'transloadit', 'uipath', 'windows'] | 69 |
10,133 | https://devpost.com/software/maskit-zv8ji2 | InspirationWe are inspired by the struggle of many of us to keep safe during the current public health crisis.
That is why we have designed a system to keep track of Mask usage and to check that people use masks whenever they enter public spaces.
What it does
The system detects in real time whether a person is wearing a mask and then responds with a mechanical output, i.e., a servo motor movement.
How we built it
Our system uses a Raspberry Pi camera to capture live images, then sends each captured image to a custom HTTP Linux server. On the server, we use a TensorFlow model (from AIZOO) to examine the image from the Raspberry Pi. If the image contains a person with a mask, the object detection model returns true and sends a request for the Raspberry Pi to move a servo (i.e., "open the door").
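The AIZOO detector's exact interface isn't given in the write-up, so here is a sketch of just the server-side decision step, assuming the model yields (label, confidence) pairs per detected face with labels "mask" / "no_mask". The threshold and servo angles are illustrative:

```python
# Stand-in servo angles; the real values depend on the door mechanism.
OPEN_ANGLE, CLOSED_ANGLE = 90, 0

def door_command(detections, threshold=0.8):
    """Return the servo angle the server should send back to the Pi.

    Open only if at least one face was detected AND every detected
    face is confidently wearing a mask; otherwise stay closed.
    """
    if not detections:
        return CLOSED_ANGLE
    if all(label == "mask" and conf >= threshold
           for label, conf in detections):
        return OPEN_ANGLE
    return CLOSED_ANGLE

print(door_command([("mask", 0.95)]))                    # 90 (open)
print(door_command([("mask", 0.95), ("no_mask", 0.9)]))  # 0 (closed)
```

Requiring *every* face to be masked is a design choice for shared entrances; a per-person turnstile could instead check only the nearest face.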
Challenges we ran into
We faced many challenges completing the project. We tried to do all the image processing on the Raspberry Pi; unfortunately, the Pi had neither the speed nor the space to handle the complicated image processing in OpenCV and TensorFlow. Therefore, we designed an HTTP server in Python to send each image to a Linux computer, where it is processed. Finding the right model for mask detection was also difficult; we tried several frameworks before finding an efficient TensorFlow mask detection model from AIZOO. Remote work also proved difficult, especially for a hardware project where only one of us could see the immediate results.
Accomplishments that we're proud of
We are proud of integrating the hardware and ml software. There were many difficult aspects of the processing and sending the data between server and Raspberry-pi.
What we learned
We learned about the capabilities of different machine learning frameworks, the Raspberry-pi, and how to create an HTTP server and send images over this server.
What's next for MaskIt
We would like to extend our technology to new situations (e.g., large groups) and greater mechanical outputs (e.g., doors opening). Hopefully, our technology could be beneficial in public spaces to check that visitors are wearing masks.
Built With
python
raspberry-pi
tensorflow
Try it out
github.com | MaskIt | Mask detection Raspberry pi with machine learning technology to better manage public and private spaces | ['Allen Mao', 'Hans Gundlach'] | [] | ['python', 'raspberry-pi', 'tensorflow'] | 70 |
10,133 | https://devpost.com/software/auto-mask | Inspiration
When mask-wearing became required due to COVID-19, I empathized with the doctors and nurses who have always had to wear the itchy, hard-to-breathe-in face covering. Realizing that face coverings will always be uncomfortable no matter the material or design, I thought to make masks easy to take on and off, so catching my breath in a grocery store would be effortless.
What it does
Auto Mask features an eye shield to protect from infected saliva, touchless control to minimize bacteria transfer from hands, and even a sneeze detector! An electrode on the abdomen activates the mask just in time to catch a cough or sneeze.
How I built it
I designed the 3D printed headpiece and combined the Arduino microcontroller with an ultrasonic sensor, muscle sensor, and a pair of servo motors.
Built With
3dprinting
arduino
c++
cad
Try it out
www.thingiverse.com | Auto Mask | Normal masks are uncomfortable to wear all day. Auto Mask features touchless mask on/off control and abdominal muscle sensing sneeze detection to catch coughs in time. | ['Taliyah Huang', 'Calista Huang'] | ['Hardware winner'] | ['3dprinting', 'arduino', 'c++', 'cad'] | 71 |
10,133 | https://devpost.com/software/random99-random-brooklyn-nine-nine-cold-opens | Home Screen
Player View
Inspiration
I was stuck at home, nothing to do. No Netflix account. No friend to "lend" me theirs. But I wanted to watch some Brooklyn Nine-Nine!
What it does
The user clicks a button, and a random episode is selected from the JSON file of all available episodes. Then, using the YouTube iframe API, the video autoplays for the user. They can also press a button to start a new one if they get bored.
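The random-pick step can be sketched in a few lines (in Python here; the site itself uses JavaScript). The episode entries and video IDs below are placeholders, not real data from the project:

```python
import json
import random

# Placeholder episode list standing in for the project's real JSON.
episodes_json = '''[
  {"season": 1, "title": "Pilot cold open", "video_id": "aaaaaaaaaaa"},
  {"season": 2, "title": "S2E1 cold open",  "video_id": "bbbbbbbbbbb"}
]'''

def random_embed_url(episodes):
    """Pick a random cold open and build an autoplaying embed URL."""
    pick = random.choice(episodes)
    return f"https://www.youtube.com/embed/{pick['video_id']}?autoplay=1"

episodes = json.loads(episodes_json)
print(random_embed_url(episodes))
```

In the browser version, the resulting URL (or video ID) would be handed to the YouTube iframe player rather than printed.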
How I built it
First I collected together the playlist for each season's cold opens.
Then I used this tool to get an Excel sheet
http://www.williamsportwebdeveloper.com/FavBackUp.aspx
Then I put all that data into an Airtable and formatted it. Also used APIs and Javascript to get missing data.
Converted that Airtable stuff to JSON.
Created Javascript code to get a random value
Created an MVP without styling
Redid the JS Code to use the Youtube API
Styled the app
Done!
Challenges I ran into
It was very hard to get the data; I had to hack it together!
Accomplishments that I'm proud of
I'm proud of my problem solving to get all the data I needed
What I learned
All about JSON
Built With
css
html
javascript
json
youtube
Try it out
99.sampoder.com
github.com | Random99 - Random Brooklyn Nine Nine Cold Opens | Bored? Nothing to do? Want some comedy in your life? Well it is time to watch infinite Brooklyn Nine Nine cold opens with Random99! | ['Sam Poder'] | [] | ['css', 'html', 'javascript', 'json', 'youtube'] | 72 |
10,133 | https://devpost.com/software/find-your-way-t9ouyp | Play in your free time and have lots of fun
It's not as easy as you think
GIF
Don't get fooled just because it says so...
Inspiration
I wanted to build a web app where people can have fun, which is like a game, it is why I built this...
What it does
Find Your Way is a website anyone can access as long as they have an internet connection and a suitable device. At Find Your Way you face challenging steps you need to pass... It all starts with a black-and-white start page; the first challenge is to find the NEXT button, which is completely black and placed on a black background. It's difficult, but with good tech knowledge and some intelligence you can find tips and locate the NEXT button. After that you face several more steps, each different from the last, that you need to pass... To understand it you must try it out... It's fun; you will understand how stupid you are, or realize how intelligent you are if you pass the whole game without a single failure... Remember, getting LOLs means you fail... Even at the last moment you find challenges... Most people won't be able to pass this successfully; try whether you can... Also, there is a surprising end.
How I built it
First I got a domain from domain.com, then used InfinityFree hosting for the website. I created the questions and wrote the algorithm for the age-guessing feature, then coded the website using HTML and CSS. After that I uploaded the files with FileZilla and used Cloudflare to secure the site.
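The site's actual age-guessing algorithm isn't shown here, but a minimal sketch of such a feature, assuming it estimates age from a birth year, might look like:

```python
from datetime import date

def guess_age(birth_year, today=None):
    """Naive age estimate from a birth year, ignoring month and day."""
    today = today or date.today()
    return today.year - birth_year
```

For example, `guess_age(2000, date(2020, 6, 1))` returns 20.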
Challenges I ran into
I ran into trouble with some of the CSS. Because of an issue with the external stylesheet, I included the style element within the HTML and did most of the design there.
Accomplishments that I'm proud of
I am proud of successfully building this website. I am also proud of the comments I got from my friends; they were really interested in this website (more than in any other project I have done).
What I learned
I learned some coding tips and a few animation effects.
What's next for Find Your Way
I will be building a mobile app for Find Your Way too, and I plan to add many more steps and challenges.
Built With
css
html
Try it out
www.findyourway.tech
github.com | Find Your Way | Find the way out of this tricky web app, many steps that require intelligence and co-ordination to pass... | ['Senuka Rathnayake'] | [] | ['css', 'html'] | 73 |
10,133 | https://devpost.com/software/elpta | Inspiration
We were inspired by a class one of our team members took this semester on endangered languages. This course not only highlighted the unique and impactful elements of several lesser-known languages spoken across the world, but explained how preserving linguistic diversity can promote important cultural elements and allow for different modes of cognition. We wanted to create an initiative to spread awareness of the importance of these critically endangered languages and help educate people by showing off some of their distinctive features.
What it does
Our project is the ELPTA (Endangered Language Preservation and Transcription Association) website. The project aims to educate and bring awareness to endangered languages by providing information about them on our website. The information is not only presented on the web pages, but also through an interactive map to increase engagement. The historic, geographic, and linguistic information about each language helps bring heightened awareness to these endangered languages and highlights how widespread the endangerment of languages, which are an integral part of many cultures, really is. The website is unique in that it also gives individuals a way to get involved and contribute to the preservation of these endangered languages from the comfort of their own homes. This is done through the transcription feature, where members of the community can help transcribe documents written in endangered languages. Transcriptions are then sent to our team for review, which streamlines the transcription and preservation process.
How we built it
We built this website from scratch using HTML/CSS and JavaScript. We used HTML to build the framework for the website, and used CSS to style and format it. JavaScript was used so that visitors can interact with elements of the website, such as selecting languages from drop-down menus and submitting their transcriptions in a form for review by the team's editors. The site is hosted on GitHub Pages, which we chose because it was easy, free, and accessible to all. As a result, we also used GitHub extensively for version control and collaboration.
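The data behind the language drop-downs can be pictured as a simple mapping; the languages and fields below are placeholders, not the site's actual records:

```python
# Placeholder records; the real site stores richer historic, geographic,
# and linguistic information per language.
LANGUAGES = {
    "Ainu": {"region": "Hokkaido, Japan"},
    "Yuchi": {"region": "Oklahoma, USA"},
}

def language_options():
    """Sorted names used to populate the drop-down menu."""
    return sorted(LANGUAGES)

def language_info(name):
    """Record displayed when a user selects a language."""
    return LANGUAGES.get(name)
```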
Challenges we ran into
One of our biggest challenges was (for the most part) learning HTML, CSS, and JavaScript on the fly.
Accomplishments that we're proud of
This was our first hackathon and we managed to create a working product. It was a really cool experience for us and we got to learn a lot throughout the course of this event.
What we learned
Web-development is relatively new for our team so this experience allowed us to work closely with HTML, CSS, and JavaScript, and become more familiar with how the three interact to create websites that are not only functional, but aesthetically pleasing. This project also caused us to consider UI design and how to improve the user experience, and we redesigned several elements in the pursuit of that goal. In the forming of our interactive map page, we also got practice working with the Google Maps API and learned how to embed Google Maps onto a webpage, add location markers, and attach descriptions to these markers.
The site is being hosted on GitHub Pages which also gave all of our team members more practice using Git.
What's next for ELPTA
In the future, we aim to make the website be able to bring awareness to a much wider variety of endangered languages, perhaps through transitioning to a wiki-based format where users would submit transcriptions that would be reviewed by a grassroots team of moderators. We are diligently working to make sure that the transcription process is effective and efficient, so when there is a larger influx of transcription documents we seek to store the documents on an actual database. We are always working to improve through feedback, and we are excited about the future.
Built With
css
github
google-maps
html
javascript
jquery
Try it out
elpta.github.io | ELPTA | An open source platform dedicated to preserving endangered languages through educational resources and community transcription efforts. | ['Rena L', 'Bo Deng'] | [] | ['css', 'github', 'google-maps', 'html', 'javascript', 'jquery'] | 74 |
10,133 | https://devpost.com/software/valorant-lineups | Valorant Lineups
Wanna become the next Shroud or Ninja? Wanna take your skill to the big esports teams? Look no further than valorantlineups.online! Valorant is a 5v5 tactical shooter recently released by Riot Games (the people who made League of Legends). This website aims to teach you how to use your agent to the best of your ability by giving you spots where each ability is best used. Here are some of the agents that our website helps the most:
Cypher
Cypher is a one-man surveillance network who keeps tabs on the enemy's every move. No secret is safe. No maneuver goes unseen. Cypher is always watching. Our website provides some of the best hidden camera spots, helping you and your teammates gather critical information right under your enemies' noses and keeping your opponents guessing.
Sova
Sova tracks, finds, and eliminates enemies with ruthless efficiency and precision. His custom bow and incredible scouting abilities ensure that even if you run, you cannot hide. Our website shows you where to aim your recon darts to reveal the most information, giving you your enemies' locations and making you a force to be reckoned with.
Viper
Viper deploys an array of poisonous chemical devices to control the battlefield and cripple the enemy's vision. If the toxins don't kill her prey, her mind games surely will. Our website helps you line up the most devastating smokes and completely shut down your enemies' vision, leaving you in complete control.
Built With
css
html
javascript
react
Try it out
valorantlineups.online
github.com | valorantlineups.online | Wanna improve your skill and be the next esport legend? Valorantlineups.online has all to offer! | ['Jared Prather', 'Ziyang Li'] | [] | ['css', 'html', 'javascript', 'react'] | 75 |
10,133 | https://devpost.com/software/i-will-die-in-a-day | The start of the interactive fiction.
A squirrel has passed Asha's path.
Inspiration
Trigger warning/Content warning:
This game contains content that may upset some players, since it discusses and explores the sensitive topic of mortality and death. If these topics are something you are uncomfortable with, I advise you not to read the interactive fiction.
This game was inspired by the comic 'This croc will die in 100 days' by Twitter user @yuukikikuchi. I was really impacted by the portrayal of death in the comic. For most of us, the concept of mortality and death is not discussed very often, and even when it is, it is treated almost as taboo. In a way, I thought it was a little strange that something so common, which eventually happens to everyone, isn't talked about more often and more openly.
In a way, this restraint around talking about death made me want to explore a character with many lingering regrets and mismanaged time, who numbed her pain through procrastination and trash television. I knew I wanted to explore the concept through the lens of a normal person, someone who could die in one of the unpredictable, unplanned accidents that happen across the world daily. I thought the best way to talk about it was to create interactive art (literature and drawings).
So I ask you the question, what would you do if you had one day left?
What it does
This game explores the final day of Asha's life. Asha represents the hardworking normal person. She's the adult daughter of a single mother who, through blood, sweat, and tears, worked her way up from poverty to a small degree of financial stability. To do this, Asha gave up many life experiences and little mundane pleasures. Asha never took risks or did anything exciting, skipping past what everyone considers growing up in her rush to become an adult.
As the reader follows Asha through multiple paths, I tried to provide various options for the decisions they could make as her. Players can try to avoid risk-taking, but the date of Asha's death is set in stone, so nothing changes except the regrets she leaves behind.
I challenged the reader to play through multiple paths and to keep an open mind about enjoying life and its detours. In many ways, the journey is more impactful than the destination. I wanted to convey that it is okay to enjoy yourself a little and have fun, because you never know when you might die, so try to live a life without regrets. If you love your parents, remember to call them and tell them you love them.
How I built it
I built this interactive fiction with ink (using Inky) by inkle studios. With it I was able to add images through tags, as well as create multiple scenes and decisions for the player to make. I also added conditional scenes that only occur if an earlier scene was viewed by the player.
In order to add some images and audio, I had to go into the exported web pages created by Inky and edit the html, css and javascript code.
As for the visual art, it was all drawn during the hackathon using a tablet and the art program Medibang.
Challenges I ran into
One major challenge I initially ran into was adding images. Because of how Inky exported the file, tagging and referencing images worked a bit differently. To get the image source to resolve, you had to place the image in the exported folder, which I only figured out after digging through the JavaScript for a while.
Another challenge I ran into was the writing component itself, which I had underestimated. Death is of course a sensitive topic, and the last thing I wanted was to offend anyone, so I tried to write in a respectful manner. However, this was my very first time writing something creative, not to mention interactive. It was really difficult to avoid cliches, grammar mistakes, and weak prose, especially under time constraints. I tried to compensate by proofreading, although it is highly possible that typos and errors escaped my grasp. I hope to improve by practicing my creative writing more.
Planning out the story was also a challenge since it was interactive. I had to plan how branches and scenes fit together, and which conditions lead to which potential ending. It was harder at the beginning because I was freestyling the story without a written plan, which caused issues down the road. Everything became immensely easier to organise once I sat down and wrote out the story paths and scenes.
Accomplishments that I'm proud of
I think what I am most proud of is that I managed to submit a finished product. I did not expect writing to be such an arduous process. I have never written creative fiction before, let alone interactive fiction. One thing that encouraged me is that interactive fiction exists as a part of many games (RPGs, decision-making games, and so on). By putting myself out of my comfort zone, I gained more confidence in structuring a story for future projects.
Even though I had trouble avoiding cliches, when I showed a friend, he laughed in amusement at my mistakes and terrible cliches, so that was great. It wasn't the reaction I was expecting, but I am happy he was entertained by what I wrote. We all learn from mistakes, and next time I'll make sure to write more succinctly.
What I learned
I learnt ink as a tool for writing interactive fiction. Beyond the syntax, though, I think I learnt the most from practicing my creative writing and gaining confidence in structuring an interactive narrative.
In the process, I also gained a lot of respect for writers. Being able to churn out something so good, so quickly, is incredibly impressive.
Dealing with the topic and writing process I think I learnt a lot about myself personally.
I asked myself questions: If I were to die now, would I be satisfied with what I have achieved?
I challenged myself to think about that question while writing the story in order to challenge the readers the same question.
What's next for I will die in a day
I think I definitely will add more drawn pictures, and edit pre-existing ones. I am thinking of making the images more colorful the more happy experiences Asha experiences.
Additionally, I definitely want to edit many of the typos I made in the process and maybe rewrite things to make them more clear.
I also might use the story and switch to using Unity instead to try including animations and better graphics.
Built With
css
html
ink
javascript
Try it out
iwilldieinaday.online | All Was Silent | What if you knew how much time you had left? This interactive literature explores the last day of Asha, a 26 year old unemployed woman. Learn Asha’s story and make her choices on her final day. | [] | [] | ['css', 'html', 'ink', 'javascript'] | 76 |
10,133 | https://devpost.com/software/starktoast | Inspiration
I really just thought this would be a cool, fun project, and I was hoping to make something Tony Stark-esque (hence StarkToast).
What it does
It uses the system's webcam to allow the user to interact with a digital 3D object in real time, with their hands
How I built it
I used my admittedly limited knowledge of computer vision and of 3D modeling with vpython, along with quite a bit of research to fill the gaps while developing this project.
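As a sketch of the kind of calculation involved (the sensitivity constant and the assumption that hand displacement is normalized to [-1, 1] are mine, not the project's actual values), detected hand movement can be mapped to a per-frame rotation of the 3D object:

```python
import math

# Assumed sensitivity cap; the detected hand position is presumed to be
# normalized so that dx and dy fall roughly in [-1, 1].
MAX_ANGLE = math.pi / 8

def hand_to_rotation(dx, dy):
    """Clamp the displacement and scale it to rotation angles in radians."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(dx) * MAX_ANGLE, clamp(dy) * MAX_ANGLE
```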
Challenges I ran into
Using the vpython module to properly render and modify the 3D object, as well as refining the hand detection and the calculations based on it (still a work in progress)
Accomplishments that I'm proud of
Being able to develop enough that this project mostly works under optimal conditions
What I learned
I learned far more about 3D modeling and computer vision than I had previously known
What's next for StarkToast
Continuing to refine the algorithm for hand detection and calculations based on that to make the program smoother overall
Built With
numpy
python
tkinter
vpython
Try it out
github.com | StarkToast | 3D virtual object Manipulation with hands | ['Greih Murray'] | [] | ['numpy', 'python', 'tkinter', 'vpython'] | 77 |
10,133 | https://devpost.com/software/trackify-vzmknj | Manage Subscriptions
Homepage
Order Tracking
Inspiration
Trackify
is inspired by an incident that occurred in Miraal's house the morning before the hackathon started. Miraal's older sister had just realized that LinkedIn had been charging her for LinkedIn Premium for four months without her noticing, and that she had lost over 100 dollars. As she cried in despair, her little sister panicked at her own realization that she had missed the deadline to cancel her Apple Music trial by a day. In the other room, their mother was on a call trying to cancel an order she had made for a freezer. Extra expenses were the last thing Miraal's family needed in these difficult quarantined times. Miraal sat there and tried to think of a solution that would solve all the problems her family members were facing and let them be happy at home. One morning later, after the hackathon group call, the idea of Trackify was born.
With everyone locked up at home due to
coronavirus
, online subscriptions such as Netflix and Apple Music are being bought more and more, yet people are forgetting to cancel them either before their free trial ends or after quarantine ends and they no longer need it. Trackify ensures that
no one
makes a payment by accident and that
no one
carries the stress of tracking their own subscriptions, so they can be
happy at home
.
What it does
Trackify is a website which manages all kinds of online purchases in one place. From the moment an online purchase is made, you can manage it on Trackify. It allows you to track your orders, cancel subscriptions, and manage purchases. Trackify enables phone, text and email returns with no hassle. One of its most useful features is call waiting - where it will wait (
listening to all that hold music on your behalf
) for a human to answer the phone before connecting you to them. Furthermore, it tracks your online subscriptions and helps you make sure that no payment is made unintentionally. If a subscription is to be cancelled before the free trial ends, Trackify can handle this for you.
Essentially, Trackify ensures that no one has to experience any stress when purchasing products online, something that we feel is essential, especially as people purchase more things than ever online in a quarantined world.
How we built it
There are four main features incorporated into Trackify.
Firstly, the automatic call waiting system. This was built using the Twilio API. The speech recognition API provided by Twilio was used to identify common keywords in both human and machine speech to tell the difference between them. Then, the API was used to create a conference call when a human was identified so that a user can be added to the call after the bot was done navigating and waiting!
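A toy version of the keyword check can be sketched as follows; the phrase lists here are invented, and the real system works on transcripts from Twilio's speech recognition:

```python
# Invented phrase lists used to guess whether a transcript sounds like an
# IVR menu or a live person.
MACHINE_PHRASES = ["press", "main menu", "your call is important"]
HUMAN_PHRASES = ["hello", "speaking", "how can i help"]

def sounds_human(transcript):
    """Crude human-vs-machine guess from a speech transcript."""
    text = transcript.lower()
    if any(p in text for p in MACHINE_PHRASES):
        return False
    return any(p in text for p in HUMAN_PHRASES)
```

Once this check fires, the bot can create the conference call and add the user.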
Secondly, the email cancellations. This was built using the Gmail API that allowed us to send emails to a company on a user's behalf at the push of a button.
Third, the order tracking page. For the purpose of this hackathon we set up tracking codes for one carrier (Canada Post) and worked with that. We were able to retrieve tracking numbers from a user's email by using the gmail API to read the emails sent from a particular company. Then we used various NPM packages to extract the tracking number from the body of the email. After this, we were simply able to send the tracking number off to an API that was able to give us the status of the package.
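The extraction step can be approximated with a regular expression; the 16-digit pattern and the sample email body below are simplifications for illustration (real carrier formats vary):

```python
import re

# Simplified: assume the carrier's tracking numbers are 16 consecutive digits.
TRACKING_RE = re.compile(r"\b(\d{16})\b")

def extract_tracking_number(email_body):
    """Return the first tracking-number-like token in an email body."""
    match = TRACKING_RE.search(email_body)
    return match.group(1) if match else None
```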
Finally, the search feature. This was built using MongoDB which had we had populated using fake company information (for the purpose of this hackathon).
Challenges we ran into
Due to time constraints it was hard to employ complete NLP into our project. As a result, our bot only serves as a proof of concept rather than a complete tool that can be used to navigate just
any
IVR. A lot of the features we programmed involved hard-coding that would, unfortunately, not work for an unknown IVR. The good news, however, is that there are a few very easy fixes to increase the probability of navigating any IVR. By implementing an NLP API, we believe we would be able to understand context and perform actions such as key-presses more dynamically, rather than fixing them beforehand for each company our bot deals with.
Accomplishments that we're proud of
For many of us, this was our first hackathon: a totally new experience with new knowledge coming at you in every direction. And yet we persevered in this environment and developed a proficient solution to a real problem during quarantine that will surely make everybody happy at home. We are proud of what we built, including our implementation of the Twilio, Gmail, and NLP APIs that we used to develop our application, and the design process, where we collaborated on designs through Figma. We are also incredibly proud of our communication and collaboration in this virtual environment, where we educated, taught, learned from, and motivated each other to create our amazing project, Trackify.
What we learned
We learned how to better understand documentation and work with APIs. Incorporating the automatic call waiting system was really challenging, however by learning how to break down each step of the process, we were able to achieve what we were looking for. Working with the Twilio API to recognize speech, and initiate a conference call definitely gave us a more in depth understanding of the potential uses of this API.
In addition, we also learned a lot about creating efficient algorithms. This learning happened when we were sending ourselves hundreds of order emails and trying to process them with for loops!
Fortunately
we saw some opportunities for improvements and made them!
Apart from technical skills, we learned a lot about teamwork and collaborations, especially in the environment that we worked in for this hackathon. Not having your teammates near you to wake you up when you start drooling on your keyboard was
especially
challenging. We learned how to efficiently use some good online tools to work together, especially in the development world.
What's next for Trackify
Currently, Trackify is just a basic prototype that displays a fraction of its real potential in the lives of its users. Many features could be implemented that would make the user experience much better than it already is. In the future, Trackify will look to implement features such as auto-cancelling subscription options, and options to restart and renew subscriptions directly through the application. We also imagine Trackify tracking not only subscriptions but all online spending for the user, making our application the ultimate one-stop shop for all digital transactions.
Built With
css
express.js
gmail
html
javascript
mongodb
node.js
twilio | Trackify | A machine-learning powered app to help people manage their online subscriptions and purchases | ['Sameer Pujji', 'Adil Kapadia', 'pranav dureja', 'miraalk Kabir'] | [] | ['css', 'express.js', 'gmail', 'html', 'javascript', 'mongodb', 'node.js', 'twilio'] | 78 |
10,133 | https://devpost.com/software/exercise-together | Live Video Streaming
Video Room
Youtube enabled
Live Data Syncing
Search Bar
Authentication
DynamoDB
Home
Inspiration
We know that physical activity and social interaction have immense benefits*. During lockdown, many people aren't able to go to the gym or see any of their friends in person. I wanted to create an app to help people get their endorphins up and see their gym buddies across the world.
*
https://www.cdc.gov/physicalactivity/basics/pa-health/index.htm
,
https://www.mercycare.org/bhs/services-programs/eap/resources/health-benefits-of-social-interaction/
What it does
Exercise Together is a web app that allows 3 people to share video while watching the same Youtube exercise class and log their exercise activity.
It works like this:
A user visits the website and either creates an account or logs in. Amazon Cognito is used for authentication.
Once authenticated, the user is directed to a dashboard depicting the amount of time spent exercising with Exercise Together.
The user clicks join room and enters a room name. Up to 3 of their friends enter the same name to join the same room.
The users enter a video chat room and can search for a Youtube exercise video together by utilizing the search bar. Once everything is ready, they click start exercise to begin!
When the video ends, the user returns to the dashboard and their time spent exercising is logged.
Exercise Together is helpful when you want to exercise with your friends and simulates an exercise class you could do at the gym like yoga or pilates. This way people can work out with their friends that are all over the world!
How I built it
I used React and Redux to build the front end of the project. For the backend, I used serverless AWS services: Cognito, Lambda, S3, DynamoDB, and AppSync. Cognito verifies the user so that I can log exercise data for each user separately. All data is stored in DynamoDB. When people enter a room, Agora.io livestreams everyone's video to each other, and React displays every participant's stream so they can see each other's faces. Every change to the search bar and every click on a YouTube video is logged to DynamoDB and pushed to all the other clients in the same room through AppSync; as a result, everyone in the room sees the same view at the same time. When you finish the workout, the session is sent to DynamoDB with the email you logged in with as the key. On the dashboard, a GET request is made back to DynamoDB so that you can see your exercise data for the whole week.
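Stripped of the AWS plumbing, the exercise-logging idea reduces to keying sessions by the signed-in email and summing them for the dashboard. This is an in-memory stand-in sketch, not the actual DynamoDB schema:

```python
from collections import defaultdict

# In-memory stand-in for the DynamoDB table keyed by Cognito email.
sessions = defaultdict(list)

def log_session(email, minutes):
    """Record one finished workout for a user."""
    sessions[email].append(minutes)

def weekly_total(email):
    """Total minutes shown on the user's dashboard."""
    return sum(sessions[email])
```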
Challenges I ran into
I used a wide variety of services I wasn't experienced with previously, like Agora.io, AWS Amplify, and AWS AppSync. Learning them was difficult, and I went through a lot of troubleshooting. Moreover, syncing all these services together into one application was a large challenge, and I kept trying different pieces of code one at a time to get them to work together.
Accomplishments that I'm proud of
I was finally able to learn how to use WebSockets (AWS AppSync uses WebSockets), which I'm really excited to use in future projects! WebSockets are especially crucial for online games, which I want to make.
What I learned
I learned how to use a multitude of services and link them together: WebSockets, Agora.io, AWS Amplify, and AWS AppSync. All of these will be immensely useful for my future projects, so I believe I really benefited from creating this project.
What's next for Exercise Together
Some extensions I'd like to make include:
Adding Fitbit and Apple Health integration so that users of those services can see that data logged on the website.
Making a sidebar that people could use to see which of their friends are currently online and join a room with them. To implement that, I would use AWS Neptune, which uses the same kind of graph technology Facebook uses for Facebook Friends.
Creating a phone app using React Native. I feel that more people would like to use a phone app rather than the website.
There are still
many bugs
, especially with the video streaming since I'm using a third party API and a free account for it. For example:
The video streaming only works in Chrome.
Entering the video room with more than one person is buggy. The way I get it to work is by duplicating the tab for each user entering and closing the previous tab.
The Cognito verification link redirects to localhost, but will confirm the account.
Built With
agora.io
amplify
appsync
cognito
cookie
dynamodb
graphql
javascript
lambda
materialize-css
node.js
react
redux
s3
serverless
ses
websocket
Try it out
exercisetogether.rampotham.com
github.com
www.youtube.com | Exercise Together | Exercise Together is a webapp that simulates your own group fitness class online with your friends | ['ram potham'] | ['The Wolfram Award'] | ['agora.io', 'amplify', 'appsync', 'cognito', 'cookie', 'dynamodb', 'graphql', 'javascript', 'lambda', 'materialize-css', 'node.js', 'react', 'redux', 's3', 'serverless', 'ses', 'websocket'] | 79 |
10,133 | https://devpost.com/software/meetmap-nb5aij | meetmap home
meetmap traffic
meetmap dark mode
meetmap heat map
meetmap
Same Home, Different Hacks 2020
Hey! Thanks for checking out meetmap.
We’ve all been stuck at home for quarantine. For us, part of being Happy At Home necessitates a mental rest once in a while, like catching some fresh air at a nearby park. That being said, we still want to make sure that we’re being safe and not going to a busy area - but how can we scope out the area from the comfort of our own homes?
We created meetmap, a live web service you can use before heading out to meet up with your friends, all while staying socially distanced and safe. We used multiple Google Maps APIs from the Google Cloud Platform to let users view heatmaps of the population density in a given area. Users can enter a location and toggle a heatmap to see how busy a park is and judge whether it is safe to go there. They can also change the gradient colours, the radius, and the opacity of the heat map, as well as customize the look by turning on dark mode.
meetmap was made using the Google Maps API from the Google Cloud platform.
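Conceptually, a Maps heatmap layer takes weighted points; shaping busyness counts into that form can be sketched as below (the coordinates and counts are invented, and the real rendering happens in the Maps JavaScript API):

```python
# Invented busyness counts per location: (lat, lng) -> visit count.
park_visits = {
    (43.6677, -79.3948): 12,
    (43.6465, -79.4637): 3,
}

def to_weighted_points(visits):
    """Reshape counts into the weighted-point form a heatmap layer expects."""
    return [{"lat": lat, "lng": lng, "weight": n}
            for (lat, lng), n in visits.items()]
```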
note: the root user is k-chew on an AWS EC2 Ubuntu instance
Built With
css
html
javascript
python
Try it out
github.com | meetmap | Same Home, Different Hacks 2020 | ['Wesley Choi', 'Kailey Chew'] | [] | ['css', 'html', 'javascript', 'python'] | 80 |
10,133 | https://devpost.com/software/flask-automate | Inspiration
I really want to enable people who don't know how to code to be able to build a Flask application. However, it's not close to finished...
What it does
For now, you can write a YAML file and it converts it to Flask-WTForms fields.
How I built it
I built it using PyYAML and Python.
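The core transformation can be sketched as follows. The field-type names match WTForms, but the spec format shown (a dict, as PyYAML would produce from a YAML file) and the text-generation approach are illustrative assumptions, not the module's actual implementation:

```python
# Illustrative mapping from YAML type names to WTForms field classes.
FIELD_TYPES = {"string": "StringField", "password": "PasswordField",
               "submit": "SubmitField"}

def spec_to_form_source(form_name, fields):
    """Turn a parsed YAML spec into Flask-WTForms class source text."""
    lines = [f"class {form_name}(FlaskForm):"]
    for name, kind in fields.items():
        lines.append(f"    {name} = {FIELD_TYPES[kind]}('{name}')")
    return "\n".join(lines)
```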
Challenges I ran into
One challenge I ran into was how to parse the information and actually generate the Flask-WTForm.
Accomplishments that I'm proud of
I actually got it to work: you can supply a YAML file and it will give you a Flask-WTForm.
What I learned
I learned how to use Flask-WTF.
What's next for Flask-Automate
I will keep advancing this module and eventually publish it to PyPI.
Built With
flask
python
yaml
Try it out
github.com | Flask-Automate | May the people don't know code can code | ['boyuan12 Liu'] | [] | ['flask', 'python', 'yaml'] | 81 |
10,133 | https://devpost.com/software/home-buddy-t0mpab | Inspiration
For most kids, summertime is filled with endless fun and laughter. Kids deserve to keep forming these happy memories, even in quarantine. To fill this time, lots of kids will play games and use apps that distract them from what's going on around them. Home Buddy, however, lets them see their surroundings from a new perspective by turning everyday things into exciting games.
What it does
Home Buddy encourages kids to use their senses by playing games based on what's going on around them. Show Me is a game that prompts kids to find a specific item in their home and take a picture of it. If the camera recognizes the correct object, they win a point. Alarm Alert keeps kids on their toes by having them keep an ear out for alarm noises. If the app detects an alarm, Sloane the Sloth will start screaming.
How we built it
We created this cross-platform application for both Android and iOS using the Flutter SDK and the Dart programming language. It uses the Google Cloud Vision API to allow us to detect what objects are being scanned by the user's camera.
To perform audio level analysis, we used a Dart package called
noise_meter
. This package allowed us to retrieve the decibel level (dB) of the incoming audio from the user's phone. Additionally, we used another Dart package called
oscilloscope
to create a real-time graph that displays these decibel values. If the audio reaches a certain dB, then we notify the user that Sloane the Sloth hears a loud noise.
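A minimal sketch of the loud-noise check on a window of decibel readings (the threshold and hit count below are invented values for illustration, not the app's actual tuning, and the app itself is written in Dart):

```python
# Invented tuning values for the alarm check.
ALARM_DB = 70.0

def alarm_detected(readings, threshold=ALARM_DB, min_hits=3):
    """True when enough readings in the window exceed the threshold."""
    return sum(1 for db in readings if db >= threshold) >= min_hits
```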
Challenges we ran into
Originally, the plan was to make a NodeJS Express server that would interact with an Angular front-end and the Flutter mobile application would act as the "remote controller." This proved to be too complicated for our team because we easily became too ambitious. In the end, we narrowed it down to a mobile application because this would allow children to easily move around their homes while playing with our app.
Accomplishments that we're proud of
We're proud of coming up with an idea that might actually be useful (or at least entertaining) to some parents and children. We hope to take this idea further in the future so that children can get some knowledge out of what we made for 48 hours.
What we learned
We learned that it's best to narrow down your idea ahead of time. Being too ambitious with projects can greatly affect its outcome.
Built With
dart
flutter
google-cloud-vision
Try it out
github.com | Home Buddy | Helping kids make the most out of quarantine by exploring their homes | ['Celina Gallardo'] | [] | ['dart', 'flutter', 'google-cloud-vision'] | 82 |
10,133 | https://devpost.com/software/team-builder-rov6lz | Inspiration
I was looking for a team to join on Discord, and there seemed to be far more people looking for a team than people offering to accept team members.
What it does
It's supposed to help people find teammates.
How I built it
To be honest, I could've used Node.js and this project would've been done already. However, in the spirit of the hackathon, I introduced myself to Python web hosting. It was a hassle, but I enjoyed it and hope to add more to the project soon.
Challenges I ran into
I mainly struggled with setting up the droplet. I learned the hard way that Flask has dependencies that are messy to work with, but I persevered, and I have a feeling actually coding the darn thing will be smooth sailing compared to the initial setup. Or I could always just switch to Node.js and Express.
Accomplishments that I'm proud of
I got Nginx to proxy to a Flask server hosted on Gunicorn.
What I learned
I learned a lot about systemd services, HTTP proxies, and HTML templates.
What's next for Team Builder
I hope to make a web app that could supplement future hackathons.
Built With
flask
gunicorn
nginx
pymongo
python | Team Builder | A way for hackathon participants to find teammates | ['Laszlo Goch'] | [] | ['flask', 'gunicorn', 'nginx', 'pymongo', 'python'] | 83 |
10,133 | https://devpost.com/software/simple-smart-glass | Logo
Side View
Actual Prototype Side #1
Actual Prototype Front #1
Actual Prototype Side #2
Actual Prototype Front #2
Actual Prototype Side #3
Actual Prototype Front #3
Actual Prototype Above
Inspiration
The ever increasing cost of products not only for the visually impaired, but for other differently abled individuals. Read the three paragraphs below if you would like a more in-depth description.
What it does
Using a camera attached to glasses, a co-processor, and a vision API it can detect and auditorily identify objects upon the issuance of a voice command or the tap of a button. Parents can also use the child monitoring feature.
How we built it
We used a Raspberry Pi and a USB webcam for our hardware. Then on the Raspberry Pi, we used the pre-installed node-red. We linked node-red to Google Cloud using JSON authentication, and we used the Google Vision API for our object detection. For Google Assistant integration, we used IFTTT to send a webhook to our node-red server, and trigger our code.
Challenges we ran into
Some challenges we ran into include the usage of Node-Red. None of us have ever really used node-red, and that was a challenge. Some other challenges include using Google Cloud APIs, as none of us have used them as well.
Accomplishments that we're proud of
-CG concept art
-Use of Google Cloud to offload processing
-Using Node-Red successfully for the first time.
What we learned
We learned not only about integrating Node-Red, the Raspberry Pi, and Google Cloud, but about what dedicated people can do in a surprisingly short period of time.
What's next for Simple Smart Glass
On the road ahead, Simple Smart Glass will be expanding in a number of ways. Firstly, we wish to improve the quality of the hardware, and move out of the prototyping phase and on to the final development phase. Second, we would like to be able to streamline the UI and UX, making it simpler for users to set up and use our hardware and software. We are also thinking about expanding to a larger market by providing more functions like Google Glass.
Long Explanation
For someone with impaired vision, life can be an uphill battle. Trying to navigate one’s way through the ever changing landscape of our modern world is a challenge even for someone with perfect vision. Using modern technologies there are a number of ways to overcome this challenge, but unfortunately these solutions are incredibly expensive. The most common object detection systems on the market cost anywhere between $2,500 USD and $5,000 USD putting them out of reach for most individuals. The goal and inspiration behind Simply Smart Glass was to bring this price down to under $100 USD, making them not only affordable but offering a fully custom experience to the end user.
Simply Smart Glass is a sophisticated object detection system with full voice control. The system is designed to be mounted onto a pair of sunglasses or reading glasses by the user, and uses a co-processor and cloud vision processing to tell the user what they are seeing once activated using a voice command, or with the touch of a button. The near instant response gives the user an idea of what they’re “looking at”. Not only this, but the glasses allow remote access through a web browser, enabling parents, guardians, or caretakers to see what the individual wearing the glasses does in near real-time.
To accomplish this sophisticated design on a budget, we relied heavily on cloud vision processing. The data from the camera is sent to the cloud by a Raspberry Pi, a small computer that can fit in a pocket or bag. We used IFTTT (If This Then That) to create a trigger, which is then processed by Node-Red and sent to Google Cloud, which runs an object detection algorithm to determine what the individual wearing the glasses is “looking” at. The reasoning behind this heavy reliance on cloud processing is what it means for the hardware. If the majority of the heavy lifting doesn’t have to be handled by the local hardware, the device can not only be made smaller, but less expensive.
We created something that we believe genuinely has the ability to better the lives of others. We’ve created something that, if taken to market, would improve the lives of people across the world. Along the way, we not only learned about the technologies involved and expanded our skillsets, but we learned about ourselves, and how to work as a team.
Built With
cloud-vision
google-assistant
google-cloud
ifttt
node-red
raspberry-pi
webcam
Try it out
github.com | Simple Smart Glass (simplesmartglass.tech) | We are creating an affordable assistant for the visually impaired. Many other existing products are thousands of dollars, and we are trying to create an affordable product for all users. | ['Aaron Santa Cruz', 'Wesley Sullivan', 'Kannen R-S'] | [] | ['cloud-vision', 'google-assistant', 'google-cloud', 'ifttt', 'node-red', 'raspberry-pi', 'webcam'] | 84 |
10,133 | https://devpost.com/software/biota | Inspiration
TBD
What it does
TBD
How I built it
TBD
Challenges I ran into
TBD
Accomplishments that I'm proud of
TBD
What I learned
What's next for Biota | Biota | TBD | ['Yuxi Qin'] | [] | [] | 85 |
10,133 | https://devpost.com/software/ar-bobble-skull | Looking into the distance
Grinning but you can't see it
What it does
It generates a huge AR skull where your head and neck should be, resulting in you looking like something of a skeleton bobblehead figure
How I built it
SparkAR Studio
Built With
sparkar
Try it out
github.com | AR Bobble-Skull | I was looking into SparkAR, facebooks AR studio for development of AR filters and decided to try it out in a hackathon. | ['Generic Bolb'] | [] | ['sparkar'] | 86 |
10,133 | https://devpost.com/software/cvv-5grpzc | cardgui
Cryptography for VISA/MASTERCARD cards (and others). PIN (PVV/IBM OFFSET), CVV, CVV2, ICVV, PINBLOCK (clear or encrypted)
JAVA JDK 1.7.0_65.
Edit key file .\data\config\key.xml
ZPK Zone Pin Key. Pin block validation can be ISOFORMAT0 or ISOFORMAT3.
Per BIN number adds :
- Add CVKs pair, one for CVV/ICVV (or CVC/ICVC) and another for CVV2 ( or CVC2 ).
- One PVK for every BIN number.
Pin validation Type can be Visa PVV or IBM_3624_OFFSET.
For IBM_3624_OFFSET type, the pin validation data type can be (THALES7000 or THALES8000)
- THALES7000 Pin Validation Data is calculated as follows:
* Refer to Thales 7000 manual - 9.4 IBM PIN Offset (command code value 'DE' )
* - Computes Account Number : Takes the 12 right-most digits of the account number, excluding check digit.
* - Inserts the last 5 digits of the account number (previous data) in a given position <INSERT_POSITION>
* - Returns this data
- THALES8000 Pin Validation Data is calculated as follows:
* Refer to Thales HSM 8000 Host Command Reference Manual - Generate an IBM PIN Offset (command code value 'DE' )
* - Takes characters from Pan Number starting at position <PAN_START_POSITION> and ending at <PAN_END_POSITION> ( 1 <= sp < ep <= 15 )
* - Add pad character <PAN_PAD_CHARACTER>, until a 16 characters length is completed.
* - Returns this data
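To make the THALES8000 rule above concrete, here is an illustrative Python sketch of the described steps (the project itself is Java; right-padding with the pad character is an assumption read from the wording "add pad character until a 16 characters length is completed"):

```python
def thales8000_validation_data(pan, start_pos, end_pos, pad_char):
    """Build IBM 3624 PIN validation data as described above:
    take PAN characters from start_pos to end_pos (1-based, inclusive),
    then pad with pad_char until a 16-character length is completed."""
    if not (1 <= start_pos < end_pos <= 15):
        raise ValueError("positions must satisfy 1 <= sp < ep <= 15")
    data = pan[start_pos - 1:end_pos]
    return data.ljust(16, pad_char)

print(thales8000_validation_data("4000001234562000", 1, 12, "F"))  # → 400000123456FFFF
```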
Batch utility
- Create your own file with cards to be processed ( use .\data\cards.txt as a template )
- All fields are mandatory.
- Lines that contain a '#' character are considered comments.
- Open a cmd window and execute :
.\cardutl.bat <cards_filename> <key_filename>
- Once finished, go to .\data and check .out file (and .\log\cryptocardutl.log for warnings and errors).
Window utility
- Edit cardgui.bat and point JRE_HOME variable to your JRE (my JRE version is from jdk1.7.0_65)
- Execute cardgui.bat
Built With
batchfile
cvv
emailoutbound-email-list
federal-procurement-data-system
java
listingware
otp
pageone
payapply
paylog
paylou
paymill
totalcounts
totango
Try it out
github.com | lianhe666 | Procurement system | ['Wshao62 yiz'] | [] | ['batchfile', 'cvv', 'emailoutbound-email-list', 'federal-procurement-data-system', 'java', 'listingware', 'otp', 'pageone', 'payapply', 'paylog', 'paylou', 'paymill', 'totalcounts', 'totango'] | 87 |
10,133 | https://devpost.com/software/theclown | our mascot!!
Inspiration
Inspired by the Silly Hack category, I went with a circus theme for the color scheme.
What it does
It returns a random joke and a picture when you click on the clown emoji.
How I built it
I used the "icanhazdadjoke" API and gave it a basic css design.
Challenges I ran into
I struggled to figure out the joke API because I needed it to return the information in a JSON format.
I also tried to set up the domain.com name theclownis.online, but I didn't know how to work with DNS and hosting.
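For reference, `icanhazdadjoke` only responds with JSON when the request carries an `Accept: application/json` header; the parsing step can be sketched offline in Python (the sample payload below is illustrative):

```python
import json

# Header the API expects in order to respond with JSON instead of HTML
headers = {"Accept": "application/json"}

# An illustrative response body in the documented id/joke/status shape
body = '{"id": "abc123", "joke": "Why do clowns wear big shoes? Big feet.", "status": 200}'

data = json.loads(body)
print(data["joke"])
```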
Accomplishments that I'm proud of
I'm proud of getting the joke functionality to work as well as adding in an extra picture functionality.
I finally got the domain set up towards the end which was really exciting!
What I learned
I'm practicing getting used to implementing APIs and re-teaching myself some javascript.
What's next for TheClown
Making the pictures random every time you click the clown for a new joke
Built With
api
css3
html
javascript
Try it out
theclownis.online | TheClown | Hack for the MLH Same Home Different Hacks Hackathon which returns a funny joke when you click on the clown | ['Fay Lin'] | [] | ['api', 'css3', 'html', 'javascript'] | 88 |
10,133 | https://devpost.com/software/pointlessclicks | Logo
Story Teller
Dashboard
Animal Pool
Colors Click
Inspiration
Kids in this quarantine must be missing their favorite kindergarten days, feeling bored and low on energy. Here is a solution to alleviate their boredom: EKG is built to teach kids with a few cool features.
(E-Electronic, KG-KinderGarten)
The features are :
Animals Pool
Color Click
Story Teller
What it does
Our EKG will teach kids what animals sound like.
For example, a cat says meow, a cow says mooo, a dog says baw baw, a pig says oink oink, a snake says isssss. This is the first feature, Animals Pool. A pool of animals is shown in animated form, and when clicked each makes its respective sound along with a tag popping up showing its name.
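The Animals Pool behavior is essentially a lookup table from animal to sound; an illustrative Python sketch (the app itself is written in Kotlin):

```python
# Mapping taken from the examples above
ANIMAL_SOUNDS = {
    "cat": "meow",
    "cow": "mooo",
    "dog": "baw baw",
    "pig": "oink oink",
    "snake": "isssss",
}

def on_animal_tap(animal):
    """Return the name tag to pop up and the sound to play."""
    return animal.capitalize(), ANIMAL_SOUNDS[animal]

print(on_animal_tap("cow"))  # → ('Cow', 'mooo')
```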
Our EKG will teach what a color looks like!
For example, Red, Blue, Green, Black, White, Yellow...
There will be a bunch of boxed colors, and when one is clicked its name pops up and is read out.
The last feature our EKG has is the story teller.
Few hand-picked short stories are available to be read out by the app. When any story is clicked the story will be read out with the text shown on screen.
How I built it
The application is built in Android Studio using Kotlin as the language. The resources like animal voices and pictures are taken from Google Images and free animal sound websites. The UI is built in XML, and there were no external libraries or extensions added. It is a simple and fun application for kids.
Challenges I ran into
I was not getting the right resources for almost 1/4th of the hackathon time but I tried my best to get the resources ready for the application on time.
Accomplishments that I'm proud of
I successfully implemented all 3 features I planned for, although I spent the whole night learning Kotlin.
What I learned
I learned Kotlin in a much better way and was able to learn and implement the text-to-speech method.
What's next for E-KinderGarten(EKG)
Next plan is to put it on playstore ;-)
In the future I will implement more features and user stories.
Built With
android
Try it out
github.com
drive.google.com | E-KinderGarten(EKG) | A mini teaching app for kids under 4 years of age. | ['Haripriya Baskaran'] | [] | ['android'] | 89 |
10,133 | https://devpost.com/software/brainiac | Note with link
Search results page
Longer note
Inspiration
Inspired by roam research, and similar apps
What it does
Allows you to take notes in an outliner format and efficiently link concepts.
How I built it
React, Redux, Slate.js. Hosted on Firebase.
Challenges I ran into
Manipulating slate state can be very hard.
What I learned
Slate! A lot of slate.
What's next for Brainiac
Data persistence, images, rich text formatting.
Built With
git
react
slate
typescript
Try it out
www.betterbrain.tech | Brainiac | An outliner based notetaking app with search and internal wiki style linking. | ['Richard Borcsik'] | [] | ['git', 'react', 'slate', 'typescript'] | 90 |
10,133 | https://devpost.com/software/tremor-therapy | home-gamelay
json data
registering
gameplay
gameplay
game over
firebase auth
game
teraphist instructions
data received from iot
login
app
app
app
app
Inspiration
We were inspired to build this project by the growing difficulty therapists and doctors face in treating their patients, especially children and teens, because of the current lockdown and COVID-19 safety measures. Thousands of children recovering from accidents, brain surgery, Parkinson's disease, etc. are stranded with no help to continue their recovery exercises. We found an opportunity to make something for the future. After some research we found that games are the most effective way to improve recovery for children and teens, as they make the process fun and enjoyable. This taps into their subconscious mind and makes them more inclined to recover faster than with the usual doctor-supervised training. Thus, we wanted to build a gaming system with a hardware device at the patient's end. The intelligent IoT device captures data in real time and helps the patient play games. This data can then be collected and provided to the doctor/therapist for analysis, so the therapist can track the patient's recovery over time and view all health-related data on their side using an app or the web. This lets the therapist assess the child closely from remote areas and during restrictions like the ones we have now, and it enables doctors across the world to treat patients, helping improve the medical network.
What it does
We have an IoT device at the patient's end, worn on the hand during therapy time. The patient can log in to the system using an email and password, then either press the Learn button to see instructions for the different exercises, or click the Play option. We categorized the different exercises into levels to make interaction fun for the patients. For demo purposes we used only one level, which we plan to expand into multiple levels with time-based features. As the user performs the exercises, the IoT device captures the data and sends it in CSV format, which we convert to JSON and parse into a dictionary in our system; our software can then track the movements the patient makes and help with moving objects, balancing, etc. Severe jumps or level failures can be noted. The data generated is then added to the Firebase database. Our therapist can retrieve this data from any remote area and analyze it, which gives a clear picture of how the patient is improving and ideas about how to proceed in the future.
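The CSV-to-JSON-to-dictionary step described above can be sketched in a few lines of Python (the column names are hypothetical; the actual MPU6050 payload may differ):

```python
import csv, io, json

# A hypothetical CSV sample from the wearable: timestamp plus accelerometer axes
raw = "t_ms,ax,ay,az\n120,0.02,-0.10,9.79\n135,0.05,-0.08,9.81\n"

readings = [{k: float(v) for k, v in row.items()}
            for row in csv.DictReader(io.StringIO(raw))]
payload = json.dumps(readings)  # the JSON form handed onward to the game / Firebase

print(readings[0]["az"])  # → 9.79
```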
How I built it
We are 3 developers building this project -
Rafi Rasheed started off building the hardware together to make them communicate the way we want. Rafi integrated NodeMcu - ESP8266 with MPU6050 - Sensor and got the track of actions performed by patients. Which he send to Siddharth for the game actions. Rafi used MicroPython a new language to him for many of initial works and also integrated some files in python due to lack of complete documentation in micropython. Siddharth worked completely on the game development. Siddharth had never done game development before and never used language GDScript or framework Godot. He studied it for a day and later buld the game for the next day. Siddharth also integrated the firebase auth with godot and send receiving data from patient to Anas. Anas worked completely on the UI/UX of the game and he made the android app for doctor/therapist, and integrated firebase backend to it. We also used EchoAR and integrated AR video innto gaming system that is like an instruction to the kids.
Challenges I ran into
Overall we faced a lot of challenges. The biggest was that we were building on languages and platforms we had never used in our lives: Siddharth had never worked on game development, Rafi had never worked with MicroPython, and Anas had his first experience with game design. Adapting to all of this was a big challenge. Beyond that, we faced problems like authenticating Firebase from a game engine like Godot, due to the lack of any libraries, and integrating augmented reality into the gaming system.
Accomplishments that I'm proud of
We never thought we could build entirely new software, without prior experience, in a span of 2 days. We planned this project just as the hack started, but in the end we are proud that we learned a ton of new skills and can add this work to our resumes.
What's next for Tremor Therapy
For this demo we took the JSON and automated it for gameplay due to lack of time. Next, we plan to integrate real-time gameplay with a video-calling feature for the therapist. Further, we plan to integrate Firebase ML Kit so that we can have in-depth data analysis and better decision-making for the doctors. We also plan to make the project open source so that tons of awesome developers can fork and contribute to make it better, and anyone in any part of the world can help heal another person.
Built With
android
ar
arduino
firebase
gdscript
godot
java
Try it out
github.com | Tremor Therapy | Tremor Therapy is an interactive game developed for helping children and teens with their therapy process for Tremors(Shaky Hands). It gives the therapist complete analytics about the patients | ['Siddharth M', 'Rafi Rasheed T C', 'ANAS DAVOOD TK'] | [] | ['android', 'ar', 'arduino', 'firebase', 'gdscript', 'godot', 'java'] | 91 |
10,141 | https://devpost.com/software/brillbox-ugvmcf | You can make BrillBox at home too! Just follow the schematic.
GIF
Dirt, germs, and viruses can get stuck on the surface of the mask
Arduino Nano close up
LCD close up
Resistive Ballast
Aluminum foil layer
Introduction
With the shortage of PPE in hospitals and masks surging in price on online delivery sites, the public has resorted to taking their own initiative to protect themselves. This initiative has primarily become prevalent with anyone that has 3-D printers or sewing equipment. With organizations such as Helpful Engineering creating and certifying 3-D printed mask designs, they help set the standard of safety in the D.I.Y PPE community. Yet, how does the public know if their masks are still safe after a certain amount of uses? Our mouths are essentially a vacuum, sucking up surrounding particulates. When an N-95 mask is worn, these particulates get sucked towards our faces only to be stopped by the mask. Over time, germs, dirt, and viruses will populate themselves onto the surface, turning it into a health hazard to anyone that touches it.
This same question can also apply to essential workers’ clothing, as these employees are at work for most - if not all - days of the week. During their long shifts, their clothes will not only pick up grime like everyone else's, but also bacteria and viruses that are floating around in the air or multiplying on countertops and furniture.
Even with consistent wash-and-dry techniques, the virus may still be present on clothes, homemade masks, and other fabrics that humans have frequent, skin-based contact with. Thus, there is a need for the general public to have access to a way to clean their PPE from dirt and grime but also from germs and viruses. This is where BrillBox comes in.
Purpose & Motivation
With the surge of COVID-19, there has also been a huge spike in demand for PPE, most notably masks. This increase in demand quickly left pharmacy shelves bare of surgical and N-95 masks. To solve this, mask-producing companies such as 3M and Honeywell have sought to increase their production of PPE. This tactic, although simple in nature, will do very little to keep up with the demand for masks. Instead, the solution to the lack of PPE is finding a way to constantly reuse it.
Likewise, with respect to frontline workers, constant washing and steaming of clothes is neither efficient nor convenient. As for at-home mask makers, a mask is typically made from materials such as cloth and paper-based filters. It is immediately obvious that these materials will degrade after continuous washing, forcing the user to acquire more masks and more PPE.
These issues can be easily solved by using a sterilizer, which can destroy germs and viruses that may be present on the surface. Although this technology is widely used in labs, it costs far too much for the average user to buy (a lab sterilizer hovers around $7,000). Besides this, there is currently no commercially available sterilizer meant to clean PPE and masks.
With all this said, it is immediately obvious that to extend PPE supply, save hospital workers time, and find a way to reuse non-washable filters, we need a cheap, user-friendly sterilizer.
How BrillBox Works
It is a well-known fact in biology that UV radiation breaks down DNA and RNA, eradicating germs and viruses. Knowing this, all that is needed to sanitize PPE is a concentrated dose of UV for 15 minutes.
To achieve this, Brillbox uses off-the-shelf germicidal bulbs operating in the 264nm range. This wavelength has been tested to be the most efficient at destroying RNA. From there, all that is needed is a special control circuit that is able to turn the bulbs on and off. This controller comprises an Arduino Nano, a voltage regulator, a resistive ballast, and a relay, among other miscellaneous parts. Likewise, the control circuit also interfaces with the UI of the box, which is simply two buttons, a buzzer, and an LCD display.
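The actual firmware runs on the Arduino Nano, but the relay logic reduces to a small rule; an illustrative Python sketch of that rule (timing from the description: a 15-minute cycle, and the lamp cuts out the instant the lid opens):

```python
CYCLE_SECONDS = 15 * 60  # sanitization time from the description

def lamp_should_be_on(elapsed_s, lid_closed):
    """The relay energizes the bulbs only while the lid is closed
    and the 15-minute cycle has not yet finished."""
    return lid_closed and elapsed_s < CYCLE_SECONDS

print(lamp_should_be_on(10, True))             # → True  (cycle running, lid shut)
print(lamp_should_be_on(10, False))            # → False (lid opened mid-cycle)
print(lamp_should_be_on(CYCLE_SECONDS, True))  # → False (cycle complete)
```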
The schematic for the entire project is linked in the gallery (above)
Cost
Due to the simplicity of the design, the entire box costs about $12 to produce and is able to sanitize up to 160,000 masks before needing to replace the germicidal bulbs.
The cost of each individual component is as follows:
Arduino Nano -
$1.70
Buzzer -
12¢
LCD Display -
$2
Buttons (2) -
50¢
Germicidal Bulbs (2) -
$8
Cardboard Box -
Free
Aluminum Foil -
30¢
Relay -
50¢
Reed Switch + Small Magnet -
8¢
Total: $11.20
Note:
The 160,000 comes from the following calculations:
The bulbs last for an average of 5,000 hours, or 300,000 minutes.
Each mask requires at most, 15 minutes to sterilize.
8 surgical masks can be sterilized at the same time.
(300,000/15)*8 =160,000
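The same arithmetic as a quick check in Python:

```python
bulb_life_minutes = 300_000  # rated bulb life in minutes, from above
minutes_per_mask = 15        # worst-case sterilization time per batch
masks_per_batch = 8          # surgical masks sterilized at the same time

print((bulb_life_minutes // minutes_per_mask) * masks_per_batch)  # → 160000
```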
A Note on Safety
Brillbox is designed with safety in mind.
Because Brillbox uses UV germicidal bulbs, users may at first feel apprehensive about the product. This apprehension is understandable, but the Brillbox team has done many things to increase user safety.
Brillbox uses 264 nm UV waves, a subset of the UV-C range. Although UV is able to cause skin cancer, the
UV-C range is not the wavelength associated with skin cancer
(UV-A to UV-B). Likewise, the bulbs used in Brillbox are low power (3 watts), a small fraction compared to the
175 watts that reach the Earth from the Sun
. This means that, in direct contact with the skin, the UV-C will have little effect for short periods of time.
It is also important to point out that UV does not have the same properties as Alpha or Gamma particles. UV waves can be reflected by certain materials, most notably water, glass, and aluminum. To contain the radiation, the inside of the box is lined with two layers of aluminum foil, helping to reflect the UV rays towards the masks. This increases both the efficiency of the box and as well as keeping users safe from potential contact.
Ozone may also be a concern as many UV sanitation devices produce the dangerous gas to increase sanitation efficiency. Although this would benefit Brillbox, ozone, when inhaled, can trigger
asthma attacks, chest pain, coughing, and other respiratory complications
. Thus, the Brillbox team has opted to use ozone-free bulbs to protect users from potential inhalation.
Last but not least, if the Brillbox lid is ever opened during operation, the control circuit will instantly turn off the UV bulb. This is to ensure that the user will never come in contact with the UV present in the device.
Overall, the team at Brillbox has done extensive research on UV sanitization. The team has taken all the necessary steps to prevent any complications from happening, protecting the user, loved ones, pets, and anyone else that uses Brillbox.
Difficulties and Challenges Faced
There were many challenges that the Brillbox team had to overcome to arrive at where we are today. For instance, our team went over 2 different ideas to get to the idea of sanitizing PPE.
Our first idea was to utilize small particles to measure the efficiency of a mask, which evolved to using a laser that has the same wavelength as the diameter of COVID-19 (60-140nm). The basic principle is to measure how much light is able to pass through the mask in question. From there, we are able to calculate how small the holes are in the mask and how effective it would be in the medical setting. This project fell through because the wavelength needed (60-140nm) is in the high end of the UV range, and in a laser form, would pose a health risk to anyone that is not an optics professional.
Our second idea was to make an enclosed mask, somewhat like a scuba mask, that would isolate the user from the outside environment. It would use the hot side of a Peltier module to kill incoming bacteria and viruses, and on the cold side would generate a climate-controlled environment. This would essentially mean the user would have A/C in their mask, keeping them cool throughout the day. The main application for this mask would be hospital workers who have to operate on patients for 8+ hours a day; keeping cool is a necessity for them. However, this idea fell through simply because Peltier modules are unable to reach the 100 °C needed to destroy proteins at an affordable price point.
Besides going through multiple ideas trying to get the UV bulbs to work was extremely tiresome and laborious. Bought off of Amazon, the UV bulbs lack any substantial documentation. Therefore, we had to experiment with different resistive ballasts to prevent the bulb from burning itself out.
Another challenge was writing up the documentation and creating the video has been a very stressful process. As the bulk of the team are high schoolers, this hackathon happened right as finals and AP testing started to kick in. This ate away our time, leaving us only 5 days to write up everything.
Edit:
It is with a heavy heart that I write this but it seems that my teammates have ditched me. My teammates promised to help write the documentation, which they wrote the first two paragraphs but disappeared ever since. This is why only one person is listed in the Devpost despite continuously referring to the BrillBox team. I, Kevin Yang, simply assumed that they were taking a break from the screen so I continued to write using “we” and “team”.
This obviously has its own challenge (I had to do the bulk of the project).
Market Evaluation
As said earlier, this product is the only commercially available PPE sterilizer to date. With its simple UI, low price, and high user safety, combined with the increased demand for masks, BrillBox is a viable product.
First and foremost, Brillbox is designed to be manufactured on a large scale. The control circuitry can be easily turned into a PCB and the UV bulbs are in-large supply. This means BrillBox is both cheap to make and always in ready supply.
Likewise, Brillbox is flexible, able to change form factors depending on the user's needs. If a hospital needs to clean hundreds of masks a day, BrillBox can be easily re-designed to hold a high volume of masks in a dense space, still using the same base controller. If the home user requires only a mask to be cleaned twice a week, a smaller, more compact BrillBox can be made.
Besides flexibility, BrillBox is designed with user safety in mind. As said earlier, the BrillBox team has done their research and has developed a way to safely contain UV light in a box, protecting user’s families, pets, and more. As well, BrillBox can be easily approved by the FDA as it already expounds on best practices seen in lab sterilizers and other commercial UV products such as toothbrush and HVAC cleaners.
Improvements
A product can always be improved. With BrillBox, there are a few minor things that need to be improved in later models.
The first issue is the aluminum foil inside. As the foil layer was done by hand, the aluminum is not as smooth as the team would like. The only consequence of this is a decrease in sanitization efficiency. The un-smooth aluminum foil does not have any effect on the safety of the box.
Likewise, another improvement is using a plastic housing instead of a cardboard box. Although cardboard is cheap and lightweight, it is not as strong as plastic and thus, can be prone to breaking. The reason why Brillbox was constructed using a cardboard box was simply due to a lack of supplies and an inability to get a large, plastic box.
Another change would be to run the UV bulbs on AC rather than DC. The UV bulbs used in BrillBox are some of the most efficient on the market (~40% of the bulb's energy comes out as UV), yet they are currently being run at ½ their rated efficiency. This is because the box drives them using DC while they are normally driven off of AC; I simply did not have access to an AC supply that could power the bulbs.
The last notable change to Brillbox would be replacing the resistive ballast circuitry with an electric ballast. This would help increase the efficiency of BrillBox as the resistive ballast converts any unneeded current into heat.
Conclusion
All in all, with BrillBox, users can say goodbye to the days of handwashing masks or going to the store to buy a box of N95s. BrillBox is cheap enough, flexible enough, and safe enough to be used by everyone. With the prototype costing only $11.20 to make, combined with the demand for more PPE and masks, BrillBox supplies people with a product they have needed for a long time.
What's Next?
Many say that the COVID-19 pandemic is slowly dying down, but I think the opposite. If we look at the U.S. states that have reopened, many have had a huge surge in cases. This is why we need a product such as BrillBox. As long as COVID-19 exists, which will be a while, people will need PPE. BrillBox lets people reuse their PPE, allowing them to consume less and extend their diminishing supply.
With that said, I hope to make BrillBox a fully-fledged consumer product aimed towards at-home users such as families.
Built With
arduino
electronics
hardware
liquidcrystal
nano
relay
sensors
Try it out
github.com | BrillBox | Revolutionizing How We Sanitize | ['Kevin Yang'] | ['The Wolfram Award', 'Most Innovative Hack'] | ['arduino', 'electronics', 'hardware', 'liquidcrystal', 'nano', 'relay', 'sensors'] | 0 |
10,141 | https://devpost.com/software/miia-medical-intelligence-applied | App screens for miia
Overview
Here are some quick links to some of the resources we developed while creating our project:
💡 • Website
📐 • Wireframe
📱 • Prototype
📕 • Documentation
Inspiration
As our population ages, multimorbidity will become increasingly common. The aging population will have higher rates of diabetes, hypertension, and other chronic ailments. Mobile health (mHealth) platforms using smartphones have proven effective for monitoring blood pressure, glucose, and other health-related symptoms. However, applications are not always accessible to the elderly population. Reduced finger sensitivity and mobility can be an obstacle for the elderly, impairing their ability to interact with apps. Features such as larger font sizes, high contrast, and text-to-speech functionality are often neglected in favor of modern design trends intended to appeal to younger audiences.
We designed our app, miia (Medical Intelligence Applied), to be accessible and usable by most seniors. Miia is an application that helps track and manage health conditions for the elderly. For instance, we implemented a chatbot to help seniors input their vital signs. The chatbot can speak aloud, while the senior can respond by voice, which is then converted to text. The chatbot can also ask questions to monitor symptoms and mood, screening for infection and depression respectively. Furthermore, our app tracks users' mobility and activity by drawing data from the built-in accelerometer, gyroscope, and other smartphone sensors. This helps us predict activity level and potentially prevent frailty and traumatic falls among seniors.
How to use miia
Miia can be used through entering
https://miia.me/
and signing in with Gmail or by creating a new account. Once you've logged into miia, you're greeted by the main dashboard, which provides an overview of your profile along with several tabs. Here users can chat with miia, sync wearables, and receive diagnostic reports from health checkups. Current functionality is limited to conversations with the chatbot and facial recognition scans that detect mood and BMI.
Nonetheless, our current Figma prototype serves as a better representation of the app's final functionality and design.
In contrast to the web application, the prototype is developed for mobile devices to better serve the elderly by prioritizing convenience and mobility. The prototype itself is fully interactive; users can click, scroll, and drag through both caregiver and patient interfaces.
What it does
The system leverages AI technology to analyze data collected daily from facial recognition, speech recognition, wearable devices, and/or IoT, and alerts caregivers if any risks are identified. The platform also facilitates communication between caregivers and care recipients, while aiding with health management to alleviate caregiver stress.
Main features
Health data collection
We ensure the health data collection process is easy to follow by having the whole health checkup guided by our AI chatbot miia, which includes the following:
Facial recognition - facial image taken for analysis of cardiovascular disease risk, emotions, BMI, etc.
Speech recognition - speech recorded and analyzed for emotions and mood
AI chatbot - collect health data unavailable in facial and speech recognition/ wearable devices
Phone sensors - detection of fall
Wearable devices/sensors - measurements including but not limited to blood pressure/ heart rate/ sleeping pattern/ activity
Elderly focus design
Voice control - elderly users can choose to interact with chatbot by voice or text
AI Chatbot to stimulate human interactions
Enlarged text and other accessibility features
Reminder system - visual and sound alerts can be snoozed until the elderly login and complete the health monitoring daily
Data visualization for caregivers
Data analytics dashboard - show key metric of elderly over one month
Detailed health reports of elderly - details of each health parameter
Alert system for identified issues - caregivers can set threshold values according to elderly's condition; red warning symbols and notification pop up when value above/ below normal
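The alert logic described above can be sketched in a few lines. This is a hypothetical Python illustration, not code from the actual app; the metric names, units, and threshold values are assumptions standing in for whatever a caregiver would configure:

```python
# Hypothetical caregiver-set thresholds per metric: (min, max) of the normal range.
THRESHOLDS = {
    "systolic_bp": (90, 140),     # mmHg
    "blood_glucose": (70, 180),   # mg/dL
    "body_temp": (36.1, 37.8),    # degrees C
}

def check_readings(readings):
    """Return the metrics whose values fall outside their normal range,
    i.e. the ones that should get a red warning symbol on the dashboard."""
    alerts = []
    for metric, value in readings.items():
        lo, hi = THRESHOLDS.get(metric, (float("-inf"), float("inf")))
        if not (lo <= value <= hi):
            alerts.append(metric)
    return alerts

print(check_readings({"systolic_bp": 152, "blood_glucose": 110, "body_temp": 36.6}))
# -> ['systolic_bp']
```

A real implementation would also attach severity and push a notification, but the core decision is just this per-metric range check against caregiver-configured limits.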
App Guide
Caregiver
Signs up in the app and makes a profile for both themselves and their care recipient.
After choosing the caregiver option, they will set up an account with their email and phone number, and set a password.
Then, the caregiver will add the patient’s name and phone number.
They can then add the pre-existing medical conditions of their care recipient. In this case, the preset conditions are common chronic diseases but there is also the option to add more conditions and background information.
The caregiver can choose important metrics to monitor for certain chronic conditions, such as blood sugar level for diabetes, or mood for depression.
After adding the background information for the patient, a unique pin will be generated for connecting the caregiver with the care recipient.
A confirmation screen will also show the patient’s conditions and metrics to follow.
If there are multiple care recipients, the caregiver can add another patient.
On a daily basis, caregivers log in and monitor the health of their care recipients, with the most important metrics on display. A red notification symbol indicates a warning that requires the caregiver to follow up on a metric.
In the patient profile, the caregiver can change or add more metrics to monitor, chat with the patients, or edit the patient profiles.
Elderly/ Care Recipient
The care recipient receives a text message from the caregiver with his/her unique pin. If a senior is unfamiliar with technology, the caregiver can help him/her set up the app.
Choose to sign up as a patient, and enter the pin received.
Our chatbot guides seniors through the whole health checkup process on a daily basis.
The patient can choose to text or speak to the chatbot.
Miia will proceed to initiate the process of health check by taking their facial image
Miia will first ask a few questions regarding their physical and mental health, such as body temperature, blood pressure, or mood and the senior can input manually or tell miia their measurements. For voice inputs, Miia will repeat the measurement to verify.
Depending on the needs of the senior and caregiver, the chatbot can also ask about other metrics, give reminders, or chat with the senior.
After health check, users will be redirected to a health overview which summarizes results for the senior.
Key metrics of seniors are shown in measurements. If the user is interested in knowing more of a particular metric, they can click the metric and look into the details.
If seniors have any concerns, they can contact their caregivers using the in-app chat function.
If desired, they can also choose to add or remove wearable devices and sensors.
Lastly, they can check their profile, which shows personal information, settings and caregiver information.
How we built it
Software
• Frontend dev using Angular, with Firebase Authentication.
• Node libraries like Chart.js, PWAs, Bootstrap, Material Design, etc.
• Hosting and CI/CD setups using Netlify, Heroku, and GitHub.
• Domain and SSL certificate from Namecheap and Let's Encrypt.
• SQL database connected to the app via a RESTful API.
• Google Colab notebooks to execute heavy GPU workloads and ML Algorithms.
• Invision for developing WireFrames
• Figma for creating final prototype
• Slack for Internal Communications & Google Drive for Documents, Images, etc.
Machine learning
We collected datasets from various sources such as Kaggle, JAFFE, and IMFDB and trained machine learning models for several tasks: identifying emotions from facial expressions, estimating BMI from face images, identifying emotions from speech, and detecting falls from phone sensors. Cardiovascular disease risk is determined by reviewing cohort studies and results in medical journals. After training the models, we deployed demos of the emotion prediction, BMI prediction, and cardiovascular disease risk models on Heroku.
Challenges we ran into
It is difficult to find quality labelled data for training machine learning models, which in turn affects the accuracy rate. Given that this is a remote hackathon, we were also unable to test connection with wearables. While there is flexibility to use the app without external sensors, we plan to integrate with multiple wearable devices and platforms in the future.
Market Evaluation
To facilitate the adoption of our technology, we plan to target caregivers (B2B) as our primary target demographic. Currently there are 34 million caregivers for the elderly in the United States, with 5 million of them being long distance caregivers. Our goal is to introduce our product, while increasing our adoption rate, and thus solidify our application as an essential tool for caregivers worldwide.
Currently, miia's distribution channels will be limited to the mobile app stores on both Android and iOS devices. In later iterations, miia will transition to also being available as a web application for desktops.
Our go-to-market strategy during distribution will combine freemium and viral approaches. This in turn provides financial incentives for early adopters, who can take advantage of the two-month free trial and subscribe later. We'd also like to introduce a referral system where users promote our application and are rewarded for successful signups. In addition, we aim to partner with health organizations (clinics/hospitals/national health insurance) alongside deploying through-the-line marketing tactics to enhance customer reach and maximize customer acquisition.
What's next for miia!
App Development
Health data collection via speech recognition and wearables
Data analytics dashboard
In-app chat
Wearables
Water-proof watch for seniors
Water-proof necklace for seniors
Recruitment
We are planning to bring the project to the next stage. Shoot us a message if you're interested!
Built With
angular.js
cicd
figma
firebase
github
invision
ml
netlify
pwa
python
Try it out
www.figma.com
github.com
github.com
emotionpredict.herokuapp.com
bot.dialogflow.com | miia - medical intelligence applied | Digital health solution for elderly and caregivers | ['Ava Chan', 'Rohail Khan', 'Alice Tang', 'Billy Zeng'] | ['Best Designed Hack'] | ['angular.js', 'cicd', 'figma', 'firebase', 'github', 'invision', 'ml', 'netlify', 'pwa', 'python'] | 1 |
10,141 | https://devpost.com/software/bridge-4kb0n6 | Figma -- page 2
Figma -- page 3
Figma -- page 1
Here's
the Figma link, and
here's
the link to our Google Slides!
Inspiration
Due to this pandemic, everyone, especially adults, has been forced to work in a virtual setting through platforms such as Zoom, Skype, Webex, and other conferencing platforms. Those who are hearing impaired, unfortunately, have a larger barrier when it comes to communicating on these virtual platforms.
Statistics show that a hearing impairment in an adult decreases household income by $12k on average. Hearing aids exist for this, yet 3.65 million hearing aids were thrown out in 2016 alone, as many people do not want to wear them.
Based on the information we gathered, we realized that we wanted to create a web application that would allow people who have difficulty hearing to have trouble-free communication.
What it does
bridge is a video calling web application that converts ASL to speech and speech to text, allowing for trouble-free communication between people who are hearing impaired and others.
bridge enables seamless communication through advanced technology. bridge views and reads ASL hand gestures through the camera. Our web application then recognizes these hand gestures, using ML and neural networks to process the camera input. Finally, the application converts the ASL hand gestures to text and then to speech.
How we built it
For the speech-to-ASL and ASL-to-speech features, we used Python source code (
Signum
) that already had a basic ASL recognition scanner. We altered the code by removing its second- and third-guess feature. On top of that, we made it a multi-user video chat and added closed captions. We used Google's text-to-speech library (gTTS) to convert the text into spoken words. We also used Keras, a machine learning library, with a neural network trained on a database of ASL gesture images to recognize the ASL alphabet.
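The final step of that pipeline, turning the network's per-letter scores into text that gTTS can speak, can be sketched as follows. This is plain Python for illustration; the 26-class output ordering and the confidence threshold are assumptions, not details from the actual Signum code:

```python
import string

# The 26 ASL alphabet classes, assumed to be in A-Z order.
LETTERS = list(string.ascii_uppercase)

def frame_to_letter(probabilities, min_confidence=0.6):
    """Map one frame's class probabilities to a letter, or None if the
    model isn't confident enough (e.g. the hand is between gestures)."""
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return LETTERS[best] if probabilities[best] >= min_confidence else None

def frames_to_text(frames, min_confidence=0.6):
    """Assemble recognized letters into a string, skipping low-confidence frames."""
    letters = (frame_to_letter(p, min_confidence) for p in frames)
    return "".join(l for l in letters if l is not None)

# Two confident frames ('H', then 'I') around one ambiguous frame:
h = [0.0] * 26; h[7] = 0.9
i = [0.0] * 26; i[8] = 0.8
noise = [1 / 26] * 26
print(frames_to_text([h, noise, i]))  # -> HI
```

The resulting string would then be passed to gTTS for audio output, and dropping low-confidence frames is what keeps transitional hand positions from polluting the caption.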
The website was coded in React Native, and to prototype the UI design we envisioned for bridge, we used Figma.
Challenges we ran into
One of our main challenges was beta testing. Because the target market of our app is primarily people who have a hearing impairment, we were hoping to showcase and demo bridge to people with a hearing impairment and get their feedback and advice on how we could improve our product and make it more accessible. However, because of the short time span of this project and COVID-19 concerns, we were unable to physically have people with hearing impairments test our idea and provide feedback.
Another big obstacle our team faced was the code. We found source code on GitHub that recognized ASL and deciphered letters, but after running it, we found numerous errors that kept it from identifying letters accurately. Our team had to spend a lot of time understanding the code, identifying the bugs, and fixing them. We relied heavily on online resources like Stack Overflow, but this process took a large amount of time.
Accomplishments that we're proud of
One of our major accomplishments for this project includes coding a fully functional website to complement our idea. We used React Native to build this website but most of our team members did not have experience with React Native or how to code with it. Our team learned a lot about utilizing React and we’re proud of the website we made with it, and we’re looking forward to using React Native in future projects.
Another accomplishment was the prototype that we made on Figma. Before hACCESS, no one on our team had used Figma to build website UI designs, so we were all quite new to Figma and all of its various features. Along the way, we learned a lot about how to use Figma and create UI designs that are both informative and aesthetically pleasing, and this experience exposed us to a new prototyping platform that we will surely use in the future!
We’re also proud of the fact that we were able to conduct a survey and receive 50 responses, most of which supported the problem we are trying to solve.
What we learned
Throughout this process, our team was exposed to how so many people face widely different problems relating to accessibility, especially how these problems are often ignored by large companies and corporations. The experience opened our eyes to the fact that there is still a lot of work to be done to help people with disabilities and impairments access specific information and resources, and doing extensive research on this topic has inspired our team to continue working on projects that help with accessibility.
This project also helped our team learn a lot about using Figma and other prototyping platforms, as well as coding websites using React Native. Our team was not very experienced with using this platform, but throughout this project we learned a lot about how to make use of the various features on Figma and React Native.
What's next for bridge
In the next couple of weeks, we want to continue developing bridge such that it is accurate and efficient at recognizing and making sense of ASL. Currently, the code only recognizes letters, but we want to further develop it such that it can recognize ASL for entire words, as this feature will likely be much more useful. We also want to transition from creating a website to creating a browser extension such that bridge will be easy to use.
We also want to conduct a more thorough market analysis and hope to speak to at least 50 people with hearing impairments such that we can accurately gauge what features we should include in bridge. Once quarantine restrictions begin to ease, we would like to formally conduct beta testing to analyze how we can make bridge more accessible and easy to use, especially for those with hearing impairments.
Built With
python
react-native
Try it out
github.com | bridge | Let's communicate better with those who are hearing-impaired. | ['Trishala Jain', 'Mihika Bhatnagar', 'Anika S', 'Prisha Parashar'] | ['Best Fit Hack'] | ['python', 'react-native'] | 2 |
10,141 | https://devpost.com/software/braille-keyboard | . | . | , | ['rahul garg', 'Shivay Lamba', 'Pulkit Midha'] | [] | [] | 3 |
10,141 | https://devpost.com/software/cbi | Inspiration
Around half of the world's population, more than 3 billion people, live on less than $2.50 a day. Of these, more than 1.3 billion live in extreme poverty on less than $1.25 a day. Using a commonly held metric of 'evil', people who go to prison tend to come from a lower socioeconomic status than their peers who are not deemed guilty by the criminal justice system. Education and opportunity are the great enablers that can lift these individuals out of the poverty cycle and prevent others from falling through the cracks.
Hence, CBE is envisioned as an app that connects educators to students, aiming to solve one part of the equation and enable these individuals to lead better lives.
What it does
CBE is an app that will connect educators to students, allowing educators to interact with students via video link and share files with each other.
How I built it
This app is work in progress.
Challenges I ran into
It was difficult trying to integrate the services together, especially since this was my first time attempting to build an app.
Accomplishments that I'm proud of
NA
What I learned
Coding is hard.
What's next for CBE
It is intended for CBE to be tested out in the Singapore market.
App Access (Currently not available):
The app can be accessed here:
https://snappy.appypie.com/index/app-download/appId/7dd48a30f648
Let me know if there are any issues.
*Note: The video is not entirely reflective of the current app
Built With
appy-pie
Try it out
snappy.appypie.com | CBE | Can't be evil. That hinges on a good education. Hence, Continuous Brain Enrichment is born in order to educate individuals one at a time towards a world that cannot be evil. | ['winston law1'] | [] | ['appy-pie'] | 4 |
10,141 | https://devpost.com/software/bemyeye | app logo
Inspiration
Our eyes allow us to see the world. They help us carry out many tasks with ease, something we often overlook and take for granted. Visually impaired people struggle with tasks we consider very basic, for instance shopping. The challenges they face are unimaginable. Globally, the number of visually impaired people of all ages is estimated to be 285 million, of whom 39 million are blind. This project is an initiative to help make their lives easier.
What it does
Mobile App
beMyEyes helps the visually impaired shop conveniently by :
Detecting and Identifying objects around
Narrating the type of object identified
Reading labels on products using OCR (Optical Character Recognition) and narrating it to the user
Connecting with a human assistant nearby to help the visually impaired user with shopping
It's compatible with both Android and iOS.
Website
The website enables volunteers to help visually impaired shoppers nearby: it collects their information on the sign-up page so they can be notified via text when a volunteer opportunity arises in their area.
How we built it
Mobile App
Twilio
- Used for messaging between users, helpers, and the system.
Google Cloud Vision
- Used for ML object recognition
Clarifai
- Used for identifying food
GCP OCR
- Used for text identification
React-Native
- Used for building a cross-platform app.
Expo-Speech
- Used for narration of results and providing instructions to the user
Adobe Illustrator
- Used for designing the logo, assets and UI for the app
MongoDB
- Used for storing user, helper, and match data
Website
Maps JavaScript API
- Used for the embedded interactive Google Map and the auto-fill feature for the address field in the form
Geocoding API
- Used for the embedded interactive Google Map and the auto-fill feature for the address field in the form
Semantic UI Library
- Used to create the button on the Join page
Places API
- Used for the embedded interactive Google Map and the auto-fill feature for the address field in the form
Figma
- Used to create prototype of website
Languages
- HTML, CSS, and JavaScript were used to build the website.
API deployment and testing: a SashiDo app was created and used for testing the various API endpoints in the system.
Challenges we ran into
First of all, our teammates are in different time zones, so maintaining real-time communication was a challenge.
For some of us, this was our first experience with React Native, image recognition, image captioning, and working with APIs. Still, we managed to pull it off together.
Accomplishments that we are proud of
Overcoming the above challenges was our biggest accomplishment.
What we learned
Through this project we had the opportunity to learn React Native, object detection, OCR, image captioning, text generation, and text-to-voice conversion.
What's next for BeMyEye
For the project's future, we plan to integrate it with shopping malls and markets for better reach, so the product can easily help people in need. We could also integrate it with sensors and spectral anagram for obstacle detection, which would in turn help the visually impaired walk with minimal risk.
Built With
adobe-illustrator
css
expo.io
figma
gcp
geocoding-api
html5
javascript
maps-javascript-api
places-api
python
react-native
sashido
semantic-ui
twilio
Try it out
github.com
drive.google.com | BeMyEye | An additional set of eyes for the visually impaired | ['Ashish Kumar Panigrahy', 'Ebtesam Haque', 'Mualla Argin', 'Muntaser Syed'] | [] | ['adobe-illustrator', 'css', 'expo.io', 'figma', 'gcp', 'geocoding-api', 'html5', 'javascript', 'maps-javascript-api', 'places-api', 'python', 'react-native', 'sashido', 'semantic-ui', 'twilio'] | 5 |
10,141 | https://devpost.com/software/liquay | Model of the Liquay
Inspiration
As I was relaxing at my desk, watching a YouTube video on different Asian snacks, one part of the video caught my attention. As the vlogger talked about the mountain of snacks piled on the checkout counter, I noticed that instead of directly handing money to the cashier, he placed it on a tray. The cashier then took the money and placed some coins on that same tray. Teeming with curiosity, I did a quick Google search.
What it does
The Liquay offers a place to put money so that the cashier and the customer don't need to touch each other directly to complete an in-person transaction. This system of putting money on trays originally comes from Japan, but I am making my own version with a few changes due to the coronavirus. In addition, it's meant to be cleaned at the end of each day because of all the money it touches.
How I built it
I first made a model of the tray in Autodesk Fusion 360, then I made a simple website to display some of the information about my project. Then after I made a presentation, I published it to youtube and began learning how to edit the video well.
Challenges I ran into
Since it's been a long time since I last used Autodesk Fusion 360, I had to relearn the basics and even some advanced techniques to bring out the best in the model. Plus, my computer's GPU isn't optimal for Fusion 360, so I ran into a plethora of crashes and problems.
Accomplishments that I'm proud of
I'm proud of launching my first, complete, individual project on DevPost. Plus I'm really proud of relearning some design and implementation techniques in Autodesk Fusion 360.
What I learned
I learned basic and advanced techniques in Autodesk Fusion 360, solved some of the problems with my GPU, and learned a bit more about computer hardware.
What's next for Liquay
All I'm really looking forward to is inspiring someone more qualified to release products, and I hope the community improves on this idea. I just hope that the negative side effects of the coronavirus are alleviated through our hard work and determination.
Built With
autodesk-fusion-360
css3
html5
javascript
w3s-css
Try it out
rashstha.netlify.app | Liquay | A CAD-Designed Cash Tray to limit direct contact in places like stores | ['Rashmit Shrestha'] | [] | ['autodesk-fusion-360', 'css3', 'html5', 'javascript', 'w3s-css'] | 6
10,141 | https://devpost.com/software/whattheimage-image-caption-generator | Homepage
Caption Generated
Inspiration
Specially abled people face many problems, one of them being how to interpret an image correctly. For this reason, image captioning can be very useful for them, since it can describe the image properly. Therefore, CaptionBot uses machine learning to identify the submitted image and generate a caption.
What it does
It uses machine learning to identify the submitted image and generate a caption for it.
How I built it
Microsoft's CaptionBot AI helped me create this tool, which can be used to generate a caption for an image.
Front-End
HTML
Bootstrap 5
Back-End
Flask
Demo
Accomplishments that I'm proud of
I think that this tool will help specially-abled people to interpret images at times when they have decision-making problems.
What I learned
How to use the CaptionBot Package, and
Working with form inputs in Flask.
What's next for WhatTheImage - Image Caption Generator
Generating a live feed of captions whenever a user visits a website, so that they don't have to try things out manually.
Built With
bootstrap
captionbot
flask
html5
Try it out
github.com | WhatTheImage - Image Caption Generator | Helping the specially abled people to interpret images (generating caption) which they see on the internet. | ['Sumit Banik'] | [] | ['bootstrap', 'captionbot', 'flask', 'html5'] | 7 |
10,141 | https://devpost.com/software/home-health-care-patients-tracking-application | Home Health Care Mobile
Home Health Care Sample Decision Support System
COVID-19 Risk Prediction Tool
Salesforce
Follow-up of home health care and elderly patients is not done digitally, and home health data goes unprocessed. This makes it difficult to track an elderly patient's condition. During visits, healthcare professionals are obliged to re-learn which examinations and drugs were previously applied to the patient and what the patient's status is. With the current COVID-19 outbreak, patient visits have decreased considerably. Since patients in this group are among the highest-risk groups for COVID-19, hygiene requirements during visits complicate care procedures. In addition, symptom monitoring of home care patients, people in the geriatric class (65 years and older), and potential/recovering COVID-19 patients should be done remotely.
First of all, the information necessary for following up home health care patients was prepared for data entry in an Android app. Development was done in the Salesforce environment to retain the data. A website was built in the RStudio environment around an AI-based model for monitoring the health status of home health care patients. For COVID-19 symptom follow-up, data from 22,000+ COVID-19 patients worldwide was processed and a website was prepared in the RStudio environment.
The biggest challenge we face is to find anonymous data that we will use for decision support systems and to clean and make the data available.
In later stages of the project, video calls, voice recognition, and sensor and smartwatch (Apple Watch) integration will be supported.
Built With
android
api
css
flutter
html
java
r
rstudio
salesforce
Try it out
dveshealth.com
twitter.com
www.linkedin.com
www.instagram.com
dveshealthai.shinyapps.io
dveshealthai.shinyapps.io
drive.google.com | HOME HEALTH CARE PATIENTS TRACKING APPLICATION | DVESHealth provides AI based home health & elderly care decision support and monitoring mobile / web / cloud solutions. | ['Berna Kurt', 'Mustafa Aşçı', 'Asım Leblebici'] | [] | ['android', 'api', 'css', 'flutter', 'html', 'java', 'r', 'rstudio', 'salesforce'] | 8 |
10,141 | https://devpost.com/software/app-for-the-visually-impaired | Inspiration
Inspired by how underused machine learning is in modern applications, and how, combined with other technologies, it could play a significant role in aiding the lives of the visually impaired.
What it does
The user just points the camera in a general direction and taps to take a picture. The app scans the image and sends it to a server, which uses Tesseract (an OCR library using machine learning and pre-trained examples) in Python to convert the image to text. If no text is found, the app keeps taking images. If it receives text from the server, it uses a text-to-audio library to make it audible for the user.
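That capture, OCR, and speak loop can be sketched like this. This is a hypothetical Python version with the camera, server OCR call, and TTS passed in as plain functions, since the real client is written in Flutter and the OCR runs server-side behind Flask:

```python
def read_aloud_loop(capture, ocr, speak, max_attempts=5):
    """Keep taking pictures until the server finds text, then speak it.

    capture() returns image bytes, ocr(image) returns recognized text
    (empty string if none), and speak(text) hands the text to TTS. All
    three are injected so the control flow can run without a camera or
    server; in the real app ocr() would POST the image to the Flask
    server and speak() would call the TTS library.
    """
    for _ in range(max_attempts):
        text = ocr(capture()).strip()
        if text:            # server found text -> convert it to audio
            speak(text)
            return text
    return None             # nothing readable after max_attempts shots

# Fake devices for illustration: the first two shots are blank, the third has text.
shots = iter([b"", b"", b"EXIT 4"])
spoken = []
result = read_aloud_loop(
    capture=lambda: next(shots),
    ocr=lambda img: img.decode(),
    speak=spoken.append,
)
print(result, spoken)  # -> EXIT 4 ['EXIT 4']
```

Capping the retries keeps the app from looping forever when the camera is pointed at a blank wall, which matches the "keeps taking images" behavior described above.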
How we built it
Using Android Studio with a flutter plugin, as well as a flask server running a python script, which runs a tesseract OCR library. Google's TTS is used for the Text-To-Sound conversion.
Challenges we ran into
-Initially we tried Python with the Kivy framework. However, due to its lack of iPhone support, poor documentation, and various other gaps, development was switched to Android Studio using Flutter. This allowed for greater flexibility and guaranteed better documentation.
-Time was a huge factor; initially the app was planned to connect to the Google Maps API and give the user directions, but it would have taken too long.
-We attempted to train the machine learning algorithm, but it's a huge task, which took hours just to learn 3 fonts.
-Further, there was a plan to make the app take photos periodically and talk if any text was detected. This was done in Python but Flutter does not support this behaviour.
Accomplishments that we're proud of
The server provides backend support for the app, and it took a lot of time to setup. It can be expanded to provided additional functionality in the future.
What we've learned
Machine learning has much more support than we initially thought. It is extremely easy for any individual to set it up with their own training data, and it is extremely versatile; it can be used for a wide variety of purposes.
What's next for App for the Visually Impaired
It can still be optimised. The auto-scan function can be implemented to reduce the inconvenience for the user. More test cases can be used inside the app, so it doesn't rely on an external server (which has more test cases), and thus can be used without an Internet connection. The Google Maps API can be incorporated to the app so that it works in conjunction with the camera, to tell the user what shop they are facing, and help them find shops.
PLEASE NOTE - This program requires a server. The server is public and is currently running. The code for the server has been attached so it can be examined, but no download of it is required.
Built With
android-studio
dart
flask
flutter
heroku
opencv
python
tesseract
tts
Try it out
github.com
github.com | Vission Access | App to Allow Visually Impaired People To Interact With The Environment | ['Pavel Mirosnicenco'] | [] | ['android-studio', 'dart', 'flask', 'flutter', 'heroku', 'opencv', 'python', 'tesseract', 'tts'] | 9 |
10,141 | https://devpost.com/software/productive-at-home | Inspiration
Our team decided to make a To Do app that rewards user on each completed task to increase productivity while staying home.
What it does
Users can add tasks and deadlines, and will be entertained after every completed task with a joke or a sound.
How We built it
We used React.js and Firebase for a serverless webapp with Firestore as the storage/database.
Challenges We ran into
We are all new to React and Firebase so at first, it was challenging to develop the application. However, we all spent several hours going through various tutorials, and as a result, were able to implement React into our to-do-list project.
Accomplishments that We're proud of
We managed to get through all the challenges and created our first CRUD React app on firebase.
What We learned
We learned to use React and Firebase through this project.
What's next for Productive At Home
Our next steps are to make the app more user-friendly and with several new features.
Built With
css
firebase
html
javascript
react
Try it out
github.com
productive-at-home.web.app | Productive At Home | To Do App with a reminder and incentive/rewards on each completed task while being at home during quarantine to increase productivity | ['Myat Thu Ko', 'Syed Osama Hussain', 'Aditi Parekh', 'nouralquraini'] | [] | ['css', 'firebase', 'html', 'javascript', 'react'] | 10 |
10,141 | https://devpost.com/software/the-warrior-returns-nyx9to | Inspiration
When one mentions the entertainment industry, most people would think about films and music. Many people watch the Oscars, Grammys, Golden Globes, BRIT Awards, etc. Of course, there is a lot of glitz and glamour in the film and music industries. But would you be surprised to learn that these two are not the top-grossing sectors in entertainment? As a matter of fact, these two put together do not even match half the revenue the video game industry is earning. According to the latest figures, the video game business is now larger than both the movie and music industries combined, making it a major industry in entertainment. This year, the global games market is estimated to generate US$152.1 billion from 2.5 billion gamers around the world. By comparison, the global box office industry was worth US$41.7 billion while global music revenues reached US$19.1 billion in 2018. Consider the top blockbuster movie to date, Avengers: Endgame. When it premiered on April 16, it raked in over US$858,373,000 during its opening weekend. It even surpassed last year's Avengers: Infinity War, which generated US$678,815,482 in gross revenue.
What it does
It is a Story Based RPG game.
How I built it
Built on unity3d, all the UI designs and elements are built on photoshop.
Challenges I ran into
Developing elements for graphics.
What I learned
VFX development, graphics development
What's next for The Warrior Returns
Complete the game and release it to production.
Built With
blender
c#
photoshop
unity | The Warrior Returns | Reality is Brutal. Its time to face it. | [] | [] | ['blender', 'c#', 'photoshop', 'unity'] | 11 |
10,141 | https://devpost.com/software/infigo-smart-wearable-for-the-visually-impaired | Inspiration
Ever since touchscreen technology and smartphones entered the communication market, access to mobile communication has become complex for a certain segment of people, particularly those who are not technologically advanced.
What it does
A light weight Glove which has a conductive fabric in the finger regions that facilitates capacitive touch, just like touchscreens, so that the users can operate their phones and other devices through simple intuitive hand gestures.
(https://ibb.co/GtDgsjQ)
It acts as a standalone cell phone embedded in a wearable glove, with the keypad buttons embedded on the tips of your fingers. In the common way that Indians count numbers, dates, and months on their fingertips, users can dial numbers with this device and place calls.
Every fingertip is transformed into a key with the help of conductive fabric, so that the user can make calls through simple intuitive gestures. With this feature, mobility comes to the fingertips, ensuring hassle-free communication and user-friendly usage.
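The fingertip keypad idea can be sketched as a lookup table (the exact finger-to-key layout below is a hypothetical illustration, not the project's actual mapping):

```python
# Hypothetical layout: each of four fingers carries three touch pads
# (finger segments), giving a 4x3 grid -- enough for a full phone keypad.
FINGERS = ["index", "middle", "ring", "little"]
SEGMENTS = ["tip", "middle", "base"]
KEYPAD = [["1", "2", "3"],
          ["4", "5", "6"],
          ["7", "8", "9"],
          ["*", "0", "#"]]

def key_for_touch(finger, segment):
    """Map a thumb touch on (finger, segment) to a keypad symbol."""
    return KEYPAD[FINGERS.index(finger)][SEGMENTS.index(segment)]

def dial(touches):
    """Turn a sequence of detected touches into a dialled number string."""
    return "".join(key_for_touch(f, s) for f, s in touches)

print(dial([("middle", "middle"), ("index", "tip")]))  # → 51
```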
How I built it
HARDWARE: In order to develop the prototype, the following components have been used:
An Arduino Uno-Microcontroller (ATmega32)
A GSM 900A Module
Conductive Thread
Jumper Wires
Bare Conductive Glove
A battery pack of 5V-1A
In-ear Speaker, Electret Microphone & Buzzer
More detailed information is available at: (https://drive.google.com/file/d/1I5qBMxl6uGywyfSv1iQlOqoK7qJo9HQc/view?usp=sharing)
Challenges I ran into
The major problem I faced while making the glove was stitching the conductive thread into separate channels, as the thread often tends to touch adjacent channels through small stray yarns. Apart from this, a number of errors and problems came my way, but I tried to overcome them and continued to work on this idea.
Accomplishments that I'm proud of
Applying technology for a cause can empower the most vulnerable across the world. We have had strong people in the past like Hellen Keller who have blazed the trail with their achievements, without such devices and support. Therefore, how exuberant it is to think that such people can be equipped with technology in our times and achieve extraordinary feats. I look forward to add features into my device such as Smart Assistants and GPS navigation and ultimately empower the user.
What I learned
How to use to conductive thread in wearable projects and the basics of keypad matrix.
What's next for Infigo : Smart Wearable for the visually impaired
Collaborate with NGOs and Blind Organisations and provide them with a refined version of the prototype
Built With
arduino
c#
conductivethread
gsm
python
Try it out
github.com | Infigo Glove | Smart Wearable for the Visually Impaired | ['Albert Seins', 'Praveen Kumar'] | [] | ['arduino', 'c#', 'conductivethread', 'gsm', 'python'] | 12 |
10,141 | https://devpost.com/software/cogate | Intro
It is an
Built With
css
html
javascript
Try it out
github.com | MaskDetector | It will detect whether you are having mask for not | ['Saswat Samal'] | [] | ['css', 'html', 'javascript'] | 13 |
10,141 | https://devpost.com/software/accessilink | accessilink
accessilink 2
accessilink example 1
accessilink figma
fair opportunity project (submission)
AccessiLink
AccessiLink is a tool for visually impaired people to better see all the links available on a given URL.
About
This was inspired by my work with visually-impaired children through my research with the MIT Sinha Lab for Vision, which studies various neurological problems related to vision. I was able to learn about their incredible struggles when it came to technology, which is not often built for accessibility. This inspired AccessiLink - a web app to help understand websites.
AccessiLink is a tool for visually impaired people to better see all the links available on a given URL. To use AccessiLink, head over to the "Try It" tab, input the URL you want, and hit submit! All the links available on the page will show up in an easy-to-see format.
Running AccessiLink
To run AccessiLink, simply download the GitHub repo, and run the app.py file. Then, head over to localhost:5000 and try AccessiLink!
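The core link-gathering step can be sketched with Python's standard-library HTML parser (a stand-in for illustration; the actual app uses Flask and may parse pages differently):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag -- roughly the data AccessiLink
    re-renders in an easy-to-see list for the user."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

page = '<p><a href="/contact">Contact</a> and <a href="https://example.org">docs</a></p>'
print(extract_links(page))  # → ['/contact', 'https://example.org']
```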
How It's Made
We used Flask/Python for the backend, and HTML/CSS/JavaScript for the frontend. It was our first time using Flask, so we had to work through a learning curve. We used Figma for the initial wireframes, as it is our go-to tool for designing a powerful UI. You can see our initial wireframes as an image on the "Process" tab on the website!
Fair Opportunity Project Submission
We ran an analysis on the Fair Opportunity Project website for our submission, and found that several parts of the website are not AA-conformant. The biggest issue was that some links were found only in areas with very low color contrast ratios. Our proposal for the site is to increase the color contrasts, and we are developing a Chrome Extension that can automatically find and correct this.
Specifically (see images above), at the bottom of the page, we analyzed the colors used and saw that the contrast was extremely low, and not WCAG compliant. The links found here are not found anywhere else on the website, and include critical information such as the contact email. As such, visually impaired people will not be able to find these links and navigate the site properly. However, with our tool, they would be able to see those links show up.
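The contrast check referenced above follows the WCAG 2.x definition of contrast ratio; a small illustrative sketch (the team's actual analysis tooling is not shown, so this is my reconstruction of the standard formula):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an 8-bit sRGB colour."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Ratio of the lighter to the darker luminance, offset by 0.05."""
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa(fg, bg, large_text=False):
    """AA conformance needs 4.5:1 for normal text, 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)

print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))  # → 21.0
print(passes_aa((170, 170, 170), (255, 255, 255)))  # light gray on white fails AA
```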
Challenges
The biggest challenges were finding time and streamlining the workflow. It was one of our first hackathons, and for the other member, the first time creating this type of site; most of the technologies were new to both of us. Additionally, conducting the initial research into official accessibility guidelines, creating the color palette, and testing everything to make sure the site is AA-conformant was a big step we knew we had to take. However, we are proud of the site, and hope it comes in handy!
Next Steps
Our next step, which we are almost done with, is creating a Chrome Extension that also extracts all the links on the webpage. We are currently trying to finalize the design!
Youtube Video
https://youtu.be/q2WfLcbZv9c
Built With
bootstrap
css
flask
html
javascript
python
Try it out
github.com | accessilink | AccessiLink is a tool for visually impaired people to better see all the links available on a given URL. | ['Sohini Kar', 'Ishan Kar'] | [] | ['bootstrap', 'css', 'flask', 'html', 'javascript', 'python'] | 14 |
10,141 | https://devpost.com/software/smarttracker-covid19 | Inspiration :
Nowadays the whole world is facing the novel coronavirus. This Android app was created to track the spread of COVID-19 country-wise - confirmed cases, deaths, and recoveries - and to spread awareness about the virus.
What it does :
The Android app, named ‘SmartTracker-Covid-19’, was created to spread awareness about COVID-19. The app includes the following functionality:
CoronaEx Section -
This section having following sub components:
• News tab: Latest news updates. Fake news seems to spread just as fast as the virus, but since we integrate only official sources, users are protected from fake news.
• World Statistic tab: Real-time Dashboard that tracks the recent cases of covid-19 across the world.
• India Statistic tab: Coronavirus cases across different states in India with relevant death and recovered cases.
• Prevention tab: Some Prevention to be carried out in order to defeat corona.
CoronaQuiz section - a quiz that helps people learn about the coronavirus and its effects on the human body. It chooses random questions, shows the correct answer for each question, and at the end users get to know their highest score.
Helpline Section - As this application is made particularly for Indian citizens, all state helpline numbers of India are included.
Chatbot Section - A self-assisted bot made for the people navigate corona virus situation.
Common Questions: Start screening,what is COVID-19? , What are the symptoms?
How we built it :
We built it using Android Studio. For the quiz section we used an SQLite database, and live news data is integrated from the News API. For the coronavirus statistics we collected data from worldometer and coronameter.
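The random-question pick for the quiz section can be sketched with an SQLite query (the schema and sample rows below are illustrative, not the app's actual tables):

```python
import sqlite3

# In-memory stand-in for the app's quiz database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE quiz (question TEXT, correct TEXT)")
conn.executemany(
    "INSERT INTO quiz VALUES (?, ?)",
    [("What does COVID-19 stand for?", "Coronavirus disease 2019"),
     ("How long should you wash your hands?", "At least 20 seconds")])

def random_question(conn):
    """Pick one random question, as the quiz does each round."""
    row = conn.execute(
        "SELECT question, correct FROM quiz ORDER BY RANDOM() LIMIT 1"
    ).fetchone()
    return {"question": row[0], "correct": row[1]}

print(random_question(conn)["question"])
```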
Challenges we ran into :
Integrating the chatbot into the application.
Accomplishments that we're proud of :
Though it was our first attempt at creating a chatbot, we have tried to raise our level to some extent.
What's next for SmartTracker-COVID19 :
For better conversations, we will be looking to work more on the chatbot.
Built With
android-studio
chatbot
java
news
quiz
sqlite
Try it out
github.com | SmartTracker-COVID-19 | Android app to track the spread of Corona Virus (COVID-19). | ['Pramod Paratabadi', 'Supriya Shivanand Madiwal .'] | ['Best Use of Microsoft Azure'] | ['android-studio', 'chatbot', 'java', 'news', 'quiz', 'sqlite'] | 15 |
10,141 | https://devpost.com/software/towards-need | Message Notification
Inspiration
We hear every day how healthcare workers are risking their lives to save patients affected by COVID-19. We were inspired by the selfless dedication of the front-line healthcare workers who are fighting this battle on behalf of the rest of us.
What it does
Towards Need is a web-based platform that connects healthcare workers who need PPE and other goods and services with members of the community willing to donate directly to them. The site is very simple for both sides and contains sign-in/registration, listing, search, filtering, posting a requirement, notifications, and messaging functionality.
How I built it
Frontend: Angular 9,Angular Material
Backend: Firebase, Google Places API, Google Geo Coding API, Google Maps API
Used FormSpree for messaging functionality.
Challenges I ran into
Getting feedback from healthcare providers and community members in order to design the product in a manner that is easy to use and delivers the most value.
Accomplishments that I'm proud of
I was able to do a short survey, understand the requirements, and build a prototype that can make a huge impact.
What I learned
Healthcare workers are facing challenges every day with the facilities provided to them. In my research I also found that donations are not reaching them directly - the process is very complicated. Front-line workers need support from community members at large scale.
What's next for Towards Need
Some future ideas we have are as follows:
1. Scaling: As the app gets ready for production and the number of hits to the site increases, load can be handled by caching DB queries and putting a load balancer (ELB) in front on AWS.
2. Features: As the web app moves to a more enterprise-level architecture, features like collecting healthcare workers' medical-center locations, filtering the data shown on the map to a nearby area using the Google Distance Matrix API, sending notifications whenever a donor posts in a nearby area (and vice versa), and adding social sharing could help healthcare workers even more.
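The post mentions the Google Distance Matrix API for nearby filtering; the same idea can be sketched offline with a plain haversine filter (function names and the 25 km radius are illustrative choices, not the project's):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in km."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def nearby(donors, worker_pos, radius_km=25):
    """Keep only donor posts within radius_km of a worker's location."""
    lat, lon = worker_pos
    return [d for d in donors
            if haversine_km(lat, lon, d["lat"], d["lon"]) <= radius_km]

donors = [{"name": "A", "lat": 40.75, "lon": -73.99},   # Manhattan
          {"name": "B", "lat": 42.36, "lon": -71.06}]   # Boston
print([d["name"] for d in nearby(donors, (40.71, -74.00))])  # → ['A']
```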
Built With
angular9
angularmaterial
bootstrap
firebase
google-geocoding
google-maps
google-places
Try it out
github.com | Towards Need | A Platform to Connect Front-line Health Care Workers With Community Members to help Health Care Workers to donate them critically Needed PPE during Covid-19 Crisis | ['vijaylakshmi karwa'] | [] | ['angular9', 'angularmaterial', 'bootstrap', 'firebase', 'google-geocoding', 'google-maps', 'google-places'] | 16 |
10,141 | https://devpost.com/software/fakecheck-r51qxc | Business Model Canvas
User Process Flow
Rating and Information Process Flow
News source for each age group
1. The Problem our Product Will Solve
As the world becomes more globalized, news and media polarize the masses to greater depths with every major event, revolution, or social-impact movement. The grout among the bricks of the world's events is digital media - news articles, rapid-growth posts, videos. The problem is that at the incredible pace these media spread, nobody fact-checks their authenticity, leaving many - especially people with accessibility needs - exposed to spreading harm or fear through misinformation.
The first step is to eliminate Fake News before it wreaks its havoc.
Issues with the Present Approaches:
Difficulty of Accessing Information: There are many barriers to finding information on the web for people with disabilities. This reduces their chance to do proper research and gain correct information, leaving them vulnerable.
A Lack of Fire Power: Current fake news detectors or rating agencies make use of academic scholars with special expertise in their respective areas of study. This means that while you may get a deep analysis of a political piece of media, for example, you won't have this review done quickly, it won't appeal to the average person, and this method is limited by the physical hours fact-checkers can work.
Fact-Check Privilege: The fact is, most fact-checking sites are English-language based only. That means primarily U.S., Canada, and the U.K. based, with support only for those English-speaking communities. People who suffer from the wrath of misinformation - whether it be false health advice, targeted propaganda, or plain frauds often do so in countries where English is not spoken, and so don't have the privilege of being able to check the validity of their own media as alerts, in real-time.
We have built FakeCheck to bring the globe world-class, alert-ready and honest fact-checking that is intuitive, and easy for everyone to access and understand, regardless of their background. We also paired our tech with business value in mind, with a value-multiplying insight platform for businesses to better understand their brand image and perceived value to their customers around the world.
We make income for development and growth by serving personalized ads based on users' tastes in the news - something that is incredibly powerful, as well as charging for-profit businesses to gain value-multiplying insight from our platform on how their brands and banners are perceived.
What is FakeCheck?
We built a crowd-sourced media fake-news detection platform that puts the user and their media first while protecting anonymity to alert the world of fake news before it spreads and causes damage. This platform gives the user options such as alt tags for text-to-speech capabilities and basic design features such as keyboard-only control option to ease the browsing of the site.
We make use of well-thought-out technology from great companies and the best tech to create a platform that is user-launched and a true butler to its boss: the user!
Our Rating's Process:
And our user-flow below:
How Does it Work?
Our process works as follows:
1. Users have a piece of media they want to check a rating for, or want to rate for fake news themselves.
2. They go to our site and intuitively paste their link into our engine, which is the most prominent feature on our home page.
3. If prior crowd-sourced ratings exist, the user immediately sees our curated and published ratings for that media, based on our users' input and proof.
4. We prompt the user, if they have the aptitude, to rate the media as well; as long as they have insight that may work in support of the media, or equally against its truthfulness, we respect their voice and let them share it. We process their queries and build an internal score we wish to publish.
5. Once our confidence metrics are met, we update a newly published rating for the media and classify and link it to other similar cases for users to see. We constantly adapt, just like a real business has to, to make use of the effort and insight of the world's user-base to republish.
6. We alert any watchers, or users who gave ratings, of newly published updates, so they are constantly in the know and can rely on being kept in the light instead of the dark on the news, regardless of where they may come from.
7. We provide businesses with insight, based on alerts happening around the world related to fake-news detection or negative sentiments classified to their names, to show how their brand is recognized by users across the world, helping them improve and adapt.
This Competition, The Challenges, and Our Impact:
Through this competition, we wish to strive for an award because we believe our platform will make use of an incredible software suite based on live alerts to address a worldwide crisis of misinformation - the infodemic!
We have been inspired by the current influx of fake news we see every day around the world to try and do something about it - the challenge comes from just how many people are misinformed and how it negatively affects each and every one of their lives, especially the people with disabilities.
We want to win in these challenges because we want to grow - we hope to use the prizes to help grow this idea to fruition to a point where we are able to make progress using the website's income from advertisement revenue. Something that is so under-implemented among the tech and news society today, this concept has the potential to strike rapid growth in the rating space for media, and help mitigate fake news' wrath altogether, by empowering any individual user to check their media easily.
Built With
django
graphql
python
react
typescript
Try it out
github.com | FakeCheck | A crowd-sourced platform to identify fake media | ['Solomon Kent Paul', 'Jason A.', 'chandlerlei2017 .', 'Jadon Fan'] | [] | ['django', 'graphql', 'python', 'react', 'typescript'] | 17 |
10,141 | https://devpost.com/software/papr-light-czsyaj | ,
Try it out
devpost.com | . | .. | ['Mohamed Hany'] | [] | [] | 18 |
10,141 | https://devpost.com/software/sahaaya-making-web-accessible-to-everyone | Inspiration
The inspiration came from my friend who is dyslexic, which made it hard for him to read text in a browser.
What it does
Sahaaya is a browser extension that changes the appearance of the entire web based on your design customizations and preferences. It's especially helpful for people with dyslexia, because it comes built in with an option that adjusts the color-contrast ratio and typography. We provide a huge variety of color-contrast schemes with different font styles so that every user can adapt it to their preferences.
How I built it
I built this app using Chrome APIs to manipulate web pages, changing their color contrast and typography according to the user's preferences.
Challenges I ran into
The greatest challenge I faced in this project was manipulating the DOM of external websites - and doing so from a Chrome extension.
Accomplishments that I'm proud of
I was able to make this project in a very short time (4 days), and I am really proud of what I have accomplished.
What I learned
I really mastered DOM manipulation after using it so extensively in this project, and got a really in-depth understanding of how Chrome extensions and APIs work.
What's next for Sahaaya - Making web accessible to everyone
I want to add a text-to-speech feature to this extension to also include auditory-challenged people.
Built With
chrome
css3
html5
javascript
webstore
Try it out
github.com | Sahaaya - Making web accessible to everyone | Sahaaya is a chrome extension that helps people with visual and learning disorder read on the web. | ['Shivansh Yadav'] | [] | ['chrome', 'css3', 'html5', 'javascript', 'webstore'] | 19 |
10,141 | https://devpost.com/software/expense-scheduler |
Mobile Model
Desktop App like Electron
Web App
Mobile Phone PWA
Inspiration
Due to the COVID-19 pandemic, financial problems are happening everywhere, so I planned to develop a financial app for the common man.
What it does
Its heart is an optimization engine, run by the Julia compiler.
Working Procedure
JuMP Constraint based Optimization
Constraint 1 --> sum(schedulingAmount) = Total Amount
Constraint 2 --> Each day's spending amounts across types sum to 100%
Objective function:
Min (sum(w * xd) - Amount)
where w --> percentage of each type
xd --> per-day spent amount
display in Graph
Link with WebIO
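Read literally, the objective and constraints sketched above correspond to the following (my reconstruction of the post's shorthand; $w_t$ is the share for spending type $t$, $x_{d,t}$ the amount spent on type $t$ on day $d$, and $A$ the total amount):

```latex
\min_{x}\;\Bigl|\sum_{d}\sum_{t} w_{t}\,x_{d,t} \;-\; A\Bigr|
\qquad\text{s.t.}\qquad
\sum_{d}\sum_{t} x_{d,t} = A,
\qquad
\sum_{t} w_{t} = 1 .
```

In JuMP this would be stated as a constrained minimization, with the second constraint encoding "each day's type shares total 100%".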
How I built it
1. Using the Julia language and a little HTML5
[link](http://159.65.147.167/)
It is for temporary use only.
Challenges I ran into
I'm new to the Julia language, so this was a really challenging task: in less than 20 days I had to understand and study most of the libraries and implement the app. The major challenge was moving to the cloud.
Accomplishments that I'm proud of
Most Interested Data Science/Machine Learning Platform Startup.
What I learned
1. Julia Language Structure
2. Work with Windows and Ubuntu
3. Web Technologies
4. Cloud Technologies
What's next for Expense Scheduler
Try to resolve the following limitations:
Initial latency of web page load is high
Initial compilation time is also high
Built With
digitalocean
html5
julia
Try it out
github.com | Expense Scheduler | Optimized Expense Scheduler for this Financial Crisis | ['AmburoseSekar SiluvaiRaj'] | [] | ['digitalocean', 'html5', 'julia'] | 20 |
10,141 | https://devpost.com/software/trafficwatch | Dashboard for the Navigator web app.
API routes for the Navigator web app.
Navigator
Traceability insights created from incidents by the PagerDuty API and traffic monitoring via a TomTom (https://developer.tomtom.com/) layer. Navigator reports traffic problems with last-mile parcel delivery for consumers using the TomTom Traffic Incidents API and TomTom Traffic Flow API in conjunction with the PagerDuty REST API.
Customers can add their tracking numbers to a real-time database and visit their dashboards to receive live crowdsourced updates of traffic incidents, wait times, and the overall flow of a specific region or zip code. This data is then mapped using the open-source Leaflet library, with heat-map markers for further visual insights.
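One plausible way the flow data could gate incident creation - speeds far below free-flow trigger a PagerDuty-style alert - is sketched below (the thresholds are illustrative choices, not the project's actual values):

```python
def flow_alert(current_speed, free_flow_speed, threshold=0.6):
    """Decide whether a road segment's flow warrants an incident:
    speeds well below free-flow suggest congestion worth reporting."""
    if free_flow_speed <= 0:
        return None  # no baseline available for this segment
    ratio = current_speed / free_flow_speed
    if ratio >= threshold:
        return None  # traffic is flowing acceptably
    severity = "critical" if ratio < 0.3 else "warning"
    return {"severity": severity, "flow_ratio": round(ratio, 2)}

print(flow_alert(18, 65))  # about 28% of free-flow speed: heavy congestion
```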
How I built it
The server and web app were built using Node.js technologies. The front-end is a single-page React.js app, powered by react-router-dom, @material-ui/core for interfaces, and undux for state/store management. The back-end uses Fastify.js for routing and database client drivers like rethinkdb and redis.
I communicated with the PagerDuty API using the `@mapbox/pagerduty` library, and added the TomTom Traffic API to Navigator's API after thinking more about how I wanted to use PagerDuty to monitor traffic and create insights from incidents.
Unfortunately, I found out about the DeveloperWeek Hackathon pretty late and was unable to access the webinar from 06/15/20, so I had only a day to come up with a demo for my concept of real-time traffic monitoring through incidents and log entries. Thankfully both PagerDuty and TomTom have well-documented APIs, and sacrificing complexity for the ease of a quick Node.js API proved useful in the end for banging out my idea rather rapidly.
The web app is hosted on Vercel, formerly ZEIT. The API instance is deployed to a VPS using PM2 and Nginx reverse proxies. The database(s), rethinkdb and redis, are self-hosted using a quick Docker image pulled from DockerHub. Everything is still a work in progress, but the project is definitely promising for a real-world, world-wide use case: traffic.
What I learned
I learned how powerful PagerDuty and TomTom APIs are. Most people are familiar with PD because they're a part of some software SaaS company that needs hosted incident reporting and analytics, and they needed it yesterday. With this in mind, it's easy to dismiss PagerDuty as another blackbox developer tool to be used for an existing system. I wasn't aware of the real-time possibilities you could explore w/ using PagerDuty as a real-time service.
Of course, the monitoring and analytics bunch is still there, but there's something there that reminds me of Zapier/Twilio for end user reach, and I wish to explore it further. After reading the documentation, I knew I had to find a way to incorporate RethinkDB with the TomTom API for the traffic submission suggestion.
So far, it's turning out good.
What's next for Navigator
My work for Navigator, and the DeveloperWeek Hackathon won't stop at this week. I plan to keep the code open source and continue working on the problem, preferably with a team, to continue investigating useful cases for the PagerDuty API. I hope to create something cool, and if I get bored, leave it to the next promising challenger with a vision for niche real-time logistics tools like me.
Also, I admittedly want to change the name. Navigator is cool, but it doesn't really stick that well. I got it from an old Childish Gambino song.
Try it out
github.com
navigator.vercel.app | Navigator | Traffic monitoring and insights for last-mile parcel delivery, powered by PagerDuty and TomTom. | [] | [] | [] | 21 |
10,141 | https://devpost.com/software/test-uq1af3 | The Founders
Landing Page
Classroom environment
Our dashboard
Teachers can use our whiteboard for live lectures.
Teachers receive AI-powered feedback.
Teachers make quizzes with an easy interface.
Student sits in chair in our immersive 3d environment.
Classroom environment on two screens
Inspiration
COVID-19 has transformed distance learning. As high school students, we felt isolated from our peers and teachers, and decided to create SmartRoom: an immersive and feature rich application for students and teachers.
What it does
SmartRoom connects students with teachers through an interactive and immersive 3D classroom. Students can move around in the classroom, sit in chairs, and see each other. SmartRoom features a unique smart dashboard, which teachers may use to receive AI-powered feedback from students, administer real-time quizzes, and give lectures on a live whiteboard.
How we built it
For the front-end we used: Three.js for the 3D environment, HTML/JavaScript/CSS (SASS) for our webapp, and bundled it with Parcel. We also used Google Cloud Storage to manage personal photos uploaded by users, and Blender for animating the 3D character models.
For the back-end, we used: Node.js/Express for our server, Socket.io (websockets) for communication, and IBM Watson’s Tone Analysis API for our smart feedback.
We used Heroku to deploy our application (smartroomvr.herokuapp.com)[
http://smartroomvr.herokuapp.com/
]
Challenges we ran into
Lighting our 3D scene was difficult since we had to balance performance with quality of lighting.
There were some issues when we were loading our HTML pages in separate chunks since we wanted to make a single page application.
Structuring our code was a challenge because we had to combine a 2D web interface with a 3D environment, as well as write significant back-end code to support it all.
Combining multiple FBX animations into a single GLTF one in Blender also took a lot of time to learn.
Google Cloud Storage was giving us a CORS error when uploading photos, so we had to manually modify the server through the Google Cloud Terminal to allow CORS with all domains.
Accomplishments that we're proud of
Kirtan:
I made a sleek and aesthetic dashboard from scratch -- no templates, no frameworks.
I built my own, custom classification between Constructive/Destructive criticism derived from IBM Watson’s 5 tone classifications (joy, anger, analytical, etc.)
Deepak:
I made an intuitive landing page that allows students and teachers to select their role and enter a room number to join a classroom.
I built the real-time 3D environment from scratch using Three.js and Socket.io.
I learned about Vector projections from 3D to 2D and how to combine multiple FBX animations into a single GLTF file.
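The custom constructive/destructive classification mentioned above could look something like the sketch below, which folds IBM Watson's per-tone scores into a single label. The tone grouping and the tie-breaking rule here are assumptions for illustration, not the author's actual mapping.

```python
# Hypothetical grouping of Watson tones into constructive vs. destructive signals.
CONSTRUCTIVE_TONES = {"joy", "analytical", "confident"}
DESTRUCTIVE_TONES = {"anger", "sadness", "fear", "tentative"}

def classify_feedback(tone_scores):
    """tone_scores: dict mapping tone name -> score in [0, 1], as returned
    per document by a tone-analysis service."""
    constructive = sum(s for t, s in tone_scores.items() if t in CONSTRUCTIVE_TONES)
    destructive = sum(s for t, s in tone_scores.items() if t in DESTRUCTIVE_TONES)
    if constructive == 0 and destructive == 0:
        return "neutral"
    return "constructive" if constructive >= destructive else "destructive"

print(classify_feedback({"joy": 0.7, "analytical": 0.5, "anger": 0.2}))  # constructive
```

In the real dashboard the scores would come from the Tone Analyzer API response rather than a hand-built dict.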
What we learned
We learned a lot about server side logic and the Socket.io library for websockets
We learned about 3D graphics concepts like lighting, model animation, and rendering
We learned how to use IBM Watson’s tone analysis API
We learned how to use Google’s Cloud Storage API along with modifying the bucket itself through the Google Cloud terminal
What's next for SmartRoom
We will replace the images (profile pictures) with a Web RTC video so that students and teachers feel even more connected with one another.
We will also integrate the whiteboard into the 3D environment so that the experience is much more immersive.
We will implement our 3D environment on a VR platform for an even more immersive experience.
Built With
blender
cloudstorage
css-3
express.js
google
html5
javascript
node.js
socket.io
stackoverflow
three.js
Try it out
smartroomvr.herokuapp.com | SmartRoom | SmartRoom is a 3D classroom that connects students and teachers. It features a dashboard where teachers may receive AI-powered feedback, administer realtime quizzes, and draw on a live whiteboard. | ['Deepak Ramalingam', 'Kirtan Shah'] | ['2nd Place'] | ['blender', 'cloudstorage', 'css-3', 'express.js', 'google', 'html5', 'javascript', 'node.js', 'socket.io', 'stackoverflow', 'three.js'] | 22 |
10,141 | https://devpost.com/software/steril-laser | Title
Inspiration
PPE supply shortages are a worldwide problem, and solving it is key to fighting this and future medical emergencies like pandemics. The wastage of PPE such as gloves, masks, and overalls is also a huge unmitigated problem. We wanted to take an innovative approach to the shortage crisis and create a user-accessible product that can sterilize PPE through UV disinfection methods.
What it does
Our product allows the reuse of PPE through sterilization with UVC. UVC is known to eliminate a majority of pathogens, including the coronavirus family of viruses, and surgical instruments are sterilized with UVC. Our system hardware consists of a self-built sterilization chamber that uses UVC lamps to sterilize PPE. It can also sterilize other objects and is programmable with various durations and intensities. The mobile application side of our product provides a simple way to use the chamber: users can save commands and execute them to sterilize, or just enter a time and start the sterilization process.
How I built it
App: The frontend was made entirely with Flutter. We communicated between the front end and the back end through HTTP POST and GET requests, which gave commands to the product so it could start sterilizing. I stored the user's commands in Firebase because it was easy to configure for simple data storage.
Hardware: The sterilization chamber is built from a cardboard box with 2 UVC lamps daisy-chained together, powered by a Wi-Fi-enabled smart switch. The switch is controlled by invoking a Python function on the local network, exposed through an ngrok tunnel with Flask. The various settings are stored in a MongoDB database, and the status of the device is also stored as a document; these are updated and read when the controller functions are invoked. Remote invocation from the app is possible via serverless functions running on Google Cloud.
UVC lamp specifications: 8W x2
Wavelength : 265 - 275 nm
Power input: 110V
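A minimal sketch of the controller logic described above, with the MongoDB status document replaced by an in-memory dict and the smart-switch call stubbed out with a print. Function and field names are made up for illustration; the real backend persists state and drives the Wi-Fi switch.

```python
import time

# Stand-in for the MongoDB device-status document (assumption: real code
# stores this as a document and updates it from the controller functions).
device_status = {"running": False, "ends_at": None}

def switch(on):
    # Stub for the smart-switch call made over the local network.
    print("lamps ON" if on else "lamps OFF")

def start_sterilization(duration_s, status=device_status, now=time.time):
    """Begin a run: record when it should end and power the UVC lamps."""
    status["running"] = True
    status["ends_at"] = now() + duration_s
    switch(True)

def poll(status=device_status, now=time.time):
    """Called periodically; powers off the lamps once the run is over."""
    if status["running"] and now() >= status["ends_at"]:
        status["running"] = False
        switch(False)

start_sterilization(60, now=lambda: 0.0)   # prints "lamps ON"
poll(now=lambda: 61.0)                     # prints "lamps OFF"
```

Wrapping `start_sterilization` and `poll` in Flask routes would give the HTTP interface the app calls.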
Challenges I ran into
Integrating all the components together
Wiring the hardware without shorting anything
Programming the smart wifi plug
Coming up with a simple and clean UI
Connecting the frontend and Backend via http requests.
Accomplishments that I'm proud of
The hardware all works, as well as the app. We are proud to produce a working prototype, we merely need to upgrade the hardware and clean up the UI, as well as optimize everything.
What I learned
We learned that it is hard to program a Google Home and connect it to a mobile app via Flutter. We also learned that UI is not as easy as it looks: you have to come up with a simple, direct-looking interface that communicates each widget's purpose perfectly to provide a great user experience.
What's next for Steril-Laser
A more robust version
Lockable chamber
Personal units deployed in public spaces, like lockers at school
Compress Hardware
Built With
firebase
flutter
postman
uvc
Try it out
github.com | Steril-Laser | Solving the PPE crisis with UVC, one mask at a time | ['James Han', 'Muntaser Syed'] | [] | ['firebase', 'flutter', 'postman', 'uvc'] | 23 |
10,141 | https://devpost.com/software/eye-can-code | Home page.
List of tutorials provided.
Simple print("hello").
More complex function.
Inspiration
With the recent COVID-19 pandemic, students worldwide have transitioned to online schooling. For some students, however, the transition has been harder than for others. Near where Veer lives is the oldest school for blind students: Perkins School for the Blind. Veer had always wanted to help them, and during these times, he decided to do so when they needed it more than ever. Together, Veer and Saber worked on an online platform dedicated to the blind and focused on their favourite subject: programming.
According to the National Federation of the Blind, COVID-19 has had a disproportionate impact on the blind, with many facing additional challenges during the pandemic. From an education standpoint, blind students and blind parents face uncertainty about the types of electronic materials they will be expected to use for the remainder of the academic year, making it hard for them to keep up with classes. Lastly, it is difficult for the visually impaired to learn how to code on their computer, a challenge which has been exacerbated by the pandemic.
What it does
We built a text editor that listens to speech, translates it into Python code, and then runs the code in a console. The platform is complete with an academy to teach blind students how to code, with lessons on variable types, for loops, if statements, functions, etc.
We used natural language processing to:
Allow the visually impaired to code in python by simply speaking
Provide a handful of python tutorials with voice and speech recognition features to effectively teach coding to people with visual impairments
Create an online platform for the visually impaired to learn
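The speech-to-code idea above can be illustrated with a toy rule-based mapper from spoken phrases to Python source. The phrases and patterns here are invented for the sketch; the project's actual natural-language pipeline supports a richer grammar.

```python
import re

# Toy rules mapping a recognised transcript to a line of Python (assumption:
# these particular phrasings are illustrative, not the project's grammar).
RULES = [
    (re.compile(r"^print (.+)$"), lambda m: f'print("{m.group(1)}")'),
    (re.compile(r"^set (\w+) to (\d+)$"), lambda m: f"{m.group(1)} = {m.group(2)}"),
    (re.compile(r"^for (\w+) in range (\d+)$"),
     lambda m: f"for {m.group(1)} in range({m.group(2)}):"),
]

def transcript_to_code(transcript):
    """Translate one spoken phrase (already transcribed) into Python."""
    line = transcript.lower().strip()
    for pattern, build in RULES:
        m = pattern.match(line)
        if m:
            return build(m)
    raise ValueError(f"unrecognised phrase: {transcript!r}")

print(transcript_to_code("set count to 5"))   # count = 5
print(transcript_to_code("print hello"))      # print("hello")
```

In the real editor the transcript would come from the Google Cloud Speech API, and the generated lines would be appended to the user's buffer before execution.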
How we built it
We used:
Flask
HTML, CSS, and JS
Python
Natural Language Processing
Google Cloud Speech API
Challenges we ran into
We at first parsed the code in Python. However, when connecting it to the JS, it was incredibly laggy and didn't update in real time, so we had to translate all the Python code into JS, which was tedious. In addition, SpeechRecognition only worked on one teammate's computer and not the other's, which required a lot of debugging.
Accomplishments that we're proud of
We're really proud that our product is actually working for others to use. Not only did we complete a text editor, but we also got the academy working, which was great.
What we learned
We learnt how to use speech recognition and execute the code in string form. One of our teammates learned how to deploy code to Heroku and link it to a domain. We also learned more about linking JS with Python, especially for real-time work.
What's next for Eye Can Code
We want to make more aspects of our website audio to further help make it accessible for the blind. Afterwards, we hope to have the platform available for all to use.
Built With
css3
flask
google-cloud
google-web-speech-api
html5
javascript
natural-language-processing
python
Try it out
github.com
eyecancode.online | Eye Can Code | An online platform built with a speech-to-text python code editor for the visually impaired to learn coding | ['Shreya C', 'Veer Gadodia'] | ['The Wolfram Award', '1st Place Award', 'Amazon Gift Card', 'Wolfram|One Personal Edition + 1 year subscribtion to Wolfram|Alpha Pro'] | ['css3', 'flask', 'google-cloud', 'google-web-speech-api', 'html5', 'javascript', 'natural-language-processing', 'python'] | 24 |
10,141 | https://devpost.com/software/covid-19-health-center-qogj8t | Doctor
Inspiration
I took inspiration to do this project from the growing crisis and how any help related to the current scenario can yield something better.
What it does
This is a GUI application with which the user can check for symptoms with the help of a rule-based chatbot, which then gives recommendations for evaluating their health. The user can also read all the latest worldwide news related to COVID-19. This digital health center also helps the user keep track of statistics (total cases, recoveries, deaths, and active and critical cases) for any country, and allows the visually impaired to listen while the machine speaks the text. Furthermore, the user can read or listen to the precautions given by the WHO to stay safe.
How I built it
I built it using Tkinter to give users an easy interaction. A text-to-speech library, pyttsx3, has been used to also help the visually impaired to some extent. There is a rule-based chatbot that evaluates the user's health by asking certain questions; the data for evaluation has been adapted from DOH guidelines. To keep track of the statistics I have used the "Coronavirus map" API, and the other data has been obtained from the WHO website.
Challenges I ran into
I ran into certain challenges while making the chatbot, since pyttsx3 and tkinter were not syncing together, so I decided to use multithreading to some extent. Other than this, the design and functionality needed a lot of thinking and effort.
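The pyttsx3/tkinter syncing problem is commonly handled by moving the blocking speech call onto a worker thread fed by a queue, so the GUI mainloop never blocks. Below is a stdlib-only sketch of that pattern; the `speak` body is a stub standing in for the pyttsx3 `engine.say(...)` / `engine.runAndWait()` calls, and this is not necessarily the author's exact threading scheme.

```python
import queue
import threading
import time

tasks = queue.Queue()

def speak(text):
    # Stub for the blocking text-to-speech call.
    time.sleep(0.01)
    return text

def worker(results):
    # Drain the queue until a None sentinel tells the worker to stop.
    while True:
        text = tasks.get()
        if text is None:
            break
        results.append(speak(text))

results = []
t = threading.Thread(target=worker, args=(results,), daemon=True)
t.start()
tasks.put("Hello")          # the GUI thread just enqueues and returns
tasks.put("Stay safe")
tasks.put(None)             # sentinel: shut the worker down
t.join()
print(results)              # ['Hello', 'Stay safe']
```

In the Tkinter app, button callbacks would call `tasks.put(...)` and return immediately, keeping the mainloop responsive.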
Accomplishments that I'm proud of
I'm really proud of the fact that I did something about the current crisis that might be beneficial for people, and especially that I considered the visually impaired while building it. I am also proud that I used OOP throughout the project.
What I learned
I have learnt various different libraries and gained hands-on experience with most of them, like Tkinter, BeautifulSoup, and pyttsx3. Other than this, I have gained a fair amount of knowledge related to COVID-19.
What's next for COVID-19-Health-Center
I really plan on using machine learning and training the chatbot to answer smartly. There are still some things that can be added to the GUI.
Built With
beautiful-soup
python
pyttsx3
tkinter
Try it out
github.com | COVID-19-Health-Center | This is a GUI application needed to keep track of almost everything related to coronavirus. This is made also to help the visually impaired, so that they can keep track with just one click. | ['maryamnadeem20 Nadeem'] | [] | ['beautiful-soup', 'python', 'pyttsx3', 'tkinter'] | 25 |
10,141 | https://devpost.com/software/the-virtual-medic-fwbdr1 | LOGO
Disease Prediction Example
Disease Prediction Example
Disease Prediction Example
Barcode Scanner backend
Medicine Reminder
Medicine Reminder Setup
Inspiration
While access to healthcare and hospitals remains very delicate in these times, we are developing a fully AI-based model that can be easily used and deployed to automate the process of patient hospitalization.
What it does
As the initial part of the project, we have developed a health application that serves several purposes. To begin with, the app has a Disease Predictor, which takes symptoms as user input and predicts the disease based on them. The application also implements a Barcode Scanner that scans the barcodes on different medicines via the camera and displays details such as usage, effectiveness, and side effects. The app also lets the user input their daily medication, including the number of doses and their timings; once added, the app sends the user a notification at every dosage time.
How do we aim to automate the hospital interface with this?
Through the disease prediction model, I further aim to deploy a consultation mechanism that, based upon the user's symptoms, connects them with a doctor who specializes in the relevant field, e.g., orthopaedics, psychiatry, or general surgery. Deployed in a hospital, the same system can be used to assist patients with database management as well as consultation management.
How I built it
The app is based entirely on Python, with Android used as the front-end framework.
Accomplishments that I'm proud of
The accuracy of the model and its ability to predict up to 41 diseases based upon 132 different symptoms provides evidence of the advancements that can be made in this field.
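As a rough illustration of symptom-based prediction (not the trained 132-symptom/41-disease model itself), one can score candidate diseases by symptom overlap. The tiny disease profiles below are invented for the sketch.

```python
# Made-up disease profiles, standing in for a trained classifier.
PROFILES = {
    "common cold": {"cough", "runny nose", "sneezing"},
    "migraine": {"headache", "nausea", "light sensitivity"},
    "flu": {"fever", "cough", "fatigue", "headache"},
}

def predict(symptoms):
    """Return the disease whose profile best matches the reported symptoms,
    using Jaccard similarity (|intersection| / |union|)."""
    symptoms = set(symptoms)
    def score(profile):
        return len(symptoms & profile) / len(symptoms | profile)
    return max(PROFILES, key=lambda d: score(PROFILES[d]))

print(predict(["fever", "cough", "headache"]))  # flu
```

The real app replaces this lookup with a model trained on labelled symptom vectors, but the input/output shape (symptom list in, disease label out) is the same.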
What I learned
Being a machine learning enthusiast, I had never used Android to deploy a front end for my applications, so Android development is the main thing I acquired from the project.
What's next for The Virtual Medic
I plan to work on the UI of the application as well as plan to integrate a virtual video conferencing tool that can connect the user to nearest doctor for health consultation.
Built With
android
flask
keras
machine-learning
python
Try it out
github.com | The Virtual Medic | ML Based Disease Prediction and Hospital Automation | ['harsh pandey', 'Neel Kukreti', 'Tanvi Thakur'] | [] | ['android', 'flask', 'keras', 'machine-learning', 'python'] | 26 |
10,141 | https://devpost.com/software/kitler | These were the results of some market research that we conducted.
Inspiration
Millions of Americans are obese, and with the rate of obesity only growing, something had to be done. Our group did some research and we realized that the main challenge people faced when working out was the
lack of motivation
. We had our core problem, and as we conducted more and more research on fitness and the lack of exercise, we were able to come up with Kilter, which targets obesity at its root to create a more motivating and rewarding workout experience.
What it does
Exercising can be a hassle; not only is it impersonal, but it is often inconvenient, and therefore, we often fail to see the results we desire. Let us introduce you to our app
Kilter
, a first-of-its-kind fitness app which combines the rapidly expanding market trends of both personal training and a cashback reward system, all at a lower price than our competition. The large majority of health app users are not willing to pay for the app, so let's start off with the complimentary aspect of our service. The free version of our app provides users with guided exercise videos in areas such as cardio, weightlifting, and calisthenics. We will also run ads during the video to profit. The trainer of the video will recommend that the user record a 1-minute video of themselves performing the exercise, and the user can choose to submit this video to the Kilter team to be validated. The user will then receive in-app currency to use in the app’s shop. As one of our potential customers stated in an interview,
“Money is a big incentive”
for people to complete fitness challenges. Furthermore, companies advertising in the Shop offer another viable revenue stream. And similar to the app Sweatcoin’s system, users can pay to submit more videos per day. Of course, our video submissions will comply with HIPAA in order to maintain the trust of users, sponsors, or companies that underwrite our solution. However, if a user wants a more personal experience, then our app also provides a paid service. For a price lower than gyms and our competitors, the paid experience of our app is a subscription based service where users can buy virtual training sessions with certified personal trainers. While other companies, such as Peloton or Skimble, offer online personal training, neither gives the user 1-on-1 video calling sessions, which Kilter would offer in order to add another revenue stream.
How we built it
We used
HTML and CSS
to build the webpage. Our backend developer was not present so we were unable to make the backend of the app.
Challenges we ran into
We were missing a key team member, which really put a wrench in our plans; as a result, we had to make do without a backend, and our website was not as robust as we would have liked.
Accomplishments that we're proud of
We are proud that we were able to translate our idea into a
functioning webpage
!
What we learned
We learned how to link
HTML and CSS.
We learned how to use an essential skill for all coders,
Git and GitHub
. We also learned how to host a website on
GitHub
.
What's next for Kitler
We hope to get some funding so that we can develop the
website/app
in a more robust manner and so that we can also market our app once we finish developing it!
Find out more about the idea
A
google doc
with more information :
https://docs.google.com/document/d/15ymF2LfmkNsf3qw60XlfBjSdgwnFXub8hwiLBy24Huk/edit?usp=sharing
Our
facebook page
with potential customers:
https://www.facebook.com/fitnesskilter
Link to
Website
https://akhilr0.github.io/
Built With
css
html5
Try it out
github.com
akhilr0.github.io | Kitler | Kilter is the first app to combine the two market trends of 1-on-1 personal training and a cashback reward to provide a more motivating workout experience. | ['Gaurish Lakhanpal', 'Subham Mitra', 'Akhil Ramidi', 'Lance Locker'] | [] | ['css', 'html5'] | 27 |
10,141 | https://devpost.com/software/wecare-5l9dgi | Home Screen of app, which allows you to report your symptoms, check the status of your circle, and get daily personalized tips.
Map Screen of app, which allows you to see hotspots around you and your Care Circle.
Care Circle screen of app, which allows you to health conditions of your loved ones.
Web interface, which can be used to update the symptoms. It is synced with the app.
New logo.
Update with a key.
Hotspots for countries.
Options from the start.
Questions about your health.
Hot spots.
App design
As the outbreak of COVID-19 continues to spread throughout the entire world, more stringent containment measures from social distancing to city closure are being put into place, greatly stressing people we care about. To address the outbreak, there have been many ad hoc solutions for symptom tracking (e.g.,
UK app
), contact tracing (e.g.,
PPEP-PT
), and environmental risk dashboards (
covidmap
). However, these fragmented solutions may lead to false risk communication to citizens while violating privacy, add extra layers of pressure to authorities and public health, and are not effective for following the conditions of our cared ones. Unless mandatory, these technologies have not seen large-scale adoption by the crowd. Until now, there has been no privacy-preserving platform in the world to 1) let us follow the health conditions of our cared ones, 2) use statistically rigorous live hotspot mapping to visualize current potential risks around localities based on available and important factors (environment, contacts, and symptoms) so the community can stay safer while resuming normal life, and 3) collect accurate information for policymakers to better plan their limited resources.
Such a unified solution would help many families who are not able to see each other due to self-quarantine and enable early detection and risk evaluation, which may save many lives, especially for vulnerable groups. These urgent needs would remain for many months given that the quarantine conditions may be in place for the upcoming months, as the outbreak is not reported to occur yet in Africa, the potential arrival of second and third waves, and COVID-19 potential reappearance next year at a smaller scale (like seasonal flu). There is still uncertain information about immunity after being infected and recovered from COVID-19. Therefore, it is of paramount importance to address them using an easy-to-use and privacy-preserving solution that helps individuals, governments, and public health authorities.
WeCare Solution
WeCare is a cross-platform app that enables you to track the health status of your loved ones. Individuals can add their family members and friends to a Care Circle, track their health status, and get personalized daily updates on best prevention practices. In particular, individuals can opt in to fill out a simple questionnaire, supervised by our epidemiologist team member, about their symptoms, comorbidities, and demographic information. The app then tracks their location and informs them of potential hotspots for them and for vulnerable populations over a live map, built using opt-in reports of individuals. Moreover, symptoms of individuals will be tracked frequently to enable sending a notification to the Care Circle and health authorities once the conditions get more severe. We have also designed a citizen points system, where individuals earn badges based on their contributions to solving the pandemic: daily checkups, staying healthy, avoiding high-risk zones, protecting vulnerable groups, and sharing their anonymous data.
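The live hotspot map built from opt-in reports can be approximated by binning report coordinates into lat/lon grid cells and counting reports per cell. The cell size and threshold below are assumptions for illustration, not the team's statistical model.

```python
from collections import Counter

CELL_DEG = 0.01  # assumed cell size, roughly 1 km at mid latitudes

def cell(lat, lon, size=CELL_DEG):
    # Snap a coordinate to the south-west corner of its grid cell.
    return (round(lat // size * size, 4), round(lon // size * size, 4))

def hotspots(reports, min_count=2):
    """reports: iterable of (lat, lon) opt-in report locations.
    Returns cells with at least min_count reports."""
    counts = Counter(cell(lat, lon) for lat, lon in reports)
    return {c: n for c, n in counts.items() if n >= min_count}

reports = [(59.3293, 18.0686), (59.3295, 18.0690), (59.40, 18.10)]
print(hotspots(reports))
```

A production version would weight reports by symptom severity and recency and apply proper statistical smoothing before displaying anything to users.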
WeCare includes a contact tracing module that follows the guidelines of Decentralized Pan-European Privacy-Preserving Proximity Tracing
(PEPP-PT)
. It is an international collaboration of top European universities and research institutes to ensure the safety and privacy of individuals.
What we have done during the weekend
Have been in contact with other channels in Brazil and Chile.
We have updated the pitch (extended version), the app design, and the backend connection of the app this week, and made new contacts in Chile and Singapore. We have also done some translation work on the app, shared more about the project on social media, and connected with more people on Slack and LinkedIn. We have also modified the concept of the Care Circle and how individuals are added and removed. Now the app is very easy to use, with minimal input (less than a minute per day) from the user. We are proud of our team's achievements, given the very limited time and all the challenges.
Challenges we ran into
The Hackathon brought together plenty of people of different expertise and skills. There were challenges that we faced that were very unique, as we faced a variety of communication platforms on top of open-source development tools.
Online Slack workspaces, Zoom meetings, and webinars presented challenges in the form of inactive team members, cross-communication, and information bombardment across several separate threads and channels in Slack, as well as online meetings of strangers coordinated across different time zones. In developing the website and app for user input data, our next challenge was preserving the privacy of user information.
In the development of a live hotspot map, our biggest challenge here was to ensure we do not misrepresent risk and prediction into our live mapping models.
Also for the testing of the iOS version, we ran to the new restriction of App Store for COVID-related apps, which should be backed up by some health authorities or governmental entities.
The solution’s impact on the crisis
We believe that WeCare would help many families who cannot see each other due to self-quarantine and enable early detection and risk evaluation, which may save many lives, especially among vulnerable groups. The ability to check up on their Care Circle and the hotspots around them substantially reduces stress levels and enables a much more effective and safer re-opening of communities. Also, individuals can get a better understanding of the COVID-19 situation in their local neighbourhood, which is of paramount importance but not available today.
The live hotspot map enables many people of at-risk groups to have their daily walk and exercise, which are essential to improve their immunity system, yet sadly almost impossible today in many countries.
The concept of Care Circle motivates many people to invite a few others to monitor their symptoms on a daily basis (incentivized also through badges and notifications) and take more effective prevention practices.
Thereby, WeCare enables everyone to make important contributions toward addressing the crisis.
Moreover, data sharing would enable a better visual mapping model for public assessment, but also better data collection for the public health authorities and policymakers to make more informed decisions.
The necessities to continue the project
We plan to continue the project and fully develop the app. However, to realize the vision of WeCare we need the followings:
Public support: a partnership with authorities and potentially being a part of government services to be able to deploy it on AppStore. It also makes WeCare more legitimate. This would increase the level of reporting and therefore having a better overview and control of the crisis.
Social acceptance: though being confirmed using a small customer survey, we need more people to use the WeCare app and share their data, to build a better live risk map. We would also appreciate more fine-grained data from the health authorities, including the number of infected cases in small city zones and municipalities.
Resources: So far, we are voluntarily (and happily) paying for the costs of the servers. Given that all the services of the app and website would be free, we may need some support to run the services in the long-run.
The value of your solution(s) after the crisis
The quarantine conditions and strict isolation policies may still be in place for upcoming months and year, as the outbreak is not reported to occur yet in Africa, the potential arrival of second and third waves, and possible COVID-19 reappearance next year at a smaller scale (like seasonal flu).
Therefore, we believe that WeCare is a sustainable solution and remains very valuable after the current COVID-19 crisis.
The URL to the prototype
We believe in open science and open-source developments. You can find all the codes and documentation (so far) at our
Website
.
Github repo
.
Pitch:
https://youtu.be/7fMrVqxoPKY
Pitch extended version:
https://youtu.be/Vo0gs3WlptU
Other channels.
https://www.facebook.com/wecareteamsweden
https://www.instagram.com/wecare_team
https://www.linkedin.com/company/42699280
https://youtu.be/_4wAGCkwInw
(new app demo 2020-05)
Interview:
https://www.ingenjoren.se/2020/04/29/de-jobbar-pa-fritiden-med-en-svensk-smittspridnings-app
Built With
node.js
python
react
vue.js
Try it out
www.covidmap.se
github.com | WeCare | WeCare is a privacy-preserving app & page that keeps you & your family safer. You can track the health status of your cared ones & use a live hotspot map to start your normal life while staying safer. | ['Alex Zinenko', 'Sina Molavipour', 'Ania Johansson', 'Hossein S. Ghadikolaei', 'Christian M', 'Seunghoon HAN', 'Tomasz Przybyłek', 'Mohamed Hany', 'Alireza Mehrsina'] | ['1st Place Overall Winners', '2nd Place'] | ['node.js', 'python', 'react', 'vue.js'] | 28 |
10,141 | https://devpost.com/software/eagle-sight | Inspiration
I wanted to build a mini game to entertain people, and I also wanted to add a lot of design to it. So I got the idea of making a quiz-like game that tests your eyes.
What it does
Eagle Sight is a mobile-friendly website, so it can be used on any device with an internet connection. At Eagle Sight, you get 10 questions that are all related to your eyesight, such as "How many numbers do you see?" or "Do you see a circle?" It is not easy, but anyone with very good sight can get 10 out of 10. After answering the 10 questions, you get your result: it tells you how good your eyes are, or whether they aren't really good. Then you can either play the game again or share your results on Facebook by pressing the "share score" button.
How I built it
Firstly, I wrote down what sort of questions to add and how to make the questions more difficult as you go. Then, I designed the graphics needed for the questions and the website. After that, using HTML, I coded the basic website with no CSS or JavaScript. I made all the pages except for the homepage and results page.
Then, I started designing the homepage. I used a lot of CSS to design the homepage and also added a beautiful floating colorful balls design using JavaScript.
After that I started adding CSS effects to the buttons and text of each and every question page. Finally, I created the results page with a similar design to the homepage.
The domain was obtained for free through Freenom, and hosting was done through InfinityFree. I also used Cloudflare to secure my website.
Challenges I ran into
As this was my first time using 100% code and no website builders at all, it was difficult. I ran into many challenges.
Whenever I couldn't add a beautiful design, I would use an open-source library's help. But most of the code in open-source libraries didn't work well on mobile: either the text would not show up or it would be really ugly. So I had to try out many of these snippets.
Making the site mobile-friendly was similar to the second challenge I mentioned: sometimes my own code made the site ugly on mobile, and sometimes I had to remake pages entirely because they worked well on PC but not on mobile. But finally I made the website really mobile-friendly.
Accomplishments that I'm proud of
I'm really proud of building a website with this much code, especially this much CSS. I didn't even know something called CSS existed until this week, but now I have a website with a lot of it.
Also, I am proud of the function of my website, when I told my family and few friends to try it out, it was perfect. Everyone got perfect results...
What's next for Eagle Sight
Now, Eagle Sight is live and anyone can visit it...
I have two main goals with this new website,
Bring in more people to my website and make it known among the people. Although I have published my website it doesn't mean I am done, it's now that the real game starts. I will have to promote it on my social media accounts as well as on other sites.
I have to develop my site. I need professionals to review my site and learn about how I can develop it.
Built With
css
html
javascript
Try it out
www.eaglesight.tech
github.com | Eagle Sight | Play this online game and see how sharp your eyes are... Are they as sharp as an Eagle's or weak as a bat's... | ['Senuka Rathnayake'] | [] | ['css', 'html', 'javascript'] | 29 |
10,141 | https://devpost.com/software/safe-ai-browser | Inspiration
Making browsing the internet safer is tricky: filtering applications and hardware block a whole domain, but are not smart enough to filter just the sensitive images.
What it does
Safe AI Browser is a browser plugin that detects the presence of images which are not suitable for minors and switch them to neutral images
How I built it
Using Azure Cognitive Services, the browser extension calls the Vision API to determine whether an image is suitable for minors.
Javascript for the browser extension
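The extension's core decision can be sketched as a pure function: given per-image moderation scores (of the kind a vision moderation API returns), pick which images to swap for a neutral placeholder. The thresholds, field names, and placeholder URL below are assumptions for illustration.

```python
# Hypothetical placeholder shown in place of flagged images.
NEUTRAL = "https://example.com/neutral.png"

def filter_images(images, adult_max=0.5, racy_max=0.7):
    """images: list of dicts like {"src": url, "adult": p, "racy": p},
    where p is a probability in [0, 1] from a moderation service.
    Returns a {original_src: replacement_src} map for images to swap."""
    return {
        img["src"]: NEUTRAL
        for img in images
        if img["adult"] > adult_max or img["racy"] > racy_max
    }

imgs = [{"src": "a.jpg", "adult": 0.9, "racy": 0.9},
        {"src": "b.jpg", "adult": 0.1, "racy": 0.2}]
print(filter_images(imgs))  # {'a.jpg': 'https://example.com/neutral.png'}
```

In the extension itself this logic lives in JavaScript, which then rewrites the `src` attribute of each flagged `img` element in the page.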
Accomplishments that I'm proud of
An intelligent AI based solution for disturbing images on the web
What I learned
Using Azure ML and cognitive services
What's next for Safe AI Browser
Next is a mobile SDK that can be integrated in mobile applications
Built With
ai
azure
machine-learning
Try it out
gitlab.com | Safe AI Browser | Improve your browsing experience using AI | [] | [] | ['ai', 'azure', 'machine-learning'] | 30 |
10,141 | https://devpost.com/software/exercise-together | Live Video Streaming
Video Room
Youtube enabled
Live Data Syncing
Search Bar
Authentication
DynamoDB
Home
Inspiration
We know that physical activity and social interaction have immense benefits*. During lockdown, many people aren't able to go to the gym or see any of their friends in person. I wanted to create an app to help people get their endorphins up and see their gym buddies across the world.
* https://www.cdc.gov/physicalactivity/basics/pa-health/index.htm, https://www.mercycare.org/bhs/services-programs/eap/resources/health-benefits-of-social-interaction/
What it does
Exercise Together is a web app that allows 3 people to share video while watching the same Youtube exercise class and log their exercise activity.
It works like this:
A user visits the website and either creates and account or logs in. Amazon Cognito is used for authentication.
Once authenticated, the user is directed to a dashboard depicting the amount of time spent exercising with Exercise Together.
The user clicks join room and enters a room name. Up to 3 of their friends enter the same name to join the same room.
The users enter a video chat room and can search for a Youtube exercise video together by utilizing the search bar. Once everything is ready, they click start exercise to begin!
When the video ends, the user returns to the dashboard and their time spent exercising is logged.
Exercise Together is helpful when you want to exercise with your friends and simulates an exercise class you could do at the gym like yoga or pilates. This way people can work out with their friends that are all over the world!
How I built it
I used React and Redux to build the front end of the project. For the backend, I used serverless functionality like Cognito, AWS Lambda, S3, DynamoDB, and AppSync. Cognito verifies the user so that I can log exercise data for every user separately. All data is stored in DynamoDB. When people enter a room, Agora.io livestreams everyone's video to each other so they can see each other's faces, while React displays everyone's video. Every change to the search bar and every click on a Youtube video is logged to DynamoDB and broadcast to all the other clients in the same room through AppSync. As a result, everyone in the room sees the same view at the same time. When you finish the workout, the data is sent to DynamoDB with the email you logged in with as the key. On the dashboard, a GET request is made back to DynamoDB so that you can see your exercise data for the whole week.
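The dashboard-side aggregation described above could look roughly like this; the item field names are illustrative, not the app's real DynamoDB schema:

```python
from datetime import datetime, timedelta

def weekly_minutes(items, user_email, now=None):
    """Sum exercise minutes logged in the last 7 days for one user.
    `items` mimics records fetched from DynamoDB keyed by email
    (field names here are made up for illustration)."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=7)
    total = 0
    for item in items:
        if item["email"] != user_email:
            continue
        ts = datetime.fromisoformat(item["timestamp"])
        if ts >= cutoff:
            total += item["minutes"]
    return total

sessions = [
    {"email": "a@x.com", "timestamp": "2020-06-01T10:00:00", "minutes": 30},
    {"email": "a@x.com", "timestamp": "2020-06-05T10:00:00", "minutes": 45},
    {"email": "b@x.com", "timestamp": "2020-06-05T11:00:00", "minutes": 20},
]
print(weekly_minutes(sessions, "a@x.com", now=datetime(2020, 6, 6)))  # 75
```

In the real app this sum would run over the items returned by the dashboard's GET request rather than an in-memory list.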
Challenges I ran into
I used a wide variety of services that I wasn't previously experienced with, like Agora.io, AWS Amplify, and AWS AppSync. Learning them was difficult, and I went through a lot of troubleshooting with those services in the code. Moreover, syncing all these services together into one application was a large challenge, and I kept trying different pieces of code one at a time to get them to work together.
Accomplishments that I'm proud of
I finally learned how to use web sockets (AWS AppSync uses web sockets), which I'm really excited to use in my future projects! Web sockets are especially crucial for online games, which I want to make.
What I learned
I learned how to use a multitude of services and link them together: web sockets, Agora.io, AWS Amplify, and AWS AppSync. All these services will be immensely useful for my future projects, so I believe I really benefited from creating this project.
What's next for Exercise Together
Some extensions I'd like to make include:
Adding Fitbit and Apple Health functionality so that users who use them can all see data logged onto the website.
Making a sidebar that people could use to see which of their friends are currently online and join a room with them. To implement that, I would use AWS Neptune, which uses the same technology that Facebook uses for Facebook Friends.
Creating a phone app using React Native. I feel that more people would like to use a phone app rather than the website.
There are still many bugs, especially with the video streaming, since I'm using a third-party API with a free account. For example:
The video streaming only works in Chrome.
Entering the video room with more than one person is a buggy process. The way I get it to work is by duplicating the tab for each user entering and closing the previous tab.
The Cognito verification link redirects to localhost, but will confirm the account.
Built With
agora.io
amplify
appsync
cognito
cookie
dynamodb
graphql
javascript
lambda
materialize-css
node.js
react
redux
s3
serverless
ses
websocket
Try it out
exercisetogether.rampotham.com
github.com
www.youtube.com | Exercise Together | Exercise Together is a webapp that simulates your own group fitness class online with your friends | ['ram potham'] | ['The Wolfram Award'] | ['agora.io', 'amplify', 'appsync', 'cognito', 'cookie', 'dynamodb', 'graphql', 'javascript', 'lambda', 'materialize-css', 'node.js', 'react', 'redux', 's3', 'serverless', 'ses', 'websocket'] | 31 |
10,141 | https://devpost.com/software/stacy-bot | Interface in FB messenger
This representation of NLP
Features which will be added more as time goes
PLEASE NOTE THIS IS A TEST BOT; PUBLISHING AND VALIDATION TAKE TIME, SO IF YOU WANT TO USE THIS YOU NEED TO BE A TESTER. BUT YOU CAN USE THE PHONE CALL FACILITY.
CALL AT: +1 463-221-4880
(This is a toll-free number based in US, if you are out of US then only minimal international charges will be applicable, I am from India and it takes 0.0065$/min)
If you want to use this app in your Facebook Messenger like shown in the video then please comment your Facebook ID in this project's comment section, I will add you as a tester to this app
THIS IS JUST A WORKING DEMONSTRATION OF MY IDEA TO TACKLE THE PROBLEM; IT CAN BE ADAPTED TO THE DEMANDS OF ANY ORGANISATION. AND THE BEST THING: IT IS NOT A CONCEPTUAL IDEA, IT IS A TOTALLY REALISTIC IDEA THAT CAN BE DEPLOYED AT ANY MOMENT ACCORDING TO THE DEMANDS OF THE ORGANIZATION.
Our Goal
General Perspective
Due to the COVID-19 situation, the world's workforce is shrinking (since everyone is maintaining self-quarantine and social distancing), which is creating havoc around the world. Through this project I mainly aim to tackle that problem and give health organizations a virtual workforce that runs 24/7 without a break and handles every kind of matter, from guiding people through filling out forms to managing patient data automatically.
Business Perspective(if required)
A bot service (it is not a company yet; we are student developers who want to start one) that adds a virtual workforce to every client organisation to help it grow in the market. From a business perspective, our potential targets are small businesses, NGOs, and health organisations; we free them from human service costs and help them reach more users by providing 24/7 interaction with their users, thus generating more revenue for them.
Inspiration
I was really inspired to make this advanced A.I bot by the current COVID-19 situation: because people are restricted from gathering, the workforce and user interaction of various health organisations are adversely affected. Through this project I aimed to connect health organizations with patients anywhere in the world, on any platform (not limited to Android, iOS, or the web), and to manage patient data automatically, reducing human effort while maintaining social distancing.
MADE THIS PROJECT TO BRING A CHANGE.
How is our product different than others
1)
There are many types of A.I bots; most are decision-tree-based models that work with particular buttons only. Our product is based entirely on NLP models, which are more advanced and in higher demand.
2)
Other A.I bot service providers are confined to only one or two platforms, whereas we let the client choose from a wide range of platforms: FB Messenger, Google Assistant, Slack, LINE, website bots, and even phone calls.
3)
For the health organisations willing to buy our technology (we are also willing to donate it for free), we will also be cheaper than our competitors from a business perspective: while others charge around $3,300/year for the service, we do it for a one-time fee in the $100-$1,500 range with more versatility.
It will be totally free for any user; no charges apply to users.
What it does
Our bot empowers every health organisation in situations like COVID-19 by managing screening, testing, and quarantine data, and by connecting people who want to get tested through a range of digital platforms. Where the internet is unavailable (and other bots won't function), our bot still works over a phone number, providing useful results. It covers all the important aspects of an advanced A.I bot, and it also connects health organisations with volunteers willing to donate their time as helping hands in this hour of need.
How I built it
I built it using Google Cloud A.I solutions and the Google Cloud Dialogflow framework (which includes automatic Firebase integration), where I trained the bot's NLP with large datasets from the WHO and governments, and then integrated it with Facebook Messenger through a Facebook Developer account. It also supports a phone call facility.
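Dialogflow bots like this are typically extended with webhook fulfillment: Dialogflow matches an intent from the user's message and POSTs it to your code, which returns the reply text. A minimal sketch of such a handler, with made-up intent and parameter names (the real bot's intents are not shown here):

```python
def handle_webhook(request_json: dict) -> dict:
    """Minimal Dialogflow v2-style fulfillment handler: route on the
    matched intent and return fulfillment text. Intent and parameter
    names below are hypothetical, for illustration only."""
    query = request_json.get("queryResult", {})
    intent = query.get("intent", {}).get("displayName", "")
    params = query.get("parameters", {})
    if intent == "book_test":
        city = params.get("city", "your area")
        text = f"Okay, I will connect you with a testing center in {city}."
    elif intent == "symptoms":
        text = "Common COVID-19 symptoms include fever, cough, and fatigue."
    else:
        text = "Sorry, I did not understand. Could you rephrase?"
    return {"fulfillmentText": text}

req = {"queryResult": {"intent": {"displayName": "book_test"},
                       "parameters": {"city": "Delhi"}}}
print(handle_webhook(req)["fulfillmentText"])
```

The same handler serves every channel (Messenger, Google Assistant, phone) because Dialogflow normalizes all of them into the one webhook request format.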
Challenges I ran into
I had to face many challenges. Being a solo developer, building such a complex app with all the advanced features mentioned cost me a lot of sleepless nights, but I hope my hard work pays off.
Accomplishments that I'm proud of
I am really proud of the app that I made because it itself is a big milestone for a solo developer like me.
What I learned
I learned a lot throughout the journey of developing this app: advanced use of Google Cloud A.I solutions and Dialogflow, integrating the bot with Facebook Messenger, building filters inside the chatbot to enhance the user experience, and connecting it to a phone number to receive calls.
What's next for Health Bot
If my work gets selected, then for sure I am going to work really hard to make Health Bot even bigger and to add more amazing functionalities to make my users happy.
Built With
dialogflow
facebook
google-cloud
javascript
json
Try it out
github.com | Advanced A.I Health Bot | An A.I bot with: Telephone calling,NLP,24*7 health coverage,total automatic data management,wipes rumors,Easy navigation,HD pictures,Customer service help etc | ['Udipta Koushik Das'] | ['Accessibility: Second Prize', 'Healthcare: Second Prize'] | ['dialogflow', 'facebook', 'google-cloud', 'javascript', 'json'] | 32 |
10,141 | https://devpost.com/software/college-for-dream | A screencap of the video.
My Project
I created a new video guide to add to the Fair Opportunity Project website. The website itself is already very accessible for those with disabilities: I tested it using just my keyboard, using a screen reader, and using a colorblind filter. One thing I noticed, however, was that there were no special resources for undocumented immigrants looking to go to college. Applications for DACA recipients can be difficult, so I created a short video explaining applications and financial aid for undocumented immigrants.
Here's the link:
https://youtu.be/y7nuO7TzxdE
Built With
youtube | College for Dreamers | Let's make college more accessible for undocumented immigrants. | ['Trisha Agrawal'] | [] | ['youtube'] | 33 |
10,141 | https://devpost.com/software/sistema-de-historico-de-pacientes-publicacao | mapa de telas
We developed a project for the public health sector aimed at serving the most underprivileged population. The project deals with managing patients' historical data. The idea is to build a Data Lake (a repository of structured, semi-structured, and unstructured data) that stores patient data. The advantage of a Data Lake is that structured data (an existing database) can be associated with unstructured data such as exam images, and machine learning techniques can be applied to the data. It is a data-integration environment without the full cost of multidimensional databases such as Data Warehouses and Data Marts. From this Data Lake, the plan is to develop a system capable of managing and querying the data: in short, an up-to-date, reliable, secure database managed with data and information by patients and their doctors. Because it uses Data Lake technology, this project can be used the same way anywhere in the world. Finally, public health authorities can use it to control diseases in a globally coordinated way, directly help those who need it most during a pandemic, and project future disease outbreaks through artificial intelligence and mathematical projection models.
Built With
data
flutter
Try it out
xd.adobe.com | Sistema de Historico de Pacientes | Take your life with you wherever you go! | ['Jose Alexandro Acha Gomes'] | [] | ['data', 'flutter'] | 34 |
10,141 | https://devpost.com/software/again-vui0w1 | Inspiration
A few days before the start of the quarantine in Morocco, we were walking down the street and saw a homeless man trying to find food. Going back home, we wondered what he could do if a quarantine were imposed on us Moroccans. A few days later, that is exactly what happened: we were quarantined. Thinking about the man we had seen, we started brainstorming solutions that we, as computer science enthusiasts, could build so that he and many others in the same situation can find shelter, especially during this tough time when they can easily be infected by the virus and just as easily spread it. When we saw Covidathon, we believed this was our chance to make our solution reach more people and take the first step toward making an impact.
What it does
Again is a solution that aims at securing shelter for homeless people during the lockdown by matching associations and organizations that deal with homeless people and house donators.
The solution also creates jobs for people who have lost their jobs by being applications' reviewers (more details about this below).
To secure shelter for homeless people, the application allows users to create accounts as an association, a house owner, or an applications' reviewer. All of the different types of users enter useful information about themselves when registering (details about the registration information required from each type can be found on the demo site):
As a house owner: anyone who possesses a house or multiple houses can donate them via the application by filling a house donating application. The application asks for information about the house/s that the user would like to donate. This information includes the location of the house, the area, but most importantly a document proving that the user owns that house. The purpose of this proof is to reduce the wasted time after matching an association with a user that does not really possess the house. This proof document will be processed by an AI system that will either validate it or not. If the document is validated, it will be available to applications' reviewers to match it with an association. If not, the donor’s application will be withdrawn. After the donated houses have been matched with an association or more associations (if there are many houses that a lot of associations can use), the contact of the donor is given to the associations so that they coordinate to finalize the donation process.
As an association: after registering in the application, associations can submit applications asking for matching with a donor. An approximate number of homeless people who will benefit from the donation should be specified in the application. It is then the job of applications' reviewers to review the application and decide on a match with a donor.
As an application reviewer: applications’ reviewers are people recruited through the application in order to review the associations’ applications and match them to house/s donors. To be an applications' reviewer, one must apply to the job through the website (applications are available in case of need when the amount of applications is too much). Applicants must provide their personal information, but most importantly, proof of losing their job because of the pandemic. This proof can be of any kind: a screenshot of an email of firing (the email should be forwarded later to make sure it comes from a recruiter, a document..). This proof of losing a job, plus the first-come, first-served basis, and the description of the need in the application are the factors that the admins are going to rely on when assessing applications. Each applications' reviewer will get associations’ applications on a weekly basis. Their job is to assess the need for associations and match them with house donors in the same locations. They also have to distribute the houses in an optimal way taking the need and the impact into consideration. Applications reviewers get paid from donations to the web application. These donations have nothing to do with the house/s donations, they are monetary donations that can be done through the web application to a specific bank account for this purpose. Anyone can donate including people not registered under any type in the application. More on how application reviewers get paid in the section below.
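The reviewers' matching step could be sketched as a simple greedy pairing of associations with validated donor houses in the same location, serving the largest need first. All names, record fields, capacities, and the heuristic itself are illustrative assumptions, not the app's actual logic:

```python
def match(associations, houses):
    """Greedy matching sketch: serve the largest-need association first,
    using only AI-validated houses in the same location."""
    matches = []
    free = [h for h in houses if h["validated"]]
    for assoc in sorted(associations, key=lambda a: -a["need"]):
        need = assoc["need"]
        for house in list(free):
            if need <= 0:
                break
            if house["location"] == assoc["location"]:
                matches.append((assoc["name"], house["id"]))
                need -= house["capacity"]
                free.remove(house)
    return matches

houses = [
    {"id": 1, "location": "Rabat", "capacity": 10, "validated": True},
    {"id": 2, "location": "Rabat", "capacity": 5, "validated": False},
    {"id": 3, "location": "Fes", "capacity": 8, "validated": True},
]
assocs = [{"name": "HopeOrg", "location": "Rabat", "need": 8},
          {"name": "CareOrg", "location": "Fes", "need": 12}]
print(match(assocs, houses))  # [('CareOrg', 3), ('HopeOrg', 1)]
```

In the real flow a human reviewer makes this call with the need and impact in mind; a heuristic like this would only be a starting suggestion.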
Payment Policy
Applications' reviewers get paid from donations. Since donations are unpredictable, our team came up with the following scheme. Reviewers get a token for each application reviewed, and thus for each association matched with a donor. The value of a token changes weekly depending on the donations received. A hypothetical scenario: 3 applications' reviewers have reviewed 10 applications each, so each has earned 10 tokens, making 30 tokens in total. The donations received that week total $300, so a token is worth $10 and each reviewer receives $100 for the week. However, this method breaks down if the donations for a certain week are very high: in the same scenario with $30,000 in donations, a token would be worth $1,000 and a reviewer would earn $10,000 for a single week. That would be unfair to reviewers who join in later weeks, when donations are much lower. To solve this, we set a maximum value that a token cannot exceed, so that when donations are high we save the surplus for upcoming weeks.
Going back to our scenario: if we cap a token at $20 and have 30 tokens to issue, we spend $600 and save $29,400 for upcoming weeks.
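The payout rule can be written down directly; the numbers below reproduce the two hypothetical scenarios described above (the function and its arguments are our own formulation of that rule):

```python
def weekly_payout(donations, tokens_per_reviewer, token_cap=20.0):
    """Compute the per-token value (capped), each reviewer's pay, and
    the amount saved for later weeks."""
    total_tokens = sum(tokens_per_reviewer.values())
    token_value = min(donations / total_tokens, token_cap)
    pay = {r: t * token_value for r, t in tokens_per_reviewer.items()}
    saved = donations - token_value * total_tokens
    return token_value, pay, saved

# Scenario 1: 3 reviewers x 10 tokens, $300 donated -> $10/token, $100 each.
value, pay, saved = weekly_payout(300, {"r1": 10, "r2": 10, "r3": 10})
print(value, pay["r1"], saved)   # 10.0 100.0 0.0
# Scenario 2: $30,000 donated, cap at $20 -> spend $600, save $29,400.
value, pay, saved = weekly_payout(30000, {"r1": 10, "r2": 10, "r3": 10})
print(value, saved)              # 20.0 29400.0
```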
Important notes:
Before associations submit their applications, they have to agree to some terms and conditions. An important condition is that the associations should engage the beneficiaries in society by having them help: doing a job, volunteering, or helping other homeless people. The goal of the application is not only to find shelter for these people but to engage them in society, especially during these tough times when we all have to unite.
Link to the document about using AI in Again:
https://docs.google.com/document/d/1RNNpGf3MIhp-lksVtGzXkH7Tb91Ilw4gRw7AJmu27bA/edit?usp=sharing
How we built it
To build our web app again, we (team members) divided the work into three parts:
The front-end part (Mohamed Moumou): designing each web page, writing the story of AGAIN and all the scripts in the web app, and building the actual front end using the React framework.
The back-end part (Ouissal Moumou): designing the database and building the actual back end of our web app using the Express.js framework, MongoDB (for the database), and APIs.
Deployment (Ouissal Moumou & Mohamed Moumou): we used Heroku to deploy both the back-end and front-end apps.
Accomplishments that we're proud of
The Again team is very proud to be thinking about homeless people while everyone else is thinking about the problems of those who have homes. That does not mean those problems are not urgent, but a huge part of society was already struggling and is now struggling even more because of the COVID-19 outbreak, and it needs urgent help and re-integration. Another accomplishment we are proud of is that our idea provides jobs for people who have lost theirs.
What's next for Again
1- Implementing AI solutions in our App,
2- Adapting the services offered by the app to every country's laws,
3- Make our web app available in many languages (Arabic, French...).
Helpful hints about running the application in our demo site:
http://againproject.herokuapp.com/
If the page returns an error message from Heroku, just refresh the page and it will work.
Here are some login credentials for quick testing of the application:
For an association:
email: tasnimelfallah@gmail.com
password: Tasnim123
For a house/s donator:
email: mohamedjalil@gmail.com
password: yay yay
For an application reviewer:
email: badr@again.com
password: Badr123
The information and metrics shown on our app are fictional.
Built With
heroku
javascript
mongodb
node.js
react
rest-apis
uikits
Try it out
againproject.herokuapp.com
againbackend.herokuapp.com
github.com
github.com
docs.google.com | Again | Again is a solution that aims at securing a shelter for homeless people during the lockdown by matching associations and organizations that deal with homeless people and house donators. | ['Mohamed MOUMOU', 'Ouissal Moumou'] | ['The Wolfram Award'] | ['heroku', 'javascript', 'mongodb', 'node.js', 'react', 'rest-apis', 'uikits'] | 35 |
10,141 | https://devpost.com/software/safetravels-pr429f | SafeTravels Logo
Arduino Hardware Wiring Diagram
Constructed Arduino Hardware Circuit
Mobile App Signup Page
Mobile App Bus Line List Without Filters
Mobile App Seating Chart with Recommendation (Blue)
Mobile App RFID Setup Page
Mask Detection Analysis (High Confidence True)
Mask Detection Analysis (High Confidence False)
Audio Spectrogram for Cough Detection
Admin Website View + Add Bus Lines
Inspiration
Public transportation is a necessity to society. However, with the rapid spread of COVID-19 through crowded areas, especially in lines like city metros and busses, public transportation and travel have taken a massive hit. In fact, since the beginning of the pandemic, it is estimated that usage of public transportation has dropped between 70-80%. We set out to create a project that would not only make public transportation safer and more informed, but also directly reduce the threat of disease transmission through public transportation, thus restoring confidence in safe public transportation.
What it does
SafeTravels improves safety in public transportation by enabling users to see the aggregated risk score associated with each transportation line and optimize their seating to minimize the risk of disease transfer. A unique RFID tag is tied to each user and is used to scan users into a seat and transportation line. By linking previous user history on other transportation rides, we can calculate the overall user risk and subsequently predict the transportation line risk. Based on this data, our software can recommend the safest times to travel. Furthermore, based on seating arrangements and user data, a euclidean based algorithm is utilized to calculate the safest seat to sit in within the transportation vehicle. Video analysis for mask detection and audio analysis for cough detection are also used to contribute to overall risk scores.
How we built it
Mobile App
A mobile app was created with Flutter using the Dart programming language. Users begin by signing up or logging in and linking their RFID tag to their account. Users are able to view public transportation schedules optimized for safety risk analysis. Seat recommendations are given within each ride based on the seat with the lowest disease transfer risk. All user and transportation data is encrypted with industry-level BCrypt protocol and transferred through a secure backend server.
Administrator Website
The administrator website was created with React using HTML/CSS for the user interface and JavaScript for the functionality. Administrators can add transportation lines and times, as well as view existing lines. After inputting the desired parameters, the data is transferred through the server for secure storage and public access.
Arduino Hardware
The Hardware was created with Arduino and programmed in C++. An MFRC522 RFID reader is used to scan user RFID tags. An ESP8266 WiFi module is utilized to cross reference the RFID tag with user IDs to fill seat charts and update risk scores for transportation lines and users. If a user does not scan an RFID tag, an ultrasonic sensor is used to update the attendance without linking the specific user information. Get requests are made with the server to securely communicate data and receive the success status to display as feedback to the user.
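On the server side, handling a scan request from the hardware might look like the following sketch. It is written in Python for illustration (the real backend is Node.js), and the tag-to-user table, seat chart structure, and anonymous fallback are all assumptions:

```python
def handle_scan(tag_id, seat, ride, tag_to_user, seating):
    """Cross-reference an RFID tag with a user and fill the seat chart.
    Unknown tags still count toward attendance, but anonymously,
    mirroring the ultrasonic-sensor fallback described above."""
    user = tag_to_user.get(tag_id)          # None -> anonymous rider
    seating.setdefault(ride, {})[seat] = user or "anonymous"
    return {"ok": True, "user": user, "attendance": len(seating[ride])}

tags = {"04A1B2": "user_17"}
chart = {}
print(handle_scan("04A1B2", "3B", "bus42", tags, chart))
print(handle_scan("FFFFFF", "3C", "bus42", tags, chart))
```

The `ok` status in the response is what the Arduino parses to display success feedback to the rider.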
Video Analysis (Mask Detection)
Video analysis is conducted at the end of every vehicle route by taking a picture of the inside of the vehicle and running it through a modified MobileNet network. Our system uses OpenCV and Tensorflow: the Res10 net first detects faces and draws a bounding box around each one, which is then fed into our modified, trained MobileNet network to output one of 2 classes, mask or no mask. The number of masks is counted and sent back to the server, which also triggers a recalculation of risk for all users.
Audio Analysis (Cough Detection)
We also conduct constant local audio analysis of the bus to detect coughs and count them as another data point into our risk calculation for that ride. Our audio analysis works by splitting each audio sample into windows, conducting STFT or Short Time Fourier Transform on that to create a 2D spectrogram of size 64 x 16. This is then fed into a custom convolutional neural network created with Tensorflow that calculates the probability of a cough (using the sigmoid activator). We pulled audio and trimmed it from Youtube according to the Google AudioSet, by getting audio labeled with cough and audio labeled as speech and background noise as non_cough. We also implemented silence detection using the root mean square of the audio and a threshold to filter out silence and noise. This works in realtime and automatically increments the number on the server for each cough so the data is ready when the server recalculates risk.
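The windowing-plus-STFT step can be sketched with NumPy. The frame length, hop, and frame count below are our guesses, chosen only to produce the 64 x 16 spectrogram shape mentioned above:

```python
import numpy as np

def spectrogram(signal, frame_len=128, hop=64, frames=16):
    """Toy STFT: split the signal into overlapping Hann-windowed frames,
    FFT each one, and keep a 64 x 16 magnitude spectrogram."""
    win = np.hanning(frame_len)
    cols = []
    for i in range(frames):
        start = i * hop
        frame = signal[start:start + frame_len] * win
        mag = np.abs(np.fft.rfft(frame))[:64]   # keep 64 frequency bins
        cols.append(mag)
    return np.stack(cols, axis=1)               # shape (64, 16)

sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s of 440 Hz
spec = spectrogram(sig)
print(spec.shape)  # (64, 16)
```

The resulting 2D array is what would be fed into the first convolutional layer; the silence gate (root mean square against a threshold) would run before this step so quiet windows are never transformed at all.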
Backend Server
The backend was created with Node.js hosted on Amazon Web Services. The backend handles POST and GET requests from the app, hardware, and Raspberry Pi to enable full functionality and integrate each system component with one another for data transfer. All sensitive data is encrypted with BCrypt and stored on Google Firebase.
Risk Calculation
A novel algorithm was developed to predict the risk associated with each transportation line and user. Transportation line risk aggregates each rider's risk, the mask percentage, and the duration multiplied by a standard figure for transmission. User risk uses the number of rides and the risk of each ride within the last 14 days. Because transportation line risk and user risk are interconnected, they form a conditional probability tree (Markov chain) that continually updates with each ride.
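A simplified version of that aggregation, with an illustrative combination rule rather than the app's exact formula (the weights and the transmission constant `k` are assumptions):

```python
def line_risk(rider_risks, mask_fraction, duration_min, k=0.001):
    """Aggregate a ride's risk: mean rider risk, scaled up when fewer
    riders wear masks, times duration x a standard transmission factor.
    The combination rule is illustrative only."""
    if not rider_risks:
        return 0.0
    base = sum(rider_risks) / len(rider_risks)
    return base * (2 - mask_fraction) * duration_min * k

def user_risk(ride_risks_last_14d):
    """A user's risk grows with the number and risk of recent rides."""
    return sum(ride_risks_last_14d) / (1 + len(ride_risks_last_14d))

r = line_risk([0.2, 0.4, 0.6], mask_fraction=0.5, duration_min=30)
print(round(r, 4))  # 0.018
```

Because each new ride changes both line risk and the riders' own risks, the real system re-runs both updates after every trip, which is what makes the chain of conditional updates described above.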
Optimal Transportation Line and Seat
After the risk is calculated for each transportation line and user, algorithms were developed to pinpoint the optimal line/seat to minimize disease transmission risk. For optimal transportation lines, the lowest risk score for lines within user filters is highlighted. For optimal seat, the euclidean distance between other riders and their associated risk levels is summed for each empty seat, yielding the seat with the optimal score
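The seat scorer might be implemented as follows. Weighting each rider's risk by inverse euclidean distance is one plausible reading of the summed distance-and-risk score described above, not necessarily the project's exact formula:

```python
import math

def best_seat(empty_seats, occupied):
    """Pick the empty seat minimizing sum(risk_i / distance_i) over
    riders. Seats are (row, col) grid coordinates."""
    def score(seat):
        return sum(r["risk"] / math.dist(seat, r["seat"]) for r in occupied)
    return min(empty_seats, key=score)

occupied = [{"seat": (0, 0), "risk": 0.9}, {"seat": (0, 3), "risk": 0.1}]
empty = [(0, 1), (0, 2), (2, 2)]
print(best_seat(empty, occupied))  # (2, 2), farthest from the risky rider
```

With inverse-distance weighting, a high-risk rider nearby dominates the score, so the recommendation naturally pushes new riders away from the riskiest occupants.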
Challenges we ran into
One challenge that we ran into when doing the audio analysis was generating the correct size of spectrogram for input into the first layer of the neural network as well as experimenting with the correct window size and first layer size to determine the best accuracy. We also ran into problems when connecting our hardware to the server through http requests. Once the RFID tag could be read using the MFRC522 reader, we needed to transfer the tag id to the server to cross reference with the user id. Connecting to a WiFi network, connecting to the server, and sending the request was challenging, but we eventually figured out the libraries to use and timing sequence to successfully send a request and parse the response.
Accomplishments that we're proud of
Within the 24 hour time period, we programmed over 3000 total lines of code and achieved full functionality in all components of the system. We are especially proud that we were able to complete the video/audio analysis for mask and cough detection. We implemented various machine learning models and analysis frameworks in python to analyze images and audio samples. We were also able to find and train the model on large data sets, yielding an accuracy of over 70%, a figure that can definitely increase with a larger data set. Lastly, we are also proud that we were able to integrate 5 distinct components of the system with one another through a central server despite working remotely with one another.
What we learned
One skill we really learned was how to work well as a team despite being apart. We all have experience working together in person at hackathons, but working apart was challenging, especially when we are working on so many distinct components and tying them together. We also learned how to implement machine learning and neural network models for video and audio analysis. While we specifically looked for masks and coughs, we can edit the code and train with different data sets to accomplish other tasks.
What's next for SafeTravels
We hope to touch up on our hardware design, improve our user experience, and strengthen our algorithms to the point where SafeTravels is commercially viable. While the core functionalities are fully functional, we still have work to do until it can be used by the public. However, we feel that SafeTravels can have massive implications in society today, especially during these challenging times. We hope to make an impact with our software and help people who truly need it.
Built With
c++
css
dart
html
javascript
kotlin
objective-c
python
ruby
swift
Try it out
github.com
safetravels.macrotechsolutions.us | SafeTravels | Restore confidence in safe public transportation | ['Sai Vedagiri', 'Gustav Hansen', 'Elias Wambugu', 'Arya Tschand'] | ['Best Hardware Hack presented by Digi-Key'] | ['c++', 'css', 'dart', 'html', 'javascript', 'kotlin', 'objective-c', 'python', 'ruby', 'swift'] | 36 |
10,141 | https://devpost.com/software/safer-browser | Inspiration
A lot of the video content on the web is mislabeled and mis-categorized, which makes it difficult for parents to select the age appropriate material for their kids, and wish if there was a real time video analysis tool that could detect inappropriate content.
What it does
Using machine learning, the application can detect inappropriate scenes in a video and alert the viewer. This gives many parents, and content platforms, some peace of mind.
How I built it
Computer vision, Java, Android Studio
Challenges I ran into
The time it takes to analyze a video is usually long, so I had to manipulate the frame rate to speed up the process.
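That frame-rate manipulation amounts to analyzing only every Nth frame. A minimal sketch of selecting frame indices for a target analysis rate (the specific rates are illustrative):

```python
def frames_to_analyze(total_frames, video_fps, target_fps):
    """Pick every Nth frame so a video is analyzed at roughly
    `target_fps` instead of its native rate."""
    step = max(1, round(video_fps / target_fps))
    return list(range(0, total_frames, step))

# 10 s of 30 fps video analyzed at ~2 fps -> every 15th frame, 20 frames.
idx = frames_to_analyze(300, 30, 2)
print(len(idx), idx[:3])  # 20 [0, 15, 30]
```

Skipping frames this way trades a little temporal resolution for a large reduction in classifier invocations, which is usually acceptable since scenes span many consecutive frames.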
Accomplishments that I'm proud of
Democratizing this technology and making available to all smart tv owners, and content platform providers.
What I learned
Video analysis is a resource-intensive process.
What's next for Safer Browser
The application will undergo a major UI/UX overhaul to make it more intuitive and user friendly.
There are many analysis that we can add, including analyzing the audio for inappropriate words and phrases, not to mention the huge application for advertising relevant to the scenes on the screen.
Built With
android
android-studio
java
Try it out
gitlab.com | Safer TV | Machine Learning for safer tv time for kids | [] | ['Honorable Mention for Best Overall (5)'] | ['android', 'android-studio', 'java'] | 37 |
10,147 | https://devpost.com/software/covid-19-medical-justice | Explore a whole range of social vulnerability and COVID-19 metrics and see the map dynamically change to reflect new correlations.
Explore analytics and visualizations about each county and the relationship between disparity and COVID-19. All COVID-19 data updated daily.
Search for any county, whether your own or across the country.
As student activists ourselves, we recognize the severity of these issues and the timeliness of change.
To help out, we’ve compiled a repository of resources to address health inequality in communities like McKinley & countless others.
The COVID-19 Health Vulnerability Mapper – for the explorer and change maker in all of us. http://covid.shawsean.com/
Inspiration
The idea for the Health Vulnerability Mapper was motivated by the apparent link between socioeconomic disparity and susceptibility to the COVID-19 pandemic in the United States. While there has been an abundance of media attention surrounding social disparity during this pandemic, we saw a lack of data-driven and interactive geographic tools that show people how this issue really plays out on the national stage.
What it does
The Health Vulnerability Mapper presents an easy and engaging way of exploring up-to-date COVID-19 data while directly visualizing the health disparities found in different communities. Once launched, the web app displays a 3D map of the US, with each county represented by a vertical bar whose height corresponds to the selected COVID metric (updated daily through AWS) and whose color corresponds to the selected vulnerability metric. The vulnerability statistics are based on the CDC’s Social Vulnerability Index (SVI) and include census variables like income, demographic composition, and minority status, by percentile and percent. We’ve also included measurements like high school graduation rate and vehicle ownership, some of which have surprising correlations with COVID-19. By searching for a county, or by clicking on one of the bars on the map, users can access information about that county including its social vulnerability, its current COVID status, the relationship between these two metrics, and how both of these values compare to the rest of the country. All counties also include visualizations showing the frequency of mask use among their residents.
How we built it
We created the Health Vulnerability Mapper using several powerful technologies offered by AWS and relevant libraries. Our COVID statistics are updated daily from AWS Data Exchange, where we’ve leveraged every datapoint in the Enigma Daily COVID-19 U.S. Counties database. We used AWS Cloudwatch and Lambda to provide automated processing for the 400,000 lines of data we retrieve each day which we store through S3 and deliver to the user through our Elastic Beanstalk server. The final visualization is then powered through Mapbox and C3.JS.
Challenges we ran into
One of the major challenges we ran into while creating the web app was tied to data processing. Specifically, the two main datasets that we used -- the Enigma COVID database and the SVI dataset -- had different county organizations. To solve this issue, we ultimately had to write programs to pre-sort the datasets by county FIPS code and cross-check each line of data to get the corresponding COVID and SVI values for each county. Other challenges arose in designing an intuitive display and debugging the S3 automation process.
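The FIPS cross-check amounts to joining the two datasets on a shared county key. A simplified Python sketch with made-up records (the real programs, column names, and values aren't shown in the writeup):

```python
def merge_by_fips(covid_rows, svi_rows):
    """Join COVID and SVI records on county FIPS code; keep only
    counties present in both datasets, sorted by FIPS."""
    svi_by_fips = {row["fips"]: row for row in svi_rows}
    merged = []
    for row in sorted(covid_rows, key=lambda r: r["fips"]):
        svi = svi_by_fips.get(row["fips"])
        if svi is not None:
            merged.append({**svi, **row})
    return merged

covid = [{"fips": "06037", "cases": 1000}, {"fips": "99999", "cases": 5}]
svi = [{"fips": "06037", "svi": 0.82}]
rows = merge_by_fips(covid, svi)
```

Indexing one dataset by FIPS first makes the cross-check a constant-time lookup per line, which matters at 400,000 lines a day.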
Accomplishments that we're proud of
As new college students, we undoubtedly faced challenges while harnessing technologies we hadn’t had much experience with before. For some of us, this was our first time using AWS services. Yet together, we managed to construct an interactive, real-time web app, complete with statistical visualizations and related resources. We’re proud of the individual successes, like finally getting a feature to work or finding a relevant dataset, but most of all, we’re proud to have a valuable tool to contribute to the evolving conversation surrounding social disparity and COVID-19.
What we learned
From a technical perspective, our team learned a lot about using AWS services and web development both on the front-end and back-end. From a teamwork perspective, we also learned quite a bit about leveraging the synchronous capabilities of AWS Cloud9 to work more effectively. From a data perspective, we found several important relationships between the SVI and COVID metrics. We found significant positive correlations between expected metrics like demographics, socioeconomic situation, living circumstance, and poverty – and more surprisingly, we also found trends between less obvious variables like vehicle ownership and education.
What's next for The COVID-19 Health Vulnerability Mapper
Going forward, we’re excited to use our tool to partner with nonprofits, county governments, and other organizations to identify and develop solutions for health inequity. To this end, we’re in the works of adding more analytical features, such as a time series illustrating COVID data over time. We’re also working on incorporating COVID and vulnerability data from multiple countries to expand our reach to a global scale.
Built With
amazon-cloud9
amazon-cloudwatch
amazon-ec2
amazon-elastic-beanstalk
amazon-lambda
amazon-web-services
animate.js
bootstrap
c3.js
d3.js
express.js
javascript
mapbox-gl-js
node.js
wow.js
Try it out
covid.shawsean.com | Access and Equity: Health Vulnerability Mapper COVID-19 | We’re presenting an interactive map-based web application that highlights the relationships between socioeconomic disparity and COVID-19 across the U.S. in an intuitive, real-time, & tangible medium. | ['Ethan McFarlin', 'Iris Xia', 'Sean Yang'] | ['Best in Data Visualization', 'Best Healthcare Solution'] | ['amazon-cloud9', 'amazon-cloudwatch', 'amazon-ec2', 'amazon-elastic-beanstalk', 'amazon-lambda', 'amazon-web-services', 'animate.js', 'bootstrap', 'c3.js', 'd3.js', 'express.js', 'javascript', 'mapbox-gl-js', 'node.js', 'wow.js'] | 0 |
10,147 | https://devpost.com/software/graph-map | Inspiration
Network Science lets us understand “the whole picture” between related data points thanks to the structural properties of a graph dataset. Some graph dataset examples are relations on social networks, relations between proteins, and relations on words co-occurrence in multiple texts. In past years, Network Science brought advances in drug discovery and social science - from molecules interactions prediction to humans interaction analyses. With GraphMap, I would like to bring Network Science to the regular Data Analysis pipeline to reach insights, based on datapoints relationships.
What it does
With graphMap, you can use network science and geolocation visualization to analyze data point relations across a territory. Insights are extracted from plain text, so the datasets don’t necessarily have to be of the same type. The network science calculations triggered on every graphMap query can be used to discover insights such as: Which niche markets are satisfied within a 100 km radius? How many beds do hospitals have in this area? How far is this property from other highly valuable real estate properties? What is the demand for ingredients in the area? Among others. An interactive graph visualization, 3D maps, and word clouds are provided to help understand the insights.
How I built it
On-demand NetworkX analyses are performed taking advantage of the AWS Athena, AWS S3, AWS API Gateway, and AWS Lambda services; NetworkX is a Python package for network science research. The frontend is powered by Vue.js, Sigma.js, and deck.gl. It lets the user select a radius on a map and trigger a query to AWS Athena (through API Gateway and AWS Lambda), obtaining information from datasets stored in AWS S3; the data is processed with AWS Lambda and Python. Co-occurrence algorithms process the “DESCRIPTION” field of each element returned from AWS Athena, and a graph is generated from the word relations.
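The co-occurrence step can be sketched in plain Python; the real project feeds the resulting edges into NetworkX, and the whitespace tokenization here is deliberately simplistic:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(descriptions):
    """Count how often each word pair appears together in the same
    DESCRIPTION field; the counts become weighted graph edges."""
    counts = Counter()
    for text in descriptions:
        words = sorted(set(text.lower().split()))
        for a, b in combinations(words, 2):
            counts[(a, b)] += 1
    return counts

edges = cooccurrence_edges(["organic coffee shop", "coffee roaster shop"])
```

Sorting the deduplicated words keeps each pair in a canonical order, so ("coffee", "shop") and ("shop", "coffee") count as the same edge.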
As an example, the project uses datasets on a subscription basis from Kochava Collective, REARC and Relevant Data provided in the AWS Data Exchange service. These datasets were enriched and formatted for the application.
Challenges I ran into
The most noticeable challenge was managing the time it takes to get data from AWS Athena and return information through API Gateway, after processing it in AWS Lambda, in less than 30 seconds (the default request timeout). To solve the problem, I used chained requests; another approach could be to use the AWS SNS service. I was surprised at how easy it is to set up the AWS Athena service, as long as all the .csv files stored in AWS S3 have the same columns.
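One way to picture the chained-request workaround: each invocation processes one slice of the result set and hands the client a cursor for the next call, so no single request approaches the timeout. A toy Python sketch (the function name, chunk size, and doubling "work" are illustrative):

```python
def process_chunk(all_rows, cursor, chunk_size=2):
    """Process one slice of the result set and return the next cursor.
    A cursor of None means all the work is finished."""
    chunk = all_rows[cursor:cursor + chunk_size]
    results = [row * 2 for row in chunk]      # stand-in for real processing
    next_cursor = cursor + chunk_size
    if next_cursor >= len(all_rows):
        next_cursor = None
    return results, next_cursor

# The client keeps re-requesting with the returned cursor:
rows, cursor, out = [1, 2, 3, 4, 5], 0, []
while cursor is not None:
    results, cursor = process_chunk(rows, cursor)
    out.extend(results)
```

The same pattern works whether the "client" is a browser polling API Gateway or a Lambda re-invoking itself.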
Accomplishments that I'm proud of
I am proud of running NetworkX algorithms on a serverless architecture such as AWS Lambda.
What I learned
I learned the potential of data as a service in the AWS DataExchange platform. Also, I learned how well AWS Athena can be used to perform a serverless analysis of large datasets. AWS Athena was capable of retrieving data with simple SQL from multiple .csv files stored on the cloud.
What's next for graphMap.?
I would like to collaborate with AWS to build a service for co-occurrence graphs generation. It could be a similar service such as AWS LDA in AWS Comprehend. I would like to further develop the interface and develop plots for nodes properties such as nodes betweenness centrality, page rank, etc. I will try more heterogeneous data to better understand the scope of the technology.
Built With
amazon-web-services
aws-athena
aws-dataexchange
aws-gateway
aws-lambda
networkx
node.js
python
vue.js
Try it out
graphmap.hcanales.com | graphMap. | Analyze relations between multiple datasets with geo-location and network science | ['Horacio Canales'] | ['Best in Data Analysis', 'Best Retail Solution'] | ['amazon-web-services', 'aws-athena', 'aws-dataexchange', 'aws-gateway', 'aws-lambda', 'networkx', 'node.js', 'python', 'vue.js'] | 1 |
10,147 | https://devpost.com/software/medical-intelligence-applied | Software architecture (dashboard)
Health check bot
Search bot
Email search results
Dashboard login
Dashboard home
Dashboard user profile
emotion prediction
BMI prediction
Login
Signup
Data visualisation dashboard
Voice chat
Text chat
Overview
Here are some quick links to some of the resources we developed while creating our project:
💡 • Website
📐 • Wireframe
📱 • Figma Prototype
📕 • Documentation
Inspiration
As our population ages, we will begin to see a lot of multimorbidity. The aging population will have higher rates of diabetes, hypertension, and other chronic ailments, along with neuropsychological conditions. Seniors suffering from chronic diseases require regular health check-ups every 3-4 months for proper management of medications, vital signs, and lab values. These conditions impact quality of life and seniors' ability to perform everyday activities, thus increasing the need for caregivers to assist with daily routines on top of managing complex conditions. Furthermore, evidence suggests that caregivers enter their roles with little support and therefore carry high rates of mental and emotional health problems as a result.
Now with the onset of COVID-19, the elderly’s ability to access usual medical care and mental support have drastically decreased, and the communication with caregivers is impaired. Seniors are advised to stay at home and may feel isolated from their family and friends, leading to struggles with mental health on top of existing feelings of hopelessness and possible grief from the loss of loved ones. Caregivers too may experience increased stress due to barriers in remote healthcare support.
These problems all point to the need for a solution that develops communication channels between seniors and caregivers for health management while making remote support and health care accessible. Mobile health (mHealth) interventions using smartphones have proven effective for monitoring mood and health symptoms, while also providing a platform for communication and support for mental health concerns. However, these applications are not always accessible to the elderly population. Finger sensitivity and mobility can be an obstacle for the elderly, as it impairs their ability to interact with apps. Features such as larger font size, high contrast, and text-to-speech functionality are often neglected in favor of modern design trends intended to appeal to younger audiences.
Therefore, we designed our app, miia (Medical Intelligently Applied) to be accessible and usable by most seniors. Miia is an application that will help track and manage both physical and mental health conditions for the elderly population. For instance, we implemented a Chatbot function to ask seniors about mood and emotions, while also providing a means to input health measurements such as vital signs. The chatbot can be made to speak aloud, while the senior can utilize their voice which is then converted to text. Furthermore, our app has an additional function to track the mobility and activity functions of our users through drawing data from the built-in accelerometer, gyroscope, and other smartphone sensors. This will help us predict the activity and encourage exercise, and potentially prevent frailty and traumatic falls with seniors.
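As a rough illustration of the planned fall detection, a common baseline is to threshold the accelerometer magnitude: a near-free-fall dip followed by an impact spike. A Python sketch with illustrative thresholds (not the project's actual model, which would be trained on labeled sensor data):

```python
import math

def detect_fall(samples, free_fall_g=0.4, impact_g=2.5):
    """Flag a fall when a near-free-fall reading (magnitude well below
    1 g) is followed by a hard impact spike. Thresholds are illustrative."""
    falling = False
    for x, y, z in samples:
        g = math.sqrt(x * x + y * y + z * z)
        if g < free_fall_g:
            falling = True
        elif falling and g > impact_g:
            return True
    return False

# Quiet standing (~1 g), then a drop, then an impact:
fall = detect_fall([(0, 0, 1.0), (0, 0, 0.1), (0, 2.0, 2.0)])
```

A learned model replaces the fixed thresholds, but the input is the same stream of (x, y, z) samples from the phone's sensors.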
How to use miia
Miia can be used by signing in with Gmail or by creating a new account. Once you've logged into miia, you're greeted by the main dashboard, which provides an overview of your profile along with several different tabs. Here users can chat with miia, sync wearables, and receive diagnostic reports from health checkups. The current functionality of the application is limited to conducting conversations with the chatbot and completing facial recognition scans that detect mood and BMI.
Nonetheless, our current Figma prototype serves as a better representation of the app's final functionality and design. In contrast to the web application, the prototype is developed for mobile devices to better serve the elderly through prioritizing convenience and mobility. The prototype itself is fully interactive as users have the ability to click, scroll, and drag through both caregiver and patient interfaces.
What it does
The system leverages AI technology to analyze data collected from facial recognition, speech recognition, wearable devices, and/ or IoT on a daily basis, and alert the caregivers if there are any identified risks. The platform also provides a way to facilitate communication between caregivers and care recipients, while aiding with health management to alleviate caregiver stress.
Main features
Health data collection
Daily health monitoring
We ensure the health data collection process is easy to follow by having the whole health check-up process guided by our AI chatbot miia, which include the following:
AI chatbot - collect health data unavailable in facial and speech recognition/ wearable devices.
Facial recognition - facial image taken for analysis of emotions, and BMI
Elderly focus design
We have conducted phone interviews and user tests with seniors to ensure the app is simple and easy to use.
Voice control - elderly users can choose to interact with chatbot by voice or text
AI Chatbot to stimulate human-like interactions
Enlarged text and other accessibility features
Data visualization for caregivers
The caregivers' interface is specially designed by our designer and incorporated feedback from two medical professionals in our team. We ensure the app is useful and meets the needs of caregivers.
Data analytics dashboard - show key metric of elderly over one month
Detailed health reports of elderly - details of each health parameter
Medicare provider search
If medical follow-up is required, caregivers and/ or seniors can ask our search bot for the Medicare providers in their living area to schedule an appointment.
How we built it
Software
• Frontend development using Vue for the caregiver dashboard and Android Studio for the Android app.
• AWS Amplify - develop and deploy the Android app
• Amazon Lex - chatbots for daily health check-ins and Medicare provider search
• AWS Lambda - collect values from Amazon Lex
• Amazon Athena - provider search and graph visualization
• Amazon Cognito - authentication and access
• Amazon DynamoDB - database
• Amazon EC2 - deploy machine learning models
• Hosting and CI/CD setup using Netlify, Heroku, and GitHub.
• Google Colab notebooks to execute heavy GPU workloads and ML algorithms.
• InVision for developing wireframes
• Figma for creating the prototype
• Slack for internal communications & Google Drive for documents, images, etc.
Machine learning
We collected datasets from various sources such as Kaggle, JAFFE, and IMFDB and trained machine learning models for several tasks: identifying emotions from facial expressions, identifying BMI from face images, identifying emotions from speech, and detecting falls from phone sensors. Cardiovascular disease risk is determined by reviewing cohort studies and results in medical journals. After training the models, we deployed demos of the emotion prediction model, the BMI prediction model, and the cardiovascular disease risk model using the Heroku service.
Challenges we ran into
There is not much documentation for AWS, and we had a hard time debugging the chatbots, Cognito, and EC2. It was also challenging for us to collaborate in AWS.
It is difficult to find quality labeled data for training machine learning models, which in turn affects the accuracy rate. Given that this is a remote hackathon, we were also unable to test connections with wearables. While there is the flexibility to use the app without external sensors, we plan to integrate with multiple wearable devices and platforms in the future.
What's next for miia!
New features
Health data collection
Speech recognition - speech recorded and analyzed for emotions and mood
Phone sensors - monitor activity to encourage exercise and detect falls
Wearable devices/ sensors - measurements including but not limited to blood pressure/ heart-rate/ sleeping pattern/ activity
Activity monitoring
Users can opt-in to allow monitoring of activity in the background. Exercise has been shown to help alleviate symptoms of depression. Since low activity may be one potential symptom of depression, a notification can be sent to senior users to help guide them through simple exercises (yoga, walking). These exercises can also help decrease fall risk.
In-app communication
To facilitate communication between caregivers and senior users, the app supports messages, calls, and video calls.
Seniors can also ask our chatbot miia to initiate a call with their caregivers.
Voice control - elderly users can choose to call or message their caregivers with the help of chatbot
Elderly users can highlight a region or ask the chatbot to send a particular section to their caregivers for clarification
Reminder system for seniors
Visual and sound alerts can be snoozed until the elderly login and complete the health monitoring daily
Alert system
Alert system for identified issues - caregivers can set threshold values according to elderly's condition; red warning symbols and notification pop up when value above/ below normal
Join us!
We are planning to bring the project to the next stage. Shoot us a message if you're interested!
Built With
amazon-athena
amazon-cognito
amazon-dynamodb
amazon-ec2
amazon-lex
amazon-marketplace-web-service
amazon-ses
amazon-web-services
amplify
android
figma
lambda
vue
Try it out
github.com
github.com
master.d3s7eipwj38e3u.amplifyapp.com
www.figma.com
drive.google.com
miiaml.tk | medical intelligence applied | Two-way health management platform for seniors and caregivers | ['Karim Khattaby', 'Deepesh Grover', 'Chloe Chen', 'kevin patel', 'Billy Zeng', 'Rohail Khan', 'Akhilesh Iyer', 'Ava Chan', 'marcos a oliva', 'Li Agnes', 'Alice Tang', 'Megan Thong'] | ['Best in Data and Machine Learning'] | ['amazon-athena', 'amazon-cognito', 'amazon-dynamodb', 'amazon-ec2', 'amazon-lex', 'amazon-marketplace-web-service', 'amazon-ses', 'amazon-web-services', 'amplify', 'android', 'figma', 'lambda', 'vue'] | 2 |
10,147 | https://devpost.com/software/qwe-8a2wtn | Inspiration
The safest place to be in this pandemic would be at home, but going out is always inevitable, whether it's for groceries or other necessities. Going out during this pandemic carries a risk of getting infected with COVID-19. But exactly how much risk does going out pose to your health? With that question in mind, this project aims to give every user a perspective on their exposure level to COVID-19 based on their daily activity of going out.
What it does
The app takes your home location and sets it as a safe zone where COVID exposure levels are 0. Using the data received from the AWS Data Exchange Enigma corona tracker, the app classifies every place in the world as one of three zones. Zone 1 is a green zone where COVID cases are light to almost none (for example, countries with few COVID cases or, in general, an empty space); Zone 2 is an orange zone with orange-level exposure, where COVID cases are moderate; and Zone 3 is a red zone with a heavy concentration of COVID cases, which also includes public spaces.
The app classifies which zone you are in and presents a risk analysis based on the time you spent outside and the zone the app classifies you in. It tracks how long you spent outside in that zone, and when you return home the app gives you an analysis of all the places you've been to, the associated zones, and a risk estimate of contracting COVID-19. It gives you a detailed analysis of your daily activity and your monthly exposure levels to COVID.
This gives users a visual understanding of the risk of contracting COVID, and the app also provides a risk analysis of wearing a mask vs. not wearing one, giving the user a perspective on how their exposure risk changes with their precautions. Lastly, the app presents a set of precautions that must be taken based on your exposure levels during the time you spent outside.
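The risk estimate described above can be pictured as a weighted sum of time spent per zone. A minimal Python sketch (the zone weights and mask factor are illustrative, not the app's actual coefficients):

```python
ZONE_WEIGHTS = {"green": 1, "orange": 3, "red": 6}   # illustrative weights

def exposure_score(visits, mask_factor=1.0):
    """Sum minutes spent in each zone, weighted by that zone's risk.
    mask_factor < 1 models the reduced risk of wearing a mask."""
    raw = sum(minutes * ZONE_WEIGHTS[zone] for zone, minutes in visits)
    return raw * mask_factor

day = [("green", 30), ("red", 15)]
with_mask = exposure_score(day, mask_factor=0.4)
without_mask = exposure_score(day)
```

Comparing the two scores for the same day is exactly the mask vs. no-mask contrast the app visualizes.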
The app also contains a map that wasn't completed to its full potential but was intended to give a zone view of your location's radius, showing exposure levels at different places.
How we built it
We built this app using Flutter as our frontend for presenting our data visualization and analysis, and FastAPI as our backend to compute our data from AWS and the data generated by the user. We used the AWS Data Exchange Enigma corona scraper dataset for analysis of locations, classifying each location into a zone based on population, corona growth rate, and other metrics. Once a location is classified, an algorithm computes your risk exposure level based on the time you spent in that zone. We hosted this application on an Amazon EC2 instance, with storage in S3 and ArangoDB for all data from AWS and the user-generated data. Lastly, we used different APIs for map generation and news collection, and the Python library geopy to translate coordinates into an address the database understands for zone classification.
Challenges we ran into
Classifying a location into a certain zone took a lot of metrics, and in many cases the database didn't contain all the metrics needed, so we had to find other sources to fill in the missing values. The data visualization and analysis were also challenges we had to overcome while building the application.
Accomplishments that we're proud of
We are proud of our data analysis based on the user's daily input and the AWS Data Exchange dataset's insights for backend zone determination and risk assessment. Furthermore, we built a good model to predict COVID exposure and exposure risk levels from real-life tracking.
What we learned
We learned to work with different APIs, connecting them all together, and certainly learned a lot about data modeling and visualization.
What's next for TrackMyCovid
The next steps for TrackMyCovid are creating a map view of the world with green, orange, and red zones, creating more personalized reports based on COVID exposure, and producing more accurate estimates. Furthermore, we plan on developing a map view that pushes a notification when you enter a new zone, giving statistics and precautions for that exact zone.
Built With
amazon-ec2
amazon-web-services
arangodb
aws-data-exchange-enigma-dataset
boto3
covid19.org
fastapi
flutter
geopy
matplotlib
newsapi.org
nginx
numpy
pandas
scikit-learn
Try it out
github.com | TrackMyCovid | A mobile app which gives you a risk analysis of contracting COVID19 based on the places you been to and analyzing the amount of cases recorded and the time spent by you in each place. | ['Rohit Ganti', 'appidi abhinav', 'Arshdeep Singh', 'Abhishek Kumar', 'KrishNa Na'] | ['Honorable Mention', "People's Choice", 'Best AWS Project'] | ['amazon-ec2', 'amazon-web-services', 'arangodb', 'aws-data-exchange-enigma-dataset', 'boto3', 'covid19.org', 'fastapi', 'flutter', 'geopy', 'matplotlib', 'newsapi.org', 'nginx', 'numpy', 'pandas', 'scikit-learn'] | 3 |
10,147 | https://devpost.com/software/freshair | Intro Page
Our Mission and Tips section
Interactive World Map
The gradient bar highlights the countries with the selected color + each country has a link to each of their detail pages.
Ranked countries based on the number of tonnes polluted
Detail page for each country
On hover shows a graph of the pollutant's emissions throughout the years
Shows the the countries' total progression of pollution
Category: Data Visualization
Inspiration
Severe air pollution has been prevalent in our world since the Industrial Revolution. In recent years, however, scientists have revealed that humans started emitting greenhouse gases over 2,000 years ago after finding trapped air within Greenland’s glaciers [1]. After these many years, how has our atmosphere taken in all these harmful pollutants -- and how has air pollution been a major contributor to global warming? Not only is our environment getting damaged, but public health too, in major ways. For example, contact with smog (when fossil fuels react with air) and soot (airborne pollutants) can irritate the eyes and damage the lungs. As well, polycyclic aromatic hydrocarbons (PAHs) in large amounts have been linked to blood/liver problems, eye/lung irritation, and cancer [2].
Inspired by the recent Covid-19 pandemic and the virus's damage to the lungs, we wanted to encourage countries to improve their air quality and focus on air pollution reduction. Although air pollution has gotten significantly better over the years, particularly in the United States thanks to the Clean Air Act, there's still much to improve on. This 1963 act made a huge advancement in the reduction of air pollution and is one of the most influential modern environmental laws.
Even more, this further validates that leading countries like the United States are serious about making a change in regard to air pollution.
Change and improvement will happen in industry if we look back at our past and the data we collect, to define the next steps for our future. Our solution does just that and visualizes air pollution in a unique way, one that has not been done before. Taking inspiration from https://www.worldometers.info/ and air quality visual maps such as https://aqicn.org/map/world/, we created freshAir, a unique air pollution visualizer and tracker.
What it does
Visualizes and ranks the air pollution for countries throughout the years to provide analysis and access to visual trends across the world.
FreshAir’s features include an air pollution index to track the tonnes of pollutants emitted each year per country, a map color-coded with a white-to-black gradient to uniquely represent the pollution, and graphed trends across the years for each pollutant per country. To summarize, our platform shows the progression of air pollution across the years around the world. Unlike other air pollution indexes or maps, which use the AQI (Air Quality Index) to rank and measure the severity of air pollution in real time, we used the number of tonnes emitted to show the progression through time, courtesy of the CRUX OECD - Air Emissions by Source dataset. It's important to not only focus on the now but also the past, so we can learn from our mistakes and our successes to improve how we approach the issue of airborne pollutants in the future. As well, we used Amazon's SageMaker to graph those trends for each of the main pollutants (Carbon Monoxide, Nitrogen Oxides, Non-Methane, PM10/PM25, and Sulfur Oxides) to describe these trends and the history of air pollution.
Through tracking, visualizing, and representing, we believe that we can get a better idea on how one can approach the issue in the future, and what countries need the most work in reducing their waste.
How we built it
The technologies/languages we used include:
Amazon SageMaker: To interpolate the data and create graphs that would show the progression of tonnes of pollution emitted for each pollutant per country across time.
AWS Amplify: To host our react application.
React/Javascript: To complete the front-end of our platform and all the functionalities, including the interactive SVG map.
Python: The language we used for creating the graphs on Amazon Sagemaker.
Some highlighted features:
Color gradient - white to black
For each country, we labeled its air pollution intensity with a color between white and black, representing the colors of smoke and smog. We did this to give a better visual on our map and ranking list, so it's easier to understand compared to the confusing AQI values and unclear labels other pollution maps have. These hex color values were calculated based on the number of tonnes the country emitted that year: the more tonnes emitted, the darker the shade. The visual below shows a clear explanation of how it was done:
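The shade calculation is a linear interpolation of the grey channel between white and black. A small Python sketch of the idea (the max_tonnes normalization is an assumption about how the scaling works, not the site's exact formula):

```python
def pollution_shade(tonnes, max_tonnes):
    """Map tonnes emitted to a grey hex color between white (#ffffff)
    and black (#000000): more tonnes -> darker shade."""
    ratio = min(max(tonnes / max_tonnes, 0.0), 1.0)
    channel = round(255 * (1 - ratio))
    return "#{0:02x}{0:02x}{0:02x}".format(channel)

light = pollution_shade(0, 1_000_000)            # white
dark = pollution_shade(1_000_000, 1_000_000)     # black
mid = pollution_shade(500_000, 1_000_000)        # mid grey
```

Because all three RGB channels share the same value, every country lands on a pure grey, matching the smoke-and-smog palette.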
Using Sagemaker for Interpolation
We created Notebook instances in AWS SageMaker. Using Python 3.6, we imported the CRUX air emissions dataset and wrangled it: we dropped unwanted rows and columns, and with the resulting dataframe we made line plots based on the trends. We used tonnes as the unit and plotted a time-series graph of value vs. year for each country and each pollutant.
For example, this graph shows Australia's air emissions in tonnes versus time:
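Outside SageMaker, the wrangling step boils down to grouping flat records into per-country, per-pollutant series. A plain-Python sketch (the column names here are assumptions, not the actual CRUX schema):

```python
from collections import defaultdict

def series_by_country(rows):
    """Group flat emission records into
    {country: {pollutant: [(year, tonnes), ...]}}, sorted by year,
    ready to feed a line plot."""
    out = defaultdict(lambda: defaultdict(list))
    for r in rows:
        out[r["country"]][r["pollutant"]].append((r["year"], r["tonnes"]))
    for country in out.values():
        for series in country.values():
            series.sort()
    return out

rows = [
    {"country": "Australia", "pollutant": "CO", "year": 2010, "tonnes": 2.9e6},
    {"country": "Australia", "pollutant": "CO", "year": 2008, "tonnes": 3.1e6},
]
series = series_by_country(rows)
```

In the real notebook, pandas' groupby does this in one call; the structure of the output is the same.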
Interactive Map SVG
To be able to highlight the country borders, colors, and hovers, we decided that a sufficient way to do this was with an SVG. The world map is constructed through loops in our React app and each SVG path is inserted separately. When a border color or background is changed, the React state variable causes a reload and the SVG re-creates itself. This allowed us to make seamless transitions while interacting with the map but also maintain a lot of design control. (The map was taken from https://simplemaps.com/.)
Challenges we ran into
Merging SageMaker results with our react application - and the best way to showcase both in our final product.
Finding the best way to display the map while having control over the display, colors, and highlights.
Changing the year variable throughout all navigation links and through the state. It was challenging to make the updates seamless.
Deploying the site on AWS Amplify because of our large files causing ‘Javascript heap out of memory’ errors.
Accomplishments that we're proud of
Overall, we’re glad we participated in this challenge and created a fresh idea that is different from similar counterparts (air pollution maps). We created a simple interface that’s easy to use and displayed the trends created with SageMaker in unison with our site. We used the CRUX dataset to its fullest potential since we incorporated almost all the data from it. Finally, this project is rather large for only a 2 person team, so we’re definitely proud of the work we have done in quantity and quality.
What we learned
During the development of this project, we used a couple of technologies that were new to us. Amazon SageMaker and AWS Amplify were new, so we learned how to use them and integrated them with our React app. We also got more familiar with React and its differences from plain JavaScript (especially with SVGs). We learned how to resolve merge conflicts and to make sophisticated decisions regarding the design, idea, and functionality of our project.
What's next for freshAir
Access to the full database, so we can report data on all the countries.
Calculating the percentage each sector contributes to the pollution and displaying it in a simple way on our country detail page.
Machine learning predictions of future air pollution.
More information on the ranking table such as the country’s improvement from the previous year and the projected number of tonnes the country will emit the following year.
Zoom in feature on the map- to have a better view of the air pollution per country.
Graph the trends worldwide, not just per country (easier to see the progression of pollution).
Sources
[1]
https://www.smithsonianmag.com/history/air-pollution-has-been-a-problem-since-the-days-of-ancient-rome-3950678/#:~:text=Before%20the%20Industrial%20Revolution%2C%20our,at%20least%202%2C000%20years%20ago
.
[2]
https://www.nrdc.org/stories/air-pollution-everything-you-need-know#sec3
Data
OECD - Air Emissions by Source -CRUX Informatics
https://aws.amazon.com/marketplace/pp/prodview-dfw7buzlknvzw?qid=1595129067836&sr=0-1&ref_=srh_res_product_title#overview
Built With
amazon-web-services
css3
html5
javascript
python
react
sagemaker
Try it out
github.com
master.d19vsle5yfokw7.amplifyapp.com | freshAir | Track air pollution across the world and see its impact instantly. How can we keep our air FRESH? | ['Nicole Streltsov', 'Siddharth M'] | ['Honorable Mention'] | ['amazon-web-services', 'css3', 'html5', 'javascript', 'python', 'react', 'sagemaker'] | 4 |
10,147 | https://devpost.com/software/fintech-insurance-claims-analysis | Inspiration :
Post-COVID, the economic slowdown has impacted the auto insurance sector, particularly uninsured claimants. Below are the key findings of a study done in the USA.
32+ million uninsured vehicles on the road
35x spike in subrogation workload
15% missed opportunities of subrogation (deteriorating)
32% of recoverable paid claims are lost
What it does :
With the help of AWS Data Exchange, we augmented our claims data to uncover important aspects of the uninsured or underinsured claimant, and used variables such as income level, weather conditions, and sales preference to build a machine learning model of his or her likelihood to pay the subrogation.
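A sketch of such a propensity model on synthetic stand-in data. The features, label logic, and model choice here are illustrative assumptions, not the submitted model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2_000
income = rng.normal(50_000, 15_000, n)        # income level from tax data
bad_weather = rng.integers(0, 2, n)           # weather at the time of loss
online_buyer = rng.integers(0, 2, n)          # sales-preference flag

# Toy label: higher income, fair weather, and online preference raise the
# chance the claimant repays the subrogation. Entirely made up.
z = (income - 50_000) / 20_000 - bad_weather + 0.3 * online_buyer
will_pay = rng.random(n) < 1 / (1 + np.exp(-z))

X = np.column_stack([income, bad_weather, online_buyer])
X_tr, X_te, y_tr, y_te = train_test_split(X, will_pay, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"holdout accuracy: {clf.score(X_te, y_te):.2f}")
```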
How I built it :
We researched and finalized the below datasets to augment our solution
Claims First Party Data (Synthetic)
Epsilon – Insurance Consumer Data Insights (AWS Data Exchange)
Enigma – Individual Tax by Zip Code (AWS Data Exchange)
Weather Source – OnPoint Historical Weather Data ( AWS Data Exchange)
Phase two – Implementation of the Solution
Design data pipelines to feed models
Perform ML experiments to train models
Develop the UI & Visualizations
Integrate & Deploy the model.
Challenges I ran into :
Bias in the data set of claims.
Performance issue while processing and integration of datasets with size > 300 MB.
Hosting of the application using AWS LightSail
Accomplishments that I'm proud of :
Completed this hackathon project with demanding timelines along with full time job.
Achieved high accuracy of >98% and recall > 94% for model.
Integrated the model and UI screens built in Python Flask and deployed them through Amazon Lightsail, which we had never done before.
What I learned :
Understand various datasets AWS data exchange offers and its potential use cases.
Although our setup is not optimal, we learnt the features of different AWS components in an end-to-end use case.
What's next for Claims ML based Subrogation Recovery Prediction :
We want to integrate various data sources and research possible ways to improve the model for real-life scenarios.
Some of them at the top of our mind are
Social Media posts (based on collision date time, location and vehicle description).
Google Street View (based on collision date, time & location)
Digital Identity Mapping for the Claimant
Built With
amazon-web-services
api
natural-language-processing
python
sagemaker
semantic-engines-semantic
Try it out
3.89.145.246 | Claims ML based Subrogation Recovery Prediction | Get little closer to the reality !!! | ['Priyanka Balakumar', 'Manisha Kumari', 'Sumeet Kumar', 'Vedant Vasishtha', 'Giridhar Mynampati'] | ['Large Organization'] | ['amazon-web-services', 'api', 'natural-language-processing', 'python', 'sagemaker', 'semantic-engines-semantic'] | 5 |
10,147 | https://devpost.com/software/hospitaldataset-visualization | Inspiration
For us it was more a matter of thinking, as we wanted to work on a solution for visualizing data. We used Kepler.gl to visualize the dataset.
What it does
Visualizes the hospitals data in 3D using Kepler.gl. We integrated the Kepler.gl library to get valuable insights into the data.
The timeline approach helps us see how hospitals evolve over a period of time. The dataset description reads: "This dataset contains locations and resources of Hospitals for 50 US states, Washington D.C., US territories of Puerto Rico, Guam, American Samoa, Northern Mariana Islands, Palau, and Virgin Islands".
How I built it
We acquired the data from aws data exchange
https://console.aws.amazon.com/dataexchange/home?region=us-east-1#/subscriptions/prod-fmjkj5jshol2k
We used AWS SageMaker to set up a Jupyter notebook instance.
We used Python to code it.
There were issues while loading the Kepler.gl map, but we resolved them.
We needed to open a terminal from the Jupyter notebook options.
Then, in the terminal, install:
pip install keplergl
Then run the following commands:
jupyter nbextension install --py --sys-prefix keplergl
jupyter nbextension enable --py --sys-prefix keplergl
We used this resolution
https://github.com/keplergl/kepler.gl/issues/583
We were able to get the desired results
Challenges I ran into
Discussed above
Accomplishments that I'm proud of
The Visualization
What I learned
Work
What's next for HospitalDataSet Visualization
Built With
ec2
keplergl
sagemaker
Try it out
awsdatavisualizerhospitality.notebook.us-east-1.sagemaker.aws | HospitalDataSet Visualization | Data Visualization done in perfect manner | ['Abhishek Nandy', 'Anubhav Singh', 'Kazi Haque'] | [] | ['ec2', 'keplergl', 'sagemaker'] | 6 |
10,147 | https://devpost.com/software/biggle | Resources for COVID-19
Inspiration
The coronavirus pandemic has impacted the lives of millions of people and families. It is essential to provide resources to assist people during this crisis both healthcare and non-healthcare.
Also, to lower healthcare costs, improve outcomes and promote self-care through proactive remote patient monitoring and user engagement.
What it does
SolutionsReache provides connected health solutions and is committed to helping you manage the challenges presented by COVID-19 through our digital marketplace of products: free and paid COVID-19 apps and solutions for government and public health with communities and residents, employers and employees, and hospitals and patients, as well as a PPE supply chain for medical sites including labs and hospitals.
How I built it
We used AWS Data Exchange to subscribe to data, loaded the data into S3 buckets, and used Glue crawlers and mappings to load the data into Amazon Redshift. Tableau has connectors for Redshift and RDS to bring the data into Tableau for analysis.
Challenges I ran into
The Tableau coronavirus (COVID-19) and Rearc datasets on AWS Data Exchange contained cumulative data, which required some data discovery, auditing and filtering to get the data into the right format.
Accomplishments that I'm proud of
Defining an end-to-end solution architecture to connect a Vue.js web app to subscribed data on AWS Data Exchange, and proving it works by creating apps and other solutions around the architecture.
What I learned
AWS Data Exchange and AWS Glue.
What's next for SolutionsReache
Implement the tasks on our roadmap including dashboards for hospitalizations, testing, and hospital capacity.
Built With
amazon
amazon-web-services
android
data
exchchange
glue
javascript
python
s3
tableau
vue
vuetify
Try it out
gitlab.com | SolutionsReache | Digital Health Solutions for Connected Health and COVID-19 Pandemic | [] | [] | ['amazon', 'amazon-web-services', 'android', 'data', 'exchchange', 'glue', 'javascript', 'python', 's3', 'tableau', 'vue', 'vuetify'] | 7 |
10,147 | https://devpost.com/software/covid-19-policy-decision-helper | equations
equations2
modified SEIR model
Inspiration
As the COVID-19 pandemic spreads out all over the world, we see different strategies adopted by different countries and states, and (maybe therefore), we see different characteristics and trends in the transmission of the virus in different regions. There are lots of concerns about the future of the pandemic, and there are lots of debates about policymaking in many countries and regions.
We think that it will be interesting to model the dynamics of viral transmission and to visualize the future trend of COVID-19. Then by setting a tolerant threshold for the future mortality rate, we can find the best combination of parameters using optimization methods. Furthermore, we can model the relationship between the transmission parameters and environmental factors such as the control policy, hospital beds, mobility, population, and so on. Once it is modeled, the effects of these factors on the transmission process can be explained quantitatively, and the optimal set of parameters obtained above will correspond to a set of control policies with the least cost, that is, the most relaxed policy acceptable.
What it does
Viral transmission model.
We built a dynamic model for viral transmission using differential equations. The state-level time-series data is used in the fitting of the model. After choosing a state of interest and a time period, the transmission parameters will be optimized and outputted, and a predicted curve will show the trend of the pandemic assuming all conditions stay the same in the future.
Finding the best set of parameters.
With a tolerant threshold of mortality rate / infected rate in a given future period set, our algorithm can find the best set of parameters using the optimization methods. That is, the most relaxed conditions while keeping the mortality rate / infected rate under control for a given time period. In real policymaking, dynamic optimization can be of great help since it can continuously correct the model and give the optimal parameters based on the real-time data.
Modeling the relationship between the transmission parameters and environmental factors.
We aim at obtaining models that are interpretive as well as predictive; in other words, we are hoping to find models that are simple, accessible and easy to understand, so that people can gain some insight into what is significant to the way a pandemic develops; but at the same time, we are also searching, among those explainable models, for the ones most helpful for predicting transmission parameters.
How we built it
Part O: Data
Data source from
AWS Data Exchange
:
Global Coronavirus (COVID-19) Data (Corona Data Scraper)
provided by Enigma
COVID-19 Prediction Models Counties & Hospitals (Yu Group (UC Berkeley))
provided by Rearc
Complementary sources:
Corona Data Scraper page
Part I: SEIR infection model
After preprocessing, we fetched the state-level time-series data of cases, deaths, recovered, hospitalized, date, and population.
We built a viral transmission model based on the classical SEIR model with some modifications.
We Assume...
Susceptible (S): healthy people, will be infected and turn into E after close contact with E or Q.
Exposed (E): infected but have no symptoms yet, infectious with a rate of $\lambda$. E will turn into I after the virus incubation period, which is 14 days on average. So we assume $\sigma = 1/14$, dE/dt (t) = dI/dt (t+14).
Infectious (I): infected and have symptoms. We will take the data of test_positive or cases_reported as the data of I. The severe cases will be hospitalized (H), the mild cases will be in self-quarantine (Q). I may recover or die after some time.
Self Quarantine (Q): have symptoms, may still have some contact with others, thus infectious with a different rate of $c\lambda$ ($0 \le c \le 1$). We also assume $Q = kI$, where $k = 1 - avg(\frac{\Delta hospitalized}{\Delta test_pos}) $
Hospitalized (H): have symptoms, kept in hospitals, assume no contact with S.
Recovered (R): recovered and immune, may turn into S again (immunity lost or virus not cleared)
Dead (X): dead unfortunately :(
Therefore, we have a set of differential equations to describe this process:
Apply to our datasets, we have:
Running the model for each state for every 15-day period with appropriate data, we got the best parameters fitting the models for each state and time period.
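The modified SEIR system described above can be sketched with SciPy's ODE integrator. For brevity this sketch folds Q and H into a single infectious pool via the factor k, and every parameter value below is illustrative, not the fitted one:

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative parameter values only, not fitted to any state.
lam, c, sigma, k, mu, omega, alpha = 0.3, 0.5, 1 / 14, 0.8, 0.07, 0.01, 0.001

def deriv(y, t, N):
    S, E, I, R, X = y
    Q = k * I                                # mild cases in self-quarantine
    new_exposed = lam * S * (E + c * Q) / N  # infections caused by E and Q
    dS = -new_exposed + alpha * R            # R may lose immunity, return to S
    dE = new_exposed - sigma * E             # E turns symptomatic after ~14 days
    dI = sigma * E - (mu + omega) * I        # I either recovers or dies
    dR = mu * I - alpha * R
    dX = omega * I
    return dS, dE, dI, dR, dX

N = 1_000_000
y0 = (N - 100, 100, 0, 0, 0)                 # start with 100 exposed people
t = np.linspace(0, 180, 181)
S, E, I, R, X = odeint(deriv, y0, t, args=(N,)).T
print(f"peak infectious: {I.max():.0f} on day {int(I.argmax())}")
```

The derivatives sum to zero, so the population is conserved across compartments, which is a quick sanity check on any SEIR-style implementation.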
Part II: backward optimization
Consider the physical meaning of all the parameters in the above SEIR model:
$\lambda$, $c\lambda$: infection rate, which should have a direct relationship with the strength of the control policy (how many days should people in self-quarantine stay at home, how strict should the quarantine be, how should social distancing be enforced, whether or not people are required to wear a mask, etc).
$\sigma$: we choose $\sigma = 1/14$ because the incubation period is 14 days on average.
$k$: the proportion of positive cases that are not hospitalized, which should have a relationship with the current medical resources such as the number of hospital beds.
$\mu, \omega$: recover rate and death rate, which should have a relationship with the current medical resources and the health condition of the population.
$\alpha$: immunity lost rate, it depends on the characteristic of the virus and sometimes relates to the testing methods.
Among all above, an important limit to the control of the pandemic is medical resources. In other words, $k, \mu, \omega$ depends on the temporal situations of each states or countries. Another limit is the inherent feature of the COVID-19 virus itself, i.e. $\sigma$ and $\alpha$.
However, the policy maker can decide on the control policy, which directly affects $\lambda$ and $c\lambda$. We know that the control policy is like a double-edged sword: if it is too strict, it would have a bad impact on the economy and people's social life; but if it is too relaxed, the pandemic will soon lose control, causing even more severe consequences.
Therefore, our solution is trying to find the best set of transmission parameters $\lambda$ and $c$ using optimization methods.
Given a state of interest and a start date, the algorithm will fetch for the pandemic data of that region at that time. It will then take the original parameters as the first guess, and simulate the trend assuming all conditions stay the same in future.
With the control term (death / case) and the control factor (proportion of population) specified, the optimization algorithm will run to find the best set of parameters needed to satisfy the requirement. And by controlling the transmission parameters, the trend can also be visualized.
Ideally, this algorithm can be run dynamically using the real time data, so that the policy can be adjusted timely and effectively.
By controlling the transmission parameters, we can see some completely different curves, which means that making a wise policy decision may significantly affect people's future.
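The search for the most relaxed acceptable transmission rate can be illustrated with a toy model plus bisection. The simplified susceptible/infectious/dead dynamics and every number here are assumptions for demonstration only, not the project's optimizer:

```python
def final_deaths(lam, days=180, N=1_000_000, mu=0.07, omega=0.01):
    """Euler-step a toy susceptible/infectious/dead model; return total deaths."""
    S, I, X = N - 100.0, 100.0, 0.0
    for _ in range(days):
        new_inf = lam * S * I / N
        X += omega * I                  # deaths this step from current infectious
        S -= new_inf
        I += new_inf - (mu + omega) * I
    return X

def most_relaxed_lambda(threshold, lo=0.0, hi=1.0, iters=50):
    """Largest infection rate whose simulated deaths stay under the threshold.

    final_deaths is increasing in lam for this toy model, so bisection works.
    """
    for _ in range(iters):
        mid = (lo + hi) / 2
        if final_deaths(mid) <= threshold:
            lo = mid                    # still under control: can relax further
        else:
            hi = mid
    return lo

best = most_relaxed_lambda(threshold=5_000)
print(f"most relaxed acceptable lambda is about {best:.3f}")
```

Run dynamically on real-time data, the same loop would keep correcting the threshold search as conditions change, which is the idea behind the policy helper.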
Part III: environmental factors modeling
Finally, we attempted to model the relationships between the SEIR model parameters and a variety of social/environmental factors, including demographic, medical and policy factors. We aim at obtaining models that are interpretive as well as predictive; in other words, we are hoping to find models that are simple, accessible and easy to understand, so that people can gain some insight into what is significant to the way a pandemic develops, but at the same time, we are also searching for models among those explainable models that are most helpful for prediction making.
With such goals in mind, we engaged a relatively small number of variables in our study--variables that seem most significant to us intuitively, from the most accessible open data source.
The variables engaged in the study are the following:
SEIR model parameters:
k
: the proportion of positive cases that are not hospitalized
sigma
: we choose
sigma
= 1/14 because the incubation period is 14 days on average.
lamda
: infection rate of E (exposed)
c
: a coefficient measuring the effectiveness of self-quarantine,
c*lamda
is the infection rate of Q (self-quarantine)
alpha
: immunity lost rate
omega
: death rate
miu
: recover rate
Geographic factors (state-level data, obtained by taking averages of county-level data):
POP_LATITUDE
: latitute of population center
POP_LONGITUDE
: latitute of population center
Demographic factors (state-level data, obtained by taking averages of county-level data):
PopulationEstimate2018
: estimated total population in 2018
PopTotalMale2017
: total population of male in 2017
PopulationEstimate_above65_2017
: total population above 65 years of age in 2017
PopulationDensityperSqMile2010
: population density per square mile in 2010
DiabetesPercentage
: estimated age-adjusted percentage of diagnosed diabetes in 2016
Smokers_Percentage
: estimated percentage of adult smokers in 2017
HeartDiseaseMortality
: estimated mortality rate per 100,000 from all heart diseases
StrokeMortality
: estimated mortality rate per 100,000 from all strokes
Medical resouce (state-level data, obtained by taking averages of county-level data):
Hospitals
ICU_beds
HospParticipatinginNetwork2017
: number of hospitals participating in network in 2017
Current situation:
cases
deaths
recovered
days
: days since the first day of SEIR modeling
State policy (released date)
stay_at_home
above_50_gatherings
above_500_gatherings
restaurant_dine_in
entertainment_gym
The
modeling methods
we applied include the following:
Data cleaning as necessary to address observations with missing or extreme values.
Multiple linear regression
ANOVA
Interaction
Residual diagnostics
Transformations
Polynomial regression
Stepwise model selection (AIC & BIC)
Variable selection
Test/train splitting
The resulting models are listed in the "
Result
" section on the part-3 html page.
Challenges we ran into
Searching for meaningful and usable datasets as well as extracting and cleaning information of interests had been a greater pain for us than we had thought.
Also, in part 3, we should have been able to at least get some clue of the significance of the state policies as pandemic predictors. However, due to poor data quality, linear dependence occurs in the data columns involving COVID-19 policies, so the relationship we questioned cannot be calculated for now.
Accomplishments that we're proud of
We modified the classical SEIR model to fit the COVID-19 pandemic by considering the people in "quarantine" and making use of the hospitalized data. To our surprise, simple as our model seems, it fits the real data pretty well.
Some models we obtained in part 3, though simple, perform quite well regarding the model diagnostics and prediction-making metrics, implying that those models meet our goal of being interpretive as well as predictive.
We did find out some really interesting relationships between the development of a pandemic and the social/environmental conditions of a state. Simple models tell a big story. The models that engage geographic and demographic predictors as significant factors should raise our awareness of the importance of geographic and demographic factors in decision making.
What we learned
The crucial role that cloud computing plays in data analysis.
Both partners of our team are undergraduates in non-CS majors, and this is our first time touching AWS or any other cloud service system. It did take us a while to figure out where to incorporate our tasks into AWS, but soon we saw the great potentials and capability of AWS.
A slight change in early-stage transmission rate could lead to a completely different ending.
In our simulations controlling the transmission parameters, we found that a tiny change in the transmission parameters could lead to a radically different track of pandemic development in the end. This reveals the importance of early-stage policy making to control the transmission rate, even if the change in transmission rate seems negligible.
Different models and theories of the relationships are possible.
As we should know, in multiple-variable modeling there is not necessarily one, singular correct answer/model, although certainly some methods and models are more useful and would perform better than others depending on the data we choose. The same applies to this project. In part 3, we collected a variety of models corresponding to each SEIR parameter, which perform similarly but are sometimes different in a radical way. For example, we came across two models that reach approximately the same error level when predicting a SEIR model parameter: one engages the policy-related variable as significant, while the other engages more demographic factors as significant but excludes any policy-related factors at all. Apparently, the two models tell different stories: the former implies that how early we impose an intervening policy affects the way the pandemic develops, while the latter says more of the outcome is actually predicted by the already-set geographic and demographic variables.
Geographic and demographic predictors are more useful and more important than we used to think.
Surprisingly, variables such as
total population
--instead of
population density
--and
lattitude/longitude
appear statistically significant as predictors in the models predicting
transmission rate
of the virus, while variables that seems intuitively important, such as
population density
,
medical resource
and
policy information
, did not seem to help as much. At first glance this does not seem to make sense, and we guess that this could mostly be because of the poor quality of the data in some columns, especially columns relating to the state policies. However, this still reminds us that some environmental data could be unexpectedly important in the development of a pandemic, and policy makers should be aware of that.
What's next for Covid-19 Policy Decision Helper
As mentioned in the beginning, the ultimate goal of this project is to solve for a set, or a range, of best state-policy-related parameters conditioned by the social/environmental factors of a state. In other words, the mission is to
recommend the best policy decisions
.
We have modeled the relationship between the transmission parameters and the environmental factors. Given the optimal parameters obtained from part 2, we should at last be able to determine the corresponding optimal policy and the values of environmental factors. It remains for us to expand the dataset, especially to obtain more data describing the state policies, and eventually develop a formal mathematical model that outputs indicators as decision guidelines.
Built With
amazon-data-exchange
ami
ec2
html
markdown
python
r
rds
Try it out
xinyi-lai.github.io
github.com | Covid-19 Policy Decision Helper | Predictive modeling of Covid-19 development that helps policy makers make decisions best controlling mortality with least costs. | ['Yulin Li', 'Xinyi Lai'] | [] | ['amazon-data-exchange', 'ami', 'ec2', 'html', 'markdown', 'python', 'r', 'rds'] | 8 |
10,147 | https://devpost.com/software/analysis-of-air-quality-post-covid-19 | website_main
website1
website3
website2
Inspiration
The COVID-19 period saw lockdowns implemented in many regions. The motivation of the study was to see how air quality was impacted post COVID-19, with several countries imposing strict lockdown policies.
Could this be a future strategy for a cleaner climate?
What it does
It shows the live air quality from different locations and analyses of the data we have from the last few years. The result shows which cities improved and which didn't.
How we built it
The application runs on Python Flask hosted on AWS Elastic Beanstalk. The datasets are retrieved from
https://aqicn.org/
. The maps on the webpage are generated using the Google Maps API and Leaflet.js, and other data comes from RDS.
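Pulling live readings from aqicn.org comes down to parsing its feed JSON. Below is a sketch of that parsing step against a trimmed example payload shaped like the documented response (the field values and city are made up, and the API token handling is omitted):

```python
# The AQICN feed endpoint looks like https://api.waqi.info/feed/{city}/?token=...
# and returns JSON; this sketch only covers turning that JSON into (city, aqi).

sample_payload = {
    "status": "ok",
    "data": {
        "aqi": 42,
        "city": {"name": "Shanghai"},
        "time": {"s": "2020-07-01 12:00:00"},
    },
}

def extract_aqi(payload):
    """Return (city, aqi) from an AQICN feed response, or None on error."""
    if payload.get("status") != "ok":
        return None
    data = payload["data"]
    return data["city"]["name"], data["aqi"]

print(extract_aqi(sample_payload))
```

A Flask route would fetch the payload with `requests.get(...)` and pass the result to a function like this before rendering the map markers.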
Challenges we ran into
Insufficient datasets for our projects. It was critical for us to find the best dataset which would help us to finish the project as we wanted. We found many free dataset providers on AWS Data Exchange but most of them did not satisfy our requirements.
Accomplishments that we're proud of
Successfully completed an analysis of the air quality in various cities across the world.
What we learned
Data Analysis on Open Datasets, Building an application on python flask, Deployment on AWS, Front-end Design, Usage of multiple API's in the project.
What's next for Analysis of Air Quality post Covid-19
We will be conducting more analysis retrieving different pollutants contributing to the air quality and adding more cities in the future.
Built With
amazon-web-services
api
aqicn
beanstalk
flask
google-maps
javascript
jquery
leaflet.js
python
rds
Try it out
github.com
hackathon-env.eba-2pxa3xum.us-east-1.elasticbeanstalk.com
github.com | Analysis of Air Quality post Covid-19 | The motivation of the study was to see how air quality is impacted post Covid19 times with several countries imposing strict lockdown policies. | ['kyatham omkar', 'SIDDHI NAIR', 'Pragnyashree Maitreyee', 'Nisarg Pawar', 'Upasana Dhar'] | [] | ['amazon-web-services', 'api', 'aqicn', 'beanstalk', 'flask', 'google-maps', 'javascript', 'jquery', 'leaflet.js', 'python', 'rds'] | 9 |
10,147 | https://devpost.com/software/predictor | Website Main
Messenger Query Conversation
IRS Data
IRS Data California
IRS Data California
IRS Data California 2
COVID-19 Data
COVID-19 Data California
COVID-19 Data California 2
Inspiration
Querying data isn’t always easy. You need to connect to a database and write specific queries to find the required data. DataZon changes this traditional way and gets rid of long syntax writing. You can simply ask a chatbot any question by typing or speaking, and you will get the data in seconds, even from your phone. You don’t need special software or connectors any more: just type or say a normal question and get a formatted answer. Also, you can analyze data graphs remotely using the embedded QuickSight dashboards, which you can also access from your phone. It’s simple and provides an easy way to compare the data over time.
What it does
You can use a chatbot to query the data. Also, we provide interactive visual graphs to help analyze and compare the data trends over time. You can ask the chatbot questions like:
Q1: How many returns were filed in NY 10036.
Q2: How many corona cases in Alaska?
Q3: How many COVID cases in Italy?
Q4: I need to analyze the COVID-19 data?
How I built it
First, I subscribed to and requested the IRS & COVID-19 data from AWS Data Exchange, and exported the data to AWS S3. Then, I created an Aurora database with MySQL connectivity and created the necessary tables. Moreover, I inserted the data from S3 into the tables. Now the data can be accessed from AWS QuickSight & AWS Lambda.
**AWS Lex**: I created intents and defined utterances with slots for each intent. Then, I used AWS Lambda to connect to AWS RDS and query the data from Aurora via Lex. When the user asks a question, Lambda takes the intent, translates it to a query, then sends the response back to the user.
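The Lambda fulfilment side can be sketched as two pure steps: translating a recognized intent into SQL, and shaping the reply the way Lex expects. The intent names, slot names, and table names below are hypothetical, and the actual database call is left out so the translation stands alone:

```python
def build_query(intent, slots):
    """Translate a recognised intent plus slots into a parameterised SQL query."""
    if intent == "IrsReturns":
        return ("SELECT SUM(returns) FROM irs WHERE zipcode = %s",
                (slots["zipcode"],))
    if intent == "CovidCases":
        return ("SELECT SUM(cases) FROM covid WHERE state = %s",
                (slots["state"],))
    raise ValueError(f"unhandled intent: {intent}")

def close(message):
    """Shape the answer the way a Lex (V1) fulfilment response expects."""
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }

sql, params = build_query("CovidCases", {"state": "Alaska"})
print(sql, params)
```

In the real handler, `build_query`'s output would be executed against Aurora (for example via a MySQL client library) and the result passed to `close(...)` for Lex to speak back.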
**AWS QuickSight**: I created an analysis for each dataset and used the most appropriate visuals for the data. Then, I filtered the necessary visuals and connected all the visuals together. Finally, I exported the dashboards and shared them with a group of users.
**AWS SDK**: I used the SDK to register users and get the embed URL when the user requests the link. This was done in a Node.js web app using Express & express-session.
What's next for DataZon
Adding anomaly detection or SageMaker to predict future events in some datasets. Also, using Amazon Pay to charge per dashboard session. Allow the user to upload his data, and create automated intents and utterances with the respective SQL syntax.
Built With
amazon-pay
aws-lambda
aws-lex
aws-quicksights
aws-rds
aws-sdk
javascript
node.js
sagemaker
Try it out
github.com
github.com
mynameuuy.com
www.m.me | DataZon | In DataZon, you can Query, Analyze, and Visualize the Data in a new way. We have awesome Chatbot that can help you query the Data easily. Also, we have nice visual graphs to help analyze the data. | ['Khaled Abouseada'] | [] | ['amazon-pay', 'aws-lambda', 'aws-lex', 'aws-quicksights', 'aws-rds', 'aws-sdk', 'javascript', 'node.js', 'sagemaker'] | 10 |
10,147 | https://devpost.com/software/data-exchange-on-the-go | The diagram of the process
yaml ddl
Inspiration
I have been an ETL developer, and I can tell you that creating ETL jobs is a repetitive task that requires a lot of testing.
What it does
We essentially start with a YAML file that defines all the downstream processes that create the data extraction/request from Data Exchange. Then the data is piped through a Kinesis stream to Amazon Personalize (assuming we are running a campaign or A/B test on a recommendation system). All of this predefined in a single YAML file!
How I built it
I used the Boto API from a Python notebook; the code will be put into Lambda and a client script (the Kinesis part).
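One way to sketch the Boto side is building the `create_job` request for an assets-to-S3 export out of the parsed YAML spec. The dict literal below stands in for the loaded YAML file, and every ID, bucket, and prefix is invented for illustration:

```python
# spec stands in for yaml.safe_load(open("pipeline.yaml")).
spec = {
    "dataset_id": "dataset-1234",
    "revision_id": "rev-5678",
    "destination": {"bucket": "my-ingest-bucket", "prefix": "adx/"},
    "assets": ["asset-1", "asset-2"],
}

def export_job_request(spec):
    """Build the create_job body for an EXPORT_ASSETS_TO_S3 Data Exchange job."""
    dest = spec["destination"]
    return {
        "Type": "EXPORT_ASSETS_TO_S3",
        "Details": {
            "ExportAssetsToS3": {
                "DataSetId": spec["dataset_id"],
                "RevisionId": spec["revision_id"],
                "AssetDestinations": [
                    {"AssetId": a, "Bucket": dest["bucket"],
                     "Key": dest["prefix"] + a}
                    for a in spec["assets"]
                ],
            }
        },
    }

request = export_job_request(spec)
# boto3.client("dataexchange").create_job(**request) would submit the export.
print(request["Details"]["ExportAssetsToS3"]["AssetDestinations"][0]["Key"])
```

Keeping the request builder separate from the boto3 call makes the YAML-to-job translation easy to test without touching AWS.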
Challenges I ran into
Getting the dataset from Data Exchange as 'Entitled' is challenging via the API.
Accomplishments that I'm proud of
I managed to learn that I can basically take any public dataset and turn it into an Amazon Personalize dataset.
What I learned
The Boto API (very important).
What's next for Data Exchange On The Go
A terraform/cloudformation script development.
Built With
data-exchange
kinesis
lambda
python | Data Exchange On The Go | Imaging that you can get public dataset stream to your recommendation system using a single yaml file. My app will do the automation and creation of the recommendation based on your criteria. | ['Nicholas Lu'] | [] | ['data-exchange', 'kinesis', 'lambda', 'python'] | 11 |
10,147 | https://devpost.com/software/aws-data-exchange-challenge-kenneth-zhang-project | AWS Data Exchange Challenge - Kenneth Zhang Project
Kenneth Zhang's AWS Data Exchange Project
General Overview of Mobility Data/Reports
Due to the COVID-19 pandemic, mobility data and its trends continue to shift and change.
Using this data, we can identify change-points in mobility data in order to identify different time periods or instances
where mobility was increased in the population, in specific industries, or other areas.
Mobility data is often collected as a series of points with latitude and longitude collected at intervals by devices such as smartphones,
shared micromobility vehicles, on-board vehicle computers, or app-based navigation systems.
Mobility data often has a temporal element, assigning time as well as location to each point. Depending on the device used to capture the data,
other characteristics, such as the speed of travel, or who is making the trip, can be connected to each individual latitude/longitude point.
Throughout this document, mobility data is often referred to as “geospatial trip data”, “trip data”, “geospatial mobility data”, “geospatial data”, and “bread-crumb”.
• GPS Trace/ Breadcrumb Trail - The product of recording information about a trip by using
a series of points with latitude and longitude collected at regular intervals by devices such as
smartphones, bicycles, scooters, navigation systems, and vehicles. When mapped, a breadcrumb
trail can show the path of travel of an individual and/or vehicle. GPS trace data may or may not
have temporal data associated with each point.
• Individual Trip Records - For shared micromobility, ride-hail trips, and trips recorded in app-based
navigation systems, a GPS trace record is created for each unique trip. This record typically
includes start/end locations and times, route, and may include information tying that trip to a
specific user account. Individual trip records are sometimes referred to colloquially as “raw” or
“unprocessed” data. “Anonymized” trip data is that which has individual identifiers removed.
• Location Telemetry data - Any data that records the movements and sensor readings from a
vehicle including location, direction, speed, brake/throttle position, etc. Fleet operators may
use vehicle telemetry data to determine instances of dangerous driving such as harsh-braking
or excessive speeding. Some shared micromobility providers report that they can use scooter
telemetry data to determine if a scooter has been left in an upright vs tipped over the position.
• Data Protection - Mechanisms for guarding against unauthorized access, including practices for
preventing unauthorized entities from accessing data. This also includes the methods used for diminishing the
usefulness of stolen data should a system be breached.
• Verifiable Data Audit - Tools or practices that automatically and routinely capture, log, and
report activity in a data set in order to ensure those accessing sensitive datasets are acting in an
approved manner.
Using the FBProphet for Change-point Detection in Google Mobility Data/Reports of Different Countries
In this project, I will use the mobility data provided by Google Inc.
I specifically chose the United Arab Emirates (UAE) for change-point detection and analysis as it was a country with one of the smallest amounts of missing values in each of the recorded data columns. Although FBProphet is robust against missing data points, it is beneficial to have as many actual recorded data points as possible.
Since Prophet forecasts time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, with the addition of holiday effects, it is useful for us to use more data points.
Simply put, the FBProphet uses time as a regressor and fits several similar linear and non-linear functions of time as components.
The first function fits the trend and models non-periodic changes. The second function fits seasonality with periodic changes present. The third fits ties in effects
of holidays on potentially irregular schedules for greater than or equal to one day. The last function covers idiosyncratic changes not accommodated by the model. The prophet is essentially “framing the forecasting problem as a curve-fitting exercise” rather than looking explicitly at the time-based dependence of each observation.
I observed the change-points of the data depending on the yearly, monthly, and daily seasonality. Since daily and yearly seasonality is able to give us a general overview and forecast of what mobility data for specific countries may look like, we can use it to do a general forecast. In order to get a more detailed and consistent change-point analysis, data such as COVID-19 Mobility Data which is taken in weekly and monthly intervals should be analyzed weekly or monthly. Monthly and weekly seasonalities were also added manually using the
.add_seasonality
function in python.
from fbprophet import Prophet
from fbprophet.plot import add_changepoints_to_plot

pro_change = Prophet(changepoint_range = 0.9)
pro_change.add_seasonality(period = 30.5, name = 'monthly', fourier_order = 5)
forecast = pro_change.fit(train_dataset).predict(future)
fig = pro_change.plot(forecast);
a = add_changepoints_to_plot(fig.gca(), pro_change, forecast)
The Python code above uses the
.add_seasonality
function to add the monthly seasonality to the model.
I analyzed two columns of data, one for 'retail_and_recreation_percent_change_from_baseline' and the other for 'grocery_and_pharmacy_percent_change_from_baseline'.
The Jupyter Notebooks for the according columns were uploaded separately.
Building a Multi-Layer Perceptron Artificial Neural Network Predicting Future Mobility Data
The core features of the model include an input layer with shape (1480, 1), which is the shape of the input data; a Dense layer with 64 units, a 'relu' activation function, and a 'normal' kernel initializer; another Dense layer with 64 units and relu activation; and an output layer.
The loss function is 'mse', with 'RMSprop' as the optimizer and mean absolute error as the metric.
Before importing the x-values though, I had to adjust the dates like this:
import pandas as pd
from sklearn.model_selection import train_test_split
x = uaeMobilityData['date']
y = uaeMobilityData['retail_and_recreation_percent_change_from_baseline']
x = pd.to_numeric(uaeMobilityData['date'],errors='coerce')
x = pd.factorize(uaeMobilityData['date'])[0].reshape(-1, 1)
This is so that the dates are recognized as numeric values so that the regression can actually work.
Also, I needed to reshape the data, as it was only 1-dimensional at the time.
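The factorize step simply maps each distinct date to the order in which it first appears; without pandas, the same idea can be sketched as:

```python
def factorize(values):
    """Map each distinct value to the order of its first appearance,
    mirroring the first element returned by pandas.factorize."""
    seen, codes = {}, []
    for v in values:
        if v not in seen:
            seen[v] = len(seen)
        codes.append(seen[v])
    return codes

dates = ["2020-02-15", "2020-02-15", "2020-02-16", "2020-02-17"]
codes = factorize(dates)        # [0, 0, 1, 2]
# reshape(-1, 1) then turns each code into its own single-element row
x = [[c] for c in codes]
```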
Using the
sklearn.preprocessing.scale
function, I normalized the dates of the data. The
preprocessing.StandardScaler().fit
function returns a scalar with the normalized mean and standard deviation of the training data, which I applied to the test data using
scalar.transform
function.
The code shown below is how I preprocessed and normalized the
x_train
data.
x_train_scaled = preprocessing.scale(x_train)
scaler = preprocessing.StandardScaler().fit(x_train)
x_test_scaled = scaler.transform(x_test)
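Under the hood, `StandardScaler` just subtracts the training mean and divides by the training (population) standard deviation; a minimal sketch of the same logic:

```python
import math

def fit_scaler(train):
    """Return the training mean and population standard deviation."""
    mean = sum(train) / len(train)
    std = math.sqrt(sum((v - mean) ** 2 for v in train) / len(train))
    return mean, std

def transform(values, mean, std):
    return [(v - mean) / std for v in values]

train = [0.0, 1.0, 2.0, 3.0, 4.0]
mean, std = fit_scaler(train)            # mean 2.0, std sqrt(2)
train_scaled = transform(train, mean, std)
# test data must be scaled with the *training* statistics, never its own
test_scaled = transform([6.0], mean, std)
```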
I then built the model using the components and layers introduced in the previous paragraph in this section.
The code below, in Python 3, shows how I did it.
model = Sequential()
model.add(Dense(64, kernel_initializer = 'normal', activation = 'relu',input_shape = (1480, 1)))
model.add(Dense(64, activation = 'relu'))
model.add(Dense(1))
I then compiled the model using the loss function, optimizer, and metrics I defined before:
model.compile(
loss = 'mse',
optimizer = RMSprop(),
metrics = ['mean_absolute_error']
)
Then I fitted the model and printed the overall test loss and accuracy:
history = model.fit(
x_train_scaled, y_train,
batch_size=128,
epochs = 500,
verbose = 1,
validation_split = 0.3,
callbacks = [EarlyStopping(monitor = 'val_loss', patience = 20)]
)
score = model.evaluate(x_test_scaled, y_test, verbose = 0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
From there, I could easily make predictions:
prediction = model.predict(x_test_scaled)
print(prediction.flatten())
print(y_test)
Conclusion
In conclusion, we can clearly visualize how mobility trends changed during the COVID-19 pandemic. Through the change-point detection program, we can see whether policies implemented by the United Arab Emirates slowed activity in the country. Using the mobility data, we can also see which sectors of activity held up during these times and whether they were deemed essential or not. Of course, these programs are versatile and can be implemented for any country.
In addition, using the Regression Predicting using a Multi-Layer Perceptron Artificial Neural Network, we can predict the future values of the mobility data for the UAE.
These visualizations can help countries determine whether the policies they implemented slowed the spread and the activity of citizens, as well as whether activity was correlated with an increase in cases or deaths.
Hopefully you enjoyed my project!
Built With
fbprophet
keras
python
tensorflow
Try it out
github.com | AWS Data Exchange Challenge - Kenneth Zhang Project | Using change-point detection and a Multi-Layer Perceptron ANN to detect change-points and predict future Mobility Data. | ['Kenneth Zhang'] | [] | ['fbprophet', 'keras', 'python', 'tensorflow'] | 12 |
10,147 | https://devpost.com/software/sevabhava | Inspiration
The platform started as a reaction to the helplessness I felt after the current pandemic started. I saw numerous helping hands who took the initiative to support the ones who were the worst affected.
Sevabhava (
http://sevabhava.in
) is a web based social collaboration platform where individuals or NGOs can find, register & create Social Initiatives in their locality.
What it does
It has a data analytics platform that non-profits can use to manage their relief work.
They can find highly impacted areas, run analytics on demographics data, and see where they can add more value.
How I built it
Built it using Node.js, MDBootstrap, and FusionCharts.
Challenges I ran into
Challenge 1 : Creating a custom analytics schema on Amazon Athena
Challenge 2 : Creating visuals in the app to display analytics data in a simple visualisation.
Challenge 3 : Dynamically updating the visualizations with user input
Accomplishments that I'm proud of
The whole app :P
What I learned
About AWS and how to use S3 powered by Athena, and visualizations using FusionCharts.
What's next for Sevabhava
We would like to add visualizations for Indian territories.
Built With
amazon
amazon-web-services
ethena
node.js
s3
Try it out
sevabhava.in | Relief Work Data Analytics Platform | Helping Non Profits better manage on ground relief work with our Relief Work Data Analytics platform | ['vivek shukla'] | [] | ['amazon', 'amazon-web-services', 'ethena', 'node.js', 's3'] | 13 |
10,147 | https://devpost.com/software/deep-virtual-try-on-cloths-powered-by-pytorch-and-aws | demo4
demo1
demo3
demo5
demo2
GIF
Android wireframe using Adobe XD
GIF
Flask RESTful web app deployed on AWS Ec2
Inspiration
We started our journey when we tried to purchase some clothes at a nearby apparel shop during the COVID-19 pandemic. We noticed that the customer count was significantly low; after some research, we found that people are reluctant to go to shops and try on clothes for fear of catching COVID-19. So we thought of developing a mobile application that lets people try on clothes without wearing them physically. We searched YouTube and Google for similar implementations and found exactly two.
A Virtual Try-On mirror that uses the Microsoft Kinect sensor. Customers stand in front of the display, clothes are fitted onto them, and they can change clothes using hand-gesture controls. The problems with this system: the fitting doesn't work perfectly, the hardware cost is very high, customers have to wait their turn, and it is difficult to implement.
A Virtual Try-On mobile application with simple drag and drop. The problems with this one: the fitting is a disaster and the UI is not user friendly.
What it does
In simple words, it's a RESTful API deployed as a Flask application. You can upload your full-body image, then upload different upper-body clothes such as T-shirts and shirts, and see how well they fit on your body.
We deployed the same on AWS EC2 and tested it (see the YouTube demo), but due to the increasing charges we terminated the instance; you can still run the program on Google Colab.
How we built it
We first collected the dataset for the Virtual Try-On(
http://47.100.21.47:9999/overview.php
).
The dataset includes different full-body images, their parsed images, and key_points.json (OpenPose).
We trained our model by dividing the dataset into different categories and doing some data processing.
We trained our network using PyTorch and saved the weights.
Combined GMM (Geometric Matching Module), generator_parsing, generator_app_cpvton (clothing shape- and texture-preserving VTON), and generator_face.
Implemented on Google Colab in Flask RESTful API.
Customers upload their poses; OpenPose estimates the pose and writes a JSON file, instance segmentation produces the parse image, and the outputs are saved to separate folders. The customer can then upload any upper-body garment image; the program first removes the background, saves the output to separate folders, and then applies our pre-trained PyTorch model to get the result, with the cloth warped in detail onto the customer image.
In the output, we get two images: one with a generated ("fake") face and one with the original face.
To understand the pipeline we put all the intermediate steps in the output image.
It is now deployed on an AWS EC2 (p2.xlarge) GPU-powered instance.
We also built an Android wireframe in Adobe XD; we are in the process of building the app using the RESTful API.
Challenges we ran into
Combining GMM, generator_parsing, generator_app_cpvton (clothing shape- and texture-preserving VTON), and generator_face is a very hard task, since it needs a lot of code optimization.
Deploying as a Flask application along with the OpenPose implementation took a lot of effort, since we needed to build OpenPose using Caffe and satisfy a lot of GPU driver requirements such as CUDA and cuDNN.
Deploying on AWS EC2 (p2.xlarge) took more time than we expected due to protobuf conflicts in the OpenPose installation (CMakeLists.txt bash-file changes and library installation issues).
Implementing the instance segmentation algorithm also took a lot of time, because we needed cloth-specific segmentation on the human body.
Accomplishments that we're proud of
Developing the world's first detailed virtual try-on for clothes powered by PyTorch.
Implemented a web app for users to interact with.
Built an Android wireframe and are in the process of completing it.
What we learned
We learned to integrate different PyTorch models into a single program
Understood the importance of PyTorch's dynamic computational graph in our project
What's next for Deep Virtual Try-On cloths powered by PyTorch
An Android application where users can just take a photo of themselves and apply virtual clothing in real time.
Training the model on lower-body clothes like pants and shorts, and on images of men.
Built With
opencv
python
pytoch
tensorflow
Try it out
ec2-13-233-237-14.ap-south-1.compute.amazonaws.com
github.com | Deep Virtual Try On cloths powered by PyTorch and AWS | Worlds first API for Deep Virtual Try on cloths exclusively for pandemic recovery in apparel industry. Powered by powerful PyTorch deep learning model and AWS with detailed cloth warping | ['Nandakishor M', 'anjali m'] | [] | ['opencv', 'python', 'pytoch', 'tensorflow'] | 14 |
10,147 | https://devpost.com/software/us-poverty-level-impact-on-hospital-bed-utilization | Hospital Beds Utilization
Hospital Beds Availability
A positive relationship we found between an increase in bed availability and poverty level in the context of bed utilization.
Percentage increase in Beds Availability.
Inspiration
US population living in poverty face unprecedented pressures due to COVID-19. Although the federal government moved quickly to provide relief, more help is needed.
This project was created to help federal agencies and hospital systems help optimize scenario planning for when staff can be shifted around to serve those living in federal poverty areas.
What it does
This project looks at historical bed utilization rate, poverty level, potential available bed capacity, and hospital type as key features that could help forecast the staffing needs of health workers (doctors, nurses, etc.). The staffing needs of a specific hospital type can be scored based on availability, enabling counties in poverty to be better served.
How I built it
I built this using AWS SageMaker, AWS Data Exchange, AWS S3, and pandas.
Challenges I ran into
The initial phase of this project ran into multiple challenges, including finding the right dataset and choosing an appropriate data normalization technique.
As I made my way into the data analysis I found great insights while there were challenges with the tools I chose. AWS services (dataexchange, sagemaker notebooks) helped me through to complete my project.
Accomplishments that I'm proud of
I'm proud of the findings from the data analysis: historical bed utilization rate, poverty level, and potential available bed capacity are key features that can help forecast the staffing needs of health workers (doctors, nurses, etc.). The staffing needs of a specific hospital type can be scored based on availability, enabling counties in poverty to be better served.
What I learned
I learned AWS best practices, AWS Machine learning, Data Analysis and Data Visualization.
What's next for US Poverty Level impact on Hospital Bed Utilization.
As a next step: building a machine learning model that helps support and augment health workers where demand exists, integrating with existing staff scheduling systems so managers can view the model's recommendations, allowing healthcare stakeholders to deliver better patient care and improve productivity.
Built With
amazon-web-services
python
sagemaker
Try it out
github.com | US Poverty Level impact on Hospital Bed Utilization. | To better understand how socioeconomic status - such as populations living in federal poverty counties impact hospital bed utilization and to help optimize scenario planning. | ['Jagadeesh Josyula'] | [] | ['amazon-web-services', 'python', 'sagemaker'] | 15 |
10,147 | https://devpost.com/software/ship-flow | Inspiration
I wanted to create a data visualization showing macroeconomic movement. I decided shipping container data would be perfect.
What it does
ShipFlow uses a US Customs and Border Protection (CBP) Data
provided by Enigma
as its source. The app shows all shipments from the dataset following an interpolated path to their destinations during their last two days of travel. It allows users to scroll from January - August 2020, pause / play, and see metadata about individual shipments in a map UI.
How I built it
I found the dataset on AWS Data Exchange, and decided to explore it a bit.
The file that ended up being most useful was
data/ams__header_2020__202008241500.csv
, showing the header data of containers cleared by CBP.
The header data contains port of origin, port of destination, weight, vessel name, and several other fields useful for visualizing shipments.
I wrote several Python scripts run locally to interact with the data (see source in github repo):
Data Agg
to automatically move data from AWS Data Exchange to S3.
Data Cleaner
to clean data for upload to AWS Athena
Data Sampler
to view a few rows of data locally.
I then uploaded the dataset to AWS Athena, where I was able to further investigate the data using SQL.
After finding queries that returned the data that the front-end needed,
I created a Lambda
to do these queries in realtime. This backend Lambda simply takes a date, like
2020-07-19
, as an argument and returns all relevant ships for that date from the AWS Data Exchange dataset stored in S3.
I then wrote the front end using Angular 9 and Mapbox. The front end pulls several days of data from the backend lambda using an API Gateway proxy, then queries the Mapbox to find the lat/lon of each destination. These results are stored in an array and displayed on the webpage. The app recalculates position of every ship every tick of its simulated time. The user is able to change the time, triggering a reload, and watch as the ships move to their destination. Metadata about the ships is shown in a popup, and the size of a ship's circle on the map is proportional to the size of its cargo.
Since detailed telemetry data is out of scope of this project and dataset, I decided to interpolate their path of travel using a simple algorithm. I assumed ships travel at a fixed speed along a line that intersects the port of destination and the center of the United States. For the vast majority of ships, this gives a visually reasonable path. The algorithm located
here
is
shipCoord(endCoord, endDate: Date, lonOffset, latOffset) {
const D = this.shipSpeed * this.daysToShow * 24; // km total over days to show
const totalDeltaLon = endCoord[0] - this.defaultCenter[0];
const totalDeltaLat = endCoord[1] - this.defaultCenter[1];
const AoA = Math.atan2(totalDeltaLat, totalDeltaLon);// "angle of attack"- ratio of lat:lon
const lonKm = D * Math.cos(AoA); // KM in the east/west direction
const latKm = D * Math.sin(AoA); // KM in north/south
const latConst = 111.2;
const lonConst = 111.2 * Math.cos(Math.PI / 180 * this.defaultCenter[1]);
const deltaLat = latKm / latConst;
const deltaLon = lonKm / lonConst;
const daysLeft = (endDate.getTime() - this.appTime.getTime()) / (24 * 3600 * 1000);
const pct = daysLeft / this.daysToShow;
return [endCoord[0] + (deltaLon + lonOffset) * pct, endCoord[1] + (deltaLat + latOffset) * pct];
}
This algorithm means that to know where a ship's visualization location is, we only need to know its destination in lon/lat, arrival date, and the app's clock time. A small, randomized offset was added so that a group of ships with the same destination location and time would not be shown directly on top of each other.
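A direct Python transliteration of `shipCoord` (assuming speed in km/h and the same 111.2 km-per-degree approximation) makes the endpoint behavior easy to check: at arrival time `pct` is zero, so the ship sits exactly on its destination.

```python
import math

KM_PER_DEG_LAT = 111.2  # rough km per degree of latitude

def ship_coord(end_coord, hours_left, total_hours, center, speed_kmh,
               lon_offset=0.0, lat_offset=0.0):
    """Interpolate a ship's display position along the line through the
    destination and the map center, mirroring the app's shipCoord()."""
    d = speed_kmh * total_hours                     # km covered over the window
    aoa = math.atan2(end_coord[1] - center[1], end_coord[0] - center[0])
    lon_km, lat_km = d * math.cos(aoa), d * math.sin(aoa)
    lon_const = KM_PER_DEG_LAT * math.cos(math.radians(center[1]))
    delta_lon, delta_lat = lon_km / lon_const, lat_km / KM_PER_DEG_LAT
    pct = hours_left / total_hours                  # 1.0 far out, 0.0 on arrival
    return (end_coord[0] + (delta_lon + lon_offset) * pct,
            end_coord[1] + (delta_lat + lat_offset) * pct)

dest, center = (-74.0, 40.7), (-98.6, 39.8)         # e.g. New York, US center
arrived = ship_coord(dest, 0, 48, center, 30.0)     # hours_left = 0 -> at port
departed = ship_coord(dest, 48, 48, center, 30.0)   # start of the 2-day window
```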
Since the AWS Data Exchange dataset contains generic place names, a translation layer is needed. The Mapbox Geocoding API was used to retrieve a lon/lat from a generic place name.
The frontend shows ships two days from their destination, and removes them on arrival.
To host the app, I registered a domain name using AWS Route 53 and ACM. The app's Javascript/HTML artifacts are stored in S3, and served using cloudfront. This app uses infrastructure as code via Terraform, allowing easier management and configuration.
A small deploy script
was written to efficiently update the backend Lambda's code.
Challenges I ran into
The data was more complex than expected. Particularly, I had assumed each row of data for the same vessel's voyage would have the same origin and destination, but this is not the case. I thought a container originating in Germany but shipped from the Netherlands to the United States would show the Netherlands as a port of origin, but instead Germany is shown. To simplify the display, I chose the origin with the largest cargo weight on the vessel to show as the origin in the app.
I originally wanted to implement a great circle path to more accurately show where ships were travelling from, but soon realized this was a high-effort low-reward task. Actual ships follow shipping lanes and more complex navigation than great circles, so I decided instead to use the algorithm described above.
The animation speed ended up being a bit slow, so I did some tweaking of the app's simulated clock speed and refresh rate to obtain a reasonable animation.
Accomplishments that I'm proud of
In about one week of working on my own, I was able to go from zero to a fully-functioning data-driven web app, showing the power of AWS.
What I learned
This was my first time using AWS Athena and the AWS Data Exchange. These are incredible tools that I look forward to using in the future.
What's next for Ship Flow
ShipFlow has plenty of room for performance improvements. Some possible angles of attack are:
Looking further into the Mapbox API to try to improve the animations.
Precaching ship paths
Precaching {place-name: lon-lat} mapping that is now pulled in realtime.
Tie in more data sources, such as actual telemetry data and more ship metadata.
Built With
amazon-web-services
angular.js
athena
lambda
mapbox
python
terraform
Try it out
ship-flow.io
github.com | ShipFlow | Show maritime shipments flowing into the US! | ['Brendan John'] | [] | ['amazon-web-services', 'angular.js', 'athena', 'lambda', 'mapbox', 'python', 'terraform'] | 16 |
10,147 | https://devpost.com/software/covid-and-gdp-relations | Population vs Death Graph
GDP and COVID-19 Impact
Quicksight Chain
COVID-19 Cases across the globe
Deaths due to COVID-19
Event Triggers For New Revisions
Background
I've created various visualisations in Amazon QuickSight by merging datasets from different AWS data providers.
Data Providers Used
CRUX - OECD - Domestic Demand Forecast
Rearc - M2 Money Stock | FRED
Enigma - Global Coronavirus (COVID-19) Data (Corona Data Scraper)
AWS Services Used
AWS Data Exchange
AWS S3
AWS Glue
AWS Athena
AWS Quicksight
Process
1.) Data sets are subscribed via AWS Data Exchange.
2.) Any revision in dataset publishes AWS Cloudwatch Event.
3.) This event is used to trigger AWS Lambda service that exports the datasets to AWS S3 buckets.
4.) AWS Glue Crawlers are set to run every 24 hours to check for new revisions and update the older tables in AWS Athena.
5.) AWS Athena is used as a data source to create visualisations in AWS Quicksight.
Built With
amazon-web-services
aws-athena
aws-data-exchange
aws-glue
aws-quicksight | Covid and GDP Relations | Amazon AWS Ecosystem powered quicksight visualisation for the relation between COVID-19 and GDP of the countries amongst some other interesting charts. | ['Dhruv Kanojia'] | [] | ['amazon-web-services', 'aws-athena', 'aws-data-exchange', 'aws-glue', 'aws-quicksight'] | 17 |
10,147 | https://devpost.com/software/worldwide-pandemic-tracking | Open-source tools and Serverless architecture
Automation
Value proposition
Deep-dive into the architecture
Simple, fast, Consistent
A sample of the dashboard, try it out in our link!
Inspiration
From the beginning of the pandemic, we were obsessed with providing consistent views of cases, deaths, and test results in all countries, especially comparing our home country (Colombia) and the United States.
Making use of Open-Source tools like Zabbix and Grafana, as well as by leveraging AWS Serverless capabilities, we were able to achieve this with remarkable accuracy and consistency between different data sources.
What it does
Our
COVID-19 Visualization
application uses dynamic content automatically fetched from AWS Data Exchange subscriptions (such as the REARC repositories) into S3 buckets. From there, data points are normalized, validated, compared, and presented to end users via publicly available Grafana dashboards.
How we built it
The core application runs on a
Lambda function built on Python
which captures events generated in Cloudwatch whenever a new data-set revision is made available by the provider (in this case,
REARC
Testing data and New York Times COVID-19 data) and exports it to a S3 bucket.
Once that data is pushed into the S3 bucket, a second Lambda function parses the metrics and pushes each one into a Zabbix server landing zone, where Zabbix's automatic pre-processing engine loads the data into its database for alerting and exposure.
Finally, all data is consumed by Grafana via an API Gateway that provides great response times for drill-downs, refreshes, and overall interaction with each widget available in the dashboard.
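The parsing stage of that parse-and-push Lambda can be sketched as flattening each dataset row into the (host, item key, value) tuples a Zabbix trapper expects — the field and key names here are illustrative assumptions:

```python
import csv
import io

def rows_to_metrics(csv_text, host_field="country", value_fields=("cases", "deaths")):
    """Flatten dataset rows into Zabbix-style item tuples:
    one (host, item_key, value) per row/metric pair.
    Field and key names are illustrative assumptions."""
    metrics = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        for field in value_fields:
            metrics.append((row[host_field], f"covid.{field}", int(row[field])))
    return metrics

sample = "country,cases,deaths\nColombia,120,4\nUnited States,900,30\n"
items = rows_to_metrics(sample)
```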
Challenges I ran into
Making sure that we deal with different refresh rates from data sources, catching errors in data transcription (cities or states with no changes in 24 hours), and pushing more and more countries into our repository to improve the data normalization rules within Zabbix and AWS Lambda.
Accomplishments that I'm proud of
Enabling automatic ingestion from AWS Data Exchange subscriptions
using Cloudwatch events, AWS Lambda and S3 buckets.
Creating Grafana dashboard templates
that can be extrapolated to other similar time-tracking healthcare data relatively easy.
What I learned
The great potential in AWS Data Exchange and the agility that it will bring to data-driven applications where external data availability was a major roadblock in improving these types of solutions.
What's next for our Dashboards
Adding more countries and heterogeneous data sources.
Improve our data normalization rules.
Built With
lambda
python
zabbix
Try it out
covid19.imagunet.com | Worldwide Pandemic real-time tracking and visualization | A single and consistent view of COVID-19 metrics for any country in the world. | ['Carlos Ortega', 'Luis de la Torre'] | [] | ['lambda', 'python', 'zabbix'] | 18 |
10,147 | https://devpost.com/software/testkycb | Inspiration
We noticed that financial services clients express similar pain points when it comes to onboarding new customers and keeping existing customers engaged in their products.
One of the main factors that contributes to a prolonged onboarding process and customer attrition is the lack of effective use of integrated data sources to extract holistic insights about a prospect. This subsequently prevents business users from making data-driven decisions to provide effective marketing campaigns that drive customer growth.
We therefore decided to address these pain points using a data-driven approach that will not only enhance existing sales and marketing processes but will also make it easier for firms to carry out these activities without burdening the cost of additional resources. We were also motivated by this challenge to create a fun and innovative solution so that
any
user can understand the value that it will bring to real world scenarios.
What it does
Our solution aims to rapidly improve the onboarding and sales processes for new and existing small to medium-sized enterprises. In a single platform our solution:
Provides business users with Know Your Customer Better (KYCB) information about a new customer and its product of interest.
Showcases existing customers that have similar attributes as the prospect based on clustering analysis.
Identifies a subset of the similar customers that do not already have the prospect’s product of interest.
Allows users to automatically generate a campaign to market this product to existing customers.
How we built it
The first step in building the back-end was to integrate all third-party data, including AWS Marketplace foot traffic data (Safegraph), open web, social media and commercial data. We then used Python to select, transform, and standardize customer features to apply the k-means clustering model, which created customer segments for better targeting effectiveness.
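At its core the segmentation step is Lloyd's algorithm; a minimal one-dimensional sketch (the real model ran on several standardized features via scikit-learn's KMeans):

```python
def kmeans_1d(values, centroids, iters=20):
    """Plain Lloyd's algorithm on scalars: assign each point to its
    nearest centroid, then move each centroid to its cluster mean."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# two obvious spend segments in toy data
spend = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
centers = sorted(kmeans_1d(spend, [0.0, 5.0]))
```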
After a series of development iterations, we worked backwards by identifying the business case we wanted to solve, cross referencing that with the data/insights we had available and coming up with a vibrant, narrative-driven application which real users could utilize. Our ultimate goal was to create an automated system which developed all the insights under the hood with only a few user inputs, then displayed them all in the most concise way possible, eliminating unnecessary clutter.
Challenges we ran into
Easily integrating the campaign generation into the platform
Being able to strategically optimize the data visualizations so that the UI would not render too slowly
Navigating legal agreements with data vendors
Accomplishments that we're proud of
Building a one-stop platform that has the potential to effect change in real-world situations
Building a clustering model that segments customers effectively, which can improve marketing campaign performance
The scope of work completed in the allocated time
Being able to work well together to build something cool while everyone is so busy
What we learned
There are many factors that contribute to making a data-driven platform comprehensive and easy to understand
The strengths of different AWS features along with the depth and richness offered by the AWS Marketplace and AWS Data Exchange
Strengthened our coding, development, machine learning, design, storytelling, and cloud services skills
The intricacies that come with building an end-to-end solution, from design to testing
What's next for Know Your Customer Better: Marketing Accelerator
We envision bringing this solution to market and speaking with potential users in the financial services and insurance sectors that would best utilize our platform to enhance their onboarding processes and increase customer retention. We also plan to continue iterating on our solution and enhancing our clustering model.
Built With
aws-lightsail
aws-pinpoint
chart.js
css
flask
google-chart
google-maps
html
javascript
jquery
numpy
pandas
plotly
python
scikit-learn
Try it out
github.com | Know Your Customer Better: Marketing Accelerator | Know your customer. Understand your customer. Give your customer what it needs. Keep your customer. | ['Gautam Kumar', 'Jules de Courtenay', 'Yichi Zhang'] | [] | ['aws-lightsail', 'aws-pinpoint', 'chart.js', 'css', 'flask', 'google-chart', 'google-maps', 'html', 'javascript', 'jquery', 'numpy', 'pandas', 'plotly', 'python', 'scikit-learn'] | 19 |
10,147 | https://devpost.com/software/automated-contact-tracer | GIF
Demonstration of WebApp in Action
The tech used in my front-end web app
The tech used in my back-end model
Visit go.osu.edu/hackwiki for more pics, details and discussion!
Inspiration
I have been on campus for a little under a week, and already we are seeing the positive Covid-19 cases skyrocket. One of the issues that we are seeing is that the university is struggling to track down the people who have been exposed to a positive individual. In the world of big data, I felt there had to be a solution to bypass the monotony.
Additionally, I have seen COVID risk calculators put out by a number of medical groups. Specifically,
this one
from the Cleveland Clinic uses a number of data points to predict the likelihood of testing positive.
What it does
Using location data collected by Kochava Collective, the Automated Contact Tracer can view the trails of individuals as they go about their lives. When combined with a daily health check built into the web app, this data can be used to judge an individual's potential exposure to coronavirus throughout their day. Scored by a mixture of visits to contaminated spaces, interactions with infected individuals, and the results of the daily health/symptom survey, the user is provided a rating of how wary they should be regarding COVID. If their score is high enough, they are recommended to visit their nearest testing center.
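As an illustration of this kind of combined scoring (the weights and threshold below are invented placeholders, not the app's actual values), the three signals might be mixed like this:

```python
def exposure_score(contaminated_visits, infected_contacts, symptom_points,
                   w_visits=1.0, w_contacts=3.0, w_symptoms=2.0,
                   test_threshold=10.0):
    """Combine three exposure signals into one risk rating.

    The weights and threshold are illustrative placeholders, not the
    values used by the Automated Contact Tracer.
    """
    score = (w_visits * contaminated_visits
             + w_contacts * infected_contacts
             + w_symptoms * symptom_points)
    recommend_test = score >= test_threshold
    return score, recommend_test

score, flagged = exposure_score(contaminated_visits=2, infected_contacts=1,
                                symptom_points=3)
print(score, flagged)  # 11.0 True: above the threshold, so recommend a test
```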
How I built it
This was admittedly my first time working with anything in AWS aside from S3 and Route 53. So, to start, I had to watch a lot of videos and work through a lot of tutorials. I tried to keep as much of the project within AWS as possible to continue building skills and working on technology that was new to me.
I have a ton of detail relating to how I built this on my
github wiki
. Please go check it out!
Challenges I ran into
While there are a ton of resources out there, I was not always able to find answers to my questions. One of the big issues I had was figuring out how to host the Python script on AWS to be run on a schedule. Ideally, I would have loved to use Lambda functions and embed all of the code there, but I couldn't due to the lack of some packages in Lambda. I spent a ton of time trying to figure out a workaround, but none of the 'miracle cures' prescribed to me via YouTube ever did the trick...
Another big challenge was starting from essentially nothing with JavaScript. I had to do a lot of research to learn the basics of how to integrate responsiveness into a website. It is a very good skill that is now growing for me! Excited to implement it on my personal website next!
Accomplishments that I'm proud of
I am quite proud of the fact that I started with very little experience with the AWS suite and was able to self-teach my way to proficiency in a pretty good list of tools. It all fits together quite nicely, which is making me tempted to look into studying up for an AWS certification. Would definitely be a challenge, but I am up for it!
Additionally, I am proud of the fact that I was able to build this entire thing by myself. I have never done a hackathon alone, and as such, I have always been in the backend of the team, deep in the data. This time, I had to be versatile- working from python to css... No place to hide when you are working alone!
What I learned
As I have said above, I have learned quite a bit about new technology, but I have also learned quite a lot about individual project management. I did not hear about this challenge until mid-July, but once I did, I began to think and plan. Over my internships, I have been learning the value of planning before coding. Too many times, I have been 5 hours into code only to find out I had missed something in my thinking. For this project, I worked to think first, then act.
What's next for Automated Contact Tracer
I truly believe this is an idea that could help institutions like my university recover quicker from the pandemic. If I am to continue with the project, I may need to draft some of my 'web-developer' friends to join me. I am a Data Scientist. I have my weaknesses. Web development is definitely (for now) one of them!!
I also need to continue thinking through the privacy and legal implications of an app like this. I know the university has a lot of power, especially when facing a pandemic, so I think they may be able to work with me on something like this. I have a contact in my schools' legal office, so I might need to reach out and have a chat.
Built With
amazon-web-services
amplify
api-gateway
athena
css
data-exchange
data-pipeline
dynamodb
glue
html
javascript
lambda
leaflet.js
mapbox
python
s3
sagemaker
sql
Try it out
github.com | Automated Contact Tracer | A solution that utilizes users' location data and daily health checks to alleviate the pressures of manual contact tracing at universities and other large institutions. | ['Mitch Radakovich'] | [] | ['amazon-web-services', 'amplify', 'api-gateway', 'athena', 'css', 'data-exchange', 'data-pipeline', 'dynamodb', 'glue', 'html', 'javascript', 'lambda', 'leaflet.js', 'mapbox', 'python', 's3', 'sagemaker', 'sql'] | 20 |
10,147 | https://devpost.com/software/hospital-beds-finder | Search Form
Form error handling
LA Demo
LA Demo showing hospital address and beds available
LA demo zoomed in on a different colored hospital
mouseover hospital icon demo
mouseover zoomed demo
hospital-beds
USA hospital beds capacity web app. Try it at
http://hospitalbeds.link/
Inspiration
The COVID-19 lockdown has us thinking twice before going out and more than a couple of times before going to a hospital for anything short of an emergency. Being at a hospital in these times can expose you to COVID-19 and other dangers just by being there. This tool gives a glimpse of information about surrounding hospitals to help you decide which one to go to, if necessary, depending on how busy the
hospital beds
are.
What it does
This web app lets you search in a
US
zip code, city or state and renders a map of hospitals in the selected area.
Hospital icons are displayed in the following way:
Green: 2/3 or more beds available
Orange: Above 1/3 beds available
Red: Below 1/3 beds available
Gray: If no bed data is available in the dataset
On mouseover, the app displays the percentage of available beds, the total number of staffed beds, and the number of pediatric and adult ICU beds. On click, the app displays the hospital name and its address.
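The icon-color rules above can be expressed as a small helper; the handling of the exact 1/3 boundary and of missing data is my assumption, not taken from the app's code:

```python
def bed_icon_color(available, staffed):
    """Map a hospital's bed availability to the map's icon colors."""
    if staffed in (None, 0) or available is None:
        return "gray"      # no bed data available in the dataset
    ratio = available / staffed
    if ratio >= 2 / 3:
        return "green"     # 2/3 or more beds available
    if ratio > 1 / 3:
        return "orange"    # above 1/3 beds available
    return "red"           # below 1/3 beds available

print(bed_icon_color(80, 100), bed_icon_color(50, 100),
      bed_icon_color(10, 100), bed_icon_color(None, None))
```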
How I built it
Data description
The dataset is from the
AWS Data Exchange
and it's called
USA Hospital Beds | Definitive Healthcare
. Up next is the description given by the data developer:
Definitive Healthcare provides intelligence on the numbers of licensed beds, staffed beds, ICU beds, and the bed utilization rate for the hospitals in the United States.
Web App
Using
Flask
for
Python
I wrote a web app form that would take in a search inquiry from the user and render another page from there. Then, I used
folium
to generate maps using the search inquiry. The search inquiry is processed with
Pandas
since the app looks for the inquiry within a Comma Separated Values (*.csv)
dataset
.
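The search lookup can be sketched with pandas alone (the column names below are assumptions, not the Definitive Healthcare schema); the matching rows are what folium would then render as map markers:

```python
import pandas as pd

# Toy stand-in for the hospital beds CSV; column names are assumed
hospitals = pd.DataFrame({
    "HOSPITAL_NAME": ["General A", "General B", "General C"],
    "HQ_CITY": ["Los Angeles", "San Diego", "Phoenix"],
    "HQ_STATE": ["CA", "CA", "AZ"],
    "HQ_ZIP_CODE": ["90001", "92101", "85001"],
})

def search(df, query):
    """Return hospitals matching a zip code, city, or state query."""
    q = query.strip().lower()
    mask = (df["HQ_ZIP_CODE"].eq(q)
            | df["HQ_CITY"].str.lower().eq(q)
            | df["HQ_STATE"].str.lower().eq(q))
    return df[mask]

print(search(hospitals, "CA")["HOSPITAL_NAME"].tolist())
```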
AWS Resources used
Identity Access Management (IAM)
role with permissions to write to S3 and to instantiate an EC2 machine
Amazon S3 AKA Amazon Simple Storage Service
holding the dataset obtained through
AWS Data Exchange
A public
Amazon S3 AKA Amazon Simple Storage Service
containing the application
Amazon Elastic Compute Cloud (Amazon EC2)
for a virtual Linux 2 t2.micro machine instance running the application on the cloud
Elastic Beanstalk
For deployment of web application and load balancing
Amazon Route 53
to register a unique Domain Name System (DNS)
Amazon Elastic Load Balancer
To distribute Elastic Beanstalk application traffic
AWS Certificate Manager
To get a Secure Sockets Layer (SSL) certificate for a domain/web application
AWS Data Exchange
USA Hospital Beds | Definitive Healthcare
dataset
Challenges I ran into
The folium documentation is challenging since it's hard to search, and there are not many examples of fully-fleshed-out applications using the framework. Another challenge was using Flask, since I had only used it for one brief project before. Hosting with AWS can also be very complicated for someone who has never used it before. I have completed some AWS training but had never hosted a web app, so I went through the documentation, looked for examples, and even contacted AWS support about a feature; I am still waiting to hear back so I can implement it.
Accomplishments that I'm proud of
Learning how to use two frameworks (Flask and Folium) in a few weeks and being able to integrate them using Object Oriented Programming (OOP) was challenging and rewarding. Being able to deploy to AWS with a custom domain name was very rewarding as well.
What I learned
Web development, AWS Data Exchange, AWS Cloud Services, Flask, OOP, folium, geospatial analysis, Full Stack development, bootstrap, javascript, app forms
What's next for Hospital beds finder
Implement the use of geopandas to be able to exploit the geojson dataset version of the data
Daily bed capacity predictions using machine learning
Integrate multiple data sources to make the app not limited to the US
Implement "find near me" functionality
Add an AWS CloudFront event that triggers AWS Lambda to refresh the dataset in S3 with every new revision by the provider (waiting to hear back from AWS support)
Installation
using a conda virtual environment (optional but recommended)
Naming the environment "flask"
conda create -n flask python==3.7.6
conda activate flask
conda install jupyter notebook==5.7.8 tornado==4.5.3
(Optional)
Installing dependencies
pip install -r requirements.txt
AWS EB requirements
The following need to be in your requirements file in order for AWS Elastic Beanstalk to serve your app:
click==6.7
Flask==1.0.2
itsdangerous==0.24
Jinja2==2.10
MarkupSafe==1.1.1 (1.0 can cause errors)
Werkzeug==0.14.1
Built With
amazon-ec2
amazon-web-services
elastic-beanstalk
flask
folium
geopandas
jinja
numpy
pandas
python
s3
Try it out
github.com
hospitalbeds-env.eba-xf56z2j2.us-east-1.elasticbeanstalk.com
hospitalbeds.link | Hospital beds finder | A full stack solution that maps a USA location of your choosing with hospitals colorcoded to display their percentage of available beds. It uses the USA Definitive Healthcare | Hospital Beds datasets. | ['Carlos Salgado'] | [] | ['amazon-ec2', 'amazon-web-services', 'elastic-beanstalk', 'flask', 'folium', 'geopandas', 'jinja', 'numpy', 'pandas', 'python', 's3'] | 21 |
10,147 | https://devpost.com/software/noise-pollution-tracker | Sensor Map
Introduction for Sensor
Map of Hearing Disability Across the US
Inspiration
When the world went into lockdown, one interesting effect was the drastic reduction in sound in places like New York. That change in sound got us wondering how noise pollution has been affecting us. We grounded our look in how hearing loss could be measured. Using AWS Data Exchange, we found CDC data showing the prevalence of hearing disability by state and age group. We were expecting a clear pattern linking hearing loss to states with large cities and industry. However, we were shocked to see that, for the most part, hearing loss affected every state significantly. With this discovery, we began wondering how the world should prepare for a good portion of its population to eventually become hard of hearing.
Having family members who suffer from hearing loss, we knew first hand that hearing aids can be improved significantly and have a real impact on quality of life. With this in mind, we began researching how to improve hearing aids. While we saw implementations on the market that would allow for the hearing aid to automatically adjust the volume, the flaw we saw in this was that not all sounds should be treated equally. While dampening construction noises would likely be fine, reducing the sound of a siren or a car horn could be dangerous for the person. This was when we began creating our machine learning model.
What it does
Our project has three deliverables.
The first one showcases how many people across the country suffer from a hearing disability. The data, shown by state and age group, makes clear that the problem affects people nationwide and across all age groups. It is truly a problem of societal scale.
The second one showcases the danger that comes with city living and hearing loss. Using SONYC data from the past 4 years, we illustrated via map how walking through the city can impact your ears. We illustrate this concept visually with pulsations on map markers to show how loud the sound is. We draw lines between the map markers and the person icon to show what is a safe distance to be from the noise. Finally, we allow the listener to hear what the sound is like. All in all, the point is to visualize how the noises can impact you without your consciously realizing it.
The final one is a testing script. Using the sound classification convolutional neural network devised by Justin Salamon and Juan Pablo Bello (
https://arxiv.org/pdf/1608.04363.pdf
) and implemented by CodeImporium (
https://www.youtube.com/watch?v=GNza2ncnMfA
), we trained the model on UrbanSound8K data. Then, to see how well it responded to a real-world scenario (where multiple different sound classes were present), we tested it against data from SONYC. Overall, the model did not perform well, suggesting that tech built to help hearing aids distinguish multiple simultaneous sounds will need CNNs that focus on getting the per-class probabilities right rather than on the single classification they think is most likely.
How we built it
Hearing loss data was accessed using AWS Data Exchange, which contained the CDC DHDS - Prevalence of Disability Status & Types by Demographics dataset. We then used an Amazon EC2 instance, S3, and an auxiliary EBS disk, along with JupyterLab, to clean and explore this dataset. Once we had sufficiently cleaned the data, we merged it with a GeoJSON of the 50 US States and Puerto Rico, which was displayed in a LeafletJS choropleth map. This map provides the user the ability to toggle between different CDC-defined age groups and view the estimated prevalence of hearing loss.
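The merge of per-state estimates into the states GeoJSON can be sketched as below; the property names and prevalence values are illustrative, not the actual CDC fields:

```python
# Attach a per-state prevalence value to each GeoJSON feature so the
# Leaflet choropleth can read it; property names here are made up.
prevalence = {"OH": 6.1, "NY": 5.4}  # hypothetical CDC estimates (%)

geojson = {
    "type": "FeatureCollection",
    "features": [
        {"type": "Feature", "properties": {"abbr": "OH"}, "geometry": None},
        {"type": "Feature", "properties": {"abbr": "NY"}, "geometry": None},
    ],
}

for feature in geojson["features"]:
    abbr = feature["properties"]["abbr"]
    feature["properties"]["hearing_prevalence"] = prevalence.get(abbr)

print([f["properties"]["hearing_prevalence"] for f in geojson["features"]])
```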
Once we noticed that hearing loss was more common than our initial expectation, we explored other datasets. This brought us to NYU SONYC (Sounds of New York City), who maintain a Creative Commons licensed dataset of audio files from numerous microphones placed around New York City. For our purposes, we used version 2.2 of this dataset, which had a temporal range of 2016 - 2019. This dataset contained 10k+ short recordings of sounds in the WAV file format, along with location metadata. We decided to visualize a unique subset of these recordings using a LeafletJS map. In order to identify louder sounds easily, we make use of icons which pulse based on how "loud" a specific sound should be. Additionally, when a marker is in range of one of SONYC's sensors, the specific audio clip from that time will be played. This allows users to move an icon around the map and experience different sounds and their relative "loudness". Each icon placed on the map also has an independent play/pause button, allowing an inquisitive user to play each sound individually. Additionally, every marker on click will display metadata about its year, relative volume, and estimated decibels.
The map lazy loads audio files only when a marker comes with a close distance in order to minimize bandwidth usage for the client and our EC2 instance.
Challenges we ran into
It took an inordinate amount of time to clean the data
Finding the best data for the project
Installing Librosa was painful, as an llvmlite installation failed, requiring us to dive deep into the underlying code of that dependency
Accomplishments that we're proud of
We think we found a great way to visually represent the power of sound in our sensor map. People often need to see things to believe it, and we feel the way we chose to represent it does a good job intuitively illustrating the potential danger that exists with noise pollution.
While we didn't have enough time to test the model on all the data we found, being able to understand how the model worked on the surface level let us experiment and think about how best to move the project forward had we been given more time.
What we learned
With audio classification, the most challenging part is dealing with multiple true cases. While CNNs are well set up to account for this nuance, as they end with a separate probability for each output neuron, the difficulty will be ensuring there is enough test data for the CNN to detect multiple true cases with a large enough margin over the false ones to limit false positives.
Built With
amazon-data-exchange
amazon-ec2
jupyter
leaflet.js
librosa
python
sonyc
tensor
urbansound8k
Try it out
github.com | Project Tusk | We visualized the prevalence of hearing disabilities. Using convolutional neural networks, we then built a model to classify which sounds should be heard by a hearing aid and which can be dampened | ['Cody Benkoski', 'Matthew Gunton'] | [] | ['amazon-data-exchange', 'amazon-ec2', 'jupyter', 'leaflet.js', 'librosa', 'python', 'sonyc', 'tensor', 'urbansound8k'] | 22 |
10,147 | https://devpost.com/software/visualization-ys7bfx | Country-wise Data
Time Series Analysis
Inspiration
We wanted to work on a dataset that was out of league yet helped address a major cause in Southern Asia. Also, we wanted to work with various AWS services available for visualization purposes. It was then that we came across Devpost's AWS Data Exchange Challenge, and we agreed to take part in it.
What it does
This is a Data visualization project , named
Drishyam
, on gender equality in entrepreneurship, which shows the details about the changes in gender ratio in entrepreneurship over a span of time (1976-2019) in various countries.
How we built it
We build it using five AWS services :
AWS Data Exchange
Amazon S3
AWS Glue
AWS Athena
AWS QuickSight
Challenges we ran into
We had never worked with AWS Data Exchange and had no clue how to go about visualizing data obtained from it. Working remotely was also a challenge in itself: work that could have been done within a day or so took longer than usual.
What we learned
We learned how to work remotely.
We learned how to use following services :
AWS Data Exchange
AWS Glue
AWS Athena
AWS QuickSight
Built With
amazon-web-services
athena
aws-data-exchange
awsquicksight
glue
s3
Try it out
github.com | Drishyam | Drawing insights about diversity in gender as entrepreneurs across the globe over the years. | ['Firoj Siddique', 'Vibhuti Maheshwari'] | [] | ['amazon-web-services', 'athena', 'aws-data-exchange', 'awsquicksight', 'glue', 's3'] | 23 |
10,147 | https://devpost.com/software/covid-19-hospital-bed-utilization-application | Optimisation Model for Utilisation of Hospital Beds - COVID-19
Inspiration
The project is motivated by the overcrowding of hospital beds due to COVID-19. Another reason is to avoid a repeat of the events that happened at the beginning of the pandemic. Scarce healthcare resources due to COVID-19 require carefully made policies ensuring optimal bed utilization and quality healthcare service. An unsupervised deep learning algorithm is applied to decide the assignment of resources to patients. Further, developing and deploying AI applications is a challenging endeavor requiring a scalable infrastructure of hardware, software, and intricate workflows.
This project shows an end-to-end Machine Learning workflow and aims to help for both policy-makers and the public.
What it does
Full-stack Machine Learning Application with data visualization and clustering algorithm. Clustering shows a new way to use the data to assess the impact of COVID-19. Modeling is applied to Geospatial data in order to solve the use case of optimizing the utilization of hospital beds. It uses the number of licensed, staffed, unstaffed, ICU, adult ICU, Paediatrics ICU beds, bed utilization, a potential increase in capacity, and average ventilator usage. This allows us to take into account both the number of resources and the intensity of care needed. It provides the means to optimize bed-occupancy management and evaluate geographical hospital resource allocation.
The data and model are deployed as a web app. The machine learning pipeline is deployed using AWS Fargate which is a serverless compute for containers. This enables us to build and host a fully functional containerized web app on AWS without provisioning any server infrastructure.
How I built it
I used Python for the project. The steps involved are as follows:
Import packages, read data, create business features.
Data Analysis and visualization with the presentation of the data on the map with folium and geopy.
Applying deep learning clustering algorithm called Self Organizing Map (SOM) with ‘minisom’ package. A SOM is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional representation of the input space, called a “map”. We use the “clusters” from the algorithm to visualize on the Geospatial plot.
Build a front-end web application using Streamlit. It is an open-source Python library that makes it easy to build beautiful custom web-apps for machine learning and data science
Create a Dockerfile. Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers are used to package up an application with all of its necessary components, such as libraries and other dependencies, and ship it all out as one package.
Installing and running Docker on AWS EC2 instance using Amazon Linux AMI
Build and push a Docker image onto Amazon Elastic Container Registry. Amazon Elastic Container Service (Amazon ECS) is a container orchestration platform. The idea behind ECS is similar to Kubernetes (both are orchestration services).
Deploy web app using serverless infrastructure i.e. AWS Fargate. It is a serverless compute engine for containers that work with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). It removes the need to provision and manage servers, we specify and pay for resources per application, and improves security through application isolation by design. The application is stateless (A stateless app is an application that does not save client data generated in one session for use in the next session with that client).
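The clustering in step 3 is handled by the minisom package in the project; as a stripped-down sketch of what a SOM training loop does (omitting minisom's neighborhood function, and using made-up hospital features), it can be written in plain NumPy:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=500, lr=0.5, seed=0):
    """Train a tiny Self-Organizing Map: each grid node holds a weight
    vector pulled toward the inputs it best matches. Simplified sketch:
    no neighborhood function, which minisom does include."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=grid + (data.shape[1],))
    for t in range(epochs):
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(w - x, axis=-1)          # distance to every node
        bmu = np.unravel_index(np.argmin(d), grid)  # best matching unit
        w[bmu] += lr * (1 - t / epochs) * (x - w[bmu])
    return w

def cluster_of(w, x):
    """Map a sample to the grid coordinates of its best matching unit."""
    d = np.linalg.norm(w - x, axis=-1)
    return np.unravel_index(np.argmin(d), w.shape[:2])

# Made-up hospital features: [bed_utilization, icu_beds, ventilator_usage]
beds = np.array([[0.9, 30.0, 25.0], [0.2, 5.0, 1.0], [0.85, 28.0, 22.0]])
weights = train_som(beds)
print(cluster_of(weights, beds[0]), cluster_of(weights, beds[1]))
```

Similar hospitals (the first and third rows) end up mapped to the same grid node, which is the "cluster" then colored on the geospatial plot.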
Challenges I ran into
I found some integration issues between Streamlit and Folium e.g., showing the legend on the map. It also took me time to access the S3 bucket from within Fargate container.
Accomplishments that I'm proud of
I am really happy to have created and deployed an application that can be used by both the public and policymakers. It was my first time working with certain AWS technologies, e.g., Data Exchange, ECS, and Fargate. Data is updated in the application as it is updated at the source, which ensures it stays dynamic and reflects current COVID-19 cases.
What I learned
I learned how to develop a Full-stack machine learning application. I also learned about the ease of getting 1000’s of datasets from the data exchange. This application is available for the public using Fargate.
What's next for Optimisation Model for Utilisation of Hospital Beds-COVID-19
This is a really simple algorithm that can be improved in several ways. The application can also benefit from having a domain name and more security. I would like to scale the application using more distributed systems. I also would like to replicate the same information for other countries.
Built With
amazon-web-services
deep
folium
machine-learning
pandas
python
streamlit
unsupervised
Try it out
3.88.209.255
github.com | Optimisation Model for Utilisation of Hospital Beds-COVID-19 | Application to help decision-makers optimise the distribution of beds for COVID-19 patients. It effectively informs planning, resource allocation, and mitigate the overcrowding of hospitals. | ['Piush Vaish'] | [] | ['amazon-web-services', 'deep', 'folium', 'machine-learning', 'pandas', 'python', 'streamlit', 'unsupervised'] | 24 |
10,147 | https://devpost.com/software/market-status-k89g16 | Do Expired medicines contaminants in the American water supply, affect weakened human immunity for American and cause Spread for Covid-19
Inspiration
The resources of the land on which we live are decreasing while the population keeps increasing, making it difficult to meet the next generation's needs. The expiry problem is not only a pharmaceutical problem; it also involves all food products. Here's to a better world where the poor have their food and medicine.
What it does
Predicts when a product's stock will run out relative to its expiration date, based on its consumption rate
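A minimal sketch of this prediction under a constant-consumption assumption (the worksheet's actual logic may differ):

```python
from datetime import date, timedelta

def stock_outcome(units_in_stock, daily_consumption, expiry, today=None):
    """Predict whether stock is consumed before its expiry date.

    Simplified sketch: assumes a constant daily consumption rate,
    whereas a real model could fit the rate from sales history.
    """
    today = today or date.today()
    days_to_deplete = units_in_stock / daily_consumption
    depletion = today + timedelta(days=days_to_deplete)
    status = "consumed" if depletion <= expiry else "will expire"
    return status, depletion

status, when = stock_outcome(300, 10, expiry=date(2020, 12, 31),
                             today=date(2020, 11, 1))
print(status, when)  # 300 units at 10/day deplete on 2020-12-01: consumed
```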
How I built it
relational database
Challenges I ran into
You can learn a new thing every moment
Accomplishments that I am proud of
My worksheet runs and predicts expiration
What I learned
At first, it was just a worksheet to help me, in my role as warehousing manager, decrease expired stock in our warehouse. However, when I thought about that worksheet and tried to develop the idea, my research led me to the myth of drug expiration dates and its global impact:
"One pharmacist at Newton-Wellesley Hospital outside Boston says the 240-bed facility is able to return some expired drugs for credit but had to destroy about $200,000 worth last year. A commentary in the journal Mayo Clinic Proceedings cited similar losses at the nearby Tufts Medical Center. Play that out at hospitals across the country and the tab is significant: about $800 million per year. And that doesn’t include the costs of expired drugs at long-term care pharmacies, retail pharmacies and in consumer medicine cabinets."
What's next for Tracking-Coronavirus-USA-Spread
To have the project adopted by the American government
Built With
amazon-rds-relational-database-service
machine-learning
sql
Try it out
github.com | Tracking-Coronavirus-USA-Spread | Do Expired medicines contaminants in the American water supply, affect weakened human immunity for American and cause the Spread for Covid-19 | ['Abdelrazek Rizk'] | [] | ['amazon-rds-relational-database-service', 'machine-learning', 'sql'] | 25 |
10,147 | https://devpost.com/software/utilization-chart-of-hospital-beds-during-covid19 | USA-Hospital-Beds
Inspiration
With COVID-19 on the rise, hospitals have become overloaded with patients. Along with staffing, it is highly important to know the number of beds and their utilization rate to better serve patients. To get the bed utilization rate of each hospital in each state, it is important to know the number of licensed beds operated in each state and the hospital type, in order to treat and address patient needs during COVID-19. Hence, the graphs built with AWS QuickSight visualize and analyze the data by hospital type and number of available licensed beds, grouped by US state (color coded).
What it does
This project visualizes and analyses data and presents data in visual format to describe the number of licensed beds available for each Hospital type in each US state. It also shows the potential increase in bed capacity for each state & average ventilator usage and licensed beds for each state. The project also shows the visual representation of data that is filtered based on the Hospital type and the highest ventilator usage for the hospital type.
How I built it
I subscribed to "USA Hospital Beds - COVID-19 | Definitive Healthcare" delivered by “Rearc” using Amazon AWS Data Exchange. New datasets are added every day for this product, so I created a CloudFormation distribution from an existing template to update the data in S3 from AWS Data Exchange. The CloudFormation created a stack by creating resources like S3 bucket to store the hospital bed data, Lambda functions to manage daily update of data in the S3 bucket, IAM Permission Roles and events.
The datasets, which are in .CSV and .GeoJSON format, are extracted from S3, and tables are created using Extract, Transform, Load (ETL) functions and crawlers in AWS Glue. I created a trigger "ETL_Trigger_COVID_Beds" and scheduled a job "JOB_ETL_COVID_BED" to run once a week to update the table contents. The job uses a Python script to map the fields from S3 to Athena. The database "database-usa-hospital-beds" with table name "hospitalusa_hospital_beds_csv" is created in Athena.
For data visualization and analysis, I used AWS QuickSight. The input data for generating visualizations in QuickSight is from Athena.
Challenges I ran into
There were a number of decision factors to consider, apart from challenges. The bed utilization data is updated every day, hence the need for, and the decision to use, AWS CloudFormation.
Also, the hospital data needed to be filtered and color coded to understand the bed utilization rate of all the hospitals in each state. So the use of filters is pivotal to answering important questions, such as the highest number of licensed beds available for a hospital type in a single state.
Accomplishments that I'm proud of
I not only learned new topics in AWS but also used the data visualization tools in QuickSight. Having subscribed to "USA Hospital Beds - COVID-19 | Definitive Healthcare" by "Rearc", I learned to visualize and interpret data in QuickSight and to use data to make informed decisions.
What I learned
Apart from the technical material, I also learned about the various hospital types and the different types of hospital beds, knowledge that can help hospitals visualize and analyze data to utilize beds effectively during the pandemic.
What's next for Utilization chart of hospital Beds during Covid19
To use forecasting and insights to predict the number of hospital beds for each hospital, based on the number of patients in the hospital and the patients scheduled to exit on a particular date.
Built With
amazon-web-services
athena
aws-data-exchange
aws-glue
aws-jobs
aws-triggers
cloudformation
python
quicksight
s3
Try it out
github.com
us-east-1.quicksight.aws.amazon.com | Utilization chart of hospital Beds during Covid19 | This project has charts visualizing & analyzing data, showing the utilization of hospital beds & potential increase for each US state. The data is visualized based on the Hospital type in each state. | ['G S'] | [] | ['amazon-web-services', 'athena', 'aws-data-exchange', 'aws-glue', 'aws-jobs', 'aws-triggers', 'cloudformation', 'python', 'quicksight', 's3'] | 26 |
10,147 | https://devpost.com/software/nyc_property_sales_data_visualisation | Inspiration
What inspired me is the urge to learn something new. This is my first hackathon, so I wanted to put in the effort to learn how a hackathon works and the skills needed to win one.
What it does
The dashboard shows visualisations produced from NYC_Property_Sales_Data (2014-2018). Every dataset has 2 pages of visualisation: one showing a time-series plot and another showing visualisations of the other variables in the dataset. Finally, there is one last page showing the time-series visualisation from 2014 to 2018.
How I built it
I built it using RStudio and the "flexdashboard" library, which provides a good framework for producing a clean and easy dashboard.
Challenges I ran into
The main challenge was cleaning the datasets, as they were quite messy; it took up most of the data preparation time. The hardest part was normalizing the dates into one format, because they appeared in several layouts ("26/6/18", "6/26/18", "26/6/2018", "6/26/2018") and I had to find a simple way to standardize them. In the end it also involved some simple manual corrections because of the heavily distorted dates.
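The project's cleaning was done in RStudio, but the idea can be sketched in a few lines of Python. The sketch below assumes the four layouts listed above and, when a value is truly ambiguous, falls back to day-first; both assumptions are mine, not the author's:

```python
def normalize_date(raw):
    """Normalize mixed 'D/M/Y' and 'M/D/Y' strings to ISO YYYY-MM-DD.
    Assumption: when both fields could plausibly be a month, day-first wins."""
    first, second, year = raw.split("/")
    first, second = int(first), int(second)
    if len(year) == 2:
        year = "20" + year            # assume 20xx for 2-digit years
    if first > 12:                    # first field too big to be a month
        day, month = first, second
    elif second > 12:                 # second field too big to be a month
        day, month = second, first
    else:                             # ambiguous -> assume day-first
        day, month = first, second
    return f"{int(year):04d}-{month:02d}-{day:02d}"

print(normalize_date("26/6/18"))    # 2018-06-26
print(normalize_date("6/26/2018"))  # 2018-06-26
```

Checking which field exceeds 12 resolves most rows automatically, which is why only the genuinely ambiguous dates needed manual correction.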
Accomplishments that I'm proud of
An accomplishment I am proud of is that I was able to finish the dashboard as I had imagined it earlier, with all the items from my initial draft in it. I am also proud of producing a complete dashboard, because this was my first time using flexdashboard. I would also like to thank some of the YouTube channels that gave a simple guide on how flexdashboard works.
What I learned
I learned many data cleaning, correction, standardization, renaming, combining, filtering, and grouping techniques. I also learned about the "highchart" library in RStudio. Before this, I always applied ggplot2 or base graphs, but in this project I gave highchart a try and it worked very well. The graphs and charts it produces are better than the others, with some interesting interactive elements.
What's next for NYC_Property_Sales_Data_Visualisation
The data analysis and prediction will be the next part for the NYC_Property_Sales_Data_Visualisation project. I'm still collecting some ideas and reading more about them before jumping into them. Hope the readings will help for next part of this project.
Built With
rstudio
Try it out
github.com | NYC_Property_Sales_Data_Visualisation | NYC_Property_Sales_Data_2014_To_2018 | ['Vishwareeta Vanoo'] | [] | ['rstudio'] | 27 |
10,147 | https://devpost.com/software/carbon-print-in-the-environment-eu | SDG_13_20
Inspiration
There are 15 Sustainable Development Goals (SDGs) according to Eurostat. Among these goals are clean energy and reducing the greenhouse gases emitted into the atmosphere. Between 2000 and 2016, the share of people with electricity increased from 78% to 87%. As the population continues to grow, so will the demand for cheap energy. Investing in alternative sources of power other than fossil fuels is vital to the environment and to reducing the amount of carbon and other greenhouse gases in the atmosphere.
What it does
Goal 13, Climate Action, has an action plan to reduce the amount of greenhouse gas emissions into the atmosphere. Indicator SDG_13_20 covers energy-related greenhouse gas emissions under Goal 13. The data from "Eurostat - Greenhouse Gas Emissions Intensity of Energy Consumption", delivered by CRUX, tracks SDG_13_20, which ties into affordable and clean energy. The data provides an indicator value representing energy-related greenhouse gas emissions relative to gross inland energy consumption.
How I built it
I subscribed to "Eurostat - Greenhouse Gas Emissions Intensity of Energy Consumption" delivered by CRUX using AWS Data Exchange. New datasets are added once a year for this product, so I took the latest revision available in AWS Data Exchange and stored it in AWS S3. I extracted the data from the .CSV and .GeoJSON datasets in S3 and created tables using Extract, Transform, Load (ETL) jobs and crawlers in AWS Glue. I created the table "greenhouse_eucx05564", which can be used in AWS Athena to write queries, group results based on conditions, and filter data.
For data visualization, interpretation and calculations for predictions, I used AWS QuickSight. The input data for generating visualizations in QuickSight is from Athena.
Challenges I ran into
There were a number of decision factors to consider apart from the challenges. Firstly, I skipped using AWS CloudFormation, as the Eurostat data is updated yearly rather than weekly, daily, or monthly.
Also, instead of using QuickSight's built-in forecasting or insights to predict future data, I used the "parameter" option in QuickSight to create a new value calculated from past data.
Accomplishments that I'm proud of
I learned new topics in AWS and how to use the data visualization tool QuickSight. Having subscribed to "Eurostat - Greenhouse Gas Emissions Intensity of Energy Consumption" by Crux, I learned a lot about using and interpreting data in QuickSight, and about the power of data and tools like AWS QuickSight to make informed decisions and take actions pertaining to the environment.
What I learned
Apart from the technical aspects, I also learned about the various Sustainable Development Goals (SDGs), which can have a positive impact ranging from an individual's personal well-being to the global environment. I saw how the collection of environmental data, together with tools like QuickSight for representation, prediction, and forecasting, can support more informed decisions and actions with less harmful impact on the environment.
What's next for Carbon print in the Environment - EU
If the same data exists for the US, combine the datasets and compare US greenhouse gas emissions with those of other countries. Also include more developing countries in the list, then visualize and interpret the data.
Built With
amazon-web-services
athena
aws-athena
aws-crawlers
aws-data-exchange
aws-glue
aws-quicksight
crux-data
etl
glue
s3
Try it out
github.com
us-east-1.quicksight.aws.amazon.com | Carbon print in the Environment - EU | This project expresses, analyses, interprets data in visual format and shows how many tonnes of Carbon-Di-Oxide energy related green house gasses are being emitted per unit of energy in EU countries. | ['G S'] | [] | ['amazon-web-services', 'athena', 'aws-athena', 'aws-crawlers', 'aws-data-exchange', 'aws-glue', 'aws-quicksight', 'crux-data', 'etl', 'glue', 's3'] | 28 |
10,147 | https://devpost.com/software/fintech-retail-survive-or-die | Real GDP Percent change from U.S. Bureau of Economic Analysis
Sum of jobs supported by loan status
Sum of jobs supported by borrow state and loan status
Inspiration
“A big business starts small” – Richard Branson
It is undeniable that every large business we can think of, whether it is Walmart, Amazon, ExxonMobil, or Apple, started small. And as they scaled and grew into the behemoths they are today, investors and banks took on the risks to provide capital wherever they saw fit. In fact, America's successful venture into capitalism could not have occurred without small businesses. According to an article by the Office of Advocacy at the Small Business Administration (SBA), small businesses are commonly referred to as "the lifeblood of the U.S. economy" and "generate 44% of U.S. economic activity."1 And just as the COVID-19 global pandemic changed the lives of so many Americans, it has also altered the trajectory of our domestic output. According to the Bureau of Economic Analysis, our gross domestic product for Q2 2020 dropped 32.9%.2 The implications of that drop will have a profound effect on the country and our economic livelihood. Small businesses are no exception.
The U.S. has taken action to combat the effects of this invisible disease and enacted various economic levers to help jump-start the economy. While the CARES Act is most associated with providing $1,200 to the less fortunate, it is riddled with controversy for providing millionaires and corporations with much larger tax breaks.3 Adding fuel to the fire, it has been noted that "black and latino business owners are struggling to get pandemic assistance."4 These facts are the inspiration for this analysis.
For decades, SBA loans have been one of the best ways to finance a business. The loans come in two flavors: SBA Loan 7a and SBA Loan 504. Without going into the details of their nuanced qualifications and features, there is a slew of benefits to being awarded one of these loans. Unfortunately, there are certain cons that could completely wipe out businesses in this business climate. For instance, SBA loans take weeks to get approved, and lenders dislike startups or founders with poor credit. In the interest of their business, and of the nation, banks have a duty to revisit areas that have traditionally been a catalyst for entrepreneurship.
Small Businesses Generate 44 Percent of U.S. Economic Activity -
https://advocacy.sba.gov/2019/01/30/small-businesses-generate-44-percent-of-u-s-economic-activity/#:~:text=WASHINGTON%2C%20D.C.%20%E2%80%93%20Small%20businesses%20are,percent%20of%20U.S.%20economic%20activity
.
Gross Domestic Product -
https://www.bea.gov/data/gdp/gross-domestic-product
The CARES Act Sent you $1,200 Check but Gave Millionaires and Billionaires Far More -
https://www.propublica.org/article/the-cares-act-sent-you-a-1-200-check-but-gave-millionaires-and-billionaires-far-more
Few Minority-Owned Businesses Got Relief Loans They Asked For -
https://www.nytimes.com/2020/05/18/business/minority-businesses-coronavirus-loans.html
SBA Loans: What You Need to Know -
https://www.nerdwallet.com/article/small-business/small-business-loans-sba-loans
Best Banks For Business Loans -
https://www.startups.com/library/expert-advice/best-banks-for-business-loans
What it does
This project attempts to analyze data on Small Business Administration (SBA) loans provided by Enigma. The analysis tries to optimize targeting for businesses in need of SBA loans. Traditionally, SBA loans take weeks to get approved, so borrowers have turned to online lenders to speed up the process. If banks can target the areas in greatest need of a loan, then they can help rebuild the economy faster.
How I built it
I downloaded Enigma's SBA Loans (7a and 504) datasets from AWS Data Exchange and moved the files to AWS S3. Then I imported the data into AWS QuickSight and analyzed the datasets.
Challenges I ran into
Creating a manifest file is not as intuitive as it seems. I hit a lot of roadblocks that slowed down my ability to analyze the data.
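For reference, a QuickSight S3 manifest is a small JSON file pointing at the objects to import. Here is a minimal sketch; the bucket and key names are hypothetical, not the project's actual files:

```python
import json

# QuickSight expects this JSON shape when importing CSV data from S3.
# Bucket and object names below are placeholders.
manifest = {
    "fileLocations": [
        {"URIs": ["s3://my-sba-bucket/sba_7a_loans.csv"]}
    ],
    "globalUploadSettings": {
        "format": "CSV",
        "containsHeader": "true",
    },
}

print(json.dumps(manifest, indent=2))
```

Writing the manifest out with `json.dumps` avoids the hand-editing mistakes (stray commas, wrong nesting) that make the format feel unintuitive.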
Accomplishments that I'm proud of
Completing this submission.
What I learned
I learned a lot about moving data within AWS Data Exchange and how easy it was to analyze in AWS QuickSight.
What's next for Aiding Small Business With Intelligent Targeting
I’m going to run the data on AWS SageMaker and find some models to optimize targeting.
Built With
quicksight
s3
sagemaker
Try it out
us-east-1.quicksight.aws.amazon.com | Aiding Small Business With Intelligent Targeting | COVID-19 has changed the retail banking landscape forever. What hasn’t changed is small businesses being the heart of American capitalism. Banks can address the needs for small business loans. | ['Luis Vera'] | [] | ['quicksight', 's3', 'sagemaker'] | 29 |
10,150 | https://devpost.com/software/hellocoder-learntocode | splashScreen
homeScreen
Learn
offline courses
Each Course Dashboard
compiler
given programs
course content
inside the course
Inspiration
I faced a lot of problems while learning programming, which is at the root of my dreams, such as making myself a MERN developer. Each time I tried to learn programming I failed, simply because the traditional, boring way of learning from books and other dull resources kept pushing me off the path and out of focus. Millions of newbies and future developers face the same thing today, so we want to remove their stress by providing a next-level learning platform that makes programming easy to learn at any age and, most importantly, for people from any field: no matter your background, we guarantee you will learn to program easily on our platform if you show passion and a little interest.
What it does
It helps everyone learn modern technologies, including programming languages and development frameworks, whether or not you are an IT student; that is the specialty of our platform.
How I built it
Its development is not fully complete yet because it is a very large app, but we have covered about 65 percent of the development and the rest is still in progress. We therefore also provide a prototype of the app to give an understanding of what we actually want to achieve.
Challenges I ran into
The only challenge I faced was time management. We tried hard to finish within the given time, but because of some health issues it will take a bit more time; inshAllah we will make it possible soon.
What I learned
I learned a lot, from product design to backend development, and I am still learning. Many thanks to GDG for providing this opportunity.
prototype or a showCase
Conclusion
Once it is complete, it will hopefully attract a large audience and help the many students and tech enthusiasts who find learning to code difficult and boring.
Built With
customapis
firebase
firestore
flutter
learningpassion
Try it out
github.com | HelloCoder LearnToCode | Many students and tech enthusiast have dreams to learn programming and do wonder with it but they cant we provide them a HelloCoder app for learning which make everything easy and understandable | ['Uzair Leo', 'Ahmed Ali'] | ['Surprises'] | ['customapis', 'firebase', 'firestore', 'flutter', 'learningpassion'] | 0 |