hackathon_id int64 1.57k 23.4k | project_link stringlengths 30 96 | full_desc stringlengths 1 547k ⌀ | title stringlengths 1 60 ⌀ | brief_desc stringlengths 1 200 ⌀ | team_members stringlengths 2 870 | prize stringlengths 2 792 | tags stringlengths 2 4.47k | __index_level_0__ int64 0 695 |
|---|---|---|---|---|---|---|---|---|
10,280 | https://devpost.com/software/seemonuments-online | Inspiration
In school we often see that when we are taught about tourist places in geography, we only see the picture in a textbook, which makes the learning boring. We want to visit the places we learn about, but not everyone can afford that and with the pandemic, it is not possible. So our team came up with the idea through which students can see the complete 3D model for a particular monument or tourist place from their own classroom.
What it does
Our project presents the information of monuments along with their pictures. However, there is a special feature that the user can use to also see the 3D model of the particular monument. It will make learning about monuments fun and accessible for all. This project is especially made for young students.
How we built it
We built it using the following-
1)HTML
2)CSS
3)EchoAR
Challenges we ran into
Linking the AR models to our webpage.
Accomplishments that we're proud of
We are proud that our team had near-perfect coordination and the drive to complete the project on time.
What we learned
We learned about how to make 3D augmented reality models, build and link webpages to our app, and present a project in a video pitch.
What's next for SeeMonuments.Online
Currently our project has AR models for only a few monuments. If we get the chance to improve it, we will add more monuments with realistic models. We will also add a Virtual Reality feature so that people can feel as if they are really seeing the monuments.
Built With
css3
echoar
html5
Try it out
github.com
dtang31.wixsite.com | SeeMonuments.Online | Learn practically and enjoy! | ['Yash Sharma', 'Claudia McComb', 'DEVIN TANGUDU', 'Viraj Chhajed'] | ['echoAR Challenge'] | ['css3', 'echoar', 'html5'] | 6 |
10,280 | https://devpost.com/software/envitweet-sn7gaj | main page
Inspiration
Climate change is a problem that many people are aware of; however, our lifestyles are not changing as much as they can to help the problem. Many government leaders have spoken out about it, but are not making a huge difference in how they run their country. We decided to create a website to show the change that has to be made to adapt to and mitigate climate change, and allow people to post tweets about climate change.
What it does
It is a forum for people to tweet about climate change to better educate themselves, and it allows people to see the difference between government officials' tweets and the reality of how they run their countries.
How I built it
We built it by creating a website using JavaScript, HTML, and CSS, and we web scraped quotes and statistics from the internet.
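As a rough sketch of the scraping step, assuming quotes sit in `<blockquote>` elements (the page structure here is made up, not the team's actual target site), Python's standard-library `html.parser` is enough:

```python
from html.parser import HTMLParser

class QuoteScraper(HTMLParser):
    """Collects the text of every <blockquote> element on a page."""

    def __init__(self):
        super().__init__()
        self._in_quote = False
        self.quotes = []

    def handle_starttag(self, tag, attrs):
        if tag == "blockquote":
            self._in_quote = True
            self.quotes.append("")

    def handle_endtag(self, tag):
        if tag == "blockquote":
            self._in_quote = False

    def handle_data(self, data):
        if self._in_quote:
            self.quotes[-1] += data.strip()

# In the real project the HTML would come from an HTTP request;
# a static snippet stands in for it here.
page = ("<html><body><blockquote>Act now.</blockquote>"
        "<p>filler</p><blockquote>Save Earth.</blockquote></body></html>")
scraper = QuoteScraper()
scraper.feed(page)
```

The same handler pattern extends to statistics tables by watching for `<td>` tags instead.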
Challenges I ran into
The website took a while to build, and we were not able to set up a domain for it.
Accomplishments that I'm proud of
We were able to successfully create the website, with tweets, tables, and graphs.
What I learned
I learned how to web scrape in a time crunch and my partner learned how to code a website well very quickly.
What's next for EnviTweet
We hope to finish the website with the tweets, graphs, and tables on them. We hope people use it as a platform to write their thoughts about climate change.
Built With
css
html
javascript
python
Try it out
github.com | EnviTweet | Platform for people to tweet about climate change and learn about what steps need to be taken to save the environment. | ['Vishnudev Poil', 'Aurna Mukherjee'] | [] | ['css', 'html', 'javascript', 'python'] | 7 |
10,280 | https://devpost.com/software/qfit-vgu1mo | Inspiration
Our inspiration was our own personal fitness journey during quarantine. We’ve all been getting into fitness during quarantine. We’ve been watching exercise videos and following exercise guides by people such as Elevate Yourself and BullyJuice on Youtube. We hope our app can help others improve their fitness as well.
What it does
QFit allows the user to personalize a quarantine workout focusing on either the arms, core, or legs. Once the user has chosen an area of their body to focus their workout on, we allow them to choose a difficulty level so they can pace themselves. Once the user has personalized their workout, they are given a list of exercises to do, along with a video of each exercise showing them how to maintain proper form while they work out along with it. Every exercise makes use of body weight so the user won't have to spend money on workout equipment.
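The personalization logic described above could be sketched as a simple lookup; the exercise names and keys here are illustrative guesses, not QFit's actual catalogue:

```python
# Hypothetical workout catalogue keyed by (focus area, difficulty);
# the exercise names are illustrative, not QFit's actual list.
WORKOUTS = {
    ("arms", "easy"): ["wall push-ups", "arm circles"],
    ("arms", "hard"): ["diamond push-ups", "pike push-ups"],
    ("core", "easy"): ["crunches", "plank (30s)"],
    ("core", "hard"): ["v-ups", "plank (2 min)"],
    ("legs", "easy"): ["squats", "calf raises"],
    ("legs", "hard"): ["jump squats", "split squats"],
}

def build_workout(area, difficulty):
    """Return the body-weight exercise list for the chosen area and level."""
    try:
        return WORKOUTS[(area, difficulty)]
    except KeyError:
        raise ValueError(f"no workout defined for {area}/{difficulty}")
```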
How we built it
We built QFit using Android Studio. We coded the homepage screen, a bar allowing the user to switch to a workout screen, radio buttons allowing the user to personalize a workout, and videos to go along with the workouts provided.
Challenges we ran into
Some difficulties we encountered during this hackathon were coding using Android Studio and splitting up the work efficiently. We had trouble making our app be able to change tabs, and our app kept crashing when a radio button was pressed. Additionally, we had trouble getting videos to play on the app. Despite these difficulties, our group was still able to produce an app in the 24 hours provided.
Accomplishments that we're proud of
Some accomplishments that we’re proud of are being able to participate in our first hackathon, and produce a somewhat usable app using a language we weren’t familiar with. Additionally, editing our video and watching the end result was very satisfying.
What we learned
We learned how to code in Android Studio, how to edit a video using premiere pro, and how to efficiently work together as a group. Additionally, we learned what mistakes not to make during our next hackathon.
What's next for QFit
Next for QFit is making our videos function properly, adding workouts for people who have exercise equipment, creating a larger variety of workouts to do, and allowing the user to personalize their workout more by adding more areas of the body to work out, such as the chest, shoulders, etc. Moreover, we hope to add a tab tracking the time spent working out, the amount of calories each workout burns, and the total number of repetitions they’ve done.
Built With
android-studio
java
Try it out
github.com | Qfit | QFit - Hackitbetter Hackathon | ['JOEVIN CHEN', 'JUSTIN YU', 'JUN Z', 'STEVEN LIN'] | [] | ['android-studio', 'java'] | 8 |
10,280 | https://devpost.com/software/drigital-covid-tracker | Inspiration
Being that we are all students in the final year of secondary school, we've seen what teachers have to go through (at least in our area) to deal with Covid-19. Just keeping track of students is a headache in and of itself, yet it is fundamental to preventing the spread of Covid-19, as teachers need to be aware of who has been in contact with whom. As of right now, it is done through hundreds of paper logs filled in by the students as they leave or enter the classroom. The times that are filled in are inaccurate, and the students are required to fill out a physical log (leading to contact).
Instead, we believed a modern approach where students can fill out these logs electronically would be safer and more efficient. They could track the times more precisely and would allow the administrators to easily store and view the logs without sifting through hundreds of physical logs. We thought that only applying this to schools would be silly, so generalizing it to workplaces and other areas was a natural step.
What it does
It is designed not to just help teachers, but other workplace administrators as well. It is a convenient tool to help track where people are going, and at a precise time. While useful as a general tool, it is also particularly useful today with dealing with Covid-19. When the inevitable does happen and someone in the administration catches it, TrackerInout allows the administrator to effectively see the activity of this individual. It is then much easier to quickly see who could have reasonably come into contact.
In terms of functionality, our project allows administrators to sign in using Google. They can then create a room (using a code). Anyone with this code can sign in and out, sending a log including the time, reason, and direction to the administrator.
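A minimal in-memory sketch of such a room and its sign-in/out log (all names here are hypothetical; the real app keeps this in Firebase):

```python
from datetime import datetime

class Room:
    """In-memory stand-in for a TrackInOut room (hypothetical names)."""

    def __init__(self, code):
        self.code = code   # the code people use to join the room
        self.logs = []

    def record(self, person, direction, reason, when=None):
        """Append one sign-in/out entry with its time, reason, and direction."""
        if direction not in ("in", "out"):
            raise ValueError("direction must be 'in' or 'out'")
        self.logs.append({
            "person": person,
            "direction": direction,
            "reason": reason,
            "time": when or datetime.now(),
        })

room = Room("ABC123")
room.record("Alex", "out", "washroom")
room.record("Alex", "in", "returned")
```

Timestamping on the server side like this is what makes the logs more precise than the paper version.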
How we built it
For this project, we started by coding in HTML, CSS, JavaScript, and Firebase for backend.
Challenges we ran into
As none of us were experienced with Firebase, it required a lot of effort and time to learn the necessary tools to keep the site running smoothly. This ate a lot of our time and required us to work faster for the remainder of this hackathon.
We were only able to get the login function to work when signing in with Google. We also created a quick UI so that we could focus on the backend, and learn how to write and read from the database using JavaScript.
Accomplishments that we're proud of
As this is the first hackathon for all the members of our team, we did not know what to expect coming into this challenge. We are all also relatively new to backend programming. We are extremely proud of being able to learn how to use Firebase to an extent within 12 hours, and then be able to read that data.
What's next for TrackInOut
Because Google is so ubiquitous, having a sign in with Google option is convenient. However, we would like to implement a feature where administrators can log in using any type of email, allowing a better varied login experience.
Other features include different tools to help administrators view the logs. We would like to implement a search feature that allows administrators to see all the logs for a particular person. This could help check who they’ve been in contact with, or if they’ve been taking too much break time. A feature that allows them to download the logs in an excel or google spreadsheet format would also allow them to effectively view their logs independently.
Because time was so short, we didn’t get to create the clean design we wanted to. This would be a natural step to include as we work on the site in the future.
optimization of backend
Built With
css3
firebase
html5
javascript
node.js
Try it out
github.com | TrackInOut | A simple check in/out program to facilitate logistical processes. | ['DHIRHAN KANESALINGAM', 'Jaimin Jaffer', 'Alex L', 'Matthew Melanson'] | [] | ['css3', 'firebase', 'html5', 'javascript', 'node.js'] | 9 |
10,280 | https://devpost.com/software/14-day-quarantine-nbo7h4 | Inspiration
Our inspiration for this project was from the sample ideas, especially the idea where a virtual robot could be created to keep self-quarantiners company. We wanted to modify it so people could use their time to entertain themselves while doing something relevant to the pandemic.
We also remembered that during the quarantine, Instagram and TikTok challenges became super popular, so we wanted to create a challenge where it helps the person who's doing it, their community, and positively influences others, as opposed to just tagging others or circulating a hashtag.
That's where 14 Day Quarantine was born!
What it does
14 Day Quarantine is a website which has a challenge for each day of your self-quarantine. By clicking on the day of your quarantine, you get a challenge. Each challenge has a link for you to read and complete! Some challenges involve learning more about COVID-19 from the official CDC website and taking a quiz to test your knowledge, while other challenges are more fun and involve putting together care packages or making cards to send to COVID patients! Creating your own mask and supporting local businesses are also part of the challenges :)
We didn't create an Instagram page or any social media for the challenge, but the challenge is designed to be very social -- virtually. You can share your progress as you complete the challenge using hashtags, and the 14 Day Quarantine social media is supposed to share participants' photos on their account and help promote social distancing and wearing masks!
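The core day-to-challenge mapping could be sketched like this (the challenge texts are paraphrased examples, not the site's exact wording):

```python
# Paraphrased example challenges; the site's actual wording differs.
CHALLENGES = {
    1: "Read the CDC overview of COVID-19 and take a short quiz.",
    2: "Make a card to send to a COVID patient.",
    3: "Craft your own mask.",
    # ...one entry per day, up to 14
}

def challenge_for(day):
    """Return the challenge shown when the user clicks a given day."""
    if not 1 <= day <= 14:
        raise ValueError("day must be between 1 and 14")
    return CHALLENGES.get(day, "Challenge coming soon!")
```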
How we built it
Originally, we created a prototype in Figma so that we knew exactly what we wanted our website to look like. We coded it in HTML/CSS, using Visual Studio Code as our text editor.
Challenges we ran into
1) We created modals so that the challenges would pop up; however, we didn't know how to combine the modals into the rest of the code.
2) Another challenge that we ran into while making the challenges (get the joke?) was collaborating on the website. Our internet kept going out, so we had to switch to working locally, and therefore could not work on the same files. This created complications later, as we were unable to merge all the files together: the CSS used by my brother's files was interfering with the CSS used by mine, creating a mess.
3) The third and final problem we ran into was with our animations. We originally wanted to make animations, except we had no experience with JavaScript, and we had no time to learn it either. We experimented for a bit, but then decided that we'd learn it at our own pace after the hackathon.
Accomplishments that we're proud of
Some accomplishments that we're proud of include:
1) Our graphics. We think that our website looks really aesthetically pleasing, which is really good for challenges. When the word challenge comes to mind, some people already start groaning! With a good-looking website, sometimes your motivation goes up, which is the case for me :) Good aesthetics definitely improve the interest factor for many people! The sliding little viruses are also super cute!
2) Our content. We're proud of the challenges we were able to create, and how official they look. We're proud of the quizzes we have created, and how our website turned out overall!
3) Our fake Instagram page. I photoshopped it all :)
What we learned
We learned so much from the hackathon!
1) This was our second hackathon, but our first time coding anything, and we learned a lot along the way!
2) We learned a lot more about how to use HTML/CSS, and how to make functional and responsive websites!
What's next for 14 Day Quarantine
Some additional features we wanted to implement were:
1) A user login that keeps track of your progress and gives you badges, etc., to keep you motivated!
2) A challenge count which keeps track of how many people have challenged themselves to date
3) A certificate to be given out after completing the challenge
4) Additional links for people who want to do/know more
Once we are satisfied with the end result, we might release it to the public via social media so that people can actually take part in the challenge!
Our user base is designed to be broad and applicable to everyone, so anyone can take part!!
Built With
css
html
Try it out
paneersamosa.github.io
github.com | 14 Day Quarantine | (Coded) A 14 day quarantine challenge dedicated to improving you and your community! | ['Sahithi Lingampalli', 'Uday Lingampalli'] | [] | ['css', 'html'] | 10 |
10,280 | https://devpost.com/software/tutora-kse7o0 | Inspiration
What it does
How we built it
Challenges we ran into
Accomplishments that we're proud of
What we learned
What's next for TuTora
Built With
flutter | but | . | ['Vihaan Dhaka', 'Arnav Bansal'] | [] | ['flutter'] | 11 |
10,280 | https://devpost.com/software/covid_lineup_app | hackathon_prep
Shopping for groceries is still a real need, and COVID-19 is still a great threat. We made this app to minimize transmission at places like stores, where it has the potential to occur the most. The app allows users and admins to view the COVID guidelines for any particular store. Moreover, instead of queueing up physically, customers can do it virtually on the app. Users can see the number being called and book or cancel a spot in the queue, while the administrators have more control over the number being called, etc.
What inspired us was our own and our families' experiences at grocery stores and the current second wave. We couldn't help but notice how many people were often packed together in queues for stores.
We used Firebase Firestore as a backend to store the store data along with the queue data. Flutter was used to code the project.
Since this was our first hackathon we faced many problems concerning organizing and dividing work. Because of COVID, we had to also collaborate remotely, and we learned using important tools such as github. But in the end, the experience was worth it, and we look forward to our next hackathon!
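The virtual queue described above can be sketched in a few lines; this is an in-memory stand-in for what the app keeps in Firestore, with hypothetical names throughout:

```python
class StoreQueue:
    """In-memory stand-in for one store's virtual queue (Firestore in the real app)."""

    def __init__(self):
        self._next_number = 1   # next ticket number to hand out
        self._now_serving = 0   # last number called by the admin
        self._holders = {}      # ticket number -> user id

    def book(self, user):
        """Give the user the next spot in the queue and return their number."""
        number = self._next_number
        self._next_number += 1
        self._holders[number] = user
        return number

    def cancel(self, number):
        self._holders.pop(number, None)

    def call_next(self):
        """Admin action: advance past cancelled spots to the next held number."""
        while self._now_serving < self._next_number - 1:
            self._now_serving += 1
            if self._now_serving in self._holders:
                return self._now_serving
        return None

q = StoreQueue()
first = q.book("alice")
q.book("bob")
q.cancel(first)   # alice leaves the queue before being called
```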
Built With
dart
java
kotlin
objective-c
swift
Try it out
github.com | StoreSafe | Help customers make safer decisions when shopping by creating a virtual waitlist. | ['ANUBHA JOSHI', 'lele Zhao', 'JANE WU', 'Mason Zhou'] | [] | ['dart', 'java', 'kotlin', 'objective-c', 'swift'] | 12 |
10,280 | https://devpost.com/software/nature-vr-scenes | .
Built With
openvr
unity | Nature VR Scenes | Simple stationary VR environments of nature | ['Michael Vlamis', 'Charlie Mackellar'] | [] | ['openvr', 'unity'] | 13 |
10,280 | https://devpost.com/software/breathe-o28fz3 | Inspiration
Our inspiration behind the app was our own lives and the lives of people around us. We see the heavy psychological burden that many high school students, adults, and even children carry on their backs, with depression, stress, and even frustration with the amount of work they may be getting. In addition, when these issues start to pile up, while the world may not actually be ending, it definitely feels as if it is, because there is an extent to which any person can deal with these issues. There is no reliable way to cope with these issues without running the risk of social stigma or insanely high prices, and this is where our solution comes in.
What it does
Breathe has specialized therapy and binaural beats for your needs; whenever you feel down, it is always accessible, with an ever-growing base of resources that you can turn to. Whether it be just wanting to listen to some relaxing music, finding a recreational activity to do, or a fun new comic strip to crack up at, Breathe has got you covered at every angle. Not only do we do so, but we encourage users to use our app in such a way that they will both cope with their current issues and prevent these same issues from happening in the future. Providing detailed plans, explanations, and coping techniques is of the essence, and Breathe is our solution.
How I built it
Our team built the cross platform mobile application on the framework React Native. We primarily used Javascript for both the front and backend as well as used numerous dependencies to support our many features. For example, the audio that is played from our application used the ‘expo-av’ package, we used ‘motion’ for our timers, ‘react-native-gesture-handler; for our animations, etc. In addition to the general components of the react native framework our team used other resources for things like authentication, trail directions, testing sites etc. We used firebase for our username and password as well as google authentication. We also used a postman API called COVID-19 testing sites to track local testing sites with links to directions and the points of contact.
Challenges I ran into
Some of the challenges we ran into were definitely on the interface front and some of the backend. To go into each: the user interface was particularly challenging, not because we weren't sure how to code the frontend, but because when we were creating this cross-platform mobile application, we wanted to give users as many alternatives and exercises as possible while being very careful not to fluster them. The second challenge we had was with the audio. There were often inconsistencies in audio across multiple devices, and as this was our first time using audio packages in React Native, we did avid research and were able to find a solution in the end.
Accomplishments that I'm proud of
Some of the accomplishments we are proud of include overcoming some of the challenges we faced. Despite having limited time, we were able to pull together a suitably functional cross-platform application with a frontend and backend. One thing unique about this project is that in addition to the coding aspect, we also had to do quite a bit of research on different therapies and accumulate some of the resources manually to be put at the user's disposal. Being able to incorporate all three of these things successfully was a great accomplishment in and of itself.
What I learned
We have learnt more about react native, more about the different dependencies that we can use to support newer projects and features, we gained a further grasp and experience on firebase and overall were able to affirm our grasp on the principles of mobile application development.
What's next for Breathe
In an ideal scenario, we want to be able to further test the therapy and see how effective it is as well as get some more professional opinion on the therapies. From there, we would be able to provide more alternatives in case a certain one doesn’t work for someone, but at the moment we have provided general strategies that have proven effective to a majority population. In addition, we would like to develop a website going forward for the same purpose and in doing so creating more outlets for people to use the service.
Built With
firebase
google-maps
javascript
postman
react-native
Try it out
github.com | Breathe | Cross platform mobile application that allows people facing mental pressure or detriment to get the therapy they need as well as provide testing site locations and contact info nearest to the user. | ['Kathan Sheth', 'Ved Joshi', 'Sashv Dave'] | [] | ['firebase', 'google-maps', 'javascript', 'postman', 'react-native'] | 14 |
10,280 | https://devpost.com/software/saferos | Applications
Matan's "Bob Ross" MSPaint Drawing
Desktop
Inspiration
In the past months Matan and I have experimented with many different distributions of Linux in our free time. Our grandparents also asked us for help every once in a while. For the Hack it Better hackathon, we decided to create an operating system for them!
What it does
It's a Linux distribution which includes everything a senior might need: a file manager, web browser, and other things. Because it is built on Arch Linux, it's extremely secure and offers fast updates.
Challenges we faced
We wanted to create a simpler Zoom client because our grandparents asked for help many times, but sadly the SDK didn't support Linux, so that turned out to be a dead end. (If you find a way to do this, PLEASE contact us; we will be really glad to code it!)
How we built it
We worked together on a Virtual Machine, beginning with a plain Arch installation and then developed our own software.
Simultaneously we worked on the extension (which is also available separately!) using Atom.
What we learned
Better time management. Making this in 24 hours has been extremely wild and difficult.
What's next for SaferOS
The chromium extension will be extended for more websites: news sites, document editors, social media websites in order to allow seniors to use the web without any fear.
Additionally, we hope to create an easy installer using Calamares and video chat clients especially for the current pandemic so the elderly can be independent.
Built With
chromium-extension
css
gnome
html
javascript | SaferOS | A simple and easy-to-use operating system for senior citizens | ['Matan Rafalovitz'] | [] | ['chromium-extension', 'css', 'gnome', 'html', 'javascript'] | 15 |
10,280 | https://devpost.com/software/digital-contact-tracer | Add Entry
Report Case
Inspiration
We realized that one of the few things that has not been digitized is part of the contact tracing process, specifically the mandatory part where businesses need to keep track of their in-person customers. Wanting to create an application to help in some way with the pandemic, we decided this was the perfect fit.
What it does
The application exists as a better alternative to the current pen and paper process where businesses keep track of customer information for contact tracing. It allows businesses to add entries of who their customers are and when they were in contact with other people. The most innovative feature is the interface where testing centers can report the name of a person who has tested positive, and our application will immediately produce a list of all of the names and contact information of the people who may have been in contact with this person during the virus incubation period. In addition, it provides a risk assessment for each person based on how direct and close the contact with the person who tested positive is presumed to be. This way, they can be notified as soon as possible.
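The overlap search at the heart of that feature might look like this. This is a simplified in-memory sketch: the real app queries MongoDB, weights how direct the contact was, and filters by the incubation period, all of which are omitted here.

```python
from datetime import datetime

# Each entry: (person, arrival, departure). The real app reads these from MongoDB.
visits = [
    ("ana",  datetime(2020, 11, 1, 9, 0),  datetime(2020, 11, 1, 10, 0)),
    ("ben",  datetime(2020, 11, 1, 9, 30), datetime(2020, 11, 1, 11, 0)),
    ("cara", datetime(2020, 11, 1, 12, 0), datetime(2020, 11, 1, 13, 0)),
]

def contacts(positive, visits):
    """Map each other visitor to their longest overlap (minutes) with the case's visits."""
    case_visits = [(s, e) for p, s, e in visits if p == positive]
    risk = {}
    for person, start, end in visits:
        if person == positive:
            continue
        for cs, ce in case_visits:
            overlap = (min(end, ce) - max(start, cs)).total_seconds() / 60
            if overlap > 0:
                risk[person] = max(risk.get(person, 0), overlap)
    return risk
```

Sorting the result by overlap length gives the risk ranking the writeup describes.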
How we built it
We built the web application using the MEAN stack: Angular for the front-end, Node and Express for the backend, and MongoDB to store the data in a database.
Challenges we ran into
Despite facing some challenges throughout development on the front-end, the backend code proved to be the most difficult and time-consuming aspect of the project. Specifically, the algorithm for generating the list of potentially infected people involved many challenges that took time to overcome. We also faced some problems on the server side that required a fair bit of research and debugging to fix.
Accomplishments that we're proud of
This was our first project using the MEAN stack, so we had to learn a lot as we went, and we are proud that we were able to create an application that functioned the way we planned.
What we learned
We definitely learned a lot about the MEAN stack structure, as well as specific syntax and algorithms that we were not familiar with beforehand, and I think that we have become more proficient in making MEAN stack projects and web applications in general.
What's next for Digital Contact Tracer
One thing that we are interested in doing is deploying our web application and trying it out on a small scale. One idea we had for this is trying to use it in our classroom at school, where we are currently using pen and paper to track when students leave the room.
Built With
angular.js
css
express.js
html
javascript
mongodb
node.js
typescript
Try it out
github.com | Digital Contact Tracer | An application that automates contact tracing after a confirmed positive COVID-19 test | ['Ross Cleary', 'Colin Chen'] | [] | ['angular.js', 'css', 'express.js', 'html', 'javascript', 'mongodb', 'node.js', 'typescript'] | 16 |
10,280 | https://devpost.com/software/illustree | Inspiration
Environmental sustainability is a huge issue in the world every day for a variety of reasons, one of which is the rise of technology. We wanted to use our programming skills to show the positive impact tech can also have on society, using it to raise awareness, advocate, and facilitate fruitful discussion.
It's often difficult to see the ways you slowly contribute to the overwhelmingly large amount of greenhouse gases, so we created a calculator to help with that.
What it does
Our project serves as an area of environmental awareness, featuring a page with information on the various contributors to carbon emissions and a calculator to show how much CO2 you output through your daily travels.
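The calculator boils down to multiplying distances by per-mode emission factors. A sketch, with illustrative factors rather than the site's actual figures:

```python
# Illustrative emission factors in kg CO2 per passenger-km, not the site's actual figures.
EMISSION_FACTORS = {
    "car": 0.19,
    "bus": 0.10,
    "train": 0.04,
    "bicycle": 0.0,
}

def daily_emissions(trips):
    """Total kg of CO2 for an iterable of (mode, distance_km) trips."""
    total = 0.0
    for mode, km in trips:
        if mode not in EMISSION_FACTORS:
            raise ValueError(f"unknown mode: {mode}")
        total += EMISSION_FACTORS[mode] * km
    return total
```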
How we built it
This website was created in primarily HTML, CSS, and Javascript, using the repl.it IDE.
Challenges we ran into
For a lot of us this was one of our first hackathons, so a lot of unforeseen issues came up that were pretty hard for us to solve. Something we spent a lot of time on was the JavaScript behind our calculator, because the function wasn't executing when it was supposed to. Thankfully, with the help of mentors, we were able to fix it and get it done.
Another issue we had to tackle was echoAR. We originally wanted to integrate it into a Qt app, since one of our members is proficient in C++, so as to utilise all of our strengths. However, while integrating our website was easy enough, echoAR proved to be more difficult than previously assumed. We then decided to postpone the app development aspect of Illustree as a 'next step'. This allowed us more time to work on the website, and what we had already made for the Illustree app could serve as a prototype of how we could further expand the website.
Accomplishments that we're proud of
A huge aspect of the project we're proud of is the design and UI, which is something we spent a large portion of our time on. Another issue that we tackled and overcame was the wide distribution of experience: although it was difficult, because some of us had never used Repl.it before, we all worked together to learn and teach each other how to use the software. Many of us also gained exposure to new languages, and we all walked away with something new.
Our team members also learned many advanced formatting techniques, one in particular being the moving background on our website, as well as defining more complex layouts and allowing flexibility.
What we learned
Coming from three different countries, this project offered us the opportunity to learn to work together alongside improving our technical skills. All of our programming proficiencies increased a lot through the practice we gained during the creation of this project.
What's next for Illustree
Moving forward, we're hoping to complete the implementation of the Illustree app, adding echoAR and OpenCV in conjunction with our calculator to see what else can be done to lower carbon emissions. We also want to gather more information about carbon emissions and address all of the various causes, and tackle more issues.
Built With
css
html5
javascript
Try it out
hack-it-better.sonnetx.repl.co
github.com | Illustree | Raising awareness and advocating for environmental sustainability, Illustree's mission is to help you understand choices you can make to reduce your carbon footprint to help the natural world. | ['Sonnet Xu', 'Kate Y', 'kxrrotಠ_ಠ Agbro', 'Maddy C'] | [] | ['css', 'html5', 'javascript'] | 17 |
10,280 | https://devpost.com/software/covid-19-screening-test | Inspiration
Recognizing the problem created by COVID-19, our team set out to create an innovative program that satisfies both health and community wellbeing. With the idea of many COVID-19 testing sites being oversaturated and the added danger of entering one, our app is designed to counter this ever-growing threat by providing people with a modern online, easy to use solution.
What it does
It's a screening test that asks the user a set of questions. It then calculates the likelihood that the user has COVID-19 and the precautions he/she should take.
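Such a screening score can be sketched as a weighted questionnaire. The questions, weights, and thresholds below are illustrative, not the program's actual medical criteria:

```python
# Illustrative questions, weights, and thresholds; not actual medical criteria.
QUESTIONS = [
    ("Do you have a fever?", 3),
    ("Do you have a dry cough?", 3),
    ("Have you lost your sense of taste or smell?", 4),
    ("Have you been in contact with a confirmed case?", 5),
]

def likelihood(answers):
    """answers: one boolean per question. Returns a rough risk label."""
    score = sum(weight for (_, weight), yes in zip(QUESTIONS, answers) if yes)
    ratio = score / sum(weight for _, weight in QUESTIONS)
    if ratio >= 0.6:
        return "high"
    if ratio >= 0.3:
        return "moderate"
    return "low"
```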
How I built it
We built the program through hard work and determination.
Challenges I ran into
It was difficult and tedious when working on the video.
Accomplishments that I'm proud of
We are proud that we started the hackathon using Visual Basic. It is very easy and efficient to use when creating a GUI.
What I learned
This was our very first hackathon. We learned that teamwork is very important and coordination is especially important in an online situation.
What's next for COVID 19 Screening Test
We hope this program could help many people and reduce the spread of COVID 19.
Built With
adobe
visual-basic | Covid 19 Screening Test | Recognising the problem created by covid-19, our team set out to create an innovative program that is designed to provide people with a modern and online way to be tested if one has covid-19. | ['DANNY TRAN', 'Kevin Yu', 'Tony Yang', 'NGUYEN-HANH NONG'] | [] | ['adobe', 'visual-basic'] | 18 |
10,280 | https://devpost.com/software/save-fl81q0 | Inspiration
What it does
How we built it
Challenges we ran into
Accomplishments that we're proud of
What we learned
What's next for Example React Native Project
Built With
react | Example React Native Project | Example React Native Project | ['Om Joshi', 'Neeral Bhalgat'] | [] | ['react'] | 19 |
10,280 | https://devpost.com/software/dont-get-to-close | Inspiration
COVID-19 has affected a lot of how we live, but sometimes we still need to go outside, to buy daily necessities or because we can't work from home. With social distancing, we must keep our distance from others; this is one of the most substantial ways we can slow the spread and keep ourselves safer. But sometimes people don't know how far they have to keep their distance. So I made an application that helps people keep their distance from others.
What it does
It gives users a measurement/prediction of how far they have to keep their distance from others, and whether their position is safe or not.
How I built it
I built it using Python and OpenCV, with a physics formula I found on the Internet.
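The author doesn't share the exact formula, but a common approach in OpenCV projects like this is the pinhole-camera model: an object of known real width W that spans p pixels in the image is at distance d = W·f/p, where f is the focal length in pixels. A minimal sketch under that assumption (the face width, calibration numbers, and safety threshold are all illustrative):

```python
# Pinhole-camera distance estimate: d = (known_width * focal_px) / pixel_width.
# The focal length is calibrated once from a reference photo.

def focal_length_px(known_distance_m, known_width_m, measured_px):
    """Calibrate focal length (in pixels) from one reference photo."""
    return (measured_px * known_distance_m) / known_width_m

def distance_m(known_width_m, focal_px, measured_px):
    """Estimate distance to an object of known real width."""
    return (known_width_m * focal_px) / measured_px

# Calibration: assume a 0.16 m wide face spans 200 px at 1.0 m -> f ≈ 1250 px.
f = focal_length_px(1.0, 0.16, 200)
# The same face now spans 100 px in the frame -> roughly 2 m away.
d = distance_m(0.16, f, 100)
print(round(d, 2), "m,", "safe" if d >= 1.5 else "too close")
```

In the real app, `measured_px` would come from the width of a face bounding box detected by OpenCV each frame.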
Challenges I ran into
My phone is broken so I couldn't get it to work on my phone, even though it's actually made for phones. But I mainly wanted to present the idea, so I give an example on my computer.
Accomplishments that I'm proud of
I'm proud that I could make it happen.
What I learned
I learned physics, haha.
What's next for DONT GET TO CLOSE
Porting: making the application work on phones, and giving it some UI/UX so it will be easy for people to use the app.
Built With
machine-learning
opencv
python
Try it out
github.com | DONT GET TO CLOSE | sometimes people need to know how far the safe distance to avoid COVID | ['Muhammad Aucky Aisy Sudjono'] | [] | ['machine-learning', 'opencv', 'python'] | 20 |
10,280 | https://devpost.com/software/smiles-with-care | Logo
Home Page
Home Page 2nd image
Home page 3rd image
Senior Citizen Care Page
Letter to senior Citizen
Survey
Inspiration
I wanted to spread a message of Fun with care! during the COVID-19 pandemic, and to create a virtual community for fun stories.
What it does
Fun with Care lets users post their fun stories and write letters to senior citizens. Authors share their stories, readers review the funny stories, and authors earn badges. Readers can also give feedback through the survey form; Fun with Care reviews all reader surveys to guide new features.
How I built it
I built it with HTML5 and JavaScript.
Challenges I ran into
Time was a challenge.
Accomplishments that I'm proud of
I am proud to spread the message of Fun with care!
What I learned
I learned to do effective time planning to submit projects.
What's next for Fun with a Care!
I want to do more community projects around kindness, empathy, and mental-relaxation tools.
Built With
html5
javascript
Try it out
github.com
share-a-fun-with-care.sjj3.repl.co | Fun with a Care! | Spread Fun with care in a community! | ['Sally Jain'] | [] | ['html5', 'javascript'] | 21 |
10,280 | https://devpost.com/software/dish-drop-m8p0kn | Inspiration
Social good and economic impact! We were inspired by the large number of homeless people who struggle to find a meal every day and low-income families struggling financially, which is also exacerbated by the Covid-19 pandemic. Due to high unemployment rates, many people are now struggling to put food on the plate and restaurants are going out of business. We realized that every day, due to underwhelming demand, many restaurants are forced to discard excess food at the end of the day. We thought that the same food that is thrown out would happily be accepted by those who don't know where their next meal is coming from. We put two and two together and decided to create this app in order to not waste that excess food and put it to good use by having it donated to homeless shelters, helping those who have become impoverished because of the pandemic as well as those who were already low-income. This app is especially useful as long as we're in a recession and in the pandemic, but is also useful even after, because there are always low-income families in search of food.
What it does
Dish Drop is a food delivery app that supports the homeless and low-income families as well as restaurants by connecting the two, allowing for restaurants to donate their excess food to homeless shelters instead of throwing it and also giving participating restaurants tax credits for their donations. Restaurants can send out a notification stating that they have food they have ready to donate. Homeless shelters can then send a request for the food, triggering the transaction. A Dish Drop driver is located, picks up the restaurant’s excess food, and delivers it to the homeless shelter or meal program (soup kitchen, food bank).
How we built it
We split up the work between the four of us by front-end and back-end work. Two of us worked on designing the User Interface and User Experience, which was created using Storyboard/UIKit in Xcode. The other two of us used Swift in order to code the back-end portion and utilized both Google Cloud’s Firebase Firestore and Real Time Database. We used Firestore in order to store information about the different restaurants and meal programs, while using Real Time Database in order to keep track of orders, order status, driver information, and driver location. This helped us provide this data in real-time to all parties involved regarding where the food is.
Challenges we ran into
We ran into challenges when trying to figure out the whole back-end flow of how the information should be passed. We originally thought that we would utilize a system where homeless shelters and other meal programs can request food and restaurants would fulfill them, but we decided to make the design change to make it so that restaurants would indicate food they have to donate at a certain time, and then notify homeless shelters about the available food, which they could claim. This information would now be passed onto a driver, who would be able to reserve a delivery and then go and carry it out.
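The donate → claim → reserve → deliver flow described in this writeup can be sketched as a tiny state machine. This is an illustrative Python sketch only; the real app is written in Swift and tracks order status in Firebase Realtime Database:

```python
# Hypothetical model of the Dish Drop donation flow:
# restaurant posts -> shelter claims -> driver reserves -> delivered.
VALID_TRANSITIONS = {
    "posted": "claimed",     # a shelter claims the available food
    "claimed": "reserved",   # a driver reserves the delivery
    "reserved": "delivered", # the driver drops it off at the shelter
}

def advance(order):
    """Move an order to its next status, rejecting invalid transitions."""
    nxt = VALID_TRANSITIONS.get(order["status"])
    if nxt is None:
        raise ValueError(f"cannot advance from {order['status']!r}")
    order["status"] = nxt
    return order

order = {"restaurant": "Example Diner", "status": "posted"}
for _ in range(3):
    advance(order)
print(order["status"])  # delivered
```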
Accomplishments that we're proud of
We were able to utilize tools that we knew about in order to create a socially applicable project that has wide implications. We are proud of creating a food delivery application that can support the homeless and help restaurants during the pandemic and also after.
What we learned
We learned a lot about how to utilize location data. We also learned about how to use Swift and UIKit and MapKit in order to utilize Apple Maps to display the data. We also learnt how to use Google Cloud’s Realtime Database in order to see the orders coming and being accepted in real-time as well as the driver’s location.
What's next for Dish Drop
As of right now, Dish Drop provides a layout for how restaurants will be able to donate food to other homeless shelters. We’d work on also contacting more restaurants and homeless shelters to use the app in order to be able to feed more people as well as save more food wastage. We’d also largely target incentivizing drivers to join the program, either through a payment service which restaurants can use to pay drivers to deliver the food. We will also work on creating a rating system for drivers in order to give more information to restaurants and meal programs about who the driver is and more information about them. We would work on also creating profiles for restaurants and homeless shelters to give more statistics on the amount of people that live there, where they usually order from, etc. ( for homeless shelters) and give statistics on how much food they usually give and average ratings that they have gotten from meal programs about the quality of food. We’d also want to potentially implement machine learning to be able to predict meal ordering patterns for meal programs as well as placing meal information for restaurants and add scheduling features.
Built With
firebase
firestore
google-cloud
mapkit
storyboard
swift
uikit
Try it out
github.com | Dish Drop | Routing excess food from restaurants to homeless shelters and other meal programs to decrease food wastage and help impoverished families during the pandemic. | ['Dot Developer', 'Kartik Punia', 'Akshay Talkad', 'Siddharth Cherukupalli'] | [] | ['firebase', 'firestore', 'google-cloud', 'mapkit', 'storyboard', 'swift', 'uikit'] | 22 |
10,280 | https://devpost.com/software/covid-19-diagnosis-with-deep-convolutional-neural-networks | Unsped up video here:
Demo at
https://www.youtube.com/watch?v=o8Ey9no-3BQ&feature=youtu.be
How to use it
Our model is deployed to a web application.
https://covid-chest-xrays.herokuapp.com/
Simply click the URL and upload images for diagnosis. Or, if you only want to test it, download COVID-19, viral pneumonia, or normal chest x-rays from Google and upload them for diagnosis. It's that easy.
Inspiration
COVID-19 nasal swabs have been widespread for months. However, they are intrusive and harmful for people with sensitive sinuses.
Studies have proposed chest imaging, but found that “No single feature of covid-19 pneumonia on a chest radiograph is specific or diagnostic, but a combination of multifocal peripheral lung changes of ground glass opacity and/or consolidation” (Cleverly, 2020)
This describes the functionality of convolutional neural networks; they are great at extracting many features. Therefore, we decided to apply them to this task.
What it does
Given a chest x-ray in DICOM, JPEG, or PNG form, our application will diagnose it as COVID-19 infected, viral pneumonia, or normal. Our solution is more robust and accurate than existing solutions, as well as being extraordinarily fast (see our video).
Performance is as follows:
EfficientNetB0 (runs on 500 MB RAM/disk, 1 CPU core): Test AUC 0.90, Test Accuracy 95.44%
EfficientNetB4 (requires a GPU for fast inference): Test AUC 0.92, Test Accuracy 96.92%
How I built it
We used TensorFlow to train the model. Since we needed to iterate quickly to improve the model in only 24 hours, we used very powerful cloud GPUs (Nvidia Tesla P100) on Kaggle. For more robustness, we used augmentations on the images during training, and train-test-split (TTS) validation (k-fold would have been marginally better). Our notebook for training is
here
. We ended up training two models. One model was a small one optimized for efficiency, which was designed to run on 500mb of RAM/disk, and a single CPU core. The other was an extremely large model optimized purely for performance, and achieves slightly better scores. It must be run with a dedicated GPU (4+gb vRAM).
Our frontend for Heroku is using Streamlit, a python frontend library. However, our github repository has a much more functional frontend that couldn't be deployed to Heroku, and instructions to run this frontend are in the repository. However, we believe that for most users, our Heroku frontend is adequate.
What's next for us
Ideally, we would have deployed our better model to a web hosting service. However, it needs to be run on a GPU otherwise speed is awful, so we just opted for a smaller model with worse performance, but can be run extremely quickly on extremely limited resources. We would also like to use react/flask for our frontend, so we can have demo images and other functionality.
Challenges we faced
The modelling part went surprisingly smoothly, as the dataset was easy to use and I have tons of previous experience in computer vision. Training took a while, but a powerful GPU allowed us to finish training in time. Our frontend hit some roadbumps however. We initially used flask and react for the frontend, but our end goal was to deploy to Heroku. We couldn't figure out how to render JS with flask, and as a result, needed to run two commands simultaneously to route flask through react.js. As a result, it could not be deployed to Heroku. We had to create a new frontend quickly, this time in Streamlit, to deploy to Heroku. The new frontend is less aesthetic and much less functional than originally planned, but it was a necessary compromise.
References
[1] Cleverley, J., Piper, J., & Jones, M. M. (2020). The role of chest radiography in confirming COVID-19 pneumonia. BMJ, m2426.
https://doi.org/10.1136/bmj.m2426
Built With
flask
machine-learning
react
tensorflow
Try it out
covid-chest-xrays.herokuapp.com
github.com | COVID-19 diagnosis with Artificial Intelligence | Diagnose COVID-19 from Chest x-rays (Deep Convolutional Neural Networks) | ['Stanley Zheng', 'Sai Kishore Bhujangari'] | [] | ['flask', 'machine-learning', 'react', 'tensorflow'] | 23 |
10,280 | https://devpost.com/software/superlottery-play-and-win | Front-End
Back-End
Entering Into Lottery - Making Transaction of Ether
After Successfull Transaction - Got Enterance Into LiveLottery
Inspiration
I got this idea from my friend, who plays the lottery and always wanted to know how it works. He had never seen any lottery project implemented with Ethereum and blockchain. So from his idea, I thought of creating an online lottery system that can be played anytime, anywhere, by anyone. The only entry requirement is submitting some ether, and then you are ready to go.
What it does
It asks you to enter an amount of at least 0.5 ether to join the lottery. After entries close, the winner is chosen at random by the manager (which is me), and the pot of ether is transferred directly to the Ethereum wallet the winner paid from. A live demonstration is in the video, or you can simply visit
https://lotteryproject.netlify.app/
and play yourself. Having MetaMask installed makes this easier.
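The real contract is written in Solidity, but the entry and winner-selection rules can be illustrated with a hypothetical Python simulation (the addresses, amounts, and `Lottery` class here are made up for illustration, not the contract's actual code):

```python
# Illustrative simulation of the lottery rules: players enter with at
# least 0.5 ether, and one random entrant wins the whole pot.
import random

MIN_ENTRY = 0.5  # ether

class Lottery:
    def __init__(self):
        self.players = {}  # address -> total amount entered

    def enter(self, address, amount):
        if amount < MIN_ENTRY:
            raise ValueError("minimum entry is 0.5 ether")
        self.players[address] = self.players.get(address, 0) + amount

    def pick_winner(self, rng=random):
        """Pick a random entrant, pay out the pot, and reset the round."""
        winner = rng.choice(sorted(self.players))
        pot = sum(self.players.values())
        self.players.clear()
        return winner, pot

lottery = Lottery()
lottery.enter("0xAlice", 0.5)
lottery.enter("0xBob", 1.0)
winner, pot = lottery.pick_winner(random.Random(0))
print(winner, pot)  # pot is 1.5 ether
```

Note that on-chain randomness is harder than `random.choice`; Solidity contracts often derive it from block data, which has its own trust caveats.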
How I built it
I used remix.ethereum.org to write the Solidity code for my smart contract, created the front end using React, and connected the two in the back end. It uses MetaMask's Rinkeby test network for demo transactions; real ether can also be used.
Challenges I ran into
I was new to blockchain, so I learned everything in 7 days: MetaMask, Ethereum, Solidity, and blockchain itself. I took this as an opportunity to learn and enhance my knowledge of blockchain.
Accomplishments that I'm proud of
While learning the ins and outs, I picked up various technologies from this hackathon that I'm proud to know.
What I learned
I learned what blockchain, cryptocurrency, and MetaMask are, among many other things, which led me to create the LiveLottery project.
What's next for SuperLottery - Play And Win
I will add more features as I keep learning more about it.
Built With
css3
ethereum
html5
metamask
node.js
react
solidity
Try it out
github.com
lotteryproject.netlify.app | LiveLottery - Play And Win | This is a live Lottery Project where anyone can try luck while contributing minimum 0.5 ether . The winner will be random and choosen by the Contract Owner. It is based on Solidity and BlockChain. | ['Kartik Agarwal', 'Manan Jain'] | [] | ['css3', 'ethereum', 'html5', 'metamask', 'node.js', 'react', 'solidity'] | 24 |
10,280 | https://devpost.com/software/hackitbetter2020 | MaskNet: Automated Face Mask Detection For Better COVID Data Analytics
Inspiration
We are living in a public health crisis, where a polarized political atmosphere and general social unrest has led to unnecessarily complex and political stances on wearing face masks. One of the major reasons that much is unknown about the spread of COVID and why tactics such as contact-tracing are sometimes fruitless is because of the lack of a reliable and widespread method of collecting data on metrics such as face mask usage. In order to provide such a solution, MaskNet was developed. MaskNet is an automated face-mask detection pipeline to provide better public health data analytics and allow authorities to pinpoint COVID risk areas. The software could also be used to generate real-time maps so that users can see which places around them are frequently visited by non-mask users and can exercise caution in those zones.
What it does
MaskNet is a live camera system that detects whether someone is entering a store with or without a facemask. It then sends this info to a firebase database which produces a visualization on a website as you can see in our demo video. This way, people can see high COVID risk areas (where people are not wearing face masks regularly) and take appropriate measures. It can also be used by authorities to determine areas that need medical supplies and where there is high inequality.
How we built it
The data used in this project consisted of 4236 unique images containing a total of 15411 unique faces, separated into 20 different classes based on the type of facial covering. Since this was a binary classification problem, only the images from three of these classes were considered (no covering, lower-face mask, and mask worn incorrectly). The first problem to solve was that of accurately extracting faces from an image containing one or more. This was done through a Multi-task Cascaded CNN method (MTCNN). The second issue to resolve was a heavy class imbalance in the image dataset favoring masked faces, and this was solved through under-sampling the data and implementing a weighted loss function. To classify whether a person was wearing a mask, a CNN architecture was developed and trained with over 5500 unique face images of vastly differing resolutions, zooms, orientations, and skin colors. After optimizing and fine-tuning, the final test accuracy of the model was 96.8%, which is an achievement considering that a decent proportion of the training images were excessively blurry or had other occlusions such as shadows or graininess. Finally, the two networks were combined into a single system that segmented, extracted, and resized faces from an image frame and then evaluated the face-mask CNN classifier on each of the faces. This was then packaged into a program that ran the composed pipeline on a video stream and showed live predictions.
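The weighted loss function mentioned above isn't spelled out, but a common choice for class imbalance is inverse-frequency class weights, where rarer classes get proportionally larger weights so the majority "masked" class doesn't dominate training. A minimal sketch (the class names and counts below are illustrative, not the actual dataset split):

```python
# Inverse-frequency class weights: weight_c = total / (n_classes * count_c).
# A perfectly balanced dataset gives every class a weight of 1.0.
from collections import Counter

def class_weights(labels):
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    return {c: total / (n_classes * n) for c, n in counts.items()}

# Illustrative imbalanced label distribution.
labels = ["mask"] * 800 + ["no_mask"] * 150 + ["incorrect"] * 50
weights = class_weights(labels)
print({c: round(w, 2) for c, w in sorted(weights.items())})
```

In Keras, a dict like this (keyed by class index) can be passed to `Model.fit` via its `class_weight` argument to scale each sample's loss.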
We used the center prop to dynamically change the range and zoom of the map when a store is pressed.
On componentDidMount, we initialize an event listener for changes to the Firebase Realtime Database so that people counts and mask counts can be updated as they are processed by the backend
Challenges we ran into
Challenges for the web: one challenge in the web development was designing a user interface that is simple to use and also visualizes the metrics received from the MaskNet camera backend. We use an Express backend to index the received bounding-box coordinates and map each one as either entering or leaving the store. The backend instance also sends this data to Firebase Realtime Database, and our front end listens to the database and automatically re-renders with any change to the store customer count as well as the with-mask customer count.
Challenges for the ML: drawing lines with OpenCV and keeping track of all the bounding-box coordinates was kind of hard. It was also difficult to predict when someone entered, since we needed to keep track of both the direction the person was going and then threshold the x value of the center of the face bounding box based on that. The algorithm to track each face was also a challenge. Due to the time constraint, we ended up using a naive approach: assume that the same face has the minimum Euclidean distance between the centers of its bounding boxes in consecutive frames. Based on this we were able to approximate the face's average velocity and then send a request to the Firebase database when someone had entered.
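The naive tracking approach described above can be sketched in a few lines: match each face in the new frame to the previous-frame face whose bounding-box center is nearest. This is an illustrative sketch of the idea, not the team's actual code (boxes are assumed to be `(x1, y1, x2, y2)` tuples):

```python
# Naive frame-to-frame face matching by minimum Euclidean distance
# between bounding-box centers.
import math

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def match_faces(prev_boxes, new_boxes):
    """Return {new_index: prev_index}, matching each new face to the
    previous face with the closest center."""
    matches = {}
    for i, nb in enumerate(new_boxes):
        c = center(nb)
        matches[i] = min(
            range(len(prev_boxes)),
            key=lambda j: math.dist(c, center(prev_boxes[j])),
        )
    return matches

prev = [(0, 0, 10, 10), (100, 0, 110, 10)]   # two faces in the last frame
new = [(98, 2, 108, 12), (3, 1, 13, 11)]     # same faces, slightly moved
print(match_faces(prev, new))  # {0: 1, 1: 0}
```

With the match in hand, the per-face velocity is just the change in center position between frames, which is what lets the system decide whether a face crossed the entry threshold.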
Accomplishments that we're proud of
Really proud of the accurate ML model that predicts whether someone is wearing a face mask. It even managed to predict whether someone is wearing the mask incorrectly, or if they're just covering their face with their hands. For this, kudos to the great quality data on kaggle. We're also proud of the algorithm that predicts the direction of movement and thresholds accordingly. This felt a lot like a traditional coding competition problem.
What we learned
Learned about doing advanced ML in python, interfacing with google maps with react, and in general combining these two different systems into one coherent pipeline. We learned that transfer learning is really powerful, and we learned some ways to reduce class imbalance when training a computer vision model. Lastly we learned that you shouldn't procrastinate until 20 minutes before the deadline to start your video :)
What's next
We hope to integrate more ways to keep track of COVID-19 in your community. By using public health sources and county-specific virus metrics, we hope to outline entire counties and move to a heat-map style of visualization. This will make it even easier for people to find stores to support while staying safe in this public health crisis.
Built With
firebase
javascript
jupyter-notebook
python
react
Try it out
github.com | MaskNet: Automated Face Mask Detection For Data Analytics | MaskNet is an automated face-mask detection pipeline to provide better public health data analytics and allow users to pinpoint COVID risk areas. | ['Ethan Sayre'] | ['Runner up Biomedical Imaging'] | ['firebase', 'javascript', 'jupyter-notebook', 'python', 'react'] | 25 |
10,280 | https://devpost.com/software/baymax-pofxqc | Meet Baymax
Our Inspiration
Simple UI
How we made Baymax
What's next?
Inspiration
We were inspired by the fact that there is no personal health assistant that is easy, practical, and readily available on your gadget(s), and even when there is one, the UI/UX may not be lively.
Then came the Big Hero 6 movie, which is great. Everyone loves Baymax, one of the main characters, who also happens to serve as a health assistant, thanks to his friendly demeanor and physical appearance. For this reason, we decided to build the app version of Baymax.
Baymax: your favourite healthcare companion, but an app.
What it does
Baymax lets users describe their illnesses and injuries. Users are then given the choice to rate their severity, and Baymax provides the appropriate response.
How we built it
We built it in Android Studio by using Kotlin.
Challenges we faced
We struggled to find appropriate times to meet and work together, as we were located in 3 different time zones.
Accomplishments that we're proud of
Despite living in 3 corners of the world, we were able to deliver a polished and practical application in less than 24 hours.
What we learned
We were able to improve our coding and presentation skills. Due to the time differences, we also learned to manage time more efficiently. Most importantly, as long as there is cooperation and teamwork, anything is possible.
What's next for Baymax
To expand our reach, we plan to extend support for iOS and the Web. We plan to use AR technology for a better user experience. Moreover, we also aim to use machine learning in order to provide better outcome(s) and a more personal touch to the user.
Built With
android-studio
kotlin
Try it out
github.com | Baymax | Your favourite healthcare companion is now an app! | ['markanthonyantao@gmail.com', 'Mark Antao', 'Hasib H.', 'manuelstefan150@gmail.com', 'Manuel Stefan Christopher'] | [] | ['android-studio', 'kotlin'] | 26 |
10,280 | https://devpost.com/software/donateaplate-wigx4m | Home page
Add Donation
Account Signup
Adding a category
Adding a custom Item
Inspiration
Every year, an estimated 1.3 billion tonnes of food is wasted globally, amounting to 2.6 trillion dollars annually, which is more than enough to feed all 815 million hungry people in the world ten times over. Although inefficient consumer habits are contributors, the majority of food waste comes from the supply chain, mainly distributors, retailers, and restaurants. As 12th-grade high school students from Bangalore, India, we envisioned our app DonateAPlate, which allows local restaurants, supermarkets, and individual donors to donate their excess unused food, daily or weekly, by setting up highly customizable donations through the app.
What it does
DonateAPlate allows local restaurants, supermarkets, and individual donors to donate their excess unused food, daily or weekly, by setting up highly customizable donations through the app. In addition, NGOs and other charity organizations can view and sort nearby donations for pickup by requesting donations from the donors. The entire system allows feasible communication between the two parties directly through the app, and also allocates points for each successful donation, calculated based upon distance, food weight, etc. Users can view monthly leaderboards to see how their social work stacks up against other users.
The app portrays a stunning UI design accompanied by an interactive user experience, allowing for clean navigation and modern usability.
How we built it
The app was developed on Android Studio deployed on the Gradle Framework with 20,000+ Lines of code, in Java, Kotlin & XML. Various APIs such as the Google Maps & Places APIs were integrated into the app. The Backend data storage was built on Firebase and implemented FireStore, Firebase Realtime Database, Firebase User Authentication & Firebase ML.
Challenges we ran into
Initially we were unsure whether we would be able to complete the app & fully implement it in time for the submission. We had to do a lot of adjustments like choosing a suitable database for efficient and feasible backend development, which is why we chose Firebase. In addition, we had to plan out the maximum number of features we could implement in the time interval, making adjustments along the way. But at the end, everything turned out well, and we have a stunning video accompanying our completed product.
What we learned
Throughout the course of the project, we learnt various UI/UX architecture practices as well as efficient models set up on the Firebase backend for efficient and economical database scaling & usage. Overall, we really boosted our app development skills through this project, and it was an enjoyable learning experience.
What's next for DonateAPlate
Since our app is logically scalable to all across the world, with virtually no constraints, since it acts as an efficient platform and link for restaurants/supermarkets to donate food to NGOs, with the sufficient help and guidance, we believe we can scale the app globally, and implement the objective with various restaurants / NGOs across the world, hopefully bringing about a smart, efficient and economical way to tackle the ongoing food crisis (which has been especially critical, due to the amplification caused by the pandemic), while at the same time, cutting down the unholy amount of food wasted by our species worldwide.
Built With
android
android-studio
firebase
google-cloud
google-maps
google-places
java
kotlin
Try it out
github.com | DonateAPlate | Are you ready to take a bite out of hunger? | ['Chandrachud Gowda', 'Rohit Kanagal'] | [] | ['android', 'android-studio', 'firebase', 'google-cloud', 'google-maps', 'google-places', 'java', 'kotlin'] | 27 |
10,280 | https://devpost.com/software/waterlogged-eib79k | Inspiration
Recently, there have been many droughts around the world. That got us thinking about the conservation of water: even though 71% of Earth's surface is covered in water, only 3% of Earth's water is fresh water. 1 in 7 people in the world don't have access to clean water, which is why it is super important to conserve water, keeping as much water pure and clean as possible and helping the environment! We made Waterlogged to spread awareness about the importance of water conservation and help people conserve water.
What it does
Waterlogged helps people to keep track of their water consumption by logging the different actions they do that involve water. There are lots of options for water consumption actions to log, like flushing a toilet, brushing your teeth, or taking a shower, but users can also add their own actions using the “other” button. Keeping track of water consumption like this will help people realize just how much water you can save when you do small things, like taking short showers instead of baths and turning off the faucet when you brush your teeth.
Users can also check their timeline of water usage to see what activities they did today that involved water. This helps keep track of when you did what and helps you plan a better course of conserving water.
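The logging idea above can be sketched in a few lines. This is an illustrative Python sketch, not the app's actual React code, and the per-action litre figures are rough assumptions for demonstration:

```python
# Minimal water-usage log: each entry records an action, its estimated
# volume, and when it happened, so a daily total can be summed up.
from datetime import datetime

# Assumed rough volumes in litres per action (illustrative only).
USAGE_L = {"toilet_flush": 6, "teeth_brushing": 4, "shower_minute": 9}

log = []

def log_action(action, count=1, when=None):
    """Append a water-using action to the log and return its litres."""
    litres = USAGE_L[action] * count
    log.append({"action": action, "litres": litres,
                "time": when or datetime.now()})
    return litres

log_action("toilet_flush")
log_action("shower_minute", count=5)   # a 5-minute shower
print(sum(e["litres"] for e in log))   # 51 litres so far today
```

The timeline view described above falls out of the same data: sorting `log` by its `time` field gives the day's water-use history.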
How we built it
We built this website using HTML, CSS, Javascript, and React.
Accomplishments that we're proud of
We're proud of completing a project for our first hackathon and having fun while doing so!
What we learned
We learned a lot about making websites with React and Javascript!
What's next for Waterlogged
We'd like to add a sign-in feature to Waterlogged so that users can log in from multiple different devices, and we'd like to improve the UI.
Built With
css
d3.js
html
javascript
react
Try it out
c9x8m.csb.app | Waterlogged | Waterlogged helps people conserve water by keeping track of their water consumption. Start saving water today! | ['Katherine Li', 'Invisible Ninja'] | [] | ['css', 'd3.js', 'html', 'javascript', 'react'] | 28 |
10,280 | https://devpost.com/software/window-vawokb | logo
Community Track
Inspiration
I saw that most social media feeds only show posts from opinions you already agree with. I wanted to make an app where you could see a variety of opinions on different topics.
What it does
It allows people to see different opinions, along with sharing their own. This hopefully helps people be more understanding and realize why people think certain ways.
How I built it
I used flutter for the front end and firebase for the database.
Challenges I ran into
I had some bugs with implementing firebase into the app. I also found it a lot slower to work on a hackathon alone than with teammates like I have done previously.
Accomplishments that I'm proud of
I'm proud of finishing the hackathon with a somewhat completed app, doing it all myself.
What I learned
I learned more flutter and firebase concepts. I was relatively new to flutter, but I learned a lot from doing this project.
What's next for window
Better machine learning, better UI, better database management
Built With
firebase
flutter
Try it out
github.com | window | See all sides | ['Thomas Liang'] | [] | ['firebase', 'flutter'] | 29 |
10,280 | https://devpost.com/software/innovate-driving-9f4zgp | Home
Accident Heatmap
The Map
Accident Severity
Severity Predictor
Distracted Drivers
Classifying Drivers
Data Visualizations
Visual Heatmap(1/4)
Inspiration
Recently, a famous NFL player's father and mother were both involved in a very serious car crash. His father died, while his mother remains in the ICU with serious injuries. Among young people, specifically those under the age of 30, road accidents are the leading cause of death. When we were brainstorming projects, we thought of this event and realized there are no current solutions that help people prevent accidents. Thus we decided to create Innovate Driving to help prevent and reduce the number of road accidents for drivers, bikers, and pedestrians, along with predicting accident severity in order to improve accident-management processes such as first-responder response times and resource management, ultimately saving lives.
What it does
Our Project has three core features. The first is the map, the second is an accident severity predictor, and our third feature is detecting if a driver is distracted or not, which can figure out what specifically the driver is doing that is making them distracted. All three features utilize machine learning. The map is a heatmap, which uses a neural network to show certain cities and the risk of an accident on the streets. Based on a user selected city, the website will display a heatmap, which shows areas that have a high chance of an accident. From here users are informed about which streets to avoid and which to be more cautious around. Many accidents occur due to weather in addition to a variety of reasons which create crash hotspots. Crash hotspots are important because they suggest that there is a constant factor that results in more crashes than other locations. This can then alert drivers to avoid driving at marked crash hotspots, so they reduce the chances that they, themselves, are involved in a crash, ultimately saving lives.
The second feature of our project is that it predicts the severity of an accident by the location and time. A user can input a location and time, and based on specific locations, weather, time, and other features, we are able to predict how severe accidents would be. We are able to do this with a random forest model. By predicting the severity of accidents in certain locations, we can give our findings to city planners and workers, who can then create eye catching signs that notify drivers of what locations and during what times have the highest risk of severe accidents. This will make sure that drivers will drive carefully during these times and areas, thus reducing the number of accidents and saving lives. Our crash severity prediction model can also be used to relay vital information to first responders before they arrive on the scene. Based on location, if first responders know the severity of the accident beforehand, they can bring more or less resources in order to better accommodate those in need and also know whether they need to drive extremely high speeds to get to the accident. Accident severity prediction is a key step in the accident management process and governments can use this information to take effective measures to reduce accident impacts and improve traffic safety. This would not only help the common people, but cities and governments as well.
Our last feature is detecting whether or not a driver is distracted. The model takes in a picture of the driver from inside the car. Using a CNN model, we are able to detect a distracted driver and what they are doing. If a driver is classified as distracted, we can separate them into four of the most common categories of distraction: using their phone, using the radio, drinking/eating, or using their mirror. Although our model is fully functional and can classify images of drivers at an extremely accurate rate, this feature is more of a proof of concept, as we would need access to cameras inside cars and from there alert the driver to keep their eyes on the road. About 3,000 people die from distracted driving every year, and with our model we would be able to reduce the number of distracted drivers immensely and save countless lives.
How we built it
Our website was built with four main languages: HTML, CSS, and JS for our frontend, and JS and Python for our backend and model development. We used several models from the Python library Scikit-learn, alongside a CNN and a neural network.
The accident heatmap was created using a neural network and was trained on data from a few select urban areas. Because it was hard to find data, and due to time constraints, we had to stick to just a few areas for now. We found our data from an article and trained our model on it. The data consists of road infrastructure, weather feeds, speed limits, and traffic congestion, used to predict accident risk per street. However, with more data we would be able to incorporate this for more cities.
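A heatmap layer like the one described ultimately needs a per-cell intensity for each area of the map. As a hedged, stdlib-only sketch (the real project predicts risk with a neural network; the coordinates and cell size below are made up for illustration), accident points can be binned into grid cells and counted:

```python
# Hedged sketch: aggregating accident points into grid cells, the kind of
# per-cell intensity a heatmap layer displays. Coordinates are illustrative.
from collections import Counter

accidents = [(37.771, -122.421), (37.772, -122.419), (37.750, -122.400)]
CELL = 0.01  # grid cell size in degrees (an assumption for the sketch)

def cell_of(lat, lon):
    # Snap a coordinate to its grid cell index.
    return (round(lat / CELL), round(lon / CELL))

intensity = Counter(cell_of(lat, lon) for lat, lon in accidents)
print(intensity.most_common(1)[0][1])  # count in the hottest cell -> 2
```

A trained model would replace the raw counts with predicted risk per cell, but the binning step stays the same.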
For our accident severity predictor we tried many different models in order to achieve the highest accuracy possible. The inputs for the model were the location and time. We split the location into latitude and longitude, whereas time was split into the date, month, year, and hour. We needed all these features because accident severities can change drastically based on the time. Location is a huge factor, as is weather; different locations have different climates, especially depending on the time of year. All of these factors combined can make the severity of an accident extremely different. We fit a decision tree classifier, a random forest, an SVM, a linear SVM, logistic regression, MultinomialNB, and a KNN. The logistic regression and MultinomialNB models performed extremely poorly, only producing an accuracy of around 55-60%, while the SVM reached 73%. Ultimately we decided to use the random forest model as it had the highest accuracy, at 95%. In order to connect our model to our website, we used Flask to take the user input, feed it into the model, and finally display the output. With Flask, the user can input values and, on clicking submit, we display the predicted accident severity.
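The feature-splitting step described here (location into latitude/longitude, time into date, month, year, and hour) can be sketched with the standard library. The `make_features` helper and the timestamp format are illustrative assumptions, not the project's actual code:

```python
# Hedged sketch: turning a user-supplied location and timestamp into a
# feature row of (lat, lon, day, month, year, hour). Field names and the
# "YYYY-MM-DD HH:MM" format are assumptions for the sketch.
from datetime import datetime

def make_features(lat: float, lon: float, when: str):
    """Split a timestamp string into the date/month/year/hour features."""
    t = datetime.strptime(when, "%Y-%m-%d %H:%M")
    return [lat, lon, t.day, t.month, t.year, t.hour]

row = make_features(40.7128, -74.0060, "2020-06-05 17:30")
print(row)  # [40.7128, -74.006, 5, 6, 2020, 17]
```

A row like this would then be passed to the trained classifier for a severity prediction.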
The distracted driver section uses a Convolutional Neural Network model to break down the photo into many layers. Through training, the CNN learns what to look for and filters the picture so it can figure out what is going on. Our model was trained on a large number of pictures of drivers doing certain things. The model has five categories it can classify an image as: attentive, using phone, using radio, eating/drinking, and using mirror. Since all of these can be differentiated based on what the hand is doing, the CNN is able to figure that out and focuses on hand placement. After final testing, the model performed extremely well and had very few false negatives, cases where the driver was distracted but our model thought they were attentive. We also used Flask here to save the user's file to a local drive and then run it through the model. Because it would take a very long time, and may even be impossible, to build a perfectly accurate model, we decided to have our model produce more false positives than false negatives; we would rather be safe than sorry and alert drivers even when they were already paying attention.
Our last page contains visuals we have used throughout the project in order to get a better understanding of our models and which features are useful. We also have a confusion matrix from our distracted driver model so we are able to see how many false positives and false negatives we have. Lastly, we hosted our website with Google Cloud's Firebase.
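The safe-than-sorry trade-off described above can be read directly off the confusion matrix mentioned here. A tiny hedged sketch with made-up counts (the real matrix has five classes; this collapses it to attentive vs. distracted):

```python
# Hedged sketch: reading false positives/negatives off a binary confusion
# matrix. Rows = actual (attentive, distracted), cols = predicted. Counts
# are illustrative, not the project's results.
confusion = [
    [90, 8],   # actually attentive: 90 correct, 8 flagged as distracted (FP)
    [2, 100],  # actually distracted: 2 missed (FN), 100 caught
]
false_positives = confusion[0][1]  # alerted while attentive
false_negatives = confusion[1][0]  # distracted but never alerted
print(false_positives > false_negatives)  # True: tuned toward safe alerts
```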
Challenges we ran into
Finding data for the map was a struggle. Because we needed very detailed data, we were only able to create functionality for a select few metropolitan areas. On top of this, rendering a new heatmap after a user selects a location proved difficult, so we decided not to deploy our heatmap models to the web, but rather just the heatmaps themselves.
As we had never used flask before, deploying our model for predicting severity was difficult. We had to read a lot of documentation and go through a lot of trial and error. The main issue was getting the inputs from the user as we had to form separate inputs into a single dataframe and then put it through the model. However, after quite a bit of testing we were able to get it fully functional.
Accomplishments that we're proud of
We are proud of creating a fully functional web app using machine learning, which is able to truly help people and save lives. We are extremely proud of developing a solution that has never been previously considered or implemented in this setting and developing a working model.
What we learned
We had to learn how to use Flask as well as new machine learning models. Because we had never deployed a live machine learning model to the web, none of us knew how to use Flask, so this experience was completely new. On top of this, we strengthened our skills with CNNs for our distracted driver model and used three models we had never used before for the accident severity predictor.
What's next for Innovate-Driving
Currently our project is suited for the common person and everyday use, however in addition to this we want to give our project to local and state governments so they can take steps to improve roads and reduce risks of crashes.
We also would like to work with first responders, so they can use our project to gain vital information about accidents before they arrive at the scene. Giving our project to city planners would allow for them to make effective signs, which notify drivers when and where crashes are the most severe, which will prompt them to drive more carefully.
We plan on partnering with car manufacturers for our distracted driver feature so they can install a camera that feeds directly into our model. Their cars could then have an alert system, similar to a seatbelt alert, that sounds when our model senses the driver is distracted and only stops once it senses the driver is no longer distracted.
Built With
ai
convolutional-neural-network
css
flask
html
javascript
jupyter-notebook
machine-learning
neural-network
python
random-forest
Try it out
github.com | Innovate Driving | Using Machine Learning to make roads safer | ['Shafin Haque', 'Yousuf Zaman', 'Henrik Zhang'] | ['Second Overall'] | ['ai', 'convolutional-neural-network', 'css', 'flask', 'html', 'javascript', 'jupyter-notebook', 'machine-learning', 'neural-network', 'python', 'random-forest'] | 30 |
10,280 | https://devpost.com/software/community_pal | Community Pal
Community Pal is a unique platform to help hospitalized and/or socially isolated kids make and have fun with friends virtually.
Inspiration
When my friend was diagnosed with leukemia, I was shocked and in misery. Due to her condition, she missed out on a lot of fun and connection, was all alone, and kept telling me that it was boring in the hospital. This inspired me to make Community Pal, a website aimed at helping kids connect in isolation.
What it does
Community Pal is a website aimed at destroying boredom and getting rid of isolation virtually. Kids stuck alone have a hard time figuring out what to do, so through the power of willing kid volunteers, kids can connect with others to learn, have fun, and make friends.
What's next
Next on the list is to add live streams, webinars, and text messaging to Community Pal.
Built With
bootstrap
css
css3
github
html
javascript
Try it out
github.com
prisha-pandeya.github.io | Community Pal | Kids help kids get rid of isolation together. | ['Prisha Pandeya'] | [] | ['bootstrap', 'css', 'css3', 'github', 'html', 'javascript'] | 31 |
10,280 | https://devpost.com/software/super-brain-mri-segmenter-2000 | How our model works (tumor present)
How our model works (no tumor)
Our model is very accurate
Our 54 million parameter model!!
Inspiration
While doing some research for our project, we came across a statistic that 75% of people who are diagnosed with brain cancer die of it within five years. Upon learning this, we were also shocked to know that many of these deaths may have been prevented if only a proper diagnosis was given earlier on when the malignant tumor was still removable. This gave us an idea to leverage the benefits of deep learning to create a biomedical image segmentation model that is able to segment brain abnormalities and tumors just from a single MRI.
What it does
SGMT is a web app that allows a doctor or neurologist to upload a brain MRI, which is then passed through our deep learning model, returning a segmentation mask as well as an overlay of exactly where the abnormality is located. Not only does it take just a few seconds to segment the abnormality, compared to the days or weeks taken by professional neurologists, it is also precise and can guide neurologists when they perform surgeries.
How I built it
We created a custom UNet with a residual network backbone using PyTorch, which has nearly 54 million trainable parameters. For our frontend, we used HTML/CSS/JS, and for our backend, we used Flask to deploy our model.
Challenges I ran into
Since the model is so large and sophisticated, it took 8 hours to train. Developing the model took another 6 hours, which left us with only 8 hours to create the backend, deploy the model, film the presentation, and sleep. It was quite a stressful hackathon, to say the least.
Accomplishments that I'm proud of
We are proud of creating such a sophisticated model that is able to surpass even the top neurologists. On top of that, our model is no sham: it achieved a test F1 score of 95%, meaning the segmentation masks our model creates are 95% accurate on average.
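The F1 score quoted here, for a segmentation task, is the pixel-wise F1 (equivalently the Dice coefficient) between the predicted and ground-truth masks. A hedged stdlib sketch with tiny toy masks (the real masks are 2D images; flat lists keep the example short):

```python
# Hedged sketch: pixel-wise F1 (Dice) between a predicted and a ground-truth
# binary mask. Masks here are toy 1D examples, not real MRI segmentations.
def f1_score(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum(not p and t for p, t in zip(pred, truth))      # false negatives
    return 2 * tp / (2 * tp + fp + fn)

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 1, 1, 0, 1, 0]
print(round(f1_score(pred, truth), 3))  # 0.857
```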
What I learned
We learned a lot about semantic image segmentation over the course of the hackathon, and also delved into a bit of statistics.
What's next for Super Brain MRI Segmenter 2000
Keep improving it!
Built With
css3
flask
html5
javascript
python
pytorch
Try it out
github.com | SGMT | Leveraging deep learning to combat brain cancer and abnormalities - just from a single MRI. | ['Bill Bai', 'Andrey Starenky'] | ['Best in Biomedical Imaging'] | ['css3', 'flask', 'html5', 'javascript', 'python', 'pytorch'] | 32 |
10,280 | https://devpost.com/software/driverdrowsinessdetection | The main page
The about page
Functioning section
Inspiration
We got the idea for this project when we were thinking about what to make and wanted to have some fun with machine learning, and we ended up with a life-saving project, one that can help save the many lives lost every year. According to the National Highway Traffic Safety Administration, every year about 100,000 police-reported crashes involve drowsy driving. These crashes result in more than 1,550 fatalities and 71,000 injuries. Being on the road is all about focus, but sometimes even that is lost when one is worn out. Not only drivers but everyone in the vehicle can lose their lives, so why not save them?
What it does
It scans the driver's face using OpenCV, then analyses the eyes, ears, and mouth using the Haar cascade algorithm to check whether the eyelids are closing, whether the driver is yawning, and several other minute details to detect drowsiness. When drowsiness is detected, an alarm sound warns and alerts the driver to be cautious.
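The eyelid check described here is commonly implemented as an eye aspect ratio (EAR) over the six eye landmarks dlib produces: when the ratio of vertical to horizontal eye distances stays below a threshold, the eyes are closing. A hedged stdlib sketch; the threshold value and landmark coordinates are typical illustrative choices, not the project's actual numbers:

```python
# Hedged sketch of an eye-aspect-ratio drowsiness check. The six (x, y)
# points follow the ordering used by dlib's 68-point facial landmark model.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(eye):
    a = dist(eye[1], eye[5])  # first vertical distance
    b = dist(eye[2], eye[4])  # second vertical distance
    c = dist(eye[0], eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)

open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.3), (3, 0.3), (4, 0), (3, -0.3), (1, -0.3)]
EAR_THRESHOLD = 0.25  # illustrative cutoff
print(eye_aspect_ratio(open_eye) < EAR_THRESHOLD)    # False: eyes open
print(eye_aspect_ratio(closed_eye) < EAR_THRESHOLD)  # True: likely drowsy
```

In a real pipeline the ratio is checked over several consecutive frames before the alarm fires, so a single blink does not trigger it.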
How we built it
The project uses resources like OpenCV and SciPy, along with matplotlib to plot the rates of drowsiness, and dlib for facial scanning, together with computer vision and multiple other dependencies. The web app is built using HTML, CSS, JavaScript, and jQuery, with Flask for the backend.
Accomplishments that we're proud of
We're proud of the accuracy the system has achieved while scanning various datasets and learning from them, and we're also proud of improving our Python and other language skills and using them for the betterment of society, which matters most.
What we learned
We have learnt about neural networks, machine learning, working with graphs, and SciPy, as well as using Anaconda and various research libraries.
Driver drowsiness detection.
You need to have anaconda installed on your system :)
Step 1: Update conda
conda update conda
Step 2: Update anaconda
conda update anaconda
Step 3: Clone the github repository
git clone https://github.com/ShobhitRathi/DrowsyRide
Step 4: Create a virtual environment
conda create -n env_dlib
Step 5: Activate the virtual environment
conda activate env_dlib
Step 6: Install dlib
conda install -c conda-forge dlib
If all these steps are completed successfully, then dlib will be installed in the virtual environment
env_dlib
. Make sure to use this environment to run the entire project.
Step 7: Installing packages
pip install -r requirements.txt
Step 8: Running the webserver!
python app.py
The app runs on localhost port 5000, where you can visit and see it!
Step to deactivate the virtual environment
conda deactivate
Built With
anaconda
css
dlib
flask
html
javascript
jquery
machine-learning
matplotlib
ml
python
scipy
Try it out
github.com | DrowsyRide | An Innovative project to prevent road accidents caused by drowsiness. | ['Shobhit Rathi', 'Sunrit Jana', 'Rohith04MVK Bobby', 'moolini'] | [] | ['anaconda', 'css', 'dlib', 'flask', 'html', 'javascript', 'jquery', 'machine-learning', 'matplotlib', 'ml', 'python', 'scipy'] | 33 |
10,281 | https://devpost.com/software/covid-bias-checker | Inspiration
Arguably, the biggest problem in America and most other Western countries is that media outlets on both the left and the right often lie about facts in order to get their ratings up. Donald Trump likes to criticize them for this by calling them fake news. This problem is especially important during times of crisis such as this one. The COVID-19 pandemic is wreaking havoc across the world, and part of the reason it has gotten so serious is that people are trusting media sources that are lying about the virus. For example, CNN lied a lot in May about the number of coronavirus cases, exaggerating the death toll by thousands, and this caused unnecessary panic among citizens, which made things a million times worse. We realized the gravity of this problem and decided that we wanted to help.
What It Does
Our website's main app page allows the user to input the URL of the article they want to read. Our website then applies two main algorithms to it: a natural language processing model that scans the article and produces a numerical bias rating, and a step that shortens the inputted URL to just the domain name and runs it through the mediabiasfactcheck.com API to determine the political bias of that news source. After it has these two pieces of information, the website gives the user the bias rating of the article. The user can then use this information to become more wary of how fake or real the COVID-19 article they are reading is. By being more wary, they can avoid unnecessary panic and stay safe. Apart from the main page, the web app also has two static pages: about us and submission.
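The URL-shortening step described here (reducing an article link to its domain before the bias lookup) can be sketched with the standard library; the `domain_of` helper and the example URL are illustrative, not the project's actual code:

```python
# Hedged sketch: extracting the domain from an article URL, as would be
# done before querying a media-bias database by source.
from urllib.parse import urlparse

def domain_of(url: str) -> str:
    host = urlparse(url).netloc          # e.g. "www.cnn.com"
    return host[4:] if host.startswith("www.") else host

print(domain_of("https://www.cnn.com/2020/05/01/health/some-article"))
# cnn.com
```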
How We Built It
We built this website using Python, Flask, Bootstrap, JavaScript, CSS, HTML, Jinja, and a variety of natural language processing and linear regression algorithms.
Challenges We Ran Into
We had to combine a few NLP and classification algorithms, which took a lot of processing time and made us kind of impatient. Downloading all of the modules and setting up the virtual environment also took forever. Lastly, once we were done with the website and ready to deploy to Heroku, we encountered a major problem: we kept exceeding the slug size limit. After hours of debugging, we found out that all we had to do to solve this issue was remove the Python MKL modules. That was really funny.
Accomplishments That We Are Proud Of
We are proud that we could complete such an advanced and complicated project in such a small amount of time. It was definitely a very confusing project even though both of us had worked with NLP before. Nonetheless, this was a huge feat of achievement for both of us and we could not be more proud.
What We Learned
We learned a lot about natural language processing during this hackathon. We had only done topic modelling before, so learning how AI can be used for things as complicated as sentiment analysis was definitely surprising. It was also our first time using Flask to host an actual website; we had only used it to make REST APIs before, and we are proud of having learned this new skill.
What's Next For COVID Bias Checker
We are going to refine the app more by getting expert opinions. Once the app is completely finished, we will spread the word about it in the hopes of making it go viral so that people can stay aware and safe.
Built With
anaconda
flask
heroku
mediabiasfactcheck.com
natural-language-processing
python
Try it out
covid-bias-checker.herokuapp.com
github.com | COVID Bias Checker | Revealing The Biased News Articles About COVID-19 | ['Mihir Kachroo', 'Dhir Kachroo'] | ['Airpods'] | ['anaconda', 'flask', 'heroku', 'mediabiasfactcheck.com', 'natural-language-processing', 'python'] | 0 |
10,281 | https://devpost.com/software/contrac-z4x3ly | Login, Location Reviews, Location Tagging
UI with specific action, Alert, Recent Trips, App Feedback
Inspiration
When we learned that 6.15 million people in the world have contracted the novel Covid-19 disease and 374,000 people have died of it, we were shocked. Even though so many people are getting infected, many people still do not follow the social distancing rules or wear face masks in public places, and this is not just a concern of ours, but of many other people as well. This very clearly causes more community spread and leads to more Covid-19 cases, so we wanted to help protect people from being infected by Covid-19 through an online platform that would track and display whether places were following social distancing guidelines.
What it does
Our mobile app, conTRAC, has two main components: an accurate database of information about public locations following Covid-19 safety rules, and the ability for users to tag and leave reviews about locations. As we could not complete the mobile app development, we created a website with a built-in calculator that can show the risk of getting infected by infectious diseases. The calculator takes three inputs: a location's number of Covid-19 infections, its number of deaths, and whether the location is following the Covid-19 rules. It then calculates (as a prototype) whether your location is at risk.
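A calculator over those three inputs can be sketched in a few lines. The scoring formula and cutoff below are purely illustrative; the submission describes its own calculator as a prototype, and the real weighting is not specified:

```python
# Hedged sketch of a three-input risk prototype: infections, deaths, and
# whether local Covid-19 rules are followed. Weights and the cutoff of
# 1000 are made-up assumptions for illustration only.
def at_risk(infections: int, deaths: int, rules_followed: bool) -> bool:
    score = infections + 10 * deaths   # deaths weighted more heavily
    if not rules_followed:
        score *= 2                     # ignoring rules doubles the risk
    return score > 1000                # illustrative cutoff

print(at_risk(infections=400, deaths=20, rules_followed=False))  # True
print(at_risk(infections=100, deaths=5, rules_followed=True))    # False
```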
How we built it
We built this website using HTML, JavaScript, and CSS. In addition, we also incorporated Google Maps. This website is an extension of our mobile app, where we are developing a risk calculator. Since we could not complete the mobile app, this website, which we started working on this morning, contains a prototype of what we would build in the future. To understand what the app would look like, we made a Canva mock-up.
Here
is the link.
Challenges we ran into
When we were building the website, we ran into data issues where we could not get Covid-19 infection rates and death rates accurately, so we ended up asking the user to enter this data. While creating the calculator, we could not find how to use a radio button and how to read its value. Writing JavaScript functions was tricky since we were not too familiar with them.
We have never coded from scratch before, and we usually use drag-and-drop platforms like Thunkable and MIT App Inventor to code apps. This was the first time we used a written coding language, and although we didn't know anything about HTML, CSS, and Javascript at the beginning, we used W3 school's blog and other online resources to understand how to code.
Accomplishments that we are proud of
We are proud of developing a solution that lets consumers visit a location knowing it is safe to go. We are also proud of ourselves for creating a website in so little time and for helping protect people from being exposed to an infectious disease.
Over the past week, we surveyed over 50 friends and family to confirm the need for an app like conTRAC. The survey validated the problem of not having access to data regarding Covid-19 safety at public locations. We also developed mock-ups to visually represent how my mobile app would look like and work. Validating the user needs and developing the mocks were the most important steps to build my app.
What we learned
We learned how to use JavaScript within HTML. We had never used the 'radio' button feature before, which was tricky. We also learned how to use Canva for building UI mocks, which we enjoyed. When we first started this competition, we thought we could not finish, but we did it. Doing this competition helped us believe in our teamwork.
What's next for conTRAC
For the app development, we plan to learn how to use Swift and xcode and start using it next week. For the users, currently, we don’t have a way to verify if the reviews left by a user are true. In the future, we want to include a leaderboard that showcases and highlights users of conTRAC that are very active and provide the most valuable feedback.
Built With
css
html
javascript
Try it out
github.com | conTRAC | Crowdsourcing to control the spread of coronavirus. | ['Ad J'] | ['Waterproof Bluetooth speaker'] | ['css', 'html', 'javascript'] | 1 |
10,281 | https://devpost.com/software/contrac | Inspiration
n/a
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for [please refer to other submission conTRAC] | conTRAC] | please refer to other submission | ['Ad J'] | [] | [] | 2 |
10,281 | https://devpost.com/software/homly-space | Home Page
Symptoms and Precaution Alert
Signing Up as a Volunteer
Virtual Diagnose
Requesting for help either food or shelter
Request Food Page
Request Food Page
Donate food and money
Live Status about Covid19
Virtual Health Bot
Local Status of user location
Local statewise Status of user location
Request Shelter Page
Request Shelter Page
Inspiration
During this pandemic, most people are quarantining at home trying to stay safe. But what about the people that don't have a home? And what about the shortage of space faced by hospitals and governments trying to house people affected by Covid-19? During this time, many police forces are removing homeless camps across cities, leaving many with nowhere to go, and the government sometimes does not provide or expand its alternative housing spaces. At the same time, hotel and housing properties listed on Airbnb and other platforms are sitting without any occupancy as travel has become extremely limited. We felt that these rooms and homes with no one using them could be put to better use to help out the community and possibly also have a hand in helping flatten the curve. Another problem in this ongoing situation is that tons of farmers are forced to throw away their products as there has been less buying.
What it does
Homely.space is a web app designed to be easily accessible by everyone and provide all the facilities needed for the user to fulfill their purpose. There are two different people that would use the application. One is someone who wants to list their property on the database for homeless people to use. Another is someone who is seeking a temporary place to stay.
It can especially be used by governments and hospitals: as the demand for hospital beds and isolated health camps increases, our platform helps solve this problem as well.
People who want to enlist their housing can do so by signing up and providing the home address, which will then be added to our database. A person who is seeking a home can also sign up and click the "locate me" button, which will get their address and find the nearest available home for them. Once a home has been found, the two people will be put in contact with each other via phone or email. Another feature available in the application is the Request Food option. As farmers are throwing away their produce, we thought we could save some of this waste by connecting a consumer straight to a farmer. In this part, a person can sign up as a volunteer that helps get food from a farm to a purchaser, which would help reduce good products going to waste. There are other small features, such as a chat bot with which one can diagnose themselves and get the latest updated Covid data.
How we built it
We built the application using the JavaScript web framework Express.js, which runs on Node.js. Simple endpoints were created for login and registration, and a database was created that holds users' information. The database also stores the addresses of housing that people are offering to those in need. When a house provider logs in, he/she is asked to add the address of the house they want to offer temporarily, which is then sent to the database. When a shelter seeker uses the app, he/she enters their first and last name and clicks a button, which allows us to get the latitude and longitude coordinates of their current location. Through an algorithm using Google's Geolocation API and Google's Distance Matrix API, we find the house closest to their current location and provide them with the owner's contact info. This logic is also used when requesting food: the farmers register their address, the purchaser enters their address, and the purchaser is then connected to a volunteer who can get the fruits and vegetables from the farmer to the buyer.
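The nearest-listing step described here can be sketched without the Google APIs by using straight-line (haversine) distance as a stand-in; the real project uses the Distance Matrix API for road distances, and the listings and coordinates below are made up:

```python
# Hedged sketch: picking the nearest listed home to a seeker's coordinates
# by haversine distance. Listings and the seeker location are illustrative.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

listings = {
    "owner_a": (37.77, -122.42),  # San Francisco
    "owner_b": (34.05, -118.24),  # Los Angeles
}
seeker = (37.34, -121.89)  # San Jose
nearest = min(listings, key=lambda k: haversine_km(*seeker, *listings[k]))
print(nearest)  # owner_a
```

Swapping `haversine_km` for a Distance Matrix lookup would give driving distance instead of straight-line distance without changing the selection logic.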
Challenges we ran into
We initially struggled to create a layout of the whole application, but were able to figure that out with some rough drawings. Another challenge was that the team members are all from different time zones, so communication was very difficult. We also ran into challenges understanding the database and getting it to work how we wanted it to. Since this was the first time some of us had used Firebase and Google APIs, we had to read the documentation to understand both.
Accomplishments that we're proud of
There are a couple of things that we are proud of. One of them is that eventually we were all able to work together and understand what everyone was doing despite the time difference. We are also proud of the fact that throughout the project we were able to learn to use new features such as apis and databases that we did not know before the hackathon started. One important accomplishment is that we were able to overcome a huge time constraint and come up with a prototype of an idea that has potential to change people's lives.
What we learned
We learned how to set-up and use Google's geolocation and distance matrix apis, create a database and extract from it and using a new framework to built the website on.
What's next for Homly.space
The next step for Homly.space is to create additional features such as a donation tab, where people who are seeking shelter can donate whatever they feel like to the host (as anything helps). Another feature that can be added is some sort of chat bot that people can use to find nearby housing and also add housing, instead of having only a web page. In addition, we could add some sort of rating function, where the host can rate the people who stayed at their listing so future hosts can know how guests treated their properties.
Built With
bootstrap
css
express.js
firebase
google-maps
html5
javascript
mongodb
node.js
Try it out
github.com | Homly.space | Temporary Housing For The People, By The People - A application that connects people in need of shelter and food with people who can provide. | ['Rishabh Jain'] | [] | ['bootstrap', 'css', 'express.js', 'firebase', 'google-maps', 'html5', 'javascript', 'mongodb', 'node.js'] | 3 |
10,281 | https://devpost.com/software/c-trac-app-for-tracking-corona-hotspots | Inspiration
During this current COVID-19 pandemic, I see health workers curing patients, doctors innovating new medicines, police controlling crowd movement, and even bus drivers helping people get back home. As a future engineer, I felt like my contribution was none, so I felt motivated to do my part, try to bring a positive change, and make sure my product can also be used in a future pandemic.
problem our project solves
We can all agree that this pandemic needs to be over soon so we can meet our loved ones. In order to contain this pandemic, the government is using contact tracing apps (CTAs). Research says that if contact tracing is done correctly, it can reduce the number of cases threefold; so why is the number still rising? The problem with these CTAs is that they only tell you whether you have come in contact with an infected person or not. What they don't tell us is where that person caught the infection (the parent source). Let's take an example: if there are two people, 'X' and 'Y', and Y gets infected, then X will be notified by current CTAs that he might have gotten the infection through contact with Y, but the apps don't tell us from which PLACE Y got the infection. This is crucial, because if we don't find that PLACE, many other people who visited it may get the infection.
what our project does
Our project C-TRACK is the first of its kind: a reverse contact tracing app. Let me explain how reverse contact tracing works. Whenever the user visits a place very frequently, like a shopping mall, that particular location is saved inside the app. If in the future the user is found COVID-positive, we can track down that shopping mall, and the app will send a notification to all the other people who have visited that exact shopping mall. Health authorities can then sanitize and lock down that specific shopping mall instead of locking down the whole locality. The stored location data is fully encrypted and can only be accessed by the user. The app also has two additional features:
1.) It sends a 'wear mask' notification when the user leaves their house and a 'wash hands' notification when the user returns home. This small precaution can bring a huge change by keeping you and everyone around you safe.
2.) whenever the user enters a government certificated hotspot or RedZone he will get a warning notification
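The reverse-tracing mechanism described above can be sketched in a few lines. This is a hypothetical illustration only (all names and the visit threshold are our assumptions; the actual app is built in Java for Android and stores locations encrypted on-device):

```python
# Hypothetical sketch of reverse contact tracing: frequently visited places
# are recorded, and when a user tests positive, those places become hotspots
# and everyone else who checked in there is notified.
from collections import Counter

VISIT_THRESHOLD = 3  # assumed cutoff for "visits too frequently"

def frequent_places(visit_log, user):
    """visit_log: list of (user, place) check-ins; return the user's frequent places."""
    counts = Counter(place for u, place in visit_log if u == user)
    return {place for place, n in counts.items() if n >= VISIT_THRESHOLD}

def users_to_notify(visit_log, infected_user):
    """Hotspots are the infected user's frequent places; notify their other visitors."""
    hotspots = frequent_places(visit_log, infected_user)
    notify = {u for u, place in visit_log
              if place in hotspots and u != infected_user}
    return hotspots, notify

log = [("Y", "mall")] * 3 + [("X", "mall"), ("Z", "park")]
hotspots, notify = users_to_notify(log, "Y")
print(hotspots)  # {'mall'} -> flagged for targeted sanitisation/lockdown
print(notify)    # {'X'}   -> other visitors of the hotspot get notified
```

Note that only the one mall gets flagged; user Z, who never visited it, is left alone, which is the point of tracing the parent source instead of locking down the whole locality.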
Challenges I ran into
1.) We lacked financial support, as we had to build this app from scratch.
2.) Collecting data on government-certified hotspots was a problem, and we also had to do a lot of research on the spread pattern of COVID-19.
3.) It was hard for us to get in contact with health workers, as they were busy fighting an increasing number of patients, so we talked to retired doctors instead.
4.) It took us a long time to test the app in real life, because going outside during the lockdown was too hard. Once the lockdown loosened a bit, we finally tested it and it gave excellent results.
What I learned
All team members of C-TRACK were able to grow their area of competence by participating in the whole process of idea definition, market research, validation, prototyping, and presentation. Through different mentor sessions, we learned that problems could be approached by many means, but most importantly our mission should be clear.
What's next for C-TRACK: App for tracking corona hotspots
Our app can also be used in a future pandemic, or for seasonal diseases such as swine flu or bird flu.
Built With
android
android-studio
java | C - TRACK 1st ever reverse contact tracing App | Our app is 1st reverse contact tracing app which locate the possible hotspots from the user location history and also 1st safety awareness system which notify user to ( wear mask ) and ( wash hand). | ['Anup Paikaray', 'Arnab Paikaray'] | [] | ['android', 'android-studio', 'java'] | 4 |
10,281 | https://devpost.com/software/viralcheck-social-media-app | Web app
Built With
python | ViralCheck | Web app | ['Jeremy Nguyen', 'Gideon Grinberg', 'Ritvik Irigireddy', 'Nand Vinchhi'] | [] | ['python'] | 5 |
10,281 | https://devpost.com/software/covnatic-covid-19-ai-diagnosis-platform | Landing Page
[Screenshots: login page, CT-scan segmentation of infected areas, suspect lookup by unique identification number, suspect data entry, COVID-19 suspect detector with chest X-ray and CT scan uploads and results, realtime dashboard, suspect list and detail views, U-net (VGG weights) segmentation architecture and process flow, detection results (COVID-19 positive/normal), endorsement from the Govt. of Telangana (Hyderabad, India), and generated PDF reports]
Inspiration
The total number of coronavirus cases is 2,661,506 worldwide (source: Worldometers). Cases are increasing day by day and the curve is not ready to flatten, and that's really sad. Right now the virus is in the community-transmission stage, and rapid testing is the only option to battle it. McMarvin took this as a challenge and built an AI solution to put a tool in the hands of our doctors. McMarvin is a DeepTech startup in medical artificial intelligence, using AI technologies to develop tools for better patient care, quality control, health management, and scientific research.
There is a current epidemic in the world due to the Novel Coronavirus, and testing kits for RT-PCR and lab testing are limited. There have been reports that kits show variations in their results, and false positives are increasing heavily. Early detection using chest CT can be an alternative way to detect COVID-19 suspects. For this reason, our team worked day and night to develop an application that can help radiologists and doctors by automatically detecting and locating the infected areas inside the lungs using medical scans, i.e. chest CT scans.
The inspirations are as follows:
1. Limited kit-based testings due to limited resources
2. RT-PCR is not very accurate in many countries (recently in India)
3. RT-PCR test can’t exactly locate the infections inside the lungs
AI-based medical imaging screening assessment is seen as one of the promising techniques that might lift some of the heavyweights of the doctors’ shoulders.
What it does
Our COVID-19 AI diagnosis platform is a fully secured, cloud-based application to detect COVID-19 patients using chest X-rays and CT scans. Our solution has a centralized database (like a mini-EHR) for corona suspects and patients. Each and every record is saved in the database (hospital-wise).
Following are the features of our product:
Artificial intelligence to screen suspects using CT scans and chest X-rays.
AI-based detection, segmentation, and localization of infected areas inside the lungs in chest CT.
Smart analytics dashboard (hospital-wise) to view all the updated screening details.
Centralized database (only for COVID-19 suspects) to keep records of suspects and track their progress every time they get screened.
PDF reports, DICOM support, guidelines, documentation, customer support, etc.
Fully secured platform (both on-premise and cloud), with a privacy policy under healthcare data guidelines.
Reports generated within seconds.
Our main objective is to provide a research-oriented tool that alleviates pressure on doctors and assists them with an AI-enabled smart analytics platform, so they can "SAVE TIME" and "SAVE LIVES" in the critical stages (stage 3 or 4).
The benefits are as follows:
1. Real-world data on risks and benefits:
The use of routinely collected data from suspects and patients allows assessment of the benefits and risks of different medical treatments, as well as the relative effectiveness of medicines in the real world.
2. Studies can be carried out quickly:
Studies based on real-world data (RWD) are faster to conduct than randomized controlled trials (RCTs). Data from patients infected with the Novel Coronavirus will help research into similar outbreaks in the future.
3. Speed and Time:
One of the major advantages of an AI system is speed. More conventional methods can take longer to process as demand increases; with the AI application, radiologists can identify and prioritize suspects.
How we built it
Our solution is built using the following major technologies:
1. Deep Learning and Computer Vision
2. Cloud Services (Azure in this case)
3. Microservices (Flask in this case)
4. DESKTOP GUIs like Tkinter
5. Docker and Kubernetes
6. JavaScript for the frontend features
7. DICOM APIs
I will be breaking the complete solution into the following steps:
1. Data Preparation:
We collected more than 2,000 medical scans (chest CT and X-rays) of 500+ COVID-19 suspects from European countries and from an open-source radiology data platform. We then validated and labeled the CT findings with the help of advisors and domain experts: doctors with 20+ years of experience. You can find more information in the team section of our site. After careful data preprocessing and labeling, we moved on to model preparation.
2. Model Development:
We built several algorithms for testing our model. We started with a CNN classifier and checked its score on different metrics, because creating a COVID-19 classifier is not an easy task: variations can introduce bias into the results. We then used U-net for segmentation and achieved very impressive accuracy and a good IoU score. For detection of COVID-19 suspects we used a CNN architecture, and for segmentation a U-net architecture. We achieved 94% accuracy on the training dataset and 89.4% on test data. For false positives and other metrics, please go through our files.
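For reference, the IoU metric mentioned above compares a predicted segmentation mask against the ground truth. A minimal dependency-free sketch (the function name and toy masks are ours for illustration, not from the M-VIC19 codebase):

```python
def iou(pred, truth):
    """Intersection-over-union for two binary masks given as flat 0/1 lists."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # two empty masks agree perfectly

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(iou(pred, truth))  # 0.5: 2 overlapping pixels out of 4 covered by either mask
```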
3. Deployment:
After training the model and validating it with our doctors, we prepared our solution in two formats: a cloud-based solution and an on-premise solution. We are using an EC2 instance on AWS for the cloud-based solution.
Our platform will only help and not replace the healthcare professionals so they can make quick decisions in critical situations.
Challenges we ran into
There are always a few challenges when you innovate something new. The biggest challenge is "the Novel Coronavirus" itself. Another challenge is getting validated data from different demographics and CT machines. Due to the lockdown in the country, we are not able to meet and discuss with other radiologists. We are working virtually to build innovative solutions, but as of now we have very limited resources.
Accomplishments that we're proud of
We are in regular touch with the state government (Telangana, Hyderabad). Our team presented the project to the Health Minister's office and is helping them in stages 3 and 4.
We are proud of the following accomplishments:
1. One patent (IP) filed
2. Two research papers
3. Partnerships with several startups
4. In touch with several doctors who are working with COVID-19 patients, and in discussion with research institutes for R&D
What we learned
Learning is a continuous process. Our team learned "the art of working in lockdown". We worked virtually to develop this application to help our government and people. The other learning was taking our proof of concept to the local administration for trials. All these "government procedures" (writing a research proposal, meeting with officials, etc.) were new to us, and we learned several protocols for working with the government.
What's next for M-VIC19: McMarvin Vision Imaging for COVID19
Our research is still going on, and our solution is now endorsed by the Health Ministry of Telangana. We have presented our project to the government of Telangana for a clinical trial, so the next step is trials with hospitals and research institutes. On the solution side, we are adding more labeled data under the supervision of doctors who are working with COVID-19 patients in India. Features like biometric verification and a trigger mechanism to send notifications to patients and the command room are under consideration. There is always scope for improvement, and AI is a technology that learns on top of data. Overall, we are dedicated to taking this solution into real-world production for our doctors and for CT and X-ray manufacturers, so they can use it to fight the deadly virus.
Built With
amazon-web-services
flask
google-cloud
javascript
keras
nvidia
opencv
python
sqlite
tensorflow
Try it out
m-vic19.com | M-VIC19: McMarvin Vision Imaging for COVID19 | M-VIC19 is an AI Diagnosis platform is to help hospitals screen suspects and automatically locate the infected areas inside the lungs caused by the Novel Coronavirus using chest radiographs. | [] | ['1st Place Overall Winners', 'Third Place - Donation to cause or non-profit organization involved in fighting the COVID crisis'] | ['amazon-web-services', 'flask', 'google-cloud', 'javascript', 'keras', 'nvidia', 'opencv', 'python', 'sqlite', 'tensorflow'] | 6 |
10,281 | https://devpost.com/software/masked-ai-masks-detection-and-recognition | Platform Snapshot
[Screenshots: input video, model processing, and output video snapshots]
Inspiration
The total number of coronavirus cases is 5,104,902 worldwide (source: Worldometers). Cases are increasing day by day and the curve is not ready to flatten, and that's really sad. Right now the virus is in the community-transmission stage, and taking preventive measures is the only option to flatten the curve. Face masks are crucial in the battle against COVID-19 to stop community-based transmission. But we are humans, lazy by nature, and not used to wearing masks when we go out in public. One of the biggest challenges is people not wearing masks in public places, violating the orders issued by the government or local administration. That is the main reason we built this solution: to monitor people in public places through drones, CCTVs, IP cameras, etc., and detect people with or without face masks. Police and officials are working day and night, but manual surveillance is not enough to identify people who are violating the rules and regulations. Our objective was to create a solution that needs less human-based surveillance to detect people who are not wearing masks in public places. An automated AI system can reduce manual investigation.
What it does
Masked AI is a real-time video analytics solution for human surveillance and face mask identification. Its main feature is identifying whether people are wearing the masks advised by the government. Our solution is easy to deploy on drones and CCTVs to "see what really matters" in this pandemic situation of the Novel Coronavirus. It has the following features:
1. Human Detection
2. Face Masks Identification (N95, Surgical, and Cloth-based Masks)
3. Identify humans with or without masks in real time
4. Count people in each frame
5. Generate an alarm to the local authority if someone is not wearing a mask (coming soon in the video demo)
It runs entirely on the cloud and does detection in real-time with analysis using graphs.
How we built it
Our solution is built using the following major technologies:
1. Deep Learning and Computer Vision
2. Cloud Services (Azure in this case)
3. Microservices (Flask in this case)
4. JavaScript for the frontend features
5. Embedded technologies
I will be breaking the complete solution into the following steps:
1. Data Preparation:
We collected more than 1,000 good-quality images covering multiple classes of face masks (N95, surgical, and cloth-based masks). We then performed data preprocessing, labeled all the images using labeling tools, and generated PASCAL VOC and JSON annotations.
2. Model Preparation:
We used one of the well-known deep learning object detection algorithms, YOLOv3, for our task. Using Darknet and YOLOv3, we trained the model from scratch on a machine with 16 GB RAM and a Tesla K80 GPU. Training took 10 hours. We saved the model for deploying our solution to various platforms.
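As a side note on the label formats involved: YOLO expects labels as a normalized box centre and size, while PASCAL VOC stores absolute corner pixels. A small conversion sketch (illustrative only; Darknet ships its own tooling, and the numbers below are made up):

```python
def voc_to_yolo(box, img_w, img_h):
    """(xmin, ymin, xmax, ymax) in pixels -> normalised (xc, yc, w, h)."""
    xmin, ymin, xmax, ymax = box
    return ((xmin + xmax) / 2 / img_w,   # x centre, as fraction of image width
            (ymin + ymax) / 2 / img_h,   # y centre, as fraction of image height
            (xmax - xmin) / img_w,       # box width fraction
            (ymax - ymin) / img_h)       # box height fraction

# A 100x50-pixel mask box inside a 400x300 frame:
print(voc_to_yolo((100, 100, 200, 150), 400, 300))
```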
3. Deployment:
After training the model, we built the frontend, which is fully client-based, using JavaScript and the "Flask" microservice. Rather than saving input videos to our server, we send the AI to the client's side, and we use Microsoft Azure for deployment. We have both on-premise and cloud solutions prepared. At the moment we are in a trial, so we can't provide the URL.
After building the AI part and the frontend, we integrated our solution with the IP and CCTV cameras available in our house and checked its performance. Our solution works in real time on video footage with very good accuracy and performance.
Challenges we ran into
There are always a few challenges when you innovate something new. The biggest challenge is "the Novel Coronavirus" itself. Because of it, we can't leave home for the hardware and embedded parts. We are working virtually to build innovative solutions, but as of now we have very limited resources; we can't go out to buy hardware components or IP and CCTV cameras. One more challenge: we were not able to validate our solution with drones in the early days due to the lockdown, but after taking permission from the officials, that was no longer a problem.
Accomplishments that we're proud of
Good work brings appreciation and recognition. We have submitted our research paper to several conferences and international journals (awaiting publication). After developing the basic proof of concept, we went to the local government officials and submitted our proposal for a trial to test our solution for better surveillance, because the lockdown is about to be lifted. Our team is also participating virtually in several hackathons and tech events to showcase our work.
What we learned
Learning is a continuous process. We mainly work in the AI domain, not with drones, so the most important thing about this project was learning new things. We learned how to integrate Masked AI into drones and deploy our solution to the cloud. We added embedded skills to our profile and are now exploring more of that area. The other learning was taking our proof of concept to the local administration for trials. All these "government procedures" (writing a research proposal, meeting with officials, etc.) were new to us, and we learned several protocols for working with the government.
What's next for Masked AI: Masks Detection and Recognition
We are looking forward to collaborating with the local administration and the government to integrate our solution into drone-based surveillance (currently in trend for monitoring inner areas of cities). In parallel, improving the model is the main priority: we are adding action recognition and object detection features to our existing solution to make it even more robust, so decision-makers can make ethical decisions, because surveillance using deep learning algorithms is always risky (bias and errors in judgment).
Built With
azure
darknet
flask
google-cloud
javascript
nvidia
opencv
python
tensorflow
twilio
yolo | Masked AI: AI Solution for Face Mask Identification | Masked AI is a cloud-based AI solution for real-time surveillance that keeps an eye on the human who violates the rule by not using face masks in public places. | [] | [] | ['azure', 'darknet', 'flask', 'google-cloud', 'javascript', 'nvidia', 'opencv', 'python', 'tensorflow', 'twilio', 'yolo'] | 7 |
10,281 | https://devpost.com/software/covidcentral-u21txv | Landing Page
[Screenshots: landing page (including contact-us section), signup and login pages, content summarizer, comparison of 4 types of content summarizer, text insights, and preprocessing]
Inspiration
This year has been really cruel to humanity. Australia was ravaged by the worst wildfires seen in decades, Kobe Bryant passed away, and now there is a pandemic due to the Novel Coronavirus that originated in the Hubei province (Wuhan) of China. Coronavirus disease (COVID-19) is an infectious disease caused by a newly discovered coronavirus. More than 3 million people are affected by this deadly virus across the globe (source: Worldometers). There have been around 249,014 deaths already, and counting. 100+ countries are affected by the virus so far. This is the biggest health crisis in many years.
Artificial intelligence has proved its usefulness in this time of crisis. The technology is one of the greatest soldiers the world could get in the fight against coronavirus. AI, along with its subsets (machine learning), is driving significant innovation across several sectors to win against the pandemic. After Anacode released "The Covid-19 Public Media Dataset", we took the opportunity to apply natural language processing to that data, which is composed of articles. According to Anacode, "It is a resource of over 40,000 online articles with full texts which were scraped from online media in the timespan since January 2020, focussed mainly on the non-medical aspects of COVID-19. The data will be updated weekly." Anacode further says, "We are sharing this dataset to help the data community explore the non-medical impacts of Covid-19, especially in terms of the social, political, economic, and technological dimensions. We also hope that this dataset will encourage more work on information-related issues such as disinformation, rumors, and fake news that shape the global response to the situation."
Our team leveraged the power of NLP and deep learning to build "CovidCentral", a PaaS (Platform as a Service). We believe our solution can help media people, researchers, content creators, and everyone else who reads or writes articles or any other content related to COVID-19.
What it does
Our tagline says "Stay central with NLP-powered text analytics for COVID-19". CovidCentral is a one-of-its-kind NLP-driven platform for fast and accurate insights. It generates summaries and provides analytics of large amounts of social and editorial content related to COVID-19. STAY CENTRAL, IN SHORTS.
It does three things:
1. CovidCentral can help you understand large bodies of text related to COVID-19 in a matter of minutes. Through the platform, get actionable insights from hundreds of thousands of lines of text. It generates an automated summary of large content and provides word-by-word analytics, from total word count to the meaning of each word. The user can either enter a URL to summarize and get insights, or paste the complete content directly into the platform.
2. Large amounts of text are very difficult to analyze, and manual analysis takes hours. CovidCentral delivers insights within minutes. Media people, researchers, or anyone with internet access can use our platform to get insights related to COVID-19.
3. Humans are lazy by nature, and people want to save time. The platform can generate a content summary within minutes from a single URL. CovidCentral uses NLP and deep learning to provide automated text summaries, which is very helpful for getting short facts related to COVID-19.
Why Use CovidCentral?
1. Fast
2. Ease of Use (User-friendly)
3. High Accuracy
4. Secure (no content or data is saved on the server; instead, we send the NLP to you at the frontend)
How we built it
We built CovidCentral using AI, cloud, and web technologies. The platform uses NLP as its major technique and leverages several other tools:
a. Core concept: NLP (Spacy, Sumy, Gensim, NLTK)
b. Programming languages: Python and JavaScript
c. Web technologies: HTML, CSS, Bootstrap, jQuery (JS)
d. Database and related tools: SQLite3 and Firebase (Google's mobile platform)
e. Cloud: AWS
Below are the steps that will give you a high-level overview of the solution:
1. Data Collection and Preparation:
CovidCentral is built mainly on the "Covid-19 Public Media Dataset" by Anacode, a dataset for exploring the non-medical impacts of Covid-19. It is a resource of over 40,000 online articles with full texts related to COVID-19. The heart of this dataset is online articles in text form, continuously scraped from more than 20 high-impact blogs and news websites. There are 5 topic areas: general, business, finance, tech, and science.
Once we had the data, the next step was, of course, text preprocessing. It has 3 main components: (a) tokenization, (b) normalization, and (c) noise removal.
Tokenization splits longer strings of text into smaller pieces, or tokens. Larger chunks of text can be tokenized into sentences, and sentences into words. Further processing is generally performed after a piece of text has been appropriately tokenized.
After tokenization, we performed normalization, because the text needs to be normalized before further processing. Normalization refers to a series of related tasks meant to put all text on a level playing field: converting all text to the same case (upper or lower), removing punctuation, converting numbers to their word equivalents, and so on. Normalization puts all words on equal footing and allows processing to proceed uniformly.
In the last step of our text preprocessing, we performed noise removal: removing characters, digits, and pieces of text that can interfere with analysis. It is one of the most essential preprocessing steps.
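The three stages can be illustrated with a toy pipeline. CovidCentral relies on NLTK/Spacy for the real thing; the regexes below are a simplified stand-in of our own:

```python
import re

def remove_noise(text):
    """Strip HTML tags and punctuation that would interfere with analysis."""
    return re.sub(r"<[^>]+>|[^\w\s]", " ", text)

def normalize(text):
    """Lowercase everything and drop digits so words compare on equal footing."""
    return re.sub(r"\d+", "", text.lower())

def tokenize(text):
    """Split the cleaned string into word tokens."""
    return text.split()

raw = "<p>COVID-19 cases ROSE by 3,000 today!</p>"
print(tokenize(normalize(remove_noise(raw))))
# ['covid', 'cases', 'rose', 'by', 'today']
```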
2. Model Development:
We used several NLP libraries and frameworks: Spacy, Sumy, Gensim, and NLTK. Apart from our custom model, we also use pre-trained models for some tasks. The basic workflow of our COVID-related NLP summarizer and analytics engine is: preprocess the text (remove stopwords and punctuation); build a frequency table of words, i.e. how many times each word appears in the document; score each sentence based on the words it contains and the frequency table; and build the summary by joining every sentence above a certain score limit.
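That frequency-scoring workflow fits in a few lines of plain Python. This is a hedged sketch of the idea only (the real engine uses Spacy/Sumy/Gensim with proper stopword lists; the stopword set and ratio here are our assumptions):

```python
from collections import Counter

STOPWORDS = {"the", "is", "a", "an", "and", "of", "in", "by"}  # tiny stand-in list

def summarize(sentences, keep_ratio=0.5):
    """Score each sentence by the corpus-wide frequency of its words and
    keep sentences at or above the score of the top keep_ratio fraction."""
    words = [w for s in sentences for w in s.lower().split() if w not in STOPWORDS]
    freq = Counter(words)
    scores = [sum(freq[w] for w in s.lower().split() if w not in STOPWORDS)
              for s in sentences]
    keep = max(1, int(len(sentences) * keep_ratio))
    cutoff = sorted(scores, reverse=True)[keep - 1]
    return [s for s, sc in zip(sentences, scores) if sc >= cutoff]

doc = ["covid spreads fast", "masks slow covid", "weather is nice"]
print(summarize(doc))  # ['covid spreads fast', 'masks slow covid']
```

The off-topic sentence scores lowest because its words are rare in the document, so it drops out of the summary.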
3. Interface:
CovidCentral is a responsive platform that supports both mobile and web. The frontend is built using web technologies: HTML, CSS, Bootstrap, and JavaScript (TypeScript and jQuery in this case). We used a few libraries for validation and authentication.
On the backend, it uses the Python microservice "Flask" for integrating the NLP models, SQLite3 for the database, and Firebase for authentication and for keeping records from the user forms.
4. Deployment:
After successfully integrating the backend and frontend into one platform, we deployed CovidCentral to the cloud, where it runs 24/7. We deployed our solution on Amazon Web Services (AWS), using an EC2 instance.
Challenges we ran into
Right now, the biggest challenge is "the Novel Coronavirus". We are taking this as a challenge, not as an opportunity. Our team is working on several verticals, whether medical imaging, surveillance, bioinformatics, or CovidCentral, to fight this virus.
There were a few major challenges. The time constraint was a big one: we had very little time to develop this, but we still pulled CovidCentral together in that short span. The data, with more than 40K articles, is pretty messy, so we had difficulty dealing with it at the beginning; after learning how to handle that kind of data, we eliminated the challenge to some extent. We also ran into challenges deploying our solution to the cloud, but managed it, and we are still testing the platform and making it robust.
Accomplishments that we're proud of
Propelled by modern technological innovations, data is to this century what oil was to the previous one. Today, our world is parachuted by the gathering and dissemination of huge amounts of data. In fact, the International Data Corporation (IDC) projects that the total amount of digital data circulating annually around the world will sprout from 4.4 zettabytes in 2013 to 180 zettabytes in 2025. That's a lot of data!
With so much data circulating in the digital space, there is a need for machine learning algorithms that can automatically shorten longer texts and deliver accurate summaries that fluently convey the intended messages. Furthermore, text summarization reduces reading time, accelerates the process of researching information, and increases the amount of information that can fit in a given area.
We are proud of developing CovidCentral and making it open source, so anyone can use it for free on any kind of device to get important facts related only to COVID-19.
What we learned
Learning is a continuous process of life, the pinnacle of our attitude and vision of the universe. I tell my young and dynamic team (Sneha and Supriya) to keep learning every day.
In this lockdown situation, we are not able to meet each other, but we learned how to work virtually. Online meeting tools (Zoom in our case), GitHub, Slack, etc. helped everyone on our team collaborate and share code with each other.
We also strengthened our skills in NLP (BERT, Spacy, NLTK, etc.) and in integrating our models into the frontend for end users. We spent a lot of time on the interface so people can use it without getting bored. From design to deployment, many things helped us improve our skills technically.
We learn new things every day, and going forward we will add more relevant features to our platform by learning new concepts.
What's next for CovidCentral
We are adding a "fake news detector" to flag fake news related to COVID-19 very soon on our platform. CovidCentral's aim is to help content creators, media people, researchers, etc. read only what matters most, in a short time. APIs will be released soon, so anyone who wants these features in their existing workflow or website can simply use our APIs instead of the platform itself.
We are also in discussion with some text analytics companies about collaborating to bring an even more feasible, robust, and accessible solution. In the near future, we will make CovidCentral a general NLP-powered text analytics platform, free to use from anywhere on any kind of device (mobile, web, tablet, etc.).
Built With
amazon-web-services
bootstrap
css
firebase
flask
html
javascript
natural-language-processing
nltk
python
sqlite
Try it out
covidcentral.herokuapp.com | CovidCentral | CovidCentral is one of its kind NLP driven platform for fast and accurate insights. It generates a summary and provides analytics of large amounts of social and editorial content related to COVID-19. | [] | [] | ['amazon-web-services', 'bootstrap', 'css', 'firebase', 'flask', 'html', 'javascript', 'natural-language-processing', 'nltk', 'python', 'sqlite'] | 8 |
10,281 | https://devpost.com/software/biozene-interactive-bioinformatics | Why Biozene?
[Slides/screenshots: problem statement and solution, device support, vaccine-discovery benefits, desktop/mobile/tablet views, feature overview, data visualization, DNA feature views, auto-generated plots and dot plots between sequences, mutation rate modelling, amino acid frequency, translation, bulk sequence decoding, and DNA alignment scores]
Inspiration
Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is the causative agent of Coronavirus Disease 2019 (COVID-19). Since its first detection in December 2019, the disease has engulfed almost the entire world, spreading to more than 100 countries and resulting in over 352,294 deaths as of 25th May 2020. This highly infectious virus spreads via respiratory droplets and aerosols when an uninfected person comes in contact with an infected one. With no drug or vaccine in sight, the world is slowly succumbing to the disease. Therefore, researchers around the world have started collaborating and sharing their research data so that, with concerted effort, a cure can be developed quickly. In this challenging scenario, bioinformatics has come out as one of the essential tools for analyzing viral data, as it provides vital information about the genetic makeup of the virus and also directly assists in the development of drugs or vaccines against the deadly disease.
The COVID-19 pandemic is far from over, and there is worldwide research into effective diagnostic methods as well as treatments and preventive vaccines. We wanted to automate the overall process, so we came up with a solution in our free time. We took this as a challenge more than an opportunity and developed a bioinformatics application, "Biozene", for researchers, scientists, and anyone working with the DNA sequences of the virus.
What it does
Biozene is a bioinformatics application for computational biology that performs basic to advanced tasks in a short amount of time. It was developed with COVID-19 in mind, to help researchers, scientists, and anyone working to fight the pandemic. It is a data analysis application for integrated and interactive genomics analytics, built to compute and compare millions of COVID-19 DNA sequences. Biozene can help scientists decode the genome of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) that causes COVID-19. This approach could help in drug identification and vaccine development.
Features of Biozene:
1. Represent and analyze DNA sequences
2. View DNA Features
3. Protein Synthesis Analysis (translation, transcription, complement, amino acid generation, etc.)
4. Generate Genome Diagrams
5. Mutation Rate Modeling
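The protein-synthesis operations listed above have simple definitions; Biozene uses Biopython for them, but as a dependency-free illustration of what the transcription and complement steps compute (our own sketch, not Biozene's actual code):

```python
# Minimal, dependency-free sketch of a few of the sequence operations above.
# Biozene itself uses Biopython (Bio.Seq); shown here purely for illustration.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def transcribe(dna: str) -> str:
    """DNA coding strand -> mRNA: replace thymine (T) with uracil (U)."""
    return dna.upper().replace("T", "U")

def reverse_complement(dna: str) -> str:
    """Complement each base, then reverse (5'->3' of the opposite strand)."""
    return "".join(COMPLEMENT[b] for b in reversed(dna.upper()))

def gc_content(dna: str) -> float:
    """Fraction of G and C bases, a common first-pass sequence statistic."""
    dna = dna.upper()
    return (dna.count("G") + dna.count("C")) / len(dna)

seq = "ATGGCCATTGTAATG"
print(transcribe(seq))          # AUGGCCAUUGUAAUG
print(reverse_complement(seq))  # CATTACAATGGCCAT
print(round(gc_content(seq), 3))  # 0.4
```

With Biopython installed, the equivalent calls are `Seq(seq).transcribe()` and `Seq(seq).reverse_complement()`.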
Benefits:
1. Helps Researchers/Scientists
2. Saves countless hours
3. Free to use and supports all kinds of devices
How we built it
Biozene is a SaaS application that supports all kinds of devices (responsive). The application runs on the cloud and performs tasks in real time. We used an end-to-end pipeline structure to build the solution; from data mining to deployment, we used several tools, frameworks, libraries, and languages. The core language behind Biozene is Python, served by the Tornado engine. Below are the technologies used to develop the app:
Data Analysis and Visualization: Pandas, Numpy, Matplotlib, Seaborn, etc.
Core Language: Python.
Supporting language: Markdown, JavaScript, Bootstrap, etc.
Main Library: Biopython, and Scikit-learn.
Cloud Technology and Server: AWS, Tornado, and Streamlit.
Additional Tools: Git, Anaconda, Colab, Instamojo, Putty, etc.
NCBI and the University of Edinburgh are the two platforms where we collected a lot of information about the bioinformatics tools and algorithms. Biozene currently supports the “FASTA” and “GenBank” files that are available on NCBI.
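Biozene reads these files with Biopython's `SeqIO`; to show how simple the FASTA format itself is (a `>` header line followed by sequence lines), here is a minimal hand-rolled parser — an illustrative sketch only, not the app's code:

```python
def parse_fasta(text: str) -> dict:
    """Parse FASTA text into {record_id: sequence}. A '>' line starts a record."""
    records, current_id, chunks = {}, None, []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            if current_id is not None:
                records[current_id] = "".join(chunks)
            # Record id is the first whitespace-separated token after '>'.
            current_id, chunks = line[1:].split()[0], []
        else:
            chunks.append(line)
    if current_id is not None:
        records[current_id] = "".join(chunks)
    return records

example = """>seq1 demo record
ATGGCC
ATTGTA
>seq2
GGGCCC"""
print(parse_fasta(example))  # {'seq1': 'ATGGCCATTGTA', 'seq2': 'GGGCCC'}
```

With Biopython the same result comes from `SeqIO.parse(handle, "fasta")`, which also handles GenBank via `"genbank"`.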
Challenges we ran into
The biggest challenge of this year is the “Novel Coronavirus” itself. Biozene is deployed on free dynos on a cloud service provider and the challenge we are facing is of funds at the moment. We are not able to scale up the infrastructures on the cloud. For preparing this production level application, we used free services.
In the coming days, we would like to add features like database integration, authentication, malware protection, and a better UX; for those integrations, we will need to upgrade our cloud services.
Apart from that, we are really excited to bring Biozene into production to help researchers and scientists.
Accomplishments that we're proud of
We developed this application in a very short span of time. After developing the solution, we reached out to a few reputed research institutes and labs in India that are working on vaccine development. They found our solution impactful, and at the moment we are exploring a partnership with them so they can use Biozene and other custom analytics-based solutions. We are proud of our team members who contributed to the development of the project.
What we learned
We took this pandemic as a challenge more than an opportunity. Biozene is one of our special projects, and so far we are really happy with the outcomes and the response from the community. At the moment, India is in complete lockdown and our young and dynamic team is working from home (remotely); we learned the art of working under these lockdown conditions. On the technical and development side, we learned awesome-streamlit, which is going to revolutionize the AI/ML domain in the next few months. The production-level deployment also taught us new concepts like adding domains, add-ons, custom pipelines, etc. Learning is a continuous process, and we will keep learning going forward.
What's next for Biozene: Interactive Bioinformatics
We are adding more features in the coming days to provide a robust and feasible tool to the bioinformatics community working to battle the deadly virus.
Coming Soon:
- Run Blast
- Machine Learning Modeling (Cluster and Regression Analysis)
- Phylogenetics and sequence motifs
Biozene is free to use, so anyone can use it at their convenience. We are also developing customized paid solutions. Apart from development, we are optimistic about collaborating with research labs here in India to accelerate their COVID-19-related R&D activities with Biozene and related custom solutions. We are committed to working with anyone fighting COVID-19 and to supporting their research and discovery efforts using AI, ML, and bioinformatics technologies.
Built With
amazon-web-services
anaconda
biopython
bootstrap
git
google-colab
javascript
markdown
matplotlib
numpy
pandas
putty
python
scikit-learn
seaborn
streamlit
tornado
Try it out
biozene.online
drive.google.com | Biozene: Interactive Bioinformatics Application for COVID-19 | Biozene is bioinformatic application for interactive analytics on genomics to compute and compare DNA sequences to speed up the vaccine discovery by reading the genetic structure of the viruses. | [] | [] | ['amazon-web-services', 'anaconda', 'biopython', 'bootstrap', 'git', 'google-colab', 'javascript', 'markdown', 'matplotlib', 'numpy', 'pandas', 'putty', 'python', 'scikit-learn', 'seaborn', 'streamlit', 'tornado'] | 9 |
10,281 | https://devpost.com/software/castme | Main Menu
Motion capture streaming demo
Female avatar professor teaching
Male Avatar professor teaching
presentation screen
view from the back
View from the middle
Customize Character
castme.life website
Splash Screen
Inspiration
Video lectures are available in abundance, but the mocap data behind such lectures is far richer, in the form of precise motion data. Large amounts of high-quality data are a requirement of the best predictive ML models, so we use mocap data here. Despite the availability of such promising data, the problem of generating bone transforms from audio is extremely difficult, due in part to the technical challenge of mapping from a 1D signal to 3D transform (translation, rotation, scale) float values, but also because humans are extremely attuned to subtle details in expressed emotion; many previous attempts at simulating a talking character have produced results that look uncanny (e.g., the companies Neon and Soul Machines). In addition to generating realistic results, this project represents a first attempt to solve the audio-speech-to-character-bone-transform prediction problem by analyzing a large corpus of mocap data from a single person. As such, it opens the door to modeling other public figures, or any 3D character, through analysis of mocap data. Text-to-audio-to-bone-transform synthesis, aside from being interesting purely from a scientific standpoint, has a range of important practical applications. The ability to generate a high-quality textured 3D animated character from audio could significantly reduce the bandwidth needed for video coding and transmission (which makes up a large percentage of current internet traffic). For hearing-impaired people, animation synthesized from bone transforms could enable lip-reading of over-the-phone audio. And digital humans are central to entertainment applications like movie special effects and games.
What it does
Cutting-edge technologies like ML and DL have solved many of our society's problems with far better accuracy than an ideal human ever could. We are using this tech to enhance learning in the education system.
The problem for every university student is that they have to pay a large amount of money to keep studying at a college, and they have to interact with lecturers and professors to keep improving. We are solving the money problem. Our solution is an e-text-to-human-AR-character sparse-point-mapping machine learning model that can stand in for professors, with AI bots teaching the same material in a far more interactive and intuitive way than could ever be done with professors alone. Students can even learn on their own with the AR characters.
How we built it
This project explores the opportunities of AI and deep learning for character animation and control. Over the last 2 years, it has become a modular and stable framework for data-driven character animation, including data processing, network training, and runtime control, developed in Unity3D, Unreal Engine 4, TensorFlow, and PyTorch. The project enables using neural networks for animating character locomotion, facial sparse-point movements, and character-scene interactions with objects and the environment. Further advances will continue to be added to this pipeline.
Challenges we ran into
To build a studio-like environment, we first had to collect a set of equipment, software, and their prerequisites. Some of them are listed below:
Mocap suit: Smartsuit Pro from www.rokoko.com - single: $2,495 + extra textile: $395
GPU + CPU - $5,000
Office premise – $ 2,000
Data preprocessing
Prerequisite software licenses- Unity3D, Unreal Engine-4.24, Maya, Motionbuilder
Model Building
AWS Sagemaker and AWS Lambda inferencing
Database Management System
Further, we started building.
Accomplishments that we're proud of
The very idea of joining a virtual class, hosting a class, having real-time interaction with your colleagues, talking with them, asking questions, visualizing an augmented view of equipment, and creating a solution is in itself an accomplishment.
Some of the great features that we have added in here are:
Asking your avatar professors questions,
having discussions with your colleagues,
learning at your own time with these avatar professors,
and many more. Some of the detailed descriptions are given in the submitted files.
What we learned
This section can be entirely technical: all of the C++ and Blueprint aspects of multiplayer game development.
We also started developing some of the designs in MotionBuilder, where previously we had all been using Maya and Blender.
What's next for castme
1. We are looking for a tie-up with many colleges and universities. Some of the examples are Galgotiah University, Abdul Kalam Technical University (AKTU), IIT Roorkee, IIT Delhi.
2. Recording an abundant amount of lecture motion-capture data to better train our (question-answering motion-capture) machine learning model.
Try it out here:
Intro Demo (2 min):
https://youtu.be/Xm6KWg1YS3k
Complete Demo:
https://youtu.be/1h1ERaDKn6o
Download pipeline here:
https://www.castme.life/wp-content/uploads/2020/04/castme-life%20Win64%20v-2.1beta.zip
Documentation to use this pipeline:
https://www.castme.life/forums/topic/how-to-install-castme-life-win64-v-2-1beta/
Complete source code (1.44 GB):
https://drive.google.com/open?id=1GdTw9iONLywzPCoZbgekFFpZBLjJ3I1p
castme.life:
https://castme.life
More info
For more info on the project contact me here:
gerialworld@gmail.com
, +1626803601
Built With
blueprint
c++
php
python
pytorch
tensorflow
unreal-engine
wordpress
Try it out
castme.life
www.castme.life
github.com
www.castme.life | castme | We are revolutionizing the way the human learns. We uses the Avatar Professors to teach you in a virtual class.Talk to your professors,ask questions,have a discussion with your colleagues in realtime. | ['Md. Zeeshan', 'Rodrixx Studio'] | ['The Wolfram Award'] | ['blueprint', 'c++', 'php', 'python', 'pytorch', 'tensorflow', 'unreal-engine', 'wordpress'] | 10 |
10,281 | https://devpost.com/software/sarsv-detector | Inspiration
Science is the best field ever. Viruses are among the microorganisms we study, and the SARS virus behind COVID-19 is one of them; it can be fatal when not detected early. Its strong point is that it is too tiny to be seen, but what if we could see it up close with an app that scans any surface or specimen in a blink, quickly detecting the virus along with the size of the viral population present?
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for SARSV Detector
Built With
microscope
scanner
software | SARSV Detector | Severe Acute Respitory Syndrome-related Virus | ['Buntu Mangxola'] | [] | ['microscope', 'scanner', 'software'] | 11 |
10,283 | https://devpost.com/software/ar-grapher | Inspiration
3D functions are often difficult to visualize and interact with. Most graphing technology displays the 3D graphs on 2D surfaces, only showing a cross-section of the graph. This makes it difficult to analyze and appreciate many of the properties of the function. By using AR technology, users will be able to interact with these functions in a 3D sense. This technology also exists in VR, however, headsets are expensive so AR is much more accessible to most students.
What it does
The app allows users to enter a 2-variable function, for example, z = x + y, and it will be graphed with AR. The user can move their camera around to see different parts of the function.
How we built it
The app was built in Unity using Vuforia.
Challenges we ran into
Our team was unfamiliar with Unity and AR development and had to navigate many aspects of both without prior knowledge. Additionally, parsing the equations was difficult. We tried several online libraries to help with it, but none did exactly what we needed, so we chose the best ones and added some of our own code.
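The equation-parsing problem the team describes (the app itself is C# in Unity) can be sketched in Python using the standard `ast` module, whitelisting only arithmetic nodes so user-typed formulas like `x + y` can be evaluated without executing arbitrary code; everything here is our own hypothetical sketch, not the app's implementation:

```python
import ast
import operator

# Whitelisted binary operators; anything not listed is rejected.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow}

def eval_formula(expr: str, x: float, y: float) -> float:
    """Safely evaluate a 2-variable formula such as 'x + y' or 'x**2 - y'."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name) and node.id in ("x", "y"):
            return {"x": x, "y": y}[node.id]
        raise ValueError("disallowed syntax in formula")
    return walk(ast.parse(expr, mode="eval"))

print(eval_formula("x + y", 2, 3))     # 5
print(eval_formula("x**2 - y", 3, 1))  # 8
```

Evaluating this over a grid of (x, y) samples yields the vertex heights needed for a 3D mesh of the surface.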
Accomplishments that we're proud of
We are proud that we were able to create a functioning application using AR.
What we learned
We learned how to create projects in Unity, as well as creating AR apps using Vuforia.
What's next for AR Grapher
In the future, we plan on using more advanced graphing techniques to create our graphs. For example, we researched and planned to implement marching cubes to create the graphs, but did not have enough time to do so. Other functionalities we would implement include allowing a larger variety of functions to be graphed, adding sliders to allow users to see transformations to functions in real-time, and streamlining the function parsing.
Built With
c#
unity
vuforia
Try it out
aadityayadav.github.io
github.com | AR Grapher | A 3D mathematical function graphing application that utilizes augmented reality. This application will help students visualize math functions in R^3. | ['Aaditya Yadav', 'Wolf Van Dierdonck', 'Nick Liu', 'Peter Zhu'] | ['1st Place Prize'] | ['c#', 'unity', 'vuforia'] | 0 |
10,283 | https://devpost.com/software/homegrown-o2mz3r | Sign-in
Search engine
Various snippets of the code - 1
Various snippets of the code - 2
Home Page
Search with google maps
Newsfeed
Google Maps
HomeGrown
Link for video:
https://drive.google.com/drive/folders/1QD7w_WBRqmBjpQaCfEpPsmc_FffdVtHM?usp=sharing
Inspiration
With the whole world on lockdown due to the COVID-19 crisis, more and more people are adapting to their new routine of life at home. However, with the sharp decline in customers many local businesses are struggling to adapt to the new norm. Business owners are faced with a difficult choice: stay open for their communities at risk of bankruptcy or shut down permanently. Additionally, millions of people are being unemployed every day as lockdowns are being extended. The world is slowly looking at its worst economic conditions in generations.
What it does
HomeGrown is our solution to this: a web app that not only serves as a platform for small and local businesses to promote themselves to their communities but also to connect people locally with available financial aid resources and to find job openings in their vicinity.
When visiting our site, businesses nearby are displayed with exciting deals, posts, and advertisements on the website’s newsfeed. Customers will be able to search and filter the results based on the businesses name and the offered services. Furthermore, people will be able to search up available jobs to sustain themselves.
Contributions
This project was made during the 2020 Hack3 Virtual Hackathon.
Team members:
Krish Mehta, Carol Xu, Rahul Aggarwal, Jeannie Quach
How We Built It
We began by splitting the team into backend and frontend developers based on prior experience. The frontend team began designing ideas for the overall layout and the eventual functionality of the website, while the backend team gave feedback on these design ideas and determined the APIs needed to implement the desired functionality.
After that, the frontend team began coding the landing page and business listings page in static HTML, later converting this to a React.js web app. While this was happening, the backend team worked on using the Firebase API to perform user registration and authentication. Once this was done, the backend team moved on to determining user location by integrating the Google Maps API into HomeGrown. Finally, the frontend team implemented a feature that allows searching and filtering through all business listings and job postings stored in our Firebase.
HomeGrown was made with React.JS, Node.js, Jquery, and Firebase, and bootstrapped with
Create React App
.
Challenges we ran into
The first challenge we faced was working to create a database on MySQL and accessing data stored in the database with PHP in a React.js framework. With multiple problems occurring, we decided to pivot to firebase and react.js to create an OAuth account registration and login page.
The second challenge we faced was the time constraints preventing us from creating new registrations and maintaining a live news feed. To tackle this problem, we created a static region-specific database of local shops, which, though not accurate, projects the full potential of our live news feed feature.
The third challenge we faced was integrating the Google Maps Geolocation API with our pages, considering it is no longer a free service. Regardless, we went ahead with the plan and continued to display the map in developer mode, which also shows how beautifully the web app would work with all the necessary tools.
Accomplishments that we are proud of
We were able to create a database on firebase with OAuth registration and login directly connected with the database.
Static newsfeed content was created to show and advertise local businesses in a region.
We were able to connect the live-rendering search engine to the database of local businesses.
Created redirects to help users order from the shops, as well as search for jobs in the vicinity with the search engine.
Integrated google maps GeoLocation API which asks for the user's location and shows registered shops in the vicinity through multiple markers.
Successfully created a full-stack web app.
What we learned
Creating web applications with React.js
Linking HTML, CSS and JavaScript files to Firebase database
Limitations and advantages of the Google Maps API
Learned how to create a live/static rendering search engine
What's Next?
Allowing small businesses to register an account on HomeGrown to post their advertisements and job listings.
Turning the web app into a phone app: deployment to iOS and Android.
Implement a dynamic refreshing/updating system into the job postings and business listings pages.
Adding more businesses and locations to HomeGrown's database, which would eventually lead to reaching out to small businesses in cities all around the world!
Creating a dedicated portal to allow customers to directly order from registered businesses with the HomeGrown web application.
Dependencies to Install in the Root Directory
These packages can be installed using the "npm install" command in the terminal line.
react
firebase
react-firebaseui
react-geocode
react-autocomplete
react-google-maps
react-router-dom
axios
jquery
qs
Commands to Run
In the project directory, you can run the following commands:
npm start
Runs the app in the development mode.
Open
http://localhost:3000
to view it in the browser.
The page will reload if you make edits.
You will also see any lint errors in the console.
nodemon server.js
Runs the development server to connect with Google Maps API. In the terminal "nodemon starting node server.js Server running on port: 5000" should be running.
Built With
css
firebase
html
javascript
jquery
node.js
qs
react
react-firebaseui
Try it out
github.com | HomeGrown | Helping small and local businesses | ['Carol Xu', 'Rahul Aggarwal', 'Krish Mehta', 'Jeannie Q'] | ['2nd Place Prize'] | ['css', 'firebase', 'html', 'javascript', 'jquery', 'node.js', 'qs', 'react', 'react-firebaseui'] | 1 |
10,283 | https://devpost.com/software/mask_pi_hack3 | Inspiration
Honestly, a lot of inspiration comes from Srinivas's Dad. He was the one who taught all of us how deep learning works. He provided the fire to learn Deep Learning models.
How we trained the model:
We built a CNN in Keras based on this dataset:
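The project's tags include `smtplib`, suggesting detections trigger email alerts; a hedged sketch of how a no-mask detection could be turned into a notification (hypothetical addresses and helper names, not the team's actual code — and the message is only composed here, never sent):

```python
from email.message import EmailMessage

def build_mask_alert(camera_id: str, confidence: float) -> EmailMessage:
    """Compose (but do not send) an alert email for a no-mask detection."""
    msg = EmailMessage()
    msg["Subject"] = f"[Mask_Pi] No mask detected on {camera_id}"
    msg["From"] = "maskpi@example.com"   # hypothetical sender address
    msg["To"] = "security@example.com"   # hypothetical recipient address
    msg.set_content(
        f"Camera {camera_id} detected a person without a mask "
        f"(model confidence {confidence:.0%}). Please verify."
    )
    return msg

alert = build_mask_alert("pi-entrance", 0.94)
print(alert["Subject"])  # [Mask_Pi] No mask detected on pi-entrance
# Actually sending would look like:
#   smtplib.SMTP_SSL("smtp.example.com").send_message(alert)
```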
Things we learned:
We learned how to train a custom model for our own purposes.
We learned a lot more about how we can truly impact the world with Deep Learning.
We found out a bunch of ways for how our product can be used in the real world.
We learned a whole lot more about the Raspberry Pi
We learned about time management, and how to plan things accordingly based on what’s most important.
Difficulties/ Things to improve on:
At first, we had trouble finding resources and making do with what we had.
It took us a while to fix the bugs in the code when we switched from the laptop to the Pi.
We were having some trouble finding an effective way to pitch our product in 5 min.
It took us a while to find an effective way to solve the problem.
We had to figure out an inconsistency in our code; sometimes it would work, other times it didn't.
Built With
gpio
keras
numpy
opencv
python
shell
smtplib
tensorflow
Try it out
github.com | Mask_Pi_HACK3 | Project for the Hack3 online hackathon that detects if people are wearing masks or not. | ['Abhisar Anand', 'Srinivas Sriram', 'Milan Behera'] | ['3rd Place Prize'] | ['gpio', 'keras', 'numpy', 'opencv', 'python', 'shell', 'smtplib', 'tensorflow'] | 2 |
10,283 | https://devpost.com/software/hactre-e-s | Home Page
Volunteer Website opened when you search your location in our bar
Pollution Map 1: EPA 2020 Data of General AQI
Pollution Map 1: Preview of search bar feature
Pollution Map 2: Preview of hover for stats feature
Game Page: Instructions + Link to Pygame in Browser through Repl
Pygame in Browser through Repl
Pygame in Progress through actual Python code in VS Code
Pygame Lose Screen through actual Python code in VS Code
About Us Page!
Inspiration
Air pollution is such a vast problem globally, from small urban sprawls to large metropolises, caused by our negligence toward the environment.
We’re sure you’ve probably heard this MANY times, and, while it’s true, our source of inspiration was a little different.
We’ve all come from families with histories of asthma and breathing issues due to the air pollution in their homelands, and we were brought together by the realization that even the citizens of our own nation don’t have enough knowledge about pollution and climate change. Even our president is sometimes skeptical about climate change! So, in order to combat this issue and inform people, we created Hactre(e)s!
We’re so grateful we got this opportunity to meet someone new across the nation and bring our minds together to produce this idea!
What it Does
Hactre(e)s is a recreational platform meant to educate the user regarding the current global issue of air pollution through an interactive game and several helpful visuals. The product contains three individual features on the HTML webpage:
A custom location-based volunteer search bar that connects the user with opportunities to help combat pollution (in another web page that searches for volunteer opportunities).
Two heatmaps that visually display carbon production and general AQI data from EPA and EIA reports (2020 and 2017). They contain a geolocating feature and an embedded search bar to look up your location on the map and find data near you.
Terra vs Train, an educational Python (pygame) game that encourages users to plant and collect trees while watching their carbon footprint.
How we built it
The platform for our project was created using HTML and CSS through the GitHub server. We embedded two heat maps, which were created using QGIS (QGIS programming involves C++, Python, and Qt). Additionally, we included a recreational and educational game called Terra vs Train, a pollution spin-off of the popular snake game, built using Python and Pygame.
Challenges we ran into
We wanted to make our game available to all (no pygame installation required), but it was difficult to embed our pygame into the HTML webpage; we instead made it available in a web browser by running the game on Repl.
Another issue we faced was creating and inserting our maps into an iframe; it was difficult to get our map (with over 19,000 rows of data) to display and load properly.
Lastly, coding the game Terra vs Train was actually pretty hard, as we’re all new to pygame. However, with the help of a mentor we got it done!
Accomplishments that we're proud of
We’re first of all super proud of our website display and our page organization through buttons (considering one of our members was brand new to HTML and CSS and another completely new to our team and so far away!). Also, coding the Python game was very satisfying, as was learning so much more about how pygame works and different shortcuts in the code. Lastly, creating the map with so much data was tough, and when it finally displayed as the heat map we wanted, it was a moment of complete glee before we got back to work.
What we learned
Through this process, we all strengthened our confidence and skill in the languages used for the project, including HTML, CSS, pygame, Python, and C++. The use of GitHub, Visual Studio Code Live Share, Google Meet, and Slack for communication and collaboration also taught us how to be more flexible and adapt to quick changes as we worked together virtually. Most importantly, we learned how to troubleshoot both efficiently and effectively; with all the small challenges we faced along the way, all members became better at solving issues in quick and creative ways.
What's next for Hactre(e)s
Our next step for Hactre(e)s is to reach a bigger platform and a larger audience. We hope to improve our website styling to be more appealing, and to include a myriad of maps (hopefully covering water, land, and soil pollution) with information from across the globe to better inform users. It would also be nice to learn to make our heatmaps with R rather than relying on QGIS. Additionally, our team plans on adding a wider variety of games (and maybe levels to the Terra vs Train game) so that users can better engage with our webpage. We might even code our next one in C++ and figure out how to display it directly in the website.
Try it out
github.com
risheethagb.github.io | Hactre(e)s | Hactre(e)s is a recreational platform where users can gain insight on air pollution through custom heat maps and an educational game, and can help the cause with our location-based volunteer search. | ['Risheetha Bhagawatula', 'Neha Balaji', 'Shreya Chandran', 'Hailena Bian'] | ['Popular Choice Award'] | [] | 3 |
10,283 | https://devpost.com/software/covid-19-alert | Firebase Database
App Screenshot
Inspiration
We have had a couple of friends and their families contract the Coronavirus. Contact tracing could have helped to mitigate the spread of the virus to others.
What it does
The application acts as a Bluetooth Low Energy (BLE) beacon and scanner; using this technology, the app logs the people the user has interacted with at six feet or closer, based on Bluetooth signal strength. When a user notifies the app that they have symptoms or have tested positive for COVID-19, all of the smartphone users who have been close enough to that person's smartphone will be notified of the date on which they were potentially exposed to the virus and advised to quarantine.
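Estimating "six feet or closer" from Bluetooth signal strength is commonly done with a log-distance path-loss model; a sketch of the idea (the calibration constants here are illustrative assumptions, not the app's actual values):

```python
def estimate_distance_m(rssi: float, measured_power: float = -59.0,
                        path_loss_exp: float = 2.0) -> float:
    """Log-distance path-loss model: distance in meters from an RSSI reading.

    measured_power: expected RSSI at 1 m (device-specific calibration, assumed).
    path_loss_exp:  environment factor (~2 in free space, higher indoors).
    """
    return 10 ** ((measured_power - rssi) / (10 * path_loss_exp))

SIX_FEET_M = 1.83  # six feet in meters

def within_six_feet(rssi: float) -> bool:
    return estimate_distance_m(rssi) <= SIX_FEET_M

print(estimate_distance_m(-59))  # 1.0 (at the calibration point)
print(within_six_feet(-75))      # False (roughly 6 m away under these constants)
```

In practice RSSI is noisy, so real contact-tracing apps smooth readings over a time window before making a proximity decision.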
How I built it
We built the app using Android Studio with Kotlin, Java, and XML. We also used Google Firebase for cloud storage for the database of people who have contracted COVID-19 or have symptoms.
Challenges I ran into
We had issues at first linking our Firebase server into our application, but we resolved it after a few hours of searching. We also had to debug why the beacon was not working at first, but this was due to an error in our configuration of it.
Accomplishments that I'm proud of
We were able to make an application that detects other phones within six feet using the signal strength of Bluetooth. The Firebase database also stores the incidents with the virus as well, and this can be cross-referenced with a user’s list of people they have met. This app has the potential to slow down the spread of COVID-19 significantly.
What I learned
We learned how to use a Firebase database in an Android application. We also learned how to integrate a remote stack with a local stack to transfer information.
What's next for COVID-19 Alert
We will make a service that automatically checks every 30 minutes if the user is at risk for having COVID-19. Currently, this is handled by a button that the user can click to see if they are at risk.
Built With
android
firebase
java
kotlin
xml
Try it out
github.com | COVID-19 Alert | An advanced contact tracing app for the Coronavirus. The phone detects anyone they’ve encountered in six feet and alerts the user if someone they’ve met has symptoms or has the disease. | ['Parv Shrivastava', 'Shreyas Raikhelkar', 'Pranav Dantu'] | ['COVID-19 Track Winner'] | ['android', 'firebase', 'java', 'kotlin', 'xml'] | 4 |
10,283 | https://devpost.com/software/astr | Main Screen
Astr Insight
Astr Screening
Astr Mind
Note about demo:
The demo is available at http://www.astr.xyz, but due to costs, I couldn't afford a server that could handle the deep learning models. However, both "Astr Screening" and "Astr Mind" are available.
Inspiration
Despite machine learning showing amazing results on many tasks, it will still be a long time before it can be implemented in real diagnostic systems, due to both raw accuracy and public trust. Based on both the shortcomings of machine learning and state-of-the-art research, I'm proposing a platform that applies machine learning in an assistive role, building upon, instead of replacing, hospital infrastructure.
What it does
It consists of three parts:
Astr Insight - A modular pipeline that uses deep learning to implement better preprocessing techniques, from image super-resolution to anomaly detection, for both human-based and machine-learning-based diagnoses. It also uses the Grad-CAM architecture to provide insight into the "thought process" of neural networks and why a certain classification was made, assisting doctors in diagnosis instead of just producing a separate classification value.
Astr Screening - It uses existing information within hospital databases or from routine checkups to automatically detect diseases that a given patient may be at risk for. Because of the availability of the input features, it's able to scan large amounts of patients and serve as an automatic "early warning" system that then leads to consults with physicians. The main interface is a REST API designed to be implemented into hospital software.
Astr Mind - Connects patients with therapists in a more personal way by using a chatbot interface that mimics natural conversation. It alters its mode of speaking in order to reflect or "empathize" with the sentiment of the patient. Finally, it uses keyword processing algorithms to automatically detect what kind of therapist the patient needs based on natural conversation alone.
How I built it
Astr Insight - There are 4 deep learning models in this module. The super-resolution and denoising models are "Residual Dense Networks," designed to better capture local information in images. Anomaly detection is accomplished by a variational autoencoder (VAE) that probabilistically models the image distribution. With a trained VAE, you can calculate a lower bound on the log-likelihood of seeing a given image; based on the histogram of lower bounds, boundaries were determined to classify images as "anomalous." The skin cancer detection model was very simple: just a ResNet50 with a dense classifier fine-tuned on the HAM10000 dataset. I didn't spend too much time on that one, as pure classification was not the focus.
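The thresholding step described above, flagging images whose likelihood lower bound is unusually low, can be sketched with a simple percentile rule (illustrative only; the project's actual boundaries were chosen by hand from the histogram):

```python
def percentile(values, q):
    """Nearest-rank percentile (q in [0, 100]) of a list of floats."""
    s = sorted(values)
    idx = min(len(s) - 1, max(0, round(q / 100 * (len(s) - 1))))
    return s[idx]

def flag_anomalies(elbo_scores, q=5.0):
    """Flag samples whose ELBO falls below the q-th percentile threshold.

    Intuition: a VAE trained on normal images assigns them a high likelihood
    lower bound (ELBO); out-of-distribution images score unusually low.
    """
    threshold = percentile(elbo_scores, q)
    return [score < threshold for score in elbo_scores], threshold

scores = [-102.0, -98.5, -101.2, -99.0, -250.0, -100.3]  # one clear outlier
flags, t = flag_anomalies(scores, q=25)
print(flags)  # [False, False, False, False, True, False]
```

With a large calibration set, a small q (e.g. 5) keeps the false-positive rate on in-distribution images near q percent.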
Astr Screening - Because of the time restrictions, I was only able to implement 4 detection systems. The classifiers used were Random Forest and Gaussian Naive Bayes. They were trained on publicly available datasets of common diseases, with features optimized for both accuracy and occurrence in regular hospital data.
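As a sketch of the screening idea, here is a minimal Gaussian Naive Bayes built from scratch on toy vitals. The feature names, numbers, and labels are invented for illustration; they are not Astr's actual datasets or trained models.

```python
import math
from collections import defaultdict

class TinyGaussianNB:
    """Minimal Gaussian Naive Bayes, the kind of classifier used for screening."""

    def fit(self, X, y):
        groups = defaultdict(list)
        for row, label in zip(X, y):
            groups[label].append(row)
        self.stats, self.priors = {}, {}
        for label, rows in groups.items():
            per_feature = []
            for col in zip(*rows):  # per-feature mean and variance
                mu = sum(col) / len(col)
                var = max(sum((v - mu) ** 2 for v in col) / len(col), 1e-9)
                per_feature.append((mu, var))
            self.stats[label] = per_feature
            self.priors[label] = len(rows) / len(X)
        return self

    def predict(self, row):
        best_label, best_lp = None, -math.inf
        for label, per_feature in self.stats.items():
            lp = math.log(self.priors[label])  # log prior + log likelihoods
            for x, (mu, var) in zip(row, per_feature):
                lp += -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
            if lp > best_lp:
                best_label, best_lp = label, lp
        return best_label

# Toy vitals: [resting heart rate, fasting glucose]
X = [[60, 85], [62, 90], [58, 88], [95, 160], [100, 170], [92, 155]]
y = ["low-risk", "low-risk", "low-risk", "at-risk", "at-risk", "at-risk"]
model = TinyGaussianNB().fit(X, y)
print(model.predict([97, 165]))  # -> at-risk
print(model.predict([61, 87]))   # -> low-risk
```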
Astr Mind - I scraped articles from Wikipedia concerning various psychological conditions. They were preprocessed by the removal of stopwords, stemming, and lemmatization. Those articles were then passed through the "Rapid Automatic Keyword Extraction" algorithm to extract keywords. It is based on those keywords that classification is made. As a backup, for ambiguous text, a general therapist search is made instead. The sentiment classification is based on the polarity score given by TextBlob.
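The keyword-based routing can be illustrated with a simple overlap count: score the patient's message against a keyword profile per specialty and fall back to a general search when nothing matches strongly. The profiles and threshold below are invented stand-ins for the RAKE-extracted keywords described above.

```python
# Hypothetical keyword profiles per specialty (in the real pipeline these
# come from RAKE run over the scraped Wikipedia articles).
PROFILES = {
    "anxiety": {"panic", "worry", "racing heart", "restless", "anxious"},
    "depression": {"hopeless", "fatigue", "sad", "empty", "unmotivated"},
    "grief": {"loss", "mourning", "funeral", "passed away", "grieving"},
}

def route_to_specialist(message, min_hits=2):
    text = message.lower()
    scores = {name: sum(1 for kw in kws if kw in text)
              for name, kws in PROFILES.items()}
    best = max(scores, key=scores.get)
    # Ambiguous text falls back to a general therapist search.
    return best if scores[best] >= min_hits else "general"

print(route_to_specialist("I feel so anxious, my worry never stops"))  # anxiety
print(route_to_specialist("hello there"))  # general
```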
Challenges I ran into
The preprocessing and classification pipeline was difficult due to resource constraints, both on my own personal computer and on the server. There are ways around them - e.g. I used TFLite for the variational autoencoder - but I needed to calculate gradients for GradCAM, which isn't available in TFLite. It is for that reason that the online demo does not include the "Astr Insight" feature.
Accomplishments that I'm proud of
I definitely got better at web design in this process and became more familiar with a CSS framework, which will be very useful for later projects. The screening machine learning models, along with the keyword-based classification, also turned out better than I expected.
What I learned
I became a lot more familiar with many different aspects of both machine learning and web design. I feel much more confident about doing something similar in the future.
What's next for Astr
I want to find a way to reduce the resource costs of the different models. I think it wouldn't be too difficult to get the memory usage of the RDN models down, but the classifier is much harder because it has to stay compatible with GradCAM.
Github:
https://github.com/AlexWan0/Astr2
Built With
flask
keras
python
tensorflow
Try it out
www.astr.xyz
github.com | Astr | Assistive Machine Learning for Hospitals | [] | ['2nd Place', 'New Technology Track Winner'] | ['flask', 'keras', 'python', 'tensorflow'] | 5 |
10,283 | https://devpost.com/software/virtual-cooking-9sq0l3 | Inspiration
We wanted to create an app that encouraged people of all ages to learn how to cook and bake in a safe and fun way without having any prior experience.
What it does
It provides a wide range of recipes, from easy to difficult, to learn on an augmented reality platform.
How I built it
We built this application using App Lab on code.org and echoAR.
Challenges I ran into
The brainstorming process was a large challenge, and with the help of a mentor, we were able to solidify our ideas. This step took over four hours to overcome, and we were doubtful that we would end up submitting anything.
Accomplishments that I'm proud of
We are proud to have overcome the creative block during the brainstorming stage and now have a product to show for it.
What I learned
We learned how to use the echoAR platform, and we experienced the stress of a hackathon for the first time.
What's next for Virtual Cooking
With more time, we will be able to fully implement AR into the app and touch up on the interface. After thorough testing, Virtual Cooking can be added to the app store so our product will reach the intended audience.
Built With
code.org
css
html
javascript
Try it out
studio.code.org | Virtual Cooking | A fun and easy mobile game that allows you to learn to cook and bake through augmented reality! | ['Yashvi A', 'Danielle Trinh', 'Mehrnaz Bastani'] | ['Recreation Track Winner'] | ['code.org', 'css', 'html', 'javascript'] | 6 |
10,283 | https://devpost.com/software/covid-wait | Screen showing nearby stores, and the crowdedness
Safety tips
Homepage
The need for social distancing to combat COVID-19 has made grocery shopping more difficult. Often, stores adhering to proper social distancing guidelines have long wait times, while other stores may neglect safe COVID-19 practices entirely. To stay as safe as possible, it's best to shop somewhere that respects social distancing and has smaller crowds, to reduce exposure risk. However, these locations can be difficult to determine and can often change. Our project, COVID Wait, takes on these problems, which are now part of our everyday lives. The basic premise of COVID Wait is to find the grocery stores nearest to clients, each with a score attached. A score higher than average is marked in red, indicating that the store is busier than average. As a result, those who are worried about the potential risks of venturing outside can feel more confident by looking at the busyness ratings of the stores near them, positively impacting their day-to-day lives.
Inspiration: We were inspired to make this project because we wanted to help people find a safer way to go about the essential task of grocery shopping by providing real time information on which grocery stores are comparatively safer than others.
What it does: Our project utilizes location based tracking in order to track nearby grocery stores. Through this, grocery stores are given a score. Scores which are higher than average are considered busy, informing the user to potentially visit later.
How we built it: A client sent requests to a server with a Google API key, which was used to format the request for the Google Maps API and retrieve nearby locations. The populartimes package for Python was used to retrieve a score for how busy the location was, and how busy it was likely to be in the next hour.
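The "busier than average" flag reduces to a one-pass comparison against the mean score of the current result set. The store names and scores below are placeholders, not real populartimes output.

```python
def flag_busy(stores):
    """Mark each store 'busy' (shown red in the app) if its crowdedness
    score is above the average of the nearby results."""
    avg = sum(s["score"] for s in stores) / len(stores)
    return [{**s, "busy": s["score"] > avg} for s in stores]

nearby = [
    {"name": "Green Grocer", "score": 35},
    {"name": "MegaMart", "score": 80},
    {"name": "Corner Foods", "score": 40},
]
for s in flag_busy(nearby):
    print(s["name"], "RED" if s["busy"] else "OK")
```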
Challenges we ran into:
Under Google Maps, businesses have a wide variety of tags. While searching for grocery stores, we found that many of them were tagged as supermarkets. We resolved this issue by searching for keywords, rather than tags.
What we learned: We learned that communication is vital for a team project, and that having clear goals along the way helps define how much of the project is finished and what needs to be done later.
What’s next: We hope in the future to add a distance feature and a scoring feature, primarily through up-voting and down-voting. We also hope to implement features that allow users to see what safety measures individual stores are taking to improve their safety scores.
Built With
flask
gatsby
google
html
javascript
jsx
python
react
Try it out
github.com | COVID Wait | Find the least crowded, and safest grocery stores (or other place types) in your area. Updated in realtime. | ['Sunny Zuo', 'Jason Ngo', 'Cheryl Li', 'Eric Lee', 'Cheryl Li'] | ['Honorable Mention: Best Beginner Project'] | ['flask', 'gatsby', 'google', 'html', 'javascript', 'jsx', 'python', 'react'] | 7 |
10,283 | https://devpost.com/software/resurrect | Home Page
Recreate Your Friend's Page
Chat with Dad (Family/Friend)
Chat with Celebrities
Chat with Kobe Bryant
Request a Celebrity
Conversation with Therapist
Forum Page
Local Therapists
Inspiration
As hundreds of thousands of people continue to lose family members due to the pandemic, our team has reflected on how we may one day say goodbye to those we love. In addition to seeing families grieving from their losses, we have seen many public figures pass away both before and during the pandemic, Kobe Bryant's untimely death being one example. We wanted to make use of AI and ML to help people cope with their losses, both of loved ones and of inspirational figures.
Thus, we took a unique approach to the idea of staying connected during quarantine.
Instead of creating an application that connects users conventionally, which already has numerous solutions, such as Instagram and Facebook, we opted to develop an app meant to connect people with those they cannot connect with anymore -- namely, those who have passed away. We wanted to find a way to allow friends, family, and even fans to artificially interact with these people and keep a memory of their experiences with those they have lost.
Thus, we developed Resurrect, an application that allows users to bring back those who have passed away through Generative Natural Language Processing Models, which replicate distinct personality and conversational traits.
What it does
Resurrect is a unique progressive web application that lets users artificially bring back loved ones and late celebrities. The core of our project is multiple generative NLP models, which allow users to build conversational agents with the personalities of anyone the user desires. Note that our demo shows the process, but because our model is incredibly large and because each bot needs to be trained for hours, the only way for us to deploy the generative models is to gain access to expensive, high-powered servers. Our model will work for anyone so long as we are given the required data, and we hope to automate our model by gaining access to these servers.
First, users can upload both a csv file of their downloaded conversations (by connecting a phone to a computer) with their late family members, as well as an audio clip of them speaking. The file is then uploaded to Google Cloud, and then we input it into our model. Our model takes the text conversation and begins training to replicate their speaking style. We found a research paper written by Google AI researchers on voice cloning, and while we were unable to implement this model with our generative models due to not having server power, we can still take the inputted audio sample and generate new samples. Once we finish processing their files, we notify them via email. We return a generative model embedded within a messaging interface, where they can have an interactive conversation with their late friend or family, allowing them to cope with their loss or to relive their memories.
We also developed a model to allow people to interact with late celebrities. Rather than using messaging transcripts, we web-scrape data from their twitter accounts, which often display their personality and resemble how they interact with fans. If a celebrity model has already been created, users can instantly interact with it, but if not, they can request one and we will quickly develop one for them.
Lastly, we have a page for those struggling with the death of their loved one or any other mental health issue. We created a generative model that acts as a counselor and therapist for mental health patients. This creates a supportive environment for those struggling with tough losses. We also have a forum where people can express themselves and meet new people who are facing the same grief. In addition, we created an interactive map using the Google Maps API, which allows users to find local therapists to receive additional support from.
How we built it
After numerous hours of wireframing, conceptualizing key features, and outlining tasks, we divided the challenge amongst ourselves: Ishaan developed the UI/UX, Adithya connected the Firebase backend and created the chat features, Ayaan developed our base generative models and researched the voice replicating model, and Viraaj trained the models with AI Fabric from UiPath and connected the backend.
We coded the entire app in 5 languages: HTML, CSS, Javascript, DockerFile/Makefile, and Python (Python3/iPython). We developed our chat interfaces and integrated our models using PythonAnywhere and the Flask Framework. We used PythonAnywhere as our backend. We used Javascript to create our website backend, and used Google Cloud to store our data. We hosted our website through Netlify and Github.
For this project, we focused on developing generative NLP models with Pytorch and Tensorflow. For all our models, we used the pre-trained HuggingAI generative model and fine-tuned it on our data for each circumstance through transfer learning. For our counselor bot, we used the pre-trained Bert NLP Model and fit it to our data. When we get a message from the user, we are able to convert it into a latent vector and thus generate the correct output message. For our voice cloning model, we followed the documentation in the Google AI Research paper, and we were able to recreate their results with modifications, but couldn't integrate due to server restrictions.
In order to collect data for these models, we developed two webscrapers. First, we created a basic web scraper to collect and format tweets based on a twitter handle. We then developed a web scraper to web-scrape counselchat.com, a forum for experienced and qualified counselors to answer questions, provide support, and post advice. For our adaptation model, we downloaded CSVs of text conversations from our iPhones and used them as data.
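Turning a downloaded conversation CSV into (context, response) pairs for fine-tuning might look like the sketch below. The column names and export format are assumptions, since phone exports vary; this is not the project's actual preprocessing code.

```python
import csv
import io

def to_training_pairs(csv_text, persona):
    """Pair each message addressed to `persona` with their reply, the
    (context, response) format a generative chatbot fine-tunes on."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    pairs = []
    for prev, cur in zip(rows, rows[1:]):
        if prev["sender"] != persona and cur["sender"] == persona:
            pairs.append((prev["text"], cur["text"]))
    return pairs

# Hypothetical export: one row per message, in chronological order.
sample = """sender,text
me,how was the game?
dad,We won! You should have seen the ninth inning.
me,wish I was there
dad,"Next time, kiddo."
"""
print(to_training_pairs(sample, "dad"))
```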
Challenges we ran into
The primary challenge that we ran into was developing our generative models. Since we had never built any generative NLP models, we weren't sure how to start. Luckily, we found great documentation on how to develop them. We ultimately built 4 generative algorithms that all have different tasks. Training these models was also a huge challenge, and it took a long time. While we were not able to deploy our models (they are too large for the free servers available to us), as long as users give us the CSVs or twitter handles, we can develop a bot for them.
Accomplishments we are proud of
We are incredibly proud of how our team found a distinctive yet viable solution to allowing people to cope with the loss of their friends and family. We are proud that we were able to develop some of our most advanced models so far. We are extremely proud of developing a solution that has never been previously considered or implemented in this setting.
What we learned
Our team found it incredibly fulfilling to use our Machine Learning knowledge in a way that could effectively assist people who have lost their friends and family. We are glad that we were able to develop a wide range of generative models to help a vast range of people. Seeing how we could use our software engineering skills to impact people’s daily lives was the highlight of our weekend.
From a software perspective, developing generative models was our main focus this weekend. We learned how to effectively build generative NLP models and web scrapers. We learned how to use great frameworks for ML such as Docker/Makefile and Flask. We grew our web development skills and polished our database skills.
What is next for Resurrect
Since our application is free and available to the web, our project can be scaled and implemented anywhere and with many other programs. With the possibility of a second wave for COVID, it is imperative that people have access to resources that can improve and stabilize their mental health and help them cope with the losses of their loved ones that are inevitable even beyond COVID.
In terms of our application, we would love to deploy our models on the web for automatic integration. Given that our current situation prevents us from buying a web server capable of running the models, we look forward to acquiring a web server that can process high-level computation, which would automate our services. We would also like to find new ways to collect datasets for our adaptation model. Lastly, we would like to focus on refining our voice cloning software and be able to integrate it with the rest of our models.
Our Name
We chose the name resurrect.space because our application attempts to resurrect a lost person and fill the space they left behind.
Built With
css
docker
firebase
flask
generative-models
google-cloud
google-maps
html
javascript
makefile
natural-language-processing
netlify
python
pytorch
tensorflow
uipath
xterm.js
Try it out
resurrect.space
github.com | Resurrect | Using Generative NLP Models to Reconnect with Lost Loved Ones | ['Adithya Peruvemba', 'Ishaan Bhandari', 'Ayaan Haque', 'Viraaj Reddi'] | ['Honorable Mention: Best Usage Of A Server', 'Best UiPath Automation Hack'] | ['css', 'docker', 'firebase', 'flask', 'generative-models', 'google-cloud', 'google-maps', 'html', 'javascript', 'makefile', 'natural-language-processing', 'netlify', 'python', 'pytorch', 'tensorflow', 'uipath', 'xterm.js'] | 8 |
10,283 | https://devpost.com/software/coronacator-qymiov | Inspiration
With the recent COVID-19 pandemic, people fear that going to the supermarket or just going on a walk will cause them to contract coronavirus. Because the actual risk of getting coronavirus differs greatly by state, and even by city, we wanted to build an app that can report out the risk at the current user’s location in real-time. Additionally, we wanted to create an alternative solution to contact tracing. Because privacy is a hot topic, especially in the tech industry, we looked to develop a pseudo-contract tracing method that allows the user to both keep his privacy and check locations he might have come in contact with a COVID-19 carrier.
What it does
The app keeps a log of all the places the user has been to recently. In those logs, each location is displayed with the current risk of that place and when the user was last there. Along with the map, the app also provides a real-time risk assessment of the user’s current location. By retrieving data from an existing COVID-19 API, an algorithm is able to convert the complicated data into a single, easy to interpret, risk score.
How we built it
We used Visual Studio Code and Android Studio to create a React Native Phone Application using JavaScript, Google Maps, and a custom COVID-19 API.
Challenges we ran into
Besides the difficulties that came along with using a new technology, React Native, we faced challenges linking the server-side code with the front-end React Native code. We needed to figure out a way to use data from our existing MongoDB database and integrate it with a custom algorithm that generates a single value between 0 and 10 as a COVID-19 risk factor for a specific location. Lastly, as we are in the midst of an epidemic, we were unable to bring our mobile devices to other locations besides our homes to test, so we had to use coordinates of cities to test other locations.
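A 0-to-10 score of the kind described can be sketched as a weighted sum of prevalence and recent growth, clamped to the scale. The weights, inputs, and function name below are illustrative assumptions, not Coronacator's actual algorithm.

```python
def risk_score(active_cases, population, growth_rate):
    """Map raw case data to a single 0-10 risk factor (illustrative)."""
    per_10k = active_cases / population * 10_000
    base = min(per_10k / 50 * 7, 7)           # prevalence: up to 7 points
    trend = min(max(growth_rate, 0) * 30, 3)  # recent growth: up to 3 points
    return round(min(base + trend, 10), 1)

print(risk_score(active_cases=2_000, population=500_000, growth_rate=0.05))  # 7.1
print(risk_score(active_cases=50, population=500_000, growth_rate=0.0))      # 0.1
```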
Accomplishments that we're proud of (What we learned)
Before this week, we had no experience developing mobile applications. Within a week, we had to quickly follow online courses to learn the ins and outs of React Native. During the development process, we also learned how to integrate a Node.js backend with the React Native front end. We're most proud of the fact that we tried our best to learn a completely new technology in a couple of days just so we could implement this project!
What's next for Coronacator
Some additional features we would add to our application are web scraping, heat maps, and global coverage.
We would scrape the web for news articles about locations that have high COVID-19 risks and check if you ever went to one of those locations.
We would put heat maps on the google maps which show COVID-19 severity by darkening colors.
We would get more COVID-19 data from countries other than the United States.
Built With
android-studio
express.js
node.js
react-native
Try it out
github.com | Coronacator | App which generates a COVID-19 risk factor based on location. | ['Andrew Lee', 'Jacob Chang'] | ['Honorable Mention: Judges Choice'] | ['android-studio', 'express.js', 'node.js', 'react-native'] | 9 |
10,283 | https://devpost.com/software/exarcise | Inspiration
Obesity afflicts over 13 million American children, according to the U.S. Centers for Disease Control and Prevention. The risk of childhood obesity also increases with age, with over 20% of all American 12 to 19 year olds being classified as obese. Furthermore, study after study has found that childhood obesity dramatically increases the risks of dangerous health consequences in adulthood.
COVID-19, and the lockdowns imposed to combat it, have made this complex and far reaching problem even worse!
Schools all around the U.S. have closed, which means no recess, no sports, and no gym class. Many communities have implemented strict social distancing guidelines which have closed outdoor spaces like playgrounds and parks. Unfortunately, these measures have accidentally encouraged kids to spend even more time just sitting around indoors. In fact, University of Missouri sociology professor Joseph Workman estimates that just six months of school closure could result in a 4.86% increase in childhood obesity.
I built the exARcise web app to help solve this problem, by giving kids a creative way to have fun exercising in their own homes.
What it does
ExARcise uses augmented reality to gamify exercise for kids, in order to encourage them to exercise. The app is a platform that facilitates the creation of interesting and fun real-world activities by providing easily embeddable augmented reality experiences.
The core of the app is the AR-enabled QR codes, which I power through echoAR. Each QR code has the information the app needs to pull up the AR player, which then displays an exercise tutorial/example video onto the marker. This combination tech stack allows users to use the QR codes to create their own activities, like scavenger hunts, dice, or bean bag toss, and display the information in the real world.
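The scan-then-dispatch flow can be sketched as a lookup from QR payload to viewer. The experience ids, URLs, and return strings below are hypothetical, not the real echoAR payload format, and the sketch is in Python rather than the app's React/JavaScript.

```python
# Illustrative registry of AR experiences keyed by the QR payload.
EXPERIENCES = {
    "jumping-jacks": {"type": "video", "url": "https://example.com/jj"},
    "pushup-dice":   {"type": "model", "url": "https://example.com/dice"},
}

def handle_scan(qr_payload):
    """Look up the scanned code and pick the matching viewer popup."""
    exp = EXPERIENCES.get(qr_payload)
    if exp is None:
        return "unknown-code"
    return f"open-{exp['type']}-viewer:{exp['url']}"

print(handle_scan("jumping-jacks"))  # open-video-viewer:https://example.com/jj
print(handle_scan("garbage"))        # unknown-code
```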
I have also used gamification to encourage exercise by awarding badges for doing specific exercises, and having the user set daily exercise goals.
How I built it
Since exARcise is meant to be accessed via a mobile device, responsiveness is a key design feature. Therefore, I chose to build a React app, styled in Material UI so the website is accessible and easy to use on all sized devices. It’s also hosted on Google Firebase Hosting, to guarantee high availability and to automatically scale server resources up and down based on demand.
For the AR portion of the app, I used echoAR, because it allows me to easily manage and track my AR experiences, and change them in the future. To use echoAR’s built-in rendering system, our app has a two-part embedded system for reading and viewing the experiences. The first step is scanning the QR code using the reader built into the app. This triggers a popup with the appropriate viewer for the experience the user is looking to have. From there, echoAR allows me to embed and play the video.
Challenges I ran into
One major challenge with the app that I came across was in encapsulating echoAR's full rendering process into the app. Traditionally, the user would scan an echoAR QR code outside of any app, and the QR code would take them to the web viewer for the media. While that is great for a poster or flyer that is isolated from any specific technology, I specifically wanted people to view the exercises in the app so I could reward them for their exercise. The result was a slightly roundabout process on our end, but the end result for the user is intuitive and simple, and it prevents them from ever needing to leave the app.
Accomplishments that I'm proud of
The main thing I am proud of is getting the AR to work so well! I'm really proud of the simple process I created for the user.
What's next for exARcise
In the future, I want to expand exARcise in the following ways:
Expand out the library of exercise videos, exercise badges, etc.
Enable the user to create personalized workout routines and sets
Add mindfulness and wellbeing activities, like yoga poses
Built With
echo-ar
firebase
react
Try it out
exarcisedemo.web.app
github.com | exARcise | Augmented Reality for real fitness fun! | ['Nathan Dimmer'] | ['3rd Place', 'Honorable Mention: Best Use Of EchoAR'] | ['echo-ar', 'firebase', 'react'] | 10 |
10,283 | https://devpost.com/software/covid-caution | We were inspired by the world’s need for an efficient contact tracing app that can benefit everyone, from the healthy to the most vulnerable. By creating a cloud based system using Google Firebase, we were able to store data from QR code scans and apply those data to a function that returned a resulting warning alert string based on the severity of the encounter (based on time, location, and day). We faced several roadblocks in building our project. All of us were unfamiliar with the Android Studio IDE and development flow, so we learned to use the IDE as we built our app. We also ditched a traditional backend in favor of Firebase integration, which required us to port all of our backend code to the client. Overall, it was a great experience, and we look forward to using our new skills to publish our new app, or a similar one, to the Google Play Store in the near future.
Built With
android-studio
Try it out
github.com | COVID Caution | A user-friendly app that records the places the user visited to quickly IDENTIFY TRANSMISSION PATH, NOTIFY POTENTIALLY INFECTED USERS, AND LIMIT VIRUS SPREAD by utilizing QR code technology. | ['Xiangyu Chen'] | [] | ['android-studio'] | 11 |
10,283 | https://devpost.com/software/covid-19-contact-tracker-tl4zor | This is the COVID-19 Contact Tracker. COVID-19 is an extremely dangerous and unique disease for a few reasons - but one of the largest ones is that even asymptomatic carriers can spread the virus. Over 12 million people worldwide have already been infected and numbers in the US are still climbing each day. Our app allows for people to connect (like through a social media app), but serves a much larger purpose. People can connect with their close friends that they have seen recently and notify them that they have tested positive. Even further, with a login system, our app allows for people to enter their location on a given day and see if anyone there has tested positive. This allows for people to be more sure of whether they need to get a test or not, especially considering that many Americans do not have healthcare and that these tests can be expensive for them. People can be more sure of when exactly they need to get a test and can get them sooner, thus preventing the virus’ spread one person at a time. In short, our app gets people to test faster with more assurance and control over where they’ve gone. In the home section are updates about COVID-19. Then, in the Contacts section is where you can add close contacts. Notifications shows you if you've been near anyone sick. Finally, the COVID Updates section lets you input your location to see if anyone got sick there. Our video software didn't work, which was another major last-minute setback, but we pulled through and got it to work. Although we couldn't connect to our database on FireBase, we had it set up, so that is one of our future plans. This would allow users to see if they came into contact with ANYONE who could've gotten the virus. In addition, we want o improve the looks and functionality in the future.
Built With
dart
firebase
kotlin
objective-c
swift
Try it out
github.com | COVID-19 Contact Tracker | COVID-19 Contact Tracker | ['Ankit Nakhawa', 'Anthony Ahn', 'Akshaya Ravi'] | [] | ['dart', 'firebase', 'kotlin', 'objective-c', 'swift'] | 12 |
10,283 | https://devpost.com/software/live-covid-19-tracker-for-the-usa | Arkansas Example
Colorado Example
Alaska Example
Inspiration
We were inspired to make this tracker due to the sheer number of COVID-19 cases throughout the USA, and so we wanted to make a super simple and easy method to find basic data for states across the USA.
What it does
It opens a new window that has a dropdown menu of each state, selecting one will provide basic COVID-19 Statistics for the state.
How we built it
We built it using Eclipse as our main Java IDE, together with repl.it to collaborate with each other.
Challenges we ran into
We ran into many challenges along the way, but we were able to overcome each one with each other's help and the internet. We ran into many compiling issues, but we were able to solve them through extensive debugging. We also came across the problem of how to easily retrieve and display data without using the default text box in our IDEs, which we solved by creating a new window specifically for the tracker. There were many other challenges, but with time and effort we overcame each one.
Accomplishments that we're proud of
Most of the work that we did was done on the fly. We weren't aware of how to accomplish many of the tasks in the project but, thanks to the internet, we were able to quickly learn how to utilize many interfaces and implement them into our project.
What we learned
We learned several things:
How to retrieve data from a website
How to use Java to create and open a custom window
How to use various data types to accomplish your goals
Many other small but potentially important things in Java that could be utilized in future projects.
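The data retrieval the team describes maps onto covidtracking.com's state endpoints (e.g. /v1/states/co/current.json). The sketch below, in Python rather than the project's Java, formats a hard-coded sample record using that API's field names instead of making a live request.

```python
def summarize(state_record):
    """Render one covidtracking.com state record as a display line."""
    return "{state}: {positive:,} cases, {death:,} deaths, {totalTestResults:,} tests".format(
        **state_record
    )

# Sample shaped like the API's state JSON (values are illustrative).
sample = {"state": "CO", "positive": 36913, "death": 1752, "totalTestResults": 416662}
print(summarize(sample))  # CO: 36,913 cases, 1,752 deaths, 416,662 tests
```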
What's next for Live COVID-19 Tracker for the USA
In its current state, our project is pretty basic and does just enough to accomplish our goal, and we're aware of that. It only provides basic Coronavirus data for each state, and the UI isn't as good as it could be. In the future we can add much more data for each state, as well as a more modern and expansive UI!
Built With
covidtracking.com
eclipse
java
repl.it
Try it out
repl.it | Live COVID-19 Tracker for the USA | Tracking COVID-19 Statistics for each state in the USA with Java. | ['Rohit Basu', 'Zaeem Khan'] | [] | ['covidtracking.com', 'eclipse', 'java', 'repl.it'] | 13 |
10,283 | https://devpost.com/software/moderated-trade-platform | The homepage to the site.
Inspiration
This idea came from a frustration that I had all season for Science Olympiad. A key part of practicing for Science Olympiad is acquiring test sets from other teams. Most of this test trading happens in DMs on platforms like Discord or Reddit, or through a captain's server, which can be hard to find for some teams. When I was test trading on Discord, it wasn't a rare occurrence for someone to scam me out of tests we had agreed upon, or to keep persistently asking for an illegal test that wasn't meant to be traded. This website eliminates the uncertainty in test trading.
What it does
This site allows you to talk with others in a moderated environment to exchange notes, tests, and other academic resources. It provides an easy way to see what tests other people have and what they want, without having lots of unnecessary conversation.
How I built it
We used HTML and CSS to create a website using repl.it to collaborate. Since both of us are beginners, we used resources like w3schools.com to help us with the code.
Challenges I ran into
We struggled to make the website interactive with JS. The website as an app doesn't function as of now because we didn't have enough time to figure how to implement it. We also had issues with CSS formatting and making the navigation bar work.
Accomplishments that I'm proud of
This was my first site that I created with HTML/CSS that had more than one page! :D
I only had basic knowledge of HTML coming into this, from one workshop two years ago, and had never really explored coding much. This was also one of my partner's first experiences with this language.
What I learned
I learned so much throughout this process! Both of us attended the HTML/CSS workshop and it was a great refresher for me on HTML. It also pointed us to w3schools.com where I spent more time reading the articles than working on our project (oops). I learned how to create different files in the project which led to a website with multiple pages.
What's next for Moderated Trade Platform
The next step for Moderated Trade Platform is using JS to make the website interactive, so people can actually use the site. Right now, the sample "My Profile" and "Traders" content is written into the code and isn't from user input.
Built With
html/css
Try it out
Hackathon-Project-Academic-Trading-Platform.kyky39.repl.co
github.com | Moderated Trade Platform | This is a platform where students can trade academic resources without worry of scams, harassment, or illegal trades. | ['Kaylee Yang', 'Anthony Ahn'] | [] | ['html/css'] | 14 |
10,283 | https://devpost.com/software/corona-protective-gloves-and-ppes | Inspiration: In the current situation, the number of COVID-19 patients is growing exponentially, and the WHO has declared the COVID-19 outbreak a global pandemic. Our medical resources are shrinking day by day, and doctors and medical staff are also being infected with the coronavirus because they are in direct contact with COVID-19 patients. The shortage of doctors and medical staff in this critical situation is a crucial problem, so protecting them is vital for our society. Apart from that, the coronavirus is a very infectious disease, easily spread through touch. Considering these aspects, we designed a glove that helps protect doctors and medical staff from the coronavirus without straining our resources.
What it does :
As per the WHO, alcohol is a chemical that can destroy the novel coronavirus by breaking its outer lipid coat. Based on this concept, the present innovation introduces a special type of glove that can wet its contact surface with alcohol every 20 minutes, with the alcohol supplied automatically from an embedded container.
The alcohol container (60 ml), a lightweight specially designed pump, and a controlling unit are connected so that alcohol is ejected at regular intervals. The alcohol comes from the container, spreads over the glove's contact surface through the pipeline, and disinfects the glove.
How I built it:
The model introduces self-disinfecting intelligent gloves. The gloves are integrated with a self-disinfection module comprising a small alcohol container and a mini lightweight pump. Alcohol can destroy the novel coronavirus by breaking its outer lipid coat, and it can destroy other viruses and bacteria as well. This special glove ejects alcohol from the pores of a thin pipe grid embedded in the glove surface at 20-minute intervals; the interval can be changed as required. The alcohol container sits at the upper portion of the glove and holds 60 ml, with 5 ml ejected per interval. Alcohol can be refilled at 8-hour intervals for continuous use. The self-disinfectant module is also being introduced into personal protective equipment (PPE). For PPE, the module carries two 1.2-litre alcohol pouches, one for the upper PPE and one for the lower PPE. The pored pipe grid is distributed across the PPE surface with proper integration. Every 20 minutes, 30 ml of alcohol is ejected onto the PPE surface to disinfect its exterior. The pouches need to be replaced at 8-hour intervals for continuous use of the Self Disinfectant Module.
A manual on/off switch is incorporated as required. A separate 60 ml module is introduced for frequently touched surfaces such as door and window knobs, stair railings, ATM keyboards, lift switches, and other switches. For such surfaces, a transparent flexible plastic cover is encapsulated with the Self Disinfectant Module; the pipeline is connected to the plastic, and the container can hold 60 ml of alcohol. At 20-minute intervals, the alcohol is ejected automatically and washes the transparent plastic. The transparent plastic insulates the electronic switches of ATMs, lifts, and other devices from being drenched in alcohol. This invention claims its importance based on the present global scenario, and it is well suited for introduction as a new industry sector.
Challenges I ran into :
I had to design a special pumping unit because the alcohol flow must be one-directional and leak-proof.
What's next for "CORONA PROTECTIVE GLOVES AND PPEs"
After these gloves, I am designing a complete PPE suit with an automatic alcohol sanitizer module.
Built With
alcohol
arduino
c++
gloves
lipo
pump | "CORONA PROTECTIVE GLOVES AND PPEs" | As per WHO Alcohol can able to kill Novel Corona virus. By this concept We made a special gloves that can wet its contact surface by alcohol in every 20 min automatically.we are also design other PPEs | ['Sankha Dey'] | [] | ['alcohol', 'arduino', 'c++', 'gloves', 'lipo', 'pump'] | 15 |
10,283 | https://devpost.com/software/drone_wars | #DRONE WARS
Inspiration
This was inspired by Robot Wars, in the sense that it involves machine-like creations fighting in an arena. The original concept was to produce an experience similar to a Robot Wars battle.
Designing
I built the entire game from scratch, using SolidWorks to CAD my drone and Unity to program the game. This was all done in under 24 hours.
Problems
There were many errors, some of which were hard to fix (such as the WebGL size), but overall the project went well.
Whats next
I intend to build this into a full game, including a multiplayer mode. The graphics are currently not great, but they will be improved in the next update.
code
All the code was written by me and is available on GitHub along with the assets; feel free to go through it.
PLEASE PLAY THE GAME!
I would love for you to go to either of the links and play
Built With
c#
solidworks
unity
Try it out
github.com
sharemygame.com
simmer.io | DRONE_WARS | This is a new 3D drone based combat game! | ['Vedaangh Rungta'] | [] | ['c#', 'solidworks', 'unity'] | 16 |
10,283 | https://devpost.com/software/covid-watch-ag9t2i | Inspiration
a
What it does
a
How we built it
a
Challenges we ran into
a
Accomplishments that we're proud of
a
What we learned
a
What's next for a
a
Built With
appy-pie | a | a | ['Mashrur Chowdhury', 'Philip Choi', 'Moe S'] | [] | ['appy-pie'] | 17 |
10,283 | https://devpost.com/software/opificina | The landing page
Opificina
This is a Progressive Web App (PWA) where shops can easily display their information in a uniform manner for users to access. We weren't able to build it fully within 24 hours, so we've used mockups in the presentation, but given more time we'll be able to make a useful final product. You can check out what we've made so far at
https://opificina.herokuapp.com
.
Since we conceived of this project yesterday, we faced a number of problems trying to make it. As both members in our team use different platforms to code, it took a while to get the tools set up and for us to collaborate properly, but we soon split up the work and worked on it individually. We made use of git features like branches to help with this. We've done our best in the short time we had, so we hope you like our idea!
Built With
css
html5
javascript
postgresql
rust
Try it out
github.com | Opificina | A common platform that connects users and shops | ['Nitin Ravi'] | [] | ['css', 'html5', 'javascript', 'postgresql', 'rust'] | 18 |
10,283 | https://devpost.com/software/notable-iv5lsp | So sorry for the watermark!
Inspiration
Many studies have shown that taking notes by hand increases material retention. But it also increases something else--the chance of losing your work. What if you could have the learning benefits of handwriting notes but still be able to keep a copy as a Google or Word document and Ctrl-F through it later? As two students who spent the past year studying machine learning, we knew we had to create our own solution.
What it does
Scribr is a deep learning model that lets you input pictures of your notes and have them transcribed into a text document of your choice.
How I built it
We built our app with a 4-tier architecture integrated into both the cloud and the browser. We aggregated data from the IAM Handwriting Database, the Bentham Manuscripts Collection, the RIMES Letter Database, and the Saint Gall Database and trained our model on Google Cloud Platform’s Cloud ML Engine. We then served our model with Docker and Flask in an easy to use web application.
Our model's training can be divided into three steps. First, our preprocessed images are fed into a five-layer convolutional neural network to extract features. Next, the output feature map is propagated through a Long Short-Term Memory network. Finally, we use CTC both to calculate the loss for the RMSProp optimizer and to decode the output into our final text.
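The CTC decoding rule mentioned here (merge repeated frame predictions, then drop blanks) can be illustrated with a minimal greedy decoder. This is only a sketch, not our model code; the character set and the choice of index 0 as the CTC blank are assumptions:

```python
def ctc_greedy_decode(frame_argmax, alphabet="-abcdefghijklmnopqrstuvwxyz", blank=0):
    """Greedy CTC decoding: merge repeated frame predictions, then remove blanks."""
    out = []
    prev = None
    for idx in frame_argmax:
        if idx != prev and idx != blank:
            out.append(alphabet[idx])
        prev = idx
    return "".join(out)

# Frame-wise argmax over the LSTM output for the word "cat"
# (blank, c, c, blank, a, t, t, blank):
print(ctc_greedy_decode([0, 3, 3, 0, 1, 20, 20, 0]))  # -> cat
```

A real decoder would typically run beam search over the full per-frame probability distribution, but the collapse-then-drop-blanks rule is the same.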
Challenges I ran into
After bricking our computers trying to download all the data, we decided to move our data aggregation and model training to Google Cloud Platform’s Cloud ML Engine. This allowed us much more time for optimizing our model and creating our Flask interface. Also, we spent much more time than we expected preprocessing our data.
Accomplishments that I'm proud of
Figuring out how to integrate Google Cloud Platform into our workflow was a lifesaver. Our app would not be where it is without it.
What I learned
We learned a ton about Convolutional Neural Networks and Long Short Term Memory Networks while building our project, as well as integrating machine learning with flask to create an easy to use and nice looking ui instead of a command line.
What's next for Scribr
There's still room to improve our model through more data and better architecture, which is going to be vital going forward. We also have plenty of work to do in making our quick hackathon web app into a full-fledged application/website.
Try it out
github.com | Notable | Transcribe handwritten notes to text documents on the web with a novel deep learning OCR architecture | ['Sebastian Schott'] | [] | [] | 19 |
10,283 | https://devpost.com/software/masky-k2luf7 | Inspiration
We need to wear masks to stop the spread of COVID-19, as a first layer of defense. However, many forget to wear them or do not pay heed to guidance. Therefore, Masky aims to use awareness and peer pressure to convince people to wear a mask. No one wants to be the odd man out.
What it does
Masky is a tool that highlights faces of people that are not wearing masks. These faces can be optionally blurred.
How I built it
Masky was built with a variety of technologies, consisting mainly of a frontend and a backend. I wanted to create a web interface so that it would be usable on any device.
The model that Masky uses is YOLOv5s, the smallest and quickest version of YOLOv5, a recently released state-of-the-art object detection model. Currently, requests to the server happen every second, and CPU-only processing takes approximately 100 ms.
The model was trained in Google Colab.
Challenges I ran into
There were quite a few issues in the training process, making it take longer than expected. Additionally, the regular inference script was not designed to take an input of a base64 string and was not designed to work without command line parameters. Those changes took considerable work.
Accomplishments that I'm proud of
What I learned
How to create a full fledged application in a short period of time.
What's next for Masky
Integration with internal security systems, on-device inference, accessible API.
Built With
opencv
python
pytorch
vue
Try it out
github.com | Masky | App that shows faces of people not wearing masks, to be projected for all to see. | ['Vijay Daita'] | [] | ['opencv', 'python', 'pytorch', 'vue'] | 20 |
10,283 | https://devpost.com/software/trainer-683bqo | App Logo
Home Page
Inspiration
With the recent pandemic, many gyms and fitness centers have closed down leaving many without a way to continue their active lifestyle. Especially with the lack of gym equipment at home, people are even less motivated to adapt to a new routine.
What it does
trAIner makes this new routine much easier to adapt to by using machine learning and AI to detect the specific workout routine that would best suit you based on your ideal weight and current fitness level. During a workout, trAIner uses real-time feedback to discern whether a specific exercise is liked or disliked by a user, which then helps in producing the next workout routine for the user.
While there are currently many AI fitness apps on the market right now, we believe we stand out since:
We give users workout routines based on activity type such as "HIIT" or "Cardio", not a body part.
Our real-time feedback system allows users to create more curated workout routines for themselves over time.
How I built it
trAIner was built primarily with ASP.NET Core. The backend has classes for users and for the structure of an exercise, with in-memory databases for exercises that are serialized and saved per user. The most technologically interesting aspect of this project is our learning system. This module keeps a dictionary mapping every exercise type to a weight (a double), along with a list of past workoutTypes that acts as memory. While the user works through our procedurally generated exercises, they can give feedback on whether they liked each one. That feedback is sent to the backend through real-time SignalR websockets; the double associated with that workoutType is then adjusted by an exponentially increasing amount based on how many times the type has been judged, and values such as times, rep amounts, and intensity are increased or decreased to better fit the user's preferences. When the next exercise is generated, a staggered random draw over those doubles makes disliked activities less likely and the others more likely. In the end, trAIner offers a personalized workout every time the user logs in, and its accuracy improves with each piece of feedback.
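The staggered random draw described above boils down to weighted sampling over the per-workoutType doubles. The project itself is C#; this is an illustrative Python sketch, and the weights shown are made up:

```python
import random

def pick_workout(weights, rng=random.random):
    """Staggered random: each workoutType's chance is proportional to its weight."""
    total = sum(weights.values())
    r = rng() * total
    cumulative = 0.0
    for workout_type, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return workout_type
    return workout_type  # guard against floating-point rounding at the top end

# After some feedback: HIIT was liked repeatedly, Strength disliked.
weights = {"HIIT": 3.0, "Cardio": 1.0, "Strength": 0.5}
print(pick_workout(weights))  # most often "HIIT", rarely "Strength"
```

Raising a workoutType's weight after a like (and lowering it after a dislike) is all it takes to make its draws visibly more or less frequent over time.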
Challenges I ran into
While building trAIner was an overall good experience, there were some significant problems I ran into. The first was setting up SignalR, which gave me many 404 responses before I noticed a small typo. The bigger challenge was the very algorithmically demanding exercise-generation function. All the small details I had to incorporate to give the best user experience were a challenging hassle, but I think it paid off.
Accomplishments that I'm proud of
Personally, I am really proud of how well the staggered random system worked in the end. It really blew me away that I could visually see certain workoutTypes become less or more likely.
What I learned
I learned a big lesson about the patience required to correctly make an efficient algorithm as well as lots of useful information about websockets and how they work.
What's next for trAIner
In the future, we plan to refine some of the UI of the trAIner and make it a real mobile app available on all devices using gonative.io.
Built With
asp.net
c#
css
html
javascript
Try it out
github.com | trAIner | No equipment. No gym. No problem. | ['Braden Everson', 'Sanjit Juneja', 'Advik Kulkarni'] | [] | ['asp.net', 'c#', 'css', 'html', 'javascript'] | 21 |
10,283 | https://devpost.com/software/no-touch-disinfectant-wipes-dispenser-x0cmdy | The problem
Hey, I am Tanya Rustogi, and I got the idea for the wipes dispenser when thinking about how COVID-19 is affecting developing countries. My first thought was that to open a wipes container like Lysol, you need to touch at least two surfaces, which can spread coronavirus. Additionally, having a container of wipes per person in an office or school is not realistic due to the shortage of disinfecting wipes. Then came the idea of an affordable, easy disinfecting wipes dispenser that can be used everywhere, from classrooms to day cares to shopping carts.
The solution
When an object such as your hand comes within ten centimeters of the sensor, the motor, which is connected to a rod with rolled-up wipes on it, starts moving. The rotation of the motor moves the roll of wipes, causing them to unroll and make their way out of the container.
How to build
Each of the pins except ground and VCC on the motor driver is connected to a pin on the Arduino, as defined in the code. The trigger and echo pins on the sensor are also connected to the Arduino and defined in the code. The ground and VCC of both the motor and the sensor are connected to the ground and VCC of the Arduino, which is connected to the power supply. The sensor detects distance by measuring how long it takes a sound wave to come back. The code on the Arduino checks whether the sensor detects something within 10 centimeters; if so, it runs the stepper function, which causes the motor to run. The container is made from a Lysol container, hopefully making the dispenser cheaper for developing countries. The container has two holes, one for the wipes to come out from and one for the motor. The motor is attached to the container with tape, and the rod connects to the motor and is held on the other side through the hole already provided in the Lysol container. Now, when the motor rotates, the rod rotates as well.
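As a sketch of the arithmetic behind the distance check (the actual sketch runs as C++ on the Arduino; the 10 cm threshold follows the description above, and the speed-of-sound constant is standard):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s expressed in cm per microsecond

def echo_to_cm(duration_us):
    """HC-SR04-style conversion: the echo time covers the round trip, so halve it."""
    return duration_us * SPEED_OF_SOUND_CM_PER_US / 2

def should_dispense(duration_us, threshold_cm=10):
    """Run the stepper only when something is within the threshold distance."""
    return echo_to_cm(duration_us) <= threshold_cm

print(should_dispense(500))   # True: ~8.6 cm away
print(should_dispense(2000))  # False: ~34.3 cm away
```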
What’s next
This is just a prototype, with more material, the final product would look cleaner with a box covering the circuits and the pcbs and circuits connected to the container.
What did I learn
I think the most important thing I learned through this experience is time-management due to the time constraints of two days to make the whole thing as well as perseverance to be able to try again despite how many times the circuit and the code did not work as it was supposed to.
Built With
arduino
stepper-motor
Try it out
github.com | No-Touch Disinfectant Wipes Dispenser | A prototype of a no-touch dispenser that is easy and affordable to make and could be used from cleaning tables to disinfecting carts. | [] | [] | ['arduino', 'stepper-motor'] | 22 |
10,283 | https://devpost.com/software/backdrop | Logo
What's next for Backdrop
Add audio support for video capture
Built With
css
html5
javascript
ml5.js
p5.js
unet
Try it out
backdrop.vercel.app
github.com | Backdrop | Change the background of a video | [] | [] | ['css', 'html5', 'javascript', 'ml5.js', 'p5.js', 'unet'] | 23 |
10,283 | https://devpost.com/software/reduced-gy54du | Reduced
🔗 Reduced is a powerful modern custom URL shortener with a minimalistic design made with the MEVN stack by Sumit Kolhe
✨ Features
:heart: Lightweight and minimalistic design : Modern minimalistic design that is a treat for the eyes.
:zap: Easy to use : Simple and intuitive design that's easy to use.
:rainbow: Support for Custom Aliases : Supports user-defined custom aliases as well as randomly generated ones.
:iphone: QR Code Support : Generates a QR code for shortened links instantly.
:card_file_box: Store previous links : Stores previously shortened links in localStorage of the browser for easy access.
:rocket: Performance : Reduced is built using the MEVN stack to ensure lightning-fast speeds and great performance.
:pencil2: Public API : Free public API that can be used to shorten links quickly or implemented on any other frontend.
:wastebasket: Auto link deletion : All shortened links are automatically deleted after 10 days of creation.
:lock: Secure : We don't collect any data about you or store logs on our server.
🖥️ Demo
https://reduced.me
🧰 Built with
VueJS : Frontend framework
Vuetify : VueJS framework
Express : Backend server
Nodejs : JavaScript runtime engine
MongoDB : Data storage
:construction_worker: SETUP
Clone the repository or download the latest release to a folder of choice.
$ git clone https://github.com/sumitkolhe/Reduced
:building_construction: Backend Setup
Install the dependencies for the backend
$ cd Reduced
$ npm install
Rename the .sample-env file to .env and fill in all the required fields
To start the app
$ cd Reduced
$ npm run dev
NOTE : Running only the backend server will use previously generated static files (from server -> static) for the frontend.
:art: Frontend Setup
Make sure you have Vue-CLI installed, if not
$ npm install -g @vue/cli
Install the dependencies for the frontend
$ cd client
$ npm install
Rename the .sample-env file to .env and fill in the required fields
To run frontend only
$ cd client
$ npm run dev
To build frontend
$ cd client
$ npm run build
NOTE : All frontend builds will automatically be placed in server -> static. You can edit this in client -> vue.config.js.
:pencil: REST API Documentation
Reduced comes with a fully functional API that can be used to create short links with support for custom Aliases. As of now no authentication is required for using the API.
The API resides in Reduced -> server and can be modified as per one's use case.
:alembic: Features of the REST API
Allow creating short URLs with or without custom aliases.
GET link statistics for shortened URLs. This includes -
Total number of clicks
Date / Day of Creation
Time of Creation
Original link
Time / Date of link expiration
The API comes with Rate-Limiting by default; the settings can be changed as per one's requirements. CORS is also enabled by default.
:triangular_flag_on_post: REST API
The REST API requests and endpoints are described below.
# Create a short link
POST /api/shorten/
Creating a short URL without a custom alias
curl --header "Content-Type: application/json" \
--request POST \
--data '{"longurl":"google.com"}' \
http://localhost:80/api/shorten
Response Object
{"clicks":0,
"stats":[],
"_id":"5ef749408887c725bc489620",
"alias":"ejrf",
"shorturl":"https://reduced.me/ejrf",
"longurl":"http://google.com",
"created":"2020-06-27T13:27:28.374Z",
"expire":"2020-07-07T13:27:28.374Z",
"__v":0}
Creating a short URL with a custom alias
curl --header "Content-Type: application/json" \
--request POST \
--data '{"alias":"sample","longurl":"google.com"}' \
http://localhost:80/api/shorten
Response Object
{"clicks":0,
"stats":[],
"_id":"5ef74ae98887c725bc489621",
"alias":"sample",
"shorturl":"https://reduced.me/sample",
"longurl":"http://google.com",
"created":"2020-06-27T13:34:33.903Z",
"expire":"2020-07-07T13:34:33.903Z",
"__v":0}
Failed Requests
Alias already exists : The API throws an error if a custom alias is provided but already exists in the database.
curl --header "Content-Type: application/json" \
--request POST \
--data '{"alias":"sample","longurl":"google.com"}' \
http://localhost:80/api/shorten
Response Object
{"status":"AAE",
"message":"Alias already exists"}
Invalid link provided : The API throws an error if an invalid link is supplied.
curl --header "Content-Type: application/json" \
--request POST \
--data '{"longurl":"google"}' \
http://localhost:80/api/shorten
Response Object
{"status":"IURL",
"message":"Invalid URL"}
# Get Link Statistics
POST /api/check/
curl --header "Content-Type: application/json" \
--request POST \
--data '{"linktocheck":"reduced.me/sample"}' \
http://localhost:80/api/check
Response Object
{"clicks":0,
"stats":[],
"_id":"5ef74ae98887c725bc489621",
"alias":"sample",
"shorturl":"https://reduced.me/sample",
"longurl":"http://google.com",
"created":"2020-06-27T13:34:33.903Z",
"expire":"2020-07-07T13:34:33.903Z",
"__v":0}
Failed Requests
Link does not exist : When the supplied link to check does not exist in the database.
curl --header "Content-Type: application/json" \
--request POST \
--data '{"linktocheck":"reduced.me/sampleinvalid"}' \
http://localhost:80/api/check
Response Object
{"message":"Link does not exist"}
Invalid Link : When the supplied link is invalid (is not an actual link).
curl --header "Content-Type: application/json" \
--request POST \
--data '{"linktocheck":"invalidlink"}' \
http://localhost:80/api/check
Response Object
{"message":"Invalid Link"}
# Check Server Status
GET /api/status/
curl --header "Content-Type: application/json" \
--request GET \
http://localhost:80/api/status
{"status": "OK"}
✍️ Authors
Sumit Kolhe -
📜 License
This project is licensed under the MIT License. See the LICENSE file for details.
Built With
express.js
html
javascript
mongodb
node.js
vue
Try it out
reduced.me
github.com | Reduced | Reduced is a powerful modern custom URL shortener with a minimalistic design and a public API made with MEVN stack | ['Sumit Kolhe'] | [] | ['express.js', 'html', 'javascript', 'mongodb', 'node.js', 'vue'] | 24 |
10,283 | https://devpost.com/software/melanomai-gbifmq | Home Page
Thank You Page
Inspiration
Melanoma is a deadly skin cancer which affects all ages. It starts off as a cancerous growth but can spread to other parts of the body as well.
The worst part is that Melanoma has a 25 - 30 percent misdiagnosis rate meaning 1 in 4 people have been misdiagnosed with the cancer.
Considering how dangerous and scary cancer can be a 25% misdiagnosis rate is too high.
1 in 4 people should not have to suffer due to an accident that can be avoided.
MelanomAI works to fix this problem by making Melanoma diagnosis easy, fast, and above all accurate.
What it does
MelanomAI analyzes an image of suspected melanoma to detect whether or not it is melanoma. Once the analysis is complete, the user gets an email confirming their results, after which they can seek out the proper help based on the diagnosis.
Diagnosing Melanoma just needs 3 steps. Upload an image, enter your email, and click submit.
MelanomAI is faster than modern-day diagnosis and is also more accurate. It brings the 1 in 4 number down to about 3 in 20, an almost 70% improvement.
How We built it
We used pytorch to build the AI model and train it.
We used bootstrap, django, css, and html to create the website.
(Note: the AI and website work on their own; we have yet to integrate the two.)
Challenges I ran into
One of our members lost half of their files 3 hours before the hackathon and had to recode all of them.
It was a traumatic experience and was a good learning opportunity on how to store files and use github.
Accomplishments that I'm proud of
We are proud of our accuracy. The AI turned out to be more accurate than professionals, which was a big surprise, and we are glad that we could make it so accurate.
We are also proud of our website, as this was our first time making an AI and a UI to go along with it in a hackathon.
We wish we could have integrated the two fully. Given 1-2 more hours, it should have been possible to have a completely ready product.
What I learned
How to work with AI(Pytorch)
How to use django.
What's next for MelanomAI
Integrating the AI into the website so that it actually predicts based on the image. The AI can do that right now, but it isn't integrated with the website, so the demo was based on a prewritten message and a random image submission. The AI does work, however; it's just a matter of integration.
We want to host the website to make it available to all!
Try it out
github.com | MelanomAI | Detect Melanoma with 85% Accuracy | ['Anish Karthik', 'Gaurish Lakhanpal'] | [] | [] | 25 |
10,283 | https://devpost.com/software/carbon-planet | Inspiration
In 2018, at least 6,677 million metric tons of CO2 were released, contributing to the greenhouse effect and climate change. The transportation sector generates the largest share of greenhouse gas emissions, at 28%.
What it does
Our website has a preset measuring system that will measure whether the user’s inputted data is higher or lower than the benchmark and by how much.
Our benchmark is 7,000 steps, and for each additional 1,000 steps above the benchmark the user is rewarded 100 coins, which they can spend on items in the "store". The items in the store currently include houses, apartments, and hotels, which directly increase the number of people (the population) on the user's planet.
The more people you have on your planet the higher your rankings are on the leaderboard. The leaderboard will show the user’s ranking and the top 10 ranked users. Represent your country and earn a spot on the leaderboard!
If the user's steps are fewer than 7,000, every 1,000 steps missed results in a 5% increase in the carbon emission rate. Why does that matter? Because it is directly correlated with the population of your planet. For example, if the carbon emission rate of the user's planet reaches 50%, then 20% of the people on the user's planet will unfortunately die, and so on.
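The reward and penalty rules above can be sketched directly. The project itself is written in Java; this Python sketch only mirrors the numbers stated in the description (100 coins per 1,000 steps over the 7,000 benchmark, +5% emission rate per 1,000 steps missed), and rounding down to whole thousands is an assumption:

```python
BENCHMARK = 7000  # daily step benchmark from the description

def daily_update(steps, emission_rate=0):
    """Return (coins_earned, new_emission_rate) for one day's step count."""
    if steps >= BENCHMARK:
        coins = (steps - BENCHMARK) // 1000 * 100  # 100 coins per full 1,000 over
    else:
        coins = 0
        emission_rate += (BENCHMARK - steps) // 1000 * 5  # +5% per full 1,000 missed
    return coins, emission_rate

print(daily_update(9500))      # (200, 0): two full thousands above the benchmark
print(daily_update(4800, 10))  # (0, 20): two full thousands missed
```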
Carbon Planet’s simplistic instructions and design make it suitable for any age group to play. It’s interactive, competitive, and interesting to play. Users are able to have real-life interactions with this website.
How I built it
We used BlueJ as the platform and Java Swing to create the GUIs.
Challenges I ran into
We ran into the problem that we couldn't convert the code to HTML and publish it, so we used a video demo to show the judges the prototype. We are going to develop this into a real app with more budget invested in the project.
Accomplishments that I'm proud of
We actually created a prototype as beginners in 24 hours.
What I learned
Teamwork and good communication are essential.
What's next for Carbon Planet
We are going to develop this into a real app, with more budget invested in the project, so that users can link their Health app to our app and don't need to input data on a daily basis.
Built With
bluej
javaflow
Try it out
github.com | Carbon Planet | a website that tracks your activity and encourages users to walk instead of using transportations that generates carbon emission. | ['Benjamin Chen', 'Pallab Paul', 'Devang Ajmera'] | [] | ['bluej', 'javaflow'] | 26 |
10,283 | https://devpost.com/software/metro | logo
homepage
code editor
desktop application login
desktop application job menu
Inspiration
I build Metro because I noticed that we had old computers lying around that were doing nothing. Computers have a lot of potential and if we think more creatively with them we can be more sustainable. This idea become more of a reality when I started to experiment with containerization.
What it does
Metro allows clients to upload code that will be run by other Metro users. Upon uploading, the code is run by a volunteer who uses their own machine. Everything is secure and encrypted so that the client's code remains theirs.
How I built it
I used Golang, Docker, and Electron.JS to create a restful API and desktop application. The restful API allows for code validation, the uploading and downloading of user data, and the management of MongoDB documents.
The desktop application, on the other hand, allows Metro to talk to the native OS and run Docker containers on top of it. Metro is able to manage and record Docker containers so that costs stay low and scalability stays a priority.
Challenges I ran into
The biggest challenge was allowing for the communication between containers and Metro's server.
Accomplishments that I'm proud of
I'm really proud of the container management system. It can effectively run a piece of code and send that code to the server. I had to get creative and start using tools such as Electron.js so that I could talk to the native OS.
What I learned
I learned that Docker is a really fun tool to use and has a lot of potential. I will definitely be using it in my upcoming projects.
What's next for Metro
I would like to overhaul and improve the UI. I would also like Metro to incorporate APIs such as Stripe to handle payment. It's a bright future for Metro!
Built With
docker
electron
gin
go
mongodb
node.js
Try it out
github.com | Metro | Sustainable Computation | ['Nabeel Ahmed'] | [] | ['docker', 'electron', 'gin', 'go', 'mongodb', 'node.js'] | 27 |
10,283 | https://devpost.com/software/pandemicfootprint | Inspiration
Coronavirus has affected all of our lives, and it is our duty to help limit the spread and to help our community. Our quiz tells people how well they are doing at containing the virus based on their current habits.
What it does
It presents 4 multiple-choice questions, which the user answers with a number from 1 to 6. Based on their responses, the PFI (Pandemic Footprint Index) of the user is calculated. Then, based on their score, recommendations for what to do next are printed.
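The description doesn't give the exact scoring formula, so purely as an illustration (the real program is Java, and the linear 0-100 mapping here is a made-up assumption), the four 1-6 answers could be combined like this:

```python
def pandemic_footprint_index(answers):
    """Hypothetical PFI: four answers from 1 (safest) to 6 (riskiest),
    summed and mapped linearly onto a 0-100 index (lower is better)."""
    assert len(answers) == 4 and all(1 <= a <= 6 for a in answers)
    raw = sum(answers)                 # ranges from 4 to 24
    return round((raw - 4) / 20 * 100)

print(pandemic_footprint_index([1, 1, 1, 1]))  # -> 0
print(pandemic_footprint_index([6, 6, 6, 6]))  # -> 100
```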
How we built it
As beginners, we built it in Java by scanning in the user's choices and making calculations based on their inputs.
Challenges we ran into
Some challenges we ran into were in the calculation of the Pandemic Footprint Index. We had to find a way to find the score based off of the 4 multiple choice questions, which we initially had trouble calculating.
Accomplishments that we're proud of
We are proud that we were able to provide a comfortable user experience in a first-time desktop application to help everyday people deal with the pitfalls of the pandemic.
What we learned
We learned how to upload a java file into a desktop application so it can be used by others rather than simply within the local IDE. We also learned how to use github so we can share our code and others can learn off of it as well as provide suggestions to our code. Overall, it was a great experience!
What's next for PandemicFootprint
We hope to turn the program into a website that anyone can visit to calculate their own pandemic footprint at any time. This would make it much more accessible to everyone!
Built With
java
Try it out
github.com | PandemicFootprint | In order to help people improve their habits during the pandemic, we created a simple multiple choice quiz that allows users to find their PFI, or pandemic footprint index. | ['Akhil Kammila', 'Hrishi Joshi'] | [] | ['java'] | 28 |
10,283 | https://devpost.com/software/ar-anatomy-827wgq | Ar view
Inspiration
This pandemic has been really tough on all of us, so I made an AR app that will help people as well as doctors.
What it does
It presents information about our body parts in an augmented-reality view.
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for AR Anatomy
Built With
augmented-reality
echo-ar
Try it out
drive.google.com | AR Anatomy | AR based app for the hospital | [] | [] | ['augmented-reality', 'echo-ar'] | 29 |
10,283 | https://devpost.com/software/shopala | home screen
users add images of clothing to inventory and say how often they wear it
users can predict the cost-per-use of a new item they find by inputting the cost and image of the item
based on previously inputted images, the app will predict cost-per-use from how often users wear owned items
Inspiration
Our generation is obsessed with online shopping, especially during quarantine. I personally have online shopped a lot, and recently I have researched the damage this is causing to our planet and the health of thousands. The textile industry is one of the most polluting industries in the world, and due to fast-fashion brands like Zara and Shein, buying cheap, trendy clothing is appealing. However, the waste from making these items is damaging waterways, using up hundreds of gallons of oil and water, releasing more emissions than flights, and the chemical runoff is leaving people with medical health issues. We can't just tell people to stop shopping. Instead we can teach people how to become smarter shoppers and help them save money and help the planet at the same time. So, I have developed Shopala, the first machine-learning shopping assistant app of its kind.
What it does
It calculates the cost per wear of items that users want to buy while they are shopping, based on items the user already has. When users take a picture of an item they want, the machine learning used enables the app to compare the image to previously inputted images of clothing that the user already has. It takes the cost of the new item and divides it by the number of times the user wore a similar item in their closet.
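The cost-per-wear arithmetic above can be sketched as follows. One assumption on my part: I average the wear counts over the top visual matches the similarity model would return, rather than using a single item:

```python
def cost_per_wear(price, wear_counts):
    """price: cost of the candidate item the user photographed.
    wear_counts: yearly wear counts (from stored metadata) of the most
    similar owned items, as a visual-similarity model would return them.
    """
    if not wear_counts:
        raise ValueError("no similar owned items to compare against")
    expected_wears = sum(wear_counts) / len(wear_counts)  # average over matches
    return price / expected_wears
```

For example, a $30 top matched against owned items worn 10 and 20 times a year predicts a cost per wear of $2.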
How I built it
I used React Native to develop the mobile application, using the React APIs to render the components on a mobile device. I also used the Clarifai API to incorporate machine learning: I created a custom model to begin understanding differences specifically among clothing, and stored JSON metadata recording the number of times users wore each particular item. As users input more items, the model becomes more intelligent in recognizing differences in apparel, fabric, and clothing types.
Challenges I ran into
Getting the inputted information into the model and set as JSON metadata for each particular image was difficult. Fetching this data when users selected a similar image was also a challenge. However, I read a lot about JSON metadata and its functionality, and learned a great deal considering I had never heard of this feature before this hackathon. After a lot of trial and error, I was able to narrow the hits for similar images down to 10 and have the metadata returned. After that point, it was just a simple arithmetic function that gives the user the cost-per-wear value.
Accomplishments that I'm proud of
Creating a whole custom model and getting the JSON metadata to work was something that I was surprised to do in just 24 hours. I have some experience with machine learning, but I have definitely refined my skills a lot, and learned about connecting user input into a machine learning model. I also spent a lot more time on UI, and it resulted in a professional-looking application.
What I learned
I learned about more React API features in React Native and implemented them for a better UI. I also learned about JSON metadata and connecting it with the Clarifai custom model API. I was able to use this cutting-edge machine learning technology to make a mobile application, and develop it to successfully and accurately predict images.
What's next for Shopala
I was pleased that the machine learning aspect works and is accurate, but I would like to continue to develop this app and put it in the app store soon! I want to create a user profile page where users can input their name and set some preferences. By doing this, I can welcome users on the homepage when they enter with a "hi !". It's a small touch, but it makes a world of difference because it makes the app more friendly. After all, it is a shopping buddy. In addition, I would like to include an image gallery of all the previously inputted images, and users will have the ability to remove images or edit the JSON metadata of how many times they wear the item in a year. Finally, I would like to improve the UI even further. I have gotten amazing advice on moving forward from Vicky Vo, project designer mentor during the hackathon, and have begun to implement some of the features she suggested, but I will continue to refine it. All in all, I am proud that the machine learning worked excellently, and I am excited to continue developing it after this hackathon.
Built With
clarifai
github
javascript
react
react-native
Try it out
github.com | Shopala | First ever machine learning shopping assistant app that calculates cost per wear of clothing items based off items you already own | ['Nandita Kathiresan'] | [] | ['clarifai', 'github', 'javascript', 'react', 'react-native'] | 30 |
10,283 | https://devpost.com/software/earthquake-watch-htsl9m | Motivation
Social media, most notably Twitter, has played a key role in the distribution of information during earthquakes. People use Twitter to alert and inform other citizens, and this is where media outlets source much of their early-stage information. Yet, in many less developed countries across the world, this source of information is underutilized. Emergency response systems are often alerted hours or days after the earthquake takes place.
Solution
Earthquake Watch is a real-time worldwide earthquake monitoring platform that does the following:
Mines real-time data from Twitter with Twitter's API and web scraper
Extracts the relevant information through Natural Language Processing
Predicts the locations and relative magnitudes of earthquakes through Latent Dirichlet Allocation and an LSTM recurrent neural network.
It can be used as a tool to allow the appropriate respondents, such as disaster relief agencies and humanitarian organizations, to more quickly act upon earthquakes across the world.
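The repository holds the full pipeline (Twitter API, TensorFlow, nltk/gensim). Purely as a hedged, dependency-free sketch of the first stage — discarding obviously irrelevant tweets before the heavier LDA/LSTM models run — one could score tweets by keyword. The term lists and threshold below are illustrative, not the project's own:

```python
import re

EARTHQUAKE_TERMS = {"earthquake", "quake", "tremor", "shaking",
                    "magnitude", "aftershock"}
NOISE_TERMS = {"sale", "giveaway", "movie", "song"}  # illustrative ad/noise markers

def relevance_score(tweet):
    """Crude keyword score: earthquake terms add, noise terms subtract."""
    words = set(re.findall(r"[a-z']+", tweet.lower()))
    return len(words & EARTHQUAKE_TERMS) - len(words & NOISE_TERMS)

def filter_tweets(tweets, threshold=1):
    """Keep only tweets scoring at or above the threshold."""
    return [t for t in tweets if relevance_score(t) >= threshold]
```

In the real system this cheap filter would sit in front of the NLP models, cutting the volume of advertisements and off-topic chatter they must process.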
Challenges I ran into
No matter how much relevant information there is on Twitter, there is always an equal amount, if not more, of irrelevant thoughts, advertisements, and even misinformation. One of the major challenges of this project was to process these tweets and extract the most relevant data through Natural Language Processing and learning models.
How I built it
Earthquake Watch is entirely built on Open Source software. I gathered data with Twitter's API, used TensorFlow for modeling, and nltk and gensim for Natural Language Processing inference. I served up my application with Flask.
What's next
As well as improving the accuracy of Earthquake Watch's predictions, I plan to migrate the application to AWS infrastructure in order to improve performance and scalability.
Try it out
github.com | Earthquake Watch | Real-time worldwide earthquake monitoring by analyzing tweets with NLP and deep learning models | [] | [] | [] | 31 |
10,283 | https://devpost.com/software/pie-rooms-contactless-private-restaurant-in-a-private-room | Inspiration
No safety in open restaurants: Post COVID-19, people will be more concerned about safety, and it becomes a huge issue when they have to go to an open restaurant for their dine-in experience — touching menus, interacting with other people, sitting on a used couch, being near strangers, paying at the counter, and having no idea how the food is prepared inside the kitchen.
No private space to enjoy: Travellers, and people in general, often want a place where they can freely enjoy or chill out for a few hours with full privacy — for example, when they are stuck in a city and have some free time to spare.
Judgement issue: Couples, party-goers, and even families often get judged or stared at by other people in an open restaurant, which creates a discomforting or embarrassing moment for those being stared at. This simply spoils the mood.
What it does
It provides a private space in a hotel or guest house for couples, party-goers, and families to have their dine-in experience, food, fun, and time together — with zero disturbance or interaction with others and complete privacy — served by a single centralised kitchen in the same building.
Why they need our solution:
They need to get private dine-in experience with their loved ones.
For Group party, birthday parties etc.
For lunch/dinner Date
For Gossiping & spending time together.
Chillout
Safety & enjoyment in corona times
How we built it
We built a web app with the help of various frameworks: Ionic for the frontend, Ruby on Rails for the backend, and MySQL for database support.
We deployed the frontend on Firebase and the backend on Heroku.
Challenges I ran into
Finding a good team to make our video & software.
Restructuring of the plan.
Lack of good internet connection
Got to add too many things in the website in a limited time.
Had to remove a lot of bugs from the website.
Accomplishments that I'm proud of
1. We successfully completed our task in very little time.
2. We found an amazing, hard-working team to work with in the future.
3. We are proud of coming up with an idea which has the potential to change the world.
4. We are proud of creating a service which can restore the hospitality industry with more customer satisfaction.
What I learned
Team work is necessary if you want to create something amazing.
Working with a calm mind in rushed situations really helps with clear thinking, execution, and managing the team.
Creating value in a business is the key to success.
Always focus on the big goals you want to achieve. It really helps in keeping you focused.
Accomplish more in less time.
What's next for Pie Rooms:Contactless Private restaurant in a Private Room
We are going to implement it in some properties by modifying them. After that, we will test our different assumptions by taking customer reviews and make modifications if necessary. Once the product has been tested and customers like it, we will expand our branches to multiple cities with some external funding.
Built With
heroku
ionic
mysql
ruby-on-rails
Try it out
piehotelodisha.firebaseapp.com | Pie Rooms | Contactless Private Restaurant in a Private Room | ['Sourav Prajapati', 'Rishab kumar shah'] | [] | ['heroku', 'ionic', 'mysql', 'ruby-on-rails'] | 32 |
10,283 | https://devpost.com/software/skool-ps1t2w | dashboard
landing
register
post, viewer and creator
Scheduler
Search
settings
Inspiration
Our school has an app for all the students to connect, which is also used to distribute homework from teachers, and the like.
What it does
This is a social platform built and engineered for students, with an easy to use and understand UI.
How I built it
I built it from the ground up using the React web framework, with Node.js for the backend, PostgreSQL for the database, and MobX on the front end for state management.
Challenges I ran into
I ran into various data-structure and data-management issues around post sharing between users and users getting connected.
Accomplishments that I'm proud of
I used a brand new UI framework called Ant.Design instead of the usual frameworks I use, and I'm stoked with how it performed and looked. This is also one of my first proper tries at a social-network-esque product, and while I'm absolutely surprised by how many things you have to get right to get a product like this off the ground, I'm proud that I could build it this far with no prior experience in this hyper-specific field.
What I learned
I learned that data structure is a very important fundamental in a product that relies mostly upon data (surprising no one :-P). I have learned a fair bit about it, but I have also learned that I have a long way to go in understanding it.
What's next for Frission
The currently submitted project is basically just a PoC, and I have various other features planned out in my mind, of which a couple are:
A better way to connect.
Support teacher accounts and task distribution.
Use the US Public School database for better recommended connections between students. (
https://catalog.data.gov/dataset/public-schools/resource/34001721-b825-4261-b2b7-74be402320b6
)
Support moderation.
Moderated chats, and classes with video from multiple sources. Since this will be open-source, security issues will be next to none.
Built With
node.js
postgresql
react
Try it out
107.178.212.238
github.com | Frission | Connect with your school mates during the trying times | ['Gourab Nag'] | [] | ['node.js', 'postgresql', 'react'] | 33 |
10,283 | https://devpost.com/software/learn-to-dance | Performance Report
Upload page
Site overview
Imposing dance.ai
Home page
Inspiration
Dance is a beautiful art form and many people want to learn it, as shown by the growing number of dance studios. With our new physically-distanced world, I can no longer take advantage of that. Learning dance moves from choreography videos is a wonderful tool, but videos cannot tell you if you are doing a dance move correctly. I want people to experience the benefits of a real-life coach with the convenience of on-demand video.
Working
DANCE uses artificial intelligence to detect people’s dance moves. It evaluates and tracks how a person is dancing compared to how the dance should be performed, so users of DANCE learn how to dance better and assess their progress in learning dances.
Making
Beginning with my vision for an AI dance tutorial, I designed a wireframe mockup of the flow of my app and the functionality of the pose comparison algorithm. On the front end, I used HTML & CSS to style the site and extensive JavaScript to import and implement the pose detection, using the PoseNet library for AI-based pose estimation. For the backend, I worked on retrieving videos and a list of timestamp markers. Tutorial videos loop according to the timestamps, and those markers can be skipped, ignored, or replayed. Feedback from a webcam is taken and used to display a live score. The score is calculated by comparing the data from the webcam to the data from the original/professional dance video.
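The actual scoring code lives in the repo (in JavaScript); as a hedged sketch of how webcam keypoints might be compared against the reference video's keypoints, here is one possible approach in Python. The normalization scheme and the 0.5 decay constant are my own illustrative choices, not DANCE's real algorithm:

```python
import math

def pose_score(ref, live):
    """ref, live: equal-length lists of (x, y) keypoints, e.g. the 17
    points PoseNet estimates. Returns 0-100 (100 = identical pose)."""
    def normalize(pts):
        # Translate to the centroid and scale by the farthest keypoint, so
        # the score ignores where the dancer stands and camera distance.
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        scale = max(math.hypot(p[0] - cx, p[1] - cy) for p in pts) or 1.0
        return [((p[0] - cx) / scale, (p[1] - cy) / scale) for p in pts]

    a, b = normalize(ref), normalize(live)
    # Mean per-keypoint distance, mapped onto 0-100 with an exponential decay.
    err = sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b)) / len(a)
    return round(100 * math.exp(-err / 0.5))
```

Run per frame, a score like this can drive the live feedback number shown next to the webcam feed.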
Challenges
Challenge 1 - I faced major hurdles when attempting to implement the PoseNet model, especially due to the complexity of extracting frames of a webcam/video and applying it. Some relatively simple features, like plotting the points of where PoseNet estimated limbs to be, actually required a lot of tinkering to erase the points after a certain period (so that the screen doesn’t become cluttered).
Challenge 2 - I encountered various issues that seem trivial at first glance but consumed large amounts of time and energy. For example, I spent hours attempting to fix a problem caused by a misplaced iteration variable in a for-loop, as I initially thought the problem was caused by something else. I also spent a long time figuring out async/await statements for a few functions to solve a problem that could have been solved with a simple if-else statement. Through rigorous checking and testing I finally understood my mistake; a Google Meet with Jonathan Lei also helped me sort out most of it.
Challenge 3 - Another issue I faced was time. Though I knew the timings in advance, I was exhausted while working on this, as they were the opposite of my time zone. Being in India, it was a real challenge, as well as a source of pride, to participate and finally make an entry.
Accomplishments that I am proud of
I am exceptionally proud of how my project can better teach people dance and enhance learning from choreography videos/tutorials. A mentor first helped me develop good-looking visuals for my website. Then I took it even farther and added beautiful micro-interactions: subtle CSS transitions on most elements. If there were a user interface award in this hackathon, I am confident I would be a strong contender. In addition, applying the PoseNet model to both a video and live webcam footage required creativity and robust programming. Comparing the two with an algorithm was also an exceptionally difficult challenge that I conquered!
What I learned
Working with PoseNet introduced me to a complex AI algorithm that piqued my interest and served as a great introduction to working with big datasets. I also took away different things from each part of the project. For example, I learned how to create CSS animations and use jQuery to show/hide different elements, and working on the backend improved my skills with Firebase and with handling difficult interactions with PoseNet objects.
What's next for DANCE?
With the data I glean from users' dance moves, and perhaps a few more hours, I would be able to develop custom dance plans with specific instruction so users can improve their weak points. With the same data, perhaps I could write something that choreographs new dances using the user's strengths and the moves they have already learned.
Built With
ai
cam
css
firebase
google-cloud
htm
jquery
machine-learning
posenet
vision
Try it out
github.com
daince.tech | Learn to Dance | Learn to Dance with AI using a fun and beautiful user interface. | ['Udayan Chatterjee'] | [] | ['ai', 'cam', 'css', 'firebase', 'google-cloud', 'htm', 'jquery', 'machine-learning', 'posenet', 'vision'] | 34 |
10,283 | https://devpost.com/software/r-uv | Prototype of R-UV
Inspiration
We were inspired by hundreds of millions of visually impaired people who are facing enormous challenges using the elevators and navigating public spaces. We wanted to minimize the risks of contracting COVID-19 among this group of people using Computer Vision and Arduino.
What it does
UV fixture detects if there is a person inside the elevator (through Motion Sensor); and if there is no, then it slides outside and disinfects the panel for the safety of the next person.
How I built it
We used 2 cm wide aluminium construction corners for the carcass. We cut 4 pieces of each of 3 different lengths — 15, 10, and 5 cm — then drilled holes in each. Using those holes, nuts, and screws, we attached them together, forming a parallelepiped shape. The sliding shelf to which the light emitters are attached is made from 2 layers of plywood. Once the sliding element and carcass were done, the sensors and other elements were soldered together and attached to the Arduino and power relay. Everything was powered by a common 5V power bank.
Using the motion sensor, we wrote a function that records changes in infrared emission within a certain radius of a common elevator cabin. If it doesn't record, or "see", a significant change and movement of infrared light within that radius, the Arduino sends a signal to the servomotor, which moves the sliding element out; the Arduino then sends a high signal to the power relay, which activates the NO (normally open) channel, and so the light emitters turn on. If movement is recorded, the opposite happens and the device turns off.
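The control loop described above can be sketched in Python for clarity (the real firmware is Arduino C; the three-reading idle threshold below is an illustrative choice, not the device's actual timing):

```python
def uv_controller(motion_events, idle_threshold=3):
    """Simulate R-UV's control loop.

    motion_events: iterable of booleans from the PIR sensor (True = motion).
    After `idle_threshold` consecutive no-motion readings, the shelf slides
    out and the UV lamp turns on; any motion retracts it and turns it off.
    Returns the lamp state after each reading.
    """
    states, idle, lamp = [], 0, False
    for moved in motion_events:
        if moved:
            idle, lamp = 0, False        # person present: retract, lamp off
        else:
            idle += 1
            if idle >= idle_threshold:
                lamp = True              # cabin empty long enough: disinfect
        states.append(lamp)
    return states
```

The same state machine maps one-to-one onto the Arduino loop, with the servo and relay writes happening on each lamp-state transition.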
We used C and the Arduino IDE to write the code for the project.
Challenges I ran into
Computer Vision was tough to implement during the duration of Hack3 event, so we used Motion Sensor, which was also cheaper for prototype. However, for the future we would like to use cameras in order to really ensure that the elevator is empty.
Accomplishments that I'm proud of
We fully developed the prototype, it works, and working as a team was fun. We are proud of making a product that will make the lives of visually impaired people easier by helping with navigation.
What I learned
We learned to really think about people who are part of our society, about inclusion and the ways to achieve it to the higher extent.
What's next for R-UV
Implementation of cameras, make the product thinner and collaborate with NGOs (AFB, ACB, Lighthouse International).
Built With
arduino
c
motionsensor
uv | R-UV | “R-UV” is a product that protects visually impaired people, as it disinfects elevator panels using UV light. | ['Damir Zhumatayev', 'Alisher Otkelbayev', 'Yersultan Winglet', 'tupoy skah tupoy tupoy'] | [] | ['arduino', 'c', 'motionsensor', 'uv'] | 35 |
10,283 | https://devpost.com/software/eduquix | Assignment
Zoom Class
Quiz
Home Page
About The Teammates
Cart
Store HomePage
Cart Oliver Twist
Meet The Team
Best Books
Inspiration
The coronavirus outbreak made the world slower than usual. Students suffer the most: with countries in lockdown and schools shut, they have no option other than learning online. We decided to improve online learning and education by making a project called EduQuix. Our project also focuses on home delivery of school essentials.
What it does
Some of the features are listed below:-
It asks students and teachers to sign up or log in. After that, it provides students the Zoom URL to join the online meeting; teachers start their own meeting and upload the meeting ID and password to the website server so the students can join.
After the meeting ends, students solve a quiz related to the class, which automatically grades them according to their performance. Teachers get access to upload the quiz.
Teachers can assign projects or assignments through the platform, and students can submit their work before the deadline.
The project also lets users order school essentials like books, pens, and other stationery online.
How I built it
The delivery part was built using a wix.com theme, modified into a stationery store website.
The quiz page was made using Wolfram technologies.
The rest of the website was made using Bootstrap Studio, integrating Zoom for the meetings and Firebase to get the meeting ID and password for the Zoom meeting, then linking the main site with the delivery site made through Wix.
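As an illustration of the Firebase piece, the meeting record a teacher uploads could be read back over the Firebase Realtime Database REST API (which serves JSON at `.json` paths). The project URL and the `/meetings/<class_id>` path layout below are my own assumptions, not EduQuix's actual schema:

```python
import json
import urllib.request

DB_URL = "https://eduquix-demo.firebaseio.com"  # hypothetical Firebase project URL

def meeting_path(class_id):
    """REST path for a class's meeting record (layout is an assumption)."""
    return f"{DB_URL}/meetings/{class_id}.json"

def fetch_meeting(class_id):
    """Fetch the Zoom meeting ID and password the teacher uploaded,
    e.g. {"id": "...", "password": "..."}."""
    with urllib.request.urlopen(meeting_path(class_id)) as resp:
        return json.loads(resp.read())
```

The website's JavaScript does the equivalent fetch client-side before handing the student off to Zoom.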
Challenges I ran into
All the teammates are from different countries, so the time zones created a major problem for collaboration, but we divided the work and every teammate completed their part on time.
I was new to Wix.com, so it was time-consuming to understand it and make a working website.
One of my teammates was new to Firebase, so it was difficult for him to work with at first, but he learned what Firebase is and how to use it. It was time-consuming, but helping each other made the work easier.
Accomplishments that I'm proud of
I got to learn about website development because it was my first time making a website
I got to learn integrating firebase to websites.
I was a beginner to bootstrap but now I have enough knowledge to make a website work using bootstrap
What I learned
Team Work is a great thing I got to learn by working with people belonging to different parts of the world.
I was new to website development. So I got to know more about website development using bootstrap and wix.com
I used to integrate mobile apps with firebase but this project gave me an opportunity to learn about integrating websites with firebase.
What's next for EduQuix
In the future, we plan to make an Android app so it is easier for everyone to use, even those who don't own a computer or a laptop.
We would try to add a notification panel through which students get emergency updates, like a class being postponed or canceled.
We would try to improve the assignment and quiz sections so that students get a great experience related to the class they attended.
We can tie up with local stationery stores to provide users same day delivery.
Built With
bootstrap
css
firebase
html
iframe
javascript
python
wix
wolfram-technologies
Try it out
github.com
naseeb0.github.io
devpost.com | EduQuix | Endless Learning from Home, Seamless Delivery of School Supplies! | ['Naseeb Dangi', 'Keshav Majithia', 'Min Min Tan'] | ['Track Winner: Education'] | ['bootstrap', 'css', 'firebase', 'html', 'iframe', 'javascript', 'python', 'wix', 'wolfram-technologies'] | 36 |
10,283 | https://devpost.com/software/alexa-let-s-code | Best Main Prize - Machine Learning - Alexa, Let's Code
Inspiration
With the recent COVID-19 pandemic, students worldwide have transitioned to online schooling. For some students, however, the transition has been harder than for others. Near where Veer lives is the oldest school for blind students: Perkins School for the Blind. Veer had always wanted to help them, and during these times he decided to help when they needed it more than ever. Together, our team worked on an online platform dedicated to the blind, focused on our favourite subject: programming.
According to the National Federation of the Blind, COVID-19 has had a disproportionate impact on the blind, with many facing additional challenges during the pandemic. From an education standpoint, blind students and blind parents face uncertainty about the types of electronic materials they will be expected to use for the remainder of the academic year, making it hard for them to keep up with classes. Lastly, it is difficult for the visually impaired to learn how to code on their computer, a challenge which has been exacerbated by the pandemic.
What it does
We utilized a complex tech stack incorporating Alexa skills, Flask endpoints, REST APIs, Google Cloud Speech-to-Text, and a Python desktop application to build a speech-to-code editor which listens to speech, translates it to Python code, and then displays the code in a desktop text editor. The platform is complete with voice-enabled Git commands the user can perform through Amazon Alexa.
We used natural language processing to:
Allow the visually impaired to code in python by simply speaking
Provide a handful of voice enabled git commands and speech recognition features to effectively teach coding and version control
Display the spoken code in an online IDE, where one can run it and get the output
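The real skill uses fuller NLP; purely as a hedged sketch of the dictation-to-code idea, a few phrase-to-token substitution rules might look like this (the rule table is illustrative, not the project's actual grammar):

```python
# Hypothetical phrase-to-token rules; the real skill uses fuller NLP.
RULES = [
    ("open bracket", "("),
    ("close bracket", ")"),
    ("equals", "="),
    ("colon", ":"),
    ("new line", "\n"),
]

def speech_to_code(utterance):
    """Translate a dictated phrase into Python source text by replacing
    each spoken phrase with its corresponding token."""
    code = utterance.lower()
    for phrase, token in RULES:
        code = code.replace(phrase, token)
    return code
```

For example, dictating "x equals 5" yields the source line `x = 5`, which the Flask endpoint can then push to the desktop editor.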
How we built it
We used:
Amazon Alexa
Flask
Python
Natural Language Processing
Google Cloud Speech API
Challenges we ran into
At first, we wanted to run everything through Alexa; however, we soon learned that we could not read raw text directly from an Alexa action. Thus we decided to pivot to using Alexa for specific commands and the GCP to intake and process all the commits. We also faced difficulty with time zone differences and staying connected.
Accomplishments that we're proud of
We're proud of how we handled the situation once we figured out Alexa couldn't process raw data. We were able to pivot our project nicely and create a product we're proud about. Despite never having met in person, our team as a group was able to be flexible and adapted to the situation.
What we learned
We learnt how to use speech recognition and execute the code in string form. We also learned more about linking Alexa skills and git commands with Python, as well as connecting to flask endpoints through desktop applications, especially for real-time work. Overall, three of our four teammates learned about Alexa and its compatibility, and we all tried to learn more by combining different technologies together.
What's next for Alexa, Let's Code
We eventually want to push this out for all Alexa users. In addition, we want to connect our application with popular text editors, such as VSC, Repl.it, and Atom. We want to eventually expand our project to more languages and functionalities to ease the life of developers.
Built With
amazon-alexa
flask
google-cloud
natural-language-processing
pyqt
python
Try it out
github.com | Alexa, Let's Code | Using Alexa to empower the visually impaired to code simply by speaking | ['Veer Gadodia', 'Nand Vinchhi', 'Shreya C'] | [] | ['amazon-alexa', 'flask', 'google-cloud', 'natural-language-processing', 'pyqt', 'python'] | 37 |
10,283 | https://devpost.com/software/help-our-heroes | Header
Inspiration
We see stories of healthcare workers on the front line every day during this pandemic. They are working tirelessly for us, so we thought in return to find a way to help them, and Help Our Heroes was born.
What it does
Provides resources for healthcare workers in categories such as mental health and discounts.
How we built it
We used a combination of HTML, CSS, Javascript, and Bootstrap to build our website.
Challenges we ran into
The navigation bar was giving us some trouble, but eventually we got it fixed.
Accomplishments that we're proud of
One of us (Sneha Sriram) was a beginner when we started this website. We are most proud of learning as much as we did during development.
What we learned
Incorporating javascript into the website
Built With
bootstrap
css3
html5
javascript
Try it out
hack3.glitch.me | Help Our Heroes | Help Our Heroes is a website that provides resources tailored to help healthcare workers with different aspects of their lives in this trying time, such as mental health, food, and transportation. | ['Puja Teakulapalli'] | [] | ['bootstrap', 'css3', 'html5', 'javascript'] | 38 |
10,283 | https://devpost.com/software/dappsule | . | . | . | ['Praveen Kumar'] | [] | [] | 39 |
10,283 | https://devpost.com/software/covid-19-rakshak | NEXTGEN HEALTH CARE DEVICE 2
UVC CHAMBER
NEXTGEN HEALTH CARE DEVICE
BEDS FOR COVID 19
BEDS FOR COVID 19 2
HANDSFREE BASIN
Inspiration
During this global crisis of COVID-19, our doctors, nurses, police, etc. are working on the frontline against it. So why should we fall back? After all, we are engineers with innovative minds. After analyzing some problems, we are here with a combined hardware and software solution, and we are trying to make a next-generation healthcare device for protecting the vulnerable population during and after this pandemic.
Nowadays, COVID-19 patients are facing issues in finding beds in hospitals. They have to go to different hospitals and check if the bed is available or not. And even if the beds are available then they have to make sure that it is affordable.
During the COVID-19, we are facing a lot of disasters like cyclones in Orissa, West Bengal, Maharashtra, etc. and floods too. Also, many people have lost their lives and many people are missing. Lots of parents have lost their children during these disasters.
What it does
There is a hardware setup consisting of a shoe-sole sanitation device, a health band, and a hands-free basin. A pedestrian comes up to our setup and stands over it, and the shoe sole is sanitized by UV light (this restricts the transmission of coronavirus through shoe soles). For the health band, users connect our app to the setup through Bluetooth; the band measures heart rate, blood oxygen saturation level, and body temperature, and sends the whole data to the app. Lastly, there is the contact-free basin for washing hands in public spaces (everyone washes their hands without touching the tap, which restricts surface transmission of the coronavirus and reduces wastage of water).
We have made an app, to find the number of beds available in the hospitals in the city they live in. The app will show the number of available beds in different hospitals and will tell the price too.
KHOJ will help to find a missing person. The guardian of the missing person will upload the details of that person, so the image will be added to the KHOJ database.
If any person finds a person who was missing, he can also upload the picture of that person through KHOJ.
KHOJ will try to find that person in the database, and if found it will notify the nearest police station from where the person is found and the guardian of the missing person.
How we built it
In the hardware setup of this project, the shoe-sole sanitization device uses a UV-C tube and its choke; the contact-free basin is built from mechanical parts like a pedal, spring, metal wire, and cistern; and the health band is built with an Arduino Uno, a MAX30102 (for heart rate and blood oxygen saturation), a DS18B20 (for body temperature), and a Bluetooth HC-05 module.
We built our app on Flutter, a cross-platform framework using the Dart language, because Flutter apps can run on both iOS and Android phones. We then used an API provided by disease.sh for details related to COVID-19, such as total cases, active cases, deaths, etc., and added further features with our own skills.
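As an illustration of the data the disease.sh API provides, here is a minimal Python sketch that parses a response shaped like the API's country endpoint and builds the kind of summary string an app might display. The field names follow the public disease.sh documentation, but treat the exact payload shape as an assumption:

```python
import json

# Sample payload shaped like a disease.sh /v3/covid-19/countries/{country}
# response (field names are an assumption based on the public API docs).
sample = json.loads("""
{"country": "India", "cases": 1000000, "active": 200000,
 "recovered": 750000, "deaths": 50000, "todayCases": 12000}
""")

def summarize(stats):
    """Return the display string an app might show for one region."""
    return (f"{stats['country']}: {stats['cases']} total, "
            f"{stats['active']} active, {stats['deaths']} deaths "
            f"(+{stats['todayCases']} today)")

print(summarize(sample))
```

In the real app, the JSON would come from an HTTP request rather than a literal string.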
Challenges we ran into
Due to the lockdown, we didn't have a NodeMCU (IoT device) to store data in the cloud. So we worked around this with an Arduino Uno and Bluetooth: first we send the data to the mobile app, and then from the app to Firebase for storage.
Accomplishments that we’re proud of
A working health band with real-time data, functioning backend servers for data management, and an easy-to-use app that presents real-time data to the user.
What we learned
We learned a lot about how to store data on a server without having a dedicated IoT device, and also about the new sensors.
Moreover, we built a more functional mobile app.
What’s next for the team
In terms of enhancing the project, we can identify the user's state of mind using EEG or GSR readings and machine learning, and also determine blood pressure through a self-test kiosk.
In terms of marketing the product, we would like to initially target public spaces in our region.
Built With
android-studio
arduino
bluetooth
bootstrap
django
Try it out
drive.google.com
docs.google.com | COVID-19 Rakshak | COVID RAKSHAK (#atmanirbhar) - Beds for COVID-19; NEXTGEN Healthcare Device; UVC Chamber; Hands free basin; Find, missing people during disasters; | ['Anubhav Sinha', 'Keshav Bathla', 'Kaushal Bhansali'] | ['Best Hardware Hack'] | ['android-studio', 'arduino', 'bluetooth', 'bootstrap', 'django'] | 40 |
10,283 | https://devpost.com/software/safe-tweets | . | . | . | [] | [] | [] | 41 |
10,283 | https://devpost.com/software/contact-tracing-yodvuj | Inspiration
In my country, many stores use a similar system to track which customers entered and left their business at what time. This allows them to notify clients who were in the store at the same time as an infected individual and helps businesses contact trace.
What it does
It is a login/logout system where customers just record what time they entered, what time they left, and their phone number. All this information is put into a CSV spreadsheet.
How I built it
The code was built in Python and is remarkably simple. The fact of the matter is that I did not have much time or expertise to work on my submission. The code works by setting up a few text input boxes, labels, and a submission button. Once submit is pressed, a function called submit is called, which appends all the input-box values to a CSV using a library called pandas.
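A minimal sketch of the submit flow described above, using the stdlib csv module in place of pandas and an in-memory buffer instead of a file on disk. The names here are illustrative, not the project's actual code:

```python
import csv
import io
from datetime import datetime

def submit(writer, time_in, time_out, phone):
    """Append one visit record -- what the submit button's handler would do."""
    writer.writerow([time_in, time_out, phone, datetime.now().date().isoformat()])

# In the real app this would be a CSV file on disk; an in-memory buffer
# keeps the sketch self-contained.
buf = io.StringIO()
w = csv.writer(buf)
w.writerow(["entered", "left", "phone", "date"])  # header row
submit(w, "10:15", "10:40", "555-0100")
print(buf.getvalue())
```

The date column makes it easy to later filter the spreadsheet for everyone present on the same day as an infected visitor.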
Accomplishments that I'm proud of
I have only been coding for about a month now, so this project is very simplistic. However, I had to learn a lot to make the program more than just a console app, and I am proud of that. I was also able to really challenge my troubleshooting and online-forum-skimming skills.
Built With
pandas
python
tkinter
Try it out
github.com | Business Contact Tracer | This software is a check in checkout system for stores, malls, ect. Responses are put in a spreadsheet and it allows stores to contact customers who were in the same place as a COVID19 postive person | ['Jakob Danninger'] | [] | ['pandas', 'python', 'tkinter'] | 42 |
10,283 | https://devpost.com/software/shop-safe-app-cigf21 | Inspiration
During this COVID-19 pandemic, it is advised to avoid gatherings and avoid contact with people, as it can lead to an increase in the spread of the disease. People followed social distancing when they went shopping for essential commodities, and this led to long waiting lines outside the shops. Moreover, in some places it even becomes a huge problem, because proper social distancing can't be maintained due to crowding.
The other issue is contact tracing. An infected person might have visited several places before being confirmed as COVID-positive. This creates huge distress, as every place he travelled to in the last few days must be sanitized properly.
Another minor issue is forgetting to wear a mask while moving out of our home.
ShopSafe App is built to address all these issues.
What it does
ShopSafe App can be used to join the Queues of the Shops virtually. Hence there is no necessity for everyone to stand in the long queues. Thus Social Distancing measures can be followed efficiently.
Once the user is logged in.
Users can see various stores nearby, along with details such as the number of people in the queue (with estimated wait time).
Users can join the queue of any store with a single button click.
Once the user's position is confirmed, he will be notified when his turn comes.
A PIN is provided, which needs to be shared with the store manager at the shop for verification.
Also, if not interested, the user can opt out of the queue anytime until his turn comes.
All the corresponding details about each customer are available on a separate dashboard provided to the store manager, so that he can contact his customers if required. The store manager can also edit and manage details about the store at any time.
For easier contact tracing, all the details about the stores previously visited by the user are stored locally, so that they can easily be used for contact tracing if the user is infected.
ShopSafe uses a geofence: the user marks his home territory, and when he is about to move out of the marked territory, an automated notification pops up asking him to wear a mask. So the next time you move out of your home, you will not forget to wear a mask.
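The geofence exit/entry logic described above could be sketched as a distance check against the home location. This is a simplified illustration only; the real app uses the platform's geofencing API rather than manual haversine checks, and all names here are hypothetical:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_event(home, radius_m, prev_pos, new_pos):
    """Return 'WEAR MASK' on exiting the fence, 'WASH HANDS' on re-entry."""
    was_inside = haversine_m(*home, *prev_pos) <= radius_m
    now_inside = haversine_m(*home, *new_pos) <= radius_m
    if was_inside and not now_inside:
        return "WEAR MASK"
    if now_inside and not was_inside:
        return "WASH HANDS"
    return None

home = (19.0760, 72.8777)  # hypothetical home coordinates
print(geofence_event(home, 100, home, (19.0860, 72.8777)))  # user moved ~1.1 km away
```

Moving from inside to outside the 100 m radius triggers the mask reminder; the reverse transition triggers the hand-washing reminder.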
How I built it
The app is built on Android Studio using the Android SDK.
Additionally, several open-source libraries were used at required positions to speed up the development process.
Challenges I ran into
Database design and building the UI architecture were hard initially. Later I followed a few online tutorials to get a proper understanding and get things done.
Accomplishments that I'm proud of
I am proud that I was able to ideate and build the whole prototype within 13 hours.
What I learned
I learned various things about the geofencing mechanism and its integration into Android.
What's next for Shop Safe App
I will try to integrate a feature where the user, prior to the visit, can provide a list of items he wants to get from the store. The people at the store can then keep them ready by the time the user arrives for pick-up.
Built With
android-studio
firebase
geofence
java
Try it out
github.com | Shop Safe App | Virtual Queuing Platform which simplifies contact tracing and social distancing. Added with a feature which notifies you to wear mask automatically. | ['Sainag Gadangi'] | [] | ['android-studio', 'firebase', 'geofence', 'java'] | 43 |
10,283 | https://devpost.com/software/c-care-f4zd7j | Inspiration
During the current COVID-19 pandemic, I see health workers curing patients, doctors innovating new medicine, the police controlling crowd movement, and even bus drivers helping people get back home. As a future engineer, I felt like my contribution was none, so I felt motivated to do my part, try to bring a positive change, and make sure my product can also be used in a future pandemic.
Problem our project solves
The massive spread of COVID-19 is due to a major reason: when a person is infected, he can be asymptomatic for up to 21 days and still be contagious, so the only way to contain the spread is by wearing a mask and maintaining hand hygiene. WHO and CDC reports say that if everyone wears a mask and maintains hygiene, the number of cases can be reduced three-fold. But HOW will we do that? How can we make everyone habituated to following the safety precautions so that normalization can take place?
What our project does
Our app is a first-of-its-kind safety awareness system built on the Google geofencing API. It creates a geofence around the user's home location; whenever the user leaves home, he gets a notification in the C-CARE app ('WEAR MASK'), and when he returns home he gets another notification ('WASH HANDS'), ensuring the full safety of the user and their family. It is also loaded with additional features: i) a HOTSPOT WARNING SYSTEM, in which a user entering a COVID hotspot region is alerted to maintain 'SOCIAL DISTANCING'; and ii) a statistics board where the user can see how many times he has visited each of these geofences. With repeated notifications, we make people habituated to wearing masks, washing hands, and social distancing, which makes every one of us a COVID warrior: we are not only protecting ourselves but also protecting others, only with C-CARE.
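The statistics board described above essentially counts geofence entries per location. A minimal illustrative sketch (not the app's actual code; the fence names are hypothetical):

```python
from collections import Counter

class StatsBoard:
    """Counts how many times the user has entered each geofence."""

    def __init__(self):
        self.visits = Counter()

    def record_entry(self, fence_name):
        """Called each time a geofence-enter event fires."""
        self.visits[fence_name] += 1

    def summary(self):
        """Visit counts to show on the statistics board."""
        return dict(self.visits)

board = StatsBoard()
for fence in ["home", "market", "home", "hotspot", "market", "market"]:
    board.record_entry(fence)
print(board.summary())  # {'home': 2, 'market': 3, 'hotspot': 1}
```

In the real app the counts would be persisted (the project lists SQLite) rather than kept in memory.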
Challenges we ran into
1.) We lacked financial support, as we had to make this app from scratch.
2.) We had problems collecting data on government-certified hotspots, and we also had to do a lot of research on the spread pattern of COVID-19.
3.) Due to a lack of mentors, whenever the app stopped working we had to figure out by ourselves how to correct the error.
4.) It took us long to test it in real time, as during lockdown it was too hard to go outside; but finally, after the lockdown loosened a bit, we tested it and it gave an excellent result.
5.) We didn't know much about geofencing before this, so we had to learn it from scratch using YouTube videos.
Accomplishments that we're proud of
We’re proud to have completed our project in the period of this hackathon. Additionally, we’re proud of how we’ve dealt with time pressure and worked cohesively as a team to actualize our start-up goals, which we believe would have a genuinely positive impact on saving many lives once implemented properly.
What we learned
All team members of C-CARE were able to grow their area of competence by participating in the whole process of idea definition, market research, validation, prototyping, and presentation. Through different mentor sessions, we learned that problems could be approached by many means, but most importantly our mission should be clear.
What's next for C - CARE
COVID cases are increasing every day, and chances are low that a vaccine can be created immediately; apps like C-CARE will play a crucial role in lowering the spread of infection until a proper vaccine is made. Our app can also be used for a future pandemic or seasonal diseases such as swine flu or bird flu.
Built With
android-studio
geofence
google-maps
java
sqlite
Try it out
github.com | C-CARE APP | C - CARE An app that makes ever person a COVID warrior. | ['Anup Paikaray', 'Arnab Paikaray'] | ['Our First Hackcation'] | ['android-studio', 'geofence', 'google-maps', 'java', 'sqlite'] | 44 |
10,283 | https://devpost.com/software/code-assistant | Inspiration
With the recent COVID-19 pandemic, students worldwide have transitioned to online schooling. For some students, however, the transition has been harder than for others. Near where Veer lives is the oldest school for blind students: Perkins School for the Blind. Veer had always wanted to help them, and, during these times, he decided to help them when they needed it more than ever. Together, our team worked on an online platform dedicated for the blind and targeted for our favourite lesson: programming.
According to the National Federation of the Blind, COVID-19 has had a disproportionate impact on the blind, with many facing additional challenges during the pandemic. From an education standpoint, blind students and blind parents face uncertainty about the types of electronic materials they will be expected to use for the remainder of the academic year, making it hard for them to keep up with classes. Lastly, it is difficult for the visually impaired to learn how to code on their computer, a challenge which has been exacerbated by the pandemic.
What it does
We utilized a complex tech stack incorporating Alexa skills, Flask endpoints, REST APIs, Google Cloud Speech-to-Text, and a Python desktop application in order to build a speech-to-text editor that can listen to speech, translate it to Python code, and then display the code in a desktop text editor. The platform is complete with voice-enabled git commands which the user can perform using the Amazon Alexa.
We used natural language processing to:
Allow the visually impaired to code in python by simply speaking
Provide a handful of voice enabled git commands and speech recognition features to effectively teach coding and version control
Display the spoken code in an online IDE, where one can run it and get the output
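As a toy illustration of the speech-to-code idea, a few hard-coded phrase rules might look like this. The real project uses natural language processing on Google Cloud Speech transcripts; these rules and names are purely hypothetical:

```python
def speech_to_code(phrase):
    """Translate one spoken command into a line of Python (toy rule set)."""
    words = phrase.lower().split()
    if words[:2] == ["define", "function"] and len(words) > 2:
        return f"def {words[2]}():"
    if words and words[0] == "print":
        return f"print({' '.join(words[1:])!r})"
    if words[:2] == ["set", "variable"] and "to" in words:
        i = words.index("to")
        return f"{words[2]} = {' '.join(words[i + 1:])}"
    return f"# unrecognized: {phrase}"

print(speech_to_code("define function greet"))  # def greet():
print(speech_to_code("set variable x to 5"))    # x = 5
print(speech_to_code("print hello world"))      # print('hello world')
```

A rule-based mapping like this is brittle, which is why the project layered NLP on top of raw transcription.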
How we built it
We used:
Amazon Alexa
Flask
Python
Natural Language Processing
Google Cloud Speech API
Challenges we ran into
At first, we wanted to run everything through Alexa; however, we soon learned that we could not read raw text directly from an Alexa action. Thus we decided to pivot to using Alexa for specific commands and the GCP to intake and process all the commits. We also faced difficulty with time zone differences and staying connected.
Accomplishments that we're proud of
We're proud of how we handled the situation once we figured out Alexa couldn't process raw data. We were able to pivot our project nicely and create a product we're proud about. Despite never having met in person, our team as a group was able to be flexible and adapted to the situation.
What we learned
We learnt how to use speech recognition and execute the code in string form. We also learned more about linking Alexa skills and git commands with Python, as well as connecting to flask endpoints through desktop applications, especially for real-time work. Overall, three of our four teammates learned about Alexa and its compatibility, and we all tried to learn more by combining different technologies together.
What's next for Alexa, Let's Code
We eventually want to push this out for all Alexa users. In addition, we want to connect our application with popular text editors, such as VSC, Repl.it, and Atom. We want to eventually expand our project to more languages and functionalities to ease the life of developers.
Try it out
github.com | Code Assistant | Alexa enabled git commands and text editor helping the visually impaired to code | ['Veer Gadodia'] | [] | [] | 45 |
10,283 | https://devpost.com/software/emergex | Inspiration
We had won the grand prize at an APAC-level hackathon and got the chance to go to a startup conference in Hong Kong this summer. But due to the pandemic, it was canceled. Now we are not sure if we'll go even if the conference is held next year. We realized this would be the case for many travelers, both leisure and business. One of the major challenges would be helping travelers regain the confidence to travel. We decided to do something to encourage people to travel, by assuring them of safety.
Problems Tackled
We have created a web-based service that will be sold to various travel websites with add-on safety features, and to private tourist spots such as restaurants, hotels, etc. for better analysis of how social distancing norms are being implemented. We plan to solve the following problems:
From the website, the user can get the latest news of the place of travel (government restrictions, border closures, etc.) along with the number of live COVID cases in that place.
The website also uses a machine learning model (an RNN) to help predict future trends in COVID cases for that particular place. This will help travelers make more informed decisions about their travel dates.
We also incorporated a feature for touchless travel, wherein the user can fill in the immigration or customs declaration form via the website to prevent contact at the airport.
We will also list the safety features of various hotels/restaurants, making visitors confident enough to travel to those places.
Our social distancing algorithm, along with the mask detection algorithm, will be sold as a service to private tourist places and will help in analyzing how the public follows such norms.
What it does
We have created software services for hotels, airports, parks, restaurants, museums, theatres, and other enclosed private tourist spots. Our system will automatically detect whether people are following social distancing and whether they are wearing masks or not, from CCTV footage. These places can advertise that they’re using an automated system to ensure safety, and this will attract more tourists. The other facet of our solution is a website for travelers/tourists. This service can be used by any travel company as an additional service to the users. Users can pick a destination and a date of interest. We will show them the updates of that city, and give the estimated number of cases along with news of that place. This estimation is based on a predictive ML model. This will help users make an informed decision and they can postpone their trip well in advance, without losing out money on cancellation charges. This will also help air travel companies and hotels, who have to bear losses if a person cancels their stay. Lastly, an online immigration form will be provided to minimize physical touchpoints.
How we built it
We have taken a sample recording of CCTV camera footage. A machine learning model detects people and classifies the bounding boxes based on the distance between people in the video. We also have the mask detection algorithm, which was built using a CNN; it checks whether people are wearing a mask and draws a bounding box around each face, so the viewer knows the number of people violating the norms. These models were built in Python. For COVID trend prediction, we used an RNN model.
For news and daily updates of the COVID cases, the data is scraped online and displayed on the website. The website for travel users (of the hotel, market, tourist spot) was built using React, Firebase, and Node.
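The social-distancing check described above reduces to measuring distances between detected bounding boxes. A simplified sketch using pixel distances only (the real system would calibrate pixels to real-world distance; box values here are made up):

```python
import math
from itertools import combinations

def centroid(box):
    """Centre point of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def violations(boxes, min_dist_px):
    """Return index pairs of detected people standing too close together."""
    bad = []
    for (i, a), (j, b) in combinations(enumerate(boxes), 2):
        if math.dist(centroid(a), centroid(b)) < min_dist_px:
            bad.append((i, j))
    return bad

# Three detected people; the first two stand close, the third is far away.
people = [(0, 0, 40, 100), (50, 0, 90, 100), (400, 0, 440, 100)]
print(violations(people, min_dist_px=120))  # [(0, 1)]
```

The count of violating pairs per frame is what a safety rating for a venue could be built on.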
Challenges we ran into
Data Privacy was the one challenge we encountered during this Hackathon. However, the service which we will be providing will enable the private tourist places to analyze the data and count the number of people maintaining the social distancing norms without giving any private information of the person from the CCTV frames. The clients will just have to push the data in the backend for our analysis so that we can give them a safety rating. In the future, we plan to add a pipeline which will blur the faces of the people present to provide an even safer and secure service.
Accomplishments that we're proud of
We are proud of the fact that our project will help many travelers, both leisure and business in the aftermath of this pandemic. We will be providing one of the major strengths to travelers that are gaining confidence to travel again. We are really happy to be part of the change that will boost & encourage people to travel safely.
What we learned
We learned the importance of teamwork and of working together even though we were not in touch physically. We interacted online and distributed tasks to each other. We learned to ideate and come up with an innovative solution in a short span of time. We faced many challenges during the hackathon, but we were determined to go on and we persevered.
What's next for EmergeX
The next plan would be to host our entire web application on the cloud. The ML models and the backend will be deployed on the cloud. In phase 1, we would like to try out this solution locally. We will tie-up with local hotel chains and tourist spots in Mumbai and devise a basic billing plan to start earning revenues along with other travel agencies so that they can use our web app as an additional module or service. We will also release our app for tourists on the play store. After these iterations and learning from the results, we would like to partner with more places and or a company like Trivago which can in turn sell these services to its partners.
Built With
angular.js
machine-learning
node.js
python
Try it out
github.com | SafeT | Every life counts | ['Vedant Kumar', 'Siddhant Kumar', 'Pradhuman Singh', 'Parth Shingala'] | [] | ['angular.js', 'machine-learning', 'node.js', 'python'] | 46 |
10,283 | https://devpost.com/software/hideyoshi | Hideyoshi
Cherry Blossom Tree
We're beginners in the world of game development and would like to enter this Hackathon under that category.
Inspiration
We wanted to create a game that would be easy to use yet fun to play. We were inspired to make this game from the lack of difficult and entertaining shooters.
What it does
For Hack3, we created a 2D desktop arcade game designed for recreational use. The goal of the game is to defeat the two samurai bosses. We developed a storyline for the game to make it more unique compared to other shooter games. The player controls a character with a gun: a mysterious man in a purple hoodie whose only purpose in life is to defeat the samurai roaming the Japanese-style area. Amongst cherry blossom trees and mountains, he risks his life to save the world.
To shoot, the player left-clicks with his or her mouse. The player can dash by holding down shift and clicking at the same time. The “bosses”, aka the samurais, use ranged melee, with a combination of both katana slicing and long-distance firing of weapons. They follow the player, who has no choice but to shoot for dear life at the powerful samurai.
Storyline
It is the year 2052. Four years ago, the president was assassinated by a good-turned-evil legendary samurai known as Kobayakawa Hideyoshi.
You are the son of Kobayakawa Hideyoshi. You are the only person in the world who not only knows his ways around a gun, but also knows the ins and outs of samurai life.
The world has gone corrupt. And with every year that passes, the fate of humanity creeps closer.
With this knowledge in hand, you set out to do the impossible—defeat your father, the leader of the United States.
How we built it
We created this game in Godot. We drew most of our own sprites: the player, trees, petals, rocks, and mountains are all our creations. We implemented A* navigation for our enemies' AI, and we tried to make the game difficult. The game has little playtime but turns out to be a difficult challenge that makes you want to play it over and over again. The arcade style is a lot of fun in its simplicity, and we enjoyed playing the game while we made it.
Challenges we ran into
Inexperience, especially concerning game development. It was tough to learn Godot and create a functioning game using it within 24 hours. Asset creation was also difficult, as we had to make sure the sprites would animate smoothly, and the assets need to all come together in one theme. None of us are artists, so creating assets was a new, difficult, but interesting task.
Accomplishments that we're proud of
We are quite proud of creating a fun-to-play game in under 24 hours. The fact that most of the assets were our creation brings us joy, as we can truly call the game ours. Since this is the first game we made, we're pretty proud of how far we've come in such little time, and we hope to pursue game development in the future.
What we learned
We learned a lot about Godot and game development in general. We also learned about asset creation and what it takes to make a sprite look like it's moving smoothly. We learned about how different aspects of a game can tie the entire program together and make our ideas come to life, and it was truly a joy for us to familiarize ourselves with the process of creating a game.
What's next for Hideyoshi
We hope to add more levels to the game, along with more bosses. We plan to extend the storyline so that the encounter between the player and his father is more emotional and exciting. Hopefully, in the future, this will become a full-fledged game that people across the world can play and enjoy.
Built With
adobe-illustrator
gdscript
godot
Try it out
ghostwalker562.itch.io
github.com | Hideyoshi | My Father is my Enemy. | ['Philip Vu', 'Kathie Huang', 'Madhavi Vivek'] | [] | ['adobe-illustrator', 'gdscript', 'godot'] | 47 |
10,283 | https://devpost.com/software/translatepix | Inspiration
A lot of people don't know the native language of the place they are travelling to, so I built an app that can help them with that.
What it does
It takes a picture, recognizes the words in the native language from the picture, and translates them into a language the user can understand.
How I built it
I built it using Android Studio, Java, and Firebase, including its extensions.
Challenges I ran into
I wasn't able to read the images at first because they were too blurry, which was a constant issue that I had to resolve several times. In the end it started to work better, but in the future I plan to make the images clearer and easier for the image analyzer to process, so it can decipher the text.
Accomplishments that I'm proud of
I'm proud of being able to use Firebase extensions, and I'm also proud of my grit and hard work, even with a problem that I basically struggled with throughout this hackathon.
What I learned
I learned grit and hard work, and I learned how to use the Firebase API and extensions. This was a very interesting learning experience for me, and I plan on continuing in the future.
What's next for TranslatePix
UI improvements, clearer images, more support languages, maybe even a logo
Built With
android-studio
firebase
java | TranslatePix | A new way for travellers to understand the native language used around them when travelling | ['Sifat Hasan'] | [] | ['android-studio', 'firebase', 'java'] | 48 |
10,283 | https://devpost.com/software/spark-cdr0kl | Inspiration
Over the past decade, Northern California has been constantly plagued with hundreds of inevitable wildfires, many of which are sparked by malfunctions in the power grid. As utility companies like PG&E have utility lines spanning thousands of miles, routine checks on each of these lines can only happen once a year, so vegetation around utility poles can grow out of control and come into contact with power lines.
What it does
It is a simple-to-use iOS application that helps anyone with an iOS device seamlessly notify utility companies of hazardous situations such as a tree branch leaning on a utility pole or power line, which could potentially start a fire. It’s quick, accurate, and very helpful for utility companies to use.
How we built it
The application was built using the Swift programming language via the Xcode IDE. We started by developing the UI of the application to make it responsive and easy to use. Next, we programmed the logic of the buttons and saved the data in the Google Firebase Realtime Database. Using the CocoaPods framework, we were able to file a report into the database, sending the report information and a picture. We also created a website using HTML/CSS/JS to present the data extracted from the Firebase database and display it for utility companies to view. Finally, we added the map to pinpoint user locations.
Challenges we ran into
Particularly, sending the image that the user took of the incident to the database was very hard to master. Unlike text, where you can just send data as a string to be displayed as such in the database, images have completely different protocols for transmission.
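One common workaround for sending images through a text-oriented database is base64 encoding the raw bytes. A minimal sketch of that idea (whether the project used exactly this approach is an assumption; shown here in Python rather than Swift for brevity):

```python
import base64

def encode_image(raw_bytes):
    """Encode raw image bytes as a base64 string that is safe to store as text."""
    return base64.b64encode(raw_bytes).decode("ascii")

def decode_image(b64_string):
    """Recover the original image bytes on the receiving side."""
    return base64.b64decode(b64_string)

# A few fake JPEG-like bytes stand in for a real photo.
fake_jpeg = bytes([0xFF, 0xD8, 0xFF, 0xE0]) + b"...pixels..."
encoded = encode_image(fake_jpeg)
assert decode_image(encoded) == fake_jpeg
print(encoded[:12])
```

The trade-off is a roughly 33% size increase, which is why object storage (e.g. Firebase Storage) is usually preferred for large images, with only a URL kept in the database.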
Accomplishments that we're proud of
Other than this, finding bugs was the most rewarding yet frustrating part of the experience, as we learned that nothing is more fulfilling than fixing a SIGABRT fatal error. However, being stuck trying to solve that same error lends a feeling of hopelessness, and so the duality of the experience really appealed to us.
What we learned
By the end of development, we learned a lot about troubleshooting and teamwork, as even the most minor problems once seemed like impossible obstacles to overcome.
What's next for Spark
So far, our solution has the potential to save the state of CA millions of dollars because it solves a problem that would otherwise be addressed using expensive hardware systems that, as estimated by the California Public Utilities Commission, could cost well over five million dollars. PG&E has been proposing many solutions to this issue, but none have been passed because of the hefty price tags of these propositions. This is why we would like to create a second version of the application that can create a complete database of every utility pole in the state of California.
Built With
cocoapods
css
firebase
html
javascript
swift
Try it out
drive.google.com | Spark | Spark is an application created to simplify the process of reporting potentially hazardous vegetation and wires near power lines and utility poles. | ['Aditya Sharma', 'Ishan Goyal'] | [] | ['cocoapods', 'css', 'firebase', 'html', 'javascript', 'swift'] | 49 |
10,283 | https://devpost.com/software/bithacks | Red line shows the true line for Cali 's deaths if we follow our current path to follow CDC. The true-labels relatively match predictions.
Shows the num deaths that will occur on our ability to wear masks. This would influence many people by showing them the death trend in Cali
Not apart of my video, but it contains the info about my implementation of CNN. This was my first time, so I was new on the terminology
Inspiration
I was watching the news and I found that many people did not want to wear masks; they believed that masks did not "affect" the curve of the number of deaths from COVID-19. I realized that if more people had this philosophy, the pandemic would never come to an end. So I came up with the idea to show them how their undesirable decisions affect the number of casualties in the pandemic.
What it does
This architecture uses Johns Hopkins data to determine the future number of deaths depending on people's ability to follow CDC guidelines on wearing a mask. Furthermore, the code displays a graph showing the future number of deaths for each state under each scenario of following CDC guidelines. This will show people in the US (a hotspot for coronavirus cases) how important and crucial it is to wear masks and follow CDC guidelines.
How I built it
I built this architecture using linear regression models and CNN layers for accuracy. I grouped all the data from GitHub into training and test sets, organized by state, as it was first plotted per county. I created an algorithm that takes the past 7 days and predicts the next 14 days without using networks like RNNs. I then added weights to smooth the linear regression curve, and added CNN layers to smooth the curve further and reduce the effect of outliers. I stacked multiple CNN layers, such as dense, convolutional, and max-pooling layers, to keep my mean squared error low.
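The core "fit the past 7 days, extrapolate 14 days ahead" step can be sketched with a plain least-squares line. This is illustrative only; the actual model adds weights and CNN smoothing on top, and the death counts below are made up:

```python
def fit_line(ys):
    """Ordinary least-squares slope/intercept for y over x = 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def predict_next(ys, horizon=14, window=7):
    """Extrapolate the trend of the last `window` days `horizon` days ahead."""
    recent = ys[-window:]
    slope, intercept = fit_line(recent)
    return [slope * (len(recent) - 1 + d) + intercept for d in range(1, horizon + 1)]

deaths = [100, 110, 118, 130, 139, 151, 160]  # hypothetical daily totals
print([round(v) for v in predict_next(deaths, horizon=3)])  # → [170, 180, 190]
```

Pure extrapolation like this overshoots once the curve flattens, which is one motivation for the extra smoothing layers the project describes.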
Challenges I ran into
As a beginner at machine learning, I found it a lot harder to adapt to the syntax of the linear regression model and the CNN. Consequently, I ran into multiple challenges while making this project, for instance while creating and applying my algorithm. My code would often get very complicated, and I would have to go back and traverse through it again, which would take a long time. Moreover, I had to constantly create multiple matrices to balance shapes and feed data into my architecture, which got more complicated as time went on and set me back a lot.
Accomplishments that I'm proud of
I am very proud of my architecture, as I was able to properly apply my knowledge of deep learning and sklearn models in my code. Even though this architecture took a lot of time and energy, I was very happy with my results, as I was able to implement my CNN layers properly and make my root mean square error relatively low. Furthermore, I found ways to add more weights to match my predicted labels to the true labels.
What I learned
I learned a lot while working through this project. For example, I learned to always keep in mind the shape of your matrices as you apply them and interchange them with other matrices. Simple matrices containing the cases and deaths for one state would start as (2 x 14) and later end up as (112, 14), which often got frustrating, as I would have to go through the code again. By knowing the shapes of my matrices, I was able to track my mistakes and realize when I had implemented my algorithm wrong.
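The shape bookkeeping described above can be made explicit with an assertion or two. The (2 x 14) and (112, 14) shapes come from the write-up; the 56-region count below is an assumption chosen to reach 112 rows:

```python
import numpy as np

per_state = np.zeros((2, 14))          # cases and deaths over a 14-day window
stacked = np.vstack([per_state] * 56)  # stacking 56 regions gives (112, 14)
```

Checking `.shape` after each stacking or reshaping step is the habit the lesson points at: a wrong shape surfaces immediately instead of deep inside the model.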
What's next for "COVID-19: Predicts the future deaths using past data"
I am also thinking of later adding RNN layers to my code to further increase the accuracy. Furthermore, I want to apply this algorithm to bring awareness to other social problems in the world, such as the starvation crisis in Yemen that few people are doing much about. I think this algorithm can bring awareness to many people around the world and encourage them to stand up for the parts and regions of the world that are being severely impacted.
Built With
python
Try it out
github.com | COVID-19: Predicts the future deaths using past data | How do you convince people to wear masks and follow CDC guidelines? Simple. You just scare them into doing it! :) | ['Avi Choudhary'] | [] | ['python'] | 50 |
10,283 | https://devpost.com/software/covid-19-cases-predictor | Start Screen: Selecting Data of Interest
Initial loading screen for state and county data sets
County - specific data set visuals
Polynomial Regression model button activated (Zoomed in)
COVID-19 Cases Predictor
Creator: Piero Orderique
This COVID-19 Cases Predictor program takes a machine learning approach to battling COVID-19 cases in America. Written in Python using tkinter, matplotlib, and scikit-learn, the easy-to-use UI permits users to display data of their choosing and run a polynomial regression model on it to see how cases will trend in the future.
My Purpose
Mostly inspired by the recent spikes in the COVID-19 pandemic in the United States, I decided to use machine learning to tackle the problem starting at a fundamental community level. Most visuals out on the internet showcase either worldwide, national, or state data. While this data is beneficial to all, I believe that showing visuals at a county level will help bring a more personal awareness of how the pandemic has affected the community around us. Not only does this program make these visuals available to users, but by allowing them to run a regression model, users can further see the potential implications on their communities if the current state of the pandemic continues to grow.
Features
National, State, and County Selection Data
Regression Button that trains the model
Navigation Bar to zoom into more or less recent dates
Evaluation Summary of Model when tested with "outside" data
How it was Built
Python was used for the entire program along with tkinter, matplotlib, and scikit-learn libraries.
What I learned
How to embed graphs into tkinter windows, how to run a polynomial regression model on COVID-data, how to create training and testing data sets
Challenges
A new Navigation Bar object was created every time a new graph was selected, eventually covering the entire screen. Another challenge was generalizing the regression model so that one function could handle all possible graph-selection events.
Future Goals
The current goal is to further develop the regression model to where the program can distinguish between logistic, polynomial, and exponential trendlines and make a decision to which one fits the data best. Furthermore, I would love to take this project into augmented reality to showcase data in 3 dimensions.
Built With
matplotlib
python
scikit-learn
tkinter
Try it out
github.com | COVID-19 Cases Predictor | A machine learning approach to bringing awareness of rising COVID-19 cases at an individualized community level | ['Piero F Orderique'] | [] | ['matplotlib', 'python', 'scikit-learn', 'tkinter'] | 51 |
10,283 | https://devpost.com/software/covid19-kit | dashboard
please check video for the features of the app
booking an appointment with the proctor/ faculty
project and document submission
creating channels for online teaching and mentorship
Please check the mentioned GitHub repo for the app. This app is for caretakers of patients with cerebral palsy and people in wheelchairs.
body temperature, heart rate, alarm functionality with data stored in the cloud database.
dues and assignments
messaging services
online proctored tests
Inspiration
During online classes, many students verbally harass the teachers and students of the class. This spoils the whole environment of the class, so we decided to block these students using speech recognition technology.
We have all seen that delivering things without contact has become a major problem, so we designed a hand-gesture-controlled delivery messenger that brings things to COVID-19-infected people in care centers.
What it does
The first part is a remote education Android app which resolves all the problems stated above. It contains all the features a student would want, and we tried to include every activity we used to do in offline college times. It consists of video call functionality with a special feature for blocking students who speak abusive or bad words during a live session; the blocked student is reported to the admin of the app, all their records are sent to the admin app, and the admin can unblock the student again. The app also contains a chat room for each classroom a student is enrolled in, allowing students and teachers to communicate as they used to in offline college. Then comes the appointment feature: before contacting any teacher, we have to make an appointment to ask for their time, so our app includes this cool feature for students, reducing chaos and keeping things to a proper protocol. Teachers wanted an invigilation system for tests, so our app provides a camera proctored examination feature, under which a teacher can proctor all students through their webcams while they take tests and can broadcast their voice to the whole class to convey messages. Finally, our app has an assignment submission feature: teachers upload the assignment questions along with the due date, and students upload their solutions in the app itself.
How I built it
We used Android Studio to build the remote education app. For the backend, we used the Firebase Realtime Database. To identify abusive words, we used IBM's speech-to-text service to convert the students' speech into text, then looped over that text to check whether it contained abusive words. We took the dataset of abusive words from Kaggle and GitHub.
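The word-matching loop described above can be sketched as follows. The word list here is a placeholder (the team's real list came from Kaggle and GitHub), and the function name is illustrative; the transcript would come from IBM's speech-to-text service:

```python
# Placeholder word list; the team's real list came from Kaggle and GitHub.
BLOCKED_WORDS = {"badword1", "badword2"}

def contains_abuse(transcript):
    """Return True if any transcribed word matches the blocked-word set."""
    words = transcript.lower().split()
    return any(word.strip(".,!?") in BLOCKED_WORDS for word in words)
```

When this returns True for a student's transcript, the app would report that student to the admin, as described above.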
For our IoT bot, we used a hand gesture sensor; based on the gesture, the robocare bot will move and deliver things to patients. It can also be used as a wheelchair.
Challenges I ran into
We faced many challenges, like detecting and blocking students who speak abusive language during a live class. We wanted to make something that everyone could relate to from offline college activities, which required proper planning and structure; the assignment section in particular needed a proper structure to be executed.
Teachers all over the globe wanted a platform for cheat-proof examinations. Our challenge was to make a cam-proctored examination with cheat-proof features, such as not being able to re-enter the test once you leave it.
Accomplishments that I'm proud of
We are proud of our abusive-language detector system, which blocks users when they speak bad words. Also, the structure we made closely mirrors offline day-to-day activities. Our cam-proctored test system is awesome; it restricts the user from cheating and helps the invigilator during a test.
What I learned
We learned how to work with the Realtime Database and how to use IBM's speech-to-text service to detect abusive words. In this pandemic situation, we learned the complete use of GitHub and how to collaborate with teammates. We also learned some new IoT skills which helped us make the robocare bot.
What's next for Covid19 Kit
For future aspects, we are planning to make a complete, general messenger system for private and government offices which they can use to share files and letters, assign tasks, and do everything else people do during offline office hours.
Built With
android-studio
arduino
e-learning
education.com
firebase
ibm-watson
iot
Try it out
github.com
drive.google.com
drive.google.com | Covid19 Kit | An android app, an IoT device, and a Covid19 tracker, a complete kit for students, doctors, patients, and common people. An IoT bot to follow social distancing practices. | ['Ayush Sharma', 'Elio Jordan Lopes', 'Shaolin Kataria', 'Ritik Gupta', 'DEVANSH MEHTA'] | ['The Wolfram Award'] | ['android-studio', 'arduino', 'e-learning', 'education.com', 'firebase', 'ibm-watson', 'iot'] | 52 |
10,283 | https://devpost.com/software/hack3 | This is our logo.
Guest Manager
We are Team Yes.
Inspirations
We were inspired to create this application because of the restrictions placed on businesses and public places during COVID-19, as they can only have a certain number of people in their building at a time. Our project addresses this issue by keeping track of how many people are currently present as well as how many reservations there are.
Learning and Building
Throughout the project, we learned how to work with the current time in Java as well as styling with CSS in Gluon Scene Builder. Our project has two main classes: the Person class and the GuestController class. The Person class holds information such as their name, phone number, how long they can stay, and the time of their reservation (if they have one). The GuestController creates ObservableLists of Person objects in order to properly display them in the table with all of their information. It also controls the rest of the display for the user.
Challenges
One of the challenges we faced was figuring out whether the reservation time was AM or PM. In the end, we decided to have the user include AM or PM in the textField so the program could read the last two characters and set up the reminder accordingly. A related issue was verifying that the user entered a valid time: we realized a user could enter 13:61AM, which makes absolutely no sense, so we had to change the way we read the time from the user. Another issue we faced was with expired guests and reservations. Since both had similar implementations, we quickly got confused trying to create methods that applied to both circumstances. Ultimately, we decided to make a separate set of methods for guests and for reservations.
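The fix described above, reading the AM/PM suffix and range-checking the hour and minute, can be sketched as follows. The project itself is in Java; this is shown in Python for brevity, with an illustrative function name:

```python
import re

def valid_time(entry):
    """Accept times like '9:30AM' or '12:05pm'; reject '13:61AM'."""
    match = re.fullmatch(r"(\d{1,2}):(\d{2})(AM|PM)", entry.strip().upper())
    if not match:
        return False
    hour, minute = int(match.group(1)), int(match.group(2))
    return 1 <= hour <= 12 and 0 <= minute <= 59
```

The key point is that reading the suffix alone is not enough; the hour and minute must also be checked against 12-hour-clock bounds.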
Another challenge came when manipulating the data for expired guests. We figured out that when getting the selected table cells, we are given a read-only list, which limited what we could do with it and caused errors when we tried to manipulate it. Once we converted it to a writable list, we were able to apply the changes correctly.
Built With
css
java
Try it out
github.com | Guest Manager | Guest Manager, the all-in-one-tool for public and private gatherings. | ['Vijay Sreenivasan', 'Adrien Bekker'] | [] | ['css', 'java'] | 53 |
10,283 | https://devpost.com/software/unlock-jewtxs | Inspiration me
What it does people crazy
How I built it 98
Challenges I ran into
Accomplishments that I'm proud of professionally hacking
What I learned will see
What's next for Unlock everything
Built With
better
live-matrix
Try it out
zeewest.zendesk.com | Unlock | Home | ['Zdenek Gazi'] | [] | ['better', 'live-matrix'] | 54 |
10,283 | https://devpost.com/software/wizard | Here as the problem statement defines we have came with the solution of smart hotel check in and also adding some functionalities to our product
which will be helpful for users like the problems of queue for check in, manual billing and many more.
So we are trying to solve two problem statements at once, i.e. Take out the human interaction and Smooth the step in.
Here we are introducing:
Website - used for hotel room booking
An App - for hotel check-in
An IoT setup - used to set up a local server and provide the user security during the hotel check-in process
An AI chatbot for better preference
Augmented reality
The product is the one-stop solution to take out the possible human interaction throughout, getting a glimpse of the hotel room and smoothing out the check-in and check-out processes.
That will consist of a web and an android app for automating a manual process, 3D modeling using AR/ VR for 360' view for hotel's room real scenario, AI chatbot for easy booking and query handling, and an IoT setup for contactless entry-exit.
The outcome of implementing such a system will be minimization of the queue problem, better customer trust and engagement, and curtailing the spread of the virus. It will have implications for the tourism industry post-pandemic, contributing to the economy and helping to flatten the curve at a time when people are expected to live alongside the virus.
The solution can be divided into mainly these parts:
• Web and Android app for automating the manual process: The user's profile on the website is first created by registering their details for hotel room booking. A unique code is assigned to the user at registration, which has to be kept securely for the future authorization process.
• 3D modeling using AR/VR for a 360° real-scenario glimpse:
A gallery of pictures of various hotel rooms is presented on the website. Apart from that, when the user clicks on the image of a room of interest, a 360° view of the room can be viewed.
• AI chatbot for easy booking and query handling
We have tried to provide security and privacy during hotel check-in; the admin panel plays an important role here. The admin allots a unique ID to each room along with a unique user number. As for notification, when the user comes within range of the human sensor and successfully scans the QR code, the admin receives a notification about it.
For reference, we have added a video of our work in each folder.
Built With
c++
css
javascript
php
python
tsql
Try it out
github.com | wizard | Our idea is generally based on Hotels, offices as we know that hotels and airlines are going through a huge loss so we came up with ideas to solve this problem and using etherum also. | [] | [] | ['c++', 'css', 'javascript', 'php', 'python', 'tsql'] | 55 |
10,283 | https://devpost.com/software/https-codepen-io-isabellaacosta03-pen-lygmrwk | Inspiration
This is a final project of mine for a coding program I signed up for
What it does
It lets you know everything you need to know about purchasing one of these great harnesses for your dog!
How I built it
I followed the guidelines but added a little twist of my own and made it as user-friendly as possible.
Challenges I ran into
Making navigation links
Accomplishments that I'm proud of
The final product... It was my first time coding something like this!
What I learned
As long as you keep practicing and staying true to yourself, everything will work out!
What's next for
https://codepen.io/isabellaacosta03/pen/LYGmRWK
I will keep adding to it and tweaking it and making it as close to perfect as possible!
Built With
css
html
Try it out
codepen.io | https://codepen.io/isabellaacosta03/pen/LYGmRWK | This is a Product Landing page for Personalized Dog Harnesses from DoggyKingdom. This page highlights all the main things that you need to know about shopping for the best harness for your pooch! | ['Isabella Acosta'] | [] | ['css', 'html'] | 56 |
10,288 | https://devpost.com/software/daisy-krjcu3 | --Inspiration
During the COVID-19 lockdown in South Africa, gender-based violence has increased significantly.
One observation regarding abusers is that they are very suspicious of anything the victim might do on their phone. Therefore it is important to be as discreet as possible when trying to solve this problem.
We wanted a solution for these victims to get the necessary help.
--What it does
Daisy is the website we have created to assist victims of gender-based violence. Daisy uses a façade.
We use a period tracker as our façade. Most men stay as far away as possible from any period-related topic, and thus we chose this.
On first look, the website is a basic period tracker with information about menstrual cycles and whether yours is healthy. But we have a button on the page that takes you to a whole new level of the website: our help site.
At the help site, you can choose to press an SOS button in case you need police intervention immediately. There is also a chat channel, if you just need to talk to someone or have questions regarding abuse.
The biggest reason our solution is unique is the façade, and especially the kind of façade that most abusers wouldn't think much of even if they caught the victim using our site.
The way we respond to SOS signals is also unusual: we do it silently, without attracting any attention to the victim. We do not call, SMS, or make noise, because we don't want the abuser to get suspicious and possibly harm the victim even more before help arrives.
In practice all the messages and SOS signals will go to an organization that has a 24-hour hotline. The organization will be responsible to contact the correct authorities and in the case of the chat button, communicate with the victim. However for the purposes of this hackathon, all the messages and SOS signals will be sent to us.
Lastly our name and theme are unique. Our theme is chamomile – a beautiful, delicate flower that helps reduce stress and anxiety just like our website, but our name is Daisy. This is a metaphor for the app looking like one thing but being something completely different.
-- How we built it
Technologies used
FRONT-END
Angular 9
Angular Material and Angular Flex
Bootstrap
Photoshop
BACK-END
Node.js
Express
DATABASE
PostgreSQL
Sequelize
Heroku server
BROWSER SUPPORT
Brave
Chrome
Safari
-- Challenges we ran into
We didn't run into any challenges we couldn't solve, but it was very challenging to create this whole working website in just a weekend.
-- Accomplishments that I'm proud of
We finished our project and we think it looks really nice. The main functionality works perfectly.
-- What I learned
It was nice to work with a girls-only group. We worked very well together and definitely built on our skills.
-- What's next for Daisy
We would like to refine the website, especially the period tracker part, and hopefully provide it as a real-life solution for women in need.
Built With
angular.js
bootstrap
express.js
heroku
node.js
photoshop
postgresql
Try it out
github.com | Daisy | A gender based violence assistance app that is a disguised as a period tracker. | ['Charlo Jacobs'] | [] | ['angular.js', 'bootstrap', 'express.js', 'heroku', 'node.js', 'photoshop', 'postgresql'] | 0 |
10,289 | https://devpost.com/software/edu-ar | workflow
solution defination
Problem statement
solution
Inspiration
This idea struck me when a junior friend of mine, who had just landed at engineering college, called me complaining about the education system. All I could do was agree, because it has been years since our education system updated its way of teaching, and the pandemic has made it worse with online classes, where students can't understand subjects that involve heavy machinery. That's where I got the idea of projecting 3D models for better understanding. I researched what it would require; it took me two months to learn augmented reality, and here I am with my first idea.
What it does
Our product is based on recognition-based AR, where a target image is placed in front of the AR camera and a 3D model of it is projected. All these features are integrated into an Android application where people can scan images and view 3D models anywhere, anytime, with a 360-degree view and audio effects.
How I built it
I used Unity 2017 (student version), Vuforia Cloud, the Android SDK, Unity assets, and ARKit, in a Windows 10 environment.
Challenges I ran into
The main challenges in building this were errors in the scripts and in model making. It took weeks to add animations; I used the Unity forum to find answers and fixed the issues.
Accomplishments that I'm proud of
The proud moment was seeing everything working at the end, despite retrying many times. It was deeply satisfying, but there is still a long way to go.
What I learned
Patience is the biggest lesson I learned here. There was a point where I was about to give up on this project, but having patience helped me get through that stage.
What's next for EDU-AR
We are currently working on the next EDU-AR update, covering more subjects for engineering students and adding more models with better animation. We are also looking into mixed reality to enhance the experience.
Built With
andriodstudio
android
arkit
c#
unity
vuforia
windows-10
Try it out
ltiwdh2uolaziajfbguzaq-on.drv.tw
github.com | EDU-AR | Our idea is to use Augmented reality technology in education field during this pandemic situation where students are bound to online classes and are lacking practical skills. | ['ks keshava rao', 'Anirudh Soni', 'Sohail Ahmed'] | ['1st Place'] | ['andriodstudio', 'android', 'arkit', 'c#', 'unity', 'vuforia', 'windows-10'] | 0 |
10,289 | https://devpost.com/software/interact-y6r0ft | start up page
Inspiration
As a current high school student, and after contacting various other high school and middle school students, the main issue I came across was that students were unable to play games with their friends: the majority of fun online games were very costly and too competitive, whereas they preferred games that were more interactive with each other over a call.
What it does
On Interact, we solve this problem directly by allowing users to create video calls and play games, watch movies, or even study, all on one platform.
How I built it
I built the initial site using HTML, CSS, and JavaScript, and built the two demo games using Java. The video chat API I was using was vidyo.io.
Challenges I ran into
I couldn't get the Java programs to run on an HTML website and stay synced during a call across multiple devices.
There were also problems with the video chat connecting and being organized on the HTML site. We tried various APIs, but eventually ran out of time trying to implement one.
Accomplishments that I'm proud of
Developing the two games took a lot of time and effort and they are the most interesting part of the project; even though they weren't fully implemented into the website, they still functioned amazingly.
What's next for Interact
In the future, we hope to fix the problems we faced, add more games, and add a feature to watch movies in sync.
Built With
css3
html5
java
javascript | Interact | Interact: A virtual hangout, where you can video chat with your friends so you can play games, watch movies, and even hangout or study. | ['Ishan Kapoor'] | ['Team Minekee Membership Prize'] | ['css3', 'html5', 'java', 'javascript'] | 1 |