hackathon_id | project_link | full_desc | title | brief_desc | team_members | prize | tags | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|
10,412 | https://devpost.com/software/bachelorbattles | Inspiration
As serious Bachelor Nation fans, we were disheartened by the results of the most recent season of The Bachelor and saddened by the postponement of The Bachelorette. We wanted to bring some Bachelor love to this hackathon.
What it does
We built a game called The Bachelor Battles, a 2-player Bachelor-themed card game. On the first page of our website, Chris Harrison, the host of the Bachelor series, greets you and invites you to the most dramatic season of the Bachelor ever. Let’s play the game by pressing start.
The next page is the character selection page. You type in your name, and choose the four bachelorettes that you think are the strongest.
Right now, users can only play against the computer. So when we start the game, we can see the bachelorette cards that our opponent, the computer, has chosen. Each card has a number of HP, and the last remaining player wins the Bachelor’s heart. To play the game, you pick one of your four cards to attack one of your opponent’s cards, which is your target. The game ends when a player runs out of cards.
How we built it
We built our website using HTML and CSS on the frontend, and Python and Flask on the backend.
Challenges we ran into
We had challenges re-rendering our different HTML pages with new data and challenges with circular imports. We also had difficulty creating our backend models in the most efficient way.
Accomplishments that we're proud of
Neither of us had built a website using Flask before. We're proud to have built a fully-functional, multi-page, entertaining game that fellow Bachelor fans can enjoy.
What we learned
We learned how to build a Flask website from scratch. We learned to be aware of circular imports.
What's next for BachelorBattles
With more time, we would implement the ability to play with multiple people simultaneously, a more polished UI, more intricate features like different kinds of moves for each card, and introduce special power-ups for certain cards.
Built With
css
flask
html
python
Try it out
github.com | BachelorBattles | Pokemon cards meets Bachelor Nation. Send home the bachelorettes who are there for the wrong reasons. | ['Jennifer Xu', 'Angela Luo'] | [] | ['css', 'flask', 'html', 'python'] | 40 |
10,412 | https://devpost.com/software/smartchef-w4s1ef | Homepage
About Page
Inspiration Gallery 1
Inspiration Gallery 2
Inspiration
Being in quarantine
Finding recipes that lack written instructions, so you either end up forgetting the steps or have to watch the video while you attempt the recipe, which can be both time-consuming and a hassle
What it does
Streams of cooking at home and food playlists with a note-taking option and an inspiration gallery to get you hungry and motivated!
How we built it
Youtube API
CSS, Javascript, HTML
Challenges we ran into
Spacing/Alignment issues
The web app is not registered with Google, so it requires signing in so that Google can verify it is a trusted source. It can also only function on a server.
Filtering through parameters
Accomplishments that we're proud of
Getting the YouTube API to work!
Working in different time zones and remotely
What we learned
How to use the YouTube API to get a list of videos from a channel and their respective IDs to use for showing the videos on the site.
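To illustrate the idea, here is a minimal sketch of pulling video IDs out of a YouTube Data API playlist-items-style response. The sample object below is a hypothetical, trimmed-down version of the real response shape, not actual API output:

```javascript
// Hypothetical, trimmed-down shape of a YouTube Data API
// playlistItems.list response for a channel's uploads playlist.
const sampleResponse = {
  items: [
    { snippet: { title: "Easy Pasta", resourceId: { videoId: "abc123" } } },
    { snippet: { title: "Quick Stir Fry", resourceId: { videoId: "def456" } } },
  ],
};

// Pull out { title, videoId } pairs for embedding videos on the site.
function extractVideos(response) {
  return response.items.map((item) => ({
    title: item.snippet.title,
    videoId: item.snippet.resourceId.videoId,
  }));
}

const videos = extractVideos(sampleResponse);
console.log(videos.map((v) => v.videoId)); // ["abc123", "def456"]
```

Each `videoId` can then be dropped into an embedded player URL on the page.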
What's next for SmartChef
Putting more videos on the site from different YouTube channels
Add a search function so users can find videos
Built With
css
html
javascript
Try it out
glitch.com | SmartChef | Struggling to keep up with a cooking tutorial? Bored during quarantine? SmartChef has got you! Discover foodie videos from YouTube while taking notes to remember cool new recipes/techniques you learn! | ['Siri Tanguturi', 'Vyshnavi Rajeevan', 'neha konjeti'] | [] | ['css', 'html', 'javascript'] | 41 |
10,412 | https://devpost.com/software/personify-of5iwx | Inspiration
With the current world environment, increased use of streaming means that trends are changing faster than ever. We wanted to give people the ability to understand more about their music listening habits. So, we created a fun way to visualize music taste and analyze users' Spotify patterns.
What it does
It retrieves user data through the Spotify Web API and parses the data for the top ten songs recently listened to and features of those songs such as danceability, tempo, and loudness. These are displayed on a webpage for the user to look over.
How I built it
The Spotify Web API was used to retrieve data via a Node.js server. The JSON that was returned was parsed and forwarded to a Wix endpoint to be entered into a database. This database allows the Wix webpage to be populated with the user's data. For the Wix webpage, we used the existing UI elements available to us to build out the website.
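The "dominant traits" step can be sketched like this: given per-song feature objects (shaped loosely like Spotify's audio-features responses; the values below are made up), average each trait across the user's top songs. This is an illustration of the idea, not the project's actual code:

```javascript
// Made-up audio features for a user's top songs, loosely shaped like
// Spotify audio-features objects.
const topSongFeatures = [
  { danceability: 0.8, energy: 0.6, loudness: -5.0 },
  { danceability: 0.6, energy: 0.8, loudness: -7.0 },
];

// Average each trait across the songs to find the dominant ones.
function averageFeatures(songs) {
  const sums = {};
  for (const song of songs) {
    for (const [trait, value] of Object.entries(song)) {
      sums[trait] = (sums[trait] || 0) + value;
    }
  }
  const averages = {};
  for (const [trait, total] of Object.entries(sums)) {
    averages[trait] = total / songs.length;
  }
  return averages;
}

const traits = averageFeatures(topSongFeatures);
console.log(traits); // danceability and energy average ≈ 0.7, loudness ≈ -6
```

The averaged object is what would be stored in the database and rendered by the Wix page.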
Challenges I ran into
Connecting to the Wix endpoint posed a challenge for moving our JSON data from Spotify for use in the front end. Team members are also not proficient in JavaScript/web development, which limited some of what we could do for the front-end.
Accomplishments that I'm proud of
The design of the web page overall. Being able to use an API we never used before.
What I learned
Spotify API has access to many interesting utilities and music data for use.
What's next for Personify
Improved use of Node. Analyzing additional music features and user patterns.
Built With
css
express.js
html5
javascript
node.js
spotify
wix
Try it out
jmk212.wixsite.com
github.com | Personify | Display features of a users spotify habits in a new way similar to a personality test. Show the user the dominant traits of the songs tehy listen to the most and allow them to compare with others. | ['John Light', 'Scott Breece', 'sunjin ☀️'] | [] | ['css', 'express.js', 'html5', 'javascript', 'node.js', 'spotify', 'wix'] | 42 |
10,412 | https://devpost.com/software/quick-sketch | Inspiration
I created Quick Sketch because I noticed people were struggling to find ways to be entertained at home and connect with friends during these times. I wanted to solve this issue with a fun and competitive game that encourages friends and family to remain connected in a unique way.
What it does
Quick Sketch is a real-time Pictionary-style online drawing game in which players start or join a game consisting of 5 rounds. In each round, the players take turns drawing a randomly selected word and guessing that word within 40 seconds. If the guessing player guesses correctly, the drawer gets 1 point and the guesser gets 3 points. If the guesser is unable to guess the word, both players get 0 points.
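The scoring rule above can be sketched as a small function; the 5-round tally below is an illustration under the assumption that the two players alternate drawing each round, not the game's actual code:

```javascript
// Round scoring as described: correct guess gives the drawer 1 point and
// the guesser 3; a miss gives both players 0.
function scoreRound(guessedCorrectly) {
  return guessedCorrectly
    ? { drawer: 1, guesser: 3 }
    : { drawer: 0, guesser: 0 };
}

// Tally a 5-round game where p1 draws on even rounds and p2 on odd rounds.
function playGame(results) {
  const totals = { p1: 0, p2: 0 };
  results.forEach((correct, round) => {
    const { drawer, guesser } = scoreRound(correct);
    if (round % 2 === 0) {
      totals.p1 += drawer;
      totals.p2 += guesser;
    } else {
      totals.p1 += guesser;
      totals.p2 += drawer;
    }
  });
  return totals;
}

console.log(playGame([true, true, false, true, false])); // { p1: 7, p2: 5 }
```

Note the asymmetry: guessing is worth more than drawing, which rewards the harder task.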
How I built it
Quick Sketch was created using Create React App, NodeJS, and Socket.io (real-time bidirectional event-based communication).
Challenges I ran into
I was constantly running into bugs with using Socket.io in conjunction with Express, game rendering, and the overall aesthetic of the website. I'm more experienced creating games with Android Studio, so diving into the realm of browser games was challenging albeit enjoyable.
Accomplishments that I'm proud of
I’m proud that I was able to learn Socket.io despite the time crunch. I'm also proud that I had fun while designing the website.
What I learned
Increased comfort using Create React App
Experience with Socket.io
Experience with NodeJS
What's next for Quick Sketch
I'd like to add more features, including allowing players to choose from different colors when drawing, the ability to add multiple people to a game, and even a leaderboard. I would also like the players to be able to view the sketches at the end of the game and save them if they like. Finally, I'd like to add levels of difficulty to the words that the players would have to draw and guess (for example, words like cat or dog would be Easy whereas something like stain or mirror would be Hard; players can choose which level of difficulty they want before creating a game).
Built With
create-react-app
node.js
react-sketchpad
socket.io | Quick Sketch | A real-time Pictionary-style online drawing game | ['Riya Danait'] | [] | ['create-react-app', 'node.js', 'react-sketchpad', 'socket.io'] | 43 |
10,412 | https://devpost.com/software/nomnom-eajqyw | Inspiration
We were inspired to build NomNom for one main reason.
When we are bored, it is always difficult for us to decide what to do. And, as we are both very passionate about food, what better way than to develop a platform that finds you recipes based on your mood!
What it does
The purpose of NomNom is to generate food recipes based on one's mood. When you visit the site, you will see six different moods available: happy, sad, adventurous, angry, bored, and anxious. When you click any of these buttons, a random list of recipe suggestions is generated based on your mood. You can view a recipe by clicking the "view recipe" button. We have one additional feature on this web app. In the top-right corner of the webpage, there is a "Find Sweets Near You" button. When you click it, it takes you to a page with a map. You will be asked whether your location can be accessed, and if you click allow, around 20 nearby dessert-related restaurants get a marker. When you click a marker, the place's name, address, and website pop up in the blue/purple box on the left of the webpage. You can also see the place's ratings.
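The mood-to-recipe idea can be sketched as a mapping plus a random pick. The moods come from the description above, but the recipe names and the two-suggestion count below are made up for illustration:

```javascript
// Hypothetical mood-to-candidate-recipes mapping (recipe names invented).
const recipesByMood = {
  happy: ["Rainbow Pancakes", "Fruit Tart", "Lemonade"],
  sad: ["Mac and Cheese", "Chicken Soup", "Brownies"],
  bored: ["Homemade Pizza", "Sushi Rolls", "Pretzels"],
};

// Return a random subset of recipes for the chosen mood.
function suggestRecipes(mood, count = 2) {
  const candidates = recipesByMood[mood] || [];
  // Shuffle a copy, then take the first `count` entries.
  const shuffled = [...candidates].sort(() => Math.random() - 0.5);
  return shuffled.slice(0, count);
}

console.log(suggestRecipes("sad")); // two random entries from the "sad" list
```

In the real app the candidate lists would come from the Tasty API rather than a hardcoded object.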
How we built it
NomNom was built using HTML, CSS, and React JS. We used three APIs, the Google Maps API, Places API, and Tasty API. The Google Maps and Places API were used via Google Cloud.
We split the work, with Aditi working on the Maps portion and Archi working on the React JS and UI.
HTML was used to implement the Google Maps and Places APIs.
Challenges I ran into
We came across a handful of obstacles when building NomNom. One main obstacle we faced was generating the recipe suggestions. One of the main aspects of the application, recipe generation, was difficult to start because of errors with integrating the Tasty API. However, after extensive googling and playing around with the JS fetch API, we figured out how to get the data properly. With the Google Maps API, we faced some issues with the panel and marker displays.
The markers were not appearing as they should and the panel was not showing the full website URL. To fix the panel problem, we had to make a few changes to the CSS. For the Maps markers problem, we had to fix and change some variables in the HTML code.
Accomplishments that I'm proud of
Upon the completion of this project, there are several accomplishments we are proud of. First of all, we were able to successfully use APIs, which neither of us was too experienced in doing. Secondly, we learned how to geolocate a user with the Google Maps API in HTML, using the W3C Geolocation standard's navigator property. Moreover, we were able to further develop our skills in React JS.
We were introduced to and able to work with many different topics, which we are both very proud of.
What I learned
We learned how to use the Google Maps, Places, and Rapid APIs. Both of us were able to further enhance our knowledge of APIs and Google Cloud. Throughout the process of developing NomNom, we came across various errors. As a team, we were able to troubleshoot and fix the issues together. Moreover, we both were able to learn more about working in React JS. As neither of us is very experienced with implementing APIs in React JS, we walked through many tutorials and found our way through.
One key topic we learned in this hackathon, in terms of the Google Maps and Places API, was how to geolocate users. When you want to access a user’s location, you must identify their location, which we learned and did through HTML.
Overall, participating in “New Hacks, who this?” really helped us both in enhancing our coding skills and knowledge to build a cool application!
What's next for nomnom
We have several ideas in mind to take NomNom a step forward. We want to change the way nearby locations show up on the “R U Hungry” page. Instead of having them appear when you click a marker, we want all the locations to show up in a list and allow users to move forward from there. Also, we would want to custom style the Google Map to make it more appealing and website-themed. For the recipe suggestion aspect, we will add options to add dietary restrictions and allergies.
Built With
css
html
javascript
react
Try it out
github.com | nomnom | recipes based on your mood | ['Aditi Parekh', 'Archi Parekh'] | [] | ['css', 'html', 'javascript', 'react'] | 44 |
10,412 | https://devpost.com/software/find-a-safe-space-3zv8ry | GIF
User-interaction-with-chat-both
What It Does
In three different modes (scheduler, chat-bot information, and map), users can stay informed about which areas have seen COVID increases and glance at how many people are in an area, to avoid over-gathering and rapid spread. In our scheduler, users can easily book events with friends and nearby neighbors that will show up on our map. Other users can view the virtual event, publicly or when it is sent directly to friends, and notify the host that they can attend.
Our chat bot streamlines information from our COVID data set (built free on Google Cloud) to assist users in finding areas with low occupancy. This allows users to reconnect with friends in person while taking precautions. We want to create a smooth transition to reopening, and this app is aimed at just that!
Inspiration
During the reopening period within the U.S., cases began to rise and new hotspot states appeared, leaving many of us to wonder if we will ever return to normalcy and see each other physically. This app was designed to create a safe and up-to-date information bank that users can access to make better-informed decisions about where they would like to visit or hang out with each other at safe, uncrowded spaces.
How We Built it
Richard (Website) - The website was mostly coded with HTML and CSS.
Macarthur (Scheduler) - Created a SQL backed schedule and notification system
Mo (AI Chatbot) - Created with the help of Google cloud's dialogue flow and JSON tools
Challenges We Faced
Keeping shared information in a secure database that made all our information accessible was the biggest challenge. We also experimented a lot with keeping everything on Google Cloud, and that took quite the effort.
What We Learned
We experimented with a lot of different types of technologies and software. On the front-end side, we experimented with some JavaScript libraries and SQL databases to connect with them. Using Google Cloud Platform, we also had to learn some JSON and terminal commands for working with remote servers. The scheduler required a lot of storage and authentication.
A year from this date you can probably see it at crimsontrife.com/nhwt (once I set that up).
Built With
canvas
css3
debian
dialogueflow
flask
google
google-cloud-messaging
google-cloud-sql
html5
javascript
nginx
postgresql
python
sql
Try it out
github.com
findasafearea.online | Finding a Safe Area (FSA) | Helping create connections and build bonds through safe gathering and well informed decisions | ['Macarthur Inbody', 'Richard Ho', 'Mohamed Omane'] | [] | ['canvas', 'css3', 'debian', 'dialogueflow', 'flask', 'google', 'google-cloud-messaging', 'google-cloud-sql', 'html5', 'javascript', 'nginx', 'postgresql', 'python', 'sql'] | 45 |
10,412 | https://devpost.com/software/code-blue-4ty3rs | Home
Message
Change Background
Learn More
Inspiration
As frightening headlines flash across our screens daily and crisis seems to unfold, it's often easy to forget what happens to people after these tragic events. Whether they are veterans of war or ordinary people, life can seem to move on while the anxiety does not. The long-term consequences of suppressing that anxiety lead to worse conditions and symptoms, impacting health, life expectancy, and relationships. This app was made for one of the deadliest side effects: panic attacks.
What it does
Code Blue is a Mental Health Platform, which includes an app and a calling system. By building a simple app, we aim to make it accessible for anyone in need. The main page of the app gives professional advice to use during a panic attack, bringing a steady hand to people who are experiencing one in the moment. The calling page of the app gives individuals the option to call their loved ones during that time.
How I built it
I built this app using Swift and Xcode. Using this language and these tools, I programmed the app to meet the needs that professional medical companies address during panic attacks. To add the call function to the app, I used an API that gives you an easy way to call people.
Challenges I ran into
Problems with the timing function in the app took up a huge chunk of my time but after doing some research on it I was able to find and solve the problem.
Accomplishments that I'm proud of
I am proud that I was able to make a full app that helps save the world in such a short amount of time. Being able to help contribute to the mental health community with this app was a huge accomplishment in itself.
What I learned
As I learned a lot more programming in Swift, I also started down the path of much more advanced coding topics. I increased my knowledge about hackathons, and now I have become much more confident.
Built With
adobexd
api
panic-attack
swift
uikit
xcode
Try it out
github.com | Code Blue | Stay safe during a panic attack using Swift | ['Navadeep Budda'] | [] | ['adobexd', 'api', 'panic-attack', 'swift', 'uikit', 'xcode'] | 46 |
10,412 | https://devpost.com/software/syncitup | Inspiration
It's hard to listen to Spotify together remotely. Sharing the song you're listening to, grabbing the timestamp, then repeating this all over again when the next song comes around is not convenient. SyncItUp is here to solve this problem.
What it does
SyncItUp lets you connect with your friends by entering their Spotify usernames. If a user has chosen to share their music right now, their friends can listen in.
How I built it
SyncItUp is a solo project and was built with the MERN stack: data is stored with MongoDB, the frontend was built with React, middleware is provided by Express.js, and the server is made with Node.js.
MongoDB is provided through MongoDB Atlas and deployed to Google Cloud Platform.
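The core calculation an app like this needs is working out where a listener should seek to, given the position the sharing user last reported. The sketch below illustrates that idea under assumed names; it is not necessarily how SyncItUp implements it:

```javascript
// Given the track position the host reported (ms into the track) and the
// wall-clock time of that report, compute where a listener joining now
// should seek to. The host has kept playing since they reported.
function syncedPositionMs(hostPositionMs, hostReportedAtMs, nowMs, trackDurationMs) {
  const elapsed = nowMs - hostReportedAtMs;
  // Clamp to the track length in case the song has since ended.
  return Math.min(hostPositionMs + elapsed, trackDurationMs);
}

// Host was 30s into a 3-minute song and reported that 5s ago:
console.log(syncedPositionMs(30000, 100000, 105000, 180000)); // 35000
```

Storing the reported position with a timestamp in MongoDB lets any friend compute this on their own clock.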
Challenges I ran into
Currently, the backend server is not fully linked up to the React frontend. Many parts are integrated, such as retrieving a list of friends, syncing a user up to a song, and the API functions that provide multiple endpoints. As I built this alone, I ran out of time to fully connect the two.
I am also a solo hacker in this hackathon; originally I planned to work in a team of two, with me working on the backend and another hacker on the frontend, but this fell through. It was quite challenging to attempt all aspects of full-stack JavaScript development alone.
Because of this, I may also be quite tired. :D
Accomplishments that I'm proud of
This was my first time making a project with React and I think I developed a good understanding of this framework. It was good to properly link some of my API endpoints to the frontend seamlessly on the same port. I also learnt how to handle a variety of lower-level details such as user sessions.
I believe the backend code that provides the API and makes requests to Spotify is something to be proud of.
What I learned
This was my first time making a project with React (outside a tutorial), so it was a challenging experience of understanding a new framework. I had not worked with the Spotify API before. I had not managed user sessions and database updates based on these without using packages such as passport until this weekend (this was necessary), increasing the complexity of the project.
What's next for SyncItUp
I first plan on fully connecting the React frontend and the backend server to ensure the website is fully responsive. I then plan to redesign the website: currently it has a... functional... design.
Built With
css
html
javascript
node.js
react
Try it out
github.com | SyncItUp | Sync up to your friends' Spotify! | ['Tom Nudd'] | [] | ['css', 'html', 'javascript', 'node.js', 'react'] | 47 |
10,412 | https://devpost.com/software/newhack-x4n0i3 | Home Page
Precautions
Quiz
BMI calculator
Corona Tracker
Remembering Game
Description
To update people with the latest coronavirus stats across different countries and to encourage them to check on their mental and physical fitness, we have made an app named 'LinkUs'.
Some salient features of our app:
- A game to play with your family 🖖🔥
- Coronavirus tracker
- BMI calculator (to ensure you remain fit during this pandemic)
- Quiz on corona
- What precautions you should take
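The BMI calculator feature rests on the standard formula, weight in kilograms divided by height in meters squared. The app itself is built with Flutter/Dart; this JavaScript version is just an illustrative sketch of the formula and the usual WHO-style categories:

```javascript
// BMI = weight (kg) / height (m) squared.
function bmi(weightKg, heightM) {
  return weightKg / (heightM * heightM);
}

// Standard adult BMI categories.
function bmiCategory(value) {
  if (value < 18.5) return "underweight";
  if (value < 25) return "normal";
  if (value < 30) return "overweight";
  return "obese";
}

const value = bmi(70, 1.75);
console.log(value.toFixed(1), bmiCategory(value)); // 22.9 normal
```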
Inspiration
We just wanted to do something good for society.
What it does
We have built something that helps people stay connected during these times.
How we built it
We have built it with love using flutter and dart.
Challenges we ran into
*Deciding the UI
*Making the UI interactive and simple
*How to use APIs
*Linking different pages into one app
Accomplishments that we're proud of
We finally made it within the required time.
What we learned
Better use of flutter.
Built With
dart
kotlin
objective-c
swift | NewHack | Fuel your lockdown with LinkUs | ['Simran Srivastava', 'Nikhil Sukhani', 'Sneha Gupta'] | [] | ['dart', 'kotlin', 'objective-c', 'swift'] | 48 |
10,412 | https://devpost.com/software/alexa-i-m-bored | Inspiration
My inspiration for this project was needing something fun to do while bored at home. Especially in these times, being stuck at home is more relatable than ever, and taking up new, socially-distant hobbies is a great, productive thing to do.
What it does
When my Alexa skill is invoked (say "Alexa, I'm bored"), Alexa will randomly give you something to do. If you're not satisfied, just say the words again until you find something you like! This skill is available in two languages, English and Spanish.
How I built it
I built my hack using the Alexa Skills Kit in the Alexa Developer Console. I created a custom, Alexa-hosted (node.js) skill using the Fact Skill template. The template comes with one pre-made fact, so I changed it into an action and added many other actions that one can do while at home.
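At its core, a fact-template skill like this just picks a random entry from a list. Here is a minimal sketch of that selection logic; the activities listed are invented examples, not the skill's actual list, and the real skill wraps this in an Alexa request handler:

```javascript
// Example at-home activities (invented for illustration).
const ACTIVITIES = [
  "Learn a magic trick.",
  "Try a new recipe.",
  "Start a journal.",
  "Do a 10-minute workout.",
];

// Pick one activity uniformly at random, as the Fact Skill template does
// with its list of facts.
function randomActivity(activities) {
  const index = Math.floor(Math.random() * activities.length);
  return activities[index];
}

console.log(randomActivity(ACTIVITIES));
```

Supporting a second language mostly means keeping a parallel list per locale and choosing the list based on the request's locale.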
Challenges I ran into
This is the first Alexa skill I have built, so I had to learn how to build skills all in this weekend. It was stressful, yet fun, and explains why my hack is pretty simple. I hope to develop many new skills with my new Alexa device.
Accomplishments that I'm proud of
I'm proud that I finally made my very first Alexa skill, because it is something I've been wanting to learn how to do for a while. I have never really used hardware for hacks, and I'm proud that I tried it.
What I learned
I learned a lot about the introductory steps to creating Alexa skills. I can't wait to make more and hopefully release them for live demo in the Skills Store.
What's next for Alexa, I'm bored!
I want to further build my skill by adding more actions and developing the ones already made. For example, for the "watch a movie" action, I want Alexa to list some of my favorite movies to watch if you ask her more. This idea can also be applied to the "read a book" and "listen to music" skills. I also plan to convert the skill into more languages.
Built With
alexa
node.js | Alexa, I'm bored! | An Alexa skill for when you're bored at home and need something to do | ['Nithya G'] | [] | ['alexa', 'node.js'] | 49 |
10,412 | https://devpost.com/software/covid-safety-all-in-one-website | Home Page
Safety and tips page
Videos page
Inspiration
As a normal person, I too searched for information about COVID-19 when it was just in some countries, but I couldn't find a proper website that gave me all the info about it. So, I decided to make one that has all the latest information about the current pandemic and precautions.
What it does
This website helps people find a screening tool, the latest news and country statistics, preventive measures, and some useful videos on masks and sanitization. It helps people be clearer about preventing Coronavirus and clears up common doubts.
How I built it
I built this website using HTML5, JavaScript, Bootstrap and CSS. It is fully responsive on all platforms (Windows, Android or macOS), so you won't ever miss anything! I made sure it is as fast, smooth and informational as it can be.
Challenges I ran into
I faced some issues understanding how JavaScript works, but somehow managed to make it work. Another challenge I faced was the Coronavirus stats tracker: most of the trackers made the website slow when embedded, but after digging deep, I finally found a good one that works smoothly with the website.
Accomplishments that I'm proud of
Despite the technical challenges, lack of sleep, and occasional procrastination, I am very proud to have completed the functionality and the UI design of the website in 48 hours. It was challenging but fun at the same time to learn something new.
What I learned
This hackathon gave me an opportunity to gain experience with web technologies and helped me gain confidence in my skills. It's always good to have new ideas that will help people, and in such a short time the ideas turned out really well! It was a great experience which helped me boost my skills.
What's next for CovidSafety
In the near future, I want to expand the functionality of the website by adding an inbuilt news section, a chatbot, and a status updater based on the user's location. I also want to add accounts functionality that will help people save their location, bookmark their favorite videos and news articles, and even chat with random strangers all over the web!
Built With
bootstrap
css
html5
javascript
Try it out
covidsafety.yashgarg.co
github.com | CovidSafety - All in one website | A wesbite that will help people to get the latest information about the pandemic. | [] | [] | ['bootstrap', 'css', 'html5', 'javascript'] | 50 |
10,412 | https://devpost.com/software/e-learning-platform-for-students | Different AR Demos
AR - Microscope
Our App
online meet on conducted on our sited
Flowchart
Report - After attempting test
Wordpress - Google Cloud
E-learning-platform-for-students
Brief description of steps taken to complete the project
The initial set up :
1. Bought Virtual Machine.
2. Installed Word Press on Virtual Machine.
3. Designed user-friendly Website.
4. Created AR (Augmented Reality) based app for visualisation
Connections :
1. Virtual Machine IP Address linked to domain.
2. Generated SSL Certificate (https) for website
3. Developed the App
The output :
1. Students can visualise 3D models in AR as well as Browser
(e.g. Digestive system, Earth's Core, Microscope)
2. Teachers can mark student attendance and add exam marks on our portal.
3. Study material (Resources) for students to study during pandemic.
4. Parents can see their child's attendance and marks on portal by logging in.
5. Students can attempt proctored (webcam-based) exams (ensures no cheating)
6. Teachers can see students' online exam reports in detail
(whether a face was detected; if not, a screenshot is saved, and audio is recorded when noise is detected)
7. Students can attend live lectures (classes) on our site itself.
[To download app Click Here]
Technology Stack
For hosting website -
Google Cloud
For integrating Augmented Reality feature -
echoAR and Unity
Improve site performance (CDN) -
Cloudflare
Domain service -
.xyz
For designing website -
Wordpress & Elementor
Built With
.xyz
cloudflare
echoar
elementor
google-cloud
unity
wordpress
Try it out
github.com
www.dscjscoe.xyz | E- Learning Platform + Augmented Reality + Login portal | Online portal for Student, Parents, Teachers to see attendance, marks, upcoming events ( Website & App) Portal + Augmented Reality Library (Created by us) for visualisation and better understanding | ['Sanket Patil', 'Chaitanya Abhang', 'Tejas A', 'Mahesh Gavhane'] | [] | ['.xyz', 'cloudflare', 'echoar', 'elementor', 'google-cloud', 'unity', 'wordpress'] | 51 |
10,412 | https://devpost.com/software/my-portfolio-dxr9be | Inspiration
I wanted to build my own portfolio website (without coding) that is responsive, and to host it for free. I found Adobe Portfolio, where you can build a portfolio website and host it for free. Many students don't know about this application. I think this might be helpful to many of them.
What it does
It shows my projects, skills, and contact details.
How I built it
Adobe-Portfolio
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for My Portfolio
Built With
adobe-portfolio
photoshop
Try it out
pegadapoornachander4.myportfolio.com | My Portfolio | Build a Portfolio website(no coding required) | ['poornapegada Poornachander'] | [] | ['adobe-portfolio', 'photoshop'] | 52 |
10,412 | https://devpost.com/software/sentiment-analysis-of-covid-19-tweets-dashboard | Inspiration
Our objective is to support people's mental health during the COVID-19 pandemic, and we give detailed suggestions to people who feel mental discomfort during the quarantine period. This is the logic of our project: our data source is tweets with the hashtag #StayHome on Twitter, and our main analysis method is to understand people's emotions and identify possible factors that influence them, like a specific topic or activity. The main tool we use is NLP, including sentiment analysis, word clouds, and topic modeling.
Presentation:
Twitter Sentiment Analysis-COVID-19-Visual Dashboard
What it does
Based on the analysis of topics and single words, we give the following suggestions to people who feel mental discomfort. Several popular activities proved to be very positive, including listening to and singing songs, watching movies or TV series, virtually connecting with your loved ones even though you are physically distant, and working out. Besides, we also found several activities that are surprisingly helpful to your emotions:
First, helping others, whether sharing free food or donating money, helps you feel better.
Second, saying a prayer does help release your emotional discomfort.
Third, ordering food online rather than cooking has a more positive effect on your emotions, since you can enjoy more delicious food.
Last but not least, stop playing games all day. It's not a very good way to make you happy.
Social Impact:
Besides the sentiment of topics, we are also curious about what activities people do across different cities. So we listed the top3 activities in each city based on the number of tweets related to the topic. We got several interesting findings:
Watching videos, working out, and food delivery are popular activities in many cities.
People in New York prefer staying with their friends and family, while people in LA most enjoy watching shows and movies.
Communication through words, messages, and pictures is very popular in Boston. People in different cities do have preferred activities!
Accomplishments that I'm proud of
Analyzed tweets and classified them as positive, negative, or neutral; processed the data using natural language processing; and used various analysis methods to present the results in graphs.
Scope:
We started by conducting sentiment analysis on our data. We analyzed the tweets and tried to understand people's opinions expressed in the text. Using the textblob library in Python, we can quantify the sentiment behind each tweet with a positive or negative value, called polarity. We also plotted the average sentiment score over time, and we can see that generally people's tweets are becoming more and more positive.
After dividing the tweets into positive, negative, and neutral categories based on polarity, we calculated the word frequencies for positive tweets and negative tweets separately and drew a word cloud for each of the two categories.
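The positive/negative/neutral split described above can be sketched in a few lines of Python; the zero-threshold bucketing is an assumption about how the polarity values were divided, and the sample tweets and scores are made up for illustration:

```python
# Bucket tweets by TextBlob-style polarity (-1.0 to 1.0).
# The zero thresholds below are an assumed cutoff, not the team's
# verified rule; the sample data is fabricated for illustration.
def categorize(polarity):
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"

tweets = [("Feeling safe and happy today", 0.65),
          ("This lockdown is awful", -0.50),
          ("Groceries arrived", 0.00)]

counts = {}
for _, polarity in tweets:
    label = categorize(polarity)
    counts[label] = counts.get(label, 0) + 1
print(counts)  # {'positive': 1, 'negative': 1, 'neutral': 1}
```

In the real pipeline, the polarity values would come from textblob's sentiment analysis rather than hard-coded numbers.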
From the first word cloud, we can see that positive tweets often contain positive words like “safe”, “great”, “happy”. There are also words like “video”, “music” that imply positive activities.
Built With
data-science-toolkit
natural-language-processing
python
testblob
twitter
voila
Try it out
drive.google.com
drive.google.com | Sentiment Analysis of Covid-19 Tweets- Visual Dashboard | Our objective is to ensure people’s mental health in the special COVID-19 pandemic. And we will give detailed suggestions to people who feel mental discomfort during the quarantine period. | ['Pramod Paratabadi', 'Supriya Shivanand Madiwal .'] | [] | ['data-science-toolkit', 'natural-language-processing', 'python', 'testblob', 'twitter', 'voila'] | 53 |
10,412 | https://devpost.com/software/covid-19-healtheval | A look at the home page of COVID-19 HealthEval on the computer.
A look at the MH Evaluation Quiz on the computer.
The physical health portion, CoreHealth, home page.
A look at the MH Evaluation Quiz on the iPhone 8.
Inspiration:
For this project, my inspiration came from my surroundings. For my family, my friends, and myself, life had become extremely boring during lockdown. Doing physical exercise was difficult, since gyms were closed and sports teams had been put on hold. Mental health had also begun to deteriorate after sitting in the house for prolonged periods of time, and due to pessimism about the entire situation of the pandemic. That's why I decided to make COVID-19 HealthEval, which helps to solve both of these problems.
What it does
COVID-19 HealthEval allows you to evaluate your own physical and mental health using a short and simple quiz for each. For the mental health quiz, you get personalized advice about coping with your current detected stress levels. For physical health, there are different quizzes for different body parts that you want to make stronger, or exercise more. Based on the results of the physical health quiz for a specific body part, you get a personalized diet, a sample meal plan, and a workout routine that has been designed for people to be able to easily do at home, without any equipment that you would use at the gym.
How I built it
I built COVID-19 HealthEval with HTML, CSS, VanillaJS, and JQuery. Here's a breakdown of how I did each section of this project:
Home Page: HTML + CSS
Mental Health Home Page: HTML + CSS
Mental Health Evaluation Quiz: HTML + CSS + VanillaJS + JQuery (The quiz was made with the JS and JQuery)
Physical Health Home Page: HTML + CSS
Physical Health Application (CoreHealth): I made an entirely separate website for the physical health section, also called Core Health, due to the many pages. I did not want to crowd it all into one website.
Home: HTML + CSS
cH (Core Health) Quiz: HTML + CSS + VanillaJS + JQuery
Diet: HTML + CSS
Workout: HTML + CSS
About: HTML + CSS
Challenges I ran into
The main challenge for me was the media queries and adapting the website to be functional on phones. Different phone models have different widths and heights that need to be adjusted to, which required lots of precision.
Making the design look professional + inviting. I felt that this was very important, as the look of the website when a person accesses it makes the initial good/bad impression.
Accomplishments that I'm proud of
Being able to make the media queries work. This was the most time-consuming portion of the project for me.
My attempt to solve a very prominent problem in today's world. The pandemic has created many difficult situations, and I have put in a lot of effort into trying to fix one of them.
What I learned
I learned a lot about the physical and mental health problem during this pandemic. The personalized advice for each score came from research about mental health. The same followed for physical health.
I learned many new components of HTML, CSS, and VanillaJS that I did not know of before, which has allowed me to expand my horizons in web design.
What's next for COVID-19 HealthEval
For me, the next step is to learn Swift, and put this out on the iOS App Store. This will make it easier to use for people who primarily use their phones over their computers.
After this hackathon is over, I will definitely be working on this project to improve it. I am alone on this team, and I think that I am the right person for this project, because I really have a passion for computer science and a sincere drive to improve the physical and mental health situation for everyone during this pandemic.
Built With
css
html
javascript
jquery
Try it out
vardaansinha.github.io
github.com | COVID-19 HealthEval | With COVID-19 HealthEval, maintaing your physical and mental health will be easier than ever before during these extraordinary times. | ['Vardaan Sinha'] | [] | ['css', 'html', 'javascript', 'jquery'] | 54 |
10,413 | https://devpost.com/software/shipworthy | Steering the ship with the steering wheel
Pirate Ship
Steering wheel
Inspiration
I have always wanted to steer a pirate ship. Ever since I watched Pirates of the Caribbean as a child, I've always thought that the coolest job was steering the wheel.
What it does
Shipworthy is an app that allows you to steer a ship in the Unity game engine using a real steering wheel. It detects the color green at the center and top of the steering wheel. Depending on how you turn the wheel, the ship reacts accordingly.
How we built it
We used several Python libraries to make Shipworthy sail. The detection is done with the Python library OpenCV, and to execute the keypresses, we used XQuartz.
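A minimal sketch of how two detected green markers might be turned into a steering command; the pixel coordinates, dead zone, and key mapping are assumptions for illustration (the actual marker positions would come from OpenCV color masking, and the keypress would be sent through XQuartz):

```python
import math

# Map the wheel-center marker and the top-rim marker to a key.
# Coordinates are (x, y) pixels; image y grows downward.
def steering_key(center, marker, dead_zone_deg=10):
    dx = marker[0] - center[0]
    dy = center[1] - marker[1]
    angle = math.degrees(math.atan2(dx, dy))  # 0 deg = wheel upright
    if angle > dead_zone_deg:
        return "d"   # turn right
    if angle < -dead_zone_deg:
        return "a"   # turn left
    return None      # sail straight

print(steering_key((320, 240), (400, 120)))  # "d": marker right of center
```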
Challenges we ran into
We ran into some difficulty trying to find a ship simulator that would run on Mac/iOS using the WASD keys, so that we could demo our Python detection code. We did not find anything and ended up making our own Unity game from a YouTube tutorial
link
, the first of a four-part tutorial.
Accomplishments that we're proud of
We are extremely proud that we were able to make our own unity game even though it wasn't our main goal. Being able to test our python code on a game we made ourselves was a very cool experience. It allowed us to learn how to use OpenCV as well as our newfound skills in Unity for future projects.
What we learned
We learned how to use OpenCV to detect colors and track them within an emulator. We also learned how to make a wave simulator in Unity as well as rigid body physics to allow our ship to properly react to waves.
What's next for Shipworthy
We hope to further improve our detection software and make a more official steering wheel. We are also interested in further developing the game we made to test our python code.
Built With
imutils
numpy
opencv
python
unity
xquartz
Try it out
github.com | Shipworthy | Shipworthy is an app that allows you to steer a ship in the Unity game engine using a real steering wheel through computer vision detection. | ['Mohsin Zaidi', 'Michael Li', 'Tanush Chopra'] | ['First Overall'] | ['imutils', 'numpy', 'opencv', 'python', 'unity', 'xquartz'] | 0 |
10,413 | https://devpost.com/software/watered-down-waste | Gameplay
Title Screen
Inspiration
We were inspired by ocean waste, to create a game about cleaning our oceans.
What it does
The game revolves around a diver who you control. Your goal is to collect as much trash as possible, before you get eaten alive by sharks! We all die some day, why not be at the hands of a shark while cleaning the ocean?
How we built it
We used the Unity engine to create our game. We also used programs like SFXR to make sounds, as well as Aseprite for all the artwork. Everything was made during the competition.
Challenges we ran into
We ran into a few graphical issues, and audio issues. But once those were fixed, the game was to where we wanted it.
Accomplishments that we're proud of
We produced a fully finished and implemented game, something we have had trouble doing in past hackathons/game jams. Usually we produce a product we aren't quite happy with, but this time our project is something we are very proud of.
What we learned
We continued to enhance our skills at game design and usage of the Unity engine, as well as C# as a coding language. Our talents at spriting and graphics have also improved since our last game jams.
What's next for Watered Down Waste
I assume the game will not have continued development, as it was made to be a play-while-you're-bored kind of game. The only additions I could see would be making the game mobile, adding microtransactions/ads, and adding more cosmetics to make the game more replayable.
Built With
aesprite
c#
sfxr
unity
Try it out
leyamez.itch.io | Watered Down Waste | 🦈Collect Trash, Avoid Sharks🦈 | ['James Hicks', 'Dominick Serwe', 'Tyler Lee'] | ['Second Overall'] | ['aesprite', 'c#', 'sfxr', 'unity'] | 1 |
10,413 | https://devpost.com/software/hackitshipitpackittrackitwatchit | Inspiration
Tracking fragile, expensive, critical, and perishable goods across oceans, air, and land can be immensely useful for people transporting these kinds of items long distances. Say a person is transporting priceless medicine across the Atlantic. The extremely fragile contents must be protected, tracked, and kept in good condition. Our product allows the user to know the conditions of the smart container at all times. Perishable, high-value items like vaccines, medical supplies, and sensitive equipment are sensitive to various adverse conditions in transit, and the ability to track these shipment conditions in real time is a necessity
What it does
Hardware: our smart crate collects the following data with direct sensing:
Movement (vibrations, shocks, rolls) - 6 axis accelerometer/gyroscope
CO2 - CCS811
Ambient Temperature - DHT11
Humidity - DHT11
Object temperature - Gravity Analog Temperature probe
In addition, we can provide on-demand imaging and a video stream with a Raspberry Pi camera, attached to a NoIR filter for low-light conditions (containers are usually dark), and on-demand lighting using a NeoPixel LED ring
Ideally, this data would be relayed via the container ship's (or transport vehicle's) relay computer, which would add GPS data from the vehicle's GPS system (since we do not have access to a container ship or cargo truck, this part was hard-coded)
The mobile app displays the data collected by the shipped package. The Maps Page shows where the package currently is and you can check where it is and how long it would take to get to your house. The data page shows you a live stream video of the package and also shows you the data collected from the multiple sensors in the smart container, such as temperature, humidity, gas levels, and more. The contact page allows the user to contact the people in possession of the app with the click of a button.
How I built it
Hardware: The box was assembled using cardboard, lots of tape, glue, paper and patience!
The core sensors used were MPU6050 IMU, CCS811, DHT11, Grove Analog Sensor probe and raspberry pi Camera with NOIR filter and low light add-on kit.
The main controller logic ran on a Raspberry Pi, with some of the analog readings channeled through an Arduino connected to a serial port on the Pi. The NeoPixels were programmed to illuminate the box interior just before an image is captured or a stream is started.
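As an illustration of the Arduino-to-Pi hand-off, here is a sketch of parsing one comma-separated telemetry line; the field order and units are assumptions, and in the real setup the raw line would arrive over the serial port (e.g. via pyserial) rather than a hard-coded string:

```python
# Parse one line of sensor telemetry relayed by the Arduino.
# Assumed field order: ambient temp, humidity, CO2, probe temp.
def parse_reading(line):
    keys = ("temp_c", "humidity_pct", "co2_ppm", "probe_temp_c")
    values = (float(v) for v in line.strip().split(","))
    return dict(zip(keys, values))

sample = "23.5,41.0,612,4.2\n"
print(parse_reading(sample))
```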
The mobile app was developed with Flutter. We chose Flutter because it is a multi-platform framework that lets you create a simple UI quickly and efficiently, and because it was easy to integrate essential APIs such as the Google Maps API.
Challenges I ran into
We had a lot of trouble building the smart container. It was tedious trying to fit all the components within the box and make everything work.
Accomplishments that I'm proud of
All the hardware works! We are also happy to produce a pretty good UI that gets the message across simply and efficiently.
What I learned
Building a box is harder than it looks!
What's next for HackitshipitPackitTrackitWatchit
Mixtape then album
We want to more cohesively integrate the hardware and software so that the two run perfectly together.
Built With
flutter
python
raspberry-pi
Try it out
github.com | HackitshipitPackitTrackitWatchit | Track your precious goods with ease! | ['James Han', 'Muntaser Syed'] | ['Third Overall'] | ['flutter', 'python', 'raspberry-pi'] | 2 |
10,413 | https://devpost.com/software/sally-standup | Inspiration
I've realized that way too much of my time at work is dedicated to just remembering what I did yesterday, only to forget and spend 5 minutes scrolling up until I see the standup I did yesterday and copy it word for word.
Sally Standup solves this problem by having a place where you can send stuff that you've done, or stuff that you need to do.
Then, when it's time for your standup, all you need is a quick
/sally standup
that shows what you've done yesterday, all presented in a clear manner.
How to use
sally standup
: Displays your current todos/done/blockers, then clears your completed items and blockers.
sally add <string>
: Use it anytime, anywhere. Whenever you get assigned a new item, whenever a meeting happens, it doesn't matter. One quick line will add a todo to the bot.
sally finish
: Choose to send one of your predefined messages and move it to the finished list.
sally blocker <string>
: Adds a blocker to the list
sally list
: View the current status of all your lists
sally clear
: Clears all your records.
How I built it
NodeJS and the
Slack SDK
were a great help.
Persistence built in by
node-json-db
.
Challenges I ran into
Setting up the slack environment was challenging, requiring ngrok even for local development. Many small errors were made with the structure for the interactive elements for Slack as well, sucking up a lot of time.
Accomplishments that I'm proud of
Assuming it saves 2 minutes/person/day at a company of around 500 employees,
this gives 1000 minutes saved/day.
In 2020, there are 252 working days, meaning it saves 4200 hours/year.
Assuming pay of $25/hr, this saves $105,000/year! In other words, a lot of turnips.
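The arithmetic above checks out; a quick verification using the writeup's own assumptions:

```python
# All inputs are the estimate's stated assumptions.
minutes_per_day = 2 * 500                     # 2 min/person, 500 people
hours_per_year = minutes_per_day * 252 / 60   # 252 working days in 2020
dollars_per_year = hours_per_year * 25        # $25/hr
print(hours_per_year, dollars_per_year)  # 4200.0 105000.0
```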
Also - since it's on Slack, I guess you can call this responsive design?
What I learned
First time I've made a NodeJS application without being destroyed by race conditions/promises/etc. Yay!
What's next for Sally Standup
Integrations! Especially Outlook Calendar for having a line for saying
[x] hours of meetings scheduled today
. However, had some problems with signing up for MS auth so will hopefully fix that soon.
Built With
bot
node.js
slack
slackbot
Try it out
github.com | Sally Standup | Daily standups - without the memory leaks. | ['Kevin Jiang'] | ["Yo Ho It's a Hacker's Life for Me"] | ['bot', 'node.js', 'slack', 'slackbot'] | 3 |
10,413 | https://devpost.com/software/pirates-of-virtual-caribbean | MIT App Inventor - Blocks
Google Firebase - Realtime Database
Inspiration
Since the start of quarantine, we haven't been able to go out much. We miss the feel of walking in parks and other wonderful and beautiful places. So we thought, If we can't go outside, why not bring the world to us?
Pirate of Virtual Caribbean is a VR application that allows you to explore different places by walking through them and by moving your body around.
What it does
The
Senso
app installed on your second phone collects the accelerometer data in X, Y and Z axes and sends it to
Firebase's real-time database
. Then, this data is used by
Unity3D
to figure out user motion and move in the virtual environment. The accelerometer of your second phone inside
VR Box
allows you to look around while you move giving you a feel of being in the environment you are viewing.
You have several options to explore inside the VR:
The Submerged forest
Ice-sea Himalayas (pun intended)
The Pirate Mothership
The Guerilla War riverboat.
How we built it
The
Senso
app was built using MIT app inventor. This app is connected to a
Firebase's Real-time Database
which communicates with
Unity3D
using a python script. All the scenes inside Unity were created using Standard and free assets.
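To illustrate how accelerometer samples pulled from the database might be turned into motion, here is a hypothetical step detector; the axis convention, sample values, and threshold are assumptions, not the project's actual mapping:

```python
# Detect a walking step from a short window of (x, y, z)
# accelerometer samples, using the swing in the vertical axis.
def step_detected(samples, threshold=1.5):
    vertical = [s[1] for s in samples]  # assume y is vertical
    return (max(vertical) - min(vertical)) > threshold

walking = [(0.0, 9.4, 0.1), (0.1, 11.2, 0.0), (0.0, 8.9, 0.2)]
standing = [(0.0, 9.8, 0.0), (0.0, 9.9, 0.1)]
print(step_detected(walking), step_detected(standing))  # True False
```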
Challenges we ran into
This was the first time either of us used Unity or App Inventor. It was difficult mapping the accelerometer to Unity's controls, and it took a lot of trial and error. There were some lighting-related issues that were fixed after we moved from a development build to a stable build.
Accomplishments that we're proud of
This is kind of a dream project for us. We are proud that we could learn to make apps and games using Unity over the course of a hackathon and have such a cool project in the end. All the scenes we created turned out to be absolutely marvelous and beautiful. The app turned out to be so robust and fast that there was virtually no time lag.
What we learned
Making apps using MIT's App Inventor
Making Games using Unity3D
Adding VR capabilities to games
Learnt how to create shaders for scenes and add character controllers
Learnt the basics of using sensor data
What's next for Pirates of Virtual Caribbean
We'll add a feature to explore different environments with your friends.
Built With
firebase
google-cardboard
mit-app-inventor
python
unity
virtual-reality
Try it out
github.com | Pirates of Virtual Caribbean | For all your Virtual Pirate Adventures | ['Anupama Koley', 'Jatin Dehmiwal'] | ['Find Your Sea Legs'] | ['firebase', 'google-cardboard', 'mit-app-inventor', 'python', 'unity', 'virtual-reality'] | 4 |
10,413 | https://devpost.com/software/sea-pals | Pal Page 1
Pal Page 2
Profile Contribution
Home
Profile Affection
Chat
Map Info
Exercise Game
Food Markers
Check-In
Map
Inspiration
For a flash, the ‘deep sea’ seems to only whip up ideas of the terrifying unknown--a dark place filled with horrific creatures. Soon after, though, thinking of the ‘deep sea’ transports us back to high school. The time when learning that the ocean floor contains millions of human-produced, nearly invisible, and killer beads called microplastics lit the initial fire for us to engage in environmental petitions, city-wide protests, and even individual microplastic detection research.
However, we now realize that focusing on the growing spread of microplastics disconnects us from its root cause: our unexplainable complacency toward and ignorance of the ocean, even though it covers nearly 70% of our Earth. Now, nearly a third of all marine creatures are near extinction, including, unsurprisingly, deep-sea creatures.
So, entering this project, our team’s leading question was: how can we both get people to care and contribute to the cause for our ocean’s life? Well, what makes parents who do not want dogs, love their dogs once they are home? What leads people to fight everyday for animals that some of us have never even heard of?
Partnership. Feeling that that animal is not merely a title of a specific branch of the animal kingdom tree, but is something real--a ‘Pal’ of sorts.
What it does
In comes ‘Sea Pal,’ a pet simulator merging virtual and real-life components for users to nurture an ‘affection bond’ with their sea creatures, aptly named ‘sea pals.’ This affection bond is tracked with levels and is affected by both components.
The real-life component is the primary way to increase affection. It allows users to view both their own and nearby outdoor, arts and entertainment, or dining places’ locations. Once they are at one of the locations, they can ‘check-in’ to the location, opening up a chat room where they can talk to their very own sea pal! The real-life component provides additional features for ease-of-use, including a ‘check-in’ history log for the past week, the 3 closest locations to the user for a location category, and a view reset that centers back onto the user.
The virtual components are the primary way that affection may decrease. For each sea pal, there are meters for happiness, exercise, hunger, and sleep that must be maintained. Each one has its own interactive, virtual method of increasing: petting the pal (happiness), playing a game swimming through the ocean (exercise), feeding the pal (hunger), and singing the pal a good night song (sleep).
The sea pal’s profile is where the users can view their affection bond with the creature as well as customized blurbs on their relationship together and the users’ contributions so far; that is, an indicator, based on the users’ affection and time spent with their pals, of how much money has been donated to save sea creatures just like their own!
The home page is where users can see daily deep sea facts, a short description of our donation system, and their pals, as well as create a new pal if they so wish.
How we built it
The application was built entirely with React Native, requiring primarily Javascript, but also Objective-C to configure the iOS files for proper and optimal functioning.
All components ‘speak’ to each other and dynamically keep track of each pal’s need meters and the actions that the user takes, increasing and decreasing the affection as needed.
The sea pals themselves are all designed using React Native’s animations and animatable components.
The real-life component is our integration of the APIs and SDKs (configured for the iOS files) of Radar.io and Google Maps. Google Maps was used, with custom map styling, as a base for displaying the data points that were collected and parsed. These data points were all collected using Radar.io, employing many of its features: the iOS and React Native-integrated SDK to retrieve live user location, the IP geocoding API to verify location accuracy, the reverse geocoding API to retrieve addresses from coordinates, and dynamically created geofences to provide a user-friendly 'check-in' functionality.
The AI pals that users chat with are configured with their own personalities (the majority lean on the kind side) as they have been trained with sample user input and bot output conversations using Dialogflow. In real-time the program sends the user’s messages to the Google Cloud Dialogflow API, retrieves a response, and displays it back to the user.
The ‘Exercise’ meter virtual component uses the react-native-game-engine library to dynamically generate levels, and fills up based on the user's performance in the minigame.
Challenges we ran into
Integrating the Radar.io and Google Cloud (Maps, Dialogflow) APIs/SDKs required extensive planning: dividing components carefully, avoiding repeated requests (which would result in lag) from overzealous state-hook refreshes, ordering API/SDK requests since some components needed a request's information before they could make their own, and configuring keys for both the APIs and the SDKs (done separately in the iOS files).
Related to the previous paragraph, optimizing application performance considering the API/SDK usage of Radar.io and Google Cloud (Maps, Dialogflow) also presented a challenge. In the video demo, the application runs smoothly as optimizations were painstakingly completed by configuring component loading to be more distributed (e.g. do not load all at once, preload some, deload others), and, within that distribution, placing heavier API/SDK use on lower loading periods.
Configuring React Native libraries, as well as our API/SDK components to run smoothly (let alone function) on iOS presented a challenge in terms of dealing with not only React Native/Javascript, but also Xcode/Objective-C. Some of the most frequent issues encountered were components and features turning out to not be cross-platform, and react-native link installations incorrectly configuring on iOS files, requiring manual deletion and linking.
Accomplishments that we're proud of
Though the project initially began with purely virtual components as a full-fledged typical pet simulator, we are extremely proud that we were able to push through and integrate real-life components into the game play in a natural and seamless way, creating a truly unique bonding experience for users. With check-in and chatting features, the sea pal comes to life with its own personality and, combining that with the time investment and real contributions that users make with their time spent, we believe a genuine bonding experience is awaiting our future users.
What we learned
Though we had coded in Javascript before, it was our first time coding in Objective-C, integrating multiple APIs/SDKs into a project, and using the React Native framework, let alone doing mobile app development.
So in the short timespan of under 48 hours, we learned a lot about everything aforementioned, but also generally about application development and the ways that server-side features can be integrated to enhance user experience and mobile features.
What's next for Sea Pals
Though the project began as a Hackathon submission, we have fallen in love with it. So we hope to begin beta testing the application with users after creating and integrating a database into our project to manage users and user data. With user feedback, we want to refine the application, focusing on the bond that the users develop with their sea pals, ensuring that the bond is as genuine as possible.
To manage the donation aspect, we hope to eventually grow the application into a startup, beginning with pitches. One place we know for sure we want to pitch at is our university’s (Duke) very own Innovation Co-Lab, where lots of grants, advice, and more await on our journey to beginning a fully-fledged startup.
Built With
dialogflow
gcloud
gmaps
gps
javascript
objective-c
radar.io
react-native
react-native-dialogflow
react-native-game-engine
react-native-maps
react-native-radar | Sea Pals | Sea Pals will help you explore, grow, and bond with your own sea creature come to life (soon your best pal!), and help save other sea pals just like your own. | ['Joon Young Lee', 'Alicia Wu'] | ['Most Creative Radar.io Hack'] | ['dialogflow', 'gcloud', 'gmaps', 'gps', 'javascript', 'objective-c', 'radar.io', 'react-native', 'react-native-dialogflow', 'react-native-game-engine', 'react-native-maps', 'react-native-radar'] | 5 |
10,413 | https://devpost.com/software/pirateradio | Fun fact, pirate radio stations were intially trasmitted from ships to avoid getting in trouble for running "illegal" radio stations.
Inspiration
My inspiration was to embrace both the hack and nautical (pirate) themes of this hackathon. I am going for the hardware nautical theme. I wanted it to be completely portable, so that if I were on a ship, I would be able to use it without any internet.
What it does
It turns a Raspberry Pi into an FM transmitter that can connect with a Bluetooth audio device, essentially making a pirate radio. It can play local files as well.
How I built it
I used a Raspberry Pi, a Bluetooth adapter, Raspbian Linux, a couple of wires connected to the GPIO, and BlueALSA as a way to pipe the audio it receives. It relies heavily on
[PiFmRds](https://github.com/ChristopheJacquet/PiFmRds).
Most of the work I did was actually creating scripts and setting up the proper parameters to allow the different libraries to talk to each other.
Challenges I ran into
I currently only have a Chromebook, so getting it into developer mode, installing crouton, and setting up a dev environment took as long as setting up the Raspberry Pi and configuring BlueALSA to detect Bluetooth and pipe the audio to SoX. I also needed to find an old-school FM receiver to be able to test it. I ran into some antenna trouble and had to find a clean, unused FM frequency that wouldn't get me in trouble with the FCC.
I also could not edit a nicer video for my project, because I am working from a Chromebook, but I managed to show the main features of my setup.
Accomplishments that I'm proud of
I am proud of how it is completely standalone. With a startup script, it shows up as a Bluetooth device, and after finishing boot, the Raspberry Pi will keep working as long as the battery bank does. I calculated that the battery bank would allow it to run for about 24 hours (~1000 mA draw for the Pi + Bluetooth, with a 26800 mAh battery bank). Skipping tracks or changing the gain on the radio, while making audio clearer, would cut down on the battery run time.
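The runtime estimate is easy to sanity-check; the nominal figure comes out a little above 24 hours before derating:

```python
capacity_mah = 26800  # battery bank capacity
draw_ma = 1000        # Pi + Bluetooth draw (writeup's estimate)
print(capacity_mah / draw_ma)  # 26.8 nominal hours
```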
What I learned
I learned a lot about lower level linux operations in how it handles Audio. I also learned to appreciate how much can be done with so little.
What's next for PirateRadio
Currently, HD Radio is being pushed for digital radio broadcasts; however, the format is encrypted, and a license is needed to run a "pirate" radio station. I would love to reverse engineer the broadcast format; the codec used is known, but there is DRM attached, unfortunately. Most radio stations still have an analog fallback, and the Honda Civic I used only has an analog radio.
Something else that has interested me is going all in on ICECast and running an internet radio station.
Built With
bluealsa
bluetooth
raspberry-pi
sox | PirateRadio | A Raspberry Pi that acts as your own Pirate Radio Station (Analog FM) and uses Bluetooth to connect to the audio source | ['Karan Naik'] | ['Best Hardware Hack presented by Digi-Key'] | ['bluealsa', 'bluetooth', 'raspberry-pi', 'sox'] | 6 |
10,413 | https://devpost.com/software/here-fishy-fishy | domain.com
Here Fishy Fishy
Fish data catered to current location
More details on fish including sustainability score
Fish Nutrition
Recipes
Recipe
Reviews
Game to test your knowledge
Upcycling marketplace
Marketplace
Inspiration
Fish taste good but smell bad. But you know what smells worse? Overexploitation resulting in no fish.
The overexploitation of the seas and oceans is leaving them without fish. The United Nations Food and Agriculture Organization (FAO) therefore calls for a sustainable fishing model to ensure the survival of species and of fishing activity. Careless fishing practices while attempting to meet demand have led to the depletion of fish and shellfish populations around the world. Fishers remove more than 77 billion kilograms (170 billion pounds) of wildlife from the sea each year. Approximately 3 billion people in the world rely on seafood as a primary source of protein. Making responsible seafood choices is more important than ever. Waste is a big issue that occurs at all stages of the seafood supply chain. In the U.S. alone, 40%–47% of seafood destined for the market goes to waste or to low-value byproducts, an amount that could satisfy the annual protein needs of 10 to 12 million people. In the U.S.!
What can it help you with?
Sustainable Seafood 101
HereFishyFishy is a compilation of various sources on sustainable seafood. It uses all the data to generate a sustainability score for each fish. This data is catered to you based on your current location. It tells you about the population level, fishing rate, habitat impacts, bycatch, availability, source, color, texture, taste, nutrition facts and so much more for each fish. You can test your knowledge via a mini game.
Step up your seafood cooking game
HereFishyFishy provides you with community-sourced recipes for each fish along with reviews. You’re encouraged to try them out and leave your reviews!
Upcycle seafood waste via exclusive Marketplace
HereFishyFishy allows you to upcycle seafood waste classified into three categories:
Fish Skin
Fish Skin has several uses ranging from treating skin burns to making leather products.
Shellfish shells
They can be used for animal feed/fertilizer. They can also be turned into compostable bioplastic with natural antimicrobial properties.
Fish trimmings
In addition to animal feed/ fertilizer, extraction of high-value, collagen-based ingredients for applications in functional nutrition, pharmaceuticals, biomedicine, biomaterials, and cosmetics are being explored using fish trimmings.
*The marketplace is exclusively for byproducts of sustainable seafood.
How I built it
Backend:
We collected data from the NOAA fisheries repository, Seafood Watch, and other verified data sources about fish profiles, habitats, fishing status, and recipes. We assigned a score to each fish based on various factors such as sustainability, fish stock availability, habitat endangerment, and bycatch. We structured and stored this curated data in MongoDB and used Google Cloud to serve endpoints to our system
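A sketch of how such a per-fish score could be combined from the factors listed; the weights, the 0-10 factor scale, and the sample fish values are illustrative assumptions, not the team's actual formula:

```python
# Hypothetical weighted sustainability score; weights and the 0-10
# factor scale are illustrative assumptions.
WEIGHTS = {"population_level": 0.3, "fishing_rate": 0.3,
           "habitat_impact": 0.2, "bycatch": 0.2}

def sustainability_score(factors):
    return round(sum(w * factors[k] for k, w in WEIGHTS.items()), 2)

mahi = {"population_level": 8, "fishing_rate": 7,
        "habitat_impact": 9, "bycatch": 6}
print(sustainability_score(mahi))  # 7.5
```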
Frontend:
React Native app
Challenges I ran into
Compiling and generating sustainability score.
Accomplishments that I'm proud of
We can now eat fish with peace of mind
What I learned
Fish are very diverse
Domain name
somethingfishyinthis.space
What's next for HereFishyFishy
Add global fisheries data
Built With
adobe-illustrator
google-cloud
mongodb
react-native
Try it out
github.com | Here Fishy Fishy | Sustainable Fishes for Sustainable Dishes | ['Muntaser Syed', 'Ebtesam Haque'] | ['Best Domain Name from Domain.com'] | ['adobe-illustrator', 'google-cloud', 'mongodb', 'react-native'] | 7 |
10,413 | https://devpost.com/software/my-weekly-routine | My Weekly Routine
Inspiration
I wanted to create something to make my life easier and figured it might make someone else's life easier as well.
What it does
This program first asks the user what job they are searching for; this is the only input the user provides throughout the whole program, and everything after it is automated. The program then opens the Google Chrome web browser, goes to LinkedIn.com, and searches for the job stored in search.txt. The benefit of this part of the program is that it filters to only show results from the past week. After the program retrieves all the data it needs, it goes to Google Jobs and performs the same task.
How I built it
I used UiPath to create everything. I first created a note file that the program could read, storing whatever is read into a variable. The variable can then be used several times across several job-searching websites (#reusability). I used "Data Extraction" while on the specified website to extract from the search results. I used "Append to CSV" and "Write CSV" to export the data into a readable format. Then I opened Google Calendar and used the variables I stored from the search to create a reminder that would let the user know of the top two searches that the user can apply for.
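Although the project is a UiPath workflow rather than hand-written code, the note-file and CSV steps map onto a few lines of ordinary Python. This sketch is an illustration of the same logic only; search.txt comes from the description above, while results.csv and the sample row are hypothetical:

```python
import csv
from pathlib import Path

# Mirror of the note-file step: the search term lives in search.txt and is
# read once into a variable, then reused across job sites (#reusability).
Path("search.txt").write_text("software engineering intern\n")

def read_search_term(path: str = "search.txt") -> str:
    """Read and trim the stored job search term."""
    return Path(path).read_text().strip()

def append_results(rows, path: str = "results.csv") -> None:
    """Mirror of the 'Append to CSV' activity: add scraped rows to a CSV file."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerows(rows)

term = read_search_term()
# Hypothetical scraped row: (search term, company, posting URL).
append_results([[term, "Example Corp", "https://example.com/job/123"]])
print(term)
```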
Challenges I ran into
Since this is my first hackathon I quite frankly did not know what to do at all, plus I did not have a team, so that proved very difficult. This is my first time using UiPath so I don't know if there is a more efficient way to store variables; that was a major hiccup. I didn't want the user to enter the same search several times. Extracting the URL was by far the most challenging task for me because there were not many ways to do it. There was one way to do this task and it would just not work sometimes. I managed to get it to work after messing with the anchors for it.
Accomplishments that I'm proud of
I am just proud that I finished a project for my first hackathon. I didn't think I would be able to do that and I am just so grateful that I participated. I look forward to participating in more and hope to make some friends in the next one.
What I learned
I learned the basics of UiPath. In the process of learning UiPath, I wanted to create a website to showcase the data from the Excel sheet in a pretty UI, so I learned some HTML and CSS, but I didn't learn enough to make the UI. I wish I could learn everything about web development and create one in a few hours.
What's next for My Weekly Routine
Creating a UI and finding ways to make the program more efficient. I may even learn how to code the whole program in Java just for fun.
Built With
uipath
Try it out
github.com | My Weekly Routine | I automated my weekly routine internship/job searching. | ['Daniel Akoto'] | ['Best UiPath Automation Hack'] | ['uipath'] | 8 |
10,413 | https://devpost.com/software/amplify-7pos2c | App Home Screen
Write to Your Representatives Workflow
Find the Right Words Workflow
Inspiration
Over the past few months, the world has seen what young people can do when they organize around common goals. Many people want to make change, but they're not sure where to start. Although it seems like a small action, emailing or calling representatives can have a huge impact. Such an impact is seen in Colorado, where Elijah McClain's life was taken at the hands of police brutality. After public outcry, the governor of Colorado has appointed a special prosecutor to reopen the investigation into Elijah McClain's murder. Justice for Elijah might have been neglected had the public not rallied for his life. Sometimes it's difficult to find the right words. This Android app, Amplify, combines the voices of citizens to guide those seeking change through the process of contacting their representatives.
What it does
This app has two main features: (1) send emails to your representatives and (2) see what other people in the country are writing to their representatives about. In order to send an email to your representatives, the app prompts users to enter their address. After entering their address, all of the user's representatives are listed along with their political affiliations. Users then write the subject and body of their email and send it to their representatives in the app. If the user chooses to, they can share their email subject and body with other users for inspiration. All users can visit the 'Find the Right Words' page to see a map of what other people around the country are writing to their representatives about. Users can visit the 'Find the Right Words' page if they're looking to become more aware of issues around the country, learn from the perspectives of others, or find inspiration for their own letters to representatives. This entire app was written during HackItShipIt.
How I built it
Amplify is an Android app that I wrote in Kotlin. Amplify leverages the Google Cloud Platform in several ways. First, Amplify uses the Google Cloud Platform to make requests from the Google Civic Information API and Maps SDK for Android. Secondly, Amplify stores all user messages in a Realtime Firebase database. When a user enters their address, the app makes a request to the Google Civic Information API for their representatives. When the user is done writing their letter, the app writes their address, subject, and body to Firebase. When the user visits the 'Find the Right Words' page, the app reads all of the submissions from Firebase and places them on Google Maps with markers via the Maps SDK for Android.
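The lookup flow above centres on a single GET request to the Civic Information API's representatives endpoint. The app itself does this in Kotlin; as a language-neutral sketch, the snippet below only builds the request URL in Python. The address and API key are placeholders, and the actual HTTP call and error handling are omitted:

```python
from urllib.parse import urlencode

# Base URL of the Civic Information API representatives lookup.
CIVIC_INFO_ENDPOINT = "https://www.googleapis.com/civicinfo/v2/representatives"

def representatives_url(address: str, api_key: str) -> str:
    """Build the lookup URL for a street address, URL-encoding the parameters."""
    query = urlencode({"address": address, "key": api_key})
    return f"{CIVIC_INFO_ENDPOINT}?{query}"

# Placeholder address and key for illustration only:
url = representatives_url("1600 Pennsylvania Ave NW, Washington, DC", "YOUR_API_KEY")
print(url)
```

The JSON response would then be parsed for the `officials` and `offices` fields before display, which is the step the author describes wrestling with below.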
Challenges I ran into
The first challenge I ran into in this project was parsing the JSON response from the API request to a Kotlin object. I've never worked with JSON before (I have only used Protobuf), so I had a difficult time understanding the structure of the response and how to extract the data I needed. Fortunately, after some YouTube tutorials, I was able to nail it. I also ran into the challenge of reading the user data from Firebase. After reading more of the Firebase documentation, I was able to figure it out.
Accomplishments that I'm proud of
I'm proud that I was able to use the Google Maps Geocoder to map user addresses to latitude and longitude. This was one of the last tasks I completed, so I was running out of steam. I was able to push through, and as a result, the app accurately places the markers of different submissions on Google Maps. I'm also proud that I was able to use Firebase. This is my first time working with a database in any of my projects, so I was intimidated by how I would get it working. Fortunately, Firebase is pretty simple to use and there's plenty of documentation!
What I learned
I learned how to make an Android app that makes API requests. I've never worked with the Google Cloud Platform before, so I learned by creating a new project in the Google Cloud Console and the Firebase Console. I've just scratched the surface of their capabilities, so I'm looking forward to exploring more!
What's next for Amplify
Right now, there isn't any email integration for the app. The app is just storing user submissions in Firebase. I would like to add Gmail integration so users can send their letters in the app. I would also like to improve the interface for the Google Maps markers by making a custom marker. The custom marker would allow the app to display the entire user's message, rather than a snippet. In addition, users should be able to select which representatives they would like to send their letter to.
Built With
android
android-studio
firebase
google-civic-information
google-cloud
google-maps
kotlin
Try it out
github.com | Amplify | Our voices are louder together. Amplify connects citizens with their representatives to write letters for change. Users can view letters written by fellow constituents for inspiration. | ['maddiemford Ford'] | ['Best use of Google Cloud'] | ['android', 'android-studio', 'firebase', 'google-civic-information', 'google-cloud', 'google-maps', 'kotlin'] | 9 |
10,413 | https://devpost.com/software/arrr-you-a-pirate | Inspiration
The Hack It Ship It hackathon reminded us of pirates. We wanted to do something related to pirates, and we came up with the idea of measuring "how much of a pirate the user is."
What it does
Submit a quiz, and based on the quiz results we will output a score; based on that score, we will give you a pirate "rating."
How I built it
We used HTML and CSS for the website and JavaScript for the quiz.
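The quiz itself is JavaScript, but the score-to-rating step described above can be sketched in a few lines. The thresholds and rating names below are made-up placeholders, not the quiz's actual values:

```python
# Illustrative sketch of mapping a quiz score to a pirate "rating".
# The thresholds and rating names are assumptions, not the project's values.

def pirate_rating(score: int, max_score: int = 10) -> str:
    """Bucket a quiz score (0..max_score) into a pirate rating."""
    fraction = score / max_score
    if fraction >= 0.8:
        return "Dread Pirate Captain"
    if fraction >= 0.5:
        return "Seasoned Buccaneer"
    if fraction >= 0.25:
        return "Deck Swabber"
    return "Landlubber"

print(pirate_rating(9))
```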
Challenges I ran into
Eliana: My teammates were difficult to get online, so I did most of the work on my side. I stayed up pretty late and woke up pretty early too, but what is sleep during a hackathon anyways? I also struggled the whole Saturday trying to insert the quiz into the website. I tried getting Python to run inside HTML, but nothing worked. That's why the quiz and the website are separate as of right now.
Also, three teammates left our original team of 7 on the first day. I don't know why, but that caused us to change our plan. We wanted to do image recognition and compare an image of the user with an image of the pirate, but we changed to just a quiz.
Finally, there were just some teammates that were difficult to work with. Pretty much no one responded to my discord chat pings yesterday, and that was just plain sad. I know it wasn't because of the time difference since 1) they were active yesterday at the same time and 2) I was waiting for around 16 ish hours. So that was one of the main challenges. I thought I could depend on their skills/knowledge but I was left with figuring things out myself. To end on a positive note, I touched up on a lot of my debugging skills.
Gerald: I helped a bit; however, because of internet issues I was not able to fully support the team. Fortunately, once my internet issues were resolved I was able to finish the program. I worked on the back end and made a few adjustments to the look of the game.
EDIT: after the hackathon, Gerald told me he was busy and had internet issues.
Accomplishments that I'm proud of
Eliana: I did a lot of the CSS myself this time, which was kinda challenging, but after searching online I found the answers.
Gerald: I was able to help create a fully functional pirate game. I have never done one before
What I learned
Eliana: I learned that a good team and time management is important.
Gerald: Never give up. You must always persevere until you make it. I also learnt that time management skills are important.
What's next for ARRR you a pirate?
I plan to try to get the quiz into the website, and that should be one of the most important finishing touches.
EDIT: We finished our project (yay!) and now it is fully functioning.
Built With
css
html
javascript
python
Try it out
repl.it | ARRR you a pirate? | Find out how much of a pirate you are! | ['hello people', 'Gerald Akorli', 'Sarthak Gautam'] | [] | ['css', 'html', 'javascript', 'python'] | 10 |
10,413 | https://devpost.com/software/quarantime-5d6amj | Inspiration
Since the outbreak of coronavirus disease (COVID-19), we are aware of the term quarantine and its significance in preventing a further outbreak in communities and individuals. It is necessary to track the individual locations for the ones who have been in contact with an infected person- directly or indirectly, for a variety of reasons including contact tracing, prevention of exposure to the virus.
What it does
The global pandemic needs scalable and distributed monitoring. This calls for apps that can monitor individual activity with a precise location and deliver timely updates to the connected services, which notify whenever an isolation or quarantine rule has been violated.
How I built it
Platforms
I built this application with Google Cloud Platform; it is essentially a Xamarin.Forms cross-platform mobile application that supports Android, iOS, and Android Wear.
Languages and Frameworks
The app is built with Xamarin.Android and is written in C#.
Cloud Services
Google Geolocation APIs power the app's location-awareness features and provide an identity platform to authenticate every user associated with the app.
Mobile Backend
The backend runs on ASP.NET Core, which attaches the webhook to the communication provider.
Communication Provider
The app uses Twilio programmable SMS to send texts on appropriate HTTP triggers.
Hosting
To keep the testing process hassle-free and lightweight, the app is hosted on secure tunnels using ngrok.
Challenges I ran into
Build process for multiple web services.
Accomplishments that I'm proud of
My first Android WearOS project
What I learned
Connecting services and cross-platform integration to support for a wide range of functionalities, hosting, debugging, and much more!
What's next for Quarantime
Scalability can be achieved starting from the current version of the app itself, to provide real-time COVID-19 updates, connect users with essential services that provide doorstep deliveries, to ensure social distancing, and a lot more!
Built With
android-wear
asp.net
c#
github
google-cloud
google-maps
google-places
ngrok
twilio
xamarin
Try it out
github.com | Quarantime | Smart geolocation tracker to prevent the transmission of COVID-19 by restricting individual activities and residing perimeter to prevent exposure from the deadly COVID-19. | ['Shivam Beniwal'] | [] | ['android-wear', 'asp.net', 'c#', 'github', 'google-cloud', 'google-maps', 'google-places', 'ngrok', 'twilio', 'xamarin'] | 11 |
10,413 | https://devpost.com/software/learningapp | Description
This application is made with Flutter using the Dart language. Flutter apps are available to both iOS and Android users. The app allows students to capture their work-based experiences and projects and share these learning experiences with other students through text and photos, and then chat about those experiences with other students.
Built With
dart
flutter | LearningApp | E-learning Application | ['Simran Srivastava'] | [] | ['dart', 'flutter'] | 12 |
10,413 | https://devpost.com/software/take-me-somewhere-j6qo7n | Inspiration
I have begun learning Java web technologies and how to contact REST APIs, so I decided to take my skills to the next level by going for a project that is relatively ambitious given my experience level.
What it does
It provides a simple user interface for users to plan a road trip from their location and email themselves the information.
How I built it
I used HTML, JavaScript, Bootstrap, and some CSS for the front end, Java for the backend, Tomcat as the server provider, and the JavaMail API to send mail.
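The project sends mail with the JavaMail API; as an illustration of the same idea in a different language, here is a hedged Python sketch that only constructs the message with the standard library. The addresses are placeholders and the SMTP send step is omitted:

```python
from email.message import EmailMessage

def build_trip_email(recipient: str, directions_text: str) -> EmailMessage:
    """Build the road-trip summary email; sending it via SMTP is omitted."""
    msg = EmailMessage()
    msg["Subject"] = "Your road trip plan"
    msg["From"] = "no-reply@example.com"  # placeholder sender address
    msg["To"] = recipient
    msg.set_content(directions_text)     # plain-text body with the directions
    return msg

# Placeholder recipient and body for illustration:
msg = build_trip_email("traveler@example.com", "Head north on Main St...")
print(msg["Subject"])
```

In JavaMail the equivalent object is a `MimeMessage` sent through a `Transport`; the message-building shape is the same in both languages.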
Challenges I ran into
I had to use a considerable amount of JavaScript to get the appropriate data for the server and get the maps working; as someone who is not comfortable with JavaScript, this was quite difficult. As well, when using the JavaMail API I got a strange ClassNotFoundException that took me quite some time to fix.
Accomplishments that I'm proud of
Creating a potentially useful app using technologies I am not too familiar with; this has been an incredible learning experience, getting my hands dirty with web development and REST APIs.
What I learned
How to use some of the many APIs provided by Google, how to send mail programmatically via Java, and some parts of JavaScript applicable to web development.
What's next for Take me Somewhere
Adding real user authentication with a database and JDBC, and adding waypoint functionality to the directions page.
Built With
bootstrap
google-directions
google-maps
google-streetview
java
java-mail-api
javascript
jsp
tomcat
Try it out
takemesomewhere.herokuapp.com
github.com | Take me Somewhere | With it being summer time and Covid-19 restrictions loosening in some areas why not find something fun to do! Take me Somewhere provides an easy to use interface to plan a road trip | ['John Amiscaray'] | [] | ['bootstrap', 'google-directions', 'google-maps', 'google-streetview', 'java', 'java-mail-api', 'javascript', 'jsp', 'tomcat'] | 13 |
10,413 | https://devpost.com/software/we-sea-you | Inspiration
We Sea You is an inclusive application aimed at visually impaired users. It intends to give visually impaired individuals the experience of the underwater world by translating their surroundings to speech. This will allow them to carry out recreational activities like scuba diving and also give them access to more jobs (such as that of a marine researcher) in the scientific community.
I was inspired to create this app on reading the article 'To Sea With a Blind Scientist' by Geerat J. Vermeij, a nationally recognised marine biologist. In his article, he questions 'could a blind person ever hope to be a scientist?' On reading the article, I realised that there is a lack of opportunities in the scientific research community for visually impaired individuals, and this motivated me to bridge this gap by creating We Sea You.
What it does
The app detects the surroundings and translates it into speech making it possible to explore the underwater environment. I have made sure the app is realistic by considering the following points -
The ML model will be available offline on the app for object detection.
There exists mobile technology as well as speakers that are waterproof and can be fitted into a full face mask.
Hardware considerations reference
As an alternative, visually impaired users can use submersibles to explore the sea.
How I built it
The We Sea You app is built using Flutter for Android, although I plan to extend its functionality to iOS as well. The app uses a TensorFlow Lite API to detect objects in the scene (with the help of the tflite plugin). The labels are used to generate text, which is converted to speech using a Flutter plugin for text to speech (flutter_tts). The flutter_tts plugin also supports multiple languages, so the use of the app is not restricted to English-speaking users. The ML model used for object recognition is Mobilenet_V2_1.0_224_quant, and the model is hosted on Firebase.
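The pipeline described above (detect objects, turn the labels into text, speak the text) happens in Dart inside the app. As an illustrative sketch only, the label-to-sentence step might look like this in Python; the detection record format and the 0.5 confidence threshold are assumptions:

```python
# Sketch of the detection-to-speech step: drop low-confidence detections
# and build a sentence to hand to a text-to-speech engine.
# The record format and the 0.5 threshold are illustrative assumptions.

def describe_scene(detections, min_confidence=0.5):
    """Turn object-detection results into a short spoken description."""
    labels = [d["label"] for d in detections if d["confidence"] >= min_confidence]
    if not labels:
        return "Nothing recognised nearby."
    return "I can see " + ", ".join(labels) + "."

# Hypothetical detections for one camera frame:
frame = [
    {"label": "sea turtle", "confidence": 0.91},
    {"label": "coral", "confidence": 0.74},
    {"label": "diver", "confidence": 0.32},  # below threshold, filtered out
]
print(describe_scene(frame))
```

The resulting string is what a TTS plugin such as flutter_tts would then speak, in whichever language the user has selected.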
Challenges I ran into
Translating surroundings from a real-time video to speech, frame by frame has been a challenge, especially since I didn't have much experience with Flutter before creating this project. Furthermore, managing deadlocks between concurrent processes such as image streaming, ML object detection and speech generation was quite demanding and is something I am still working on. Although, this has given me a better understanding of what deadlocks are and how to manage them.
Finally, I would love to test my app underwater and I am upset I haven't been able to because of the COVID-19 situation :(
Accomplishments that I'm proud of
Even though I have not been able to complete the demo video for the app, I am proud of what I have learned and accomplished in the past 48 hours, but I do hope to get better at managing my time in the upcoming hackathons.
(I will be posting a link to the video soon :)
What I learned
Stick to coding in python :)
What's next for We Sea You
I plan on increasing the functionality of the app by including a speech-to-text API to allow users to record their experiences. I also plan on training my own ML model with images of different marine creatures. Another possible extension is including a dataset of descriptions for each of the detected objects, in order to give the user more information about the characteristics and nature of the object.
Built With
firebase
flutter
flutter-tts
mlkit
tensorflowlite | We Sea You | An inclusive app for marine researchers with visual impairments. | ['Manya Girdhar'] | [] | ['firebase', 'flutter', 'flutter-tts', 'mlkit', 'tensorflowlite'] | 14 |
10,413 | https://devpost.com/software/trash-hunt-kxjlv7 | Start Screen
Game start
trash spawns
Sharks
Game Over
Sound FX
Inspiration
As the oceans are being polluted with litter from the beaches, we decided to create a game so that people not only have fun but also understand that it's their responsibility to keep oceans and other water bodies free from trash.
What it does
It's a simple game where the player cleans the ocean by collecting trash and dodging sharks. The speed and frequency of the sharks keep increasing as the player's score increases.
How we built it
We used the pygame module of Python to develop this game.
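The score-based difficulty ramp (faster, more frequent sharks) can be sketched with two small helpers; the base values, increments, and caps below are illustrative assumptions rather than the game's real tuning:

```python
# Illustrative sketch of score-based difficulty scaling for a pygame game.
# Base values, increments, and caps are assumptions for demonstration.

def shark_speed(score: int) -> float:
    """Sharks move faster as the score rises, capped to stay playable."""
    return min(4.0 + 0.1 * score, 12.0)

def shark_spawn_interval_ms(score: int) -> int:
    """Sharks spawn more often as the score rises, down to a floor."""
    return max(2000 - 50 * score, 500)

print(shark_speed(0), shark_spawn_interval_ms(0))
```

In the game loop these values would feed the sharks' per-frame movement and the spawn timer (e.g. via pygame's clock), so difficulty rises smoothly with the score.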
Challenges we ran into
As all the members were new to game development, it took us some time to understand the mechanics. YouTube tutorials and official docs were really helpful in getting us started quickly. It was also a challenge to come up with music to fit the mood of the game, having never made music for a game before.
Accomplishments that we're proud of
We are proud of creating a complete game with original music within 2 days, being complete novices in game development and game music composition.
What we learned
We learned the pygame module of python.
What's next for Trash hunt
We are thinking about making a donation platform based on our game so people can donate to organizations that keep oceans clean.
Music
https://soundcloud.com/mumblejumbo/trash-hunt-game-music
Built With
pygame
python
Try it out
github.com | Trash hunt | trash hunt game | ['Shikhar Sharma', 'Taehyeon Kim', 'Daniel Benton', 'Donghyeon Kim', 'Anshika Mishra'] | [] | ['pygame', 'python'] | 15 |
10,413 | https://devpost.com/software/b-p-b-bird-protection-buoy | Controller
Buoy
Everyone’s Role
Aryam Sharma
(Disc. @imaryamsharma#8716) - He was our programmer and also did some research to identify and solve the problem. He programmed and created the simulation of the solution.
Matthew Simpson
(Disc. @inferno#8410) - He was our electronics engineer. He created the circuits and found what parts our speaker system would actually need. He also voiced and created the demo video.
Shahmeer Khan
(Disc. @PotatoTheTomato#4133)- He was our main designer and modeler. He created the models and renders of the buoy itself.
Ishpreet Nagi
(Disc. @Bapple_Boi#5294)- He was our documenter and main researcher. He documented all the work at the end and did the majority of the research in finding the problem and solution.
The Issue
Oil spills occur all around the world, unfortunately, birds are one of the most affected animals due to these oil spills. According to Ornithology.com, “Each year over 500,000 birds die worldwide due to oil spills.” (Lederer, D). The oil from an oil spill floats on the surface of the water, and birds are unable to distinguish the oil from the water. So when they come and sit on top of the water for either food or for rest, they get covered in the substance. Once covered with oil, the warm-blooded vertebrates are rendered flightless, their waterproofing ability is impaired, and they become more sensitive to extreme temperatures (International Bird Rescue).
Our Vision
Bird Protection Buoys (B.P.B) are small buoys that would be placed around an oil spill. Once in position, they would emit a frequency that deters the birds most susceptible to the oil. The buoys will continue to deter the birds from the oil spill while workers try to clean up the mess. Once the mess is cleared, the buoys can be collected and reused.
How it works
Each of the buoys would contain an Arduino Uno R3 as its central processing unit, and all buoys would be toggleable via a controller that they wirelessly connect to using NRF24L01 transceivers; this allows the buoys to be turned on and off without a need to manually adjust each buoy.
The Arduino would receive power from the rechargeable battery on the buoy itself that will be getting charged from solar panels also located on the buoy. The Arduino would contain the code which would instruct the omnidirectional speaker located at the center of the buoy to emit the 4000Hz frequency. The buoys will be placed 30m apart from each other around the oil spill and would surround the spill zone.
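The 30 m spacing implies a simple count of how many buoys a spill needs: divide the perimeter by the spacing and round up. A small sketch, assuming an idealised circular spill zone (real spills are irregular, so this is only an estimate):

```python
import math

BUOY_SPACING_M = 30  # buoys are placed 30 m apart around the spill

def buoys_needed(spill_radius_m: float) -> int:
    """Buoys required to ring a roughly circular spill of the given radius."""
    perimeter = 2 * math.pi * spill_radius_m
    return math.ceil(perimeter / BUOY_SPACING_M)

print(buoys_needed(100))  # e.g. a 100 m-radius spill
```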
Challenges
-Time management was a huge issue during the construction of this project as we had many personal problems constantly coming up, so it was hard to get our parts done on the decided time.
-Agreeing on a solution was proving rather difficult for us as we had so many ideas. We could not see which one would work the best and would prove the most useful.
-An issue we encountered in creating circuits was that we were having a hard time ensuring that data was being transferred between the buoys and the main controller in charge of turning them on and off.
-Another issue we encountered in creating the circuits was that we could not get all components to get power from the power source and so had to mess around with the circuit and experiment with different wiring methods.
-It was difficult to emulate all of the electronics and hardware for the project due to limited access to emulators in current online status.
-It was our designer's first time 3D modeling water, so it was difficult to get the hang of at first
Our Accomplishments
-We were able to complete as much as we did in the small time frame we had and with all the personal issues coming up constantly.
-We were able to create a functioning simulation of our solution using Pygame.
-We were able to work as a team to create a solution that solves a real-world issue
What we Learned
-We learned much about the impacts of oil spills on the animal life of its surrounding area, we did not know the amount and variety of animals that are impacted by oil spills.
-We learned a lot about tools we had previously used and new tools, for example for some teammates they accessed Pygame and certain tools for the first time.
-We learned how to better communicate with one another during these chaotic times. We are able to better understand each other and their work schedules so we can create time frames when we could all log on and collaborate.
-We learned how to structure and create a demo video of our solution, allowing us to communicate our idea using different media
Resources
Lederer, D. (2018, April 22). Birds and Oil Spills. Retrieved July 18, 2020, from link
International Bird Rescue. (n.d.). Retrieved July 18, 2020, from link
Built With
autodesk-fusion-360
pygame
python
tinkercad
videoleap
Try it out
github.com | B.P.B. (Bird Protection Buoy) | Protecting birds from the extreme damage and harm that comes from oil spills all around the world. | ['Matthew Simpson', 'SK - 11EA - Jean Augustine SS (2612)', 'Ishpreet Nagi', 'Aryam Sharma'] | [] | ['autodesk-fusion-360', 'pygame', 'python', 'tinkercad', 'videoleap'] | 16 |
10,413 | https://devpost.com/software/smart-door-opening-system | Inspiration
In this time of pandemic, the less contact the better, and yet one has to keep going out for groceries or open the door for deliveries. This hack is supposed to prevent people from having to use keys, touch the door lock, or even go near the door to unlock it.
What it does
It is a Raspberry Pi + Telegram controlled bot that unlocks the door when the user asks it to.
How I built it
I built it using a Raspberry Pi 4, Telegram, and servo motors.
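Driving a hobby servo from the Pi mostly comes down to mapping a target angle to a PWM duty cycle. The sketch below assumes a common 50 Hz servo driven with 1–2 ms pulses; the project's actual servo, wiring, and pulse range may differ:

```python
# Map a servo angle (0-180 degrees) to a PWM duty cycle at 50 Hz.
# Assumes a common hobby servo driven with 1 ms (0 deg) to 2 ms (180 deg)
# pulses inside a 20 ms period, i.e. a 5% to 10% duty cycle.

def angle_to_duty_cycle(angle: float) -> float:
    """Return the duty cycle (percent) for the requested servo angle."""
    if not 0 <= angle <= 180:
        raise ValueError("angle must be between 0 and 180 degrees")
    pulse_ms = 1.0 + (angle / 180.0) * 1.0  # 1 ms .. 2 ms pulse width
    return pulse_ms / 20.0 * 100.0          # percent of the 20 ms period

print(angle_to_duty_cycle(90))
```

On the Pi, the returned percentage would typically be fed to a GPIO PWM channel (for example via RPi.GPIO's `PWM(...).ChangeDutyCycle(...)`) on the servo's signal pin when a Telegram "unlock" command arrives.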
Challenges I ran into
Integrating various services is not always an easy thing to do, but it was ultimately fun to keep getting around whatever came up.
Accomplishments that I'm proud of
I am proud that I was able to integrate everything especially the hardware without all the proper tools.
What I learned
I learned how to automate and use the Raspberry Pi GPIO pins remotely. I had only used MQTT before.
What's next for Smart Door Opening System
I hope to add a UV sanitising unit in case it is a delivery.
Built With
arduino
node-mcu
python
raspberry-pi
Try it out
github.com | Smart Door Opening System | Unlocking the door to unlock from anywhere | ['Ritvi Mishra'] | [] | ['arduino', 'node-mcu', 'python', 'raspberry-pi'] | 17 |
10,413 | https://devpost.com/software/covid19-helping-app | Welcome Screen
Nearest COVID-19 dedicated hospital and the symptoms of COVID-19 (one of the ways to get the symptoms)
COVID-19 cases in user's locality info and test centres near the user
Daily updates about COVID-19 and bed availability information in nearby hospitals.
Fact check sample and COVID-19 stats data
Inspiration
The Google Assistant app is intended to be used by people across the globe to access accurate information about the COVID-19 pandemic and spread awareness about the claims that are being forwarded globally on different social media platforms, either to create hatred or to mislead people. Some platforms are also misleading people with wrong statistics on the number of cases. This all led to the idea of making an interactive app, "CORONA HELP ASSIST".
Sometimes we reach a hospital only to find that there are no beds available and we have to move to another hospital, which can cost a life. The app also solves this problem:
you can simply ask about beds available in the nearest hospital and the app will guide you!
What it does
The app can be used by voice and typing and through mobile phones as well as Google home devices.
The app has been trained to work on both text and voice input. It answers most of your questions about the COVID-19 virus. It can also cross-verify the claims that you get in WhatsApp forwards or on other social media platforms, so if you are not sure whether a fact provided to you is true or fake, you just need to ask for the fact check!
It also answers frequently asked questions about COVID-19, like what are the symptoms of COVID-19, Nearest COVID-19 dedicated hospital me, and more...
It solves a difficult problem related to hospital bed availability: you can ask about bed availability in the nearest hospital, the nearest hospital where beds are available, or bed availability in hospital X. It also notifies you of the latest updates on COVID-19 (only if the user wants it!).
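The bed-availability query above boils down to picking the closest hospital whose free-bed count is positive. A sketch of that selection; the record fields and sample data are illustrative assumptions, not the app's real data:

```python
# Pick the nearest hospital that still has free beds.
# The record format and sample data are illustrative assumptions.

def nearest_hospital_with_beds(hospitals):
    """Return the closest hospital record with at least one free bed, or None."""
    available = [h for h in hospitals if h["free_beds"] > 0]
    if not available:
        return None
    return min(available, key=lambda h: h["distance_km"])

# Hypothetical data for one user's locality:
hospitals = [
    {"name": "City Hospital", "distance_km": 2.1, "free_beds": 0},
    {"name": "General Hospital", "distance_km": 4.8, "free_beds": 12},
    {"name": "District Hospital", "distance_km": 7.5, "free_beds": 3},
]
print(nearest_hospital_with_beds(hospitals)["name"])
```

Note the closest hospital is skipped when it has no free beds, which is exactly the situation the inspiration section describes.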
The app also has the potential to dial the national COVID-19 helpline number in case of any emergency.
How we built it
We have made it using Google Action Console. We used APIs from COVID19INDIA.ORG. We used different services offered by Google Cloud Platform. There were a lot of issues at the start but we fixed everything.
Challenges we ran into
We were new to developing apps for Google Assistant, so it was a challenge for us to tackle all the issues we got.
We had problems with the APIs at first, and once we resolved those, we had problems deploying the app for testing.
However, we were able to complete the project within 24 hours from the time of ideation.
Accomplishments that we're proud of
Completing the project within 24 hours from the time of ideation phase of our group.
Having first time hands-on experience of Google Action Console.
Developing all the intents and scenes easily.
We made it dynamic such that it would look as if you are talking to a real person and it uses different word at different time on the same question.
Using APIs
Implementing webhooks.
What we learned
We were extremely new to developing a Google Assistant app, so everything we learnt was new to us. Along with the basics, we also learnt how to make the assistant's speech clearer and make it sound like a real person. We also trained it to be dynamic, meaning it uses different words when asked the same question multiple times.
What's next for CORONA HELP ASSIST
It has not been released yet! We will have to fully develop it and add more functions before submitting it for review. After that, it will be available for everyone.
THE APP ISN'T AVAILABLE IN GOOGLE ASSISTANT FOR EVERYONE AS OF NOW, AS GOOGLE HAS NOT REVIEWED IT YET!
Built With
google-actions
google-assistant
google-cloud
javascript
node.js | CORONA HELP ASSISTANT | Providing accurate information about COVID-19 pandemic to users, giving correct stats and bed availabalitiy in hospital information to users. | ['Manan .', 'Jay Patel', 'Rahul Sinha'] | [] | ['google-actions', 'google-assistant', 'google-cloud', 'javascript', 'node.js'] | 18 |
10,413 | https://devpost.com/software/stock-vakri-dq17i2 | Inspiration
Great
What it does
Loved it
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Stock | Stock | project | ['Hrithik Sahu'] | [] | [] | 19 |
10,413 | https://devpost.com/software/coviddataproject | First half of the main page, this explains why this project can help humanity and our goal.
Second half of the main page, this displays information and charities users can look into to help research Covid-19 or aid with struggles.
Statistics of global cases of Covid-19 that gets updated daily through the use of an API.
Survey that gives us the input of the people, their concerns and experiences
covidDataProject
This project was bootstrapped with
Create React App
.
Available Scripts
In the project directory, you can run:
npm start
Runs the app in the development mode.
Open
http://localhost:3000
to view it in the browser.
The page will reload if you make edits.
You will also see any lint errors in the console.
npm test
Launches the test runner in the interactive watch mode.
See the section about
running tests
for more information.
npm run build
Builds the app for production to the
build
folder.
It correctly bundles React in production mode and optimizes the build for the best performance.
The build is minified and the filenames include the hashes.
Your app is ready to be deployed!
See the section about
deployment
for more information.
npm run eject
Note: this is a one-way operation. Once you
eject
, you can’t go back!
If you aren’t satisfied with the build tool and configuration choices, you can
eject
at any time. This command will remove the single build dependency from your project.
Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except
eject
will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.
You don’t have to ever use
eject
. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.
Learn More
You can learn more in the
Create React App documentation
.
To learn React, check out the
React documentation
.
Code Splitting
This section has moved here:
https://facebook.github.io/create-react-app/docs/code-splitting
Analyzing the Bundle Size
This section has moved here:
https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size
Making a Progressive Web App
This section has moved here:
https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app
Advanced Configuration
This section has moved here:
https://facebook.github.io/create-react-app/docs/advanced-configuration
Deployment
This section has moved here:
https://facebook.github.io/create-react-app/docs/deployment
npm run build
fails to minify
This section has moved here:
https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify
Built With
css
html
javascript
Try it out
github.com | covidDataProject | Covid-19 has taken the world by storm. Wearing masks and social distancing has worked greatly but cases keep on rising. This project will help study the patterns of the virus and its hotspots. | ['Kervens Jasmin', 'mb natavio', 'Leonardo Matone'] | [] | ['css', 'html', 'javascript'] | 20 |
10,413 | https://devpost.com/software/flabo-gho4y6 | Inspiration
Online shopping has come a long way, and people tend to prefer it these days. But what about the drawbacks some people face, like not knowing which product to buy, which offers are available, etc.? In retail shops there are actual humans who recommend products, take returns if there's anything wrong, tell us about available offers, and so on. These little HUMAN things are not available in online stores. We tried to bring that human touch to online stores with Flabo and fill this gap.
Built With
css
html
javascript
Try it out
mohinishteja.github.io | Voice | Voice | ['Mohinish Teja', 'Abhijith Gunturu', 'vibhav chirravuri', 'Sampath Puvvada'] | [] | ['css', 'html', 'javascript'] | 21 |
10,413 | https://devpost.com/software/code-camera-app | Inspiration
It's quite common that we end up writing lines of code on paper during our thought process. I always wished for a good way to give life to code written on paper. However, typing everything out is a big, hectic process. That's where our app CodeCamera comes to the rescue.
Now you don't need to type all your handwritten code onto your PC. CodeCamera will automatically detect the language in which you have written the code, convert it ready for execution, and let you directly save the code as a file.
What it does
As soon as you open the app, you can see two options on the home page.
You can either upload a previously captured image of the handwritten code.
Or you can start capturing a new image using the Capture Code button.
After providing permission to access Camera, you can start capturing your handwritten code.
There is an option to turn on flash and also you can adjust focusing using autofocus button.
Once you fit your camera frame and click capture, the captured image is processed and using the specifiers it will detect the language of the code written.
Once you click on the decode button, you can see the converted code on the screen.
You can also save the code as a file by specifying a name.
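The idea of detecting a language from its "specifiers" can be illustrated with a toy sketch. This is not the app's actual logic (which runs on OCR text from ML Kit); the token table and function names here are hypothetical, just to show the matching step:

```javascript
// Toy specifier-based detection: map telltale tokens to a language.
// (Illustrative only; the real app infers this from OCR'd text via ML Kit.)
const SPECIFIERS = [
  { lang: 'c', token: '#include' },
  { lang: 'java', token: 'public static void main' },
  { lang: 'python', token: 'def ' },
];

// Return the first language whose telltale token appears in the source.
function detectLanguage(source) {
  const hit = SPECIFIERS.find(({ token }) => source.includes(token));
  return hit ? hit.lang : 'unknown';
}

console.log(detectLanguage('#include <stdio.h>\nint main() { return 0; }')); // 'c'
```

A real detector would weigh many tokens instead of taking the first match, but the lookup shape stays the same.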
How I built it
I built the whole app in Android Studio using the native Android SDK.
I used Google Cloud Platform's ML Kit and its Vision API for image processing.
Challenges I ran into
I started building the project very late, so time was short.
Image processing was a bit complicated initially. However, using the Vision API provided by Google Cloud Platform's ML Kit, I was able to get the work done in time.
What I learned
Image processing is something new that I learnt this time, and it is amazing.
What's next for Code Camera App
I would like to add live onscreen code conversion in the app. Also, I look forward to adding an inbuilt execution system for the files captured and decoded.
Built With
android
firebase
google-cloud
image-processing
ml-kit
Try it out
github.com | Code Camera App | Convert Handwritten Code into executable file using your Mobile Camera. Give Life to your Handwritten Code. | ['Nitish Gadangi', 'Sainag Gadangi'] | [] | ['android', 'firebase', 'google-cloud', 'image-processing', 'ml-kit'] | 22 |
10,413 | https://devpost.com/software/odis-obstacle-detection-inside-the-sea | GIF
Abstract
One of the problems faced by underwater vehicles and ships is detecting incoming obstacles to avoid a collision, which might lead to hazards. Thus, this project focused on the design of an underwater obstacle detection system using a sonar sensor.
This project uses the sonar sensor for distance detection, to determine the distance between the sensor and an obstacle, and a camera to predict the type of obstacle using deep learning.
Introduction
Every day, many boats and ships run on the water. Most ships have well-defined navigation systems with radar technology, but most boats don't! Also, these navigation systems detect what is above the water, not what is underwater.
We find many cases where boats or ships sink due to underwater obstacles like huge ice blocks, rocks, or other underwater hindrances. Yet we found no devices or technologies that accurately detect these objects and warn the pilot. The devices that do exist are expensive, and usually not affordable for boatmen! Considering this problem, we have come up with a unique and low-cost solution.
Introducing,
ODIS
(Object Detection inside the Sea): a smart device that detects underwater obstacles with the help of underwater radar and image processing. The device detects underwater obstacles using radar technology; with the help of image processing we can learn what type of object it is, and image processing with deep learning increases the accuracy of the system.
Components
Arduino Uno (for now later in the actual model we can use Raspberry Pi for the whole system)
Underwater radar sensor/ultrasonic sensor
Camera with thermal image processing
Laptop/PC/Mac (a system where we can run the model for now).
Radar Sensing
- The radar sensing will be based on simple ultrasonic technology. Ultrasonic waves can travel through water. It works on the following principle:
- Using this technology, we can detect underwater obstacles. The sensors will be fitted at the four corners of the ship/boat for better accuracy.
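The distance arithmetic behind ultrasonic ranging is simple enough to sketch. This is an illustrative JavaScript sketch (not the project's Arduino code), assuming a constant speed of sound in seawater of about 1500 m/s:

```javascript
// Assumed constant; the real speed of sound in seawater varies with
// temperature, salinity, and depth (roughly 1450-1570 m/s).
const SPEED_OF_SOUND_WATER = 1500; // m/s

// An ultrasonic pulse travels to the obstacle and back, so the one-way
// distance is half the total path covered in the round-trip time.
function echoDistanceMeters(roundTripSeconds) {
  return (SPEED_OF_SOUND_WATER * roundTripSeconds) / 2;
}

console.log(echoDistanceMeters(0.2)); // an echo after 0.2 s means the obstacle is roughly 150 m away
```

On the actual hardware, the same arithmetic would run on the pulse's measured time of flight.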
Block Diagram
Workflow
Image Processing
- Image processing with deep learning has great scope for detecting objects accurately. It can also detect what type of object it is and then respond accordingly.
- We'll be using a Keras model for the prediction, and the images are trained using Google's Teachable Machine. It will detect objects under the water and report whether there are obstacles or not.
In details
The radar sensor will create a virtual underwater map that will be displayed on the dashboard. The camera will provide live feedback of the underwater scene, and the algorithm will scan for obstacles and give an accurate prediction before warning.
If there is an obstacle in front of the boat/ship, it will give a pre-warning to the pilot. The warning will be via buzzer and lights!
- The combined system of radar sensing and object detection will make the whole setup more accurate.
- In this way, ODIS can control the hazards due to obstacles, which will save many lives and costs too!
Testing Link:
Link
Challenges we ran into
Due to the COVID-19 pandemic, we didn't get the chance to go out and test our work. We faced major problems in the image processing part, where we had to work with a mobile phone for photo testing.
Accomplishments that we're proud of
We are proud that we created the system which will help many navigators and boats during their journey!
What we learned
During the work, we found many exciting things about the ocean and ocean life. Also, we learned TensorFlow and Keras during our project!
What's next for ODIS (Obstacle Detection Inside the Sea)
The next step for ODIS is to implement and test it. Real-time data will help us get more accurate values and results!
Built With
arduino
javascript
machine-learning
python
tensorflow
Try it out
github.com
piysocial-india.github.io | ODIS (Obstacle Detection Inside the Sea) | "A smart autonomous system to detect obstacles under the water and alert the pilot to avoid hazards and accidents." | ['Saswat Samal', 'Sanket Sanjeeb Pattanaik'] | ['Sustainability Track'] | ['arduino', 'javascript', 'machine-learning', 'python', 'tensorflow'] | 23 |
10,413 | https://devpost.com/software/captainsquest | Inspiration
We wanted to build a game that gets people out of the house and gives them a sense of adventure. The game is very similar to Pokémon Go.
What it does
Captains Quest uses the user's location to determine whether or not they are at a location (treasure chest, etc.).
Users can spend time visiting locations to gain coins, treasure, etc.
How I built it
We used Android Studio to create an Android app. Google Cloud was used to access the Google Maps API, giving us users' locations and access to the map.
Challenges I ran into
We had a lot of bother trying to get the permissions to access users' locations working. We also had problems placing the items (treasure chests, etc.) in the correct places, and we struggled to implement the API functionality as well.
Accomplishments that I'm proud of
This was the first time either of us had worked on an Android app first-hand, so we are very proud of creating an application. This app is also written in Kotlin, which neither of us had past experience with. We are also proud of getting the map working with treasure chests and other items.
What I learned
We learned how to use Kotlin which was a bit of a learning curve but rewarding. We also learned a lot about how an android application fits together and how to use google maps API.
What's next for CaptainsQuest
We would like to add more locations, as well as implementing the daily quests and more interesting graphics for when the user reaches the location.
Built With
android-studio
google-maps
kotlin | CaptainsQuest | Join Captains Quest for an adventure near you! | ['shanna balfour', 'Nick Deane'] | [] | ['android-studio', 'google-maps', 'kotlin'] | 24 |
10,413 | https://devpost.com/software/ship_anticollision_system | The Simulation
These are the two Arduinos used (also the only two that I own)
Inspiration
I've become fascinated with autonomous ships - mainly due to how they could visually look, but also on how they could revolutionise the logistic industry. It also seems like a logical stepping stone to an autonomous world.
However, ships are notoriously bad at manoeuvring. This, combined with poor weather conditions, means that ships are still colliding with each other.
Infrasound travels huge distances (even at low amplitudes), especially at sea. Further, it can't be heard by humans, which prevents discomfort. With machines whose turning circles are measured in nautical miles, this seemed like a perfect solution to the problem.
Doppler Shifts of sound waves are also measurable, allowing for more information to be gathered.
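The Doppler relationship can be sketched numerically. This illustrative JavaScript sketch (not part of the project's C#/Arduino code) assumes the simplest case, a stationary receiver with the source closing directly on it, and takes 1500 m/s as an assumed constant speed of sound in seawater:

```javascript
const C_WATER = 1500; // assumed speed of sound in seawater, m/s

// Frequency heard by a stationary receiver when the source closes at speed v.
function observedFrequency(fSource, closingSpeedMs) {
  return (fSource * C_WATER) / (C_WATER - closingSpeedMs);
}

// Invert the relation: recover the closing speed from the measured shift.
function closingSpeed(fSource, fObserved) {
  return C_WATER * (1 - fSource / fObserved);
}

// A 100 Hz infrasound tone from a ship closing at 15 m/s is heard just
// above 101 Hz; inverting the shift recovers the closing speed.
const heard = observedFrequency(100, 15);
console.log(closingSpeed(100, heard));
```

This is why measuring the shift of a known transmitted frequency yields extra information beyond echo timing alone.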
What it does
At the moment, two ships are simulated on a collision course. Virtual infrasound waves carrying serial information are transmitted, with all the encoded information coming from Arduinos. The sound is virtually transmitted through an animation, and upon collision with the other ship, the information is passed to that ship's Arduino. Upon receiving the transmission, a response is transmitted, allowing distance to be measured (i.e. echolocation).
How I built it
The animated aspects are simulated in C# using Blend for Visual Studio 2019. This includes the travel of the ships, to scale, and the transmission of the sound waves. The sound waves were slowed by a factor of 100, since the animation was otherwise too quick for the collision event to be triggered.
Challenges I ran into
Unfortunately, I do not have two ships on which I could test this. Further, I do not have any speakers or microphones for the Arduinos, so all real-world occurrences had to be simulated virtually.
Animating on Blend proved a challenge. I could not figure out how to use the storyboard as suggested online, and had to make do with the
DoubleAnimation
object instead.
C/C++ has far fewer functions available than I'm used to; even things like length() aren't there. Most notable is the difficulty I had trying to declare a 2D global string array (although in hindsight this may have been a memory issue instead). I overcame this by creating a 2D char array with a pointer (although I am aware this probably was not best practice). I also created a small function designed to reduce the size of my arrays by cleaning out old data.
Running out of memory on the Arduinos occurred several times, which meant I had to be conscious of how big the program was. Not something I've had to do before.
The serial communication appeared to be very poor, with both delays and loss of information. Never having used serial connections before, I am unsure how to fix this, so it is unfortunately still broken.
Accomplishments that I'm proud of
Finally decided to try out OOP, despite my comp sci friends trying to get me to for ages! It worked really well!
I've also got the basics of Blend now, which should hopefully help in the future!
I've been wanting to try out using an Arduino for a while now, and have finally done it! Despite getting annoyed at the low-level nature of the programming language, I think this will definitely help me get a better understanding of how higher level languages work.
What I learned
OOP, Arduino, Blend, Serial Communication
What's next for ship_anticollision_system
I need to figure out how to fix the serial transmission issue. After this, I need to get the Arduinos to be able to communicate with the program to adjust ship courses. Then, I might even ...
CREATE MINI SHIPS TO REPLICATE IT!
Built With
arduino
blend
c#
visual-studio
xaml
Try it out
github.com | Ship Anti-collision System | Aircraft have emergency systems for collision avoidance. With ships however, this issue has been greatly ignored. In an era of potential driverless tankers, this would be hugely beneficial. | ['rjb255 Brown'] | [] | ['arduino', 'blend', 'c#', 'visual-studio', 'xaml'] | 25 |
10,413 | https://devpost.com/software/ahoy-scallywags | The Main Screen
Inspiration
Sailors are alone on the high seas for months at a stretch. Many a sailor has expressed their lonesomeness in their journals at sea. We came up with an innovative way for sailors on different ships to chat with each other if they're in the same stretch of sea!
Our inspiration, Jack Sparrow.
What it does
Ahoy Scallywags enables sailors to communicate with other sailors on nearby ships (a radius of ~10 km, or about 5.4 nautical miles). Sailors can hang out, sing pirate songs across ships, or form a floating nation!
It uses the Fetch API to search for geofences near the sailor with Radar.io, and if it doesn't find any, it creates a geofence around the user. All the users in a geofence will be able to chat.
The app shows you a map where you can see all the other sailors. Once close to another ship it switches to a dual interface where it creates a chat room alongside the map where you can talk to your fellow sailors.
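Radar.io performs the geofence matching server-side, but the underlying proximity test is a great-circle distance check. Here is a minimal JavaScript sketch of that check; the 10 km radius and the helper names are our own illustration, not Radar.io's API:

```javascript
// Great-circle (haversine) distance between two lat/lng points, in km.
function haversineKm(lat1, lon1, lat2, lon2) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const R = 6371; // mean Earth radius, km
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Two sailors share a chat room if they sit inside the same ~10 km circle.
function withinChatRange(a, b, radiusKm = 10) {
  return haversineKm(a.lat, a.lng, b.lat, b.lng) <= radiusKm;
}

console.log(withinChatRange({ lat: 0, lng: 0 }, { lat: 0, lng: 0.05 })); // true (~5.6 km apart)
```

Delegating this check to a geofencing service avoids re-running it client-side for every pair of users.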
How we built it
Ahoy Scallywags is built with JavaScript, HTML, and SCSS with the chat server in Node.js and Socket.io. It connects users based on geofences created in Radar.io.
Challenges we ran into
We were new to almost every single technology we used, like Radar.io API, Here API, Node.js, Socket.io, and even MongoDB.
Making HTTP requests to the Radar.io API was tricky. Being new to Node.js and Socket.io, we found their documentation less than ideal. The learning curve was steep and treacherous. Being new to Node and MongoDB (and MongoDB Atlas) isn't a fun time, and neither is being completely new to the concept of APIs or HTTP requests.
Since we were brand new to a lot of these technologies, we had to spend a lot of time working through the errors, but in the end we managed to pull through!
Accomplishments that we're proud of
Some of us were completely new to most (if not all!) of the exotic stuff we used, yet were still able to develop an MVP. We learned tonnes about the APIs, technologies, cloud computing, and more. We were complete strangers before the hackathon, but we still managed to work cohesively and create a project!
What we learned
We learned so much about developing web apps with the Radar.io API, making HTTP calls in general and making a server with Node.js. We also learned how to make a real time chat with Socket.io.
What's next for Ahoy Scallywags!
We hope to add some new features to this project including voice chat!
Built With
css
html
javascript
mongodb
node.js
radar.io
socket.io
Try it out
github.com | Ahoy Scallywags! | Ahoy sailor, fear ye lonesome no more! Embrace ye pirate self and talk with nearby sailors! | ['Ashish Selvaraj', 'Vedvardhan Gyanmote', 'Adit Garg'] | [] | ['css', 'html', 'javascript', 'mongodb', 'node.js', 'radar.io', 'socket.io'] | 26 |
10,413 | https://devpost.com/software/emergenseateams | Inspiration
A lot of things inspired us. We wanted to make something that was impactful – it would help those who used it in extraordinary ways. We were inspired by the world around us – the ongoing protests, unrest, and general unease in the world today prompted us to find ways in which we could keep people safe in potentially dangerous situations.
What it does
In times of emergency, we want to make sure that those who should be aware have knowledge of our whereabouts and are kept apprised of our situation. With Emergensea Teams, we found a way to do that. A responsive web app designed to work on mobile and desktop, our app allows you to create unique "groups", set a safe meet-up location, and invite your friends to join. It will also make a list of safe-hubs within a certain radius of your location (safe-hubs are anywhere public). Anyone with a phone or laptop can tune in using the unique link, chat with you and other invited friends, and suggest safe zones and meet-up areas. The emergency button at the bottom of every page will send an emergency message and your location to all the invited friends in case you find yourself in a dangerous situation. This Find My Friends, Maps, and chat hybrid ensures that you will get assistance if you are unsafe.
If you were out at a protest, walking alone in an unsafe place, or even just meeting up with your group of friends – we believe that having all the functionality of our app, available on any device, can help.
How we built it
Front-end: React, HTML/CSS, Bootstrap
Back-end: NodeJS, JavaScript
Other technologies: Radar.io, Socket.io, Mapbox API, Heroku
Design: Figma, responsive design
Challenges we ran into
We are all pretty new to technologies like React and NodeJS, so it was a learning experience for all of us! We had to figure out the design components and how to give people the most intuitive experience, since the app could be used in situations with potential dangers. It was interesting to do things virtually, and we're all in different time zones, so waking up early was a must (collaboration and teamwork are key!).
Accomplishments that we are proud of
The design, functionality, and being able to come together during this time to work on something we all care about!
What we learned
React, Node, the importance of good UI/UX, a bunch of cool APIs, and just how much you can do with all of it!
What's next for emergenseaTeams
We're definitely going to add more features! Some include:
Suggesting only safe-hubs that are currently open (Google Maps API)
Making it into a progressive web app so that it can be downloaded!
Built With
css3
html
javascript
mapbox
node.js
radar.io
react
socket.io
Try it out
github.com | emergenseaTeams | Keeping you safe and your friends informed in emergencies! | ['Shirley Yang', 'Maggie Y.', 'Sharon Lin'] | [] | ['css3', 'html', 'javascript', 'mapbox', 'node.js', 'radar.io', 'react', 'socket.io'] | 27 |
10,413 | https://devpost.com/software/seafluent | What it does
This is a fun underwater-themed website to learn about the sea, sailors, and pirates to get you prepared for your next sea adventure. Take a quiz, score full points, and earn a badge to flex on your adventure.
How I built it
It's built using HTML5, CSS3, and JavaScript.
What's next for Seafluent
Deployment and connecting the domain
Domain name: seafluent.online
Built With
css
html
javascript | Seafluent | Test and Increase your sea knowledge | ['Lepakshi Agarwal', 'TheNameIsAbhi'] | [] | ['css', 'html', 'javascript'] | 28 |
10,413 | https://devpost.com/software/swimsafe | Inspiration
The theme of HackItShipIt reminded my brother and me of the few experiences we have had around water (living in landlocked regions our whole lives) - swimming at our local recreation center! We thought about all the times our parents would not let us go without supervision due to safety concerns. Thinking about this made us realize that countless other people may have the same thought process. The lack of comfort with letting a loved one go for a swim for reasons such as age, disability or, in some cases, risky behavior (if recorded in the past by the organization) is prevalent despite the presence of lifeguards. We wanted to find a solution that would not only provide more security around water and prevent accidents but also make the job of lifeguards easier.
What it does
This application allows lifeguards to track swimmers at their pool location based on the necessary care/monitoring level. Swimmers can check in and check out through the same app, updating the lifeguards' display in real time.
How we built it
We listed Registration/Sign-in, Check-in/Check-out, and the Lifeguard Display as the higher-level tasks that could be split up further. To enumerate:
Registration/Sign in: Pool location (the organization which, for example, may be a recreation center) can sign in and register on these pages. We used a function of Google Cloud's Firebase called Authentication which allowed us to register each pool location with an email and password. Kanav Bengani worked on this portion.
Checkin/Checkout: Members (customers) of the pool location can check in and check out here. There would be a tablet/phone set up outside the pool area (possibly manned by an employee) where members can take a picture of the swimmer (done with the image-picker API), enter their name, and select the attention level needed. All this gets sent to our Firebase database to be stored. As they exit the premises, the Checkout page allows them to enter their names and check out, deleting their database and Lifeguard Display entries. This portion was completed by Ricky Bengani.
Lifeguard Display: This feature is used by the lifeguards to decide which members of the pool location need to be monitored more closely. It obtains a snapshot of the pool's members from Firebase's Cloud Firestore (attributes include name, image, and monitoring level), as well as the image link associated with each in the Firebase storage bucket. It then reads this snapshot and filters the members based on their monitoring level. The data is then displayed on different pages of the bottom tab bar navigator. This UI portion was completed by Ricky Bengani, and the backend integration was completed by Kanav Bengani.
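The snapshot filtering described above boils down to grouping documents by their monitoring level. This plain-JavaScript sketch uses stand-in data; the field names and the `groupByLevel` helper are hypothetical, since the real app reads its documents from Cloud Firestore in Flutter/Dart:

```javascript
// Stand-in for a Firestore snapshot: an array of member docs.
const members = [
  { name: 'Ava', image: 'ava.jpg', level: 'high' },
  { name: 'Ben', image: 'ben.jpg', level: 'low' },
  { name: 'Cam', image: 'cam.jpg', level: 'high' },
];

// Bucket members by monitoring level so each tab renders its own list.
function groupByLevel(docs) {
  return docs.reduce((tabs, doc) => {
    (tabs[doc.level] = tabs[doc.level] || []).push(doc);
    return tabs;
  }, {});
}

console.log(groupByLevel(members).high.map((m) => m.name)); // ['Ava', 'Cam']
```

Grouping once per snapshot keeps each tab's render a simple list lookup instead of a repeated filter.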
Challenges we ran into
The biggest challenge we faced was performing calls to the Firebase database. To elaborate, in order to separate the profile blocks between the tabs in the Lifeguard Display, we had to parse through data obtained from a snapshot of the database documents. This proved to be quite difficult due to the complex logic and snapshot filtering syntax, which was needed to filter and sort the different members based on necessary monitoring level.
Accomplishments that we're proud of
We are extremely proud to have gotten all the elements successfully implemented in the given time. The UI that we had planned for came out as we had imagined, despite the detail it called for, thanks to the quality that the Flutter SDK provided. Overall, our workflow was very smooth and efficient!
What we learned
This project was definitely UI intensive. There were several components we aimed to implement within the short period of time. This meant that a good portion of our time would be spent planning and deciding how we would split up the work and integrate everything. It was an excellent learning experience in that regard. Furthermore, neither of us had worked extensively with the Flutter SDK prior to this. While we were pleasantly surprised by how easy it is to work with and how beautiful the UI it creates is, there was still a lot to learn. As for the backend, Firebase was something neither of us had much experience with either. One of the biggest things we learnt was the integration and calls between Flutter and Google Cloud's Firebase.
What's next for SwimSafe
We believe that this application has the potential to scale even bigger to provide security to organizations and guests/customers alike - these may include gyms, trampoline parks, and recreation centers. Companies like Costco and Sam’s Club which require membership could automate the member check in process with this functionality as well. Further technology implementations we plan to explore as we continue to develop include geofencing and image recognition APIs for automatic checkins, firewalls for added security, data collection and analysis with AI/ML, and more!
Built With
dart
firebase
flutter
image-picker
visual-studio
Try it out
github.com | SwimSafe | Lifeguards face the daunting task of ensuring water safety for everyone they monitor. How can they allocate their attention more effectively? Our app targets and efficiently addresses this challenge! | ['Ricky Bengani', 'Kanav Bengani'] | [] | ['dart', 'firebase', 'flutter', 'image-picker', 'visual-studio'] | 29 |
10,413 | https://devpost.com/software/chadburn | Home page
Example chatroom (you can join this room, phrase is 'secret_treasure', password is 'pass')
Inspiration
We wanted to create a way to privately communicate without companies or government organizations having access to your data (especially with recent events that have been occurring).
What it does
Users can create private chatrooms by setting a one-word phrase (think of this as the chat room name), password, and expiration date. The chatroom is then created, and anyone else with the correct phrase and password can decrypt, read, and send messages.
How we built it
We used HTML/CSS/JS for the front-end and PHP and MySQL for the back-end.
Challenges we ran into
PHP
Accomplishments that we're proud of
Both of us are proud of picking up PHP/MySQL during the hackathon. This is also my (Emily's) first hackathon, and the first time I've worked on a project at this pace/scale. I'm used to working slowly and improving my code little by little, so this was "unsailed waters" for me :)
What we learned
I (Emily) learned that I should keep my code organized when I'm writing it so I don't have to go back and fix it later. It was hard for me to go back and make changes because I kept writing new classes to display the conditions I wanted it to have.
I (Max) learned all about the ups and downs of PHP and MySQL, but it was a rewarding experience. I've done a lot of algos/scripts before this, so working on a webapp was a new (and rewarding) project.
What's next for Chadburn
Realistically, we are both going to take a break and get some sleep right after I finish writing this; however, we're both interested in polishing a couple of features added towards the end and continuing to maintain this into the future.
Built With
css
google-cloud
html
javascript
jquery
mysql
php
Try it out
chadburn.tech | Chadburn | Private & Secure Webapp Messaging | ['Emily Yao', 'Max Vogel'] | [] | ['css', 'google-cloud', 'html', 'javascript', 'jquery', 'mysql', 'php'] | 30 |
10,413 | https://devpost.com/software/landlubber-compass | Dope lookin octopus
Tight lookin seahorse
Inspiration
Well, I watched Surf's Up again; that movie is a gem, so I had to pay my respects.
Usage
Once you search for your specimen, the app presents a sleek picture and some info on the creature that you can show off to your homies... I mean crewmates.
How I built it
It took a lot of blood, sweat, and tears to get this far, but basically I took it as a chance to polish some web design. I started on the frontend using Vuetify's material library. It's super robust and offers mad customization. After finishing the base, I started adding interactions with button clicks and searching for fish databases that I might be able to scrape from. In the end I used Wikipedia, but there's a really cool REST API for
fishbase.org
that I was going to use at first. From there it was hammering away at buggy behavior on the backend.
Challenges I ran into
At the end I was really running into some trouble making HTTP/HTTPS requests. Radar only takes curl requests, and it took me forever to find out that those are just particularly formatted HTTP requests. I lost a lot of time just trying to get data out of APIs.
Accomplishments/What I learned
I'm proud that I was able to make use of tools people made available. In all my past hackathons I always thought I had to make everything from scratch and reinvent the wheel but it finally clicked for me that the open source community is literally about helping each other. I think that's pretty cool. I'm also proud since this is the first time I'll submit at a Hackathon!
What's next for Landlubber Compass
I really want to finish it up and next on the block is adding the aquarium page to the tooltip and after that its moving the map marker to the aquarium's location. Shouldn't be too hard...
knock on wood
Built With
css3
javascript
vue
Try it out
github.com | Landlubber Compass | The landlubber compass is a treasure map to a sea-creature loving, swashbuckling, no-nonsense having pirate! | ['Samuel Adetunji'] | [] | ['css3', 'javascript', 'vue'] | 31 |
10,413 | https://devpost.com/software/garbagedetector | Recycle.AI - The Smart Cleaner
Why we need Recycle.AI?
Recycle.AI Logo
RecycleAI
Introducing Our Initiative, recycle.AI!
Our Initiative
Recycle.AI is a multiphased initiative utilizing modern technologies such as Machine Learning, robotics, and game development to encourage the responsible usage and consumption of our natural resources around the world.
We noticed that most recyclable materials and products are not actually recycled but rather thrown into landfills. Note that roughly 80% of rubbish in landfills is recyclable, which is, honestly, way too much!
Our initiative focuses on the youth, households, organizations, and the government, aiming to encourage recycling amongst our local and the global community.
Youth Phase
Introduction
The youth phase is a recycling-based game where children can score points by correctly identifying if an object is recyclable or not, helping them understand recycling from a young age. The game is built for any setting and simply works by clicking the right bin for the item that is to be disposed of. It can be used to teach children how to recycle in classrooms or can be an educational activity children can do with their parents.
How it is built
It was built using C# and the Unity game engine; the CAD was made in Autodesk Inventor.
Households and Society Phase
Introduction
We built a tool targeted at small organizations and households that can identify if an object is recyclable or not. The tool is implemented on our website, where the user can read about our mission as well as utilize our tool to ensure they are disposing of waste responsibly.
How it is built
Using machine learning and HTML: namely, the tf.keras framework to build a convolutional neural network, and the Flask API to connect the Python backend with the HTML. The essence of it is that we trained a deep convolutional neural network to classify images from a dataset and label them based on one-hot encoded values.
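As a small illustration of the one-hot labels mentioned above, here is a minimal sketch in plain Python. The class names are assumptions for the example, not necessarily the dataset's actual labels:

```python
# Illustrative one-hot encoding for a 6-class classifier, matching the
# 6-unit output layer of the model shown later. The class names below
# are assumptions, not the project's confirmed label set.
CLASSES = ["cardboard", "glass", "metal", "paper", "plastic", "trash"]

def one_hot(label, classes=CLASSES):
    """Return a one-hot vector for `label`, e.g. 'glass' -> [0, 1, 0, 0, 0, 0]."""
    vec = [0] * len(classes)
    vec[classes.index(label)] = 1
    return vec
```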
Issues and how they were overcome
The main issue regarding the performance of the network was addressed when we increased its size and made the network far bigger; however, we did not get enough time to trial-and-error the design, so we were not able to improve on our second iteration. Another issue was the padding of the images inputted by the user, which was fixed using a Pillow implementation that added padding:
```python
from PIL import Image, ImageOps

if test:
    inputData = Image.open('test/' + testfile)
else:
    inputData = Image.open(testfile)

desiredSize = (512, 384)
im = inputData
old_size = im.size
# Scale factor that fits the longer side into the target size.
ratio = float(max(desiredSize)) / max(old_size)
new_size = tuple([int(x * ratio) for x in old_size])
# Pad the difference evenly on each side, then resize to the target.
delta_w = desiredSize[0] - new_size[0]
delta_h = desiredSize[1] - new_size[1]
padding = (delta_w // 2, delta_h // 2, delta_w - (delta_w // 2), delta_h - (delta_h // 2))
new_im = ImageOps.expand(im, padding)
im = new_im.resize(desiredSize, Image.ANTIALIAS)
im.show()
inputData = im
```
The model
The model can be seen below:
```python
from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.Conv2D(32, (4, 4), activation='relu', input_shape=(384, 512, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (4, 4), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (4, 4), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (4, 4), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (4, 4), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(6))
```
This is a multilayered convolutional neural network using 4x4 filters and the ReLU activation function; our loss metric was mean squared error.
Society Phase
Introduction
This concept involves utilising robotics to effectively locate and maneuver small, recyclable materials to recycling bins, as they are often not as readily available as regular dustbins. These robots should be autonomous; however, as of now the robot has just finished construction. It is able to pick up objects up to 8 inches in diameter, with the idea being to install a bin bag in the large empty space to store the objects.
How was it built
As evident in the CAD file, it was built using the VEX Robotics V5 system. As of now, the components do not have the computational power to fully implement an algorithm as computationally intensive as YOLO, so we chose not to try to port it to the system.
Complications
The robot only finished construction about 5 minutes before the video was made, so it could not be showcased fully, but the CAD renders are available on this page.
Quick overview
-The intake flaps increase the contact between the target and the bot
-The rubber treads on the intakes increase the traction of the intakes
-The 8:1 gear ratio of the drivebase ensures the robot operates at maximum speed and efficiency
Future plans for the robot
The final aim of this phase of the initiative is to implement the YOLO algorithm alongside the robot. This algorithm draws bounding boxes around the objects it is interested in in real time. The next step would be to implement PID control so that the robot is able to meet its target without overshooting, by slowing down as it approaches, or alternatively using a gyroscope, or even odometry (i.e. position tracking), to manage the robot's movements.
Built With
bootstrap
flask
html
keras
python
tensorflow
Try it out
github.com
recycleAIdemo.ved07.repl.co
sharemygame.com
drive.google.com | Recycle.AI | Responsible consumption, reduced depletion | ['Vedaangh Rungta', 'Emmanuel Ma', 'Ishan Baliyan', 'Mahad Ali Khan'] | [] | ['bootstrap', 'flask', 'html', 'keras', 'python', 'tensorflow'] | 32 |
10,413 | https://devpost.com/software/sea | Logo
VGG16 Convolutional Neural Network Architecture
Example of Walrus prediction
Example of Sea Otter prediction
Graph of training loss and accuracy
Inspiration
Each year, thousands of animals go extinct and thousands more become endangered. A majority of these endangered animals live in the ocean. Researchers are developing new ways to protect these endangered species. One of these ways is recording underwater footage from a submarine to check the area for any endangered species. However, combing through hundreds of hours of footage can be tedious. SEA is our solution.
What it does
Our project, Save Endangered Animals, or SEA for short, is a convolutional neural network trained to detect endangered species from an image. As of right now, we can identify 4 endangered or vulnerable marine species, which are the Sea Otter, Hawksbill Sea Turtle, Walrus, and Hector’s Dolphin.
How we built it
We used the VGG16 Convolutional Neural Network Architecture. We chose this specifically because it won the ImageNet competition in 2014. The python libraries used were Tensorflow, Keras, and OpenCV. The graph shows training loss and accuracy when we trained our model. As you can see, training accuracy fluctuates around the same level as validation accuracy, which is a good sign that there is no overfitting. The VGG16 Convolution Neural Network Architecture consists of convolution and max-pooling layers, followed by a fully connected layer. We made the datasets ourselves by utilizing the Bing Image Search API to quickly download hundreds of images.
Challenges we ran into
We underestimated how much training data we would need. We first decided to get labeled datasets using just the animals' names. However, we quickly realized that for marine animals you needed underwater pictures as well, so we downloaded more images. In the end, we developed a model that is accurate in practice and can detect the 4 endangered animals that we trained it on.
Accomplishments that we're proud of
This was our first time making a Convolutional Neural Network, and while it seemed scary, it ended up being a very rewarding process. Seeing our model train for 100 epochs, then running predictions on it that yielded accurate results was a satisfying feeling.
What we learned
We learned how to interact with Flask in order to deploy an image classification machine learning model to the web. We also learned about the code that is behind making a CNN.
What's next for SEA
We hope to launch SEA as an app or program that researchers can run on large datasets of images that are extracted from video footage. Given that we only had a limited amount of time to collect our dataset and train, it is definitely possible to train our model to detect even more endangered marine animals, such as the Vaquita and the Narwhal.
Built With
css
flask
html5
keras
opencv
python
tensorflow
Try it out
drive.google.com
drive.google.com
endangeredanimals.tech
endangeredspecies.ucraft.site | SEA | Researchers track endangered marine animals using submarines. However, it is tedious to look through hundreds of hours of footage. SEA is a CNN that can detect endangered animals and label them. | ['Mohsin Zaidi', 'Tanush Chopra', 'Michael Li'] | [] | ['css', 'flask', 'html5', 'keras', 'opencv', 'python', 'tensorflow'] | 33 |
10,413 | https://devpost.com/software/beach-buddy | Main Loading Screen on Beach Buddy
Most popular beaches rated by Beach Buddy
Top 4 beaches near Fremont, CA
Beach Buddy Logo
Inspiration
Due to the current situation in the world, it is very difficult for people to even dare to step foot outside, let alone go to the beach. Even if people desire to go to the beach, the lack of a solid plan including food options, lodging, etc all contribute to the lack of enjoyment on some of these beach trips.
What it does
Beach Buddy takes in your address, zip code, or city name and scours the internet for the top 5 beaches near you, based on their beach rating. This beach rating is uniquely calculated with our novel machine learning algorithm, built on PyTorch. This algorithm takes into account several factors:
Crowding at the beaches (important for maintaining social distancing)
Weather/ Temperature (so you don’t have to raincheck)
Distance from your inputted location to the beach (so you don't have to travel too far)
Nearby (within 1 mile) food options to the beach (because we know you get hungry)
Local lodging options (if the beach is too amazing to take in for one day, why not stay in a hotel and explore the beach again the next day)
Using these 5 main factors, and several other small components, Beach Buddy generates a rating out of 10 to recommend you the perfect beach for your next trip.
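As an illustration of how such factors might fold into a single score out of 10, here is a minimal weighted-sum sketch in Python. The weights and normalization are invented for the example; they are not the actual PyTorch model:

```python
# Hypothetical weighted rating combining the five factors above.
# Weights and scaling are illustrative assumptions, not the real model.
def beach_rating(crowding, weather, distance, food_options, lodging_options):
    """Each input factor is pre-normalized to [0, 1]; returns a score out of 10."""
    weights = {"crowding": 0.3, "weather": 0.25, "distance": 0.2,
               "food": 0.15, "lodging": 0.1}
    score = (weights["crowding"] * (1 - crowding)    # less crowding is better
             + weights["weather"] * weather
             + weights["distance"] * (1 - distance)  # closer is better
             + weights["food"] * food_options
             + weights["lodging"] * lodging_options)
    return round(score * 10, 1)
```

In practice a learned model replaces the fixed weights, but the inputs and output scale stay the same.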
How I built it
Before this project, I was still getting introduced to the idea of creating a project. This project further developed my skills in the end-to-end programming pipeline. I started by first mapping out what I had to do:
Build a basic Flask application
Research and understand which factors go into deciding/planning a trip to the beach
Find APIs/modules to get data about these various factors
Create and train a machine learning model that optimizes variables listed above
Portray all results in a pleasant manner on Flask application
Add home page, aesthetic images, and details page that allows users to look at more details about each beach
Add review-based content, so that people can add reviews for each beach and recommend to their friends on social media platforms
I followed this process thoroughly and used various Google Cloud Services to perform data/request collection. I also used various APIs including OpenWeatherAPI, WaveTide from NOAA, and various others.
I trained my machine learning model on PyTorch, which I was completely new to prior to this project.
Originally, I had meant for Beach Buddy to be an iOS/Android application, but I was not very familiar with Flutter. Still, I was able to make some progress and created a basic visual interface that I will definitely build upon in the next few days.
I also learned the importance of using various REST APIs including Coastal CA and various location-based npm packages.
Challenges I ran into
I was very new to the core fundamentals of three very important parts of my project:
Flutter (Xcode/Android-based applications)
Flask (for running my python backend)
PyTorch (for training machine learning models)
I often ran into various errors, but my inspiration for this project motivated me to keep going. I often consulted YouTube videos about these topics and StackOverflow forums.
Accomplishments that I'm proud of
This project is nothing like one I’ve done before. I was very new to the use of Flask, Flutter, and PyTorch but learning the main fundamentals of each has really made me proud and has encouraged me to learn more about these various softwares.
I am proud that I completed this project, which has several components, all by myself within the span of hours.
What I learned
I fully learned the end to end programming pipeline of sorts. In particular, I learned the ins and outs of the various APIs I worked with and understanding how to train models on PyTorch. I also built up a strong intuition on using both Flutter and Flask.
What's next for Beach Buddy
There is a lot in store for Beach Buddy, as can be seen in the video demonstration. We have yet to fully deploy the Flask application and will be finishing that up soon by deploying onto Google Cloud. Also, as I have learned more about Flutter and the Dart language used in it, I look forward to creating an interface similar to the Flask application on an iOS/Android platform for millions of people to use throughout the world.
As my previous project relied heavily on Facebook Messenger, I thought to brush up my skills in that domain by creating a Beach Buddy Facebook chatbot, so that people can quickly converse with a bot, rather than doing a website search (although using our web application would definitely provide more information).
I will continue posting updates to this submission, so keep an eye out for more about Beach Buddy!
Built With
flask
flutter
ios
node.js
python
pytorch
xcode
Try it out
docs.google.com | Beach Buddy | Let's uncover the best beaches near you | ['Viren Khandal'] | [] | ['flask', 'flutter', 'ios', 'node.js', 'python', 'pytorch', 'xcode'] | 34 |
10,413 | https://devpost.com/software/captain-scallywag | The pirate flag! Arrgh!
Walk the plank, with Captain Scallywag!
Test your inner pirate!
How well do you know the seven seas?
Ahoy, Me Hearties!
Ever wanted to engage in pirate speak? Or feel the thrill of walking the plank? Captain Scallywag is a Discord Bot and the main character of the web game: Captain Scallywag and the Trash Pearl.
Captain Scallywag:
Invite
the discord bot to your server!
Use the
!pirate
command to translate your speech into pirate lingo. Or maybe you're looking for some fun? Try the
!walk the plank
command to play a game and win yourself some good booty.
Captain Scallywag and the Trash Pearl: Visit the site
here
!
Join us once again to learn about the garbage patches entrenching the earth. Test your knowledge, and let's see how well you know the planet you live in!
How we built it
The Discord bot was made using Python (discord.py). The pirate command works by using a dictionary of common English words with their pirate translations. Words for Walk the Plank were imported from the RandomWords library.
The site was made using reactjs and threejs and is hosted using Netlify. Our front-end master @JapneetSingh created the beautiful UI you see.
The backend is hosted using heroku, and uses MongoDB to manage the questions and the leaderboard.
What we learned
Hackathons are pretty fun!
More than half of us are first time hackers, and you won't believe the number of noob questions we had. We quickly caught on though, hurrah!
We learnt to work together as a team.
It was rough at first with the varying time zones and everyone ghosting. However, we managed to patch the merry band towards the end!
Do NOT forget to disable CORS.
Especially if you're hosting your front and back-ends on different servers. Yes, I'm being serious.
What's next for Captain Scallywag
This is certainly not the end! Captain Scallywag will continue to sail the seven seas! More features, quizzes, who knows?
Built With
css3
express.js
heroku
html5
mongoose
netlify
python
react
three.js
Try it out
captainscallywag.netlify.app
github.com
github.com | Captain Scallywag | Your one-stop pirate experience! ARGHH! | ['Vikram Jaisingh', 'Joshua T', 'Japneet Singh', 'Immanuel Ifere', 'yahayaohinoyi SULEIMAN'] | [] | ['css3', 'express.js', 'heroku', 'html5', 'mongoose', 'netlify', 'python', 'react', 'three.js'] | 35 |
10,413 | https://devpost.com/software/pirate-translator | Home page
Demo 1
Demo 2
Inspiration
This is the first hackathon for two of our members so we wanted to do something lighthearted and fun. We came up with the idea to make a modern English to pirate speech translator.
What it does
The user inputs a word, phrase, or short passage, then the app will display a translated message. For example, "Hello" becomes "Ahoy".
How we built it
We built the app using Flutter. We built a thorough conditional system in which we take two arrays with phrases and their translations, then run them through a loop, changing all recognized parts of the user's message.
Custom art assets were developed in Photoshop. We used stack widgets paired with container widgets to place our assets into the app.
Custom music was developed in MuseScore. We implemented this audio using an external audioplayer package. There is an intro song that is played at boot up using the initialize function in flutter. Then we play a small jingle whenever a user translates a message successfully.
Challenges we ran into
We have 148 conditionals. This process was difficult to implement, to say the least. What seemed extremely simple evolved into a great source of problems. To start, at first we used individual conditional statements for each phrase/word. This made debugging slow, and adding more conditionals was painful. We also had a problem with smaller words replacing parts of larger words. For example, "This" would get translated using "is" instead of "this", becoming "thbe". We also had the problem of words getting translated and then translated again. For example, "this" became "tis", which would get translated again for "is" and become "tbe". The first solution we came up with was to keep the inputted string the same, copy it into an output, and adjust only the output. This was a problem due to the addresses of the two strings being different. We tried developing an algorithm to adjust the addresses based on changes in the output. This worked, but it had a few edge cases that caused out-of-bounds errors: because the output could become smaller, we needed to subtract to find the right address compared to the input, and this would break if address - adjustment was less than 0. After multiple iterations we decided to look at a new implementation. In the Flutter string documentation we found a function that would replace the first instance of a pattern, which helped greatly. After that, we would copy the input into the output, and whenever we found a phrase we would translate it in the output and remove it from the input. We then had a problem in which duplicates would not be translated. We fixed this with a while loop that checked if the message was changed and would rerun until it registered no further changes. To fix smaller words we checked for spaces before and after, which came with problems of its own: words at the start or end of the message, or next to periods, commas, !, or ?, would not get translated. We added spaces at the start and end of the message, and also added spaces around periods and so on so that those words would be recognized, removing the spaces afterward.
In terms of the look of the app, we ran into issues regarding cohesion when it came to obtaining assets. Different picture files picked from a plethora of different places did not read well together. The solution was to have almost all of the visual assets in the app drawn from scratch. This brought its own restrictions. One of the many benefits of Flutter is that the widgets allow for code to be applied smoothly to any aspect ratio; however, the system is less smooth when dealing with png image assets. The fix for this was to design the art in such a way that there was a certain level of uniformity in the x and y directions to accommodate stretching or warping when fitting assets to the screen.
The major challenge for the music end of the app was designing a song that conveys the "pirate" feeling in a short amount of time. This was especially challenging since "pirate" music does not use traditional classical music theory. Constructing both songs required thinking outside the traditional boundaries of music. Using unconventional instruments that pirates would have access to (accordion, viola, stamping on the floor, tambourine, etc.) along with a jaunty offbeat percussion section gave off a strong pirate feeling without requiring too much time investment in the music. This allowed a solid soundtrack to be developed in under a day. The second challenge was implementing this music. Flutter has many shortcuts for adding assets such as images, so we originally presumed Flutter would have a similar shortcut for music. However, we ran into a huge roadblock when we realized Flutter does not actually have a method of playing music by default. This required a lot of on-the-spot research on how to use an external library to add music-playing functionality to the program. After spending far too many hours on this unexpected detour, we eventually got a working soundtrack for the app.
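The final approach described above (pad with spaces, replace the first instance of each phrase, loop until nothing changes) can be sketched like this, in Python rather than the app's Dart for brevity. The sample dictionary entries are illustrative, and the sketch assumes no pirate translation itself contains a translatable word (the double-translation problem described above):

```python
# Sketch of the translation loop described above, in Python rather than Dart.
# Sample entries only; the real app has 148 phrase/word pairs.
TRANSLATIONS = {"hello": "ahoy", "my": "me", "friend": "matey"}

def to_pirate(message):
    # Pad with spaces and detach punctuation so whole words match even at
    # the edges of the message (the edge-case fix described above).
    text = " " + message.lower() + " "
    for punct in ".,!?":
        text = text.replace(punct, " " + punct + " ")
    # Replace the first occurrence of each phrase until nothing changes,
    # longest phrases first so "this" wins over "is".
    changed = True
    while changed:
        changed = False
        for eng, pirate in sorted(TRANSLATIONS.items(), key=lambda kv: -len(kv[0])):
            padded = " " + eng + " "
            if padded in text:
                text = text.replace(padded, " " + pirate + " ", 1)
                changed = True
    # Undo the helper spacing around punctuation and normalize whitespace.
    for punct in ".,!?":
        text = text.replace(" " + punct + " ", punct)
    return " ".join(text.split())
```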
Accomplishments that we're proud of
We are extremely proud of our custom asset for art and music. This made the project truly feel like our own. We are also happy that the app works. Something about seeing all our work come together was tremendously satisfying.
What we learned
Many things we did here we did for the first time, so it was very fulfilling. We had very low expectations for our project, and we ended up learning how to implement assets, scale images, change routes in an app, implement audio, and manage a team.
What's next for Pirate Translator
We set out with the goal of learning and having fun. We have accomplished those goals. We are going to show family and friends the fun app we made. More importantly, we are going to take the skills we learned into a future of app development. Pirate Translator has inspired us to go further into the field of app development.
Built With
android-studio
audioplayers.dart
dart
flutter
musescore
photoshop
Try it out
github.com | Pirate Translator | A fun app in which users can type any message and it will be translated into pirate jargon. | ['Jean Vigroux', 'Jacob Hocking', 'JamesCalano'] | [] | ['android-studio', 'audioplayers.dart', 'dart', 'flutter', 'musescore', 'photoshop'] | 36 |
10,413 | https://devpost.com/software/blockchain-based-evidence-management | I have seen when carrying or sharing digital evidence from the crime scene from the police station to forensic many times evidence may be tampered, so I came up with the idea of Blockchain which when uploaded on this network makes the data immutable and only accessible to authorized people.
It's simply an evidence management system where police can upload evidence to the blockchain with the date, time, and a description, and file an FIR on the blockchain that is only accessible to forensic personnel holding login credentials for the forensic portal.
I built this on Linux using npm and a Node server.
Creating the back-end of the evidence management system and connecting it to the Ethereum blockchain network.
I learned more about the deeper workings of blockchain networks.
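To illustrate why on-chain records are tamper-evident, here is a minimal hash-chain sketch in Python. This is conceptual only; the actual project uses Solidity contracts on Ethereum, not this code:

```python
import hashlib
import json

# Minimal hash-chain sketch of why on-chain evidence is tamper-evident.
# Illustrative only: the real project stores records via Solidity contracts.
def add_evidence(chain, description, timestamp):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"description": description, "timestamp": timestamp, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body["prev"] != prev or expected != record["hash"]:
            return False
        prev = record["hash"]
    return True
```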
Built With
blockchain
html5
javascript
node.js
npm
solidity
Try it out
github.com | Blockchain Based Evidence Management | Making evidence tamperproof while sharing using blockchain technology | ['LAVISH Garg'] | [] | ['blockchain', 'html5', 'javascript', 'node.js', 'npm', 'solidity'] | 37 |
10,413 | https://devpost.com/software/spotcheck-mvurnd | Inspiration
I was inspired to build this because schedule adjustment period is coming up at my school, but there's no good way to be notified if a class you want opens up. This forces students to sit at their computers refreshing desperately in the hopes of getting into a good class. But, no longer!
What it does
Spot Check scrapes the course directory of Wesleyan and creates Google Datastore entries for each class with the number of seats open, and if one opens up it emails everyone who has subscribed to the class on the Spot Check website.
How I built it
I used BeautifulSoup and requests to scrape the text from Wesleyan, and then created a datastore entity for each course with the number of seats open. Then, I used cron to schedule that scraping job every few minutes so that if there is a change in the number of seats open everyone who has signed up to be notified for that class will automatically receive an email using yagmail. I used flask to create the python webapp and hosted it on google's app engine.
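A minimal sketch of the scrape/compare/notify cycle described above. The course page's real markup isn't shown here, so the HTML pattern is a hypothetical stand-in (a regex substitutes for the actual BeautifulSoup parse), and the email step is left out:

```python
import re

# Illustrative sketch of the scrape-compare-notify cycle. The HTML layout
# below is an assumption, not Wesleyan's actual course directory markup.
SEAT_PATTERN = re.compile(
    r'<td class="course">([^<]+)</td>\s*<td class="seats">(\d+)</td>')

def parse_seats(html):
    """Map course name -> open seat count from a listing page."""
    return {name: int(seats) for name, seats in SEAT_PATTERN.findall(html)}

def newly_open(previous, current):
    """Courses that went from 0 open seats to at least 1 (these trigger emails)."""
    return [c for c, seats in current.items()
            if seats > 0 and previous.get(c, 0) == 0]
```

The cron job would run `parse_seats` on each fetch, compare against the stored counts, and email subscribers of everything `newly_open` returns.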
Challenges I ran into
Oh boy... I had a ton of trouble with the Google tools. The Google App Engine standard environment doesn't allow for multiprocessing, but the flexible environment is very finicky and hard to work with, so I had to give up on parallel processing altogether, which was a shame. I also had to switch from Selenium, which I usually use for scraping, to BeautifulSoup and requests, because I couldn't host Google Chrome on App Engine. This actually sped things up quite a bit, though!
Accomplishments that I'm proud of
Everything! I had never created a webapp or used any of the google tools before this, and this was also my first time using HTML and CSS, as well as doing a whole CS project by myself!
What I learned
HTML, CSS, Flask, Google App Engine, and how to sit in one spot for 12 hours at a time.
What's next for SpotCheck
I'm going to add some more functionality and then release it to my school when our adjustment period starts so everyone can use it!
Built With
beautiful-soup
flask
google-app-engine
google-cloud
python
Try it out
wescraper.ue.r.appspot.com | SpotCheck | Sends automatic email notification when a spot opens in a desired Wesleyan class! | ['Daniel Knopf'] | [] | ['beautiful-soup', 'flask', 'google-app-engine', 'google-cloud', 'python'] | 38 |
10,413 | https://devpost.com/software/pirate_talk | Ever heard of national Talk Like a Pirate Day? That's every day with Parrrley!
We created a single-page React app that takes plain English input and translates it into pirate-speak. Using a healthy amount of RegEx, we match the user's input to our pirate dictionary. If there's a match, that word is translated!
You'll also find some Parrrley-o-meters which measure the piratey-ness of your text. Ideally, those go up as your text is translated.
One of the hardest parts of this project was creating a RegEx to match the bases of words so that we could correctly change them in consideration of tense, pluralization, and possessiveness. English is crazy! We tried to incorporate Speech Recognition as well, but that ended up taking too much time for the final demo.
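The base-matching idea can be sketched like this, in Python for illustration even though Parrrley itself is a JavaScript/React app. The dictionary entries and suffix list are illustrative assumptions:

```python
import re

# Sketch of base-word matching with suffix handling, as described above.
# Sample entries only; Parrrley's real dictionary is larger.
PIRATE = {"friend": "matey", "treasure": "booty"}

def translate(text):
    # \b anchors keep short words from matching inside longer ones, and the
    # optional suffix group carries plural/possessive endings onto the
    # translation ("friends" -> "mateys", "friend's" -> "matey's").
    for base, pirate in PIRATE.items():
        pattern = re.compile(r"\b" + base + r"(s|'s|es)?\b", re.IGNORECASE)
        text = pattern.sub(lambda m: pirate + (m.group(1) or ""), text)
    return text
```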
In the future, we'd love to include a larger dictionary of pirate terminology as well as the Speech Recognition mentioned earlier. Batten the hatches and set the sails! We're off to another adventure.
Built With
bootstrap
css
html
javascript
react
reactstrap
Try it out
github.com | Parrrley | Speak like a pirate with this pirate translator! | ['Zach Nicholson', 'Zane Bliss', 'johnbain881', 'Tanner Brainard'] | [] | ['bootstrap', 'css', 'html', 'javascript', 'react', 'reactstrap'] | 39 |
10,413 | https://devpost.com/software/poseidon-s-corner | Inspiration
Greek mythology and the theme of the hackathon
What it does
Gives information on different oceans of the world
How I built it
using CSS and HTML
Challenges I ran into
Accomplishments that I'm proud of
I made this on my own without a group
What I learned
What's next for Poseidon's corner
Research Mythologies related to sea gods and the oceans.
Built With
css
html
Try it out
brishti9.github.io
github.com | Poseidon's Corner | An informative website on the world's ocean | ['brishti9301 Basu'] | [] | ['css', 'html'] | 40 |
10,413 | https://devpost.com/software/wingscythe-arena | Teaser/Thumbnail
Sneak Peek
Inspiration
Creating a unique first person multiplayer fighting game.
What it does
-Realistic sword animations
-Global multiplayer server and client capabilities
-Synchronization of state machines and animations
-Interactive GUI
How We built it
-Unity game engine for physics and foundation
-Blender for modeling and animations
-PiskelApp for 2D Sprites and UI design
-Photon PUN for networking and synchronization
Challenges We ran into
-Blender not importing correctly
-Time constraints for a multiplayer game
-Setting up Photon
-Blender work randomly deleting itself
Accomplishments that We're proud of
-Actually completing the first prototype for the project
-Somewhat realistic animations and accurate synchronization
-Sheer work hours saved
What We learned
-How to network in Unity through Photon PUN
-Properly setting up animations and collisions
-Procedural generation of GUI
What's next for WingScythe Arena
-Loads of more features
-Visual effects and coloring
-Proper character models
-More weapons
-Skills
-Artificial Intelligence
-Combos
-Fluid Movement Controls
-Background Music
-Story line and difficulty scaling
-Advanced movement mechanics
-Player Classes
Built With
blender
c#
gimp
photon
piskelapp
unity
zenhub
Try it out
github.com | WingScythe Arena | First Person Multiplayer Combat, Hack and Slashy! | ['Ryan Xu', 'Andy Zheng', 'Eric Tong'] | [] | ['blender', 'c#', 'gimp', 'photon', 'piskelapp', 'unity', 'zenhub'] | 41 |
10,413 | https://devpost.com/software/ftp-social | What is it
Inspired by SSH tilde servers, ftp.social is an experiment in building an FTP-based social network
All interaction with the service is through FTP.
See the website for more information.
How I built it
The project is mainly built around the pyftpdlib library, reacting to the many hooks triggered by FTP events (file uploads, etc).
User information and follower relationships are stored in a PostgreSQL database.
All this is hosted on Google Cloud, using Compute Engine and Cloud SQL.
Challenges I ran into
The hardest part of building ftp.social was dealing with network issues.
FTP wasn't designed with NATs and firewalls in mind and can easily get confused about where to send data.
Reading through RFCs that predate my birth and getting caught up in minute technical details was not fun.
What I learned
I feel quite confident working with FTP, now knowing how it works internally.
This was also the first real-world website our web developer worked on.
What's next
In the same spirit of using the increasingly archaic FTP, I plan to add a read-only Gopher interface.
If technically feasible, I also plan to add a mailbox system where users can receive text emails.
(A user named example could receive emails in a mail folder in their home directory, at the example@ftp.social address.)
In the very long run, it's possible I could rewrite the whole thing from scratch, FTP server included, in order to be able to design the system in a more efficient way with certain hooks.
Built With
ftp
postgresql
pyftpdlib
python
sqlalchemy
Try it out
ftp.social
github.com | ftp.social | An FTP-based social network | ['Hunter Han', 'Grace He'] | [] | ['ftp', 'postgresql', 'pyftpdlib', 'python', 'sqlalchemy'] | 42 |
10,413 | https://devpost.com/software/snackdrone | Web Interface
Arduino circuit - IR Sensor and Microphone
Facial Tracking Command Interface
Inspiration
I wanted to make something silly and "hacky". The original idea was to have it pull on a cord to turn a light on and off, but that required too much force for this drone.
What it does
It delivers snacks using various activation methods. It can be launched, called, and landed from a web app. These three actions can be performed with a controller via an IR sensor. The drone can also be called by screaming loud enough. A facial tracking option can be opened to perform landing, launching, and requesting commands. This requires the user to move their head into the zones marked within the image on their screen.
How I built it
The Tello drone requires a device to be connected to its wireless network in order to transmit commands, which means that any machine hosting a server for a client interface must have multiple network connections. I therefore needed two devices. One machine ran a Node.js server that handled client requests received through its wireless network and forwarded them to the second device via Ethernet. The first device also ran the Python script that uses an OpenCV Haar cascade to track the user's face; the script is spawned as a child process of the Node server. The Node server used a serial library to receive input from the microphone and IR sensor connected to the Arduino. When the Node server forwards a request, it is received by a Python HTTP server running on the second device, which parses the JSON it receives and sends the appropriate commands to the Tello drone over the wireless network.
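A rough sketch of the second device's half of that pipeline, using only Python's standard library and the Tello SDK's plain-text-over-UDP protocol on port 8889. The JSON shape and the action names here are hypothetical, not the project's actual API.

```python
import json
import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

TELLO_ADDR = ("192.168.10.1", 8889)              # Tello's default SDK command port
ACTIONS = {"launch": "takeoff", "land": "land"}  # hypothetical action names

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

class RelayHandler(BaseHTTPRequestHandler):
    """Receives the forwarded JSON request and relays a Tello SDK command."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        command = ACTIONS.get(body.get("action", ""))
        if command:
            sock.sendto(command.encode(), TELLO_ADDR)
        self.send_response(200 if command else 400)
        self.end_headers()

def run():
    # the Tello must first be told to enter SDK mode with a "command" packet
    sock.sendto(b"command", TELLO_ADDR)
    HTTPServer(("0.0.0.0", 8000), RelayHandler).serve_forever()
```

The drone only accepts "takeoff" and "land" after that initial "command" packet puts it into SDK mode.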
Challenges I ran into
Designing a means of carrying the snack that didn't cause the drone to lose all control and crash. Honestly, I'm surprised the method I used worked as well as it did. I had to stop a few times to simplify the process so the software didn't become more complicated than it needed to be. I don't know much JavaScript or Node, so that's generally where I struggled. The drone's short battery life can also be limiting.
Accomplishments that I'm proud of
I got all the silly things I wanted done and it doesn't crash into my wall.
What I learned
The amount of stuff I keep on my desk that's easily blown away.
What's next for SnackDrone
Might try to add some more silly unnecessary features like a calendar scheduler.
Built With
arduino
css
html5
node.js
opencv
python
tello
Try it out
github.com | SnackDrone | A drone that carries snacks in and out of reach with increasingly ridiculous ways of being activated. It is purely a hack for the sake of hacking and to hopefully entertain. | ['John Light'] | [] | ['arduino', 'css', 'html5', 'node.js', 'opencv', 'python', 'tello'] | 43 |
10,413 | https://devpost.com/software/save-the-turtles-ajyx8z | Home page
Instructions
Requests user input to determine the number of obstacles
Game play
Game over screen
Game play with pirate hat
Inspiration
Popularized last summer, “save the turtles” has become the slogan for ocean conservancy and reducing plastic usage. An estimated 14 billion pounds of plastic is deposited into the ocean every year, posing a threat to the entire ocean ecosystem, and turtles are among the species endangered by it.
Over 1,000,000 marine animals, including over 1,000 turtles, are killed each year by plastic waste such as fishing nets, six-pack rings from canned drinks, plastic bags, and most infamously, plastic straws.
The deep sea theme of HackItShipIt immediately made us think of how we can introduce the importance of reducing plastic usage to younger audiences.
We named Cap'n Scutes after the bony plate on a turtle's shell, which is called a scute.
What it does
Save the Turtles is an interactive game designed to educate the public about the threat that plastic pollution poses to our marine ecosystem.
The player controls the movement of a turtle as it tries to navigate an ocean with hazardous plastic waste, nets, and oil spills.
The difficulty of the game depends on the player’s own impact on the environment.
Before the game starts, the player inputs how many single-use plastic items they used that day. The more plastic waste the player has produced, the more plastic waste will show up in the game, posing a greater threat to the turtle’s safety.
The turtle starts out with 3 lives and loses a life every time it eats plastic waste or gets caught in a net or oil spill. The player scores points by eating jellyfish. Keep an eye out for the special pirate hat. Finding the pirate hat gives the turtle an extra life, and while the turtle is wearing the hat, the jellyfish are worth double the points!
After the player has experienced the reality of millions of marine animals, they will be more likely to think twice before using plastic items such as plastic bags, water bottles, utensils, cups, and straws.
How we built it
We developed the game with HTML, CSS, and JavaScript. We used p5.js to animate our turtle character and move obstacles across the screen. The p5.collide2D library was used to check for collisions. We used repl.it to collaborate online. We made our code efficient and organized by using object-oriented programming. The background, turtle, pirate hat, and obstacles were all drawn by our team members.
Challenges we ran into
One of the challenges we struggled with was collision detection. Our game only detected when the turtle hit the obstacles head on and didn’t detect when the turtle collided with an obstacle from the side. To solve this, we used the p5.collide2D library to detect any collision between the turtle and an obstacle.
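The underlying fix is a full axis-aligned bounding-box test, which requires overlap on both axes instead of only detecting head-on contact. Here is the geometry as a small Python sketch (the game itself is JavaScript; this only illustrates the kind of check a library like p5.collide2D performs):

```python
def rects_collide(ax, ay, aw, ah, bx, by, bw, bh):
    """Axis-aligned bounding-box test: True only when the two rectangles
    overlap on BOTH the x and y axes, so side hits count too."""
    return (ax < bx + bw and ax + aw > bx and
            ay < by + bh and ay + ah > by)
```

For example, rects_collide(0, 0, 10, 10, 9, 2, 5, 5) is True even though the second rectangle only grazes the first from the side.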
Another challenge we ran into was positioning the text on the screen while allowing it to have an overflow that fits onto different screen sizes. We solved this issue by creating sub-renderers for our text displays which were dynamically sized to match any screen size and could have text added to them before being displayed on the website.
Accomplishments that we’re proud of
The accomplishment that we’re most proud of is our cute and friendly user interface and animation. We wanted our game to be inviting for young children to play so that we could educate a younger audience about the importance of recycling and conserving our natural environment. Our adorable main character, Cap’n Scutes, is animated to look like he is swimming. We love Cap’n Scutes even more with a pirate’s hat atop his head.
What we learned
While we had previously worked with p5.js, none of us have ever made a full-fledged game, especially one which is so heavily dependent on user input through buttons and sliders. We had to learn how buttons and sliders work, adding them as DOM elements to the HTML canvas, while text and images were all displayed within the p5 renderer.
We also had never worked with sub-renderers before, so we had to learn how renderers can overlay on each other and act as images in the context of the main renderer. However, once we learned how to use sub-renderers, we were able to make our designs much more aesthetically pleasing, while keeping the positioning of text elements relatively simplistic.
We have also never made JavaScript objects which interact with each other in so many different ways. Not only did each of our Obstacle objects have to be linked with the Turtle for collision checking, but the result of that collision would depend on the specific image the object shows, as well as the current state of the Turtle. We also learned how to use the p5.collide2D library, implementing it to detect collisions between the Turtle and each Obstacle.
We also learned what a scute is :)
What's next for Save the Turtles
The next step is to create a database so that players can log in and their high scores can be saved. We would also like to develop achievements for the player, such as collecting 20 pirate hats and eating 100 jellyfish, which would unlock different costumes for Cap'n Scutes to go along with the pre-existing pirate outfit.
We’re also excited to add more characters to elucidate the consequences of ocean pollution on other marine animals. This could include noise pollution where players must distinguish between a motor boat roar and clownfish clicking to elucidate the struggles clownfish face when trying to communicate with their family.
We would like to expand our project into an educational gaming website for all things environmentalism. In addition to ocean pollution, our games would also cover topics such as deforestation, water and energy conservation, and food waste.
Try it out
github.com
www.savetheturtles.space | Save the Turtles | Welcome aboard, Cap'n Scutes! Distinguish jellyfish from plastic bags and dodge plastic utensils, avoiding fishing nets, and oil spills as you search for your captain's hat! | ['Michelle Bryson', 'Kevin Gauld', 'Christina W', 'Ethan Horowitz', 'Jendy Ren'] | [] | [] | 44 |
10,413 | https://devpost.com/software/whalewatchers | Inspiration
eee
Built With
babel
css3
html5
javascript
radar.io
vue
vuetify
whale-hotline-api | Find | e | [] | [] | ['babel', 'css3', 'html5', 'javascript', 'radar.io', 'vue', 'vuetify', 'whale-hotline-api'] | 45 |
10,413 | https://devpost.com/software/locality-xjurmb | Jobs Page
Business Page
Landing Page
News Page
GoFundMe Section
Inspiration
We were inspired to build Locality after witnessing the havoc that the COVID-19 pandemic has wreaked on small businesses. We wanted to do our part to support the backbone of our economy, since small businesses account for half of the GDP. Specifically, we noticed that some of our favorite restaurants in Toronto were shutting down for good due to their inability to cover overhead expenses such as rent. We thought that if we were able to create a "crowdfunding" platform for our community's businesses, we could mitigate the economic damage from this outbreak.
What It Does
Locality is a platform to promote local businesses during this COVID-19 Pandemic. As many are now relying on online and delivery companies such as Amazon and eBay, local businesses are losing customers and shutting down very rapidly. The purpose of Locality is to allow users to view and support local businesses instead of large corporations and chains. GoFundMe pages of various businesses are embedded on the site, allowing users to donate money for a good cause. Locality also features a news feed about small businesses in the users location, to help keep everyone educated about the current situations in their communities.
How We Built It
Locality was built with Python, HTML, CSS, and JavaScript. The Flask micro web framework was used to build the backend.
The APIs we used include:
Radar.io API
IP Geolocation API
Google Maps API
Yelp Businesses API
News API
Linkedin API
To host Locality, we used Heroku and Domain.com for our custom domain, provided by MLH.
Requirements
The custom modules used:
requests==2.22.0
flask
gunicorn
flask_simple_geoip
flask-googlemaps
Steps and Commands To Run The Code
1. git clone (or download the files). This will download the files to your local system.
2. cd into the folder via Terminal (Mac) or CMD (Windows). This will direct your terminal window to the folder where the code resides.
3. pip3 install -r requirements.txt. This will install all the modules required for our Flask back-end to function; without these, the app.py file will not run.
4. python3 app.py. This will execute our website and open a local version for you to view at localhost:5000 or 127.0.0.1:5000.
Challenges We Ran Into
The first challenge we ran into was finding good API's to source our data to our specifications, however after thorough research we were able to find what we needed.
We were also experiencing issues with some of the overall CSS, especially for the interactive Google Maps widget.
Another challenge we had was implementing actual direct donation links. In order to resolve this, we added static interactive donation features and a "Add Business" button which demonstrates the concept of having businesses providing information to be used for features such as donations.
Accomplishments That We're Proud Of
We were able to make around 85% of the website dynamic. This is evident throughout features that adapt based on user location, such as the nearby local businesses and the local news. This improves the user experience, by making Locality a personalized and tailored experience for every user which uses our platform.
We're proud of the final appearance of our platform; it has a very modern and slick interface, which is visually appealing for users and prevents distractions.
We were able to implement a large majority of the features from the initial brainstorm of the app in a seamless and efficient manner, such as the interactive Google Maps API, or personalized news feed, as well as the LinkedIn Oauth feature.
What We Learned
How to utilize the Flask framework to efficiently build a web platform, which includes the Jinja web template engine.
How to implement the LinkedIn Oauth Sign in onto a web page with the use of cookies to store login data.
How to integrate a domain from domain.com into Heroku using DNS Records and CNAME.
How to properly use the GIT commands in terminal to push and pull, along with GitHub's collaborative features.
How to properly utilize several API's such as the ones listed in the "Built With" section.
What's Next For Locality
We were planning on adding personalized job recommendations for users who logged in via LinkedIn; however, this was not possible since we needed to email and request access to this API feature, which could take up to 2 weeks. Instead, we added static jobs to prove the concept. We want to add this feature in the near future.
We also want to add a way to encourage consumer spending at particular locations. One way this could be done is by using the Privacy.com API to help users generate gift cards for their favorite businesses; this was not possible through the sandbox API, which we initially thought would work during our brainstorm.
Built With
css
flask
google-maps
heroku
html
javascript
linkedin-api
linkedin-oauth
news-api
python
radar.io
requests
yelp
Try it out
github.com
locality.space | Locality | Locality is a platform to promote local businesses during this COVID-19 Pandemic. This platform allows users to find and support businesses near them by donating, spending and educating themselves. | ['Aditi Parekh'] | [] | ['css', 'flask', 'google-maps', 'heroku', 'html', 'javascript', 'linkedin-api', 'linkedin-oauth', 'news-api', 'python', 'radar.io', 'requests', 'yelp'] | 46 |
10,413 | https://devpost.com/software/tracewith-space | The section to change the values of how the drawing will look.
The tracing aspect of the app.
The starting screen of the app.
What it does
Trace With Space is an app that allows you to relax by tracing pictures. You can share the pictures with your friends, and you can customize your relaxing space.
How I built it
I built the app entirely in Swift using Apple's UIKit framework.
Challenges I ran into
I ran into many problems while incorporating the sharing feature into the app, but I was luckily able to fix them in time.
Accomplishments that I'm proud of
I am proud that I was able to make a full app in such a short period of time by myself.
What I learned
Along with improving my Swift skills a lot, I also learned how to approach hackathons in general.
What's next for tracewith.space
I would like to add more functionality to the entire app, including adding more traces.
Built With
swift
uikit
xcode
Try it out
tracewith.space | tracewith.space | Relax like you were floating in space with Trace With Space | ['Navadeep Budda'] | [] | ['swift', 'uikit', 'xcode'] | 47 |
10,413 | https://devpost.com/software/deep-sea-race | Front page.
Animal selection page.
Race simulation page.
Inspiration
While thinking up sea-related ideas, we stumbled across a video of a sea anemone swimming to escape the clutches of a starfish. Inspired by the sea anemone's strange movements, we decided to base a simple application around it and made this simulation.
What it does
The user can select two sea creatures from a provided list and start a race between the creatures to see which one gets to the end of the screen first.
How we built it
We built the simulation using Java and the Java Swing library.
Challenges we ran into
Swing is a tedious library to use and was unfamiliar to us, so we needed to do a lot of learning.
Accomplishments that I'm proud of
We are very happy with the graphics and the layout considering our limited knowledge of Swing!
We also solved problems that we were not able to solve in the past, such as changing panels.
What I learned
We learned how to make a better GUI. We also learned a lot about the different animals we put in our simulation.
What's next for Deep Sea Race
We'd love to have more educational simulations included in the application. One thing we thought of while brainstorming was a mantis shrimp punch simulation! We could also include more animals in the race simulation.
Built With
java
Try it out
github.com | Deep Sea Race | A race simulation of sea creatures. | ['Lucy Wang', 'Audrey Yang'] | [] | ['java'] | 48 |
10,413 | https://devpost.com/software/ocean-nation | Inspiration
With a whirlwind of events happening around the world, the problem of ocean pollution has faded from many people’s minds. Ocean pollution awareness is an important matter that needs to be reinforced again, so everyone’s learning starts now! The amount of trash in the ocean, such as The Great Pacific Garbage Patch, is frightening, and we want to be an impactful source that can make a difference in people’s lives. Having a more engaging website allows us to target not only adults, but the youth that will become the future of this world. Teaching youth about these matters early allows them to innovate and create change in the long run.
What it does
Oceanation is a website that has two pages filled with information about organizations, lists major facts, links to useful innovative sites and displays images of many species living in the oceans. The interactive part is the game that pushes awareness for our world’s ocean pollution problem. It allows people to move their mouse, which is essentially the hook, and has a goal of collecting all the harmful materials in the ocean. This game targets a younger demographic that has a big influence over our generation. Our website truly entertains and educates.
How we built it
For the website we used HTML to display the images and texts. We then styled and formatted it with CSS. The interactive game was built using JavaScript, and the website was hosted on glitch.
Challenges we ran into
One challenge we ran into was working on the game. Since many of us envisioned it differently, we had difficulty coding when everyone split up to work on different parts. With this in mind, we drew out what our site would look like, then delved into our game features. After this, another challenge arose: the game ran slowly and only lasted for a few frames, which forced us to fix and reload the system so that randomly placed images move down the screen with the background image. Lastly, coordinating across three different time zones delayed some of the early progress, but it pushed us to plan future meeting times better and to communicate effectively.
Accomplishments that we're proud of
We’re really proud of improving our speed in the game and smoother collection time with the hook and different objects. We’re proud of the end result of the aesthetically pleasing webpages and the amount of knowledge we learned through researching facts and resources on ocean pollution.
What we learned
We learned how to link our website to a domain that we found on Domain.com. We ran into troubles having it display the link to where we hosted it instead of the domain name itself when the page was loaded. Also, we realized that building a simple game on a site and displaying useful information will allow people to easily learn while they have fun. We too enjoyed learning by building web pages, games and researching about the world’s ocean pollution problem.
What's next for Ocean Nation
Next steps for Ocean Nation includes creating charts and graphs that allow users to easily notice at first glance how bad ocean pollution is. We also want to create new links and constantly add the latest new articles concerning ocean pollution so people know that our site is the “one shop for all” ocean pollution website. On the game aspect, we want to provide more levels, implement a map that shows where they are fishing the garbage out of the oceans and increase the game fluency making it more engaging for everyone.
Built With
css
html
javascript
Try it out
www.oceanation.tech | Ocean Nation | Oceanation is a website that allows people to learn about ocean pollution. It includes an interactive game that allows individuals to recognize materials in the ocean that harm animals in the ocean. | ['Oleh Shostak', 'Vyshnavi Rajeevan', 'Alexis Solorio'] | [] | ['css', 'html', 'javascript'] | 49 |
10,413 | https://devpost.com/software/mask-up | Inspiration
The inspiration came when I decided to make my own masks during a hot summer. The materials I used were clothing from a second-hand store, rubber bands, and tape. The result lasted only a few days, and when I went looking for a better solution, I found that few communities were available to help.
What it does
Business Case: This application allows searching for mask solutions and the latest mask-making methods
Step 1: Search for mask type and how to make them
Step 2: Find the mask origin and view through the map
Step 3: The user can discover new mask making communities using our recommendation system
Step 4: The user could form his / her own communities using the application for tips on getting better masks
How I built it
The application was built using node js framework, javascript, google firebase, google cloud
Challenges I ran into
The integration of different technologies to make an application fully functional was challenging
Accomplishments that I'm proud of
I am proud of building an emergency response community which helps mask-making beginners
What I learned
I learned about new technology providers such as web APIs, and about rapidly prototyping this application to improve social health
What's next for mask-up
The application would be further scaled as a travel and health community for users to get masks wherever they are.
Built With
balsamiq
css3
google
javascript | mask-up | Mask-up is a tech initiative platform for designers to showcase their designs, sell their masks for greater public and improve social health | ['Irene Lee'] | [] | ['balsamiq', 'css3', 'google', 'javascript'] | 50 |
10,413 | https://devpost.com/software/oceanmaster | Home Page!
Trivia Game: We've loaded a bunch of different questions into the database so you can have a different experience every time you play!
Facts and Figures: Learn interesting information about oceanic pollution and see why Marine Conservation is such a pressing issue!
How You Can Help: Displays Marine Conservation centers located across the United States with a detailed description on what they do.
Image Gallery: Shows shocking images on oceanic pollution and how it is drastically affecting aquatic wildlife.
Infographic 1
Infographic 2
Inspiration
For our project, we wanted to focus on an important issue, Oceanic Pollution, that affects aquatic plants and animals around the world. We realized that many people in this area are oblivious to the events occurring in today’s society and we wanted to create something that can help us change this narrative.
What it does
OceanMaster provides people with a platform to educate themselves on environmental issues and learn what they can do to help save the oceans.
How we built it
For the main design of the website, we used a Bootstrap template and used HTML and CSS for the main features of the site. For the Trivia game, we used Javascript to create new questions, check if the users’ answers are correct, and keep track of scores. For the “How You Can Help” Section, we used the Mapbox API to display a real-time satellite map and used jQuery geocoding to show some of the marine conservation centers, through markers with detailed pop-ups, located throughout the United States.
Challenges we ran into
We faced many challenges while developing the website; from getting a functional map to properly storing questions and displaying them on the page. Along with this, we also struggled to adapt to the workflow of a team environment, since we had conflicting schedules and most of our team was unfamiliar with aspects of the project development process and new to computer science in general. However, by conducting more research and establishing better communication, we were able to overcome these hardships and successfully build our website.
Accomplishments that we're proud of
We were able to develop this project with skills we were previously inexperienced with. By finding the resources necessary, we were able to obtain the skills needed in order to successfully accomplish our goals. This hackathon allowed us the opportunity to grow, not only as computer scientists, but also as contributors to our communities.
What we learned
Team collaboration and communication is critical in order to efficiently develop a project. Without the strong team-based environment we created for ourselves, we would not have been able to complete this project in such a short time frame. Although most of our team was new to computer science, by using javascript tutorials and having effective communication, we were able to create a project that is fully functional and ready for immediate deployment.
What's next for OceanMaster
Polishing and expanding OceanMaster in order to make it readily accessible for communities around the world. We also plan on gaining the support of Marine Conservation centers to help us on our campaign to protecting aquatic wildlife and our oceans.
Built With
bootstrap
css
html
javascript
jquery
mapbox
Try it out
github.com
oceanmaster.netlify.app | OceanMaster | Educating the Community about the Deep Sea | ['Abhi Nayak', 'Avin Naik', 'Anish Naik', 'Aditya Bommareddy'] | [] | ['bootstrap', 'css', 'html', 'javascript', 'jquery', 'mapbox'] | 51 |
10,413 | https://devpost.com/software/battleship-ikd7ln | Inspiration
It's a game with ship in the name
What it does
Allows you to play battleship with the computer
How I built it
Built with c++
Challenges I ran into
Making the computer remotely intelligent
Built With
c++
Try it out
github.com | Battleship | It's just battleship... | ['Carolina Brager'] | [] | ['c++'] | 52 |
10,413 | https://devpost.com/software/ar-shipment-label-tracker | Inspiration
Shipment labels are hard to read and are hardly usable
What it does
Makes the shipment label interactive and useful by adding more functions to it, like navigation, shipment status tracking, and customer details
How I built it
Used Vuforia engine to display in AR and Unity engine to make it all come together
Challenges I ran into
Accessing android services using Unity
Accomplishments that I'm proud of
Completing all the functionalities I set out to integrate
What I learned
AR using unity and android
What's next for AR Shipment Label Tracker
Pitch to big shipping companies and possibly score a contractual interest
Built With
html
unity
vuforia | Label Scanner | Shipment labels made easy with AR | ['N K'] | [] | ['html', 'unity', 'vuforia'] | 53 |
10,414 | https://devpost.com/software/hack-to-rsvp | Logo Generated with Hackathons
Photo Mosaic Generated with Code
Our Wi-Fi Camera built with Raspberry Pi & Google AIY Kit
Usage of the Wi-Fi Camera
(Secret) Rubber Ducky friend mask for MoviePy
Inspiration:
We thought that representing the history of MLH and all of the competitions that have happened over the past 7 years would be really cool. Putting together slideshows, videos, and images, all based on data scraped from MLH, gives a nice nostalgic feeling.
What it does:
Hack to RSVP uses a combination of technologies to watch previous memories and create new ones! The project produces videos and photo mosaics with programming and pictures from past hackathons. Hack to RSVP handles heavy data scraping, video clip rendering, image manipulation, and even includes a Wi-Fi camera created with Google AIY hardware.
How I built it:
We split the responsibilities of the project among the team surrounding the following elements:
1. Scraping
Several UiPath processes were used to get hackathon names, find Facebook albums, find picture links, and save all the pictures locally. The entire process took around 2.5 hours and generated 57,000 pictures.
2. Image Mosaic
Using the scraped images, as well as a base image from the actual hackathon, we were able to make an image mosaic comprised of scaled-down versions of the scraped images. This works by splitting the base image into a grid and getting the average color of each grid section and then mapping that grid section to one of the smaller photos with a similar average color. The averages were done using NumPy, and the actual image manipulation was done using the Pillow module for python.
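That matching step can be sketched roughly as follows, assuming NumPy arrays for both the base image and the pre-scaled tiles (the function and parameter names are ours, not the project's):

```python
import numpy as np

def build_mosaic(base, tiles, cell=16):
    """Replace each cell x cell section of `base` (H x W x 3 array) with
    the tile (each cell x cell x 3) whose mean color is closest to the
    section's mean color."""
    # precompute each tile's average color once, so the per-cell
    # lookup is just a nearest-neighbor search in RGB space
    tile_means = np.array([t.mean(axis=(0, 1)) for t in tiles])
    h = base.shape[0] // cell * cell
    w = base.shape[1] // cell * cell
    out = np.empty((h, w, 3), dtype=base.dtype)
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            avg = base[y:y + cell, x:x + cell].mean(axis=(0, 1))
            nearest = np.argmin(((tile_means - avg) ** 2).sum(axis=1))
            out[y:y + cell, x:x + cell] = tiles[nearest]
    return out
```

With 57,000 candidate tiles, precomputing tile_means once is what keeps the per-cell lookup cheap.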
3. Word Cloud (Logo Style)
Using the scraped list of competition names from the MLH website along with a base image made in photoshop, we were able to use python to: create a dictionary of the most common words in each competition, scale each term based on frequency, and apply each word onto the outline of a base image. This ended up becoming one of the parts of our logo.
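The counting-and-scaling step might look roughly like this (a hedged sketch: the stopword list and the 12-72pt range are assumptions, not the values we actually used):

```python
from collections import Counter

# words too common in hackathon names to be interesting (assumption)
STOPWORDS = {"the", "of", "at", "a", "hack", "hacks", "hackathon"}

def word_sizes(event_names, min_pt=12, max_pt=72):
    """Count how often each word appears across competition names and
    scale each word's font size linearly with its frequency."""
    words = (w.strip(".,!:").lower()
             for name in event_names for w in name.split())
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    if not counts:
        return {}
    top = counts.most_common(1)[0][1]
    return {w: min_pt + (max_pt - min_pt) * n / top
            for w, n in counts.items()}
```

The resulting word-to-size mapping is what gets drawn onto the outline of the base image to form the logo.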
4. Video Slideshow
Going back to the scraped images, we opted for the MoviePy module to sort through the pictures and use them to produce videos, all through pure code. With the region detection and masking functionality of MoviePy, we wrote code to take images appropriate for both a background (outside the MLH duck) and a foreground (inside the MLH duck) and assembled them into slideshows. The slideshows are entirely unique and are generated from any of the 50,000+ images at runtime.
5. WiFi Camera Hardware
We wanted to create a project that leverages all the technology available to us. Therefore, we decided it would be a really cool idea to build a camera that instantly sends the picture taken to the computer for a video to be processed. This was done with a Raspberry Pi Zero W connected to Google's AIY Bonnet. The Pi communicated with the computer via SSH & FTP for file transfer.
Challenges I ran into
The pages scraped using UiPath used a rendering strategy called "lazy loading", which limited the content shown. After some experimenting with RPA, we developed a solution that simulates user interaction by sending hotkeys to scroll to the bottom of the page.
The process to scrape all the images was very long and inefficient. A few errors during the execution of the automation delayed us from getting all the pics.
Since UiPath is a robotic tool, Facebook blacklisted the accounts as Spam/Bots. To fix this, we implemented delays between the actions.
A clear image mosaic requires a lot of images. Even though we had 57 thousand images, processing everything took an extremely long time.
Creating videos through pure code means we had to fully render the video each time to really confirm it was doing what we wanted it to do. Since we had to do a lot of positioning adjustments, this took away from our total project time.
Corrupted MicroSD card issues on the Raspberry Pi delayed us, requiring us to format the drive and reconfigure the entire device.
The WiFi Camera Hardware requires a power supply to be connected to the device so it can function. However, we wanted to make something more "remote". To overcome that, we had the great idea to use a portable charger to power the device. However, the amperage supplied to the device was not enough. Therefore, we had to connect 2 portable chargers, cut both cables, and join them into one to provide double the amperage. This made it work really well!
Accomplishments that I'm proud of
We are proud of revisiting MLH's history with photos, videos, hardware, and much more! We were able to build this awesome project by helping out each other as a team with different levels of experience. We are proud of being part of MLH and working on a project that attempts to use all the technology provided.
What I learned
How to use UiPath: create DataTables, variables in VB, Data Scraping using CSS selectors, "If" statements, loops, creation of CSV files, reading CSV files, making HTTP requests, downloading pictures.
How to use Moviepy: How to create different types of clips, usage of advanced masking, region detection, cropping, resizing, adding music, stacking clips, and multiple clip rendering.
How to use Raspberry Pi, SSH, FTP to connect hardware with software.
How to manipulate photos with Numpy & PIL.
How to double the amperage!
What's next for Hack to RSVP
We want to put Hack to RSVP in the cloud for others to utilize our service to see past memories and create new ones!
Domain.com Submission
HackToRSVP.online (domain bought, but no DNS configuration at the moment)
Built With
google-cloud
google-video-intelligence
googleaiy
moviepy
python
raspberry-pi
uipath | Hack to RSVP | Using most of the sponsor technologies to build a memorable birthday gift for MLH that uses photos, videos, hardware, and robots. | ['Nathan Kurelo Wilk', 'Ariel Kurelo Wilk', 'Ben Ruddy', 'Adam Hassan'] | ['First Overall', 'Best UiPath Automation Hack'] | ['google-cloud', 'google-video-intelligence', 'googleaiy', 'moviepy', 'python', 'raspberry-pi', 'uipath'] | 0 |
10,414 | https://devpost.com/software/augmented-reality-birthday-fiesta | Inspiration
Birthdays are special.
They are a reason to celebrate, party, and take a break from a hectic schedule.
But since the start of quarantine, we haven't been able to go out much. We miss spending time together with old friends and relaxing. So we thought: if we can't go outside, why not bring the happiness to us and to our loved ones?
Augmented Reality Birthday Fiesta is an AR application that lets you have that special birthday experience indoors and also helps bring peace of mind.
What it does
The Vuforia app installed on your phone, with the provided image targets in the Vuforia database, will show all the birthday bombings. This Unity3D birthday AR allows you to receive databases, including image targets, and to regenerate all your birthday memories in reality.
The AR Foundation app installed on Android with the AR Core XR Plugin provides the session and identifies a default plane so that the balloons appear at the specific location tapped by the user. The AR Foundation AR Face Tracking library will also track your face, so you can try funky 2D + 3D sticker effects on your face in place of real face masks.
You have several options to explore inside the AR:
1. Balloons
2. Card magic tricks
3. Funky faces
4. 3D Cake
5. 3D Gifts
6. Glass Cheers
How I built it
Inside Unity, using these packages:
1. Vuforia
2. AR Foundation
3. AR Core XR Plugin
4. AR Face Tracking
and installing APKs on the mobile with target images.
Challenges I ran into
For an AR Foundation application on Android, Google Play Services for AR must be pre-installed on the device; otherwise you will only see a black screen.
It was hard to find models and alignment for a perfect experience.
Importing packages and databases multiple times
Accomplishments that I'm proud of
It will give you the best birthday experience at home.
Everything in the application works.
What I learned
1. Making apps with Vuforia
2. The Vuforia database and importing; Vuforia with 2D and 3D
3. Manipulating images
4. AR Foundation with the AR Core XR Plugin
5. Using AR Face Tracking
What's next for Augmented Reality Birthday Fiesta
We will add more elements like fireworks, party bombers, etc. for a better experience.
Built With
ar
arfacetrack
arfoundation
unity
vuforia
Try it out
github.com | Augmented Reality Birthday Fiesta | for all our party people | ['blackcrabb Niyati', 'saumya shakya'] | ['Second Overall'] | ['ar', 'arfacetrack', 'arfoundation', 'unity', 'vuforia'] | 1 |
10,414 | https://devpost.com/software/savior-u8dwn2 | saviour-help
saviour-pseudocode
saviour-practice
saviour-style
saviour-check
Inspiration
There were three main problems we faced when we started with programming:
First: We didn't have an active internet connection, and since all the coding challenge websites like HackerRank and CodeWars worked only with an internet connection, it was particularly difficult to practice our skills.
Second: We had to search StackOverflow and other platforms to find solutions for programming problems. But, a lot of times the code would not be explained properly and we had a hard time figuring out what it did. Also sometimes we'd write code without properly documenting it or adding comments and then later wonder how it worked.
Third: We didn't know how to style our code properly or what styling practices to follow.
We created a Python package that helps us with all these issues, aka our Saviour.
What it does
It has three main components; you can view a list of commands by running saviour-help. They are:
saviour-style -> used for code styling using PEP 8 standards
saviour-pseudocode -> used for generating pseudocode from a program
saviour-practice -> used for practicing programming questions offline
How We built it
We used the UiPath automation tool to scrape the question data from the Project Euler website.
For the code styling, we used autopep8 to conform the Python code to PEP 8 styling standards.
The pseudocode generator works primarily by replacing certain keywords and functions in a normal line of code with plain English, so it is understandable by anyone.
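A minimal sketch of that keyword-replacement approach, with a deliberately tiny rule table — the real saviour-pseudocode handles far more constructs and conditions than this:

```python
# Map Python keywords/functions to plain English, line by line. The first
# matching rule wins; the rule set here is illustrative only.
import re

RULES = [
    (r"^for (\w+) in range\((\w+)\):", r"REPEAT \2 times, counting with \1:"),
    (r"^if (.+):", r"IF \1 THEN:"),
    (r"^print\((.+)\)", r"DISPLAY \1"),
    (r"(\w+) = (.+)", r"SET \1 TO \2"),
]

def to_pseudocode(line):
    for pattern, replacement in RULES:
        new, n = re.subn(pattern, replacement, line.strip())
        if n:
            return new
    return line.strip()

for src in ["total = 0", "for i in range(10):", 'print("done")']:
    print(to_pseudocode(src))
# SET total TO 0
# REPEAT 10 times, counting with i:
# DISPLAY "done"
```

Nesting, multi-statement lines, and idioms like slicing are where the rule table grows quickly — hence the "lot of conditions" mentioned under challenges.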
Challenges We ran into
3 of our team members are new to programming with python, so we had some difficulty working with packages.
The autopep8 library had poor documentation so it took time to get it to work. We had to make the output code not only correct but also highly presentable.
The pseudocode generation part was particularly difficult as we had to consider a lot of conditions to get it to output in as simple English as possible.
Other than that, we all live in different time zones, so we had to keep careful track of each other's progress at all times.
Accomplishments that we're proud of
We are proud of the fact that we were able to make a program closest to the actual idea we had.
What we learned
Data scraping using UiPath
Pep8 code styling standards
Making python packages and command line scripts
What's next for Saviour
We will work towards building an even smarter pseudocode generator that can work with multiple programming languages. We will try to add more questions to our offline Saviour-practice module.
Built With
pep8
projecteuler
python
uipath
Try it out
github.com | Saviour | Want your code to be stylish, or generate Pseudocode to understand someone's program? Introducing the Saviour, it can do all of this plus give you challenging questions inspired by Project Euler. | ['Amanjot Singh', 'Jashanjot Pruthi', 'Jatin Dehmiwal', 'Priyank Patel'] | ['Third Overall', 'Best Hack for Hackers'] | ['pep8', 'projecteuler', 'python', 'uipath'] | 2 |
10,414 | https://devpost.com/software/wacky-chess | Inspiration
We play chess and other board games a lot, so we wanted to do something related.
What it does
Basically, a 2-player chess game that adds twists every 10 turns.
How we built it
The program was split into 4 parts: display, game board, game pieces, and custom modules that added a twist to the game.
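The custom-module split might look something like this sketch; the class and function names are invented for illustration, not taken from the project:

```python
# Each "twist" is a small object with an apply() hook; every 10th turn the
# game picks one at random and applies it to the board state.
import random

class Twist:
    name = "base"
    def apply(self, board):  # mutate the board state in some wacky way
        raise NotImplementedError

class ShufflePawns(Twist):
    name = "shuffle pawns"
    def apply(self, board):
        random.shuffle(board["pawns"])

def maybe_twist(turn, twists, board, rng=random):
    """Every 10th turn, pick a random twist and apply it."""
    if turn % 10 == 0 and turn > 0:
        twist = rng.choice(twists)
        twist.apply(board)
        return twist.name
    return None

board = {"pawns": list("abcdefgh")}
assert maybe_twist(7, [ShufflePawns()], board) is None   # no twist mid-cycle
print(maybe_twist(10, [ShufflePawns()], board))          # → shuffle pawns
```

Keeping twists behind one interface like this is what makes "add more twists" a matter of writing one new class.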
Challenges we ran into
We lost contact with 2 of our team members shortly after brainstorming, so, unfortunately, we had to rush this as a group of 2. There were a few logic errors, but they were easily fixed with debugging. Overall, the project went rather smoothly.
Accomplishments that we're proud of
Working on this for 18 hours straight
What we learned
Although programming is very time-consuming, it is completely possible to pull off a project within 24 hours.
What's next for Wacky Chess
We would like to add more twists, improve the user interface, and maybe implement online multiplayer with sockets.
Built With
pygame
python
random
Try it out
github.com | Wacky Chess | Chess with a twist | ['Brandon Cheng', 'Nicholas Jano'] | ['Your First Hackathon Birthday?!'] | ['pygame', 'python', 'random'] | 3 |
10,414 | https://devpost.com/software/masque-txt | Masque-txt logo and initial design
A screenshot of the Masque-txt app
Arduino circuit and LCD display
An LCD case design
An LCD hinge attachment
An LCD nosepad attachment
Inspiration
The recent pandemic outbreak has led to further emphasis in personal hygiene and protective face masks. Although this situation has been hard for everyone, it has been especially difficult for those who struggle to communicate verbally. Our team wanted to build a product that promotes staying safe, prevents the spread of the virus and empowers those who need a voice.
How it works
Masque-txt is a smart mask that displays messages on a face mask by communicating with a phone app to obtain user input. In addition to typing messages that they wish to display, users also have the option to add and delete common phrases for personalized, quick message display. Furthermore, the face masks are custom-designed with an LCD display, connected through an Arduino Wi-Fi setup to the Masque-txt Android app.
Building Process
The first step in our design process was deciding on the physical design of our face mask and the integration of a display with a comfortable face mask. While flexible screens do exist, they were not up to the standard we desired nor available to us for this 24 hour hack. Thus we decided to use a LCD display that would attach to a custom face mask.
We also wanted a wireless communication between the app and the facial display. To achieve this, we decided to utilize an Arduino in the circuit to control the LCD display and to communicate with our app via wifi. In terms of hardware, we were able to display custom messages on the LCD and wire the Arduino such that it was free-standing (connected to a smaller power source), allowing for a smooth integration to our mask designs had we had access to the necessary parts.
The UI was designed using Figma, with an Android as our target device. We aimed for a clean and minimalist design with a light colour scheme. Once it was completed, we moved on to coding the app functionality features, using React Native. We spent the majority of our time familiarizing ourselves with React Native and Firebase, debugging the text input, and the message save and delete functions.
Accomplishments & Challenges
Since half of our team consisted of first time hackers and we were all using React Native for the first time, it was a steep learning curve for all of us. Hardware integration was challenging due to limitations on the technology available, as only one team member could work on the hardware due to the virtual setting. Furthermore, researching, debugging and implementing the wifi feature with the Arduino proved to be difficult and with our limited time, we were unable to fully complete that feature. Nonetheless, we were able to design multiple face mask LCD attachment models, and we were able to learn a significant amount and we are all excited to attend more hackathons in the future!
What's next for Masque-txt
During team discussion, comfort was a common theme, thus we created various custom 3D face mask designs. In the future we hope to finalize all components of our product, fully complete the integration process and be able to test these prototypes with different materials to identify future areas of innovation and improvement.
Built With
android-studio
arduino
figma
firebase
java
javascript
lcd-screen
react-native
solidworks
xcode
Try it out
github.com | Masque-txt | Face masks require projected voices, a difficult task for those struggling with speech. To foster inclusivity, we created Masque-txt, a smart mask that shows messages that users input to their phones. | ['Veronica Nguyen', 'Emily Lukas', 'Matthew Sam', 'Christopher Tang'] | ['Best Hardware Hack presented by Digi-Key'] | ['android-studio', 'arduino', 'figma', 'firebase', 'java', 'javascript', 'lcd-screen', 'react-native', 'solidworks', 'xcode'] | 4 |
10,414 | https://devpost.com/software/sneksnek | Our logo featuring our glorious sneksnek
Our landing page, complete with a key for which items are which
The initial setup of sneksnek upon starting the game
Looks like sneksnek is in a bit of trouble- hopefully he won't eat all of those poisonous apples
Inspiration
Quarantine is boring. sneksnek is fun as heck. We wanted to create the classic snake game with a mix of battle royale. Based on the concepts behind games like Tetris 99, sneksnek allows you to battle against other people online to claim the title of apex snek.
What it does
Welcome to sneksnek! Get ready to put all hands on deck, because this game is a trek to wrek the other players. You'll have to check for apples to peck, as well as walls and poison green apples to inspek and avoid. Blech. When you grow, you subjekt other players to more walls or poison apples.
🍎 Red apple: +1 points, gain length (sneck)
🍏 Poison apple: -1 points, lose length (blech)
☠️ Wall: Immediate Death (wrek)
To become the apex snek, you mustn't crash into your neck. Or the walls. Beckon your fast reaktions, because when you're at the top, your teknique has to be impekable. It's not complex, get out there and do your best!
How we built it
We developed the game with HTML5, CSS3, and JavaScript. We used repl.it to collaborate online. For the gameplay graphics and functionality, we used the p5.js library. To create rooms that allow the individual games to connect with each other, we used Socket.io.
Challenges we ran into
One challenge we ran into was scaling the snake to the size of the screen. Initially, our design had the snake with a constant size regardless of screen size, with the canvas changing its width and height to match. We quickly realized this gave an unfair advantage to people with larger screen sizes, and were forced to rewrite much of the scaling to allow for the snake to scale up and down with the grid, so as to eliminate biases favoring larger screens.
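The fix boils down to deriving the cell size from the screen rather than fixing it, so every player sees the same number of grid cells. A sketch in Python (the game itself is JavaScript/p5.js, and `GRID` is an assumed constant):

```python
# Instead of a fixed cell size with a variable-size canvas, fix the grid
# dimensions and scale the cell size to the smaller screen dimension.
GRID = 20  # cells per side, the same for every player

def cell_size(screen_w, screen_h, grid=GRID):
    return min(screen_w, screen_h) // grid

# A laptop and a large monitor both get a 20x20 board; only pixels differ.
print(cell_size(1280, 720))   # → 36
print(cell_size(2560, 1440))  # → 72
```

This is what eliminates the bias toward larger screens: the board is the same logical size everywhere.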
Another challenge we ran into was that, initially, when we started up the socket, the p5.js script refused to run. After lots of debugging, we discovered p5.js's instance mode. This mode allows the p5 script to act as an object, allowing us to initialize multiple rooms with the same p5 object, giving them access to unique instances of the game board within a shared socket.io room.
Accomplishments that we're proud of
We are extremely proud of the design of sneksnek. From the background to the design of our little snek, we are really happy with how our final product turned out. We are also happy with the scalability of our project, as, while currently only 3 players are allowed in the room at once, this can be scaled up in the scenario where the game becomes more popular. We are also very proud of our creative domain name from domain.com: sneksnek.tech
What we learned
We learned a lot about how libraries can interact with each other through this project, as well as how objects within a room can communicate with each other. In implementing the p5 script for the snake game, we learned about instance mode, which allows each user within the room to control their own instance of the snake. We had never used this encapsulation procedure before while working with p5.js and it was really interesting to learn about how scripts using specific libraries can be implemented as an object of that library rather than as a standalone script.
What's next for sneksnek
Sneksnek has a bright future in the dark space. In the future we have plans to implement sneksnek in 3D and perhaps allow for movement in the x, y, and z directions with a whole maze to traverse through. This maze could be generated using recursive backtracking, and implemented in the WEBGL p5.js renderer, or three.js. SnekSnek may later come in different skins where the player can customize their snek based on how many times they've achieved the level of apex snek. We also hope to implement a way where players can see how well their opponents are doing, as they gotta keep trak of the other sneks!
Built With
css3
html5
javascript
p5.js
repl.it
socket.io
Try it out
github.com
www.sneksnek.tech | sneksnek | A battle royale game where the goal is to wreck all the other sneks and become the apex snek | ['Kevin Gauld', 'Ethan Horowitz', 'Jendy Ren', 'Michelle Bryson'] | ['Best Domain Name from Domain.com'] | ['css3', 'html5', 'javascript', 'p5.js', 'repl.it', 'socket.io'] | 5 |
10,414 | https://devpost.com/software/hackshare-tn675b | A screenshot from the iOS
Share your hack!
Explore hacks around the world
Inspiration
During each hackathon, in our rush to build a working project, we learn so much. Sometimes these are small tidbits of information that are too small or trivial to blog about. It could be anything, from something as trivial as bypassing CORS to aligning an element with CSS. Whatever the case, this thing that you spent some time googling is worth sharing with the hackers of the world. Platforms like StackOverflow encourage you to ask about something you got wrong. We're looking to reverse this trend: let's share what we got right instead.
What it does
Hack it, Share it! is a cross-platform app (yes, you can use it on Android, iOS and the web) that lets you share anything that other hackers might find helpful. It's a simple experience, no signup required. All you do is type in your hack and give it a title. All shared hacks can be seen via a world map with pins showing you from where the hack was submitted.
How we built it
The app was built using Flutter in order to have that cross-platform experience. We used the Google Maps SDK to show the world map with the pins, and further integrated the app with Google's Firebase to store hacks and related info on Cloud Firestore.
Challenges I ran into
This is the first time we're using Flutter, so it was an interesting journey. A hitch I faced in the beginning was actually getting Flutter installed on my machine; I finally got it to work after I switched to OpenJDK 8. Getting markers to show up on the map was also a challenge, mainly because the API had recently changed and we could only find old code on the web.
Accomplishments that we're proud of
This is the first time we used Flutter to create an actual app. It was a pretty wow moment when we finally got the app to work. Almost all the technologies that we used (Dart/Flutter, Google Maps SDK, Firebase/Firestore) were completely new to us, and it was really great how we got everything to finally work out.
What we learned
This hackathon, being shorter than the usual 48 hours, taught us a great deal about how to manage our time properly. It's always a scramble for time at the end, and with each passing hackathon, I'm glad to report that we are slowly improving.
What's next for Hack it, Share it!
At the moment, there isn't a way to search for hacks. In the future, I plan to allow adding tags to submitted hacks. Maybe a search feature would also do all the hacks some justice.
Built With
android
dart
firebase
firestore
flutter
google-maps
ios
iphone-sdk
Try it out
github.com | Hack it, Share it! | Share the tiniest of hacks with the world! | ['Joshua T', 'Jason H'] | ['Best use of Google Cloud'] | ['android', 'dart', 'firebase', 'firestore', 'flutter', 'google-maps', 'ios', 'iphone-sdk'] | 6 |
10,414 | https://devpost.com/software/paper-web | Inspiration
The ability to create websites quickly and easily for those who may not have much experience
What it does
Takes an image and converts it into an HTML file based on a provided resource model.
How I built it
Built with an Angular frontend and a Flask backend, this project uses the power of OpenCV and Google Cloud Vision to scan images into HTML objects.
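The conversion step — detected regions becoming positioned HTML elements — can be sketched like this; the box schema and element kinds are assumptions for illustration, not the project's actual model:

```python
# Turn detected regions (e.g. from OpenCV contours or Cloud Vision results)
# into absolutely positioned HTML elements.
def boxes_to_html(boxes):
    parts = []
    for b in boxes:
        style = (f"position:absolute;left:{b['x']}px;top:{b['y']}px;"
                 f"width:{b['w']}px;height:{b['h']}px")
        if b["kind"] == "text":
            parts.append(f'<p style="{style}">{b["content"]}</p>')
        elif b["kind"] == "button":
            parts.append(f'<button style="{style}">{b["content"]}</button>')
    return "<body>" + "".join(parts) + "</body>"

html = boxes_to_html([
    {"kind": "text", "x": 10, "y": 10, "w": 200, "h": 40, "content": "Hello"},
    {"kind": "button", "x": 10, "y": 60, "w": 100, "h": 30, "content": "Go"},
])
print(html)
```

The hard part mentioned under challenges — converting objects and positioning them properly — lives in getting those box coordinates right before this step.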
Challenges I ran into
Primarily in converting objects and positioning them properly
Accomplishments that I'm proud of
Using vision cloud services for the first time! This was new for all of us and definitely something we were happy to see work
What I learned
Frontend and backend concepts that we weren't too familiar with prior, and computer vision techniques that were new to all of us
What's next for Paper Web
Built With
angular.js
bootstrap
flask
javascript
opencv
python | Paper Web | The quickest and easiest way to make a website! | ['David Morfe', 'Hafeth Wadi', 'Syed Haider'] | [] | ['angular.js', 'bootstrap', 'flask', 'javascript', 'opencv', 'python'] | 7 |
10,414 | https://devpost.com/software/escape-class-room-oexqpl | Project Details and Explanation:
Since you can’t actually play our escape room, here is what would happen if you did (also refer to our video) →
Upon looking around the room, you find an index card saying “:) most to least faces :)”. This is a hint that the passcode into the chest is based on the platonic solids, in some order of least to most faces. There is also a poster on the wall with this same “:)” and a diagram of the faces of the platonic solids to serve as another hint to play around with them. After seeing the single letter options you decide on the code THODI (tetrahedron, hexahedron, octahedron, dodecahedron, then icosahedron in order of faces).
Once you open the chest, there are a bunch more shapes and then a sort of faucet-looking item. After moving around with the faucet in hand you realize that if you get close enough to the sink the handle teleports to the faucet. Upon looking around the room you realize that there is both a watering can and a plant, and a poster of photosynthesis that suggests watering that same plant. If you walk to the sink with the watering can, it fills up with water, at which point you are able to water the sunflower. The sunflower grows and hits the ceiling, releasing a CD that falls from a moved ceiling tile.
At first, a CD seems to have no place in this classroom, but upon looking around you see a CD slot connected to a projector. If you get close to the CD slot while holding the CD, it goes right in the slot and a projection of the periodic table appears on the white board. Upon close inspection, you realize that it is missing a few elements. You search around the room and find papers with a bunch of different elements, and have to figure out which ones are already on the table and which ones are missing. You notice that you need to submit another passcode but this time on the teacher’s computer.
You look around and see the poster about what an atomic number is, and you realize the passcode must be related to this. You also notice a clue on the white board about sorting in alphabetical order, and you decide to sort the atomic numbers of the three missing elements in alphabetical order and enter the code 461474 into the computer. The teacher’s desk opens, and you have the final key. You run to the door, unlock it, and escape! You had some fun and learned along the way ;)
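The final passcode logic can be reproduced with three hypothetical elements — the write-up doesn't name the missing ones, so palladium, silicon, and tungsten are illustrative picks whose atomic numbers happen to yield the stated code:

```python
# Sort the missing elements alphabetically by name, then concatenate their
# atomic numbers to form the desk passcode. Element choices are hypothetical.
elements = {"palladium": 46, "silicon": 14, "tungsten": 74}

code = "".join(str(elements[name]) for name in sorted(elements))
print(code)  # → 461474, the code entered on the teacher's computer
```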
Our project today:
In a time where we are largely confined to our homes, virtual reality technology provides us an opportunity to explore new spaces - and new realities - without having to venture out into public. Escape rooms have always been a special activity for us to do with friends, and we wanted to utilize virtual reality to create a digital version of a beloved pastime. We aimed to combine our passion for escape rooms, gaming, and education in this project.
Having been out of school since March and unsure when we will return to the classroom we wanted to create a digital classroom space as the setting for our escape room. Because of our background with educational video games, we wanted to introduce an academic element, hence the science and math concepts that form the basis of our clues. We hope that students will not only be entertained by our escape room game, but will also learn from completing it.
We also designed this classroom in a way that it can be adapted and used for actual virtual learning in the future. We have had conversations with Dr. Fan and hope to pursue this idea of a “virtual classroom” (without the escape room element and more lesson based) in the future.
About our project:
Before the hackathon we brainstormed ideas and came up with the idea of the classroom escape room simulation, but did not begin any physical work or create our project file. The UE4 project file was newly made at the beginning of the hackathon on Saturday. We used the basic map and controls in UE4 for Oculus Rift, as well as free downloadable texture packs from Epic Games. Our models are a mix of CAD models we made during the hackathon and open-source models (we can provide a full list of resources used).
Blueprints:
We made all codes in this game completely independently, without any tutorials or open source work. If you want to see full blueprints we would be happy to send. :)
Backstory:
Anna and I met at the NYU Mechatronics lab last summer as interns, where we predominantly worked with Unity 2D/3D making user interfaces for PhD projects. We were rising seniors in high school at the time, and we became fast friends bonded by our passion for STEM. In January we began working for Girls' Angle, a math organization based out of Cambridge, MA that works to get young women and girls interested in math. Under the guidance of founder and CEO Dr. Ken Fan, we have been exploring how we can use UE4 and virtual reality to help students visualize and understand higher dimensions. We hope to continue to develop this project and find other ways to use technology to enhance math education, particularly for underserved girls, like those who benefit from Girls' Angle's resources.
Cindy has studied animation and digital design at the Art Institute of Chicago, Carnegie Mellon University, and will be attending the Digital Media Design program at the University of Pennsylvania this fall. She is interested in art for film, tv, and video games. While she is experienced in Unity programming, she contributes mainly to the visual elements of the project.
The three of us are all recent (this June!) graduates of girls high schools and going on to study engineering in college next year.
Full version (6 minutes):
https://youtu.be/s1Z3wyDstYk
2 minute demo linked below.
Built With
blueprintprogramming
cad
oculus
oculus-gear-vr
unreal-engine | Escape (Class)Room | Escape (Class)Room is a virtual reality escape room where users solve science and math themed puzzles to find the key to the door and escape. | ['Erin Donahue'] | [] | ['blueprintprogramming', 'cad', 'oculus', 'oculus-gear-vr', 'unreal-engine'] | 8 |
10,414 | https://devpost.com/software/happybirthdaymlh | Easy mails customer support
Hi there!
Basically, this is an automation of a customer support email reply system using UiPath Studio. In this project we read all the emails from our inbox and send replies by generating a ticket number for each issue. Finally, the data is stored in an Excel sheet.
happybirthdaymlh
Built With
uipathstudio
Try it out
github.com | Easy mails | An Customer Support Email reply system using uipath studio | ['lohit sai'] | [] | ['uipathstudio'] | 9 |
10,414 | https://devpost.com/software/musiclean | Inspiration
A VR game related to space, just to enjoy.
Built With
a-frame
blender | VR GAME | A VR based programming language game making life fun. | ['Hrithik Sahu'] | [] | ['a-frame', 'blender'] | 10 |
10,414 | https://devpost.com/software/apifinder | Inspiration
Every time I start a project or attend a hackathon, a large amount of time is spent finding a suitable API for my project. I would have to search multiple websites, open multiple tabs, and spend countless hours finding the right API. During my first hackathon, I did not have any idea of what to create. If I had had a web app like this API finder, I would have been able to get some inspiration for a project.
What it does
It first shows the user the public APIs that are available to be used for free. The user is also able to search for other APIs if they have a specific API in mind.
How I built it
I used ReactJS to build the web app. I also used an API to get data on other APIs.
Challenges I ran into
Accomplishments that I'm proud of
I was able to use the API in ReactJS; as this is my first time using ReactJS, I had to watch a lot of YouTube videos to learn it.
What I learned
how to use ReactJS, use APIs with ReactJS, and style components
What's next for APIFinder
add more features, such as an API tester and a JSON parser, to make it more useful for first-time hackers
Built With
brains
css
html
javascript
react
Try it out
pradneshsanderan.github.io | APIFinder | A web app to help experienced and new hackers find apis suitable for their projects in a quick and easy manner. | ['Pradnesh Sanderan'] | [] | ['brains', 'css', 'html', 'javascript', 'react'] | 11 |
10,414 | https://devpost.com/software/code-capture-compile | Inspiration
While in recent times the world has started moving towards pro-CS education, the fact is that buying computers is a distant dream for most students and educational institutions across the globe even today. In most developing countries, which account for over 85% of the world, the ratio of CS students to the number of computers available is highly skewed, and most students are still learning programming via pen and paper. At the same time, however, the number of people who own mobile phones has significantly increased. Statistically speaking, 4 out of 5 people have access to smartphones versus only 2 out of 5 to computers. Smartphones simply are more accessible than computers. Bridging this gap between pen-and-paper coding and coding on a computer by using a technology that people already own can bring a significant difference in the adoption of Computer Science education today.
What it does
CodeCapture is a web application that aims to ease coding education for individuals that have limited to no access to personal computers and computer labs at school or college using pen-and-paper and their smartphone.
CodeCapture allows you to simply take a picture of your hand-written code, extract it, and run it in your browser. With CodeCapture, you will be able to code without typing. Built around this amazing functionality is an all-inclusive mobile-first learning platform. With facilities such as customized programming courses for students and educators, code testing and assessment facilities, and virtual classrooms, CodeCapture shall offer an unparalleled online CS education experience. CodeCapture will also feature a plethora of curated learning content for all users (individual and institutional) as well as a safe space to connect with the learners' community and share their work.
With CodeCapture, YOU DO NOT NEED TO TYPE TO CODE!
How we built it
While developing our product, we had to ensure that our solution, while fulfilling the consumer’s need, also remains highly fault-tolerant, scalable, and available. Therefore, we made our system design decisions in order to ensure a robust product.
Azure Cognitive Services
is our product’s hero, enabling us to take code from paper to mobile. While text extraction facilities have been available in the market for a while, this is a first-of-its-kind use-case.
We utilized
ReactJS
as our primary technology for the consumer-facing side of the product to ensure better performance.
Azure Static Web Apps
as a tool also enhances scalability and offers high availability.
We chose to forgo a traditional back-end and instead use a serverless architecture via
Azure Functions with .NET
for all our infrastructural work to ensure maximum scalability and lower cost.
For our database needs, considering our structure, we chose
Azure Database for MySQL
due to its high availability.
Our storage has been enabled by
Azure Blob Storage
in order to ensure high availability, accessibility and security of stored information.
In order to facilitate compilation of code, we are using an
Azure VM
where compilers are installed and a custom-built Python API uses the system resources to manage code compilation.
Thus, we are pushing the boundaries of traditional coding education platforms in order to stand out and disrupt the market, both in terms of idea and execution.
Accomplishments that we're proud of
The
CodeCapture Web App Prototype
was accepted by
Code.org
to be featured in the
Hour of Code 2020
. It was featured on
hourofcode.com
for the
Computer Science Education Week 2020
.
The
CodeCapture Mobile App
was featured in the
Student Showcase at .NET Conf 2020
.
The
CodeCapture Web App
was selected as the
National Winner
in the
Education category
at the
Microsoft Imagine Cup 2021 India Championship
The
CodeCapture Web App
won the
Popular Choice Prize
at the
Microsoft Azure AI Hackathon
What's next for CodeCapture
We have a lot of plans to scale CodeCapture to an even greater extent. Using this amazing product as our foundation, we intend to build an all-inclusive learning platform. With facilities such as virtual classrooms, curated learning content, instructors' upskilling, and learners' community, CodeCapture shall offer an unparalleled online mobile-first classroom experience.
We aim to empower youth to be future ready by making coding education accessible to all
Built With
azure
computer-vision
csharp
mysql
python
serverless
twilio
xamarin
Try it out
codecapture.study
github.com
github.com | CodeCapture | An application that allows you to code without typing | ['Aditya Oberai', 'Simran Makhija', 'ekansh gupta'] | ['Popular choice'] | ['azure', 'computer-vision', 'csharp', 'mysql', 'python', 'serverless', 'twilio', 'xamarin'] | 12 |
10,414 | https://devpost.com/software/comsafe-ng2m3x | Inspiration
ComSafe originated amid the Covid-19 pandemic, as we rarely interacted with our neighbors. We had little awareness of what was happening around us: one of our neighbors moved out with minimal notice, and there were plenty of other things we never learned because we no longer had occasions to meet. I also took a little inspiration from Ring, with its neighborhood maps and icons. I liked the Ring app, but I thought it was lacking something, so I made ComSafe, which isn't just about security.
What it does
As the Covid-19 pandemic continues, neighbors all around the world are staying isolated and barely interacting. ComSafe came about to address this: it helps the neighborhood by connecting people to one another and keeping the community safe and reliable.
The idea behind ComSafe is that it's "the neighborhood app": you can stay up to date on your neighborhood even if you are busy or just can't talk. ComSafe is also a great app for safety and for building a better community. Let's talk about safety. In the app, you can post things or pin the map to inform your neighbors of any plans, and you can also ask for help or plan an event. Using the pin-map feature you can easily start a birthday party or ask for a shovel. You can also ask for assistance using the emergency or alert posts. As of now, there are neighborhood IDs so that a random person can't join. To get a neighborhood at the moment, you'll have to email the support team with your address, and they will create one for you and give you an ID.
How I built it
We used Bootstrap templates, basic HTML and CSS, and some JavaScript for Firebase to build ComSafe as it is right now.
Challenges I ran into
A challenge we ran into was when the account icon was too small and we couldn't make it bigger.
Integrating a fully functioning map and feed to allow for communication between people.
Accomplishments that I'm proud of
We're proud of taking the website and adding multiple customized pages, including the account page with interactive maps and accounts page.
Making the scale of everything flow well with the numerous frameworks coexisting
What I learned
We learned how to code some HTML that we didn't know already.
We learned how to manipulate templates that we found online, and that trying to use more than one template is a pain. We are all fairly new to HTML and CSS, so we had to consult the docs a lot and use tutorial videos. One of our members didn't even know HTML or CSS, but he knew Java.
What's next for ComSafe
We plan on making a working login with passwords, and we want it so that when you click on a map marker, it shows you who wrote the message.
Expand on what it means to be a community and do research on future applications in all fields.
NOTES
Since we haven’t set up sign ups yet, you will have to use the demo user:
Username:
infant.elfrick@gmail.com
Password: password
Built With
bootstrap
css
firebase
gcp
html
javascript
maps
material
Try it out
comsafe.netlify.app | ComSafe | The community app | ['Shooter913b Gnanasusairaj', 'Tanish21', 'MiceyC0Der Girish'] | [] | ['bootstrap', 'css', 'firebase', 'gcp', 'html', 'javascript', 'maps', 'material'] | 13 |
10,414 | https://devpost.com/software/hackorganized | Team Building page, you can add team members by email
Project Details
Current Status
Home Page
Reference links
Inspiration
The inspiration for this challenge was to deploy an app to assist hackathon participants, especially during COVID. With HackOrganized, hackers have one place to plan and organize their projects.
What it does
Through the tabs Home, TeamBuilder, BrainStorm, Project Details, Assignments, Current Work, and Resources, hackers are able to plan and organize their projects.
How we built it
We created a web app using React in Visual Studio Code, and through Firebase we gave authentication to our users. We also created a chat room, backed by a realtime database, for hacker teams to collaborate in.
Challenges we ran into
Melika: As a freshman, I lacked experience with React. Moreover, due to the current COVID situation, I had to work on a Mac for the first time. Figuring out git commands and uploading to GitHub was a struggle.
Accomplishments that we're proud of
We're proud of having a fully connected firebase backend that shows user-IDs and attributes. Moreover, being able to put users in a virtual chat room and sync pages across the team member users was very exciting.
What we learned
We learned how to set up a backend using Firebase and store data from user input.
What's next for HackOrganized
The next step for HackOrganized is to create an account page where participants can have profile pictures. We could also add more complementary tips and assistance pages for hackathons.
Built With
firebase
html
javascript
react
Try it out
github.com | HackOrganized | Helps Hackers Organize Their Project Simultaneously Across Web | ['Melika Nassizadeh', 'Syed Rizvi'] | [] | ['firebase', 'html', 'javascript', 'react'] | 14 |
10,414 | https://devpost.com/software/computer-science-trivia-sht1oq | Inspiration
I was on a flight and the plane had a trivia game app on the little tvs so the passengers can learn about their destination.
What it does
It is supposed to give users an idea of how much they know and don't know. They take a quiz and can see how much knowledge they have of certain CS topics.
How I built it
Using HTML, CSS, and JavaScript.
Challenges I ran into
I was traveling so I only had a couple hours to actually try to build the app.
Accomplishments that I'm proud of
That I have at least one quiz working.
What I learned
What's next for Computer Science Trivia
To add more quizzes and build the homepage.
Built With
css3
html5
javascript
Try it out
github.com | Computer Science Trivia | The idea of this app is to make a quiz app based on computer science knowledge. The app is to have multiple quizzes of different levels and different concepts of CS for beginners and experts. | ['Laura Lara'] | [] | ['css3', 'html5', 'javascript'] | 15 |
10,414 | https://devpost.com/software/mentii | LOGO
PAGE CONCEPTS
We wanted to solve a problem that has always existed but was amplified by the coronavirus pandemic. When we thought about what we missed out on this summer and are afraid of missing out on this fall, it was networking. Networking is so important, but somehow a Zoom call doesn't cut it. We also realized that this problem is even worse for students from low socio-economic regions who don't have the same family connections as wealthier students. We developed Mentii to solve this issue of legacy networking.
Mentii allows students to reach out and join mentor streams from the safety of their homes. As an easy way for those who want to make an impact, Mentii lets mentors set up a stream in seconds and share their most valuable asset with those who need it: knowledge. It is a relaxed environment for both parties, which we hope will encourage more candid conversations and build stronger relationships.
We used a Flask API as the backend and served a React front end created using npm and nvm. We used the Twitch API to integrate streaming into the platform as well as for the login feature. We used SQLAlchemy to create a database and store user information. We also used a multitude of front-end libraries such as Material-UI and styled-components to make the site look nice.
We initially struggled a lot with the logistics of the programs, as we didn't have the necessary tools downloaded in our working environments. In fact, one laptop upgraded from WSL to WSL2, which caused huge problems with create-react-app. Our environment gave us so many problems that it wasn't until 12am that we successfully tied our front end and back end together. We also had some problems with the Twitch API, as only one of us had experience with it, especially running on no sleep and only coffee.
We're proud of how we persevered throughout the night. We had a lot of setbacks and misfortunes. One of our members even went out and bought a new laptop because his broke. However, even though we had a 6 hour gap where our tools were all breaking and we couldn't actually code, we persisted through it. With 2 first time hackers getting a first taste of hackathons, we definitely had a great time.
Each of us added to our knowledge in different areas. We all learned more about web app development, but some of us, more experienced in back end, learned a lot about front end development, while others had their first foray into Flask and its particular set of challenges.
In the future, we want to allow users to add their linkedin and more information to make it even easier to network on our app.
Built With
css
flask
html
node.js
npm
nvm
python
react
sqlalchemy
twitch
Try it out
github.com | Mentii | Mentor Streaming to Help Networking during Covid | ['Daniel Zheng', 'lijames106 Li', 'duttar18 Dutta'] | [] | ['css', 'flask', 'html', 'node.js', 'npm', 'nvm', 'python', 'react', 'sqlalchemy', 'twitch'] | 16 |
10,414 | https://devpost.com/software/know-yourself-better | email that you recieve
Inspiration
As small kids, we always wondered which cartoon character we resembled the most.
So, we made The Sorting Hat.
A person's personality has a direct relation with what they like and what they will choose. So we designed The Sorting Hat, which tells you what your loved ones want on their birthdays by analysing their personality and interests.
What it does
You send the link to your friend or family member, and they answer a few basic questions like name and age, plus some personality-based questions. According to their answers, a machine learning model already trained on the back end predicts what to gift your friend; based on their personality, the name of a cartoon character is also predicted and shown on their screen, while you receive an email with the predicted gift. Well, here is the big secret of our sorting hat: the person who fills in their details will never know that you are receiving an email with a gift suggestion based on their personality. For them it is just a personality-based cartoon-character quiz, and you can surprise them with their favorite gift on their birthday.
http://3.19.29.42/radha.html
How we built it
We first collected a dataset of people of different ages with different interests and personalities. Then we trained a machine learning model on that data to predict what gift they want.
We first used hierarchical clustering to group the data, and then we applied the ExtraTreeClassifier algorithm. Using that model, we predicted the gifts.
We created an AWS EC2 instance and hosted an HTML page on it with an Apache2 server, which takes inputs from the user and predicts gifts using our ML model.
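A hypothetical, scaled-down version of this pipeline using scikit-learn (the write-up says "ExtraTreeClassifier"; this sketch uses the ensemble variant, ExtraTreesClassifier). The survey features and gift labels are invented for illustration, not the project's real data:

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.ensemble import ExtraTreesClassifier

# toy survey rows: [age, outdoorsy score, bookish score]
X = [[12, 0.9, 0.1], [14, 0.8, 0.2], [30, 0.1, 0.9],
     [35, 0.2, 0.8], [60, 0.5, 0.5], [65, 0.4, 0.6]]
gifts = ["football", "football", "novel", "novel", "puzzle", "puzzle"]

# step 1: hierarchical clustering to see how respondents group together
clusters = AgglomerativeClustering(n_clusters=3).fit_predict(X)

# step 2: the classifier that actually predicts a gift for a new person
model = ExtraTreesClassifier(n_estimators=50, random_state=0).fit(X, gifts)
print(model.predict([[13, 0.85, 0.15]])[0])
```

The clustering step is exploratory here; only the fitted classifier is needed to serve predictions from the web page.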
What we learned
Learnt how to use the Lasso-Lars algorithm
Learned how to choose the best-fit algorithm for our model
Making a virtual machine using AWS EC2 services
Making an HTML page
Built With
amazon-web-services
machine-learning
php
python | The Sorting Hat | Know Yourself Better | ['Anupama Koley', 'Gautam Uppal'] | [] | ['amazon-web-services', 'machine-learning', 'php', 'python'] | 17 |
10,414 | https://devpost.com/software/planet-rc92e1 | Home screen
Todo module
Budget module
Planet
What is Planet?
As programmers and as students, we have a lot on our plate.
Keeping track of assignments can be overwhelming. Planning events can seem daunting.
Our app, Planet, saves you time and unwanted stress that comes with planning events. Whether it’s bowling night with the coworkers or an extravagant wedding, Planet helps you organize events with its intuitive interface and minimalistic design.
Planet has features such as a “To Do” list, a “Budget”, a “Sponsor” list, and a “Guest List” that reduce your party planning stress.
Within the To Do module, the user can add tasks due at a specific date and set at a certain priority. Each task is placed under a specific category that the user creates. The tasks can be sorted in many different ways, such as by Completion, Category, Due Date, Priority, and Alphabetical Order.
Here, the user can plan their budget for their event.
They can record expenses, along with each expense’s Projected cost and Actual cost. Like the tasks in the To Do module, the expenses in the Budget Module are sorted into categories that the user creates. At the bottom of the module, the user can see the projected and actual sum of their expenses. At the top of the module, there is a bar graph where the user can see how much of their budget is spent on specific categories.
In the Sponsor module, users can keep track of any sponsors that they have for the event, along with the amounts that these sponsors are contributing. The user can also keep track of the status of the sponsor and the status of the sum of money that the sponsor is contributing.
Using the Guest List module, the user can compile a list of their invitees. They can record whether their guests have been invited or not and whether their guests RSVP’d or not.
Although these are the only modules we have on display now, the Planet team is working very hard to find new ways to help anyone become a party planner!
Our modular app is a one stop shop for all of your event planning needs!
Planet, so you can Party!
Inspiration
We are planning a hackathon at our own high school for next year. When we first started, we did not know where to begin or how to organize our planning.
What it does
Planet helps you organize events, such as a hackathon.
How we built it
Planet is written solely in Java and uses AWT elements.
Challenges we ran into
One of our biggest obstacles was formatting the user's input for a date. Our first idea was to implement a GUI calendar, but we realized that the feature used Swing, which was not compatible with our code. We ended up having the user input three separate strings for day, month, and year, which our program converts into calendar dates. We also did not finish some of the features in the Budget, Sponsor, and GuestList modules, so time management is something the Planet team can work on in the future.
Accomplishments that we're proud of
Even though none of us have a background in app design and development, we have managed to piece together a running program that somewhat resembles a desktop app. This was our second time dealing with JPanels and clicking items on the screen, and we're glad we pulled it off :)
What we learned
We learned about the existence of the getCurrentTime() method and the Calendar object in Java. We also learned a lot about formatting dates and money, subclasses, interfaces, and dealing with AWT.
What's next for Planet
We hope to add more features to the Budget, Sponsor, and GuestList modules, and maybe allow collaboration between users in future iterations. We also hope to add more module types and customization.
Built With
java
Try it out
github.com | Planet | Planet, so you can party! | ['Melody Vu', 'Vinhan Truong'] | [] | ['java'] | 18 |
10,414 | https://devpost.com/software/generocity | Inspiration
We wanted to create something that would encourage individuals in our society to show compassion and understanding towards others, especially during the time of the Covid-19 pandemic. We believe that an act of kindness can positively impact your community and help build a happier society. This inspired us to create Generocity, a website that vitalises the importance of being kind to others.
What it does
On our website, users can post their contributions to the community and get recognised for their efforts by earning badges. We use sentiment analysis as a means to evaluate their tasks, providing a score as an indication of how positive a task is. Users also get awarded badges for completing a range of tasks which also includes contributing towards UN sustainable goals. Furthermore, users can share their achievements on social media or use their profile as a portfolio for applying to jobs and volunteering opportunities.
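As an illustration of the scoring idea, Google's sentiment analysis returns a score in [-1.0, 1.0], and a mapping like the following could turn that score into badge tiers. The thresholds and tier names here are invented for the example, not Generocity's actual rules:

```python
def badge_for(sentiment_score: float) -> str:
    """Map a sentiment score in [-1.0, 1.0] to a (hypothetical) badge tier."""
    if sentiment_score >= 0.75:
        return "gold"
    if sentiment_score >= 0.5:
        return "silver"
    if sentiment_score >= 0.25:
        return "bronze"
    return "none"

print(badge_for(0.9))
```

In the real site the score would come from the Natural Language API response rather than being passed in directly.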
How we built it
Generocity is a Flask application that uses Google's Firebase realtime database to store user information. We also used Firebase authentication to handle passwords and user login, made possible by Pyrebase, a Python wrapper for Firebase commands in Flask. Moreover, the website makes use of Google's Natural Language API for sentiment analysis, and the Google Maps JavaScript API along with the Geocoding API for its map functionality. The frontend was designed using HTML, CSS, and the Bootstrap framework.
Challenges we ran into
We spent a majority of the beginning of the hackathon struggling with import errors; due to issues with pip, even after hours of research, one of our members could not use our code and had to isolate her code and work on it separately.
We also had various issues with Git such as work being overwritten, as we were pushing commits frequently due to the short time period.
Moreover, we encountered other minor problems such as passing objects into JavaScript and calling Python functions from within a template. After much research, we were able to implement the desired features.
Accomplishments that we're proud of
We’re proud of implementing the majority of our ideas and working with APIs to create a functional website within 24 hours. We also feel accomplished, as we learnt how to use a realtime NoSQL database despite lacking much experience with databases. Finally, we are happy to take home a project to work on and further develop during the summer break.
What we learned
Through this project, we learnt how to use various new technologies such as Google’s various APIs and Firebase. We gained experience with using Git, and resolving various Git issues. We also learnt how to split our team effectively to get as much work done in the 24 hours, and solve problems collaboratively.
What's next for Generocity
One of the tasks that we were unable to complete in the short time frame was showing the user’s task history, so completing this would be a priority.
Currently, users can only earn badges and points for entering tasks; however, to incentivise users, we aim to partner with companies and organisations to offer vouchers or other incentives.
Moreover, we aim to also offer the ability for users to donate to various charities through our website, and for this to be automatically added as a task.
Finally, we plan on adding a reporting system and training a custom Natural Language model (using AutoML Natural Language) to identify what a false positive and false negative project description might look like.
Built With
bootstrap
css
firebase
flask
geocoding-api
google-maps
html5
javascript
natural-language-api
pyrebase | Generocity | Promoting generous giving in our cities. | ['Manya Girdhar', 'Ayesha Akhtar', 'Ziyang-yyu yu', 'Drishika Girdhar'] | [] | ['bootstrap', 'css', 'firebase', 'flask', 'geocoding-api', 'google-maps', 'html5', 'javascript', 'natural-language-api', 'pyrebase'] | 19 |
10,414 | https://devpost.com/software/pyevolution-simulator | Inspiration
Evolution and reproduction are interesting topics that have always fascinated us.
Understanding evolution and how it works can be difficult to visualize and understand. That is why we decided to create a simulator.
What it does
The project simulates an environment in which creatures move around and collect food in order to survive and reproduce, passing on their existing genes through our allele and genome system based around binary bits of numeric statistics for each creature.
How I built it
It was built using the curses library to display the environment, plus a gene system built on the binary bits of numeric statistics, using 0 as a recessive allele and 1 as a dominant allele, to implement mutation and sexual reproduction.
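The bit-based allele scheme can be illustrated with a small sketch. This is our own simplification, not the project's code: each stat is a bit string, a child takes each bit from a random parent with a small mutation chance, and (as one possible reading of the dominance rule) the expressed stat counts the dominant 1-bits.

```python
import random

random.seed(4)  # deterministic demo

def inherit(mother: str, father: str) -> str:
    """Child takes each bit from a random parent, with a rare mutation flip."""
    child = []
    for m, f in zip(mother, father):
        bit = random.choice((m, f))
        if random.random() < 0.01:          # mutation: flip the allele
            bit = "1" if bit == "0" else "0"
        child.append(bit)
    return "".join(child)

def expressed(genome: str) -> int:
    """Assumed rule: each dominant (1) allele adds one point to the stat."""
    return genome.count("1")

child = inherit("1100", "1010")
print(child, expressed(child))
```

The real simulator would apply this per statistic (HP, food-finding, and so on) each time two surviving creatures reproduce.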
Challenges I ran into
We ran into some challenges involving the logic of the genetics as well as implementing the Curses library for display, however we were able to work out how to fix our problems.
Accomplishments that I'm proud of
Creating the terminal GUI via the Curses library and implementing the genetic systems are things that are relatively new for us and we're proud that we were able to accomplish it.
What I learned
We learnt quite a lot about how biology and real-life genes work in the process, as well as how to use the Python curses library.
What's next for PyEvolution Simulator
Adding more statistics like speed, and possibly aggression and intelligence so as to maximise the realism and simulatory potential.
Built With
curses
python
Try it out
github.com | PyEvolution Simulator | A simulation device for realistic evolution built using the Python language. | ['Day91 B', 'Myst rite', 'Willwam845', 'clubby789'] | [] | ['curses', 'python'] | 20 |
10,414 | https://devpost.com/software/resource-manager | Inspiration
With new hackers joining in every week and trying their hands at all the technologies available, I can't help but think of the first time I started my own projects, especially starting out with hardware for the first time. The variety of options available and the number of possible glitches can be daunting. So I decided to create a simple hack with a Raspberry Pi that updates Google Sheets using its APIs.
What it does
It has two simple hacks. One is a data logger, a simple script that logs any kind of data into a Google Sheet. The second is a Bluetooth connector, which connects to any nearby Bluetooth device over a low-energy connection and helps exchange data.
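A minimal sketch of what the Sheets-logging script could look like, assuming the gspread Python library and a Google service-account credential are configured on the Pi; the sheet name and the reading are placeholders, not the project's actual code:

```python
from datetime import datetime, timezone

def build_row(reading):
    """Prefix every logged value with a UTC timestamp."""
    return [datetime.now(timezone.utc).isoformat(), reading]

def log_to_sheet(reading, sheet_name="pi-log"):
    # import here so the row-building part works without gspread installed
    import gspread                        # pip install gspread
    client = gspread.service_account()    # reads the service-account JSON
    sheet = client.open(sheet_name).sheet1
    sheet.append_row(build_row(reading))

print(build_row(21.5))
```

On the Pi, `log_to_sheet` would be called from whatever loop reads the sensor or other data source.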
What's next for Data Logger
I am planning to add more ways of exchanging data between IoT devices, along with detailed instructions in the readme file, so that it can become a complete library for connecting different devices. I am also planning to add some open-source RTOS frameworks like Zephyr so that people can explore more options and appreciate the extent of the possibilities.
Built With
google-spreadsheets
raspberry-pi
Try it out
github.com | Data Logger | Looking through and for data! | ['Ritvi Mishra'] | [] | ['google-spreadsheets', 'raspberry-pi'] | 21 |
10,414 | https://devpost.com/software/signacademy | Inspiration
We wanted to develop a tool that would enable people to learn ASL without needing to physically interact with a teacher. While the basic ASL concepts are clear and simple, people often display the characters in a slightly different manner, which our AI-powered assistant can easily detect. SignAcademy enables its users to practice displaying ASL characters and helps further develop accessibility and worldwide communication.
What it does
It uses a TensorFlow powered model that recognizes ASL characters only when those are correctly displayed by the user. In the future, we plan to add multiple features like gesture recognizing and personalized feedback so that our users will get customized advice on how to improve their language skills. Moreover, we also provide access to some educational resources and hope to foster an interest in learning sign languages.
How we built it
We pre-trained and built the TensorFlow model using Google's Teachable Machine project: we manually exposed the model to ~300 instances of each of the 26 character-based classes and tried to make the model work efficiently in terms of response time and quality.
Challenges we ran into
One of the hardest things was figuring out the variations of the characters in ASL and the ways to display them. While we had to practically teach ourselves the basics of the language, we tried to apply our technical skills to create an impactful tool that fosters learning and communication.
Accomplishments that we're proud of
We're proud of building something that is accessible for everyone and serves a greater purpose!
What we learned
We learned that communication is very important and that training models takes a lot of time and effort :)
What's next for SignAcademy
In the future, we plan to develop gesture recognition, improve our character recognition model, develop other internal educational resources, and partner up with other companies to provide them with access to our machine. We think that collaboration is the key to making things work on a global scale!
Built With
css
html5
javascript
netlify
p5.js
tensorflow
Try it out
signacademy.space
github.com | SignAcademy | An AI-powered assistance for learning ASL | ['Oleh Shostak', 'Anthony Franco'] | [] | ['css', 'html5', 'javascript', 'netlify', 'p5.js', 'tensorflow'] | 22 |
10,414 | https://devpost.com/software/major-league-majors | Inspiration
We were inspired by Girls Who Code.
What it does
Our project explains which programming language students in different majors would benefit most from learning. This directly aligns with our mission as officers of Girls Who Code (UF Chapter), as we aim to help bring more women into the field of computer science and help do our part in closing the gender gap.
How we built it
We used Angular as our framework, and our code was written in a combination of HTML, TypeScript, and CSS.
Challenges we ran into
We've never competed in a hackathon before, and all of us are pretty new to coding.
Accomplishments that we're proud of
We're proud that the project directly aligns with our values, and when all is said and done, we will most definitely be using this simple tool to help students at UF.
What we learned
We learned that broad prompts may be the ABSOLUTELY HARDEST to come up with a project for, and we also learned more about coding and teamwork.
What's next for Major League Majors
We would like to host a Hackathon at UF that is MLH certified, as well as compete in more Hackathons and improve.
Built With
angular.js
css
html
typescript
Try it out
github.com
majorleaguemajors.tech | Major League Majors | Want to know which programming language would benefit students in your major the most? Simply select your major from the dropdown menu and see what coding languages you should learn and why. | ['jessica bogart', 'Reeya Gupta'] | [] | ['angular.js', 'css', 'html', 'typescript'] | 23 |
10,414 | https://devpost.com/software/hackercamp | Landing Page
Some Tips and tricks
Resources
Resources Description and Link
Activities
Gallery Wall
Inspiration
Usually, new hackers join a hackathon unaware of what to build and how to proceed. So I've built a website to overcome all those problems!
What it does
It has all the necessary information and resources a new hacker needs, as well as some references!
How I built it
I've built it using HTML, CSS, SCSS, and the Bootstrap framework. I've also used Font Awesome icons.
Challenges I ran into
One of the challenges I faced was keeping the design minimal and to the point, so that as much as possible could be grasped at a glance while still looking artistic.
Accomplishments that I'm proud of
I've embedded a YouTube video for the first time and made it responsive as well! I also integrated Walls.io for the gallery wall.
What I learned
I learnt a lot about the framework, and along the way picked up time-management and UI design knowledge.
What's next for HackerCamp
To make it more interactive and connect/integrate it with other platforms!
Built With
bootstrap
css
css3
html
html5
javascript
scss
Try it out
arps18.github.io
github.com | HackerCamp | One stop Website for all the new Hackers! | ['Arpan Patel'] | [] | ['bootstrap', 'css', 'css3', 'html', 'html5', 'javascript', 'scss'] | 24 |
10,414 | https://devpost.com/software/ninjutsu-simulator | Inspiration
As anime fans, we all enjoyed the classics. Of course, one of the main ones being Naruto. It was always fun imagining what we can do in that world. So we thought, why not bring that world to us?
What it does
The idea was to have the app detect what sort of symbol the user was making with their hands. Once a certain sequence of symbols was made, the appropriate animation would play (a fireball, summoning a toad, etc.).
How we built it
In order to capture the user's motion and display the animations, the plan was to use Unity and Google ARCore. We took an image every few frames and sent it to our server, where it was processed to check for hands. If any were found, that part of the image would be sent to the Google Vision AI model that we trained with our own images and labels, which would return the type of symbol the hands were making. From there, the plan was to put the symbols in order and display an animation if they matched a preset sequence.
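The last step of that plan, collecting recognized symbols in order and firing an animation when they match a preset sequence, can be sketched in plain Python (the project's server was Node.js, so this is only an illustration, and the symbol names and jutsu table are made up):

```python
# hypothetical mapping from hand-symbol sequences to animations
JUTSU = {
    ("tiger", "ram", "boar"): "fireball",
    ("boar", "dog", "bird"): "summon_toad",
}

def match_jutsu(symbols, jutsu=JUTSU):
    """Return the animation whose sequence ends the stream of recognized symbols."""
    for seq, animation in jutsu.items():
        if tuple(symbols[-len(seq):]) == seq:
            return animation
    return None  # no preset sequence matched yet

print(match_jutsu(["ram", "tiger", "ram", "boar"]))
```

Each time the Vision model returns a new symbol, the client would append it to the list and call the matcher, triggering the Unity animation on a hit.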
Challenges we ran into
The main issue was getting Unity to work. Problem after problem occurred while dealing with Unity, and eventually the person working on the Unity portion was unable to open the project at all! Because of this, the project could not be completely finished on the client side. In addition, determining which ML libraries to use, as well as decoding and encoding images from Google ARCore, proved to be quite a problem. Another issue was figuring out how OAuth worked; it was apparently needed for the Google Vision AI.
Accomplishments that we are proud of
The major accomplishment that we are proud of is getting the image recognition AI trained and actually able to detect images. In addition, learning to network from Unity was something new that we did as well.
What's next for Ninjutsu Simulator
Once we get Unity fixed, finishing the animations would be great.
Built With
c#
google-arcore
google-vision-ai
handtrack.js
node.js
unity
Try it out
github.com | Ninjutsu Simulator | Ever wanted to be the Hokage, the strongest ninja in the village? With this app, you can harness your chakra and use powerful ninjustu. Provided that you know the hand symbols of course. | ['Nachiket Ingle', 'Kevin Luo'] | [] | ['c#', 'google-arcore', 'google-vision-ai', 'handtrack.js', 'node.js', 'unity'] | 25 |
10,414 | https://devpost.com/software/covid-checker-eogx3l | Feedback after self-reporting
Geofence with time of creation displayed upon hovering mouse
Inspiration
The inspiration comes from those who are risking their lives in order to support individuals who are COVID positive. We're a team who wants to support workers on the front line, ranging from medical workers to grocery employees to food delivery drivers. As medical workers help patients recover, we would like to support them by pushing for local awareness and painting a clearer picture, which may differ from online statistics.
The visual data provided in certain popular COVID-related databases may be difficult to put into perspective based on scaling of graphs, so we are trying to show this data with respect to proximity.
What it does
COVID-Checker is a platform which offers users the ability to self-report their health with respect to the pandemic. Upon entering the website, users will be able to view nearby COVID cases, symbolized as geofences/circles. The circles suggest taking extra precaution when moving inside them. The Positive button, when pressed, will generate a geofence based on the user's location. The second button will delete the geofence if the user is confirmed to have recovered. If the user hasn't self-reported a recovery, then by default the geofence will be deleted 3 weeks after creation.
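The default three-week expiry rule can be sketched as below. The field names are illustrative stand-ins, not the app's actual Firebase schema.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the default three-week expiry rule; the field
# names are illustrative, not the app's actual Firebase schema.
EXPIRY = timedelta(weeks=3)

def active_geofences(geofences, now=None):
    """Keep only fences that were not recovered and are under 3 weeks old."""
    now = now or datetime.utcnow()
    return [g for g in geofences
            if not g.get("recovered") and now - g["created"] < EXPIRY]
```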
How I built it
React/Javascript/Html/CSS was used for front end. Mapbox was utilized to generate a map and create/delete circles.
Challenges I ran into
Working with the Mapbox API served as an initial obstacle, as working with any API for the first time does. Getting familiar with React and its capabilities was also a challenge.
Accomplishments that I'm proud of
Proud to have worked on a project for a good cause. Working with APIs is relatively new to me, so I'm proud of completing my first project based mostly on APIs.
What I learned
More in-depth knowledge of async, React capabilities, APIs, and the GET/PUT/DELETE/POST functionality.
What's next for COVID-Checker
Scale up to handle tens or hundreds of existing circles. Make the site easier to use so people face less friction in contributing to a greater cause.
Built With
css
firebase
javascript
mapbox
react
Try it out
github.com | COVID-Checker | A crowd sourced information gathering tool which displays COVID-19 cases throughout the world. | ['Sam Yeo', 'Nick Kim', 'Clifford Ng'] | [] | ['css', 'firebase', 'javascript', 'mapbox', 'react'] | 26 |
10,414 | https://devpost.com/software/e-kairos-qltz47 | Inspiration
E-Kairos is an AR/VR solution which helps to solve issues with both cultural institutions and education. Applicable to multiple fields, from site heritage to immersive educational experiences, it offers a unique way to revitalise cultural institutions and education in the digital era. The project began on the 25th of July and has been improving since. All content was created and designed by me within the weekend (25th and 26th).
Last year, on a trip abroad, my family and I planned to visit many monuments and landmarks, but due to unforeseen circumstances we were not able to visit them. Inaccessibility, whether from unforeseen circumstances or even renovations, can cause a huge loss of tourism, coming at a huge cost for these institutions.
The second issue I came across was the damage caused by tourism in certain cultural/historic institutions, which harms them both physically and financially. Whilst many solutions for virtual museums and tours are available, there is a market gap for immersive experiences which allow for user input (e.g. Q&A).
A solution which could provide an immersive experience and act as a hub for information is needed for cultural institutions to offer a more accessible service. For many, having the opportunity to visit these places can be amazing and can change their experience of education. Ultimately these cultural institutions can have a huge effect on education worldwide, yet despite the 3D solutions on the market, very few truly offer an immersive experience.
Today, we have seen two problems caused by COVID-19, which has negatively affected these institutions: without any visitors, they find it hard to make money and survive. Moreover, COVID-19 has impacted immersive education more than ever before, and many look to these cultural institutions for inspiration to help enhance their teaching. A solution to help students achieve immersive education at home is required.
What it does
E-Kairos is an AR/VR solution which helps to solve issues with both cultural institutions and education, and can be applied to multiple fields, from site heritage to immersive educational experiences, offering a unique way to revitalise cultural institutions and education in the digital era. The features are as below:
1) AI-Powered Chatbot Host for the Tour
2) 3D Visualisation
3) Q/A with the virtual host
How I built it
Chatbot
Chatbots were created using Amazon Lex, with around 2-3 sample questions for each intent to ensure the chatbot could understand them with a high success rate. I then added responses for each of the intents, providing answers to the different questions. Currently only a few are available, but more can be added later on.
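Lex performs utterance matching and fulfillment server-side; purely as a local illustration of the intent idea (the intent name, utterances, and answer below are invented), the mapping looks something like:

```python
# Toy illustration of intent matching only. Amazon Lex does this server-side
# with ML; the intent name, utterances, and answer here are invented.
INTENTS = {
    "temple_history": {
        "utterances": ["when was the temple built", "how old is the temple"],
        "response": "This temple dates to roughly the 5th century BC.",
    },
}

def answer(question, intents=INTENTS):
    """Return the canned response for a recognised question, else a fallback."""
    q = question.lower().strip("?!. ")
    for intent in intents.values():
        if q in intent["utterances"]:
            return intent["response"]
    return "Sorry, I don't know that one yet."
```

The real value of Lex over this toy version is that it generalises beyond the exact sample utterances, which is why adding more training questions improved recognition.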
Greek Temple
For the demo, I created the model using Blender, where I added a few materials and manipulated planes, cubes and spheres. This took around an hour to make! The pillars were made using cylinders and edge loops, extruded inwards.
Sumerian
I brought it all together using Amazon Sumerian, which allowed the Amazon Lex bot to be linked to a demo host model. In this case, I simply had to connect the dialogue from Lex through the host and create a simple flow between asking a question and the responses.
Challenges I ran into
Chatbot
: Parts of the Lex setup produced debug errors in the host's speech; to fix this I had to reorder the responses and trim certain parts, eventually solving the issue. Additionally, certain phrases were not being picked up, so I had to add more questions to train the model. With more data, the bot became more responsive and understood responses with higher fidelity. More can be added!
Importing the Blender model in Sumerian
: The imported model caused a huge number of errors, with textures not importing properly and incorrect lighting. As a result, I spent time altering the textures and lighting until the model matched the Blender original.
Controlling Movement
: Originally, for cross-platform use, I added the ability to tap to call the host. However, this made it impossible for a mobile user to move around the model without triggering the chatbot! To fix this I changed the input.
Accomplishments that I'm proud of
First AWS VR/AR Project:
This was my first AWS Sumerian project, and I am still quite new to AWS. Overall, I learned a range of new skills within this hackathon period! Normally my speciality is hardware (e.g. Arduino projects), but I wanted to move out of my comfort zone and use new skills!
Blender Project:
Normally, I model using software such as Fusion or SolidWorks, for projects such as robotics and hardware builds! However, this project led me to use Blender, due to its versatility! Hopefully I can improve these newfound skills in the future!
Educational Understanding:
Within this hackathon, I learnt about educational immersion and the effects of this form of learning as opposed to reading textbooks and simple images. When I tested it out, the impact the immersion made on usability and retention was clear!
What I learned
AWS Lex:
Through trial and error, I was able to learn how to develop an AWS Lex chatbot. This taught me not only about Lex, but also how it works under the hood and why it sometimes fails! It introduced me to the basics of machine learning, which I would like to explore in future hackathons.
AWS Sumerian:
This was my first project using AWS Sumerian, and the first time I used any software similar to Sumerian. Through this I learnt a lot about lighting and animations, which will be really useful for future AWS projects and for learning software such as Unity (on my to-do list!).
Education Communication:
Communication methods for effective teaching and interactions to boost the impact on learning. First type of project, which looked at education and immersive experiences.
Time Management:
I only had one to two days (even less really) for this submission, and to ensure I made the deadline I practically applied time management skills for an independent project.
Blender:
How to use Blender for designing and rendering visuals (i.e. the Greek temple) and how to export them for use in external software (i.e. Sumerian). I learnt a lot about its features as well as lighting!
What's next for E-Kairos
Further data to streamline the AI chatbot, features involving Lambda, external APIs to collect learning info, and integrating tests for students!
Possible application using Unity for increased immersion, possibly using different chatbot software (even perhaps making my own from scratch) !
Built With
amazon-web-services
echo | E-Kairos | Immersive Chatbot and 3D Visualisation for Education and Cultural Institute Preservation | [] | ['Education Track', 'Open Water Accelerator Internship'] | ['amazon-web-services', 'echo'] | 27 |
10,414 | https://devpost.com/software/6ixify-03sypl | 6ixified version of "Never Gonna Give You Up"
Before 6ixify: a regular conversation between two friends
After 6ixify: a regular conversation between two Toronto mandems
Inspiration
In light of the current pandemic, there have been many social and physical distancing measures put in place across the world. Because of this, many people have relied on communication via messaging apps such as Facebook Messenger, texting, etc. Our team wanted to build a lighthearted hack that would be entertaining to its users. Since we've always been baffled by the slang used in Toronto (also known as the 6ix), we decided to take advantage of the vast vocabulary of Toronto slang and build 6ixify.
What it does
6ixify is able to read text within Google Chrome, and “translate” it into Toronto Slang by replacing key words with their Toronto equivalents. It works especially well on Facebook Messenger as users tend to have more casual conversations, leading to funnier translations. It also works on news articles, Google search results, Stack Overflow posts, etc.
How we built it
We built the Chrome extension using various web development technologies, primarily JavaScript (but also HTML), in order to interact with the elements on webpages. Using a Python script, we generated a JSON object that linked common English words to their respective Toronto slang. We then used JavaScript to parse the plain English text, convert English words to their Toronto equivalents, and replace the old text in the webpage.
The entirety of the project was built during the hackathon.
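The word-replacement idea can be sketched in a few lines. The real extension does this in JavaScript in its content script; this is an illustrative Python version, and the slang entries are examples, not the project's actual mapping.

```python
import json
import re

# Tiny illustration of the word-for-word replacement. The real mapping was
# a much larger generated JSON object; these slang entries are examples only.
SLANG = json.loads('{"friend": "fam", "party": "function", "really": "styll"}')

def sixify(text, mapping=SLANG):
    """Replace whole words with their Toronto equivalents.

    Matching is case-insensitive; replacements come out lowercase.
    """
    def swap(match):
        word = match.group(0)
        return mapping.get(word.lower(), word)
    return re.sub(r"[A-Za-z']+", swap, text)
```

Using a regex word-boundary substitution (rather than naive string replace) avoids mangling words that merely contain a mapped word as a substring.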
Challenges we ran into
This was our first time developing a Chrome extension, and as a result, we needed to navigate some of the unique aspects that come with it. For example, two challenges we ran into included understanding the limitations of JavaScript in the context of a Chrome extension and realizing that certain DOM elements could/should not be tinkered with.
Another challenge we faced was determining the best method to implement the translation aspects of the extension. We needed to find a solution that was feasible to develop in a day while still being functional and fast. Despite our initial debate over whether to translate specific highlighted text or the entire webpage, we managed to create a solution that quickly and accurately translates text on the entire page, which also best suited our vision for the extension.
Accomplishments that we're proud of
We made it work! Chrome extensions are pretty versatile (once you solve the problems that come along with them), thus building our first extension was really rewarding. This hackathon was our first time coding together as a team so we're proud of finishing it.
Moreover, after testing it out ourselves and sharing it with a couple of our friends, we realized that 6ixify brought a lot of entertainment value - something that's important during the pandemic! For people who don’t know the mapping of the words, each time they send a message is an opportunity for a possible Toronto slang translation. We’re proud of making a fun Chrome extension that has the potential to make any website into something entertaining.
What we learned
There are many differences from NodeJS or React when developing a Chrome extension in plain JavaScript, namely the complications involved in reading a local file or calling an API. However, after building 6ixify, we learned that both of these are very doable; they just involve a couple of extra steps.
Aside from all this, one of our big takeaways is our increased knowledge of Toronto slang, which we learned from researching the phrases and testing out 6ixify.
What's next for 6ixify
First and foremost, we want to expand the translator’s vocabulary using natural language processing! By going through Tweets, for example, we can gather more interesting and accurate examples of Toronto speak. Increasing the amount of words and phrases 6ixify is able to detect will allow it to be more accurate and versatile. As well, we would like to work on implementing better grammar so that verb tenses can be maintained.
Next, we would like to develop an "autocorrect" function that allows the user to convert their text to Toronto slang prior to sending a message on various messaging apps. Currently, the extension only translates messages that have already been sent; expanding the scope to cover unsent messages will allow for an improved user experience (and greater entertainment value). We will need to consider privacy issues with this, though.
Lastly, we hope to implement different types of slang moving forward - while our initial focus is on the Toronto area, we hope to expand the extension such that it is able to translate text into slang from all around the world.
Built With
html
javascript
python
Try it out
github.com | 6ixify | Slang translator: from English to Toronto slang. Bringing people together through language and entertainment during the pandemic. | ['Grace Huang', 'Oustan Ding', 'Jonathan Cui', 'Varun Mokashi'] | ['3rd Place'] | ['html', 'javascript', 'python'] | 28 |
10,414 | https://devpost.com/software/spotify-filter | Inspiration
We all have songs we love. Probably you have thousands of music tracks you "liked" on streaming services like Spotify, but it is not trivial to pick just the right song at the right moment. You need songs without lyrics when you have to focus, and high energy songs when you work out. Spotify provides great metadata for its catalog, so I thought I should be able to implement a nice & easy interface to filter private playlists.
What it does
This web application allows users to filter their Spotify playlists by each track's metadata, like tempo or loudness.
How I built it
This is a single page web application built with React and Redux.
Challenges I ran into
I could not find any JavaScript wrapper for the Spotify API that provides a rate-limiting feature. When a user tries to load a long playlist (>200 songs), Spotify responds with an error code. It would be possible to implement rate limiting myself, but I had no time to finish it.
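The missing piece would look roughly like this, shown in Python for brevity (a real wrapper for this app would be JavaScript): on a 429 response, wait for the number of seconds Spotify's `Retry-After` header requests, then retry. The `fetch` argument here is a stand-in for any HTTP call, not a real Spotify client.

```python
import time

# Sketch of the missing rate-limiting idea: on a 429, wait for the period
# the Retry-After header asks for, then retry. `fetch` is a stand-in that
# returns (status, headers, body) — not a real Spotify client.
def request_with_backoff(fetch, url, max_retries=5):
    for _ in range(max_retries):
        status, headers, body = fetch(url)
        if status != 429:
            return body
        time.sleep(int(headers.get("Retry-After", 1)))
    raise RuntimeError("gave up after repeated 429 responses")
```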
Accomplishments that I'm proud of
"Hacky Birthday MLH" was a 24-hour hackathon, and I am glad that I could submit something before the deadline!
What I learned
I learned a lot about the "OAuth Authorization Code Flow with PKCE" during this project. This particular auth flow is useful for client-side authorization without a client secret.
What's next for Spotify Filter
Potential improvements:
More metrics & metadata to filter tracks with.
Allow users to filter public playlists to suit their needs.
Better styling with CSS.
Built With
react
redux
spotify | Spotify Filter | Filter Your Spotify Playlists by Metadata | ['Donghyeon Kim'] | [] | ['react', 'redux', 'spotify'] | 29 |
10,414 | https://devpost.com/software/coradicator | GIF
GIF
The 3D layout of the bot.
Our Apps
Our Bot
Coradicator
A smart robot built with UV-C lights to eradicate the coronavirus lying around.
Introduction
Coradicator is a room disinfection device based on Ultraviolet-C radiation. It offers the capacity to be remotely programmed using an Android mobile device and it has an infrared detection security system that turns off the system when triggered.
The system described here is easily scalable to generate higher ultraviolet dosages by adding more UV-C lamps. The device is highly effective at eliminating high bacterial inocula.
The sanitizing method employed by this device affects a very wide range of microorganisms and has several advantages with respect to chemical-based sanitizing methods.
This device represents a secure, fast, and automatized equipment for room disinfecting. The device is configured in less than three minutes and it does not require continuous monitoring.
Theory
In recent years, mobile systems based on UV-C radiation have been used for cleaning and disinfecting hospitals. The contribution of this equipment to the conditioning of hospital areas makes these systems useful for other kinds of spaces that require periodic disinfection.
UV-C radiation inactivates microorganisms causing DNA damage by producing cyclobutane pyrimidine dimers (CPDs), altering DNA structure, and thus interfering with DNA replication.
According to the World Health Organization Global Solar UV Index, the UV region covers the wavelength range from 100 to 400 nm and is divided into three bands:
UV-A (315–400 nm)
UV-B (280–315 nm)
UV-C (100–280 nm)
UV-C light, which is absorbed by the atmosphere, represents the most lethal wavelength for a wide spectrum of microorganisms. The maximum germicidal power of the ultraviolet radiation is at wavelengths near 260 nm and it drops dramatically below 230 or above 300 nm.
Hardware Description
We have constructed a UV-C radiator device that includes a microcontroller board, an Arduino UNO board.
For controlling the robot and the UV lights we are using the Bluetooth module HC06.
For live camera feedback to the controller, we are using the ESP32 Camera Board to get the live video feedback so that we can control it outside the room.
In addition, the equipment can be operated from a wide range of Android mobile devices with suitable screens and processing capacity (tablet, cell phone, etc), taking advantage of the ubiquity of these devices, and lowering the cost of its construction.
Build of the UV-Bot
The construction of the device involved three stages:
Structural building
Electronic assembling
Programming of the microcontroller and the mobile application
The scaffold structure was made by attaching to a central column four holders for UV-C germicide lamps, connected in parallel. The central column was placed on the bot which has four wheels for locomotion.
The control unit is based on an Arduino UNO board; this gives the order to the switch to turn on the UV-C lamps using an electromechanical relay. An HC06 Bluetooth module is used to communicate with the board from Bluetooth devices.
Three LEDs were installed to indicate its functional state:
Connected to the electric supply (green LED)
Bluetooth connection established (blue LED)
UV-C lamps activated (red LED)
The red LED is combined with a passive buzzer to start a warning sequence just before the activation of the UV-C lamps. Because the UV-C radiation is harmful to humans, a PIR sensor was added as a security measure. In this way, the device is automatically turned off when a user is near.
The ESP32 camera is attached to the front of the robot and sends real-time video to the controlling app, so the person controlling the bot can see where it is going.
Finally, a mobile application was developed to control the disinfecting unit.
This app was designed using the MIT app inventor 2 tools.
The interface of this application is used for connection to the device via Bluetooth, and for controlling the robot wirelessly.
The Robot
Working
The dosage values can be used to estimate the required exposure time according to the following simplified method:
The UV-C dosage received per unit of surface (D, expressed in J/cm²) at a given distance (r) from the sanitizer depends on the power of the emitted UV-C light (P, equal to 48 W for our device) according to this equation:
D = (P·t) / (2π·L·r)
Where L is the length of the UV-C lamps (89 cm) and t is the exposure time expressed in seconds.
Based on this equation, the exposure time can be calculated as follows:
t = (2π·L·r·D) / P
Using this method, a tool to estimate the minimum exposure time to reach the desired dosage for a certain distance from the device was developed and is available in the initial screen of the app controlling the device.
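The estimate above is easy to reproduce. This small sketch uses the constants stated in the text (P = 48 W, L = 89 cm), with r in cm and D in J/cm², so t comes out in seconds.

```python
import math

# Exposure-time estimate t = (2*pi*L*r*D) / P from the simplified model,
# using the constants stated in the text: P = 48 W, L = 89 cm.
P_WATTS = 48.0
LAMP_LENGTH_CM = 89.0

def exposure_time_s(distance_cm, dose_j_per_cm2):
    """Minimum exposure time in seconds for a given dose at distance r."""
    return (2 * math.pi * LAMP_LENGTH_CM * distance_cm * dose_j_per_cm2) / P_WATTS
```

For example, a 0.1 J/cm² dose at 1 m (r = 100 cm) works out to a bit under two minutes under this model.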
The Schematics for the UV Light
Now coming to the driving part.
The driving is done through the mobile app via Bluetooth. The real-time video lets the driver control the bot from another room, so that the UV rays will not harm them.
Further, we are planning to make the bot completely autonomous using ROS.
The Schematics of the Bluetooth Control
The Controlling App
Conclusion
A UV-C room disinfection device was made with functions similar to proprietary commercial systems. The presented model can easily be scaled up by modifying its structure (adding more UV-C lamps) and programming (editing the open-source code of the Arduino board and/or the Android application), achieving savings of more than 80% with respect to the price of similar proprietary commercial equipment.
Resources
Full Video:
Click Here
Short Video:
Click Here
Documentation:
Click Here
Presentation:
Click Here
UV Light App:
Click Here
Controlling App:
Click Here
Built With
arduino
c++
mit-app-inventor
Try it out
piysocial-india.github.io
github.com
github.com
github.com | Coradicator | Corona + Eradicate = Coradicator ! | ['Saswat Samal', 'Saswat Mohanty', 'PIYSocial', 'Sanket Sanjeeb Pattanaik'] | ['3rd Place Overall', 'Best Design'] | ['arduino', 'c++', 'mit-app-inventor'] | 30 |