hackathon_id int64 1.57k 23.4k | project_link stringlengths 30 96 | full_desc stringlengths 1 547k ⌀ | title stringlengths 1 60 ⌀ | brief_desc stringlengths 1 200 ⌀ | team_members stringlengths 2 870 | prize stringlengths 2 792 | tags stringlengths 2 4.47k | __index_level_0__ int64 0 695 |
|---|---|---|---|---|---|---|---|---|
10,371 | https://devpost.com/software/comed | Welcome screen
Login screen
Registration screen
Link screen shown after Login/Registration
Invalid screen
Date selection page (after selection)
Date selection page (before selection)
Info page for user and dates countdown
Inspiration
The inspiration for the app comes from the idea of creating order and preventing the chaos that could plausibly break out once a vaccine is developed.
One of the main reasons for the spread of the virus was that people were not aware of the extent of the problem and had not completely followed the rules laid down. This app was made with the intention of preventing that problem.
What it does
The aim of this app is to ease the distribution of the COVID-19 vaccine once it is produced and ready for the public.
The app allows users to register and choose a vaccination date, which they can also change before the due date.
Based on that date, they are assigned a hospital in their vicinity, which changes if the user changes their date.
Picking the date in advance caps attendance at a maximum of 50 people per hospital on any given day, and also allows the hospital to be prepared for the people coming in.
The app would lead to a systematic flow, giving everyone a clear sense of what is going on.
It also shows a countdown of the days remaining for the vaccination day the user has selected.
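The booking cap described above can be sketched as a small scheduling routine. The app itself is written in Dart with a Firestore backend; this Python snippet, with hypothetical names, only illustrates the 50-per-hospital-per-day rule and the date-change flow.

```python
# Illustrative sketch (not the app's actual Firestore logic): each
# (hospital, date) slot holds at most 50 booked users.
DAILY_CAPACITY = 50

def book_slot(bookings, hospital, date, user):
    """Try to book `user` at `hospital` on `date`; return True on success."""
    slot = bookings.setdefault((hospital, date), set())
    if user in slot:
        return True  # already booked on that day
    if len(slot) >= DAILY_CAPACITY:
        return False  # hospital is full on that date
    slot.add(user)
    return True

def move_booking(bookings, hospital, old_date, new_date, user):
    """Change a booking before the due date, as the app allows."""
    if book_slot(bookings, hospital, new_date, user):
        bookings.get((hospital, old_date), set()).discard(user)
        return True
    return False
```

A per-day set like this is enough to both enforce the cap and tell the hospital how many people to expect.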
How I built it
The app was built using Flutter and was connected to a Firebase backend.
We made a Login/ SignIn page that uses the firebase authentication.
Firestore is used to store the data.
We used the geolocation API to get the user's current location, so permission to use the phone's location is requested at the start.
The Google Places API was used to get data on hospitals based on the user's current location.
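Choosing a nearby hospital from the Places results can be illustrated with a great-circle distance check. This is a hedged Python sketch with made-up coordinates, not the app's actual Dart/Places code.

```python
# Hypothetical sketch: given the user's coordinates and a list of
# hospitals (as returned by a places lookup), pick the closest one.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def nearest_hospital(user_lat, user_lon, hospitals):
    """Pick the closest hospital from (name, lat, lon) tuples."""
    return min(hospitals, key=lambda h: haversine_km(user_lat, user_lon, h[1], h[2]))
```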
Challenges I ran into
Firebase connectivity was a challenge we had to overcome; Firebase was something new that we hadn't worked with before. It was eventually figured out, and everything worked thereafter.
The Google Places API was also new to us, and discovering it was really exciting.
Accomplishments that I'm proud of
The major accomplishment that we're proud of is that, without any prior knowledge of Firebase, we managed to figure it out and work with it.
We learned about some new APIs.
Finally, we were able to make something we could show.
What's next for CoMed
Technical Aspects :
1) A proximity alarm will be added to help ensure social distancing is maintained, triggering if any two users of the app come closer than a set distance.
2) We also intend to provide the user with a feed section to showcase the latest news.
3) We plan to incorporate email reminders to alert users about any COVID cases detected in their vicinity.
Business Aspects :
1) Increase awareness about the app using social media marketing.
2) Develop an evolution plan to find a way to keep helping the community after the vaccination period is over.
Built With
dart
firebase
flutter
geolocaion
google-maps
google-places
Try it out
github.com | CoMed | The aim of this App is to ease the process of distribution of vaccination of the COVID-19 virus once the vaccination is produced and is ready for the people. | ['Dhruv Tewari', 'AdGit17', 'subhash tanikella'] | [] | ['dart', 'firebase', 'flutter', 'geolocaion', 'google-maps', 'google-places'] | 125 |
10,371 | https://devpost.com/software/boba-map | Opening screen of app
More zoomed out map view
Form for adding a drink
Result of adding a drink
Personal profile
Activity feed
List of people you're following
Inspiration & What it does
As an avid drinker of bubble teas fortunate enough to be surrounded by an ever-growing number of boba cafes, I wanted a way to keep track of my favorite drinks at each of these places. Boba cafes tend to have extensive menus that can be overwhelming to a newcomer. To solve this issue, I created an app that tracks your boba drinking habits and those of the people that you follow. When you go get boba, you can drop a pin at that location on the map and include details like what drink you got (toppings, sugar and ice levels, etc.), what you thought of it, and how you rate it. People that follow you would receive a notification about the drink you're sipping and vice versa. If you feel stuck deliberating what to get at a cafe, you can refer to the app to see drinks that the people you follow got as well as their opinion about them. The main view of this app features a map with pins of drink ratings at various locations. You get to compare what you and your community think about the drinks at various boba cafes. Additionally, the app keeps track of the number of drinks you've gotten overall as well as those of the people you follow.
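A minimal sketch of the data model implied above, in Python rather than the app's actual Dart: each logged drink is a record, and the profile and feed views are simple reductions over those records. All names and fields here are hypothetical.

```python
# Hypothetical drink log: each entry records who drank what, where,
# and with which customizations.
from collections import Counter

drinks = [
    {"user": "alice", "cafe": "Boba Bliss", "drink": "taro milk tea",
     "toppings": ["boba"], "sugar": "50%", "ice": "less", "rating": 4.5},
    {"user": "alice", "cafe": "Tea Top", "drink": "oolong",
     "toppings": [], "sugar": "25%", "ice": "regular", "rating": 3.0},
    {"user": "bob", "cafe": "Boba Bliss", "drink": "matcha latte",
     "toppings": ["pudding"], "sugar": "75%", "ice": "no", "rating": 5.0},
]

def drink_counts(entries):
    """Total drinks logged per user, as shown on the profile view."""
    return Counter(e["user"] for e in entries)

def cafe_feed(entries, cafe, following):
    """Drinks that people you follow got at a given cafe."""
    return [e for e in entries if e["cafe"] == cafe and e["user"] in following]
```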
How I built it
I developed the app using Dart and Flutter. For the map features, I used the Google Maps SDK through Google Cloud Platform.
Accomplishments that I'm proud of & What I learned
I learned a lot about app development and about using a coding language with a declarative structure. This was my first time using Flutter, so I'm proud of all that I was able to do within the hackathon time such as being able to integrate the Maps SDK, creating the essential app views, and developing most of the functionality out.
Challenges I ran into & What's next for Boba Map
Due to the time limitations and being a team of one, I was not able to complete all the functionality that I envisioned for this app. Some of that functionality would be tracking boba drinks over different periods of time, sorting through people you're following by different criteria, searching through reviews, and having achievements for drinking boba and using the app.
Built With
dart
flutter
google-cloud
maps-sdk
Try it out
github.com | Boba Map | An app that let's you track your favorite bubble tea drinks across all cafes, and share how much you love boba among your friends | ['Michelle Yang'] | [] | ['dart', 'flutter', 'google-cloud', 'maps-sdk'] | 126 |
10,371 | https://devpost.com/software/georgia-tech-website-chat-bot | A sample dialogue with our favorite Mascot Buzz!
Another sample dialogue with Buzz.
Inspiration
Our inspiration for the "Chat with Buzz" concept came about as we realized how difficult it could be to find information on the Georgia Tech website. Many other sections of the website have their own chat bots so it only makes sense for the front page of our school website to have one as well.
What it does
The user can input questions on everything from general school information to athletics and EA deadlines for prospective students. After a question is posed, Buzz will do his dear best to answer it (though he does sometimes miss the mark).
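The question-answering flow can be approximated by a keyword-overlap matcher. The real project is Java with a small neural network; this Python stand-in, with an invented FAQ table, only illustrates the idea.

```python
# Simplified, hypothetical stand-in for the bot's matching step:
# score each FAQ entry by keyword overlap with the user's question.
FAQ = {
    frozenset({"early", "action", "deadline"}):
        "The EA deadline is posted on the admissions page.",
    frozenset({"football", "schedule"}):
        "See the athletics site for the football schedule.",
    frozenset({"tuition", "cost"}):
        "Tuition figures are listed under student financial services.",
}
FALLBACK = "Sorry, Buzz missed the mark on that one. Try rephrasing?"

def answer(question):
    """Return the FAQ reply with the most keyword overlap, else a fallback."""
    words = set(question.lower().replace("?", "").split())
    best, score = FALLBACK, 0
    for keys, reply in FAQ.items():
        overlap = len(keys & words)
        if overlap > score:
            best, score = reply, overlap
    return best
```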
How I built it
In the IntelliJ IDE, we used Code With Me to collaborate on the same class files at the same time, allowing us to maximize efficiency and bounce ideas off each other. We are both taking the same Java class right now, and we applied the knowledge gained from it in this project, building on the ideas of object-oriented programming.
Challenges I ran into
We are both new with anything related to graphics and hackathons in general so creating the GUI, though it is a very simple one, was no easy feat. We did as much research as we could online and watched YouTube video upon YouTube video to learn as much as we could. Coming from no knowledge, we are more than satisfied with the end product.
Accomplishments that I'm proud of
From attending The Agency organization lectures (a machine learning club here at Georgia Tech), we applied our basic knowledge of neural networks to our project. Also, it was very fruitful to gain exposure to GUIs.
What I learned
We learned the value of time management when completing a project and how to effectively collaborate on code. We also learned how to adapt open source code into our own program.
What's next for Ask Buzz
Hopefully, the next Buzz will be able to learn effectively from itself and from the questions it acquires on the school website.
Built With
gui
java
machine-learning
swing
Try it out
github.com | Ask Buzz | Often on our school website, information is difficult to find. Our chat bot is designed to be integrated into the Georgia Tech website to allow for students to gain quick responses to their questions. | ['Brandon Noll', 'markshark25 Lau'] | [] | ['gui', 'java', 'machine-learning', 'swing'] | 127 |
10,371 | https://devpost.com/software/rad-ish | data-input design mockup, version alpha
product management design mockup, version alpha
database view design mockup, version alpha
Why We're Here
The COVID-19 pandemic helped us see the struggles small businesses go through. Without expensive corporate organization techniques and cutting-edge technology, many of these businesses find it difficult to compete in an already cutthroat market.
One particular problem we noticed was the amount of expired food small businesses had to throw out. While large grocery stores have the technology to track expiration dates of items, small stores have neither the technology, nor the time to do this.
We wanted to make a difference. So we created
rad(ish)
.
What Is
rad(ish)
?
rad(ish)
is a web app that allows small business owners to automatically place items reaching their expiration date on sale. Business owners and employees alike can either upload databases of their current inventory or use our intuitive interface to manually enter inventory information.
After data has been entered,
rad(ish)
will analyze expiration dates and current list prices of database items, deciding which items should be put on sale and how much their prices should be reduced. As items get closer to their expiration date, their prices are reduced more and more! This incentivizes buying items that would otherwise be thrown away, thereby minimizing food waste and lost revenue.
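The sale-allocation idea above can be sketched in a few lines; the thresholds and discount rates here are hypothetical, not rad(ish)'s actual algorithm.

```python
# Hypothetical pricing rule: the closer an item is to expiring,
# the deeper its discount.
def sale_price(list_price, days_to_expiry):
    """Reduce price more aggressively as the expiration date approaches."""
    if days_to_expiry <= 1:
        discount = 0.50
    elif days_to_expiry <= 3:
        discount = 0.30
    elif days_to_expiry <= 7:
        discount = 0.10
    else:
        discount = 0.0
    return round(list_price * (1 - discount), 2)
```

Running this over each row of an uploaded inventory CSV would yield the sale prices the app displays.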
How we built it
Front-end developed with HTML, CSS, and Javascript with pair-programming and initial testing done using Glitch servers.
Back-end developed with Python for CSV parsing, database analysis, and algorithmic sale allocation on inventory items.
Complete project integration done using Flask and Django.
Technologies Used:
• Python
• Flask
• Django
• HTML
• CSS
• Javascript
Challenges we ran into
• Integrating Python-based algorithm into the web development application, Glitch
• Front-end component responsiveness on mobile devices
• Proper CSV parsing and database management
What's next for rad(ish)
We have begun to create a login page for employees to add a layer of security, but the database is not fully implemented yet. rad(ish) has many new features yet to come - we hope you'll stick around!
Built With
css
django
flask
html
javascript
love
Try it out
radish.glitch.me
github.com | rad(ish) | rad(ish) is a business-facing web app focused on minimizing food waste. It databases and manages food inventories and expiration dates, dynamically putting items on sale when they are about to expire. | ['Vikrant Bathala', 'Sudhan Chitgopkar', 'Joanna Zheng', 'Marylyn Chen', 'Vik Bathala'] | [] | ['css', 'django', 'flask', 'html', 'javascript', 'love'] | 128 |
10,371 | https://devpost.com/software/replaylist-bk03yl | replaylist
Inspiration
Have you ever listened to an amazing radio station or queue on Spotify and wish you could have saved those songs? Was the process of rewinding song by song tedious and cumbersome? Do you wish you could add multiple songs to playlists at once? Don't you wish there was an easier way to access your listening history? We know we have, and that's what inspired us to build replaylist.
Spotify is one of the most widely used music streaming services, but it lacks various features we found essential. While Spotify keeps track of your listening history, there is no way to access it aside from repeatedly rewinding song by song! That may work if you want to find a song you listened to a few minutes ago, but saving a whole music session or queue that way is highly repetitive and inefficient.
We wanted to make Spotify a better tool for listening sessions, so users can save various sessions and have access to them whenever they want, rather than needing to rewind for minutes to find a specific song. Users would get a customizable approach to managing their listening history and deciding where to put their music, while also maintaining the individualized music Spotify recommends.
What it does
Replaylist aims to reinvent how users interact with their music and how listening sessions are done. With replaylist, users can start listening sessions and, at the end of each session, decide which songs to transfer to which playlist. Replaylist interacts with Spotify's API using user credentials obtained through OAuth 2 to fetch and display the user's listening history securely.
How we built it
The core of the project's backend is Spotify's Web API. It offers many of the tools required to access a user's public and private playlists without intruding on their privacy, while also allowing replaylist to transfer songs en masse.
On the other hand, the frontend is built using the classic trio of web development: HTML, CSS, and JavaScript. We implemented native JavaScript and made this application entirely client side, reducing many risks associated with external services such as replaylist. Furthermore, by implementing this project using native JavaScript and HTML, the user end code runs blazing fast and there are minimal delays syncing between Spotify and replaylist. The content and website were hosted using Netlify's web hosting services to pull from our GitHub repository.
Challenges we ran into
The first challenge we ran into was obtaining the proper credentials to make calls to Spotify's Web API. A solid understanding of this API was essential to set up a secure standard, OAuth 2, for users to interact with our website. From there, we initially ran into issues regarding various permissions and domains, but they were quickly sorted out by continually referencing the excellent documentation Spotify provides.
However, the biggest challenge we ran into was designing a user-friendly way to interact with replaylist and deciding on expected behaviors. Multiple decisions were made as to how replaylist would decide which playlists are available to write to, how replaylist will notify users that their session is going to expire, and how replaylist will give users the power to decide what happens to their playlists. We designed a framework where users can indicate playlists open to modification using a special tag in the playlist's description. From there, after having started and completed a listening session, users can move as many songs as they want over to any number of the selected playlists for future listening.
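The opt-in framework described above might look like the following Python sketch (the real app is client-side JavaScript, and the actual tag string is whatever replaylist defines; "#replaylist" is just a placeholder):

```python
# Hypothetical sketch: playlists opt in via a tag in their description,
# and a finished session's tracks are appended to the selected ones.
TAG = "#replaylist"

def writable_playlists(playlists):
    """Keep only playlists whose description carries the opt-in tag."""
    return [p for p in playlists if TAG in p.get("description", "")]

def transfer(session_tracks, selected_ids, playlists):
    """Append a session's tracks to every selected, writable playlist."""
    writable = {p["id"]: p for p in writable_playlists(playlists)}
    for pid in selected_ids:
        if pid in writable:
            writable[pid].setdefault("tracks", []).extend(session_tracks)
    return playlists
```

Note how a playlist that was selected but never tagged is silently skipped, so users stay in control of what gets modified.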
Accomplishments that we're proud of
We are very proud of the user experience we provide and the seamless integration with Spotify. We are also very proud of the execution of the idea, as we all worked hard to build the most robust application we could from our initial concept. The UI feels modern and intuitive, giving users a very enjoyable experience while using our product.
What we learned
Most of the team had not previously used Spotify's Web API, and quickly picking up a new API is a very important skill in web development. Everyone also learned how to integrate OAuth 2 with this API, send HTTP requests, and parse JSON responses.
What's next for replaylist?
The most important feature we want to implement is automatic filtering and sorting of music. Using Spotify's API, music can be sorted automatically by audio features such as danceability, tempo, energy, acousticness, etc. Combining this with the custom-tags implementation discussed earlier, users could easily define how to sort listening sessions, so they can focus less on where their music is going and more on what to listen to next.
Built With
css
github
html
javascript
netlify
spotify
Try it out
github.com | replaylist | replay your favorite music by adding listening history to a desired Spotify playlist | ['Melissa Hernandez', 'Simon Abrelat', 'Taleb Hirani', 'David Gordon'] | [] | ['css', 'github', 'html', 'javascript', 'netlify', 'spotify'] | 129 |
10,371 | https://devpost.com/software/sentiment-analysis-hackathon | An example of the live map data that this program provides.
A look at the command line showing what the sentiment of what each candidate overall is.
Inspiration
Getting a constant stream of thoughts from people is now possible with Twitter's API, and processing them automatically is not very hard, so we thought: why not put the two together? We used this "thought processing" to analyze how people feel about Donald Trump and Joe Biden (as there is an election coming very soon).
What it does
This takes in a live stream of tweets containing the keywords "Trump" or "Biden", analyzes each one to see whether it is positive or negative, then aggregates all this information and displays it in an easy-to-understand way. It can also analyze this data LIVE.
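The categorize-and-aggregate step can be sketched with a toy lexicon scorer. The project uses textblob for the real scoring; the tiny word lists below are illustrative stand-ins, not the actual model.

```python
# Toy stand-in for the pipeline: score each tweet, then add the score
# to the running total of every candidate the tweet mentions.
POSITIVE = {"great", "good", "love", "win"}
NEGATIVE = {"bad", "terrible", "hate", "lose"}

def sentiment(text):
    """Crude polarity: positive words minus negative words."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def aggregate(tweets):
    """Running sentiment totals per candidate keyword."""
    totals = {"trump": 0, "biden": 0}
    for t in tweets:
        score = sentiment(t)
        for name in totals:
            if name in t.lower():
                totals[name] += score
    return totals
```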
How We Built it
We built this using Tweepy, a Twitter API wrapper; the textblob Python package, to analyze the tweets; and the leaflet.js package, to display the aggregated data live.
Challenges we ran into
Our original plan was to program a bot that analyzed tweets to buy and sell stocks based on whether they carried good or bad sentiment (good or bad vibes), but we hit an issue: the markets were closed, so we couldn't test our bot, which took us too long to realize. So we pivoted and decided to make a program that could report data about candidates' perception and favorability faster than any poll could: within one or two seconds, any tweet about Joe Biden or Donald Trump is analyzed, assigned a sentiment value, categorized, and then displayed to the user, giving live, useful data about a candidate.
Accomplishments that we're proud of
We are proud that we got the thing running and functional. We are also proud that we have it all running live and that this program could be useful to anyone who wants to know the current political standing of a candidate.
What we learned
You can't make a stock market trading bot on a weekend, and even if your idea doesn't work out, that doesn't mean it was a waste of time; you can always make something cool out of it.
What's next for Sentiment Analysis Hackathon
We are hoping to run this during a debate to watch people's live perception change as the candidates talk. We also hope to see it used in other applications, or by people who simply want to see how the candidates stand in each state, in an unconventional form that still gives them useful information.
Built With
github
html
javascript
leaflet.js
python
tweepy
Try it out
github.com | Tweets for President | Analyzing Tweets to Predict the President | ['Emaad Shamsi'] | [] | ['github', 'html', 'javascript', 'leaflet.js', 'python', 'tweepy'] | 130 |
10,371 | https://devpost.com/software/kpop-community-app | Our project is a KPOP (Korean Pop) fan community application built in Dart and Flutter. Our team went to the Android app development sessions and learned Dart from the presentations. We thought it was cool, so we decided to use it to code our app. The app is mostly a hard-coded rough prototype, but most of the desired features are visually represented in some way.
We designed this application because all of the team members are KPOP fans who find it difficult to have meaningful interactions with other KPOP fans on social media like Twitter and Instagram.
-Eric, Tarah, Serena, & Rebecca
Built With
dart
kotlin
objective-c
swift
Try it out
github.com | KPOP Community App | The #KPOP App made for everyone! Get the latest news about your faves, look up information about different groups, and join the conversation in different forums! | ['Eric M.', 'Tarah Thompson', 'Serena Gao'] | [] | ['dart', 'kotlin', 'objective-c', 'swift'] | 131 |
10,371 | https://devpost.com/software/notes-dvyg6q | This was our first time using React, and we struggled creating this front-end project. However, we have big plans for the future of this app.
Built With
css
html
javascript
react
Try it out
github.com | Super Notes | A simple note-taking app | ['ynkelka Kelkar', 'Tariq Kariapper'] | [] | ['css', 'html', 'javascript', 'react'] | 132 |
10,371 | https://devpost.com/software/sentiment-trader | Inspiration
We have always had an interest in data analytics, especially in the realm of quantitative finance. Observing the effects of the COVID-19 pandemic on market volatility had us both thinking about what trading signals could be developed to see through the chaos. During this hackathon, we settled on lay sentiment as an interesting avenue to explore, so we decided to see what results we could obtain.
What it does
This program compares the performance of our sentiment-driven stock trading algorithm, SentimentStrategy, against the performance of an extremely common stock trading signal, the simple moving average (SMA), across 20 different companies within the S&P500 stock index from 1/1/2020 to the current date.
How I built it
We used python in conjunction with pushshift.io, firebase, vaderSentiment, and backtrader to perform our research. Pushshift was used to get comment data from Reddit. We used vaderSentiment to examine the positivity or negativity of each Reddit comment. Firebase stored our averaged sentiment scores for use with backtrader. Backtrader hosted our strategies and allowed us to compare their performance using real world price data.
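The SMA baseline mentioned above can be sketched as follows. The window lengths are hypothetical, and in the project itself backtrader handles this logic; this is only the core idea of a moving-average crossover signal.

```python
# Minimal SMA crossover sketch (window sizes are illustrative only).
def sma(prices, window):
    """Simple moving average of the last `window` prices, or None if too short."""
    if len(prices) < window:
        return None
    return sum(prices[-window:]) / window

def crossover_signal(prices, fast=3, slow=5):
    """'buy' when the fast SMA crosses above the slow SMA, 'sell' when below."""
    f, s = sma(prices, fast), sma(prices, slow)
    pf, ps = sma(prices[:-1], fast), sma(prices[:-1], slow)
    if None in (f, s, pf, ps):
        return "hold"
    if pf <= ps and f > s:
        return "buy"
    if pf >= ps and f < s:
        return "sell"
    return "hold"
```

A sentiment-driven strategy would swap the price averages for averaged comment-sentiment scores, keeping the same crossover structure.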
Challenges I ran into
We had a lot of trouble collecting data in the beginning. Collecting large swathes of data using APIs can be challenging. Ready access to large databases or forum information could have made our data collection process much faster and freed up time for more in depth quantitative analysis.
Accomplishments that I'm proud of
Our sentiment strategy was able to outperform the common SMA crossover strategy. This suggests that our research has real application in the world of quantitative finance and merits deeper investigation.
What I learned
We learned a lot about the modelling that goes into sentiment analysis, as well as what is required to create a trading strategy. Hopefully we can take this information further in expanding this research or into other hackathons down the road.
What's next for Sentiment Trader
We would like to expand our strategy to include more statistical measurements. We are presently limited to moving average measurements, but including exponential measurements or other more complex measurements could help us to refine our trading calls. We would also like to expand our data collection beyond just Reddit. Large amounts of stock sentiment data is created on platforms like Twitter and Facebook. Access to those datasets could allow us to generate more accurate sentiment measures for each stock. We would also like to perform back testing over longer periods of time. Right now, our back testing range is limited by our data collection scope. Improved data collection and larger stores of data could help us to generate longer term back tests that we could use to refine our strategy.
Built With
backtrader
firebase
pushshift.io
python
vadersentiment
Try it out
github.com | Sentiment Trader | An algorithmic stock trading program driven by a sentiment analysis of the most popular trading forum in the world, WallStreetBets. | ['danielwei816 Wei', 'Kwub Aduse-Poku'] | [] | ['backtrader', 'firebase', 'pushshift.io', 'python', 'vadersentiment'] | 133 |
10,371 | https://devpost.com/software/grocery-list-application | Inspiration
At this point, our project is not functional. We found that there were too many steps between jotting down a quick list of groceries and finding a local price comparison and locations for those items.
What it does
It is not yet functional, but we would like to use NLP to search through the Selling Service Catalog and recommend items based on price.
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Grocery List Application
We would like to continue working on this. We'd like to perfect our use of spaCy natural language processing for groceries by testing it on a dataset of grocery lists, then develop a price comparison for local grocery stores. This would allow people to make grocery lists readable by grocery pickup apps (like the Publix/Walmart apps!). We could then show the best store to visit for a given order.
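The price-comparison step described above could start as simply as the sketch below, with entirely hypothetical store data:

```python
# Hypothetical sketch: given each store's prices, find the cheapest
# store that stocks every item on the grocery list.
def best_store(grocery_list, store_prices):
    """Return (store, total) for the cheapest store carrying all items, or None."""
    candidates = {}
    for store, prices in store_prices.items():
        if all(item in prices for item in grocery_list):
            candidates[store] = sum(prices[item] for item in grocery_list)
    if not candidates:
        return None
    return min(candidates.items(), key=lambda kv: kv[1])
```

The NLP step would normalize a free-form note ("2 dozen eggs pls") into canonical item names before this lookup runs.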
Built With
expo.io
javascript
python
react
spacy
Try it out
github.com | Grocery List Application | We’d like to remove the friction between jotting down a list of your groceries (picture of notes) and knowing where to find them. The goal is an grocery list input, and a cost comparision output. | ['Naved Momin', 'dmedof Medof', 'Toby Nguyen'] | [] | ['expo.io', 'javascript', 'python', 'react', 'spacy'] | 134 |
10,371 | https://devpost.com/software/novid-zdin49 | Novid In Action
Inspiration
Unfortunately, some of our family members contracted COVID-19. After seeing the struggle that they went through, we wanted to take this opportunity to spread knowledge and tips for best practices to our users in order to help curb this global pandemic.
What it does
Our application allows users to interact with chatbot Novid, which can both offer facts about COVID-19 and advice on how to stay safe.
How we built it
We used Dialogflow to build and train Novid, which we integrated into our React application using JavaScript, HTML, and CSS, as well as Node technologies.
Challenges we ran into
We ran into challenges integrating Dialogflow with React, but we were able to resolve it by implementing a third party integration tool called Kommunicate into our code.
Accomplishments that we're proud of
We are proud of the fact that we successfully integrated Dialogflow with React, which allowed us to create a fully functional chatbot on a web application. As first year college students with no experience with prior hackathons, we are proud to have successfully created an application from scratch and hope to use this experience as a stepping stone for future hackathons.
What we learned
React, Dialogflow, and Kommunicate were all completely new technologies to us, and we learned how to effectively use and combine them to create a fully functional application. We also learned how to work efficiently under time constraints, as well as how to work well within a team of diverse people.
What's next for NOVID
For now, Novid functions as a user-friendly conversation tool about COVID-19. We are looking forward to expanding Novid's knowledge base using integration with up-and-coming healthcare APIs to provide our users with a more comprehensive and applicable experience.
Built With
css
dialogflow
google-cloud
html
javascript
kommunicate
node.js
react
Try it out
github.com | NOVID | NOVID: Here To Talk To You About COVID | ['Maniya Dahiya', 'Ramya Challa', 'Gautam Sugasi', 'Chinmayi Kompella'] | [] | ['css', 'dialogflow', 'google-cloud', 'html', 'javascript', 'kommunicate', 'node.js', 'react'] | 135 |
10,371 | https://devpost.com/software/sus-highlight | Inspiration
One of the most ignored documents is a company's terms and conditions section. A recurring issue is that people blindly accept these terms and conditions without reading the fine print, and this lack of attention to fine detail can land a person in many problems. To combat this, our solution is to create a short summary of the details stated in these documents, as well as to determine which sections of the document have a positive or a negative effect on the user.
What it does
The user is presented with a website that allows text input. The user is to copy and paste the Terms and Conditions’ text into the textbox provided on the website. The text placed inside the textbox will run through our algorithm. In the end, the algorithm will determine if the given text has a positive or a negative connotation, thus giving the user information about its effect.
How we built it
For the backend algorithm, we are using Python’s open-source libraries: scikit, sklearn, nltk, and NumPy. Together with these Python libraries, we are able to detect the sentiment of the user’s text. We are able to detect and classify the text into different features by using the Bag of Words technique to determine the frequency distribution of the features with a given text. The Random Forest Classifier helped us train datasets so that it is possible to predict the outcome of other datasets obtained from the user. For the front-end, we used Django and its system of populating HTML templates alongside CSS styling.
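The Bag of Words step can be shown by hand; scikit-learn's CountVectorizer does this in the project, so the snippet below is only a transparent re-creation of the feature-extraction idea.

```python
# Hand-rolled Bag of Words: build a shared vocabulary, then represent
# each document as a vector of word counts over that vocabulary.
from collections import Counter

def bag_of_words(docs):
    """Return (vocabulary, per-document count vectors)."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    vectors = []
    for d in docs:
        counts = Counter(d.lower().split())
        vectors.append([counts.get(w, 0) for w in vocab])
    return vocab, vectors
```

Vectors like these are what the Random Forest classifier is then trained on to separate favorable clauses from problematic ones.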
Challenges we ran into
In the backend, the team was not an expert with Natural Language Processing, so there were many challenges when it comes to understanding how machine learning is able to comprehend which words are considered to have a positive or a negative connotation. Because we weren’t experienced in this topic, the debugging process was long and tedious as we had to familiarize ourselves with the functions provided in the Python libraries. We also noticed that there weren't any data sets available with "good" or "bad" legal terms and phrases, so there was a lot of research that we needed to do in that regard.
Built With
amazon-web-services
django
jupyter
Try it out
github.com | Sus Highlight | Have you ever ignored 5+ pages of the terms and condition section and accepting the contract blindly? Well, our site will highlight the problematic statements that should be read thoroughly. | ['Ruth Pavoor', 'Shyam Patel', 'Caleb Werth', 'Kimberly Lie'] | [] | ['amazon-web-services', 'django', 'jupyter'] | 136 |
10,371 | https://devpost.com/software/rendezvous-dlejr5 | Splash screen
Inspiration
During the age of COVID, we've moved to a primarily virtual environment, especially in the social sphere. Our application gives people a new way to quickly connect with others who share their interests through a quick real-time conversation.
What it does
An Android application that allows users to meet and connect based on shared interests and to video chat with others for a more personable experience.
How I built it
In VS Code, we used the Flutter framework to code in Dart and create an Android application that could be viewed and tested through the emulator. We generated links from Zoom for the video chat feature and uploaded our commits to GitHub using Git.
Challenges I ran into
None of the team members knew how to code in Dart, and setting up the framework was extremely difficult. We also had to adapt to communicating entirely virtually, and we had to tackle a lot of bugs, which took a good chunk of our time. However, by the end, we were able to overcome most of the challenges.
Accomplishments that I'm proud of
As a team, our biggest accomplishment was how well we were able to work together to figure out the solution to a common problem that all of us have experienced. This is our first Hackathon and we were able to create a fully functioning Android application by using a language we had never seen before.
What I learned
We learned a lot more about collaborating and organizing our priorities to yield the best end product. We also learned more about how to create Android applications by learning new frameworks and languages.
What's next for Rendezvous
For Rendezvous, we want to add a login/sign-up system, a splash screen, in-app video calls, a location filter, and tone detection using our knowledge of AI/ML.
Built With
android-studio
dart
emulator
flutter
git
github
zoom
Try it out
github.com | Rendezvous | enables college students for spontaneous interactions through video services for personable connections within their communities | ['Natasha Bohra'] | [] | ['android-studio', 'dart', 'emulator', 'flutter', 'git', 'github', 'zoom'] | 137 |
10,371 | https://devpost.com/software/space-tech-online | Our team registered the domains blackturtle.tech and winendine.online.
This first domain - blackturtle.tech - was inspired by tech legend Steve Jobs and his fashion choices. He and his black turtleneck have certainly made their mark on the tech industry, so we thought this domain married the man's style with his legacy in the perfect way.
Our second domain - winendine.online - was born out of the search for the ultimate rhyme. We also found this name fitting with the world's current state; hopeless singles can still "wine and dine" each other, but they must do so... online. Notice the "end" in the domain as well. This year has certainly felt like the end of something, whether it be our faith in humanity or our faith in ourselves.
We learned how expensive domains can be (e.g., $1,688 for the first year of hotbananas.com) and that creativity is key when it comes to naming your website. Honestly, it was pretty challenging finding the right domain; it seems that witty people have snatched up some of the best ones. Even so, trying to come up with domain names was a fun experience, and one that we really enjoyed. :)
Built With
domain.com | space tech online | Dope domains | ['allison fister', 'Atlas Coltrain'] | [] | ['domain.com'] | 138 |
10,371 | https://devpost.com/software/real-connections | Inspiration
We were inspired by VR technology and Gatherly to create a virtual space for musicians to practice during the pandemic.
What it does
Our program uses skyboxes to create a simple VR with audio students can use to practice singing.
How we built it
Our site uses JavaScript, CSS, and HTML for our general structure. To create our skybox we used the three.js source code and modified it to fit our purposes.
Challenges we ran into
Making the skybox work and combining our team's code were some of the most challenging parts of our project. At first the skybox only displayed a black screen rather than the 3D world. After a lot of double-checking, YouTube videos, articles, breaking down the code, and guidance from our amazing mentor, we were able to display the 3D world we wanted. We also had to do a few workarounds to get background sound to work in Google Chrome, but now the website works in all browsers.
Accomplishments that we're proud of
The skybox looks so cool, and I was so surprised by what we could create in the span of a single day, especially because a few of us had little to no experience with HTML (I hadn't used it until today, and I learned a lot!).
What we learned
Three.js is an amazing JavaScript library! We only scratched the surface of the things we could do with it. It has so many VR applications, and it's relatively user friendly. All of their guides are extremely comprehensible, and there are a lot of video tutorials and articles that were helpful as well.
What's next for MusiCloud
Our prototype is in quite a primitive state. In the future we would like it to be a true VR space in which users can move around to listen in on the audio from other users. We would also like users to be able to play along with prerecorded tracks so they can practice harmonization.
SOURCE CODE
We used Three.js to help complete this project! Here is a link to their github:
https://github.com/mrdoob/three.js/
Audio: Easy Stroll from Youtube Audio Library
Images: “Lancellottie Chapel.” Humus, 10 June 2013, www.humus.name/index.php?page=Textures&ID=110.
Link to Image License:
https://creativecommons.org/licenses/by/3.0/
Built With
css
html
javascript
three.js
Try it out
github.com
agnes-scott-college.github.io | Musi.Cloud | Our aim is to create a virtual space for music students to listen and practice music in an interactive setting. | ['Lauren Whiteley', 'Jahve Hawkins', "Ja'Zmin McKeel", 'Jingyu Zhang'] | [] | ['css', 'html', 'javascript', 'three.js'] | 139 |
10,371 | https://devpost.com/software/pose-estimation-for-exercise-form-evaluation | Inspiration
We like to workout and having good form is imperative for making gains and staying safe. Thus, we tried to create a form evaluator that will help people correct their form.
What it does
It uses pose estimation to find the pose in a video of correct form, and we were working on letting users upload their own video to compare themselves to it.
How I built it
We built it using PoseNet (https://github.com/tensorflow/tfjs-models/tree/master/posenet) in a website.
Challenges I ran into
Being able to upload a video, load it onto a page, and then have it processed by PoseNet.
Accomplishments that I'm proud of
Being able to apply the posenet onto a pre-recorded video and display it.
What I learned
Using different functionalities of HTML and using JavaScript for more than just basic website logic.
What's next for pose estimation for exercise form evaluation
Being able to upload your own video for comparison and overlay a scaled template pose onto that video.
Built With
html
javascript
posenet
Try it out
0712zwang.github.io | Exercise Form Evaluator | Using pose estimation to help people compare their form to correct form | ['0712zwang Wang'] | [] | ['html', 'javascript', 'posenet'] | 140 |
10,371 | https://devpost.com/software/financial-assistance-tool-8btvxf | With a lack of financial calculators/budgeting assistance tools we decided to create something that would help
It accumulates data from the user to analyze and judge what decisions should be made as well as help keep track of data that is very useful to know in financial literacy.
We built this project using JAVA through the Eclipse and Microsoft VS Code IDEs. Using an inheritance structure and several packages from a common library we were able to create methods that analyze data received
While creating this application we wanted to create a full front-end section to format the outputs as an app that can be put into mobile devices and a website, however, we were unable to successfully implement these features.
We are proud that we were able to create a tool that can be so versatile and have many applications in future projects that may be more complex
We learned a lot about implementing packages and utilizing methods from packages given in common libraries. It helped enhance our efficiency and build a more effective product.
The next Financial Assistance Tool will be a version that includes a space for listing stock market investments, keeps live track of how those stocks are performing, and indicates whether or not it is a good time to buy or sell.
Built With
java
Try it out
github.com | FiCal | A financial assistance tool to help individuals and small businesses keep track of their financial records and plan for the future accordingly! | ['Rohit Nambiar'] | [] | ['java'] | 141 |
10,371 | https://devpost.com/software/engpredict | EngPredict Results
Inspiration
Imagine that you could predict the next time that your car's "check engine" light came on. Or the next time an appliance in your home needs to be replaced. As sensors become cheaper and find their way onto everyday machines, an opportunity arises for an application that can predict the remaining useful life (RUL) based on the sensor outputs.
Knowing when machinery requires maintenance ahead of time is very valuable knowledge. For example, if you know that your check engine light is about to turn on in 5 days, you probably wouldn't schedule a long road trip this upcoming weekend. Alternatively, if you know that your refrigerator is going to break in 30 days, you can schedule a repairman to come by in 3 weeks.
An accurate RUL prediction is vital; an inaccurate RUL is as useless as not knowing at all. If the model predicts too high of a RUL, you may schedule the maintenance too late and risk a breakdown. If the model predicts too low of a RUL, you lose money by scheduling maintenance too soon and not getting the most out of the equipment.
What it does
EngPredict is a product designed to tackle the problem of predictive maintenance. It was inspired by predicting jet engine Remaining Useful Life (RUL) but can be extended to any machine or appliance that collects data from a series of sensors. The product has the capability to make RUL predictions even if the appliance or machine changes operating conditions. It takes in training data in the form of sensor readings paired with the cycles remaining before failure. Then, given a set of sensor values, EngPredict predicts the remaining useful life of the product.
How I built it
At a high level, there are 4 problems to solve so we can successfully predict the Remaining Useful Life (RUL) of our jet engine:
Accounting for different operating conditions
Accounting for the sensor noise and picking the useful sensors
Developing a model between the sensors and the RUL
Using the models to predict the RUL for the testing data engines
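The four steps above can be sketched end-to-end with scikit-learn on synthetic data. This is a hedged illustration rather than the notebook's actual code: the synthetic sensor model, the cluster count, the correlation threshold, and the choice of a plain linear regressor are all our own assumptions.

```python
# Sketch of the 4-step RUL pipeline on synthetic data (assumptions noted above).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic training data: one operating setting with 3 distinct conditions,
# one sensor that degrades with remaining useful life (RUL), one noise sensor.
n = 600
op_setting = rng.choice([0.0, 10.0, 20.0], size=(n, 1))     # operating condition
rul = rng.uniform(0, 200, size=n)                           # cycles before failure
sensors = np.column_stack([
    0.5 * rul + 5.0 * op_setting[:, 0] + rng.normal(0, 2, n),  # useful sensor
    rng.normal(0, 1, n),                                       # pure-noise sensor
])

# Step 1: identify the operating condition by clustering the settings.
cond = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(op_setting)

# Step 2: standardize each sensor *within* its operating condition, then keep
# only the sensors that actually track RUL (correlation threshold is assumed).
sensors_std = np.empty_like(sensors)
for c in np.unique(cond):
    mask = cond == c
    sensors_std[mask] = StandardScaler().fit_transform(sensors[mask])
useful = [i for i in range(sensors.shape[1])
          if abs(np.corrcoef(sensors_std[:, i], rul)[0, 1]) > 0.5]

# Step 3: fit a model from the selected sensors to RUL.
model = LinearRegression().fit(sensors_std[:, useful], rul)

# Step 4: predict RUL for (here, the same) sensor readings.
pred = model.predict(sensors_std[:, useful])
print("selected sensors:", useful)
```

On this toy data the noise sensor is filtered out and the degrading sensor alone recovers RUL well; the real notebook's feature selection and regression are more involved.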
See the Jupyter Notebook for full explanations.
Challenges I ran into
Developing a model that worked for all 4 test cases instead of just the simple test case. Also, developing our own approach to best fitting the data.
Accomplishments that I'm proud of
We were able to get results that closely matched the test data and we had an innovative approach compared to others who have tried this problem!
What I learned
We got way better at using pandas, scikit-learn, and data science in general.
What's next for EngPredict
Next steps for this project would be to test different machines and appliances to see if the model can be extended beyond its initial use.
Built With
python
Try it out
github.com | EngPredict | Predictive Maintenance for Jet Engines and More | ['Ted Vlady'] | [] | ['python'] | 142 |
10,371 | https://devpost.com/software/lack_of_facts | Lack_of_Facts
Introduction
By: Nisha Rajendran and Manmeet Gill
In this day and age, especially with the election coming up soon, we realize that news and media outlets have a powerful impact on the result of the election. During an election, it is crucial that only factual articles are posted for the public to read. However, people do not always have the time to read an entire article and instead just use the headlines to obtain knowledge of a subject. According to a study by Columbia University and the French National Institute, 59% of people only read the headline and shared the link without reading the rest of the article. This could pose a huge problem: if people are not careful, they could spread misinformation unknowingly. While we can't change human behavior, we can assist people by giving them an easier way to decipher implicit bias behind article headlines without making them read the entire article. We have created a machine learning model that analyzes an article's title and outputs whether or not it is biased, with a 95% accuracy rate.
How the program works:
We implemented a machine learning algorithm using Python libraries such as scikit-learn, NumPy, and pandas, building the program around the Naive Bayes classifier algorithm. In order to train our model, we procured a dataset from Kaggle (
https://www.kaggle.com/clmentbisaillon/fake-and-real-news-dataset
) where the data was split into two CSV files. We used an app.py file as an external script for a website made from scratch using HTML, and used Flask to link the two HTML files. The HTML page takes an article headline as input; once 'predict' is clicked, the input is fed into the machine learning algorithm, which predicts whether the headline is biased (labeled 0) or unbiased (labeled 1). Based on that prediction, a second HTML file displays the result on the screen using if-else statements: if the prediction outputs a 0, a message states that the article headline is biased; if it outputs a 1, a message states that it is unbiased. We also added design elements to our HTML files, including images, buttons, and text features, to make the site more appealing to the user.
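The training and prediction step described above might look roughly like the following scikit-learn sketch. The tiny inline headline list is a made-up stand-in for the two Kaggle CSV files, and the helper name check_headline is our own; the labels follow the text's convention of 0 = biased, 1 = unbiased.

```python
# Hedged sketch of the Naive Bayes training/prediction step; the four
# headlines and the helper name are illustrative, not the project's data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

headlines = [
    "Shocking truth they don't want you to know about the election",
    "You won't believe what this candidate secretly did",
    "Senate passes appropriations bill after two-day debate",
    "Local election results certified by county officials",
]
labels = [0, 0, 1, 1]  # 0 = biased, 1 = unbiased (as in the description)

# Vectorize headlines into word counts, then fit Multinomial Naive Bayes.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(headlines, labels)

def check_headline(title: str) -> str:
    """Return the message the result page would display for this headline."""
    pred = model.predict([title])[0]
    return "This headline looks biased." if pred == 0 else "This headline looks unbiased."

print(check_headline("Senate passes bill after debate"))
```

In the real app, the Flask route would call something like check_headline on the submitted form input and render the matching HTML template.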
The main goal of this project is to give a way for people to take an article name, and without even reading its contents, determine whether that article is biased or unbiased. This resource is especially important now during election time, where voters need quick and efficient ways to make sure they are reading unbiased articles.
Instructions on running the code:
Download all the files in the GitHub repository and make sure that they're in their respective folders: the Python file, the datasets, and a folder called templates should be within a folder called GTHACKS, and the HTML files should be within the templates folder. You can download the datasets from the Google Drive link provided below:
https://drive.google.com/drive/folders/1pT-7Wa-93WIL6LgQAA68osd22tuoVTlI?usp=sharing
After you download the files, run app.py in the GTHACKS folder in your terminal using the command $ python app.py
Citations:
https://www.youtube.com/watch?v=i3RMlrx4ol4&t=484s
Built With
html
python
Try it out
github.com | Lack_of_Facts | It's the Lack of Facts for me | ['Nisha Rajendran', 'Manmeet Gill'] | [] | ['html', 'python'] | 143 |
10,371 | https://devpost.com/software/blogging-site | Inspiration
The project was more of a way for me to get to know what a hackathon was like and learn from the process of creating something. The project was the Web Dev Workshop project, which I personalized. My goal was to implement a login feature, but unfortunately I got stuck and time was running out, so I am submitting without it.
What it does
It allows people to create new posts on the blogging site, delete posts, and view others' posts.
How I built it
I followed the resources provided throughout the Web Dev Workshop and those I found online.
Challenges I ran into
I didn't really know how a webpage was served, so I initially thought of the web application as a single whole rather than two separate applications that talk to each other.
Accomplishments that I'm proud of
I was able to follow and complete the Web Dev Workshop project guidelines, which I was proud of. I am most proud, however, that I was able to learn from this experience, and I am excited to expand my knowledge of software development.
What I learned
I learned the various frameworks I used in the project and how a web application functioned to present the webpage and the different pages of a website.
What's next for Blogging Site
I definitely want to learn how to implement a login feature with a user-password database, and I hope to implement other features that belong in a blog site as well.
Built With
express.js
mongodb
node.js
npm
react.js
Try it out
github.com | Blogging Site | My first Hackathon Project. | ['Yoshiki Kakehi'] | [] | ['express.js', 'mongodb', 'node.js', 'npm', 'react.js'] | 144 |
10,371 | https://devpost.com/software/cartclerk | An example cart page. Currently the cart is empty, but adding items will replace the place holder with actual items
This is the barcode scanner. It takes a barcode as input and returns an integer that we can use with the API to add an item to the cart
This is the welcome page. It gives brief instructions on how to use the app.
Inspiration
We wanted to make shopping safer for both customers and employees at brick and mortar shops by reducing the need for in-person interaction as well as removing the need for long lines.
What it does
Provides customers with a platform to record the goods in their cart, allowing on-device purchases that skip crowded and often non-socially-distanced checkout lines.
How I built it
We first built a wireframe in Figma using the NCR Design System to create a design and plan for various possible features of the app. Then we used Android Studio and Java to develop the app as a proof of concept. For the barcode scanner functionality, we used ZXing, an open-source library that scans a barcode and returns an integer, which we use in a POST API call to tell the NCR Business Services platform to add that item to the cart.
Challenges I ran into
The group ran into challenges learning to develop Android apps for the first time. We all knew vanilla Java, but the Android libraries, development studio, and XML files were all new to us. This was also the group's first practical use of APIs to build a project, but by studying Postman we managed to get the API calls to successfully build and manage a cart.
Accomplishments that I'm proud of
We learned to develop an Android app for the first time, learned and reinforced different skills such as web development and data science at various workshops and presentations, and formed a bond with previous strangers to complete a project.
What I learned
We learned Android app development, practical API usage, and UI development.
What's next for CartClerk
We can expand upon the functionality with a checkout feature and search bar to add products that don't have a barcode.
Built With
android
android-studio
figma
java
ncr-buisiness-platform
ncr-buisness-services-api
zxing
Try it out
github.com | CartClerk | A moble self scanner that reduce the need for people to be in close contact promoting germ free and hassel free purchases. | ['Nicholas Hutchison', 'Cesar Morales-Xochipiltecatl', 'Charles Snider', 'Anna Terese Aucoin'] | [] | ['android', 'android-studio', 'figma', 'java', 'ncr-buisiness-platform', 'ncr-buisness-services-api', 'zxing'] | 145 |
10,371 | https://devpost.com/software/minirobot-waiter | Inspiration
From the guiding robot in Korea's Incheon Airport
What it does
A mini robot that can walk around restaurants and coffee shops and take orders from dine-in customers, which can help limit close contact during this pandemic; the interaction with the robot can also enhance the customers' experience.
How I built it
Using Node.js and React, as well as NCR's Silver API and the SpeechRecognition library
Challenges I ran into
Handling state management in React
Accomplishments that I'm proud of
Finishing the features that I wanted to implement. Both of my teammates decided to drop out due to school commitments. I was very discouraged, but I'm proud that I didn't give up.
What I learned
Staying determined and sticking with a project despite unexpected setbacks
What's next for Minirobot waiter
Would like to integrate more advanced ML algorithms in understanding human intent while they are interacting with the robot, improve back-end system to handle situations when there are available promos/discounts or inventory is out-of-stock, integrate debit card transactions.
Built With
node.js
react
Try it out
github.com | Minirobot waiter | A mini robot that can take orders from dine-in customers in restaurants, coffeeshops | ['Tien Le'] | [] | ['node.js', 'react'] | 146 |
10,371 | https://devpost.com/software/find-create-lo4eas | Home Page
Finding a Project
Creating a Project
Inspiration
We came up with this idea because, as first-year college students in an increasingly digital world, we were finding it more challenging than ever to find a group of peers to work with on passion projects that solve real problems. There is no dedicated and personalizable platform for students to find and create projects that they are passionate about. Students have settled for using social media in an attempt to connect; Find + Create, however, is built explicitly for this purpose. We have created a user-friendly interface dedicated to helping students work together to change the world.
What it does
Find + Create allows users to create a project, to find groups of like-minded students to collaborate with, and search through an expansive database of preexisting projects catered to each and every user through detailed search filters. We also have a project spotlight feature that highlights specific achievements and impacts that students are having on their communities, inspiring others to do the same.
How I built it
The front-end framework was built from a Bootstrap template. Customizations were added to create multiple webpage links, buttons, forms, and other display features using HTML and CSS, and the site is hosted with Firebase. The back-end code is written in JavaScript and uses the Cloud Firestore database in Firebase. We began the process of integrating data from forms on the web page into the database. We have also begun using the Google Maps JavaScript API to display a map locating projects in a specific community.
Challenges I ran into
This was the team's first hackathon, so we grew exponentially through the process. None of us had much prior experience in web development, but we were able to learn front-end and back-end development and integrate the two components by working together and learning from one another.
Accomplishments that I'm proud of
One thing we were really proud of was seeing our idea come together to become a tangible product. All of us were new to hackathons and web development, but we played on our strengths to focus on different components while also teaching and learning from one another. We strongly believe that over the past two days, we have created and developed an idea that can truly make an impact and has the potential to grow and succeed in a new reality.
What I learned
One thing we learned collectively as a group was how to collaborate across different platforms as a team to accomplish a common goal.
What's next for Find + Create
We plan to have full data integration with the database with the forms and have full functionality with the Google Maps JavaScript API. We also see the potential to expand this platform to cater to other types of projects in the future.
Built With
bootstrap
cloud-firestore
css
firebase
google-maps-javascript-api
html
javascript
Try it out
github.com | Find + Create | Find + create a reimagined reality. Introduce and connect with personalized projects centered around a common ambition to make our world a better place to live in. | ['Ananya Pottabhathini', 'Kruti Gupta', 'Daniela Quintero'] | [] | ['bootstrap', 'cloud-firestore', 'css', 'firebase', 'google-maps-javascript-api', 'html', 'javascript'] | 147 |
10,371 | https://devpost.com/software/project-backbone-pbnyr8 | Inspiration
Our goal was to create a website where users could search for and add small businesses in their area. Small businesses are the backbone of the American economy, which is why we named our project "Project Backbone." A recent article published by the Chicago Tribune showed that more than 60,000 small businesses closed permanently as a result of the COVID-19 pandemic. The pandemic has been negatively impacting small businesses across the U.S., and it will take the cooperation of community members in all corners of the United States to save them. That's why we wanted to make it more convenient for consumers to find small businesses in their area.
What it does
We set out to create a Yelp-like service, but only for small businesses and without review capabilities. We planned to implement the functionality for users to add businesses themselves and to filter businesses by location, but we were not able to fully implement these designs within the Hackathon timeframe. We fully intend, however, to continue working on this project after HackGT. Our site currently supports searching by tag, viewing details of businesses (addresses, websites, descriptions, etc.), and browsing by type-of-business. Our free-to-use service is accessible by all modern browsers across desktop and mobile devices.
How we built it
We used Django and programmed mostly in HTML and Python, with some CSS mixed in. We used Django's 'startproject' and 'startapp' commands to provide an organization scheme for our project. We also used Django's template language to extend our HTML files with additional capabilities, such as the creation of the cards on the search and subpages. Our templates can be found here: templates.
In order to deploy our project to the web, we registered a domain using Domains.com. In addition, we used the provided Google Cloud credits to launch a virtual machine through Google Cloud Compute Engine. Within this virtual machine, a Django server was created from our project files. We ran this as a background process using the Linux terminal multiplexer screen. We pointed our domain towards the external IP of our Google Cloud instance, resulting in a functional and dynamic website. This website can be found here: http://projectbackbone.tech/.
Challenges we ran into
Our challenges primarily consisted of keeping the app's scope feasible. We originally planned to add a small character-customization "minigame" as part of our web app to incentivize engagement with small businesses, but ultimately dropped this idea due to lack of artistic skill and time restrictions. Additionally, we planned to add QR code tagging of each business but were unable to implement this due to time restrictions.
Accomplishments that we're proud of
Coming into this project, only half of our team had experience with JavaScript, and that included only very basic programming. None of us had used Django, and none of us had ever deployed a website/web app, even on a local server. Therefore, we are proud of our efforts to learn all of this from the ground up and explore various frameworks/technologies to create a whole, functional product within a very limited timeframe. We are also proud that we have a working website that we can show to others, as this is something we have never done before.
What we learned
Through this project, we learned many technical skills in a variety of programming languages and frameworks. Above that, we also developed our understanding of group dynamics in project organization and the importance of planning in time-sensitive situations, especially in how to efficiently divide tasks in an area where too many cooks might spoil the broth.
What's next for Project Backbone
While we are satisfied with our creation during this hackathon, the Project Backbone team has many ideas for future development. We hope to add the capability for users to tag their own local businesses, enable QR code identification to promote user engagement with businesses, and filter businesses by proximity. Lastly, we hope to spread the word about our product and help local businesses in this time of need along the way!
Built With
bootstrap
css3
django
domain
github
google-cloud
html5
javascript
linux
python
Try it out
projectbackbone.tech | Project Backbone | Project Backbone is committed to informing about small, local businesses - the backbone of the economy! It is important to support local businesses, especially during the COVID-19 pandemic. | ['Ryan Sequeira', 'Katrina Jurczyk', 'Quentin Mot', 'Gabriel Armstrong'] | [] | ['bootstrap', 'css3', 'django', 'domain', 'github', 'google-cloud', 'html5', 'javascript', 'linux', 'python'] | 148 |
10,371 | https://devpost.com/software/moodemic-20 | Data visualization using IBM Watson and IBM Cloud Pak
Web starting UI of MOODEMIC
Inspiration
COVID-19 has invaded the Earth and immensely impacted society, not only physically but mentally. This is commonly referred to as "Corona Blue". While practicing quarantine and social distancing, people became mentally isolated, which caused depression, anxiety, and various mental disorders. Our team decided to develop a program that shows the correlation between people's mood changes and COVID-19 cases.
What it does
Our program, MOODEMIC-20, is an analytical platform that compares data from the CDC and Google Trends to demonstrate the positive correlation between search terms related to mood swings and weekly COVID-19 positive cases across the US. We used IBM Watson Studio and IBM Cloud Pak for data visualization. Input data were parsed from articles and public data published by the US CDC, Google Trends, and psychology scholarly journals. We then integrated this data visualization into a web app to create an agile user interface for viewing the correlation.
How we built it
We first collected data in .csv format and parsed it into JSON format using IBM Watson and IBM Cloud Pak (IBM APIs). This enabled the visualization of graphs that best reflect the correlation between mood changes and the COVID-19 pandemic. The axes of the graphs were the weekly date and the number of positive cases / prevalence of the search terms. The data were pre-processed to match these standards using the IBM tools, changing the data types as needed. The visualized graphs were exported as JSON files and then integrated into our web platform, built with JavaScript, Node.js, HTML5, CSS, and chart.js. VS Code was used to build the UI of the web app, and Figma was used to create graphical representations.
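The core of the pipeline above — pairing weekly case counts with search-term prevalence, computing their correlation, and exporting chart-ready JSON — can be sketched in plain Python. The numbers and field names below are made-up stand-ins for the real CDC / Google Trends data, not values from the project.

```python
# Hedged sketch: correlate two weekly series and shape them for a chart.js
# line chart. All data values here are illustrative placeholders.
import json

weeks = ["2020-03-01", "2020-03-08", "2020-03-15", "2020-03-22", "2020-03-29"]
positive_cases = [70, 520, 3300, 17000, 29000]   # weekly new COVID-19 cases
search_interest = [12, 25, 48, 81, 95]           # mood-related search terms, 0-100

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length number lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(positive_cases, search_interest)

# Shape the two series the way a chart.js line chart expects its data.
chart = {
    "labels": weeks,
    "datasets": [
        {"label": "Weekly positive cases", "data": positive_cases},
        {"label": "Search-term prevalence", "data": search_interest},
    ],
}
print(f"correlation: {r:.2f}")
print(json.dumps(chart)[:60], "...")
```

The exported JSON would then be loaded by the front end and handed directly to chart.js as its dataset configuration.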
Challenges we ran into
Collecting data with proven authenticity and accuracy was a big challenge we encountered at the beginning of our project. For example, there were numerous datasets related to COVID-19 in the field, but they were slightly different from one another. It was hard for us to find the dataset that best aligned with the goal of our project. We decided to use the COVID-19 dataset from CDC.gov based on the CDC's credibility and ethos. For diagnosing the increasing depression rate within society, we decided to use search history data archived in the biggest search engine in the world, Google. Data derived from Google's search terms could most accurately represent society's up-to-date mood changes affected by COVID-19.
Accomplishments that we're proud of
This was our team's first time utilizing IBM's public APIs. Within the limited time of the hackathon, we were able to explore and adapt to IBM's Watson and Cloud Pak technology, and we extended our knowledge to visualizing our own local data with it.
What we learned
Our team learned to efficiently connect the third-party sources (such as API) to local data and codes in order to get an output we wanted. We went through the step of collecting, processing, and visualizing data while making such a connection.
What's next for MOODEMIC-20
We will automate the process of collecting data from public APIs to provide real-time analysis and visualization. We will also utilize the deep learning and AutoAI experiment features of IBM Watson Studio to provide predictions of mental-health-related data.
Built With
chart.js
css
figma
html5
ibm-cloud
ibm-watson
javascript
json
node.js
visual-studio | MOODEMIC-20 | Did COVID-19 invade our minds? We visualized the correlation between COVID-19 and societal depression. | ['Hane (Stella) Yie', 'yunakim222'] | [] | ['chart.js', 'css', 'figma', 'html5', 'ibm-cloud', 'ibm-watson', 'javascript', 'json', 'node.js', 'visual-studio'] | 149 |
10,371 | https://devpost.com/software/timely-tables | Inspiration
From personal experience, seeing how long customers wait in restaurants until their order is taken can be frustrating. Thus, we wanted to create a simple app to help restaurants keep track of customers who have been waiting too long for their order to be taken.
What it does
The app has the basic functionality of starting a timer every time a customer arrives at a table (represented by fast-food icons). Once the timer is up, the icon turns red and an error message is displayed, alerting the manager or the waiter as to which table needs assistance. After the issue has been dealt with, the icon can be returned to its initial state by double-clicking on it. The app also keeps track of how many tables were kept waiting beyond the time limit.
How we built it
We used widgets in Flutter for building the main page and the icons. We used Android Studio to code.
Challenges we ran into
None of us had any prior experience with app development, and this was our first time learning about Flutter and Android Studio. So we started from scratch, reading documentation and trying to learn about each aspect of the process as and when we encountered problems. During the initial setup we faced a lot of problems getting Android Studio and the Flutter SDK to work.
Accomplishments that we're proud of
Completing an app with no prior experience is something that we are proud of, though it is very basic in terms of functionality.
What's next for Timely Tables
Implementing Firebase for the login page. Also better animations and graphics for the warning system.
Built With
android-studio
flutter
Try it out
github.com | Timely Tables | A simple app to decrease the amount of wait time taken for your order at a restaurant. Help restaurant managers keep track of customers who haven't been served. | ['John Grilo', 'John Igieobo', 'Parth Shinde', 'Andy Borst'] | [] | ['android-studio', 'flutter'] | 150 |
10,371 | https://devpost.com/software/factcheck | Inspiration
With the many events that have turned 2020 into something far from the vision we thought it would be, and the array of fake and real news that plagues our headlines, we thought it would be fitting to create a fake-vs.-real news model and visualize it. That way we could not only learn to build a backend-to-frontend product, but do so in a way relevant to 2020.
What it does
The application is a machine learning model served via Flask and Python, with a React front end providing an accessible web UI. The user types in a headline and, with the push of a button, can check whether it is fake or real news based on the model's prediction. The page additionally mentions how we built it and links to a GitHub repo.
How we built it
We built it using a Multinomial Naive Bayes model after deciding against other models, and turned it into a REST API via Flask and Python. We then made it accessible through a UI built with React.
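As a rough, self-contained illustration of the Multinomial Naive Bayes idea the model relies on: each label's word frequencies give Laplace-smoothed log-likelihoods, and the label with the highest total score wins. The real project trained on a full dataset and served predictions through Flask; the toy headlines and whitespace tokenizer below are assumptions for the example only.

```python
import math
from collections import Counter, defaultdict

# Made-up training headlines, purely for illustration.
train = [
    ("aliens endorse candidate in secret moon rally", "fake"),
    ("celebrity cured by miracle fruit doctors hate", "fake"),
    ("senate passes annual budget bill", "real"),
    ("local school board approves new budget", "real"),
]

word_counts = defaultdict(Counter)  # label -> word -> count
label_counts = Counter()
vocab = set()
for headline, label in train:
    words = headline.split()
    word_counts[label].update(words)
    label_counts[label] += 1
    vocab.update(words)

def predict(headline):
    scores = {}
    for label in label_counts:
        # log prior + sum of Laplace-smoothed log likelihoods
        score = math.log(label_counts[label] / len(train))
        total = sum(word_counts[label].values())
        for w in headline.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)
```

With a dataset skewed toward 2016-era politics, words like "COVID-19" never appear in `word_counts`, which is exactly the robustness failure described below in the challenges section.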
Challenges we ran into
One challenge we first ran into was deciding which model trained and tested best. We tested Logistic Regression, Multinomial Naive Bayes, and Random Forest Classification, as seen in our Jupyter Notebook file. Multinomial Naive Bayes ended up performing best once we split our data into training and testing sets and evaluated on the held-out test set.

While this worked well for the data set we had, we realized once we migrated the model to Flask and tested with our own inputs that the data set was heavily biased towards the 2016 election, and our model wasn't robust enough to handle anything in 2020. In particular, Multinomial Naive Bayes classifies and predicts based on the probability of certain words appearing given a label in the dataset; it uses trends found in the dataset to predict what might come next. Had the world remained somewhat similar to 2016-2018 and the events of 2020 not unfolded the way they did, this model might have been more successful. However, things such as COVID-19 could not have been predicted by our model based on the probability of that term occurring, so we saw some failures there.

We recognized that the plethora of unexpected events could have played a role in the model's success on current articles, and realized that we likely could have picked a better model to handle the volatile nature of the last four years. That said, this was a learning experience for us, and something we would find very interesting to pursue in a 'post'-COVID era.
Accomplishments that we're proud of
We are proud of creating three models and finding which one was best, backed by solid accuracy statistics. Additionally, we are proud of taking this model, making it a REST API, and creating a UI to visualize the model and its results. We are thus proud of the end-to-end completion of this project in the 36-hour timeframe we had.
What we learned
Each team member brought something to the table another didn't have. For instance, two of us were well versed in machine learning and were able to explain to the other two what was going on and why we picked each model and the reasoning behind picking the one we ultimately went with. The other two were well versed in frontend work with React and were able to explain turning this model into something we could visualize. As a result of the collaborative-heavy nature of our team, each of us was able to bring something to the table and take something else away.
What's next for FactCheck
We hope to find a new training dataset to better represent all news categories, thus branching out from political news.
We also hope to use a new model that is better able to analyze a variety of news topics and handle predictions. Looking into models such as neural networks or other NLP frameworks/libraries would be useful for us.
Built With
css
flask
html
javascript
python
react
Try it out
github.com | FactCheck | A machine learning model trained to distinguish between fake and real news headlines. | ['Sanjana Badhya', 'Shruthi Sundar', 'salina-nihalani Nihalani'] | [] | ['css', 'flask', 'html', 'javascript', 'python', 'react'] | 151 |
10,371 | https://devpost.com/software/kids-learn-math-website | Homepage
Example problem
Cloud Usage
Kids Learn Math Website
2020 HackGT7 Submission
Authors: Devin Moon, Derek Noppinger, Andy Boulle, Jake Perret
Uses Google Cloud as the website host, and the domain was created through domains.com (kidslearnmath.online)
LINK:
https://storage.googleapis.com/www.kidslearnmath.online/index.html
Providing the next generation of leaders and innovators with the tools they need to obtain a quality education is more important in these unprecedented times than ever before. The coronavirus pandemic has disrupted the education of millions of students. As current students who have had our classes moved online, we know that it can be a struggle to feel confident in the course material without in-person instruction.
That is where Kids Learn Math Online comes in! Using our simple interface targeted at elementary schoolers, students will be able to reinforce their foundational math skills. Our website challenges students’ masteries of the four fundamental math operations: addition, subtraction, division, and multiplication. We provide students with as much practice as they need by randomly generating problems of varying difficulties based on their mastery level. Additionally, we give feedback as to whether or not their answers were correct and provide supplemental resources for them to watch lessons if they choose.
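The random problem generation described above can be sketched in a few lines. This is an illustrative version in Python (the site itself is written in JavaScript), and the operand ranges per mastery level are assumptions:

```python
import random

# Assumed operand ranges per mastery level, for illustration only.
DIFFICULTY_RANGES = {1: (0, 10), 2: (0, 50), 3: (0, 100)}

def generate_problem(operation, level, rng=random):
    """Generate (a, b, answer) for one of the four fundamental operations."""
    lo, hi = DIFFICULTY_RANGES[level]
    a, b = rng.randint(lo, hi), rng.randint(lo, hi)
    if operation == "+":
        answer = a + b
    elif operation == "-":
        a, b = max(a, b), min(a, b)  # keep answers non-negative for kids
        answer = a - b
    elif operation == "*":
        answer = a * b
    elif operation == "/":
        b = max(b, 1)
        a = a * b                    # force a whole-number quotient
        answer = a // b
    else:
        raise ValueError(f"unknown operation: {operation}")
    return a, b, answer

def check_answer(answer, student_answer):
    """Feedback step: was the student's answer correct?"""
    return answer == student_answer
```

Raising a student's mastery level after a streak of correct answers would then widen the operand range automatically.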
Our hope is that our application can be used as a tool by this new generation of online students to fill the educational void that is left by a lack of in-person classes.
Built With
css
html
javascript
Try it out
storage.googleapis.com
github.com | Kids-Learn-Math-Website | Website for kids to practice the fundamental math operations online. | ['Devin Moon', 'Derek Noppinger', 'andyboulle'] | [] | ['css', 'html', 'javascript'] | 152 |
10,371 | https://devpost.com/software/fellini | Inspiration
We want to help Gen Z manage their money better by visually showing people how much money they've spent on avocado toast and coffee, and what would happen if they saved that money in a savings account that accumulates compound interest, or put it in an index fund. The project was originally intended to use AR so people could visualize how much they've spent on a certain category, say coffee. If I bought a coffee from Starbucks every day at around $3 a cup, the app would show me in AR how big the coffee cup would be for the month and how many stacks of cash I put down for those coffees.
What it does
Retrieves user purchases, date, userId, total spent.
We wanted to create a visualizer for how much money someone spent in a month, and to see what would happen if we took the money they spent on, let's say, coffee that month and allocated it towards a savings account, showing what the compound interest would look like after a set time like a month, a year, or 10 years.
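The "what if you saved it instead" math being visualized is the future value of a recurring deposit with compound interest. A minimal sketch of that calculation, where the $3-a-day coffee figure comes from the description above and the 7% annual return is an assumed index-fund-like rate for illustration:

```python
def future_value(monthly_deposit, annual_rate, years):
    """Future value of equal monthly deposits with monthly compounding."""
    r = annual_rate / 12
    n = years * 12
    if r == 0:
        return monthly_deposit * n
    # standard future-value-of-an-annuity formula
    return monthly_deposit * ((1 + r) ** n - 1) / r

coffee_per_month = 3 * 30           # ~$90/month on daily $3 coffee
saved = future_value(coffee_per_month, 0.07, 10)  # invested for 10 years
spent = coffee_per_month * 12 * 10  # $10,800 simply spent over 10 years
```

The gap between `saved` and `spent` is exactly the number the app would put in front of the user.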
How we built it
Python, Flask, Angular, JavaScript
Challenges we ran into
Azure servers would not let us log in through SFTP. We ran into time constraints, so we couldn't finish the front-end or get the back-end onto the server.
Accomplishments that we're proud of
We finished the back-end server portion, which takes in the NCR Digital Banking API and spits out our own API. This would have been forwarded to the front-end to reveal the user, what they bought, how much they spent, and when and where they purchased the items. There are also functions that return the accountId, transactionDate, amount, institutionID, and isCredit by themselves.
What we learned
We learned that it is best to set up and research the technologies we will be using beforehand, to limit time constraints and stress.
What's next for Fellini
Even though we couldn't finish it in the allotted time frame, we will continue to develop the app, hopefully getting a working server and finishing up the front-end.
Created by:
Alan Dang, James Gebara, Hargopal Singh, and Sukhmani Choudhry
Built With
angular.js
azure
javascript
python
Try it out
github.com | Fellini | Finance Management App | ['Alan Dang', 'Sukhmani Choudhry', 'Hargopal Singh', 'James Gebara'] | [] | ['angular.js', 'azure', 'javascript', 'python'] | 153 |
10,371 | https://devpost.com/software/doodletheater | Logo
UI
Welcome kit
DoodleTheater
GT7 Hackathon Project: AR app for children where they collaborate to create and share small stories featuring their doodles
One main goal we set for ourselves was to learn more about AR, and we wanted to create a project that addressed issues children are facing during these difficult times. Due to quarantine, many kids are unable to interact with their peers, which may have repercussions on their emotional and social development. Hence, we decided to make a drawing-to-life project, which enables kids to showcase their creativity while collaborating with others.
Challenges we faced: we overestimated the current capability of AR, which significantly reduced the scope of our project. We also ended up learning about prototyping, cloud databases, voice calls, etc. Although it was overwhelming at times, we felt that we were able to expand our skill set and become better developers :) We had a lot of fun doing this hackathon and we hope to expand on this project in the future!
Built With
agora.io
arfoundation
c
c#
c++
firebase
hlsl
objective-c
shaderlab
unity
xcode
Try it out
github.com | Doodle Theatre | Doodle Theatre is an AR drawing-to-life project where kids draw characters on a printable template, upload them, and interact with peers in a virtual theater with those characters. | ['Maggie Zhang', '18satoy', 'WenqingYin'] | [] | ['agora.io', 'arfoundation', 'c', 'c#', 'c++', 'firebase', 'hlsl', 'objective-c', 'shaderlab', 'unity', 'xcode'] | 154 |
10,371 | https://devpost.com/software/sumex-2jhlpb | Domain.com submission: shortentext.online
Inspiration
All of us have had many experiences dealing with webpages, articles, and terms of service agreements that are really difficult to parse and understand. Because of this, we've decided to create Sumex, short for "summary extension", so we never have to read one of those monstrously long documents ever again and can take back the time we would have spent trying to understand them!
Who is this for?
-People who have visual disabilities (for example requiring screen readers or lower text density)
-Anyone who’s having trouble reading through massive Terms of Service documents
-Anyone who wants short summaries of large bills or legislation
-Lazy people who don’t want to spend ages reading articles :)
What it does
-We've created a Chrome extension that allows you to add a page summary to the top of any webpage. You can click on the icon of the extension and hit the button that appears, or summarize just a part of the text by highlighting it and right clicking it and choosing the summarize button.
-This is done using proven Natural Language Processing (NLP) techniques and nltk, summarizing text down from its original size to at most log(number of sentences in the original document) sentences. We've added implementations of the Luhn, TextRank, and LSA text summarization algorithms.
-The whole backend is controlled by an Azure Cloud Function that takes in the webpage's contents whenever the extension is used and processes and returns the summary.
How we built it
-Built as an add-on for Chromium-based browsers using JavaScript
-Python-based Azure Cloud Functions to host an API for our add-on to summarize text
-We created an implementation of TextRank, which uses a similarity matrix to find similar sentences from which to create summaries, as repeated content is more likely to be important or related to the article's topic. However, our implementation ended up being noticeably slow, reducing usability.
-So, we looked into Luhn, LSA, and a more efficient implementation of TextRank. Exploring ML-adjacent technology was all new for us, but we found considerable speed increases by looking into these popular NLP approaches. We allow the selection of different models by changing the Azure Function-based API we call. By default, we use the smarter approach to TextRank.
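As a rough, self-contained illustration of the extractive, frequency-based idea behind these approaches (the real project used the sumy and nltk implementations; the naive sentence splitter, tiny stopword list, and Luhn-style scoring below are simplifications for the example), with the log-of-sentence-count summary length described above:

```python
import math
import re
from collections import Counter

# Tiny illustrative stopword list; the real implementations use nltk's.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "it", "that"}

def summarize(text, min_sentences=1):
    # naive sentence split on end punctuation
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        toks = [w for w in re.findall(r"[a-z']+", sentence.lower())
                if w not in STOPWORDS]
        # Luhn-style: sentences dense in frequent words score higher
        return sum(freq[w] for w in toks) / (len(toks) or 1)

    # summary length capped at log(number of sentences)
    k = max(min_sentences, int(math.log(len(sentences) or 1)))
    top = sorted(sentences, key=score, reverse=True)[:k]
    return " ".join(s for s in sentences if s in top)  # keep original order

text = ("Solar power is growing fast. Solar panels turn sunlight into power. "
        "My cat sleeps a lot. Bread is tasty.")
summary = summarize(text)  # the single highest-scoring sentence survives
```

TextRank and LSA replace the frequency score with sentence-similarity ranking and latent topic analysis respectively, but the extract-top-k-sentences shape stays the same.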
Challenges we ran into
-Having to use vanilla Javascript was really difficult for those of us used to frameworks like Node and React
-We dealt with a lot of issues with text still having HTML tags and not being UTF-8 encoded, leading to buggy summaries
-Choosing the best NLP models for text summarization from the ones that sumy and nltk offer
-Both of Alan’s computers are broken :((((((
Accomplishments that we're proud of
-The NLP text summarization is working really well. It has a decent success rate and seems to work on the sites we tested it on
-App is fully functional and could be polished and deployed in a matter of days
-UI is clean, simple, and effective
What we learned
-Just how much filler there is in a lot of articles! And how hideously long bills and terms of service can be
-How to make a Chromium web-browser add-on
-How to use vanilla Javascript more effectively
-The power and utility of Azure Cloud Functions
What's next for Sumex
-Tweak and tune NLP even further for better results. We can do this by taking into account the context/topic of the website (abstractive) rather than our current extractive approaches.
-Port to Firefox and Opera
-Further user customization
-Better custom handling for popular sites
-Better filtering of erroneous text
Built With
azure
azure-cloud-functions
beautiful-soup
chrome
html
javascript
nltk
python
Try it out
github.com
docs.google.com
shortentext.online | Sumex | Read less, know more.. | ['Alan Brilliant', 'Ray Altenberg', 'Farhan Saeed', 'Drew Ehrlich', 'Bryan Lim'] | [] | ['azure', 'azure-cloud-functions', 'beautiful-soup', 'chrome', 'html', 'javascript', 'nltk', 'python'] | 155 |
10,371 | https://devpost.com/software/locart-helping-local-grocers-and-restaurants-thrive | Inspiration
Looking through some of the main problems that merchants face, we attempted to create a multi-faceted solution. The two questions that we were initially faced with were how can we help merchants reduce the cost of a transaction and how can we help merchants optimize their supply-chain process? An Amex-commissioned study in 2018 showed that “for every dollar spent at a small business in the U.S., approximately 67 cents stays in the local community”. Therefore, we can see a mutually beneficial relationship between restaurants and local grocers. We came up with a solution that not only supports merchants in optimizing costs and the supply chain-process, but also supports small, local grocers.
Our vision is to create a website that links local communities together through mutually beneficial solutions to problems. Local restaurants face many problems with choosing the best suppliers and optimizing costs and local grocers rely on the purchases within their communities. Connecting these two through a website seems to be the most productive solution.
The name is a combination of the words local and cart (LoCart) to further reinforce the emphasis on supporting local businesses and communities.
What it does
First, a restaurant owner types in their restaurant’s zip code and chooses their restaurant from the list of restaurants in that zip code. After doing so, the restaurant owner is able to shop and support local grocers through looking up supplies that they need.
Restaurant owners are able to purchase supplies through local grocers without sacrificing any profits due to the variety of products and prices offered from each local grocery store. Restaurant owners can shop through multiple local grocery stores by sifting through products with the search engine. They can also do this through sorting by unit price (high to low or low to high) or proximity to the restaurant's location.
For restaurants that are looking to optimize their supply chain, decrease their costs and improve their goodwill among consumers, LoCart is the solution.
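The search-and-sort behavior described above can be sketched simply. This is an illustrative version in Python (the actual site is a React/Express app), and the product fields and sample data are made up for the example:

```python
# Hypothetical product records; in the real app these would come from
# local grocers' inventories via the backend API.
products = [
    {"name": "tomatoes", "unit_price": 2.49, "miles_away": 1.2, "store": "Ana's Market"},
    {"name": "tomatoes", "unit_price": 1.99, "miles_away": 3.5, "store": "Corner Grocer"},
    {"name": "tomatoes", "unit_price": 2.75, "miles_away": 0.4, "store": "Midtown Foods"},
]

def search(products, query, sort_by="unit_price", ascending=True):
    """Filter products by name match, then sort by price or proximity."""
    matches = [p for p in products if query.lower() in p["name"].lower()]
    return sorted(matches, key=lambda p: p[sort_by], reverse=not ascending)

cheapest_first = search(products, "tomato")                       # low -> high price
closest_first = search(products, "tomato", sort_by="miles_away")  # nearest grocer first
```

Flipping `ascending` gives the high-to-low ordering mentioned above.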
How we built it
The backend is a server using the Express.js framework. It is responsible for handling our API requests and supplying data to the front-end of the project.
The frontend uses the popular web library React.js. It includes responsive elements and a sleek design.
Challenges we ran into
We hit some roadblocks at the start when brainstorming ideas for our project. We started off talking about optimizing the supplies visible on shelves to customers in brick and mortar stores and eventually came to the final idea of LoCart. Two of us came in with very little website experience, but we managed to learn JavaScript, HTML, CSS and more through the workshops offered by HackGT.
Accomplishments that we're proud of
We are very proud that we finished a project after many doubts and hiccups in our process. We came into the hackathon without knowing each other and persevered to create a project that may actually have an impact on the world and local restaurant sphere!
What we learned
We got to learn how to implement and use different data sets in a website. A lot of us also learned a lot about different coding languages that we were not previously familiar with. We learned how to take an idea in a team and make it real through good communication and teamwork.
What's next for LoCart: Helping Local Grocers and Restaurants Thrive
We would like to include a login feature so restaurant owners can save their orders and schedule orders for certain dates to automate the entire process. Another next step for LoCart would be analyzing supply data with data science to optimize the dates for when restaurant owners should purchase new items to reduce waste. We would also like to create a recommendation system based on previous order data for the restaurant owners.
Built With
css
express.js
html
javascript
ncrdesignsystem
react | LoCart: Helping Local Grocers and Restaurants Thrive | A solution that not only supports merchants in optimizing costs and the supply chain-process, but also supports small, local grocers. | ['Mark Patrick', 'Enes Sert', 'Kristoffer Selberg', 'adharmavarapu'] | [] | ['css', 'express.js', 'html', 'javascript', 'ncrdesignsystem', 'react'] | 156 |
10,371 | https://devpost.com/software/sms-bot-z2g0t7 | Dynamic options
Runs entirely through SMS
Intelligently parses responses
Natural language flow
Varying story paths
Inspiration
To create a mobile game with a unique interface and method of interaction
What it does
Plays through a vaguely choose-your-own-adventure-style game, communicating with the user solely through SMS text messages.
How we built it
Used Python and the Twilio API running on a Flask server. The program stores the game state for each user, parses input, and sends messages accordingly.
Challenges we ran into
Setting up the API and the Flask server was an initial hurdle; later, managing the game state and sending the proper messages proved challenging.
Accomplishments that we're proud of
Saving all data into JSON files to be able to continue if the server resets. Managing unique states for every user. Handling different types of messages with delays, deadlines, and branching states.
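The per-user state plus JSON persistence described above can be sketched as follows. This is a minimal, hedged illustration: the scene names, state fields, and file location are assumptions, and the real bot wires `handle_message` into a Flask route that Twilio posts incoming SMS messages to.

```python
import json
import os
import tempfile

# Illustrative save location; the real server would use a path it owns.
STATE_FILE = os.path.join(tempfile.gettempdir(), "smsbot_state.json")

def load_states():
    """Reload all users' game states after a server reset."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {}

def save_states(states):
    with open(STATE_FILE, "w") as f:
        json.dump(states, f)

def handle_message(states, phone, body):
    """Look up this phone number's state, advance it, and persist."""
    user = states.setdefault(phone, {"scene": "start", "inventory": []})
    if body.strip().lower() == "north":   # hypothetical story branch
        user["scene"] = "forest"
    save_states(states)                   # persist after every message
    return f"You are in the {user['scene']}."

states = load_states()
reply = handle_message(states, "+15550001111", "north")
```

Because every message triggers a save, a restarted server picks up each player exactly where they left off.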
What we learned
The amount of edge cases involved even in a seemingly simple problem can grow exponentially and require lots of testing to fully work out the kinks.
What's next for SMS-Bot
Adding longer stories with more varied paths and different endings. Also more complex communication paths, such as reminder texts and periodic status updates. Some more coding would be required for these, but mainly this would involve writing much longer scripts.
Built With
flask
python
twilio
Try it out
github.com | SMS-Bot | An SMS Based Game | ['Joseph Lee', 'OReece Reece', 'Stephanie Zambrana'] | [] | ['flask', 'python', 'twilio'] | 157 |
10,371 | https://devpost.com/software/grocery-app-awk2zm | Sign-in page
Database for users
Inspiration
Easy and modern grocery list solution for roommates to share a collaborative list and differentiate between individual and shared groceries for easier shopping and transaction management
What it does
So far, it allows users to join a room with a screen name and adds them to the database
How we built it
React.js, HTML/CSS
Challenges we ran into
Database not recognized by program, Node modules not found (compilation error for many hours), no free food :(
Accomplishments that we're proud of
Learned a lot even though we weren't able to finish the project; staying awake and being productive
What we learned
How to use routes, new languages!, React file structure, familiarized with git, databases
What's next for grocery-app
email/password?
collaboration on grocery list
cart feature where checking off moves items to cart that can be cleared after shopping
more than one room per user
more than one list per room
categories of lists to sort items
See you at hackgt8 ;)
Built With
css
html
javascript
shell
Try it out
github.com | grocery-app | collaborative roommate grocery list allowing for differentiation between individual and shared groceries | ['johntchen', 'Ellie Kim'] | [] | ['css', 'html', 'javascript', 'shell'] | 158 |
10,371 | https://devpost.com/software/viralventure | Harry the Determined Shopper
Harry vs. Coronavirus
Inspiration
Coronavirus, having fun, being safe
What it does
Provides entertainment and awareness to the user. A great stress-reliever!
How we built it
We used the Pygame module to build everything. We used Inkscape to design and Garageband to edit sound.
Challenges we ran into
Debugging code, familiarizing ourselves with Pygame's cool features, dividing the workflow among the three of us in such a condensed window of time.
Accomplishments that we're proud of
The overall aesthetic of the game -- the cute sound effects and the boss background music! Also the concept of Harry in a grocery cart struggling to stay alive.
What we learned
A lot about building games, the basics of what goes into it, and the cool features Pygame has. We learned about task delegation as well.
What's next for Viralventure
Making it more challenging. Adding more power-ups and/or increasing difficulty as the game progresses, perhaps by increasing the speed of the coronaviruses.
Citations
We got the background music from Octopath Traveler! "Battle at Journey's End":
https://www.youtube.com/watch?v=ce0K8BAol44
Built With
pygame
python
Try it out
github.com | Viralventure | Help Harry collect his masks to stay alive! Spray hand sanitizer at curmudgeonly coronaviruses — but don’t spray the masks because you need them! See how long you can help Harry battle COVID-19!! | ['Jennifer Deng', 'Wendy Sun', 'Michelle Wang'] | [] | ['pygame', 'python'] | 159 |
10,371 | https://devpost.com/software/jumpig | Inspiration
What it does
How I built it
Unity
Challenges I ran into
Accomplishments that I'm proud of
What I learned
Learned about game dev in Unity! Especially colliders, tilemaps, and player movement controls
What's next for JumPig
Built With
unity | JumPig | 2d platformer in Unity that stars a jumping pig | ['Jessica Ding'] | [] | ['unity'] | 160 |
10,371 | https://devpost.com/software/livestock-9isgqe | Transaction history
Available stock
Main dashboard
Inspiration
We were inspired by the problems outlined in the NCR sponsored events, specifically how we can help merchants optimize their performance, supply chain, inventory, and reduce waste.
What it does
It monitors inventory and tracks purchasing trends to help the merchant view their store at a glance.
How we built it
We used Flask + HTML/CSS to build the website. While the website is not fully functional, we created a prototype in Figma that shows the features.
Challenges
Throughout our project we encountered a lot of obstacles as we reshaped what our goal was and worked towards delivering a product. In particular, understanding and then using NCR's development tools was difficult as we are new to hackathons, so we had to learn the technologies.
Accomplishments/Learnings
We learned more about the NCR development tools - specifically, the BSP API and using Figma to design professional user experiences.
What's next for LiveStock
The most important goal is to implement everything from our prototype into the actual website. Once that is done, we would proceed by using actual merchant data rather than the set we generated to see how impactful our tool would be in the real world.
Built With
css
figma
flask
html
python
Try it out
github.com | LiveStock | Our project is LiveStock, an innovative solution to allow vendors to manage their inventory, quickly restock, and view transaction trends. | ['Abhi Bawa', 'Rishabh Jain', 'Sreekar Madabushi'] | [] | ['css', 'figma', 'flask', 'html', 'python'] | 161 |
10,371 | https://devpost.com/software/jobchamp | JobChamp Logo
Inspiration
The internship application process is one of the most harrowing experiences that CS students around the world go through in their time in college. We hope to alleviate some of this by taking away the monotony of filling out the same information over and over for each separate application.
What it does
By inputting the link of a specific job application, JobChamp will input all of your given information into all applicable fields.
How we built it
We utilized Python, Selenium, and Beautiful Soup to webscrape and fill in a given user's information to job application forms. We also utilized PythonGUI for our user interface.
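The core field-matching idea can be illustrated statically: scan a form's HTML for input fields and map each to the user's saved info by name. The real project drives a live browser with Selenium; this stdlib-only version, the sample form, and the field keywords are all illustrative assumptions.

```python
from html.parser import HTMLParser

# Hypothetical saved profile and name-matching keywords.
USER_INFO = {"first": "Ada", "last": "Lovelace", "email": "ada@example.com"}
KEYWORDS = {"first": "first", "last": "last", "email": "email"}

class FormScanner(HTMLParser):
    """Collect which form inputs we know how to fill, and with what."""
    def __init__(self):
        super().__init__()
        self.fills = {}  # input name -> value to type in

    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        name = dict(attrs).get("name", "").lower()
        for key, keyword in KEYWORDS.items():
            if keyword in name:
                self.fills[name] = USER_INFO[key]

form_html = """
<form>
  <input name="first_name"><input name="last_name">
  <input name="email_address"><input name="referral_code">
</form>
"""
scanner = FormScanner()
scanner.feed(form_html)
```

In the Selenium version, each matched name would become a `send_keys` call on the corresponding live element; unmatched fields (like the referral code here) are left for the user.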
Challenges we ran into
Being able to differentiate between different types of input boxes on each job application site was tough to figure out.
Accomplishments that we're proud of
We're proud that we were able to create a project that we truly care about while having a good time :).
What's next for JobChamp
We hope to generalize its ability to fill out all text input boxes for all job application sites.
Built With
python
selenium
Try it out
github.com | JobChamp | Tired of applying to internship after internship and filling out the same information over and over again? Use JobChamp to make it a simple one-click process! | ['Nathan Zhu', 'Stephanie Yang', 'Andrew Zhao', 'Alice Zeng'] | [] | ['python', 'selenium'] | 162 |
10,371 | https://devpost.com/software/image-music-steganography | A familiar input image, a painting from Minecraft.
The output image after converting image to MIDI to image.
An example of many colors being saved into music and being converted back.
The output image after converting image to MIDI to image.
Another example of many colors being saved into music and being converted back.
The output image after converting image to MIDI to image.
SoundsAwful - Image to Music Encryption
About
SoundsAwful provides a novel method for cryptographically encoding pictures as music, and reversing said encryption, within a simple Python GUI.
Instructions
To encode an image, select the image via the file selection dialog and click Convert. After the program finishes its conversion, you will be prompted to save the resulting midi.
The process is nearly the same for a midi. Select the file via the dialog and similarly select Convert; you will be prompted to save the image outputted by the decryption process.
Demo video:
Examples
(Table of examples with columns Input, Midi Output, and Reconstructed Image, showing burst-midi and fluid-midi conversions.)
Description
The image-to-music algorithm works as follows*
Read the image from the file
Downscale the image to a user-defined resolution (default 64 x 64); larger resolutions roughly result in longer output songs
Extract the red, green, and blue channels for each pixel of the downscaled image
The blue channel is normalized to be between 0 and the number of notes, so that it can then be mapped to a specific note.
The green channel is normalized to be between 0.25 and 2.00, representing the length of a note in terms of quarter-notes.
The red channel is normalized to be between 20 and 127, representing the volume of the note in the generated file.
Generate chords using the inputs of the notes, lengths, and volumes
The actual chord generation works by generating possible chords using a baroque chord progression ruleset and a user-defined key, and then inverting the chord such that the input note is the base note.
Output the converted .midi file
*please note that these values are configurable for the most part
The music-to-image algorithm works by reversing the above steps; read the midi, extract the chords & their base notes, determine their quarter-note lengths & volumes, and reconstruct the downscaled image.
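The per-channel normalization in steps 4-6 and its inverse can be sketched as linear mappings. The constants below follow the description above (note index from blue, 0.25-2.00 quarter-note length from green, 20-127 volume from red); the 88-entry note table is an assumed size, and rounding means the round trip is approximate, which is why the reconstructed images differ slightly from the originals.

```python
NUM_NOTES = 88  # assumed size of the note table

def normalize(value, lo, hi):
    """Map a 0-255 channel value linearly onto [lo, hi]."""
    return lo + (value / 255) * (hi - lo)

def pixel_to_chord_params(r, g, b):
    note_index = round(normalize(b, 0, NUM_NOTES - 1))  # blue  -> pitch
    length = round(normalize(g, 0.25, 2.00), 2)         # green -> duration
    volume = round(normalize(r, 20, 127))               # red   -> velocity
    return note_index, length, volume

def chord_params_to_pixel(note_index, length, volume):
    """Invert the mapping to reconstruct an approximate (r, g, b) pixel."""
    def denorm(v, lo, hi):
        return round((v - lo) / (hi - lo) * 255)
    return (denorm(volume, 20, 127),
            denorm(length, 0.25, 2.00),
            denorm(note_index, 0, NUM_NOTES - 1))
```

Chord generation then builds a baroque-progression chord on top of `note_index` and inverts it so that note is the base, as described above.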
Future Improvements
As shown in the video, there are a few ways we could improve this project to make it more useful for real-world applications (i.e. steganography). Some include:
Hiding the generated chords throughout a piece of music
More precise compression when converting from image -> midi so as to preserve more data
A more visually pleasing GUI
Quicker conversion (especially with larger resolutions)
More comprehensive encryption algorithms accounting for filetype, metadata, and other properties
Contributors
Made for the HackGT 7 hackathon by Marius Juston, Russell Newton, and Akshin Vemana.
Built With
music21
numpy
opencv
pygubu
python
scikit-learn
Try it out
github.com | SoundsAwful - Image to Music Cryptography | Composers have always been hiding messages in music, but how about images in music? A demo of what's possible, this project takes a crack at encrypting images into music. | ['Russell Newton', 'Marius Juston', 'Akshin Vemana'] | [] | ['music21', 'numpy', 'opencv', 'pygubu', 'python', 'scikit-learn'] | 163 |
10,371 | https://devpost.com/software/spooktober-summoner | The official Discord Bot
Spooktober Summoner forcing a Cat to appear
General commands
Full bot !help command
Spooktober Summoner
A handy Discord chat bot that can be used to hunt for exciting Trick'cord Treat avatars and more!
Background
With Halloween right around the corner, everyone's mind is on trick-or-treating. Keeping with the spirit of the season, Discord (a popular chat service among 'the yutes') is running a month-long event called Trick'cord Treat, which aims to provide a safe, socially distant way to scratch the holiday itch. Users can opt in to the service, which will then generate random encounters on the chat server wherein spooky guests show up and ask for candy.
There's just one problem: these cute (and/or creepy) guests disappear as quick as they came so there is no time to admire them in all their glory!
Purpose
Enter the SPOOKTOBER SUMMONER! (street name: S32_Adjunct)
This custom Discord chat bot was built from the ground up with a single purpose in mind - to hunt down and summon the various Trick'cord Treat guests at will. With its mighty grasp on the Discord API it is able to reach out into the void and pluck high quality images of the Trick'cord Treaters from the ether itself, storing them locally for adoring fans to marvel in wonder at their leisure.
Methodology
The SPOOKTOBER SUMMONER is built in Python using the Discord Bot framework. It relies on manipulation of the URLs used by the official Discord Trick'cord Treat Bot when it sends an image of a visitor. By scraping the URL for a given visitor from the official Bot (e.g. https://cdn.discordapp.com/halloween-bot/Teddy-Bear.png) we see that it can be easily modified to search for other visitors stored on their server. The SPOOKTOBER SUMMONER uses this basic URL to hunt for other potential guests and create a catalogue of any it finds.
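The URL probing above can be sketched as a small helper. The path pattern comes straight from the example URL; the rule that guest names become Title-Case, hyphen-joined slugs is an assumption extrapolated from that single example:

```python
def guest_image_url(guest_name: str) -> str:
    """Build a candidate CDN URL for a Trick'cord Treat guest.

    The /halloween-bot/ path is taken from the example URL in the
    write-up; the Title-Case, hyphen-joined slug rule is our guess.
    """
    slug = "-".join(word.capitalize() for word in guest_name.split())
    return f"https://cdn.discordapp.com/halloween-bot/{slug}.png"
```

The bot can then issue an HTTP GET for each candidate URL and catalogue any guest whose image actually exists on the server.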
Usage
The SPOOKTOBER SUMMONER can be added to any Discord server using the standard methodology. Once added to the server it can be called using the following commands:
!summon help -- Explains the way to summon guests.
!summon list -- Returns a list of every known guest.
!summon <GUEST> -- Tries to summon the specified guest. If successful it will return the guest's picture, otherwise it will ask the user to try again. When a new guest is found it will be added to the list of known guests and have its picture downloaded for later recall.
!summon add-<GUEST> -- Admin command; manually adds a guest to the list of known guests.
!summon rem-<GUEST> -- Admin command; manually removes a guest from the list of known guests.
Guest images as well as the list of all known guests are persistent across sessions. I spent a long time pontificating on the best backend file system to use in order to accomplish this. In the end I had to bow to industry best practices and yeet everything into a janky .txt file.
Extra Goodies
As part of getting ready for the main show, I messed around with some other features to add to the bot. Users can type !99 to receive a random inspirational quote from the TV show Brooklyn Nine-Nine or type !bonk @USER to send a fellow user to jail. Loudly exclaiming "BONK!" as they are sentenced to jail is optional, but recommended.
Built With
discord
python
Try it out
github.gatech.edu | Spooktober Summoner | A handy Discord chat bot that can be used to hunt for exciting Trick'cord Treat avatars and more! | ['Ryan Jones'] | [] | ['discord', 'python'] | 164 |
10,371 | https://devpost.com/software/smart-masks | Smart Mask Design
Our problem statement: With the ever-growing number of COVID-19 cases over the past 8 months, and several lockdowns issued by governments internationally, the human race has to define the New Normal of the post-COVID era and start to re-open offices and schools.
However, the underlying issue is that people don't yet feel safe letting their loved ones out of their homes. And this is completely justified, because of the weakly regulated 'prevention techniques' of wearing a mask at all times and maintaining 6 feet of distance from others in public places.
Therefore, we decided to go on a journey to use technology to create the New Normal, in order for schools and offices to re-open as soon as possible. And we are doing this by releasing the concept of the ‘SMART MASK’. We plan to be B2B and B2C providers, but more about that later.
With our product, we are targeting two industries. The primary target industry is the Biotechnology Industry, in the BioPharma Market (with a Compound Annual Growth of 7.2% through 2020 to 2026), and the secondary target industry is Pharmaceutical industry, in the surgical mask market (with a CAGR of 8.38% through 2020 - 2024). We came to this conclusion as we plan to have a brand image of a technology company before having that of just any other mask company in the consumer’s eyes, and this is what makes us different.
The Smart Mask is a mask with IoT breathing sensors, which notifies you or your family if you are not wearing a mask in public areas. With the help of a team of Biotechnologists and IoT specialists, we can design these sensors to report the location of your mask to your mobile application. The IoT sensors will also record the breath of the person wearing the mask. So, if no one is breathing into the sensor, even if the mask is taken along to a public place, the person will be notified to wear it as well.
The mask asks for calendar and location access, and with that the user can be reminded to keep the mask nearby the night before a calendar event outside the house, and notified if the mask is forgotten at home while out of the house. The Smart Mask will come in a range of different designs. We plan to have designs in a variety of colours and also designs that are cultural in nature.
In the B2C Model, Users have control over making and joining groups, with their families and loved ones, and can choose to share location and their mask’s location with them. So a parent can know if their child is wearing his/her mask in school or forgot it on the school bus.
In the B2B Model, businesses can't track their employees after work hours, but they will be notified if two or more users have not worn masks or maintained social distancing for prolonged periods.
With Smart Masks, reaching the 'New Normal' will become a reality. This would mean re-opening shopping malls, schools, offices, and public parks, if Smart Masks are used extensively.
In order to be aware of our strengths and weaknesses, I would like to share our SWOT Analysis. Our strengths are that we have a strong vision for our product and that we are learning developers enrolled in the best universities in Australia and India for AI. Also, being Gen-Z, we both know the right ways to market this product to the youth and how to effectively use marketing growth channels to scale.
Also, we have many opportunities that will benefit our business, like having the right timing to launch the Smart Masks (as we are in the midst of a pandemic). We have no direct competition yet, as there are no such products available on the market, but we do have a large customer base which is growing by hundreds of thousands on a daily basis (as COVID-19 unfortunately continues to spread through communities). To add, we could also introduce machine learning and artificial intelligence features in the near future. All in all, the growth potential of the industry and the scalability of our product are both promising.
However, like every other business, we do have weaknesses: we have low financial funding, and we do not have an experienced team of IoT and BioTech specialists. Further on, our biggest threat is the possibility of large monopoly conglomerates releasing similar products. Still, the market is big enough for a startup like us to survive the competition until we scale significantly, and being the first mover in the industry will surely reward us with loyal customers.
We are targeting to first sell the product in Indonesia, due to its rising number of COVID-19 cases, and so we are benchmarking our data against the smartwatch market in Indonesia. 2.5% of Indonesians have smartwatches, 23% of whom wear them for fitness purposes (which means that 1.495 million people are ready to spend about $100 on a tech-lifestyle device in Indonesia).
If the Smart Mask manages to attract even a mere 2% of those 1.495 million people to buy our Masks, and 10% of them to have it on a subscription model, then about 180,000 Indonesians would use our product.
To conclude, if our product gets a steady increase in the influx of customers for a year after the first MVP (of a working product), as shown in the table, then we can hit 3.8 million dollars in revenue, with a 2.16 million dollar cost… leaving us with a profit in the first year of 1.71 million dollars. And the only investment we need is the right resources, to gather the right team.
Built With
figma
html
Try it out
www.figma.com
docs.google.com | Smart Masks | As an attempt to minimise effects of Covid, we present 'Smart Masks' which connect with your Mobile Phone and notifies on correct your way of wearing the mask, and when you forget your mask at home. | ['Rahul Mawa'] | [] | ['figma', 'html'] | 165 |
10,371 | https://devpost.com/software/color_the_sky | Inspiration
Color the Sky’s intention, as a project, is to create a machine learning model capable of colorizing black and white or grayscale images and providing those colorized images as output. This particular project’s scope is presently limited to a focus on colorizing images of the sky, which is what the model was trained on.
The aim of doing this is both to provide the basis for an image colorization model for future use and to educate on the process of creating and applying such a model.
What it does
Given an input image, Color the Sky resamples to a 512x512 px resolution to standardize the images placed in the raw_image folder. The resized images are output into the feature folder, which is when the trained model steps in.
The Color the Sky model takes the black and white images in the feature folder and applies colorization. The model looks at the difference between the grayscale and the colored images and tries to tune its parameters to map one image to the other. The model consists of several convolution layers and uses the Adam optimizer to learn the parameters. The result after training is a model that can take any image that resembles the training data and produce a colorized version of it.
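The grayscale inputs paired with each colour training image can be derived with the standard luminance weighting; a minimal per-pixel sketch (these are the ITU-R BT.601 weights, the same weighting Pillow applies in `Image.convert("L")`, up to rounding):

```python
def to_grayscale(pixel):
    """Collapse an (R, G, B) pixel to a single luminance value
    using the ITU-R BT.601 weights: Y = 0.299 R + 0.587 G + 0.114 B."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)
```

Applying this per pixel to a resized 512x512 image yields the grayscale feature image the model learns to map back to colour.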
How I built it
The program is built using Python with TensorFlow (Keras API), NumPy, and Pillow.
The images are processed using Java, JDK-11.0.8 with the Java util, awt, io, and imageio packages.
IDE: PyCharm, IntelliJ
Challenges I ran into
With the limited timeframe of the Hackathon, both creating the framework for and training the model was a challenge.
Sourcing sufficient and satisfactory training data for the project was also something that had to be done. There are many datasets available online for projects such as these, but given the focus on training using images of the sky, a custom dataset was produced by crawling Google and Yandex image results and then sorting those images.
While a usable model was produced, there are limitations to the accuracy of the colorization. However, as aforementioned, given the time frame within which this was produced, it is serviceable in its primary application.
Built With
java
keras
numpy
python
tensorflow
Try it out
github.com | Color_the_Sky | Converting gray scale images of the sky to colored images. | ['Thinh Pham', 'Thao Tran', 'jcarpenter48'] | [] | ['java', 'keras', 'numpy', 'python', 'tensorflow'] | 166 |
10,371 | https://devpost.com/software/gaps-in-history | A Sample Document
Inspiration
Have you ever wanted to rewrite history? What if you were the one in charge of writing the documents that crafted our nation and our literature?
What it does
Gaps in History allows you to rewrite history by changing key historical documents and literature. Our project lets you select a document and then change key words or phrases in that document to create outlandish alternate histories. Then you can share your story with friends by downloading or sharing the link!
How We built it
We deployed our app using Heroku on a Spring Boot framework. We used HTML for the main structure of the website, with routing handled by a Java program, and we used CSS to make it look weathered and worn like an old book from history. The back-end operations mostly use JavaScript.
Challenges We ran into
Our team uses both PC and Mac, and trying to translate and troubleshoot the functions for setup proved to be time-consuming. We are relatively inexperienced in hacking, but we slowly got familiar with our setup. The largest hurdle we had to overcome with this project was receiving the user’s input from the HTML and passing it to different webpages so that it could be processed and displayed in different ways.
Accomplishments that We’re proud of
We were successfully able to implement communication between different webpages using user input, and we even learned how to send a custom file to Downloads on any computer. We also felt very accomplished and proud when we saw the first finished page of our website, knowing that our project was more than just some scripts and files. Finally, we were overjoyed to see our project running online on a dedicated website (with a sweet deal on a domain!).
What We learned
None of us knew much about creating and hosting web applications, especially with the framework we chose. We also only had limited experience with HTML, CSS, and JavaScript. We learned to be resourceful by scouring the Internet for example code and alternate solutions to various problems.
What's next for Gaps In History
We have many ideas for further development. There are some structure and formatting issues that could be resolved, as well as general code cleanup and refactoring. We also want to implement more features, such as allowing users to share their documents via social media and input their own documents to be altered (even storing these on a database for future users to enjoy).
Built With
css
heroku
html
java
maven
Try it out
gapsinhistory.xyz
github.com | Gaps In History | Reimagine reality by filling in the Gaps in History! You decide what outrageous events will lead to the present by coming up with the parts of speech necessary to complete a historical document. | ['Luca C.', 'Josh Rosenthal', 'Clark Mahaffey', 'David Cornell'] | [] | ['css', 'heroku', 'html', 'java', 'maven'] | 167 |
10,371 | https://devpost.com/software/zoomie-the-self-care-bot | Zoomie!
Inspiration:
Motivated by the pandemic, (and the stress of our first hackathon) our team sought ways to have technology make us feel better during difficult times.
What it does:
Our bot integrates into Discord servers and sends users a random self-care tip through private messages when prompted with "Hey Zoomie." We have also included some easter eggs ;)
How it works:
We used Python to code the bot and used the discord.py API.
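The core trigger-and-reply logic is separable from discord.py itself; a minimal sketch of it (the tip texts here are placeholders, and the real bot sends the reply as a private message):

```python
import random

# Placeholder tips; the real bot carries a much larger curated list.
SELF_CARE_TIPS = [
    "Take a five-minute stretch break.",
    "Drink a glass of water.",
    "Step outside for some fresh air.",
]

def zoomie_reply(message_content: str):
    """Return a random self-care tip when the trigger phrase appears, else None."""
    if "hey zoomie" in message_content.lower():
        return random.choice(SELF_CARE_TIPS)
    return None
```

In the discord.py event handler, a non-None return value is what gets DM'd back to the user.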
Challenges:
As our team is composed of 4 emerging hackers, this was the first true hackathon experience for each of us. We had to partake in heavy research in order to learn Python, Discord bot programming, and how to work as a team. We are also spread out across different time zones, which presented some other problems.
Accomplishments:
We are all proud that we came out of this with a finished product! We also made great friends with each other along the way :)
What we learned:
We all learned a lot of Python really quickly and learned how to separate a large project amongst each other effectively.
What's next for Zoomie the Self Care Bot:
If possible, we would like to expand the project so that our bot automatically senses when someone appears online for the first time that day, and sends them one daily message as soon as that happens. We also would like to include a music element to our program so the bot can begin playing a playlist if requested.
Built With
discord.py
python
Try it out
github.com | Zoomie the Self Care Bot | A Discord bot that will private message you an encouraging self-care tip! | ['Yhassan30 Hassan', 'Rafael Collado', 'Shay Fleming', 'afonsozhan Zhan'] | [] | ['discord.py', 'python'] | 168 |
10,371 | https://devpost.com/software/hackgt-5bdq40 | hackgt
code for evaluating rocket fin designs, and maybe optimizing them
:)
Built With
matlab
python
Try it out
github.com | Rocket Code | This code evaluates your rocket's fin design quickly in order to speed up the prototyping process. | ['gmao0601 Mao', 'martian19'] | [] | ['matlab', 'python'] | 169 |
10,371 | https://devpost.com/software/mood-movie-machine-n9t48d | The homepage of Mood Movie Machine
The results page with the unique movie suggestion!
Inspiration
Ever really wanted to watch a movie but felt like you ran out of things to watch? Well, considering the number of movies out there, we all know that's not true! To solve this problem, we decided to build a website that would help you find a movie tailored to your preferences or your mood.
What it does
The Mood Movie Machine lets you select your preferences based on 4 filters: basic genres, age restriction, period in which the movie is set, and intensity. With these preferences, the Mood Movie Machine suggests a movie for you to watch, along with a description and image of the movie!
How we built it
We used HTML/CSS for the front-end and Flask for the back-end. To get movie names, descriptions, and images according to what is selected, we accessed The Movie DB (TMDB) API. After we decided what we wanted to make, we split off into two teams of two, one team working on the back-end while the other worked on the front-end. Using the questions that we created in order to narrow down the selection, each team went off to code. When one team ran into a problem, both groups convened to come up with a fix for both sides. If there were any changes or accommodations that needed to be made, they were completed and uploaded to GitHub. Once everyone's part was complete we connected the back-end to the front-end, leaving us with a fully functioning website.
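On the back end, the site's filters can be translated into a single TMDB discover query. A stdlib-only sketch (the parameter names come from TMDB's public /discover/movie endpoint; the exact mapping from our four filters onto those parameters is an assumption for illustration):

```python
from urllib.parse import urlencode

TMDB_DISCOVER = "https://api.themoviedb.org/3/discover/movie"

def build_discover_url(api_key, genre_id, cert_max=None, year_from=None, year_to=None):
    """Assemble a TMDB discover URL from the site's filter selections."""
    params = {"api_key": api_key, "with_genres": genre_id}
    if cert_max:  # age-restriction filter, using US certifications
        params["certification_country"] = "US"
        params["certification.lte"] = cert_max
    if year_from:  # period filter approximated by release-date range
        params["primary_release_date.gte"] = f"{year_from}-01-01"
    if year_to:
        params["primary_release_date.lte"] = f"{year_to}-12-31"
    return f"{TMDB_DISCOVER}?{urlencode(params)}"
```

Flask then fetches this URL and passes the returned title, overview, and poster path to the results template.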
Challenges we ran into
We ran into a lot of errors with Flask while trying to use the TMDB API. Flask was initially confusing to work with, especially while integrating the front-end and the back-end.
What we learned
Each of us learned a new skill from scratch, whether it be HTML/CSS or Flask. We learned how APIs work and how to use them.
Accomplishments that we're proud of
For most of us, this was our first full-fledged project! We're proud of ourselves for deciding to learn new skills from scratch and using them to build a complete (and super fun) project.
What's next for Mood Movie Machine
We plan on adding more filters to help the customer get the most accurate movie. Additionally, we hope to be able to return the movie rating and available streaming platforms in the future!
Built With
css
flask
html
python | Mood Movie Machine | In the mood to watch a movie but don’t know what? Well lucky for you, we have a simple fix! | ['Trinity Johnson', 'Varshini Chinta', 'Annette Pan', 'elise p'] | [] | ['css', 'flask', 'html', 'python'] | 170 |
10,371 | https://devpost.com/software/scanitforward | Example In-Store Reciept from Silver
Inspiration
Crumbs is a platform to connect the needs of qualifying front-line workers, local businesses, and customers. The main goal is to reward those who put themselves at risk by partially or fully funding their meals. In addition, we also keep revenue within the community by supporting and promoting purchases with small and local businesses. Customers don't have any additional charges, but can choose to donate out of their own pockets if they wish to do so.
What it does
The main concept that we introduce is the idea of a micro-fund. Front-line workers, if they are chosen to be invited into the platform, can make a request for a bulk order of meals and ask the community to fund said meals. Local businesses that have opted into the donation process can stake some percentage of their profits as donations (somewhere in the neighborhood of 1-5% or even more!), which customers who paid for the meals can redeem and donate to the micro-fund of their choice. Customers can also choose to advance the funding on any micro-fund with their own money if they wish to do so.
Benefits
Besides obviously rewarding front-line workers for their hard work, small and local businesses also get a boost from this platform. Not only can they write off these donations, businesses will also be listed on our platform and become "Crumb certified". This will drive more customers to the business and generate additional revenue, likely offsetting the cost of joining the platform. On top of that, micro-funds can only be redeemed with partnering restaurants, further increasing cash flow.
Technology Integration
We chose to use NCR's existing platforms to collect payment information and capture transactions made by customers. The NCR Silver platform gives us a great starting point from which we can modify the receipt to generate a custom QR code for the user to collect the donation made by the business. Since we did not have access to Silver's source code, we had to emulate this feature for this hackathon, but NCR could very reasonably add this feature to accommodate this requirement.
We also used NCR's Technology Document Management API to check if a transaction is valid as well as calculate the amount the business pledges to donate. Our back end makes sure that no double spend attack can be performed with any transaction, as well as track funds between businesses, users and micro-funds.
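The double-spend guard boils down to one idea: the QR code on a receipt maps to a transaction ID that may be redeemed into a micro-fund exactly once. A minimal in-memory sketch (the real backend persists this state in Cassandra via DataStax; the class and method names are illustrative):

```python
class DonationLedger:
    """Track which receipt transactions have already been redeemed."""

    def __init__(self):
        self._redeemed = set()  # transaction IDs already spent
        self.funds = {}  # micro-fund name -> accumulated donation amount

    def redeem(self, transaction_id, amount, fund_name):
        """Credit a micro-fund once per transaction; reject double spends."""
        if transaction_id in self._redeemed:
            raise ValueError(f"transaction {transaction_id} already redeemed")
        self._redeemed.add(transaction_id)
        self.funds[fund_name] = self.funds.get(fund_name, 0) + amount
        return self.funds[fund_name]
```

A transaction is first validated against the Transaction Document Management API, and only then passed to the ledger, so an invalid or replayed receipt never credits a fund.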
Built With
amazon-web-services
cassandra
datastax
figma
google-maps
ncr
silver
swiftui
tdm
Try it out
github.com | Crumbs | A platform that provides crowdfunded meals to frontline workers by driving sales through small businesses | ['Carson Brown', 'Rahil Patel', 'Ishuma Lahoti', 'Sai Aguru', 'Sean Nima', 'Alexander Proschek'] | [] | ['amazon-web-services', 'cassandra', 'datastax', 'figma', 'google-maps', 'ncr', 'silver', 'swiftui', 'tdm'] | 171 |
10,371 | https://devpost.com/software/mlh-best-domain | Bringing the search function from online shopping into the real world by using Google Cloud SQL databases to find item availability and stock.
Built With
cloudsql
kivy
python
Try it out
github.com | CTRL + F | Find the perfect product | ['prsingareddy', 'Sana Hafeez', 'Calvin Dong'] | [] | ['cloudsql', 'kivy', 'python'] | 172 |
10,371 | https://devpost.com/software/hackgt-2020-hackergurls | We were inspired by NSIN's fitness tracker challenge and recognized the need, beyond tracking a squad's fitness statistics, that an application like this could address. With the current state of the world due to COVID-19, and the emphasis on physical health and awareness of personal health, an application that tracks groups people's daily fitness could help organizations from daycares to nursing homes assure that those in their care are healthy and meeting daily goals. We built our front-end web interface using React on VSCode. We faced several challenges regarding animations, displaying grids/tables, and populating objects from a list of data, but overcame them through group discussion and effective pair-programming.
Built With
javascript
react
Try it out
github.com | HackGT 2020 - hackergurls | Displays group fitness statistics | ['Deirdre Murphy'] | [] | ['javascript', 'react'] | 173 |
10,371 | https://devpost.com/software/claudia-s-test-project | Inspiration
A test
What it does
Tests a test
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Claudia's Test Project
Built With
latex | Claudia's Test Project | A test project! | ['Claudia Chu'] | [] | ['latex'] | 174 |
10,371 | https://devpost.com/software/todo-stack | Inspiration
We wanted to make a more optimized todo list app that estimates your work for today and makes sure you do not work over that limit.
What it does
This is a cloud solution for a todo list app and estimates your time and hides tasks to help you focus on the most important one.
How I built it
We used Kotlin with Android Studio with a backend using Flask and DataStax Astra
Challenges I ran into
Learning to use DataStax Astra!
Accomplishments that I'm proud of
A working todo list stack application
What I learned
DataStax Astra and how to incorporate a backend to an Android app
What's next for ToDo Stack
Later we would like to add built in break timers and specific categories to help batch your work together
Built With
amazon-web-services
android
flask
kotlin
nosql
Try it out
github.com | ToDo Stack | An optimized way to using a todo list | ['Fletcher Wells'] | [] | ['amazon-web-services', 'android', 'flask', 'kotlin', 'nosql'] | 175 |
10,371 | https://devpost.com/software/the-aural-card | The Aural Card's Logo
The Aura Card's Main Dashboard
GIF
The Aura Card's receiving user input + general dashboard design.
What was the inspiration to create The Aural Card?
The Aural Card was born based on the idea that digital banks and internet financial experiences are cumbersome and confusing.
How many times have you wandered over your bank’s app or website trying to get your account number or transfer money to a friend?
The Aural Card came to life in order to change this reality - to change our reality and how we use financial services.
What does the Aural Card do?
It is as awesome as you think it is. With an Aural Card, you can perform any banking operation, including, but not limited to, money transfers, profile information retrieval, and deposits, through voice commands.
How was the Aural Card platform built?
The Aural Card's back-end was developed by the Aural Card's Voice guru, Nathan Wilk, who managed to build a Flask server integrating the voice functionality with banking services available from NCR and Capital One. The Aural Card's front-end was delicately taken care of by Mauricio, our team's Front-end Master. Everything you see on the Aural Card's platform was built using HTML, CSS, and JavaScript. A few small icons were provided by Font Awesome's CDN. Mauricio also designed the logos, animations, and general illustrations using Adobe Photoshop, Illustrator, and Premiere.
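Once speech is transcribed, the server has to map the utterance to a banking action before calling out to the NCR or Capital One services. A minimal keyword router, purely illustrative (the intent vocabulary and names are assumptions, not the production logic):

```python
# Hypothetical intent vocabulary; the real server's mapping is richer.
INTENT_KEYWORDS = {
    "transfer": ("transfer", "send money", "pay"),
    "balance": ("balance", "how much do i have"),
    "deposit": ("deposit",),
}

def classify_command(transcript: str):
    """Map a transcribed voice command to a banking intent, or None."""
    text = transcript.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return None
```

The Flask endpoint then dispatches the matched intent to the corresponding banking API call and speaks the result back to the user.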
Were there any challenges?
We absolutely came across some mind-blowing challenges. We were still researching ideas until Saturday morning, and we came up with Aural Card's idea during lunch. Aural Card was also very challenging technology-wise. It required many hours of research in order to come up with a viable and feasible solution within the team's tight timeframe, while also coming up with ideas to create a user interface that is easy and intuitive to use.
Accomplishments that the Aural Card team is very proud of.
The Aural Card team is very proud to create a solution to change the unwieldy online financial reality.
We are also glad to be able to bring our idea itself to life - something that was unimaginable in our minds until this Saturday.
What did the Aural Card team learn?
The Aural Card team had never worked on such a big project before. We had to connect so many cables under the hood while making sure Aural Card works with our sharp and smart user interfaces. Therefore, the team's accomplishments range from simple CSS animations to complex back-end API integrations.
What's next for The Aural Card company?
The Aural Card is looking forward to expanding its reach towards deeper, immersive, and instinctive experiences to change banking and financial realities across the globe. The Aural Card team is also looking forward to increasing the Aural Card's integration with Alexa, Google Assistant, Cortana, and Siri, where your banks and accounts could be accessible from any device just using your voice.
Built With
alexa
css3
flask
google-cloud
html5
javascript
python
Try it out
github.com | The Aural Card | Changing the financial industry reality one sound wave at a time. | ['Mauricio Costa', 'Nathan Kurelo Wilk'] | [] | ['alexa', 'css3', 'flask', 'google-cloud', 'html5', 'javascript', 'python'] | 176 |
10,371 | https://devpost.com/software/neon-face-6ovs2c | Inspiration
What it does
Outlines key features of a person's face in neon colors.
How we built it
We used Facebook Spark AR to create the effect for instagram reels.
Challenges we ran into
We had trouble figuring out how to animate the outline and have it run through the different colors.
Accomplishments that we're proud of
What we learned
What's next for Neon Face
Built With
facebooksparkar
Try it out
www.instagram.com | Neon Face | A filter for users to use on Instagram reels that creates neon outlines of the user's face. | ['Kara Leong', 'Elaine Chen', 'Grace Wang'] | [] | ['facebooksparkar'] | 177 |
10,372 | https://devpost.com/software/ruby-lo2p7g | Logo
Stack and Architecture
CI/CD pipeline
A feed showing exercise videos grouped by different body parts fetched from YouTube
A training session with an ongoing YouTube video and the camera feed recording
Precise Fit
A personal visual evaluator that takes care of your posture during training sessions.
The impact of COVID-19 in our lives has been massive during this last year. It has affected all aspects of our lives (socially, economically, physically, and mentally).
We are very fortunate to be able to study and progress in our careers, regardless of the circumstance. Nonetheless, we do feel a passionate obligation towards reaching out to the most affected groups by the pandemic and generate a positive impact through technology.
One of the most affected communities during this pandemic is older adults, as they suffered a hard impact on their lifestyles by being a vulnerable group. We have built this application with them in mind, as we designed our engine with a simple, yet powerful, User Interface to make sure seniors exercise with correct posture quickly and easily without having to worry about fancy technology.
Here's where Precise Fit comes in. Exercise is a fundamental activity to maintain a healthy lifestyle. Due to COVID-19, going to the gym or training with a personal trainer has become a potential health risk, forcing individuals to be more sedentary. The Precise Fit engine helps people engage in physical activity through YouTube training sessions and constant correction of their posture. Just as if they were training with a personal trainer.
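Pose-based posture correction usually reduces to joint-angle geometry on the detected keypoints. A language-neutral sketch of the angle computation (our engine runs the equivalent in the browser with ml5/TensorFlow.js pose estimation; the keypoint coordinates and any thresholds are illustrative):

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by segments b->a and b->c.
    Each point is an (x, y) keypoint, e.g. hip-knee-ankle for a squat."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang
```

A posture rule then becomes a simple threshold check, e.g. flagging a squat whenever the knee angle leaves the expected range for that exercise.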
We invite you to check out our demonstration video for a deep dive into what we did these last 24 hours. In the video, we cover in more detail the aforementioned situation we aim to tackle, the Machine Learning algorithm we used to solve the problem, the technical stack, and some of the challenges we faced.
Link to presentation
Note for reviewers
Every time you click on a video to initialize a training session, the YouTube video gets downloaded as mp4 on the server. We didn't manage to serve it as a buffer stream to start showing the chunks as they get downloaded; instead, the entire data stream gets downloaded at once, and then you can start watching the video.
We point this out because if you select a 60-minute video, the 150MB download will take a while, so we'd appreciate it if you guys could hold on there. On the other hand, if you pick a 10-minute video, it shouldn't take any longer than 5-10 seconds, so maybe that's easier and simpler to evaluate. Nonetheless, feel free to ping us if you have any questions regarding the implementation of any specific details of interest :)
Thank you so much for the time invested in reviewing our project! We sure appreciate it!
Built With
cloud-build
cloud-run
docker
google-cloud
graphql
javascript
machine-learning
micro-services
ml5
node.js
react
tensorflow
tfjs
Try it out
frontend-5snxalmwva-uc.a.run.app
github.com | Precise Fit | A personal visual evaluator that takes care of your posture during training sessions. | ['Alex Domene', 'Ernesto García', 'Abdo .'] | ['1st Place Overall Prize: Cash Prize!', 'Best use of Google Cloud'] | ['cloud-build', 'cloud-run', 'docker', 'google-cloud', 'graphql', 'javascript', 'machine-learning', 'micro-services', 'ml5', 'node.js', 'react', 'tensorflow', 'tfjs'] | 0 |
10,372 | https://devpost.com/software/sympto-bot | Sympto-Bot
Sympto-Bot welcome page
Google Cloud's Dialogflow Essentials was used to train the virtual agent to respond properly to user input.
Sympto-Bot can display COVID-19 data visuals when prompted to do so.
Sympto-Bot can navigate you to the closest testing center.
Sympto-Bot provides free risk assessments that consider your gender, age, preexisting conditions, and symptoms to assess risk.
Inspiration
We wanted to create a bot that would work with the current information about the COVID-19 pandemic and help people understand what they should do in order to face the problem. The best way to return to our daily lives is to follow all the given guidelines as we go about our day.
What it does
Sympto-Bot, a virtual agent that is capable of responding to everyone’s COVID questions and concerns. Sympto-Bot uses machine learning to dynamically answer questions and offers free risk assessment, directions to COVID testing centers, and COVID data visuals.
How we built it
React was used for the front end. The chat bot was made with JavaScript. The API was made in Node.js to connect React and Dialogflow. The frontend receives responses from Dialogflow. Google Cloud's Dialogflow was used to train the bot to interpret and properly respond to user input.
Challenges we ran into
One of the challenges we encountered during this event was gathering enough accurate data about the COVID-19 pandemic and deciding how best to disseminate it to our users.
Accomplishments that we're proud of
For many of our teammates, this project was our first time working with technologies such as Node.js and Google Dialogflow. We are very proud that we’ve been able to not only utilize these technologies, but make them work together to create something unique.
What we learned
We learned how to use Dialogflow to train the virtual agent to comprehend and properly respond to user input. Most group members were unfamiliar with JavaScript and Node.js; we learned how to work with these technologies through hands-on experience.
What's next for Sympto-Bot
The current Sympto-Bot deploys its capabilities in a web browser, but it could be deployed on any platform, such as a mobile or desktop app. In the future we would like to add functionality such as calling 911 for severe symptoms that may need emergency treatment, improvements to the scope of the questions the virtual agent is capable of answering, and a service that provides periodic updates on the COVID situation in your area.
Built With
css
dialogueflow-essentials
google-cloud
html
javascript
node.js
react
Try it out
curtisf.dev | Sympto-Bot | Sympto-Bot is a virtual agent that helps navigate COVID-19. Sympto-Bot answers your questions, displays COVID-19 data, navigates you to the nearest testing center, and offers risk assessments. | ['Asish Boyapati', 'Curtis Fowler', 'Kylan Thomson', 'Thomas Proffitt'] | ['2nd Place Overall Prize: Cash Prize!', 'Cotiviti - Best Application of ML geared towards the Covid-19 Crisis', 'Best use of Google Cloud - COVID19 Hackathon Fund'] | ['css', 'dialogueflow-essentials', 'google-cloud', 'html', 'javascript', 'node.js', 'react'] | 1 |
10,372 | https://devpost.com/software/airpark | Inspiration
Have you ever been in a situation where you just cannot find parking, and your only option is to grudgingly check into a parking lot that costs $40 for a few hours? The sad truth is that when you are paying an exorbitant sum of money for a parking lot, thousands of personal parking spots are completely empty because their owners are not at home or in the office. Those days are now over. Introducing AirPark, a mobile platform for affordable and efficient sharing of personal parking lots. AirPark makes the process of buying and renting personal parking spots EXTREMELY easy, time-efficient, and affordable.
What it does
Renters simply add their parking spot with the click of a button, and our app automatically detects the location they are reporting from. Next, the renter can either rent out their parking spot one time or create weekly rent schedules. The buyer, on the other hand, can search for nearby parking spots or search by address for a remote spot, and then claim it. When they are leaving the spot, they can simply click a button to check out and pay. Our system makes it more affordable for buyers while benefiting renters. AirPark is also Google Assistant enabled for voice interactions.
On the business side of things, we plan to take a 0.5% service fee on each transaction, as well as host in-app advertisements. We will also take an additional 1% service fee on transactions above a threshold.
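The fee model described above can be sketched in a few lines. Note that the writeup doesn't specify the threshold value, so the `threshold` default here is a placeholder:

```python
def service_fee(amount, threshold=100.0):
    """Platform fee for one transaction: a 0.5% base fee on every
    transaction, plus an additional 1% on transactions above
    `threshold` (the threshold value is a placeholder assumption)."""
    fee = amount * 0.005
    if amount > threshold:
        fee += amount * 0.01
    return round(fee, 2)
```

For example, a $200 rental would carry a $1.00 base fee plus $2.00 for exceeding the threshold.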
How we built it
React Native for front-end.
Python with azure serverless functions for back-end.
RDS + MySQL for database.
Figma for UI design
Google assistant for voice interactions
Challenges we ran into
CORS errors with the back-end.
making the app responsive.
getting everything integrated in a short time period.
Accomplishments that we're proud of
Integrating the app with the backend
Creating voice interactions for our app
Creating a clean and responsive UI.
What we learned
Using gcloud serverless functions
Using google cloud SQL
Creating voice interactions for google assistant.
What's next for AirPark
In the future, we would like to add an interface for reporting spam spots, and creating a rating system for each spot. We would also like to integrate AirPark with automatic reader scanning systems in personal parking spots. Lastly, we would like to implement more sophisticated filters for searching.
Built With
gcloud
mysql
python
react-native
Try it out
github.com
www.figma.com | AirPark | The Airbnb of Parking. | ['Nand Vinchhi', 'Veer Gadodia', 'Muntaser Syed', 'Ebtesam Haque'] | ['2nd Place Overall Prize: Cash Prize!'] | ['gcloud', 'mysql', 'python', 'react-native'] | 2 |
10,372 | https://devpost.com/software/autogrocer | The app on an iPhone home screen.
Lining up and taking the picture.
Processing a request.
The email confirming the order.
The longer story…
In these trying times, the elderly have been hit particularly hard. While normality is still far away, we can begin to build solutions that might help facilitate lockdown life. An area that desperately needs such solutions is online grocery shopping. Tons of new sites have popped up during the pandemic, but the inconsistent and often confusing user interfaces cause difficulties for the elderly and tech illiterate.
This is where autoGrocer comes in. Rather than asking for a user to change their entire grocery shopping routine, autoGrocer allows for a pain-free online shopping interface. All a user has to do is open the app, snap a picture of their paper grocery list, and sit back as autoGrocer adds all items in the list to a cart in the indicated quantities. Because autoGrocer is equipped with the powerful Google vision API for text detection, handwritten grocery lists are recognized accurately and consistently. After autoGrocer is done processing the request, it sends an email to the user with their order!
The inner workings of autoGrocer…
autoGrocer leverages a number of different powerful technologies to process requests. The flutter front end is used to take the pictures. Pictures are stored in Firebase and the generated URL is used to send a POST request to Google Vision. Google Vision extracts the text and the app sends the results via POST request to a Flask server that contains the python backend. Using the python library Selenium, a browser is opened and all items are added to the cart. Then, an email is sent out to the user with the smtplib and ssl python libraries.
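The final step of the pipeline above, emailing the user their order, can be sketched with the standard library (the sending step via smtplib/ssl is omitted here, and the addresses and function name are illustrative, not the project's actual code):

```python
from email.message import EmailMessage

def build_order_email(to_addr, items):
    """Compose the order-confirmation email described in the writeup.
    `items` maps product name -> quantity. Sending the message with
    smtplib + ssl is left out of this sketch."""
    msg = EmailMessage()
    msg["Subject"] = "Your autoGrocer order"
    msg["From"] = "orders@autogrocer.example"  # placeholder address
    msg["To"] = to_addr
    lines = [f"{qty} x {name}" for name, qty in items.items()]
    msg.set_content("Added to your cart:\n" + "\n".join(lines))
    return msg
```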
Clear extensions…
autoGrocer could easily be extended in many ways to improve user experience and accessibility. Accepting filters for product brands or preferred grocery stores would allow users to customize their experience. Machine learning algorithms could be deployed to learn about the user dynamically and allow for superior, but still automatic, product choice. The possibilities are endless!
What we learned…
We both had to tackle entirely new technologies in order to build autoGrocer. Clare dealt with the front end and user experience, learning how to use Flutter in only a day and connecting the app to Firebase and the Vision API! Emilio worked with a Flask server for the first time, building an API that would allow easy access to the array of functions in the python backend. It was a fantastic learning experience for both of us!
Built With
dart
firebase
flask
flutter
python
selenium
vision
Try it out
github.com | autoGrocer | autoGrocer makes ordering groceries as simple as taking a picture. Snap a quick pic of your shopping list and autoGrocer takes care of the rest! Perfect for the elderly and tech illiterate. | ['Emilio Luz-Ricca', 'Clare Heinbaugh'] | ['3rd Place Overall Prize: Cash Prize!', 'Best use of Google Cloud', 'Altria - Most Innovative and Insightful Use of Virtual Agents', 'Genworth - Solution to help Elderly Return to Pre-Pandemic Quality of Life Without Additional Exposure Risks', 'Best use of Google Cloud - COVID19 Hackathon Fund'] | ['dart', 'firebase', 'flask', 'flutter', 'python', 'selenium', 'vision'] | 3 |
10,372 | https://devpost.com/software/roometheus | Roometheus. Like Prometheus, but for roomies.
Inspiration
Roometheus, named after Prometheus, the Greek mythology figure who stole fire for mankind, is a web dashboard for roommates. The inspiration comes from wants we've experienced in real life for communicating with roommates.
This challenge is for the CoStar - Best Hack for Group Living in 2020. Also the best domain challenge!
https://roometheus.com
What it does
It's a web dashboard for roommates. It has three main features:
Whiteboard:
Draw together in real time.
Leave notes, works of art, reminders.
Grocery list:
List stuff you need to buy, cross off and remove stuff once it's been bought.
Shared calendar:
Embedded Google calendar.
Share class schedules, when you'll be out of the apartment, when people are visiting, etc.
How I built it
https://github.com/MicahParks/roometheus
We used Vue.js, Caddy, Docker, docker-compose, Google Calendar, and a few open source projects.
Challenges I ran into
Team members didn't know Vue.js, and we ran into feature creep.
What I learned
A lot about frontend frameworks like Vue.js.
What's next for roometheus
More features! Like rent sharing with Venmo or Zelle integration.
Built With
caddy
docker
vue
Try it out
roometheus.com
github.com | roometheus | Your apartment hub for roommates to share calendar events, grocery lists, and whiteboard doodles. | ['Zephyr Headley', 'Ashley Beasley'] | ['3rd Place Overall Prize: Cash Prize!', 'CoStar - Best Hack for Group Living in 2020'] | ['caddy', 'docker', 'vue'] | 4 |
10,372 | https://devpost.com/software/honci-tonk | Inspiration
I wanted to make an educational game, so I thought of the simplest thing that I could teach in a game. The first thing that came to mind was the HONC 1234 rule in chemistry. I went with that idea and ended up with this prototype!
What it does
Visually lays out how atoms would form covalent bonds with one another. It challenges the user to create compounds based on the available elements.
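The compound check at the heart of the game rests on the HONC 1234 rule: hydrogen forms 1 covalent bond, oxygen 2, nitrogen 3, and carbon 4. The game itself is written in Godot/GDScript, but the rule can be sketched outside the engine like this (the data shapes are illustrative, not the game's actual ones):

```python
VALENCE = {"H": 1, "O": 2, "N": 3, "C": 4}  # the HONC 1234 rule

def bonds_satisfied(atoms, bonds):
    """Check that every atom has exactly its HONC valence worth of bonds.
    `atoms` maps an atom id to its element symbol; `bonds` is a list of
    (id, id) pairs, with double bonds listed twice."""
    count = {a: 0 for a in atoms}
    for a, b in bonds:
        count[a] += 1
        count[b] += 1
    return all(count[a] == VALENCE[atoms[a]] for a in atoms)
```

Water, for instance, satisfies the rule: the oxygen carries two bonds and each hydrogen carries one.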
How I built it
Using LMMS, I composed a four second beat with ten layers and then exported each layer individually to provide the game with adaptive music. Using the Godot engine I worked out the logic for drawing the bonds and detecting when the player has exhausted all pairs. Using Illustrator, I designed all the elements, giving each a distinct look so that the player can readily identify each element.
Challenges I ran into
Drawing the lines in game meant that I had to constantly check and update the positions of the lines' endpoints. The logic still isn't perfect, as there are visual bugs when drawing lines/forming bonds. However, while I did not have time to perfectly tackle this, I put in a workaround if the player comes across a level breaking bug with the restart button and next level button.
Accomplishments that I'm proud of
The lines are drawn and they (mostly) follow the elements when dragged around. This process took much longer than expected, but in software development, it's the seemingly easy tasks that take the longest. Most of all, I'm proud of the look of the game overall, as well as the initial design. Usually I'm not the one that comes up with the ideas, but this time I did!
What I learned
Better debugging skills, as I was very lost with the line drawing before I finally buckled down and went through step by step to trace the problem.
What's next for HONCi-Tonk!
More compounds, bug fixes, better game feel and links to educational sources
Built With
adobe-illustrator
godot
lmms
Try it out
syanic.itch.io
github.com | HONCi-Tonk! | A puzzle prototype where you form covalent bonds to create compounds! | ['Julia Wang'] | ['3rd Place Overall Prize: Cash Prize!'] | ['adobe-illustrator', 'godot', 'lmms'] | 5 |
10,372 | https://devpost.com/software/ramhacks2020-carmax-challenge | The Problem
On carmax.com, given more and more customers are starting their vehicle buying journey online, with over 35k vehicles within our inventory, how do we visually help users narrow down the vehicles by transfer fee?
Transfer fee is the cost customers can pay to move a vehicle from one location to another. Fees can vary from FREE to $2000.
What We Learned
HTML, CSS, JavaScript
How We Built It
Within the 24 hour period, we were able to build a mock website of CarMax by taking some images off the original website and piecing together our solution using HTML, CSS, and JavaScript. Then we uploaded our code to be hosted on Surge.
Challenges We Faced
Due to the complexity of how the website is designed, it was hard to replicate some of the features, so to save time we hardcoded the design. Since CarMax doesn't provide an API for their data, we took screenshots to mimic what it would look like.
DEMO
https://youtu.be/om4nUhVxdGE
Built With
css
html
javascript
Try it out
carmax.surge.sh
github.com | RamHacks2020-Carmax-Challenge | The simplest solution that you've ever seen | ['Spencer Kinsey-Korzym', 'Isabella Virtucio'] | ['CarMax - Visually narrow down vehicle transfer fees'] | ['css', 'html', 'javascript'] | 6 |
10,372 | https://devpost.com/software/transfer-fee-analysis-for-cars | GIF
highlighting search capabilities, finding transfer fees for fords to montana
GIF
demo showing how visualisation (transfer fee bar and map arrow colors) change as a function of estimated transfer fee
static screenshot of app
GIF
highlighting the fees for a variety of cars to montana
Inspiration
When you're buying a car online, you know that it can be shipped in, but from where? And what gravity does that have on the purchase of your car? In a word: is it feesable? We wanted to answer this question.
What it does
Shows you what the transfer fee looks like for car purchases
How I built it
We built it with Python and Git.
Rachel scraped some data (manually) from the CarMax website first with Beautiful Soup. Then, Brian read the CSV data into the driver.py file. Andrew started working on linear regressions to approximate transfer fee. Rachel made a rough mock-up of the GUI in Tkinter. Andrew and Brian tried to scrape data from CarMax's private API with urllib. Afterwards, they got the plot working for the linear regression, added arrow drawings to the map, got the list of cars to display, and added the values for state codes in the menubar.
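The regression step can be sketched without any dependencies (the team used matplotlib for plotting; this is a plain ordinary-least-squares version of the same idea, and the sample fee data below is hypothetical):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y ~= slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical sample of (distance in miles, transfer fee in $) pairs.
sample = [(0, 0), (250, 249), (500, 499), (1000, 999), (2000, 1999)]
slope, intercept = fit_line([d for d, _ in sample], [f for _, f in sample])

def estimate_fee(miles):
    """Estimate a transfer fee, clamped to the FREE-to-$2000 range
    stated in the challenge description."""
    return min(2000.0, max(0.0, slope * miles + intercept))
```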
Challenges I ran into
Andrew: could not access CarMax through python urllib, ended up just using a manually collected sample.
Brian: regex
Rachel: Tkinter. Scraping from CarMax is also hard because there is only a private API, and it was easiest to scrape an offline sample. However, it occurred to me retroactively that the smartest thing to do was probably pull data from the API and then get shipping data.
Accomplishments that I'm proud of
andrew: eating numbers (linear regression)
brian:
rachel: merge conflict resolution. also the guis pretty rockin'. remembered Pythagorean theorem
What I learned
andrew:
brian:
rachel: UI is just finding templates and themes and frameworks and then using them. so easy.
What's next for Transfer Fee Analysis for Cars
We would like to harvest a better dataset, and/or to pull data straight from the site/scrap from URLs of each car. In conjunction, we would also like the data set to include all CarMax locations, and not just generalize "location" by state. We also want to implement a histogram display by location and shipping cost to help the user better understand the data and options for car purchases. Additionally, we also want to filter for cars by criteria, display all viable options on the map, and more ways to compare car options.
Built With
beautiful-soup
math
matplotlib
python
tkinter
tkk
Try it out
github.com | Feesable | computes and visualises transfer fees for carmax vehicles. | ['Brian Wang', 'Rae Wu', 'Andrew Zhang'] | ['CarMax - Visually narrow down vehicle transfer fees'] | ['beautiful-soup', 'math', 'matplotlib', 'python', 'tkinter', 'tkk'] | 7 |
10,372 | https://devpost.com/software/costar-portal | Final Prototype
ios iteration
We talked about problems in cohabiting spaces. One issue that came up consistently was the way deliveries are handled during COVID-19, and having to meet the guidelines for things like social distancing and contactless deliveries.
The app simplifies the way instructions are given to delivery drivers. Instead of writing down where your apartment is exactly located or the access code to your community, the app provides a one-time code for the delivery driver, who can also receive an instructional video of where your apartment is with any necessary information.
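Generating the single-use driver code is straightforward with a cryptographic random source; a minimal sketch (the six-digit format is an assumption, since the writeup doesn't specify one):

```python
import secrets

def one_time_code(n_digits=6):
    """Generate a single-use numeric access code for a delivery driver.
    Six digits is an assumed format; secrets gives unpredictable
    codes, unlike the random module."""
    return "".join(secrets.choice("0123456789") for _ in range(n_digits))
```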
We started out by building a prototype in XD with a native Ui kit. From there we began developing the app in android studio to test out how it would work.
Challenges we ran into were not being experts in app development but with the help of two of our teammates, we managed to put together some of the pages necessary to showcase how the app would work.
It was a really great learning experience for all of us. We got to learn a lot about how native apps can be built and share how the design process can lead to a product.
We learned a lot about team dynamics and communicating ideas as well as being open to how those ideas work together.
What's next for CoStar Portal is refining the prototype and possibly testing it in order to design different iterations.
Built With
android-studio
java
xd
Try it out
xd.adobe.com | CoStar Portal | Making deliveries in gated communities simpler. | ['Tyrone Frye Jr., MPI', 'Jasmine Meade', 'Carlos Jimenez'] | ['CoStar - Best Hack for Group Living in 2020'] | ['android-studio', 'java', 'xd'] | 8 |
10,372 | https://devpost.com/software/bridge-svbyia | Terminal output for our natural language processor
Inspiration
2020 has been an incredibly divisive year. From riots in the street to police brutality to the death of two iconic superstars of the black community, this year has left us feeling raw, exposed, and really, really, tired. With what might be perhaps one of the most controversial elections coming up, my teammates and I devised an application that might be able to encourage everyone to play just a little nicer and at least listen to someone else's opinions once in a while.
What it does
Bridge is a web application that focuses on education. We do this primarily by connecting you with people who hold opposing viewpoints on sensitive topics like universal health care, immigration laws, government spending, and more. We've implemented a chat room that encourages thoughtful discourse and automatically detects and bans hurtful messages that lack any reasonable or logical basis. While we do plan to diversify the different learning opportunities on Bridge, one additional feature we have already implemented is a news feed fitted with warnings about the overuse of emotional jargon and the presence of hateful speech, via the detection of vulgarity. To sum it up in a sentence, Bridge creates a safe and easy way for people to grow intellectually, allowing for honest conversation without the need to resort to violence or degradation.
How We built it
The tech that ties the entire site together is undoubtedly our unique natural language processing algorithm. Whether that's in determining people's views on certain issues, detecting overuse of pathos in writing, or preventing the sh*tposting of people who don't agree with you, we faced the challenge of having to write an algorithm that knew the difference between respectful debate and straight-up tantrums. To do this, we trained an AI to weed out certain words (mainly swear words), understand that certain phrases represent keywords that have more weight in determining the overall weight of the sentence, and recognize common structures in sentences that could result in otherwise misleading sentiment. Overall, our AI was fairly successful, able to detect all cases of profanity, personal insults/baseless accusations, and more. Of course, while the AI is not always completely accurate, especially for longer bits of text, my team and I are very happy with how it turned out in 24 hours. Outside of the NLP algorithm, however, we had to work through issues like web scraping, matchmaking, frontend design, and frontend/backend integration.
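The weighting idea described above can be sketched in a few lines. To be clear, the word lists and weights here are hand-written placeholders, not the trained model the team built:

```python
# Placeholder vocabulary -- the real model's word lists and weights
# were trained, not hand-written like this.
BANNED = {"idiot", "stupid"}
WEIGHTS = {"because": +2, "evidence": +2, "always": -1, "never": -1}

def message_score(text):
    """Return None to ban the message outright, otherwise a civility
    score where higher means more reasoned discourse: keyword phrases
    carry more weight than ordinary words."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    if any(w in BANNED for w in words):
        return None  # profanity / personal attack -> auto-ban
    return sum(WEIGHTS.get(w, 0) for w in words)
```

A message leaning on reasoning words scores positively, while absolutist phrasing drags the score down.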
Challenges I ran into
This brings us to the issue of challenges, which to say the least, were plentiful in this project. Mainly, the biggest challenge was time for us. If we had another 24 hours, we're confident that we could develop a seamless web application and possibly even add a couple more features here and there. However, this time, after grinding through writing an AI and setting up the backend and API structure, my group and I really struggled with both getting enough time and enough energy to properly develop the site's frontend. This is definitely obvious in terms of how the user experience drops off significantly between the login/registration and the rest of the site, and given more time, definitely an aspect we would work to perfect.
Accomplishments that We're proud of
Writing out the NLP algorithm
Setting up Firebase and working with React for the first time
Actually setting up proper abstraction using Flask for our API endpoints
What We learned
Definitely a lot about natural language processing... Previously, our projects had always been just implementing some other dude's library that we found from Github, but actually building and developing our own this time was both really fun and challenging! Abstraction and OOP really helped us later on in the project, but alas, it was too late to save the site's frontend :((. We also learned to divide up tasks better, as in the beginning we would basically have one person working while the other two messed around. We certainly paid the price for that at 3 in the morning.
What's next for Bridge
Improving the frontend!!! No matter how great our idea is, if the UI isn't appealing, then people probably won't use it. We're really looking forward to making this much more user-friendly in addition to being rather powerful on the backend. What's more, right now the site is definitely really focused on user-to-user interaction, which we understand isn't ideal for everyone. Something that we can work on is definitely increasing the number of experiences people can choose on our site and thereby learn more from by doing so.
Built With
bootstrap
express.js
firebase
flask
google-cloud
javascript
node.js
python
react
rest-api
scikit-learn
Try it out
github.com
github.com | Bridge | In a year that feels like we're burning alive, Bridge hopes to bring everyone a bit closer together | ['Dylan Feng', 'Joshua Lorincz', 'Jonny Chang'] | ['Federal Reserve Bank - Collaboration', 'Altria - Most Innovative and Insightful Use of Virtual Agents'] | ['bootstrap', 'express.js', 'firebase', 'flask', 'google-cloud', 'javascript', 'node.js', 'python', 'react', 'rest-api', 'scikit-learn'] | 9 |
10,372 | https://devpost.com/software/seniorguidetotech | Domain we registered/website
Inspiration
We wanted to try to explain the use of technology to people who did not grow up with it. Seniors are already at a higher health risk than younger members of society; it is tough for them to have to learn to navigate the internet as well.
What it does
We split this up into four main categories we found relevant: financial, social, entertainment, and shopping. Each of these is divided up into subcategories by specific web apps. Each subcategory explains the fundamentals of using that resource as well as very basic troubleshooting advice.
How we built it
Using Vue and the Vuetify library, we developed a navigation system for the site. We each took a category and fleshed out explanations and troubleshooting guides.
What we learned
For three of us, this was our first experience with front-end programming, as well as using a GitHub repository to work on code with other people.
Challenges we ran into
Since, for most of us, this was our first time using the JavaScript framework Vue (and our first time using JavaScript in general), the majority of our time was spent learning how to use it and designing the website; we had a lot of small errors that we had to work past. Additionally, there was something of a learning curve when it came to learning how to use git.
What's next for ASeniorGuideToTech
Here were two challenges we planned on tackling, but did not have sufficient time to do in the hackathon:
Ideally, we wanted to implement a chatbot feature within our troubleshooting sections; the idea was that users could directly input their tech questions and the bot would help them through it, but time didn't allow for that. Implementing the troubleshooting chatbot is the logical next step for this project.
This idea was inspired by Altria's "Most Innovative and Insightful Use of Virtual Agents" challenge.
We were also considering making simple, but effective data analysis products for the elderly to become more educated and aware of the current COVID-19 status. This idea was inspired by Cotiviti's "Best Application of ML geared towards the Covid-19 Crisis" challenge.
Just because the hackathon is over does not mean this project has to stop here. As with many hackathon products, if not all, our product is simply a prototype of its true potential. What's next should be something we could look forward to in the future, and something we could ambitiously try if we ever come around to doing this project again.
Built With
html
javascript
vue
Try it out
github.com | ASeniorGuideToTech | Seniors are often left at a disadvantage in our technological world. The purpose of this guide is to try and put them on equal footing by explaining what younger people understand by intuition. | ['Joseph Lee', 'Steven Jia', 'Robert Phillips', 'Amanda Michel'] | ['Best Domain Registered with Domain.com', 'Capital One - Best Financial Hack'] | ['html', 'javascript', 'vue'] | 10 |
10,372 | https://devpost.com/software/nltkparser | Byte for Byte Copy of Files
Binary Parsing of File for Key Phrases
Binary Parsing of Raw Physical Disk for Key Phrases
Binary Parsing of Raw Physical Disk for Key Phrases
(Best Digital Forensics Hack)
Inspiration
When trying to locate data through logical or physical approaches, it can be difficult to decide which key words and phrases to use when parsing through data sets. Some tools provide similar ideas but require large amounts of time to prepare and to run background tasks due to their overall robust nature. This introduces the need to parse a device in a logical or physical manner with less extra computing and fewer additional tasks/preprocessing steps involved.
What it does
The purpose of this program is to use the Natural Language Toolkit to analyze a set of text.
The NLTK provides sanitized and specifically grouped text segments.
These text segments allow the program to search across raw data for phrases and key words.
The program can parse files and physical disks in binary format, block by block, in search of the data.
When the data is found, the program outputs the block index and the key word that was found to a text file.
It is essentially a raw preview-and-parse program for logical and physical data.
This becomes exponentially more useful when running words found in images against hundreds of gigabytes of data, or even a couple of terabytes. The program was tested on one large text sample broken into several text files.
How I built it
I used Python as my base programming language. I then utilized the Gooey library which is a UI wrapper for Python's ArgParse library and provides visual verbose functionality.
I then broke down the sub requirements into:
installation of NLTK and corresponding libraries
generate and obtain test data
build algorithm/functionality of Natural Language Processing NLTK to read from a text file and parse accordingly
Build algorithm and functionality to check and read physical disks and logical files in binary format.
Create robust and simple UI to allow the user to complete their task
Combine NLTK generated data set and binary parsing of data to sequentially iterate block by block through data
Create a simple logging/reporting feature for the user to look back on when the task is complete, so they can make on-the-fly triage decisions.
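The block-by-block scan and hit report from the steps above can be sketched as follows. The 4096-byte block size is an assumption (sector-aligned sizes like 512 or 4096 are typical), and the keyword list would come from the NLTK step:

```python
def scan_blocks(path, keywords, block_size=4096):
    """Read a file (or raw device opened in binary mode) block by
    block, decode each block leniently, and yield (block_index,
    keyword) for every hit -- the pairs the report would log.
    Limitation: a keyword that straddles a block boundary is missed."""
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(block_size)
            if not block:
                break
            text = block.decode("utf-8", errors="ignore").lower()
            for kw in keywords:
                if kw.lower() in text:
                    yield index, kw
            index += 1
```

On Windows, the same function can be pointed at a raw physical disk path (e.g. `\\.\PhysicalDrive0`) when run with sufficient privileges.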
Challenges I ran into
Physical access to disks and binary parsing of files and physical disks
I/O issues with opening, closing, and writing to and from files
Parsing arguments given by user and handling exceptions
Accomplishments that I'm proud of
Accessing and Parsing Raw Physical disks in Python with minimal overhead
Using NLP and NLTK to generate clean key words and phrases
What I learned
Using Natural Language Processing (NLTK) to read, tokenize, and sanitize text (stop words)
Combining I/O of raw physical disks
Reading binary block by block
Decoding binary to readable text
Improving error handling for binary I/O
What's next for NLTKParser
Improved performance through multithreading
Improved NLP and NLTK implementation
More Robust Reporting features
Built With
gooey
machine-learning
natural-language-processing
nltk
python
windows
Try it out
github.com | NLTKParser (Best Digital Forensics Hack) | Parse Logical and Physical data containers for key words/phrases generated through the use of Natural Language Processing using the Natural Language Toolkit | ['Midwest Coder'] | ['Cipher Tech - Best Digital Forensic Related Hack'] | ['gooey', 'machine-learning', 'natural-language-processing', 'nltk', 'python', 'windows'] | 11 |
10,372 | https://devpost.com/software/invoiceimageprocessor | Home view
Take picture view
Processing view
Inspiration
This app tries to solve the 'MarginEdge' challenge in RamHacks 2020, which was to develop a mobile application that can take and process pictures. In order to do this, we propose a flexible Android app with the following main characteristics:
First, our app allows the user to take pictures of invoices/receipts and process them in order to analyze the provenance of the receipt (store name), date, total cost, and number of items bought. We use OpenCV and machine learning models to process the pictures. OpenCV is used for preprocessing. The ML models help us classify the image content and also extract all the text information.
Second, all the processed information is saved in Firebase (Firebase Cloud Storage for the images and Firebase Database for the extracted information). We also maintain a local database (on device) using SQLite.
Third, the app provides a home screen that presents all the saved information as a list of items. This way you can keep track of all previous invoices/receipts or whatever image processed.
The main challenges we faced in this project were not only the need to propose a flexible camera application that uses machine learning models, but also learning for the first time how to use ML models and integrate OpenCV on Android.
Extracting the text from the images in particular proved to be a difficult task. We used pre-existing ML models which were not trained exclusively for extracting information from invoices/receipts. With more time we would like to collect our own dataset so our ML models would be more efficient.
Built With
android
firebase
java
mlkit
opencv
sqlite | InvoiceImageProcessor | InvoiceImageProcessor is an Android mobile app that allows you to take pictures from receipts/invoices to process them and extract automatically your spend information or more than that! | ['Paolo Cachi', 'Ronaldo Cachi Delgado'] | ['MarginEdge - mobile app with CV'] | ['android', 'firebase', 'java', 'mlkit', 'opencv', 'sqlite'] | 12 |
10,372 | https://devpost.com/software/safr | https://streamable.com/ud18mp
Inspiration
None of us had previous experience in the OpenCV library and cloud functionalities, so we were very excited to explore these tools.
What it does
A mobile application created using Android Studio, designed to provide a secure location to upload invoice sales data and any other relevant files into a centralized database from which MarginEdge can access and analyze all of their consumers' data.
How we built it
We built it using Java and Android Studio and used Google Cloud and FireBase for all authentication and database features.
Challenges we ran into
Given our lack of background in cloud technologies, there was a very steep learning curve in our use of cloud and FireBase technologies and all the intricacies that went into creating a dynamic flow of information between our app and the Google Cloud server.
Accomplishments that we're proud of
As mentioned in the challenges we faced, our team's lack of background in Google Cloud made FireBase quite a daunting task. However, we came out of RamHacks with a viable product that works and connects to a centralized database, which is not only an accomplishment but also a skill we can take to future projects and hackathons.
What we learned
We learned a lot about library integration and FireBase and all that it has to offer, from Cloud services to APIs, in addition to how to implement it using HTTP requests.
What's next for SAFR
Given more time, we hope to fix up the OpenCV and Vision API functionalities of our application, which would not only warn the user when the image was not taken properly but would also allow our app to use AI to read the contents of each image and upload them into the database for easy organization and access of invoices.
Built With
android-studio
cloud
firebase
google
java
Try it out
github.com | SAFR | SAFR (Scanning Application for Restaurants) is a mobile tool for restaurants to upload invoices to a centralized database from which MartinEdge can access and analyze all of their consumers' data. | ['Gregory Johnson', 'Meredith Seaberg', 'samsoncr', 'Tara Ram Mohan'] | ['MarginEdge - mobile app with CV'] | ['android-studio', 'cloud', 'firebase', 'google', 'java'] | 13 |
10,372 | https://devpost.com/software/mapbay-scan | Landing Page
Initial Search Results
Map Display After Selection
Some Places are Looking a bit Sus...
Inspiration
While living through the COVID-19 pandemic, the hardest thing for some people is finding joy while staying indoors. Although isolation and social distancing are the advised lifestyle for many, there is no doubt that a significant number of people desire to enjoy the outdoors once again, without having to worry about being infected.
So, in brainstorming ways to make this a reality while keeping the at-risk population safe, we wondered if there was a way for people to find safe places that were free from reports of infection. If there were such a way to discover clean sites with minimal difficulty, it would allow those deprived of nature to find ways to responsibly let themselves outdoors again, regardless of their age.
Given that according to medical professionals and their
journals
it is considered safe to go outside as long as people stay away from each other, we decided to create a website that would aid people in their efforts and promote responsible excursions. By finding out where reports of COVID-19 infections are and seeing how many people have recorded themselves as having visited specific locations, we created a tool with which people can make safe, informed decisions about where to go.
What it does
Our web application, Virus Sus, takes information provided by Google's Open COVID Data and provides location-based reports layered over Google Maps search results. When users search for places they would like to visit (in plain English, so nothing complex is required), the application returns a list of places nearby and reports how many COVID-19 cases have occurred around them. On top of that, users can "check in" the size of their group at specific locations they'd like to visit, so that when others look at the same places, they can make informed decisions and shy away from danger by looking at the records of high-traffic locations. Additionally, there is an optional button that filters out high-visitor locations and instead returns results that are low-risk.
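The low-risk filter amounts to thresholding each place on its nearby case count and recorded check-ins. A minimal sketch of the idea; the field names and thresholds are illustrative, not the app's actual values:

```python
def low_risk(places, max_cases=50, max_checkins=20):
    """Keep only places under both the case-count and check-in thresholds."""
    return [p for p in places
            if p["nearby_cases"] <= max_cases and p["checkins"] <= max_checkins]

# Toy data shaped like the cached place records
places = [
    {"name": "Riverside Park", "nearby_cases": 12, "checkins": 8},
    {"name": "Downtown Plaza", "nearby_cases": 340, "checkins": 95},
]
print([p["name"] for p in low_risk(places)])
```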
How we built it
Tech Stack
The project was built using a serverless architecture. The front end was created with the ReactJS framework, using client-side API calls to other services as well as the FireStore database provided by Google. This allowed for a lightweight, high-performance application that requires minimal technical overhead for both the user and the provider of the service. The FireStore database mainly caches locations that users have searched and records how many users have used the "check-in" feature to inform others about potential risks and dangers.
Infrastructure
For a streamlined deployment and hosting process, the project is hosted on the Google Cloud Platform, which takes care of most of the infrastructure requirements in a single step. By combining this platform with other Google Cloud products such as Google Maps and Google's Open COVID Data, it was possible to keep the project centralized in a fashion that made collaboration, interconnection, and maintenance straightforward.
Challenges we ran into
Donghyun (Daniel) Park
A challenge that I had to face was the different nature of using FireStore as the database, as I had no prior experience with it. While it was initially easier to set up compared to an SQL-based DB, correctly utilizing it and its customization settings to fit our needs was challenging. Making sure that the front end correctly reflected the data being stored in the DB was also quite unique, and it was a valuable learning experience beyond typical SQL databases and cloud hosting.
Lavanya Roy
Most of the challenges I faced during the course of the hackathon were related to how to actually work with a NoSQL database. All my previous database knowledge was not really useful, so I had to do quite a bit of reading. Thanks to my teammate Dan, especially for being patient with a first-timer like me, I was able to add Firebase to our web app and then write ReactJS code that increments the visitor count every time someone visits a place, in order to provide real-time crowd data and keep the database updated. The toughest part for me was figuring out how the queries could be handled and how to update the Firestore database.
I certainly feel I am taking away a lot of new things I learnt from this Hackathon and plan to implement them in the near future.
Minyoung Na
I think working with the Google Cloud APIs was pleasant and overwhelming at the same time. With such an abundance of resources, I felt like we could create almost any project within GCP. Yet it was very difficult to pick different APIs and connect them together, just due to their sheer volume.
Working with different datasets was really tough. It took us a while to figure out how to use the location data from the Places API in conjunction with the Open COVID dataset. For instance, the states in the Places API were abbreviated (NY, NJ, etc.), whereas the Open COVID dataset used non-abbreviated names (New York, New Jersey). We had to manually map these for the data to sync up. It was very rewarding in the end, but it took us a long time.
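The fix described above boils down to a lookup table that maps Places-style abbreviations onto the full state names used by the Open COVID dataset, giving the two sources a shared join key. A small sketch, not the team's actual parsing code:

```python
# Abbreviation -> full-name map; only a few states shown for illustration
ABBREV_TO_FULL = {"NY": "New York", "NJ": "New Jersey", "CA": "California"}

def cases_for_state(place_state, covid_rows):
    """Join a Places API state abbreviation against COVID rows keyed by full name."""
    full_name = ABBREV_TO_FULL.get(place_state, place_state)
    return sum(row["cases"] for row in covid_rows if row["state"] == full_name)

covid_rows = [{"state": "New York", "cases": 120}, {"state": "New Jersey", "cases": 80}]
print(cases_for_state("NY", covid_rows))
```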
Accomplishments that we're proud of
Learning new technologies (Firebase/FireStore, React, Server-less Architecture, Google Cloud Platform)
Creating a fast, simple but reliable UI
Ensuring intuitiveness within the UI to accommodate the less technologically savvy users.
What we learned
We learned that being able to enjoy the outdoors might not be something we can take for granted again, and that responsible outdoor behavior and limited social interaction are key to a quicker recovery for society as a whole. If you're not an at-risk person and can tolerate social distancing, it is far better for everyone if you stay indoors.
What's next for Virus Sus
Creating a better representation of the COVID-19 cases data with more precise geographical representation.
Creating a recommendation system based upon the number of checked-in users and case numbers at specific locations
Improving user functionality such as history, bucket-list, and other customization features.
Built With
firebase
firestore
google-cloud
google-maps
javascript
node.js
react
Try it out
plated-mechanic-290713.uc.r.appspot.com
github.com | Virus Sus | A intuitive tool to find safe outdoors places to visit for the elderly and young alike. | ['Minyoung Na', 'Lavanya Roy', 'Donghyun Park'] | ['Best use of Google Cloud - COVID19 Hackathon Fund'] | ['firebase', 'firestore', 'google-cloud', 'google-maps', 'javascript', 'node.js', 'react'] | 14 |
10,372 | https://devpost.com/software/vehiclebuy | Vehicles (Sorted by transfer fee from San Francisco)
Scanning Driver's License (Computer Vision and Machine Learning)
Car Details View Controller
Inspiration
Have you ever bought a car? If so, you know how frustrating it is to go back and forth to your dealership only to verify a handful of documents (such as a Driver's License) that we think could obviously be done online (and yes, we've been in such a situation, and we hate it!). Thus, we reimagined a system to solve that problem, and that's how the idea of VehicleBuy was born.
Click here to watch the pitch deck
What it does
VehicleBuy reimagines the way we purchase used cars by providing insightful car listings from
CarMax's
colossal vehicle inventory, intuitive data querying, and simplifying the document verification process using Machine Learning and Computer Vision.
How we built it
We did Web Scraping from
carmax.com
to collect cars and dealership data and fetched it to our
Datastax Astra
database, which we connected to our iOS app. By accessing both the live user location and the dealership coordinates, we used
MapKit
to determine the shortest possible routing and measure the distance to calculate the transfer fee (we use this to visually narrow down car transfer fees as well!).
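MapKit handles the actual road routing, but the fee calculation described above can be sketched with a straight-line (haversine) distance and a fee schedule. The tier amounts below are invented for the example, not real transfer fees:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between the user and a dealership, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3956 * 2 * asin(sqrt(a))

def transfer_fee(distance_miles):
    # Hypothetical tiers: free under 50 miles, then a base fee plus per-mile cost
    if distance_miles <= 50:
        return 0.0
    return 99.0 + 0.4 * (distance_miles - 50)

# San Francisco to Los Angeles, roughly 340 straight-line miles
print(transfer_fee(haversine_miles(37.77, -122.42, 34.05, -118.24)))
```

The app sorts listings by this computed fee, so changing the user's location reorders the results.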
To optimize camera performance, our iOS mobile app is written in
Swift
to create a smooth native experience for our users. By utilizing Computer Vision, users can scan documents with the device's camera and provide real-time dimensional feedback guiding lines using
VisionKit
to help users scan properly. Using the scanned image, we used
Google Firebase ML Kit
to recognize texts from the given document image, which we used for the verification process and to reject unclear images.
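The rejection step can be thought of as a sanity check on the recognized text: if the OCR returns too little text or the expected license fields are missing, the scan is sent back to the user. A sketch of that idea in Python (the required keywords and length threshold are assumptions for illustration, not the app's Swift code):

```python
REQUIRED_FIELDS = ("DRIVER LICENSE", "DOB", "EXP")

def scan_acceptable(recognized_text, min_chars=40):
    """Reject blurry scans (too little recognized text) or scans missing required fields."""
    text = recognized_text.upper()
    if len(text) < min_chars:
        return False
    return all(field in text for field in REQUIRED_FIELDS)

good = "California DRIVER LICENSE  DOB 01/02/1990  EXP 01/02/2025  DL A1234567"
print(scan_acceptable(good), scan_acceptable("blurry"))
```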
Accomplishments that we're proud of
Despite the difficulty of not being able to meet each other in person, we were able to coordinate and deploy a polished and useful application. We were able to utilize both Computer Vision and Machine Learning to provide real-time document dimensional feedback and text recognition.
Challenges we faced
Since
CarMax
has no publicly available APIs, we spent significant time web scraping to retrieve the car listing items and cleaning the data to suit our app's needs.
What we learned
We should've planned regular checkpoint meetings to keep ourselves coordinated. A significant amount of time was wasted in waiting for another person to respond. Every mistake (including bug fixing and miscommunication) cost us a lot more time compared to if we were in an in-person hackathon.
What's next for VehicleBuy
We are planning to expand the database with other car dealers. To expand our reach, we are going to extend support to Android and desktop (website). We will also try to utilize CV and ML for vouchers, coupons, etc.
Built With
computervision
datastaxastra
firebase
ios
mapkit
mlkit
swift
visionkit
Try it out
github.com | VehicleBuy | Buying a used vehicle has never been easier using Computer Vision and Machine Learning! | ['Manuel Stefan Christopher', 'Michael Winailan', 'Matthew Winailan'] | ['Best Use of DataStax Astra'] | ['computervision', 'datastaxastra', 'firebase', 'ios', 'mapkit', 'mlkit', 'swift', 'visionkit'] | 15 |
10,372 | https://devpost.com/software/carmaps | Screenshot of Map and Parameter Sliders
Graph of Sample Data and Best Fit Line Parameters
Inspiration
One of our members is in the market for a new car, and he was looking for the fastest acceleration he could find under his budget, but all of the cars that were in his sweet spot were hundreds of miles away, and incurred a high transfer fee. When we saw the opportunity offered to us through Carmax, we knew we had to take advantage.
What it does
CarMaps scrapes the transfer fees from the Carmax website and displays the data on a map using concentric circles to divide distances into zones. The website features an interactive slider that allows users to modify the constraints of the map to focus on the distances they are primarily interested in. For each ring, the website displays min, max, median, and average for all transfer fees as well as the line of best fit for the relationship between distances of cars and their transfer fees.
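Bucketing each car's fee into its distance ring and summarizing is straightforward with the standard library; the ring boundaries below are illustrative, since the site lets the user move them with sliders:

```python
from statistics import mean, median

def ring_stats(cars, boundaries=(100, 250, 500)):
    """Group (distance_miles, transfer_fee) pairs into concentric rings and summarize."""
    rings = {b: [] for b in boundaries}
    for dist, fee in cars:
        for b in boundaries:          # assign to the innermost ring that contains it
            if dist <= b:
                rings[b].append(fee)
                break
    return {b: {"min": min(fees), "max": max(fees),
                "median": median(fees), "avg": mean(fees)}
            for b, fees in rings.items() if fees}

cars = [(40, 0), (90, 99), (200, 199), (450, 499)]
print(ring_stats(cars))
```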
How we built it
Front end:
For the user interface, we used HTML and CSS to develop the framework of our website, and we used JavaScript along with Google Maps API, Geocoding API, Places API, and Chart JS, to integrate the data on a Google Maps overlay. We also used Python and Flask to develop a Python server to run the website locally. Then, we also configured Google Cloud Storage Buckets, Google Cloud Load Balancers, and a domain name from Domain.com to host the website serverlessly on Google Cloud at
http://www.carmaps.tech
.
Back end:
On the back end, we used Selenium to scrape transfer fee data from the Carmax website, and we used SciPy and NumPy with Python to calculate the linear regression of distances vs transfer fees of the cars. We used Google Cloud Functions to host this process serverlessly, which was used to call the data to the website.
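The regression itself is ordinary least squares; here is a dependency-free sketch of the closed-form line that `scipy.stats.linregress` would return for distance-vs-fee data (toy numbers, not scraped data):

```python
def fit_line(xs, ys):
    """Closed-form least squares: slope and intercept of fee vs. distance."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Toy data lying exactly on fee = distance - 1
distances = [100, 200, 300, 400]
fees = [99, 199, 299, 399]
print(fit_line(distances, fees))
```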
Challenges we ran into
The biggest challenge we ran into was with collecting the data. Since there was no official API provided during the competition, we had to scrape the data manually. This process was relatively time consuming, so we had to use techniques such as caching to make the scraping process as fast as possible. Another issue we dealt with was configuring our Google Cloud Functions. Since all of us were new to the services of Google Cloud, we had to take time to learn about the features Google Cloud offers, and figure out the best way to implement these features with our project.
Accomplishments that we're proud of
One big accomplishment we had is that we were able to host our website serverlessly using Google Cloud Storage Buckets and Google Cloud Functions. We were also able to tackle the challenge of obtaining all of our data and visualizing on a website.
What we learned
We learned a lot about the services offered by Google Cloud and how we can implement them in our projects. These skills will be helpful for us in the future as cloud technologies continue to develop. We were also able to improve our skills in data science with the NumPy, SciPy, and scraping methods we implemented.
What's next for CarMaps
Although this site is currently flexible, it doesn't efficiently find the transfer fees for all cars from a certain origin. We could choose to implement a database that stores our data and caches it for future sessions. We could also implement other statistical techniques and make a more general model of the CarMax transfer fee estimate with distance and time as input variables.
Built With
css3
google-cloud
google-geocoding
google-maps
google-storage
html5
javascript
Try it out
www.carmaps.tech
github.com | CarMaps | Visualizes transfer costs for Carmax cars overlaid on an interactive map hosted serverlessly on Google Cloud | ['Kabir Menghrajani', 'Sanjay Srikumar', 'Sagar Saxena'] | [] | ['css3', 'google-cloud', 'google-geocoding', 'google-maps', 'google-storage', 'html5', 'javascript'] | 16 |
10,372 | https://devpost.com/software/carmax-transfer-price-selector | Inspiration
Shopping for a car is a big decision, second only to buying a home. Getting a great car for a great price is hard. Too often dealerships hide fees and take advantage of unknowing consumers. Something that should be fun feels more like a fleecing. What if we could change the experience of shopping for a car?
Partnering with Carmax, our challenge was to find a way to help car buyers find the car of their dreams nationwide. Carmax has over 35,000 vehicles nationwide. That's a lot of vehicles! Car buyers can order vehicles from any Carmax dealership in the country; from Maine to California, a shopper can ship a vehicle and test drive it at their convenience. Shoppers may have to pay a fee for shipping and transporting their potential vehicle, and a surprise fee can ruin the total experience and the customer relationship. How can we empower car shoppers to make the best choice available?
This project is submitted under the Carmax Challenge, Best Use of Google Cloud, for the RamHacks 2020 Hackathon.
What it does
Our tool helps car shoppers select a vehicle by their physical location and gives them the option of seeing the transfer fees associated with that vehicle, and gives them alternatives that may suit their needs better. Whether it’s shipping directly to the consumer or the lot down the road, we make the car buying experience better.
Giving a best pick recommendation, and alternative options, with the added capability to see the closest dealership that gives buyers a flexible way to shop.
How I built it
Inspired by the design of the current UX/UI of Carmax and seeing how others allow users to shop by location, we created high fidelity wireframes in Adobe XD, building a custom design library that replicates Carmax’s designs.
We scraped real data from Carmax dealerships and used Python, Google Firebase, the Google Maps API, and Flask to calculate a total transfer fee based on real distances. We created geolocation data using real dealerships and show real vehicle details. We also created an authentication system.
We used React and Flask to manage the front end.
Challenges I ran into
Hardcoding images using the Material UI API was a lot harder than we expected
Plugins with XD
Time
Accomplishments that I'm proud of
Great teamwork
What I learned
Some of our main goals at the start of this hackathon were to practice and understand the Flask package in Python and to develop our UI/UX skills. We also learned plenty about the process of full-stack development, using Google APIs, and effectively working together as a team in an online environment. Our team had never worked together before, and we found interesting ways to stay motivated and collaborate effectively.
What's next for Carmax Transfer Price Selector
Continue to improve our usability and filtering system to allow alternative sorting options.
Integrate the Google Maps JavaScript API into the frontend to provide users with visualization.
Continue to improve user experience by presenting car selection based on total price with no hidden fees.
Built With
adobe
firebase
flask
google
google-geocoding
google-maps
javascript
material
postman
python
react
xd
Try it out
github.com | Carmax Transfer Price Selector | Want it, find it, drive it | ['Adam Gainer', 'Lawangin Khan', 'Thinh Vo', 'Muskan Bansal'] | [] | ['adobe', 'firebase', 'flask', 'google', 'google-geocoding', 'google-maps', 'javascript', 'material', 'postman', 'python', 'react', 'xd'] | 17 |
10,372 | https://devpost.com/software/zakat-finance-1ykiwe | Homepage design
Compound API to deposit coins and earn
UMMA and ZAKAT token farm
Wyre API integration for onboarding and KYC
Inspiration
Revolutionizing Online giving, earning and generating value for those in need
What it does
Allows digital payments, token farming to generate value, governance tokens to delegate funds to charities, and earning via the Compound protocol
How we built it
React, Nodejs, smart contracts, Wyre API
Challenges we ran into
Integration of separate systems
Accomplishments that we're proud of
UI, Logo and Design
What we learned
Front end design, Integration of smart contracts and Wyre API
What's next for Zakat Finance
Finish Integration of compound, Wyre, and Uniswap to launch ecosystem
Built With
css
node.js
react
sendwyre
solidity
Try it out
github.com | Zakat Finance | Revolutionizing digital payments | ['Shadman Hossain', 'coding kid', 'Tauseef Manj', 'Tahmid Biswas'] | [] | ['css', 'node.js', 'react', 'sendwyre', 'solidity'] | 18 |
10,372 | https://devpost.com/software/cameraman | cameraman
Get this camera to work
Getting Started
This project is a starting point for a Flutter application.
A few resources to get you started if this is your first Flutter project:
Lab: Write your first Flutter app
Cookbook: Useful Flutter samples
For help getting started with Flutter, view our
online documentation
, which offers tutorials,
samples, guidance on mobile development, and a full API reference.
Built With
dart
kotlin
objective-c
swift
Try it out
github.com | cameraman | Mobile Application that can take picture | ['Rohit Karnati', 'Francisco Perez'] | [] | ['dart', 'kotlin', 'objective-c', 'swift'] | 19 |
10,372 | https://devpost.com/software/smartfridge-reduce-food-waste | SMART FRIDGE
Mange Your Fridge Smarter
All Ingredients in 1 App
Meal Planner - Plan Ahead - Save Time
Auto-generate Shopping List
Inspiration
In the United States, food waste is estimated at between 30-40% of the food supply (figure from the FDA). Our biggest inspiration stems from our concern for the environment and how simple lifestyle changes can make us more responsible about our consumption. The SmartFridge app is a solution that makes meal planning convenient, intuitive, and sustainable. We created a logistics app that lets users monitor their food resources with ease. By scanning a user's food inventory at home via picture input, the app classifies the user's food into categories, suggests cooking recipes depending on the available food, prioritizes food that will go bad soon, and sends out alerts once the user's fridge is running low or items are going to expire. With the aforementioned features, the SmartFridge app is an all-in-one solution for people to keep track of their fridge, enjoy a more diverse meal plan, and reduce personal food waste.
What it does
SmartFridge has 5 main tabs, which are "Scan", "Ingredient", "Meal Plan", "Shopping List", and "Profile" tabs.
Scan
: Scan images of food ingredients and add to inventory. Integrate image processing functionality through TensorFlow.js to optimize user input step.
Ingredient
: Let user easily monitor available food in their home kitchen.
Meal Plan
: Suggest recipes that match existing ingredients, bookmark favorite recipes, and manage a weekly meal plan.
Shopping List
: Auto-populated with the ingredients the user still needs for the meal plan. Integrates the Google Maps API to navigate to nearby grocery stores.
Profile
: A summary of the fridge's status: remaining capacity, a prediction of the number of days until the next shopping trip, the number of items expiring very soon, and management of the ingredients used in the week's meal plan.
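The prioritize-what-expires-first behavior behind these tabs can be sketched as a sort on days remaining plus a low-threshold alert (the real logic lives in the app's Redux/SQLite layer; the three-day alert window is an assumption):

```python
from datetime import date

def expiring_soon(items, today, alert_days=3):
    """Sort fridge items by days left and flag those about to expire."""
    ranked = sorted(items, key=lambda item: (item["expires"] - today).days)
    alerts = [item["name"] for item in ranked
              if (item["expires"] - today).days <= alert_days]
    return ranked, alerts

items = [
    {"name": "eggs", "expires": date(2020, 11, 10)},
    {"name": "milk", "expires": date(2020, 10, 27)},
]
ranked, alerts = expiring_soon(items, today=date(2020, 10, 25))
print([item["name"] for item in ranked], alerts)
```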
How we built it
We built a cross-platform mobile app using React Native and Expo. We used the spoonacular API to suggest cooking recipes, the Google Maps API to locate nearby stores, and TensorFlow.js for image processing. The back end of the app is managed through SQLite and Redux; its main function is to record the user's food resources and coordinate the different components of user interaction.
What we learned
React Native, JavaScript, Expo, spoonacular API, Google Map API, SQLite, Redux, TensorFlow.js
What's next for SmartFridge - reduce food waste
There are still many functionalities we want to add or optimize. We need to improve our front end for a better user interface and build our own cloud database to store more high-quality recipes and data about our users' preferences.
Built With
expo.io
google-maps
image-processing
javascript
node.js
react-native
redux
sqlite
tensor-flow
Try it out
github.com | SmartFridge - Reduce Food Waste | Eliminate Food Waste - Promote Better Health - Meal Planner - Quality Life | ['Nom Phan', 'Quang Luong', 'Blake Hieu Nguyen', 'Ari Nguyen'] | ['The Wolfram Award', 'The Benefits and Costs of Going Digital (Boomi, a Dell Technologies business)'] | ['expo.io', 'google-maps', 'image-processing', 'javascript', 'node.js', 'react-native', 'redux', 'sqlite', 'tensor-flow'] | 20 |
10,372 | https://devpost.com/software/hackathon1-934nkw | This project was bootstrapped with
Create React App
.
Available Scripts
In the project directory, you can run:
npm start
Runs the app in the development mode.
Open
http://localhost:3000
to view it in the browser.
The page will reload if you make edits.
You will also see any lint errors in the console.
npm test
Launches the test runner in the interactive watch mode.
See the section about
running tests
for more information.
npm run build
Builds the app for production to the
build
folder.
It correctly bundles React in production mode and optimizes the build for the best performance.
The build is minified and the filenames include the hashes.
Your app is ready to be deployed!
See the section about
deployment
for more information.
npm run eject
Note: this is a one-way operation. Once you
eject
, you can’t go back!
If you aren’t satisfied with the build tool and configuration choices, you can
eject
at any time. This command will remove the single build dependency from your project.
Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc) right into your project so you have full control over them. All of the commands except
eject
will still work, but they will point to the copied scripts so you can tweak them. At this point you’re on your own.
You don’t have to ever use
eject
. The curated feature set is suitable for small and middle deployments, and you shouldn’t feel obligated to use this feature. However we understand that this tool wouldn’t be useful if you couldn’t customize it when you are ready for it.
Learn More
You can learn more in the
Create React App documentation
.
To learn React, check out the
React documentation
.
Code Splitting
This section has moved here:
https://facebook.github.io/create-react-app/docs/code-splitting
Analyzing the Bundle Size
This section has moved here:
https://facebook.github.io/create-react-app/docs/analyzing-the-bundle-size
Making a Progressive Web App
This section has moved here:
https://facebook.github.io/create-react-app/docs/making-a-progressive-web-app
Advanced Configuration
This section has moved here:
https://facebook.github.io/create-react-app/docs/advanced-configuration
Deployment
This section has moved here:
https://facebook.github.io/create-react-app/docs/deployment
npm run build
fails to minify
This section has moved here:
https://facebook.github.io/create-react-app/docs/troubleshooting#npm-run-build-fails-to-minify
Built With
css
html
javascript
Try it out
github.com | hackathon1 | Sorted shipping fee (CarMax) | ['Juan Samuel Sugianto', 'Deonatan Dinata'] | [] | ['css', 'html', 'javascript'] | 21 |
10,372 | https://devpost.com/software/fundit-srit95 | fundIt
A platform that democratizes access to capital for small businesses via crowdfunding
Inspiration
Startup founders often lack the connections or the profits to get funding, and especially in a year full of uncertainties, many big investors are scared to invest in small businesses. Not all startups make millions of dollars in their early years.
Meanwhile, most people are not as rich but still want to invest. So we want to build a platform that benefits both businesses owned by people of color (the majority of which are quite small) and investors. Startups post video pitches to help investors make a decision, and investors can make an appointment with a business to learn about its future goals before investing.
What it does
fundIt is an app for small businesses to get crowdfunding from retail investors in exchange for equity.
Users can login and authenticate their credentials via Apple/Google/Email
Startups can post data such as PDFs, Images, and Text to supplement their crowdfunding campaign and help investors to make investment decisions
Investors can browse all campaigns via a Tab view
The most unique feature of this platform is the highlighted businesses of the month. Underrepresentation and discrimination is a huge problem in business investments so we want to represent those businesses by having a separate page for them.
Investors can schedule a virtual meeting with the representative of startup that will help investor know about the future plans of the business
Investors can pay as little as $10 for a share in the startup’s equity offered in the crowdfunding campaign
Investors can view their past investments & their total investments on a profile view
Startups can checkout the funds raised from the crowdsourced campaign via Apple/Google Pay to Apple/Google Wallets in a virtual FundIt card
How I built it
Flutter: Dynamic Mobile Applications that runs both on Android and iOS.
Firebase: For authentication
Square: Payment Processing
SQL: For storing the Business and Investor Information
UiPath: For automating the process of displaying startups to investors according to their search history
Potential Users
Retail investors - who will be investing in the companies that are listed on our platform
Startups - they sign up for crowdfunding in exchange for equity.
Challenges I ran into
Payment Processing using Square
Automation with UiPath
Making a dynamic user interface for startups took some time to get right
Accomplishments that I'm proud of
We were able to build a working platform with great teamwork in such a short time.
What we learned
Learned how to divide tasks as a team and be accountable for it, setting report time
How to do payment processing
What's next for fundIt
We are planning to reach small businesses and small investors who could benefit from each other: small businesses by getting funding, and small investors by getting returns on investments of as little as 10 dollars.
Built With
android
Try it out
github.com | MoneyQ Fundit | A platform that democratizes access to capital for small businesses via crowdfunding | ['Rishav Raj Jain'] | [] | ['android'] | 22 |
10,372 | https://devpost.com/software/myserver-1gfb27 | Tkinter GUI
Data Directory
Public URL
Main Page
Videos
Watch big media files without downloading.
Other Files
Inspiration
Due to covid-19 our offline lectures have converted to online meetings. Teachers can now share their screen but it's often hard to share a lot of files of various types and sizes. Hence I decided to build something to solve this problem.
What it does
MyServer sorts and shares your files with others, without wasting your time looking for a website to upload to and then waiting hours and hours for the upload to complete.
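The sorting step can be pictured as grouping shared files by extension into the sections the main page shows (Videos, Other Files, and so on). A sketch of that grouping; the category map here is an assumption, not the app's actual Django code:

```python
# Hypothetical extension -> section map
CATEGORIES = {".mp4": "Videos", ".mkv": "Videos", ".pdf": "Documents", ".pptx": "Documents"}

def sort_files(filenames):
    """Group shared files into the sections shown on the main page."""
    groups = {}
    for name in filenames:
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        groups.setdefault(CATEGORIES.get(ext, "Other Files"), []).append(name)
    return groups

print(sort_files(["lecture1.mp4", "notes.pdf", "data.bin"]))
```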
How I built it
I built it using Python. The two main libraries were Django for the web part and Tkinter for the OS GUI part.
Challenges I ran into
The main challenge I ran into was creating a public URL on the go. To solve this problem I used ngrok for URL creation and a bat file to automate all the connections between ngrok, Tkinter, Django, and the user.
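ngrok publishes its active tunnels as JSON on a local API (`http://127.0.0.1:4040/api/tunnels`), so the automation can read the public URL from that response. A sketch of the parsing step, run here against a hard-coded sample instead of a live ngrok process:

```python
import json

def public_url(tunnels_json):
    """Pull the https public URL out of ngrok's /api/tunnels response."""
    for tunnel in json.loads(tunnels_json)["tunnels"]:
        if tunnel["public_url"].startswith("https"):
            return tunnel["public_url"]
    return None

sample = '{"tunnels": [{"public_url": "https://ab12cd.ngrok.io", "proto": "https"}]}'
print(public_url(sample))
```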
Accomplishments that I'm proud of
I am proud of the fact that I have created an application that helps teachers by saving their precious time and letting them concentrate on their teaching while MyServer handles the sharing part. The project is also so dynamic that its connections could be reused to create similar apps for almost infinite purposes and situations.
What I learned
I learned that connecting various languages and tools is not that hard and can be achieved by using a strategic approach.
What's next for MyServer
MyServer can be upgraded to add a chat feature, two-way file sharing, GUI improvements, and a way to keep the server running even if the computer is turned off.
Built With
batch
django
javascript
python
tkinter
Try it out
github.com | MyServer | Share your files on the go | ['Deekshant Wadhwa'] | [] | ['batch', 'django', 'javascript', 'python', 'tkinter'] | 23 |
10,372 | https://devpost.com/software/library-management-system-huk5c2 | Inspiration
I got the inspiration to design a modern-looking, dark-themed library management system with the Java Swing framework; there are many existing ones, but none quite like mine.
What it does
Features ⚙️
A draggable, undecorated JFrame with a drop-shadow effect.
A login panel with sign-up and forgot-password options secured by a security question.
Add Book and Add Student panels with auto-generated IDs to add new books and register new students in the database.
A book can be issued to a student with a given student ID and book ID using the Issue Book panel.
Return an issued book for the given student ID to whom the book is issued.
The Statistics panel has tables of all the issued and returned book data.
How I built it
Tools & Technologies used 🎭
Java Swing + AWT
JDBC API
MySQL database (SQLyog GUI client)
Flatlaf Look & Feel
Netbeans IDE
Pichon icon8 icon pack
rs2xml jar
Challenges I ran into
I faced many small challenges, like how to pass dynamic data from the database to the GUI without using complex SQL queries.
What's next for Library Management System
Maybe add more options to it and make it a Java Web Start application.
Prerequisites ✔️
A minimum JRE version 8 for running the application.
MySQL should be installed on your system with the tables given in the SQL file of the repository.
Built With
api
awt
client)
database
feel
flatlaf
gui
icon
icon8
ide
java
jdbc
look
msql
netbeans
pack
pichon
rs2xml
sqlyog
swing
Try it out
github.com | Library Management System | A 🌑 dark themed library management 🖥️ desktop application with modern look and feel. | ['Ashutosh Tripathi'] | [] | ['api', 'awt', 'client)', 'database', 'feel', 'flatlaf', 'gui', 'icon', 'icon8', 'ide', 'java', 'jdbc', 'look', 'msql', 'netbeans', 'pack', 'pichon', 'rs2xml', 'sqlyog', 'swing'] | 24 |
10,372 | https://devpost.com/software/senior-hub-59r648 | Inspiration
The inspiration came from my group member throwing an idea out and me trying to show him how it would/wouldn't work. After building a basic framework, I realized I actually might be able to do his suggestion.
What it does
This program attempts to help the elderly reduce their COVID-19 exposure by consolidating everything they would use on a daily basis in one place. We wanted to have something that would give them a resource to do what they need to do online, instead of having to go out and make a trip.
How I built it
I built this in Sublime Text while compiling from Terminal.
Challenges I ran into
I didn't know how to open links in Chrome, and I had to figure out which way worked without having to download any software.
Accomplishments that I'm proud of
I'm proud of how I used Google Maps links to create a search function for restaurants in the nearby area.
What I learned
I learned how to work with URLs in general and also how to open them using Java.
What's next for Senior Hub
I wasn't able to complete our entire idea so I'd like to finish our initial idea for the program. Afterward, I'd like to implement a memory system for customization based on the user.
Built With
java
Try it out
github.com | senior-hub | This program aims to help elderly reduce their COVID-19 exposure. | ['Calvin Hurlbert'] | [] | ['java'] | 25 |
10,372 | https://devpost.com/software/accountabull | App.
Email.
Result.
Painting.
It's Accountabull! It's a productivity app that lets you set deadlines for yourself.
But if you miss a deadline, Accountabull doesn't tell you... he tells your friends.
Built With
go
inkscape
sendgrid
vue
Try it out
github.com
accountabull.do.tookmund.com | Accountabull | If you miss your deadlines, Accountabull doesn't tell you... he tells your friends. | ['Adam An', 'Jacob Adams'] | [] | ['go', 'inkscape', 'sendgrid', 'vue'] | 26 |
10,372 | https://devpost.com/software/network-packet-parser-1lmr6x | Inspiration I like analyzing data and I know some people cannot understand the raw data, so I figured this could help!
Built With
java | NETWORK PACKET PARSER | Basically what this code does it take network data I pulled and imported into a text file, then it reads it and parses it into a readable format for the user. | ['Bella Larkin'] | [] | ['java'] | 27 |
10,372 | https://devpost.com/software/you-re-covered-bot | Inspiration
One of our teammates had to call an insurance company for his only grandfather but wasn't able to reach them. When he did get connected, he was faced with a language barrier.
We reached out to companies and learned that, due to the COVID situation, a lot of the call centers that handle customer queries are shut down. So we decided to make a virtual agent to ensure the company can address all the needs of its customers, along with an app for people to interact with the bot and, at the same time, share required documents safely.
What's next for You're covered Bot
To make a production-level application and a well-trained bot that can learn from its interactions and get better
Problems we faced
Mostly technical, but we had Google and the Stack Overflow community on our side. Huge thanks for all the awesome Q&A there :D
What we Learned
How to go from an idea to building an application, and how to pitch an idea
Built With
firebase
flutter
google-cloud
google-cloud-messaging
power-virtual-agents
vision-api
Try it out
github.com | You're covered virtual assistant | You are in trouble ?? you are covered. A virtual helper to cater your insurance need at the comfort of your home | ['Roman Cutler', 'Masrik Dahir', 'anush krishna v'] | [] | ['firebase', 'flutter', 'google-cloud', 'google-cloud-messaging', 'power-virtual-agents', 'vision-api'] | 28 |
10,372 | https://devpost.com/software/tracyz | Doodle done with hand tracking
Inspiration
As students new to online learning, we realized that many of our professors have a difficult time using the trackpad on their laptops or the mouse on their computer to draw diagrams and text on the screen. Smart tablets that make this task easier can be prohibitively expensive, so we decided to come up with a solution that incorporates something many teachers/professors use for classes every day: their camera.
What it does
TracyZ takes in real-time video footage from users' cameras and applies TensorFlow's hand pose library to track the movement of the user's pointer finger in front of the camera. With this movement, the web app draws on top of PDF files and makes teaching with diagrams more natural and convenient. Users are able to cycle through pages of the PDF and make drawings as needed. To make the user experience more immersive, we have included features such as saying "go" to start drawing, and saying "stop" or clenching your fist to stop drawing.
How we built it
The main hand tracking functionality is handled through the TensorFlow.js Hand Pose library. After identifying a user's hand in the view of the webcam, we highlighted the key points of the pointer finger and drew them in a distinctive red colour on the screen. The x and y coordinates of the pointer finger were then used to draw on the HTML canvas of the PDF that the user previously uploaded. The PDF uploading and managing are handled with PDF.js, and the frontend of the web app with HTML, CSS, and React.js. TensorFlow's Speech Command Recognition was also used to let users start and stop the drawing action on the screen with vocal commands (alternatively, if a user clenches their fist, the TracyZ app stops drawing on the screen).
Challenges we ran into
While the entire group worked with the same stack (based on react.js), we still had significant difficulty merging different sections of code together. More specifically, the Tensorflow and webcam components of the web app had many build conflicts with the PDF component. To overcome this challenge, team members spent significant time refactoring the code to smoothly integrate everything together. After tracing various build errors through the code, we were able to successfully accomplish the main tasks of our app.
Accomplishments that we are proud of
The majority of members in our group were completely new to Machine Learning, so developing an ML heavy web application was a very exciting accomplishment. Each major milestone, from seeing the hand tracking visualized on the screen to watching lines being drawn on the screen according to our hand movements, made us all excited! We are also very proud of the various features we were able to add to the app. While the drawing functionality isn't perfect, we felt that it was a great demonstration of our idea. Integrating all components of the app into a pleasing website was also a fun challenge.
What We learned
Working with TensorFlow and figuring out how ML models are trained was a great learning experience! Along the way, we figured out more about standard practices with ML models and how best to incorporate them into our projects. We also learnt more about React and JavaScript through our struggles to incorporate the many different components of our app together. We quickly realized that React is great for starting out web apps and has a lot of pre-existing features (ie. webcam, buttons, and nice layouts), but can be a challenge to work with when combining machine learning and other custom components.
What's next for TracyZ
While our basic line tracing and voice recognition capabilities work as desired, we would like to improve the accuracy of both systems to make the user experience smoother. Reducing the lag on the system overall would also make it much easier for individuals to draw exactly what they intended, without any lines showing up late. Additionally, while the speech recognition works as desired, it can easily get distracted by background noise, an issue we would like to address in future versions of this app (ie. by expanding our data sets). Given time, we also seek to incorporate more gestures (such as raising a hand, undoing actions, and saving), to make the experience more immersive.
Built With
css3
express.js
html5
javascript
material-ui
node.js
pdf.js
react
tensorflow.js
Try it out
github.com | TracyZ | React Webapp using TensorFlow.js Handpose to draw on PDF files with PDF.js | ['Dinu Wijetunga', 'Jim Wu', 'alexguo247 Guo'] | [] | ['css3', 'express.js', 'html5', 'javascript', 'material-ui', 'node.js', 'pdf.js', 'react', 'tensorflow.js'] | 29 |
10,372 | https://devpost.com/software/coachally-interactive-virtual-classroom-video-calling-app | Assist feature
CoachAlly Home page
Users can easily seek guidance and report bugs
Seek guidance instantly with the in-app screenshot & doodle feature
Video Call
AR Classroom
Broadcast Mode
Inspiration
During these pandemic days, our team too has been facing issues while learning through online portals. So our team took a
step forward in resolving the common issues and improving on them.
What it does
CoachAlly application helps in creating interactive virtual classrooms using the latest technologies like
Augmented Reality
and creates room for the virtual classroom through
high-quality video calling
with a low-latency experience.
Augmented reality in education is surging in popularity in schools worldwide. Through AR, educators are able to improve learning outcomes through increased engagement and interactivity. AR features enhance the learning of abilities like problem-solving, collaboration, and creation to better prepare students for the future. Teachers can include custom AR objects and pre-recorded lecture videos, which help students view course materials from the comfort of their home.
Live sessions can be held virtually through the class meet option. We have designed a one-step meeting join, keeping young students in mind. The app asks only for the meet code and doesn't collect other credentials, thus improving the privacy of the end user.
We have also integrated an
ASSIST
feature, which guides users step by step if they either need a walkthrough of a feature or encounter a bug. The main advantage of this feature is that users can make use of an in-app screenshot feature with an on-board doodle option to contact the admin/developer hassle-free.
How I built it
Came across the
Flutter
technology recently and since then was caught up with it. We are
amateurs
and this is our first big step forward in solving the problem with it.
We have approached our problem with Flutter, which makes the app run natively on all platforms. The UI is made with the help of Google's Material UI. The video call runs seamlessly with Agora as the backend. The feedback and assist features are implemented with the help of Wiredash, which delivers end-users' messages instantly.
We would like to thank our sponsor echoAR, which helped us integrate AR seamlessly into our app.
CoachAlly is a lightweight app which is available across various platforms
-
Mobile platforms: iOS, Android
Desktop app: macOS, Windows, Linux
Web app: across all browsers
Challenges I ran into
We came across many challenges, as this is our first big project using Flutter. We thank the mentors who took the time to help us. Students get insights into concepts and a better understanding with AR, and we are proud to contribute to the global community.
Accomplishments that I'm proud of
We are very proud that the big leap we dared to attempt has come out as a bug-free working app in a short span of hours. We have learned many skills since the start of the hack, and we learned to face the challenge of a short deadline to give the best outcome for our app.
What's next for CoachAlly -Interactive Virtual Classroom & Video Calling app
We aim to increase security, add feature-rich content, and make our app more accessible to all age groups. We plan to improve our app consistently for the best end-user satisfaction.
Built With
agora
ar
cupertino-ios
dart
echoar
flutter
materialui
Try it out
github.com | CoachAlly -Interactive AR Virtual Classroom & Video Call app | CoachAlly application helps in creating interactive virtual classrooms using the latest technologies like Augmented Reality and creates room for the virtual classroom through high-quality video calls. | ['Sudir Krishnaa RS'] | [] | ['agora', 'ar', 'cupertino-ios', 'dart', 'echoar', 'flutter', 'materialui'] | 30 |
10,372 | https://devpost.com/software/carmatch-e4vhk2 | One of the greatest challenges faced by Carmax and its customers is the transfer costs associated with transferring a desired vehicle from one Carmax location to another. In order to mitigate this issue we propose the use of an app that utilizes machine learning not only to calculate transfer costs, but also suggest vehicles according to customers’ tastes and preferences. In our proposed app, all else equal, vehicles with lower transfer costs will be displayed more prominently in the matches than those with higher costs. We believe this would result in a net effect of customers paying less transfer fees while maintaining or even raising their satisfaction in purchase experience. This is because we believe that when people purchase used vehicles, they are not necessarily looking to purchase specific models, but rather vehicles that fit their budgets and preferences. By addressing the challenge of decreasing transportation costs though this approach, there is a greater likelihood that a candidate vehicle will both suit the customer’s tastes while also keeping transfer costs to a minimum. We came to this hypothesis after navigating through Carmax’s website, where we noticed that there are many filters but none pertaining to factors that may influence transfer costs. Another disadvantage of the website is that when there are too many or too little filters applied, there’s the possibility of search results being too narrow or broad, respectively. We value the freedom of customers to continue having access to Carmax’s nationwide inventory, and hence do not wish for them to feel like their choices are being limited by the desire to save money on transfer. 
At the same time, we also believe that because there is an element of “guessing and checking” with manually setting filters, customers might potentially narrow their search to vehicles with expensive transfer costs when other vehicles that would equally satisfy their preferences and needs go unnoticed. Our proposed app does not, however, preclude customers from searching for vehicles via existing means, as it does not seek to replace the website or the existing CarMax app, but is rather a tool that facilitates the search process while also providing customers with purchase options that could potentially reduce transfer cost.
The beauty of unsupervised machine learning is that it recognizes patterns that humans might not readily notice, meaning that even subconscious preferences could potentially be recognized. The app collects further information regarding users' tastes through an interface that operates similarly to Tinder, and the user has the option to enter broad preferences such as vehicle age, type, price, and mileage to narrow down the candidate vehicles that appear on the swipe interface. Images and basic information of vehicles in Carmax's nationwide inventory that fall within the user-defined parameters, if any, appear on the screen, and the user can either like or dislike the vehicle in addition to clicking on the image for a more complete summary of the vehicle. With every additional swipe, the app's predictions become more accurate, and with sufficient swipes, users can access their matches listing vehicles that fit their many preferences, with priority given to vehicles with lower transfer fees. Because transfer fees are not constant, however, and can change at any time due to factors such as inclement weather, machine learning will also be incorporated to predict transfer fees based on factors such as distance, season, and desired pick-up date. We assume the following factors to impact transfer costs:
Location - distance is not the only factor that determines transportation costs; what also matters is the location of the origin and destination relative to major transportation lines. All else equal, transporting a vehicle from a big city to another big city would be less costly than transporting a vehicle between two small towns.
Season - shipping tends to be more expensive during winters since many roads and highways are more dangerous.
Natural disasters - if an area is impacted by a flood, hurricane, tornado, etc., then shipping costs can temporarily spike.
Vehicle size - all else equal, larger and heavier vehicles are presumably more expensive to ship than smaller ones. We imagine a Ford Super Duty to cost more to ship than a Honda Civic.
Delivery Flexibility - some deliveries may be less expensive if the buyer is willing to wait - multiple orders that have overlaps along their respective shipping routes can be consolidated for less expensive shipping. If a user were adamant about a particular vehicle, future delivery dates could be accurately predicted with past data as well as real-time information regarding things such as weather patterns.
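Before a learned model is trained, the factors above could be combined into a simple baseline estimate. A toy sketch with made-up weights — none of these figures come from CarMax, and the function name is ours:

```python
def estimate_transfer_cost(distance_miles, is_winter, disaster_zone,
                           vehicle_weight_lbs, flexible_delivery):
    """Toy linear estimate of a transfer fee from the assumed factors.
    All weights below are illustrative placeholders, not real rates."""
    cost = 50.0 + 0.60 * distance_miles   # base fee plus a per-mile rate
    cost += 0.02 * vehicle_weight_lbs     # heavier vehicles cost more to ship
    if is_winter:
        cost *= 1.15                      # winter roads raise shipping rates
    if disaster_zone:
        cost *= 1.50                      # temporary spike near affected areas
    if flexible_delivery:
        cost *= 0.85                      # consolidation discount for waiting
    return round(cost, 2)
```

A trained regression model would replace these hand-picked weights with coefficients fitted to historical transfer data.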
Built With
bootstrap
css
html
Try it out
github.com
pr.to | Carmatch | Reduce transfer costs with an app that utilizes machine learning to understand the user's preferences and also predict transfer costs. | ['Francisco Perez', 'Rohit Karnati', 'Roy Chung'] | [] | ['bootstrap', 'css', 'html'] | 31 |
10,372 | https://devpost.com/software/gifshop-wizard | Initial prompt from bot for GIF processing
Sample quick reply options for CycleGAN
Sample quick reply options for style transfer
More sample quick reply options for style transfer
Quick reply options for selecting an effect to apply
The bot processing the GIF as the user waits for a response
Quick reply options for selecting a source image to apply fake motion onto
Finish processing GIF
Quick reply options for next effect to apply after showing results of previous effect being applied
GIF
Our logo was inspired by the very first test image we used with the "tripping" style mask
Inspiration
We felt that modern computer vision techniques such as style transfer and object removal were only accessible to those who are well versed in machine learning and have sufficient computing resources. The average person does not have access to either of these which means that it is difficult for an average user to try out these techniques on their GIFs or images. We want to alleviate both of these problems and provide a platform for users to easily manipulate their GIFs or images using techniques from computer vision and receive instant feedback.
What it does
GIFShop Wizard is a Messenger bot that applies computer vision techniques on GIFs and images sent by users.
The bot receives images or GIFs and prompts users for an image processing technique to apply using Quick Replies. The bot processes the image according to the user's specification and returns the processed image to the user. The image processing techniques currently supported include fake motion, object removal, style transfer, GAN, and segmented style transfer. We drive the dialogue flow with Quick Replies to minimize communication errors and keep the interaction as close to GIF-to-GIF as possible.
Foreground Object Removal: Objects may appear in images that we wish to remove (i.e. photobombs). It takes long enough to photoshop objects out, but this is even more challenging for videos, where a manual process presents itself as a major obstacle. Thus we provide an object removal function, where we first detect what objects are available in the entire GIF, then return a list of detected objects for the user to selectively remove, and consecutively execute the removal of the specified object.
Fake Motion: This vision function enables users to transfer the motion in their GIF into one of our available source images. Motions can be transferred to faces or body postures using the first order of motion model. For example, if a user has a GIF of a person talking or moving their head, this motion can be transferred to images of faces that we provide. The main prerequisite from the user is that their driving image is as closely cropped to the object (e.g. the face).
Fast Style Transfer: When one GIF or meme is not enough, why not make more? To increase variations of the same content GIF, we can apply a style mask via neural style transfer. We trained on several style images to produce style mask weights, so that when a user passes a GIF through, they can select from a variety of masks to apply onto their content image. To minimize latency in retrieving a stylized image, we pre-trained models rather than training on a new style each time (which also means a user is not currently permitted to pass a custom style image to train on the spot).
CycleGAN: Though quick to train and perform inference, style transfer applies the style to the whole image and not selectively. Therefore a generative adversarial network comes in handy, selectively applying the style of the target object onto the source object. For example, for the mask
horse2zebra
, if a user passes in an image with a horse in it, CycleGAN would selectively stylize horses to possess stripes of a zebra. It should be noted that
horse2zebra
means that the GAN was trained on a pair of datasets (horse, zebra), but it does not mean that inference is limited to horse GIFs alone. In fact, users can pass in other images (e.g. people) and the stripes of a zebra can often be transferred as well, though just not as accurately as horse images.
Segmented Style Transfer: While CycleGAN is selective to specific objects, we use instance segmentation to target significant scene components and apply style transfer on those segments (i.e. scene-specific, not object-specific). We use FCN to detect instance segments, identify the largest one, extract that segment as an image, perform style transfer upon this image, and stitch it back onto the original image.
How we built it
We interface with the Messenger API and Webhooks using a Flask server and a custom bot interface. The models used in the various computer vision techniques are trained in PyTorch and TensorFlow.
Chatbot server
We built a convenience interface bot that takes in data from the server and automatically builds the correct POST request and sends it to the Messenger API. The actions currently supported include sending text, sending images, sending quick replies, and sending typing sender actions.
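A helper like this ultimately assembles the JSON body defined by Messenger's Send API. A sketch of building a quick-reply message — the function name is ours, but the field layout follows the documented quick-reply format:

```python
def build_quick_reply_message(recipient_id, text, options):
    """Build the JSON body for a Messenger Send API call that offers
    quick-reply buttons, one per (title, payload) pair in `options`."""
    return {
        "recipient": {"id": recipient_id},
        "messaging_type": "RESPONSE",
        "message": {
            "text": text,
            "quick_replies": [
                {"content_type": "text", "title": title, "payload": payload}
                for title, payload in options
            ],
        },
    }
```

The bot interface then POSTs this dict to the Send API endpoint with the page access token; the `payload` strings come back in the webhook event when the user taps a button, which is how the dialogue state advances.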
Vision functionality
GIF extraction/stitching:
When the user sends a GIF, we first parse GIF into its individual frames. We then apply the vision function selected by the user with its corresponding arguments and perform inference frame by frame. After all the frames are processed, we stitch the frames back together, compress the file to minimize latency, and send it to the user. Images are treated as a GIF with a single frame and are thus compatible with our bot.
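The frame-by-frame processing described above is essentially a map over the decoded frames. In the real bot the frames would come from a GIF decoder such as Pillow's ImageSequence; in this self-contained sketch a frame is modeled as a plain 2D list of pixel values, and the helper names are ours:

```python
def process_gif(frames, effect):
    """Apply `effect` to each frame independently, then 'stitch' the
    processed frames back into an ordered sequence.  A real pipeline
    would decode/encode actual GIF frames around this step."""
    return [effect(frame) for frame in frames]


def invert(frame, max_val=255):
    """Example per-frame effect: invert every pixel value."""
    return [[max_val - px for px in row] for row in frame]
```

A single still image fits the same pipeline as a one-frame GIF, which is how the bot treats images uniformly.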
Fast style transfer:
Based on the work of
Johnson et al.
, we implemented their real-time style transfer architecture that uses a perceptual loss function to measure model perceptual differences between the content image and the style image. The loss functions are capturing semantic differences between the original image and stylized image through image classification, based on a 16-layer VGG network pretrained on ImageNet. Stylized images are generated from in-network downsampling and upsampling, and the resulting image is passed as an argument to the perceptual loss function. We store pretrained style mask weights, and when a user selects the quick reply button to select a specific mask, we perform inference on each frame.
Segmented style transfer:
For this function, we use fast style transfer as a boilerplate to perform style transfer. The main difference is that we first perform instance segmentation using Fully-Convolutional Networks to compute whether each pixel is semantically different from another, and thus return a list of segment masks. We detect the largest mask, obtain its pixel coordinates in an array, export it as an image with a single color fill background, perform fast style transfer upon this mask image, then transpose the pixel coordinates from this stylized mask image onto the original image, thus selectively performing style transfer onto a specific mask in the image.
CycleGAN:
For this image-to-image translation architecture (
Zhu et al.
), the generator network transcribes perturbations upon the source image with features from the style image. The discriminator network evaluates the class of the stylized image; if the label is identical to the ground truth label, then the image-to-image translation is a success. This is somewhat similar to the perceptual loss function in FST, but we instead use a discriminator network to measure perceptual differences.
First order of motion:
Based on
Siarohin et al.
, our implementation of the First Order Motion architecture enables users to pass their GIF file as the driving image (image that contains the motion), and we provide the source images (images that users would want to transfer motion from their GIF into). The model works by first computing the first order motion representation by using a keypoint detector (identifying points of motion within a face/body). Using a motion network, we generate optical flow from the motion representations, and perform pixel transformations onto the source image based on the calculated flow of each pixel.
Foreground removal:
For this implementation, we remove foreground objects by performing YOLOv3 object detection and sending the detected objects to users (objects based on those from the MSCOCO dataset). The object selected by the user via quick replies is passed as an argument to the object removal function, where we first apply a bounding box to the detected objects (if the object is equal to the one selected by the user), then remove the pixels within the bounding boxes, and then pass the resultant image to
pix2pix
to fill the missing pixels (supposedly with a selection of the surrounding background pixels).
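The blanking step — clearing the pixels inside each detected bounding box before handing the image to the inpainting model — can be sketched as follows. This is a pure-Python illustration with names of our choosing; real code would operate on NumPy image arrays:

```python
def blank_boxes(image, boxes, fill=None):
    """Remove the pixels inside each (x0, y0, x1, y1) bounding box by
    setting them to `fill`; an inpainting model would then synthesize
    replacement pixels.  `image` is a 2D list of pixel values."""
    out = [row[:] for row in image]        # copy so the input survives
    for x0, y0, x1, y1 in boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                out[y][x] = fill
    return out
```

The boxes come straight from the detector's output for the object class the user selected, so only that object's pixels are cleared.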
Challenges we ran into
Model inference for image processing usually takes a while. Since we are processing each GIF frame by frame, model inference for GIFs takes even longer. Because Messenger requires a response within 20 seconds, we needed to find a way to work around this constraint. We tackle this problem by continuing to process the image on the server and keeping track of the fulfillment status of the request, rather than allowing Messenger to time out our process.
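One way to implement this acknowledge-now, process-later pattern is to return immediately while the heavy vision work runs in a background thread and a status table tracks fulfillment. This is a sketch with hypothetical names, not the project's actual code:

```python
import threading

# request_id -> "processing" | "done"; the webhook handler can consult this
status = {}


def handle_request(request_id, slow_vision_fn, payload):
    """Mark the request as in-flight, kick off the slow work in a thread,
    and return right away so Messenger gets its response within 20 s.
    Returns the worker thread so callers can observe completion."""
    status[request_id] = "processing"

    def worker():
        slow_vision_fn(payload)        # e.g. frame-by-frame model inference
        status[request_id] = "done"    # the bot then sends the finished GIF

    t = threading.Thread(target=worker)
    t.start()
    return t
```

In a Flask webhook the handler would return a 200 response immediately after `handle_request`, and the worker would push the processed GIF back through the Send API when it finishes.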
Because we implement several external computer vision architectures, we pull source code from multiple different projects. This means that they could potentially use different versions of PyTorch or TensorFlow. PyTorch and TensorFlow 1 turn out to be incompatible due to TensorFlow 1 using outdated libraries. To remedy the situation, we had to migrate all the TensorFlow 1 code to TensorFlow 2 code.
When we tried sending multiple GIFs to the bot, the GPU would sometimes run out of memory. To address this issue, we allocated memory carefully for each vision function and reduced parallelism to decrease the strain on our GPU.
Trying to maintain and update the state of the user is difficult as the Messenger API uses webhooks. This was solved by creating and implementing a clear organization and structure of the user flow.
Since only one of our members had a GPU, we had to distribute tasks carefully and separate our logic accordingly so that certain features could be tested independently.
Accomplishments that we're proud of
Since the Messenger API does not have an official Python API, we had to use a bot interface to send requests to the API from Flask. Since the bot interfaces we found were insufficient for our purposes, we wrote our own.
We were able to aggregate a bunch of different computer vision models from different projects and both make them compatible with each other and integrate them together into one coherent experience. To do this, we had to modify and rewrite a good amount of the source code and train our own models with our own source images.
What we learned
We got to experiment with various Messenger API functions such as Quick Replies and Sender Actions through Flask. We also got to play around with webhooks and use localhost tunneling to test our code. We learned how to modify existing bot interfaces and deprecated wrappers/libraries in order to customize them to our needs.
Furthermore, we got the chance to play around with various different computer vision models and tinker with different image processing techniques. It was a good opportunity to bring computer vision to the chatbot space, which has traditionally been dominated by NLP literature. We explored state-of-the-art models, made modifications to improve them and generate novel functionality, and exercised proper software engineering and documentation practices with the extended time granted by the competition.
What's next for GIFShop Wizard
There are several directions that we could have taken this project if we had more time to work on it.
Additional Techniques
Some additional things we would like to see as features in our bot include super resolution of GIFs and images, increasing the resolution of each image, and frame interpolation for GIFs, creating intermediate frames in between consecutive frames.
Custom Source Images
We would like to allow users to input custom source images for various features such as fake motion and style transfer. The main concern for implementing this feature is that in order to be able to apply an effect with a source image, the model must be trained with the source image which could potentially take a long time.
Video Processing
GIFs are essentially short videos so extending our bot to videos is not too difficult. The only concern with this is that it may take a long time to render on the server. Since Messenger expects a response within 20 seconds, this could be hard to implement depending on the length of the video.
Model Improvements
Even though our features produce pretty good results, they could always be improved. Some of the things we could do include running more iterations, finding other interesting source images, and investigating other state-of-the-art models.
References
First Order of Motion
paper
Foreground Removal
paper
Fast Style Transfer
paper
CycleGAN
paper
Instance Segmentation
paper
Built With
facebook-messenger
flask
opencv
python
pytorch
tensorflow
Try it out
github.com
m.me | GIFShop Wizard | Computer vision has been left out of the hands of many photoshopping enthusiasts and chatbot users alike. Our mission is to bring automated GIF-editing functionality to the masses with GIFShop Wizard. | ['Jacky Lee'] | [] | ['facebook-messenger', 'flask', 'opencv', 'python', 'pytorch', 'tensorflow'] | 32 |
10,372 | https://devpost.com/software/discordia-4m0ovd | For a cool logo
Test bot created
Test Triggering
Discordia
A bot for bots!
Every time a business starts a Discord channel, it needs to go through the monotonous task of creating a bot. Although a bot is the optimal solution for a user-friendly business, it makes use of TOKENS which, in the wrong hands, could destroy a business. This is often due to the fact that most businesses lack the know-how for making a bot and crowd-source the task, which could lead to the TOKENS for your business ending up in the wrong hands.
The solution
Discordia is a bot that makes Discord bots. The user can easily add tons of functionality to their bot with no technical know-how.
Discordia generates a python-based Discord bot and all the user has to do is to enter the TOKENS to his bot. This ensures that the data is in safe and secure hands.
User Manual
Run the fileCreator.py file. The resulting generated file will be the Discord bot program.
The botCreator.py file is the bot program, and the .env file is where you store your TOKENS.
Enter your Discord bot token and the guild name in your .env file and you're good to go!
As simple as that!
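For illustration, here is a minimal sketch of how a generated bot could read its TOKENS from the .env file using only the Python standard library. The variable names (DISCORD_TOKEN, DISCORD_GUILD) are assumptions for the example, not necessarily the ones fileCreator.py emits:

```python
def parse_env(text):
    """Parse simple KEY=VALUE lines, ignoring blank lines and comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and empty lines
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

example = 'DISCORD_TOKEN="abc123"\nDISCORD_GUILD="my-server"\n'
config = parse_env(example)
```

Keeping the tokens in a local .env file, rather than in crowd-sourced code, is what keeps them out of the wrong hands.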
Discordia for Collaboration
One of the most important things in a virtual education environment is a timely response. Discordia helps teachers with no programming experience create bots that give timely responses to their students' doubts. It also reduces the teacher's workload, as the bot handles most of the general problems that might arise among students.
The teacher can then focus on the students with particular problems. This bridges the physical gap between teachers and students by creating a virtual bond between them.
Discordia can be used in all realms of education making it a universally applicable as well as convenient tool.
Discordia for Virtual Assistance
Discordia is an assistant that helps the user create assistants. It is highly customizable, and people from all walks of life (gamers, businesses, teachers, etc.) could use it to create a highly customer-friendly experience. The secure use of tokens means that there is no point of vulnerability in your bot-creation pipeline. That is one of the most important advantages of Discordia.
Built With
discord
python
Try it out
github.com | Discordia | A bot for bots ! Making Discord bots with the click of a button. | ['Hrishikesh P'] | [] | ['discord', 'python'] | 33 |
10,372 | https://devpost.com/software/cardabra | On the main screen, the user can instantly take a picture of a vehicle and press confirm.
Once an image is submitted from the camera screen, CarDabra will display that car at the cheapest price given transfer fees
The user can confirm the result, and will receive a confirmation.
CarDabra
Inspiration
Imagine driving through downtown on a Friday night and suddenly a beautiful car rolls past you. Despite being awed by the vehicle, you are unable to identify it. To solve this problem, we decided to create CarDabra, a mobile application that helps identify cars on the road.
What it does
Similar to Snapchat's Shazam, we decided to move the focus from music to cars. Using a mobile device, you are able to aim your camera at an unknown vehicle and get back the model and make of that car. Not only that, but CarDabra will help you find that car at the cheapest price given transfer fees between one point to another. All of this is done in the palm of your hands.
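The price search described above can be sketched as a simple total-cost minimization. The listing data and tuple layout below are invented for illustration, not the app's actual data model:

```python
def cheapest_listing(listings):
    """listings: (dealer, base_price, transfer_fee) tuples; pick the lowest total cost."""
    return min(listings, key=lambda l: l[1] + l[2])

listings = [
    ("Dealer A", 21_000, 900),    # total 21,900
    ("Dealer B", 20_500, 1_600),  # total 22,100
    ("Dealer C", 21_300, 400),    # total 21,700
]
best = cheapest_listing(listings)
```

The key point is that the cheapest sticker price is not always the cheapest purchase once transfer fees are included.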
How we built it
We used Kotlin, XML, Google Maps API, TensorFlowLite API, and Android Studio to develop the app.
Challenges we ran into
A challenge that we had faced was working with git. Some of us had never used git before, so we spent time learning about git commands, branch, and overall version control concepts.
None of us had ever worked with data modeling and training. We spent quite a bit of time researching how TensorFlow worked and how we could train our own model.
Accomplishments that we're proud of
We were able to piece together multiple screens with different functionalities using Android Studio.
Upon learning git, we were able to maintain proper version control and kept merge conflicts to a minimum.
After spending numerous hours, we are proud to incorporate TensorFlow into our codebase.
What we learned
We learned how to create an Android app from scratch.
We learned how to implement different types of APIs.
We learned how to use git.
We learned how to implement data modeling and training.
What's next?
Implement an actual database that stores car locations and data.
Further improve our data modeling by incorporating more images.
Implement a more advanced search that utilizes information aside from transfer fees
Create additional widgets to provide a better user experience
Built With
android
github
google-maps
java
kotlin
tensorflow
xml
Try it out
github.com | CarDabra | With prices so low, it's like magic! CarDabra, similar to Snapchat's Shazam, it works like a charm by allowing everyone to identify, research, and buy unknown vehicles at the palm of your hands. | ['Danny Tran', 'Hamza Chhipa', 'just-kote'] | [] | ['android', 'github', 'google-maps', 'java', 'kotlin', 'tensorflow', 'xml'] | 34 |
10,372 | https://devpost.com/software/brains-storm | Brains storms logo
NLP used for categorization for idea input
Inspiration
As the pandemic continues and as we are pulled farther apart from human interaction, there are many downsides; however, people are collaborating more than ever with people all around the world, attending meetings and sessions they could have never done otherwise, and collaborating in ways that once seemed impossible, all from the comfort of their own homes. We wanted to foster online collaborations that may have dulled due to difficulty visualizing as a group, lack of physical collaboration, slow typing skills, and anything else that may be making online brainstorming worse.
What it does
Our web app lets users focus on coming up with ideas and brainstorming rather than the little details like taking meeting notes or creating visual graphics to explain ideas. Our app allows users to quickly jot down ideas with a simple click, which will convert their speech to text and be placed accordingly. Using NLP, the idea will be categorized into categories that best suit the main points of the idea, and an image of the idea will display. The user can continue to speak into the site and branch out their ideas visually as they wish. Although we focused on brainstorming being the main use case, this app can be used for a variety of reasons, such as keeping track of thoughts with a cluttered mind, listing and categorizing ideas when preparing a speech, discussing political topics, listing the pros and cons to an idea, and the list goes on.
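As a toy illustration of the categorization step, the sketch below stands in simple keyword rules for the real Google Cloud NLP classifier; the categories and keywords are invented for the example:

```python
# Invented keyword rules standing in for the NLP classifier.
CATEGORY_KEYWORDS = {
    "tech": {"app", "api", "website"},
    "food": {"pizza", "recipe", "snack"},
}

def categorize(idea):
    """Return the first category whose keywords overlap the idea's words."""
    words = set(idea.lower().split())
    for category, keys in CATEGORY_KEYWORDS.items():
        if words & keys:
            return category
    return "misc"

# Each transcribed idea is dropped under its category node of the mind map.
mind_map = {}
for idea in ["build an app", "pizza party ideas", "call grandma"]:
    mind_map.setdefault(categorize(idea), []).append(idea)
```

In the real app, the transcript from the speech-to-text API is fed to the classifier, and the resulting category decides where the idea lands on the map.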
How we built it
We used Microsoft Azure's speech to text API, choosing it primarily because it was offered in client-side Javascript, for the transcription of the user's thoughts. We used Google Cloud NLP API to categorize the user's words and Google Cloud Platform to host our site. Additionally, we used HTML, CSS, Javascript, and Node.js.
Challenges we ran into
It was our first time using certain libraries and APIs, so it was initially difficult navigating through the new concepts.
Accomplishments that we're proud of
We are proud of spinning out a functional app that we were all excited about!
What we learned
We learned about APIs and libraries.
What's next for Brains Storm
We plan on creating functionality to automate brainstorming further and to make brainstorming a more hands-off experience, more focused on ideation. This would be done by allowing speech-to-text capabilities to run for as long as the user would like, while the website captures and notes ideas whenever it seems appropriate and then automatically connects ideas that seem to be related. We would also like to make it multi-user friendly, so that multiple people can work on the same "mind map" at once. We would also like to look into generating these maps in real time.
Built With
azure
css
google-cloud
html
javascript
natural-language-processing
node.js
speech-to-text
Try it out
github.com
34.121.43.100
brainsstorms.tech | Brains Storm | ML/AI +Communication Tracks | Focus on the thinking. Let us do the rest. | ['Sai Vamsi Alisetti', 'Oran C', 'Mythili Karra', 'pWr1ght Wright'] | [] | ['azure', 'css', 'google-cloud', 'html', 'javascript', 'natural-language-processing', 'node.js', 'speech-to-text'] | 35 |
10,372 | https://devpost.com/software/example-project-809l6n | React Native Project
Built With
react-native | Example Project | React Native Project | ['Om Joshi'] | [] | ['react-native'] | 36 |
10,372 | https://devpost.com/software/2020-hindsight | I used the statsmodels package in Python and the Google Colab platform to test approximately 200 different machine learning models in order to predict the deaths COVID-19 in the United States on a county-by-county basis up to a week in advance.
Built With
python
Try it out
rememberwhenelonmuskshotacarinto.space
github.com | 2020envision | Visualizing ML predictions of COVID-19 deaths on a county-by-county basis | ['Milana Wolff'] | [] | ['python'] | 37 |
10,372 | https://devpost.com/software/raindropped-weather-based-music-player | GIF
Inspiration
The weather generally plays a big part on our daily mood and plans. Why not let the weather have a say in your music tastes as well? Nature is a powerful thing, and its behavior invokes a wide variety of emotion. At times, people listen to music based on the way the weather outside makes them feel, from listening to soft beats during rainy days to cheerful tunes on bright summer days.
What it does
The web app uses geolocation and the OpenWeather API to display two unique Spotify playlists based on the weather. It not only gives you playlists based on your local weather, but unique animated backgrounds determined by the weather as well.
How We built it
The app was primarily built using React.js and JavaScript. The weather was retrieved using the OpenWeather API, while the playlists were taken from Spotify via the Spotify API. In JavaScript, we programmed the specific conditions under which a given pair of Spotify playlists and a background would appear, based on the weather retrieved.
HTML and CSS were also used to support the frameworks and for styling purposes. The logo is a vector created with Adobe Illustrator.
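The weather-to-playlist conditions can be sketched as a simple lookup keyed on OpenWeather's main condition strings; the playlist names below are placeholders, not the app's real Spotify playlists:

```python
# Keys follow OpenWeather's "main" condition strings; values are placeholder playlists.
WEATHER_PLAYLISTS = {
    "Rain": ("Rainy Day Beats", "Cozy Acoustics"),
    "Clear": ("Summer Cheer", "Sunny Drive"),
    "Snow": ("Winter Chill", "Fireside Jazz"),
}

def playlists_for(condition):
    """Return the pair of playlists shown for a weather condition, with a fallback."""
    return WEATHER_PLAYLISTS.get(condition, ("Any-Weather Mix", "Daily Shuffle"))

pair = playlists_for("Rain")
```

The same lookup idea drives the animated background selection.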
What's next for raindropped., a weather-based music player
Why not take this project to the next level by adding news? People can then read their morning news while listening to some nice calm tunes! (depending on the weather at least.) The next iteration may even allow users to pick what moods to associate to certain weather conditions as well!
Built With
adobe-illustrator
css
html
javascript
openweather
react
Try it out
github.com | raindropped. | A web application that recommends music based on your local weather! | ['Koji Q.', 'Gokul Varma', 'Taha Hassan'] | [] | ['adobe-illustrator', 'css', 'html', 'javascript', 'openweather', 'react'] | 38 |
10,372 | https://devpost.com/software/order-65 | Inspiration
I have previously watched the entirety of the Clone Wars in chronological order and found it to be a bit of a pain, since I had to constantly search for the right order and then go find the next episode, so I sought to make it easier.
What it does
It gets the next episode of the Clone Wars based on chronological order and loads that page
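The core lookup can be sketched as follows; the episode IDs and ordering below are placeholders for the real chronological list, not the actual Clone Wars order:

```python
# Placeholder chronological ordering (not the real one).
CHRONOLOGICAL = ["s2e16", "s1e1", "s1e2", "s3e3"]

def next_episode(current):
    """Return the episode that follows `current` chronologically, or None at the end."""
    i = CHRONOLOGICAL.index(current)
    return CHRONOLOGICAL[i + 1] if i + 1 < len(CHRONOLOGICAL) else None
```

The extension then rewrites the Disney+ URL to point at the episode this lookup returns.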
How I built it
By making a chrome extension
Challenges I ran into
Creating event listeners for pages I had no direct access to, and executing JavaScript after changing the URL
Accomplishments that I'm proud of
I managed to fully develop the extension including the extra features which I wanted to add but did not see as a requirement
What I learned
How to make a functional chrome extension
What's next for Order 65
Cleaning up the display, and attempting to find a way to load the new url while staying in full screen
Built With
css
html
javascript
json | Order 65 | Allows the user to more easily watch the Clone Wars in chronological order with Disney+ | ['Greih Murray'] | [] | ['css', 'html', 'javascript', 'json'] | 39 |
10,372 | https://devpost.com/software/radius-zu7d26 | Our icy UI
Get started
Report Infection
Danger ZONES
Prediction Dashboard
Inspiration
There are people dying all over the world, which is pretty big motivation. We want to help the elderly find their way through the COVID-19 pandemic by avoiding infected and crowded locations. We can help each other by being good neighbors and reporting cases for the community.
What it does
With daily COVID-19 death tolls higher than ever, a major obstacle to recovery is the lack of information. Social distancing is hard when you don’t even know which locations have a high density of people, or which places have had infected visitors.
Our goal is to fill this lack of information by alerting users in real time to locations with confirmed cases, so that they can avoid them. This allows users to make a conscious choice to avoid certain locations, stopping contact with infections in the first place. Additionally, we use the Besttime API to forecast the safest time to visit a store days in advance by statistically analyzing trends in visitor count. These predictions allow users to avoid foot traffic in stores - a breeding ground for COVID.
How I built it
This was built in two parts. The iOS app was written in Swift, and the main frameworks used were Core Location, Radar, and Firebase. The database used to store the data was Cloud Firestore, while the UI elements were from MapKit and UIKit. The Firebase queries were done in a background thread to avoid UI lag. We made GPX files in Xcode to simulate location and test our app features. We used SwiftUI to display the dashboard of predictions for store foot traffic. The predictions were based on data from the Besttime API.
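As a minimal sketch of the safest-time idea (not the app's actual Swift code), picking the quietest hour from an hourly foot-traffic forecast like the one Besttime returns reduces to a simple minimum; the forecast numbers below are made up:

```python
def safest_hour(forecast):
    """forecast: {hour_of_day: expected visitor count}; return the quietest hour."""
    return min(forecast, key=forecast.get)

demo_forecast = {9: 12, 12: 80, 15: 45, 18: 95, 20: 30}
best_hour = safest_hour(demo_forecast)  # the hour with the fewest expected visitors
```

The dashboard surfaces this quietest-hour recommendation per store, days in advance.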
Challenges I ran into
Location tracking with live firebase updates was difficult since the multithreading was complex. We had to sort out the UI vs background thread issue. Also, we had a tough time getting SwiftUI set up properly since this was our first main project with it.
Accomplishments that I'm proud of
The UI looks pretty solid in our opinion and there are a bunch of useful features on this app. Even if only one person is a good samaritan neighbor and reports a case of COVID, everyone in the area will be able to avoid that location for the incubation period. We're just really happy with coming up with the entire idea from scratch and converting it into a finished product.
What I learned
We experimented a lot with SwiftUI, which will help in future hackathons. We also used multiple APIs (Radar, Besttime), which we can add to our toolkit in the future.
What's next for Radius
We're trying to include advanced machine learning algorithms to make our store population prediction even more accurate
Built With
bettertime
core-location
ios
mapkit
radar.io
swift
uikit
Try it out
github.com | Radius | Avoid COVID-19 infected locations and crowded locations | ['Yatharth Chhabra', 'Aditya Sharma'] | ['The Wolfram Award', 'Medical Hack Prize (Wireless Charging Pad)', 'Wolfram Award by Wolfram Language'] | ['bettertime', 'core-location', 'ios', 'mapkit', 'radar.io', 'swift', 'uikit'] | 40 |
10,372 | https://devpost.com/software/wecare-0fjkb9 | Summary: Home Screen of app, which allows you to report your symptoms, check the status of your circle, and get daily personalized tips.
Home Screen of app, which allows you to report your symptoms, check the status of your circle, and get daily personalized tips.
Map Screen of app, which allows you to see hotspots around you and your Care Circle.
Care Circle screen of app, which allows you to health conditions of your loved ones.
Web interface, which can be used to update the symptoms. It is synced with the app.
The problem WeCare solves
As the outbreak of COVID-19 continues to spread throughout the entire world, more stringent containment measures from social distancing to city closure are being put into place, greatly stressing people we care about. To address the outbreak, there have been many ad hoc solutions for symptom tracking (e.g.,
UK app
), contact tracing (e.g.,
PPEP-PT
), and environmental risk dashboards (
covidmap
). However, these fragmented solutions may lead to false risk communication to citizens, while violating the privacy, adding extra layers of pressure to authorities and public health, and are not effective to follow the conditions of our cared ones. Until now, there is no privacy-preserving platform in the world to 1) let us follow the health conditions of our cared ones, 2) use a statistically rigorous live hotspots mapping to visualize current potential risks around localities based on available and important factors (environment, contacts, and symptoms) so the community can stay safer while resuming their normal life, and 3) collect accurate information for policymakers to better plan their limited resources.
Such a unified solution would help many families who are not able to see each other due to self-quarantine and enable early detection and risk evaluation, which may save many lives, especially for vulnerable groups. These urgent needs would remain for many months given that the quarantine conditions may be in place for the upcoming months, as the outbreak is not reported to occur yet in Africa, the potential arrival of second and third waves, and COVID-19 potential reappearance next year at a smaller scale (like seasonal flu). There is still uncertain information about immunity after being infected and recovered from COVID-19. Therefore, it is of paramount importance to address them using an easy-to-use and privacy-preserving solution that helps individuals, governments, and public health authorities. The closest solution is
COVID Aggregated Risk Evaluation project
, which tries to aggregate environment, contacts, and symptoms into a single risk factor. WeCare takes a different approach and a) visualizes those factors (instead of combining them into a single risk value) for more tangible risk communication and b) incentivizes individuals to regularly check their symptoms and share it with their Care Circle or health authorities.
WeCare Solution
WeCare is a digital platform, both app and website. Both platforms can be used separately, and with freedom of choice towards the user. The app, however, will give users more information and mobile resources throughout the day. Our cross-platform app enables symptom tracking, contact tracing, and environmental risk evaluation (using official data from public health authorities). Individuals can add their family members and friends to a Care Circle and track their health status and get personalized daily updates. In particular, individuals can opt-in to fill a simple questionnaire, supervised by our epidemiologist team member, about their symptoms, comorbidities, and demographic information. The app then tracks their location and informs them of potential hotspots for them and for vulnerable populations over a live map, built using opt-in reports of individuals. This map is accessible on the app and our website. Moreover, symptoms of individuals will be tracked frequently to enable sending a notification to the Care Circle and health authorities once the conditions get more severe. We have also designed a citizen point, where individuals get badges based on their contributions to solving pandemic by daily checkup, staying healthy, avoiding highly risky zones, protecting vulnerable groups, and sharing their anonymous data.
Our contact tracing module follows guidelines of Decentralized Pan-European Privacy-Preserving Proximity Tracing
(PEPP-PT)
, which is an international collaboration of top European universities and research institutes to ensure safety and privacy of individuals.
What we have done during the summer.
We have updated the app design and made new contacts in Brazil, Chile, and Singapore. We have also done some translation work on the app, shared more about the project on social media, and connected with more people on Slack and LinkedIn.
We have consolidated the idea and validated it with a customer survey. We then developed a new interface for
website
and changed the python backend to make it compatible with the WeCare app. We have also designed the app prototype and all main functionalities:
Environment: We have developed the notion of hotspots where we have developed a machine learning model that maps the certified number of infected people in a city and the spatial distribution of city population to the approximate number of infected in the neighbourhood of everyone.
Contact tracing: We have developed and successfully tested a privacy-preserving decentralized contact tracing module following the
(PEPP-PT)
, guidelines.
Symptoms tracking: We have developed a symptom tracking module for the app and website.
Care Circle: We have designed and implemented Care Circle where individuals can add significant ones to their circle using an anonymous ID and track their health status and the risk map around their location.
You can change what info you want to share with Care Circle during the crisis.
The app is very easy-to-use with minimal input (less than a minute per day) from the user.
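The decentralized contact-tracing approach above can be illustrated with a simplified sketch in the spirit of the PEPP-PT/DP-3T designs (a toy model, not our production module): a device rotates its secret key daily and broadcasts short-lived IDs that observers cannot link together:

```python
import hashlib
import hmac
import secrets

def next_day_key(prev_key):
    """Rotate the secret key daily by hashing, so earlier keys cannot be recovered."""
    return hashlib.sha256(prev_key).digest()

def ephemeral_ids(day_key, n=4):
    """Derive n short-lived, unlinkable broadcast IDs from a single day key."""
    prf = hmac.new(day_key, b"broadcast-ids", hashlib.sha256).digest()
    return [prf[i * 8:(i + 1) * 8].hex() for i in range(n)]

key = secrets.token_bytes(32)
today_ids = ephemeral_ids(next_day_key(key))
```

Only a diagnosed user's day keys are ever uploaded; other phones re-derive the ephemeral IDs locally and check them against the IDs they observed, so no location or identity data leaves the device.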
We are proud of the achievements of our team, given the very limited time and all the challenges.
Challenges we ran into
EUvsVirus Hackathon Challenge opened its online borders recently to the global audiences which brought together plenty of people of different expertise and skills. There were challenges that we faced that were very unique, as we faced a variety of communication platforms on top of open-source development tools.
Online Slack workspaces, Zoom meetings, and webinars presented challenges in the form of inactive team members, cross-communications, and information bombardment across several separate threads and channels in Slack, as well as online meetings of strangers coordinated across different time zones. In developing the website and app for user input data, our next challenge was preserving the privacy of user information.
In the development of a live map indicating hotspot regions of the COVID-19 real-time dataset, our biggest challenge here was to ensure we do not misrepresent risk and prediction into our live mapping models. We approached Skill Mentor Alise. E, a specialist in epidemiology, who then explained in greater detail that the proper prediction and risk modelling should take into account a large number of factors such as population, epidemiology, and mitigations, etc., and take caution on the information we are presenting to the public. Coupled with the lack of official datasets available for specific municipalities for regions, we based geocoding data mining of user input by area codes cross-compared with available Sweden cities number of fatalities, infected and in intensive care due to COVID-19.
The solution’s impact on the crisis
We believe that WeCare would help many families who cannot see each other due to self-quarantine and enable early detection and risk evaluation, which may save many lives, especially for vulnerable groups. The ability to check up on their Care Circle and the hotspots around them substantially reduces the stress level and enables a much more effective and safer re-opening of communities. Also, individuals can have a better understanding of the COVID-19 situation in their local neighbourhood, which is of paramount importance but not available today.
The live hotspot map enables many people of at-risk groups to have their daily walk and exercise, which are essential to improve their immunity system, yet sadly almost impossible today in many countries.
The concept of Care Circle motivates many people to invite a few others to monitor their symptoms on a daily basis (incentivized also through badges and notifications) and take more effective prevention practices.
Thereby, WeCare enables everyone to make important contributions toward addressing the crisis.
Moreover, data sharing would enable a better visual mapping model for public assessment, but also better data collection for the public health authorities and policymakers to make more informed decisions.
The necessities to continue the project
We plan to continue the project and fully develop the app. However, to realize the vision of WeCare we need the followings:
Social acceptance: though being confirmed using a small customer survey, we need more people to use the WeCare app and share their data, to build a better live risk map. We would also appreciate more fine-grained data from the health authorities, including the number of infected cases in small city zones and municipalities.
Public support: a partnership with authorities, and potentially becoming part of government services, would make the solution more legitimate, though it is not strictly necessary. This would increase the level of reporting and therefore give a better overview and control of the crisis.
Resources: So far, we are voluntarily (and happily) paying for the costs of the servers. Given that all the services of the app and website would be free, we may need some support to run the services in the long-run.
The value of your solution(s) after the crisis
The quarantine conditions and strict isolation policies may still be in place for upcoming months and year, as the outbreak is not reported to occur yet in Africa, the potential arrival of second and third waves, and possible COVID-19 reappearance next year at a smaller scale (like seasonal flu).
Therefore, we believe that WeCare is a sustainable solution and remains very valuable after the current COVID-19 crisis.
The URL to the prototype
We believe in open science and open-source developments. You can find all the codes and documentation (so far) at our
Website
.
Github repo
.
Other channels.
https://www.facebook.com/wecareteamsweden
https://www.instagram.com/wecare_team
https://www.linkedin.com/company/42699280
https://youtu.be/_4wAGCkwInw
(new app demo 2020-05)
Interview:
https://www.ingenjoren.se/2020/04/29/de-jobbar-pa-fritiden-med-en-svensk-smittspridnings-app
Built With
node.js
python
react
vue.js
Try it out
www.covidmap.se
github.com | WeCare | WeCare is a privacy-preserving app & page that keeps you & your family safer. You can track the health status of your cared ones & use a live hotspot map to start your normal life while staying safer. | [] | ['2nd place', 'Best EUvsVirus Continuation', 'Best Privacy Project'] | ['node.js', 'python', 'react', 'vue.js'] | 41 |
10,372 | https://devpost.com/software/reciper-qev3mt | Inspiration: We were thinking of ideas to do in relation to COVID 19 and started thinking about how we could use the theme of food to support this relevant theme and started thinking of how people have to go to the grocery store doing the pandemic so why not alleviate their stress by giving them a list of easy dishes they can make with the supplies in front of them?
What it does: Essentially, Reciper takes a list of ingredients that the user has in their household and shows the user many recipe options, each including the ingredients, an image, instructions, and the calorie count. Our app also has a social-good cause: on the Donate page, it uses the Google Maps API to take the user's ZIP code and show them the nearest food banks they can donate to!
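The ingredient-matching functionality described above can be sketched as a subset test; the recipes below are invented examples, not entries from our actual recipe data:

```python
def matching_recipes(pantry, recipes):
    """Return names of recipes whose ingredients the user already has on hand."""
    have = {item.lower() for item in pantry}
    return [name for name, needed in recipes.items() if needed <= have]

RECIPES = {
    "omelette": {"egg", "butter", "salt"},
    "pancakes": {"flour", "egg", "milk"},
}
matches = matching_recipes(["Egg", "Butter", "salt"], RECIPES)
```

Each matched recipe name is then rendered with its image, instructions, and calorie count.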
How I built it: We used HTML and CSS for the front end to build the UI of the web app and used Python and Pycharm for the backend to build the functionalities of the web app.
Challenges I ran into: We ran into challenges with allowing the search bar to accept a typed item, as well as displaying multiple recipes based on the user's inputted list of ingredients. Another was how to connect Python files to HTML and CSS files, as well as managing our time in a convenient manner.
Accomplishments that I'm proud of: This is one of our first hackathons, as well as one of our first experiences creating a web application.
What I learned: One of our team members learned what front- and back-end development mean and how to use classes in Python. Another learned how to create a search bar and connect HTML and CSS pages to each other to create one fully working web app UI.
What's next for Reciper: What's next for Reciper is a foodie community on the web app. Once we add an authentication system, users will be able to talk to other users about the recipes they found through the app and even add their own, highlighting key ingredients so that the next time someone searches that set of ingredients, they will see the newly added recipe. The profile will also store the number of calories the user consumed that day and can be used to track the user's health and wellbeing.
Built With
css
html
python
Try it out
github.com | Reciper | Want an obedient way to use your ingredients, use Reciper to find the Recipe made for your ingredients! | ['jlz6146 Zhao', 'ananya misra', 'Abraham Gonzalez'] | [] | ['css', 'html', 'python'] | 42 |
10,373 | https://devpost.com/software/home-rubl80 | Inspiration
This project was inspired by our passion for technology and our identities as women and minorities. As we make our way through higher-level CS courses and apply for tech internships, it becomes evident that there's less of "us". While there has been increased discourse surrounding such inequality in recent years, there is still no large platform to support women and minorities in tech, and we believe that more can be done. This web application platform has a simple goal: to foster a community where women and minorities in the tech industry can grow and thrive together.
What it does
Our web application has 4 main pages. The LEARN page provides extensive videos on how to navigate through the tech industry. The CONNECT page is where people can talk about anything they want and expand their network and social circle. The PROGRAMS page is filled with details and links to various programs that specifically support women and minorities, and lastly, the COMPANIES page shows the statistics of how companies are supporting minorities and their supporting programs.
How we built it
Our team used React and Material-UI to build the design of the website and Firebase for OAuth, video storage, and database.
Challenges we ran into
Git errors - unfortunately, running into many git conflicts sucked up a lot of time. Making use of Firebase was also a learning curve.
React and Material-UI were new to us as well. However, the mentors were really helpful, and we were lucky to get help from them.
Accomplishments that we're proud of
We’re a team with a wide range of experience! Despite this challenge, we each learned a lot from each other and from the mentors at Spectra. We were able to build a lot of pages to show and our website looks really nice despite not having a designer. We were getting along very well under stress and everyone was able to contribute to the project.
What we learned
We explored the capabilities of Firebase, VideoAsk, React and Material-UI. Our experimentation with VideoAsk was really fun.
What's next for HOME
We want to be more than a typical website filled with resources. We want to be the go-to place for women and minorities in tech to connect with each other--face to face. Given more time, we would have loved to make more use of VideoAsk so that individual users can post videos about their experiences, advice, etc. and then have others respond, either through video or a chat forum. We also need to complete the functionality of the LEARN page and build out the CONNECT page.
After the Spectra hackathon, we are definitely publishing this platform online and our goal is 5000+ members.
Built With
Javascript, HTML, CSS, Material-UI, Firebase, Express, Axios
express.js
firebase
google
node.js
react
typeform
videoask
Try it out
spectra-d1334.web.app | HOME | A community for women and minorities to learn, grow, and thrive together | ['Anh Pham', 'Christy Zheng', 'Ha Nguyen'] | ['1517 Grant Prize for Most Novel Hack', 'Category Challenge: Best Use of Google Cloud - COVID-19 Hackathon Fund'] | ['express.js', 'firebase', 'google', 'node.js', 'react', 'typeform', 'videoask'] | 0 |
10,373 | https://devpost.com/software/mentor-youniverse-venus-track | Chat Feature
system page
logo
mentee view
login page
registering
mentor view
home page
matches list
Inspiration
Coming out of high school or college and moving on to the real world can be scary. Mentors can be a HUGE help in your life as they guide you through everything from finding a career you'll love to making professional decisions. Instead of filling out a Google Form and having someone else match you with a mentor/mentee, Mentor YOUniverse puts matching with more than one mentor/mentee just a quick swipe away.
What it does
Mentor YOUniverse is a Tinder-like app, currently available only on Android, that matches mentors with mentees and vice versa. Once you match with a mentor or mentee, you can start to chat with them!
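The core rule of a Tinder-like matcher is mutual consent: a chat opens only once both sides have swiped right on each other. A minimal sketch of that rule (in Python rather than the app's Java, with names of our own choosing, not taken from the actual code):

```python
# Hypothetical sketch of mutual matching: a swipe is recorded, and a "match"
# fires only when the target has already liked the swiper back.

def record_swipe(likes, swiper, target, liked):
    """Record a swipe; return True if it completes a mutual match."""
    if liked:
        likes.setdefault(swiper, set()).add(target)
        # Mutual match: the target previously swiped right on the swiper.
        return swiper in likes.get(target, set())
    return False

likes = {}
first = record_swipe(likes, "mentee_1", "mentor_7", True)   # no match yet
matched = record_swipe(likes, "mentor_7", "mentee_1", True)  # mutual -> match
```

Only after `matched` becomes true would the app unlock the chat feature between the two users.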
How we built it
We used Android Studio with Firebase and Gradle. The coding language we used was purely Java, and we used XML documents for the markup and the design!
Challenges we ran into
Sometimes the cards where people should appear on the main screen would disappear. We fixed that after hours of debugging and found out that, in the back-end, the cards feature was streaming to a location that didn't exist!
Accomplishments that we're proud of
The swiping feature is incredibly helpful for ease of quickly matching to many people, and the name of the app is pretty out of this world too! We're also just proud that we created an app from scratch for the very first time, allowing us to learn a lot from this experience.
What we learned
We learned how to use Android Studio, Firebase, and Gradle for the first time, and we learned a lot more about Java too. We are beginners who had never used Android Studio or participated in a hackathon before!
What's next for Mentor YOUniverse - Venus Track
We'll add features including sorting preferences, reporting, and a bio/more elaborate profile! We will also add notifications for matches and messages!
Built With
android-studio
firebase
firebase-documentation
gradle
java
Try it out
github.com | Mentor YOUniverse | You’re a star in this galaxy. You, your peers, and your potential mentors/mentees are light years away from each other! However, no matter how far, Mentor YOUniverse will bring you all together. | ['Selina Huynh'] | ['Best Beginner Hack'] | ['android-studio', 'firebase', 'firebase-documentation', 'gradle', 'java'] | 1 |
10,373 | https://devpost.com/software/cosmic-connections | Sign-in Page
Pressing the calendar icon links users to a typeform (integrated with the TypeForm API) where they can fill out a form to get matched.
The theme for the first matching is personal wellbeing! We match users based on the answers to this question.
Here is another question that is involved in our matching algorithm.
Once matched, users can send letters to their cosmic connection and then wait for a letter back!
Pressing the mail emoji allows us to view the past letters from our cosmic connection!
The home page of our application!
Figma Prototype/Wireframe
Inspiration
In the midst of COVID-19, it is very easy to feel alone. Sometimes, you just want to see the world outside your walls. We are Cosmic Connections – the FREE virtual pen-pal service with personalized matches that are out-of-this-world. Combining the nostalgia of ‘snail mail’ with a fun, interactive user interface, getting a match is as easy as 1, 2, 3. All you have to do is fill out a short survey once a month, and we take care of finding your space-ial match and provide you with thought-provoking prompts to start the conversation. 🌌🛸
Our target audience is young adults around the age of 17 to 25 who are familiar with the concept of pen-pals. In addition to the nostalgic aspect, the Census Bureau shows that young adults (18-29) are most prone to negative mental health consequences due to self-isolation, with 42% reporting anxiety and 36% reporting depression. We hope that our application will allow users to enter a safe space where they can openly communicate with their cosmic connection and develop a friendship that will last light years!!
Watch our project demo here:
https://www.youtube.com/watch?v=0yu-UhFvH3k
Watch our project pitch here:
https://youtu.be/aIzT2oIGUdM
What it does
This web application allows users to find a cosmic connection, essentially a buddy who matches their interests. Cosmic Connections also provides conversation starters to foster new topics for anyone to open up about. A new theme runs every round of matches! All users engage with their cosmic buddy anonymously to foster a safe environment. The optional match reveal at the end of the month is super exciting to look forward to!
After being matched with their cosmic connection, users have the opportunity to send virtual letters to their match. However, users can only send one letter at a time and may write again only after their match replies. This mimics the real-life rhythm of sending someone a letter and provides a unique feature in a culture heavily dependent on instant gratification.
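The one-letter-at-a-time rule boils down to a simple turn gate: you may send a new letter only if the most recent letter in the thread is not yours. A hedged sketch (names are ours, not the app's actual code):

```python
# Illustrative turn-based gate for the pen-pal letter exchange.

def can_send(thread, sender):
    """Allow sending when the thread is empty or the other person wrote last."""
    return not thread or thread[-1]["from"] != sender

def send_letter(thread, sender, body):
    if not can_send(thread, sender):
        raise ValueError("Wait for your cosmic connection to reply first!")
    thread.append({"from": sender, "body": body})

thread = []
send_letter(thread, "alice", "Hello from Earth!")
# send_letter(thread, "alice", "Me again!")  # would raise: must wait for a reply
send_letter(thread, "bob", "Greetings from the Moon!")
```

In the real app the thread would live in Firestore rather than a Python list, but the gating logic is the same idea.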
How we built it
In creating Cosmic Connections, we used Figma and Adobe software to develop a wireframe and graphic components. We chose a palette that went well with the space theme and evoked serenity as well as alertness. On the technology side, our web app is made with React and Google Cloud Firestore, integrated with the TypeForm API and Cloud Function webhooks.
We used the TypeForm API to embed a form within our website that allows users to match with their cosmic connections based on their interests. We collected the responses from the TypeForm API and directly connected it to our Firebase backend.
We used Firebase authentication to validate user emails and passwords. In addition, we stored various fields in our database including connections, letters, and user information. This data is incorporated to match users with others with the same interest.
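The matching step described above, pairing users who gave the same answer to the monthly theme question, can be sketched like this (field names are assumptions for illustration, not the actual Firestore schema):

```python
# Hypothetical interest-based matcher: bucket users by their survey answer,
# then pair users within each bucket; leftovers carry over unmatched.
from collections import defaultdict

def match_by_interest(responses):
    """responses: {user_id: interest}. Returns (pairs, unmatched)."""
    buckets = defaultdict(list)
    for user, interest in responses.items():
        buckets[interest].append(user)
    pairs, unmatched = [], []
    for group in buckets.values():
        while len(group) >= 2:
            pairs.append((group.pop(), group.pop()))
        unmatched.extend(group)
    return pairs, unmatched

pairs, leftover = match_by_interest(
    {"u1": "meditation", "u2": "running", "u3": "meditation", "u4": "journaling"}
)
```

A production version would also exclude previous matches and weigh multiple questions, which is exactly the algorithm improvement we mention under "What's next".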
Challenges we ran into
The challenges that we ran into were configuring the Firebase and integrating external APIs. Also, working together with new friends from different time zones was super fun but also required more planning. 🕓
Accomplishments that we're proud of
We are proud of our space theme and our usage of the tools available to us! We designed all of the UI components (including the spaceship and the background galaxy) ourselves through hand-drawn sketches and graphic design. Functionality-wise, we are proud of embedding the TypeForm API and integrating Firebase.
What we learned
We learned more about personal connections and new technologies! This is many of our first times using these technologies but thanks to mentors and friends, we were able to ramp up our learning! Thank you to everyone who hosted and helped us!
What's next for Cosmic Connections
Some future implementations will be features such as improving our matching algorithm to consider similar interests, adding community channels, and enabling different activities between pen-pals. We hope to create game nights and a variety of conversation starters for easy social interactions! This web app is meant for young adults who feel isolated and would like to meet people from anywhere around the world. We will be constantly incorporating new ideas that suit our target audience. Stay tuned!
Built With
adobe-creative-suite
figma
firebase
google-cloud
react
typeform
Try it out
github.com
drive.google.com
www.youtube.com
vyl003.typeform.com | Cosmic Connections | The Easy E-Pal Service to meet your SPACE-ial match! | ['Vasundhara Sengupta', 'Rossa Sabu', 'Vivian L', 'Chau Vu'] | ["Spectra's General Hack"] | ['adobe-creative-suite', 'figma', 'firebase', 'google-cloud', 'react', 'typeform'] | 2 |
10,373 | https://devpost.com/software/manifest-covid | Manifest Covid
GIF
white dots note presence of moving object
GIF
gray ellipses note past movement of moving objects
GIF
what it looks like when rendered locally
Inspiration
We read an article about how we shouldn't walk where runners have recently passed to help avoid accidentally inhaling some airborne particles from the runner, and we thought, "how long do we have to wait until it's okay to walk where runners have passed?" Hence this idea was born.
What it does
Manifest COVID helps visualize the potential presence of airborne particles by generating "particulate" visuals in the wake of a person's movement that disappear over time.
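The disappearing-over-time behavior amounts to a per-frame fade: each "particulate" dot left in a person's wake loses opacity every frame until it drops out. A minimal sketch of that loop (in Python rather than the project's p5.js, with our own variable names):

```python
# Illustrative fading-trail update: dim every particle each frame and
# drop the ones whose opacity has reached zero.

def step(particles, fade=5):
    """Advance one frame; return the particles that are still visible."""
    survivors = []
    for p in particles:
        p["alpha"] -= fade
        if p["alpha"] > 0:
            survivors.append(p)
    return survivors

trail = [{"x": 10, "y": 20, "alpha": 255}, {"x": 12, "y": 21, "alpha": 3}]
trail = step(trail)                    # the nearly faded particle disappears
frames_left = trail[0]["alpha"] // 5   # frames until the other one fades too
```

In p5.js the same idea would run inside `draw()`, rendering each surviving particle with its current alpha.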
How I built it
We used GCP to analyze video information and built the animation using p5.js.
Challenges I ran into
GCP was a learning curve for us all, from setting up the project to be properly billed to using the Video Intelligence API
Tying together p5.js to generate the box coordinates, interpolate locations between sampled video frames, and overlay the animation onto the video
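The interpolation challenge above comes from the fact that object tracking reports a bounding box only at sampled timestamps, so intermediate frames have to be filled in by blending the two surrounding boxes. A hedged sketch of that step (variable names are ours):

```python
# Linearly interpolate a normalized bounding box between two sampled timestamps.

def lerp(a, b, t):
    return a + (b - a) * t

def interpolate_box(box0, t0, box1, t1, t):
    """Estimate the box at time t, where t0 <= t <= t1."""
    f = (t - t0) / (t1 - t0) if t1 != t0 else 0.0
    return {k: lerp(box0[k], box1[k], f) for k in ("left", "top", "right", "bottom")}

b0 = {"left": 0.10, "top": 0.20, "right": 0.30, "bottom": 0.60}
b1 = {"left": 0.20, "top": 0.20, "right": 0.40, "bottom": 0.60}
mid = interpolate_box(b0, 1.0, b1, 2.0, 1.5)  # halfway between the two samples
```

The interpolated box then drives where the p5.js animation draws its trail for that frame.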
Accomplishments that I'm proud of
Not using GCP's AutoML API/tool/feature for this project (though this was fun to explore!). AutoML would've done something similar to what we accomplished, except we would've had less control and we wouldn't have had a chance to understand the environment through the GCP console CLI. The me in the past might've opted for this route because it's relatively easier and less intimidating, but today I'm going to be bold and say, I can do it either way.
What I learned
We learned more about the capabilities of GCP to analyze data as if we had the backing of hundreds of machine learning professionals and data scientists. We also learned about a very powerful visualizing/digital art tool that is p5, which also works well with ml5 (to apply future machine learning).
We also performed a video capture of our desktop for the first time, and in finding a freemium app (ice cream app) to record a specific section of our desktop, we also learned that its logo appears in the recording. We promise we worked on this project - not ice cream apps!
What's next for Manifest COVID
Some applications include:
implementing collision detection and maintaining a history of when collisions happen, to later analyze which areas tend to have activity. This information may be useful for people in operations or urban planning to better understand user behavior and the impact of environmental design
implementing this at the augmented-reality level - e.g., playing Pokemon GO? Let's make sure you're not stepping into the wake of someone's path too soon just to arrive at that next gym, and flash the screen red to show danger. A wearable like Google Glass could use this, too.
Built With
gcp
p5 | Manifest COVID | Can't see airborne virus particles? Well, now you can! | ['Anita Yip', 'Anoushka Sengupta'] | ['Mercury Track: Best Safety & Security Hack'] | ['gcp', 'p5'] | 3 |