hackathon_id int64 1.57k 23.4k | project_link stringlengths 30 96 | full_desc stringlengths 1 547k ⌀ | title stringlengths 1 60 ⌀ | brief_desc stringlengths 1 200 ⌀ | team_members stringlengths 2 870 | prize stringlengths 2 792 | tags stringlengths 2 4.47k | __index_level_0__ int64 0 695 |
|---|---|---|---|---|---|---|---|---|
10,414 | https://devpost.com/software/arduino-piano-with-lcd | Inspiration
It's MLH's Hacky Birthday, so I thought I'd develop some sort of Arduino-based device that matches the theme of MLH's 7th birthday.
What it does
There are push buttons that, when pressed, play piano notes, so we can play any tune we want. The possibilities are limited only by the user's imagination. The device can also display a message; here in my project I have displayed "Hacky Birthday MLH!"
How I built it
I built it with an Arduino microcontroller inside the Tinkercad simulation environment. I used push buttons for input, a piezo buzzer as the piano output, and a 16x2 LCD to display the message.
Challenges I ran into
I originally tried to build it physically, but I didn't have an LCD or push buttons, so I switched to the simulated environment.
I also ran into problems opening the video editor and am still trying to figure that out.
Accomplishments that I'm proud of
I am happy that I came up with some geeky-hacky project for this hackathon.
What I learned
I learned about the pin configuration of the LCD display and how to play notes of different frequencies with the help of the sound library in Arduino.
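The note frequencies mentioned here follow standard equal temperament: the n-th key on an 88-key piano has frequency 440 × 2^((n−49)/12) Hz. A small sketch of the math (in Python rather than the project's Arduino C++, and the 88-key numbering is my assumption, not the project's):

```python
def key_frequency(n: int) -> float:
    """Frequency in Hz of the n-th key on a standard 88-key piano.

    Key 49 is A4, tuned to 440 Hz; each semitone multiplies the
    frequency by the twelfth root of two (equal temperament).
    """
    return 440.0 * 2 ** ((n - 49) / 12)

# A4 (key 49) is the 440 Hz reference; C4 ("middle C") is key 40.
print(round(key_frequency(49)))  # 440
print(round(key_frequency(40)))  # 262
```

On the Arduino side, these are the values you would pass to `tone()` for each button's note.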
What's next for Arduino Piano with Lcd
As a future upgrade, I plan to replace the push buttons with capacitive touch, which would make the piano much easier to play.
Built With
arduino
tinkercad | Arduino Piano with Display | Lets have some fun with DIY piano and display some messages | ['Naseeb Dangi'] | [] | ['arduino', 'tinkercad'] | 31 |
10,414 | https://devpost.com/software/note-o-matic-y5do80 | note-o-matic
Inspiration
As a group of students, we're often presented with huge amounts of notes to condense and read through in a painful, paragraphed format. We built note-o-matic to combat this problem, by taking in notes, reading and condensing them through NLTK, finding key words using an expectation maximisation algorithm and outputting new, fresh notes.
How did we make it?
We used Flask in order to connect our frontend and backend together, and the application is hosted using Google Cloud's App Engine. We used the NLTK module in Python in order to process the notes as per the English language, and using word2vec to create and graph words, as well as density-based scanning, we were able to determine what words are most linked to others in clusters, allowing us to graph and source the key words from the inputted notes.
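The full pipeline above relies on NLTK, word2vec, and density-based clustering; as a minimal, dependency-free sketch of just the frequency-scoring side of keyword extraction (the tiny stopword list here is an illustrative stand-in, not the project's actual NLTK corpus):

```python
from collections import Counter
import re

# A tiny stopword list stands in for NLTK's corpus here.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are",
             "for", "on", "with", "that", "this", "it", "as", "we", "by"}

def key_words(text: str, top_n: int = 5) -> list:
    """Return the top_n most frequent non-stopword tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

notes = ("Neural networks learn weights by gradient descent. "
         "Gradient descent minimises a loss over the weights.")
print(key_words(notes, 3))  # the repeated terms surface first
```

The clustering step in the real app then groups related words via word2vec embeddings rather than raw counts.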
What did we learn?
We all managed to improve our Python skills, especially mathematically and algorithmically due to the complexity of word2vec. Creating a whole application from scratch was something none of us have done before, and integrating each of our individual skills together - design, coding, testing and cloud work - was an exciting task.
How challenging was it?
The one common technology everyone on our team knew was Python, so creating and building the Python backend was no issue - however, we all have little experience with Flask, Google Cloud and NLTK (with its related technologies). For us, it was difficult to learn a lot of this from scratch, especially dealing with Google Cloud errors and the algorithmic complexity of graphing and extracting words from clusters.
Future Plans
To develop the project further, we'd like to add functionality using PyLaTeX and pdfLaTeX to format notes and outputted data into a final PDF which can be downloaded.
Built With
css
flask
html
nltk
python
Try it out
note-o-matic.tech
github.com | note-o-matic | note-o-matic takes overly-long notes and provides you with key words and insights, allowing you to then summarise them neatly on a web page. | ['Ana V', 'Sam Leonard', 'Sai Putravu', 'rak1507 1507'] | [] | ['css', 'flask', 'html', 'nltk', 'python'] | 32 |
10,414 | https://devpost.com/software/visually-impaired-s-assistor-03i1g2 | What it does
This model captures the vicinity of a blind person with a camera, extracts the names of all objects in view as text, and finally converts that text into sound, giving the user a descriptive, spoken view of their surroundings. It can also extract any text present in the image. To help blind children learn in a classroom setting where sighted children read textbooks, our project extracts all the text in an image and either converts it to Braille script (which can then be fed to Braille printers) or reads it out loud, according to the student's requirements.
In the COVID scenario, where people touching contaminated surfaces are at a high risk of contracting the virus, the visually impaired are at even higher risk because they have to touch objects to identify them. With this solution, they only have to take a picture of an object to find out what it is, without ever touching it.
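The project lists pytesseract and pyttsx3 among its tools; here is a minimal sketch of how the capture-to-speech glue might look, with the OCR and speech functions injected so the example runs without a camera or TTS engine (the commented-out wiring is my assumption about how the real libraries would plug in, not the project's actual code):

```python
from typing import Callable

def describe_image(image, ocr: Callable, speak: Callable) -> str:
    """Extract text from an image and read it aloud.

    `ocr` and `speak` are injected so the pipeline can be tested
    without Tesseract or a TTS engine installed.
    """
    text = ocr(image).strip()
    if not text:
        text = "No text found in view."
    speak(text)
    return text

# With the real libraries this would be wired up roughly as:
#   import pytesseract, pyttsx3
#   engine = pyttsx3.init()
#   describe_image(img, pytesseract.image_to_string,
#                  lambda t: (engine.say(t), engine.runAndWait()))
spoken = []
describe_image("fake-img", lambda _: " Exit door ahead ", spoken.append)
print(spoken)  # ['Exit door ahead']
```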
Built With
numpy
opencv
pillow
pyperclip
pysimplegui
pytesseract
python
pyttsx3
yolo
Try it out
github.com | Visually Impaired's Assistor | An assistor for the visually impaired to help them get an idea about their vicinity, convert any captured text to audio and braille. | ['kinster007'] | [] | ['numpy', 'opencv', 'pillow', 'pyperclip', 'pysimplegui', 'pytesseract', 'python', 'pyttsx3', 'yolo'] | 33 |
10,414 | https://devpost.com/software/drop-a-smile | dropasmileinthis.space
Inspiration
The world around us can be a sad and depressing place. However, we are all willing to help another person out if we see or know of someone in distress or needing help. With technology we can bring out people's helping nature and spread smiles!
What it does
Think of it as a Waze for helping people. When we see or notice someone in trouble or needing help (not a 911 emergency; we absolutely recommend calling the authorities for anything observed of a serious nature), a user can place a pin with an emoji on it using our app. The emoji signifies what the problem is and can be accompanied by a short text description and/or an image. When someone else sees the pin in the app, they have the chance to help, and can then share that they have resolved the situation by changing the pin's emoji into a smile :)
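As a rough sketch of the pin lifecycle just described (the `PinBoard` class and its field names are illustrative; the real app keeps pins in MongoDB Atlas behind GCP serverless functions):

```python
import itertools

class PinBoard:
    """Minimal in-memory sketch of the drop-a-pin / mark-helped flow."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.pins = {}

    def drop_pin(self, lat, lon, emoji, note=""):
        """A user marks a spot where someone needs help."""
        pin_id = next(self._ids)
        self.pins[pin_id] = {"lat": lat, "lon": lon, "emoji": emoji,
                             "note": note, "resolved": False}
        return pin_id

    def mark_helped(self, pin_id):
        """A helper resolves the situation: the pin becomes a smile."""
        pin = self.pins[pin_id]
        pin["emoji"] = ":)"
        pin["resolved"] = True

board = PinBoard()
pid = board.drop_pin(40.7, -74.0, "flat-tire", "Car stuck on 5th Ave")
board.mark_helped(pid)
print(board.pins[pid]["emoji"])  # :)
```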
How we built it
the frontend was built with React Native
the maps were built using the GCP Google Maps API
the backend was built using GCP serverless functions
the database was hosted on MongoDB Atlas
Challenges we ran into
keeping the map active and refreshed
emoji animation
Accomplishments that we're proud of
the system actually works.
What we learned
We were actually surprised this wasn't already a thing.
What's next for Drop-a-Smile
Hopefully a charitable organization can use this system; we would be glad to build out a robust version for free.
domain registered: dropasmileinthis.space
Built With
google-cloud
mongodb
react-native
Try it out
github.com | Drop-a-Smile | Happy communities! | ['Ebtesam Haque', 'Muntaser Syed'] | ['Certified Dank'] | ['google-cloud', 'mongodb', 'react-native'] | 34 |
10,414 | https://devpost.com/software/giftswapr | Inspiration
This app is inspired by the popular and super fun game Secret Santa that we have all played during Christmas. Except, with a twist... This game runs all year long, and gives celebrating birthdays an element of surprise.
What it does
Giftswapr is a mobile app that allows you to create wish lists for your birthday and anonymously gift your friends things on their wish-lists. It is a fun app that gamifies celebrating birthdays! You can search for your friends' usernames and claim items on their wish-lists, as well as customize and create your own wish-list for your friends to see.
How we built it
Figma for design.
React-native for front-end.
Flask python for back-end server.
MongoDb for storing user data.
Gcloud for web hosting.
Challenges we ran into
We decided to participate when there were only four hours left before the submission deadline (lol), so we had major time constraints.
Getting React Native to work quickly.
Web hosting
Accomplishments that we're proud of
Pulling off a fully functional mobile app in so little time.
Creating a responsive and clean UI.
Setting up a functional backend server and db.
What we learned
How to use React Native.
MongoDb.
Gcloud hosting.
What's next for GiftSwapr
We want to make the login system more sophisticated, with password reset.
Notifications for friends' birthdays, and integration with calendar.
Sophisticated search engine using elastic
Built With
appengine
flask
gcp
mongodb
python
react-native
Try it out
github.com | GiftSwapr | Be the Secret Santa on your best friend's special day. | ['Veer Gadodia', 'Nand Vinchhi', 'Muntaser Syed', 'Ebtesam Haque'] | [] | ['appengine', 'flask', 'gcp', 'mongodb', 'python', 'react-native'] | 35 |
10,414 | https://devpost.com/software/ar-birthday | AR Studio
Inspiration
This pandemic has made every one of us stay home. As a result, most of us cannot celebrate our birthdays like we did in the good old days, and the same goes for MLH's birthday this time. So why don't we blow out the candle on a cake in AR, right in our homes, and celebrate the birthday? That's how I came up with the idea of building an AR filter to celebrate MLH's 7th birthday.
What it does
As soon as you launch this AR filter on Instagram you can view a 3D cake appearing right in front of you.
If you don't like the default cake, don't worry: we have two other options. Select the cake you like.
Now it's time to light the candle: tap on the screen to light it. Once it's time to blow it out, click on the candle flame, and let the celebration begin.
So what are you waiting for?
Go, find the link in the description below and try the AR filter yourself. Check out my GitHub to find instructions to make one such filter for yourself.
How I built it
I have made this filter using Facebook's Spark AR studio.
Challenges I ran into
Finding 3D models and templates was hard. However, after a little research on the internet, I was able to find a few 3D models and artwork from other open projects and websites.
Built With
ar-studio
instagram
sparkar
Try it out
github.com
www.instagram.com | AR Birthday | Instagram AR filter to celebrate MLH's Birthday, right at your home ! | ['Nitish Gadangi', 'Sainag Gadangi'] | [] | ['ar-studio', 'instagram', 'sparkar'] | 36 |
10,414 | https://devpost.com/software/vision-0biq9m | Splash Screen
Help Screen
Home Screen
Camera Capturing Text
vision
This repository contains code files for the app named Vision.
This app has some special features which can be used by Blind People!!
Assume that there is a paper on which something is written.
Now, if the blind person wants to read it, they can't.
To open the app, they just need to say "OK Google! Open Vision."
So, they can simply click the pic.
The app recognizes text from the paper (OCR) and then converts it into audio (TTS) which can be heard by the user!!
So, great!! Now they have VISION!!
Screenshots
Built With
dart
java
objective-c
ruby
Try it out
github.com | Vision | Eye for blind! | ['Abhishek Doshi'] | [] | ['dart', 'java', 'objective-c', 'ruby'] | 37 |
10,414 | https://devpost.com/software/birthday-blitz | Maze Generation Sample
Eric's Octocat Drawing!
Happy Birthday MLH!
Inspiration
Pac-man, party games, and birthday spirit.
What it does
Brings people together for celebration and friendly competition. Covers almost all PC platforms and is browser-friendly through WebGL. Easy-to-learn controls that require just four keys or a joystick.
How we built it
Through the Unity game engine and Photon PUN multiplayer support. We randomly generate mazes through Kruskal's Algorithm, and synchronize repeated generation through randomized seeding across all players simultaneously.
Challenges we ran into
The biggest challenge we ran into was synchronizing the maze generation across multiple players. This is difficult because the maze is regenerated at random every 45 to 90 seconds. We were able to maintain this randomness while load balancing through client-side generation by using seeding properties to our advantage. Through this technique we can ensure that mazes are generated multiple times with multiple seeds, all remotely on the client device based on the host clock.
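The seeding trick described above can be sketched in a few lines, here in Python rather than the game's C#/Unity: run Kruskal's algorithm over the grid's walls with a deterministic RNG, and any client that derives the same seed (e.g. from the host clock) computes an identical maze locally, so no maze data crosses the network.

```python
import random

def generate_maze(width, height, seed):
    """Kruskal's algorithm on a grid: returns the set of opened walls."""
    rng = random.Random(seed)          # deterministic per-round seed
    parent = list(range(width * height))

    def find(c):                       # union-find with path halving
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    # Every wall between two orthogonally adjacent cells.
    walls = [((x, y), (x + 1, y)) for y in range(height) for x in range(width - 1)]
    walls += [((x, y), (x, y + 1)) for y in range(height - 1) for x in range(width)]
    rng.shuffle(walls)

    passages = set()
    for (ax, ay), (bx, by) in walls:
        ra, rb = find(ay * width + ax), find(by * width + bx)
        if ra != rb:                   # joining two components: knock the wall down
            parent[ra] = rb
            passages.add(((ax, ay), (bx, by)))
    return passages

# Two "players" with the same round seed derive identical mazes.
assert generate_maze(8, 8, seed=42) == generate_maze(8, 8, seed=42)
```

The result is a spanning tree over the cells, so an n-cell maze always has exactly n − 1 passages.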
Accomplishments that we're proud of
The top three accomplishments we're most proud of are:
Maze synchronization
Excellent time management
Creative addition of Computer Science algorithms, techniques, and concepts.
What's next for Birthday Blitz
Possible expansion or game release in the future! Watch out for our work at WingScythe!
Built With
blender
c#
css
github
html
javascript
photon
piskelapp
unity
zenhub
Try it out
wingscythe.com
github.com | Birthday Blitz | HOORAY! Its MLH's birthday today! What better to celebrate than to eat a big fat cake! Bring your friends and race your way through the maze to sprinkle the cake with your delicacies. | ['Ryan Xu', 'Jeffrey Weng', 'Andy Zheng', 'Eric Tong'] | [] | ['blender', 'c#', 'css', 'github', 'html', 'javascript', 'photon', 'piskelapp', 'unity', 'zenhub'] | 38 |
10,414 | https://devpost.com/software/hackalendar-xomprs | Final email!
Brief information about HacKalendar from our slide.
Inspiration
As new hackers (and even now after a little bit of experience), we are always on the lookout for the most popular Hackathons taking place. Participating in larger Hackathons provides the opportunity to attend great workshops, talk to more company recruiters, meet more hackers and have a bigger platform to showcase ideas. We wanted to develop something that would help hackers find the most happening Hackathons especially during these unprecedented times when it is hard for information to be passed on through peers and friends.
What it does
Our hack utilizes UiPath. On a request, the automation finds the most popular Hackathons (through the number of participants) and compiles a short list with some key information. This list is then emailed to recipients so that they can always be up to date on the latest and greatest Hackathons! We see use cases for Universities to encourage participation in Hackathons, and perhaps even for Devpost’s members.
How we built it
We built this program completely with UiPath. It was a technology that we had not tried before and were extremely curious to explore. In all honesty, we will probably be using it again in the future, as its simplicity and versatility really surprised us!
As for the setup, we made a Google Cloud Project that would allow us to use the Gmail API provided by the Google Cloud Platform. We then integrated this with our UiPath project by providing it with the Client ID and Secret.
The first task was to collect data on all Hackathons taking place now or in the future. To do this, we used the "Data Scraping" tool in UiPath Studio and extracted all the data, including the name, URL, participants, and time until submission for each Hackathon. We then formatted this data into an Excel spreadsheet, sorting it in descending order on the participants column.
Next we built a for loop that extracts the first five rows of the data to put it in table format. Once it was in this format, we used the "Send Email" activity to send an email message with the table and some text. We wrote the body of the email in HTML so that the table renders accurately.
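The UiPath activities themselves can't be reproduced as code here, but the sort-and-render step works out to something like this Python sketch (the field names `participants`, `url`, and `deadline` are my assumptions about the scraped columns):

```python
def top_hackathons_html(hackathons, n=5):
    """Sort scraped rows by participants (descending) and render the
    top n as the HTML table that goes into the email body."""
    top = sorted(hackathons, key=lambda h: h["participants"], reverse=True)[:n]
    rows = "".join(
        f"<tr><td><a href='{h['url']}'>{h['name']}</a></td>"
        f"<td>{h['participants']}</td><td>{h['deadline']}</td></tr>"
        for h in top)
    return ("<table><tr><th>Hackathon</th><th>Participants</th>"
            "<th>Submissions close</th></tr>" + rows + "</table>")

sample = [
    {"name": "Hack A", "url": "https://example.com/a",
     "participants": 120, "deadline": "3 days"},
    {"name": "Hack B", "url": "https://example.com/b",
     "participants": 950, "deadline": "1 day"},
]
html = top_hackathons_html(sample)
print("Hack B" in html.split("Hack A")[0])  # True: biggest hackathon listed first
```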
Challenges we ran into
One of the biggest challenges we ran into was collecting data from each Hackathon’s URL individually. We were not able to accomplish this due to our inexperience with UiPath Studio. However, we soon realized that all the data we needed was already being displayed in the main Hackathons list page on Devpost. Therefore, we simply went with this idea and gathered data from here.
Accomplishments that we're proud of
We are glad to have completed a fully functional tool that can be used! It was great to add RPA and UiPath Studio to our skillsets so that we can continue to work on other automation projects.
What we learned
The major thing we learnt was the concept of RPA (Robotic Process Automation). This tool proved to be quite efficient in automating numerous tasks that organizations and companies would otherwise need manual work for. UiPath Studio's drag-and-drop functionality makes the user interface very smooth and easy to work with. All in all, we are glad we spent time learning something that will be of use in future projects!
What's next for HacKalendar
We plan to develop our RPA even more by allowing Hackathon events to directly be added to Google Calendar along with the email feature. Additionally, we would love to explore a way to integrate our project with mobile applications.
Built With
google-cloud
google-gmail-oauth
html
uipath
Try it out
github.com | HacKalendar | Our UiPath project allows hackers to find out about the most popular Hackathons taking place! | ['Ricky Bengani', 'Kanav Bengani'] | [] | ['google-cloud', 'google-gmail-oauth', 'html', 'uipath'] | 39 |
10,414 | https://devpost.com/software/code_blocks | CodeXplode
Welcome to CodeXplode, a fun matching game that guides young kids and beginners through language-agnostic coding syntax!
This is a single-page React app mostly styled with React-Bootstrap.
Difficulties
CodeXplode was a surprisingly difficult app to manage and create. What we expected to be a quick drag-and-drop game with fun, shiny graphics, ended up being quite the monster to put together. Dragging and dropping in React is much harder than it seems, especially when you want to dynamically match style on different components along the way.
We also enjoyed learning more of React-Bootstrap during this project. The three of us hadn't had much experience with the styling library before, so though it slowed our progress down, we're thankful for the learning opportunity.
The Future
In the future, we'd love to fix some glitches, add an animated sprite avatar that accompanies the learner throughout their lessons, and add some fun graphics that explode the blocks of code when a bomb is correctly placed.
Thanks for checking our project out!
Created by Tom, Tanner, and Zach.
Built With
css
html
javascript
react
react-bootstrap
Try it out
github.com | CodeXplode | A fun matching game guides young kids and beginners through language-agnostic coding syntax! | ['Zach Nicholson', 'Tanner Brainard', 'tstrotherIV Strother'] | [] | ['css', 'html', 'javascript', 'react', 'react-bootstrap'] | 40 |
10,414 | https://devpost.com/software/techagro-mv25ys | jsjasj
Built With
at-mega
c++
github
pcb
pcb-designing
usbtoserial-conversion | techtechho | n | ['Prakhar Saxena'] | [] | ['at-mega', 'c++', 'github', 'pcb', 'pcb-designing', 'usbtoserial-conversion'] | 41 |
10,414 | https://devpost.com/software/no-test-no-problem | COVID Positive ID
COVID Negative ID
COVID-19 Positive
COVID-19 Negative
Convolutional Heap Map
Inspiration
With COVID-19 tests being carefully rationed out and there being multiple scarcities, patients may not have access to a traditional test. Our software can diagnose a patient purely on a CT scan, eliminating the need for single use tests. We used the COVID-19 Lung CT Scans by LuisBlanche on Kaggle.
What it does
Our web app has a form for submitting patient data and uploading a CT scan image. We then pass the pixel data to our server, which runs several Tensorflow models. We then take the average confidence of all the models, and return the prediction to the browser. You can test it with the CT Scan images in the Devpost Gallery.
How I built it
We built 8 deep learning models with TensorFlow and Keras that integrate convolutional neural network architecture and were trained using K-fold cross-validation, in order to make the best use of a limited dataset. Our models achieved nearly 90% accuracy, allowing hospitals to use this as a tool to diagnose patients when resources are limited.
Challenges I ran into
Over the course of this hackathon we were able to create a model which achieves nearly 90% accuracy. One issue we had was not having a powerful enough processing unit to train the model from the start of the competition; we eventually used an NVIDIA V100 GPU on Google Cloud Platform. Given a better processing unit from the start, and more time, we would have been able to achieve greater accuracy, but we still managed ~90%.
Accomplishments that I'm proud of
We used an NVIDIA V100 Graphics Processing Unit on Google Cloud Platform to train our models. We were also able to finish this entire project in 24 hours.
What I learned
During this hackathon we learned and used Django to link the website, where a user uploads a CT scan, to the back-end model that predicts whether a patient has COVID-19.
What's next for No Test No Problem
We plan to add a database structure to hold patient and prediction data. We hope that this functionality will make our app more appealing to healthcare professionals.
Built With
css3
django
google-cloud
html5
jquery
keras
python
tensorflow
Try it out
github.com
www.notestnoproblem.live | No Test No Problem | We made a web app that utilizes machine learning to classify CT scans of lungs for COVID-19 and store patient data. This might be the difference between life and death if you cannot get a real test | ['Mohit Chhaya', 'Kabir Pathak', 'Pranish Pantha', 'Maanav Singh', 'Sachet Patil'] | ['Best Business Potential'] | ['css3', 'django', 'google-cloud', 'html5', 'jquery', 'keras', 'python', 'tensorflow'] | 42 |
10,414 | https://devpost.com/software/police-brutality-forum | Inspiration
Police brutality has been in the news lately, so we decided to make it the subject of our project. After realizing that mainstream social media sites are sometimes forced to take down or flag content related to police brutality, we decided there was a need for a more independent community.
What it does
It is a forum for police brutality victims and allies to build a community where everyone can share their experiences and resources.
How we built it
We used flask as the framework for our web application. Flask handled the logic and served up the web pages coded using HTML. After completing a back-end and a basic front-end we used a CSS framework, Bootstrap, to make everything look better.
Challenges we surpassed
Creating the database and figuring out the relationship between Users and Posts
Preventing duplicate users/usernames and mismatched passwords during registration
Creating forms using WTForms
Figuring out notifications
Profile picture integration using Gravatar
Emailing website error notifications to administrators
Accepting cryptocurrency donations through Coinbase
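One of those challenges, rejecting duplicate usernames and mismatched passwords at registration, can be sketched roughly as follows (the project uses Werkzeug's password helpers and SQLAlchemy; this stdlib version with `pbkdf2_hmac` and an in-memory dict is only an illustration):

```python
import hashlib, os

users = {}  # username -> (salt, password_hash); the real site uses SQLAlchemy

def register(username, password, confirm):
    """Reject duplicate usernames and mismatched passwords, then store
    a salted password hash rather than the plaintext."""
    if username in users:
        return "username already taken"
    if password != confirm:
        return "passwords do not match"
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    users[username] = (salt, digest)
    return "ok"

print(register("dhruv", "s3cret", "s3cret"))  # ok
print(register("dhruv", "other", "other"))    # username already taken
print(register("siya", "abc", "abd"))         # passwords do not match
```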
Accomplishments that we're proud of
We made a functioning website that can be used to help people who have been victims of police brutality.
What we learned
Web Development using flask
What's next for Police Brutality Forum
Next, we will continue to improve the website by adding more features and sources, as well as continuing to make the forum more user-friendly. Upcoming features include a like/dislike system with methods for sorting posts (such as by popularity or most likes), password recovery through email, the ability to delete/edit posts, image/link support for posts, and tags to sort posts into categories.
Built With
bootstrap
flask
html
jinja
python
sqlalchemy
werkzeug
wtforms
Try it out
github.com | Police Brutality Forum | A forum for police brutality victims and allies to build a community where everyone can share their experiences and resources. | ['Dhruv Batra', 'siya batra'] | ['Honorable Mention'] | ['bootstrap', 'flask', 'html', 'jinja', 'python', 'sqlalchemy', 'werkzeug', 'wtforms'] | 43 |
10,414 | https://devpost.com/software/tempest-awycgp | Home Page
Storm Dashboard
Predicted Damage Probability Map
Predicted Monetary Damage Map
Embedded Hurricane Map
Hurricane Monetary and Severity Predictions
Upload Before and After Image
Predicted Damage Visualization
Example Damage Visualizations
Inspiration
As millions of people throughout the nation suffer from the sweeping problems of natural disasters, our team reflected on how we might assist the people who lose anything and everything. These storms are responsible for billions of dollars in losses and thousands of lives.
Our team was determined to severely mitigate the losses generated by storms by predicting their costs and impacts. In order to have a civic impact, we wanted to help communities and governments adapt and more effectively respond to weather disasters. We wanted to become civically engaged in our government and community, and we realized providing software to solve massive problems was the best way to do so, especially during quarantine. We hope our solution will bring together the overarching national community of citizens affected by disasters and encourage government, crowd-sourced planning to combat these detrimental effects.
Thus, we took a unique approach from the common hackathon project.
Instead of creating an application meant for general use, we developed an application specifically for state and city governments. We plan to implement our software as part of a nationwide government plan to promote smarter disaster response and efficient planning. Instead of having a grassroots approach to helping the community, we believe using the government’s platform is the best method of outsourcing our solution. Since governments often utilize outside developers to build applications, we believe our website fills a normally unoccupied niche, and projects like this should be encouraged in the hackathon community.
Thus, we developed Tempest, an application that uses ML to allow governments to prepare for storms and disasters by providing visualizations and predicting important statistics.
What it does
Tempest is a unique progressive web application that lets users and governments predict the outcomes of natural disasters. Using web scraped data, we were able to predict where storms would end up causing most damage and create interactive visualizations to aid the process.
We first developed a tornado model, which can predict the probability that a tornado does severe damage as well as the monetary value of the damage. We trained our model on data from NOAA, which contains tornado data such as wind speeds, duration, and azimuth values. Our model then outputs a magnitude probability from 0 to 1, with 0 being no impact and 1 being devastating. In addition, our model also predicted the monetary damage from each storm in dollars. We trained our model using AI-Fabric from UiPath, allowing us to train all our models at fast speeds. Our map includes completed tornadoes from Sept. 2019 to July 2020, and we also predicted tornadoes for the upcoming month of August since data exists for it. We exported all our map data by month from our Python model, and from there we fed it into a map visualization we found through insightful documentation. This allows governments to adequately prepare for disasters, speed up recovery, and minimize costs.
Even more dangerous than tornadoes are hurricanes. We embedded a map of upcoming hurricanes from the website LivingAtlas.org. We then used our tornado model and retrained it on this hurricane data. More importantly, our model takes the information and outputs both the magnitude of the hurricane on the Saffir-Simpson Hurricane Wind Scale, which classifies hurricanes on a scale of 1-5, based on data such as wind speeds and temperatures. We displayed the three upcoming hurricanes in the US. Additionally, we also output how much monetary damage each of the upcoming hurricanes will cause along with a satellite image of the storm, allowing residents and local governments to allocate proper funds and shelter themselves as much as possible.
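The Saffir-Simpson categories mentioned above are defined by sustained wind speed; a small reference lookup of the published thresholds (the project's model predicts the category from many features, not from this table):

```python
def saffir_simpson_category(wind_mph):
    """Map sustained wind speed in mph to Saffir-Simpson category 1-5.

    Thresholds per the published scale: Cat 1 starts at 74 mph,
    Cat 2 at 96, Cat 3 at 111, Cat 4 at 130, Cat 5 at 157.
    """
    for category, floor in ((5, 157), (4, 130), (3, 111), (2, 96), (1, 74)):
        if wind_mph >= floor:
            return category
    return 0  # below hurricane strength

print(saffir_simpson_category(85))   # 1
print(saffir_simpson_category(140))  # 4
```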
Hurricanes can often produce floods that can ravage and destroy communities. Understanding how floods will cause damage allows communities to rebuild faster, reducing costs and time without a home. Thus, we developed a Style Transfer model that allows city planners to prepare for the aftermath of floods, which can visualize the damage in a location due to a flood. City planners will upload an image of the location before and during the flood, and our algorithm will predict the damage to the location and output a picture of what the damage will look like. The model finds commonalities in the images and keeps outstanding features from the flood image in order to properly display the damage. We deployed a portion of this model on our website to test, as the entire model couldn’t be deployed due to size. With this information at hand, city planners can swiftly respond to floods and prepare for the aftermath of disasters.
How we built it
After numerous hours of wireframing, conceptualizing key features, and outlining tasks, we divided the challenge amongst ourselves by assigning Ishaan to developing the UI/UX, Adithya to connecting the Google Cloud backend and working on implementing the interactive map features, Ayaan to developing our hurricane and flood models, and Viraaj to developing the tornado model and implementing and retraining the hurricane model.
We coded the entire app in 6 languages/frameworks: HTML, CSS, Javascript, R, Python (Python3/iPython), and Flask. We used UiPath for training our algorithm. We used Google Cloud and PythonAnywhere for our backend. We developed our interactive maps using HTML and R, and embedded weather websites using web scrapers. We deployed part of our PyTorch model on PythonAnywhere using Flask. We hosted our website through Netlify and Github.
In order to collect data for these models, we developed web scrapers. We created a web scraper to scrape live-updating weather websites. For our home page, we got data from the NOAA. For our hurricane model, we collected previous data from Medium and web-scraped upcoming data using Arcgis. For our aftermath algorithm, we were able to deploy a version on PythonAnywhere which takes the two input images and creates an aftermath image. However, since we don't have access to a cloud GPU, creating the image takes a while each time, so we didn't completely deploy it.
Challenges we ran into
The primary challenge we ran into was developing our geographic models. Since the data was very complex and required cleaning, we weren't sure how to start. Luckily, we were able to do enough EDA to understand how to develop the models and utilize the data. Training these models was also a huge challenge, and we saw that it was taking a long time to train. We luckily found AI-Fabric from UiPath, which allowed us to train our models easily in the cloud. While we were not able to deploy our models, as they are too large to deploy on free and available servers, as long as governments give us images and data, we can give them cost predictions.
Accomplishments we are proud of
We are incredibly proud of how our team found a distinctive yet viable solution to assisting governments in preparing and responding to disasters. We are proud that we were able to develop some of our most advanced models so far. We are extremely proud of developing a solution that has never been previously considered or implemented in this setting.
What we learned
Our team found it incredibly fulfilling to use our Machine Learning knowledge in a way that could effectively assist people who may lose their homes and livelihoods. We are glad that we were able to develop a wide range of predictive and generative models to help a vast range of people. Seeing how we could use our software engineering skills to impact people’s livelihoods was the highlight of our weekend.
From a software perspective, developing geographic models was our main focus this weekend. We learned how to effectively combine web scrapers with machine learning models. We learned how to use great frameworks for ML such as UiPath, and transfer learning. We grew our web development skills and polished our database skills.
What is next for Tempest
We believe that our application would be best implemented on a local and state government level. These governments are in charge of dealing with hurricanes, floods, and tornados, and we believe that with the information they acquire through our models, they can take steps to respond to disasters faster and more effectively.
In terms of our application, we would love to deploy our models on the web for automatic integration. Given that our current situation prevents us from buying a web server capable of running the aftermath model frequently, we look forward to acquiring a web server that can process high-level computation, which would automate our services. Lastly, we would like to refine our algorithms to incorporate more factors from hurricanes to more accurately predict damages.
Our Name
Tempest is a creative synonym for wind-related storms.
Built With
css
google-cloud
html
javascript
python
r
Try it out
tempestai.tech
github.com | Tempest | Using ML to prepare for storms and disasters | ['Adithya Peruvemba', 'Ayaan Haque', 'Ishaan Bhandari', 'Viraaj Reddi'] | ['1st Place Overall', 'Third Place', '1st Place'] | ['css', 'google-cloud', 'html', 'javascript', 'python', 'r'] | 44 |
10,414 | https://devpost.com/software/github-finder | Inspiration
I have always wanted to work with APIs, and so I made this project.
What it does
GitHub Finder is a web app that uses GitHub API to fetch user data from the GitHub database and display a user's information including the number of public repositories, the number of forks, starred repositories, among others.
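The app itself is written in vanilla JavaScript, but the idea of pulling display fields out of a GitHub `/users/{username}` API response is easy to sketch. Below is a hedged Python illustration: the sample dict mimics a small subset of the real JSON payload that `https://api.github.com/users/octocat` would return, rather than making a live network call.

```python
# Sketch: extracting the display fields GitHub Finder shows from a
# /users/{username} API response. The sample dict below imitates a subset
# of the real payload; in the app it comes from a fetch() to the API.

def summarize_user(payload):
    """Pick out a few fields to display, with safe defaults."""
    return {
        "login": payload.get("login", "unknown"),
        "public_repos": payload.get("public_repos", 0),
        "followers": payload.get("followers", 0),
        "profile_url": payload.get("html_url", ""),
    }

sample = {
    "login": "octocat",
    "public_repos": 8,
    "followers": 3938,
    "html_url": "https://github.com/octocat",
}

print(summarize_user(sample)["public_repos"])  # 8
```

Using `.get()` with defaults keeps the summary robust when a field is missing from the response.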
How I built it
The web app has been built using HTML, CSS and Vanilla JavaScript
Challenges I ran into
I was unable to run the app on Google Chrome because of its CORS policy, so I had to design the functionality in a different manner.
Accomplishments that I'm proud of
This is the first time I have worked with an external API, and I am so happy that I did it.
What I learned
I learned to work with APIs
What's next for GitHub Finder
I am planning to improve the functionality using modern frameworks and techniques.
Built With
bootstrap
css
html
javascript
Try it out
github.com | GitHub Finder | Displays a GitHub user's information | ['Ansh Dhingra'] | [] | ['bootstrap', 'css', 'html', 'javascript'] | 45 |
10,414 | https://devpost.com/software/hackermatch-hke7qy | Inspiration
For many hackathons, we have had trouble finding new teammates who fit our interests and abilities. The DevPost approach of finding a hacker who was looking for a team, emailing them, and then usually waiting weeks for a response often ended with hearing that the hacker was already with another team. Because of this tedious team-forming process, we wanted to create a platform that connects hackers quickly through an easy-to-use mobile app that works similarly to Tinder. You swipe on hackers' profiles depending on whether you like their skillset, and you are matched with a team to fit your needs.
What it does
Our app serves to connect hackers and form teams for hackathons by connecting users with similar interests and abilities. To register, the user gives some identification and also fills out a short survey. Then, the results of the survey are passed into an algorithm where the backend matches users with similar interests and technological abilities, making the best matches possible. The best matches show up on the user's swiping page, where the user can swipe right to indicate a pass, or swipe left to indicate a match. For a match, the user's information and the matched user's information are sent to the backend, where they are combined to form a team. Then, a team forms on the team page and users can connect with each other through their Discords. Basically, it's like Tinder, but for matching hackers into teams of a certain size.
How we built it
The frontend was built with Flutter. We chose Flutter because it is easy to work with, provides excellent widgets essential for the app's swiping functionality, and is easy to integrate with the backend. We used two backends to hold user data: Firebase and MongoDB. The user's information was stored in both; however, Firebase was used to call the user information directly from the app, and that information was passed via HTTP POST requests to the backend, where the actual matching happened.
The main backend was a database hosted on MongoDB Atlas, and the matching algorithm and access methods were implemented on GCP using Google's serverless functions.
The algorithm works basically like Tinder, except instead of making pairs it forms teams of multiple people.
The lists of users shown are first filtered by some of the user entered factors, for example some hackers may not want to be in a mixed gender team, and some hackers may not be comfortable in a team with all newcomers or all experienced hackers.
We then sort based on some key indicators such as
what the main aims of the hacker are at the hackathon
how comfortable they are with new technology
how focused they are on the area they chose
how open they were to choosing a different idea if they already had one
etc.
When matches are mutual, they get grouped into teams which then can be seen by users.
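The filter, sort, and mutual-match steps above can be sketched in a few lines. This is an illustrative Python sketch, not HackerMatch's actual code: the field names (`mixed_ok`, `gender`, `focus`) and the single-factor ranking are hypothetical stand-ins for the app's real survey schema and multi-factor scoring.

```python
# Hypothetical sketch of the filter -> sort -> mutual-match flow.

def candidates_for(user, pool):
    # Filter out the user themselves and anyone excluded by their
    # preferences, then rank the rest by closeness of survey answers.
    eligible = [p for p in pool
                if p["id"] != user["id"]
                and (user["mixed_ok"] or p["gender"] == user["gender"])]
    return sorted(eligible, key=lambda p: abs(p["focus"] - user["focus"]))

def mutual_matches(likes):
    # likes: {user_id: set of liked user_ids}; a match is a mutual like.
    return {(a, b) for a in likes for b in likes[a]
            if a < b and a in likes.get(b, set())}

likes = {1: {2, 3}, 2: {1}, 3: {2}}
print(mutual_matches(likes))  # {(1, 2)}
```

In the real app the ranked candidates feed the swiping page, and mutual matches are grouped into teams of the requested size rather than pairs.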
Challenges we ran into
Making an abundance of users was tedious because we had to fill out the information and take the survey for 15+ users, since we wanted a diverse array of users to be matched according to their interests. We also had a bit of trouble communicating between our frontend and backends, since Firebase and MongoDB used slightly different formats to store the user data.
Accomplishments that we're proud of
We were able to complete our product. We also were proud to seamlessly integrate the two backends together and connect them to the frontend.
What we learned
We learned that using two backends when communicating with Flutter is much easier than relying on the single MongoDB database. We learned how easy it is to use Firebase with Flutter, and we will be using it much more often in the future.
What's next for HackerMatch
We want to extend our service for multiple hackathons, and not just for the single hackathon that is currently available for our app. We want to make some kind of menu where you can sign into your DevPost account, and from your hackathons, choose teams using our app.
Built With
flutter
google-cloud
python
Try it out
github.com | HackerMatch | A platform to connect hackers and form teams for Hackathons! | ['James Han', 'Muntaser Syed'] | [] | ['flutter', 'google-cloud', 'python'] | 46 |
10,414 | https://devpost.com/software/pinq | Just learning new skills and trying new things! :D
Built With
css
html
javascript
Try it out
github.com | pinq | pinq is an app for setting email, text, and contact-based reminders. | ['Matthew M-B'] | [] | ['css', 'html', 'javascript'] | 47 |
10,414 | https://devpost.com/software/birthdaymlh | This video is short and is just a demo of the stack I am using which is hard to show, RPi through ethernet to flask to ngrok. I do not expect to win any prizes, but I wanted to continue practicing good documentation, explaining code and just learn.
https://github.com/knaik/BirthdayMLH shows my thinking process, with comments in the runcmd.sh file. I'll probably shut down the ngrok tunnel Tuesday.
BirthdayMLH
This is incomplete. I have spent literally 2 hours at most on it so far.
Inspiration
I wanted to learn some web dev but I dislike not having complete control of my server.
What it does and How It's Built
So for now I just have a Raspberry Pi server running a Flask app that is useless and just says hello world, and that's hooked up to ngrok to get a public URL.
I have the RPi hooked up to my router via Ethernet. I am using ssh and vim to edit everything.
Challenges I ran into
I am learning bash scripting and doing everything from the command line using Vim and SSH. I don't know how to use Flask properly, so I kept running into issues with server ports being used up. For now it works, though. I also only have a Chromebook as my dev environment.
Also, I got a new Chromebook, but it doesn't work properly, so I didn't get to focus on this project the way I wish I could have.
Accomplishments that I'm proud of
That it runs at all.
What I learned
Not enough.
What's next
This isn't a joke submission. I am hoping to get something done in the next 8 hours, but if I lose motivation, I want to have something submitted. I am interested in Flask and web dev now after doing a few workshops, but I feel really far behind in terms of understanding APIs.
Built With
flask
ngrok
python
shell
venv
Try it out
de9c8b6973ea.ngrok.io
github.com | BirthdayMLH | Still not sure, it's more of a stack demo? I don't expect to win anything. | ['Karan Naik'] | [] | ['flask', 'ngrok', 'python', 'shell', 'venv'] | 48 |
10,414 | https://devpost.com/software/hacky-birthday-arduino | Inspiration
I have had an arduino starter kit sitting on my desk since Christmas of last year. I wanted to finally learn how to use it.
What it does
The LCD lights up and displays happy birthday while a horrible rendition of happy birthday plays over a buzzer making your ears suffer the entire time.
How I built it
Using Arduino and C
Challenges I ran into
Trying to figure out how to get the LCD to work. It was a lot of trial and error.
Accomplishments that I'm proud of
I was able to get it to do everything I had hoped for. I was about to give up when I couldn't figure out how to power the LCD.
What I learned
The arduino platform.
What's next for Hacky Birthday Arduino
I hope to turn this into a bigger project. I would like to use what I learned this time to make a diorama that can light up and play music.
Built With
arduino
c
Try it out
github.com | Hacky Birthday Arduino | Happy Birthday sung to you by an Arduino Uno | ['Richard Rosenthal'] | [] | ['arduino', 'c'] | 49 |
10,414 | https://devpost.com/software/html-css-space-adventure | HTML & CSS Space Adventure
Inspiration
We were beginners learning to code in HTML and CSS, and we realized that we had each been taught differently. We combined what we liked from the ways we were taught into this project.
What it does
The game provides you with resources for learning code. It also provides practice prompts towards the end to enhance already acquired skills.
How I built it
We built the game using MIT's Scratch program.
Challenges I ran into
It was difficult to incorporate many elements within the time limit. The program also did not allow us to embed videos which were a key part of our idea.
Accomplishments that I'm proud of
The button coding required a lot of preplanning. To make each button respond appropriately, they had to be individually colored and coded.
What I learned
It is important to plan your projects ahead of time and start early. Because we started later in the hacking timeline, we weren't able to finish incorporating most of our features.
What's next for HTML & CSS Space Adventure
In the future, it is important that we add interactive videos, quizzes, and individual prompts for each level. We wanted the prompts to be randomly generated on each click. We also wanted to incorporate a Test-Out option that allowed you to take a short quiz that determines your skill level.
Built With
scratch
Try it out
scratch.mit.edu | HTML & CSS Space Adventure | An interactive learning simulation for young learners to learn how to build websites using HTML and CSS. | ['Jaidah Morales', 'Nicole Posada'] | [] | ['scratch'] | 50 |
10,414 | https://devpost.com/software/reading-csv-files | Option 1: Prints all data submitted to the google form
Options 2&3: Finds the oldest and youngest respondents, respectively
Options 4&5: Finds the most and least experienced respondents
Option 6: Searches for a respondent's name and returns their information
Option 7: Searches for respondents in a certain year and returns the names that match
Option 8: Searches for respondents who know a certain language and returns their names
Inspiration:
For this hackathon, we wanted to create something that would help us in the future. As part of the Girls Who Code College Loop at UF, we are actually planning on hosting our own hackathon for members and anyone interested in learning. By participating, we were able to 1) gain insight into how hackathons work and 2) create a project that would help us with this plan.
What it does:
Our project accepts a csv file that we create from responses to a Google Form. We plan on sending out this google form to our possible hackathon participants in order to understand them better. This includes their names, year of college, years of experience, programming languages they know, and any feedback they would like to give us. We then can use this file to see certain trends in our participants, such as the most experienced, the oldest, the kinds of languages our respondents know, and more. This way, we will have a better idea of who we are holding our hackathon for.
How we built it:
To work on this together, we used Repl.it, which allowed us to make edits at the same time and run our project together. By using this, we bypassed the commands and specifics of GitHub, and were actually able to feel like we were on the same page 100% of the time. The basic outline of the program was this: read the file, create objects for each person's response, and find the data we needed.
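The pipeline described above (read the file, build one object per response, then query for the data) is compact enough to sketch. The project itself is C++, but here is a hedged Python version of the same idea; the column headers and sample rows below are made up, since a real Google Forms export will have its own headers.

```python
import csv, io

# Hypothetical Google Forms CSV export; real column headers will differ.
data = io.StringIO(
    "Timestamp,Name,Year,Experience,Languages\n"
    "1/1,Ada,Senior,5,\"C++, Python\"\n"
    "1/1,Grace,Freshman,2,C++\n"
)

respondents = list(csv.DictReader(data))  # one dict ("object") per response

most_experienced = max(respondents, key=lambda r: int(r["Experience"]))
knows_cpp = [r["Name"] for r in respondents if "C++" in r["Languages"]]

print(most_experienced["Name"], knows_cpp)  # Ada ['Ada', 'Grace']
```

Note that `csv.DictReader` handles the quoted, comma-containing "Languages" field automatically, which is exactly the delimiter headache the team describes grappling with in C++.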
Challenges we ran into:
One big challenge we had was with the csv file itself. By using Google Forms, we were forced to adhere to the format that was given and had to grapple with delimiters and file streams. However, in the end, we were able to get the information we needed in an organized fashion. Additionally, we had trouble with the format of char arrays and had to use an external debugger to eventually solve the problem.
Accomplishments that we're proud of:
We are proud of our problem-solving skills, considering we wanted a way to work together but had very little experience with GitHub. Therefore, when we found Repl.it, we were very happy with it. Also, we are proud of how well we worked together and got to finish the project fairly quickly.
What we learned:
We especially learned how we can use Google Forms in a deeper way for our own purposes. By creating this project, we will be able to analyze responses from our organization's members in a much better fashion, thus making us more efficient and conscientious altogether.
What's next for reading .csv files:
We will most likely be using this project when we host our own hackathon.
Built With
c++
Try it out
repl.it | Reading .csv files | This project performs analytics on csv files from Google Forms. | ['Kattrina Erillo', 'Sophia Morin'] | [] | ['c++'] | 51 |
10,414 | https://devpost.com/software/plant-info | Inspiration
Originally I had wanted to build an Android application so those on the go could identify plants and get basic information about them.
What it does
Runs basic object detection using a custom data model on static images
How I built it
Not very well to be honest, but I used Python, detecto, and several other modules
Challenges I ran into
The tool I had planned to use to convert the project to Android is evidently broken at the moment, so it did not provide much help. I also ran into several other problems, based mostly around having a .pth model.
Accomplishments that I'm proud of
I actually managed to train a data model for the first time ever
What I learned
What's next for Plant Info
Hopefully being able to actually get it fully developed for android to the point the user can take pictures and instantly run the detection with a more robust model
Built With
detecto
opencv
python | Plant Info | Application to determine plant type based on image | ['Greih Murray'] | [] | ['detecto', 'opencv', 'python'] | 52 |
10,414 | https://devpost.com/software/chromesthesia | Inspiration
Chromesthesia, or sound-to-color synesthesia, is a relatively rare neurological synesthesia. The idea of watching music rather than listening to it sounded unique and made me (Rachelle) inquisitive about it. When I shared the idea with my teammate (Treasa), we instantly got on the same page and chose to create a website to give a normal person the experience of watching music rather than listening to it.
What it does
The Chromesthesia website enables the user to search for a song and predicts the color associated with that song based on its genre, just like how people with chromesthesia would see it. If you feel the color didn't fit the song, you can take the short survey linked on the website so changes can be made to the simulator.
How I built it
We created the layout for our website in Figma and later hand-coded the website using HTML and CSS. We used Spotify's API as our search engine to browse songs, linking it with our code using Python and JavaScript. Finally, we used Flask to connect the Python code with the HTML/CSS website and get it up and running.
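The genre-to-color step at the core of the site can be sketched as a simple lookup. This is only an illustration: the genre names and hex colors below are invented, and the site's real mapping (tuned by survey feedback) is more involved.

```python
# Toy genre-to-color lookup illustrating the idea behind the simulator.
# The associations below are hypothetical, not the site's actual data.

GENRE_COLORS = {
    "jazz": "#1f3a93",
    "pop": "#ff69b4",
    "classical": "#f5d76e",
}

def color_for(genres):
    """Return the color of the first recognized genre, else a neutral gray."""
    for g in genres:
        if g in GENRE_COLORS:
            return GENRE_COLORS[g]
    return "#888888"

print(color_for(["bebop", "jazz"]))  # #1f3a93
```

The gray fallback mirrors the site's current limitation that songs in unhandled genres would otherwise produce errors.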
Challenges I ran into
Getting Spotify's API working as our search engine to browse songs was the most challenging part of our project. Once that was complete, we were not sure how compatible Python and HTML/CSS were, and later came up with the idea of using Flask to combine the frontend and backend.
Right now, our website doesn't take into account all genres listed in Spotify so some songs will run into error messages.
Accomplishments that I'm proud of
We are proud we were able to get this website up and running. We are happy we made this website with the future in mind, so we can develop a detailed picture of this synesthesia.
What I learned
I learned how to use the new library Spotipy to use the Spotify API in python. I also learned Flask to connect the HTML/CSS frontend to the Python backend.
What's next for chromesthesia
As mentioned in the accomplishments, we plan to spread the website and get as much feedback as possible to keep the data accurate. The survey users take helps us analyze more sound-color connections and give more realistic results.
Built With
css
html
python
Try it out
github.com
chromesthesiaa.herokuapp.com | chromesthesia | Have you ever wondered what sounds look like? Chromesthesia takes your favorite songs and tells you what color it is! | ['Rachelle Cha', 'Treasa Bency Biju Jose'] | [] | ['css', 'html', 'python'] | 53 |
10,414 | https://devpost.com/software/happy-birthday-mlh-from-the-longest-domain-name-online | Start of page
After a little bit of scrolling
End of page
Inspiration
It's MLH's 7th birthday this weekend! I thought I'd celebrate by making a website :)
What it does
A radical, horizontal-scrolling website to commemorate the 7-year-old MLH! Longest domain online! (63 chars + . + "online" makes 70 chars --> exactly 10 times the age of MLH!) The horizontal scrolling means that the Devpost gallery looks exactly like the website!
Domain.com submission for best domain prize: happy-7th-birthday-to-mlh-from-the-longest-possible-domain-name.online
How I built it
With an HTML5 Up template, hosted on GitHub Pages. Involved gradient madness.
Challenges I ran into
Getting the gradients to look right took forever!
Also, nameserver propagation took hours.
Accomplishments that I'm proud of
Getting the gradients to look right, including one block with a 5-way gradient.
What I learned
Why devs don't write CSS from scratch unless they have to.
What's next for Happy Birthday MLH from the longest domain name online
The laptop image that disappears on mobile needs to be fixed.
Built With
cloudflare
css3
github
gradient
horizontal-scrolling
html5
html5up
Try it out
happy-7th-birthday-to-mlh-from-the-longest-possible-domain-name.online
github.com | Happy Birthday MLH, from the longest domain name online! | Happy Birthday MLH, from the longest domain name online! | ['Raymond Li'] | [] | ['cloudflare', 'css3', 'github', 'gradient', 'horizontal-scrolling', 'html5', 'html5up'] | 54 |
10,414 | https://devpost.com/software/present-worthy | Main Page
Sample Result Page
Inspiration
Holiday season, birthday season, special events, and more: you're always going to have to buy presents for your peers, and you often get stuck choosing them. You frequently feel pressured to put your best foot forward and give a great gift, or to give a gift the recipient hasn't received yet. Hence, it would benefit us all to have something that gives us a boost on where to look.
What it does
Suggests gifts for the user to buy on Amazon (due to quarantine, we thought it was the best fit) based on the checklist form answers and an image of a present the user is thinking of. It also gives the user a score describing how well the chosen present image fits the gift receiver's interests.
How we built it
Google Vision AI: used this API to analyze the image, find its corresponding labels, and use those labels to compare with the gift receiver's interests.
Apify: To crawl amazon's product search pages and store the relevant information in JSON files.
NodeJS: hosting the site, async shell commands, and much more for our backend.
HTML/CSS/JAVASCRIPT: content/styling/functionality.
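One plausible way to turn Vision labels plus the form's interests into a "worthiness" score is a set-overlap (Jaccard) measure. The project doesn't document its actual formula, so this Python sketch is purely illustrative.

```python
# Illustrative "worthiness" score: overlap between the Vision API's labels
# for the gift image and the recipient's checked interests (Jaccard index).
# This is NOT Present Worthy's actual formula, just one plausible sketch.

def worthiness(labels, interests):
    labels = set(map(str.lower, labels))
    interests = set(map(str.lower, interests))
    if not labels or not interests:
        return 0.0
    return len(labels & interests) / len(labels | interests)

labels = ["Guitar", "Music", "String instrument"]      # from Vision API
interests = ["music", "guitar", "hiking"]              # from the form
print(round(worthiness(labels, interests), 2))  # 0.5
```

Lowercasing both sides first avoids penalizing purely cosmetic mismatches between Vision's label casing and the form's answers.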
Challenges we ran into
Having consistent CSS as we were all tweaking it throughout the day
Figuring out how to use Apify for the first time and crawl the pages
Using the results from Google’s vision AI and creating an algorithm to decide a “worthiness” score
Solving merge conflicts!
Accomplishments that we're proud of
Some accomplishments we made during this 24-hour hackathon include:
Deciding on an idea within the first hour and splitting up tasks soon after
Collaborating across different time-zones and countries.
Finishing the two main features we wanted to add in under 24 hours!
What we learned
Some new technologies for us included Google vision API and the Apify SDK. We had a great time exploring these, as well as improving our skills in Node!
What's next for Present Worthy
In the future, we hope to add a search bar for the generated amazon results, so the user can filter them even easier, a subscription-based newsletter, a more sophisticated "worthiness" function, and more personalized options on our form.
Built With
apify
css3
google-vison-api
html5
javascript
jquery
node.js
Try it out
github.com | Present Worthy | Find the perfect present! | ['Nicole Streltsov', 'Syawadhilah Pradipta', 'Andrew Xue', 'Medha Jonnada'] | [] | ['apify', 'css3', 'google-vison-api', 'html5', 'javascript', 'jquery', 'node.js'] | 55 |
10,414 | https://devpost.com/software/techagro-4w6jz9 | \
Built With
adafruit
atmega
c++
github
pcb
wifi | yoyoho | jo | ['Prakhar Saxena'] | [] | ['adafruit', 'atmega', 'c++', 'github', 'pcb', 'wifi'] | 56 |
10,415 | https://devpost.com/software/colexo | Colexo Thumbnail with Main Logo and Font
Render of the (Close to) Ideal Planet that Users can Achieve
Textures used to Display Planets
Planet Simulation, Featuring the Various Factors the User can Interact with
Inspiration
Our main inspiration behind Colexo was the increasing push for the colonization of new planets, with Mars as our reference point. We aimed to create an experience that takes the users through a journey of equilibrium and balance, forming a future home for humanity.
What it does
Colexo allows any user on the Internet to collaborate and slowly make changes to the planet on the website. Using custom-made assets, the planet shifts between phases depending on how its water content, flora levels, albedo, and oxygen content are changed.
Upon reaching the maximum habitation level, the planet is listed as complete, and a potential rocket animation will be shown.
How we built it
Colexo is made up of a few main components.
The first component is the 3D viewer for the planets, which we displayed using p5.js and some clever mathematics to make the planets rotate by default. Next, we use several data tables and dictionaries that determine what planet characteristics are possible at specific stages. We used classes and button clicks to control changes to the planets, and changes are animated based on a global timer. The front-end is composed primarily of HTML/CSS, and the back-end is composed of JavaScript.
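The rotation math behind the spinning planet view is simple to show on its own: rotate each surface point around the vertical axis, then drop the depth coordinate for a flat projection. Colexo does this in p5.js; the standalone Python sketch below shows the same math under a plain orthographic projection, which may differ from the site's exact projection.

```python
import math

def rotate_y(point, angle):
    """Rotate a 3D point around the y-axis by `angle` radians."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    return (x * c + z * s, y, -x * s + z * c)

def project(point):
    """Orthographic projection: just ignore depth."""
    x, y, _ = point
    return (x, y)

# A quarter turn sends a point on the +x axis onto the z-axis,
# so its 2D projection collapses toward the origin.
p = rotate_y((1.0, 0.0, 0.0), math.pi / 2)
print([round(v, 6) for v in project(p)])  # [0.0, 0.0]
```

Stepping the angle a little each frame with a global timer, as the site does, produces the continuous default rotation.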
We also have over 30 base images that we created in order to display ever-changing water and flora levels. In order to create the assets used for the planets and logo design, Adobe Photoshop and Adobe Illustrator were the main tools of choice.
Challenges we ran into
Short answer:
Pretty much the entire project
Long answer:
The main challenges we faced were related to updating the images and animations to match our database, and ensuring that the transitions to and from the animations were smooth. Getting the display to work in the first place was also challenging, but we managed to make it work given enough time. Another key problem that we had was getting the web-page to update universally, and unfortunately we weren't quite able to deploy that feature in time for submissions.
Accomplishments that we're proud of
What we learned
Vedaant: I learned how to code in Javascript, which was essentially completely new to me, and I got to a relatively proficient level (up to understanding syntax and creating classes and functions). I also generally got to learn new graphic design techniques and tricks when it comes to making logos and planet graphics. It was pretty interesting to learn how noise can be applied to images to achieve terrain-like features. 3D projection was also something I got to learn more about. Lastly, I enjoyed messing around with data tables and trying to create a balance which the user would try to achieve with Colexo.
Vishnu: I learned how to do a lot more front-end development, specifically creating animations. I also got to learn a whole lot more about Javascript, which is a fairly new language for me. In general, I definitely gained a deeper sense and intuition for graphic and web design, and I'm looking forward to improving even more.
Christina: Since I did a lot of the work on the back-end, I had to learn how to use p5.js, and it was an interesting challenge to link up all of the functions to get the desired animations. Almost every characteristic that we control is interdependent, so representing that in code was something that I had to learn/improve upon.
Altogether: We all learned about some of the key factors for making a planet habitable, and we gained a deeper understanding of how all of them are linked. We hope that we were able to create a unique visual representation of that through our website, and that others can learn just as much as we did.
What's next for Colexo
Currently, the global online features aren't fully functional for Colexo, and improving upon that is the natural next step. Additionally, the CSS responsiveness of the website is limited, so that can be developed upon. In general, we want to be able to leverage more advanced computer-generated techniques for our planet graphics, potentially being able to procedurally generate starting planets.
Currently, once the "Habitation %" reaches the maximum, the planet is reset to its original state. However, we would like to get it to a point where a random planet is generated at each iteration. Additionally, since there are unique ways to obtain higher and higher population capacities, we would like to keep some sort of leaderboard or record of progress.
Built With
css3
html5
javascript
photoshop
Try it out
github.com
colexo.space | Colexo | Build and watch a planet grow as a result of your (and the Internet's) efforts. | ['Vedaant Varshney', 'Vishnudev Poil', 'Christina Zhang'] | ['First Overall'] | ['css3', 'html5', 'javascript', 'photoshop'] | 0 |
10,415 | https://devpost.com/software/space-streak | What it does
The user first gets to choose a favorite astronaut's face, wears the astronaut helmet, and sees their rocket shoot up from Earth to the Moon. They then get an augmented experience of the solar system and constellations, and get to meet and dance with some aliens too!
How we built it
Lenses, faces, aliens, and astronauts were built using Lens Studio, and Unity + Vuforia was used for the AR/VR constellations.
Challenges we ran into
Since we were working on different components, and on different platforms at that, integrating all the components was a problem, as we were having trouble importing Unity models into Lens Studio.
Accomplishments that we're proud of
We were newbies to AR/VR, and all of our work revolves around that, so gathering resources and learning Lens Studio had to be completed in 48 hours. We are proud of finishing our project and integrating it all in the video. We experimented with a new platform and completed a project.
Built With
lensstudio
unity
vuforia
Try it out
github.com | Space streak | Explore the space as your favourite astronaut, see an augmented view of the solar system, explore constellations, meet aliens and dance with them too! | ['Namya LG'] | ['Second Overall'] | ['lensstudio', 'unity', 'vuforia'] | 1 |
10,415 | https://devpost.com/software/ad-lunam | Planet UI
Planetary Orbit
Main Menu
Planets
VR display
Inspiration
Earth is a stressful place, and sometimes I feel like hopping on a spaceship, heading off to the stars, and leaving it all behind. Sadly, I can't hop onto a spaceship in real life as of yet, so I made a game where I can.
What it does
Ad Lunam is a space exploration game, where you can fly your spaceship through a solar system and explore the different planets.
How I built it
Ad Lunam was made in the Unity editor, and the game was ported into VR using the Unity Mock HMD Loader Plugin, which displayed the game in stereoscopic (2 eyes) view. Since I don't have the money for a $1000 VR headset, I used a cheap Walmart VR Viewer and my phone.
By using the Unity Remote 5 app on my phone, I could take the gyroscopic input of the phone and use it to determine how my head was moving, and then turn the in game camera. This created a VR effect. Then, by using the keyboard, I could move around and interact, giving me all the functionality of an HTC Vive or Oculus without spending all the money.
In order to get the planets to orbit one another, I used the C# scripts from this
repository
.
Challenges I ran into
The toughest part of this was actually getting the VR to work. I was struggling to figure out how to get the phone and computer to communicate and extract that communication data using C#. Eventually, I got it to work, and it was only two lines of code:
Input.gyro.enabled = true;
transform.Rotate((initialOrientationX - Input.gyro.rotationRateUnbiased.x) * 6,
                 (initialOrientationY - Input.gyro.rotationRateUnbiased.y) * 6,
                 (initialOrientationZ + Input.gyro.rotationRateUnbiased.z) * 6);
Accomplishments that I'm proud of
This was my first time working on a real Unity game, and I am very proud of the results. Making the game itself was an ordeal, but the most rewarding part was being able to put on my $5 VR headset and play the game as if I was using an Oculus Rift.
What I learned
I learned how the Unity game engine works, and also how to budget time wisely. In addition, I learned how to simulate using a real VR Headset with just a phone and a cheap VR viewer, which I can apply to any first person Unity game I make in the future.
What's next for Ad Lunam
I hope to add procedural generation to the planets, and maybe some small procedurally generated animals as well.
I uploaded the non VR version of this game, so I'll add the link for it here and at the bottom.
https://bearseascape.itch.io/ad-lunam
Built With
c#
unity
vr
Try it out
drive.google.com
bearseascape.itch.io | Ad Lunam | Ad Lunam is a VR space exploration game where you can fly your spaceship through a solar system and explore the different planets. | ['Michael Li'] | ['Third Overall'] | ['c#', 'unity', 'vr'] | 2 |
10,415 | https://devpost.com/software/spacestation | Main App
Polite Invaders
Music Player
What
We thought space could get pretty boring without anything to do, so we made a space "station." With our space station you can:
Play Polite Invaders (our Space Invaders spin-off),
Get an AI to play Polite Invaders (A trained model is included in the repository),
Watch the AI learn to play Polite Invaders,
Monitor the latest tweets about the Mars Perseverance Rover, and
Listen to music (it is a "station" after all...)
Why
We wanted our hack to be an unnecessarily over-engineered goofy and cool project that combined different technologies under a single metaphorical space umbrella.
General Details:
Polite Invaders - Our Space Invaders spin-off where both you and the villainous invaders (consisting of space creepers, death stars, and space cookies) have one thing in common: politeness. The invaders may be villainous, but in our game, you
don't need to shoot them to drive them away. Instead, you aim and shoot polite emails at them, so they apologize and leave. They will also respectfully apologize and leave if they
bump into you on the way down. Getting the invaders to leave gets you 1 point each. You win if you reach a total of 42 points or if you survive for 99 seconds. 99 seconds
equal 15 space minutes, and the invaders legally have to leave if they can't invade you in that time. If any one of them manages to move past the bottom, though, you lose. The invaders appear more frequently as you rack up points, so the game
only gets harder.... until 15 space minutes pass, of course.
AI - Left, Right, Stay and Shoot isn't really something you need a neural network for... but since our goal was overengineering, we used an evolving neural network. Specifically,
we used the NeuroEvolution of Augmenting Topologies method of evolving neural networks in a reinforcement learning model. Multiple generations of genomes play the game and make decisions that are either
rewarded or punished, depending on the outcome. Eventually, the neural network figures out how to play the game, and when it reaches a certain score threshold, the model is stored in a file. This file is
later loaded when you want the AI to play in a non-training scenario. The training runs at a much higher fps with a lot of the eye candy stripped, while the testing is a copy of the standard game for humans except for the neural network making the decisions.
Music - Space is cool to look at, but it'd probably be cooler to listen to music while gazing into the gorgeous abyss on the way to the moon. We added a simple music player with space tunes to jam out to.
Perseverance - We like to keep an eye on Perseverance and thought you might too, so we used Twitter's API to show you the latest progress tweets :)
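The round-end rules described under Polite Invaders above boil down to a tiny state check. A minimal sketch in Python (the names are ours, not the game's actual code):

```python
# Sketch of the round-end rules described above for Polite Invaders.
WIN_SCORE = 42        # points needed to win outright
TIME_LIMIT = 99       # seconds == 15 "space minutes"

def round_state(points, seconds_survived, invader_past_bottom):
    """Return 'lose', 'win', or 'playing' for the current frame."""
    if invader_past_bottom:          # any invader slipping past ends it
        return "lose"
    if points >= WIN_SCORE or seconds_survived >= TIME_LIMIT:
        return "win"                 # the invaders legally have to leave
    return "playing"
```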
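The NEAT training loop described above — generations of genomes whose decisions are rewarded or punished until one clears a score threshold — can be caricatured without the neat-python library. This toy sketch evolves a single number instead of a network topology, purely to illustrate the reward/select/mutate cycle; nothing here is the project's actual training code:

```python
import random

TARGET = 0.42       # hypothetical "optimal policy" standing in for a high game score
THRESHOLD = 0.95    # save-the-model cutoff, as in the score threshold above

def fitness(genome):
    # Reward genomes near the target, punish those far away (0..1 scale).
    return max(0.0, 1.0 - abs(genome - TARGET))

def evolve(pop_size=20, sigma=0.05, max_gens=500, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for gen in range(1, max_gens + 1):
        ranked = sorted(population, key=fitness, reverse=True)
        best = ranked[0]
        if fitness(best) >= THRESHOLD:
            return best, gen                    # real project: store the model here
        parents = ranked[: pop_size // 2]       # survivors of this generation
        mutants = [p + rng.gauss(0.0, sigma) for p in parents]
        population = parents + mutants          # elitism plus mutation
    return best, max_gens
```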
What we used
Technologies:
The eel library for the web app which allows us to seamlessly use javascript, HTML, and CSS with a Python backend
neat-python for the evolving neural network architecture
pygame for the game, space AI and music visuals
The Twitter API for Perseverance tweets
Resources:
We made our own space art inspired by Starwars, Minecraft, and Rick and Morty made using Pixilart
Astronaut background from UHDPaper.com
Daniel Zawadzki's CSS from templatemonster.com which served as the basis for our modifications
Space Music at www.bensound.com from Bensound
List of songs for our radio:
Starwars Theme
Imperial March
Rick and Morty Theme
Star Trek 2009 Theme
Rocket Man
Space Oddity
Starman
What's left
Not much. We added all the space functionality we were aiming for, and we are particularly proud of how it looks. We wish the eel library worked better with pygame implicitly; our hacky workaround using subprocesses works but at a small cost to the user experience.
Challenges
I think our biggest challenge was coming up with a vision to combine all these technologies into a single project in the given time. The game, AI and music player components of the space station could serve as apps in their own right, so it was pretty hectic (but fun) to plan out and implement a game, write a training model for the AI and create a music player while making sure the project looked good and performed consistently. We also hadn't built an app with such diverse sub functionalities before, so we ended up spending a fair bit of time learning how to get them to work together. We also spent a little bit of time debugging differences in the app when running on Windows vs Linux.
Built With
css
eel
html
javascript
neat
pygame
python
twitter
Try it out
github.com | SpaceStation | We thought space could get pretty boring without anything to do, so we made a space "station". Play a game, watch an AI play it, read Perseverance updates or just relax with some music! | ['Aditya Banerjee', 'Ved Shah'] | ['In-Flight Anomaly'] | ['css', 'eel', 'html', 'javascript', 'neat', 'pygame', 'python', 'twitter'] | 3 |
10,415 | https://devpost.com/software/owie | GIF
A sample of the kind of video that will be posted on r/owiehack with an auto-generated caption
The hardware we used to control the chair; it's a transistor that causes the chair to fall by de-powering the electromagnets holding it up
Our first UiPath automation that exists solely to start our second one
Our second UiPath automation that runs the bulk of the demo
Inspiration
When we were looking at MLH's hackathon offerings for this weekend, we noticed the "Well, it's not rocket science" category, remembered the breakaway chair we had built recently that still didn't have any code for it, and knew what we had to do. Introducing
owie
, the brand new framework for GPIO powered devices like our chair that makes breaking your butt a lot more fun (at least for everyone else).
What it does
In essence, owie is simple: hit some button on some server and watch your friend get a one-way ticket to the ground. However, in the spirit of this hackathon, we thought we should take things a step further, so we did! Here's a rundown of what our project does, with first mentions of an API, tool, or service in bold.
-We built a
UiPath
automation that runs our demo for us, shown off in the video at the top of this page.
-However, we thought we'd take things a step further: our UiPath setup doesn't just run the demo; it starts a SECOND UiPath automation to do the rest. You heard me correctly: a UiPath automation that solely exists to start another UiPath automation. If that's not innovation, I don't know what is. In addition, since all our embedded code runs off a
Raspberry Pi
, and UiPath isn't available on that platform we used
Real VNC
to remote into the Pi, and have our automations interact with the remoted in GUI.
-The second UiPath automation clicks a button corresponding to an
HTML
form on our
Flask
webserver running on our Pi
-This triggers the
RANDOM NUMBER GENERATION PHASE
, which determines how long your dear friend will have to wait before the chair falls (it's completely random, which just makes it so
exciting
, don't you think?). We used the
Wikipedia
API to grab a random article from their database and use a simple A1B2C3 cipher to convert all the letters in the title to numbers, sum them, and take the result modulo 9 to get our random time value.
-After this is the
CAPTION GENERATOR PHASE
, which is a special tool that will help us later. Our
radar.io
implementation grabs the user's coordinates using their geocoding API and stores them. Our GCP implementation then takes its turn, using the
fswebcam
Linux module to take a picture of the user sitting in the chair using the webcam we're using to record the clip of them falling further down the pipeline. We then use the
GCP Vision API
to gather labels for the image, and randomly select a meaningful one for use in the caption. The caption is then stitched together to take the form
"yung bruh really think they can have GOOGLE LABEL at RADAR.IO COORDINATES"
to further humiliate the poor soul who thought it would be a good idea to sit in this chair while owie was online.
-Then, as the program sleeps for the time chosen in the
RANDOM NUMBER GENERATION PHASE
, a new thread (that's right, this is MULTITHREADED too, take that haters) starts recording a video using
ffmpeg
that should ideally capture the person falling to their unexpected doom. Since this is multithreaded, the main process of execution can wait while the video is being recorded.
-Now for the
IT'S ALL ONLINE? ALWAYS HAS BEEN
phase. We take the recorded clip from the webcam and upload it to the internet using the
Imgur API
and get a shortened link for it, which we then post, using the
reddit API
(along with the python package
praw
), to a subreddit created JUST for this project,
r/owiehack
. A lot of our tests are uploaded there, so you can see how we made progress towards our ultimate goal.
-Oh, and of course the
Chair
breaks during all of this inflicting maximal shame to their gluteus maximus.
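The RANDOM NUMBER GENERATION PHASE reduces to a short pure function once the title is in hand. A sketch, assuming the Wikipedia fetch happens elsewhere (the real project pulls a random article title from the Wikipedia API):

```python
# Sketch of the A1B2C3-cipher-then-mod-9 step described above. The title is
# a plain parameter here so the cipher itself is self-contained.
def wait_time(title):
    """A1B2C3-cipher the title's letters, sum them, and take the result mod 9."""
    total = sum(ord(c) - 96 for c in title.lower() if "a" <= c <= "z")
    return total % 9
```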
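The CAPTION GENERATOR PHASE's final stitching step might look like this, with the Vision label and radar.io coordinates passed in as plain arguments (the function name and coordinate formatting are ours):

```python
# Sketch of stitching the caption template described above. In the real
# pipeline the label comes from the GCP Vision API and the coordinates
# from radar.io; both arrive here as arguments.
def build_caption(label, lat, lng):
    return (f"yung bruh really think they can have {label} "
            f"at ({lat:.4f}, {lng:.4f})")
```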
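The multithreaded recording described above follows a standard background-worker pattern. A sketch with a stand-in for the ffmpeg capture (the sleep is a placeholder, not real video recording):

```python
import threading
import time

# The capture runs in a background thread while the main thread sleeps out
# the randomly chosen delay, then triggers the fall. record_clip is a
# placeholder for the real ffmpeg invocation.
def record_clip(duration, done):
    time.sleep(duration)      # real project: run ffmpeg for `duration` seconds
    done["recorded"] = True

def drop_chair_after(delay):
    done = {"recorded": False}
    t = threading.Thread(target=record_clip, args=(delay + 0.1, done))
    t.start()                 # camera rolls before and during the fall
    time.sleep(delay)         # main thread waits out the random delay...
    # ...then would de-power the electromagnets via GPIO here
    t.join()
    return done
```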
How we built it
Using our breakaway chair (whose construction is NOT a part of this project, just the code written for it is) as a piece of hardware that we could interact with, we used the following to pull the project together:
-python3
-html/css
-Flask
-UiPath (x2)
-radar.io
-GCP Vision API
-ffmpeg
-fswebcam
-reddit API
-imgur API
-Wikipedia API
-Raspberry Pi 3
-Real VNC
-multithreading
Challenges we ran into
-We were going to add additional features, like having a speaker read out the caption through GCP Text To Speech, but the Pi couldn't run a version of Python new enough, and I only had Python set up properly on my laptop running WSL, which can't do I/O interaction since my laptop is just pretending to be a Linux device
-It also took us a whole bunch of tries to get the recording, since there are so many moving parts to the demo.
Accomplishments that we're proud of
-Using ALL the sponsor APIs
-Connecting so many things together to do something very simple
-Using UiPath to trigger a UiPath automation :)
What we learned
-Making things messy and unnecessarily complicated is kind of fun
What's next for owie
ADD MORE UNNECESSARY FEATURES!!!!!!
Built With
chair
css3
electromagnets
ffmpeg
flask
gcp
html5
imgur
python
radar.io
raspberry-pi
real-vnc
reddit
uipath
webcam
wikipedia
Try it out
github.com
www.reddit.com | owie | making friends with the floor with many extra steps | ['Nicholas Adair', 'Drew Ehrlich', 'Corey Du Val'] | ["Well, it's not rocket science."] | ['chair', 'css3', 'electromagnets', 'ffmpeg', 'flask', 'gcp', 'html5', 'imgur', 'python', 'radar.io', 'raspberry-pi', 'real-vnc', 'reddit', 'uipath', 'webcam', 'wikipedia'] | 4 |
10,415 | https://devpost.com/software/geobingo | Logo
App Home Screen
Website
Style guide
GeoBingo
The Newest Way to Play Bingo!
GeoBingo is not your ordinary game of Bingo. Compete with your friends and community to visit the places on the Bingo board and be the first to get a BINGO! No need for manual verification -- using Radar.io’s accurate geofencing services, your bingo tiles will automatically be checked off once you have entered the appropriate location.
link
Inspiration
This application was inspired by the idea of traveling the globe and beyond. We wanted to create an innovative and interactive way to encourage participants to travel the world, discovering new spots while having fun and competing with friends and family. We believe exploring the world while having fun is what makes new connections that drive our world forward.
How We Built GeoBingo
GeoBingo was built using Github to house our code and Radar.io's geofencing capabilities to track each user's location. The team utilized Discord to communicate during the hackathon and Github to share and review code. Inkscape was used for graphics, Animoto.com was used to create the demo video, and many thanks to contributors at Pexels.com for providing stock-free images and video.
Challenges We Faced Creating GeoBingo
Time is always a challenge, but our team periodically checked in with each other to make sure we were on track. It was challenging to create a website and find a host as we are still new to that process. We had planned to finish and load our app to the app store, but our plan had some roadblocks that prevented that goal. We had to debug and troubleshoot our code that was triggering unexpected geofences, we had trouble learning how to host a website and how to connect it to a domain, and we needed to set up websockets via the webhook if we wanted the bingo tiles to update to "done". We did the best we could with the time and resources we had and are very proud of how much we accomplished.
What We Learned
The roadblocks we had in the end made us learn the most. Being stuck forced us to seek out help from others, research to find solutions, and even find new resources throughout the process. Some examples that we learned are how Radar.io triggers geofences and that you can just point your domain to your webhost. We learned how to use Radar.io, how to manage our time better, how to use Discord more effectively, how to navigate Github, and learned coding techniques from other team members.
A project for the To the Moon and Hack Hackathon
Built With
dart
html
kotlin
objective-c
swift
Try it out
github.com
geobingoinouter.space | GeoBingo | GeoBingo, the newest way to play Bingo. Compete to visit places on the Bingo board and be the first to get a BINGO! Bingo tiles automatically check off once you have entered the appropriate location. | ['Shelley Ophir', 'James Chang', 'Adrian Silva', 'Adrian Silva'] | ['Most Creative Radar.io Hack'] | ['dart', 'html', 'kotlin', 'objective-c', 'swift'] | 5 |
10,415 | https://devpost.com/software/kill-the-demons | Kill the Demons
A space game in which you have to kill the aliens!
Introduction
During this lockdown we are all bored and lack entertainment beyond Netflix! To kill this boredom and get out of our cozy beds, we are introducing
KILL THE DEMONS
game.
KILL THE DEMONS
is an Arduino-based game where our
SPACE-FIGHTER
will kill the dangerous
DEMONS
or
ALIENS
to save the planet!
How to play?
It is very simple to play.
Turn on the game box and wait to run the program.
After the screen loads up, press the
START
button to start the game.
Then while the game starts, use the
LEFT
and
RIGHT
buttons to navigate your SpaceCraft and
FIRE
button to Fire Laser Shots to the Demons to kill them.
It's very simple to use!!
Make it Yourself
To make this game you need to follow these basic steps!!!
What we used to build this
Arduino Nano : (although any version of an Arduino should suffice as long as it has at least 32K of program memory). Connections to other Arduinos may vary.
OLED: Based on a 128×64 pixel I2C display. Other sizes could be used but you would have to adapt the program to compensate.
Piezo Speaker: Must be the simple type that can be driven directly and not one that makes noise with just power applied (this would be more normally termed a piezo buzzer).
Tactile push buttons (3): One for fire, 1 for left, 1 for right. Beware of building on the breadboard as some do not fit correctly into the holes. You may need to mount to stripboard and then run wires to the breadboard.
Wires (several)
Circuit Diagram
Code
Visit
Here
for the code.
Download the code and upload to Arduino Nano and PLAY!
Demo Video
Click
Here
to see the demo video.
Made by
Saswat Samal
Sanket Sanjeeb Pattanaik
Software
Hardware
Saswat's Website
Sanket's Github
Built With
arduino
c++
oled
Try it out
github.com | Kill The Demons | A space game in which you have to kill the aliens! | ['Saswat Samal', 'Sanket Sanjeeb Pattanaik', 'PIYSocial'] | ['Best Hardware Hack presented by Digi-Key'] | ['arduino', 'c++', 'oled'] | 6 |
10,415 | https://devpost.com/software/when-life-gives-you-lemoons | Splash Screen
Creating an account - New User
Login screen - Returning User
Profile
Inspiration
As the theme for To the Moon and Hack was pretty much anything relating to space, we wanted to develop something fun that we thought aliens would need as much as humans.
Humans meet and love each other, so we thought aliens needed the same. After all, love is love!
What it does
lemoons☽ is a networking and online space dating web application in which you can match, chat, and meet new Aliens.
We wanted to develop lemoons☽ as a web-based application that was mobile-responsive, so aliens can use lemoons☽ on different types of human smart devices.
How we built it
We used HTML, CSS, JavaScript (and React + Firebase)
Realtime Database
Also used Material UI as UI pack
Our domain.com domain name created this weekend is whenlifegivesyoulemoons.space
Challenges we Faced
Since it was our first time using React, it was challenging learning and building the app at the same time in the given time allowance. Thus, the app’s functionality is not 100% completed. However, we laid the groundwork and we would like to work on the functionality after this hackathon.
Accomplishments that we're proud of
This is our first app built with React! We are proud that we were able to begin our initial steps in creating lemoons☽ and challenge ourselves to try something different.
What I learned
Learned how to use firebase
Learned React and its packages
Learned Material UI
What's next for lemoons☽
Improving functionality across the app
Revising the app to be more mobile-responsive
Increasing the app accessibility for those with disabilities through improving our design choices
Implementing a chat feature
Adding a location feature using Google Cloud
Connecting our app to our domain name
Built With
css3
firebase
html5
javascript
react
Try it out
github.com
whenlifegivesyoulemoons.space
docs.google.com | lemoons☽ | When life gives you lemons, use lemoons☽. lemoons☽ is a networking and online space dating application. Match, chat, and meet new Aliens. 👽🌙 | ['Nancy Tran ✧', 'Khoi Le'] | ['Best Domain Name from Domain.com'] | ['css3', 'firebase', 'html5', 'javascript', 'react'] | 7 |
10,415 | https://devpost.com/software/swiftune | Swiftune Logo
Website Home
Website Intro
Website About
Inspiration
The project was inspired by us because, during the quarantine, music was a really good way of getting our minds off of the things going on around the world. We wanted to automate the process of adding the newest and most popular songs by our favorite artists to our playlist. Hence, we created this application that would help achieve that task.
What it does
Swiftune is a very effective program that takes away the excruciating task of creating a playlist. With the click of a button, users will have a complete playlist, which satisfies their taste. The program takes into account the users’ liked songs by data scraping them. Finally, UiPath adds the top most popular songs of each artist into the user’s playlist.
How we built it
We built this application using UiPath Studio. First, the program asks the user what they would like to name the playlist using an Input Dialogue activity. Once this is stored in a String variable, the program opens the Spotify application and navigates to creating a playlist with that name. It then goes into the user's liked songs and scrapes all the data related to each song, including name, artist, and more. This data table is stored in an Excel sheet in CSV format. Next, the program enters a "for each" loop and cycles through each artist's name in Excel. Each artist is searched for in Spotify's search bar. Once we've navigated to the artist's Spotify profile, we use a combination of hotkeys, clicks, and hovers to add the artist's top 3 popular songs to our playlist. This cycle repeats for each artist due to the for loop. Once this is done, it reopens the Spotify application and navigates to the newly created and personalized playlist for the user to have a listen to!
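The scrape-then-loop step described above — read the liked-songs CSV back and search Spotify once per distinct artist — can be sketched in Python (UiPath does this with drag-and-drop activities rather than code; the column names here are illustrative):

```python
import csv
import io

# Sketch of turning the scraped liked-songs CSV into an ordered list of
# distinct artists to search for, one by one, in Spotify's search bar.
def artists_to_search(liked_songs_csv):
    seen, order = set(), []
    for row in csv.DictReader(io.StringIO(liked_songs_csv)):
        artist = row["artist"].strip()
        if artist and artist not in seen:
            seen.add(artist)
            order.append(artist)
    return order
```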
Apart from this, we also created a website in order to showcase our product to the best of our potential. In this website, you can also see our HTML, CSS, & Javascript skills. The website exhibits a brief introduction of Swiftune as well as a detailed video of the process (which is also added on Devpost).
Challenges we ran into
We ran into many challenges as a group because none of us had any experience using UIPath before. One of the challenges that we ran into was that the application would not click where we wanted it to click specifically. We solved this by removing the selector and making the program click based on the cursor position. Another challenge that we came across was that the application would not open on time and some of the steps would not be cast. This is because there was no delay in between those parts. By adding a delay, all the steps were cast, and the application was able to open without a problem and continue.
Accomplishments that we are proud of
We are most proud of the fact that the application worked as we imagined, and we were able to create it in the time given.
What we learned
This was a new process for most of us, as we had never used the UiPath application before. With the creation of this app, we were introduced to the programming method known as "drag and drop". For the most part, our group was acquainted with written programming methods such as HTML, CSS, and JS. However, with this hackathon, we decided to be adventurous and explore a new path of programming. This obviously came with added complications as we were not aware of the methodology of UiPath. Nevertheless, we persevered and created the application known as Swiftune!
What's next for Swiftune
To further develop our application, we plan to integrate Spotify web API in order to collect user data. Additionally, we plan to process this data using machine learning algorithms in order to create the best experience for our users. Even with the complexity of the Spotify Web API and machine learning algorithms, we’d like to integrate these innovative methods into our project while keeping the simplicity of one click!
Built With
css
html
javascript
uipath
Try it out
github.com
www.swiftune.tech | Swiftune | Minimize the tedious process of adding countless songs to a playlist into one quick click! | ['Kanav Bengani', 'Rohakk Gaddam', 'Anuraag Kolli', 'Sid Das'] | ['Best UiPath Automation Hack'] | ['css', 'html', 'javascript', 'uipath'] | 8 |
10,415 | https://devpost.com/software/cely-the-celestial-display | Logo
Astronomy Picture of the Day
Random Astronomy Picture
Sky Above Me Action with Celestial Map
Celestial Map Zoomed In
Rocket Facts
Sounds from the Space
Inspiration
CELY
is inspired by the idea that astronomy should be accessible from anywhere. Especially from a Smart Display! Our project thrives on providing rich educational content with pictures, facts, locations, and even space sounds!
What it does
CELY
is a project built to work with the Google Display Hardware. It uses a combination of APIs to provide people with fun & educational content about astronomy, like a picture of the day, random rockets with facts, the celestial sky, and spooky sounds from our Solar System.
How we built it
To make the hardware integration,
CELY
uses the new Google Actions Developer Console to configure conversations, make actions, and use a webhook to process data. We built 5 actions in total!
1)
Astronomy Picture of the Day
:
The Astronomy Picture of the day is extracted from NASA's API to use in a rich card response on Google Actions. It takes the current date of the device to provide users with a beautiful image, description, and credits.
2)
Random Astronomy Picture
:
The Random Astronomy Picture also uses NASA's API with a random date to get the image, description, and credits.
3)
Sky Above me
:
The Sky Above me action uses
Radar.io
to get the coordinates for the location given to generate a celestial map of the
user's location
.
4)
Random Rocket
:
The Random Rocket action uses the SpaceX API to provide data about rockets. It gets the image, description, and name of the rocket to present it on the smart display card.
5)
Space Sounds
:
The Space Sounds action uses NASA's Spooky Space Sounds dataset, which we host on a Google Cloud Storage Bucket to play sounds that came from space to the users.
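The real actions run as a Node.js Cloud Function with axios, but the APOD request behind actions (1) and (2) can be sketched language-neutrally; shown here in Python, the same endpoint serves both actions, with only the date parameter changing:

```python
from urllib.parse import urlencode

# Sketch of building the NASA APOD request used by actions (1) and (2):
# omit `date` for today's picture, or pass a random YYYY-MM-DD for a
# random one. (The real project makes this call from Node.js with axios.)
APOD_URL = "https://api.nasa.gov/planetary/apod"

def apod_request(api_key, date=None):
    params = {"api_key": api_key}
    if date:
        params["date"] = date     # format: YYYY-MM-DD
    return f"{APOD_URL}?{urlencode(params)}"
```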
Challenges we ran into
Every time we changed our action on the Google Actions portal, we had to wait for the propagation of our Cloud Function to make sure our code was working. In addition to that, we had problems making external requests in Javascript due to asynchronous events happening in the Google Action. Also, Google recently changed their entire platform, so we had to relearn a lot of the concepts.
Accomplishments that we're proud of
We're proud of developing a project that mixes hardware and astronomy to produce rich & educational content for our users. We are proud of developing a really cool project that has a good chance to grow!
What we learned
We learned to be patient and persistent with anything that happened. We learned how to use the new Google Actions Portal and how to make nested external requests. In addition to that, we learned how to use Radar.io and asynchronous calls in Node.JS.
Our Tech Stack:
JavaScript, Node.JS, Axios, cheerio, firebase-functions, Google Cloud Storage, Google Actions, NASA API, Radar.io, SpaceX data API, fourmilab API.
Built With
axios
dialogflow
google-actions
google-cloud
google-cloud-function
google-firebase
javascript
nasa
node.js
spacex | CELY: The Celestial Display | The Celestial Display that brings Hardware and Astronomy together | ['Nathan Kurelo Wilk', 'Ariel Kurelo Wilk'] | ['Best use of Google Cloud'] | ['axios', 'dialogflow', 'google-actions', 'google-cloud', 'google-cloud-function', 'google-firebase', 'javascript', 'nasa', 'node.js', 'spacex'] | 9 |
10,415 | https://devpost.com/software/planet-recognition-and-chatbot | Planet Identifier
Astro CHATBOT
Inspiration
We were inspired to create this project because we wanted to challenge ourselves to build new things and try and build an image identifier.
What it does
The first part of the project is a Planet Detector, which identifies the eight planets of the Solar System and the dwarf planet Pluto.
The second part is a ChatBot which will keep you entertained whenever you are bored.
How we built it
The project uses a picture as an input and detects which planet it is, using machine learning. The planet recognition system consists of a pretrained ResNet50 model with its last layer replaced by a softmax unit of 10 units. The model is built using the Keras library. The weights of the ResNet50 are set to non-trainable, while the weights of the softmax unit are trained using a small image dataset of planets downloaded from Google Images.
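The classification head described above can be illustrated in pure Python: the frozen ResNet50 features feed a 10-way softmax, and the prediction is the argmax of the probabilities. The class list is our guess at a layout (the write-up names nine bodies, so the tenth slot is a catch-all here):

```python
import math

# Illustration of the 10-way softmax head described above. The tenth
# "Other" class is an assumption; the write-up only names nine bodies.
CLASSES = ["Mercury", "Venus", "Earth", "Mars", "Jupiter",
           "Saturn", "Uranus", "Neptune", "Pluto", "Other"]

def softmax(logits):
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict(logits):
    probs = softmax(logits)
    return CLASSES[probs.index(max(probs))]  # argmax over the 10 classes
```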
There is also a Python chatbot which can crack jokes, greet you, etc. We built the GUI using the Tkinter library of Python. The chatbot is built using NLTK, the Natural Language Toolkit in Python. It is a rule-based chatbot that replies to simple user questions like: "What is your name?", "Tell me a joke", "Tell me a fact".
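The rule-based matching can be sketched as a keyword lookup (the real bot uses NLTK utilities and a Tkinter GUI; these rules and replies are illustrative only):

```python
# Toy version of the rule-based matching described above: the first
# keyword found in the message selects a canned reply.
RULES = {
    "name": "I am AstroChat, your space companion!",
    "joke": "Why did the sun go to school? To get brighter!",
    "fact": "A day on Venus is longer than its year.",
    "hello": "Hello, fellow space explorer!",
}

def reply(message):
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I only know space small talk."
```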
Challenges we ran into
The code was not working and we had to find a way to link the windows to each other. We both had to learn the syntax of Tkinter to be able to publish it.
What's next for PLANET RECOGNITION AND CHATBOT
We plan to make the chatbot better and add more features to it so works more efficiently. We also plan to train the Convolutional Neural Network on a bigger planet dataset so that it is more accurate in its predictions.
Built With
machine-learning
python
tensor
tkinter
Try it out
github.com | PLANET RECOGNITION AND CHATBOT | Planet Detection System and AstroChat bot using python machine learning and Tkinter. | ['Rishabh Iyer'] | [] | ['machine-learning', 'python', 'tensor', 'tkinter'] | 10 |
10,415 | https://devpost.com/software/pace-planet | resting_page
team_page
studying_page
universe_page
front_page
Inspiration
Having a galaxy adventure is fantastic and romantic, just like Hackathons!
However, COVID-19 keeps us from gathering together in person.
We are all alone, staying in front of computers all day long to do our work, and we can't help feeling bored and unproductive without each other.
How can we stay together and get motivated?
What it does
Built in a hope to become
productive
universe-explorer together,
Pace Planet
provides a platform for everyone to join together, in a team or as an individual, and track their studying time together to build a greater universe.
Pace Planet calculates a user's studying time using a timer and ranks the users based on the amount of their studying time. Plus, users can see the cute mascot
Pace
to study together with the users.
How I built it
1. Design
Adobe XD
was used to create a several simple prototype of the app to build.
2. Functions
I created functions using
Android Studio
and designed the specific functions as follows:
2.1. Track Your Time with the planet Pace
I used Android's Chronometer and SharedPreferences to track the studying time even after the app is refreshed. Pace changes its status based on the state of the chronometer.
2.2. See the Online Users (who are currently studying)
I used
Firebase
to save each user's ID so that the status of users who opened the app can be changed to online from offline and appeared on the screen.
2.3. Find Team to Study Together
I plan to build this feature using
Firebase
again by saving the team information.
Challenges I ran into
I had many issues with Firebase. First, I could not connect to Firebase, and it turned out that a simple restart could resolve the issue. Then I could not add the data in the class format (that has many small components inside). I tried to submit a long string to add all necessary data, but it was unsuccessful. I reduced the scope of the project and simply used one variable to build all functions related to Firebase.
The retrieval part also had some problems. Whenever I call the data, Firebase sends me a long string in which each item is separated by a comma. I would rather have an organized list, but due to the time limit, I simply split the string on the comma.
Android Studio's files were too big to upload on Github at once using UI. I tried many different ways to upload the folder, and I finally found to use the Power Shell to upload the folder.
Windows does not allow changing screens during the recording, so I had to create different videos for different screens. Unfortunately, I had difficulty merging the videos, so I had to upload them separately. I selected the one that demonstrates the app to upload here.
Accomplishments that I'm proud of
It was my first time ever trying to complete the app in two days, and it brought me a lot of fun and challenges!
I am also proud that I put in extra effort to make the features as easy as possible for users.
What I learned
I had a difficult time using Firebase and GitHub, but because I tried using them so many times, now I can utilize these systems better than ever before!
I started learning how to build AR using Android Studio (as well as Unity 3D), and it was very interesting and new.
What's next for Pace Planet
After completing all planned components, I hope to include AR that can take a picture of the planet that users created together.
Built With
android-studio
firebase
java
xml
Try it out
github.com | Pace Planet | Time Planner for Space Traveller | ['Gloria Kim'] | [] | ['android-studio', 'firebase', 'java', 'xml'] | 11 |
10,415 | https://devpost.com/software/asteroid-tracker | Inspiration
Our team loves learning about astronomy and space! We thought we'd create a project that combines that interest and provides people with a resource for data directly from NASA!
What it does
Our project aims to provide users with data about upcoming asteroids that will be closely approaching Earth in the next 7 days.
How we built it
We integrated the NASA API by accessing its Near Earth Object data. This gave us data about the close approach dates and times, names of asteroids, impact risk, relative velocity and miss distance. We used Jupyter Notebooks, Python and the Pandas library for our back-end, and Flutter and Dart for the front-end.
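Flattening the feed's nested JSON into rows ready for pandas might look like this; the field names follow NASA's Near Earth Object feed, and the sample data is made up:

```python
# Sketch of flattening the NeoWs feed (near_earth_objects keyed by date)
# into flat rows that pandas can consume directly.
def flatten_feed(feed):
    rows = []
    for date, asteroids in feed["near_earth_objects"].items():
        for neo in asteroids:
            approach = neo["close_approach_data"][0]   # first close approach
            rows.append({
                "date": date,
                "name": neo["name"],
                "hazardous": neo["is_potentially_hazardous_asteroid"],
                "velocity_kph": float(approach["relative_velocity"]["kilometers_per_hour"]),
                "miss_distance_km": float(approach["miss_distance"]["kilometers"]),
            })
    return rows
```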
Challenges we ran into
Our objective was to output all the collected data onto our main page, add a page for downloading the data directly onto the phone as an Excel or CSV file, and another page for displaying statistics with graphs created using Matplotly and Seaborn. However, we were not able to complete this because Flutter was not properly parsing the CSV file that contained all our data. So we ended up with just a template for our app.
What we learned
Although things didn't turn out the way we expected, we have all learned so much! This is the first time my group has made an app, especially using Flutter and Dart. We have also never accessed an API, so we were very new to these processes. We watched many YouTube videos and read online documentation to help guide us along the way.
What's next for Asteroid Tracker
We plan to complete and publish this Android app after the hackathon. If it doesn't integrate well with Flutter, we hope to use Swift.
Built With
dart
flutter
pandas
python | Asteroid Tracker | Track upcoming asteroids closely approaching Earth | ['Foram Gandhi', 'Rutvi Shah'] | [] | ['dart', 'flutter', 'pandas', 'python'] | 12 |
10,415 | https://devpost.com/software/movie-recommendation-system-il1qv7 | Inspiration
I always had to do a lot of internet surfing just to decide what to watch on a chill weekend, and ended up spending most of my time on the search itself rather than actually watching anything. So I decided to build something to deal with this problem.
What it does
This is a web app that asks you the name of the one movie you liked and based on that it finds similar movies that you may like and enjoy.
How I built it
It is a Flask-based web app that uses cosine similarity, with the pandas library handling the dataset. We built a similarity matrix and use its scores to recommend movies to the user. We also used the OMDB API to fetch movie details.
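As a sketch of the core idea (not the app's actual code), here is a tiny cosine-similarity recommender over a made-up bag-of-words matrix; the titles and feature counts below are illustrative placeholders.

```python
import numpy as np

# Tiny feature matrix: rows = movies, columns = keyword/tag counts (made up).
titles = ["Space Wars", "Galaxy Quest", "Romance in Paris"]
features = np.array([
    [3, 1, 0, 0],   # Space Wars: sci-fi heavy
    [2, 2, 0, 1],   # Galaxy Quest: sci-fi plus comedy
    [0, 0, 4, 2],   # Romance in Paris: romance heavy
], dtype=float)

def cosine_similarity_matrix(m):
    """Pairwise cosine similarity between the rows of m."""
    unit = m / np.linalg.norm(m, axis=1, keepdims=True)
    return unit @ unit.T

def recommend(title, k=1):
    """Return the k most similar other movies to `title`."""
    sim = cosine_similarity_matrix(features)
    idx = titles.index(title)
    order = np.argsort(-sim[idx])          # highest similarity first
    return [titles[i] for i in order if i != idx][:k]

print(recommend("Space Wars"))
```

In the real system the feature matrix would come from the dataset's tags or plot keywords rather than hand-written counts.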
Challenges I ran into
We completed the Python-based recommendation system but were not able to merge it with the JavaScript-based API model, so in the end it is more of a movie-info app than a recommendation system. Hunting down a dataset was one task; combining the two languages, Python and JavaScript, was another.
Accomplishments that I'm proud of
We were able to deliver something: an app that gives you info about a movie.
What I learned
Flask implementation, how to use APIs, and AWS deployment
What's next for Movie Recommendation System
To build a complete recommendation system with a revamped UI and more features.
Built With
amazon-web-services
api
bootstrap
css
flask
html
javascript
python
Try it out
www.movieinfo.com.s3-website.us-east-2.amazonaws.com | Movie Recommendation System | When you can't decide what to watch this weekend | ['Sanket Mhatre', 'Rohan Gawhade'] | [] | ['amazon-web-services', 'api', 'bootstrap', 'css', 'flask', 'html', 'javascript', 'python'] | 13 |
10,415 | https://devpost.com/software/planetary-system-simulator | Inspiration
I have always loved space and looking at the stars, and wondered how the solar system formed. I have a keen interest in astronomy and love to program, so taking our solar system as a starting point, I thought of simulating planetary systems and letting people interact with them: add suns, moons, and planets, throw them at each other, and have loads of other fun, all while keeping the program accessible to everyone online.
What it does
My program simulates planetary systems and shows orbits, planets, and stars. People can add and remove planets from the simulator, which helps them visualize and grasp many different physical and astronomical concepts.
How I built it
I used the online module VPython to create this program, with gravity functions that depend on the bodies' masses, and orbit-plotting functions that trace the orbits of planets, stars, and moons using variables such as eccentricity, periapsis, semi-major axis, and other orbital elements. VPython also provides various mathematical functions that made the calculations easier. It is a 3D engine, so using different spheres and textures I was able to simulate planets, and using buttons I added the program's interactive functionality.
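The app itself is written with VPython, but the gravity calculation at its heart can be sketched in plain Python; the bodies below are illustrative (a Sun-like star and an Earth-like planet), not the simulator's actual code.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_accel(body, others):
    """Net 2D acceleration on `body` from every other body (Newtonian gravity)."""
    ax = ay = 0.0
    for o in others:
        dx, dy = o["x"] - body["x"], o["y"] - body["y"]
        r = math.hypot(dx, dy)
        a = G * o["mass"] / r**2        # magnitude of acceleration toward o
        ax += a * dx / r
        ay += a * dy / r
    return ax, ay

# Earth-like planet one AU from a Sun-like star.
sun = {"x": 0.0, "y": 0.0, "mass": 1.989e30}
earth = {"x": 1.496e11, "y": 0.0, "mass": 5.972e24}
ax, ay = gravity_accel(earth, [sun])
```

Stepping each body's velocity and position with this acceleration each frame is what produces the orbits.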
Challenges I ran into
Plotting the orbits was the hardest part, especially keeping up with the computations for all the planets as the user continues to add more, although it was resolved with a little research and a lot of head-scratching.
Accomplishments that I'm proud of
I learnt how to simulate astronomical objects and picked up various astronomical concepts along the way.
What I learned
What's next for Planetary System Simulator
Built With
python
vpython
webgl | Planetary System Simulator | Let's Simulate, Learn and destroy the sun! | ['Hasnain Koser'] | [] | ['python', 'vpython', 'webgl'] | 14 |
10,415 | https://devpost.com/software/alienservice | Logo
Join Page
Main App
Inspiration
We noticed that people nowadays worry a lot about being judged by the people around them. A common phrase, "Oh my god, this is so embarrassing", is universal. With Alien8, we want people to communicate with no baggage and just be free. People can talk to each other without worrying about being judged based on their appearance.
What it does
Alien8 is a platform that allows users to video chat with anonymity. They can interact with other users (or their friends!), see and show their facial expressions and reactions without worrying about their appearance. It's a great place for people dealing with body-image issues who do not want to be seen.
According to research conducted by the Department of Psychology at Carnegie Mellon University, 36% of people log on to anonymous chat rooms because they feel safe, and 55% log on to be social.
How we built it
We have produced a video calling application that aims to help people meet like-minded individuals, focusing on their personalities rather than their appearance. We achieved this through the use of alien AR filters and believe it'll be a useful platform for networking, especially during the current pandemic, without being judged for your countenance.
Challenges we ran into
It was hard for us to implement the SDK we used for the AR filter because it runs on a Python server. Integrating the SDK into our environment was a challenge, and we didn't have any API keys to instantiate it with.
What we learned
We learned various ways servers can communicate over the internet. Getting a server up and running was tedious work since the hackathon was online, but after building 3 different servers we found a quick way to do it.
What's next for alienService
We need to build a global server so that this site is accessible to everyone around the world. No one should be suffering like Chewy :)
Built With
css
html
javascript
Try it out
github.com | ALIEN8 | Anonymous video calling | ['zweistein1326', 'Pranay Agarwal', 'anubhavgarg123 Garg', 'anujbhatia2600 Bhatia'] | [] | ['css', 'html', 'javascript'] | 15 |
10,415 | https://devpost.com/software/emotion-companian | Inspiration
Testing
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Emotion Companian | Emotion Companian | Testing | [] | [] | [] | 16 |
10,415 | https://devpost.com/software/explore-the-moon | Inspiration
I wanted to celebrate the return of Crew Dragon Demo-2 and at the same time improve my vanilla JS skills, so I started creating a game in which an astronaut travels around the Moon and gathers samples for scientific research.
What it does
It is a simple game where the player (a blue dot) explores the surface of the Moon to search for rocky materials and bring them back for research.
How I built it
I used only 3 languages: HTML5, CSS3, and mainly JavaScript. I also wanted to learn how everything fits together, so I decided not to use any libraries or frameworks.
Challenges I ran into
Coding is hard for me because I am not a software engineering student or an engineer (I'm a product designer), so learning the logic and computational thinking used up a lot of my time. Unfortunately, I was not able to create the best experience for the game because some of the logic in the code is missing.
Accomplishments that I'm proud of
Still, I was able to build a decent UI and, at the same time, a functional game where the player can move around and interact with items on the Moon.
What I learned
A lot of Javascript logic and functions. Flexbox is also a new thing for me.
What's next for Explore the Moon
Better graphics, of course! I wanted to create an 8-bit effect but don't yet know how to do that on the web.
More Space Quest on the moon
Maybe even Mars!
Built With
css3
html5
javascript
Try it out
csleong98.github.io | Explore the Moon | A Vanilla-JS game which enables you as a astronaut or a blue dot in our case to explore the flat 2D world of the moon. | ['Chee Seng Leong'] | [] | ['css3', 'html5', 'javascript'] | 17 |
10,415 | https://devpost.com/software/return-to-adra | Return to Arda Logo
Our Inspiration
Our inspiration stemmed from our love for trivia and 90s video games. We blended these two into the space theme by creating a trivia game testing on quirky facts to guide an alien back to their home planet.
Space is a vast topic and one can never fully understand it. Keeping this in mind, we based our game on educating people and spreading the love for the galaxy.
What it does
On Return to Adra, users follow an Adranian, Eru, on their journey home. Users need to answer questions on intergalactic facts, and each right answer gets Eru one step closer to home. A wrong answer results in the termination of the game, and Eru is back at square one. The aim of the game is to improve one's knowledge of space while boosting curiosity.
How we built it
We used Unity's WebGL game development software to develop the game in C#. The various levels were designed by deploying distinct in-built UI elements and graphics fashioned by us. In addition to the displays, the animations were also custom coded to make the interface lively.
Initially, multiple prototypes were sketched on paper to properly understand the functionality of the game and make it user interactive. The final blueprint was then used to base the game on.
All the graphics, ranging from the asteroid option blocks to the background, were designed in GraphicsGale and Aseprite. The tailor-made logo was generated on Placeit and Canva to further enhance the game's recognition on a public platform.
The video was made using Google Slides and iMovie, and posted on YouTube.
Challenges we ran into
We are all absolute beginners in game development and the software used. A lot of time was invested in selecting the most appropriate software and learning it as much as we could in the given time to develop our game.
We ran across multiple challenges in using Unity to make a game of this scale. The scaling and transitions between scenes were challenging. Creating the right graphics in GraphicsGale and Aseprite and embedding them into the game was an added task. We created multiple versions of the graphics and texts with various backgrounds until we settled on the current versions on a transparent background.
Once we were able to build the basic version of the game, we decided to plug-in sound and animations to the player sprites' motion. Here, we faced an issue in making the animations smooth and increasing the overall aesthetics of the game.
Accomplishments that we're proud of
We all have been playing games since our childhood, this was our first attempt to make a game.
So we are proud of the game interface and design. The premise was the highlight of the game!
Submitting a fully functioning game was the highest accomplishment we are proud of.
What we learned
Overall, the whole 48 hours spent on creating this game was a learning process for us.
We built on our team-building and collaboration skills throughout the project with constant communication and leveraging personal strengths in task division.
We learnt how to utilize Unity, GraphicsGale, and Aseprite. We started from scratch in each piece of software and worked our way up, learning the different elements needed for the game.
To conclude, we learnt how to create a beginner game with a neat user-interface in this hackathon.
What's next for Return to Adra
In the future, we hope to have more levels and animations in Return to Adra. We'd also want to make it accessible to the general public by adding it to the Play Store and App Store. Furthermore, we want to make the overall game design and graphics more powerful to increase user interaction and retention.
Built With
aseprite
c#
canva
graphicsgale
python
unity
Try it out
simmer.io
github.com | Return to Arda | Ever dreamt of exploring the far reaches of space? Now you can help an Ardanian return home with your intergalactic trivia! Can you get Eru to Arda? Try your hand at answering questions on deck. | ['Ananya Rao', 'Kamran Alam', 'Rue Sriharsha'] | [] | ['aseprite', 'c#', 'canva', 'graphicsgale', 'python', 'unity'] | 18 |
10,415 | https://devpost.com/software/how-many-women-are-in-space-right-now-wptyvq | Do you know the exact number of women who are in space right now?
So far there have been 566 astronauts from 41 different countries who have been in space. Only 65 of those astronauts were women. It was only after the 1980s when flight programs began including women, however, men are still chosen significantly more creating an immense gender gap of astronauts who have been in space. From when Neil Armstrong took the first big leap for mankind to now, 12 men have stepped foot on the Moon. Only after 2024 as part of the Artemis Program, the first woman will land on the Moon. We are constantly exploring our solar system but we shouldn't let it be seen through the eyes of just one gender. We need more women in space.
Gender diversity in the aerospace industry is an extremely important issue for us, here at the howmanywomenareinspacerightnow team and we wanted the world to become more aware of this issue.
What our site does:
Provides real-time updates of the number of female astronauts in the Low Earth Orbit
How we built it:
Using HTML, CSS, and javascript with access to the NASA and CSA astronaut databases
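The site itself is JavaScript, but the underlying lookup is simple enough to sketch in a few lines of Python: cross-reference a roster shaped like a "who's in space" API response against a set of known female astronauts. Both lists below are illustrative samples, not live data.

```python
# Illustrative set of female astronauts to match against (a real site would
# maintain the full list from the NASA/CSA databases).
FEMALE_ASTRONAUTS = {"Jessica Meir", "Christina Koch"}

def count_women(roster):
    """Count people currently in orbit whose names appear in the female list."""
    return sum(1 for p in roster["people"] if p["name"] in FEMALE_ASTRONAUTS)

# Sample roster shaped like a typical "astronauts in space" API payload.
sample_roster = {"people": [
    {"name": "Christina Koch", "craft": "ISS"},
    {"name": "Alexander Skvortsov", "craft": "ISS"},
    {"name": "Jessica Meir", "craft": "ISS"},
]}
print(count_women(sample_roster))
```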
What's next for How Many Women Are In Space Right Now?
Our website is currently deployed on Domain.com and we hope to expand by adding capacities by Country, Agency, and an iOS app
Our Github Repo:
https://github.com/sarahschun/WomenInSpace
Built With
css
html
javascript
Try it out
howmanywomenareinspacerightnow.space | How Many Women Are In Space Right Now? | howmanywomenareinspacerightnow.space is the Internet’s #1 source for keeping tabs on how many female astronauts are in Low Earth Orbit. This website was created to encourage gender diversity in space. | ['Sarah Chun', 'Fion Lin'] | [] | ['css', 'html', 'javascript'] | 19 |
10,415 | https://devpost.com/software/galaxy-craft | Education in the early stages of life is a fundamental basis for the social and intellectual development of everyone, through this cognitive stimulation is sought by fueling curiosity.
Built With
azure
css
html5
java
javascript
sketch
swift | Galaxy | MMO Multiplatform video game to create your own solar systems and life on each planet, users can visit their solar systems, explore together past space-time locations and share rewards. | ['Hrithik Sahu'] | [] | ['azure', 'css', 'html5', 'java', 'javascript', 'sketch', 'swift'] | 20 |
10,415 | https://devpost.com/software/edusoph-sms-education-platform | This is an all in one platform for teachers to set tasks ! | EduGo! Portable Education for Teachers | EduGo ! | [] | [] | [] | 21 |
10,415 | https://devpost.com/software/spacing-out | Join a team!
Visit "spaceships" (grocery stores) to play the game!
Earn points for your team by giving stardust - but be careful! It decreases the longer you stay in the store.
Put on a "spacesuit" (a mask) to help protect yourself and fellow space travelers!
If you don't have a mask, be warned... and be careful to social distance around others!
Without a mask, your game will go into danger mode, with a bright red screen!
On Earth (not in a store), you can still join a team to see their points, but stardust can only be earned on spaceships!
Logo
Inspiration
During this pandemic, space (specifically between people) is more important than ever! We wanted to combine our love for space with a concept that would help people socially distance during these uncertain times.
What it does
The website tracks the user's location and displays it on a map as their character (an astronaut). The game itself is activated if they are inside of a store, which is displayed as a spaceship on the map. From there, they choose if they are on the red team (Mars) or the blue team (Neptune). They then can earn points through two different features. First, they can get "stardust", where the amount decreases as time goes on, by making their trip to the store short and sweet. Second, they can earn points by indicating that they are wearing a mask (their spacesuit) inside of the store.
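The stardust mechanic described above amounts to a score that decays with time spent in the store; a minimal sketch, with made-up starting amount, decay rate, and floor:

```python
def stardust(minutes_in_store, start=100, per_minute=2, floor=10):
    """Points awarded for a store visit: shorter trips earn more stardust.

    The amount decays linearly with time but never drops below a floor,
    so even long trips earn something. All parameters here are illustrative.
    """
    return max(floor, start - per_minute * minutes_in_store)
```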
If they're at home, they're on "Earth", meaning that they can access the website and look at information, but can't play the game itself.
We hope that this game is a fun way to encourage people to social distance and wear masks while grocery shopping and running essential errands!
How we built it
We built the website itself using HTML, Javascript, and CSS. We then used the Google Maps API for the map, and Google Firebase to store the team data in a database. The website is hosted using Github pages, with our domain (spacingout.space) obtained from Domain.com.
Challenges we ran into
At first, we had issues sensing whether the user was in the store. Initially, we tried to use Radar.io for geofencing, but unfortunately we had difficulties using the SDK. As a result, we had to calculate the distance between the user and the store using the Google Maps API and check whether it was under a certain margin of error.
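A common way to implement that distance check without any SDK is the haversine formula; this sketch (not the app's actual Google Maps API code) flags the user as in-store when they are within an assumed 50-metre margin.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    R = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def inside_store(user, store, margin_m=50.0):
    """True when the user is within margin_m metres of the store."""
    return haversine_m(*user, *store) <= margin_m
```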
We also tried to use Firebase hosting for our website, but we found that our website would be up in a shorter time with our custom domain using Github pages.
Since we wanted the website to be used while grocery shopping, our biggest unresolved challenge was making the website scalable for phones.
Accomplishments that we're proud of
Before this hackathon, we had never used the Google Maps API and most of us had very limited experience with databases in general. Also, none of us had ever used a custom domain name before. The very fact that we were able to create a mostly functional website with all of these components was a big accomplishment for us!
What we learned
We learned not only how to use all of the different APIs included in our project, but also how to put all of these components together to create a fun and space-themed solution to a real-world problem.
What's next for Spacing Out!
We wanted to create not only a team score, but a safety level for each location/spaceship. This safety level would take into account the ratio of the number of people in the store compared to its capacity, as well as the percentage of mask-wearing people in the store. We would also like to create functionality for user logins, so that people can keep their points, as well as a leaderboard with more than 2 team options to join. And most importantly, we want the website to scale so that the mobile version looks just as good as the regular version.
Built With
css
firebase
google-distance-matrix
google-maps
google-places
html5
javascript
Try it out
spacingout.space
github.com | Spacing Out! | Encouraging people to socially distance through a web game! Travel to different real life locations to earn points, but make sure to socially distance and wear a mask! | ['Connor Erickson', 'Dhruv Sharma', 'Jacob Fanale', 'Emily Amspoker', 'Lily Zook'] | [] | ['css', 'firebase', 'google-distance-matrix', 'google-maps', 'google-places', 'html5', 'javascript'] | 22 |
10,415 | https://devpost.com/software/alien-puzzle-game | Galaxy Center: A Cryptography Game
A simple puzzle game based on ciphers and discovering alien planets.
Welcome to Galaxy Center! Your one stop for all the planets!
The planets you will be traveling to are Epsilon, Kappa, and Zeta.
Each planet includes questions and challenges which the player must get through to move on to the next planet. Challenges are based on computer science and cryptography.
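As an example of the kind of cipher such challenges can build on (the game's actual puzzles may differ), here is the classic Caesar shift in Python:

```python
def caesar(text, shift):
    """Shift each letter by `shift` places, wrapping around the alphabet.

    Non-letters pass through unchanged; decode by using the negative shift.
    """
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)
```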
What inspired us
We wanted to help others learn cryptography and computer science basics in a fun manner. Traveling to various alien planets to learn this seemed like a great way to do this.
What we learned
We learned more aspects of JS and its integration with HTML through elements. We also learned about tags and how they can allow for better CSS formatting.
How we built it
We began building by creating the HTML files for each page, starting with the index. The JS files were created when they were needed. Once HTML files and their JS counterparts were mostly finished, CSS was added to format and style the webpage. The project was made mostly in the order that the game plays in.
Challenges we faced
With HTML, we didn't face many challenges. With JS, the chatbox feature on Zeta proved the most challenging, as having all user responses and translator responses show up proved difficult. With CSS, centering the labels and button created the greatest challenges.
Built With
css
html
javascript
Try it out
github.com
arnavgupta03.github.io | Galaxy Center: A Cryptography Game | Galaxy Center is a web game designed to test cryptography and cybersecurity skills in an alien environment. | ['Harshul Gupta', 'Arnav Gupta'] | [] | ['css', 'html', 'javascript'] | 23 |
10,415 | https://devpost.com/software/rpi-game-shuttle | Inspiration
We love playing minigames, and making a series of them to channel our (one-time) dreams of being astronauts seemed super enjoyable to build and play through. We also wanted tactile functionality for playing our game, using joystick controls similar to those in an aircraft.
What it does
RPi Game Shuttle has two built-in minigames - one a space debris shooter, in which you avoid and destroy flying asteroids as they near you, and another where you use your mouse to point to a section of the map, guessing where exactly the ISS could be stationed above you.
How we built it
We used pure Pygame (and PIL for image processing) in order to build the project.
Challenges we ran into
The hardest part was ensuring the numerous conditions and collisions for the shooter worked and were effective. Having a final working product was our goal - and getting the full Raspberry Pi mouse functionality working was equally difficult.
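The shooter's collision checks were done in Pygame; the core test behind sprite collisions is a plain axis-aligned bounding-box overlap, which can be sketched without any library (rects as (x, y, w, h) tuples):

```python
def collides(a, b):
    """Axis-aligned bounding-box overlap test; rects are (x, y, w, h).

    Two rects overlap when each one's left edge is to the left of the
    other's right edge, and likewise vertically. Edge-touching rects
    share no area and count as not colliding here.
    """
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```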
What we learned
None of us previously knew how to use Pygame, which meant that learning all of its mechanics and using it to its full potential for a difficult game was super hard - luckily, this was an obstacle we eventually overcame.
What's next for RPi Game Shuttle?
More minigames and ideas using the hardware could involve the RPi Game Shuttle becoming a huge collection of playable activities, and endless fun.
Built With
json
pygame
python
requests
Try it out
github.com | RPi Game Shuttle | A collection of minigames which work with joystick controls on a Raspberry Pi for space-related fun. | ['Ana V', 'Day91 B', 'Myst rite', 'ZomBMage'] | [] | ['json', 'pygame', 'python', 'requests'] | 24 |
10,415 | https://devpost.com/software/astrobot | Inspiration:
Robotics is all about precision, and in space the margin for error is razor-thin! Satellite repairs are expensive and necessary; with a robotic arm, this expense could be managed and better results obtained. At least, as robotics enthusiasts, this is what we feel! So Astrobot is our attempt to bring this passion to life.
What it does
What we want it to do: select the required tool and perform the repair, with everything controlled remotely by an operator.
What we tried to implement: Since space tools are complicated to model within 48 hours, we modelled one tool: the drill. We added a camera so the person controlling the bot can see the environment, and used OpenCV algorithms to assist in finding shapes (circles, as most repairs involve circular objects such as nuts). We used a ROS node to control the joints via teleop-keyboard.
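The project used OpenCV's circle detection; as a dependency-free illustration of the idea, this sketch estimates a single filled disk's centre and radius from a binary mask using NumPy (a simplification, not the HoughCircles algorithm itself):

```python
import numpy as np

def locate_disk(mask):
    """Estimate (cx, cy, r) of one filled disk in a boolean mask.

    The centre is the centroid of the on-pixels; for a filled disk,
    area = pi * r^2, so r = sqrt(area / pi).
    """
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    r = np.sqrt(len(xs) / np.pi)
    return cx, cy, r

# Draw a filled circle of radius 10 centred at (30, 20) on a 64x64 grid.
yy, xx = np.mgrid[0:64, 0:64]
mask = (xx - 30) ** 2 + (yy - 20) ** 2 <= 10 ** 2
cx, cy, r = locate_disk(mask)
```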
How I built it
We used Robot Operating System as a pipeline to bring all the components together.
Challenges I ran into
After converting the SolidWorks file to URDF, we had trouble controlling the Gazebo model, and hence could not demo the entire simulation.
Accomplishments that I'm proud of
The basic overall framework!
What I learned
We worked with a SolidWorks file for the first time, hence all the challenges! But overall it was a great learning experience that made us push our boundaries.
What's next for Astrobot
Fixing the simulation, adding all the tools and getting this to work! We believe it has a lot of scope and want to try to implement it.
Built With
gazebo
opencv
python
ros
solidworks | Astrobot | What if repairinng satellites was as easy as playing a video game? Here is an idea that aims to take the love and precision of gaming to space. A remote-controlled robotic arm that could do repairs! | ['Prajna Jandial', 'Paramjit Singh', 'Suhail Khan'] | [] | ['gazebo', 'opencv', 'python', 'ros', 'solidworks'] | 25 |
10,415 | https://devpost.com/software/mars_pathfinder | GIF
Adding and removing checkpoints by Ctrl+Click
GIF
Drawing walls and ridges for the Rover to avoid
GIF
Routing through multiple checkpoints to reach destination
GIF
Speed controls
GIF
Dragging checkpoints before starting the Search
GIF
Start search
GIF
Generating mazes of various patterns
GIF
Inserting and removing walls and checkpoints
GIF
TARS bot
GIF
Picking algorithms along with their controls
GIF
Dragging checkpoints and endpoints and instant path routing
Inspiration
As engineering undergraduates starting off with graph theory algorithms and path planning techniques, we chanced upon this amazing web app built by Clement Mihailescu and explained on his YouTube channel.
https://clementmihailescu.github.io/Pathfinding-Visualizer/#
https://github.com/qiao/PathFinding.js
We have always been inspired by space explorations and voyages, and how unmanned vehicles are able to plan their way on a foreign planet with little human intervention, avoiding obstacles along the way and responding fast to changing environmental conditions. Hence to improve our understanding of Path Planning algorithms and to give some shape to our childhood dream of operating an actual Rover on Mars, we put together this project.
What it does
This application finds the shortest path from the rover's initial node to its destination node using various shortest-distance algorithms like A*, Dijkstra, etc. It lets the operator specify control parameters like avoiding corners, allowing diagonal movement, etc., depending on the Rover's design. It also allows the operator to set checkpoints through which the Rover must pass to reach the destination. Other features such as maze generation using random and recursive algorithms have been implemented. The control buttons are dynamic, and the message bot TARS guides the user through the app. Our app also supports dynamic dragging of checkpoints and start/end points, as well as dynamic addition and removal of checkpoints.
Path planning algorithms incorporated:
- AStarFinder
- BestFirstFinder
- BreadthFirstFinder
- DijkstraFinder
- IDAStarFinder.js
- JumpPointFinder
- OrthogonalJumpPointFinder
- BiAStarFinder
- BiBestFirstFinder
- BiBreadthFirstFinder
- BiDijkstraFinder
- CollaborativeLearningAgentsFinder
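The finders above all run in JavaScript in the browser; to illustrate the core idea, here is a minimal Dijkstra on a 4-connected grid in Python (an illustrative sketch, not the app's code):

```python
import heapq

def dijkstra(grid, start, goal):
    """Shortest path length on a grid of strings ('#' = wall), or None.

    Classic Dijkstra with a priority queue; every step costs 1, so on
    this grid it behaves like BFS but generalises to weighted costs.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None

grid = [
    "....",
    ".##.",
    "....",
]
```

Swapping the pop order for a heuristic-weighted priority gives A* and the other informed finders.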
Features built:
Multiple Destinations
- Ctrl+Click on grid cells to add checkpoints.
- The agent covers the checkpoints in shortest-path order and reaches the destination.
- Rendering using the Travelling Salesman algorithm.
- Dynamic rendering of the path through simple drag and drop: the shortest path stays visible and updates if the user drags nodes, even after the search is over.
- Removal and addition of checkpoints by Ctrl+Click on them, with instant path re-routing.
- Music loop in-game
- Interactive Guide using SweetAlert
- TARS bot to provide instructions to the user
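The multiple-destination feature above orders checkpoints with a Travelling Salesman approach; here is a brute-force sketch of that ordering step (fine for a handful of checkpoints, factorial beyond that; Manhattan distance stands in for the real grid path cost, as an assumption):

```python
from itertools import permutations

def manhattan(a, b):
    """Grid distance stand-in for the actual pathfinder's cost."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def best_order(start, end, checkpoints):
    """Try every checkpoint ordering; keep the cheapest start->...->end tour."""
    best, best_cost = None, float("inf")
    for order in permutations(checkpoints):
        stops = (start,) + order + (end,)
        cost = sum(manhattan(stops[i], stops[i + 1]) for i in range(len(stops) - 1))
        if cost < best_cost:
            best, best_cost = order, cost
    return best, best_cost
```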
How I built it
The UI is built using the Raphael.js library and basic HTML, CSS, and JS, and our web app has been tested on four browsers: Firefox, Google Chrome, Safari, and MS Edge. The pathfinding and maze algorithms are implemented in JS following the OOP programming paradigm; multiple-destination routing is handled with a Travelling Salesman approach to make sure the shortest possible path is found through every checkpoint to the final endpoint, and the instructions are displayed using SweetAlert. Compiling and bundling of the application are handled by Gulp and npm, and for version control we used Git and Bitbucket.
Challenges I ran into
We ran into issues implementing the pathfinding algorithms in JavaScript, and quite a few of the concepts were derived from research papers and articles, so understanding and implementing each of them took a lot of effort. As our goal was to make the user experience smooth and fun, we also had to come up with innovative ideas to convey the app's functionality to the user using guides, the message bot, etc. Finally, implementing TSP, believed to be one of the most intriguing problems in the CS domain, and handling the complexity of our implementation was an algorithmic challenge that prompted us to read papers and refer to quite a few courses to clear up our concepts of graph theory.
Accomplishments that I'm proud of
We are proud that we were able to work as a team and create an easily operable app with aesthetic designs. The real-time rendering of dragging is one of the highlights for us. We are also proud of creating a user-friendly app by introducing the message bot TARS. Not only did we consider the coding and implementation part of the project, but we also considered a good user experience.
What I learned
We used the OOP programming paradigm to create this app. We learned how to build and add features using good coding practices. We also learned the various algorithms used to find the shortest path from starting point to destination, and how they work under specific conditions (for example, allowing diagonals or not). We also learned how states (ready, start, pause, clear, etc) can be used to set the viable controls appropriately. This allows easy and systematic addition of features.
What's next for MARS_PathFinder
We plan to add obstacles of various types (like bombs) as well as rewards (like fuel and coins) to improve the user experience. Further, we plan to build a game out of this project, where multiple users can play against each other to choose the controls that give the shortest path in the least possible time while collecting the maximum amount of rewards.
Built With
bitbucket
bootstrap
css3
git
gulp
heroku
html5
javascript
jquery
npm
raphael.js
sweetalert
Try it out
mars-rover-pathfinder.herokuapp.com
bitbucket.org | MARS_PathFinder | Navigate the Mars Curiosity Rover using various pathfinding algorithms, visualize routes found, add checkpoints, mazes along the way to reach its base and drag checkpoints and endpoints in real time. | ['Avani Gupta', 'Dipanwita Guhathakurta', 'Akshaya Karthikeyan', 'Tathagata Raha'] | [] | ['bitbucket', 'bootstrap', 'css3', 'git', 'gulp', 'heroku', 'html5', 'javascript', 'jquery', 'npm', 'raphael.js', 'sweetalert'] | 26 |
10,415 | https://devpost.com/software/teachers-guide | Home Page(on mobile)
Different components of our website(on mobile)
Edith Chatbot( on PC)
Video Tutorial Page(on mobile)
FAQ Page(on mobile)
Gif Playing in Action on the FAQ page
Team Members(on mobile)
Teacher's Guide
VIDEO TUTORIALS AND CHATBOT TO ASSIST TEACHERS REGARDING ONLINE CLASSES
Link to Website:
Teacher's Guide
WHY THIS?
In this period of COVID, teachers, especially those who are not used to online teaching platforms, are facing many problems that can be solved through a chatbot able to answer each and every query efficiently.
WORKING OF CHATBOT:
Assisting how to take classes(using droidcam),
how to generate link, sending link to students via mail or whatsapp,
how to admit or deny entry to a participant, how to mute/remove a participant,
when to switch on video and when to keep it off, how to present screen,
if someone has stylus/graphic tablet how to use it with screen sharing and writing on the screen,
how to end meeting,
how to take tests and assignments using meet and google forms and other free platforms.
various GIFs are added to demonstrate actions.
FUTURE WORK:
Adding support for more platforms.
Using ML and NLP for chatbot.
Improving UI.
SKILLS/TECH USED:
WebD & Dialogflow
PROJECT BY:
ADITYA RAI
ASHI SHARMA
TANMAY RAICHANDANI
VEDANT GUPTA
Built With
bootstrap
css
dialogflow
html
Try it out
github.com | Teachers-Guide | In this pandemic teachers especially those who are not usual to the online teaching platforms are facing many problems which can be solved through a chatbot that can answer their each and every query. | ['Aditya Rai', 'Vedant1710', 'Ashi Sharma', 'tanmayr2001 Raichandani'] | [] | ['bootstrap', 'css', 'dialogflow', 'html'] | 27 |
10,415 | https://devpost.com/software/constelaltion-generator | initial results, single pass with connector algorithm.
later results, three examples, no connections, one pass, two passes (top down)
What it does
The command line program produces three SVGs. These are output to stdout, so it is recommended that the output is piped into a file. An HTML file works without any tweaking, but you could just as easily obtain a .svg.
How I built it
After trying to understand Qt5 enough to write all the graphics in it, I gave up and hacked together something that outputs SVGs that look like what I wanted.
Challenges I ran into
In theory I had a team. In practice... I didn't see them do very much. Could have been timezone problems but I doubt it.
Qt is quite hard to work with when one doesn't have the time to look at any proper tutorials.
Accomplishments that I'm proud of
Considering the simplistic algorithm used to draw the constellations, they look surprisingly realistic.
What I learned
I was reminded that formats like SVG, PPM, and HTML are really simple to output, meaning you don't need a fancy GUI to make pretty pictures.
What's next for constelaltion generator
I would like to get the Qt part to work. I thought it would, but it just has an empty window.
Built With
c++
qt
svg
Try it out
github.com
github.com | constelaltion generator | Need a night sky for your fictional world? allow us to generate one for you. The best part is that you need to make it look good yourself! | [] | [] | ['c++', 'qt', 'svg'] | 28 |
10,415 | https://devpost.com/software/moon-network | Computer Screen
Computer Configuration
Address table
Local Base Network
Lunar Bases
Inspiration
I have always found networks interesting. Whether it's the internet or a small LAN, I've always been interested in understanding how it works. And while there are many fantastic resources out there, I felt there was potential for a more gamified approach to learning how a network functions.
What it does
We made lunar bases on the moon! But there's a problem: they are operating separately in their own universes. Moon Network does a very basic simulation of a network. There are a number of computers in each lunar base that can be connected and configured so that they are able to communicate together.
How I built it
I built it using Microsoft's Blazor to handle all of the DOM manipulation. The graphics are done using SVG. And Azure to handle the hosting of the web app.
Challenges I ran into
Had zero experience setting up a web app through azure so I fumbled a couple of times during the setup.
Making the SVG graphics scale consistently. Having to redo some of the graphics near the end of development. Working out how to structure the nodes in a way best suited for blazor components.
Accomplishments that I'm proud of
The development pipeline had automated CI/CD. Making 99% of the drawing and graphics used by the web app. Using a new technology like blazor.
What I learned
-The process of deploying a web app on azure.
-Using the viewbox property to control the scaling of svg graphics.
-Communication between blazor components.
What's next for Moon Network
-Making the simulation more robust.
-Adding a tutorial
-Giving more freedom on the infrastructure of network to be built.
-Adding more types of devices to the network.
-Graphics overhaul.
Built With
asp.net
blazor
c#
Try it out
www.moonnet.online | Moon Network | Moon net is a project intended to teach how networks work in the fun setting of space. | ['Manuel Salguero'] | [] | ['asp.net', 'blazor', 'c#'] | 29 |
10,415 | https://devpost.com/software/take-me-too | Home Page
Experience Catalog
Moon
Submission Form
Successful Submission
ISS
Coming Soon
Chatbot
Inspiration
Space travel is something that has always excited and inspired us since childhood. With technological advancements and feasibility in this sector growing, the future in which common people get to experience space is not far. Hence, we can very well look forward to space tourism being a hot business in the upcoming decades. Another huge inspiration for this project has been,
Elon Musk
. We look forward and try to make his vision for
'Anyone' can move to Mars and beyond
come true.
What it does
Our user friendly web-app promotes space tourism. It is a futuristic application which will allow the masses to experience space like never before. Through this application, we connect you to agencies like NASA, ISRO, SpaceX, etc. and learn more about space travel opportunities and packages offered. It's your Booking.com for space travel, with a user-friendly interface to easily access the following services:
👨🚀 Space Experiences (e.g. Moon Walk, Space Jump)
🚀 Flights to the Expanse (e.g. Moon, ISS)
🪐 Stays in the Outer Space
How we built it
Our web - app is based on reactJS frontend and node.js backend. Firebase is used for user authentication and data storage. Google maps-moon API was used to display the exact locations of the place of stay at the moon. The google maps API was mounted upon NASA's API. We made use of it to retrieve launch information and other details as well. We have integrated a virtual assistant
Dr. Luna
👩🚀, to help users navigate throughout the website without any trouble. This virtual assistant is powered with Dialogflow.
Challenges we ran into
The most challenging part was using the Google Maps - Moon API. Since it's not something people work with every now and then, we couldn't find any good resources or documentation to get the desired results. But thank god for NASA's Moon Trek embeds. These are built using Google Maps and NASA's data, which gave us the optimum solution to our problem. We wanted to make the web app user-friendly and fun to use, so we used a Star Wars theme and subtle animations to create an amazing user experience.
On the whole, this idea depends on future advancement in space tourism, which, in the current scenario is decades away. There have been a lot of hypothetical assumptions made during this project. But that's the beauty that technology covers. It makes us prepared for the future.
Accomplishments that we're proud of
We proudly present you version 1 of our website. We never thought we will be able to pull off the whole prototype of our idea. We hope space travel becomes open to all and this website helps you to book your dream vacation.
Going to space? Take me too.
What we learned
Through this hack, we got the opportunity to learn a lot of technologies. Key points are:
💻 React JS and its various libraries
🗺 Google Maps API and NASA API integration
👩🚀 Chatbot Development through Dialogflow
👩🏽🤝🧑🏻 Team Work
⏱ Working on a deadline
✨ most importantly
SO MUCH ABOUT SPACE
What's next for Take Me Too
With the advancement in space technology, the day is not far where more options and features will be added to our website. As we uncover the universe more, opportunities to explore also increase. And for that you will always have takemetoo.space to reach out and fulfill your intergalactic dreams and aspirations.
Built With
dialogflow
firebase
google-maps
nasa-api
node.js
react
Try it out
github.com
takemetoo.netlify.app | Take Me Too | Space travel ? Take me too | ['Iishi Patel', 'Sambhav Jain'] | [] | ['dialogflow', 'firebase', 'google-maps', 'nasa-api', 'node.js', 'react'] | 30 |
10,415 | https://devpost.com/software/radioaware | User Interface
Inspiration
Radiation is BAD, and it's all around us. Well actually, there's a lot more of it in space, which is why astronauts always have their radiation exposure closely monitored. We thought this would be a very useful technology to adopt for terrestrial purposes: instead of carrying around fragile equipment like Geiger counters, why not have the radiation detection instruments in fixed places and keep track of people who were near them, thereby allowing personal records indicating how much radiation someone has been exposed to? Many studies have shown a significant correlation between cancer and increased exposure to radiation. Our solution provides a quick and relatively inexpensive way to track general radiation exposure without requiring everyone to carry Geiger meters.
What it does
Measures radiation levels in an area or a room, background as well as induced/artificial. Tracks users in vicinity and uploads radiation exposure data to a database, which then can be accessed via the app. Users can use a nfc/rfc ID to check in or out of radiation hazard areas, and this way the system keeps track of individual radiation exposure
How I built it
Hardware:
Geiger counter with digital pinout interface (5v logic)
Arduino Mega
MFRC522 RFID reader module
Wiring, breadboard, LEDs
Software:
Python
C
React native (app)
Database:
MongoDB
Challenges I ran into
Calibrating a Geiger counter is not the easiest thing in the world! Also, bananas are a bit radioactive. And I am currently hacking in a lab, holed up here due to a hurricane (Hurricane Isaias, currently a few miles off the coast of Melbourne, Florida). Building and testing radiation measurement hardware amid such conditions has been quite a challenge, but a fun experience overall!
Accomplishments that I'm proud of
We can track rads in real time! Bananas won't kill you by radiation poisoning unless you were literally drowning in them.
What I learned
Background radiation levels can fluctuate wildly depending on many factors
What's next for RadioAware
more robust prototype, maybe a POC for workers who are exposed to radiation often (CAT scans, X Ray Technicians etc)
Built With
ardunio
python
Try it out
github.com | RadioAware | Be Aware Of Your Enviornment | ['Vincent Occhiogrosso', 'Ebtesam Haque', 'Muntaser Syed'] | [] | ['ardunio', 'python'] | 31 |
10,415 | https://devpost.com/software/the-fact-updater | Inspiration
It's always nice to be able to transport yourself somewhere in your imagination. This is exactly where this hack comes from.
What it does
It is an AR filter which has multiple options related to the theme
How I built it
We built it using the SparkAR Studio
What I learned
We learned how to navigate the SparkAR slightly better and implemented multiple features for the first time.
Built With
sparkar
Try it out
www.instagram.com | The Cosmic Emulator | Feel The Space | ['Ritvi Mishra', 'Keshvi Mishra'] | [] | ['sparkar'] | 32 |
10,415 | https://devpost.com/software/travel-to-outer-space-dmaz2x | Travel-To-Outer.Space Logo
Travel-To-Outer-Space
Setting it up for yourself
Setup webapp:
To install the packages from requirements.txt
$ pip install -r requirements.txt
If you are getting any error just do:
$ pip install django django-crispy-forms
To run the server:
$ python manage.py runserver
Inspiration
From a young age, I'm pretty sure all of us wanted to be an astronaut at some point. Exploring the solar system would be so fun! But, as we realized when we were older, it takes a lot of training to become an astronaut. We wanted to make it so that all the space enthusiasts have a chance to explore the solar system from the comfort of their own homes.
What it does
This is a website that displays space-related facts, a quiz (in progress), a solar system simulator, and a live ISS coordinates page.
How we built it
We used a lot of Django/Python, CSS, Javascript and HTML (most of which was more advanced than we knew earlier) for the website. We also used ________ APIs and _______ libraries to achieve our final product.
Challenges we ran into
It can't be a hackathon project if we didn't run into challenges, so of course there were several. The main challenge for us was code that was far more complex than anything we had done before, which took some time to learn. There were other bugs too, which we mostly ironed out.
Accomplishments we're proud of
Creating a functioning website
Increasing our knowledge of CSS
Learning Astrophysics Formulae
Educating ourselves on Space Facts
What we learned
We learned a lot... Like a lot, a lot. From CSS to wacky Space Facts, we have learned more this weekend than in any single lesson of Physics.
What's next for Travel-To-Outer.Space
4 words. Coming soon: Space Quiz.
Built With
css
django
html
javascript
python
Try it out
github.com | Travel-To-Outer.Space | Travel-To-Outer.Space is your one-stop Space Exploration experience. Over here, you'll find everything space related that we find notable, from a space sim to a page full of the latest space facts! | ['Prithviraj Singh Shahani', 'R Prasannavenkatesh', 'Yi Qiang Ji', 'Vikram Jaisingh'] | [] | ['css', 'django', 'html', 'javascript', 'python'] | 33 |
10,415 | https://devpost.com/software/space-triviader | Inspiration
Built With
python | Private | Private | [] | [] | ['python'] | 34 |
10,415 | https://devpost.com/software/singly | Firebase Storage
Firebase Database
PythonAnywhere Console
Profile
Phonation Modes
Leaderboard
Singing recording
Inspiration
One of the biggest risks to the mental well-being of young people is social media. Numerous studies show that social media exacerbates exclusion, anxiety, and depression. We sought to create an inclusive, positive-reinforcement based social app for something that everybody loves: singing! Singing has many benefits: it boosts creativity, creates a sense of community, and helps with social integration, especially among children. All four of us have always been enthusiastic about singing and music, although we have never been very good at it. We decided to use our expertise in signal processing and machine learning to solve this problem. Our app helps people connect with their friends and get better at singing.
What it does
Our app allows people to become better singers. Users can access our song collection and sing over a song of their choice. Our signal processing algorithm calculates and returns a score to the user showing the pitch difference between the original recording and their recording. Users can also view a leaderboard for each song showing the highest scoring individuals. There is also a profile tab where users can play back all their recordings and view their pitch improvement over time. Finally, users can use our powerful vocal classification deep learning algorithm to find out their phonation mode. The phonation mode classifier categorizes someone's singing into 4 different classes: breathy, neutral, pressed, and flow. This tells the singer how their vocal cords and larynx are contracting when they sing, and what adjustments must be made in singing technique to sing in a certain style.
How we built it
We built our client app in React Native and Expo. We use Firebase for user accounts, data storage for the leaderboards, and for storing all the song recordings. The algorithms and ML model are run on Flask in a PythonAnywhere production server. The client app communicates with the server through a custom RESTful API. The vocal comparison algorithm was built using numpy and librosa. First, the comparison algorithm takes two audio files and generates a series of chroma vectors. Chroma feature vectors are generated by taking short time Fourier transforms of the data and binning the transformed data into the 12 different Western musical tones. These accurately describe the pitch of the audio files. These features are averaged for each file. The average features are then turned into probability distributions to account for differences in total energy between the two sound files. Finally, the statistical Bhattacharyya distance is calculated between the two distributions to find how close the two sound files are in pitch. The vocal classification model is a convolutional neural network built with tensorflow and keras. The model was trained using mel-frequency cepstral coefficient (MFCC) features from 910 audio files of different phonation modes. The model achieved a 93% training accuracy and an 89% test accuracy.
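The final comparison step described above, turning two averaged chroma distributions into a single pitch-difference score, can be sketched as follows (a minimal sketch of the statistical distance only; the real pipeline first extracts the 12-bin chroma vectors with librosa):

```python
from math import sqrt, log

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two discrete distributions,
    e.g. time-averaged 12-bin chroma vectors from two recordings."""
    # Normalise so differences in total energy between the files don't matter
    sp, sq = sum(p), sum(q)
    p = [x / sp for x in p]
    q = [x / sq for x in q]
    bc = sum(sqrt(a * b) for a, b in zip(p, q))  # Bhattacharyya coefficient
    return -log(bc)  # 0 when identical; grows as pitch content diverges
```

Identical distributions give a distance of 0, so a lower score means the user's pitch tracks the original recording more closely.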
Challenges I ran into
There were a lot of challenges in synchronizing the Firebase database, our production server in PythonAnywhere, and our client app. Specifically, we had a lot of difficulty getting asynchronous functions to work the way we wanted in React Native. Often the PythonAnywhere server would look for files not yet uploaded to the Firebase because the async function was not finished.
Accomplishments that I'm proud of
We were able to successfully create an Accounts feature in our first attempt using React Native Expo. After learning how to use React Native, we developed a navigation bar leading to each page. We also developed a very extensive signal processing algorithm in Python. This algorithm was based on modern techniques in MIR, or music information retrieval. We had to figure out how to adjust for different vocal timbres and different amplitudes in the files. Through our research, we found that chroma features and probabilistic distance were the best ways to determine the differences in pitch. Finally, we also managed to train a CNN with relatively high accuracy for vocal classification.
What I learned
This is our first time developing a mobile app with react native. Additionally, we learned how to integrate firebase into our react native applications.
What's next for Sing-ly
We believe that we can take Sing-ly public, publishing it on the App Store and Google Play Store. Before this publication, we want to expand our Firebase file system to accommodate more users and more files. We also plan to receive a song licensing agreement to provide more songs. We would also create a friends/followers system to help facilitate more connection between people.
Built With
flask
keras
librosa
python-anywhere
react-native
tensorflow
Try it out
github.com | Sing-ly | An app to improve your singing and compete with others | ['Sid Srivastava', 'Aman Wadhwa', 'Victor Hu', 'Chris Gu'] | [] | ['flask', 'keras', 'librosa', 'python-anywhere', 'react-native', 'tensorflow'] | 35 |
10,415 | https://devpost.com/software/ping-b6ruj7 | glimpse of backend architecture.
Basic user flow
Our logo
Inspiration
Due to COVID everything has shifted online, from classes, work meetings, and social hangouts to doctor's visits, concerts, and basically all daily activities. Hence virtual video conferencing software has come in very handy. There are so many applications for online calling, all trying to bring people closer, but one major problem they all face is the engagement level. Even though we can communicate face to face, problems arise in group settings. A few weeks back I attended a virtual birthday party for my friend; there were 35 people on the call and I personally knew only about 10 of them. Just to ask how my friend was, I had to wait for my turn to speak, and it got very annoying having to listen to two other people share updates about their lives when I did not even know them. I realised I could chat privately with my friend or use a breakout room, but it was a lot of work, and I lost engagement. I was unable to make small talk and socialize, and I realized no other application has a feature for that. At some point people become unengaged on video calls, and the experience no longer resembles a physical setting. This problem needs to be rethought: in the coming years, virtual calls will become an even bigger means of communication, as people start going to space more, etc.
What it does
Our application is trying to redefine how virtual interactions happen. We want people on the call to be more engaged and enable them to ping people and make small talk within the main conference room. Currently the layout is very intuitive and similar to Zoom-like conference apps, but with the added functionality of instant ping. Say all the users, let's say 6 people, are in the conference call, and person 4 is talking. At this point person 2 wants to say something really quickly to person 3. Using our Ping app, they can ping person 3 directly while still being part of the main conference room. While persons 2 and 3 are on the ping call, no one in the main conference room can hear or see them, but persons 2 and 3 can hear and see each other; in the background they can still see everyone else and hear low volume (20% by default) from the main conference call. A ping call can be made open, so other users can request to join it.
But we have a lot of features plan for or next release, both from ui/ux side and backend architecture.
How we built it
We are building a web app, and in this weekend's demo we were able to achieve the main feature: when a ping call is made, the people in the main conference room stop seeing the people in the ping call. We used an Express server, PeerJS for WebRTC, and Socket.IO, with JavaScript as our main language. We first searched online for "how to make a video calling app" and tried to understand the current architecture of various video calling apps. Then we designed our own data flow and architecture to implement the ping feature. We used Visual Studio Code as our editor, Socket.IO for server-client connections, and PeerJS (an API for WebRTC) to make peer-to-peer connections and stream media between peers. We deployed using Heroku. We tried using Google Cloud, but due to issues in our front end we were not able to.
Challenges
The first challenge was trying to solidify our understanding of exactly what we wanted to accomplish (i.e. the user stories). Understanding the video streaming connections and how important performance is in video streaming took the most time; it was mostly us searching for background information before settling on some promising tutorial videos and articles to follow. We even went through the Jitsi open source code, but it was too advanced for our level right now. We used the PeerJS package, which helps with managing WebRTC; however, for our specific application it was a struggle to customize the peer connections the way we wanted for ping calls. None of us had dealt with WebRTC tech or live video streaming, and most of us had no front-end experience either. We tried working with Firebase for hosting as well, which required an in-depth exploration of Google Cloud, but we ended up hosting on Heroku.
Accomplishments
Learned a great deal about WebRTC through PeerJS. Set up connections through PeerJS and a PeerJS server. We were also successful in getting a working demo of the basic ping feature, which was really important for confirming whether our idea is technically possible. Also, we were always thinking about scalable design and architecture, which is really important for a sleek product.
What's next
Finding ways to optimize multiple video streams in a single room. We have unfinished pages that we will finish to welcome the user to our application and provide a positive outlook for the app. We are trying to rethink the meaning of virtual conference app and present users with a completely new experience. Having multiple speaker and ping is just one feature. As said before, we are trying to bridge all the gaps between physical and virtual conversations to make the virtual calling experience more seamless and engaging.
We are currently using a mesh, where each user is connected to every other user through a separate connection. We plan to try out a client-server model. We also plan to use React (and React Redux) to organize our data passing more optimally. Lastly, we hope to complete the web browser version so that we can ship software that also works on mobile.
Built With
css
ejs
express.js
familiarity-with-express-server
familiarity-with-nodejs
figma
firebase
heroku
html
javascript
linux-operating-system
npm
powtoon
react
socket.io
vanilla
visual-studio-code
webrtc
wsl.-editors-visual-studio-code
Try it out
ping-peerjs.herokuapp.com
github.com | Ping: Video Calling App | Redefining virtual conversations. Get out of the box, ping people, make small talks, form mini groups and lot more on virtual conference calls. No need to be muted all the time when u can express. | ['Ishmeet Kaur', 'Nischay Modi', 'Darshika Bansal', 'shasvatvyas Vyas', 'John Voong', 'David Brown'] | [] | ['css', 'ejs', 'express.js', 'familiarity-with-express-server', 'familiarity-with-nodejs', 'figma', 'firebase', 'heroku', 'html', 'javascript', 'linux-operating-system', 'npm', 'powtoon', 'react', 'socket.io', 'vanilla', 'visual-studio-code', 'webrtc', 'wsl.-editors-visual-studio-code'] | 36 |
10,415 | https://devpost.com/software/winner-winner-dehydrated-ice-cream-dinner | Inspiration
Way too much time spent on looking for projects
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Winner winner dehydrated ice cream dinner | Winner winner dehydrated ice cream dinner | Suggestions for projects | ['Kevin Jiang'] | [] | [] | 37 |
10,415 | https://devpost.com/software/spacewalk-lxsy1u | Moon
International Space Station
Mars
Inspiration
Our inspiration was to create a website that encourages a desire to exercise by challenging people to walk the distance it would take to reach the moon, Mars, etc.
What it does
The website tracks the users location and allows the user to plan walks. Then records the distance the user has covered and adds it to the total.
When the user reaches 254 miles (for example) they will achieve the first accomplishment, walking the distance it would take to reach the International Space Station.
How I built it
We built the website using HTML, CSS, Javascript, and bootstrap. The website uses the google maps Javascript API to generate a map for the user to see their routes on. It also uses the google distance API to create the pathways for walking routes.
Challenges I ran into
We could not get the icon to show up with the accomplishment notifications. It was supposed to be, for example, a moon once the user had walked the distance to reach the moon; however, it showed up blue instead.
We wanted to use a tracking feature to track the user as they walked, for a more open-feeling experience. However, we could not figure out how to store the coordinates correctly and call them back without causing errors, so we settled on fixed routes / set mileage.
We then had a lot of difficulty implementing fixed routes, as at first we could not figure out how to add locations to the routes. After that, we couldn't get the directions for the routes to show up on the map. However, we did manage to get the location pointer to find the user.
Accomplishments that I'm proud of
We are very proud of getting the accomplishments popup to come up at the bottom of the main page.
We were also quite proud of some of the logos / graphics we made for the planet accomplishments.
What I learned
This was the first time either of us had tried to use Javascript properly. We used it quite a lot in this project. We have learned how many things it can be used for, and how to use script in html code.
What's next for SpaceWalk
We would like to implement a map which allows the user to press start and walk where they like then press end when their walk is over. The user could then review their walk, and distance, etc.
We would also like to add percentage trackers for accomplishments so that the user can see how close they are to reaching mars, or whatever their goal may be.
Built With
bootstrap
css
distance-api
html
javascript
maps-api | SpaceWalk | Make it to space one step at a time. | ['shanna balfour'] | [] | ['bootstrap', 'css', 'distance-api', 'html', 'javascript', 'maps-api'] | 38 |
10,415 | https://devpost.com/software/gravity-sandbox | Inspiration
I really enjoy physics, and have found astrophysics and orbital mechanics particularly interesting. A while back I learned about gravitational N-body simulations: simulations of many particles interacting with each other through gravity. The complexity that arose from such a seemingly simple equation (Newton's Law of Gravitation) was amazing! I wanted to share this experience with other people, so I decided to make an interactive N-body simulation with a simple interface.
What it does
The program simulates the gravitational interactions of many objects. You can move around the simulation, view the properties of each object (such as mass, density, radius, coordinates, and velocity), control aspects of the simulation (such as the time steps and distance scale), and add new objects with customized properties.
How I built it
I made this program in UPBGE, an improved version of the Blender game engine, and wrote the code in Python. The central part of this project is the N-body simulation itself, which I also wrote in Python using the vector equation for Newton's law of universal gravitation. Other important equations used (for collisions) included conservation of momentum, center of mass, and composite density.
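The core update, accelerating every body toward every other body with Newton's law and then integrating, might look like this (a hypothetical minimal 2-D sketch, not the actual UPBGE script; it omits the collision handling described above):

```python
# Hypothetical minimal 2-D N-body step (illustration only).
G = 6.674e-11  # gravitational constant

def step(bodies, dt):
    """Advance one time step. bodies: list of dicts with keys m, x, y, vx, vy."""
    # 1) Accumulate accelerations from Newton's law: |a| = G * m_other / r^2
    for i, a in enumerate(bodies):
        ax = ay = 0.0
        for j, b in enumerate(bodies):
            if i == j:
                continue
            dx, dy = b["x"] - a["x"], b["y"] - a["y"]
            r2 = dx * dx + dy * dy
            r = r2 ** 0.5
            acc = G * b["m"] / r2
            ax += acc * dx / r  # direction: unit vector toward b
            ay += acc * dy / r
        a["vx"] += ax * dt
        a["vy"] += ay * dt
    # 2) Move bodies with their updated velocities (semi-implicit Euler)
    for a in bodies:
        a["x"] += a["vx"] * dt
        a["y"] += a["vy"] * dt
```

With two equal masses at rest, one step pulls them toward each other with equal and opposite velocities, conserving momentum.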
Challenges I ran into
There were plenty of challenges, including limitations with UPBGE's physics engine. I ended up writing a script to calculate all of the necessary physics instead of using UPBGE's physics engine.
Accomplishments that I'm proud of
I am proud of my progress with python, blender, and physics. I was able to write this program in a single day.
What I learned
This project has helped me practice some physics, as well as learn more about UPBGE, including its features and its limitations.
What's next for Gravity Sandbox
Many things can be improved. For one, I would like to make a more intuitive way for applying velocity and forces to objects - perhaps with vectors showing the direction of the applied force and/or trajectories which show you exactly where the object will go. And of course, there is lots of room for aesthetic improvements and optimization.
Built With
blender
python
upbge | Gravity Sandbox | This project is an N-body simulation with an easy, simplistic interface. You can interact with the simulation as it is going, enabling you have an interactive and fun experience with astrophysics. | ['Dennis Chunikhin'] | [] | ['blender', 'python', 'upbge'] | 39 |
10,415 | https://devpost.com/software/space-collab | Inspiration
In recent times, space research has become a new race between countries and private organisations. Although, our past experience suggests that “Collaboration is important in maintaining a consistent pace of scientific discovery.” Therefore, I was inspired to build an application that encourages a collaborative approach to space research, thereby aiding the expansion of our current scientific understanding.
What it does
Space Collab provides a platform for individual scientific researches and organisations to collaborate on space research projects.
How I built it
Space Collab is a Flask application that uses Google’s Firebase realtime database to store user information. I also used Firebase authentication to handle passwords and user login. This was made possible by utilising Pyrebase, a Python wrapper to handle Firebase commands in Flask. The website also makes use of Google Maps Javascript API as well as geocoding API for the functionality of google maps. The frontend was designed using HTML, CSS and Bootstrap framework.
Challenges I ran into
This was my first time working with Firebase database and authentication, which meant that I spent the majority of my time learning how to use Firebase. However, I am glad to have learnt a new skill in this hackathon. I also struggled with time management, and that's something I hope to improve in future hackathons.
Accomplishments that I'm proud of
I enjoy working as part of teams, although, I took it as a challenge to complete a project by myself this time. I am proud of completing the basic functionality and the UI design of the website as this was my first time using CSS animations for the frontend.
What I learned
This hackathon gave me the experience of using a variety of web technologies and helped me gain confidence in my skills. Besides, completing a project by myself has made me more appreciative of the importance of team-work and collaborative problem-solving.
What's next for Space Collab
Since I was restricted with time, I could not include a custom chat application on the website. This is something I would like to work on in the future. I also plan on deploying the website in the upcoming days.
Built With
firebase
flask
geocoding-api
google-maps
javascript
pyrebase | Space Collab | Encouraging Collaboration in the Space Industry | ['Manya Girdhar'] | [] | ['firebase', 'flask', 'geocoding-api', 'google-maps', 'javascript', 'pyrebase'] | 40 |
10,415 | https://devpost.com/software/moonage-daydream | Inspiration
We are all space geeks and we find everything related to space travel truly fascinating.
We wanted to make something that would make others see how cool it really is.
So we made Moonage Daydream to take you on a virtual adventure/mission to the moon and back.
What it does
You have 7 missions:
Mission 1:
Learn about the Saturn V rocket, its components and its controls through AR.
Mission 2:
Liftoff and enter outer space. Who needs git to push code to GitHub when you can use our space-themed package 'Moonpie' instead? Set your destination (upstream), enter your mission statement (commit message), and launch your rocket into space (push to GitHub), all using Moonpie, with a visual of a rocket flying off from your terminal.
Mission 3:
Navigate your way through the asteroids of space in our recreation of the classic asteroid game. Can you save your ship from the hurling asteroids and reach your destination safely?
Mission 4:
Explore your life in space. Learn through AR animation how moisture and urine are converted into drinkable water, how CO2 escapes into space, and how different food is here on Earth vs. in space.
Mission 5:
Land carefully and safely on the moon using your thrusters. Be careful not to use up all your fuel in this recreation of Atari's lunar lander game made using pygame.
Mission 6:
Take out your VR glasses and explore the moon's surface. This surface is the actual landing site of Apollo 17; the Lunar Lander and the scene are both from NASA's 3D gallery.
Mission 7:
Return to earth with the knowledge and experience of an astronaut.
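Mission 2's 'Moonpie' idea, dressing up a git push as a rocket launch, can be sketched roughly as follows. The command sequence and the ASCII rocket here are illustrative guesses, not the actual package's code.

```python
# Hypothetical sketch of a Moonpie-style wrapper: build the git invocations
# for a themed "launch" and draw a tiny ASCII rocket. Not the real package.
def launch_commands(upstream, branch, mission_statement):
    """Map the space metaphor onto git: destination = upstream,
    mission statement = commit message, liftoff = push."""
    return [
        ["git", "add", "-A"],
        ["git", "commit", "-m", mission_statement],
        ["git", "push", "--set-upstream", upstream, branch],
    ]

def rocket():
    """The ASCII rocket shown while the push runs."""
    art = [
        "   /\\",
        "  /  \\",
        "  |  |",
        "  /__\\",
        "   **",
    ]
    return "\n".join(art)
```

The real tool would hand each command list to `subprocess.run` while animating the rocket between steps.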
How we built it
It was made using Unity studio, pygame and a bunch of other tools
What's it made up of
It has AR, VR, a command line package and recreation of two classical arcade games Asteroids and Lunar Lander.
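The Lunar Lander recreation mentioned above boils down to a small per-frame physics update. Here is a minimal sketch of that loop; the constants and function names are illustrative, not the game's actual code.

```python
# Illustrative lunar-lander physics step (not the game's real code):
# gravity pulls the lander down, thrust pushes it up while fuel remains.
GRAVITY = 1.62  # m/s^2, gravity at the lunar surface
THRUST = 4.0    # m/s^2 while the engine burns (illustrative value)

def lander_step(altitude, velocity, fuel, thrusting, dt=0.1):
    """One frame of the simulation. Positive velocity is upward."""
    burning = thrusting and fuel > 0
    accel = -GRAVITY + (THRUST if burning else 0.0)
    velocity += accel * dt
    altitude = max(0.0, altitude + velocity * dt)  # clamp at the surface
    if burning:
        fuel -= dt  # burning costs fuel, so the player must budget it
    return altitude, velocity, fuel
```

A landing then succeeds or fails based on the velocity when altitude reaches zero.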
Accomplishments that we're proud of
That we finished this project.
What we learned
Making apps with Vuforia
Setting up a Vuforia database and importing it
Manipulating 2D and 3D images with Vuforia
Built With
ar
figlet
pygame
python-package-index
unity
vr
vuforia
Try it out
github.com | Moonage Daydream | We will take you to the moon and back | ['Jatin Dehmiwal', 'Simran Saini', 'blackcrabb Niyati', 'saumya shakya'] | [] | ['ar', 'figlet', 'pygame', 'python-package-index', 'unity', 'vr', 'vuforia'] | 41 |
10,415 | https://devpost.com/software/star-gazer-p6it27 | Inspiration
Ever wanted Google Street View for night skies? Now you have it. Introducing StarGazer!
What it does
StarGazer is a service for astronomy enthusiasts. Users can:
Upload their pictures of night skies to the cloud.
Get image processing results, including the names of stars & planets captured in the user's image.
Explore night skies of other part of the world using intuitive globe interface.
How We built it
StarGazer has two pages: Study page and Explore page.
Study Page
In study page, users can upload their images and get image processing results. We used
radar.io
to determine users' GPS coordinates, then process their images with Astrometry API to get names of stars & planets.
Explore Page
There is a fullscreen 3D globe inside the Explore page. Users can easily identify community images as orange markers on the globe. We have implemented camera controls and rendering with
three.js
library.
Challenges We ran into
The API we used for processing the images didn't provide any documentation in JavaScript, and it took a long time to process every request, which used up a lot of our time. Also, the URL provided by Firebase for the stored images was not compatible with the Astrometry API, so we couldn't use that either.
Another challenge we dealt with is that some of us had other engagements as well as hacking, so we had to find time to do this project.
Accomplishments that We're proud of
We're proud of learning Radar.io
What We learned
Even though we didn't end up using the Firebase database, we gained experience using the Firebase REST API.
What's next for Star Gazer
As of now, StarGazer is lacking a backend to store & manage astronomical images. Also, it would be great if we could add constellation detection AI to the service for casual observers.
Built With
firebase
react
typescript
Try it out
github.com | Star Gazer | Google Street View, but for Night Skies! | ['Shikhar Sharma', 'Donghyeon Kim', 'Deep Parekh', 'Anshika Mishra'] | [] | ['firebase', 'react', 'typescript'] | 42 |
10,415 | https://devpost.com/software/talk-trek | Inspiration
I have participated in other hackathons with projects which solve an issue or address a gap. But for To the Moon and Back, I wanted to experiment with my web development skills to create a fun and interactive website. There is no universal language established in the universe for all creatures to have conversations. Imagining that the fictional characters from Star Trek came into the real picture, I developed an online translator that a human can use to talk to them and have meaningful conversations effortlessly.
What it does
It is a responsive website in which the user fills in the English phrases they want to use in the conversation, and the website handles the rest. The website manipulates the string entered in the HTML form and POSTs it to the translation API. If the request is successful, the returned JSON is manipulated to show the results on the screen for the human to read. The English phrases are translated into a language called "Klingon", which is spoken by some creatures in the Star Trek movies and shows.
How I built it
I used HTML to structure the webpage, CSS to style the page according to the theme of the website, and vanilla JavaScript to send and receive requests to the API which translates the entered English phrase.
Challenges I ran into
The main challenge I ran into was that I was not able to show the results of the translation on the webpage. I was unable to identify the JSON's properties and use them with respect to my purpose. After some research, I was finally able to show the translation on the webpage.
Accomplishments that I'm proud of
I am happy that I got the opportunity to implement AJAX, and HTTP in a real-time project in such a short amount of time. I am also proud of the fact that humans would not have problems conversing with alien creatures from now :)
What I learned
I learned to brainstorm efficiently. I also had the opportunity to apply my skills which increased my experience with the subject.
What's next for Talk Trek
I am currently working on implementing a text-to-speech option that will speak out the Klingon language to the human user.
Built With
ajax
css3
html5
http
javascript
jquery
powerpoint
Try it out
klingon.tech | Talk Trek | The most efficient way to have a conversation with aliens! | ['Ajeya Madhava Rao Vijayakumar'] | [] | ['ajax', 'css3', 'html5', 'http', 'javascript', 'jquery', 'powerpoint'] | 43 |
10,415 | https://devpost.com/software/digi-pad | Digi-Pad
Magic-Pen
Magic-Pen PCB
Digi-Pad
Digi-Pad is an application that makes digital drawing less expensive and more convenient, and increases worker productivity and time-efficiency.
Problem Statement
Our team was inspired to create this project when we realized that there were many problems with all digital drawing solutions. Currently, if you want to draw digitally, whether that be professionally or for a hobby, you would have to buy a drawing tablet or an iPad. With Digi-Pad, we solved all of those problems with currently all other digital drawing solutions.
The first common problem we solved is that drawing tablets and iPads are very expensive, costing in the hundreds of dollars. This makes them inaccessible for many artists, students, and hobbyists. However, Digi-Pad solves this problem. With Digi-Pad, all you need is your existing webcam or our $5 Magic-Pen and a piece of paper.
The second common problem we solved is that drawing tablets and iPads don’t feel like real paper. As a result, there’s a large learning curve to being good at drawing digitally versus being good at drawing on physical paper. This is such a massive problem that drawing tablet manufacturers and iPad accessory companies tout their products as having a “paper feel.” Many of them even offer “paper special editions” where the only difference is that the product feels more like paper than their other products. With Digi-Pad, you’ll have no such problem or learning curve.
The third problem we solved is that accuracy was somewhat compromised with a cheap webcam, so we developed the Magic-Pen. It costs just $5, compared to a $399 iPad. The Magic-Pen consists of an LED (that generates the signature red light), a light-detector chip, a switch mechanism, and a few other simple components.
Working
Digi-Pad allows you to draw digitally with just a $5 Magic-Pen and a piece of paper. This makes it more accessible, more convenient, and more portable than existing digital drawing solutions such as Wacom drawing tablets and iPads.
Digi-Pad uses computer vision to detect, isolate, and locate your hand. It then uses a few complex algorithms to locate the tip of the pencil you are holding, or of the Magic-Pen. Digi-Pad then uses that information to control mouse movement, clicking, and dragging, allowing you to draw digitally. As a result, Digi-Pad is compatible with all apps, something that's not true for many other current digital drawing solutions.
The LED installed at the bottom of the Magic-Pen emits a bright light in the downward direction. Since the pen is usually used on plain surfaces, the light bounces back from the surface and enters a photocell that's also mounted on the bottom, almost next to the LED. This photocell has a frontal lens that magnifies any light reaching it. As you move the Magic-Pen around, the pattern of the reflected beam changes; this is then used by the light-detector chip to figure out how and in which direction you're moving the Magic-Pen.
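As an illustrative sketch (not the project's actual algorithm), the "signature red light" can be located in a webcam frame by thresholding red-dominant pixels and taking their centroid:

```python
import numpy as np

def find_tip(frame):
    """frame: H x W x 3 uint8 RGB array. Returns (row, col) of the red
    blob's centroid, or None if no clearly red pixel is present.
    Threshold values are illustrative."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    mask = (r > 180) & (r - g > 60) & (r - b > 60)  # red clearly dominates
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.mean()), int(xs.mean())
```

Feeding that (row, col) into a mouse-control library frame after frame gives the draw-by-pen effect.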
Challenges we ran into
We had some issues locating the hand and the position of the pen/pencil tip. Often, it would be too sensitive, but eventually we managed to fix the issue. At first, we had major latency issues, but we fixed that by heavily optimizing our computer vision code.
What's next for Digi-Pad
Our team would like to further improve the accuracy of the computer vision hand and pen tip detection. Finally, we are interested in polishing the project and selling it as a subscription online. Digi-Pad has business potential in a large market since it makes digital drawing significantly cheaper, and therefore more accessible and more convenient for people around the world. Additionally, Digi-Pad will improve worker productivity and time-efficiency for professional artists and graphic designers, since Digi-Pad provides a better user experience: a drawing tool and a drawing surface that are a real pencil and real paper, instead of just "feeling like pencil and paper" like many other digital drawing solutions from many companies.
We are also working on Magic Pen and have developed its prototype. We are currently focusing on making it more compact and to make it user friendly. We are also working on incorporating bluetooth module like HC-06 to even make it wireless.
Built With
computer-vision
ctypes
hc06
microcontroller
multithreading
numpy
opencv
pcb
photocell
python
Try it out
github.com | Digi-Pad | Digi-Pad is an application that makes digital drawing less expensive, more convenient, and increases worker productivity and time-efficiency. | ['Prakhar Saxena'] | [] | ['computer-vision', 'ctypes', 'hc06', 'microcontroller', 'multithreading', 'numpy', 'opencv', 'pcb', 'photocell', 'python'] | 44 |
10,415 | https://devpost.com/software/voyage-into-space | Get Ready!
Prepare for your voyage!
Blast Off!
And set off for Mars!
Inspiration
Most of the inspiration comes from our interest in space.
The lead developer, Matt has been an avid fan of all things space, from SpaceX launches (as you will get a feeling for as you go through the web app) to Star Wars. His brother, Kevin is also a space nerd (in addition to being a nerd in other sciences).
Having this common interest really drives both of us into developing an experience that would allow our users to have the awesome feeling of preparing and experiencing an entire space voyage for yourself from the comfort of the screen.
What it does
It's a simulation of what a Mars mission may look like, from test flights to launching, yourself. It has a more cartoon-esque and fun experience that helps users have a nice relaxing journey.
How we built it
The web app is a MongoDB, Express, React, Node (MERN) stack app.
MongoDB was used to get and post leaderboard data. Mongoose was used to provide more structure to the MongoDB data.
Express was used to run a backend server connecting to the MongoDB.
React was used to provide the frontend animations, interactivity, and simulation logic. React Redux was used for global state handling, where global states such as chance for success, fuel, etc. states and state actions were handled in the Redux store. Seamless linking was implemented with React router dom.
HTML/CSS was used to style text and containers.
Challenges we ran into
Having just learned React and fullstack programming, the main developer Matt was still learning how to set up CRUD in the Express server, incorporate state handling with React Redux, and utilize React hooks to re-render pages for animations and specific state changes. Additionally, as Kevin was an amateur high school hacker, he had very little frontend experience, and he made leaps and bounds figuring out CSS button and text-styling tricks.
Accomplishments that we're proud of
Making a well-functioning full-stack React app that provides users with a visually-appealing, easy-to-use, and inspiring space simulation.
What we learned
React Redux tricks, setting up CRUD for MongoDB in an Express server, and React hooks tricks.
CSS styling tricks.
What's next for Voyage into Space
Polishing the game, adding a lot more detail to make the simulation realistic, and adding more frontend to make the experience feel more real and exciting!
Built With
css3
express.js
html5
javascript
mern
mongodb
mongoose
node.js
react
redux
Try it out
voyage-into-space.herokuapp.com
github.com | Voyage into Space | A realistic Mars space voyage simulation to raise interest and spread knowledge about preparing a rocket and traveling in space, made using the MERN stack! | ['James S. Wang', 'Matthew Liu', 'Kevin Liu'] | [] | ['css3', 'express.js', 'html5', 'javascript', 'mern', 'mongodb', 'mongoose', 'node.js', 'react', 'redux'] | 45 |
10,415 | https://devpost.com/software/spacebot-wsm9r5 | GIF
All commands
GIF
Nasa Astronomy Picture Of the Day
GIF
Trivia
GIF
Liftoff - play a liftoff sound in voice
GIF
SpaceX launch information
GIF
Space Meme
GIF
Random Hubble Picture
GIF
Curiosity Picture
GIF
Weather On Mars
Inspiration
The space theme of "To the Moon and Hack" inspired us to make a space themed Discord bot.
What it does
It is a Discord bot which gives lots of info about space.
How we built it
We built it using discord.js and a variety of NASA APIs, unofficial APIs, and hand-entered data.
Challenges we ran into
The trivia command was sending the bot into an infinite loop at first, and later it wouldn't say anything if the user gave an incorrect answer. It also took a while to figure out what the Mars weather information actually meant.
Accomplishments that we're proud of
We are proud that we finished a project in such a short amount of time
What we learned
We learned javascript, node, discord.js, and hosting a server.
What's next for Spacebot
We do not have anything planned
If you want to invite it to your server, click
here
Built With
discord.js
Try it out
discord.com | Spacebot | A space Discord bot | ['yourgreatgramps', 'John Atkinson', 'walter s'] | [] | ['discord.js'] | 46 |
10,415 | https://devpost.com/software/mapgun | The wiring for everything
The front of the hardware portion
Inspiration
I do a lot of urban exploring, and it's easy to walk face first into a wall even with a flashlight. I thought having a general idea of what the layout of a place is not only increases safety, but also would allow me to avoid backtracking and have more fun.
What it does
Using ultrasonic sensors, the distance of each solid object in front of you is recorded. This information is then used to draw a map of an area. By stitching a bunch of these maps together, it's possible to chart entire floors of buildings or caves automatically.
How I built it
I built it using four ultrasonic sensors, an Arduino Mega 2560, and Python. The Arduino passes the raw data from the sensors to the Python code, which then turns this data into a map and charts it on a canvas.
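The mapping geometry can be sketched like this. The four-sensor layout at 90-degree spacing is an assumption about the build, and the function names are illustrative.

```python
import math

def readings_to_points(distances_cm, heading_deg=0.0, origin=(0.0, 0.0)):
    """Convert four ultrasonic distances (assumed front/left/back/right)
    into map points, given the device's position and heading."""
    sensor_angles = [0.0, 90.0, 180.0, 270.0]  # degrees, relative to heading
    ox, oy = origin
    points = []
    for dist, angle in zip(distances_cm, sensor_angles):
        rad = math.radians(heading_deg + angle)
        points.append((ox + dist * math.cos(rad), oy + dist * math.sin(rad)))
    return points
```

Stitching scans together is then just calling this with updated `origin`/`heading_deg` values and accumulating the points onto one canvas.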
Challenges I ran into
I don't own very much hardware, so I was really panicking when one of my sensors broke. A tiny bit of soldering and compensatory code later and everything had worked out though. Another problem I ran into was the timing of all the ultrasonic sensors. Because they are so close, without a delay between pings they become essentially useless.
Accomplishments that I'm proud of
I ended up writing a really good ultrasonic sensor utility in order to get everything I needed out of the hardware on this project. Hardware compensation was really the name of the game here, and I think I managed extremely well considering the limited resources.
What I learned
A lot about the pulse widths and I/O for ultrasonic sensors, and the python graphics library
What's next for MapGun
As I acquire more hardware, the plan is to slowly improve on this design until it is something I can properly show off next time I go exploring.
Built With
arduino
python | MapGun | Map rooms out using ultrasonic sensors, because who said bats get to have all the fun. | ['Hendrick Ducasse'] | [] | ['arduino', 'python'] | 47 |
10,415 | https://devpost.com/software/jedi-force-arduino-gesture-control-relay-module | final build
Inspiration
As a fan of Star Wars, I loved how the Jedi used the Force to levitate objects, perform mind tricks, etc., and I would love to switch on the lamp at my home via gesture, almost like in Star Wars.
What it does
Turn on or off your switch via gesture using PAJ7620 module
How I built it
I found the PAJ7620 in an online store; based on the datasheet, this module can recognize gestures such as swiping left or right, up and down, and forward and backward. I combined it with the Maker UNO, an Arduino UNO-compatible board made in Malaysia.
Challenges I ran into
Library compatibility, and sometimes the gestures are not registered.
Accomplishments that I'm proud of
I was able to finish the prototype, turning the LED on and off via gesture control.
What I learned
There is some cool hardware to be found in online stores that can be used to make cool stuff in your Arduino projects.
What's next for Jedi Force Arduino - Gesture Control Relay Module
Improve how input gestures are registered by the Arduino board.
Built With
arduino
led
paj7620
relay
Try it out
github.com | Jedi Force Arduino - Gesture Control Relay Module | Control your switch with gesture sensor PAJ7620 | ['Amir Hamzah'] | [] | ['arduino', 'led', 'paj7620', 'relay'] | 48 |
10,415 | https://devpost.com/software/spaceshot-your-cosmic-reminder-0n6kb1 | Landing Page
Description
Calendar
Surf the Months
Add to Your Google Calendar
Your Reminder
Inspiration
We found that people often tend to miss out on cosmic events just because they forget. To overcome that, we developed an easy-to-use cosmic calendar reminder website.
What it does
You can browse through all the upcoming cosmic events. Whether it be an eclipse, shooting stars or anything else, we've got you covered. Just click ADD TO CALENDAR and it will be added to your calendar!
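One common way to implement such an "ADD TO CALENDAR" button is Google Calendar's public event-template URL. The sketch below uses the widely documented render-URL parameters; the site's own implementation may build the link differently.

```python
from urllib.parse import urlencode

def gcal_event_link(title, start_utc, end_utc, details=""):
    """Build a Google Calendar 'render' link that pre-fills an event.
    Times are compact UTC strings, e.g. '20250812T210000Z'."""
    params = {
        "action": "TEMPLATE",          # open the event-creation template
        "text": title,                 # event title
        "dates": f"{start_utc}/{end_utc}",
        "details": details,
    }
    return "https://calendar.google.com/calendar/render?" + urlencode(params)
```

Clicking such a link opens Google Calendar with the event fields already filled in, so the user only has to confirm.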
How we built it
We built it using the HTML, CSS, JS and Bootstrap Framework. We worked with the APIs and kept our storage on firebase. We also hosted the Website on
FIREBASE
as well!
Challenges we ran into
There were a few challenges as well, like:
The Calendar: Installing the calendar and making it work with the APIs was one of the challenging tasks we faced.
SVG Animation: We also felt the SVG animation was a difficult task to do.
Accomplishments that we're proud of
The overall website: using the power of Firebase we hosted it successfully, it has a nice UI, and the reminders are all working well.
What we learned
Working with the APIs, Firebase services and SVG animation. Apart from these we also learnt time management and project architecture.
What's next for SpaceShot - Your Cosmic Reminder
We will be making it available for other calendars as well, and will definitely add some other features!
Built With
api
bootstrap
cloud
css3
firebase
html5
javascript
Try it out
spaceshot-4c48f.firebaseapp.com
github.com | SpaceShot | Your Cosmic Reminder | ['Arpan Patel', 'Devarsh Panchal'] | [] | ['api', 'bootstrap', 'cloud', 'css3', 'firebase', 'html5', 'javascript'] | 49 |
10,415 | https://devpost.com/software/restar-t-a5b8gi | Splash Screen
Onboarding 1 - The Basics
Onboarding 2 - Support Systems
Onboarding 3 - Your Reasons
Dashboard
Reflection Page
Making a Journal Submission
Shake Support Page
Notifications
Resources and Links Page
Community Feed
Sharing a story
Resources Page
Resources Page
Resources Page
💡 Inspiration
What sparked the idea for restar-t was a conversation about rehabilitation and the irony of it: the majority of those needing rehab often lack the income to attend. Upon doing more research, we came across this shocking socioeconomic statistic: approximately 40% of homeless individuals and up to 70% of homeless youth struggle with alcoholism. The costly price of addiction means that these individuals often live in poverty, unable to afford rehabilitation or the essentials necessary to gain employment. Given that over 95% of homeless individuals have mobile devices, we wanted to create a more affordable and accessible solution that could help uplift these individuals to lead healthy and stable lifestyles.
🤔 What it does
Restar-t is an Android app that makes the path to alcohol recovery more accessible and affordable, by providing individuals manageable alcohol consumption goals to slowly taper off their dependency.
When a user creates a restar-t account, they fill in some basic information about their current dependency on alcohol so we can appropriately track their progress over time. After that, they have the option to add a trusted individual's phone number that can monitor their progress, receiving updates on any relapses. They're then required to list their 3 greatest reasons for wanting to change. Based on their initial alcohol intake, we design a custom tapering plan with achievable daily targets, to help them reach sobriety without nasty withdrawal symptoms. Even after they reach sobriety, they can continue to track their progress!
Restar-t also encourages the user to reflect on their day and keep track of how they are feeling emotionally as well. Alongside this, the dashboard displays relevant statistics since the start of the user's recovery, such as their current streak, in order to continuously motivate them.
In order to help keep the user on track, every time the user is tempted to relapse, they can shake the phone. This brings them to a page which displays a 5-minute countdown timer to resist their temptation. This page also gives the user the option to call their trusted person or a hotline, write an entry in their journal, and reflect again on their earlier answers about why they sought this change.
Another key feature of restar-t is the reflection center, where the user can view and create their daily journal entries. Each journal entry prompts the user for their mood, whether they met their alcohol target, and their thoughts on why they did or didn't. If the user hasn't submitted an entry by around 10:30PM, then the app will send a notification reminding them to!
Additionally, there's a feed for users to anonymously share their stories of overcoming addiction, providing a motivational place for users to know they're not alone in their journey.
Restar-t also features a resources page that can launch Google Maps directions to the nearest Alcoholics Anonymous center, along with basic information on alcohol use, services and resources the user can use, as well as other methods they can use to aid them in their recovery.
Regardless, restar-t is not by any means a replacement for professional services like conventional rehabilitation, but rather an alternative for those who have nowhere else to turn to. We researched recommended tapering treatments for addiction and substance abuse from trusted sources like verywellmind.com and drugrehab.com, and explored other affordable techniques that could streamline the process of uplifting those in need.
🧰 How we built it
With the serious and consequential nature of our topic, we first did a substantial amount of research on alcohol measurements, proven tapering strategies, recovery techniques, etc., to ensure we were providing an accurate and helpful resource.
Together, we also researched the most accessible solutions to alcohol dependency. This is what led us to decide on a mobile app over a website, and it is also what informed our tapering algorithm and the creation of our reflection page.
Using those ideas as a focal point, we worked together to build different prototypes in Figma, until we discovered a suitable style to best convey an intuitive, motivating and comfortable design that reflects both the hopeful and mature nature of the topic.
As a team, we all tried to push ourselves to dabble in different areas. We explored areas we weren't so familiar with, whether that was back-end, front-end, or using new APIs, and we taught one another to help us learn and grow along the way.
😅 Challenges we ran into
One of our biggest initial roadblocks was actually our design and concept. We debated for hours about how we could attempt to motivate users while also respecting the gravity of the situation. We started from an intense gamification concept that bordered on a juvenile design, but arrived at the clean but hopeful design of our final product through the power of experimentation and brainstorming as a team.
Implementing/testing shake function:
To launch the app when the phone is shook, we used the built-in accelerometer (and some math!) to determine the triggering motion. However, one of our team members was using an emulator, which made for a very tedious testing process. We decided to add an alternate pathway to access the same screen, which allowed for more user freedom - and it made it a lot easier on our end too!
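The "some math" here is essentially comparing the accelerometer's magnitude against gravity. A sketch of that check (the app itself is Java/Android; the threshold value below is illustrative):

```python
import math

GRAVITY = 9.81  # m/s^2

def is_shake(ax, ay, az, threshold_g=2.5):
    """True when the acceleration magnitude, expressed in g's, exceeds the
    threshold. At rest the magnitude is ~1 g, so a still phone never
    triggers; a sharp shake spikes well above the threshold."""
    g_force = math.sqrt(ax * ax + ay * ay + az * az) / GRAVITY
    return g_force > threshold_g
```

Raising or lowering `threshold_g` is exactly the sensitivity tuning described later in this writeup.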
Algorithms to track progress:
Designing an appropriate way to quantify and taper alcohol consumption was unfamiliar territory for us, but we were determined to ensure the accuracy of our app. We researched topics including the ABV of alcoholic beverages, the quantity of alcohol in a standard drink, the best way to create a tapering schedule, etc. - all in order to implement the most appropriate algorithms and calculations.
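To illustrate the kind of arithmetic involved, here is a sketch under two stated assumptions: one US standard drink is roughly 14 g (about 17.7 ml) of pure alcohol, and the schedule shape is a simple linear taper. This is illustrative code, not the app's algorithm and not medical guidance.

```python
PURE_ALCOHOL_ML_PER_STD_DRINK = 17.7  # ~14 g of ethanol (US standard drink)

def standard_drinks(volume_ml, abv_percent):
    """How many standard drinks a beverage contains, from volume and ABV."""
    return volume_ml * (abv_percent / 100.0) / PURE_ALCOHOL_ML_PER_STD_DRINK

def taper_schedule(current_daily_drinks, days):
    """Linear daily targets stepping from the current intake down to zero."""
    step = current_daily_drinks / days
    return [round(max(0.0, current_daily_drinks - step * (day + 1)), 2)
            for day in range(days)]
```

For example, a 355 ml beer at 5% ABV works out to almost exactly one standard drink, which is what makes per-drink daily targets meaningful.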
Countdown time:
The timer on the relapse page was a bit difficult, as we had to wrap our head around how the countdown timer could work. After following a few different tutorials and having different members attempt to implement this feature, we still couldn’t get it to work. It wasn’t until we all came together with our knowledge learned that we implemented this feature. Once you open the screen, it begins counting down, and after 5 minutes of allowing the user to self-reflect and take their mind away from drinking, it redirects the user to the dashboard.
💜 Accomplishments that we’re proud of
Shake Detection:
Using a sensor event listener, we are able to detect and interpret motion from the phone’s accelerometer to launch a screen in the app. Although we didn’t have any prior experience with motion tracking, the feature worked like we hoped it would, thus offering a reflexive response to a user’s urge to drink. It was fun adjusting the sensitivity of this sensor too!
Making use of SMS & calling: We used the phone's built-in texting and calling functionality to facilitate contact with a trusted individual. We understand that it can be hard to reach out for help, so we automated the process through a non-confrontational support system. This was the first time any of us played with features outside of the app, so it was amazing how we were able to implement this.
The UI/UX design: We're super proud of our UI/UX design because of how difficult of a journey it was. We spent a lot of time prototyping, testing color schemes, and trying design styles that would best represent the goals of our application and resonate with the target audience. From gradients to isometric designs, the process of trial and error was an experience that taught us that good results can come from perseverance.
Progress bar:
At first glance, having the progress bar move simultaneously with the user’s progress looked incredibly difficult. However, by carefully looking through documentation and tutorials we discovered this feature was already implemented within android studio. Many things may seem harder than they appear so we’re glad we decided to implement it even after believing it was extremely difficult.
🎓 What we learned
Alongside improving and streamlining our skills in mobile development, through this experience we learnt a lot in areas we didn't think we would. When researching, we learnt about mental health and addiction, as well as which members of a community would be most susceptible: vulnerable people suffering from homelessness and/or poverty.
In terms of technical skills, we learnt a lot about design - how different design styles can shape the audience, motivations and atmosphere of any applications, whether it be the colours, fonts or even the colour blending.
We also experimented a lot more with how to make use of different sensors and permissions in devices, from sending SMS messages, making calls, to using the built-in motion sensors to make for more flexible and unique user interactions.
Finally, we learned about how
difficult
recovery from an addiction can be and the complexities of the process, which is why we wanted to make an app to
aid people through it
.
⏩ What's next for restar-t
We would like to make restar-t able to support individuals’ recovery for
other drugs
and substances that have the potential to be abused, with
even more extensive research
to ensure
safety
.
As well, we’d like to implement more customization options where users can select the timeline over which they’d like to taper off of their addiction, with certain minimum durations fixed by the app according to medical recommendations.
We’d also create a
donation page
that compiles Non-profit Organizations or Charities that promote sobriety and aid with overcoming addiction for those looking to support the cause.
Another feature we’d like to implement is an alternate portal for the user’s trusted contact (family member, friend, sponsor or therapist) to sign into the app and view the user’s overall progress or journal entries if given permission.
Another idea we'd love to implement is adding
helpful articles and information
to the resources section. The information can be
geared towards the user’s geographic location
using web scrapers to find information that would cater the app towards the user.
Finally, allowing users to
compile lists of their most trusted resources
in the forums, enabling users to
support one another
through their
collective journey towards a better life
.
Built With
android-studio
figma
firebase
google-maps
java
xml
Try it out
github.com | restar-t | You’re living in poverty, struggling with mental health issues, and alcohol has taken its toll on you. You’re unable to afford rehabilitation, and all you want to do is restar-t. | ['Bonnie Chin', 'Kailey Chen', 'Grace Gao'] | [] | ['android-studio', 'figma', 'firebase', 'google-maps', 'java', 'xml'] | 50 |
10,415 | https://devpost.com/software/starshare | Here is a photo of our database! Note the different fields of data.
Here is an early build of our program! Check our Github repo for the newest version!!
Here is a screenshot of our Java code. We used Android Studio to make our app.
StarAdvisor
Find and share the best stargazing spots near you!
Inspiration
It was a beautiful summer night outside, here in north New Jersey. Not too hot, not too cloudy and above all else - perfect for stargazing! Or so you might think. You see, even on cloudless summer nights like those, there's no guarantee that one can see stars in the sky due to a myriad of factors ranging from light pollution to tree coverage. But every experienced stargazer has 'that one spot.' The spot the veteran stargazer might have randomly stumbled upon with an immaculate view of the surrounding universe. The spot so good that they want to share it with the world. So our team thought, what if there were some way to share the best places to look at the night sky near you via some sort of app? That's where StarAdvisor comes into play!
What it does
StarAdvisor is an Android app that takes in the user's location and converts it to coordinates using Radar.io, searches a database for user-submitted locations within a twenty-mile radius, and displays those locations on a map embedded in the app. This embedded map then lets users get directions to a location externally in Google Maps. StarAdvisor also allows users to submit locations where they gaze at the stars themselves to our database for all users to see. We take in weather data that
How we built it
StarAdvisor was coded from the ground up in Android Studio using Java and XML. The program is compatible with newer Android devices and was tested on a simulated Nexus 6 API 28. We used the Radar.io API's geocoding capabilities to convert the address into a location and compute the distance between the user's location and locations in the database. StarAdvisor also uses two services provided by the Google Cloud Platform: Google Firebase as a database and the Google Maps API to display a map of the locations in the database.
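The twenty-mile-radius filter described above can be sketched with a great-circle distance check. This is a hedged Python illustration, not the app's actual Java code (the project offloads the distance computation to Radar.io); the `haversine_miles` and `nearby` names and the spot-dict shape are assumptions for the example.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two coordinates, in miles (haversine formula).
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby(user, spots, radius=20.0):
    # Keep only the user-submitted spots within `radius` miles of the user.
    return [s for s in spots
            if haversine_miles(user[0], user[1], s["lat"], s["lon"]) <= radius]
```

For example, Newark is within twenty miles of lower Manhattan and would be kept, while a Los Angeles spot would be filtered out.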
Challenges we ran into
The hardest parts of this project were overcoming our own unfamiliarity with the APIs and services we used. Only two of us had used Android Studio before (and none of us had substantial experience in XML), only one of us had used Firebase and the Google Maps API was new to all of us. Considering there were only three of us, from the very get-go, you could say we were shooting for the stars. We also had initially intended for users to be able to see live weather data as well as information on light pollution in the area as well as the user-generated locations however due to time constraints we were unable to add these features. And of course: it wouldn't be a true Hackathon if everything didn't suddenly stop working at 4 AM for no apparent reason. It builds character ;D
Accomplishments that we're proud of
To say the least, this Hackathon project marked a lot of firsts for all of us. First time using XML and Android Studio for some, and first time using Firebase and the Google Maps API for others. This project was certainly a heavy undertaking but it showed us that when you work hard and think outside the box, the sky is, in fact, NOT the limit! Even though there is still much that can be done with this project, we are thrilled to present a functional proof of concept to the judges of "To the Moon and Hack," and we look forward to collaborating again on more projects in the future.
What's next for StarAdvisor
There were several additional features that we wanted to implement but couldn't due to time constraints. Above all else we want to implement a user rating system to weed out which stargazing spots are worth the trip and which aren't. We also want to have more information available to our users regarding the locations such as tree density, public access or not, light pollution levels and even which stars are in the sky on that night. Eventually, we'd also like to implement some sort of interactivity between our users to create a StarAdvisor community! So stay tuned because there is no telling what the future has in store for StarAdvisor!
Built With
android
android-studio
firebase
google-cloud
google-maps
java
python
radar.io
xml
Try it out
github.com | StarAdvisor | Find and share the best stargazing spots near you! | ['Josh Gole', 'Alex San', 'Shivank Agrawal'] | [] | ['android', 'android-studio', 'firebase', 'google-cloud', 'google-maps', 'java', 'python', 'radar.io', 'xml'] | 51 |
10,415 | https://devpost.com/software/apni-shiksha | GIF
GIF
GIF
GIF
GIF
GIF
GIF
Inspiration
Due to COVID-19, the whole world has realized the power of online learning and the impact it can have on students. Be it K-12 or higher education, virtual learning has become very important now. Teachers need to adapt and get used to a new way of teaching their students. Apni Shiksha is a course management system that supports online learning and teaching. It allows professors to post grades and assignments online.
What it does
A teacher, upon registering, is able to add registered students to the class and can then update the details of the students enrolled. During the academic year a teacher can upload assignments for the students to complete. Apni Shiksha also has a built-in attendance management system: teachers can keep track of who attended the online classes on a daily basis, and can update the marks for the entire class after quizzes from the online portal itself. The students can check their marks and attendance after logging in to the student portal, upload their submissions, and give feedback to their teachers at the end of the academic year. There is also a quizzing system wherein teachers can create their own quizzes by adding multiple-choice questions with correct answers; the students take the quiz at the designated time and can see their own marks once it is over, and after all the students have answered, the teachers get a report of how well the class performed.
How we built it
Apni Shiksha is a MERN stack application. The frontend is powered by React, with components built using Material UI. We use Redux for state management (storing the marks) and Axios for making requests to the backend. Node.js is used on the backend with Express as middleware. The NoSQL database used is MongoDB, in which the models are defined. The code is properly structured using the MVC pattern, and we use Multer to manage file uploads.
Challenges we ran into
Setting up Material UI with React
Learning Redux for state management
Deploying the full stack application to Heroku
Setting up MongoDB Atlas
Making the backend according to the MVC pattern
Defining relationships between different entities in the backend
Setting up theming and components on the frontend with Material UI, and updating state in the Redux store with each event
Accomplishments that we're proud of
We are proud that we have built a decent enough web app, for effective management of student and teacher data.
What we learned
We learned how to build a full stack application with react, redux and nodejs and learned how to deploy the web app, We also learned how to add a custom domain name to our heroku app.
What's next for Apni Shiksha
We plan to improve the quizzing system wherein teachers can design their own quizzes for students, and use AI technologies to make it more effective. We also plan to improve the UI and add other useful features.
Built With
axios
express.js
mongodb
multer
node.js
nodemailer
react
redux
Try it out
apni-shiksha.space
apni-shiksha.herokuapp.com | Apni Shiksha | An effective way for student teacher interaction | ['Akshit Suri', 'Sarthak Arora', 'Varun Ramnani'] | [] | ['axios', 'express.js', 'mongodb', 'multer', 'node.js', 'nodemailer', 'react', 'redux'] | 52 |
10,415 | https://devpost.com/software/journey-3d-a-space-game | What it does
In this game, you use your mouse to move around and you left-click to move forward. Collect coins to buy upgrades and collect fuel before you run out of it!
How I built it
We used the Scratch engine, which made it easy for us to create and import art. To give the game a 3D look, we increase the size of each clone (star/coin/fuel can) at a certain rate each frame, so it appears you are gliding through space. When a clone reaches a certain size it disappears, but if its centre is within a set region of the game screen at that moment, it registers as if you picked up a coin or a fuel can. When you move your mouse around, the clones move in the opposite direction, making it appear as if you are travelling through space, and if a clone hits the edge of the screen, another one appears in a random location on the other side of the map.
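The clone-scaling trick described above can be sketched outside of Scratch. This is a hedged Python illustration of the idea only (the project itself is built from Scratch blocks); the constants, the `step_clone` name, and the clone-dict shape are all invented for the example.

```python
# Pseudo-3D sketch: each clone's size grows every frame to fake approach.
# A clone that grows past the threshold either counts as "picked up"
# (if its centre is inside the central pickup zone) or simply despawns.
GROWTH_RATE = 1.05   # size multiplier per frame (illustrative value)
MAX_SIZE = 100.0     # size at which the clone disappears
PICKUP_ZONE = 60.0   # half-width of the central pickup region

def step_clone(clone):
    """Advance one clone by a frame; return 'picked_up', 'despawned' or 'alive'."""
    clone["size"] *= GROWTH_RATE
    if clone["size"] >= MAX_SIZE:
        in_zone = abs(clone["x"]) <= PICKUP_ZONE and abs(clone["y"]) <= PICKUP_ZONE
        return "picked_up" if in_zone else "despawned"
    return "alive"
```

Running this per frame for every on-screen clone reproduces the effect: centred clones register as collected coins or fuel, off-centre ones glide past.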
Challenges I ran into
One of the challenges we ran into was trying to make the game look 3D. Scratch can only make 2D games, so we had to make it seem like you were playing a 3D one.
Accomplishments that I'm proud of
One thing we're proud of in this project is the artwork.
What I learned
We learned how to convert Scratch games to Javascript. We also learned how to make better art and how to create clones and program them to be different from each other
What's next for Journey 3D (A Space Game)
Maybe we could add an extra difficulty, making coins worth twice their value but adding enemies you have to dodge.
Built With
scratch
Try it out
ankit-04.github.io
scratch.mit.edu | Journey (A Space Game) | Collect coins in space to get back to earth! | ['Ankit Gupta', 'Sidharth Saluja'] | [] | ['scratch'] | 53 |
10,415 | https://devpost.com/software/scaleoftheuniverse | A side-view of Jupiter with other models in front of the same door
A view of some models in front of a door
A view of mars in front of other models
An up close view of the models in front of the Sun
The models (with Mercury info panel) in front of the Sun
Inspiration
I thought "To the Moon and Hacks" was a really good name, and so I wanted to think of a space related idea as well. I had previously seen videos like
this one
but I thought an interactive 3d version would be really cool.
Controls I used in the Video:
Pinch/Pull to zoom in/out
Tap on left side to go to next planet left
Tap on right side to go to next planet right
What it does
ScaleOfTheUniverse models celestial bodies with
to-scale
sizes (although distances are minimized for viewing feasibility). Tap on the left and right sides of the screen to switch between bodies, and pinch to zoom. The app also shows basic information about each body on the left-hand side (radius, mass, distance, etc.).
Please note that the order of the objects is not accurate; they are displayed from largest (left) to smallest (right) to assist in understanding their scale.
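The "to-scale sizes" idea above boils down to one shared scale factor, so the ratios between bodies stay exact even though absolute sizes shrink to fit on screen. A hedged Python sketch of that calculation (the app itself is Swift/SceneKit; the radii here are illustrative round figures in km and the function name is invented):

```python
# One uniform scale factor, taken from the largest body, keeps every
# size ratio exact while fitting everything into the viewable scene.
REAL_RADII_KM = {"Sun": 696_340, "Jupiter": 69_911, "Earth": 6_371, "Moon": 1_737}

def display_radii(real_radii, max_display=1.0):
    """Map real radii onto display units, largest body -> max_display."""
    scale = max_display / max(real_radii.values())
    return {name: r * scale for name, r in real_radii.items()}
```

With these figures the Sun renders at the full display size while Earth comes out roughly 109 times smaller, exactly as in reality.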
How I built it
I used Xcode and Swift to develop my app, and used the SceneKit AR capabilities to render the models.
Challenges I ran into
Xcode is a big program. It took me 4 hours just to download the thing, because I didn't have space on my drive to just get it from the App Store. I had to get the .xip file from the Apple website, and then I still didn't have enough space to open the xip file. I eventually had to expand the file on an external drive and then transfer it back on to my computer. What a hassle!!
Oh boy. I'm not gonna say I don't like Swift, but I don't like Swift. I have barely any experience using the language (same goes for Xcode), and my experience has not been a good one. I ran into countless issues with the different types, mutability, scope, and structure that confounded me a ridiculous amount. I got very frustrated when my arrays were in different orders each run of the app. Somehow I managed to work through all of this.
Accomplishments that I'm proud of
In every hackathon I've done, I get to that one point of the project where I feel like nothing is working and it's not worth it to keep trying. For some reason, that point in this hackathon hit me harder than usual, and I almost quit. However, I gathered my mental fortitude and kept working through my problems, and I'm actually really proud of how it turned out.
What I learned
Basically all of Swift. (No, just kidding). I learned a lot about SceneKit and how it renders nodes, and how to move things around in 3d space. I ran into so many issues that I might say I learned the most at this hackathon compared to others, seeing as prior to this weekend, I knew very little (if anything) about Swift and Xcode
What's next for ScaleOfTheUniverse
I had big plans for this project in my head, but I feel like I never really got there. If anything, I plan to add more bodies, and to add some really large (and small) ones so the user can experience the overwhelming size of the universe. I would also like to add more information about each body, but this ends up being very tedious, though it shouldn't be too hard (and I'm lazy).
Built With
scenekit
swift
xcode
Try it out
github.com | ScaleOfTheUniverse | An educational tool to help teach about how big things in the universe really are! | ['Ezra Bernstein'] | [] | ['scenekit', 'swift', 'xcode'] | 54 |
10,415 | https://devpost.com/software/necesse-spatium-surculus | How I built it
I used p5.js, a JavaScript library for creative coding utilizing HTML5 Canvas, to create an overly complicated space shooter game.
Challenges I ran into
I had about 8 hours total to work on this project.
Accomplishments that I'm proud of
With the limited time I had this weekend I was still able to build a fun little web game in mostly pure JavaScript.
What I learned
I learned that you could import pre-trained tensor flow models in JavaScript. This was my first time programming in Ubuntu, so I was able to get comfortable with it during this hackathon. To The Moon And Hack is also both my first online hackathon and my first solo hackathon. I learned how to manage my time to the best of my abilities and how to organize my thoughts.
What's next for Necesse Spatium Surculus
To be honest, probably nothing. If I had more time I would have liked to use some sort of location service API to incorporate the player's location into the game. Perhaps mapping the wind speed to the speed of the UFOs or something ridiculous like that. I was also originally planning to make the background of the home screen a random NASA image of the day, and I was going to make an obnoxious 2 factor authentication system to even play the game.
Built With
bash
domain.com
github
javascript
p5.js
redbull
tensorflow
Try it out
whyisthisgameevensetin.space
github.com | Necesse Spatium Surculus | An unnecessarily complicated space shooter game | ['Jacob Zietek'] | [] | ['bash', 'domain.com', 'github', 'javascript', 'p5.js', 'redbull', 'tensorflow'] | 55 |
10,415 | https://devpost.com/software/expl0re-space | The Sun
Space
Roadster
Homepage
Menu
Inspiration
Just trynna teach our favourite part of physics and the key to going to the moon, Orbital Mechanics!
What it does
Lets you fly around in a rocket ship (bonus points if you can move from orbiting one planet to another!)
You can see orbital data of various objects in our solar system.
How I built it
We used Unity to build our gravity physics scripts, the game, and the simulation, then exported it to WebGL.
We also used React to make the web app which allows you to easily navigate our fresh orbital data.
Challenges I ran into
Unfortunately, our project wasn’t exactly smooth sailing. We originally wanted to include multiple levels and a win state in our game. However, most of us are very inexperienced in Unity and this proved to be quite time consuming. After this realization, we decided to expand and include our Unity simulation as part of a larger website to teach you about orbital mechanics. Furthermore, one of our APIs uses HTTP and not HTTPS; since our website is served over HTTPS, any calls to an HTTP URL are blocked.
Accomplishments that I'm proud of
Since the Unity physics engine didn't have built-in support for gravitational pull toward an arbitrary point, we had to take advice from Newton and calculate it ourselves! We made an engine which lets planets orbit others periodically depending on their mass, based on the formula F = GMm/r^2.
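The custom gravity described above can be sketched with a simple integration step. This is a hedged Python illustration of the F = GMm/r^2 approach, not the project's actual C# Unity script; the function name and body-dict shape are invented, and the sketch is 2D with a fixed attractor.

```python
import math

G = 6.674e-11  # gravitational constant (SI units)

def gravity_step(body, attractor, dt):
    """One semi-implicit Euler step pulling `body` toward `attractor`.

    Acceleration comes from F = G*M*m / r^2 (the body's own mass m
    cancels, leaving a = G*M / r^2 directed at the attractor)."""
    dx = attractor["x"] - body["x"]
    dy = attractor["y"] - body["y"]
    r = math.hypot(dx, dy)
    a = G * attractor["mass"] / r**2   # acceleration magnitude on `body`
    body["vx"] += a * dx / r * dt      # update velocity first (semi-implicit),
    body["vy"] += a * dy / r * dt      # which keeps orbits from drifting apart
    body["x"] += body["vx"] * dt
    body["y"] += body["vy"] * dt
```

Seeding the body with a tangential speed of sqrt(G*M/r) yields a near-circular orbit, which is how periodic rotation around a more massive planet falls out of the formula.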
What I learned
How to apply mathematics and physics to software
How game development differs from web development and its challenges
What's next for Expl0re.Space
Add more solar systems to the Unity Game, and maybe make the simulation more realistic by trying to use actual numbers instead of the scaled numbers we made
Built With
axios
blender
bootstrap
facebook
firebase
github
google-cloud
material
nasa-asteroids-neows-api
nasa-mars-rover-photos-api
oauth
open-notify-iss-api
photoshop
react
spacex-api
twitter
typescript
unity
yahoo
Try it out
expl0re.space
github.com
github.com | Expl0re.Space | Help you learn about Orbital Mechanics! | ['Devam Sisodraker', 'Aiden Kerr', 'Vishal Desh', 'Pedro Machado'] | [] | ['axios', 'blender', 'bootstrap', 'facebook', 'firebase', 'github', 'google-cloud', 'material', 'nasa-asteroids-neows-api', 'nasa-mars-rover-photos-api', 'oauth', 'open-notify-iss-api', 'photoshop', 'react', 'spacex-api', 'twitter', 'typescript', 'unity', 'yahoo'] | 56 |
10,415 | https://devpost.com/software/quaranteen-feelings | Welcome to Quaranteen Feelings
Example page: what to do when you're bored
Inspiration
Being spread around the world in the middle of a pandemic, we had to face feelings of anxiety, loneliness and boredom. We decided to create a website that would provide ideas to teens on how to overcome these issues.
How we built it
We used HTML and CSS to create our website, and added some styling with Bootstrap. The images were designed with Procreate.
What we achieved
We learned how to implement and host our own website.
What's next for Quaranteen Feelings
We would like to add more feelings that teens experience during the quarantine, and keep improving the website design by adding dynamic components.
Built With
bootstrap
css
html
Try it out
quaranteenfeelings.surge.sh | Quaranteen Feelings | Are you a TEEN struggling through quaranTINE because of Covid-NineTEEN? Then this page is for you! Find tips and tricks on how to cope with all kinds of overwhelming feelings. | ['Ria Stevens', 'Agnes Totschnig', 'Catherine Xu', 'Saumyaa Verma'] | [] | ['bootstrap', 'css', 'html'] | 57 |
10,415 | https://devpost.com/software/ar-in-automobile | Inspiration
When we bought a car, I struggled to learn more about it, and the only ways were the manual book we received along with the car or online resources, with which I faced a lot of difficulties. That's why I decided to build an AR-based manual for the car.
What it does
The Solution can be used anywhere where there is a use of a manual.
All the applications which need a manual for simple repair can be replaced by this kind of app.
How I built it
Took multiple images of the car, uploaded them to the Vuforia Engine, and selected the ones with the best tracking performance
Built the whole project in Unity using echoAR and Vuforia
Challenges I ran into
As we don’t have a mechanical background, recognizing the parts itself was a tedious task for us.
Detection of the parts by the app was also a big challenge.
What I learned
Combining echoAR and Vuforia
Learned some basics of different parts of Car
What's next for AR in Automobile
We will try to cover as many parts as we can and provide a basic servicing tip for each.
Solution Description
We are trying to develop an AR-based application which can tell users which part they are looking at and guide them through its basic servicing
It gives visual cues, so even illiterate people can use it
In short, it will be an Augmented Reality-based manual for the car
Uniqueness of the idea
No costly hardware required.
Easy to use and highly interactive.
Navigation and searching of parts is easy
Even an illiterate person can use the App.
Built With
echoar
sketchfab
unity
vuforia
Try it out
github.com
drive.google.com | AR in Automobile | Replace traditional way of referring manual book with AR based app that we developed | ['Mahesh Gavhane', 'Sanket Patil', 'Prasad Wakchoure'] | [] | ['echoar', 'sketchfab', 'unity', 'vuforia'] | 58 |
10,415 | https://devpost.com/software/hackside-of-the-moon | Inspiration
So I have been following NASA's launch of Perseverance and wanted to base my project over something like that.
I also have been wanting to learn DENO on the side and decided to follow tutorials in order to build my own API.
What it does
The API that was built in Deno makes a request to NASA's Insight Rover api and grabs the weather information from the rover on mars. My API cleans up some of the data and sends it over to my rails app which will display the data on a home page and also allow users to send a text message with the data streaming over from the API.
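The "clean up" step described above can be sketched as a small transform. This is a hedged Python illustration only (the project's actual API is written in Deno); the payload shape is modelled on NASA's InSight Weather Service response (a `sol_keys` list plus one object per sol), and the output field names are assumptions for the example.

```python
# Flatten a raw InSight-style weather payload into a simple list of
# per-sol records, keeping just the fields the frontend needs.
def clean_weather(raw):
    out = []
    for sol in raw.get("sol_keys", []):
        data = raw.get(sol, {})
        at = data.get("AT", {})          # air-temperature block for this sol
        out.append({
            "sol": sol,
            "avg_temp_c": at.get("av"),  # average air temperature
            "season": data.get("Season"),
        })
    return out
```

The Rails frontend (or a Twilio SMS body) then only ever sees the cleaned records rather than the nested upstream payload.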
How I built it
Deno for the backend/api
Ruby on Rails for the frontend
Challenges I ran into
I originally had planned on the entire app being made in Deno; however, I kept getting server errors every time I tried to go to the welcome page, so I had to create the front end in Rails.
One of the things that confuses me is that my API sends the information over to Twilio every time I restart the server, so my app sends texts only when I start up the backend. I'm not proficient enough in Deno yet to understand why it does this.
Accomplishments that I'm proud of
I built an API in a language I had no experience in before this hackathon.
What I learned
So much with Deno. I can't wait to continue learning this language.
What's next for Hackside of the Moon
Add a form so users can sign up for daily sms updates from the rover. Fix the bugs.
Built With
deno
ruby-on-rails
tears
Try it out
github.com | Hackside of the moon | Grabbing data from an API ontop of NASA's Insight Rover API and sending that data via text. | ['Richard Rosenthal'] | [] | ['deno', 'ruby-on-rails', 'tears'] | 59 |
10,415 | https://devpost.com/software/mask-detection-system | Project LOGO
User Interface
System detecting a mask is worn
System detecting no mask is worn
Plot showing the accuracy of testing
INSPIRATION
Given the current trends in incidence and underlying healthcare system vulnerabilities, Africa is facing a lot of problems due to the Covid-19 pandemic such as a drastic reduction in medical commodities and supplies following border closures and restrictions on exports, and financial resource limitations.
A lot of people these days are avoiding the use of basic tenets of hygiene during this crucial time such as wearing masks and gloves in public places. Moreover, they endanger establishments by not abiding by the guidelines and compromising the safety of themselves and others.
That is why we came up with the idea of Maskaught, a simple mask detection system that can be placed outside any shop which has access to basic surveillance cameras.
WHAT IT DOES
Maskaught is a convenient mask detection system that can be placed outside any shop or establishment. This system would ensure that those found not wearing a mask are not allowed inside. Outside a mid-range or large-scale shop with security, it can act as a helping hand to the security guard by minimizing his/her interaction with customers with the help of this software, ensuring their safety.
HOW WE BUILT IT
We first trained the system by providing a dataset containing pictures of people with and without masks. After training, all the images were converted into arrays and two
deep learning models
, one for detecting the face and one for the mask, were created. The accuracy of testing was also plotted, as shown in the graph. Both models were then loaded into a new Python file and a camera was integrated into the mask detection system, which detects the mask on the face and displays the accuracy of detection. A
text to speech software
was also integrated within the system to guide the customer through the whole detection process.
CHALLENGES
Converting our python project into a web application was a significant challenge that was faced by our team. However, we used python's
Django
framework to bridge the gap between our python software and HTML. As a result, we were able to build an interactive and user-friendly interface for the project.
ACCOMPLISHMENTS
Completing a challenge always feels satisfactory. Thus the entire project from the mask detection to the web application to our business model are all accomplishments that we are proud of.
WHAT WE LEARNED
Through this hackathon, we had the opportunity to learn how to train a deep learning model and create a python program integrating the use of Keras, OpenCV, and MobileNet along with text to speech conversion software (pyttsx3). We also applied our web development skills to attach the python program to the HTML one using the Django framework.
FUTURE STEPS
Our idea also focuses on promoting the rise of small domestic businesses that do not get a lot of customers, as they're not able to keep track of whether all their customers are wearing masks inside. This may sometimes cause them financial losses, as customers may stop coming to these shops due to the ease of contracting the coronavirus from people without masks.
Moreover, since our idea is inexpensive to enact (we only need to connect our web app to the camera), the cost involved in adopting this system is minimal.
One more plus point regarding our system is the expanded uses and modifications which can be made after adopting it. It can be further applied for security in the future, when we are safer from the pandemic.
It can also add in more scanning features in the future like scanning for gloves.
With further improvements, this system could be integrated with CCTV cameras to detect and identify people without masks and could be used in the imposition of fines for people who don’t wear masks by the government.
Built With
bootstrap
css3
django
html5
keras
opencv
python
tensorflow
Try it out
github.com | MASKAUGHT | The convenient mask detection system | ['Mohammed Ozair', 'Parin Joshi', 'Fabin Joe Flasius', 'Rishabh Saini'] | [] | ['bootstrap', 'css3', 'django', 'html5', 'keras', 'opencv', 'python', 'tensorflow'] | 60 |
10,415 | https://devpost.com/software/safesurance | Welcome
View detailed risk score
Select an area
Risk Score for each state
myinsuranceistoohighevenifipay.online
Financial data
Flood, fire, storm data
Endpoints
Scores
Inspiration
(Forbes) Natural disasters and insurance premiums continue to rise
, and that's hurting our wallets.
What it does
Provides an overview of how susceptible your area is to natural disasters.
How we built it
Datasets:
Earthquake: USGS
Fire: NASA
Flood: FEMA
We also dug around a little to obtain the financial losses caused by floods for each state
Storms: NOAA (Hurricane and Tornado history)
For earthquakes, USGS provided a granular dataset along with the probability of the magnitude of damage around the fault lines. Fire risk score was calculated by taking into consideration the history of fires and the acres of land that were damaged by them in a particular area. Flood risk score was calculated by taking into consideration the financial losses caused by the floods in addition to damages. Storm risk was calculated using hurricane and damaging tornado data obtained from NOAA. We normalized all the scores to a scale of 1-100, with 100 being the most susceptible.
An overall score was then calculated by taking an average of all the scores.
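The normalization and averaging described above can be sketched in a few lines. This is a hedged Python illustration of the scoring scheme as described, not the project's actual code; min-max scaling onto 1..100 is an assumption about how the normalization was done, and the function names are invented.

```python
def normalize_1_100(values):
    """Min-max scale raw per-state figures onto 1..100 (100 = most susceptible)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0 for _ in values]  # degenerate case: all states equal
    return [1 + 99 * (v - lo) / (hi - lo) for v in values]

def overall_score(hazard_scores):
    """Average the per-hazard scores (earthquake, fire, flood, storm)."""
    return sum(hazard_scores) / len(hazard_scores)
```

So a state with normalized scores of 10, 20, 30 and 40 for the four hazards would land at an overall 25.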
Challenges we ran into
Hacking while being stuck in a hurricane.
Accomplishments that we're proud of
Successfully completing what we planned to finish this weekend.
What we learned
2010 had the most natural disasters.
What's next for Safesurance
Tie it to insurance
domain.com entry:
myinsuranceistoohighevenifipay.online
Built With
adobe-illustrator
google-cloud
javascript
ml
python
react-native
Try it out
github.com | Safesurance | Disaster susceptibility- Graded! | ['Ebtesam Haque', 'Muntaser Syed'] | [] | ['adobe-illustrator', 'google-cloud', 'javascript', 'ml', 'python', 'react-native'] | 61 |
10,415 | https://devpost.com/software/rocket-class | Landing Page
Register
Recommended Articles
Profile
Rocket News
Rocket News
Inspiration
With the magnificent launches of SpaceX and NASA’s Perseverance rover, many are not aware of such amazing breakthroughs in science and as a team of space enthusiasts, we are always wanting to know more about the recent space discoveries and the latest news. Although there are tonnes of resources available on the internet, most of the news feeds are not customizable and are not tailored towards one’s specific interests. In light of that issue, we decided to create a website that provides a space-related news feed, integrated with a machine learning algorithm to recommend articles the user is interested in, and provide a great experience for the user.
What it does
Rocket News provides a one-touch space-related news feed based on the latest articles on the web, and the user’s preferences. Users can stay on top of the latest space related news and recommended articles using our platform.
How we built it
We used Figma to design the website and the UI/UX. The frontend itself was created using HTML, CSS and JavaScript. The backend was built using the Python framework Flask. We used the Newscatcher API to build the news feed, which incorporates pagination via the REST API we created. Recommended articles are generated using data analytics and a machine learning algorithm built with Scikit-learn, a Python library. The database was created using Google's Firebase on the cloud, and the application is deployed to Google Cloud using App Engine, which takes care of the infrastructure-related issues.
Challenges we ran into
One of the main problems for us was implementing the pagination and Rest API. This was a new concept for us and something we had never worked with before. When working with the recommendation algorithm, we ran into issues surrounding data piping. To combat this, we examined the documentation thoroughly and found a fix. We also encountered a few problems with deploying to Google Cloud and dealing with the Firebase Database, and learnt a lot by attempting to solve them.
Accomplishments that we're proud of
We are proud to ship a full fledged website on the cloud available to all users with basic internet access. The website is integrated with a login system, a fully functional database, a Rest API for dynamically fetching data, and a high level recommendation algorithm based on user data. We are also proud of setting ourselves apart from other recommender systems by using support vector machines. We are very proud of what we did in the short span of time that we had.
What we learned
We learned about a variety of technologies and concepts. This was our first time implementing a Rest API with pagination and we learnt a lot from the process. Our website includes several features that are part of modern web applications, and building it helped us get a better understanding of how technologies work together. Because we had many problems and had to come up with several solutions, we gained a lot of experience and insight in a very short span of time. We also enhanced our interpersonal skills working in a team setting as we’ve never worked together. This hackathon was a great learning experience for all of us.
What's next for Rocket Class
Going ahead, we would like to develop a community platform that lets users communicate with others and have fascinating discussions about space and other related topics. In the future, we hope to make our product recommendation more thorough. We also hope to make our product recommend space related courses as well as news articles, so the user is able to learn more from our site. We are also planning to improve our machine learning model by using different ways of sorting data, such as k-means clustering or using different sigmoid functions to train and test our model.
New Features
1. Community building
Encourage community building by adding features such as friends, private messages, forums, and discussion threads.
2. Gamification component
Make the interaction process fun and enjoyable by adding features such as leader board, arena and battleground with space quizzes.
Built With
app-engine
css3
figma
firebase
flask
google-cloud
html5
javascript
machine-learning
newscatcher
python
restful-api
scikit-learn
sql
Try it out
tothemoonandhack.el.r.appspot.com
github.com | Rocket News | Recent space discoveries with one touch | ['Ava Chan', 'smriti sharma', 'sidharth-04 Roy', 'Sanil Jain'] | [] | ['app-engine', 'css3', 'figma', 'firebase', 'flask', 'google-cloud', 'html5', 'javascript', 'machine-learning', 'newscatcher', 'python', 'restful-api', 'scikit-learn', 'sql'] | 62 |
10,415 | https://devpost.com/software/planet-hack | rotating > 90 degrees to the right
Draining a planet
Inspiration
Wanted to build something challenging and fun for the space theme! We first thought of a game like Asteroids but then we wanted to be a bit more original so we decided to change up the mechanics.
What it does
It's a survival game based on having limited and diminishing resources. Draining planets will increase humanity's chances of survival, earning you points along the way.
How we built it
React.js was used to build the front end, and a Ruby on Rails API was built to persist the high scores. We leveraged built-in JavaScript methods and browser functionality to create the map, or playing field.
Challenges we ran into
This being the team's first game, many of our early challenges involved figuring out how to create a game-like environment. Dealing with things like object collision and game timers, and even just figuring out the movement of the ship, was a tough challenge.
Accomplishments that we're proud of
As a team, we're really proud of the project overall. It definitely has a long way to go in terms of appearance, so for now it's somewhat intentionally tacky. Our goal was to keep external library use to a minimum, so it was exciting to see the game come together while learning to roll our own game mechanics and hack together a map of sorts.
We're really proud of the product we ended up with after only 48 hours and no game design experience!
What we learned
Game design is much, much, harder than it looks. It's very easy to introduce bugs with so many window methods and properties interacting with each other. There's still so much to learn! We realized that with only the browser and JavaScript, you can create anything you can imagine.
What's next for Planet Hack
We would like to incorporate better visuals and smoother user experience. Cleaning up certain game functionality like smoother status bars and ship movement. We would love for people to compete against one another.
Built With
css
html
javascript
react
ruby
ruby-on-rails
Try it out
github.com | Hack the Planet! | Survive oblivion by draining planets for resources. | ['Nelson Caudill', 'William Lin'] | [] | ['css', 'html', 'javascript', 'react', 'ruby', 'ruby-on-rails'] | 63 |
10,415 | https://devpost.com/software/astronot-impossible-to-reach-the-stars | Introductory page
Data survey page
Results page
Code on Github
https://github.com/MarinaOrzechowski/AstroNOT
Inspiration
First, generating ideas about space wasn't really easy, especially since a lot of space simulators have already been done. But we recalled that sweet time in our childhood when we were all obsessed with space and being an astronaut... We all knew of those people who had walked on the moon and seen the world from above. But we weren't able to get... feelings? At least some idea of how we would manage in space... So, here came our idea.
What it does
It gives people, especially those of preschool age, the ability to get a feel for what it is like to be an astronaut, and even guides you to the stars, or in our case, your dream job.
How we built it
We used Python libraries such as Pandas (for data) and Flask (for the web) in conjunction with HTML and CSS in order to make someone's dream come true, at least on a webpage.
Challenges we ran into
1) Being scattered across the globe, which significantly increased the difficulty of collaboration within those 48 hours
2) Resolving merge conflicts on GitHub, as multiple people working on the same files at the same time wasn't the... most convenient thing we've had
3) Creating a machine learning model in PyTorch to implement a Style Transfer feature so users can space-style an uploaded photo
Accomplishments that we're proud of
Finishing this project within given time constraints
Coping with all challenges XD
What we learned
1) Coding without a tutorial
2) Making a fancy prototype with Figma
3) Flask and CSS for web development
4) How to merge multiple commits on GitHub
5) Manipulating data using Pandas
What's next for AstroNot Impossible to Reach the Stars
1) Add more information to the app to help educate people on a career as an astronaut : Detailed career path - Education, universities.
2) Complete the Style Transfer feature (Make users experience fun with “space” styling their image ) : Filtering user's photo in order to make a realistic photo from "space" excursion
3) Complete the “send information via email” feature : Being able to store your results, career guideline and even your photos from journey on your email address.
Built With
css
flask
github
html
pandas
python
Try it out
fast-castle-68171.herokuapp.com | AstroNot Impossible to Reach the Stars | Web App That Helps People get a feeling of what it is like to be an astronaut | ['Ansel Zeng', 'Marina Orzechowski', 'Kyriana Garcia', 'Salwa Din', 'Artyom Sotnikov'] | [] | ['css', 'flask', 'github', 'html', 'pandas', 'python'] | 64 |
10,415 | https://devpost.com/software/space-platform-game-7649k3 | the game itself
Inspiration
From an adventure game I was playing before, Cat Quest 2, though our game is far from it, so...
What it does
It randomly generates platforms, and you can move around on them. If you want, you can also play Crab Rave in the background.
How we built it
We made a canvas on top of a simple HTML page, controlled by a short piece of JavaScript.
Challenges we ran into
JavaScript being annoying
console.log(.1 + .2) // 0.30000000000000004
console.log(.3 + .6) // 0.8999999999999999
excuse me but
whattttttttt
(yea had this problem for years lol)
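For what it's worth, this isn't JavaScript being uniquely broken: it's IEEE-754 binary floating point, and Python (shown here) behaves exactly the same way. The usual fix is to compare with a tolerance instead of exact equality:

```python
import math

print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
# Compare with a tolerance instead of exact equality:
print(math.isclose(0.1 + 0.2, 0.3))  # True
print(math.isclose(0.3 + 0.6, 0.9))  # True
```

JavaScript has no built-in `isclose`, but the same idea (`Math.abs(a - b) < epsilon`) works there too.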
async stuff *doesn't work*
setInterval(loOp, 100); //go brrrrrrrrrrrrrrrrrrr
yea it just doesn't wanna work with me
StackOverflow had the right answer, just not what I wanted
who is still using jquery?
algorithms don't work
aka "I didn't study about algorithms"
implemented a crappy "physics engine", glitchy and doesn't work half of the time lol
so tired of fixing those, but somehow I just decided to write it down on paper, and after some magic hacks, I fixed it.
why is the drawing thing so hard to use
yea we used Photoshop and it's so hard to draw things; even though it's pixel art, it's still very hard
Accomplishments that we're proud of
we actually got it to work in **2 days**, and it actually felt like a game, just less complete
we fixed all problems
we did all feature requests we made for ourselves
What we learned
we should plan things before coding and use more branches in git
also, JavaScript is not good; debugging with JS is not fun at all
What's next for space-platform-game
multiplayer support using node + socket.io, since this thing is kinda boring
better graphics and less crappy code, maybe a library
it was planned but since there are only 2 days, it never happened
Built With
css3
html5
javascript
Try it out
vincentxie.net | space-platform-game | a quick game made in html canvas with javascript, based on the top of this hackathon | ['wenqi xie', 'Eric Yu', 'Pranav Bonagiri', 'Sai Divvela'] | [] | ['css3', 'html5', 'javascript'] | 65 |
10,415 | https://devpost.com/software/beeboard | BeeBoard
Inspiration
BeeBoard was inspired by my Mom, a kindergarten teacher. Covid-19 is raising a lot of questions about what school will look like come September, but one thing that has changed regardless is the digital presence in the classroom. Many of the solutions for kids in schools seem to be part-time learning or smaller classes, and I wanted to create a product that would let students still collaborate in a fun way, even if they are still at home.
When I went to help my mom pack up her classroom, we had to go out into the hallways and take down all the bulletin boards. I immediately remembered how fun it was walking through the halls and seeing your work on display for everyone to see, and was sad that it may not be possible this year. We always had fun themes and drawings depending on the season or holiday, and there was a sense of pride in spotting your creations. I thought it would be great to create a website that allows students to still show off their artwork, like on a bulletin board, but in a digital way.
The reason I decided to create a website, rather than an app, is a policy in our county that gives all students K-12 access to a laptop. We can therefore assume all students will have the necessary hardware to work with a website, but they may not have access to a smartphone. Perhaps in the future an app can be integrated, but a website seemed a better foundation for now.
What it does
As A Student
BeeBoard will allow students to print off coloring sheets picked by their teacher to fit the theme of their board, color them in, and then upload them to a class bulletin board, or BeeBoard. For example, for a winter-themed board, a teacher may choose snowmen and snowflakes to be colored in. Students print off and color in the predetermined sheets, then return to the site to upload them. BeeBoard uses a laptop camera to scan the picture and upload it to the class bulletin board. Students add their name and a short video description of their project for all to see, and the project is added to the board along with their classmates'.
As A Teacher
BeeBoard allows teachers to create new themed boards with accompanying coloring sheets to print off to complete the theme. Send a link to students to let them join the board and add their own creations. Upload a class list to easily see who has completed the assignment and who still needs to turn theirs in. The board is also shareable (read-only), so that once completed, it can be shared with family and friends to enjoy.
How I built it
The site is currently built using Node.js, React, Redux, and Express with a Postgres Database.
Challenges I ran into
In a short time, finding a way to scan the picture was difficult. For now, the ability to upload a picture exists, but only via a file upload. For smaller children, I want to make it an easy and seamless process. I intend to keep working on BeeBoard after the hackathon.
Accomplishments that I'm proud of
I'm proud that I was able to create a tool to help students in such a strange time, especially younger ones who are missing out on key moments in the classroom. They may not be able to see their work in the halls, but they can still digitally show it off to their grandparents and fellow classmates.
What's next for BeeBoard
For Students
ability to scan pictures
ability to choose a picture to color and print it off
ability to make a video to attach to upload
For Teachers
ability to create different themed boards
ability to choose different pictures to print off
ability to upload a student list to compare who has turned in assignments
Built With
express.js
node.js
react
redux
Try it out
beeboard.space | BeeBoard | Digital and collaborative classroom bulletin board. | ['Shannon Crowley'] | [] | ['express.js', 'node.js', 'react', 'redux'] | 66 |
10,415 | https://devpost.com/software/heard-helping-eliminate-all-racial-discrimination-app | GIF
Shared GitHub with
info@dubhacks.co
Inspiration
Despite the Civil Rights Act, Black Americans still face brutality and unequal conditions today.
Black Americans suffer a higher rate of fatal police shootings than any other ethnicity at 30 shootings for every one million Black Americans.
Black households have one-tenth the median net worth of white households
Black Americans are living in poverty at twice the rate of their white counterparts
Black Americans constitute a higher percentage of COVID-19 Deaths
Black Americans are incarcerated in state prisons at 5.1 times the rate of their white counterparts
Once convicted, black offenders receive longer sentences compared to white offenders.
Studies have shown that people of color face disparities in wage trajectory following release from prison
While people of color make up about 30 percent of the United States’ population, they account for 60 percent of those imprisoned. According to the Bureau of Justice Statistics, one in three black men can expect to go to prison in their lifetime.
As a minority myself, like many people in many countries, I have known, seen, or been someone discriminated against based on our differences, and this is a cause that still has not been reduced to an acceptable level. With all of these points in consideration, I felt it is my responsibility to use my coding skills to develop a high-quality application to help people be heard, and to avoid and act against forms of racial discrimination.
Main Features of the App
First, here is the news feed on the home page tab. It gives the end user powerful, unbiased, and truthful sources on race, discrimination, and protests tied to real-world events, with the ability to quickly view any article. Overall, this section helps inform the user and spread awareness.
Here is the Petitions tab, where users can choose a petition they would like to support in order to help take action, such as justice for Ahmaud Arbery and other such incidents. This helps users take action quickly, rather than scavenging the web for petitions: click the view button to open a petition. Overall, this section helps the user take action against racial discrimination.
Here is the Statistics tab, where you can view the numbers the app is calculating behind the scenes to provide data in order to take action. More specific data stored behind the scenes can be provided to local lawmakers, legislative officials, and towns to push for actions in those areas to help improve society as a whole. Overall, this section helps the user take action against racial discrimination.
Now we move to our search tab, which has a maps UI displaying locations. Due to some constraints, I am not able to use a physical phone, so this iPhone 11 simulator displays the default San Francisco location on the map; otherwise it would show your location. Here, we have a text input where we can type in a location and press enter to see various results. Based on the location input, we find data on the number of complaints in that city; since this app has not published any complaints yet, we see 0. Using a custom algorithm with multiple data points from various APIs, it estimates the number of potential occurrences of acts of discrimination. We also see a risk level to help you decide whether to avoid a location, and the number of people in "help" status, a status you can enable to alert nearby users in your city to help you. Their icon will also appear on the map, so end users will see it, which helps others respond by going to their location and responding in numbers to that act of racial discrimination. Additionally, we see how many HEARD users there are in this city, to see others in support. There is a button we can click to send a data point showing we have avoided the location we typed in, incrementing the counters that track it, and when we go out, a button to enter "help" status. Overall, this helps users avoid certain locations based on multiple data points.
Now to the last part of our app.
Here is the complaint submission form where you can submit a complaint regarding a person, individual, location and specify details regarding an act of racial discrimination. Additionally, the user can attach a photo in order to identify the person who discriminated. This will help us take action against such people and reduce racial discrimination on the back side of the app and provide this information to such organizations that do. Additionally, this data will be posted to the backend in google firebase and the cloud where the data is stored and then is retrieved in this upcoming section where all users can see the various different complaints by users in their areas.
Here we refresh and see my complaint comes up and we have a simple UI displaying this information.
I used various APIs, such as the News API, the react-native-maps API, Expo's location API, and the API provided by Google Firebase, which made the backend so easy, plus various libraries for easy and efficient styling. These APIs helped me build this app in such a short period of time.
That is all for this app and I hope this app can help make the world a better place.
How I built it
During development, I built it primarily with existing libraries, which helped me style my elements quite quickly: react-native's NativeBase, react-native-elements, and react-native-vector-icons. Of the many APIs used in this project, a News API creates the news feed, the react-native-maps API provides the maps UI, the Expo permissions and location APIs make it very simple to collect location data using their documentation, the Google Firebase API lets me post data with a simple method, saving tons of time, and the geolocation/geocoding API came with Expo permissions. All of these APIs and libraries made it easier to build a high-quality application in such a tight time frame. Overall, I used APIs, location services, geolocation, geocoding, and Google Firebase to create an application for people to connect, use data to decide whether to go to a certain location, alert other users around them for help, and take action using petitions and the technologies provided in the application.
For alerting others, getting location, and the maps UI, I used the react-native-maps API, the geofencing APIs provided by Expo, and geocoding to translate latitude and longitude coordinates into an address and vice versa, in order to process the address given, share your location if you are in 'help mode', and alert others when someone nearby needs help.
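A common way to implement a "someone nearby needs help" check like this (a sketch of the general technique, not necessarily the app's exact code, and sketched in Python rather than the app's JavaScript) is the haversine great-circle distance between two coordinates:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    R = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Roughly San Francisco city centre to Oakland:
d = haversine_km(37.7749, -122.4194, 37.8044, -122.2712)
print(round(d, 1))  # about 13 km
```

An app would compare this distance against an alert radius to decide which users to notify.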
For the news, I used News API for a feed
For the complaint form, I used Google Firebase API to store and retrieve data, which I also used to store statistics which can be used in order to take action.
Challenges I ran into
A last-second challenge I ran into was manipulating the database on Google Firebase. While creating the video, in fact, I realized that some of the parameters were missing and were not being updated properly. I eventually realized that the naming conventions for some of the parameters being updated, both in the state and in Firebase, got mixed up. Unfortunately this took way too long and resulted in me having to cut the video into two sections. I stayed up until 1am and got it done. Thanks to all the great documentation and other tutorials, I was able to effectively implement the rest.
What I learned
I learned a lot. Prior to this, I had no experience with geolocation, geofencing, and other location services, which I found quite fascinating once I was able to fully learn them. I also learned how to upload large amounts of data to Google Firebase, which was easy with the help of tutorials, even though I had previously thought it was hard. Additionally, I learned new styling techniques, such as how to create rounded images, and how to request permissions for the images directory, which I was not previously familiar with. At the beginning of this project, I did not think I would finish. However, I learned that once you spend enough time with certain things, learn how to do them, and push yourself, you realize they are much easier than anticipated.
Theme and How This Promotes Social Good
Overall, this application was created in order to help reduce racial discrimination
Design
I created a comprehensive and simple UI to make it easy for users to navigate and understand the purposes of the Application. Additionally, I used previously mentioned utilities in order to create a modern look.
What's next for HEARD (Helping Eliminate All Racial Discrimination) App
I hope to create an API endpoint for the algorithm I created to determine the estimated occurrences of racial discrimination and the calculated risk level, rather than having it in the app. In the long term, this will reduce complexity if the app becomes larger.
Built With
algorithm
api
cloud
expo.io
firebase
geofencing
google
javascript
news-api
postman
react
react-native
react-native-base
react-native-elements
react-native-maps
Try it out
github.com | HEARD (Helping Eliminate All Racial Discrimination) App | Helping everyone get heard, avoid, and act against discrimination against minorities using geofencing, geocoding, location services, google cloud and apis to bring awareness to, avoid and act against. | ['Om Joshi'] | ['Powerbeats3 Wireless Earphones + WOLFRAM AWARD + $250 Digital Ocean Credits', 'First Place'] | ['algorithm', 'api', 'cloud', 'expo.io', 'firebase', 'geofencing', 'google', 'javascript', 'news-api', 'postman', 'react', 'react-native', 'react-native-base', 'react-native-elements', 'react-native-maps'] | 67 |
10,415 | https://devpost.com/software/janky-rocket | Start screen
Game play
Win screen
Inspiration
Janky Rocket was made in an effort to introduce players to the difficulty and intensity of space travel. While researching space travel, we discovered that there are over half a million pieces of debris orbiting Earth, with over half of them larger than a softball. Space debris poses a threat to both rocket launches and satellites providing consumer services like internet access. Janky Rocket educates players about the obstacles astronauts may encounter on their journey to the moon.
What it does
Players are first introduced into Janky rocket’s start screen where they will be met with their mission: GET TO THE MOON. After clicking start, players control the rocket — which moves very jankily — and try their best to avoid obstacles and collisions against shooting stars, asteroids, satellites, and more! Move the mouse to each side of the screen to control the rocket. Why does the rocket move jankily you ask? This is to better simulate how the conditions that real-life astronauts go through can often be...janky.
How we built it
Janky Rocket was built with Repl.it for collaboration. p5.js was used to create the graphics and animations that make the janky rocket move. The p5.collide2D library handles rocket collisions with the moon and the moving obstacles, and the p5.sound library plays sound effects when the rocket blasts off from Earth and when it collides with obstacles.
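p5.collide2D is a JavaScript library, but the circle-circle overlap test at the heart of collision checks like these is easy to sketch language-independently (Python here; the hitbox positions and radii are made up for illustration):

```python
import math

def circles_collide(x1, y1, r1, x2, y2, r2):
    """True if two circular hitboxes (e.g. rocket and asteroid) overlap."""
    return math.hypot(x2 - x1, y2 - y1) <= r1 + r2

# Rocket hitbox at (0, 0), radius 10; asteroid at (12, 0), radius 5:
print(circles_collide(0, 0, 10, 12, 0, 5))  # True  (distance 12 <= 15)
print(circles_collide(0, 0, 10, 30, 0, 5))  # False (distance 30 > 15)
```

Running this check between the rocket and every obstacle each frame is what lets the game subtract a life on impact.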
Challenges we ran into
Some challenges we ran into include getting the orientation of the rocket to correspond to the direction in which the user wants it to. The janky part of our rocket is that the direction of the rocket corresponds not only to the mouse, but also to the mouse position depending on whether or not the position is to the far left or right.
Accomplishments that we're proud of
We’re proud of all of our animations — from the stars in the background to the earth and moon coming into view as the rocket travels through space.
What we learned
While creating this project we learned about how to implement the p5.collide2D library so that when the user-controlled janky rocket hits an obstacle, it would lose a life. We also learned how to use the p5.sound library to incorporate sound effects when the rocket launches from earth and when it collides with obstacles.
What's next for Janky Rocket
Janky Rocket has big plans in the future. One future iteration we are planning to implement in this game is to create different types of rocket skins. Another iteration is adding more levels such as getting to other planets and not just the moon!
Built With
css3
html5
javascript
p5
Try it out
www.jankyrocket.space | Janky Rocket | 3... 2... 1... JANK | ['Jendy Ren', 'Christina W', 'Ethan Horowitz', 'Kevin Gauld', 'Michelle Bryson'] | [] | ['css3', 'html5', 'javascript', 'p5'] | 68 |
10,415 | https://devpost.com/software/hello-space | Inspiration
We were inspired by how popular skywatching has become. Remember the solar eclipse in 2017 that, according to Forbes, about 215 million Americans watched? Only around 12 million saw a total solar eclipse. We wanted to find a way to increase the number of people who can see a full celestial event.
What it does
Our hack helps people who don’t know or are having trouble finding great locations for skywatching locate one near them, and lets these individuals meet up with others who share their passion of skywatching, encouraging group solidarity.
How we built it
On the front end, we used CSS, HTML, and JS, mainly for UI design and API integration. On the back end, it was all Java and Python. We had Java with Spring framework set up request controllers, security, APIs, and services to access external APIs, while Python and Flask allowed us to build our REST API.
Challenges we ran into
We came across several challenges while building Hello Space. While coding, at times half of our PCs slowed down and did not perform properly, completely randomly and unpredictably. Moreover, we did not have enough time to implement the rocket launch API. Last but not least, since not all of us had worked with these APIs before, we had to learn how to use and implement them in Hello Space.
Accomplishments that we're proud of
We were impressed by the successful implementation of the various APIs we used on the back end, despite the difficulty we had building and integrating them. On the front end, our site looks visually appealing. Overall, we believe that we did a great job combining the two to ensure a smooth user experience.
What we learned
By building Hello Space, we learned how to work with several different applications, such as Java EE and Sublime. Moreover, we were introduced to what a REST API is and does. As we are not too experienced working with the Google Maps API, we were able to learn how to use it and implement the geolocation feature into our webapp, to determine the user’s location.
What's next for Hello Space
We're planning on adding 3D visuals for some upcoming celestial events, like a blue moon spinning on its axis. A merchandise page would be cool; supporters could buy space-themed merch with our logo on it. Last but not least, we would like to host our webapp on a custom domain.
Built With
css
google-maps
google-places
html5
java
javascript
launch-library-api
python
spring
Try it out
github.com
docs.google.com | Hello Space | The world is yours to explore | ['John Amiscaray', 'Aditi Parekh', 'Xiaofan Gao', 'oldo0570'] | [] | ['css', 'google-maps', 'google-places', 'html5', 'java', 'javascript', 'launch-library-api', 'python', 'spring'] | 69 |
10,415 | https://devpost.com/software/galaxy-explorer-pjacbr | Start scene//game
Start page//website
Inspiration
Space shooter/Space Invader and websites with the Space themes.
What it does
The game has a player sprite that shoots bullets to destroy enemy ships. Every 1000 points, a new level begins and the game gets faster and faster. It also keeps track of your points, and of your high score when you lose and it's game over. The enemy ships can bump into you and destroy your sprite, and if an enemy ship gets past you, you also lose lives (you start with 5). The website finds you a few learning platforms if you want to learn about outer space, and tells you about the game!
How I built it
I used Xcode to develop the game and Repl.it to make the website
Challenges I ran into
Not having enough time, errors randomly coming up, not knowing how to use Xcode to develop a game, errors that I couldn't fix in Xcode, errors I didn't know, the code I was using being outdated, not knowing how the website would attach to the game, my computer overheating then shutting down, my computer glitching and crashing, Xcode crashing, Repl.it crashing, the parallax not working, my functions not working, my sprites not working, my conditionals not working, not having enough knowledge of Swift, Repl.it crashing and deleting things, my video not working, the slides not working, YouTube not wanting to work with my website, my files not loading, the cursor freezing, the game freezing, massive glitching of the simulator, the bullet sprites disappearing, the simulator crashing, the simulator freezing, the simulator and code not cooperating with the player's space coordinates, and many more...
Accomplishments that I'm proud of
I've never developed a game before and never learned to so I'm very proud of developing the game from scratch and learning to do so very quickly, I'm also proud that I learned how to add in the video background!
What I learned
How to fix 99% of what's above, how to code a game in Xcode, how to use Xcode's game development tools, how to use Swift, and so much more. This was a frustrating process, but in the end I realized I learned so much. I don't need an amazing outcome or product; mine will definitely not be as good, complex, or amazing as other people's, but the product doesn't matter, because I learned so much and I learned to overcome many things through this amazing experience.
What's next for Galaxy Explorer
I want to improve the website and implement the ideas I had but couldn't fit in, and I want to learn more about Xcode and game development to keep coding, developing and updating. I'd like to add levels, different challenges, sprite switches, different worlds and hopefully much more, and I want the website to have all the functions I had in mind (if possible :D), but nothing is impossible with coding, so...!
Built With
css
html
javascript
repl
swift
xcode
Try it out
galaxy-explorerthe-out-of-this-world-project.stephaniesgao.repl.co | Galaxy Explorer | A little space adventure/shooter game with a website! | ['Stephanie Gao'] | [] | ['css', 'html', 'javascript', 'repl', 'swift', 'xcode'] | 70 |
10,415 | https://devpost.com/software/bio-graphy | Inspiration
We found that writing good-sounding bios for Tinder, Bumble, and other platforms like them was tough. We needed a tool that could help us craft awesome-sounding profile bios that put our best foot forward.
What it does
It evaluates the sentiment and magnitude of your bio overall and of each individual sentence, and determines whether you're portraying yourself positively, negatively, or somewhere in between. From there, you can see what needs tweaking and the effect each tweak has.
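The per-sentence breakdown could work roughly like this. The response shape below matches the Natural Language API's `analyzeSentiment` method (a `documentSentiment` plus a `sentences` array, each with a score and magnitude); the bucketing thresholds and function names are illustrative assumptions, not the project's actual PHP code.

```javascript
// Sketch of bucketing a bio from an analyzeSentiment-style response.
// Thresholds (+/-0.25) and names are assumptions for illustration.
function labelSentiment(score) {
  if (score >= 0.25) return 'positive';
  if (score <= -0.25) return 'negative';
  return 'somewhere in between';
}

function reviewBio(apiResponse) {
  return {
    // overall impression of the whole bio
    overall: labelSentiment(apiResponse.documentSentiment.score),
    // per-sentence verdicts so the user can see what to tweak
    sentences: apiResponse.sentences.map((s) => ({
      text: s.text.content,
      label: labelSentiment(s.sentiment.score),
      magnitude: s.sentiment.magnitude, // how strongly the feeling comes across
    })),
  };
}
```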
How I built it
We used the Google Natural Language API to evaluate the text, and used PHP, HTML, CSS and Bootstrap to code the front end.
Challenges I ran into
Figuring out how to get GCloud up and running was a big hurdle, but after that it was mostly smooth sailing.
Accomplishments that I'm proud of
Having a working project at the end of a hackathon!
What I learned
Both of us learned how to use GCloud and also what goes into making a good bio for dating sites. Dorian learned Bootstrap for the first time, and I learned how to code in PHP for the second time in my life.
What's next for Bio-Graphy
We hope to include more of GCloud's API, so that we can provide more tools. One such idea is evaluating which pictures look the best using their Vision API.
Built With
bootstrap
css
html
javascript
php | Bio-Graphy | The First and Last dating tool you'll ever need. | ['Ian Smith', 'Doween Smith'] | [] | ['bootstrap', 'css', 'html', 'javascript', 'php'] | 71 |
10,415 | https://devpost.com/software/luna-chat-bot | Hi and How are you
(Bad) Joke
Fact
testing
Inspiration
Instead of creating a space game or simulation, as is typical for space-themed events, I thought I'd create a chat bot that gives users random facts and images about space, with a fun twist: it tells jokes as well.
What it does
Luna is a chat bot: it waits for the user to input a specific phrase or word and outputs a matching response.
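The trigger-phrase matching described above can be sketched in a few lines. The triggers and replies here are illustrative (based on the example prompts like "Hi", "Joke" and "Fact"), not Luna's actual response table.

```javascript
// Minimal sketch of trigger-word matching: scan the user's input for a
// known phrase and reply accordingly. Phrases and replies are made up.
const responses = [
  { trigger: 'joke', reply: 'Why did the sun go to school? To get brighter!' },
  { trigger: 'fact', reply: 'One day on Venus is longer than one year on Venus.' },
  { trigger: 'hi',   reply: 'Hi! How are you?' },
];

function lunaReply(input) {
  const text = input.toLowerCase();
  const match = responses.find((r) => text.includes(r.trigger));
  return match ? match.reply : "Sorry, I don't understand that yet.";
}
```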
How I built it
Since I had never built a chat bot before, I researched beginner-friendly approaches and found a template to start from.
Challenges I ran into
I had never coded a chat bot before, and I had not used HTML and CSS in over a year.
Accomplishments that I'm proud of
I kept having issues adding responses to the bot, and with designing it and getting it to work the way I wanted, but at the end of the day I found a compromise and still got a functioning bot out of it.
What I learned
It's good to stick with what you know, because sometimes you don't even know that.
What's next for Luna-Chat-bot
I'm hoping to blend in APIs that provide images and celestial-body data, and to give the bot a cleaner design and voice to make it more appealing to the eyes.
Built With
css
html
javascript
Try it out
github.com
codepen.io | Luna-Chat-bot | A Chat bot that can read you fun facts or show you images of space | ['Britta M'] | [] | ['css', 'html', 'javascript'] | 72 |
10,415 | https://devpost.com/software/spacetag | App Screen Shots
Inspiration -
As college students, we are now spread across the country and the world for the semester, away from friends we used to see in person every day
We wanted a game that we could play with friends nearby while still socially distancing, as well as with friends who are far away
--Also this Tag trailer was a major inspiration
https://www.youtube.com/watch?v=kjC1zmZo30U
What it does
SpaceTag is a platform for you and your friends to play long-distance, long-term games of tag, so you can play even if you cannot come into close proximity with one another.
When you create a game you set the distance within which players can tag one another, anywhere from 100 ft to 10 miles. The player who creates the game starts as "it" and can tag anyone within that distance. After each tag there is a cooldown (a function of the tag distance) before the newly tagged player can tag anyone else, giving the previous player time to get farther away.
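The core tag check described above boils down to two questions: is the target within the game's tag radius, and has the tagger's cooldown elapsed? A hypothetical sketch, using the standard haversine formula for distance; the cooldown constant (1 hour per km of tag radius, minimum 1 hour) and all names are assumptions, not SpaceTag's actual rules, and the real app is written in Swift.

```javascript
// Sketch of the tag check: proximity via haversine + a distance-scaled
// cooldown. Constants and names are illustrative assumptions.
const EARTH_RADIUS_M = 6371000;

function distanceMeters(lat1, lon1, lat2, lon2) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

// Larger tag radii grant longer cooldowns so the tagged player can escape.
function cooldownHours(tagDistanceMeters) {
  return Math.max(1, tagDistanceMeters / 1000); // assumed: 1 h per km, min 1 h
}

function canTag(tagger, target, tagDistanceM, nowMs) {
  const closeEnough =
    distanceMeters(tagger.lat, tagger.lon, target.lat, target.lon) <= tagDistanceM;
  const cooldownOver =
    nowMs - tagger.becameItAtMs >= cooldownHours(tagDistanceM) * 3600 * 1000;
  return tagger.isIt && closeEnough && cooldownOver;
}
```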
Space tag is a game that you could play over the course of a few days or weeks with your friends who live in your city, or one that you could play over the course of months with your friends across the country.
How we built it
We built the app in Swift using SwiftUI. We designed the app in Figma, and used Firebase and Google Cloud Platform for the database. Vercel Serverless Functions were employed to host the API the app used to interact with Firebase.
(+Stackoverflow and friendship :) )
Challenges that we ran into
We had very little knowledge of Swift or native development. Getting a project pulled from Git to build in Xcode is a first-place-hackathon-worthy feat.
Dealing with complex, particularly multiline, secrets with Vercel and across multiple platforms required a lot of regex character manipulation. In the future, Firebase secrets need to be handled better if the development team were to grow.
Accomplishments that We are proud of
Learning so many new technologies (swift, new APIs, etc) in such a short period of time. Had a really fun time working together as a team!
Building the entire Next.js backend in Typescript :) (with only one “// @ts-nocheck” comment)
Also, quickly creating a flexible Firebase Realtime Database worked very successfully.
What we learned
Learning a language and making a multiplayer game with it in one day is harder than we thought. Swift is hard. There are a lot of barriers. Students should have free Apple developer accounts. but yay google no break up the big tech companies down with google down with apple down with bezos but i want a tesla all hail X Æ A-12 p.s. have my location you deserve it you sly dog
What's next for SpaceTag
Proximity notifications to alert you when a person is near you in your game of tag.
work out the kinks in the back end
get it on the app store!
We are very eager to keep developing and get a version off the ground that we can play with our friends.
Built With
amazon-web-services
gcp
googe-cloud-platform
google-firebase
google-maps-sdk
next.js
radar.io
swift
swiftui
typescript
vercel-serverless
Try it out
github.com | SpaceTag | Social distancing tag -- keep your space :) | ['Elissa Perdue', 'Andreas Bigger', 'Max Leiter', 'Zoe Vogelsang'] | [] | ['amazon-web-services', 'gcp', 'googe-cloud-platform', 'google-firebase', 'google-maps-sdk', 'next.js', 'radar.io', 'swift', 'swiftui', 'typescript', 'vercel-serverless'] | 73 |