hackathon_id int64 1.57k 23.4k | project_link stringlengths 30 96 | full_desc stringlengths 1 547k ⌀ | title stringlengths 1 60 ⌀ | brief_desc stringlengths 1 200 ⌀ | team_members stringlengths 2 870 | prize stringlengths 2 792 | tags stringlengths 2 4.47k | __index_level_0__ int64 0 695 |
|---|---|---|---|---|---|---|---|---|
10,568 | https://devpost.com/software/library-management-system-huk5c2 | Inspiration
I was inspired to design a modern-looking, dark-themed library management system in the Java Swing framework; there are many existing ones, but none quite like mine.
What it does
Features ⚙️
A draggable, undecorated JFrame with a drop-shadow effect.
A login panel with sign-up and a forgot-password option backed by a security question.
Add Book and Add Student panels with auto-generated IDs for adding new books and registering new students in the database.
Books can be issued to a student by student ID and book ID using the Issue Book panel.
Issued books can be returned for the student ID to whom they were issued.
The Statistics panel shows tables of all issued and returned book data.
How I built it
Tools & Technologies used 🎭
Java Swing + AWT
JDBC API
MySQL database (SQLyog GUI client)
Flatlaf Look & Feel
Netbeans IDE
Pichon Icons8 icon pack
rs2xml jar
Challenges I ran into
I faced many small challenges, like how to pass dynamic data from the database to the GUI without using complex SQL queries.
What's next for Library Management System
Maybe add more options and package it as a Java Web Start application.
Prerequisites ✔️
A minimum of JRE version 8 for running the application.
MySQL should be installed on your system with the tables given in the SQL file of the repository.
Built With
api
awt
client)
database
feel
flatlaf
gui
icon
icon8
ide
java
jdbc
look
msql
netbeans
pack
pichon
rs2xml
sqlyog
swing
Try it out
github.com | Library Management System | A 🌑 dark themed library management 🖥️ desktop application with modern look and feel. | ['Ashutosh Tripathi'] | [] | ['api', 'awt', 'client)', 'database', 'feel', 'flatlaf', 'gui', 'icon', 'icon8', 'ide', 'java', 'jdbc', 'look', 'msql', 'netbeans', 'pack', 'pichon', 'rs2xml', 'sqlyog', 'swing'] | 12 |
10,568 | https://devpost.com/software/hospital-bed-tracker | hospital login
user login
Bed data show
welcome screen
Bed data update
Overview
Hospital-Bed-Tracker is a native Android app built to track the total number of beds and the number of beds available in any hospital. It was built using Java and XML for the front end, and Firebase for the backend.
The problem it solves :
The idea behind the app is to minimize the loss of crucial time a patient faces during an emergency. In such cases, casualties sometimes occur simply because an empty bed cannot be found in any hospital (more likely in the current pandemic situation). With the help of this app, one can check the number of vacant beds, which will definitely save time.
Phases: We developed the app in two phases -
Phase 1: In this phase, we made a roadmap for development and collected all the resources/assets we were going to use in our application. We then developed the app according to the following roadmap:
Welcome Screen (Option to choose between User & Hospital )
User authentication ( Phone OTP verification)
Hospital Registration ( All the parts of forms )
Bed details to the user (Bed Details shown to the user in recycler view)
Phase 2: On the last day, with a few hours left and all the big parts complete, we followed the pre-decided roadmap:
Bed Data update (For the employee to update data)
Completing small things (logout, menus, onResume, onStop)
Adding and changing the drawable files.
Looking for the bugs and making app submission-ready.
Challenges we ran into :
The biggest challenge we had was turning this idea into a practical app in just a few hours. In terms of process, we kept trying to make the UI as good as possible, so we had issues selecting the assets/fonts/icons we were going to use. Another problem we faced regularly was verifying the person registering a hospital; to solve this, we decided to add an employee database with the employee's ID card so as to verify the person's authenticity. In doing so, we ran into some issues with Firebase Storage. Another big part of the app was showing bed details to the user; we handled it well, using a RecyclerView with a search option at the top.
What next :
We want this app to become part of daily life, because it can be a valuable asset for society. We want every hospital with a website or mobile app to include this feature.
Thanks and regards
Built With
external-libraries
firebase
firebase-realtime-datebase
front-end-:-java
java
material-design
xml
xml-back-end-:-firebase-(firabse-authentication
Try it out
github.com | Hospital-Bed-Tracker | Tracks the availability of beds in the hospitals. | ['Umang sharma'] | [] | ['external-libraries', 'firebase', 'firebase-realtime-datebase', 'front-end-:-java', 'java', 'material-design', 'xml', 'xml-back-end-:-firebase-(firabse-authentication'] | 13 |
10,568 | https://devpost.com/software/radius-zu7d26 | Our icy UI
Get started
Report Infection
Danger ZONES
Prediction Dashboard
Inspiration
There are people dying all over the world - pretty big motivation. We want to help the elderly find their way through the COVID-19 pandemic by avoiding infected and crowded locations. We can help each other by being good neighbors and reporting cases for the community.
What it does
With daily COVID-19 death tolls higher than ever, a major obstacle to recovery is the lack of information. Social distancing is hard when you don’t even know which locations have a high density of people, or which places have had infected visitors.
Our goal is to fill this lack of information by alerting users in real time to locations with confirmed cases, so that they can avoid them. This allows users to make a conscious choice to avoid certain locations, stopping contact with infections in the first place. Additionally, we use the Besttime API to forecast the safest time to visit a store days in advance by statistically analyzing trends in visitor count. These predictions allow users to avoid foot traffic in stores - a breeding ground for COVID.
How I built it
This was built in two parts. The iOS app was written in Swift, and the main frameworks used were Core Location, Radar, and Firebase. The database used to store the data was Cloud Firestore, while the UI elements were from MapKit and UIKit. The Firebase queries were done in a background thread to avoid UI lag. We made GPX files on XCode to simulate location and test our app features. We used SwiftUI to display the dashboard of predictions for various store foot traffic. The prediction was based on data from the Besttime API
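The keep-slow-queries-off-the-UI-thread pattern described above is language-agnostic; here is a minimal sketch in Python (the app itself is in Swift, so the helper name and the stand-in workload below are purely illustrative):

```python
import threading
import queue

def run_in_background(work):
    """Run `work` on a worker thread and return a queue holding its
    eventual result -- the same idea as dispatching Firebase queries
    off the main thread so the UI stays responsive."""
    out = queue.Queue()
    threading.Thread(target=lambda: out.put(work()), daemon=True).start()
    return out

# The "UI thread" stays free and collects the result when it is ready:
result = run_in_background(lambda: sum(range(1000))).get()
```

The UI thread only blocks at the moment it actually needs the result, rather than for the whole duration of the query.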
Challenges I ran into
Location tracking with live firebase updates was difficult since the multithreading was complex. We had to sort out the UI vs background thread issue. Also, we had a tough time getting SwiftUI set up properly since this was our first main project with it.
Accomplishments that I'm proud of
The UI looks pretty solid in our opinion and there are a bunch of useful features on this app. Even if only one person is a good samaritan neighbor and reports a case of COVID, everyone in the area will be able to avoid that location for the incubation period. We're just really happy with coming up with the entire idea from scratch and converting it into a finished product.
What I learned
We experimented a lot with SwiftUI, which will help in future hackathons. We also used multiple APIs (Radar, Besttime), which we can add to our toolkit in the future.
What's next for Radius
We're trying to include advanced machine learning algorithms to make our store population prediction even more accurate.
Built With
bettertime
core-location
ios
mapkit
radar.io
swift
uikit
Try it out
github.com | Radius | Avoid COVID-19 infected locations and crowded locations | ['Yatharth Chhabra', 'Aditya Sharma'] | ['The Wolfram Award', 'Medical Hack Prize (Wireless Charging Pad)', 'Wolfram Award by Wolfram Language'] | ['bettertime', 'core-location', 'ios', 'mapkit', 'radar.io', 'swift', 'uikit'] | 14 |
10,568 | https://devpost.com/software/sanchar-mx2vds | Inspiration
hjhk
What it does
How we built it
Challenges we ran into
Accomplishments that we're proud of
What we learned
What's next for Sanchar
Built With
cnn
easygui
heroku
image-processing
keras
opencv
speech-recognition | jhkk | hj,k | ['ISHA MUDGAL'] | [] | ['cnn', 'easygui', 'heroku', 'image-processing', 'keras', 'opencv', 'speech-recognition'] | 15 |
10,568 | https://devpost.com/software/fundit-srit95 | fundIt
A platform that democratizes access to capital for small businesses via crowdfunding
Inspiration
Startup founders often lack the connections or profits to get funding, and in a year full of uncertainty, many big investors are scared to invest in small businesses. Not all startups make millions of dollars in their early years.
Meanwhile, most people are not as rich but still want to invest. So we want to build a platform that benefits both businesses of color (the majority of which are quite small) and investors. Startups post video pitches to help investors make a decision about the startup, and investors can make an appointment with the business to learn about its future goals before investing.
What it does
fundIt is an app for small businesses to get crowdfunding from retail investors in exchange for equity.
Users can login and authenticate their credentials via Apple/Google/Email
Startups can post data such as PDFs, Images, and Text to supplement their crowdfunding campaign and help investors to make investment decisions
Investors can browse all campaigns via a Tab view
The most unique feature of this platform is the highlighted businesses of the month. Underrepresentation and discrimination are huge problems in business investment, so we want to represent those businesses by giving them a separate page.
Investors can schedule a virtual meeting with the representative of startup that will help investor know about the future plans of the business
Investors can pay as little as $10 for a share in the startup’s equity offered in the crowdfunding campaign
Investors can view their past investments & their total investments on a profile view
Startups can checkout the funds raised from the crowdsourced campaign via Apple/Google Pay to Apple/Google Wallets in a virtual FundIt card
How I built it
Flutter: Dynamic Mobile Applications that runs both on Android and iOS.
Firebase: For authentication
Square: Payment Processing
SQL: For storing the Business and Investor Information
UiPath: For automating the process for investors displaying startups according to their search history
Potential Users
Retail investors - who will be investing in the companies that are listed on our platform
Startups - they sign up for crowdfunding in exchange for equity.
Challenges I ran into
Payment Processing using Square
Automation with UiPath
Making a dynamic user interface for startups took some time to get right
Accomplishments that I'm proud of
We were able to build a working platform with great teamwork in such a short time.
What we learned
Learned how to divide tasks as a team, be accountable for them, and set report times
How to do payment processing
What's next for fundIt
We are planning to reach small businesses and small investors who could benefit from each other. Small businesses by getting money and small investors by getting returns on their investment with as little as 10 dollars.
Built With
android
Try it out
github.com | MoneyQ Fundit | A platform that democratizes access to capital for small businesses via crowdfunding | ['Rishav Raj Jain'] | [] | ['android'] | 16 |
10,570 | https://devpost.com/software/attendance-for-google-meet | List of students and symbol showing attendance
Edit Class Screen
An automatically generated attendance Google Sheet
Inspiration
In the era of COVID-19, virtual classes have become the norm. For teachers, however, taking attendance in these virtual classes is often a pain. They must keep track of when students join and leave among side conversations and distracting visuals. Many teachers at our school complain about the difficulty of taking virtual attendance, claiming that existing Google Chrome extensions are buggy and unreliable.
What it does
Our Google Chrome extension, Attendance for Google Meet, streamlines the entire process of taking attendance in a Google Meet. When a teacher first joins a Meet, they are prompted to choose the class that the Meet is for, such as "Period 1 Math". They can edit the class to customize the list of students, add other classes, or delete them. The extension automatically detects when students join or leave the call and records it in local storage. At any time, teachers may click on the attendance button to view each student's status (present, absent, previously present, or not on list), and export the data to a beautifully formatted Google Spreadsheet in their own Google Drive.
How I built it
The extension was built with HTML, CSS, and Javascript. We injected these scripts using DOM manipulation into the Google Meet page to match the Material Design theme. We set up an OAuth2 consent screen to ask the user for permission to create a Google Spreadsheet in their Google Drive using the Google Sheets API.
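The export step boils down to shaping attendance records into the rows-of-cells layout that a spreadsheet "values" payload expects. The extension itself is written in JavaScript; the sketch below is in Python for brevity, and both the helper name and the record fields are hypothetical, not the extension's actual schema:

```python
def attendance_rows(class_name, records):
    """Shape attendance records into rows of cells suitable for a
    spreadsheet 'values' payload: a title row, a header row, then one
    row per student.  `records` maps student name -> (status, joined,
    left); these field names are illustrative only."""
    rows = [[f"Attendance: {class_name}"],
            ["Student", "Status", "Joined", "Left"]]
    for name in sorted(records):
        status, joined, left = records[name]
        rows.append([name, status, joined, left])
    return rows

rows = attendance_rows("Period 1 Math", {
    "Ada": ("present", "9:00", "9:45"),
    "Ben": ("absent", "", ""),
})
```

A payload like this could then be handed to a spreadsheet-write call, with cell formatting applied separately.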
Challenges I ran into
Using the Google Sheets API was rather difficult due to its complexity; however, we managed to abstract the functionalities with functions to make the process simpler. It was also hard to match the exact style of the Google Meet UI because Google minifies its class names, so parsing the page source was troublesome. The Material Design library documentation was infuriatingly unclear and we often could not do what it said we could do, but we overcame these hurdles by making the functionality ourselves.
Accomplishments that I'm proud of
We are proud of the appearance of our extension and the appearance of the exported spreadsheet. We are also pleased that we not only managed to piece together a comprehensive UI in two days but also create an almost fully functional chrome extension that seamlessly integrates with such a large video platform.
What I learned
We learned about manipulating the DOM with Javascript. We learned about implementing design libraries, in particular Material Design, into our HTML and CSS. Additionally, we learned basic video editing skills.
What's next for Attendance for Google Meet
We plan to fix any bugs in our extension and later deploy the completed product to the Chrome Web Store to help teachers around the world take attendance.
Built With
css
google-sheets-api
html
javascript
material-design
oauth
Try it out
github.com | Attendance for Google Meet | A Google Chrome extension for teachers to make virtual attendance taking easier than ever. | ['Aditya Balasubramanian', 'Tyler Lin'] | ['Grand Prize Winner', 'Best Web Hack', 'Amazon or Visa Giftcard'] | ['css', 'google-sheets-api', 'html', 'javascript', 'material-design', 'oauth'] | 0 |
10,570 | https://devpost.com/software/privacy-oriented-contact-tracing-app | The delete data page
Home page of our app that shows your MAC Address and your previous status.
The page that allows you to state whether you're infected or you've recovered.
Option page that leads you to all functions of our app.
Inspiration
As the pandemic continues to worsen, it has been hard to keep it under control because it is difficult for people to recall their whereabouts for the past 14 days. Nevertheless, one elderly man who was infected could recall every location he had been to for the past 14 days. This greatly helped keep the pandemic under control, as the "high risk" individuals who had been in close contact with him could immediately be sent to self-quarantine or receive a COVID-19 (nucleic acid) test. However, it is quite rare for people to remember exactly where they have been for the past two weeks, so we came up with the idea of creating an app that helps remember where you've been. A tracking app that records everywhere you go would be almost unworkable, as it interferes with user privacy and may hold a lot of personal information. For this reason, we decided to create a contact tracing app that is based on public information and does not contain any personal information.
What it does
Our application serves as a tool to cut down transmission of the coronavirus. Since your device's MAC address is sent to every Wi-Fi router it connects to, our application records every MAC address on the same Wi-Fi router your device is connected to. The MAC addresses recorded on your device help identify users who were in close contact with you. When users wish to check whether they are at risk, the application compares their MAC address with all the "high risk" MAC addresses stored on the online server. By utilizing MAC addresses, this app can greatly reduce the spread of the virus while protecting our users' privacy.
How I built it
Our entire program was written in Python. For the app's front end, we used the Kivy library. To access Wi-Fi networks and send ARP requests, we used the built-in arp command, which is available on most operating systems. The server is hosted on IBM Cloud Foundry with an IBM Cloudant database (based on CouchDB principles). The server is a WSGI application written in Python 3.8 with the Flask library and the flask_api add-on. The WSGI application uses Python core libraries for regular-expression parsing, JSON parsing/manipulation, hashing, and date/time manipulation. It uses the IBM Cloudant Python library to manipulate the database and the expiringdict library to manage dynamic ban lists. The WSGI application is served by gunicorn on Cloud Foundry.
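The ARP step can be sketched as follows: run the built-in arp command and pull the MAC addresses out of its text output. The sample lines below are illustrative, since the exact `arp -a` format varies by operating system:

```python
import re

# Matches colon- or hyphen-separated MAC addresses, allowing the
# single-digit octets some platforms print (e.g. "0-14-22-ab-cd-ef").
MAC_RE = re.compile(r"(?:[0-9a-fA-F]{1,2}[:-]){5}[0-9a-fA-F]{1,2}")

def macs_from_arp_output(text):
    """Extract MAC addresses of devices on the same network from
    `arp -a`-style output, normalised to lowercase colon form."""
    macs = set()
    for m in MAC_RE.finditer(text):
        parts = [p.zfill(2).lower() for p in re.split(r"[:-]", m.group())]
        macs.add(":".join(parts))
    return macs

# In the app this text would come from running the arp command via
# subprocess; here we use a hard-coded sample for illustration.
sample = ("router (192.168.1.1) at a4:2b:b0:01:02:03 on en0\n"
          "phone (192.168.1.7) at 0-14-22-AB-CD-EF on en0\n")
found = macs_from_arp_output(sample)
```

Normalising to one canonical form matters because the same MAC must compare equal whether it was recorded on Windows (hyphens) or macOS/Linux (colons).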
Challenges I ran into
The three major hardships we went through while creating this application were the planning phase, the Cloudant documentation, and compiling our code into an app. While planning and coding, we were very concerned about protecting our users' privacy. To prevent impersonation and attacks on our server, we made sure we had precautions in place. For example, our application only stores users' MAC addresses, and our server automatically bans, by IP for 15 minutes, anyone who attempts to access the admin section without the correct password. It took us some time to both code and come up with ideas to prevent malicious attacks on our server. The Cloudant documentation was often ambiguous, which was another major problem we had to overcome. Lastly, compiling our code into an app was the third major challenge. We debated whether to keep or drop functions, whether we needed to add anything, and whether the compilation would succeed. There were several things we were concerned and worried about, but we were glad that the compilation worked in the end after several alterations to the code and several tries.
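The 15-minute IP ban described above (handled in the project via the expiringdict library's time-expiring dictionary) can be sketched as a tiny stand-in class; the class name, thresholds, and injectable clock below are made up for illustration and testability:

```python
import time

class BanList:
    """Bans an IP for a fixed window after too many failed attempts --
    a minimal stand-in for an expiring dictionary.  `clock` is
    injectable so expiry behaviour is easy to test."""

    def __init__(self, ban_seconds=15 * 60, max_failures=1, clock=time.time):
        self.ban_seconds = ban_seconds
        self.max_failures = max_failures
        self.clock = clock
        self.failures = {}   # ip -> failed-attempt count
        self.banned_at = {}  # ip -> timestamp of the ban

    def record_failure(self, ip):
        self.failures[ip] = self.failures.get(ip, 0) + 1
        if self.failures[ip] >= self.max_failures:
            self.banned_at[ip] = self.clock()

    def is_banned(self, ip):
        t = self.banned_at.get(ip)
        if t is None:
            return False
        if self.clock() - t >= self.ban_seconds:
            del self.banned_at[ip]      # ban expired: forgive the IP
            self.failures.pop(ip, None)
            return False
        return True

# Fake clock so the 15-minute window can be exercised instantly.
now = [0.0]
bans = BanList(ban_seconds=900, clock=lambda: now[0])
bans.record_failure("10.0.0.5")
```

A request handler would call `is_banned` before checking the admin password and `record_failure` after a wrong one.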
Accomplishments that I'm proud of
We are all very proud to create this app and participate in this competition. Although the whole process was tough sometimes, it paid off well. When we ran our code, the feeling of accomplishment was overwhelming, and we were definitely glad that we never gave up.
What I learned
Although we were familiar with different computer languages, it was our first time creating an application by ourselves. In addition to learning how to build a mobile app and use GitHub systematically, we also truly understood the power of computers and their real-life applications. After this experience, computers are not just a bunch of variables, numbers, and strings anymore; we clearly understand how they help us in real life.
What's next for Privacy-Oriented Contact Tracing App
To support additional users and to support the iOS platform, we'll need to upgrade our servers in terms of RAM and storage by upgrading our IBM Cloudant DB and Cloud Foundry services. Further functions and features may also be added to our application for future use.
Built With
expiring-dict
flask-api-libary
github
ibm-cloud-foundary
ibm-cloudant-database
kivy-libary
python
wsgi
Try it out
github.com
youtu.be | Privacy-Oriented Contact Tracing App | A contact tracing app which protects users’ privacy by utilizing MAC Addresses to record those who have been in contact with an app user. | ['Ryan Wang', 'lenayq', 'Tyllis Xu'] | [] | ['expiring-dict', 'flask-api-libary', 'github', 'ibm-cloud-foundary', 'ibm-cloudant-database', 'kivy-libary', 'python', 'wsgi'] | 1 |
10,570 | https://devpost.com/software/vox-of-life | Sign Up
Time to Talk Notification
Text Confirmation
Voxers on call
Wireframe
People are by nature social creatures. These past few months of having to isolate and distance yourself from others has been extremely difficult, both physically and mentally. In today’s world, communication and social contact are a huge part of daily life, larger than they have ever been before. This situation happened so quickly, and it may continue indefinitely. We believe that one can abide by the ‘new normal’ and still maintain social contact by using Vox of Life.
In Latin, the word 'vox' means 'voice'. Vox of Life strives to be the 'voice' used to initiate and maintain communication with others in a safe, controlled environment through the Vox of Life app. Vox of Life is an opportunity to socialize with others who share the interests and hobbies you choose, or with someone who may just need to talk and hear another human voice.
What are you interested in accomplishing? What might be reasonable to accomplish?
Encourage spontaneous meaningful interactions between all walks of life
Reduce impact of social isolation
Make conversing with new people comfortable
Encouraging extroversion in a time where we’re all distanced
Is this something you want to do within the timeframe of the hackathon only, or are you interested in working on this after the 12th?
The effects of COVID-19 are far reaching, and mitigating the effects of social isolation is our goal. We believe we can do this by connecting new people on 1:1 phone calls to meet new walks of life. Even after the curve is flattened and social interaction becomes safer, we will be transitioning our efforts to continue reducing the impact of social isolation, and creating ways for improved communication between individuals and groups.
What is your bandwidth to contribute?
We, the members of this team, are extremely passionate about our mission and are currently balancing this initiative with full time professions. We are capable of supporting this initiative but are seeking support to ensure our mission continues to have a greater impact.
Who is on your team? Please describe your background and why you’re the right person/team to work on your idea.
Magus
Software Engineer
Design thinker
Social enterprise enthusiast
Right person: I like to make things, and also love to make things better. I aspire to work on and make products that can create lasting social impacts globally
Shiv
Client facing
Biomedical engineer
Health focused
Right person: I desire to make an impact that can positively affect people and improve the way people see the world and others
Sam
Marketing Specialist
Client facing
Right person: I have a desire to network with others and get to know other people and Vox of Life is heavily geared toward networking
Patrick
Full stack developer
Highly competent in multiple technologies
Right person: I am not a fungus, I am a fun guy and I like building fun tech
Inspiration
Vox of Life was inspired after noticing the heavy toll self-isolation was taking on the mental health of friends and loved ones. Social distancing and self-isolation don't have to mean loneliness and silence. VOL is on a mission to encourage people to have meaningful social interactions while still practicing safe social distancing. We enable this by pairing different walks of life with each other through just a voice call, based simply on when they are both available.
Built
To rapidly execute on this mission, the proof of concept initially was "pretotyped" with a combination of traditional software engineering frameworks (NodeJS/Nest) and even no-code tools (Zapier, Airtable, Wix). We used NestJS to build our matching algorithm and backend server, and used the no code tools to build our frontend. The first prototype was complete in the first half of a weekend, and the private beta started the very next day. A lot was learned from our testers and the project went through multiple iterations over the course of the trial very quickly.
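The availability-based pairing can be sketched as a greedy match over shared time slots. The real matcher runs in the NestJS backend, so the function name, data shapes, and slot labels below are made up for illustration:

```python
def match_pairs(availability):
    """Greedily pair users who share an available time slot.
    `availability` maps user -> set of slot labels; returns a list of
    (user_a, user_b, slot) triples, with each user matched at most
    once.  Users are visited in sorted order for determinism."""
    pairs, taken = [], set()
    users = sorted(availability)
    for i, a in enumerate(users):
        if a in taken:
            continue
        for b in users[i + 1:]:
            if b in taken:
                continue
            shared = availability[a] & availability[b]
            if shared:
                pairs.append((a, b, min(shared)))
                taken.update({a, b})
                break
    return pairs

pairs = match_pairs({
    "ana":   {"mon-7pm", "tue-8pm"},
    "bo":    {"tue-8pm"},
    "carol": {"wed-9pm"},
})
```

A production matcher would also weigh interests and call history, but the core constraint, both parties free at the same time, is captured by the set intersection.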
Challenges
Initially, VOL calls only worked over the web on desktops. Multiple testers provided feedback to make it also accessible over their mobile devices. In less than a week's time, the testers were able to dial-in to their calls.
Future
Nursing Homes
Build relations with nursing homes, to allow the isolated elders to have one on one calls with volunteers
Slack
Integration for Slack, where users can have spontaneous one-on-one interactions within their existing communities
Built With
airtable
html
javascript
nestjs
sendgrid
twilio
typeform
wix
zapier
Try it out
voxoflife.com
instagram.com
github.com | Vox of Life | VOL encourages people to have meaningful social interactions, while still practicing safe social distancing. We enable this by pairing different walks of life to each other through just voice. | ['Magus Pereira', 'Patrick Luy'] | [] | ['airtable', 'html', 'javascript', 'nestjs', 'sendgrid', 'twilio', 'typeform', 'wix', 'zapier'] | 2 |
10,570 | https://devpost.com/software/miranda | Introduction.
Google + Twilio APIs.
Our Motivation.
Special Features. Auto Vehicle Plate Detection System.
Statistics.
Inspiration
Our app 📱 "Miranda" at Hack's 20 was inspired by recent global law enforcement encounters; we wanted to build a seamless app that can help de-escalate an aggravating situation in case of getting pulled over. There are obvious psychological impacts of driving among police cars: how do we remain calm while being fully discreet and cooperative? How do we rebuild trust in communities where racial profiling has become a norm?
What our app does
This is an app that documents and analyses police encounters using Machine Learning to help mitigate negative interactions with the police. Additionally, 'Miranda' promotes community policing by alerting nearby users and family.
User Story
User: Innocent citizen being pulled over by the police
When I see the flashing red and blue police lights in my rear-view mirror, I ask my phone “Hey Siri, I’m being pulled over.” The App “The one stop for police stops” opens up and automatically starts recording audio and video of this scene, streaming it to the cloud for secure storage. While the phone is recording the scene, my Constitutional and Miranda Rights are presented clearly on the app’s screen.
I can tap a button (or the screen) to send a notification to my family and local concerned citizens that I’m being pulled over and may need help interacting with the police (e.g. recording the incident themselves).
My incident report (including a transcript, the officer’s name, and the officer’s license plate) is published securely. Optionally, this incident can be posted on Twitter (with location + hashtags) to solicit help from other folks in the community, especially when there’s racist or aggressive language involved.
How We built it
Using Google's Speech Synchronous Recognition API, audio files longer than 80 minutes can be transcribed successfully. Transcripts can also be translated into different languages to fight police brutality in other countries (HK, a notable one).
The NLP model analyses the sentiments and comes up with a list of words with saliences attached. Judging from the relevance and how negative or positive each word is, a loved one is able to grasp the situation more quickly.
Auto-generates a PDF: an actual log of what went down during the whole interaction.
With Google's Cloud Vision API, our dash cam was able to screen-capture the vehicle's license plate, printing each digit carefully into the full report as a copy for the victim's attorney/representative.
Within Miranda, the app provides a comprehensive list of the rights the user is entitled to. If the situation is aggravated, the user can conveniently refer to it.
As a form of protection, the app is designed to be completely black on the exterior. In case of confiscation, the user’s data will be saved and a full report will be ready for review.
Our speech recognition can also detect screams and words highly categorized as "danger", and using the Twilio API, the user is able to send texts to friends and family when in a dangerous situation.
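The danger-word check can be sketched as a simple scan over the transcript. The word list and the plain substring matching below are deliberate simplifications invented for illustration; the app itself scores words using the NLP API's salience values, and a hit would trigger an alert text via Twilio's messaging API:

```python
# Illustrative danger-word list -- not the app's real vocabulary.
DANGER_WORDS = {"help", "gun", "don't shoot", "weapon"}

def danger_detected(transcript, danger_words=DANGER_WORDS):
    """Return True if any danger-flagged phrase appears in the
    transcript (case-insensitive).  Substring matching is a
    simplification: a real scorer would tokenize to avoid matching
    inside longer words (e.g. "gun" in "begun")."""
    text = transcript.lower()
    return any(word in text for word in danger_words)

alert = danger_detected("Step out of the car. HELP, someone!")
```

On `alert` being true, the app would fire the family/community notification path rather than waiting for the user to tap a button.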
Challenges We ran into
Connect some of the endpoints together.
Figuring out which APIs to use from Google Cloud
Working remotely
Accomplishments that We're proud of
We were able to integrate a lot of functionality in a short amount of time.
What's next for MIRANDA
We plan on expanding and developing some of the functionalities of the app. For instance, we would like to integrate AR/VR in order to simulate a similar environment where you are pulled over by a cop and familiarize yourself with the actions you can take using this app. We also plan to use the Twitter API to alert our community to flock to the location if the interaction is indicative of police brutality.
Built With
adobe-xd
android
google-cloud
google-cloud-natural-language-processing
google-cloud-speech-to-text
google-cloud-vision
google-cloud-vision-api
mobile-application
python
twilio
twilio-messaging-api
Try it out
github.com | MIRANDA | Your one stop for police stops. An app to counter police brutality. | ['Chelsea Ip', 'Pramod Kotipalli', 'Mythili Karra', 'Rahul Pulidindi'] | ['Best Use of Google Cloud', 'Active Tooling Category Prize'] | ['adobe-xd', 'android', 'google-cloud', 'google-cloud-natural-language-processing', 'google-cloud-speech-to-text', 'google-cloud-vision', 'google-cloud-vision-api', 'mobile-application', 'python', 'twilio', 'twilio-messaging-api'] | 3 |
10,570 | https://devpost.com/software/bluetooth-controlled-rc-car-rdi2q8 | Top View
Back View
Front-Right View
Front View
Right View
Back-Right View
Front-Left View
Back-Left View
Left-View
Inspiration
I love using the Raspberry Pi to make many different kinds of projects. I have made many kinds of projects in which I controlled them using a keypad, joystick, or push buttons. I wanted to create a project that I could control wirelessly. This inspired me to create a car which I could control wirelessly via bluetooth.
What it does
The Bluetooth-controlled car that I have made acts like any other remote-controlled car; however, it can be controlled with any smartphone.
How I built it
To build my bluetooth controlled car, I used a Raspberry Pi which I coded using Python. I used 4 DC gear motors to move the car, and 2 L293D motor drivers to steer and drive the motors. After connecting the DC motors to the wheels, I connected a push button and an active buzzer to the breadboard. To connect any smartphone to the Raspberry Pi, the user must first install the Serial Bluetooth Terminal app and then choose "raspberry pi" from the "Devices" menu. However, to establish a successful connection between the Raspberry Pi and the smartphone, the user must also hold down the push button while connecting both devices. After the user's phone is connected to the car, the active buzzer beeps to indicate that it is connected, and the Raspberry Pi sends the directions for controlling the car to the user's smartphone. To watch my video for this project, click the link here. If you want the code for this project, check it out here.
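The command handling on the Pi side can be sketched roughly as below. This is a hedged illustration only: the write-up doesn't specify the actual command protocol or wiring, so the single-character commands and the motor states here are hypothetical.

```python
# Hypothetical sketch of the car's Bluetooth command handling.
# The real protocol and pin layout aren't given in the write-up, so the
# commands ('F', 'B', 'L', 'R', 'S') and states below are illustrative.

# Each side of the car is driven through one L293D channel; the booleans
# stand in for GPIO.HIGH/GPIO.LOW on that channel's two input pins.
MOTOR_STATES = {
    "F": {"left": (True, False), "right": (True, False)},   # forward
    "B": {"left": (False, True), "right": (False, True)},   # backward
    "L": {"left": (False, True), "right": (True, False)},   # spin left
    "R": {"left": (True, False), "right": (False, True)},   # spin right
    "S": {"left": (False, False), "right": (False, False)}, # stop
}

def handle_command(char):
    """Map one character received over the Bluetooth serial link
    to the desired state of the two motor-driver channels."""
    state = MOTOR_STATES.get(char.upper())
    if state is None:
        # Unknown byte: the safest behaviour is to stop the car.
        return MOTOR_STATES["S"]
    return state

# On the Pi, the returned tuples would be written to the L293D input pins
# with RPi.GPIO; here we just inspect the mapping.
print(handle_command("f"))
```

In the real app, a loop would read bytes from the Bluetooth socket and pass each one through a dispatcher like this.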
Challenges I ran into
A big challenge that I ran into was connecting my phone to my car and sending data back and forth. The problem was that my Raspberry Pi didn't use its hci0 interface to listen to my phone. To make that work, I had to do a little configuration of hci0 on my Raspberry Pi. If you run into this problem as well, you can check out this video here.
Accomplishments that I'm proud of
In this project, I am proud to have created my own remote controlled car which can be controlled with anyone's smartphone. Ever since I started working with the Raspberry Pi, I've been wanting a remote controlled car of my own, and thanks to this hackathon, I also had the motivation to make this car. Now that I have created a car, there are many other things that I could do with it. I could create a mopping bot, a vacuum bot, or many other kinds of robots, because all of them need to move!
What I learned
In this project, I learned a little bit about networking and how bluetooth works. I also learned about how to use bluetooth with Python to send and receive data. As far as it goes for the Raspberry Pi, I learned how to control DC Motors and I was able to understand the functionality of a transistor.
What's next for Bluetooth Controlled RC Car
I intend to improve my Bluetooth Car by adding a camera. The camera will provide live feed on a live flask server. I also think that it would be necessary to include a distance sensor. This way, if the car is about to crash into something, then the distance sensor can sense it prior to the car crashing, and make the car stop by itself. Perhaps, I might even make the car controlled from the flask website that also provides live camera feed.
Built With
android
bluetooth
python
raspberrypi
Try it out
github.com | Bluetooth Controlled RC Car | Control a Raspberry Pi Car via Bluetooth Mobile App | ['Sohan Dillikar'] | [] | ['android', 'bluetooth', 'python', 'raspberrypi'] | 4 |
10,570 | https://devpost.com/software/clean-water-detector-app-that-detects-cleanness-of-water | Instruction Screen
Home Screen
SINCE THE MODEL IS UNDER 1ST PHASE OF DEVELOPMENT PLEASE USE A WHITE SURFACE FOR KEEPING THE GLASS/BOTTLE SO THAT THE MODEL CAN PREDICT ACCURATE RESULTS
DOWNLOAD SAMPLE IMAGES FROM THIS LINK
link
DOWNLOAD APP FROM LINK AT BOTTOM
Inspiration
Dirty water is dangerous
In Africa, more than 315,000 children die every year from diarrhoeal diseases caused by unsafe water and poor sanitation. Globally, deaths from diarrhoea caused by unclean drinking water are estimated at 502,000 each year, most of them young children.
Every year 575,000 people die from water-related diseases. This is equivalent to a jumbo jet crashing every hour. Most of these people are children (2.2 million).
Unclean water and poor sanitation have claimed more lives over the past 100 years than any other cause. The water crisis claims more lives through disease than any war through guns.
844 million people lack access to safe drinking water. This is more than the combined populations of the United States, Brazil, Japan, Germany, France and Italy.
What it does
It calculates the cleanliness of the water with the help of a machine learning model that I made, then shows results indicating how clean or dirty the water is.
The COVID-19 detector is a complementary feature that I added to show the power and usefulness of AI. It is currently in beta.
How I built it
I built it using Flutter; with Flutter I can create both iOS and Android apps at the same time, making its availability vast. At the back end I used TensorFlow Lite to give the app the ability to run machine learning models offline. The model was made using Teachable Machine, powered by Google Cloud.
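The app reports cleanliness as a percentage. A minimal sketch of turning a two-class model output into those percentages is shown below; note the label order and the sample values are assumptions, since the actual Teachable Machine model and its output format aren't published.

```python
# Illustrative sketch only: the real app runs a TFLite model inside
# Flutter; the two-class output and label order below are assumed.

def cleanliness_percentages(probs, labels=("clean", "dirty")):
    """Turn the model's class probabilities into the percentages
    shown on the result screen."""
    total = sum(probs)
    if total <= 0:
        raise ValueError("probabilities must sum to a positive value")
    return {label: round(100.0 * p / total, 1)
            for label, p in zip(labels, probs)}

# e.g. a hypothetical model output of [0.82, 0.18]:
print(cleanliness_percentages([0.82, 0.18]))  # {'clean': 82.0, 'dirty': 18.0}
```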
Challenges I ran into
Being a solo developer, I ran into many challenges, but I reached my goals and I am happy to deliver this prototype on time.
Accomplishments that I'm proud of
I am really happy to contribute this project of mine for the entire people of the world so that they can have access to clean drinking water
What I learned
I learned a lot throughout making this app, as it was a really challenging task.
What's next for Clean Water/Covid-19 Detector App
If everything goes well with this app, I would really like to release it to the public, but before that I need to make some more minor improvements.
Built With
flutter
google-cloud
tensorflow
Try it out
drive.google.com | Clean Water/Covid-19 Detector App : iOS/Android compatible | Powered by Tensorflow lite model made using google cloud Teachable machine, Can detect the % of cleanness and dirtiness of water by just an image from your phone(even without Internet)!! | ['Udipta Koushik Das'] | [] | ['flutter', 'google-cloud', 'tensorflow'] | 5 |
10,570 | https://devpost.com/software/cloud-girl-edge-career | Home (with website description below)
Registration
Login
Student portal classrooms
Assignments
Career resources
Grade choice
Organization portal (with information upon clicking a name)
Teacher portal
Tutor portal (with information upon clicking a name)
What is your project Name?
The name of this project is Girls Career on Cloud
Who are the team members?
Sally Jain, Sonnet Xu and Marthe-Sarah Nkambou Tchunkuo
What languages is the app built with?
This web app's frontend is built with HTML, CSS and JavaScript, while the backend is developed using Node.js. The web server is hosted on a Google Cloud Platform virtual machine running Linux with a static IP address on port 8000.
Short Description of Project
Girls Career on Cloud is a web-based application tailored to helping young girls in rural communities receive further education. This education takes the form of virtual classrooms set up by organizers and teachers to teach a variety of subjects.
How did you get the idea to build the project?
We got the idea for this project by reading about how recent efforts around the world have met obstacles in improving the education of young girls in rural communities. While boys are usually supported in seeking higher education, young girls in these communities are frequently not given the same opportunity; limited finances often push families to send only their boys to urban centers for further education. We therefore sought a possible solution, and came up with the idea of a simple website that can help these rural communities set up virtual and onsite education centers tailored to these young girls.
What is the purpose of the app?
The purpose of this website is to help rural communities educate their young girls by providing a safe, virtual environment in which to reach the same educational level as their male peers. The website creates virtual classrooms for teachers and students, is monitored by educational organizations in order to maintain educational standards, and provides resources for these students for further education and future career paths.
How does Girls Career on Cloud assuage a multitude of negative implications COVID-19 has on society through innovative solutions?
The Girls Career on Cloud project will help rural communities around the world that are struggling during the current pandemic. This project is the work of future developers who hope to create projects that limit the negative effects of COVID-19 on different communities and regions. Many students around the world are suffering from a lack of further education due to COVID-19, but projects and applications such as this can help reduce that burden on students, especially young girls in rural communities.
By creating and maintaining this website, we are hopeful in helping young female students get the education they deserve in order to enter the workforce and achieve higher careers. By achieving higher careers with the higher education they receive, these young girls would gain future jobs and careers that not only help them but also their local communities. Better jobs as a result of more education and training lead to better purchasing power, more income to more people, and a chance to reduce poverty in communities by helping create more experts in their fields.
Virtual education not only improves education in the short-term, but it also leads to more benefits long-term. Furthermore, virtual education can be a money-saving endeavor. More students in virtual education spaces can reduce the need to construct large education centers in faraway rural communities, which also carries additional costs. These additional costs such as transportation, roads, new education, and social buildings, and other expenses can be avoided if virtual education is sought as a solution in response to the pandemic.
So, projects such as this one aim to bring more affordable education to demographics that truly need it, giving them better futures while helping nations and regions save their resources as best they can.
How is it beneficial to society?
Girls Career on Cloud is beneficial to society in many ways. The web page seeks to help educate young female students in rural communities around the world, leading to future benefits not only for them but also for their communities. More education and training can lead these promising students to future careers that help rural communities: better-educated girls can help build stronger families, earn higher wages, reduce poverty, and generally improve local economies through higher purchasing power.
Challenges we ran into
One significant challenge we faced when developing this project was figuring out how to connect the web portals for teachers, students, tutors, and organizers. We were conflicted on this issue: should we keep these screens separate while coding these segments, or should we find a way to join the portals together in a template to simplify the problem? Generally, a more complex web page brings more complications, no matter how skilled a programmer is with any programming language or project suite.
A second challenge we faced as a team was coordinating and adjusting the amount of effort and time needed to make sure our code was functional and complete. A significant amount of time was spent double-checking, and at one point we had to start over with a different framework to complete this project. Significant challenges that delay or completely stop projects can lead to discouragement; still, when dealing with a passion project or work that must be completed, it is best to keep pushing through and finish to the best of your abilities.
Accomplishments that we are proud of
The accomplishment we are most proud of is that we were able to code such a complex web application. It requires many systems to function properly, which led to issues in keeping track of coding and testing the various functions, especially since our initial attempt at the web page fell apart halfway through the project. We felt we had to scramble and work harder in the time we had left in order to finish.
What did we learn?
From previous mobile and web application projects, we learned the importance of time management when developing and completing a project. Proper planning, a strict schedule, double-checking work and code, and similar efforts must be at the forefront of a team's mind. This is especially important when something unexpected happens and initial work is wasted; such misfortune can lead to despair and a lack of confidence. However, once you set yourself to complete something, it is always best to finish the race even if you come in last place.
Summary
In summary, the purpose of Girls Career on Cloud is to give young rural girls, who often lag behind their male peers in education, the opportunity to expand their knowledge and skills through virtual classrooms online. The app's features help reach that goal: it has a classroom portal and interface that allows teachers and students to hold classes online, as well as listings of classes, educational material, and career resources tailored to these young female students. By improving the education of these young girls, rural communities and the world at large will reap the benefits, including better education and skills to pursue better careers, higher wages, and better chances for these girls to help their local communities as experts in their fields of study. While this sounds like a great web page, it is obviously not the only application with a virtual classroom portal. One example is Class Dojo, which shares many of the same features as Girls Career on Cloud but is tailored to a younger, general audience and does not have links to resources that can help young rural girls succeed in their studies.
What will be next after this project?
After building this project into a functional website, we will look at different ways to improve and modify the web application to suit users' needs. We will research how to make the web page more user-friendly and see what new features can be implemented into the code and web pages. Launching and maintaining this website will require a great deal of effort, time, and resources, so we must consider carefully what decisions to take in launching such a project. By setting up and maintaining a donations page on the website, we can further support and develop Girls Career on Cloud if enough donations come in; those donations would be used to help support the website and help students with further education.
Note
Please use the repl.it link if the server link does not work.
Built With
css3
gcp
html5
javascript
node.js
Try it out
34.121.226.52
github.com
Girls-Career-on-Cloud.cookiecuty123.repl.co | Girls Career on Cloud | Every student has a chance to learn, no matter how far the distance | ['Sally Jain', 'Marthe-Sarah Nkambou Tchunkuo', 'Faheem :-)', 'Amit Dandawate', 'Sonnet Xu'] | [] | ['css3', 'gcp', 'html5', 'javascript', 'node.js'] | 6 |
10,570 | https://devpost.com/software/hackloop | Open live demo hackloop http://hackloop.cleverapps.io
A cloud-based Android monitoring tool, powered by NodeJS, for tracking child activities.
Our team email: 7054company@gmail.com
Live demo :
http://hackloop.cleverapps.io
•Username = Admin
•Password = Password
Tracking app :
https://github.com/7054company/hackloop/raw/master/server/app_sample/hackloop.apk
Click to download and install it on Android. Open the app named 'Process Manager' and grant the requested access. After a few seconds you will see a notification that something is tracking your activity; just hide this notification from the system. Enjoy it!
Features
GPS Logging
Microphone Recording
View Contacts
SMS Logs
Send SMS
Call Logs
View Installed Apps
View Stub Permissions
Live Clipboard Logging
Live Notification Logging (WhatsApp, Facebook, Instagram, Gmail and more ....)
View WiFi Networks (logs previously seen)
File Explorer & Downloader
Command Queuing
Device Admin
Built In APK Builder
Installation on VPS or Server
Prerequisites
NodeJs
A Server
Disclaimer
Hackloop gives you a superpower: with it you will be able to track your child's activities. You are responsible for how you use it.
Hackloop is built for educational purposes. Use at your own risk.
What's up
This project was created in view of today's world conditions. In recent times, many situations have become more unpleasant, such as COVID, terrorism, etc.
Made with ❤️ By
Hackloop
Built With
css
html
java
javascript
npm
smali
Try it out
github.com | hackloop | The secure need to security ... Andriod activity tracking app to track you child activities .. #security | ['7054company frank'] | [] | ['css', 'html', 'java', 'javascript', 'npm', 'smali'] | 7 |
10,570 | https://devpost.com/software/good-vibes-web | The problem Xper solves
In the world of web development, I have always faced one major, irritating problem: the responsiveness of a website. Every time I am developing a website, I make a quick change and push it in order to quickly check how it looks on my phone. And it does not even update in real time!!! Now I know we can simply turn on the inspector and toggle to mobile screen mode to get an idea of how it might look on a mobile device, but is it accurate? I still always have this urge to check something that I spent hours working on in real time, on my phone!!
Imagine, a tool/code editor where you can simply write code, and then deploy it, and see your deployed code update in realtime, as you code on all DEVICES that has your website open. Imagine how easy it would be to see your code’s output just after you make that small two line change to your code and see it update in REALTIME on your phone without connecting your laptop to it. Imagine being able to edit your code on any device that you visit your website from!!
Built With
acejs
firebase
javascript
react
Try it out
xperbycoder.netlify.app
github.com | Xper | Xper is a realtime code editor where you can both write and save your code in realtime! | ['Jaagrav Seal'] | [] | ['acejs', 'firebase', 'javascript', 'react'] | 8 |
10,570 | https://devpost.com/software/nelayan |
Never retreating, even when waves and wind stand in the way.
For the sake of a livelihood.
Built With
kapal
Try it out
www.facebook.com | Nelayan | Menjual Ikan Lewat Online | ['Frenskey Sunnil'] | [] | ['kapal'] | 9 |
10,570 | https://devpost.com/software/c-care-f4zd7j | Inspiration
During the current COVID-19 pandemic, I see health workers curing patients, doctors developing new medicines, police controlling crowd movement, and even bus drivers helping people get back home. As a future engineer, I felt my contribution was none, so I felt motivated to do my part, try to bring a positive change, and make sure my product can also be used in a future pandemic.
The problem our project solves
The massive spread of COVID-19 is largely due to one major reason: an infected person can be asymptomatic for up to 21 days and still be contagious, so the only way to contain the spread is by wearing a mask and maintaining hand hygiene. WHO and CDC reports said that if everyone wears a mask and maintains hygiene, the number of cases can be reduced three-fold. But how do we do that? How can we make everyone habituated to following safety precautions so that normalization can take place?
What our project does
Our app is a first-of-its-kind safety awareness system, built on the Google Geofencing API. It creates a geofence around the user's home location; whenever the user leaves home, they get a notification in the C-CARE app ('WEAR MASK'), and when they return home, they get another notification ('WASH HANDS'), ensuring the full safety of the user and their family. It also has additional features such as i.) a HOTSPOT WARNING SYSTEM, in which a user entering a COVID hotspot region is alerted to maintain 'SOCIAL DISTANCING', and ii.) a statistics board where the user can see how many times they have visited each of these geofences. With repeated notifications, we make people habituated to wearing masks, washing hands, and social distancing, which makes each and every one of us a COVID warrior: with C-CARE, we are not only protecting ourselves but also protecting others.
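The geofence transition logic described above can be sketched as follows. This is an illustrative approximation: the real app uses the Google Geofencing API rather than a hand-rolled distance check, and the coordinates, radius, and return values below are assumed, not taken from the C-CARE code.

```python
# Minimal sketch of the home-geofence behaviour described above.
# Assumed values: home coordinates, a 100 m fence radius, and the
# notification strings; the real app delegates this to the Geofencing API.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in metres."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def fence_event(home, pos, radius_m=100.0, was_inside=True):
    """Return the notification to fire when the user crosses the fence."""
    inside = haversine_m(home[0], home[1], pos[0], pos[1]) <= radius_m
    if was_inside and not inside:
        return "WEAR MASK"      # leaving home
    if not was_inside and inside:
        return "WASH HANDS"     # returning home
    return None                 # no transition, no notification

home = (20.2961, 85.8245)                     # hypothetical home location
print(fence_event(home, (20.3000, 85.8245)))  # ~430 m away: leaving home
```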
Challenges we ran into
1.) We lacked financial support, as we had to make this app from scratch.
2.) We had problems collecting data on government-certified hotspots, and we also had to do a lot of research on the spread pattern of COVID-19.
3.) Due to a lack of mentors, whenever the app stopped working we had to figure out how to correct the error ourselves.
4.) It took us a long time to test it in real conditions, since during lockdown it was too hard to go outside; finally, after the lockdown loosened a bit, we tested it and it gave excellent results.
5.) We didn't know much about geofencing before this, so we had to learn it from scratch using YouTube videos.
Accomplishments that we're proud of
We’re proud to have completed our project in the period of this hackathon. Additionally, we’re proud of how we’ve dealt with time pressure and worked cohesively as a team to actualize our start-up goals, which we believe would have a genuinely positive impact on saving many lives once implemented properly.
What we learned
All team members of C-CARE were able to grow their area of competence by participating in the whole process of idea definition, market research, validation, prototyping, and presentation. Through different mentor sessions, we learned that problems could be approached by many means, but most importantly our mission should be clear.
What's next for C - CARE
COVID cases are increasing every day, and chances are low that a vaccine can be created immediately; apps like C-CARE will play a crucial role in lowering the spread of infection until a proper vaccine is made. Our app can also be used for a future pandemic, or for seasonal diseases such as swine flu or bird flu.
Built With
android-studio
geofence
google-maps
java
sqlite
Try it out
github.com | C-CARE APP | C - CARE An app that makes ever person a COVID warrior. | ['Anup Paikaray', 'Arnab Paikaray'] | ['Our First Hackcation'] | ['android-studio', 'geofence', 'google-maps', 'java', 'sqlite'] | 10 |
10,570 | https://devpost.com/software/psychopathology-assistant | The components of the Tensorflow environment I used.
The custom model I built and its proposed residual block containing convolutions.
The model was trained with Google Colab using GPUs, and a confusion matrix after only 10 epochs.
About the public dataset I used with the Kaggle API.
The structure of the hardware used for tflite serving and real-time streaming after quantization and deployment.
Screenshots of the web platform with the technologies used.
Psychopathology Assistant
Because mental health matters.
View the demo »
Table of Contents
About the Project
Motivation
Built With
Getting Started
Prerequisites
Installation
Usage
Data Exploration
Model Training
Web Application
Model Serving
Roadmap
License
Contact
Acknowledgements
What it does
An intelligent assistant platform to track psychopathology patients' responses during face-to-face and remote sessions.
This platform makes use of a machine learning algorithm capable of tracking and detecting facial expressions to identify associated emotions through a camera. This allows the corresponding medical staff to take care of their patients by creating medical records supported by the artificially intelligent system, so they can follow up on the corresponding treatments.
Inspiration
Some facts:
Anxiety disorders, Mood disorders, Schizophrenia and psychotic disorders, Dementia...
Over 50 percent of all people who die by suicide suffer from major depression
Most of these disorders are treated primarily through medications and psychotherapy
THIS IS THE MAIN REASON FOR THE PLATFORM AS A COMPLEMENTARY SOLUTION
As you may have, I have had depression, and I can only ask myself: "what are we doing to help others avoid or decrease their suffering?"
Mental health is important. And as I have mentioned, most of these disorders are treated primarily through medications and psychotherapy, and tracking the emotional responses of patients during psychotherapy sessions is important, as it reveals progress in their treatment. This is why I am trying to help with this AI-based platform.
How I built it
This project has been built with a lot of love, motivation to help others and Python, using:
Tensorflow 2.0
Google Colab (with its wonderful GPUs)
Model quantization with tf.lite for serving
A Raspberry Pi Model 3B+
A real-time Flask and Dash integration (along with Dash Bootstrap Components)
A real-time database, of course, from Firebase
The Kaggle API to get the dataset
Getting Started
To get a local copy up and running follow these simple steps.
Prerequisites
This is an example of how to list the things you need to use the software and how to install them. For this particular section I will suppose that you already have git installed on your system.
For a general overview of the Raspberry Pi setup, you can check out my blog tutorial on how to set up your Raspberry Pi Model B as Google Colab (Feb '19) to work with Tensorflow, Keras and OpenCV, as those are the steps that we will follow. In any case, this specific setup can be seen in the corresponding rpi folder.
Installation
Clone the psychopathology-fer-assistant repo:
git clone https://github.com/RodolfoFerro/psychopathology-fer-assistant.git
Create a virtual environment with Python 3.7. (For this step I will assume that you are able to create a virtual environment with virtualenv or conda, but in any case you can check Real Python's post about virtual environments.)
Install requirements using pip:
pip install -r requirements.txt
You may also need to create your own real-time database on Firebase and set the corresponding configuration variables in the app/__init__.py and rpi/main.py files.
Run the dashboard
To run the dashboard you will need to get access to the MongoDB cluster by setting the MONGO_URI variable in the corresponding db file. Once you have done this and have installed the requirements, get the dashboard up and running with:
python run.py
Usage
Data Exploration
The dataset used for this project is the one published in the "
Challenges in Representation Learning: Facial Expression Recognition Challenge
" by Kaggle. This dataset has been used to train a custom model built with Tensorflow 2.0.
The data consists of 48x48 pixel grayscale images of faces. The faces have been automatically registered so that the face is more or less centered and occupies about the same amount of space in each image. The task is to categorize each face based on the emotion shown in the facial expression in to one of seven categories (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral). A sample of the dataset can be seen in the next image.
If you would like to see the data exploration process, check out the notebook found in the
data folder
, or click on the following button to open it directly into Google Colab.
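The 48x48 grayscale format described above corresponds to the `pixels` column of the Kaggle CSV, a single space-separated string per image; a minimal stdlib sketch of decoding one row (the helper name is mine, not from the repo):

```python
def decode_pixels(pixel_str, size=48):
    """Turn the space-separated pixel column of fer2013.csv into a size x size grid.

    Assumes the standard Kaggle format: 48*48 = 2304 grayscale values per row.
    """
    values = [int(p) for p in pixel_str.split()]
    assert len(values) == size * size, "unexpected pixel count"
    return [values[r * size:(r + 1) * size] for r in range(size)]

# The seven class indices used throughout the dataset.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

sample = " ".join(["128"] * (48 * 48))  # a flat grey placeholder image
grid = decode_pixels(sample)
print(len(grid), len(grid[0]))  # 48 48
```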
Model Training
After doing some research in the state of the art for Facial Expression Recognition tasks, I found that in "
Extended deep neural network for facial emotion recognition (EDNN)
" by Deepak Kumar Jaina, Pourya Shamsolmoalib, and Paramjit Sehdev (Elsevier – Pattern Recognition Letters 2019), the proposed model turns out to achieve better results in classification tasks for Facial Expression Recognition, and by the architecture metrics this network turns out to be a more lightweight model compared with others (such as LeNet or Mobile Net).
As part of the project development
I implemented the proposed model from scratch using TensorFlow 2.0
. For training I used the previously mentioned dataset from the "
Challenges in Representation Learning: Facial Expression Recognition Challenge
" by Kaggle
on a Google Colab environment using GPUs
. So far the model was trained
for only 12 epochs using a batch size of 64
. The training history can be seen in the following graphs:
Although the results may not seem quite good,
the model has achieved an accuracy value of 0.4738 on the validation dataset with only 12 training epochs
, with a result that could be part of the top 35 scores in the
challenge leaderboard
. We can get a general idea of the model performance in the confusion matrix:
The trained model architecture and the tflite-quantized model (for deployment on the Raspberry Pi) can be found in the
model folder
. Finally, if you want to re-train the model and verify the results on your own, or if you are simply curious to understand the whole process of building and training the model in detail, check out the notebook in the same folder, or click the following button to open it directly in Google Colab.
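The confusion matrix mentioned above can be reproduced without any ML framework once the labels and predictions are exported; a minimal stdlib sketch over the seven emotion indices (the toy labels below are illustrative, not the real results):

```python
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def confusion_matrix(y_true, y_pred, n_classes=7):
    """Rows index the true class, columns the predicted class."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

# Toy labels: two correct "Happy" (3) predictions, one "Sad" (4) mistaken for "Neutral" (6).
cm = confusion_matrix([3, 3, 4], [3, 3, 6])
print(cm[3][3], cm[4][6])  # 2 1
```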
UPDATE:
I have trained the same model with a research database (the
Radboud Faces Database
) obtaining an accuracy of 0.9563 with 50 epochs, a learning rate of 0.00001 and a batch size of 128, after doing some pre-processing and data augmentation. Due to the privacy of the database I won't be able to share more details about this, but in any case PLEASE feel free to reach me at:
ferro@cimat.mx
As you may wonder about the results, the training history and the confusion matrix may illustrate more about them:
Web Application
The web application is the base of interaction for the medical staff during the treatment sessions. This web platform aims to integrate a medical record for patients, and a realtime dashboard to make use of the AI power for the FER tasks during sessions.
The platform has been entirely developed in Python on top of a Flask and Dash integration, along with Dash Bootstrap Components for more intuitive interaction. The platform serves a real-time plot fed by the trained model deployed on the Raspberry Pi, which sends its predictions in real time to a Firebase-hosted real-time database. The platform already includes a login view (
user: rodo_ferro
,
password: admin
) to access the dashboard and patients' records.
Model Serving
The following image illustrates a general idea of the model serving on the Raspberry Pi:
Once the model has been trained, saved,
quantized
and downloaded, it was ported to a Raspberry Pi Model 3B+. The Raspberry Pi connects directly to the real-time database in Firebase to send the data as the deployed model predicts.
The script that serves as the interface between the Raspberry Pi and the database is capable of printing metrics of the model performance as well as the device performance during the time the model is serving its results. In general,
the served model with tflite takes only ~3% of the Raspberry Pi CPU and the prediction time is in the range (0.005, 0.015) seconds
, as you may see in the following example of its output:
* Time for face 0 det.: 0.0017399787902832031
* Time for prediction: 0.0062448978424072266
* Process ID: 50495
* Memory 2.8620%
* Emotion: Neutral
* Time for face 0 det.: 0.0023512840270996094
* Time for prediction: 0.0059719085693359375
* Process ID: 50495
* Memory 2.8629%
* Emotion: Neutral
* Time for face 0 det.: 0.0016210079193115234
* Time for prediction: 0.006102085113525391
* Process ID: 50495
* Memory 2.8629%
* Emotion: Neutral
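The timing lines in the log above amount to wrapping the per-frame inference in a timer; a stdlib sketch with a stubbed model (`predict_stub` stands in for the tflite interpreter call and is not the repo's actual code):

```python
import time

def predict_stub(face_crop):
    """Placeholder for the tflite interpreter invocation on one face crop.

    The sleep simulates the ~5-15 ms inference time reported above;
    it is not the real model.
    """
    time.sleep(0.005)
    return "Neutral"

start = time.perf_counter()
emotion = predict_stub(face_crop=None)
elapsed = time.perf_counter() - start
print(f"* Time for prediction: {elapsed}")
print(f"* Emotion: {emotion}")
```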
The complete details on how to setup a Raspberry Pi and how to run the Python script to communicate with Firebase can be found inside the
rpi
folder.
Challenges I ran into and What I learned
One of the main challenges was to create a real-time dashboard without much knowledge of web development (my major area of study is mathematics), which is why I found that Dash along with Flask were the most suitable technologies to tackle this need. The second main challenge (once I learned about creating the dashboard) was to create a real-time streaming service to save the data gathered by the deployed model. The solution was to integrate a Firebase real-time database and then connect the dashboard to the same database, so it could be updated in real time. Finally, this was the first time I served a trained model using tflite on a Raspberry Pi, and working with an outdated Raspberry Pi version was a struggle.
In the end, I learned that even when you think there is no way out, motivation can help you find alternative solutions with new technologies.
Accomplishments that I'm proud of
Building a custom model from a paper proposal
Serving the model on a Raspberry Pi using tflite
Sending real-time predictions to a real-time dashboard
Learning new technologies in a record time
Starting to create a platform that will help others
What's next for Psychopathology Assistant
Develop own embedded device for the model deployment (which should already include a camera)
Improve user data acquisition through the real-time service
Add medical recording to database
Implement patients' medical records analytics
Add security metrics for medical records
Test prototype with a psychologist/psychiatrist
Contact
Rodolfo Ferro -
@FerroRodolfo
-
rodolfoferroperez@gmail.com
Project Link:
https://github.com/RodolfoFerro/psychopathology-fer-assistant
IF YOU THINK THAT YOU CAN HELP ME TO HELP OTHERS, PLEASE DO NOT HESITATE TO CONTACT ME.
Acknowledgements
Icons made by
Smashicons
from
www.flaticon.com
Icons made by
Flat Icons
from
www.flaticon.com
Icons made by
Becris
from
www.flaticon.com
Built With
colab
dash
dash-bootstrap-components
firebase
flask
kaggle-api
love
motivation
python
raspberry-pi
tensorflow
tflite
Try it out
github.com | Psychopathology Assistant | An intelligent assistant platform to track psychopathology patients responses during face-to-face and remote sessions, by facial expression tracking and emotions recognition on an embedded device. | ['Rodolfo Ferro'] | ['Attend the next TensorFlow developer event!'] | ['colab', 'dash', 'dash-bootstrap-components', 'firebase', 'flask', 'kaggle-api', 'love', 'motivation', 'python', 'raspberry-pi', 'tensorflow', 'tflite'] | 11 |
10,570 | https://devpost.com/software/communicate-through-text-messaging-file-share-group-call | Inspiration
From my own interest in VoIP and modern communications.
What it does
It's a multi-user audio, video and screen-calling application with text chat and file sharing, built as a web app (PWA) backed by microservices.
How I built it
Initially I did some R&D on VoIP. Then I learned about WebRTC, which has a few limitations. I then developed a complete PWA web app with microservice-based backend services, e.g. Consumers (users), Messenger (signalling) and Application (authentication), plus one PWA app built with modern JavaScript and HTML.
Challenges I ran into
This is a completely R&D-based application, so I faced a lot of challenges; I have overcome most of them. It now needs to be tested across more countries and regions.
Accomplishments that I'm proud of
Currently, the application has received good recognition from users.
What I learned
I learned a lot about network-based WebRTC communication, mesh networking for multi-user calling, and multi-step signalling services for telecom.
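The mesh-networking lesson above has a concrete cost behind it: in a full mesh, every participant holds a direct WebRTC connection to every other, so the link count grows quadratically. A quick sketch:

```python
def mesh_connections(participants: int) -> int:
    """Number of direct peer-to-peer links in a full-mesh call."""
    return participants * (participants - 1) // 2

# Each added participant must open connections to everyone already in the call.
for n in (2, 4, 8):
    print(n, mesh_connections(n))  # 2->1, 4->6, 8->28
```

This is why full-mesh calling only scales to small groups; larger calls typically move to an SFU or MCU topology.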
What's next for Communicate through text messaging, file share & group call
I am now working on a conference application as part of the messenger; with conferencing, it can become a tool for getting office work done.
Built With
mongodb
php
signal
webrtc
Try it out
www.webtalk.app | Communicate through text messaging, file share & group call | Better Communications | ['Santanu Brahma'] | [] | ['mongodb', 'php', 'signal', 'webrtc'] | 12 |
10,570 | https://devpost.com/software/invite-friends-to-devpost-jobs | Who we are
Our new Devpost Invite Friends feature incentivizes Devpost members to share our new product with other software developers.
How it works
We'll give Devpost members $1,000 when they refer a candidate to Devpost Jobs through their unique tracking link and that candidate gets hired for a job through Devpost.
Built With
facebook-messenger
linkedin
ruby-on-rails
twitter | Invite Friends to Devpost jobs | Our Customer invite friend url can be shared out on Facebook, Twitter, Linkedin and via email. | ['Md Farhad Hossain'] | [] | ['facebook-messenger', 'linkedin', 'ruby-on-rails', 'twitter'] | 13 |
10,570 | https://devpost.com/software/smart-class-n0e9mw | Landing
Features
Attendance Taker and Interaction interface
Bot Join the call
Note Taker
Attentiveness Tracker
Inspiration
Due to the worldwide pandemic, the education sector is one of the most affected, and in this situation online learning is the only hope. These days, online learning has emerged as one of the leading ways to deliver education, and the government is looking for ways to shift education to online platforms due to the pandemic. It has become difficult for administrations such as schools and colleges to get unbiased feedback about the faculty from the students.
What it does
Our solution ie SMART CLASS Application helps professors better interact with those in their class and track their students' comprehension of the material with numerous ways to collect more data about classroom engagement. i.e. Total number of hands raised on a particular question, class attendance scheduling at specific time, attention analyzer of the students, and feedback of the students by face recognition.
Our SMART CLASS bot will join the online meeting on Zoom and collect information from the browser client in the background of the host's computer. It will analyze the behaviour of the students/members, and with the power of the Smart Class app, teachers can also write and draw in the air, with the result shown on the screen and streamed live to the students' screens.
How we built it
The data gathered using our python + selenium component is fed into our python + tkinter interface that is displayed on the host's computer, alongside their Zoom client.
We built a bot using python and selenium to join the call (headless-ly) and collect all the information from the browser client in the background of the host's computer.
Note taking feature using web-speech-api.
Used CanvasJS for graph attentive analysis.
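Once the Selenium bot has scraped the participant states, tallying raised hands is a simple reduction; a stdlib sketch (the dict shape is an assumption about the scraper's output, since Zoom exposes no API for this):

```python
def count_raised_hands(participants):
    """Count how many scraped participants currently have their hand raised.

    `participants` is assumed to be a list of dicts built by the Selenium
    scraper, e.g. {"name": ..., "hand_raised": bool}; this shape is
    illustrative, not Zoom's actual DOM structure.
    """
    return sum(1 for p in participants if p.get("hand_raised"))

snapshot = [
    {"name": "Asha", "hand_raised": True},
    {"name": "Ravi", "hand_raised": False},
    {"name": "Meera", "hand_raised": True},
]
print(count_raised_hands(snapshot))  # 2
```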
Challenges we ran into
Zoom has no API for accessing a lot of the features we wanted to use, like the number of people raising their hands, the ability to send messages, the ability to get current users, etc.
While we had success with actually recognizing facial expressions, making a machine learning model that is accurate was a tough task.
Accomplishments that we're proud of
Built a self-contained, fairly full-featured client to interface with the Zoom client headlessly, providing some features which Zoom itself does not offer.
What we learned
Throughout the hackathon we learned to work with APIs and use them properly, and to use machine learning despite being unfamiliar with it.
What's next for SMART CLASS
Feedback Expression Analyzer which uses face recognition and gives the automated feedback of the students.
Creating more accessible online classroom with its closed captioning service. This allows users with limited hearing to follow along more closely which improves usability
Built With
canvas
css
google-cloud
html
javascript
machine-learning
python
selenium
tkiner
Try it out
github.com | Smart Class | SMART CLASS Application helps professors better interact with students in their class and track their classroom engagement | ['Ashutosh Kumar verma', 'Atishay Srivastava', 'Yashashvi Singh Bhadauria', 'Arpit Agarwal'] | [] | ['canvas', 'css', 'google-cloud', 'html', 'javascript', 'machine-learning', 'python', 'selenium', 'tkiner'] | 14 |
10,570 | https://devpost.com/software/qeasy-sz45e2 | Name and Logo of the product
Sign In Screen
Sign Up Screen
Boot Screen
Dashboard of Users
Ticket Generated for an appointment
Book an appointment Screen
Inspiration
With the onset of the year 2020, came new opportunities and new problems. From Australian bushfires to almost reaching World War 3 we saw too much. And then came the world astonishing pandemic that has taken around 7 crore lives till now. The prevailing of new scenarios forced everybody to adapt to them by bringing major reforms and changes. But by taking bold steps like imposing lockdown, digitizing education by taking online classes came other new issues and I was concerned about them a lot.
One day, I got to know that one of my uncles had tested COVID positive during the very early stages of COVID in India. We then learned that he ran a ration shop which needed to stay open, as 'Essential Items' shops were instructed to remain open. This was one motivating factor. Another factor is a grocery shop near my house run by Mr. Sachin. I noticed that there used to be an unmanageable crowd at his shop every day, and each customer had a wait time of well over an hour. A few of his customers were infected with COVID-19, and because of this other customers at his shop also got infected, along with some of the shop's employees. This was proving to be a serious problem for Mr. Sachin. I then thought that in these times, due to the highly contagious nature of COVID, we cannot afford to have crowds and queues. But shops need to stay open, and not only shops but also public offices, restaurants, salons and service units. All of these centers have to run, but without crowds or queues. Researching more on this, I realized that this issue is a very major concern and a solution has to be found, otherwise performing everyday tasks would become life-threatening. So, I was now firm on my problem and determined to find a solution. Therefore, let me propose the problem statement here first. The following problems have become widespread and highly dangerous due to the contagious nature of COVID-19, with respect to shopping at grocery, vegetable, milk and medical stores and other essential units: 1. Excessively long queue lines with unlimited wait times. 2. An inefficient queue management system. 3. Violation of social distancing in queues, making people more and more vulnerable to infection. Such inefficient management and disobeying of social-distancing norms can turn queues at stores into COVID-19 hotspots.
What it does
Then, I came to the final solution: 'QEasy', queues made easy. QEasy is a mobile application through which we form queues, but VIRTUALLY. The QEasy mobile application is a digital solution to drastically reduce lines at stores and allow efficient practising of social distancing. The app does this through the concept of 'virtual queues' (a computerised system that lets visitors secure their place in a virtual queue rather than waiting in a physical one). The app can be used to eradicate crowds and physical waiting, and as a tool to maintain social distance. The app creates a situation where there are no customers standing in queues, so the probability of spreading the virus is much lower. The app works on the principle of token codes and time allotments for customers. After signing in to the app, the user can search for the shop he/she wants to visit. After selecting the shop, the user enters the date and time of the visit. If that time slot is available, the user can confirm the booking; otherwise the app asks them to choose another slot. While booking a slot, the customer can also add an order, which is sent to the shop along with the booking. After confirming the booking, the app generates a token number. At the time of the visit, the shop first cross-verifies the token number with the customer and then allows them to enter. If an order was placed, the customer can pick it up directly and go. That's how shopping and working can be made easy with QEasy. The reasons I chose this as my solution: 1. I applied the QBL principle, which was best suited to this. 2. This solution is not limited to shopping but applies to every product- and service-providing unit, from boutiques, tailoring shops and dairies to government offices and banks. Everyone should adopt this app in order to eliminate long queues and crowds. 3.
This solution is not limited to the COVID era. One very important thing to consider is that a solution presented in the COVID era should also be scalable during the non-COVID era, and this solution is built to scale: COVID gives it the right platform to launch, and then it will flourish immensely. 4. Also, seeing its numerous advantages, I became confident in this solution. Some of its advantages are: 1. From the customer's side: they save time (the most precious asset); they save energy and effort; no waiting in queues; no more standing in unbearable or extreme weather; contact-less and safe visits; immediate assistance; hassle-free cancellation of appointments. 2. From the shop's/business' side: attract and retain more customers through a stress-free waiting experience; build customers' confidence to visit; customers stay safe, so the stores stay open; know which of your customers are waiting; stand out against competitors; increase your sales and productivity; less floor space and workforce are required. Thus, I can say that my idea is Quick-Impacted, Bold and Long-Lasting.
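The token-and-time-slot flow described above can be sketched in a few lines; this is an illustrative model of the logic, not QEasy's actual code (the class, method names and token format are assumptions):

```python
import secrets

class Shop:
    """Illustrative sketch of QEasy-style slot booking with token numbers."""

    def __init__(self, slots):
        self.free_slots = set(slots)
        self.bookings = {}  # token -> booked slot

    def book(self, slot):
        """Reserve a slot; returns a token, or None if the slot is taken."""
        if slot not in self.free_slots:
            return None  # the app would ask the customer to pick another slot
        self.free_slots.remove(slot)
        token = secrets.token_hex(4)  # token number shown at the shop entrance
        self.bookings[token] = slot
        return token

    def verify(self, token):
        """Cross-verify a customer's token at the time of the visit."""
        return token in self.bookings

shop = Shop(["10:00", "10:30"])
token = shop.book("10:00")
print(shop.verify(token), shop.book("10:00"))  # True None
```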
How I built it
My solution is a low-investment idea. Our first target was to create the product. We researched the technical aspects of the mobile app thoroughly so as to create the most secure yet easy-to-use mobile app. We first created the prototype of the mobile app, understanding the app flow thoroughly, and then started to develop the app in software. The key point in our minds was to provide the best user interface, so the app is very easy to use. After the successful development of the app, we had 4-6 rounds of testing to ensure its smooth functioning. Initially, we are focusing on making it operational in Delhi NCR and then scaling it step by step. The app's implementation will unfold in three phases as per our plan. First phase: we will focus on making it available to the shops and service-provider units near us, targeting 100 shops to adopt our solution. Second phase: after the successful implementation of phase 1, we will gain insights and improve the app based on what we learned in phase 1. Keeping that in mind, in the second phase we aim to make it operational in the other metro cities of India like Mumbai, Kolkata, Jaipur, Indore, Chennai etc. Third phase: after the successful implementation of phase 2, and again improving our app, we will expand the app globally.
What was the result or impact of your project?
Till now, approximately 25 shops have successfully adopted our idea, and we are gathering their feedback on a regular basis. 1. Impact on shopkeepers/service providers: the shopkeepers firmly agreed that their productivity and sales have increased. They are now able to handle more customers and use their time fruitfully. The crowd is limited and manageable as per their requirements. Shopkeepers were very satisfied with the experience. They also stated that they now require less workforce and are able to re-purpose their floor space. Most importantly, with fewer people present at the same time, social distancing can be followed properly, and they assured us that their customers are now more confident to visit.
Challenges I ran into
"A journey is never a journey without obstacles." Yes, we did face challenges in between implementing our idea. 1. The concept of the app was a little complex to code. We had to frame the logic ourselves in order to develop the main functioning of the app and it was very difficult. Being a school student we were not much aware of the concepts which should be inculcated. Nevertheless, we took help from the elders. My mother played a huge supporting role during the whole process. She helped me frame the correct logic and taught me various concepts. So, that's how we solved this challenge by actually learning. 2. Since, I am a school student that too of class 11, with the science stream , therefore, there is a lot of pressure of studies. Attending school, then coaching classes, and then taking out time for the thing which I like the most of developing a solution was getting more and more difficult for me. And eventually, neither I was able to study properly nor progress in the solution. Then, I realised that something has to be done. Therefore, I explored the Internet finding ways and techniques to my issues. And then I started to create deadlines for me. I chose a date, jotted down all the tasks I want complete till then and then did anything to complete it. This technique actually helped me to increase my productivity. 3. Also, my family had only one laptop. So, since my mother is a teacher she would require the laptop for taking online classes and for doing other work. That is why I was not able to get the laptop to work. So, I decided to work in the night to build a proper balance. So, in the days the laptop would remain with my mom and at nights it would remain with me so that I could work with peace. Working at night and then again waking up early in the morning to attend school classes was a tedious task for me but I still enjoyed the journey.
Accomplishments that I'm proud of
There are a couple of accomplishments that I am proud of.
With this I won the first runner-up position at the Creo Entrepreneurship Competition.
I won the second runner-up position at Mount Caramel School.
I won the first runner-up position at Bhartiya Vidya Bhavan School. The award was presented by NASSCOM.
What I learned
The learning I get from a project is always immense. I literally learned a lot during the whole tenure of development of this product.
I learnt how queue and crowd management works. And while coding the whole app I learnt a lot of new algorithms and logic that helped me create the system.
Also, I learnt time management, how to organize my work and how to be more productive.
I learnt new concepts of UI/UX designs. I have tried to design the best ever GUI for my app and it is the best among all my other products.
What's next for QEasy
In order to sustain any project, the most important point to remember is to constantly improve the app, because there is always scope for improvement. Changes with time help the product adapt to any circumstance, and these changes build a strong foundation for a successful startup. 2. Therefore, we will keep introducing new features that engage more and more consumers. Some of our future features: > Product Display: displaying all products for a more visual experience. > Collaboration with government offices and public sectors, which would help expand the solution at an enormous scale. > Customer analytics powered by Artificial Intelligence. This feature would make us more than a queuing system: with real-time and historical data available at their fingertips, shops and businesses would be able to measure both customer satisfaction and staff performance.
Built With
firebase
kodular | QEasy | Virtual Queuing Solutions | ['Aditi Jain'] | [] | ['firebase', 'kodular'] | 15 |
10,570 | https://devpost.com/software/grocer-s-point | Inspiration
On 24 March 2020, Shri Narendra Modi, the Prime Minister of India ordered a nationwide lockdown for 21 days, limiting movement of the entire 1.3 billion population of India as a preventive measure against the COVID-19 pandemic in India. All businesses, except those selling essentials were shut down. Even the later could function only under limited hours during a day, and had to follow strict social distancing to prevent further spread of the virus.
Now even after more than 4 months, these small businesses are still reeling from the impact. The virus is still prevalent throughout the country, and businesses are struggling under its effects. Due to India's high population density, it's an incredibly difficult task for them to maintain social distancing at their stores.
Meanwhile, giants such as Zomato and Swiggy ramped up their services during the lockdown and now also provide home deliveries for grocery items, which has put small-scale grocery shops in an even more disadvantageous position. COVID-19 is here to stay, at least for a couple of years, and if this pattern continues, these small businesses will soon run out of fuel, which will be a great blow to the economy.
Solution
Grocer's Point is an app-based e-commerce platform where Users can view and shop from their neighborhood groceries via digital means. This way, they can keep purchasing from the Shops they previously used to, only through a digital platform for a more comfortable and safer experience.
Easy and Secure Registration
Users can sign up instantly using their Phone Numbers, and have to just provide their name and address, before they can start shopping.
If a Shopkeeper wants to add their Shop on our platform, they have to provide us with all details of their Shop, including clear pictures, location, registration papers. Upon verification from our side, his shop will be approved and visible on our System.
The Notebook System
The Groceries in India contain a vast array of Items, some branded and others unbranded. And every shop differs from the other in both size and actual content of their Inventory, which makes it very difficult to manage a unified Inventory System.
For that purpose, we're providing a Notebook System for taking orders. In the old days, households would write down their necessities on a piece of paper and pass it to the shopkeeper, who would get them the items. We're following the same approach here with a modern take. While placing an order, the customer provides a list of items he wants to purchase, which is then sent to the shopkeeper for review.
Price, Item Availability and Time Slots
Upon receiving an order, the Shopkeeper will go through the list of items and add the price for each. If an item is not available, he can just strike it off the list and the Customer won't be charged for it. Then, he has to add a Time Slot for when the Customer can come and pick up his order from the store. Doing so ensures that the crowd is always at a manageable level and social distancing can be maintained.
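The review step described above (pricing each item and striking off unavailable ones so the customer isn't charged for them) can be sketched as follows; the function name, item names and prices are illustrative, not from the app:

```python
def review_order(items, price_list):
    """Shopkeeper prices each requested item; items not in stock get None.

    Illustrative sketch of the Notebook System's review step: the total
    only counts items the shop can actually supply.
    """
    reviewed = {item: price_list.get(item) for item in items}
    total = sum(p for p in reviewed.values() if p is not None)
    return reviewed, total

# The customer's list, and what the shop actually stocks (prices in INR).
order = ["rice 5kg", "sugar 1kg", "imported cheese"]
stock = {"rice 5kg": 260, "sugar 1kg": 45}
reviewed, total = review_order(order, stock)
print(total)  # 305
```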
Technology Stack
Flutter - The skeleton of the project. We used Google's New Framework for creating our Cross-Platform Mobile Application.
Firebase - We use Firebase, a BaaS (Backend as a Service) as our backend as it provides many features such as Phone Authentication, Cloud Storage, no-SQL Database and Cloud Messaging out-of-the-box that we required in this project
Challenges I ran into
The biggest challenge for me was user validation. I began the ideation phase for this project back in April, when the whole country was in lockdown, due to which it was difficult to get hold of grocery shop owners for their input and feedback on the current iteration of the application. Once the lockdown restrictions eased, I talked with many of them, discussing what problems they were facing due to the pandemic and brainstorming possible solutions.
What I learned
I already had tons of prior experience building mobile applications. But I also had to write Cloud Functions for the notifications in the application, for which I used NodeJS. I had limited proficiency and experience with it, and I had to look up a couple of tutorials to get it working properly. But it was a very fun learning experience.
What's next for Grocer's Point
Currently we're improving the UI of the Application and integrating Payment Gateways. After which, we'll be distributing our Apps among the Grocery Shop owners of Chandrashekharpur area of Bhubaneshwar for a beta testing session.
Built With
adobe-xd
firebase
flutter
node.js | Grocer's Point | An E-Commerce Platform for Small Scale Groceries to combat the COVID19 Pandemic. | ['Amlan Nandy', 'Pratyush Kumar Satapathy'] | [] | ['adobe-xd', 'firebase', 'flutter', 'node.js'] | 16 |
10,570 | https://devpost.com/software/better-zn0aif | good
wonderful
Inspiration (low incomes of peasant farmers)
headline
What it does
(provides transport services)
How I built it
(from scratch)
Challenges I ran into
dev post
Accomplishments that I'm proud of
finishing
What I learned
coding
What's next for better
excellent services
Built With
api
cloudant
Try it out
github.com | senteuganda | we provide easily accessible and affordable insurance services to all peasant farmers | [] | [] | ['api', 'cloudant'] | 17 |
10,570 | https://devpost.com/software/lean-learn-during-earning-upskilling-all-workers-now | IMPORTANT
Other content:
If you want to take a look at the
admin panel
, please click here:
Youtube
Do you want to look at the
entire video
with research? Please click here"
Youtube
Inspiration
Due to the COVID-19 pandemic, many low skill workers have lost their jobs resulting in record breaking numbers. The civilian unemployment rate is
currently at 10.2%.
That leaves
16.3 million people unemployed.
As a result, The Mad Hackers decided to tackle this extreme issue, which has cost many families their primary source of income. It is essential that these workers not have this vulnerability now or in the future, and so we have created LEAN to help solve this issue.
What it does
LEAN has 2 main components: an admin panel and a user panel. The admin panel allows the company to upload information and see current candidates. We chose to not make this an aggregator because the employer has to be willing to have employees without a college degree - merely online certificates and whatever projects they come up with from those online courses. Employers get the benefit of having a qualified employee for a lower starting price, whereas employees get higher paying jobs than what they already have and once they leave the company, they can use that experience to catapult themselves. The user panel allows the user to get information about various jobs that are around, like a standard job-finding app.
However, what makes LEAN stand out is that it allows users to learn the skills from MOOCs, which you can search for within the app itself.
LEAN also helps you look at
job trends
with a
built-in prediction view
, filled with interactive visualizations and graphs.
You can find yourself a temporary job within LEAN too! Just go to the low-skill jobs pane, and find something that interests you. That way, you can make money while learning and preparing for your next job. You can also get money by going to the funding pane to take a look at different loan and scholarship opportunities (not included in video - added after recording and submission - will be pushed to GitHub soon).
LEAN also has a forum for users to talk with each other about their experiences and support each other.
LEAN also has an integrated Projections tab for jobs in the workforce, offering insight into the future.
LEAN also has a Courses tab, which lets you input a course query and uses an API to retrieve matching courses.
How I built it
The Admin panel was built using Vue.js. We used Buefy to create a simple but elegant theme that made development both quick and stylish. Then, we used Google Cloud Firestore to manage our data in the cloud so that information could eventually be read by the prospective employee.
The course search integration was built in much of the same way, linking to Classpert to leverage their huge database of MOOCs across over 30 websites. This was then integrated into the application.
The job trends panel was custom-built using Datawrapper to create interactive visualizations, using data from the US Bureau of Labor Statistics. These panes were then embedded into the application.
The forum was created using tribe.so, an application similar to Discourse. We chose this because it has a lot of features and is styled extremely well. We linked it into the application similarly to the other components.
The React Native app was created using several APIs and features. For the Home page, we used Google Cloud Firestore to store and retrieve data to manage the employer/admin and user interfaces. This allows posts made by the employer on the web to appear in the React Native app. For the UI, we used react-native-base, which allowed for more customizations than we previously had, giving the app a more premium and modern feel. For the Courses tab, we created a page using an API provided by Classpert to retrieve course data. We used react-native-maps to create the maps interface, find the user's current location, and display it. Additionally, the API provided by Indeed.com allowed us to retrieve temporary jobs matching the queries we passed through it. For the forum page, we used tribe.so and WebView to integrate the forum into the application and make it readily modifiable by users. For the tab design and the smooth animation, I used React Navigation material tab navigators, which allowed for the rich animations.
Challenges I ran into
There were challenges implementing the UI in the RN app due to deprecations and dependency errors.
There were challenges with GitHub not allowing files to be pushed correctly.
There were challenges figuring out how to access certain data points.
There were challenges getting image uploads (e.g. the logo) working for the admin panel.
There were challenges making sure that data was being handled properly, as sometimes the views themselves would have impediments blocking us for hours.
What I learned
I learned how to manage time, how to work efficiently, how to not procrastinate and how to work together with multi-platform concepts which I had never implemented before.
What's next for LEAN
We hope to publish this app on the Google Play and iOS App Stores to help those currently in need of such an app. We plan on improving the styling to achieve a more professional design. Additionally, we hope to add more features allowing the user to connect with the employer more easily. We have many other ideas that we wish to implement; however, time is a constraint that will always limit us, yet it pushes us to work our hardest.
We would like to thank these hackathons we participated in for giving us the ability to demonstrate our skills and also challenging us to push them. We both found it to be a very valuable experience.
Built With
apis
cloud
expo.io
firebase
firestone
google
location-services
react
react-native
react-native-maps
vue.js
Try it out
github.com | LEAN (Learn while Earning, upskilling All workers Now) | LEAN is a job-finding website that allows users to upskill themselves on the job hunt while still being able to support themselves. | ['Vijay Daita', 'Om Joshi'] | ['Grand Prize', 'Team Submission Prize'] | ['apis', 'cloud', 'expo.io', 'firebase', 'firestone', 'google', 'location-services', 'react', 'react-native', 'react-native-maps', 'vue.js'] | 18 |
10,570 | https://devpost.com/software/is-my-food-ready-uj69ev | Inspiration
One of our team members recently picked up an online order from a local restaurant (Armadillo Willy's). He was surprised to see they were using a white board to list when orders were ready! The same day he walked by a different restaurant which was using an Excel spreadsheet to tell customers when their orders were complete!
While both of these are creative solutions in the face of this global challenge, we thought it showed that there was a lack of cheap, easy-to-use order-status software.
There is also a fundamental accessibility problem that has arisen as a result of the sudden reliance on online pickup. Visual information such as TVs or white boards is exclusionary towards blind people and those with vision loss. Audio only notices such as an employee shouting order numbers is exclusionary towards the deaf and hard of hearing.
"Is My Food Ready?" seeks to solve both of these critical accessibility issues and make managing online orders and takeaway easier than ever for small businesses. Through this we hope to make social distancing smoother and quarantine a bit more bearable.
What it does
Our project acts as a standalone PWA, enabling small businesses without the large resources of corporate chains to have competitive order-readiness software.
The business owner/manager first goes to our sign-up page and enters some info about their store, while also listing their employees' email addresses.
These employees can then log in using Google to access our employee dashboard. Here you can either manage orders or access the customer-facing display screen (intended to be put on a TV).
In the order management screen employees can enter information about an order, including the customer's phone number. When the customer's order is ready, they will receive a text!
The employee is able to mark orders as "Ready" once they are cooked. This will display on the TV view, customer's phone, and send them a text. Once the customer picks up their order, the employees can "Complete" the order, thus removing it from the screens.
The customer is also able to enter their order number to view information about their order on their own phones.
The TV view lists the status of all orders in the system. Whenever an order is ready, the order number is read out loud to help those with visual impairments.
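The order flow described above (entered → marked "Ready" → "Complete", with a text on "Ready" and removal from the screens on "Complete") is essentially a small state machine. Here is an illustrative sketch in Python; the project itself is built in React/Firebase, so all names here, and the idea of plugging the Twilio text into a `notify` callback, are our assumptions rather than the team's actual code.

```python
# Hypothetical sketch of the order lifecycle: entered -> ready -> completed.
VALID_TRANSITIONS = {
    "entered": {"ready"},
    "ready": {"completed"},
    "completed": set(),
}

class Order:
    def __init__(self, number, phone):
        self.number = number
        self.phone = phone          # used to text the customer on "ready"
        self.status = "entered"

    def advance(self, new_status, notify=lambda order: None):
        """Move the order forward; reject out-of-order transitions."""
        if new_status not in VALID_TRANSITIONS[self.status]:
            raise ValueError(f"cannot go from {self.status} to {new_status}")
        self.status = new_status
        if new_status == "ready":
            notify(self)            # e.g. send an SMS via Twilio here
        return self.status

def visible_on_tv(orders):
    """Completed orders are removed from the display screens."""
    return [o.number for o in orders if o.status != "completed"]
```

The `notify` hook keeps the texting concern separate from the state logic, which is roughly how the TV view, customer view, and SMS can all react to the same status change.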
How I built it
We built our project as a PWA using react.js. Firebase acts as our back-end, storing restaurant and order data. We used the Twilio API to text customers when their order is ready.
Challenges I ran into
Power outages! Several of our team members are from California and are thus enduring rolling blackouts. Despite randomly losing team members for hours at a time, we were able to complete the project!
We struggled a lot with the front end since none of us are really front-end/UI focused developers. This however was a really good learning experience, and helped broaden our capabilities as programmers.
Fonts... For some reason it was a nightmare importing custom fonts into react.js. Thankfully we eventually figured it out, but it took a lot more experimentation than we ever expected.
Accomplishments that I'm proud of
We managed to get our project to complete functionality by the end of the hackathon. We've each participated in a few hackathons before, but often only end up with a partially completed project or bare-minimum functionality. We were really happy, therefore, to have something nearly ready for production. There are of course features to add (as we'll detail later), but we were able to realize much more of our scope than before.
Accessibility Improvements! We were really happy that we were able to build some solid accessibility improvements into our app. Massive corporations with very expensive custom POS systems still often fail to have things like Audible readouts or display boards, so being able to build something that helps those with disabilities was awesome!
We were really happy that we were able to have the status updates be texted to our customers. We think this is a great feature that plenty of the large chains don't even have.
What I learned
A lot more about remote work! All of us are in different towns right now, with thousands of miles separating some of us. This was a very useful venture into learning about the remote-work tools out there.
Accessibility. We always try to think about ways our apps can be accessible, but since it was a core focus of this project we learned even more.
We learned a lot about front end/styling work in react.js. Most of us are back-end focused so this helped expand our expertise and make us better developers.
React! Some of us are very experienced in React by now, but for others it was one of their first big projects in the framework. It's a very useful framework, so it was helpful for all of our team members to get further experience with it.
What's next for Is My Food Ready?
Customization options for businesses. We want to let businesses customize the appearance of their order/TV pages to better convey their brands. Allowing companies to upload their own photos would also be very cool.
GPS detection of which stores/restaurants you're near.
QR code functionality, so that a customer can scan a QR code to find the status of their order.
Estimated wait time, automate purging of old orders.
Integrations into POS systems/ability to actually submit/revise orders.
Built With
css
firebase
gcp
html
javascript
material-ui
react
twilio
Try it out
github.com
is-order-ready.web.app | Is My Food Ready? | Order Status platform designed to help small businesses with acessibility and social distancing. | ['Alan Brilliant', 'Ray Altenberg', 'Farhan Saeed', 'Bryan Lim', 'Drew Ehrlich'] | [] | ['css', 'firebase', 'gcp', 'html', 'javascript', 'material-ui', 'react', 'twilio'] | 19 |
10,570 | https://devpost.com/software/healthrific-your-health-pal | todo
Built With
bluetooth
dart
flutter
Try it out
tiny.cc
github.com | na | no | ['Anubhav Sinha'] | [] | ['bluetooth', 'dart', 'flutter'] | 20 |
10,570 | https://devpost.com/software/maching-anding | Interface
Inspiration
Latin America is facing a great challenge due to the current situation with COVID-19. In our country, for instance, hospitals are slowly running out of beds, people are getting infected at an increasing rate, and government policies have shown this matter is not easily solvable. That's why we are trying to contribute by taking the first steps on a model that can help medical staff diagnose COVID, as it can be confused with other diseases such as pneumonia.
What it does
The project itself is divided into two parts; the Jupyter notebook, developed in Google Colab, contains both algorithms. The first tries to classify images (slower and much more computationally demanding), while the second contains the code of the graphical interface we used. It can classify a chest (thorax) X-ray plate and tell whether it is an example of COVID pneumonia, non-COVID pneumonia, or a healthy lung.
How We built it
We used several libraries in the process. We used a Keras model based on a CNN in the graphical interface, despite its high computational cost, and, after adding the buttons that allow the user to select any image from their computer (with the functional requirement that images be in .jpg or .gif format), we arrived at the final functional interface presented.
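The classification described above presumably ends in a small post-processing step that turns the CNN's softmax output into a human-readable diagnosis. The sketch below is a hedged illustration of that step only: the class names, their ordering, and the confidence threshold are our assumptions, not taken from the team's notebook.

```python
# Hypothetical post-processing of the CNN's softmax output. The model emits
# one probability per class; names and threshold are illustrative only.
CLASSES = ["covid_pneumonia", "non_covid_pneumonia", "healthy"]

def diagnose(probs, threshold=0.5):
    """Map a softmax vector to a label, deferring to a radiologist
    when no class is confident enough."""
    if len(probs) != len(CLASSES):
        raise ValueError("expected one probability per class")
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return "inconclusive"
    return CLASSES[best]
```

A threshold like this matters in a medical setting: a near-uniform softmax should be flagged for human review rather than silently reported as the argmax class.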
Challenges we ran into
First of all, importing the data. It was challenging to import thousands of images into arrays so that they could be analysed. We tried to use an algorithm that could extract deep features but, as this wasn't possible (mainly due to time), we implemented the CNN and used the keras-utils library instead. We thought the challenge would be coding the machine-learning algorithm, but in the end that wasn't a problem; rather, it was achieving a functional model with good accuracy. RAM usage was also hard to control.
Accomplishments that I'm proud of
The CNN model works nicely, even if, due to RAM constraints, it is hard to run on our computers. We think it can help other projects in their development.
What We learned
In this project we learned to develop and strengthen our programming skills. Additionally, we managed to carry out a project that addresses a problem with a great incidence in Colombia. We also learned how to build these types of prediction models, such as neural networks.
What's next for Maching Anding
The idea would be to apply the same algorithm to more data so that it classifies more illnesses and more types of medical images. It depends on whether we find datasets or people willing to cooperate with this project.
Built With
cnn
data
kaggle
machine-learning
medical-images
python
xray
Try it out
colab.research.google.com | Maching Anding | We aimed to produce an image recognition system that could help in the diagnosis process of COVID patients in Colombia and wherever this software its found useful. | ['Paula Alejandra', 'Andrés Manrique'] | [] | ['cnn', 'data', 'kaggle', 'machine-learning', 'medical-images', 'python', 'xray'] | 21 |
10,570 | https://devpost.com/software/adopt-a-student | This work aims to propose an information system for managing the search for volunteers to provide interaction
with students so they can form teaching partnerships. The elaboration of this project was conceived with the
concepts of BIG DATA, DATA LAKE and BLOCKCHAIN. Starting from an information collection, a pilot
project for search management was elaborated where all partnerships between volunteers and students can be
described. From this information, partnerships can be identified, generating reports for the management and
decision making of students and volunteers. With the identification and quantification of partnerships, it is
possible to produce management reports and information for future partnerships and for improvement in
teaching. These reports assist in making decisions regarding teaching, contributing to the search for volunteers
in locations where there is more need. With the help of the reports generated through the pilot project, we can
act in advance, streamlining and increasing the supply of volunteers and, at the same time, preventing students
from being left without education.
Built With
blockchain
database
datalake
flutter
react
Try it out
github.com | Adopt a Student | Help students who can be anywhere in the globe. | ['Jose Alexandro Acha Gomes'] | [] | ['blockchain', 'database', 'datalake', 'flutter', 'react'] | 22 |
10,570 | https://devpost.com/software/immunize | Home page
User profile details
Analytics dashboard for the admin
Dashboard listing out all the centers on the map
Page to book an appointment for the vaccine, the vaccine centers are mapped on the map
Inspiration
Most countries were unprepared to face the Covid-19 pandemic and were not successful in controlling the infection rate. However, given that a vaccine for Covid-19 is expected soon, we must have a system in place which will not only distribute the vaccine to the mass population, but will also capture data and churn out patterns to generate insights. This could also be preserved as a learning for future generations.
What it does
Since a large share of the population is affected by the Covid-19 virus, most people will want the vaccine as soon as possible. However, there is a set of people - children, the elderly, essential-service workers, and people with life-threatening diseases - who will need the vaccine more than healthy adults. So it is essential that the right people get their dose of vaccine at the right time. The Immunize platform scores an individual based on these parameters and assigns them a priority.
Also, since there will be limited staff to disburse the vaccines and high demand, it is extremely important to streamline this process.
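The scoring just described, covering the groups named above, might look something like the following Python sketch. The weights, cutoffs, and function names are purely illustrative assumptions; the post does not disclose the actual scoring formula.

```python
# Hypothetical priority score for vaccine allocation. Weights are
# illustrative, not Immunize's actual values.
def priority_score(age, essential_worker, life_threatening_conditions):
    score = 0
    if age >= 65 or age < 12:               # elderly and children
        score += 3
    if essential_worker:
        score += 2
    score += 2 * len(life_threatening_conditions)
    return score

def queue_order(people):
    """people: (name, age, essential_worker, conditions) tuples.
    Higher scores get vaccinated first."""
    return sorted(people, key=lambda p: priority_score(*p[1:]), reverse=True)
```

A score-then-sort design like this also makes it easy to bump a booked slot when a higher-priority patient arrives, which is the behavior described later in the booking flow.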
How I built it
A user booking an appointment to receive the vaccine will log in/sign up via Auth0. Once a user logs in, they will be able to view all the vaccine centers in their vicinity on a map. Each vaccine center will also show the number of vaccine doses in stock and the total appointments. A calendar also shows the day-by-day summary of vaccine stocks and requests, giving the user an idea of the waiting queue. Accordingly, a user can book a slot on a particular day. However, a user should keep in mind that their turn might get postponed if high-priority patients who require the vaccine urgently come into the picture.
An admin will have a dashboard with all the analytics and tracking tools. They can check the total requests, stock of vaccines, and total vaccines disbursed from the dashboard. Apart from that, an admin can enter the address of a new vaccine center, and that address will be geocoded and then displayed on the map.
Challenges I ran into
Plotting the vaccine centers on the map was a bit challenging.
Accomplishments that I'm proud of
The platform also has a feature where it prompts a user to submit a scanned copy of an identity proof, so that these users can be given a higher priority. Using an optical character recognition (OCR) engine to extract the name, date of birth, and ID from the scanned image is something I implemented for the first time.
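The extraction step just described - pulling the name, date of birth, and ID out of the raw text an OCR engine such as Tesseract returns - could be sketched as a simple regex parser over that text. The field labels and formats below are assumptions for illustration; the actual document layout and the project's Tesseract pipeline are not shown in the post.

```python
import re

# Hypothetical parsing of raw OCR output for a scanned ID document.
# Field labels ("Name", "DOB", "ID") and formats are illustrative.
def parse_id_text(text):
    fields = {}
    name = re.search(r"Name[:\s]+([A-Za-z ]+)", text)
    dob = re.search(r"DOB[:\s]+(\d{2}/\d{2}/\d{4})", text)
    id_no = re.search(r"ID[:\s]+([A-Z0-9-]+)", text)
    if name:
        fields["name"] = name.group(1).strip()
    if dob:
        fields["dob"] = dob.group(1)
    if id_no:
        fields["id"] = id_no.group(1)
    return fields
```

Returning a partial dict (rather than failing) is deliberate: OCR on scanned documents is noisy, so downstream code can decide which fields are mandatory.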
What I learned
I learnt how to create a minimum viable product in 2 days. Prioritizing which features to add and time management are the important skills I learnt.
What's next for Immunize
The Immunize app idea should be used by everyone to build a robust system for efficient delivery of the life-saving vaccine. I would want to implement more data analytics around this, and also build a demand-supply evaluator, wherein if a highly populated area is falling short of vaccines, supply from vaccine centers with extra stock will be redirected. I will also implement a payment option in case the vaccine carries a cost. A channel where people can post about any side effects they encounter would also be a good-to-have feature.
Built With
auth0
flask
leaflet.js
mongodb
node.js
python
react
rechart
tesseract
Try it out
github.com | Immunize | Platform for efficient distribution and tracking of the Covid-19 vaccine | ['Sharayu Thombre', 'Jui T'] | ['Third Place - Addressing after effects'] | ['auth0', 'flask', 'leaflet.js', 'mongodb', 'node.js', 'python', 'react', 'rechart', 'tesseract'] | 23 |
10,570 | https://devpost.com/software/new-look-of-e-commerce-for-covid-satuation | 3D view
catalog page
AR view
AR view
Title of your project: New look of E-commerce with 3D & AR view
Problem Statement :
If we buy products through an online shopping app or website, we don't get a clear view of the products. Photos are the only comprehensive guide to products in e-commerce. Nowadays, in this COVID situation, having a complete view has become even more necessary as people mostly prefer online shopping.
Proposed Solution: I have developed a prototype of an online shopping app using some websites, which helps us visualize products in a much better way.
The main features of this application are
3D models of the products: Along with photos of products, I have added this feature so that customers can view products as 3D models.
Augmented Reality (AR) view of the products: The AR view will help customers visualize products through their smartphone camera in their own surroundings, as if real.
Try it out
drive.google.com | New look of E-commerce for covid satuation | This will help in E-commerce in covid satuation | ['AJOY KUMAR'] | [] | [] | 24 |
10,570 | https://devpost.com/software/electronic-mask-e-mask-y04wvr | Inspiration
In recent days, the COVID-19 pandemic has affected the world negatively. Shelter-in-place and social distancing have become a necessity for every country to fight COVID-19 and stop it from spreading in society.
Today, we introduce a mobile app named E-Mask. Through Bluetooth, E-Mask can help you scan and check whether there is an infected patient around you or not.
This app will also provide a set of questions to collect information as well as to detect whether you may have COVID-19 and will provide guidance to go to local hospitals for testing.
Users can also report to the system if they are infected so the system can help provide the most up-to-date and realtime data.
What it does
By enabling Bluetooth and pressing "Scan", E-Mask will scan for, spot, and notify you of any infected patient near you. If you were near an infected user, the app will mark you as at risk.
It then shows the CDC's guidelines and recommendations, such as wearing your mask and washing your hands.
If users are infected, highly at risk, or feel unwell, they can take a survey in the app to check whether they are really infected.
The testing assessment will suggest whether you need to take a COVID test based on the information you provide.
The app will guide you to the nearest Covid-19 testing facility using Google Maps.
How I built it
We started with the architecture of the app. Based on the idea we raised, we agreed that the app basically needs just three functions, which I will call ScanCovid, ReportMyInfection, and SelfAssessmentSurvey.
Users have to sign up via Gmail to be able to use these functions, as this allows the system to recognize who users are and automatically assign a new device ID to their account that we can use in our scanning system.
When users report their infection status, the system will update the status of other users who previously contacted that infected user in the app's history and notify them to take the COVID-19 testing.
Our system will suggest the nearest Testing site via our location fragment.
For users of the ScanCovid function, it returns the list of device IDs representing nearby users who previously reported to the system.
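The exposure logic described above - a user reports an infection, and everyone whose scan history contains that user's device ID gets marked at risk - can be sketched as a pure function. It is shown here in Python rather than the app's Kotlin, and the names and data shapes are illustrative assumptions, not E-Mask's actual schema.

```python
# Hypothetical contact-tracing step: scan_history maps each user's device ID
# to the set of device IDs their ScanCovid function has seen nearby.
def mark_at_risk(scan_history, infected_device):
    """Return the set of users to notify after one infection report."""
    return {user for user, seen in scan_history.items()
            if infected_device in seen and user != infected_device}
```

In the real app this set would drive the Firebase notifications that tell affected users to get tested.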
Challenges I ran into
Graphics and UI, as we don't have any experienced designer.
Determining how ScanCovid and ReportMyInfection connect to the database, and how information uploaded via ReportMyInfection syncs to the ScanCovid of other users.
Deciding whether the ScanCovid function should be realtime or not.
Struggling with NoSQL via Firebase DB.
Since Google does not allow us to use their device ID system, we had to create our own unique device ID, based on the user's generated ID, for Bluetooth mapping.
Accomplishments that I'm proud of
We've created a fully functional product and made it work within 36 hours! :)
Develop our team-working skills.
Become more familiar with Android applications and Firebase service.
Learning new tools, and making an app that we think will be very beneficial to the community.
What I learned
Android app, Firebase, Graphics/UI design, Kotlin.
What's next for E-Mask?
Improve the Bluetooth tracking system
iOS app, a Web-based version.
Improving the UI.
Built With
android
firebase
google
gradle
kotlin
Try it out
github.com | Electronic Mask (E-Mask) | An realtime mobile app that scan, spot and notify users if they were close to a COVID-19 infected user. | ['Loc Tran', 'Hoan Pham', 'Minh Nguyen Le', 'Nhat Nguyen', 'Hung Pham'] | [] | ['android', 'firebase', 'google', 'gradle', 'kotlin'] | 25 |
10,570 | https://devpost.com/software/on-how-to-reduced-the-negative-impacts-and-in-the-community | Inspiration to be motivated
What it does been sourround
How I built it integrity and honest
Challenges I ran into school competition
Accomplishments that I'm proud of
Been comfortable
What I learned to be diplomatic
What's next for on How to reduced the negative impacts and in the community
Promote the positive image
Built With
youtube
Try it out
www.shuaibhabu.youtube.co | on How to reduced the negative impacts and in the community | The main way on how to reduce the negative impacts and bad influence in the socialize community which by providing an adequate facilities to the massive and giving them a social awareness not injuries | ['shuaib habu'] | [] | ['youtube'] | 26 |
10,570 | https://devpost.com/software/maskvi | MaskVi
Demo Screenshot
Demo Video
https://youtu.be/S0bw1w5RFR0
PLEASE NOTE BEFORE RUNNING PROGRAM
Please use OpenCV 3.x.x. This is because one of the classifiers is not compatible with the newer versions. Thank you!
Inspiration
We took inspiration from general facial recognition practices in computer science. By utilizing standard OpenCV feature detection, we were able to detect whether or not someone is wearing a mask based on what facial features the computer is able to detect.
What it does
Our software uses concepts from standard feature-detection algorithms to find particular features that indicate whether one is wearing a mask. It first looks for the eyes in order to determine whether a person is present. If it finds the eyes, it then looks for the individual's mouth. If it finds a mouth, the software knows you are not wearing a mask and reports that no mask is present. However, if it is unable to find a mouth, something must be covering it, so the software assumes you are wearing a mask and reports that instead.
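The decision rule just described reduces to a tiny pure function. In the real program the `eyes` and `mouths` lists come from OpenCV Haar cascade `detectMultiScale` calls on each webcam frame; here they are plain lists of detections so the logic itself is visible. This is a sketch of the stated rule, not the project's actual code.

```python
# Each detection is an (x, y, w, h) bounding box, as returned by
# cv2.CascadeClassifier.detectMultiScale in the real program.
def mask_status(eyes, mouths):
    if not eyes:
        return "no person detected"   # no eyes -> nobody in frame
    if mouths:
        return "no mask"              # visible mouth -> uncovered face
    return "mask on"                  # eyes but no mouth -> covered
```

Keeping the rule separate from the OpenCV plumbing also makes it easy to unit-test without a webcam.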
How I built it
We created a simple feature-detection program in Python using the OpenCV library and Haar cascade classifiers. The program uses a webcam to analyse each frame and identify various facial features (the eyes and mouth). Based on which facial features are detected, an output of whether or not a person is wearing a mask is returned in real time.
Challenges I ran into
One of the classifiers did not seem to work with the newer versions of OpenCV. We tried to get it working but decided it was not worth the time and used an older version of OpenCV instead.
What I learned
I was able to refine my knowledge of the OpenCV library.
Built With
opencv
python
Try it out
github.com | MaskVi | An OpenCV software which automatically detect if an individual is wearing a mask or not | ['Jeremy Jun-Ping Bird', 'Joshua Bird'] | ['Best App'] | ['opencv', 'python'] | 27 |
10,570 | https://devpost.com/software/miia-medical-intelligence-applied | App screens for miia
Overview
Here are some quick links to some of the resources we developed while creating our project:
💡 • Website
📐 • Wireframe
📱 • Prototype
📕 • Documentation
Inspiration
As our population ages we will begin to see many multimorbidities. The aging population will have higher rates of diabetes, hypertension, and other chronic ailments. Mobile health (mHealth) platforms using smartphones have proven effective for monitoring blood pressure, glucose, and other health-related symptoms. However, applications are not always accessible to the elderly population. Finger sensitivity and mobility can be obstacles for the elderly, as they impair their ability to interact with apps. Features such as larger font sizes, high contrast, and text-to-speech functionality are often neglected in favor of modern design trends intended to appeal to younger audiences.
We designed our app, miia (Medical Intelligence Applied), to be accessible and usable by most seniors. Miia is an application that will help track and manage health conditions for the elderly population. For instance, we implemented a chatbot function to help seniors input their vital signs. The chatbot can be made to speak aloud, while the senior can use their voice, which is then converted to text. The chatbot can also ask questions to monitor symptoms and mood, screening for infection or depression, respectively. Furthermore, our app will track our users' mobility and activity by drawing data from the built-in accelerometer, gyroscope, and other smartphone sensors. This will help us predict activity level and potentially prevent frailty and traumatic falls among seniors.
How to use miia
Miia can be used by visiting https://miia.me/ and signing in with Gmail or creating a new account. Once you've logged into miia, you're greeted by the main dashboard, which provides an overview of your profile along with several tabs. Here users can chat with miia, sync wearables, and receive diagnostic reports from health checkups. Current functionality of the application is limited to conducting conversations with the chatbot and completing facial-recognition scans that detect mood and BMI.
Nonetheless, our current Figma prototype serves as a better representation of the app's final functionality and design.
In contrast to the web application, the prototype is developed for mobile devices to better serve the elderly by prioritizing convenience and mobility. The prototype itself is fully interactive, as users can click, scroll, and drag through both the caregiver and patient interfaces.
What it does
The system leverages AI technology to analyze data collected via facial recognition, speech recognition, wearable devices, and/or IoT on a daily basis, and alerts caregivers if any risks are identified. The platform also facilitates communication between caregivers and care recipients, while aiding with health management to alleviate caregiver stress.
Main features
Health data collection
We ensure the health-data collection process is easy to follow by having the whole health checkup guided by our AI chatbot miia, and it includes the following:
Facial recognition - a facial image is taken for analysis of cardiovascular disease risk, emotions, BMI, etc.
Speech recognition - speech is recorded and analyzed for emotions and mood
AI chatbot - collects health data unavailable from facial/speech recognition or wearable devices
Phone sensors - fall detection
Wearable devices/sensors - measurements including but not limited to blood pressure, heart rate, sleeping pattern, and activity
Elderly-focused design
Voice control - elderly users can choose to interact with the chatbot by voice or text
AI chatbot to stimulate human interaction
Enlarged text and other accessibility features
Reminder system - visual and sound alerts can be snoozed until the elderly log in and complete their daily health monitoring
Data visualization for caregivers
Data analytics dashboard - shows the elderly person's key metrics over one month
Detailed health reports of the elderly - details of each health parameter
Alert system for identified issues - caregivers can set threshold values according to the elderly person's condition; red warning symbols and notifications pop up when a value is above/below normal
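The alert rule in the list above - caregiver-set thresholds with a red warning when a value goes above/below normal - might be implemented along these lines. This is an illustrative Python sketch; the metric names and ranges are assumptions, not miia's actual configuration.

```python
# Hypothetical threshold check: readings maps metric -> latest value,
# thresholds maps metric -> (low, high) normal range set by the caregiver.
def check_readings(readings, thresholds):
    """Return the metrics that should show a red warning."""
    alerts = []
    for metric, value in readings.items():
        low, high = thresholds.get(metric, (float("-inf"), float("inf")))
        if value < low or value > high:
            alerts.append(metric)
    return alerts
```

Metrics without a configured range default to "always normal", so caregivers only see warnings for the parameters they chose to monitor.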
App Guide
Caregiver
Signs up in the app and makes a profile for both themselves and their care recipient.
After choosing the caregiver option, they will set up an account with their email and phone number, and set a password.
Then, the caregiver will add the patient’s name and phone number.
They can then add the pre-existing medical conditions of their care recipient. In this case, the preset conditions are common chronic diseases but there is also the option to add more conditions and background information.
The caregiver can choose important metrics to monitor for certain chronic conditions, such as blood sugar level for diabetes, or mood for depression.
After adding the background information for the patient, a unique pin will be generated for connecting the caregiver with the care recipient.
A confirmation screen will also show the patient’s conditions and metrics to follow.
If there are multiple care recipients, the caregiver can add another patient.
On a daily basis, caregivers log in and monitor the health of care recipients, with the most important metrics on display. A red notification symbol indicates a warning that requires caregivers to follow up on a metric.
In the patient profile, the caregiver can change or add more metrics to monitor, chat with the patients, or edit the patient profiles.
Elderly/ Care Recipient
The care recipient receives a text message from the caregiver with his/her unique pin. If a senior is unfamiliar with technology, the caregiver can help him/her set up the app.
Choose to sign up as a patient, and enter the pin received.
Our chatbot, Miia, guides seniors through the whole health checkup process on a daily basis
The patient can choose to text or speak to the chatbot.
Miia will then initiate the health check by taking their facial image
Miia will first ask a few questions regarding their physical and mental health, such as body temperature, blood pressure, or mood and the senior can input manually or tell miia their measurements. For voice inputs, Miia will repeat the measurement to verify.
Depending on the needs of the senior and caregiver, the chatbot can also ask about other metrics, give reminders, or chat with the senior.
After health check, users will be redirected to a health overview which summarizes results for the senior.
Key metrics of seniors are shown in measurements. If the user is interested in knowing more of a particular metric, they can click the metric and look into the details.
If seniors have any concerns, they can contact their caregivers using the in-app chat function.
If desired, they can also choose to add or remove wearable devices and sensors.
Lastly, they can check their profile, which shows personal information, settings and caregiver information.
How we built it
Software
• Frontend development using Angular, with Firebase Authentication.
• Node libraries like Chart.js, PWA tooling, Bootstrap, Material Design, etc.
• Hosting and CI/CD setup using Netlify, Heroku, and GitHub.
• Domain and SSL certificate from Namecheap and Let's Encrypt.
• SQL database connected to the app via a RESTful API.
• Google Colab notebooks to execute heavy GPU workloads and ML Algorithms.
• InVision for developing wireframes
• Figma for creating final prototype
• Slack for Internal Communications & Google Drive for Documents, Images, etc.
Machine learning
We collected datasets from various sources such as Kaggle, JAFFE, and IMFDB and trained machine learning models for several tasks: identifying emotions from facial expressions, estimating BMI from face images, identifying emotions from speech, and detecting falls from phone sensors. Cardiovascular disease risk is determined by reviewing cohort studies and results in medical journals. After training the models, we deployed demos of the emotion prediction model, the BMI prediction model, and the cardiovascular disease risk model using Heroku.
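The final label-selection step of such a classifier can be sketched as follows; the emotion list and scores below are illustrative stand-ins, not the actual trained model's outputs:

```python
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def predict_emotion(scores):
    """Map a model's per-class scores to an emotion label via argmax.
    `scores` stands in for the probabilities a trained classifier
    (e.g. one trained on JAFFE) would return for one face image."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return EMOTIONS[best]

print(predict_emotion([0.05, 0.02, 0.03, 0.70, 0.10, 0.05, 0.05]))  # happy
```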
Challenges we ran into
It is difficult to find quality labelled data for training machine learning models, which in turn affects the accuracy rate. Given that this is a remote hackathon, we were also unable to test connection with wearables. While there is flexibility to use the app without external sensors, we plan to integrate with multiple wearable devices and platforms in the future.
Market Evaluation
To facilitate the adoption of our technology, we plan to target caregivers (B2B) as our primary target demographic. Currently there are 34 million caregivers for the elderly in the United States, with 5 million of them being long distance caregivers. Our goal is to introduce our product, while increasing our adoption rate, and thus solidify our application as an essential tool for caregivers worldwide.
Currently, miia's distribution channels will be limited to the mobile app stores found on both Android and iOS devices. In later iterations, miia will transition to being available as a web application for desktops.
Our go-to-market strategy during distribution will include a combination of freemium and viral approaches. This in-turn provides us with financial incentives for early adopters, who are able to take advantage of the 2-month free trial while having the ability to subscribe later. We’d also like to introduce a referral system where users are able to promote our application while being rewarded for successful signups. In addition to this, we aim to partner with health organizations (clinics/ hospital/ national health insurance) alongside deploying through-the-line marketing tactics in order to enhance customer reach and maximize customer acquisition.
What's next for miia!
App Development
Health data collection via speech recognition and wearables
Data analytics dashboard
In-app chat
Wearables
Water-proof watch for seniors
Water-proof necklace for seniors
Recruitment
We are planning to bring the project to the next stage. Shoot us a message if you're interested!
Built With
angular.js
cicd
figma
firebase
github
invision
ml
netlify
pwa
python
Try it out
www.figma.com
github.com
github.com
emotionpredict.herokuapp.com
bot.dialogflow.com | miia - medical intelligence applied | Digital health solution for elderly and caregivers | ['Ava Chan', 'Rohail Khan', 'Alice Tang', 'Billy Zeng'] | ['Best Designed Hack'] | ['angular.js', 'cicd', 'figma', 'firebase', 'github', 'invision', 'ml', 'netlify', 'pwa', 'python'] | 28 |
10,570 | https://devpost.com/software/medicine-screener | Inspiration-In the 1900's everyone used to go to Doctors for everything from a common cold to a fever. This program helps patients find what medicine they need so they don't need to go to the doctor unless they need to.
What it does- The program asks you for your symptoms: if you have a cough or pain, you press numbers to convey them. It then tells you which medicines you need to buy and where to get them. Over the next 5 days, it asks if you are vomiting or if you feel better. If you don't feel better after 5 days, it gives you the phone number to call the doctor.
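The screening flow described above could be sketched roughly like this; the symptom-to-medicine table and phone number are invented for illustration, not taken from the actual program:

```python
# Hypothetical symptom-to-medicine lookup; the real program's
# menu numbers and recommendations may differ.
MEDICINES = {
    "cough": ("cough syrup", "any pharmacy"),
    "pain": ("ibuprofen", "any pharmacy"),
    "fever": ("acetaminophen", "any pharmacy"),
}

def recommend(symptoms):
    """Return (medicine, where to buy it) for each known symptom."""
    return [MEDICINES[s] for s in symptoms if s in MEDICINES]

def follow_up(days_without_improvement, doctor_phone="555-0100"):
    """After 5 days with no improvement, hand back the doctor's number."""
    return doctor_phone if days_without_improvement >= 5 else None

print(recommend(["cough", "pain"]))
```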
How I built it- I built it by building small parts of the program at a time and then combining them. I checked whether each part worked, and if it was wrong I broke it into smaller pieces to find the bug. Once I had built it into two big pieces, I combined them and fine-tuned the program so they would work perfectly together.
Challenges I ran into- I ran into a major problem with variables. I needed to use them in two different places but couldn't, since they were local variables. To solve this I removed some of my loops and rearranged the code so it used separate variables. This way it did not matter that they were local.
Accomplishments that I'm proud of- I am proud of finishing this project, since this is my first project that I did by myself.
What I learned- I learned that you need to expect the unexpected, and that sometimes you need to change the entire structure of your program to include all its parts.
Built With
python
Try it out
github.com | Medicine screener | To Help Patients cure themselves. | ['Arya Kunisetty'] | [] | ['python'] | 29 |
10,570 | https://devpost.com/software/supplyrowdy | COVID - 4seen
by: Michael Mohn (MichaelMohn624#6613), Taemin Ha (Taemin#1466), Ian Kim (Ian K#8258)
Link to our github =>
link
Link to our Youtube Video =>
link
We all know that the spread of Covid-19 is causing a lot of problems, especially with how food is distributed and whether or not people have enough for their family. It was this growing issue and the fact that it affected everyone that inspired me, Taemin, and Ian to design Covid - 4seen. Here is how it works:
Covid - 4seen allows the user to track and project how long their remaining food supplies will last, based on the size of their family. We use the recommended meal plan by Mayo Clinic to determine this. This way, we are not only capable of telling you how much time you have left in general, but which food groups will run out first, and which you have the most of, further allowing you to have a balanced diet as well.
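The projection described above amounts to dividing on-hand servings by household consumption; a minimal sketch with invented numbers (the app uses Mayo Clinic's recommended meal plan for the real intake values):

```python
def days_remaining(supplies, daily_per_person, family_size):
    """Project how long each food group lasts, given on-hand
    servings and a recommended daily intake per person."""
    return {
        group: supplies[group] / (daily_per_person[group] * family_size)
        for group in supplies
    }

est = days_remaining(
    supplies={"grains": 120, "vegetables": 60},     # servings on hand
    daily_per_person={"grains": 6, "vegetables": 4},  # servings/day/person
    family_size=4,
)
print(est)  # grains last 5 days, vegetables 3.75 days
```

Comparing the per-group estimates also shows which group runs out first, supporting the balanced-diet feature.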
Once the user gets the information, Covid - 4seen provides you with a google maps page displaying the nearest grocery stores in your area. This allows the user to go from store to store as efficiently as possible and ensuring they get exactly what they need. This is especially useful to have handy as many stores are out of stock on various items, which other stores might have plenty of.
In order to do this in only 24 hours, we had to divide and conquer! I focused on the front end while Ian and Taemin contributed the back end and the implementation of Google Maps. I grew much faster at designing UI templates, and Taemin and Ian learned the different capabilities of the Google Maps APIs and how to access them. However, we all learned how important communication and teamwork are. This brings me to my next point: our challenges. As it turned out, the most challenging part was not the coding itself but coding in a way that allowed each other's work to be integrated, rather than ending up with 3 different projects. We became confused over time about exactly what the plans for the app were and had to stop many times to catch up with each other. Luckily, towards the end, we figured it out and got Covid - 4seen done in time!
Built With
android-studio
dart
flutter
google-maps
Try it out
github.com | Coivd - 4seen | The best to app assist you in preparing for Covid - 19 | ['Michael Mohn', 'Taemin Ha', 'Ian Kim'] | [] | ['android-studio', 'dart', 'flutter', 'google-maps'] | 30 |
10,570 | https://devpost.com/software/covidscreen | Home Screen, where the user clicks to begin his quick screening.
First question of the questionnaire, begins with a bullet point list of severe symptoms.
Results, one of the multiple diagnoses that a user can receive based on their inputs.
Inspiration
With the second wave of COVID-19 surging globally, there's a shortage of applications that combines the anonymous simplicity of a quick questionnaire with a powerful diagnosis. CovidScreen does exactly that.
What it does
CovidScreen asks a series of questions and symptom lists to check off. In just 6 series of in-depth selections, the application gives the user an ensured decision regarding their next steps in handling their potential case of COVID-19. The appealing User Interface makes it easy for anyone around the world to take a quick and effective diagnosis that can be completed in one minute. It first prompts the user to "Start Screening" with a button in the middle of the page. Once it is clicked, the next component will appear. It asks the user to select their age -- whether they are 18 years old or younger, 18-64 years old, or 65+ years old. If they are 18 or younger, it displays that the tool is only designed for people that are older, and directs them to learn more at cdc.gov. Then, it asks the user for present symptoms, past conditions, exposure, and travel to present their screening result at the end.
How I built it
I built CovidScreen using ReactJS, HTML, CSS, and other packages for simplistic user experience. With a series of React Hooks to monitor the user's clicks and selections, I was able to create an interactive environment flowed with the patient's progress on the webpage. The forms, radio buttons, and checklists were used throughout CovidScreen to keep track of their choices after they click "submit." At the end of the questionnaire, they are given a diagnosis out of a pool of multiple results, based on their number of symptoms, exposure to other COVID-19 stricken people, whether they traveled internationally, and more. With a variety of conditional operators, the case is made confidently.
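The conditional diagnosis logic can be sketched in language-neutral terms (the app itself is written in JavaScript/React; the thresholds and wording below are simplified illustrations, not the app's real rules):

```python
def screening_result(age_group, symptom_count, exposed, traveled):
    """Simplified decision logic in the spirit of the questionnaire."""
    if age_group == "under_18":
        return "Tool is designed for adults; learn more at cdc.gov"
    if symptom_count >= 3 or (symptom_count >= 1 and (exposed or traveled)):
        return "Get tested and self-isolate"
    if exposed or traveled:
        return "Monitor symptoms and quarantine"
    return "Continue practicing social distancing"

print(screening_result("18_64", 2, True, False))  # Get tested and self-isolate
```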
Challenges I ran into
I found it a bit tough to keep track of all the React Hooks that monitored the user's progress on the webpage. I had 10 Hooks that kept track of the user's checks, clicks, and submissions. However, I learned to effectively organize the components in order to give the patient a direct answer from the states of the buttons and checklists.
Accomplishments that I'm proud of
I'm proud of configuring my first web application with React. Although I struggled initially with setting up the environment and IDE, I was able to design a user-oriented product that can be implemented virtually anywhere.
What I learned
I learned how to make a full-stack application and utilize React. I also learned to organize my code effectively so that I can see where any potential errors may lie.
What's next for CovidScreen
In the future, I plan for CovidScreen to be fitted with a Google Maps API that can read in the user's current location to show where they should go and avoid in the near future. Whether that be in the Hospital, Emergency Room, or at home, the results screen should provide the quickest route to the location that fits their diagnosis the best.
Built With
css
html
javascript
material-ui
react
Try it out
github.com | CovidScreen | COVID-19 Diagnosis made easy, quick, and guaranteed. | ['Danny Zhang'] | ['Wolfram|One Personal Edition + 1 year subscribtion to Wolfram|Alpha Pro'] | ['css', 'html', 'javascript', 'material-ui', 'react'] | 31 |
10,570 | https://devpost.com/software/remote-elderly-home-care-via-privacy-preserving-surveillance-2lt9p1 | Privacy Preserving Face Detection at Home
Plug and Play AI Device Discovery
Home Page
Person Detection Indoors
Person Detection Outdoors
Inspiration
COVID-19 isolated many of us at home, including our elderly parents and grandparents. Not being able to check on them regularly elevates the risks they are exposed to, such as falls, gas leaks, flooding, and fire.
What it does
Ambianic.ai is an end-to-end Open Source Ambient Intelligence project that removes the stigma associated with surveillance systems by implementing privacy preserving algorithms in three critical layers:
Peer-to-Peer Remote access
Local device AI inference and training
Local data storage
Ambianic.ai observes a target environment and alerts users to events of interest. Data is only available to homeowners and their family. User data is never sent to any third-party cloud servers.
Here is a blog post that goes into the reasons why we started this project:
https://blog.ambianic.ai/2020/02/05/pnp.html
And here is a technical deep dive article published in WebRTCHacks. It clarifies that it is absolutely possible to build a privacy preserving surveillance system, despite popular cloud vendors making us believe that all user data belongs safely on their cloud servers:
https://webrtchacks.com/private-home-surveillance-with-the-webrtc-datachannel/
How we built it
Ambianic.ai has 3 main components:
Ambianic.ai Edge: a Python application designed to run on an IoT Edge device such as a Raspberry Pi or a NUC. It attaches to video cameras and other sensors to gather input. It then runs inference pipelines using AI models that detect events of interest such as objects, people and other triggers.
Ambianic.ai UI: A Progressive Web App written in Javascript using Vue.js and other front end frameworks to deliver an intuitive timeline of events to the end user.
Ambianic.ai PnP: A plug-and-play framework that allows Ambianic UI and Ambianic Edge to discover each other seamlessly and communicate over secure peer-to-peer protocol using WebRTC APIs.
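An inference pipeline of chained stages, as described for Ambianic Edge, can be sketched in miniature; the stage names below are stand-ins, not Ambianic's actual detectors:

```python
class Pipeline:
    """Toy version of an edge inference pipeline: each stage is a
    callable that transforms a frame or attaches detections."""
    def __init__(self, *stages):
        self.stages = stages

    def process(self, frame):
        for stage in self.stages:
            frame = stage(frame)
        return frame

# Hypothetical stages standing in for real preprocessing and AI models.
preprocess = lambda f: {"frame": f, "preprocessed": True}
detect_person = lambda f: {**f, "detections": ["person"]}

pipe = Pipeline(preprocess, detect_person)
print(pipe.process("frame-001"))
```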
Challenges we ran into
Challenges include selecting high-performance, high-accuracy, and low-latency AI models to detect events of interest on resource-constrained edge devices.
Another challenge is taking into account user local data to fine tune AI models. Pre-trained models can perform reasonably well, but they can be improved with privacy preserving federated learning on unique new local data.
Accomplishments that we're proud of
Ambianic.ai has been in public Beta for several weeks helping a number of users in their daily lives. Some users report success in keeping an eye on their elderly family members:
https://twitter.com/mchapman671/status/1230931722650423299
What we learned
Although the project sets ambitious goals, there seem to be sufficient enabling Open Source frameworks and community momentum to drive the ongoing success.
What's next for Remote Elderly Home Care via Privacy Preserving Surveillance
We need to work on these major areas:
Recruit volunteers in the home care community to test the system and provide feedback
Select more models to address open use cases such as fall detection, gas leaks and others
Work on implementing Federated Learning infrastructure to fine tune initial pre-trained models.
Built With
javascript
pwa
python
raspberry-pi
tensorflow
webrtc
Try it out
docs.ambianic.ai | Remote Elderly Home Care via Privacy Preserving Surveillance | COVID19 isolated at home many of us, including our elderly family members. Left unattended they are prone to risks such as falls, gas leaks, flooding, fire and others. | ['Björn Kristensson Alfsson', 'Yana Vasileva', 'Ivelin Ivanov', 'Vidhushini Srinivasan'] | [] | ['javascript', 'pwa', 'python', 'raspberry-pi', 'tensorflow', 'webrtc'] | 32 |
10,572 | https://devpost.com/software/helping-hands-shqpol | Check out Helping Hands!
Track your hours
Personalized Dashboard
Forum to chat with others
View Upcoming Opportunities
Inspiration
Through the pandemic, many teens struggle to find volunteering opportunities that impact their community. Connecting with like-minded individuals is still difficult, and staying organized is a struggle. Helping Hands is here to, well, help! Through Helping Hands, volunteers can connect to event organizers, find opportunities, and connect with other volunteers. It is a multi-functional tool for volunteers and potential organizations to help their community through service while staying safe. Find out more about Helping Hands below!
What it does
Our website is full of helpful info on keeping safe and on volunteering. Our Volunteer Hour Log and Calendar help volunteers stay organized and keep track of events, and volunteer event managers are able to request to list their events so volunteers can find them. We also have a map with common volunteering locations like libraries and senior citizen centers.
Volunteers can make an account or login then use their personal dashboard to stay organized. On this dashboard, volunteers can find their volunteering numbers, upcoming opportunities, and declare their interests. There is also a calendar to stay organized. Then, the volunteer can look at different pages for more information about safety, create a volunteering event, or even chat with others through the public forum!
How we built it
We built the back-end using phpMyAdmin and MySQL. We made the website with HTML/CSS/JS and used Bootstrap for organization. All vector images are from Stories by Freepik.
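A volunteer hour log like the one described can be sketched with a small schema and an aggregate query; the app itself uses MySQL via phpMyAdmin, so the sqlite3 module and the column names here are only illustrative:

```python
import sqlite3

# In-memory database standing in for the app's MySQL back-end.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE hour_log (
    volunteer TEXT, event TEXT, hours REAL)""")
db.executemany("INSERT INTO hour_log VALUES (?, ?, ?)", [
    ("Aanya", "Library drive", 3.5),
    ("Aanya", "Food bank", 2.0),
])

# Total hours for one volunteer, as shown on the dashboard.
total, = db.execute(
    "SELECT SUM(hours) FROM hour_log WHERE volunteer = ?", ("Aanya",)
).fetchone()
print(total)  # 5.5
```

The same SUM-per-volunteer query, ordered descending, would also drive the planned leaderboard.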
Challenges we ran into
Some challenges we ran into were the time constraint and some problems with integration. We wanted to make a leaderboard for those who were volunteering so we could incentivize their efforts, but we faced problems with our database and could not complete the feature within the time limit. We also faced some problems integrating the database, but we were able to troubleshoot and integrate it completely.
Accomplishments that we're proud of
We are proud of the ability to integrate the database and website and the user friendly design. We wanted a multi-functional tool so that the website could be accessible and useful for all and we were able to accomplish most of the features we wanted. The integration took us time but it allowed us to have a website that remained updated and helpful. It was our first time working with a database and integration so it was a really good experience.
What we learned
Anchal learned how to make a database and integrate it using PHP and MySQL. She also learned how to make multiple HTML/CSS features, such as scrollers. Aanya learned how to code using CSS and how to build a calendar in HTML. Both of us learned how to make a user-friendly design.
What's next for Helping Hands
We hope to expand our forum and have volunteer opportunities from all over the country. We also would like to implement a leader-board based on the volunteers with the most hours. We didn’t have time to execute the idea. This would promote more service during the pandemic. We would also like to highlight safety in a virtual setting through an informative page.
Built With
bootstrap
css
google-maps
html
mysql
phpmyadmin
Try it out
github.com
docs.google.com | Community Service -Helping Hands | An multifunctional application to help volunteers during the pandemic | ['Aanya Bhardwaj', 'Anchal Bhardwaj'] | ['Social Impact Prize (Google Home Mini)', 'The Wolfram Award'] | ['bootstrap', 'css', 'google-maps', 'html', 'mysql', 'phpmyadmin'] | 0 |
10,572 | https://devpost.com/software/example-project-6rkj49 | Inspiration
Many of my close friends and my parents have suffered from various allergies, ranging from food allergies to general allergies in the air. Additionally, we often come across foods that we assume have no allergens in them; unexpectedly, this assumption can quickly lead to a very serious situation. On a less extreme note, pollen and other allergens from pollution can cause irritation throughout the day, really disturbing many people. Due to the lack of applications able to detect the ingredients of foods and the allergens in the air, I felt obligated to use my app development skills to create an app beneficial to those who have suffered, and unfortunately ended up in extreme situations, due to simple mistakes. As a result, I have created RADAR, helping users detect food and air allergens based on preferences and location.
What it does
The app consists of three different sections:
The main part of the app is the Food Allergen Detector.
This part of the app uses Firebase's ML Vision Kit to classify the image of a food and produce a food name, which is then used to find the ingredients of the food (explained further in the How I built it section). You are prompted to select your food allergen preferences, such as milk allergies, peanut allergies, and more. After that, you can manually type the name of a food and see whether it contains any of the allergens you specified in the beginning, or press the full ingredients button to view the full list of ingredients. With the image option, you press the take-an-image button and take a picture of your food; the model then classifies the food and shows whether it includes an allergen you don't wish to have.
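The allergen check at the core of this flow can be sketched as matching user preferences against an ingredient list; in the real app the ingredients come from the Nutritionix API, and this substring approach is only an illustration:

```python
def flag_allergens(ingredients, preferences):
    """Return which of the user's allergens appear in an
    ingredient list (simple case-insensitive substring match)."""
    found = []
    for allergen in preferences:
        if any(allergen in item.lower() for item in ingredients):
            found.append(allergen)
    return found

print(flag_allergens(
    ["Wheat flour", "Peanut oil", "Sugar"],
    ["peanut", "milk"],
))  # ['peanut']
```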
The second part of the app is the air allergen detector:
In this part of the app, using your location, the app finds the local Air Quality Index. In addition, it shows you a reference point, such as good, bad, or great, in terms of air quality. There are also options to visualize this data for pollen and pollution: you can open a heat map that visually shows air quality (intuitively, red being poor and green being good) on the air quality heat map, and likewise pollen concentration on the pollen heat map.
The last part of this app is the procedures tab, where you can view procedures for tackling various situations, such as determining whether you have an allergy and what to do when you, or someone in front of you, is suffering an allergic reaction. These links allow the user to quickly access resources and act fast.
How I built it
I built this app using Google's ML Vision Kit, the NativeBase UI library, react-navigation, Breezometer's Pollen, AQI, and Heatmap APIs, and the Nutritionix API to reference ingredients based on the name of an item.
For the first section, I built the classification of images and its results using Google's ML Vision Kit, which let me use an on-device ML model to identify the food item; the resulting name is then searched in the Nutritionix API to see if the food has the allergens I chose. I can also input the food name manually, which uses NativeBase UI elements.
For the second section, the pollen, AQI, and heat map data were all retrieved using Breezometer's Heatmap, Pollen, and AQI APIs and then displayed. I also use Expo permissions to retrieve the user's location.
For the last part of the app, it is a set of links whose UI I created with the NativeBase library.
Challenges I ran into
I faced many challenges, in terms of dependencies and UI, to incorporate an in-app ML Vision Kit model, which allowed the app to function so quickly. I spent quite a few hours resolving this and am happy I was able to get the project done. This was my first time dealing with Google Firebase's ML Vision Kit.
What I learned
I learned how to implement Google's ML Vision Kit inside a React Native application. I also learned how to pass data, such as images, between screens when navigating between them.
What's next for RADAR
I hope to release this application on the App Store sometime in the future, in order to help allergic people and those around them prevent allergic reactions and to raise awareness about allergies in general.
Built With
apis
breezometer
firebase-ml-vision-kit
ml
ml-kit
nutritionix
react-native
vision-kit
Try it out
github.com | AI/ML-RADAR | An App Raising Awareness and Detection of Allergens Regionally | ['Om Joshi'] | ['Clerky Lifetime Package', 'The Wolfram Award', 'Machine Learning/AI Prize (Gaming Mouse)'] | ['apis', 'breezometer', 'firebase-ml-vision-kit', 'ml', 'ml-kit', 'nutritionix', 'react-native', 'vision-kit'] | 1 |
10,572 | https://devpost.com/software/radius-zu7d26 | Our icy UI
Get started
Report Infection
Danger ZONES
Prediction Dashboard
Inspiration
There are people dying all over the world - pretty big motivation. Helping elderly find their way through COVID-19 Pandemic by avoiding infected and crowded locations. We can help each other by being good neighbors and reporting cases for the community.
What it does
With daily COVID-19 death tolls higher than ever, a major obstacle to recovery is the lack of information. Social distancing is hard when you don’t even know which locations have a high density of people, or which places have had infected visitors.
Our goal is to fill this lack of information by alerting users in real time to locations with confirmed cases, so that they can avoid them. This allows users to make a conscious choice to avoid certain locations, stopping contact with infections in the first place. Additionally, we use the Besttime API to forecast the safest time to visit a store days in advance by statistically analyzing trends in visitor count. These predictions allow users to avoid foot traffic in stores - a breeding ground for COVID.
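Choosing the safest visiting time from a foot-traffic forecast reduces to picking the minimum; a sketch with invented numbers (the real forecast comes from the BestTime API, and the app itself is written in Swift):

```python
def safest_hour(forecast):
    """Pick the opening hour with the lowest forecast visitor count.
    `forecast` mimics (hour, visitors) pairs such as a foot-traffic
    API might return; the data here is made up."""
    return min(forecast, key=lambda hv: hv[1])[0]

print(safest_hour([(9, 40), (12, 95), (15, 60), (19, 25)]))  # 19
```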
How I built it
This was built in two parts. The iOS app was written in Swift, and the main frameworks used were Core Location, Radar, and Firebase. The database used to store the data was Cloud Firestore, while the UI elements were from MapKit and UIKit. The Firebase queries were done on a background thread to avoid UI lag. We made GPX files in Xcode to simulate location and test our app features. We used SwiftUI to display the dashboard of predictions for store foot traffic. The predictions were based on data from the BestTime API.
Challenges I ran into
Location tracking with live firebase updates was difficult since the multithreading was complex. We had to sort out the UI vs background thread issue. Also, we had a tough time getting SwiftUI set up properly since this was our first main project with it.
Accomplishments that I'm proud of
The UI looks pretty solid in our opinion and there are a bunch of useful features on this app. Even if only one person is a good samaritan neighbor and reports a case of COVID, everyone in the area will be able to avoid that location for the incubation period. We're just really happy with coming up with the entire idea from scratch and converting it into a finished product.
What I learned
We experimented a lot with swiftui, which will help in future hackathons. We also used multiple API's (Radar, Besttime), which we can add to our toolkit in the future.
What's next for Radius
We're trying to include advanced machine learning algorithms to make our store population prediction even more accurate
Built With
bettertime
core-location
ios
mapkit
radar.io
swift
uikit
Try it out
github.com | Radius | Avoid COVID-19 infected locations and crowded locations | ['Yatharth Chhabra', 'Aditya Sharma'] | ['The Wolfram Award', 'Medical Hack Prize (Wireless Charging Pad)', 'Wolfram Award by Wolfram Language'] | ['bettertime', 'core-location', 'ios', 'mapkit', 'radar.io', 'swift', 'uikit'] | 2 |
10,572 | https://devpost.com/software/deltaxhacks-submission | https://www.youtube.com/watch?v=9J3biFhRMqg&feature=youtu.be
The largest challenges we faced while writing this program were dealing with the time constraints (and lack of sleep), as well as finding new solutions to problems we had not faced before, such as detecting collisions, getting reliable user input (especially from the mouse), and finding ways to work on the program together as a team. There was so much to be done when writing it from scratch, especially when long periods of time could pass before being able to test, just wondering if it would work, like at the beginning. This was so much fun to do, as the added time constraints made it a much more frantic and exciting experience.
We worked together in the same room for the entirety of this project only leaving for the absolute necessities, like donuts. Leo learned basically everything he currently knows about python from this project. We evenly traded off typing and watching (and sometimes we would both be typing). This has been one of the best learning experiences we have had while programming.
Note: Some of the developing procedures that we used are still in place, such as printing the location of the objects. The program still functions perfectly
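Collision detection in a dodging game like this is typically an axis-aligned bounding-box test; a minimal sketch (the game's actual collision code may differ):

```python
def rects_collide(a, b):
    """Axis-aligned bounding-box overlap test — the usual way a
    dodging game checks whether the player was hit.
    Rectangles are (x, y, width, height)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

print(rects_collide((0, 0, 10, 10), (5, 5, 10, 10)))   # True
print(rects_collide((0, 0, 10, 10), (20, 20, 5, 5)))   # False
```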
Built With
python
Try it out
github.com | Miscellaneous - <Running Game> | A dodging game, but with a twist: You get to control time. | ['Wiley Frank', 'Leo Brown'] | ['The Wolfram Award', 'Miscellaneous Prize (Power Bank)'] | ['python'] | 3 |
10,572 | https://devpost.com/software/voluntier-8osag2 | Landing
Dashboard
Inspiration
Many services are built to aid coordination between non-profits and volunteers, but due to ineffective UX/UI and no way to stimulate interest, they often fail. They get the job done, but not well. VolunTier is designed to have an intuitive yet clean design.
Additionally, it adds a competitive edge to volunteering not found anywhere else. Volunteers are able to see where they rank among those in their city and state.
What it does
VolunTier allows Organizers to post upcoming volunteering shifts for their organizations. Volunteers can view these upcoming shifts, and get personalized recommendations for where to volunteer next. In addition, volunteers can find shifts near their location, track their shifts on a calendar, view detailed statistics of their past volunteering activity unlock milestones as they progress, compete with other volunteers on an interactive leaderboard, and much more.
How I built it
We built a serverless JAMstack site using NuxtJS and Vue. We handled database interactions through AWS Lambda, and our database was FaunaDB. FaunaDB is an efficient GraphQL-powered data store that offers both powerful relational data structures and MongoDB-esque JSON-like data.
Challenges I ran into
We worked very smoothly in this hackathon. However, with a new tech stack comes bugs, which did take time to fix and get used to.
Accomplishments that I'm proud of
What I learned
We learned how to use AWS lambda serverless functions using Netlify. This simplified our workflow and development speed.
What's next for VolunTier
Invest money into to powerful hosting solutions, and build this project up from a prototype.
Built With
amazon-web-services
axios
buefy
bulma
css3
google-cloud
google-maps
html5
javascript
lambda
netlify
nuxt
vue
Try it out
github.com
voluntier.netlify.app | Community Service - VolunTier | Make volunteering a fun and competitive experience. | ['Raghav Misra', 'Pranav Subbaraman', 'Lehuy Hoang'] | ['The Wolfram Award'] | ['amazon-web-services', 'axios', 'buefy', 'bulma', 'css3', 'google-cloud', 'google-maps', 'html5', 'javascript', 'lambda', 'netlify', 'nuxt', 'vue'] | 4 |
10,572 | https://devpost.com/software/gogig | Inspiration:
I listened to a friend in band who wanted to start performing in public and sharing their talent. Nothing existed for connecting young musicians to gig opportunities. I had some experience with gigging at local venues, as many public areas are, in fact, wanting to hire young musicians.
What it does:
GoGig is a mobile app deployable on both iOS and Android that connects young musicians in a school to public venues willing to hire.
How I built it
The front end of the app was built with React Native and the backend uses Google Firebase as it was very easy to connect with React Native.
Challenges I ran into
Having database authorization, connecting both the musician side and the venue side
Accomplishments that I'm proud of
Taking action after recognizing a problem that related to people I know
It truly has the potential to help any young musician show their talent.
Very nice Logo design and UI that's very intuitive
What I learned
How to iron out an idea so that it was feasible
Connecting the app to a backend in Firebase
New styling techniques for a better user experience
What's next for GoGig
Deploying to the App store and organizing the data base
Built With
firebase
reactnative
Try it out
github.com | Community Service - GoGig | Enabling young musicians to get out there and gig! | ['Roy Hwang'] | ['The Wolfram Award'] | ['firebase', 'reactnative'] | 5 |
10,572 | https://devpost.com/software/trump-tweets-and-stock-prices-indices-dizuxo | Home screen
Direct link to Trump's twitter feed
Output from machine learning model
Our machine learning model
Inspiration
Our hero Elon Musk is famous for his tweets that affect Tesla Stock Prices. As I was scrolling through Twitter, I wondered if @realdonaldtrump's tweets affected the nation's "stock" in a similar fashion.
What it does
Our project allows users to see the projected effects on the economy of a theoretical Trump tweet. A user can input their own tweet or one from Trump himself and see it scored from 0 to 1 on the basis of whether it will increase or decrease the US Dollar Index.
How I built it
We collected data on DXY (the US Dollar Index) and Trump tweets from the Trump Twitter Archive. Then we labeled the tweets based on whether the DXY went up or down on the following day. After applying a natural language model, we used neural networks to finalize our model. From there, we adapted our model to PythonAnywhere and uploaded it. We built a Graphical User Interface (GUI) to make our model useful to the public.
Challenges I ran into
It was initially difficult to find stock index data online, but Yahoo Finance had a CSV to download. The same went for Trump tweets, since we didn't have Twitter API access, but the Trump Twitter Archive served our needs. Also, holidays and weekends did not show up in the DXY index, so we found a way to work around that. The Keras model was built using TensorFlow 2.3.0, whereas PythonAnywhere only supported TensorFlow 2.0.0. After building our model, we had to downgrade it for compatibility in order to deploy it.
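One way to handle the missing weekend and holiday closes when labeling tweets is to scan forward to the next recorded trading day. A minimal pure-Python sketch of that idea (the function and data layout are illustrative assumptions, not the team's actual code):

```python
from datetime import date, timedelta

def label_tweets(tweets, closes, max_gap=5):
    """Label each (tweet_date, text) pair 1 if the index closed higher on the
    next available trading day, else 0. `closes` maps date -> closing price;
    weekends and holidays are simply absent from it."""
    def next_close(d):
        # scan forward past weekends/holidays for the next recorded close
        for i in range(1, max_gap + 1):
            nd = d + timedelta(days=i)
            if nd in closes:
                return closes[nd]
        return None

    labels = {}
    for d, text in tweets:
        today = closes.get(d)
        nxt = next_close(d)
        if today is not None and nxt is not None:
            labels[(d, text)] = 1 if nxt > today else 0
    return labels
```

Tweets posted on a day with no recorded close (e.g. a Saturday) are simply skipped rather than mislabeled.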
Accomplishments that I'm proud of
We're proud of developing a fully functional model and deploying it for immediate usage, and making something creative out of these seemingly unassociated information sets.
What I learned
We learned how to use natural language models, and it was our first time deploying python in a web interface. We learned that there actually is predictability (we scored between 0.53 and 0.55) with Trump tweets and the US Dollar, which is very important for people to know.
What's next for Trump Tweets and Stock Prices/Indices
One thing that we noticed was that there were more "bad day" tweets than "good day" tweets. We're going to do an exploration into whether the volume of tweets from that account is associated with how the economy performs. In addition, we are planning to apply our concept and model to other influential figures.
Built With
flask
html
jupyter
keras
numpy
pandas
python
Try it out
deltaxhack.pythonanywhere.com | ML/AI - Trump Tweets and Stock Prices/Indices | Our project allows users to see projected effects on the economy from a theoretical Trump Tweet | ['Viraaj Reddi', 'Enoch Luk'] | ['The Wolfram Award'] | ['flask', 'html', 'jupyter', 'keras', 'numpy', 'pandas', 'python'] | 6 |
10,572 | https://devpost.com/software/food-safe | Inspiration
If you have food allergies like me, one has to constantly read dense nutritional labels to see if a specific product is safe to consume or not. Even if one does not have allergies, consumers are increasingly engaging with these labels to make informed choices about what they consume. Given the amount of information and how densely it is written, a casual trip to the local grocery might quickly develop into a case of migraine. I built this app to solve this problem.
What it does
My app is very simple. The user first enters his/her specific situation, i.e. what interests them in a nutritional label (e.g. allergens that they are allergic to, or total carbohydrates if they are diabetic and hence interested in monitoring them). Next, whenever they are considering buying a product, they just take a picture of the nutritional label, and the app magically extracts the information of interest to them and surfaces it as a concise nugget that is easy to read and digest.
How I built it
I built the app by first integrating camera access so I could enable the user to take pictures of the nutritional labels. Then I used the Google vision APIs (from the firebase-ml-kit) to extract all the text in those images. The rest was easy as I had to use some regexes to match the keys of interest to the user so I could extract the corresponding values (e.g. /Total Carbohydrates/ -> 37g). To bring it all together, I needed to understand how to use Android-studio, the general architecture of my app (how many activities and such), do the UI design, and learn how to use elements like check-boxes, toasts, and floods.
Challenges I ran into
I had to make tradeoff choices between local vs cloud based implementation of firebase-ml-kit. The quality from the local seemed just as acceptable as the cloud based solution. So, I chose the local solution as it was free and also saved the user round-trip time to-from the cloud hosted APIs on each request.
To my dismay, I also found that the text that seems to be so nicely laid out on the nutritional labels comes back from the vision API as a pretty incoherent blurb of text. I used the power of smart regexes to identify and extract the specific text of interest to the user.
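A sketch of how such regex matching over incoherent OCR output might look. The field list and patterns here are illustrative assumptions, not the app's actual code; the key trick is tolerating arbitrary whitespace between a label's words, since OCR often breaks lines mid-label:

```python
import re

# Fields of interest and a pattern for "<label> <number><unit>", even when
# the OCR output runs labels and values together in one blob of text.
FIELDS = ["Total Carbohydrates", "Sodium", "Protein"]

def extract_facts(ocr_text, fields=FIELDS):
    facts = {}
    for field in fields:
        # allow arbitrary whitespace/newlines between the label's words
        label = r"\s*".join(map(re.escape, field.split()))
        m = re.search(label + r"\s*:?\s*(\d+(?:\.\d+)?)\s*(mg|g)",
                      ocr_text, re.IGNORECASE)
        if m:
            facts[field] = m.group(1) + m.group(2)
    return facts
```

For example, the garbled blob "Serving Size 2 Total\nCarbohydrates 37g Sodium: 150 mg" still yields "Total Carbohydrates" -> "37g".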
Accomplishments that I'm proud of
I am very proud of how I used the Vision API in the couple of different ways and made quality, cost, and latency based choices to determine the right way of doing things. Using regexes was pretty innovative and helped me solve the big problem of dealing with garbled text returned by the API.
What I learned
Learned how to use the Google Vision API from the firebase-ml-kit.
I also learned a lot about regexes.
What's next for Food Safe
I want to add features like - allowing users to store nutritional labels of what they are buying so they can track consumption based on re-buys. I also plan to build some machine-learning models to more robustly analyze nutrition labels as the inaccuracies in text outputted from vision APIs can be best fixed by predictive ML models. I also plan to offer users the ability to compare similar products based on the information in their nutritional labels.
Built With
android
android-studio
firebase-ml-kit
google-vision
Try it out
github.com | Medical - Food Safe | This app takes an image of the nutrition labels on products and highlights nutritional information of interest to the user. This would be things like total carbohydrates and any allergens of interest. | ['Anshul Sinha'] | ['The Wolfram Award'] | ['android', 'android-studio', 'firebase-ml-kit', 'google-vision'] | 7 |
10,572 | https://devpost.com/software/vote-za4xe1 | What it does
This webapp does the following: allow users to make new friends, stay up to date on the news, engage in group political discussions, and vote for propositions and petitions.
How I built it
I used Python, Flask, HTML, and CSS to make this
Challenges I ran into
I tried learning Google Firebase for this project's backend; however, I didn't have enough time. I also didn't have time to finish part of the UI
Accomplishments that I'm proud of
I'm proud of finishing this in 5 hours and writing over 1000 lines of code
What's next for #VOTE
I would like to use Google Firebase for the backend
Built With
css3
flask
htm
ml
Try it out
github.com | #Vote | Allowing users to make friends while also staying update to date about politics, and helping out their community by voting | ['Neeral Bhalgat'] | ['The Wolfram Award'] | ['css3', 'flask', 'htm', 'ml'] | 8 |
10,572 | https://devpost.com/software/delta_hacks | The welcome screen
Check in screen
Queues page showing your number, and how many people are ahead of you in line
Inspiration
My inspiration for this project is the tendency of medical facilities such as hospitals to lag behind in terms of technology. With this virtual automated queue app, we will be saving more lives by freeing up healthcare workers' time to focus on more important tasks.
What it does
It saves time for healthcare workers by taking away a task that is usually time-consuming. On MediQueue, you can check into your hospital on the app instead of in person. Essentially, you either don't go to the hospital until it is your turn, or you stay in the car until you are next in line. This will make the check-in process for all hospitals easier, more convenient, and safer, allowing healthcare workers to focus on saving more people.
How I built it
We built it by creating 3 user interfaces in HTML and CSS. We created a welcome page, login page, and queue page with buttons and UI elements that allow the user to use the app without any difficulties. Our back end person linked all the pages to a server via flask. This means anything entered in the login form will be communicated to a database that is viewable to only the doctor. If the user decides to leave the hospital, he/she may press the leave button and their name will be removed from the database. Also, when the patient's appointment is done, the doctor manually removes their name from the list.
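The queue behavior described above can be sketched as a minimal in-memory class. This is an illustrative sketch with hypothetical names, not the project's actual code, which stores the queue in PostgreSQL behind Flask routes:

```python
class VirtualQueue:
    """Minimal in-memory sketch of the check-in queue."""

    def __init__(self):
        self._queue = []

    def check_in(self, name):
        # patient joins the back of the line; return their 1-based position
        self._queue.append(name)
        return len(self._queue)

    def position(self, name):
        # how many people are ahead = position - 1; None if not in line
        return self._queue.index(name) + 1 if name in self._queue else None

    def leave(self, name):
        # patient presses the leave button, or the doctor removes them
        if name in self._queue:
            self._queue.remove(name)
```

When the person at the front leaves, everyone behind them automatically moves up one position.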
Challenges I ran into
Some challenges we ran into were completing the database queries required for the system. We also had trouble with creating the login form and making the queue list work effectively. Hosting the website on Heroku was quite a challenge as well.
Accomplishments that I'm proud of
We were able to work as a big team of 4 to complete a web app within a short time frame by assigning roles and most importantly, communicating with each other.
What I learned
We learned how to work efficiently as a group and the benefits of cooperative programming.
What's next for MediQueue
We will give each hospital that signs up for our program a unique code, such that the link for the patient to sign up to only checks them into their hospital. We also hope to integrate AI to make it easier for people to log in, maybe by simply taking a picture or scanning their insurance card. Finally, we will also create a separate interface in which doctors can log in and see all the people in line instead of having to pull it from the program.
Built With
css
flask
heroku
html
javascript
postgresql
python
Try it out
github.com | Medical - MediQueue | MediQueue is a virtual queue application that allows patients to check in and check out of hospitals. | ['LEE ZHENG', 'Varun Venkatesh', 'Saurav Kumar', 'JOTHESH S P'] | ['The Wolfram Award'] | ['css', 'flask', 'heroku', 'html', 'javascript', 'postgresql', 'python'] | 9 |
10,572 | https://devpost.com/software/ml-s-p-price-predictor | Website Page Displaying the Data
Screenshot of the Jupyter notebook where the model was trained
Inspiration
We were very interested in looking at different factors and how they affect one another. While searching for COVID numbers we realized there was a spike in new deaths around the same time there was a large drop in the S&P 500 price. Wanting to use machine learning this was a perfect challenge for us.
What it does
We used a custom machine-learning algorithm to create a model for predicting the price of the S&P 500 for the following day. Our script uses the current day's COVID stats to predict the following day's S&P 500 numbers. Then it goes on to predict the numbers for the past 9 months and outputs some nice graphs comparing the true values to the predicted ones.
How we built it
We built it using the industry-standard TensorFlow 2.0 by Google, along with Python and React.js. Together, these make up the machine learning algorithm (TensorFlow with Python) and the website (React in JavaScript).
Challenges we ran into
Some challenges we ran into include finding data that had any correlation or are good for Machine learning, communication with the front end and making graphs, and tuning the machine learning algorithm to find the best results. These challenges forced us to learn valuable data science skills and apply them to real-world examples.
Accomplishments that I'm proud of
We are proud of our algorithm that predicts the values, the graphs that display the values, and the algorithm that makes the model. We put a lot of time into all of these components and many others and are very happy and proud of the outcome.
What I learned
We learned a lot from this project. One thing that we learned was how to use a machine learning model as the back end for a website with node.js front end. We also got experience with the various file formats for the machine learning libraries, such as .h5 to store TensorFlow models and .save for saving scalers. Finally, we learned that even though Google Colab lets you use their higher-powered machines so learning is faster in theory, in many cases such as ours, running it locally in a Jupyter notebook can actually be much quicker.
What's next for ML S&P Price Predictor
The next step we want to take for ML S&P Price Predictor would be to use it for a stock trading bot based on the data for US COVID cases. We would enter the stats on COVID from the day, have the machine learning model predict the closing price of the S&P 500, and then make a one day bet on the fund based on that.
Built With
javascript
python
react
tensorflow
Try it out
github.com
github.com
github.com | ML/AI - S&P 500 Price Predictor | We used machine learning and the last 9 months of Covid-19 and stock market data to predict the S&P 500. | ['Maxwell S.', 'Sammy Taubman'] | ['The Wolfram Award'] | ['javascript', 'python', 'react', 'tensorflow'] | 10 |
10,572 | https://devpost.com/software/edulligence | Student's Dashboard View - iOS Application
Teacher's Dashboard View - Website (hosted using domain.com)
Questions Queue View - iOS Application
Using MongoDB hosted in Google Cloud for the database
Inspiration
The purpose of education is for people to learn. But how do we uphold that idea if the current school system doesn't make it feasible for students to ask questions? Imagine you are in a giant lecture hall with 400+ students, and you have no idea what the professor was talking about. You wanted to ask questions, but you are too shy and afraid of it being considered a "dumb question".
As students, we tried to analyze the fundamental reason why engagement between teachers and students is so minimal in most classrooms. In most cases, even the teachers themselves want the students to ask them questions! The best thing is, this is not only you. Most people also feel the same way. We understand this because we are all students that have been through that situation and want to break through the boundary of engagement! Thus, we developed ClassInsights, a real-time classroom engagement application designed for the betterment of learning!
What it does
The iOS mobile application allows students to rate their understanding from 'What are you talking about?' to 'Easy peasy lemon squeezy!' In addition to that, we also allow students to ask questions to be shown immediately on the professor's web dashboard. We allow flexibility for the students to choose whether they want to ask questions with their names attached or anonymously, utilizing the computational knowledge API from Wolfram Alpha. That way, we can eliminate the students' shyness and fear of their questions being considered dumb, despite them not being so!
The web application provides an intuitive and powerful dashboard for the professor. It features the live average understanding level of the students in the class, so the professor can have an idea of how well the students are keeping up with the lecture! The professor can view the list of questions asked by the students, so that he/she can go over specific topics on the board that may confuse students. Even cooler than that, the website uses complex algorithms to summarize the numerous questions into related keywords! Thus, it is useful for professors to analyze the classroom's understanding and go over specific topics of the lecture.
How we built it
We developed a mobile application for the students and a website for the professors. We used the Swift programming language to deliver a native experience for iOS! We also used HTML, CSS, and PyMongo for the website to provide users the functionality we want to present. We also used domain.com for our customized domain name for our project!
We relied on MongoDB as our database storage to store the list of questions and the classroom understanding levels, hosted on a Google Cloud server. We also implemented the Google Cloud Natural Language Processing API to obtain the keywords that we want (which are obviously the ones related to the subject!) and the MongoDB Query API to retrieve the latest list of questions and keywords, and to process the real-time average understanding level among hundreds of students to be displayed in the dashboard!
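A live class-average query like the one described can be expressed as a small aggregation pipeline. This is an illustrative sketch only: the `ratings` collection and its field names are assumptions, not the project's actual schema. The pure-Python helper mirrors what the `$avg` stage computes server-side:

```python
# Hypothetical collection `ratings` with documents like
# {"class_id": "CS101", "student": "a1", "level": 4}
avg_pipeline = [
    {"$match": {"class_id": "CS101"}},
    {"$group": {"_id": "$class_id", "avg_level": {"$avg": "$level"}}},
]

def average_level(docs, class_id):
    """Pure-Python equivalent of the pipeline, for illustration."""
    levels = [d["level"] for d in docs if d["class_id"] == class_id]
    return sum(levels) / len(levels) if levels else None
```

In a real deployment the pipeline would run via `collection.aggregate(avg_pipeline)` so the averaging happens in the database rather than in the app.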
Challenges we ran into
It was tiring and took us around two hours just to link the MongoDB database to the iOS application! We also found it challenging to create a query in MongoDB that suits our needs, which is to obtain the keywords from the list of questions directly in MongoDB! We also spent hours understanding the Natural Language Processing API to be applied after we got the query results (the list of questions) ready!
Accomplishments that we're proud of
For the product, we are glad that we can integrate both the website and the iOS application with the MongoDB database and its powerful and customizable queries (the Google Cloud Natural Language Processing API is very cool!) During the process, however, we feel immensely proud that we can collaborate and work with each other despite us never meeting each other in person. To solve this, we hosted numerous video calls during the 36-hour period.
What we learned
A clear and well-understood blueprint and planning for all are vital for a collaborative project!
What's next for ClassInsights
Provide more accessibility features for disabled students, able to collect class attendance, and work with numerous universities and colleges that suffer the engagement problem.
Built With
css
domain
google-cloud
html
mongodb
natural-language-processing
python
query
swift
uikit | ClassInsights | Solving the fundamental problem of classroom engagement in schools! | ['Michael Winailan', 'Tremael Arrington', 'Jaideep Cherukuri', 'Muntaser Syed'] | ['The Wolfram Award'] | ['css', 'domain', 'google-cloud', 'html', 'mongodb', 'natural-language-processing', 'python', 'query', 'swift', 'uikit'] | 11 |
10,572 | https://devpost.com/software/smartfridge-reduce-food-waste | SMART FRIDGE
Mange Your Fridge Smarter
All Ingredients in 1 App
Meal Planner - Plan Ahead - Save Time
Auto-generate Shopping List
Inspiration
In the United States, food waste is estimated at between 30-40% of the food supply (figure from the FDA). Our biggest inspiration stems from our concern about the environment and how simple lifestyle changes can make us more responsible about our consumption. The SmartFridge app is a solution that makes meal planning convenient, intuitive, and sustainable. We created a logistics app that lets users monitor their food resources with ease. By scanning users' food inventory at home via picture input, the app classifies its users' food into categories, comes up with suggested cooking recipes depending on the available food, prioritizes food that will go bad soon, and sends out alerts once the user's fridge is running low or items are about to expire. With the aforementioned features, the SmartFridge app is the all-in-one solution for people to keep track of their fridge, have a more diverse meal plan, and reduce personal food waste.
What it does
SmartFridge has 5 main tabs, which are "Scan", "Ingredient", "Meal Plan", "Shopping List", and "Profile" tabs.
Scan: Scan images of food ingredients and add them to inventory. Integrates image-processing functionality through TensorFlow.js to optimize the user input step.
Ingredient: Lets users easily monitor the available food in their home kitchen.
Meal Plan: Suggests recipes that match existing ingredients, bookmarks favorite recipes, and manages the weekly meal plan.
Shopping List: Auto-populated with ingredients the user still needs for the meal plan. Integrates the Google Maps API to navigate to nearby grocery stores.
Profile: Summary status of the fridge, such as remaining capacity, a prediction of the number of days until the next shopping trip, the number of items expiring very soon, and management of ingredients used in the week's meal plan.
How we built it
We built a cross-platform mobile-app using React Native and Expo. We also used spoonacular API to suggest user with cooking recipes, Google Map API to locate nearby stores, and TensorFlow.js for image processing. The back-end of the app is managed through SQLite and Redux. Its main function is to record the food resources of the users and interact between different components of user interaction.
What we learned
React Native, JavaScript, Expo, spoonacular API, Google Map API, SQLite, Redux, TensorFlow.js
What's next for SmartFridge - reduce food waste
There are still many functionalities we want to add or optimize. We will need to make improvement to our front-end for a better user interface, and build our own cloud database to store more high-quality recipes and data about our users' preferences.
Built With
expo.io
google-maps
image-processing
javascript
node.js
react-native
redux
sqlite
tensor-flow
Try it out
github.com | SmartFridge - Reduce Food Waste | Eliminate Food Waste - Promote Better Health - Meal Planner - Quality Life | ['Nom Phan', 'Quang Luong', 'Blake Hieu Nguyen', 'Ari Nguyen'] | ['The Wolfram Award', 'The Benefits and Costs of Going Digital (Boomi, a Dell Technologies business)'] | ['expo.io', 'google-maps', 'image-processing', 'javascript', 'node.js', 'react-native', 'redux', 'sqlite', 'tensor-flow'] | 12 |
10,572 | https://devpost.com/software/prescribemate-vyruje | Inspiration
The prompt brought some interesting possibilities to the table, so we started looking at some issues that regular people have with the medical system. Immediately, what came to mind was the amount of medication some patients are prescribed after a simple visit. It’s disorienting, and both of us have related to the confusion associated with having to use multiple medications a day. At PrescribeMate, we want to consolidate the process of prescribing and consuming medication, as well as simplify vital information needed for patients.
What it does
PrescribeMate is a well-rounded application. We allow patients or doctors to manually input all of the information regarding their drugs: type of medicine, dosage, and the intervals needed between the doses. We use this information to remind the patients when the time comes to take their medicine, but we also cross-reference the drugs between each other. When we do this, we can find any interactions between the drugs and warn the patient to check with their doctor before consuming anything, minimizing the risk of unintentional misuse. We also display the adverse effects of any medications you take, so that patients can identify the cause of ailments that appear when on meds. PrescribeMate can also refill prescriptions and remind you to pick medicine up from your pharmacy, greatly reducing the time and effort needed when prescribed medication. In risky situations, when you might need medical help or are trying out something new, it is good to have an idea of meds you are taking, so our platform is also great to use as a shortlist for all of a patient’s medication.
How we built it
PrescribeMate was built upon the Angular Framework as the frontend and Node.js as the backend, with Firebase serving as the database system and the method of hosting the application. Bootstrap and TailwindCss were both used to style the application, allowing for smooth, easy-to-use user experience. The application utilizes the openFDA API in order to get realtime medicinal data. This way, users have access to the latest information on the drugs in circulation.
Challenges we ran into
The largest challenge was querying the data from the openFDA API. Though retrieving the raw data itself was not a hindrance, creating and combining several queries to allow for cross-referencing the medicines with one another was an incredibly tedious process. Even finding an API that would allow us to get data of such bulk for free was difficult, as most APIs either cost money, or they would not provide the data that we needed. The decision to use bootstrap and Tailwindcss made styling a slight challenge. We wanted to use features from both libraries, though they conflicted at times.
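Cross-referencing medicines against one another can be done by searching each drug's interaction label text for the other drugs' names. This is a simplified, hypothetical sketch of that idea, not the project's actual openFDA query code (and the sample label texts below are made up):

```python
from itertools import combinations

def find_conflicts(interaction_text):
    """Given {drug_name: interaction label text}, flag pairs where either
    drug's label mentions the other drug. A naive keyword cross-reference."""
    conflicts = []
    for a, b in combinations(interaction_text, 2):
        if (b.lower() in interaction_text[a].lower()
                or a.lower() in interaction_text[b].lower()):
            conflicts.append((a, b))
    return conflicts
```

A real implementation would pull each label's `drug_interactions` field from the openFDA drug label endpoint and would need fuzzier matching for brand vs. generic names.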
Accomplishments that we're proud of
We're proud of how we've created a platform that helps not only patients but also doctors and pharmacists, better connecting all three aspects of healthcare into one harmonious system. While there are systems in place that connect doctors with pharmacies, no system engages the patient as well. The user experience is also something we are proud of, as we have designed a smooth, easy-to-use application that is visually appealing to users.
What We learned
Throughout this hackathon experience, we were able to learn about cool tools and technologies to help support this application with all of its features. We learned how to effectively pitch an idea by incorporating personal experiences and researching the best method of addressing an issue at hand. On the technical side, we learned how to combine multiple styling libraries into one seamless design, and we learned how to incorporate official data from the FDA and present it in a manner that is easy to read and understand.
What's next for PrescribeMate
We think that PrescribeMate has a great path forward and a lot of room to grow as a utility for both doctors and patients. At some point, we see this platform being used as a direct line of communication between you and your doctor. This will simplify the treatment process tenfold, and make the recovery process easier on you and your doctor. We would also like to better cater to mobile users by creating a dedicated application on both iOS and Android, which would provide an enhanced mobile experience in comparison to a progressive web app.
Built With
angular.js
bootstrap
firebase
html5
note.js
openfda
scss
tailwind
typescript | PrescribeMate (MEDICAL) | At PrescribeMate, we understand that it's hard to keep up with the copious amounts of medication given you by doctors. We want to make it easier know what you're taking and when to take it. | ['rahul-rajamani Rajamani', 'Vrishank Viswanath'] | [] | ['angular.js', 'bootstrap', 'firebase', 'html5', 'note.js', 'openfda', 'scss', 'tailwind', 'typescript'] | 13 |
10,572 | https://devpost.com/software/food-nutrition-facts | Inspiration
We came up with this idea after seeing that one of the categories was Health. We tried to think of food related programs or extensions we could make, and settled on this idea.
What it does
The extension can find Nutrition facts for a product when the UPC (barcode number) of the item is provided.
How I built it
We built this using the knowledge from the workshops and lessons, and lots of googling. We used HTML, JavaScript, and CSS to make it all work. We worked on the extension together using repl.it.
Challenges I ran into
We had to experiment with using different food database APIs, but many of them stipulated a finite number of calls in a period of time (usually one minute, one hour, or one day), so we constantly ran out of API calls and had to create many accounts to test our code.
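A common workaround for per-minute call quotas, besides creating extra accounts, is to throttle requests on the client side so calls are spaced out to stay under the limit. A small illustrative sketch (not from the project; the injectable clock/sleep parameters just make it testable):

```python
import time

class Throttle:
    """Space out calls so a free-tier quota (e.g. 10/minute) isn't exceeded."""

    def __init__(self, calls_per_minute, clock=time.monotonic, sleep=time.sleep):
        self.interval = 60.0 / calls_per_minute
        self.clock, self.sleep = clock, sleep
        self.last = None

    def wait(self):
        # block until at least `interval` seconds since the previous call
        now = self.clock()
        if self.last is not None:
            remaining = self.interval - (now - self.last)
            if remaining > 0:
                self.sleep(remaining)
                now = self.clock()
        self.last = now
```

Calling `throttle.wait()` immediately before each API request guarantees the spacing without any account juggling.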
Accomplishments that I'm proud of
We were able to build something in only two days, even though we had very little knowledge of JavaScript, or building extensions in general.
What I learned
We learned a lot about how to use JavaScript, HTML, and CSS, as well as problem solving and debugging skills.
What's next for Food Nutrition Facts
We were planning on making this extension work with only the names of foods or products, but we couldn’t make it work in time. Maybe in the future we can make it work and improve this extension.
Built With
css
html
javascript
Try it out
github.com | Medical - Food Nutrition Facts | Get Nutrition Facts for a product | ['Owen Schafer', 'Shreyas Raghunath', 'Declan Chamberlain', 'Thalen Abadia'] | [] | ['css', 'html', 'javascript'] | 14 |
10,572 | https://devpost.com/software/beautiful-bird-world | Inspiration
I love the bird community. Last year, I visited the Cornell Lab of Ornithology with my parents.
They are dedicated to advancing the understanding and protection of the natural world. At that time, I got inspired to do a project on Beautiful Bird World. Birds carry a message of kindness and peace. As human beings, we should help the bird community make their world more beautiful.
With all kinds of pandemic situations going on, I wanted to provide the community with natural relaxation tools and at the same time urge them to help the bird community.
What it does
The Beautiful Bird World website helps the community reduce their anxiety during a pandemic and shows ways you can connect with birds.
The purpose of Beautiful Bird World is to give information about various kinds of bird species and show how we can save the bird community during a wildfire. It also provides interactive games and research tools related to birds.
Features-
Bird species information
Bird research information
Interactive bird games
Wildfire impact on birds
A natural healing tool for the community during a pandemic
Art's connection with birds
Donation
How I built it
HTML5 and Javascript
Challenges I ran into
I am a junior in high school and worked alone on the project.
Time was a major challenge in doing this project
Accomplishments that I'm proud of
I tried my level best to give my thoughts to the community about birds
What I learned
While doing this project, I connected myself more with birds
What's next for Beautiful Bird World!
I want to do more research on birds using AI tools
Built With
html5
javascript
Try it out
github.com
beautiful-nature-1.sjj3.repl.co | Beautiful Bird World! | We wanted to get connected to beautiful bird world | ['Sally Jain'] | [] | ['html5', 'javascript'] | 15 |
10,572 | https://devpost.com/software/deltaxhackathon | DeltaXHackathon
Creators: Neal Malhotra, Avik Belenje, Nikhil Mathihalli
Welcome!
This is our medical project submission for the September 26-27 DeltaX Hackathon.
In this project, we aimed to create a website that uses a combination of Machine Learning, APIs, and python code.
We used a few languages, including Python, HTML, and JavaScript. We used Python and a little bit of JavaScript for the backend part of the website. The backend is the part of the website where all of the inputs are actually processed. For example, if you clicked a button on a website, it wouldn't do anything without backend code telling it to perform an action when the button is pressed. We then used HTML for the frontend, which makes everything appear on the page.
But what actually is our project?
Well, our project is aimed to help primary care physicians and patients alike. It lets patients be pre-diagnosed, saving doctors and patients time. This is useful because 20% of all COVID cases are caught in hospitals, so it is understandable why people want to avoid going to one, or if they have to, to spend as little time as possible.
To run our project, clone the repo and open it in Visual Studio Code. In the built-in terminal of Visual Studio Code, type "python first.py", or if that doesn't work, "python3 first.py". The terminal should then give a link to the website, which you can use as you wish. The link given will be a string of numbers.
Built With
api
css
html
html5
javascript
powershell
python
shell
visual-studio
Try it out
github.com | Medical - Mediform | A way for patients to complete a form before reaching the hospital, so that patients can already have a tentative diagnosis, saving time for doctors and patients. | ['Neal Malhotra', 'Nikhil Mathihalli', 'Avik Belenje'] | [] | ['api', 'css', 'html', 'html5', 'javascript', 'powershell', 'python', 'shell', 'visual-studio'] | 16 |
10,572 | https://devpost.com/software/melody-lg60ic | Introduction Page
Instructions Page
Results Page
Inspiration
Melody is a product that all of our members hold near and dear in our hearts. Our inspiration stems from all of the young artists in our lives that we personally know and support. We wanted to create a platform to help them promote their music because current platforms tend to be oversaturated, which makes it difficult for new local artists to truly share their music with an audience.
What it does
To solve this issue, we built an app that allows users to discover new up-and-coming artists on Spotify based on their preferred genre. It uses the Spotify API to read user preferences and then automatically suggests a local artist based on an amalgamation of the user's recently played songs.
How we built it
We built the app using the Spotify API to provide a sign-in option for users. We used Java and XML to code our program on Android Studio to create the app, Figma to design our interface, and Adobe Illustrator to design our logo.
Challenges we ran into
One of the biggest challenges we ran into was the feasibility of our product. Our first idea, a matchmaking service based on music taste, would've taken far too long. Manually putting user data online would've taken more time than we had under the given time constraint; as a result, we had to improvise and make adjustments to our project.
Accomplishments that we're proud of
We are proud of participating in our very first hackathon! Some of our members have extensive experience with code, while others did not. Creating a space for teamwork and learning is one of our biggest accomplishments! We also are very proud of collectively creating the first draft of our fully functioning app!
What we learned
As a team, we have learned so much more about coding practices. We have only begun to learn about the basics of Android Studio which most of our team have never worked with before. We learned how to use Figma, a Vector Graphics Editor to design our app interface and logo. We all also gained insight into how to implement API into code and app design.
What's next for Melody.
In the future, Melody. will include more features to help narrow down the potential new artists that our users are matched with. These features include regions and artists similar to the genre, accessed via the API’s popularity ranking feature. Melody will also offer a matchmaker service that connects our users with other Spotify users based on their similarity in music taste. After the user accepts their match, they will have the option to curate a playlist for their new music penpal via the Melody app and connect with them through other social platforms such as Instagram.
Built With
adobe-illustrator
android-studio
api
figma
java
xml
Try it out
github.com | Miscellaneous-Melody. | Melody is an interactive app that allows Spotify users to maximize their listening experience by helping them discover and support new up and coming artists based on their listening profile. | ['Palpasha Karki', 'Mandy Lieng', 'Lauren Wagner', 'Bronwyn Busby'] | [] | ['adobe-illustrator', 'android-studio', 'api', 'figma', 'java', 'xml'] | 17 |
10,572 | https://devpost.com/software/lifehax-lf | Inspiration
I've always wanted to find a central site for all sorts of useful life hacks, so I decided to create my own.
What it does
It has pictures and links to other websites with useful life hacks.
How I built it
I used repl.it to help create this website.
Challenges I ran into
There were a lot of features I did not know how to implement at first, but I have learned a lot from this project.
Accomplishments that I'm proud of
This is my first real website that I built.
What I learned
I learned how to implement many different features into this website.
What's next for LifeHaX-LF
I plan to continue adding more categories and sites, as well as making it easier to navigate. I would also like to add some of my own hacks to the website.
Built With
css
html
Try it out
LifeHax.lfbunny.repl.co
github.com | Miscellaneous Hack - LifeHaX Website | The LifeHaX website is a good source of useful life hacks. | ['Lisa Fung'] | [] | ['css', 'html'] | 18 |
10,572 | https://devpost.com/software/game-for-pos-interpretation | Our game provides categories for kids to sort a word the parent comes up with, and then the child sorts it into a category of noun, verb, or adjective. Once the kid has moved the word to the category they think it belongs in, the parent can press a button that states whether or not the kid is correct. A counter then goes up or down, depending on whether or not the answer was correct or not.
Built With
css3
html5
javascript
Try it out
github.com | Community Service - Learning Parts of Speech | Want to teach your kid Parts of Speech without worrying about them drifting to other websites? Try our game, which is a fun way to teach your kid parts of speech yourself! | ['Rebecca Hollis', 'shannon j', 'Kevin Lu', 'Alex Yang'] | [] | ['css3', 'html5', 'javascript'] | 19 |
10,572 | https://devpost.com/software/borderhacks-l8rkaz | Online testing of certain conditions so that they don’t have to go to hospital.
Would be useful for progressive conditions like dementia, Parkinson’s, muscular dystrophy, etc. because they get worse over time.
For example
Online Parkinson’s test (there’s some data on typing speed to check for response time of the user, etc. for Parkinson’s patients),
Online vision test (where user needs to read some text from a distance and get sound feedback),
Cognitive test (a questionnaire about common symptoms),
etc.
Dementia and Parkinson's in particular would be interesting, since they generally affect older people, who are more susceptible to COVID, and it would be better for them to stay out of hospitals.
Built With
css
database
html5
php
Try it out
github.com
xd.adobe.com | YourDoctor - Community Service | Online testing and diagnosis of diseases made easy | ['Muskan Gupta', 'Prasad Kumkar'] | [] | ['css', 'database', 'html5', 'php'] | 20 |
10,572 | https://devpost.com/software/test-pztgx0 | c# code
Design UI
Test
Inspiration
We wanted our project to be fun, short, and well-developed.
What it does
The user chooses radio buttons for 10 questions about their personality, and the program determines which character's buttons are pressed most. Along with a text box result, the UI displays a picture of the character from the motion picture and a pie chart showing the share of each character's score. There are submit, reset, and close buttons, each performing its stated function.
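The tallying behavior described above can be sketched as below. This is a hedged Python stand-in (the app itself is C#/.NET), and the character names and share calculation for the pie chart are assumptions:

```python
# Illustrative tally of the quiz: each question's chosen radio button maps to
# a character, and the character picked most often wins.
from collections import Counter

def quiz_result(answers):
    """answers: one chosen character name per question."""
    counts = Counter(answers)
    winner, _ = counts.most_common(1)[0]
    # fraction of total picks per character drives the pie-chart slices
    shares = {c: n / len(answers) for c, n in counts.items()}
    return winner, shares
```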
How I built it
Using Microsoft's .NET framework and Visual Studio, my team was able to create a UI containing all the buttons and display interfaces, along with the code.
Challenges I ran into
Making sure the reset button reset certain parts and not others was one of the challenges we faced. Another was getting the image to change depending on the result.
Accomplishments that I'm proud of
My team and I are proud of creating a test with so many display options and decent code. Our proudest moments were creating the pie chart and image box.
What I learned
I learned how to utilize many of Visual Studio's controls and code classes in C#.
What's next for test
For further development, 'test' may be given more options for different tests from different movies, or a cleaner, more modern GUI feel.
Built With
.net
c#
visual-studio
Try it out
github.com | misc - test | A short personality test to determine which character you are from Disney's Frozen | ['Vijay Kethanaboyina', 'Aadhavan Magesh', 'James Dinh', 'Aidan Brown'] | [] | ['.net', 'c#', 'visual-studio'] | 21 |
10,572 | https://devpost.com/software/test-dxhacks | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for test DXHacks
test
Built With
test | test DXHacks | test | ['Delta X'] | [] | ['test'] | 22 |
10,572 | https://devpost.com/software/am-i-dying-vumqde | Inspiration
I was inspired by my teammate's summer project which is a program that suggests recipes based on inputted ingredients
What it does
My team's project prints out a list of diseases that the user may have based on the symptoms they input into the user interface
How I built it
This project was built through lots of teamwork. The UI was created and is operated using the tkinter module. The diseases are stored in a list that was created essentially through grueling data entry. The illnesses and conditions are actually objects of the Disease class I created. The class has several attributes, including symptoms, transmission, disease type, and rarity. Although only the symptoms and rarity were required to calculate and sort through the diseases, the other information is there to serve as a teaching tool.
Challenges I ran into
One large challenge was the user input. Because there are so many symptoms a person could have, a drop-down menu simply wasn't feasible, so we ended up with text input by the user. The challenge there is capitalization, spelling errors, and everything in between. To deal with this, our program creates a list of symptoms based on the cataloged diseases, and any input is compared against that list to see if it matches; if it doesn't, the program throws an AssertionError.
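The validation approach just described, together with the Disease class and its likelihood method, might look roughly like this sketch. The catalog entries, rarity weighting, and helper names are illustrative assumptions; only the overall approach follows the text:

```python
# Sketch of the catalog-derived symptom validation and assumed scoring.

class Disease:
    def __init__(self, name, symptoms, rarity):
        self.name = name
        self.symptoms = [s.lower() for s in symptoms]
        self.rarity = rarity  # higher = rarer

    def likelihood(self, reported):
        """Assumed scoring: symptom overlap, down-weighted by rarity."""
        overlap = len(set(reported) & set(self.symptoms))
        return overlap / len(self.symptoms) / self.rarity

CATALOG = [
    Disease("Influenza", ["fever", "cough", "fatigue"], rarity=1),
    Disease("Strep throat", ["sore throat", "fever"], rarity=2),
]

# The known-symptom list is built from the cataloged diseases, as described.
KNOWN_SYMPTOMS = {s for d in CATALOG for s in d.symptoms}

def check_symptom(text):
    """Normalize user input and assert it matches a cataloged symptom."""
    symptom = text.strip().lower()
    assert symptom in KNOWN_SYMPTOMS, "unrecognized symptom: " + symptom
    return symptom
```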
Accomplishments that I'm proud of
I'm most proud of the Disease class, as I feel like it is really the heart of the project. It contains the method to calculate how likely the disease is, and all the information about the different diseases is held in Disease objects.
What I learned
I learned a bit about how to handle and create UI's so that I could help my team mates who created the UI to debug and reformat it to make the project look and work better.
What's next for Am I Dying
The next steps for this project would be to add more diseases to our disease library because we currently have less than 20 diseases. In addition to adding to the library, I would like to refine the information going forward, as well as refine the calculation process that determines what diseases a person may have.
Built With
python
repl.it
tkinter
Try it out
github.com | Medical-Am I Dying | One stop shop for symptom diagnosis | ['Aidan Herz', 'Damian Arzakel', 'Ashley Redhead', 'Hung Pham'] | [] | ['python', 'repl.it', 'tkinter'] | 23 |
10,572 | https://devpost.com/software/quicknotes-jp5ceb | What it does
QuickNotes is a website where you can quickly and easily log your mood each day.
How we built it
We built it using react, react bootstrap, styled components, react router, and recharts, all of which we had never used before.
Challenges we ran into
A few challenges were form submission, storing data, and displaying the mood history, as well as creating the pie chart to show mood trends. All of these were things we hadn't done before, and that took us a while to figure out.
Accomplishments that we're proud of
We accomplished our goal and overcame all our challenges. This was something we really didn't know if we'd be able to finish in the time given because we were so inexperienced with what we were trying to do. Being able to work past the myriad of challenges and errors was really satisfying.
What we learned
We learned how to make a website with React Bootstrap, and deepened our knowledge of Javascript and HTML. We learned a lot of critical-thinking skills as well, and how to use some unconventional ideas to bring an idea into fruition.
What's next for QuickNotes
To continue, our next steps would be to add the option to choose 0-100% of small moods, such as nervousness or joy, to give a more in-depth report and be able to show relevant information for users in the statistics page. Overall, we want to be able to extract more information easily, and use that to help users feel better each day. That might include recognizing thinking traps or prompting users to re-think a situation and write about it some more.
Built With
bootstrap
html
javascript
react
react-router-dom
recharts
styled-components
Try it out
github.com | Medical - QuickNotes | A website where you keep track of your mood. A report is given of all your past responses and you can also see a chart of your mood. | ['Chase Treadwell', 'Derek Casini'] | [] | ['bootstrap', 'html', 'javascript', 'react', 'react-router-dom', 'recharts', 'styled-components'] | 24 |
10,572 | https://devpost.com/software/in-school-contact-tracing | The data that can be accessed by entering the teacher password.
The interface that is used to sign in and out.
We were inspired to build a webpage that could help our school keep track of who is in what space and for how long, to be able to effectively contact trace when we go back to school. It was specifically made to run on the old iPads our school is phasing out for Chromebooks, to reduce the cost of implementing our solution. It was challenging creating a website that both looks good and effectively stores and manages data. The biggest thing we learned is that it takes perseverance and teamwork to make complicated systems like our project.
Built With
css3
html5
javascript
Try it out
deltax.deltaxpeak.repl.co | Medical - In School Contact Tracing | Contact tracing is a very tedious process. Our system - running iPads - will streamline this process by easily locating students who had contact with someone who tested positive for COVID-19. | ['Ethan Harris', 'Kyle Jager', 'Mason Rein'] | [] | ['css3', 'html5', 'javascript'] | 25 |
10,572 | https://devpost.com/software/express-98vrdl | Circuit Design
Model
Model
Inspiration
Nearly everyone faces hardships and difficulties at one time or another. But for people with disabilities, barriers can be more frequent and have greater impact.
I got the inspiration from the device Stephen Hawking had to use to communicate when he was diagnosed with motor neurone disease. He was able to move only one finger, and he had to tap on a key to form words, which were then spelled out through a screen and speaker.
During this hack, I had an idea to relate this to people with mutism. In this case, people could use their complete hand and finger movements, which could make communicating a lot faster.
This hack reminded me of the physically disabled in our community who still aren't benefited by technology, even some of the students from our university whom I used to see daily.
Stephen Hawking is a true example that even people with disabilities are equally capable and deserve to be treated equally and have equal opportunities.
What it does
The user has to wear this device and tap on their fingers with other fingers, just like tapping on a virtual keyboard present on their hand. These key presses are then converted into characters, which can be seen on the screen present on the back of the user's hand. An autocomplete algorithm is used, which can complete words or even sentences.
The image below shows the key-points (circles) where the user can touch to register a keypress. This shows a possible combination of 20+ keys.
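The keypress-to-text pipeline above can be sketched as follows. This is a hedged Python stand-in (the device firmware is embedded C on an ESP32); the key map and word list are placeholders:

```python
# Each tapped key-point maps to a character, and a simple prefix search
# provides autocomplete suggestions over a word list.

KEY_MAP = {0: "a", 1: "b", 2: "c", 3: "d"}  # the device has 20+ key-points
WORDS = ["hello", "help", "world"]

def type_keys(presses):
    """Convert a sequence of registered key presses into characters."""
    return "".join(KEY_MAP[p] for p in presses)

def autocomplete(prefix):
    """Suggest full words for the partially typed prefix."""
    return [w for w in WORDS if w.startswith(prefix)]
```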
Sign Language Recognition:
Flex sensors are placed on the back of each finger to measure each finger's bending. Measurements from the flex sensors and gyroscope (hand rotation) can be used as input to our recurrent neural network to predict the sign language and its meaning.
How I built it
This device is built by 3D printing it and attaching hardware to it. Code for the hardware is provided in the repository above. This code needs to be uploaded to an ESP32 development board, which has to be attached to the components below.
For the simulation I used the following tools:
Circuito
for designing the circuit.
Tinkercad
to simulate the designed circuit
Sketched and ideas are drawn showing the core functionalities of the device.
The circuit requirements:
ESP32 - DevKitC [Qty: 1]
LCD Display 20x4 I2C [Qty: 1]
SparkFun MPU-6050 - Accelerometer and Gyro [Qty: 1]
Lithium Polymer Battery - 3.7v [Qty: 1]
Mini Pushbutton Switch [Qty: 8]
10K Ohm Resistor [Qty: 8]
USB micro-B Cable - 6 Foot [Qty: 1]
Lipo Battery Charger Module 3.7v Step Up to 5v [Qty: 1]
BreadBoard [Qty: 1]
Jumper Wires Pack - M/M [Qty: 1]
Jumper Wires Pack - M/F [Qty: 2]
Male Headers Pack- Break-Away [Qty: 1]
Flex Sensor [Qty: 5]
Challenges I ran into
Electronics or any kind of hardware is not currently available from where I come, stores and shops are closed and even Amazon is not able to deliver any as my city is in a complete lockdown and is having the worst COVID condition in the world right now. Without having the hardware, I had to sketch out all the circuit diagrams and models and create a simulation.
Accomplishments that I'm proud of
I am proud that this device could bring a change in the world and can help people who cannot speak for themselves. The circuit that I have built and the code I've written are working properly. This device shall help individuals have full participation in society, the same as people without disabilities.
What I learned
Pixel Art
Tinkercad Simulation
3D modelling
What's next for Express
I am looking forward to using the prizes that I win from this hackathon to 3D print this device and buy the hardware to have a working prototype. I shall test this with people with mutism, get their feedback, and work on improving it. I know this is not a perfect device, but this is not the final version either. I am sure I will be able to take this project forward by getting more people interested and taking it to the market.
This device shall further promote inclusion among our community by:
Getting fair treatment from others (nondiscrimination);
Making products, communications, and the physical environment more usable by as many people as possible (universal design);
Modifying items, procedures, or systems to enable a person with a disability to use them to the maximum extent possible (reasonable accommodations); and
Eliminating the belief that people with disabilities are unhealthy or less capable of doing things (stigma, stereotypes).
Built With
arduino
blender
embedded-c
esp32
pixelart
tinkercad
Try it out
github.com | Community Service - Express | This device would let people with mutism to express themselves quickly like never before. Wearing this device will give them back the ability to talk to everyone. | ['Prasad Kumkar'] | [] | ['arduino', 'blender', 'embedded-c', 'esp32', 'pixelart', 'tinkercad'] | 26 |
10,572 | https://devpost.com/software/library-management-system-huk5c2 | Inspiration
I got inspiration to design a modern-looking, dark-themed library management system in the Java Swing framework, since there are many existing ones but there isn't one like mine.
What it does
Features ⚙️
A draggable, undecorated JFrame with a drop-shadow effect.
A login panel with signup and forgot-password options with a security question.
Add Book and Add Student panels with auto-generated IDs to add a new book and register a new student in the database.
A book can be issued to a student with a given student ID and book ID using the Issue Book panel.
An issued book can be returned for the student ID it was issued to.
The Statistics panel has tables of all the issued and returned book data.
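The issue/return rules above can be modeled in a few lines. This is a hedged Python stand-in (the real application is Java Swing and persists these records in MySQL via JDBC); the in-memory dict and function names are illustrative:

```python
# Illustrative model of the issue/return flow: a book may be issued to one
# student at a time, and only that student can return it.

issued = {}  # book_id -> student_id

def issue_book(book_id, student_id):
    """Issue a book to a student, refusing double issues."""
    if book_id in issued:
        raise ValueError("book already issued")
    issued[book_id] = student_id

def return_book(book_id, student_id):
    """Return a book, checking it was issued to this student."""
    if issued.get(book_id) != student_id:
        raise ValueError("book not issued to this student")
    del issued[book_id]
```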
How I built it
Tools & Technologies used 🎭
Java Swing + AWT
JDBC API
MySQL database (SQLyog GUI client)
Flatlaf Look & Feel
Netbeans IDE
Pichon icon8 icon pack
rs2xml jar
Challenges I ran into
I faced many small challenges, like how to pass dynamic data from the database to the GUI without using complex SQL queries, etc.
What's next for Library Management System
Maybe add more options to it and make it a Java Web Start application.
Prerequisites ✔️
A minimum JRE version of 8 for running the application.
MySQL should be installed on your system with the tables given in the SQL file of the repository.
Built With
api
awt
client)
database
feel
flatlaf
gui
icon
icon8
ide
java
jdbc
look
msql
netbeans
pack
pichon
rs2xml
sqlyog
swing
Try it out
github.com | Library Management System | A 🌑 dark themed library management 🖥️ desktop application with modern look and feel. | ['Ashutosh Tripathi'] | [] | ['api', 'awt', 'client)', 'database', 'feel', 'flatlaf', 'gui', 'icon', 'icon8', 'ide', 'java', 'jdbc', 'look', 'msql', 'netbeans', 'pack', 'pichon', 'rs2xml', 'sqlyog', 'swing'] | 27 |
10,572 | https://devpost.com/software/brains-storm | Brains storms logo
NLP used for categorization for idea input
Inspiration
As the pandemic continues and as we are pulled farther apart from human interaction, there are many downsides; however, people are collaborating more than ever with people all around the world, attending meetings and sessions they could have never done otherwise, and collaborating in ways that once seemed impossible, all from the comfort of their own homes. We wanted to foster online collaborations that may have dulled due to difficulty visualizing as a group, lack of physical collaboration, slow typing skills, and anything else that may be making online brainstorming worse.
What it does
Our web app lets users focus on coming up with ideas and brainstorming rather than the little details like taking meeting notes or creating visual graphics to explain ideas. Our app allows users to quickly jot down ideas with a simple click, which will convert their speech to text and be placed accordingly. Using NLP, the idea will be categorized into categories that best suit the main points of the idea, and an image of the idea will display. The user can continue to speak into the site and branch out their ideas visually as they wish. Although we focused on brainstorming being the main use case, this app can be used for a variety of reasons, such as keeping track of thoughts with a cluttered mind, listing and categorizing ideas when preparing a speech, discussing political topics, listing the pros and cons to an idea, and the list goes on.
How we built it
We used Microsoft Azure's speech to text API, choosing it primarily because it was offered in client-side Javascript, for the transcription of the user's thoughts. We used Google Cloud NLP API to categorize the user's words and Google Cloud Platform to host our site. Additionally, we used HTML, CSS, Javascript, and Node.js.
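The categorization step could be approximated as below. This is a toy Python stand-in for illustration only; the real app calls the Google Cloud Natural Language API from JavaScript, and the categories and keywords here are invented:

```python
# Bucket a transcribed idea by keyword overlap, as a rough stand-in for the
# Cloud NLP classification step.

CATEGORY_KEYWORDS = {
    "marketing": {"audience", "brand", "campaign"},
    "product": {"feature", "prototype", "design"},
}

def categorize(idea):
    """Pick the category whose keywords overlap the idea's words most."""
    words = set(idea.lower().split())
    scores = {c: len(words & kw) for c, kw in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorized"
```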
Challenges we ran into
It was our first time using certain libraries and APIs, so it was initially difficult navigating through the new concepts.
Accomplishments that we're proud of
We are proud of spinning out a functional app that we were all excited about!
What we learned
We learned about APIs and libraries.
What's next for Brains Storm
We plan on creating functionality to automize brainstorming further and to make brainstorming a more hands-off experience and more focused on ideation. This would be done by allowing speech-to-text capabilities to run for as long as the user would like, while the website captures and notes ideas whenever it seems appropriate and then automatically connects ideas that seem to be related. We would also like to make it multi-user friendly, so that multiple people can work on the same "mind map" at once. We would also like to look into generating these maps in realtime.
Built With
azure
css
google-cloud
html
javascript
natural-language-processing
node.js
speech-to-text
Try it out
github.com
34.121.43.100
brainsstorms.tech | Brains Storm | ML/AI +Communication Tracks | Focus on the thinking. Let us do the rest. | ['Sai Vamsi Alisetti', 'Oran C', 'Mythili Karra', 'pWr1ght Wright'] | [] | ['azure', 'css', 'google-cloud', 'html', 'javascript', 'natural-language-processing', 'node.js', 'speech-to-text'] | 28 |
10,572 | https://devpost.com/software/ambulance-vitals | Inspiration
We saw an ambulance drive by one of our houses.
What it does
It generates a random number for each variable (normally these would be input from sensors on instruments in the ambulance), and then evaluates 6 criteria that determine one's wellbeing.
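A minimal sketch of that behavior is below. This is a hedged Python stand-in (the project itself is Java), and the six vitals and their normal ranges are illustrative assumptions, not medical reference values:

```python
# Randomize each vital (a stand-in for sensor input) and flag values outside
# an assumed normal range.
import random

NORMAL_RANGES = {
    "heart_rate": (60, 100),      # bpm
    "systolic_bp": (90, 120),     # mmHg
    "diastolic_bp": (60, 80),     # mmHg
    "resp_rate": (12, 20),        # breaths/min
    "temperature": (36.1, 37.2),  # deg C
    "spo2": (95, 100),            # %
}

def random_vitals(rng):
    """Generate one reading per vital, possibly out of range."""
    return {k: rng.uniform(lo * 0.8, hi * 1.2)
            for k, (lo, hi) in NORMAL_RANGES.items()}

def evaluate(vitals):
    """Return the names of vitals outside their normal range."""
    return [k for k, v in vitals.items()
            if not (NORMAL_RANGES[k][0] <= v <= NORMAL_RANGES[k][1])]
```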
How we built it
We coded in codeshare.io and then compiled in jGrasp.
Challenges we ran into
We are both beginners; having yet to even use a method in Java aside from main, we had to use the tools that we knew in order to make our project.
Accomplishments that we're proud of
We made a working generator for medical situations to display the vitality of the patient.
What we learned
We learned a lot about teamwork, as well as how to code in Java.
What's next for Ambulance Vitals
As we grow more advanced, I imagine we will add external inputs that allow a machine to input sensor data and determine conclusions based on that data.
Built With
java
Try it out
repl.it | Ambulance Vitals | A person with unknown vitals is being transported in an ambulance. This program randomizes and calculates the vitals of the person and attempts a diagnosis. | ['Brian Bippert', "Nicholas D'Imperio"] | [] | ['java'] | 29 |
10,575 | https://devpost.com/software/learn-it-k96fue | Learn, Inspire, Create
Inspiration
I was thinking about creating an app where students can learn by visualizing concepts. Snap Kit gave me some hope, and I got started with this project. Since COVID started, students can't go to school and visually see and learn the topics in their syllabus, so I thought of creating an app like this.
What it does
Learn it helps students visually see and learn.
Let's say they are learning about the Earth, how it rotates and revolves. A Snap Lens will show them how all of this works: how the Earth revolves around the Sun and rotates, and how we get day and night. Like this, we can add a lens for every subject, and students can then go for a quiz, which will mark their progress in academics and in their life too.
How I built it
I started designing the UI for the app in Adobe XD.
After creating some UI designs and finalizing the color palette, I started making the UI in Android Studio.
Simultaneously, I added functionality to the app while thinking of new topics to add.
Creating 3D models for the Earth and Sun.
Exporting and importing the 3D models between Blender and Lens Studio.
Connecting the lens with Android Studio.
A profile section was included in the app and connected with Firebase.
Adding a quiz section so that students can learn and also see their progress.
Challenges I ran into
College hours: 6-7 hours a day are given to college. 😅
Getting the Snap Lens ID unlocked permanently took time.
College suddenly started exams, so some planned features haven't been added.
Accomplishments that I'm proud of
Adding lens in app.
Connecting app with Snapkit
Creating an app which helps kids grow their minds. Be creative!
Being a participant and successfully submitting a project in this hackathon
What I learned
How to add a lens into an Android app
Firebase integration
Got more into Kotlin
What's next for Learn it
Adding stats, i.e. daily streaks and medals after each milestone
Friends, i.e. adding a followers-and-following feature to compete with friends
Teachers adding notes via broadcast to all students
Built With
android
android-studio
kotlin
snapchat
xml
Try it out
github.com | Learn it | Learn, Inspire, create | ['Dishant Gandhi'] | ['Best Use of Dynamic Lenses'] | ['android', 'android-studio', 'kotlin', 'snapchat', 'xml'] | 0 |
10,575 | https://devpost.com/software/happy-4mqxcb | Share Good Habits With Friends
Motivate Each Other
Learn New Habits
Make Your Success Visible
Share Your Activity
Choose From Over 50 Good Habits
Inspiration
Studies have shown that you develop good new habits faster if you share them with friends. This is exactly what the HAPPY app is for.
What it does
HAPPY is an app that lets you share good habits with friends. The app helps you to live happier.
Whenever you do something good for yourself, you share this experience with HAPPY. In doing so, you motivate your friends, and they encourage you in return.
Share at HAPPY, if you are
• drinking water
• going for a jog
• reading a book
• meditating
• abstaining from alcohol
• doing yoga
• getting up early
There are over 50 good habits available for you to choose from in HAPPY!
How it works:
• In the app, click on the good habit you are currently working on. Your friends will see what habit you’re developing thus motivating them to get into the same good habits.
• On your map, you can see all your friends who are currently working on developing new habits. You can motivate each other and even meet in the real world. Together you will learn good new habits.
• The app keeps track of your progress on each habit you’re working on.
• If you like, you can upload a photo of your activity and share it with your friends. It’ll provide an additional motivation for them.
• Let your friends inspire you. Exchange encouraging messages through HAPPY and help each other develop healthy habits.
• Find new friends who also want to develop new habits. Exchange messages with people who are just like you!
How we built it
Going Native for the apps, using Firebase for the backend.
Challenges we ran into
Build the software so it scales well even with millions of potential users.
Accomplishments that we're proud of
Proud of the image generator that creates the content that gets shared to Snapchat. Also, the ability to run many A/B tests easily.
What we learned
What's next for HAPPY - with friends
The following points are on our roadmap:
• Adding Storykit as soon as we get access to it
• Adding Camerakit as soon as we get access to it
• Improved integration of the Bitmoji Kit in the chat so that users can choose a Bitmoji matching their chat message
Built With
bitmoji
creativekit
snapchat
Try it out
apps.apple.com
play.google.com | HAPPY - with friends | HAPPY is an app that lets you share good habits with friends. The app helps you to live happier. Whenever you do something good for yourself, you share this experience with HAPPY. | ['Daniel Minini', 'Hanno Weiß'] | ['Best Use of Bitmoji'] | ['bitmoji', 'creativekit', 'snapchat'] | 1 |
10,575 | https://devpost.com/software/botmoji-458h0y | Default
Login
Hello
What's up
Have some fun
It also understands humor!
Inspiration
The application was inspired by the idea that one day chatbots could be actual companions and have human-like characteristics. It's a fun way to interact with a chatbot that has your Bitmoji avatar as its face.
What it does
It is a chatbot that recognizes the "feeling" of the current conversation and responds accordingly with a customised Bitmoji graphic. The graphic changes as the chat goes on, and the bot learns how to adapt to conversations.
How I built it
I built the application using React for the frontend. On the backend, it has a Rasa Server for handling the chatbot conversations. Apart from that, the application is deployed on a Google Compute Engine instance.
Challenges I ran into
The biggest challenge was creating the data and training a chatbot on the rasa system to give out sustainable outputs. Also, a big issue was deployment, which had a lot of problems in terms of SSL routing, reverse proxying, connecting to instances, etc.
Accomplishments that I'm proud of
What I'm most proud of is the way the application has come out. I clearly was not expecting the chatbot to respond so well, or the Bitmoji responses to be so much fun. Also, that even being tight on deadlines, I managed to get a demonstrable application up and running seems like a huge achievement to me *phew.
What I learned
I learnt how to train chatbots and how to work with the Snapkit SDK. I learnt how Bitmojis are identified and used. I also learnt how to deploy virtual assistants on the cloud.
What's next for BotMoji
A few things that I have on my mind are,
Having voice enabled conversation
Fluidic and smooth transitions
GIF reactions
Animated 3D bitmojis
Note: The live version has a sample user already defined, as the app is not published on the Snapkit SDK and is still in the development phase (although this can be changed in the config file during local development).
Github URL:
https://github.com/Vedant1202/botmoji
Built With
javascript
python
rasa
react
snapkit
Try it out
app.botmoji.tk | BotMoji | A chatbot with bitmoji reactions | ['Vedant Nandoskar'] | ['Runner Up - Best Use of Bitmoji'] | ['javascript', 'python', 'rasa', 'react', 'snapkit'] | 2 |
10,575 | https://devpost.com/software/fleek-51zfs7 | GIF
fleek x Snap
Personalized Shopping
Dark and Light Modes for Any Mood
Available on iPad
Our #fleekfriend Anushka Sapra with an item she got on Fleek
Snap Sticker Share
Inspiration
In class, we noticed girls casually browsing through clothes on their laptops. We found that the digital shopping experience, whether on mobile or laptop, is overwhelming with generic catalogs and inconsistent experiences. We wanted to create an experience that was easily digestible across a shopper’s favorite brands. We saw an opportunity and created Fleek’s mobile app, with full-bleed images, multiple brands, and a Tinder-like swiping interface.
What it does
Today, we are Tinder for fashion. Our app creates a shopping experience focused on fashion discovery using AI. Simply by swiping, Fleek finds and recommends the latest fashion for your style. You can filter items, save and share favorites, and purchase them.
How we built it
Fleek is a native mobile app, built for iOS using Swift. My co-founder Cyp and I mock up designs using Figma and I subsequently implement them using the standard UIKit framework with Autolayout. My co-founder Kian has built out a Spark streaming backend infrastructure and a content based recommendation model. We pull new items from our affiliate partners on a daily basis and run recommendations hourly. On the app, I pull the clothing items for a specific user with a get request which is hooked up to our backend database.
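A rough sketch of that fetch-and-decode step, in Python for brevity (the endpoint path and item field names here are assumptions for illustration, not Fleek's actual API):

```python
import json
from urllib.request import urlopen

def parse_items(raw_json):
    """Decode the backend's JSON payload, keeping only items that
    have the fields a swipe deck would need (illustrative filter)."""
    return [it for it in json.loads(raw_json)
            if "id" in it and "image_url" in it]

def fetch_items(base_url, user_id):
    """GET a user's current recommendations from the backend."""
    with urlopen(f"{base_url}/users/{user_id}/items") as resp:
        return parse_items(resp.read())
```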
Challenges we ran into
Our first challenge was figuring out how to get content on the app. We did some research and found out about 3rd party affiliate programs such as CJ and Rakuten. We applied to become affiliate partners with brands suited to our target demographic and learned how to use their APIs to pull clothing items for our app.
I realized that people love to share items with their friends and that our app had an organically viral aspect to it. However, if someone shared an item with a user, a generic link would be shared and it would redirect the user to the clothing item in its respective brand's website. There was no link back to us and we wanted to change that. I dug into universal links and created an experience which opens up Fleek with the appropriate item when it is shared with them.
Accomplishments that I'm proud of
Since releasing our updated app on July 24th, we have a total of 572 downloads. Of these new users, 120 are weekly active users. Our average user swipes 130 times, with 50 users having swiped over 1000 times. I'm very proud of these numbers because even though our number of users isn't very high, the usage per user has been great. I'm also very happy that personalized items have been 6 times more likely to be faved than random ones, which means our recommendation model is working great.
I'm simply proud that we were able to put together a fully functional product that could put a smile on someone's face and be useful to them. I'm proud of the attention to detail we put into Fleek's UI. I particularly like the custom view controller animation when tapping on an item from the home page and the heart icon at the bottom right beating when a user swipes right.
What I learned
We are learning so much in the process of building Fleek. Despite having worked on iOS apps for a while, I have picked up a bunch of new technologies I haven't used before. I've learned to use collection view diffable data sources, universal links, context menus, along with many others. I've even expanded outside my comfort zone and done a little bit of web development. I've learned how to host a website on Google Cloud using App Engine and how to write REST API's in Python using Flask.
Apart from the technology, we're continuously learning about our target demographic and how they like to shop. We are constantly asking them questions about their favorite brands or features they'd like on Fleek and gaining a lot of insights in the process.
What's next for Fleek
Fleek has a long way to go. We want to take our product to the next level with a whole lot of new brands and features such as similar items, collections (playlists), search, and a user profile with social and community components. We want Fleek to be people's go to fashion discovery and shopping app.
Our next steps include partnering directly with the brands instead of going through services like Rakuten. That way, we can enable features like in-app checkout which would make buying items on Fleek a lot easier. Moreover, we feel like we can provide fashion brands valuable insights about their clothing items. We envision a future where brands can test the waters with new items before beginning the manufacturing process so they can tailor supply based on what is in demand. This could save them a lot of costs and be good for our environment as there would be less waste.
Alongside that, we want to continue to grow the Fleek community by getting more users, growing our Instagram page, and getting campus ambassadors across universities.
Built With
figma
google-cloud
postgresql
python
swift
tensorflow
uikit
Try it out
apps.apple.com
www.fleekfashion.app
github.com | Fleek | Personalized Fashion Discovery Made Fun | ['Naman Kedia', 'Cyprien Toffa', 'Kian Ghodoussi'] | ['Best Design'] | ['figma', 'google-cloud', 'postgresql', 'python', 'swift', 'tensorflow', 'uikit'] | 3 |
10,575 | https://devpost.com/software/like-my-recent-lmr | Inspiration
I started developing LMR a few months into the Coronavirus pandemic, when feeling socially distant was on my mind almost every day. Yearning for more connection and a greater sense of community online was a big part of my inspiration for this project. People already use social media like Snapchat and Instagram to share their lives and connect with others, even if you can't be physically near them. I created LMR because I wanted to help users share their posts across
What it does
LMR allows you to log into your Instagram and share a post from your feed to Snapchat. This post is imported to Snapchat as a sticker which you can take a picture with and position anywhere on the screen. Once you share this snap, friends can swipe up on it to view the post in Snapchat's native browser without having to go search for it elsewhere.
How I built it
I decided to build this app near the beginning of quarantine so most of the brainstorming and planning I just did on the whiteboard in my room. I started by listing everything I needed to learn/test for this project, the three major components being Swift/Xcode, the Instagram API, and the Snapchat API. Once I felt sure I could work with all three I began developing the app, starting with the simplest possible version and working my way up from there.
Challenges I ran into
I had a tough time working with the Instagram API because they just came out with a new one and didn't provide an SDK like SnapKit that makes it easy for developers.
Accomplishments that I'm proud of
I'm proud that I was able to complete the app on my own, and that I get to share it with my friends very soon.
What I learned
Since this was my first major project in the field of computer science, I had the same realization that I think a lot of young programmers have which is that everything takes about ten times longer than you think. Whether it was getting familiar with an API, learning how to store encrypted data, or finalizing the UI to be compatible with any iOS device, I could never estimate how long each step would take, and often found myself spending days on something I thought I could accomplish in just a few hours.
What's next for Like My Recent (LMR)
LMR is currently being reviewed by Snapchat, and once that goes through I will put it on the App Store and share it with friends. I'm hoping the practical feature it adds to both Snapchat and Instagram will be useful for a variety of users.
Built With
creative-kit
instagram-api
swift
Try it out
github.com | Like My Recent (LMR) | Like My Recent (LMR) is an app that allows you to share your Instagram posts to Snapchat. | ['Peter Marsh'] | ['Runner Up - Best Design'] | ['creative-kit', 'instagram-api', 'swift'] | 4 |
10,575 | https://devpost.com/software/accessible-game-controller | Architecture Diagram
Using it in Call of Duty
Using it in Dark Souls 3
Using it in Fall Guys
Inspiration
People with certain physical disabilities often find themselves at an immediate disadvantage in gaming. There are some amazing people and organizations in the gaming accessibility world that have set out to make that statement less true. People like Bryce Johnson, who created the Xbox Adaptive Controller, or everyone from the Special Effect and Able Gamers charities. They use their time and money to create custom controllers that are fit to a specific user with their own unique situation.
Here's an example of those setups:
You can see the custom buttons on the pad and the headrest as well as the custom joysticks. These types of customized controllers using the XAC let the user make the controller work for them. These are absolutely amazing developments in the accessible gaming world, but we can do more.
Games that are fast paced or just challenging in general still leave an uneven playing field for people with disabilities. For example, I can tap a key or click my mouse drastically faster than the person in the example above can reach off the joystick to hit a button on a pad. I have a required range of motion of 2mm where he has a required range of over 12 inches.
I built Suave Keys to level the playing field, now made even better by Snap Keys! Combine voice input, facial expressions, and gestures to play games the way that works for you!
What it does
SnapKeys + SuaveKeys lets you play games and use software with your voice, gestures, and expressions alongside the usual input of keyboard and mouse.
It acts as a distributed system to allow users to make use of whatever resources they have to connect. Use your voice via any virtual assistant, smart speaker, or voice-enabled app. Then use the Suave Keys snap lens and the Snap Reader app to start using expressions and gestures too!
Here's what it looks like without Snap Keys:
The process is essentially:
User signs into their smart speaker and client app
User speaks to the smart speaker
The request goes to Voicify to add context and routing
Voicify sends the updated request to the SuaveKeys API
The SuaveKeys API sends the processed input to the connected Client apps over websockets
The Client app checks the input phrase against a selected keyboard profile
The profile matches the phrase to a key or a macro
The client app then sends the request over a serial data writer to an Arduino Leonardo
The Arduino then sends USB keyboard commands back to the host computer
The computer executes the action in game
Now here it is with Snap Keys:
Snap Keys acts as an extension of Suave Keys. You launch the lens from the Android or iOS app, then launch the Snap Reader windows client. This client lets you choose an application to monitor such as your android or iPhone, then streams each frame of the application through the processor. Whenever a QR code is found, it will detect the inner-command of the QR code and send that command to Suave Keys. From there, Suave Keys takes over and sends the command down to the user's end-client which checks it against the current game profile and executes the key or macro of keys through the Arduino.
Here's a typical flow once everything is running:
User raises eyebrows
Snap reader detects the brow raise QR code
Snap reader sends brow_raise command to Suave Keys for the authenticated user
Suave Keys sends brow_raise to the end client via websocket
End client sees that "brow_raise" matches with the space bar
End client sends space bar key request to Arduino
Arduino presses space bar
Character jumps in game!
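The profile-matching step in that flow can be sketched like this (Python for brevity; the commands and key names below are made-up examples, not the app's real data):

```python
# A keyboard profile maps incoming commands (spoken phrases, or the
# gesture/expression codes decoded from the lens QR) to a key or macro.
FALL_GUYS_PROFILE = {
    "jump": ["SPACE"],
    "brow_raise": ["SPACE"],      # raised eyebrows also jump
    "grab": ["SHIFT"],
    "dive": ["SPACE", "CTRL"],    # a macro: several keys in sequence
}

def resolve_command(profile, command):
    """Return the key(s) to press for a command, or [] if unmapped."""
    return profile.get(command.lower().strip(), [])
```

Each resolved key would then be forwarded over serial to the Arduino, which presses it as a USB keyboard.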
The app also allows the user to customize their profiles from their phone as well as their desktop client. So if you want to quickly create a new command or macro, you can register it right within the app.
Here's an example of a Fall Guys profile of commands - select a key, give a list of commands, and when you speak them or use the gesture, it works!
You can also add macros to a profile:
How I built it
The Snap Keys apps were built using:
Kotlin for Android
Swift for iOS
C#, .NET, and UWP for the Snap Reader
The lens was built using lens studio combined with custom scripts and assets for the QR codes generated online
While the SuaveKeys API and Authentication layers already existed, we were able to build the client apps to act as a whole new input type.
The most important piece was the Snap Reader Windows app. I made use of the GraphicsCaptureSession library in Windows alongside a Direct3D encoder to take each frame from the stream, process it in memory to a bitmap, then run the bitmap through a ZXing barcode scanner set to scan for QR codes.
Here's the method that is invoked on each frame being processed from the screen stream:
private async void OnFrameArrived(Direct3D11CaptureFramePool sender, object args)
{
    _currentFrame = sender.TryGetNextFrame();
    BarcodeReader reader = new BarcodeReader();
    reader.AutoRotate = true;
    reader.Options.TryHarder = true;
    reader.Options.PureBarcode = false;
    reader.Options.PossibleFormats = new List<BarcodeFormat>();
    reader.Options.PossibleFormats.Add(BarcodeFormat.QR_CODE);
    var bitmap = await SoftwareBitmap.CreateCopyFromSurfaceAsync(_currentFrame.Surface).AsTask();
    var result = reader.Decode(bitmap);
    if (!string.IsNullOrEmpty(result?.Text) && (result.Text.StartsWith("suavekeys|expression") || result.Text.StartsWith("suavekeys|gesture")))
    {
        Debug.WriteLine("WOOHOO WE FOUND A CODE");
        if (!_isSending)
        {
            _isSending = true;
            var command = result.Text.Split('|')[2];
            await _suaveKeysService.SendCommandAsync(command);
            _isSending = false;
        }
    }
    _frameEvent.Set();
}
I added the _isSending lock so that we weren't constantly feeding HTTP requests to SuaveKeys' API on every single frame, since on a decent machine the Snap Reader app can process about 60 fps. That means if you were raising an eyebrow to jump in game, holding your brow up for 1 second, it would send 60 jump requests to the game. This acted as a safe throttle while still allowing for hold actions.
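The same throttling idea, reduced to a minimal sketch in Python (names are illustrative, not the app's actual code):

```python
class CommandThrottle:
    """Drop frames that arrive while a command is still being sent,
    so a held expression doesn't fire once per processed frame."""

    def __init__(self, send):
        self._send = send        # callable that forwards a command
        self._is_sending = False

    def on_frame(self, command):
        if self._is_sending:
            return False         # dropped: a send is already in flight
        self._is_sending = True
        try:
            self._send(command)
        finally:
            self._is_sending = False
        return True
```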
Challenges I ran into
The biggest challenge was testing while also talking to my chat on stream! Since the whole thing was built live on my twitch channel, I'm always talking to chat about my thought process, what I'm doing, telling jokes, and answering questions. I also talk with a lot of facial expressions naturally, so talking to chat triggered tons of extra mouth, smile, and brow events. Honestly, it ended up being pretty funny though.
Other than that, it just took me an hour or so to really figure out how to use lens studio to its potential. I've done a bit of Unity and Unreal work in the past, so it wasn't too bad.
Accomplishments that I'm proud of
The biggest accomplishment was being able to see it in use! I was able to play games like Call of Duty, Dark Souls, and Fall Guys using my face and gestures! It is far more performant for primary actions than just voice and feels like there is some real potential to use this type of technology or direction to give people more options for how to interact with games and software that works for them.
What I learned
I learned a lot about multi-modality on the input side of conversational AI and commands, and was able to use snapchat to push that to new limits. I also learned tons about how to use lens studio and create some really cool, funny, and innovative lenses that people will hopefully love ♥
What's next for Snap Keys
Tons of stuff! For Snap Keys:
More gestures and expressions
Making the lens look a lot better
Tweaking performance
Then within just Suave Keys:
Making the UI a WHOLE lot cleaner and easier to use
Enabling more platforms to help more people use it
Distributing hardware creation to let people actually use it
Adding more device support for the XAC
Building shareable game profiles
I'm working on it twice a week on stream, so we are always making tons of progress :)
Conclusion
I think Suave and Snap Keys has the chance to enable so many more people to play games that they never could before using whatever they have available to them!
Resources:
Suave Keys:
https://github.com/SuavePirate/SuaveKeys
Snap Keys and Reader:
https://github.com/SuavePirate/SuaveKeys.SnapReader
Lens:
This entire project was built live on twitch! Check out all the build vods and join in the future for other awesome builds at
https://twitch.tv/suave_pirate
Built With
android
arduino
azure
c#
dotnet
ios
kotlin
swift
uwp
Try it out
github.com
github.com | Snap Keys | Turn your snap lens into a controller for PC games using the Suave Keys lens! Use hand gestures and facial expressions to control your keyboard in games like FallGuys, Call of Duty, and Dark Souls. | ['Alex Dunn'] | ['Innovation Award', 'Pitch Contest Winner'] | ['android', 'arduino', 'azure', 'c#', 'dotnet', 'ios', 'kotlin', 'swift', 'uwp'] | 5 |
10,575 | https://devpost.com/software/mercnmore-snap-integration | One stop solution to buy button pins!
Try out your pins on Snapchat before buying them, or use them to represent yourself!
Moment Badges to store all of your moments in one place! Scan these QR codes and check out your moments (AKA Virtual Album).
Share your moments on Snapchat!
Inspiration with Founding Story
We believe button pins are one of the coolest and most affordable ways to show the world what you represent and what you believe in. We started a facebook page to sell cool and customizable pins, and within a year we sold over 10,000 pins. That basically motivated us to expand it with a mobile application 📱 where customers can browse through our collections, create their own pins and do a lot more with button pins.
Talking about our inspiration for the Snapkit Integration? Well, most of our targeted users are Gen Z, and Snapchat is the language they speak. 💁♀️
What it does
Core of our application is to let people represent what they believe in. And we are doing it by letting customers buy button pins that represents them. We work with a lot of local communities, schools etc and design button pins that our customers can proudly wear. ✊
Unfortunately, due to the unforeseen pandemic, we are seeing that a lot of our customers aren't able to wear the pins on their clothes or bags in their day-to-day lives. So we were thinking: what would be the best way for Gen Z to express what they believe in? And the answer was, of course, Snapchat!! 🕶️
Our integration with Snapchat let them share the pins they are proud of as a Snapchat story so they can keep representing everything they are proud of even while staying at home. 🏠
How I built it
In general, our entire application is built on react-native, and the backend is on Play Scala, hosted on AWS EC2 instances for high scalability and availability.
Snap Integration
We built it by integrating the Snap SDK into our platform. We created an iOS bridge for react-native with Objective-C and an Android bridge with Java.
On the backend, we added a new Snap dedicated end-point to support sticker and background image generation based on our internal productId. In order to create stickers as fast as possible, we also integrated a caching layer to give seamless customer experience.
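The caching idea might look roughly like this (a Python sketch; the function names are hypothetical, not mercNmore's actual code):

```python
# Cache generated sticker images by productId so repeat shares of the
# same pin skip regeneration entirely.
_sticker_cache = {}

def get_sticker(product_id, render):
    """Return the cached sticker for product_id, rendering it on a miss."""
    if product_id not in _sticker_cache:
        _sticker_cache[product_id] = render(product_id)
    return _sticker_cache[product_id]
```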
Challenges I ran into
We had product challenges in finding the coolest way to integrate Snapkit on our platform. We went through a lot of iterations with the different Snap kits: where to put those integrations, what to call those buttons, and how it would all align with our product in general. We also reached out to a few of our power customers. After data, some feedback, and a gut feeling, we ended up integrating Creativekit for mercNmore Products and Moments.
Talking about the engineering challenge: our application is built with react-native. Since the Snap SDK is available only for native iOS and Android, one of the biggest technical challenges we had was to build a react-native bridge that could support the native SDK in our react-native application. Learning a little bit of Objective-C, some great tutorials on Medium, and help from the super awesome GitHub community resulted in a successful integration.
Accomplishments that We're proud of
Our goal is to let our customers represent what they are proud of. Some of our Accomplishments that gives us goosebumps:
Selling over 10,000 button pins using Facebook Page 📈
Recently moving our Facebook page to a full fledge application and launching it during pandemic 😷
Getting Amazon Active Credits for starting and surviving business during pandemic 🙏
Integrating Snap experience so our users can keep representing what they love while staying safe at home 💛
What I learned
We launched our business during the pandemic, and the biggest thing we learned is to have faith during uncertain times, find a way out and, most importantly, enjoy the process. 🙌
What's next for mercNmore Snapkit Integration
We have many exciting and innovative plans with Snapkit. Next we are planning to do integration with "App Stories" so our amazing customers can link their Snapchat account with mercNmore and just take a Snap and we'll instantly print a button pin and mail it to them! 📸
We also have plans to go big with our deliveries and to go global from India only once things start getting back to normal. 🗺️
Built With
amazon-web-services
android
ios
react-native
scala
Try it out
apps.apple.com
play.google.com | mercNmore Snapkit Integration | Buying button pin just got easy! Do it with Stickers, Filters and Snaps! If we are speaking your language, check us out. | ['Vihas Shah'] | ['Runner Up - Innovation Award'] | ['amazon-web-services', 'android', 'ios', 'react-native', 'scala'] | 6 |
10,575 | https://devpost.com/software/swirl-story-challenges | Inspiration
I'd heard about Snap Kit when it first launched, and kept seeing new, cool apps pop up that were using the platform. Apps like Yolo, Hoop, and Trash. At the same time, I think the Snap Kit SDK is really underused by most apps, so I thought there was a lot of room for something new and interesting.
When quarantine started and I found myself with a lot of extra time, I decided to brainstorm ideas for a Snap Kit app. Around that same time, story challenges started blowing up on social media. They were dumb but fun challenges, like posting your favorite song, drawing a carrot, or doing 10 pushups. I decided I could supercharge this trend by building a dedicated app for this type of content, with seamless sharing right to your story.
What it does
Swirl lets you share fun challenges to your stories and tag friends. It has dozens of original templates about memes, movies, music, art, games – there's no theme or limits to the types of content on there. Just find a template, share it to your Snap or Insta story, and tag friends to keep the challenge going!
How I built it
I didn't know a thing about iOS development before I built this, so I first took a few weeks to learn all I could about the app dev process. Once I felt competent, I started working on the app on my own. It took me a few months to build everything and knock out all the bugs. I mainly used swift, with firebase as a database.
Challenges I ran into
Development challenges: Fetching images, memory leaks, all of UIKit. (I wrote a blog post about my experience with the entire development process)
Marketing challenges: I had to rename my app right before launching due to a copyright issue, writing good marketing copy was surprisingly really hard. I'm still learning how to promote my app on social media.
Motivational challenges: Staying productive and motivated while working on my own schedule, not getting discouraged by hard bugs.
Accomplishments that I'm proud of
I launched my first app! It’s now on the app store here. Going from knowing nothing to feeling pretty competent in app development in the span of a few months felt really great.
I launched it on Product Hunt and got featured, which was awesome to see! Check it out.
Everything is hard: making software is hard, design is hard, creating a website is hard, marketing is hard, getting users is hard, asking friends for feedback is hard. There's no easy part of building a product. I knew it would be hard going in, but the reality of it is much heavier than I thought.
I also learned how to stay motivated. I doubted myself a lot, whether it was with programming, having the right idea, promotion. I learned when to step back for a bit to get out of a streak of negative thinking and try to ride the streaks of positivity and motivation as long as I could.
What's next for Swirl
Adding more interactivity! Right now, most of the content is static images, but I want to add more interactive and personalized features. I've already started doing that by adding bitmoji stickers. For other new features, think creating/submitting your own content, dynamic lenses, and favoriting templates in the app.
Built With
firebase
snapkit
swift
Try it out
getswirl.app | Swirl - Story Challenges | Swirl is the app for story challenges – low-effort, fun, and social challenges for your Snap stories. Find a template in the app, share it to your story, and tag friends to keep the challenge going! | ['Ben Edelstein'] | ['Best Overall Use of Snap Kit'] | ['firebase', 'snapkit', 'swift'] | 7 |
10,575 | https://devpost.com/software/premo | Inspiration- With the onset of Covid-19 millions of Americans lost their jobs and were often looking for new ways to make money online. Some were creating courses online with teachable, selling music on Patreon, posting content on Onlyfans, etc... However, a lot of these platforms were disjointed from where most users spend their time like Snapchat/Facebook/Twitter. As a person in tech I have app fatigue even the thought of downloading a new app like Tik Tok annoys me. Having to learn a new platform just to enjoy new content from a band I follow, or an artist I like, would likely loose me in the sales funnel. Also creators often gripe about the disjointed processes using platforms like Patreon and teachable. Fans gripe about usability of these platforms. The average person is not an integration specialist. Most people want something that just works well where they can monetize their content easily. So Premo was born. A platform built on top of a familiar app like Snapchat where people can monetize their premium content with ease. after initial setup creators do the same thing they do today when posting to their snap stories, instead when they want to flag exclusive content, they send it to their Premo stories. This is not a new idea I've seen plenty of users offering premium memberships, but most have disjointed processes to try to solve this problem or they are offering a one-time life access fee which hurts the creator long term. Imagine paying for 1 Beyonce CD and having access to her discography forever. That's insane. The innovation here is in the integration and the ease of use for the future workflow.
What it does- Premo is a premium content management app for Snapchat. For creators- Premo handles subscriptions: Allows fans to subscribe for a monthly fee, automated billing, and payouts via an integration with Stripe. For Fans- Premo is a place where they can view exclusive content from their favorite artist, creators, and teachers. Fans can also search for new creators and subscribe to their content.
How I built it- I'm a product guy with a strong background in API design and development. I have a very in-depth understanding of how APIs work and of what I wanted to design and build, but I'm not a developer, so building the actual apps would be a challenge for me. I discussed what I wanted to build with contractors and how I wanted it built. My developer resources were able to assist me with the integrations after I delivered the UI and process flows. Premo is built as two native apps (iOS, Android).
Challenges I ran into- The major challenge I ran into was with the initial flow of how I wanted Premo to work with Snapchat. Initially I wanted the app to be as non-intrusive as possible. The initial design was to have users download the app; Premo would create a new story via the API and control the subscription management for that story on the Premo side. When creators posted a promo sticker via Creative Kit, we could capture the GUIDs of fans that wanted to subscribe and assign them to the story. When fans cancelled their subscription, we would remove the reference. For instance, when a creator downloaded the Premo app we would store their Snap user GUID and create the Premo story GUID; then, as fans subscribed, we would associate their Snap user GUID with the Premo story GUID, making it viewable in Snapchat. The creators would only use Premo to manage subscriptions, view earnings, and schedule payouts. This was not possible, so we re-designed Premo to handle all the subscription references in the Premo app and actually show fans the premium content in Premo via Story Kit, while still keeping a pretty easy process flow for creators to post original content to fans. This method also allows us to show the content in our app, so in the future an integration with the Snap ad network could be another way to monetize.
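The subscription bookkeeping described above can be sketched like this (a Python sketch with hypothetical names; Premo's real storage layer is not described in detail here):

```python
class SubscriptionIndex:
    """Fan GUIDs associated with a story GUID: added on subscribe,
    removed on cancel, checked before showing premium content."""

    def __init__(self):
        self._fans_by_story = {}

    def subscribe(self, story_guid, fan_guid):
        self._fans_by_story.setdefault(story_guid, set()).add(fan_guid)

    def cancel(self, story_guid, fan_guid):
        self._fans_by_story.get(story_guid, set()).discard(fan_guid)

    def can_view(self, story_guid, fan_guid):
        return fan_guid in self._fans_by_story.get(story_guid, set())
```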
Accomplishments that I'm proud of- I'm proud that we were able to build something that may be a possible source of income for people all over the world. I'm also proud that we were actually able to finish the app to the current state with the storykit. It took some time to work through approval, but this was a key feature to being able to bring the app together. Gervis and Seema were a huge help making this possible.
What I learned- I learned a lot about the snapkit platform and I'm sure I will be creating additional apps in the future for some concepts I think Snap users would enjoy. Maybe not as ambitious as Premo, but still something with great substance. This experience has me thinking about a number of apps that would be great for the Snapchat platform.
What's next for Premo- Next for Premo is public launch. I still have a few items in the app and design I would like to clean up, including additional promotional stickers. Long term, I would like to see Premo grow into a platform for different peer-to-peer monetization offerings, such as paid live classes (a yoga lesson, Spanish lessons, or math tutoring), paid celebrity shout-outs and messages, an integration with Shopify for creators to sell products, and a few other wild ideas. My day job is as an integration product manager, so I'm constantly taking disjointed and siloed software and making it work together to improve workflows. Hopefully Premo will grow and become a platform that helps a lot of people.
Built With
creativekit
figma
hostinger
java
loginkit
storykit
stripe
swift
Try it out
premofans.com | Premo | Premo is a premium content management app for Snapchat. Premo allows Creators to monetize their exclusive content while also providing a management utility for fan subscriptions and monthly renewals. | ['Justin Thomas'] | ['Runner Up - Best Overall Use of Snap Kit'] | ['creativekit', 'figma', 'hostinger', 'java', 'loginkit', 'storykit', 'stripe', 'swift'] | 8 |
10,575 | https://devpost.com/software/touchgram-for-imessage | Inspiration
Hypercard
Built With
sprite-kit
swift
Try it out
www.touchgram.com | Touchgram for iMessage | Touchgram provides interactive messages within Apple Messages, multi-page experiences reacting to touch. This submission adds Bitmoji stickers to these experiences. | ['Andy Dent'] | [] | ['sprite-kit', 'swift'] | 9 |
10,575 | https://devpost.com/software/adopt-your-new-best-friend | Adopt!
Inspiration
Animals have always had a soft spot in my heart and I've never been able to turn down an animal in need of a home (which is why we currently have 5 dogs and 3 cats). As much as I would love to save them all personally I know it's not feasible. Breeding and spending thousands of dollars is unnecessary when there are so many wonderful and adorable pets waiting to be adopted.
What it does
Showcases 1 to 2 dogs that are available for adoption with contact information of shelter.
How I built it
Using the Lens Studio tools for images and text. I used Canva to create some of the images.
Challenges I ran into
None really. I've made several of these filters and had a decent idea of what I was doing.
Accomplishments that I'm proud of
The idea of helping put animals out there that are available for adoption on the Snapchat platform.
What I learned
I've improved my skills on using the Behavior function.
What's next for Adopt Your New Best Friend!
I'd like to reach out to more shelters to use this platform and acquire more sponsors for paid ads.
Built With
none | Adopt Your New Best Friend! | Help share the dogs available for adoption at local animal shelters. One to two dogs will be highlighted on a regular basis (daily or weekly). Businesses are able to sponsor posts. | ['TashaPenwellHC Penwell'] | [] | ['none'] | 10 |
10,575 | https://devpost.com/software/virtual-studio-rq21a0 | Main Screen
Menu
Setting
Inspiration
During this post-pandemic period, many trainers and performers hold their classes or shows on social media. So we created an app called Virtual Studio to help them make virtual avatar shows quickly.
The app has Android, PC and Mac versions. More info can be found here.
https://softmindtech.wixsite.com/vtuber
What it does
Users can control the avatar's head with their webcam and some motions with hotkeys. Users can also control the avatar's lip movement with their microphone. They can record video during their shows and insert different media (e.g. photos, videos) during capture.
There are also pre-built 3D props and VFX effects in the app, which users can drag in as needed during capture. The microphone and all other sound effects are captured in the production video.
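The microphone-driven lip movement described above can be sketched, language-agnostically, as mapping the loudness of each audio frame to a mouth-openness value for the avatar. The app itself is built in Unity; the function names and thresholds below are illustrative assumptions, not Virtual Studio's actual code.

```python
# Illustrative sketch: drive avatar lip movement from microphone loudness.
# RMS amplitude of each audio frame is mapped to a clamped 0..1 openness.
import math

def rms(samples):
    """Root-mean-square loudness of one audio frame (samples in -1..1)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mouth_openness(samples, noise_floor=0.02, full_open=0.3):
    """Map frame loudness to 0..1 jaw openness; ignore background noise."""
    level = rms(samples)
    if level <= noise_floor:
        return 0.0  # silence (or noise below the floor) keeps the mouth closed
    return min(1.0, (level - noise_floor) / (full_open - noise_floor))
```

Each frame's openness value would then be applied to the avatar's jaw blend shape every update tick.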
How I built it
I am using Unity and different other plugins to make all software functions.
Challenges I ran into
Real-time face detection was quite difficult to implement in the app. I worked hard to figure out the correct solution.
What's next for Virtual Studio
To import Bitmoji 3D avatar from Snapchat
To let user customise their virtual avatar, import their VRM model (3D humanoid avatar format) and import 3D models from Sketchfab
Built With
unity
Try it out
smarturl.it | Virtual Studio | During this post-pandemic period, many trainers or performers have their classes or shows in social media. So we create an app called Virtual Studio to help them making virtual avatar show quickly. | ['Tom Tong'] | [] | ['unity'] | 11 |
10,575 | https://devpost.com/software/waho-app |
What's next for WAHO App
Bring Stories to users of my app and build a Community Story
Built With
creative-kit
swift
Try it out
apps.apple.com | WAHO App | Share Stickers and Filter on Snapchat | ['faisal alshalahi'] | [] | ['creative-kit', 'swift'] | 12 |
10,576 | https://devpost.com/software/tidy-mind | Inspiration
It's an important topic and it's not given as much attention as before due to the Covid-19 pandemic. Around 46% of adults in the United States experience mental illness at some point in their lives, which is a big portion of the population. If the issue isn't given much attention there can be negative effects, such as more people getting ill, not getting treated early, and much more. One of our friends suffered from depression and had intentions of attempting suicide numerous times. We were inspired to create a website so that other people who are suffering from the same issues can find a place to confide in and take their mind off problems.
What it does
Our website contains information (guides and tips), a chatbot which responds in an appropriate way after knowing your mood and gives you some suggestions on activities you can do to help yourself cheer up. We also have a game that is like a free painting app, it lets the user create anything and have fun with. It proves that even the little actions can make you feel better inside out.
How I built it
We used Codepen.io for prototyping most of our website, but the final website was built with Brackets, an IDE used for creating websites. The friendly robot was made with ChatBot.com, and for game development we used Scratch.
Challenges I ran into
We all were new to coding, website development, making chatbots, and coding a game. On top of everything being new, we all were attending our first-ever hackathon! Some of us had experience with website making and coding, but some didn't have experience in the required field, which made it hard to assign each other roles and parts to talk about. The time limit was also a significant problem because we all were new.
Accomplishments that I'm proud of
Even with everyone fairly new to coding and website making, we had other skills such as creative thinking, organization, and much more. We were able to communicate and organize everything quickly. Subsequently, we had very few arguments, which were concluded almost instantly, as we always strived for compromises. We all are proud of all the things we have done over the past 36 hours. Looking at our website makes us proud because we were able to make not just a website, but also a simple game and an accurate chatbot as well.
What I learned
Our team learned how to become much more productive and how to proficiently use HTML, CSS, and JavaScript.
What's next for Tidy Mind
Tidy Mind hopes to create a more user-friendly interface and it also hopes to add more highlights and characteristics for the users to experience and use.
Built With
css3
html5
javascript
Try it out
github.com | Tidy Mind | We were inspired to create a website so that other people who are suffering from the same issues can find a place to confide in and take their mind off problems. | ['Saket Sharma', 'Nityant Rathi', 'Navdeep Gill', 'Nikhil Bhutani'] | [] | ['css3', 'html5', 'javascript'] | 0 |
10,576 | https://devpost.com/software/turbletown | Inspiration
We got inspiration from another app called Calm. We wanted to help educate people that are experiencing stress and anxiety daily but do not necessarily have a diagnosed mental health disorder.
What it does
Our website helps to educate and give support to people who might not be sure about their mental state or don't know where to go for help. It gives users access to a community of therapists and people in similar situations to chat online with. Our website has tips and advice for people who are constantly dealing with stress.
How we built it
We built our website using Wix website builder and we used Canva to design a preview of the app version of our website.
Challenges we ran into
We ran into some technical difficulties along the way with some of the features we wanted on our website but we were able to work around them by using different approaches.
Accomplishments that we're proud of
We're proud that we were able to complete the challenge within the 36 hours, we didn't think we would make it.
What we learned
We learned the importance of making an outline before we go straight into a project and of dividing roles evenly amongst our team members.
What's next for TurbleTown
We hope to build a community on the app version of our website where our users can talk with each other. We also hope to have more subscriptions to our newsletter and get some sponsors in the future.
Built With
wix | TurbleTown | Talk about your Turbles! | ['Hargunpreet Kalra', 'Ekjot Juneja'] | [] | ['wix'] | 1 |
10,576 | https://devpost.com/software/mental-health-harihacker-s | Mental Health - Hari Hacker's, #23
N/A
Try it out
sites.google.com | Mental Health - Hari Hacker's, #23 | null | ['Prem Patel', 'Kush Patel'] | [] | [] | 2 |
10,576 | https://devpost.com/software/39-magnum-brain-incarnate-ecosystem | Inspiration
We wanted to create something that was useful, something that didn't exist before. This is an ecosystem that solves many of the problems we personally faced: each health app did only one thing, and there was no way to use data from one in another.
What it does
The apps we have are the run-of-the-mill health apps. The difference is that we are expanding. With numerous apps behind our company name, we have a LOT of data to transfer between apps so that we can truly take everything into account. As many know, everything affects everything else. Sleep affects fitness, and all of it compounds into your mental health. Because of this, we are able to develop lifestyle apps that come together to help you with your mental health. A bad day's rest? We will know about it and do our best to help. Eating too much? We will alert you of the health risks, and other risks you didn't even know about, without you asking. This ecosystem is a lifestyle whereby you can keep track of all aspects of your life and thus, in turn, help the mental health of the average person. All of this information is vital. A meditation app won't always work on its own, and neither will a breathing app, a sleeping app, or a nutrition app; it takes all of them. Without every aspect covered, you cannot truly help yourself.
How I built it
We built it based on ideas. We brainstormed all the things that we thought modern health apps missed, and realized how much this must affect our mental health. It was essentially a black hole in the mental health industry. To build it, we created a mock-up of the core app on Marvel. Using that as a basis, we built a mock-up of the Addiction-In app that was similar in style. Everything was minimalist, because we didn't want clutter in an app meant for the healing of the mind. Afterwards we worked on a few Java programs to see what the system would look like for more complex apps, like the Sleep-In app, which required a database for lots of previous information. This was done, and so was an app that showed the interface of the Screen-Time-related app. These things were the foundation of all our ideas and business plans, from which we brainstormed monetization and pitches, as well as how to market the ecosystem and how to hook people in. From a business standpoint it is complete, really being a full company.
Challenges I ran into
The Java programs had errors that we had to fix in order for the applications to run smoothly as possible.
We also had some issues while using Marvel, and we had to fix these so it functioned to our requirements
Video editing was also a difficult task, we had to work out something so that our video could accurately display the goal of Incarnate
Coordinating all our tasks was much more difficult with the coronavirus pandemic confining us in our homes, but we learned how to do so throughout the project. This will definitely help us in the near future as the pandemic has not yet ended.
Accomplishments that I'm proud of
In terms of Java, we had to learn new methods to code while coding the applications, and this helped increase our knowledge of the coding language.
In terms of Marvel, we had to completely learn it from scratch, so we all gained the basics of a new skill that will surely be beneficial in our futures
In terms of video editing, we were able to better hone our skills, therefore gaining skills that we will use in the future
We were able to improve our skills in working remotely as a team, which will be useful for our upcoming academic challenges.
What I learned
Mental health is more important than people realize, and there should be more awareness about this topic. It is just as important as one's physical health.
We also learned that there are many more factors than people realize that affect one's mental health. Maintaining good mental health may seem complex, but it should be everyone's goal, alongside with maintaining good physical health.
We learned how to improve our efficiency working on a group project in a remote setting, as this year's Spark 2020 Hackathon was different from all others before it. It was the first time that team members did not physically interact with each other, and it was a new and challenging, but rewarding, learning experience
Over the past 36 hours, we have learned quite a bit as a group. We believe that the most important thing we all learned was how to collaborate with one another. When we first met, we got a feel for each other's strengths and weaknesses. This meant we were able to tailor our design process around each other's traits, which was a skill that all of us learned. We also all gained a better understanding of coding. Finally, we learned some more technical skills, such as how to use Marvel and MIT App Inventor, which are both platforms we will use in the future.
What's next for Incarnate: An Ecosystem - Magnum Brain, 39
After we expand our abilities in coding and app development, we will be able to improve our ideas and go far beyond what we are showing today. We will be able to create the first usable prototype of Incarnate: An Ecosystem for testing and possible release to select platforms.
Built With
java
marvel
Try it out
marvelapp.com
marvelapp.com
marvelapp.com
drive.google.com | Incarnate: An Ecosystem - Magnum Brain, 39 | An ecosystem with a hook, a community, a lifestyle. It is an all encompassing ecosystem, that gives you more, the more you use it. | ['Eric Peng'] | [] | ['java', 'marvel'] | 3 |
10,576 | https://devpost.com/software/aceit-smphacks-team-35-ck0do9 | SECOND VIDEO (HACK LINK):
https://www.youtube.com/watch?v=7nkOsmwF258
PLEASE VIEW THIS LINK
Inspiration
Other females in the STEM Field that have worked hard to overcome their daily problems
Problems we as teens face on a daily
What it does
Is a support platform that helps students organize their daily tasks and conquer issues related to school work and stress
Acts as a one-stop-shop for all students to track their day to day activities and manage their schoolwork in an efficient manner
How I built it
Using the software's above, we created a user-friendly website for future students
Challenges I ran into
Decide on an effective solution that falls under the Mental Illness Theme
Accomplishments that I'm proud of
Working together
Overcoming the stress the group faced before establishing a concrete idea
What I learned
Managing our own time wisely and how this app could be of use to us as well
Learnt the basics of HTML
What's next for ACEit - SMPHacks & Team # 35
Reaching out and connecting with others with like mindsets to help us and our idea grow
Built With
adobeaftereffects
adobexd
html
photoshop | ACEit - SMPHacks & Team # 35 | ASPIRE.CHANGE.EXCEL. ACEit is a website extension on the school's BYOD page.The platform was created to help students develop soft skills and encourage stress and anxiety management techniques. | ['prexa18 P'] | [] | ['adobeaftereffects', 'adobexd', 'html', 'photoshop'] | 4 |
10,576 | https://devpost.com/software/calvosa | Introduction Poster
Our Goals
Our Design
Inspiration:
While researching the mental illnesses that affect people the most in the world, the numbers and statistics of food-related disorders shocked us. Just in the US, there are more than 30 million people who suffer from some sort of eating disorder. When we came across these appalling facts, we realized that as a group we had to take some kind of action, whether it was inspiring others or helping those affected. In our modern world, mental illnesses such as eating disorders are not given much attention due to current tragic events (COVID-19). Due to this lack of attention, those who are impacted by different types of disorders are starting to suffer. One of the main issues patients are facing is the lack of communication between them and healthcare professionals. Instead of tackling the issue of resolving mental illness as a whole, we decided to focus on resolving the communication issues we found embedded in curing mental illnesses. Using the inspiration we received from the insufficient attention being given to this issue, Calvosa came to fruition.
What It Does:
Calvosa is an application that is developed with cutting-edge technologies. It is designed to bridge the gap and lack of communication between healthcare professionals and patients who have a food-related disorder. In the app, patients/users can view their assigned dietitian and psychiatrist's contact information such as phone numbers and email addresses. This information is retrieved from their provincial health records. Patients can also use Calvosa to find the nearest hospital and pharmacy to their current location. This feature uses a patient’s phone’s location services to retrieve their location, and use it to find the nearest hospital and pharmacy. Another useful feature that Calvosa possesses is the family contacts information. The patient can store their family members’ or close friends’ contact information whilst setting up the app for the first time. This information can also be edited at a later time by the patient. Furthermore, there is a NEDIC helpline button that the user can tap to contact NEDIC. When the patient needs more information about their condition, or if they have any concerns, they can tap the NEDIC helpline button, which causes a pop-up prompt. The prompt asks for confirmation, and if the patient taps “Yes”, it uses the user’s cellular services to make the phone call. The main feature of Calvosa is the diet planner. Every month, the dietitian on the patient’s Ontario file creates a diet plan for the entire month. The patient can then use the application to view what they must consume for each part of the day. In simpler terms, the user has access to all of their meals for the entire month. Calvosa also includes an emergency call button that can be used when a patient faces a panic attack. When the patient taps the button, it instantly calls their psychiatrist, so that they can seek immediate consultation and assistance. The call is made using the patient’s cellular services.
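The "nearest hospital and pharmacy" feature described above amounts to a nearest-neighbour search by great-circle distance over a list of known locations. A minimal sketch, using the haversine formula with illustrative coordinates, and making no claim about Calvosa's actual implementation:

```python
# Illustrative sketch of a nearest-place lookup from the user's location.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearest(user_lat, user_lon, places):
    """Return the (name, lat, lon) entry closest to the user's location."""
    return min(places, key=lambda p: haversine_km(user_lat, user_lon, p[1], p[2]))
```

With the patient's coordinates from their phone's location services and a list of hospitals or pharmacies, `nearest` picks the closest one.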
How We Built It:
Taking our ideas from a piece of paper and converting them into reality was a difficult, yet possible task. To bring the Calvosa prototype to life, we used an application called "Adobe XD". In Adobe XD, we used many design elements and shapes to create a mood-lifting interface. We considered many things to make a calming and relaxing interface, such as colours and shapes. In the beginning, we researched specific colour palettes that were scientifically proven to relieve stress and uplift an individual's mood. In Adobe XD, we created multiple pages and linked them together to create one large application. We also used many of Apple's SF Pro symbols and fonts to make our application look stylish and authentic at the same time. We worked very hard and long hours to create an attractive user interface. Hundreds of mouse clicks and key presses later, Calvosa came to life. Other than that, our team worked on programming the emergency call button feature using HTML and CSS. We were successful there as well, as our code worked when it was time to test it.
Challenges We Ran Into:
While designing our prototype application, we encountered numerous challenges. The main one was creating the transitions between the application pages. Another challenge we faced was getting the text to be the correct size. If the text was either too big or too small, it ruined the style and look of the entire page. To solve these two major issues, we watched YouTube tutorials and searched for the best tips on creating an application. To solve the transition issue, we watched a few YouTube videos, which helped us understand how Adobe XD creates transitions. After watching the videos, we were able to solve any problems we were facing and got right back to making the application. For the text-sizing issue, we searched for the best text-sizing tips for applications, which allowed us to perfect our application. On the programming side, an image we had programmed into the system was not showing up when it came time to display it on our phone. To resolve this, we published the image online and coded in its web address, which ended up working. Although we ran into many challenges, our diligence and dedication were unbeatable.
Accomplishments That We're Proud Of:
There are many accomplishments that our team at Calvosa is proud of. The main one is an attractive and stylish user interface. We worked very hard to design a colour palette and overall user experience. Another thing we are proud of is our business plan. With our experience at the IBT program, we applied what we have learned, to create an effective master plan. We believe that our business plan will allow us to prosper both financially, and professionally.
What We Learned:
While attending our first-ever hackathon, we learned many things. The major thing we recognized was that no matter how big the obstacle, never give up. We acknowledged that one’s diligence and determination can allow them to succeed in absolutely anything, We also learned that consulting the internet when stuck is one of the best available solutions. Furthermore, we figured that communication within a team is the only way we can move forward together.
What's Next For Calvosa:
In the future, we aspire to turn Calvosa into a fully functioning application available to hundreds of patients and health professionals online. We would love to gain sponsors so that we could constantly release new updates for Calvosa. Our group’s main aspiration is to be able to prosper financially and help many people all over the world.
Built With
adobexd
canva
css3
html5
imovie
sublime-text
Try it out
xd.adobe.com
github.com
drive.google.com
sites.google.com | Calvosa | Help Your Nervosa, With Calvosa | ['Krish Desai', 'Anantjyot Grang', 'LI - 10ZZ - Harold M Brathwaite SS (2482)'] | [] | ['adobexd', 'canva', 'css3', 'html5', 'imovie', 'sublime-text'] | 5 |
10,576 | https://devpost.com/software/lucid-guide | Inspiration
As students of the 21st century, we have seen how prominent mental health has become and how crucial it is to learn about it during this day and age. We all have mental health, and there is a substantial number of people who suffer from mental illnesses as well. It is a condition that cannot be perceived by the human eye, and therefore it is rarely spoken about, and stigma surrounding mental health in its entirety exists. We need to start shedding light on this issue and begin to educate those around us about what exactly mental health and illness are, and how they vary so greatly from person to person. There is not one diagnosis, one treatment, nor one solution to fighting mental illness that is set in stone, and this is what inspires us; we need to change the way our society treats mental health and start to take this issue a lot more seriously. If you don't bandage a wound, you will bleed; similarly, if there are limited resources available to help treat mental illness, then people are on the verge of experiencing very grave consequences. If no one else will step forward, we will take the initiative to make that change.
What it does
Lucid Guide is an app to help people struggling with mental illnesses, something that brings a little peace to their daily lives. Our app allows patients to track, record and analyze every aspect of their mental health, including the ups and downs. As our app carefully curates all the data put into it by our users into graphs and charts, it allows them to keep track of their progress. Lucid Guide is also accessible across many platforms, the most recent addition being Apple Watch Series 3, 4 and 5. This provides convenient access to their data whenever and wherever they may be. The app also prepares resources to guide them in the right direction in terms of their own self-betterment and well-being, designed strictly for the personal needs of the user. We are a service that users can rely on for positive reinforcement at any time.
How we built it
To build our hack, we decided to use the software Adobe Xd. After we decided on the idea of “Lucid Guide” the app, we knew that we had to find a way to showcase our creativity accurately to the judges. By using this software, we have depicted clearly how we pictured the app to look on an iOS device, including all the speciality features that were made specifically for our users that suffer from mental illnesses. Adobe Xd shows our skills in creating professional, yet captivating designs, which is a staple in an everyday used app. Everything from icons, to theme colours were carefully curated by our team of four, to present the best version of the hack idea we’ve dreamed of.
Challenges we ran into
A specific challenge that definitely tested our abilities was the time constraint placed on us. Although the entire essence of the SPARK hackathon is to complete a hack within a short period of time, it really tests the technological and intellectual abilities of a team. That being said, we would like to proudly state that we overcame this hurdle with ease, as our best efforts were made with time to spare. The hack and pitch that we present to the judges consist of our hard work and dedication, which will create tough competition when it comes to the decision making. The combination of our passion for coding and the adrenaline rush provided by the short deadline worked in our favour. One last challenge we faced was technological failures. Examples include mp4 files not providing audio, or something as simple as not knowing how to import something into our software. However, these were, in our opinion, minor flaws, things that can be solved with a quick search on Google. Technological difficulties can't be predicted, but they can prove to be a real test when attempting to resolve them. In all, our team dealt with these challenges with ease and a calm demeanour.
Accomplishments that we're proud of
As a team, we persevered through all challenges that were thrown at us, and managed to create a stunning, innovative and essential app that we genuinely believe will be of use to many globally, aiding them in reaching their full potential and guiding them in the right direction to combat the mental struggles that we as humans face on a day-to-day basis. We are so incredibly proud of the work we have done because we know we are a part of the change that we so desperately need to end mental health stigma and educate society on what mental health and illness truly are.
What we learned
In only 36 hours, we have learned a great deal about each other, the world of technology, and the skills required to collaborate and work efficiently. As a team, we are not very experienced with coding; however, with the help of YouTube videos and our incredible mentors, we learned a great amount about how to operate Adobe software. Second, because it has been months since we have worked together in teams, the hackathon truly exercised our teamwork, collaboration, and leadership skills, and really taught us how to get things done efficiently with an open mind and within an incredibly tight time frame. We also learned a lot about each other, including our working habits and our individual strengths and weaknesses.
What's next for Lucid Guide
Lucid Guide is an idea formed by four 11th graders on the spur of the moment, from start to finish in a total of 36 hours. The bare bones of it were designed and executed; however, there's still more to do to complete this app. We would like to make it accessible to everyone, with all illnesses, mental or physical, so that all of them will have a reliable and secure app to access. We plan to publicize our creation and to increase our user base. Needless to say, Lucid Guide is going to be continued into the far future, as it is the future of mental health.
Built With
adobe-xd
canva
imovie
Try it out
xd.adobe.com | Lucid Guide- Alpha Bytes 21 | The future of mental health awaits! | ['Shreya Gupta', 'Hitanshi Patel', 'vinita kallam', 'Aneri Patel'] | [] | ['adobe-xd', 'canva', 'imovie'] | 6 |
10,576 | https://devpost.com/software/therapy-messenger | Inspiration
The idea was based on wanting to help specialists working in the field of mental health. After looking at several medical treatments, it became clear that the vast majority of treatments required a trained professional to show any effectiveness at all. It was with this in mind that we decided to assist those helping patients instead of helping the patients ourselves.
What it does
Therapy Messenger is a messaging service for therapists to talk to their patients and easily take notes on their conversations. It provides an interface where individual messages can be saved with tags or custom notes attached to them.
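The tagged-message idea can be sketched as a small data model: each saved message can carry therapist tags and a free-form note, and a transcript can be filtered by tag. The real app uses Angular and Firebase; the function and field names below are hypothetical, not the actual schema.

```python
# Hypothetical sketch of Therapy Messenger's tagged-message model.

def annotate(message, tags=None, note=None):
    """Attach therapist tags and an optional free-form note to a message."""
    message.setdefault("tags", []).extend(tags or [])
    if note:
        message["note"] = note
    return message

def by_tag(messages, tag):
    """Filter a conversation transcript to the messages carrying a tag."""
    return [m for m in messages if tag in m.get("tags", [])]
```

A therapist could, for instance, tag a patient's message "sleep" with a follow-up note, then later pull up every "sleep"-tagged message across the conversation.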
How I built it
We used Firebase and Angular for the front end and back end development.
Challenges I ran into
Our group had a very difficult time deciding on an idea to make. Most of us had very little background knowledge on the topic of mental health and so it took many hours of research to come to an idea.
Accomplishments that I'm proud of
What I learned
From the research that we did, we learned a lot about mental health illnesses, their treatments, as well as many problems that caretakers may face. It was also a good refresher on the technologies associated with creating a website.
What's next for Therapy Messenger
We would also like to provide functionality for voice calls since that's where most therapy counseling sessions have been held recently due to covid.
Built With
angular.js
firebase
node.js
Try it out
github.com | Therapy Messenger - Team Figit, 44 | Therapy Messenger is a messaging service used for therapists to talk to their patients and easily take notes on their conversations. | ['jonathan huo', 'Taehoon Kim', 'Adeeb Mahmud'] | [] | ['angular.js', 'firebase', 'node.js'] | 7 |
10,576 | https://devpost.com/software/palette-37pnxb | Inspiration
Quarantine has been a tough time for everyone. Feeling stressed with nothing to do? Worry no more: Team Swag Hacks brings you Palette.
Stuck in this quarantine, our team tried everything to stay occupied and stress-free; however, with all the news and the excessive workload, it was becoming difficult. Having to limit outside interaction, it became very stressful having nothing fun to do. This led us to research ways to keep a clear mind, and we came upon art therapy. Art therapy has been shown to help users relieve pain, stress, and anxiety. Our team wanted to spread this form of therapy and motivate others to try it out and see the wonders. That is why we have created Palette, an app which shines a light on art therapy. Whether that be drawing stick figures, shapes, doodles, or sketches and paintings, Palette caters to all!
What it does
The app acts as a platform for users to browse artwork, learn artistic skills live from various professionals, and participate in amazing challenges. Users are able to browse tutorials for different art techniques and learn them live from various professionals or like-minded individuals. To inspire and motivate users toward art therapy, Palette offers competitions from various companies with great awards and perks as well. Users can choose to participate in these competitions or just draw and design for fun. Once a user has finished their piece, they also receive points for uploading and sharing their work, which they can use for discounts on purchasing art pieces. A user may even decide to list their art piece for sale, or just upload it to get feedback and network with like-minded artists.
How We built it
The app prototype was built in Adobe XD, software designed to help map out mobile applications. You can view the live prototype through the links below. The backend code for uploading images, creating accounts, leveling up, and post statistics was written in Python, using the PyCharm IDE.
Our code consists of six implementations of Palette's features. First, we can create new accounts and store them in organized JSON files on a web server; for now, all demonstrations are shown on a local database. To make a new account, we simply run the command with three values: the username for the account, the email, and the password. This creates a dedicated folder for each user that is added, containing their basic user data. Second, there is the posting feature, which takes three values: the username of the account that is posting, the path of the media to be posted, and the caption of the post. Each post's media is backed up to the database, along with the statistics for the post itself (likes and comments). Each post also has a unique ID to help identify it elsewhere in the program. Next, there are functions to store information on an account's followers and who they are following; these statistics are stored along with the rest of each user's data for easy and efficient access from the app. Lastly, there are the like and comment functions. These are stored individually with every post and work by scanning every post to find the one matching the unique ID of the post being interacted with. The likes (and who liked the media) are stored along with the comments and respective commenters safely in our database.
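The repository isn't reproduced here, so the following is a minimal Python sketch of the JSON-per-user storage scheme described above; the paths, field names, and function names are illustrative assumptions, not the team's actual code.

```python
import json
import os
import tempfile
import uuid

DB_ROOT = tempfile.mkdtemp()  # stands in for the web-server database

def create_account(username, email, password):
    """Create a dedicated folder per user holding their basic data as JSON."""
    user_dir = os.path.join(DB_ROOT, username)
    os.makedirs(user_dir, exist_ok=True)
    with open(os.path.join(user_dir, "profile.json"), "w") as f:
        # A production app should hash the password rather than store it raw.
        json.dump({"username": username, "email": email, "password": password,
                   "followers": [], "following": []}, f)
    return user_dir

def create_post(username, media_path, caption):
    """Store a post with a unique ID plus its like/comment statistics."""
    post_id = uuid.uuid4().hex
    post = {"id": post_id, "media": media_path, "caption": caption,
            "likes": [], "comments": []}
    with open(os.path.join(DB_ROOT, username, f"post_{post_id}.json"), "w") as f:
        json.dump(post, f)
    return post_id

def like_post(username, post_id, liker):
    """Find the post with the matching unique ID and record the like."""
    path = os.path.join(DB_ROOT, username, f"post_{post_id}.json")
    with open(path) as f:
        post = json.load(f)
    if liker not in post["likes"]:  # ignore duplicate likes
        post["likes"].append(liker)
    with open(path, "w") as f:
        json.dump(post, f)
    return len(post["likes"])
```

The unique ID per post is what lets the like and comment functions locate the right file without a relational database.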
Challenges We ran into
We knew we didn't have enough time to make a fully working app in two days, so we decided to use a new application called Adobe XD to map out the app and show off our design skills. We had difficulty creating a database to store the user info and writing an algorithm that would provide accurate statistics for the user, as well as storing the password and keeping the data organized for the user to view and monitor. To overcome this, we stored the info as separate files which were recalled at a later stage, and used local servers as a database.
Accomplishments that we are proud of
We were happy to see that our Adobe XD prototype was working; it was amazing to see the app in our drafts come to life without real code. It was also a heartwarming feeling when we were able to simulate a working login and statistics code. It was a beautiful piece of workaround code that ended up being useful: we gave each action a unique ID, and to make these features work, that unique ID would be recalled; this way the code stayed organized and executed efficiently. We are also proud of the rendering of our video, as creating the mockups was a very tedious and lengthy task. By having the whole team work together we were able to split up the video work, so editing and rendering were done a lot quicker.
What we learned
We learned how to effectively use Adobe XD as a substitute for creating real apps. We also learned how to work with databases and use local servers to perform our tasks. This was also our first time using Adobe After Effects. We learned how to incorporate professional mockups in our videos to lift the user experience, and how to use the json library to create small, intriguing modules which we plan to use for other projects. Overall it was an excellent experience, and we learned a lot about video editing and Python modules.
What's next for Palette
The next steps for our project are to use Xcode and Android Studio to turn the UI prototype into a real global app with help from developers, and to get financial support to hire people to update the media catalogue for various territories. We would also like to enhance our Python code: currently it is just the backend, and to make it usable we will need to add front-end code that works in sync with it. We plan to continue promoting good mental health through Palette so everyone can truly remain stress-free!
Built With
adobe-after-effects
adobe-xd
pycharm
python
Try it out
github.com
xd.adobe.com | Palette-Team Swaghax, #29 | Palette is a user driven organization that provides an opportunity for consumers to explore creativity skills and manage their mental health through Art Therapy. | ['Aamodit Acharya', 'Jasanjot Gill', 'Aadil Somani', 'Hrid Patel'] | [] | ['adobe-after-effects', 'adobe-xd', 'pycharm', 'python'] | 8 |
10,576 | https://devpost.com/software/talk-positive-talk | Logo
The upset user types or speaks their negative thoughts.
A recurrent neural network analyzes the sequence of characters and turns it into a positive statement.
Inspiration
People tend to get trapped in a spiral of negative thoughts. Being able to transcend that tendency and see the good in the bad isn't always easy, but once you are able to do so, your perspective and the way you handle and react to things become much more positive and helpful. We hope to make the process of seeing the good in the bad easier.
What it does
A speech-recognition app that encourages positive self-talk by taking in our users' negative words and thoughts when they record themselves, and converting them into a much more positive message.
How we built it
Positive Talk is built with a Flutter frontend and Flask backend. The frontend provides a user interface that allows users who are upset to enter the negative thoughts that they may be feeling. The text is sent to a Flask backend through a POST request to be analyzed by a pre-trained recurrent neural network that recognizes patterns and sentiment in the text to associate the negative statement with a generated positive one. This is sent back to the frontend, completing the POST request and allowing the positive text to be displayed.
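The team's trained recurrent neural network isn't included in this description, so the sketch below substitutes a toy rule-based reframing table just to show the shape of the POST round-trip the Flask backend performs; the names (`handle_post`, `REFRAMES`) and the reframing table are illustrative assumptions, not the team's code.

```python
import json

# Toy stand-in for the pre-trained recurrent neural network: a rule-based
# reframing table. The real model generates the positive statement instead.
REFRAMES = {
    "i can't do this": "This is hard right now, but I can take it one step at a time.",
    "nobody likes me": "Some people care about me, even when it doesn't feel that way.",
}

def positivize(text):
    """Map a negative statement to a generated positive one."""
    return REFRAMES.get(text.strip().lower(),
                        "I am having a hard moment, and hard moments pass.")

def handle_post(body: bytes) -> bytes:
    """What the backend endpoint does with the frontend's POST body."""
    request = json.loads(body)
    reply = {"positive": positivize(request["negative"])}
    return json.dumps(reply).encode()
```

The Flutter frontend would send the JSON body over HTTP and display the `positive` field from the response.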
Challenges we ran into
Flutter was pretty new to all of us, so we had a hard time going through the documentation and having 6 hour blocks of learning was definitely a major challenge.
Accomplishments that we're proud of
In the end, we were able to build a functioning flutter app with a working backend, despite the time crunch.
What we learned
Through the process of creating this app, we gained more experience with Dart, Flutter, Flask, and Python. Also, some video editing skills :)
What's next for +Talk (Positive Talk)
We are planning to add a login and sharing feature. So our users will be able to share the positive messages with their friends on the app, or other social media platforms.
Built With
flask
flutter
python
Try it out
github.com | +Talk (Positive Talk) | A speech recognition apps that encourages positive self talk by taking in our users' negative words and thoughts and converting them into a much more positive message. | ['LEE ZHENG', 'Ren Jie Zheng', 'Borna Sadeghi'] | [] | ['flask', 'flutter', 'python'] | 9 |
10,576 | https://devpost.com/software/myndful-vb6lhq | Myndful banner!
Here at Myndful we take mental health issues seriously, and we think that in order for someone to properly address their mental health issues, they first have to live in a society that does not look down upon those suffering from these illnesses; this allows more people to feel comfortable coming forward and talking about it.
So what are we doing about it? Well, we have created a very intuitive and engaging app that allows people to answer a few questions about their daily lifestyle. Once finished, the app shows what they could improve on. This helps people build mental, and in many cases physical, strength as well.
Built With
godot
Try it out
mashrafulchoudhury.wixsite.com
www.instagram.com | Myndful - Hackermans, #15 | A self-care app to help improve your mental health. | ['Shxdow Raven', 'Dhruv Raval', 'KY - 10ZZ - Central Peel SS (2522)'] | [] | ['godot'] | 10 |
10,576 | https://devpost.com/software/breakthrough-gwm2xr | homescreen with all features
home screen with side bar extended
messaging screen
daily question 2
daily checkup question 3
something good about your day screen
PS: the 'never gonna give youuu uppp' represents our mission statement as opposed to a Rick Roll
Inspiration
I (Taha) and another girl on the team grew up in an environment where mental health wasn't really considered a thing, so serious issues often went unaddressed.
What it does
BreakThrough has a bunch of features: a random texting feature that matches you up with people who have suffered similar mental health issues, and daily checkups on the user's mood. If they are sad, the cause of their sadness is recorded.
Built With
android
java
Try it out
github.com
soheilrajabali.wixsite.com | BreakThrough - Code Linguists - Team 30 | A multi diverse mental health app that includes a texting tool which can connect you to random people that have gone through similar problems before. | ['taha Siddiqui', 'Soheil Rajabali', 'Eeman Chaudhary', 'Hadi Jafar'] | [] | ['android', 'java'] | 11 |
10,576 | https://devpost.com/software/smarttracker-covid19 | Inspiration :
Nowadays the whole world is facing the novel coronavirus. This Android app was created to track the spread of the virus country-wise (confirmed cases, deaths, and recoveries) and to spread awareness regarding COVID-19.
What it does :
The Android app, named 'SmartTracker-COVID-19', was created to spread awareness about the COVID-19 virus. The app includes the following functionality:
CoronaEx Section -
This section having following sub components:
• News tab: the latest news updates. Fake news seems to spread just as fast as the virus, but since we integrate news from official sources, everyone is kept clear of fake news.
• World Statistic tab: Real-time Dashboard that tracks the recent cases of covid-19 across the world.
• India Statistic tab: Coronavirus cases across different states in India with relevant death and recovered cases.
• Prevention tab: Some Prevention to be carried out in order to defeat corona.
CoronaQuiz section - a quiz that helps people learn about the coronavirus and its effects on the human body. It chooses random questions, shows the correct answer for each question, and at the end the user gets to know their highest score.
Helpline Section - As this application is made particularly for Indian citizens, all Indian state helpline numbers are included.
Chatbot Section - A self-service bot made to help people navigate the coronavirus situation.
Common questions: Start screening, What is COVID-19?, What are the symptoms?
How we built it :
We built it using Android Studio. For the quiz section we used an SQLite database, and the live news data is integrated from the News API. For the coronavirus statistics we collected data from Worldometer and Coronameter.
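The app itself is written in Java for Android, but the quiz flow described above (random questions, answer checking, score tracking) can be sketched against SQLite in a few lines; this Python version is a minimal sketch, and the table layout and question text are assumptions for illustration.

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")  # the app ships a file-backed database instead

conn.execute("CREATE TABLE quiz (question TEXT, answer TEXT)")
conn.executemany("INSERT INTO quiz VALUES (?, ?)", [
    ("Can COVID-19 spread through the air?", "yes"),
    ("Does washing hands reduce transmission?", "yes"),
    ("Is COVID-19 caused by bacteria?", "no"),
])
conn.commit()

def random_question():
    """Pick one random question, as the CoronaQuiz section does."""
    return conn.execute(
        "SELECT question, answer FROM quiz ORDER BY RANDOM() LIMIT 1").fetchone()

def play(answers):
    """Score a run and return it so the UI can track the highest score."""
    score = 0
    for question, correct in conn.execute("SELECT question, answer FROM quiz"):
        if answers.get(question) == correct:
            score += 1
    return score
```

On Android, the same queries run through `SQLiteOpenHelper` rather than the `sqlite3` module.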
Challenges we ran into :
Integrating the chatbot into the application.
Accomplishments that we're proud of :
Though it was our first attempt at creating a chatbot, we managed to raise our level to some extent.
What's next for SmartTracker-COVID19 :
For better conversations, we will be looking to work more on the chatbot.
Built With
android-studio
chatbot
java
news
quiz
sqlite
Try it out
github.com | SmartTracker-COVID-19 | Android app to track the spread of Corona Virus (COVID-19). | ['Pramod Paratabadi', 'Supriya Shivanand Madiwal .'] | ['Best Use of Microsoft Azure'] | ['android-studio', 'chatbot', 'java', 'news', 'quiz', 'sqlite'] | 12 |
10,576 | https://devpost.com/software/evexia-gjm8ze | Link
Main Page
Mood Log Page
Mindfulness Page
54321 Game Page
Logo
Inspiration
As students, we found that feelings of anxiousness and panic often overcame us before tests or big presentations. These shared feelings tended to occupy our minds and disrupt our mental well-being. We then realized that these feelings aren't faced by students alone; most people feel this way in whatever position they are in life.
What it does
Evexia is a Webapp, that allows users to choose methods to help calm themselves down depending on the level of anxiety or panic they are feeling. The application also has an emergency button for those who are experiencing intense panic attacks.
Evexia also offers a Log section. Within this section, the user can input and track their moods throughout the month, as well as write daily diary entries to vent or relieve stress. Furthermore, when they are feeling anxiety or panic, the mindfulness section of the app lets them partake in calming activities of different levels based on how they're feeling. The web app also offers a 5-4-3-2-1 game which is very useful in moments of intense panic; users can then practice this without the app later on.
In terms of user experience, Evexia offers a plethora of options for dealing with anxiety or panic. The web app is also fairly simple to navigate: its features are reached through various buttons, and pages are easily accessible from one another.
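Evexia itself runs in the browser with JavaScript, but the 5-4-3-2-1 grounding game's logic is simple enough to sketch in a few lines of Python; the exact prompt wording is an assumption, not taken from the app.

```python
def grounding_prompts():
    """Generate the 5-4-3-2-1 grounding sequence: five things you can see,
    four you can touch, three you can hear, two you can smell, one you can taste."""
    senses = [(5, "see"), (4, "touch"), (3, "hear"), (2, "smell"), (1, "taste")]
    return [f"Name {count} thing{'s' if count > 1 else ''} you can {sense}."
            for count, sense in senses]
```

The point of the exercise is that the fixed countdown gives a panicking user something concrete to focus on, which is why it is easy to practice without the app.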
How we built it
We used HTML for the structure, CSS for styling, and JavaScript for the interactive calendar feature. Along with JavaScript, we used an open-source library, p5.js.
Challenges we ran into
Through the development of this web app we faced many challenges; for example, when creating the mood log the JavaScript was not displaying, or the HTML buttons would not play the needed audio. However, we discovered that when it comes to developing web apps, Google and mentors are your best friends. After a bunch of trial and error we got our web app to work. Furthermore, by talking to mentors like Astawa we were able to better visualize our pitch, which helped us greatly in creating our video.
Accomplishments that we're proud of
Our team is proud of completing our web application through these 36 hours. We developed new skills and were able to see our effort pay off with a final product!
What we learned
We learned more about web development and about mental health as a whole. More specifically, we learned more about anxiety and how it is the most common mental disorder in the world.
What's next for evexia
In the future, we are looking to partner with organizations such as the Canadian Mental Health Association. We will be able to share their resources on our app and their endorsement will increase our reliability.
Built With
css
html
javascript
Try it out
github.com
lisa-vong-evexia.glitch.me | Evexia - Taught By Doner, 26 | Evexia: Calm your mind and bring back your senses.-- Evexia is a MOBILE web application that provides resources for grounding. It aims to help with anxiety and panic attacks. | ['Lisa Vong', 'Achchala Deepan', 'vidhipandya29', 'Aarya Jha'] | [] | ['css', 'html', 'javascript'] | 13 |
10,576 | https://devpost.com/software/neuratext-app | Sign-In
Social network log-in (Twitter, Facebook, Linkedin)
Homepage with content from all over the world!
Search for endless amounts of users and content
Daily Checkup for User Motivation
Messenger with inappropriate message guard
24/7 Service Helpline
Inspiration: Seeing all the heartbreaking news about depressed teenagers in connection with cyberbullying.
What it does: Censors inappropriate messages and content which have the potential to inflict harm on receivers. We also integrated diagnosis for those who need mental support upon signing up for our app, along with treatment features such as daily activity monitors and inspirational quotes.
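The description doesn't include NeuraText's actual filtering logic, so here is a minimal Python sketch of word-level censoring; the block list and function names are illustrative assumptions, and a production guard would use a far larger list or a trained classifier rather than exact-word matching.

```python
import re

# Illustrative block list only; the real app would use a much larger,
# curated list or an ML model to catch harassment and hurtful language.
BLOCKED = {"stupid", "loser", "ugly"}

def censor(message):
    """Replace potentially hurtful words with asterisks before delivery."""
    def mask(match):
        word = match.group(0)
        return "*" * len(word) if word.lower() in BLOCKED else word
    return re.sub(r"[A-Za-z]+", mask, message)
```

Masking at send time (rather than blocking the whole message) keeps the conversation flowing while removing the harmful word itself.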
How we built it: Through using different software such as Marvel, Balsamiq, Canva, and Photoshop, we were able to integrate a bit of each software into our final product, ultimately to make a usable and effective app.
Challenges we ran into: Coding was an issue that we had to work around to complete our final product, as two days were not enough for us to learn a new language. Furthermore, deciding on the software needed to complete this project was difficult to an extent as well.
Accomplishments that we're proud of: We are without a doubt, proud of learning new software such as Balsamiq, Canva and Marvel. This opportunity was a great learning experience, and that is what's most important to us.
What we learned: Not only did we learn how to use new software, but we also learned how to collaborate, and build upon our leadership skills.
What's next for NeuraText App: We definitely plan on continuing this app journey, using all our free time towards improving our app!
Built With
balsamiq
canva
marvel
photoshop
Try it out
marvelapp.com | NeuraText App | Prevention of cyberbullying (leading cause of depression) through censoring potentially hurtful words and harassment in our safety-guaranteed social media/messanger app. | ['Chen(David) Zhuang', 'Kush Shastri', 'Alex T'] | [] | ['balsamiq', 'canva', 'marvel', 'photoshop'] | 14 |
10,576 | https://devpost.com/software/life-of-blobby-team-ftw-team-19 | Works Cited 1
Works Cited 2
Works Cited 3
Works Cited 4
Works Cited 5
Story Script 1
Story Script 2
Story Script 3
Story Script 4
Pitch 1
Pitch 2
Pitch 3
Pitch 4
Pitch 5
Pitch 6
Pitch 7
Pitch 8
Pitch 9
Pitch 10
Pitch 11
Inspiration
Online games from PBS and a desire to create something that was educational and entertaining for children
What it does
Educates children aged 5-12 from minority groups, including immigrants and Black and Indigenous peoples, on issues surrounding mental health and related coping mechanisms.
How I built it
Code - repl.it, desktop design of the game - balsamiq, mobile application prototype - adalo
Challenges I ran into
Coming up with ideas, attempting to write the code, trying to make the video only 5 mins long.
Accomplishments that I'm proud of
Completing our pitch and working together
What I learned
Teamwork, time management, and a little bit of coding in python.
What's next for Life of Blobby - Team FTW, Team #19
Continue making adjustments to the prototype design and writing the code for the game.
Built With
adalo
balsamiq
python
repl.it
tkinter
Try it out
previewer.adalo.com
balsamiq.cloud
repl.it | Life of Blobby - Team FTW, Team #19 | An online game that educates children about mental health | ['Tanisha Nigam', 'Vidhi Gokani', 'Celine Kwan'] | [] | ['adalo', 'balsamiq', 'python', 'repl.it', 'tkinter'] | 15 |
10,576 | https://devpost.com/software/mindmap-bruv5p | Inspiration
In recent years mental health awareness has taken the media by storm. More celebrities, athletes, and other influencers have been making an effort to talk about mental health than ever before. This has helped reduce the stigma around mental health, created supportive workplaces, and encouraged individuals to seek services.
However, a prominent shortcoming in diagnosing and treating individuals with mental illnesses is the scarcity of mental health professionals who can cater to the individual requirements of each patient. Provider shortages persist and are expected to worsen. With a diverse variety of categories and subcategories within mental health, finding a viable treatment within an accessible radius and time slot can be difficult. In addition, not all providers accept insurance, and it is difficult to know which professionals will be able to cater to your financial, mental, and physical needs.
What it does
MINDmap is a one-stop application that bridges the accessibility gap between the resources of mental health professionals and their patients.
This app provides:
- Therapy Location Map
- Business Contact Information
- Cost and Insurance Compatibility
- Appointment Availability
- General Treatment Details and Specialties
- Reviews by Verified Users
- Detailed COVID Procedures
- Contact with Government Officials
- Booking through App
Challenges we ran into
With an online event, there are bound to be technical difficulties. Unfortunately, our team was no exception: one member lost power twice throughout the event and another had unreliable wifi, so we were constantly on our toes. Now that it has come time to submit, we can really laugh about that challenge. We had no control over it, but we still managed to submit our best work, and on time too!
Accomplishments that we're proud of
None of us were experienced in programming or software design; however, the idea we propose is an app. We managed to use Figma to develop a UX/UI design, and it turned out way better than expected.
What we learned
How to use Figma to create a UX/UI design
How to use Premiere Pro to edit videos
How to manage time better by staying on task
What's next for MINDmap
Phase 0:
Develop the App
Marketing to Professionals
TECHNICAL ASSESSMENT
Phase 1:
Launch free to consumers app on probation
industry professionals only pay for scheduling appointments and seeing appointment availability
BUSINESS ANALYSIS
Phase 2: 6 Months After Initial Launch
Launch Website after Testing and modifying app according to performance
Relaunch App with new features/feedback from screening process
COMMERCIALIZATION
Phase 3: 1 Year After Initial Launch
After the one-year mark, industry professionals will pay $11 a month (billed as one payment of $132) to keep more than just the name of their business on the app and website
Phase 4: 5 Years After Initial Launch
Once there is sufficient traction, prices will become $22 per month (billed as one payment of $264)
GROWTH
Phase 5: 6 Years After Initial Launch
Expanding reach past Ontario and up to the federal government across Canada
The provider works with BIPOC and neurodivergent people, as racism and ableism are rampant in the mental health industry and make mental healthcare inaccessible for BIPOC and neurodiverse people.
Phase 6: 10 Years After Initial Launch
Expand focus from private to public health clinics as well by proposing the government to implement this app as a part of the healthcare system
Built With
figma
ui/ux
Try it out
www.figma.com | MINDmap - Team GIF - 34 | accessing mental healthcare resources has never been easier | ['Logan Webb', 'Binalpreet Kalra'] | [] | ['figma', 'ui/ux'] | 16 |
10,576 | https://devpost.com/software/moodify-5y04u8 | Wireframe: New user survey
Wireframe: Features menu
Wireframe: Main Menu
Wireframe: Music and Audio Resources
Wireframe: Map and Nearby Resources
Wireframe: Selecting emotions
Wireframe: Opening app/New user
Citations Page PNG
Our VoiceFlow Alexa Diagram
Instagram Post 2: Mental Health Quote
Instagram Post 1: Mental Health Fact
Inspiration
We were inspired by personal experiences with mental health apps. A lot of us saw the previously made apps as stepping stones rather than competition. Most apps act as a journal or simply a place for someone to vent; other apps have meditation features. A lot of these features together helped create Moodify, and that inspiration can still be seen in the app right now!
What it does
Our app provides a variety of mental health resources, all in one place. These resources include, but are not limited to, guided meditation, a list of nearby activities, emotion management strategies, music and immediate support from professionals.
How we built it
We made a functioning prototype in the Marvel app using rough drafts made in Balsamiq Cloud. Ideas and concepts were taken from those rough drafts and refined into the app it is today.
Challenges we ran into
The biggest challenge we had encountered was attempting to set our app apart from all others alike it. Mental health awareness has become much more normalized, and with doing so, many apps were created. We attempted to solve similar problems in the mental health world, but we were also required to ensure our app was not like all the others.
Accomplishments that we're proud of
We are really proud of our final products, as most of us had no experience with creating or developing anything similar to what we produced. This has been a great learning experience for the entire team, and looking back at all the work we were able to complete really gives us all something to feel accomplished about.
What we learned
We learned how to make our own Alexa/home-assistant applications to use at home. Furthermore, we learned about the Marvel app's features and capabilities, and where people can design and create various applications. As a team we learned many useful skills such as collaboration, prioritization, organization, and effective communication.
What's next for Moodify
In the future Moodify will conduct much more extensive research regarding user experience, so we can develop new innovative features which are beneficial to and requested by our customers. Moodify is dedicated to providing preeminent and convenient mental health resources, which is why we will reach out to mental health organizations. This way we will gain first-hand insight into the needs in mental health services and learn about the assistance these organizations provide. Moodify hopes to improve its promotional and marketing strategies by improving our current platforms and seeking more partnerships where the organizations mutually benefit. We hope to grow the success of Moodify by increasing our recurring users and encouraging more users to upgrade to our premium account option for a monthly fee of $1.99.
PLEASE READ EMAIL FOR MORE DETAILS REGARDING VOICEFLOW
We had issues showing our work on VoiceFlow after accidentally doing work using a free workspace instead of a PRO workspace given to us by Spark Hackathon
These are the credentials of my account that holds the project
Email:
angelbennypaul@gmail.com
Password: tinybot345
To access the project, it should be found in Angel's Workspace, project Moodify and click on "Moodify" which is found underneath the Moodify project.
If you have any issues finding the project please email
angelbennypaul@gmail.com
Built With
balsamiq
marvel
voiceflow
Try it out
marvelapp.com
www.instagram.com
docs.google.com
docs.google.com
docs.google.com | Moodify - Aang Gang, Team 28 | Moodify is a feasible mobile application which provides users with quick resources & coping skills for nonemergencies while catering to their specific needs. It compiles numerous resources in one app. | ['Harveen Grewal', 'Charmi Kadi', 'Benny Paul'] | [] | ['balsamiq', 'marvel', 'voiceflow'] | 17 |
10,576 | https://devpost.com/software/werise-quantum-hackers-team-24 | This was truly an inspiring project. It helped us explore new resources about mental health. We also got the opportunity to brainstorm ways to raise awareness for the cause. We have never used multimedia like Marvel before, and this project gave us the opportunity to expand our skillset. Software like Marvel can be utilized in our future academic lifestyle as well. The main challenge we faced was time management as we had a lot of tasks to complete in a small period of time, which meant we had to remain responsible and efficient to be successful.
Built With
java
marvel
premierepro
Try it out
marvelapp.com
drive.google.com | WeRise-Quantum Hackers, Team #24 | Its an app created to raise awareness for mental health. It allows people to connect with helpful therapist/people who understand the severity of the issue or share similar experiences. | ['Virendra Jethra', 'Amitoz Jatana', 'Vasu Sukhija'] | [] | ['java', 'marvel', 'premierepro'] | 18 |