hackathon_id | project_link | full_desc | title | brief_desc | team_members | prize | tags | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|
10,354 | https://devpost.com/software/fusion-liczj7 | Our UV Sanitizing N95 Mask Storage Container!
Inspiration:
Currently, there is an N95 mask shortage around the globe. Millions of healthcare professionals are facing this shortage and are being forced to reuse N95 masks.
Professionals are using plastic containers and paper bags to store masks for reuse. These are crude, unsanitary methods that do not kill pathogens and can therefore increase the risk of infection.
What can we do to combat this issue and help our healthcare workers on the front line?
What it does:
Our product, Fusion, is an effective solution to the problem of unsanitary mask storage.
Fusion is a portable box that lets users store masks and sterilize them with UVC LED light. Masks can be stored cleanly and effectively while retaining the breathability that safe mask storage requires.
UVC light takes only a few minutes to sanitize masks, so users can clean them quickly with little delay.
One concern we had was about the cost of the storage container, but since almost all of the parts can be 3D printed, it should not be expensive.
How we built it:
We used Fusion 360 over the course of the weekend to design a CAD model which we later 3D printed with a Creality Ender 3 as a proof of concept. We then used a basic LED light to simulate the UV light since we did not have access to a UV LED.
Challenges we ran into:
The ideation process was the most difficult aspect of this challenge. Because PPE is tightly constrained by safety requirements, it took our team multiple hours of brainstorming to conceive a product that benefits PPE without compromising that safety.
Accomplishments that we're proud of:
We are proud of our CAD model since it turned out to function well despite limited testing! Likewise, we are happy to say that we Macgyvered a working prototype as well. Although we ran short on time 3D printing the model, the basic concept was proven to work well.
We are also proud of the fact that we were able to design, create, and present an idea in such a small time frame!
What we learned:
Firstly, this challenge gave us the opportunity to learn how to do very thorough research: we searched across the internet not only for products similar to ours, but also for studies that could prove that our idea would function as we wished. Along with this, we gained the experience of presenting this research and our solution to a professional audience, an opportunity that we high-schoolers and hopeful entrepreneurs are grateful for!
What's next for Team Fusion:
Our goal after this competition is to continue our research into our idea by digging deeper into the internet and reaching out to professionals in the field of our product. If we can confirm that we are the first to invent this product and that it can succeed in the real world, then we will file for a Provisional Patent Application (PPA), which will grant us protection over the idea so that we can reach out to companies to license the product. Although we would not profit as much through this strategy, our product would reach the market much faster, therefore getting it to those who need it (first responders, other individuals) as fast as possible. With licensing, our product reaches all of the stores to whom the company sells, unlike if we started our own company.
Built With
autodesk-fusion-360
Try it out
github.com | Fusion | N95 Sanitizing UV Mask Storage | ['Gaurish Lakhanpal', 'Lance Locker'] | [] | ['autodesk-fusion-360'] | 17 |
10,354 | https://devpost.com/software/billy-face-shield | Functional prototype main view
Functional prototype alternate view
Fusion 360 model screenshot
Full sized standoff test
Prototype CNC gcode path
Standoff CNC gcode path
Render of the CAD model
Billy goat wearing the Billy Face Shield
Billy Face Shield
Abstract
The Billy face shield is designed to reduce the plastic waste of plastic-frame COVID face shields by offering a platform that can be sterilized and reused. The Billy face shield is made of an aluminum frame and a clear plastic visor. The aluminum frame will not melt when put into common hospital autoclave cleaning devices. The frame can be made with "maker" tools very similar to widespread Prusa 3D printers; specifically, an OpenBuilds CNC mill was used to cut the aluminum plates of the frame. This $900 CNC mill is as accessible to makers as the $1000 3D printers used to make plastic-frame face shields. The clear plastic visor of the Billy face shield is identical to, and compatible with, the visor of popular Prusa 3D-printed face shields, supporting cross-compatibility. I created a successful 50%-scale mockup as a proof of concept. Overall, this alternate face shield design will help augment plastic-only face shields by giving users the option of reusing the frame, thereby reducing total plastic waste.
Introduction
It is truly noble that makers from around the world are taking a stand against COVID. The well-known 3D printer manufacturer Prusa is one such maker group. They have designed and produced a single-use PPE face shield for the COVID pandemic, made from a clear front visor and a frame printed on their own 3D printers. The Prusa face shield has gained widespread use and helped many caregivers. However, the next step forward is to make a face shield that is more sustainable and uses less plastic. One way of accomplishing this is to reuse the frame of the face shield after it has been sterilized. However, the common disinfecting method of autoclaving is not compatible with the plastic frame of the Prusa face shield.
Proposed Solution
Autoclaving is a simple and fast way to ensure a piece of PPE can be reused. The issue with Prusa face shields is that they cannot be autoclaved without melting. The logical solution is to use a material that is aesthetically pleasing yet can be autoclaved. Therefore, I have designed an aluminum-based face shield frame for reuse; the aluminum frame will not melt in an autoclave. This new solution is not meant to replace the successful single-use Prusa design but to augment it and offer an alternative, thus reducing plastic waste. The design reduces the plastic waste of the frame, not of the clear visor.
Design Implementation
The aluminum frame of the face shield is unique in that it has the same effective dimensions as the plastic Prusa frame. This means the pre-existing clear visor covering is compatible with the metal frame of the Billy face shield. While satisfying this cross-compatibility, the design was kept simple enough to be made on hobbyist CNC mills. Ease of manufacturing, with attention to aesthetics, was kept at the forefront of the design. The model is made from two flat aluminum plates and six metal standoffs. The standoffs have features cut into them to hold the clear plastic visor, similar to the Prusa implementation. Aluminum is a high-quality material, adding to the aesthetics of the face shield.
Functional Prototype
The functional prototype has a few concessions compared to the true model. Both are primarily aluminum, with the main body constructed from two CNC-milled aluminum plates. The functional prototype is a 50%-scale model, as the aluminum stock I had measured 100 × 100 mm. At that scale, however, I could not reliably machine the aluminum mating standoffs, as they would be too small to make. I completed the functional prototype with plastic standoffs, which would be traded for metal ones on a production model. I additionally made one aluminum standoff at full size to demonstrate the mating of the clear visor to the standoff. Some modifications would be required to fully streamline the process of making these reusable face shield frames. Overall, the functional prototype and the full-sized standoff were a success and turned out great.
Engineering Skills
Creating this first-generation prototype required an understanding of engineering and technology, specifically of CAD (computer-aided design) and CAM (computer-aided manufacturing). The CAD package used was Fusion 360 by Autodesk; it was chosen over SolidWorks because it is equally powerful for CAD while seamlessly integrating CAM functionality. The CAM integration allows easy and efficient control of end-use machines such as 3D printers and CNC mills. Furthermore, an understanding of CNC mills was required to turn the CAM output into the custom aluminum plates used for the face shield. Overall, a full suite of engineering and technology skills, from design to manufacturing, went into the prototype.
Tools Used
A 3D printer was not used to make the functional prototype, because a 3D printer can only make plastic parts. A hobbyist CNC mill was chosen instead. A CNC mill is very similar to a 3D printer in how it moves and processes information; the major difference is that a 3D printer extrudes molten plastic while a CNC removes material with a high-powered cutting bit. This allows CNC machines to work with more materials than a 3D printer, and the ability to use aluminum means a net decrease in total plastic used. The CNC chosen, the OpenBuilds Mini Mill, is an open-source budget hobbyist machine that cost $900 in total, a price comparable to the $1000 Prusa 3D printer. This means the CNC mill used is as accessible as the 3D printers used to make the plastic frames.
Built With
autodesk-fusion-360
cnc | Billy Face Shield | COVID reusable metal frame face shield | ['Geoff Billy'] | [] | ['autodesk-fusion-360', 'cnc'] | 18 |
10,354 | https://devpost.com/software/face-mask-dispenser | Mask Dispenser - Designed by Joe Stavistky
Inspiration
Colleges will be open in the fall semester. Students taking science classes with labs need to go to campus to perform hands-on experiments. Colleges such as Hudson County Community College will provide face masks to their students. The question is how to distribute them. Instead of letting students take masks out of the box, we developed an automatic face-mask dispenser to prevent students from taking more than one mask at a time and to minimize cross-contamination.
What it does
The automatic face-mask dispenser is a robotic arm that takes one face mask at a time out of the box and gives it to the student. The dispenser is activated when it senses a person in front of it, and it signals for a refill when the masks in the box are running low.
How we built it
The basic mechanism is a robotic arm built around an Arduino microcontroller along with sensors, a stepper motor, and a servo motor. The mechanical parts were made using hand tools, power tools, AutoCAD design, and 3D printing.
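The control behavior described above — sense a person, dispense exactly one mask, and signal when stock runs low — can be sketched as a small state routine. This is a Python simulation of that logic only, not the team's Arduino firmware; the sensor threshold and status strings are hypothetical stand-ins for the real sensor and motor interfaces.

```python
class MaskDispenser:
    """Simulated control loop: one mask per detected person,
    plus a refill signal when the remaining stock is low."""

    def __init__(self, masks, low_threshold=5):
        self.masks = masks                # masks currently in the box
        self.low_threshold = low_threshold

    def person_detected(self, distance_cm):
        # Hypothetical proximity cutoff; on the real device this reading
        # would come from the Arduino's person-detecting sensor.
        return distance_cm < 30

    def dispense(self, distance_cm):
        """Process one sensor poll and return a status string."""
        if not self.person_detected(distance_cm):
            return "idle"
        if self.masks == 0:
            return "empty"
        # Here the stepper would advance the stack and the servo
        # would hand over exactly one mask.
        self.masks -= 1
        if self.masks <= self.low_threshold:
            return "dispensed; refill signal on"
        return "dispensed"
```

For example, `MaskDispenser(masks=2, low_threshold=0).dispense(10)` dispenses a mask, and once the stock reaches the threshold the refill signal turns on.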
Challenges I ran into
We welcome anyone who is interested in collaborating or doing business with the STEM department of Hudson County Community College.
Accomplishments that I'm proud of
It worked.
What I learned
Refreshed some 3D design and robotics skills.
Hackathon means no house chores at all.
Skipping a night of sleep is OK.
Teamwork is important.
What's next for Automatic Face-Mask Dispenser
We will refine and perfect the prototype, then place automatic face-mask dispensers in different buildings at Hudson County Community College.
Built With
3dprinting
arduino
autocad
handtools
powertools
robotic
sensor
servomotor
steppermotor | Automatic Face-Mask Dispenser | We developed an Automatic Face-Mask Dispenser prototype. The dispenser can prevent people from taking more than one mask at a time as well as minimize cross-contamination. | ['Clive Li'] | [] | ['3dprinting', 'arduino', 'autocad', 'handtools', 'powertools', 'robotic', 'sensor', 'servomotor', 'steppermotor'] | 19 |
10,354 | https://devpost.com/software/designs-in-dentistry | Version 1
Version 2
Version 3
Inspiration
In the current COVID-19 pandemic, healthcare workers are at much higher risk due to increased exposure to infected individuals. This is particularly relevant to dental personnel, who are actively exposed to oro-nasal aerosols from procedures. Among several approaches to address the PPE shortages, our 3D-printing volunteer group has been actively pursuing additive 3D printing as a potential avenue for custom-designed, long-term sustainable (reusable) PPE. Our goal is to provide these products to our local community as a service, since 3D-printed PPE can improve comfort and compliance in accordance with New York State, CDC, and American Dental Association recommendations.
What it does
Our face shields specifically provide dentists with PPE that lets them wear their loupes comfortably, because the loupe lights are located outside the face shield. A common problem dentists have with face shields is that loupes will not fit underneath properly, often forcing the dentist to operate without them; operating without loupes can significantly impact the quality of their work. By moving the light outside the face shield, we help mitigate this issue.
How we built it
We took measurements of various loupe lights with calipers to design a compatible mount from scratch. We used free software — Meshmixer to create the STL files and a slicer to convert them into G-code — which allowed us to print physical copies of our prototypes on a Prusa MK3S.
Challenges we ran into
Our original prototype extended the light out too far, placing it outside the field of vision where the patient would be. The face shield evolved multiple times to achieve the correct light path toward the intraoral cavity.
Version 1: The light did not illuminate patient's mouth
Version 2: The light was mounted outside of the dentist's line of sight, preventing consistent illumination and requiring constant adjustment
Version 3: Built a two-piece drop-down mount to place the light in the line of vision, eliminating the need for adjustment.
Accomplishments that we're proud of
We are proud that, as dental students and a grad student with no engineering background, we were able to solve the various challenges we encountered. Beyond the science and technology, learning how to run a business effectively and efficiently over the last few months has taught us a lot. We are proud to be helping the healthcare professionals in our community during this pandemic. Our future goals include hosting workshops for local dentists so they can be self-reliant, and offering events for kids to encourage them to pursue STEM professions.
Built With
cura
meshmixer
notepad++
prusaslicer | Designs in Dentistry | Students finding ways to accommodate the needs of dentists within the community | ['Philip Sales', 'Kierra Bleyle', 'Shaina Chechang'] | [] | ['cura', 'meshmixer', 'notepad++', 'prusaslicer'] | 20 |
10,354 | https://devpost.com/software/antimicrobial-filament-for-3d-printing | Lab test for the antimicrobial characteristics of a plastic sample
Inspiration
We believe that college students can innovate and create solutions to real-world problems when they are given the right mentorship and guidance. Giving back to our community is always a priority, especially during this pandemic.
What it does
We developed a formula for antimicrobial materials used to produce filaments for 3D printing. Unlike some products on the market that only claim antibacterial characteristics, ours has been tested in our lab and shows antimicrobial efficacy. We can also provide a service to test antimicrobial efficacy in other products. In addition, we have an X-ray machine for quality control, used to examine the concentration of active ingredients such as silver or zinc. Our ingredients are FDA food-contact compliant and heat resistant. During this pandemic, our material can also be used to make face shield headbands for frontline health and food workers.
How I built it
We made our antimicrobial gadget using a CAD program and 3D printed it as a prototype. Our design is inspired by an existing metal stick already on the market.
What's next for Antimicrobial Filament for 3D printing
We would like to work with 3D printing companies and maker spaces by supplying them our material to 3D print antimicrobial gadgets and face shield headbands.
Built With
3dprinting
autodesk
extrusion
ingredients
polymer
x-ray | Antimicrobial Filament for 3D printing | We developed a formula to make antimicrobial material to produce filaments for 3D printing. This antimicrobial material can be extruded in big size machines or in lab scale 3D printers. | ['Anass Ennasraoui'] | [] | ['3dprinting', 'autodesk', 'extrusion', 'ingredients', 'polymer', 'x-ray'] | 21 |
10,354 | https://devpost.com/software/hat-attachment-face-shield | Inspiration
When I heard that schools were opening back up, I was worried for the children’s safety since they are not fully aware of the effects of the virus. I wanted to design something that would keep them safe but that was also comfortable and not hindering.
What it does
The face shield attaches to a hat to be worn. It has a double attachment: one part clips to the face shield and the other to the hat, and the hat-side part can be moved to fit differently shaped hat brims.
How I built it
I first prototyped it with household materials: binder clips, a plastic bag, bobby pins, and a plastic sheet. I then designed the clip concept in SolidWorks so that it could be 3D printed. Next, I created animations to show the concept and how it would work.
Challenges I ran into
I struggled most with creating the design in SolidWorks. I had just begun learning the software, so more experience would have helped me create the design as I had imagined it.
Accomplishments that I'm proud of
I am proud of participating in the Hackathon and for the design I came up with in such a short time span.
What I learned
I learned how to work efficiently in a short amount of time. I also learned to reach out and work with others (my mentors) when I am stuck on something.
What's next for Hat attachment face shield
The SolidWorks design needs to be finalized, 3D printed, and tested. Once the design is complete, the face shield can be cut out and attached, and the product can then be produced for use in schools.
Built With
adobe-animate
solidworks | Hat Attachment Face Shield | A comfortable yet effective face shield to protect children from COVID-19 when going back to school | [] | [] | ['adobe-animate', 'solidworks'] | 22 |
10,354 | https://devpost.com/software/ar-anatomy-827wgq | Ar view
Inspiration
This pandemic is really tough for all of us, so I made an AR app that can help both the public and doctors.
What it does
It provides information about our body parts in an augmented-reality view.
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for AR Anatomy
Built With
augmented-reality
echo-ar
Try it out
drive.google.com | AR Anatomy | AR based app for the hospital | [] | [] | ['augmented-reality', 'echo-ar'] | 23 |
10,354 | https://devpost.com/software/remote-sensing-rapid-kit-for-covid-detection | COVID detector
Prototype
Inspiration
Globally, the world is in a panic over the novel coronavirus; the infection originated in China and spread all over the world. Infection with the novel COVID-19 virus causes about 19 deaths per 1,000 infected persons. Viral detection is currently performed with enzyme-linked immunosorbent assay (ELISA), quantitative polymerase chain reaction (qPCR), flow cytometry, tunable resistive pulse sensing (TRPS), protein assays, and immunofluorescence assay (IFA). Research on COVID-19 has found that the virus's structural proteins include the surface protein, membrane protein, and envelope protein. The epidemic spread of COVID-19 might be amplified through currency notes, increasing the death toll all over the world. It is therefore necessary to develop a simple, cost-effective, and rapid method to detect the COVID-19 virus.
What it does
The epidemic has caused loss of life and economic damage all over the world. The proposed method can detect the virus rapidly, on site. The estimated cost of the probe is 150 to 200 rupees, and results take 5 to 10 minutes.
How I built it
A nano-based system: modified gold nanoparticles (Au NPs) were used with ACE-2 in an immunofluorescence assay for COVID detection, and a nanocarrier made of a biopolymer (chitosan) with modified iron oxide nanoparticles was used in a vaccination process to stimulate adaptive and innate immunity.
Challenges I ran into
We need to develop an antigen reactive to (inactivated) COVID-19 and to achieve formation of the COVID–nanomaterial complex.
Accomplishments that I'm proud of
Compared to the available techniques (PCR detection), the proposed method can track more cases and is cost-effective, selective, and efficient. This could help daily life return to routine all over the world.
What's next for Remote sensing rapid kit for COVID detection
We need to commercialize the kit across the country and help stop the upcoming waves of COVID-19.
Built With
immunofluroscence
nanomaterial | Remote sensing rapid kit for COVID detection | Colorimetric COVID detector | ['Balasurya Balasurya'] | [] | ['immunofluroscence', 'nanomaterial'] | 24 |
10,354 | https://devpost.com/software/healthrific | Contact free basin modified design
Actual picture of Basin
Actual hardware pic
SignIn page
SignUp page
Forgot Password page
Kiosk Test
Self Assessment Test
Covid 19 Updates
Health Tips
Results of Self assessment test
UV-C
todo
Built With
bluetooth
dart
firebase
flutter
Try it out
github.com
docs.google.com
drive.google.com | no | na | ['Haripriya Baskaran', 'Mohammed Mohsin'] | ['Script Foundation: Best Healthcare Solution'] | ['bluetooth', 'dart', 'firebase', 'flutter'] | 25 |
10,354 | https://devpost.com/software/bunnypapr-for-jcrmrg-hackathon | BunnyPAPR
Video Overview
Watch the 5-minute video for an overview.
Inspiration
In March 2020, a short three months ago, Dr Kuo released his design for the Bunny Science PAPR. It was meant to address the N95 shortage in hospitals, but hospital adoption was slow. We asked: For emergency situations, could a $2000 PAPR system be simplified to 1/10th the cost? The answer is yes, and the $30 bunnyPAPR was born.
What it does
The diagram below shows how it works. And one of the key things is that it can be made for $30 or less.
Contaminated air passes through the FDA-approved anesthesiology viral filter with sub-micron level capture.
Fan pulls in outside air. Circulates clean air. Provides positive air pressure.
Person exhales CO2 through a recommended surgical mask. [1]
CO2 is continuously vented out through rear flutter valves or additional viral filters.
Fan is powered by USB battery pack in the user's pocket.
Depending on the use case, many of these parts can be decontaminated and re-used. Hence, the one-time cost of $30 is for a reusable system. [2]
Medical Testing
The Bunny Science PAPR has been tested by the inventor, Dr Kuo, an anesthesiologist from Seattle. In particular, it
passes the nebulized saccharin test (sometimes referred to as the QLFT for respirators)
passes monitoring of O2 and inspired/expired CO2
passes "stress tests" of deep and fast inhales and exhales.
It has been worn for 11 hours straight, and Dr Kuo has worn it every day he has been in the hospital for the last 2 months.
Details of the Bunny Science PAPR are in the 14-page paper titled:
Pilot Evaluation of Oxygen and CO2 Safety of BunnyPAPR, A Prototypical Sealed, Compliant Volume Powered Air Purifying Respirator (SCV-PAPR)
How we built it
About 1,000 engineering hours have gone into the project.
Four out of five parts are commodity parts. We tested several dozen fans, filters, bags, and batteries to find the right balance.
Commodity Parts List (millions available)
Fan ($2 wholesale to $4-5 retail)
FDA-Approved Viral Filters ($1 retail in packs of 100qty)
Bags ($0.30 each, disposable)
USB Battery Pack ($5-$10)
The final parts are a head mounting system and airflow-connectors. These were prototyped using 3D-printing. See
https://www.bunnypapr.org/makers/3d-print-files
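Using the retail figures from the parts list above (high end of each range, and a set of three filters as suggested in note [2]), a quick sum shows how the commodity parts fit under the $30 target, leaving the remainder for the 3D-printed head mount and connectors. The exact split is an illustrative assumption, not a published bill of materials.

```python
# Retail prices from the commodity parts list above (high end of each range).
parts = {
    "fan": 5.00,                   # $2 wholesale to $4-5 retail
    "viral_filters_3x": 3 * 1.00,  # FDA-approved filters, ~$1 each retail
    "bag": 0.30,                   # disposable
    "usb_battery_pack": 10.00,     # $5-$10
}

commodity_total = sum(parts.values())
print(f"Commodity parts total: ${commodity_total:.2f}")
print(f"Left for printed mount/connectors: ${30 - commodity_total:.2f}")
```

Even at retail prices, the off-the-shelf parts come to well under $20, which is consistent with the $30 target for the whole reusable system.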
Challenges we ran into
The primary challenge was figuring out our target market. Initially, the focus was on the medical community and at-risk people. After all, they need protection as good or better than N95. However, given the Covid19 controversies around mask wearing mandates, hospital approvals, and cloth masks, we had to admit our error:
Not everyone who medically needs one will get one.
The hackathon overcame this challenge by rethinking the problem: "Who is most motivated (financially, medically, or behaviorally) to want bunnyPAPR?" We refocused on international aid (free distribution) and large industries (premium) where safety is part of the license to operate. Hence, we arrived at the "Freemium" business model described in the 5-minute video.
Other challenges included various R&D and scaling/sourcing challenges.
PMF
Like many teams, we are struggling with product-market fit (PMF) because decision makers are overloaded. We are pivoting away from marketing to everyone and toward finding our target customers in B2B, B2C, and B2government contexts.
FANS
We bought over a dozen fans; many did not match their listed specs.
BAGS
We have spent approximately 125-200 hours on bags. Some bags are strong, but visibility is poor. Other bags have great visibility, but might tear easily.
INDIA
Our India partner (link) has had a hard time sourcing the right bags and fans.
Overall, we are approaching these issues methodically and finding adequate solutions. Importantly, we've been able to maintain our $30 target.
Is it safe? And other reactions
A lot of people ask us, "Is it safe?" When they do, we mention that
A doctor has worn it in surgery and for 11 hours straight. In a COVID-19 ward.
The filters are FDA-approved and used in surgery for anesthesia and pulmonary testing.
If there is ever an issue, you can tell because the bag deflates. You typically have at least 4 minutes to take off the plastic bag, so you will not suffocate unless you ignore it entirely for that long.
Accordingly, don't sleep in the bunnyPAPR.
If they then say, "I don't trust plastic bags. It looks not safe", then we thank them and move on. As with any new product, a lot of people will be skeptical. Some people won't change their minds based on information. They mostly want to wait and see if others use it.
Based on emails, we're finding that 80% of people don't want to try it. (They just want familiar masks or N95 masks.) Of the other 20%, 1-5% really love it and need it. (Fun fact: people with beards love bunnyPAPR since N95s don't seal.) So that is our target market. The other ~15% are skeptical and will try it if it's free. And, upon trying it, many of them like the protection, but dislike the noise or that it is a bag.
We have worn these to the supermarket and Costco. In those cases, there is a lot of interest. This is because people are seeing someone else wear it. Even better is if two people wear it out together.
Accomplishments that we're proud of
All volunteer team. We run through a Discord/Slack chat server.
Doing all this fast! (2 months, with a flurry in the last 3 weeks).
Teamwork.
Integrating a diverse group of talented volunteers. From 16yo to 70+. Finding good, helpful people internationally.
Distributing 20+ BunnyPAPRs to people who have wanted to use them.
We have a list of over 30 requests that we are in the process of fulfilling.
What's next for BunnyPAPR
Finish "500plan" bunnyPAPR inventory. Make and distribute 500 internationally.
More user feedback, refine target market.
Seek "angel" investors for scaling to 2000-5000 and then 100,000
Explore target markets, like
NBA and other large markets,
direct to consumer, and
WHO, and international humanitarian aid
Consider US medical markets and FDA Emergency Use Authorization (EUA) approval. Medical regulations are very complex.
Provisional patent submission
Contact Us
We can be contacted at
info@bunnyPAPR.org
and
eth7an@bunnyPAPR.org
.
You can also call/text/WhatsApp Howard at +1 XXXXXXXX (mobile phone) - email
hocho@bunnypapr.org
and request WhatsApp information.
NOTES
[1] NOTE on surgical masks: in the "consumer" version, bunnyPAPR™ has viral filters on both the intake and the exhaust, so surgical masks are not necessary. In one lower-cost "medical" version, bunnyPAPR™ has a viral filter on the intake only and a valve on the exhaust to prevent reverse flow; a surgical mask then captures any viral shedding from the wearer instead of an exhaust filter. This option exists because many hospital administrators will still require a surgical mask or N95, for legal/FDA-compliance reasons.
The decision of which version to use will ultimately be up to the hospital administrator and infectious disease control department.
[2] The decision on re-use and decontamination is a tricky one. Under normal circumstances of abundant filters, the viral filter would be replaced after each use, just like N95s. Under the Covid19 crisis, many hospitals are rationing and re-using N95 masks, sometimes using one-per-day or even one-per-week. If this is worn in a consumer setting (grocery shopping, not a hotspot), a set of 3 filters would probably last months to a year, depending on how often one goes out. If this is worn in a hospital setting in a Covid19 ward, it will depend on the availability of viral filters. One filter per day or several days is reasonable and similar to the N95 rationing protocols. A lot will depend on the infectious disease control departments in hospitals and national/international health authority guidance. Note: The bunnyPAPR system is not yet FDA-approved, but the viral filters are FDA-approved and routinely used during surgery.
Try it out
bunnyPAPR.org | BunnyPAPR for JCRMRG Hackathon | Helping protect and re-open schools and businesses | ['HowardG Chong', 'Michael Noes', 'Ethan White'] | [] | [] | 26 |
10,355 | https://devpost.com/software/cupertino_pride | Resouces
Home Screen
Events
Safe Routes
Voting Screen
Log In Screen
WeCupertino
In Santa Clara and Cupertino, there are four main challenges that interfere with civic engagement and civic responsibility:
1) Lack of awareness: many community events and elected officials are unknown to citizens
2) Lack of motivation: there are no incentives to help out in the community or participate in elections
3) Unawareness of impact: people in our community do not know how elections, volunteering, and community participation affect them
4) Safety concerns: Cupertino scores 35 on the Crime Index scale, where 100 is safest. This keeps people from participating for fear of becoming victims of crime
What Inspired us?
1) Lack of participation in Elections: in Cupertino, a mere 49% of the people who were eligible to vote did so in the last presidential election; in other elections that number drops even lower
2) Low Safety Rating: Cupertino's low safety rating prevents people from enjoying our community
3) Love for Cupertino: Both of us have been members of Cupertino since birth; the chance to have a real impact on our community is a major factor influencing us to participate
How We Built the App?
Building this application was especially difficult for us because we had to make sure that the entire UI was easily accessible and feasible for people of all ages. This forced us to plan a strategy and User Interface for the entire application before we even started coding. After drawing out and developing the User Interface, we tackled the 5 different aspects of the Application we needed to build:
1) Google Sign-In: This involved creating a Firebase project and learning how to integrate an online server and database with our code. Since we were relatively new to Flutter, this took us a while to learn and integrate; however, we realized the benefit of doing so would be large, as it would open up the app to a lot more users.
2) Home Page: This page involved a large number of UI features and setting proper constraints between objects so the Application works on phones of different sizes. We also worked on pulling data from Firebase after users signed in. This allowed us to specifically reference people's names in the application, making it far more user friendly.
3) Voting and Resources: These pages were the easiest for us to create; this involved simply linking our buttons to various websites and resources to help the population of Cupertino. This lets citizens learn from better and more credible resources than an app built by two teenagers.
4) Events: This scene was harder to make, as it involved communicating between two different classes and Firebase to preserve information in accounts and update the UI on the homepage simultaneously. It took us a while to achieve; however, we were able to overcome all the obstacles we faced.
5) Safety: This was the hardest part of the App for us to accomplish. Integrating Google Maps, Firebase, and Polylines took a lot of time. We first had to learn how to add maps and integrate camera navigation within them. Then we struggled with Polylines, which were hard to understand: we had to draw lines to mark out safe routes through the community that could be used for travel. Despite all of this effort, we ran out of time and were unable to fully finish implementing this part of the app
Future improvements:
1) Improve the safe routes to include live location tracking.
2) Add more information and resources, also integrate the Events with Firebase better to allow more usage of the App
3) An awards and point system to incentivize people to participate
Built With
dart
java
kotlin
objective-c
ruby
swift
Try it out
github.com | WeCupertino | WeCupertino is the future for uniting communities, and promoting community engagement. It provides reasons for people to participate in various aspects of Civic Duty | ['Raghavan Ramaswamy', 'Shivansh Hedaoo'] | ['Apple Airpods'] | ['dart', 'java', 'kotlin', 'objective-c', 'ruby', 'swift'] | 0 |
10,355 | https://devpost.com/software/recyclehub-itlfhp | Scanner with both Quick Scan and Custom Scan
Scanner Results Screen
Map Screen
Recycling Center Details
Total profit based on recycling history
Plastic Recycling Statistic
Screen with Recycling History
Inspiration
As a result of the COVID pandemic and the consequent lockdown, we noticed that sales in single-use goods such as plastic water bottles and soda cans have skyrocketed, even in our own households. However, the general consensus that we've noticed in the Bay Area is that recycling is tedious and offers very little in return. As a result, RecycleHub was born, an app that guarantees to get more citizens involved in the process of recycling by making it simple, efficient, and profitable, thus incentivizing users to help make our city greener.
What it does
The app has three main features - the scanner, map, and logs. The scanner classifies what type of trash the user is scanning, and identifies specific recyclable materials in the object. The map points users to nearby recyclable centers, and provides specific information regarding when the center is open, what recyclables it accepts, and how to get there using Google Maps. The logs help the user keep track of their recycling history, provide them with statistics regarding amounts of specific materials recycled, and estimate the amount of money they can make off recycling based on their recycling history.
How we built it
Kyle and Adithya used Python to build the convolutional neural network (CNN) that classifies trash using Google's MobileNetV2 architecture. We chose this architecture because it is very efficient and can easily be run on a mobile device. The network was ported to iOS using the LibTorch library. We were able to achieve 97.69% accuracy in classifying different types of recyclables by training the model for 10 epochs using stochastic gradient descent with a learning rate of 0.003 and a momentum of 0.9.
Nikhil built the features related to Google Maps using the Google Maps iOS SDK. In combination with the Places Library, Nikhil located recycling centers near Cupertino. By calculating the distance to each possible center, Nikhil was able to determine a relevant list of centers catered to the user's needs and display them on the map's front end.
Dinesh adapted all the features into an iOS app, designed the graphical interface and user experience of the app, and set up the scanner to send correctly formatted images to the CNN. He also created a statistics feature for users to keep track of recycling history and their accumulated profit, with price values based on guidelines from CARecycle. Dinesh used Yelp's Fusion API to gather specific data about nearby recycling centers in order to display images, accepted materials, and addresses of recycling centers.
Challenges we ran into
Porting the convolutional neural network from Python to Swift was rather difficult as there was no indicator that we needed to save the network in a special way to make it compatible with Objective C++ until we got an error message saying so. Porting the app was made even harder as none of us knew Objective C++ prior to developing the app.
It was also hard to communicate between different groups since two people worked on Swift while two others worked on Python, so they each had to learn a little bit about what the other was doing to understand what was going on and bridge the tasks together.
However, despite all the setbacks, we persevered, using online tools, APIs, development tools, and each other to help make our vision happen. As a team we discussed our plans, setbacks, and successes, and supported each other to make RecycleHub a reality.
Accomplishments that we're proud of
We're incredibly proud of the app we have created: an iOS app with a rich UI, accurate convolutional neural network, and seamless incorporation of the Google Maps SDK and the Yelp Fusion API. We were able to take on three difficult and seemingly unconnected tasks and integrate them into a single app, despite being unable to meet in person due to the pandemic. Our willingness to work with new and intimidating technologies and persevere through setbacks is an accomplishment that we will forever celebrate.
What we learned
Each member of the team utilized challenging technology, and as a result learned a lot during this week of coding. We became much more familiar with iOS development and the intricacies behind developing user interfaces and layout constraints, as well as the challenge of manipulating data to input into the NN model. We learned how to design, train, and test a neural network in Python and learned how to port a network built in Python into the iOS environment using a new language: Objective C++. We learned how to work with Google’s Maps SDK and how to implement it in a Swift application to locate nearby recycling centers.
What's next for RecycleHub
We hope that this app can be used to help inspire people in the Bay Area by giving them an incentive to recycle more often.
Currently, our neural network and app can classify images into six categories –metal, glass, plastic, paper, cardboard, and trash– but by adding new materials such as textiles, batteries, and tires, we could make our app more powerful for our users. In addition, adding new images to the dataset used to train our neural network could help increase its accuracy rate. Other features we've brainstormed include upgrading the map section of our app to allow for more user interaction (such as starring commonly visited recycling centers), increasing the element of competition between friends by allowing for the creation of profiles, and implementing a badge system for recycling-related achievements. We could also utilize cloud features such as cloud storage for users to view a more detailed recycling history (account details, chat logs, and previous trash images) as well as cloud computing to make the app faster and more accurate by allowing a deeper network to run on a GPU in a server instead of on a phone as it currently does. These are just a few of the many features that we hope to add to our app in the future to make it an inspiration to recycle for all.
Built With
google-maps
objective-c
python
pytorch
swift
Try it out
github.com | RecycleHub | RecycleHub is an app that aims to increase recycling in Cupertino by identifying recyclable materials, showing where they can be recycled, and estimating the reward for recycling them. | ['Kyle Kumar', 'Dinesh Thirumavalavan', 'Nikhil P', 'Adithya Chandrasekar', 'Saadiq Shaikh'] | ['Fitbit'] | ['google-maps', 'objective-c', 'python', 'pytorch', 'swift'] | 1 |
10,355 | https://devpost.com/software/mgassist-cupertino-hack-project-divya-venkataraman | home screen
vision impairment symptom
speaking/hearing impairment symptom
seizure symptom
headache symptom
mood tracker
help menu
Inspiration
My grandfather had a meningioma brain tumor, and unfortunately passed away due to it. This inspired me to research more about the tumor, especially since neuroscience interests me immensely. Once I researched it, and realized it was such a big problem in the community even today, I knew I had to create something to help the patients. Now I know that if my grandfather had had my app when he had meningioma, he could have had a little bit of a longer life, instead of having to leave while I was only 5 years old.
What it does
MgAssist is an app that assists in the recovery management process of those with meningioma brain tumor. It is a tool to assist people with the different symptoms of meningioma (such as vision impairments, speaking/hearing impairments, headaches, and seizures) and allows doctors to track patients’ progress. The app uses algorithms to convert scanned text to an enlarged font, convert typed text to speech, provide a test to detect a possible risk of headaches or seizures in the near future, and track patients' mood for doctors to analyze trends.
How I built it
I built this app using Swift 5 and Xcode 11. I first designed the user interface of the app using the Storyboard, where I could implement buttons, images, and more elements. Then, I added code to the backend of the app so that scanned text could be converted to an enlarged font, typed text could be converted to speech, the headache and seizure tests could be analyzed, and moods could be tracked. Then, to test the app, I connected it to my dad's iPhone 8 and was able to see the app function and make changes.
Challenges I ran into
When coding the image scanner (to convert into a larger font), I realized that I had designed the whole app using a tab view, rather than the single view. The vision framework code I used only worked in the single view. However, since I wasn't too far into the process, as I wanted to target the hardest parts of code (vision framework) first, it was relatively easy to transfer everything to a single view app project.
Converting text to speech was a huge problem of mine, because when the ringer of the phone is turned off, the voice won't play. It took me multiple tries; however, I finally tried the speech with the ringer on, and it worked!
There weren't too many tutorials for the new Swift 5 code, and since I was learning most of the code in the process, it was a bit hard. However, I was able to find a few and work with them. I hope that in the future, I can provide some tutorials so that others won't run into this problem.
Accomplishments that I'm proud of
I am proud that I was able to learn most of Xcode and Swift in the time given!
I'm proud that my initial plan for the app was carried out as I expected, although there were a few obstacles on the way, which is completely normal.
I'm glad that I was able to make an app that can actually make a difference in someone's life, helping out the field I am so passionate about, neuroscience.
I'm proud of my incorporation of the vision and AVFoundation frameworks, since those were the most complicated parts of my code.
What I learned
I learned so much about Xcode and Swift in general. I mainly know Java, and learning this new language was insanely awesome.
I learned how to use the Vision framework, which I think is really cool, as I know that there are so many different apps that have implemented it since it came out, such as Notes and Google's scanning app.
I learned that it is possible to implement text to speech conversion using code! I wasn't sure if this would be possible using Swift, especially since I didn't know much to start with, however, I was able to do so by implementing the AVFoundation framework. This was incredibly cool to me, since this was so out of the box for me!
What's next for MgAssist - Divya Venkataraman
I'd like to add different tumors to the app as well, targeting their symptoms, rather than just the meningioma symptoms, even though this is the most common. This is a large task, but I really hope to do so.
I'd like to publish this app to the App Store, so that others can benefit from it as well.
I'd like to reach out to some doctors and researchers in the neuroscience field about this app, to ensure that I can bring it to a new level, and maybe even find a sponsor.
- I will then be able to implement this app with actual patients, and possibly make a huge, positive difference in their lives
Built With
avfoundation
swift5
visionframework
xcode11
Try it out
github.com | MgAssist - Divya Venkataraman | MgAssist is an app that assists in the recovery management process of those with meningioma brain tumor. It assists people with the symptoms of meningioma and allows doctors to track progress. | [] | ['Raspberry Pi'] | ['avfoundation', 'swift5', 'visionframework', 'xcode11'] | 2 |
10,355 | https://devpost.com/software/klasse | Login Page
Dashboard
Courses Page
CS 1 Lesson
Inspiration
Pranav’s cousin was born with Non-Verbal Autism Spectrum Disorder (ASD). It was always very hard for him to communicate and most of what he wanted to say could only be said in small phrases which were hard to understand. Because of this, he couldn’t learn as easily as others could and this obstacle made it very hard for him to reach his goal of becoming a journalist. We wanted to give everyone a fair chance at learning, achieving their goals, and contributing to the community. So, we built Klasse.
The Problem
Kids with special needs often find it difficult to learn without specialized lessons at school.
Their disability prevents them from reaching their full potential by learning new things.
How We built it
We created Klasse using the Python web framework Flask, along with HTML, CSS, and JavaScript.
Challenges We Ran Into
We came across a lot of different bugs and errors; however, we never gave up and continued to keep pushing to where we are today.
Accomplishments That We're Proud Of
We are proud to have created such a platform that enables special needs children to learn more effectively. We believe it can really make a difference in today's world.
What's next for Klasse
We are planning to add more courses, lessons, and achievements.
Built With
css
flask
html
javascript
python
Try it out
github.com | Klasse | A learning platform catered to curious children with special needs. | ['Nishant Ray', 'Pranav Harakere'] | ['Raspberry Pi'] | ['css', 'flask', 'html', 'javascript', 'python'] | 3 |
10,355 | https://devpost.com/software/visualizing-predicted-san-francisco-crime-bsm9rz | Home Page
About Page
Sample Devlog Directory
Sample Devlog
DISCLAIMER
The three-minute video is not nearly long enough to properly explain the entirety of the project. The descriptions below are more detailed and offer much more information regarding the site's inspiration and development. For more insights into our coding process, please read the devlogs, which are records of daily progress. For another look at the ethics and potential impacts of a website like this, please click the "ethics" button to see a short article that explains our concerns and rationale. This project was built with a lot of passion—not only for coding, but also for discussion and information regarding the handling of crime in urban cities—and I hope you can see that!
Finally, the video has a bit of lag at the 2:15 mark. I (Michael) start talking about the assault data being displayed, but it's not yet visible. This is a result of internet lag, and is not perfectly representative of what the final product is. If you have a strong internet connection, it will load properly. Try it for yourself!
INSPIRATION
Crime has and will almost always be a prevalent civic issue, especially in urban environments. Local governments are in charge of directing police and making infrastructure investments to solve and prevent crimes. However, these infrastructure changes can often be ineffective without proper information and understanding of where crime occurs. Also, when tourists visit a large city or local residents simply travel out of their home districts (most notably in exceptionally large cities like San Francisco, Los Angeles, New York City, Boston), the understanding that crime occurs but not knowing where it occurs can make individuals feel unsafe or uncomfortable. Ineffective changes as well as a sense of discomfort when travelling in unknown areas both revolve around a lack of information.
This project is a website that helps solve that by displaying data regarding where crimes are most likely to occur at certain points of the year through the utilization of a Machine Learning algorithm as well as Google Maps JavaScript API.
WHAT IT DOES
The project is a website hosted on Google’s Firebase web development platform. It can be found here:
https://cupertinohacks-cc57d.web.app/
The home page of the site is a map centered on San Francisco. In the upper left, the user can choose both a month and a classification of crime to view data for. Once the user selects the data they want to see, they will be sent to another page where the top forty-or-so locations where that type of crime is most likely to occur in that month are displayed as a heatmap.
The project’s sole function is to display information. The locations themselves are predicted by a Machine Learning algorithm that utilizes publicly provided historical data. The data is provided by SFOpenData (a database run by the San Francisco city government), although the version being used in the project is from a Kaggle competition. Ultimately, this information is provided in the hopes that:
a) All individuals travelling through San Francisco can have a strong understanding of generally what areas they may want to either physically avoid or avoid parking a vehicle in
b) San Francisco city authorities that are NOT statisticians or subject matter experts can have a visual representation of where crime is expected to occur and make corresponding decisions on where to expend future resources in infrastructure or security
The reason San Francisco data was chosen was because it was both publicly available and large enough for the Machine Learning algorithm to make inferences that we were confident in. The project itself is also designed around urban cities, which tend to have districts/regions in which crimes are more likely to take place yet which are also frequently traveled, even by individuals not living in those regions.
HOW IT WAS BUILT
ML EXPLANATION
INPUT DATA
The data used to train the algorithm consisted of the date, type, and location of 800,000 individual instances of crime throughout San Francisco’s history. To simplify the later user interface as well as the training of the algorithm, we mapped each day to just the month it took place, as we determined that the specific date of a crime doesn’t encode much meaningful information, whereas the year it occurred could only encode rising or declining trends. We determined that the month the crime takes place could encode seasonal differences in crime patterns, and thus used that as an input variable. The type of crime included 17 different criminal violations, ranging from larceny, assault, and burglary to murder.
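The date-to-month collapse described above can be sketched in a few lines. The exact timestamp format is not stated here, so the "YYYY-MM-DD HH:MM:SS" layout below is an assumption based on common SFOpenData exports:

```python
from datetime import datetime

def to_month(date_str: str) -> int:
    """Collapse a full timestamp to just its month (1-12).

    Assumes the "YYYY-MM-DD HH:MM:SS" format common in SFOpenData
    exports; the project's actual format may differ.
    """
    return datetime.strptime(date_str, "%Y-%m-%d %H:%M:%S").month

# e.g. a record from May keeps only 5 as its temporal feature
month = to_month("2015-05-13 23:53:00")  # -> 5
```

Dropping the year, as the authors note, trades away trend information for a cleaner seasonal feature.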
OVERVIEW OF DATA PREPROCESSING
We determined that the type of ML model required to predict location would be regressive. Quickly, though, we found that a regressive approach would not fit our needs. Common models for regression, such as linear regressions (which would not be applicable here as the data was categorical), regression trees, and regressive neural networks, predict a single element from a given output space. For example, given the task to regress over the real numbers, a regression model would only predict a specific number, such as 3. However, in our problem statement, we want to predict a distribution over the output space. This is much more nuanced and difficult to do with a regression model. Thus, we decided to use a classification approach, splitting up the city of San Francisco into 625 subregions, 25 on each axis, and classified each axis independently.
The problem arises in determining the probability of a crime occurring in a certain subregion of San Francisco. Given that a 625 dimensional output space is larger than the input space itself, the mapping the model would learn would necessarily be a strict subset of the output space, and thus fail to predict crimes in certain areas of SF. Thus, we created two classifiers, one for each axis, and treated them as independent random variables. This, of course, is a fallacy, since longitude and latitude are not independent in predicting crimes (as determined by a Chi Squared Test of Independence). Although this is true, it was a necessary assumption we were willing to make.
Thus, all longitude and latitude entries were discretized into numbers from 0-24 to aid in ‘classification’. We determined that the model would output a probability distribution over these 25 classes in both the latitude and longitude directions, and we would just multiply the corresponding probability vectors to create a 25*25 probability matrix spanning San Francisco to model a 2-dimensional probability distribution.
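A minimal sketch of this two-step scheme, discretizing each axis into 25 bins and combining the two per-axis distributions under the (admittedly false) independence assumption, might look like the following. The bin count matches the text; the coordinate bounds in any real use would be San Francisco's and are left as parameters here:

```python
def discretize(coord: float, lo: float, hi: float, bins: int = 25) -> int:
    """Map a continuous coordinate onto one of `bins` equal-width cells."""
    idx = int((coord - lo) / (hi - lo) * bins)
    return min(max(idx, 0), bins - 1)  # clamp boundary values into range

def joint_distribution(p_lat, p_lon):
    """Treat the two axis distributions as independent random variables
    and take their outer product, giving a bins x bins probability
    matrix spanning the city (the 25*25 matrix described above)."""
    return [[pa * pb for pb in p_lon] for pa in p_lat]
```

Because each axis distribution sums to 1, the outer product also sums to 1, so the matrix is itself a valid distribution over the 625 subregions.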
ALGORITHMS
We tested a multitude of algorithms on the dataset, of which 3 will be explained in the following paragraphs:
We tried training a simple multi-class logistic regression to predict the 25 classes in both the longitude and latitude axis. Unfortunately, due to reasons I am still trying to debug, the probability distribution was heavily skewed to 2 classes: 16 and 17, which happen to span the infamous Tenderloin (the most notoriously dangerous neighborhood in SF).
We also tried training a simple artificial neural network, which didn’t seem to succeed either. This is likely because the network saw multiple instances of the same input with vastly different outputs, which hindered the network from being able to properly perform gradient descent. The issue wasn't with the network architecture; rather, it was with the problem statement: it is simply mathematically improbable to predict a good probability distribution of crime, an occurrence that is largely random.
Lastly, we tried using a categorical Naive Bayes model. This worked well because it makes naive independence assumptions, hence the name, which helped the model get “less confused”. It also relies largely on historical data, which was useful given our historical dataset. The Naive Bayes model ended up predicting a larger, more robust subset of the 25 classes, and thus, we ended up using this model for our predictions.
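In practice one would likely reach for scikit-learn's `CategoricalNB` here; purely to illustrate the counting logic behind a categorical Naive Bayes with Laplace smoothing (the class labels and feature layout below are toy examples, not the project's data), a from-scratch sketch could look like:

```python
from collections import Counter, defaultdict

class TinyCategoricalNB:
    """Toy categorical Naive Bayes: classes are discretized axis cells,
    features are categorical (e.g. month and crime type). Uses Laplace
    smoothing with alpha = 1."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = Counter(y)          # class counts
        self.n = len(y)
        self.counts = defaultdict(Counter)  # (feat_idx, cls) -> value counts
        self.values = defaultdict(set)      # feat_idx -> distinct values seen
        for xi, yi in zip(X, y):
            for j, v in enumerate(xi):
                self.counts[(j, yi)][v] += 1
                self.values[j].add(v)
        return self

    def predict_proba(self, x):
        # P(c | x) proportional to P(c) * product_j P(x_j | c), smoothed
        scores = {}
        for c in self.classes:
            p = self.prior[c] / self.n
            for j, v in enumerate(x):
                num = self.counts[(j, c)][v] + 1
                den = self.prior[c] + len(self.values[j])
                p *= num / den
            scores[c] = p
        z = sum(scores.values())
        return {c: s / z for c, s in scores.items()}
```

Because everything reduces to counts of historical co-occurrences, the model degrades gracefully on noisy data, which matches the authors' observation that it relied largely on historical data and spread mass over more of the 25 classes.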
FIREBASE DEPLOYED CODE EXPLANATION
The entire front-facing portion of the project is developed and hosted with Firebase, which is Google’s app development platform. Firebase was chosen primarily out of convenience. The Firebase service is free if the developer’s application doesn’t take up too much storage or have too much web traffic, and eliminates the need for a separate hosting server by providing its own servers that the developer can directly upload to and have hosted online. Additionally, Firebase also has its own databasing and storage service, making the development of the application much simpler to handle.
Because Firebase largely trivializes the hosting and deployment of the project, most of the actual code in the project is HTML, JS, and CSS stylesheets to supplement them.
The site’s core page is its home page, or the file titled “index.html” in the actual files. The home page consists of two main elements: the main fieldset containing most of the UI as well as the map itself. The right half of the fieldset contains a small menu/navigation bar.
FIELDSET
The fieldset HTML object contains two main parts. The left half of the fieldset contains two selections, a button, and a label for the data being displayed on the map. The two selections allow the user to choose a month and a crime type, and the button on the right labelled “go” will take the user to another page that contains a map for that information. These elements are all fairly simple in the actual source code. The selections are merely select objects with hard-coded values, and the “go” button links to a function labelled buttonreact() that takes the user to the page they have selected.
The navigation bar has four buttons, labeled “about,” “ethics,” “michael devlogs,” and “anant devlogs.” The about page will take the user to this location, where they will be able to read about the purpose, inspiration, and technology behind the project. The ethics page will take the user to a short article/blog post written by Michael Yang regarding his considerations of the ethics of predicting locations for crime. The two devlogs pages will take the user to basic pages with links that lead to pages with texts regarding both Michael and Anant’s progression and work process.
The fieldset has some simple css stylesheets that set fonts as well as customize the buttons. The css files were all written from scratch (not taken from online), and, as such, are incredibly simple.
MAP
The map is generated through the Google Maps JavaScript API. The API provides not only the generation of the map itself, but also Latitude/Longitude objects (google.maps.LatLng) as well as functions that allow for the generation of the heatmap.
The map itself is a div object titled “map” in the html code, and the initMap() function is what initializes it when the page is loading. The initMap() function contains two main parts: an implicit declaration of the map variable, as well as the implicit declaration of the heatmap variable. The map variable is a reference to the map div object, and comes with preset values that enable certain UI functions or display settings. The map is initialized with a zoom of 13 (which is roughly enough to see San Francisco in its entirety), all UI functions enabled (including things like the ability to rotate the screen, zoom, etc.) except for Google Street View, and with satellite view (which is changeable by the user). The heatmap variable is used to initialize the heatmaplayer object (google.maps.visualization.HeatmapLayer). In initializing the heatmap, the heatmap is given a set of points, a map that it will be placed over (in this case the map variable initialized right above it), and a radius for each point. The set of points is taken from a separate function named locations(), and once that function is run, the map is generated on the page.
The locations() function provides the heatmap with an array of google WeightedLocation objects (google.maps.WeightedLocation). The locations() function has a nested loop of jquery calls. The calls are to a json file titled “resultsfinal.json,” which contains all the points produced by the machine learning algorithm. The outer loop is a $.getJSON function, and the inner loop is an $.each loop. The $.getJSON function produces an array of all the elements in the JSON, and the $.each function selects both the month and classification for the nodes that will eventually be mapped. The $.each’s input is an array formatted as: month[monthnumber].CRIMECLASSIFICATION, which is different for each map page. The loop will then generate a new LatLng object with each point, and push them into an array that is ultimately returned. To make sure that the json file is read and that the map is properly initialized with the heatmap, the jqueries are set to not be asynchronous ($.ajaxSetup({async:false});), this makes sure that the locations() function runs completely and returns a completed array before the map is initialized.
JAVA/PARSING EXPLANATION
The project contains a few simple Java files. The Java files themselves are not utilized in the front facing portion of the project. However, they are still vital to the project. The Java files are simple processes (they each have their own main function and are independent of one another) that parsed data and generated the html files for each map. The Java files themselves can be found in Github, or in Michael’s Day 1, 4, and 5 logs (https://cupertinohacks-cc57d.web.app/maps/michaellogs.html).
The first file (day 1 log) is used to parse the original full dataset csv file into a new csv file that cuts out all information irrelevant to the machine learning step. It takes the original csv and breaks it into a new, four-columned csv containing month, type, latitude, and longitude in that order. More details can be found in the comments of the actual code.
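The original parser is a Java file; a rough Python equivalent of the same four-column reduction, assuming the Kaggle SF-crime column names ("Dates", "Category", "Y" for latitude, "X" for longitude — the actual headers are not quoted here), might look like:

```python
import csv

def slim_rows(in_path: str, out_path: str) -> None:
    """Reduce the full dataset CSV to (month, type, latitude, longitude),
    mirroring what the first Java file does. The column names used
    ("Dates", "Category", "Y", "X") are an assumption based on the
    Kaggle SF-crime dataset."""
    with open(in_path, newline="") as fin, open(out_path, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        writer = csv.writer(fout)
        for row in reader:
            month = row["Dates"].split("-")[1]  # "2015-05-13 ..." -> "05"
            writer.writerow([month, row["Category"], row["Y"], row["X"]])
```

The comments in the actual Java source remain the authoritative description of what that step does.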
The second file (day 4 log) is an extremely simple process used to remove all the spaces in the json file with the completed data. The machine learning code produced a json formatted as a 3 dimensional array (array[month][type][point]), and, for the sake of convenience in formatting the JS code that would read the JSON, all spaces in the JSON elements were removed. More details can be found in the comments of the actual code.
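As an aside, in Python this space-stripping pass would not need to be hand-rolled at all: `json.dumps` already emits space-free JSON when given compact separators, which is a common trick for exactly this situation:

```python
import json

# Toy data in the same 3-dimensional shape described above:
# array[month][type][point]. The values here are placeholders.
data = [[[0.1, 0.2], [0.3]]]

# separators=(",", ":") drops the default spaces after "," and ":"
compact = json.dumps(data, separators=(",", ":"))
# compact contains no spaces, so the JS reader needs no special handling
```

This reproduces the effect of the second Java file in one call, at the point where the JSON is first written.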
The third file (day 5 log) copies a template html file and prints multiple versions of it (one for each month/type) to the website directory. It simply copies a template line by line, and makes alterations in the locations() function as well as the displayed text in the fieldset to match the data being displayed. More details can be found in the comments of the actual code.
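The copy-and-substitute idea behind that third file can be sketched as follows. The placeholder tokens (`{{MONTH}}`, `{{TYPE}}`) and the output filename pattern are invented for this sketch; the actual Java file edits specific lines of the template rather than using tokens:

```python
def render_pages(template_path: str, out_dir: str, months, crime_types) -> None:
    """Emit one HTML page per (month, crime type) pair by substituting
    placeholder tokens in a template. Token names and the filename
    pattern are assumptions for illustration only."""
    with open(template_path) as f:
        template = f.read()
    for m in months:
        for t in crime_types:
            page = (template.replace("{{MONTH}}", str(m))
                            .replace("{{TYPE}}", t))
            name = f"{out_dir}/map_{m}_{t.lower()}.html"
            with open(name, "w") as f:
                f.write(page)
```

Generating the per-map pages ahead of time like this keeps the deployed site fully static, which fits the Firebase hosting setup described earlier.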
CHALLENGES
All progress and challenges faced are documented in the logs section of the website, at these two links:
https://cupertinohacks-cc57d.web.app/maps/michaellogs.html
https://cupertinohacks-cc57d.web.app/maps/anantlogs.html
ABOUT THE TEAM/PERSONAL GROWTHS
MICHAEL YANG:
Hi! I’m a rising senior at Lynbrook High School with a strong interest in computer science, primarily full stack programming, which I hope to study at university and pursue as a career.
The idea of a project revolving around the display of San Francisco crime has interested me for over a year. I had initially experimented with the same dataset in a different hackathon. However, the code written back then was wildly different in purpose, quality, and general structure. That project had no online hosting, no ML-inferred data (we just displayed historical data), and displayed the top 100 overall locations where all types of crime occurred, something very different from the project here. I also didn't write any of the front-end code for that hackathon; a teammate who has since graduated from high school did.
The project this report depicts is something that I consider a vast improvement and a testament to my coding progress. I've rebuilt the entire site with completely new code, switched the mapping API from HERE.com to Google Maps, and hosted the site on Google's Firebase platform. Although it was never used in the final project, I learned a great deal about working with Firebase's Realtime Database. This project, while far from perfect, was somehow stressful yet also engaging and exciting to work on, and the experience gained from it was absolutely meaningful.
I admit that the front-end code has flaws. It's not as pretty as I'd like it to be. I claimed to want to become a full stack programmer, yet the front end is easily not as good-looking as the modern standard for websites. Although I did code it all from scratch and without a template, that's not really an excuse. Other parts of the scripts are a little sloppy, and there were some huge issues with the database that led me to scrap its use in the final project. None of the code is close to the modern professional standard.
But for me, this is in many ways still just the beginning. At the end of the day, it was just five days of programming (I only really coded from Monday to Friday), and not even a full 40 hours of actual work. I look forward to being able to do better in the future.
ANANT BHATIA:
Hello, world! I am Anant Bhatia, a rising senior at Lynbrook High School with a strong interest in computer science and mathematics, especially in theoretical machine learning and vision.
I have been exploring machine learning for the past year and a half, and have completed numerous projects involving neural networks. That being said, it has been a while since I pursued a project with statistical machine learning, and I viewed this project as a chance to hone my skills in that area.
I think the most important thing I learned through this project is how to debug and understand statistical models. I used chi-squared analysis and multivariate linearity tests for data exploration, and worked on formatting data to make it easy to visualize. I also learned to look at the mathematics behind each model and the assumptions it makes (such as the naive independence assumption in Naive Bayes, or the Gaussian-distribution assumptions in linear models) to debug low accuracy rates in the model's predictions. Overall, I learned a lot about the statistical theory behind classical machine learning models and the resulting methods for debugging them.
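As a rough illustration of the kind of data-exploration check mentioned above, here is a minimal chi-squared statistic for a 2x2 contingency table in plain Python (the counts are made up; the project may well have used a library routine instead):

```python
# Chi-squared statistic for a contingency table: sum of
# (observed - expected)^2 / expected over every cell, where the expected
# count assumes the row and column variables are independent.
def chi_squared(table):
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = row_tot[i] * col_tot[j] / n
            stat += (obs - exp) ** 2 / exp
    return stat

print(round(chi_squared([[10, 20], [20, 10]]), 2))  # 6.67
```

A large statistic relative to the chi-squared critical value suggests the two variables are not independent, which is exactly the kind of signal useful when deciding which features matter.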
What's next for Visualizing Predicted San Francisco Crime
Potentially, heatmaps will be replaced by markers with linked info windows: clicking a point would open a window with details about that location.
Built With
css
firebase
google-maps
html5
intellij-idea
java
javascript
kaggle
python
webstorm
Try it out
cupertinohacks-cc57d.web.app
github.com | Visualizing Predicted San Francisco Crime | a website that displays data regarding where certain crimes are most likely to occur during certain months in San Francisco | ['M Yang'] | ['Raspberry Pi'] | ['css', 'firebase', 'google-maps', 'html5', 'intellij-idea', 'java', 'javascript', 'kaggle', 'python', 'webstorm'] | 4 |
10,355 | https://devpost.com/software/vaple-q8sj6d | logo
login page
Inspiration
The design for our website was inspired by Strava, but the concept came from us: we felt the need for an easy way for people and organizations to find volunteers. Vaple is our answer to that problem.
Challenges
All of us were new to web design, except for one member who was already familiar with JavaScript and CSS. We had to spend the first two days studying Java Servlets, Maven, and Apache Tomcat to see how they work and interact with each other.
What it is
Vaple is an innovative solution for finding volunteers for charities, public activities, and educational workshops. Vaple allows anyone, anywhere to get started helping out in their community, getting more people than ever involved.
Please check out our website at
www.vaple.net
!
Built With
css
java
javascript
Try it out
vaple.net | Vaple | A Social Media app to compare yourself with others and see different volunteering opportunities! | ['Aryan Garg', 'Samarjit Singh'] | [] | ['css', 'java', 'javascript'] | 5 |
10,355 | https://devpost.com/software/hack-cupertino | Know Your Nation
Presenting Know Your Nation! This website is the ideal place to educate immigrants and their families about the fundamental rights and privileges guaranteed to them under the United States Constitution. It covers topics such as the branches of our government and the rights and duties of citizens. While targeted towards immigrants, Know Your Nation is a great resource for anyone craving to learn!
Built With
bootstrap
css
html
javascript
jquery
Try it out
kaushikmuthukrishnan.github.io | Know Your Nation | Source code for the Know Your Nation website | [] | [] | ['bootstrap', 'css', 'html', 'javascript', 'jquery'] | 6 |
10,355 | https://devpost.com/software/votecupertino | Inspiration
Since elections are coming up very soon, it's important that we do everything we can to make sure the country is represented. Even though many public figures are vocal about the significance of voting, there are still obstacles that keep people from the polls. One of these is the logistical aspect of finding somewhere to cast a ballot. By making it convenient to find a spot to vote, I hope to encourage more people to make their opinions heard, which ultimately helps uphold democracy when everybody is able to contribute to a decision.
What it does
It prompts the user for their home address, and then gives them five voting addresses. Below these addresses is a set of distances corresponding to the routes from the user's home address to each of the five voting centers. Additionally, there is an embedded map with markers for each voting spot.
How I built it
I built it with simple HTML and the Google Maps Services API, implemented using JavaScript. With the API, I accessed directions and route data, which I used to find the distances from the user's address to each of the five voting centers.
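The same lookup could also be done server-side. The sketch below only builds a Google Distance Matrix API request URL in Python; the key and addresses are placeholders, and the project itself used the Maps JavaScript API in the browser:

```python
from urllib.parse import urlencode

# Build a Distance Matrix API request for one origin and many destinations.
# "YOUR_API_KEY" is a placeholder, not a working key.
def distance_matrix_url(origin, destinations, key="YOUR_API_KEY"):
    params = {"origins": origin,
              "destinations": "|".join(destinations),  # pipe-separated list
              "key": key}
    return ("https://maps.googleapis.com/maps/api/distancematrix/json?"
            + urlencode(params))

url = distance_matrix_url("Cupertino, CA", ["Center A", "Center B"])
print(url.startswith("https://maps.googleapis.com"))  # True
```

Fetching that URL (with a real key) returns JSON rows whose elements carry the distance for each origin/destination pair.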
Challenges I ran into
Before starting the project, I was unfamiliar with web development and the programming languages necessary to complete the project. Aside from the learning process, there were obviously many troubleshooting moments where I had to divert slightly from my original plans for the website.
Accomplishments that I'm proud of
I'm proud of working my idea into a functional piece of software in such a short time. This being my first significant programming project, I had a great learning experience for troubleshooting and problem solving. Specifically, I'm happy that I was able to overcome discouragement from countless failed attempts at completing this.
What I learned
I learned (and experienced firsthand) the significance of persistence, and now understand that reality can sometimes stray from your vision and theory, and it's important to be able to adapt. As for programming, I learned about web development, front-end design skills, and using API's.
What's next for VoteCupertino
Initially, I hoped to sort through all of the distances and find the shortest one, but I ran into a roadblock trying to implement this feature. So I definitely hope to keep improving this project until it fulfills what I originally planned. Additionally, I hope to add some stylistic effects to the website in the future.
Built With
api
Try it out
github.com | VoteCupertino | An initiative to increase voter turnout for Cupertino residents, making our voice heard! | ['Jackson Goenawan'] | [] | ['api'] | 7 |
10,355 | https://devpost.com/software/no-test-no-problem | COVID Positive ID
COVID Negative ID
COVID-19 Positive
COVID-19 Negative
Convolutional Heat Map
Inspiration
With COVID-19 tests being carefully rationed and supplies scarce, patients may not have access to a traditional test. Our software can diagnose a patient purely from a CT scan, eliminating the need for single-use tests. We used the COVID-19 Lung CT Scans dataset by LuisBlanche on Kaggle.
What it does
Our web app has a form for submitting patient data and uploading a CT scan image. We pass the pixel data to our server, which runs several TensorFlow models, take the average confidence of all the models, and return the prediction to the browser. You can test it with the CT scan images in the Devpost gallery.
How I built it
We built 8 deep learning models with TensorFlow and Keras that use a convolutional neural network architecture and were trained with K-fold cross-validation, in order to make the best use of a limited dataset. Our ensemble achieved nearly 90% accuracy, allowing hospitals to use this as a diagnostic tool when resources are limited.
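The confidence-averaging step can be sketched as follows; the per-model probabilities below are made-up stand-ins for the eight Keras models' outputs, and the 0.5 threshold is an assumption:

```python
# Average each model's P(COVID-positive) and apply a decision threshold.
# Real inputs would come from model.predict() on the uploaded CT scan.
def ensemble_confidence(model_probs, threshold=0.5):
    avg = sum(model_probs) / len(model_probs)
    return avg, ("positive" if avg >= threshold else "negative")

avg, label = ensemble_confidence(
    [0.91, 0.87, 0.95, 0.78, 0.88, 0.93, 0.84, 0.90])
print(label)  # positive
```

Averaging independent folds like this tends to smooth out the variance of any single model trained on a small dataset, which is the point of the K-fold setup.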
Challenges I ran into
Over the course of this hackathon we created a model which achieves nearly 90% accuracy. One issue we had was not having a powerful enough processing unit to train the model from the start of the competition; we eventually used an NVIDIA V100 GPU on Google Cloud Platform. Given a better processing unit from the start and more time, we would have been able to achieve greater accuracy, but we still managed ~90%.
Accomplishments that I'm proud of
We used an NVIDIA V100 Graphics Processing Unit on Google Cloud Platform to train our models. We were also able to finish this entire project in 24 hours.
What I learned
During this hackathon we learned to use Django to connect the website, where a user uploads a CT scan, to the back-end model that predicts whether a patient has Covid-19 or not.
What's next for No Test No Problem
We plan to add a database structure to hold patient and prediction data. We hope that this functionality will make our app more appealing to healthcare professionals.
Built With
css3
django
google-cloud
html5
jquery
keras
python
tensorflow
Try it out
github.com
www.notestnoproblem.live | No Test No Problem | We made a web app that utilizes machine learning to classify CT scans of lungs for COVID-19 and store patient data. This might be the difference between life and death if you cannot get a real test | ['Mohit Chhaya', 'Kabir Pathak', 'Pranish Pantha', 'Maanav Singh', 'Sachet Patil'] | ['Best Business Potential'] | ['css3', 'django', 'google-cloud', 'html5', 'jquery', 'keras', 'python', 'tensorflow'] | 8 |
10,355 | https://devpost.com/software/link-local | Login Page
Register Page
Home Page
Saved Location Page
Details Page
Extra Info Page
Inspiration
The Covid-19 pandemic left many businesses healthy and accessible, but local family-owned businesses were hurt badly, so we wanted to shed light on them.
What it does
It takes the user's location and shows all local restaurants within roughly a 50-mile radius. Users can save certain locations, and businesses can update information about themselves on the spot!
How I built it
Django, HTML, and CSS were used to create the framework. Python was used to create user accounts, save a user's locations, and build the Login, Register, and Add Extra Information pages. The Google Maps JavaScript API was used to display the map, and the Places API to show the local restaurants.
Challenges I ran into
Accessing one part of the location object triggered changes to other parts of it, which caused a bunch of errors.
Accomplishments that I'm proud of
We allow businesses to update information on the spot.
What I learned
The Google Maps JavaScript API is very accessible in many aspects.
What's next for Link Local
Generalizing local restaurants to local businesses and adding more spots on the API.
Built With
css3
django
html5
javascript
maps-javascript-api
places-api
python | Link Local | A way for people to pinpoint local business and save their favorite ones during the Covid-19 Pandemic. Businesses themselves can update their own information on the spot as well. | ['Kush Gogia', 'Agustya Chamarthy'] | [] | ['css3', 'django', 'html5', 'javascript', 'maps-javascript-api', 'places-api', 'python'] | 9 |
10,355 | https://devpost.com/software/preventing-car-crashes-with-weather-data | Inspiration
Around 6 million people get into car crashes every year in the United States, and 20 percent (1.2 million) of those crashes are weather related. Many of those 1.2 million could be avoided with just a proper warning. Sending those warnings is the goal of this software.
What it does
It takes data from previous crashes, from the beginning of 2016 to the middle of 2019, and compares it with live weather data to determine whether there is a high risk in driving now. Once it detects a risk, it sends an email to the user telling them exactly where not to drive.
How I built it
I picked up a free dataset of previous crashes (the same one listed above) and loaded it into a Jupyter notebook. I used that data to train a scikit-learn decision tree to detect bad weather. I then use OpenWeatherMap, an API that provides live weather data for free, to get the current weather and compare it to the historical weather to see if they are similar. If they are very similar, I use SMTP to send an email to the user. All of this loops once every minute.
Challenges I ran into
Most of the problems I had were in the last part of my program, the emailing. At first, I couldn't establish a connection with the Gmail servers because my router was blocking the connection. I quickly troubleshot that and could send emails. But when I ran the program and manually gave it bad-weather inputs, it kept looping, spamming my inbox full of "don't drive now". I fixed this by adding a safety that only triggers once per spell of bad weather.
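The once-per-spell guard can be sketched as a small piece of state carried between loop iterations; the OpenWeatherMap and smtplib calls are omitted here, so this only shows the decision logic:

```python
# Decide whether to send an email this minute. An email goes out only on
# the first risky minute of a spell; the flag resets when weather clears.
def should_alert(risky_now, already_alerted):
    send = risky_now and not already_alerted
    return send, risky_now  # second value is the new already_alerted flag

state = False
sent = []
for risky in [False, True, True, False, True]:  # simulated minutes
    send, state = should_alert(risky, state)
    sent.append(send)
print(sent)  # [False, True, False, False, True]
```

Two back-to-back risky minutes produce one email, and a fresh spell after a clear minute produces another.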
Another big challenge was the time limit. With more time, I could have turned it into a subscription service: instead of taking one email address, I could have kept a mailing list to help the not-so-tech-savvy.
Accomplishments that I'm proud of
This is my first time using SMTP and my first time using an external API
What I learned
As explained earlier, I learned how to use SMTP and how to scrape data from the internet.
What's next for Preventing Car Crashes with Weather Data
If I were to update it, I would probably add a subscription service and a mailing list instead of having everyone run this locally on their own machines.
Built With
gmail
openweathermap
pandas
python
scikit-learn
sklearn
smtp
Try it out
drive.google.com
github.com | Preventing Car Crashes with Weather Data | Making driving safer for everyone | ['monesh pon'] | [] | ['gmail', 'openweathermap', 'pandas', 'python', 'scikit-learn', 'sklearn', 'smtp'] | 10 |
10,355 | https://devpost.com/software/greencycle-aql54z | GreenCycle
Inspiration
The inspiration for GreenCycle came from reading about how many problems bad recycling can cause. From there, we realized the potential for recycling and reuse in our community.
How we built it
We built this project using Swift in Xcode and the CreateML developer tool.
What it Does
GreenCycle is a platform with a wide array of capabilities to help users reuse and recycle better. It offers a machine-learning feature, built with AVFoundation and CreateML, that can both tell the user which bin an item goes in and explain the types of items that belong in each bin. Further, GreenCycle allows users to post unused items on the app, where others can contact them about those items. Using CoreLocation and MapKit, GreenCycle also shows users recycling centers near their current location. This feature lets users schedule a pickup, which gets added to the driver's route; when the driver is ready to pick up, a notification on the user's phone informs them. Finally, GreenCycle motivates recycling by showing how you can earn money from it and letting users keep track of their goals.
Challenges we faced
We had to learn how to display alerts and send messages from the app. Further, we had to create several machine learning models trained on different images, as earlier models were not identifying the objects well.
Built With
avfoundation
core-location
createml
foundationdb
mapkit
messageui
swift
uikit
vision
xcode
Try it out
github.com | greencycle | An app that helps you reuse and recycle better | ['Sohom Dutta', 'Srinjoy Dutta'] | [] | ['avfoundation', 'core-location', 'createml', 'foundationdb', 'mapkit', 'messageui', 'swift', 'uikit', 'vision', 'xcode'] | 11 |
10,355 | https://devpost.com/software/business-review-based-off-covid-19-compliance | Inspiration - We got inspired by the Yelp website.
What it does - It divides the businesses into different categories, and users can select which business they want to review. The review then gets stored in a CSV file. We wanted to put the data from the CSV file back onto the page, but we couldn't make the code work.
How we built it - We used mainly HTML, with some CSS and a little JavaScript.
Challenges we ran into - The main challenge was getting the HTML form data into a CSV file and putting that data back into the HTML page.
Accomplishments that we're proud of - We are proud that we made the HTML form data get stored in a CSV file.
What we learned - We learned many more techniques in CSS and HTML, and a lot about troubleshooting.
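Server-side, the storage half of that challenge could look like the minimal Python sketch below; the field names are hypothetical, and the project itself was plain HTML/CSS/JavaScript, which cannot write files without some backend like this:

```python
import csv
import io

# Append one review as a CSV row (works on any writable text file handle).
def append_review(fh, business, rating, text):
    csv.writer(fh).writerow([business, rating, text])

# Read every stored review back as a list of rows.
def load_reviews(fh):
    return list(csv.reader(fh))

buf = io.StringIO()  # stands in for an opened reviews.csv file
append_review(buf, "Cafe A", "5", "Great mask policy")
buf.seek(0)
print(load_reviews(buf))  # [['Cafe A', '5', 'Great mask policy']]
```

Rendering `load_reviews` output back into the page is then a templating problem rather than a file-format one.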
What's next for Business Review Based off COVID-19 Compliance - Make it more realistic and more like a Yelp-type website.
Built With
css
html
javascript | Business Review Based off COVID-19 Compliance | We made an effort to make a website based on how well businesses are following COVID-19 guidelines so it could help people be aware on how well local businesses are doing with these difficult times. | ['Mohit Patil', 'M P'] | [] | ['css', 'html', 'javascript'] | 12 |
10,355 | https://devpost.com/software/civica-analytics | Civica Analytics
Civica website scrollthrough
Civica sidebar demo
Civica data visualization demo
Civica data links demo
📈What it does
Civica is a data visualization website, focusing specifically on voter data. On our website, we have 6 different visualizations displaying data and analysis from both Democratic and Republican parties.
📈Inspiration
One of our teammates recently read in her sociology textbook that people from the working class are more likely to be Democrats, while the opposite holds for the upper class. This made her think: well, what other correlations like this are out there? This hackathon was the perfect opportunity for us to answer that question. Since the theme of this hackathon is civic engagement, we wanted to build a website that promoted political awareness, especially because of the upcoming election in November. Awareness is the first step to active engagement, and using graphs to visualize political data encourages voters to get involved in their community by voting.
📈How we built it
• Used HTML, CSS, and JS to construct the basic framework of the website as well as enhance aesthetically
• We used Canvas JS to construct the interactive charts
• The data was compiled from several esteemed research centers and government sources, including the U.S. Census Bureau, Pew Research Center, U.S. Elections Project, Kaiser Family Foundation, and Gallup News.
📈Accomplishments that we're proud of, Challenges we ran into
We believe that our greatest accomplishment and challenge we overcame is how we were able to turn around our project and complete it in a matter of mere days. Originally, we attempted to use React JS, PHP, and MySQLi to build an interactive web app; however, we realized that completely familiarizing ourselves with React and building a fully functioning React app in a few days was unrealistic. Thus, we managed to work together to compile data and use Canvas JS to create seamless charts.
📈What's next for Civica Analytics
• We hope to create a subscribe form in order to consistently publish current data about electorate trends, party affiliations, and demographics. This feature would require a dynamic website, using server-side technologies such as PHP and MySQL instead of just HTML, CSS, and JS.
• Another next step for our project includes adding a map feature, which would allow a visitor to our website to find the closest polling center to either volunteer or vote at. This feature would be incorporated using Google Maps JavaScript API.
• Along with more data visualization and graphs, another step for our project would be adding more resources for visitors of our website. For example, we could incorporate links to other websites with further information about how to vote in the upcoming election, news outlets that provide further updates, important upcoming dates, and more.
• Most importantly, we hope to add more data!
📈Data Sources
https://www.census.gov/data/tables/time-series/demo/popest/2010s-state-total.html
http://www.electproject.org/2018g
https://www.kff.org/other/state-indicator/distribution-by-raceethnicity/?currentTimeframe=0&sortModel=%7B%22colId%22:%22Location%22,%22sort%22:%22asc%22%7D
https://news.gallup.com/poll/247025/democratic-states-exceed-republican-states-four-2018.aspx
Built With
canvas
css
html
javascript
Try it out
civica-analytics.github.io
github.com | Civica Analytics | Civica uses charts compiled of analyzed census data to create a visual model of analyzed data that is easily understandable and accessible to the general public. | ['Kayley S.', 'Natalie Zhou', 'Anahita Hassan'] | [] | ['canvas', 'css', 'html', 'javascript'] | 13 |
10,355 | https://devpost.com/software/feeling-down | Inspiration
Due to the shelter-in-place, many people have to stay at home by themselves. I wanted to make a website for those feeling down, letting them know that we will get through this. People who live alone might feel lonely, and this website shows them that there are others going through the same thing. I wanted to make a place that spreads positivity for those going through a rough time.
What it does
This website contains a slideshow of messages from people stuck at home bringing positivity into this pandemic. It also includes a link so others can put their messages on the website.
How I built it
I built this website using Repl.it. It was my first time using it and it took me a while to figure out how it worked. The website is made with HTML, CSS, and Javascript.
Challenges I ran into
One challenge I ran into was working with Repl.it: in the beginning, I was confused because I did not know how to start making a website there. Another challenge was choosing a topic for this hackathon. I could not think of anything for a while, until I started asking the people around me how they were doing and how they were feeling.
Accomplishments that I'm proud of
I am proud to have made a website on Repl.it. I am also proud that I made a slideshow that works the second you go on the website.
What I learned
I learned how to make a slideshow that works right away instead of having to click something to start it. I also learned how to make a website on Repl.it.
What's next for Feeling Down?
I hope that this website can help cheer others up and bring them the positivity that they need.
Built With
css
html
javascript
repl.it
Try it out
to-a-brighter-day--erikawu.repl.co | To a Brighter Day | Look at this website to brighten your day | ['Erika Wu'] | [] | ['css', 'html', 'javascript', 'repl.it'] | 14 |
10,355 | https://devpost.com/software/covidbot-vyz5d9 | Inspiration
My Inspiration for this project was our current situation in COVID-19, and how we could spread useful information on it
What it does
CovidBot lets the user request Covid case statistics for a certain country.
How I built it
I built it using Twilio's API in Python.
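Stripped of the Twilio and Flask plumbing, the bot's reply logic might look like the sketch below; the country stats here are made-up placeholders, and in the real app this function's return value would be wrapped in a Twilio messaging response:

```python
# Made-up placeholder stats; the real bot would pull live numbers from an API.
STATS = {
    "us": "US: 5,000,000 cases",
    "india": "India: 2,000,000 cases",
}

# Map an incoming SMS body to a reply, tolerating stray whitespace and case.
def reply_for(body: str) -> str:
    country = body.strip().lower()
    return STATS.get(country, "Sorry, I don't have data for that country yet.")

print(reply_for(" US "))  # US: 5,000,000 cases
```

Keeping the routing logic as a plain function like this also makes it testable without sending a single text message.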
Challenges I ran into
I had trouble with getting the messages sent from the user to be recognized
Accomplishments that I'm proud of
I'm proud of creating a bot capable of responding to messages sent by a user
What I learned
I learned how Twilio works and how to use it. I also learned more about Python and how to expose a local web server using ngrok.
What's next for CovidBot
More precise location data such as states, cities, zip code, etc.
Built With
flask
python
twilio | CovidBot | Get Near-Instant Covid-19 Information on Your Phone | ['Pranav Bollineni'] | [] | ['flask', 'python', 'twilio'] | 15 |
10,355 | https://devpost.com/software/tempest-awycgp | Home Page
Storm Dashboard
Predicted Damage Probability Map
Predicted Monetary Damage Map
Embedded Hurricane Map
Hurricane Monetary and Severity Predictions
Upload Before and After Image
Predicted Damage Visualization
Example Damage Visualizations
Inspiration
As millions of people across the nation suffer from the sweeping problems of natural disasters, our team reflected on how we might assist those who lose anything and everything. These storms are responsible for billions of dollars in losses and thousands of lives.
Our team was determined to severely mitigate the losses generated by storms by predicting their costs and impacts. In order to have a civic impact, we wanted to help communities and governments adapt and more effectively respond to weather disasters. We wanted to become civically engaged in our government and community, and we realized providing software to solve massive problems was the best way to do so, especially during quarantine. We hope our solution will bring together the overarching national community of citizens affected by disasters and encourage government-led, crowd-sourced planning to combat these detrimental effects.
Thus, we took a unique approach compared to the common hackathon project.
Instead of creating an application meant for general use, we developed an application specifically for state and city governments. We plan to implement our software as part of a nationwide government plan to promote smarter disaster response and efficient planning. Instead of having a grassroots approach to helping the community, we believe using the government’s platform is the best method of outsourcing our solution. Since governments often utilize outside developers to build applications, we believe our website fills a normally unoccupied niche, and projects like this should be encouraged in the hackathon community.
Thus, we developed Tempest, an application that uses ML to allow governments to prepare for storms and disasters by providing visualizations and predicting important statistics.
What it does
Tempest is a unique progressive web application that lets users and governments predict the outcomes of natural disasters. Using web scraped data, we were able to predict where storms would end up causing most damage and create interactive visualizations to aid the process.
We first developed a tornado model, which can predict the probability that a tornado does severe damage as well as the monetary value of the damage. We trained our model on data from NOAA, which contains tornado data such as wind speeds, duration, and azimuth values. Our model outputs a magnitude probability from 0 to 1, with 0 being no impact and 1 being devastating, and it also predicts the monetary damage from each storm in dollars. We trained our models using AI Fabric from UiPath, allowing us to train them at fast speeds. Our map includes completed tornadoes from Sept. 2019 to July 2020, and we also predicted tornadoes for the upcoming month of August, since data exists for it. We exported all our map data by month from our Python model and fed it into a map visualization we found through insightful documentation. This allows governments to adequately prepare for disasters, speed up recovery, and minimize costs.
Even more dangerous than tornadoes are hurricanes. We embedded a map of upcoming hurricanes from the website LivingAtlas.org, then retrained our tornado model on hurricane data. The model takes information such as wind speeds and temperatures and outputs the magnitude of the hurricane on the Saffir-Simpson Hurricane Wind Scale, which classifies hurricanes from 1 to 5. We displayed the three upcoming hurricanes in the US. Additionally, we output how much monetary damage each upcoming hurricane will cause, along with a satellite image of the storm, allowing residents and local governments to allocate proper funds and shelter themselves as much as possible.
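For reference, the Saffir-Simpson category the model predicts can be computed directly from sustained wind speed; the thresholds below are the published mph cutoffs, and the function is only an illustration, not the team's model:

```python
# Saffir-Simpson Hurricane Wind Scale: category from sustained wind (mph).
# Below 74 mph a storm is not yet hurricane strength (returned as 0).
def saffir_simpson(wind_mph):
    for cat, low in ((5, 157), (4, 130), (3, 111), (2, 96), (1, 74)):
        if wind_mph >= low:
            return cat
    return 0

print([saffir_simpson(w) for w in (60, 80, 100, 120, 140, 160)])
# [0, 1, 2, 3, 4, 5]
```

A model that also sees temperatures and other features can do better than this lookup by predicting the category before peak winds are observed.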
Hurricanes can often produce floods that can ravage and destroy communities. Understanding how floods will cause damage allows communities to rebuild faster, reducing costs and time without a home. Thus, we developed a Style Transfer model that allows city planners to prepare for the aftermath of floods, which can visualize the damage in a location due to a flood. City planners will upload an image of the location before and during the flood, and our algorithm will predict the damage to the location and output a picture of what the damage will look like. The model finds commonalities in the images and keeps outstanding features from the flood image in order to properly display the damage. We deployed a portion of this model on our website to test, as the entire model couldn’t be deployed due to size. With this information at hand, city planners can swiftly respond to floods and prepare for the aftermath of disasters.
How we built it
After numerous hours of wireframing, conceptualizing key features, and outlining tasks, we divided the challenge amongst ourselves: Ishaan developed the UI/UX, Adithya connected the Google Cloud backend and implemented the interactive map features, Ayaan developed our hurricane and flood models, and Viraaj developed the tornado model and implemented and retrained the hurricane model.
We coded the entire app in 6 languages/frameworks: HTML, CSS, JavaScript, R, Python (Python3/iPython), and Flask. We used UiPath for training our algorithm, and Google Cloud and PythonAnywhere for our backend. We developed our interactive maps using HTML and R, and embedded weather websites using web scrapers. We deployed part of our PyTorch model on PythonAnywhere using Flask, and hosted the website through Netlify and GitHub.
In order to collect data for these models, we developed web scrapers for live-updating weather websites. For our home page, we got data from the NOAA. For our hurricane model, we collected historical data from Medium and web scraped upcoming data using ArcGIS. For our aftermath algorithm, we were able to deploy a version on PythonAnywhere which takes the two input images and creates an aftermath image. However, since we don't have access to a cloud GPU, creating the image takes a while each time, so we didn't completely deploy it.
Challenges we ran into
The primary challenge that we ran into was developing our geographic models. Since the data was very complex and required cleaning, we weren't sure how to start. Luckily, we were able to do enough EDA to understand how to develop the models and utilize the data. Training these models was also a huge challenge, as it was taking a long time. We luckily found AI Fabric from UiPath, which allowed us to train our models easily in the cloud. While we were not able to deploy our models, as they are too large for the free servers available to us, as long as governments give us images and data, we can give them cost predictions.
Accomplishments we are proud of
We are incredibly proud of how our team found a distinctive yet viable solution to assisting governments in preparing and responding to disasters. We are proud that we were able to develop some of our most advanced models so far. We are extremely proud of developing a solution that has never been previously considered or implemented in this setting.
What we learned
Our team found it incredibly fulfilling to use our Machine Learning knowledge in a way that could effectively assist people who may lose their homes and livelihoods. We are glad that we were able to develop a wide range of predictive and generative models to help a vast range of people. Seeing how we could use our software engineering skills to impact people’s livelihoods was the highlight of our weekend.
From a software perspective, developing geographic models was our main focus this weekend. We learned how to effectively combine web scrapers with machine learning models, and how to use great ML frameworks such as UiPath along with transfer learning. We grew our web development skills and polished our database skills.
What is next for Tempest
We believe that our application would be best implemented on a local and state government level. These governments are in charge of dealing with hurricanes, floods, and tornados, and we believe that with the information they acquire through our models, they can take steps to respond to disasters faster and more effectively.
In terms of our application, we would love to deploy our models on the web for automatic integration. Given that our current situation prevents us from buying a web server capable of running the aftermath model frequently, we look forward to acquiring a web server that can process high-level computation, which would automate our services. Lastly, we would like to refine our algorithms to incorporate more factors from hurricanes to more accurately predict damages.
Our Name
Tempest is a creative synonym for wind-related storms.
Built With
css
google-cloud
html
javascript
python
r
Try it out
tempestai.tech
github.com | Tempest | Using ML to prepare for storms and disasters | ['Adithya Peruvemba', 'Ayaan Haque', 'Ishaan Bhandari', 'Viraaj Reddi'] | ['1st Place Overall', 'Third Place', '1st Place'] | ['css', 'google-cloud', 'html', 'javascript', 'python', 'r'] | 16 |
10,355 | https://devpost.com/software/home-security-form | code part 1
code part 2
code part 3
Inspiration
I saw a guy next to his house with the police and I heard them talk about a robbery.
What it does
It gives people ideas to improve their home security
How I built it
I used HTML to create a form and links
Challenges I ran into
wrong links
Accomplishments that I'm proud of
actually making a proper form.
What I learned
HTML and CSS
What's next for Home Security Form
We could make the submit button report a score and suggest what to improve.
Built With
css
html
repl.it
Try it out
wastefulsquareredundancy--rishabhsahoo.repl.co | Home Security Form | A form that gives ideas to make your home safe. | ['rishabh sahoo', 'Veeral Shroff'] | [] | ['css', 'html', 'repl.it'] | 17 |
10,355 | https://devpost.com/software/help2shop | An App that allows users to post their shopping lists and volunteers can accept people's lists and deliver their groceries to their home
Inspiration
We were inspired by quarantine to create an app that helps people who are high-risk COVID-19 patients or people who live with high-risk patients to be able to get their groceries without exposing themselves in high population stores and supermarkets.
What it does
The app allows users to post their grocery lists publicly, allowing neighbors and people who live close to them to pick up the groceries along with their own groceries, reducing the number of people in stores and supermarkets, therefore limiting the spread of the virus to some extent.
How I built it
We built the app with Flutter, using the Dart language. Although the demonstration is on an Android device, the Flutter codebase allows us to export the same app to an iOS device with no code alterations.
Challenges I ran into
We had several issues learning the language from scratch and with the visual formatting of the application, but we were able to iron out most of them to make a functional app in time for submission.
Accomplishments that I'm proud of
We had very little time to learn the entirety of the Dart language, so by strategically using YouTube and reference code libraries, we learned only what we needed in order to make the app functional, eliminating unnecessary libraries and functions that were not crucial to, or used in, our application.
What I learned
We learned pretty much all of the basic Flutter and Dart functions on our path to building this application. We also learned the ins and outs of application formatting and how much effort it actually takes to make an app look appealing and user-friendly.
What's next for Help2Shop
Create a user base, get companies to start adding their products to the catalogs so that users can add those products to their personal shopping lists.
Built With
dart
flutter
Try it out
github.com | Help2Shop | An app that allows users to post their shopping lists and lets volunteers pick up their groceries for them, limiting the number of people who have to leave the safety of their home during quarantine | ['Sreeganesh Siva', 'samhith kakarla', 'Sreegoo Siva'] | [] | ['dart', 'flutter'] | 18 |
10,355 | https://devpost.com/software/safercityvision | Inspiration
While walking around our neighborhoods, we noticed some safety concerns that we want to improve. We searched for ways to share our ideas but could not find an easy way to do so. We decided to create a website that enables anyone to play an active role in improving their city.
What It Does
Safer City Vision will enable cities to identify areas needing improvements. With our new and innovative website anyone can share suggestions to make their community safer. A form to submit requests is located on the top of our home page. Submissions can be viewed on an integrated Google Map or in a list view that can be sorted by most upvotes, downvotes, or most recent suggestions. On the main feed, users can upvote or downvote ideas to indicate those they support. In order to prevent an individual from voting on a submission multiple times, we added an account creation and login system. Users must be logged in to vote on a suggestion. However, we allow suggestions to be submitted without being logged in because we understand that some people would like to help improve their community while remaining anonymous. If a user creates an account before making a suggestion, they can visit the “My Submissions” page to see the upvotes and downvotes their idea received.
When fully completed, we will license this framework to cities so they can tailor this system to their communities. Using our site, cities can gain a better understanding of the needs of their residents and make improvements that will benefit them.
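The one-vote-per-submission rule described above boils down to a uniqueness constraint on (user, submission) pairs. The site itself is written in PHP, but a minimal Python sketch of the bookkeeping (all names illustrative) looks like this:

```python
# Sketch of one-vote-per-user bookkeeping: a user may vote on each
# submission at most once, and may change (but never stack) their vote.
class VoteBook:
    def __init__(self):
        self.votes = {}  # (user, submission_id) -> +1 or -1

    def cast(self, user, submission_id, value):
        if value not in (+1, -1):
            raise ValueError("vote must be +1 or -1")
        # Re-voting overwrites the old entry instead of adding a new one.
        self.votes[(user, submission_id)] = value

    def score(self, submission_id):
        return sum(v for (u, s), v in self.votes.items()
                   if s == submission_id)

book = VoteBook()
book.cast("alice", 7, +1)
book.cast("alice", 7, +1)   # repeat vote does not stack
book.cast("bob", 7, -1)
print(book.score(7))  # 0
```

Backed by a database, the same rule becomes a unique index on the (user, submission) pair.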
How We Developed It
We used a server hosting service that we uploaded our PHP and HTML code to get our website running. From there, we used Google Maps API to add a map to display all the community suggestions.
Challenges We Faced
At first, it was very difficult for us to begin programming, as we were new to using PHP. However, one of the most prominent challenges that persisted throughout the entire hackathon was decisiveness. With only three people to work on the project, no one was really the leader, so coming to a solid decision instead of merely suggesting ideas was definitely a challenge.
Accomplishments We're Proud Of
We are very proud to have completed this project within the given time frame, but especially given the fact that there were so many "firsts" in this project. This project was a work of passion, and with that comes pride. As this was our first time creating a website, coding in PHP, and participating in a hackathon, we are very proud of our project. With that said, notable smaller accomplishments include integrating Google Maps API, creating a secure account system, sorting submissions by a specified parameter, and using various new data structures.
What We Learned
Through this hackathon, we learned how to write PHP code, make a website, and create a secure account system for our website. We also learned how to integrate Google Maps API, create a sorting system, use HTML forms, server-side programming, and prototype how we can help the community.
What's Next
These cities would have administrator accounts so they can review suggestions and indicate their status. Submissions will be marked as suggested, under review, approved, or completed, and users will be able to follow suggestions to receive updates on their status. We would also like to improve the style of our site.
Built With
google-maps
html
imovie
notepad++
php
veed.io
Try it out
safercityvision.moon-watcher.com | Safer City Vision | This is a new and innovative website that anyone can use to suggest improvements to their city. | ['Salvador Baray', 'Sarah S'] | [] | ['google-maps', 'html', 'imovie', 'notepad++', 'php', 'veed.io'] | 19 |
10,355 | https://devpost.com/software/localgigs | LocalGigs
LocalGig Prototype:
https://www.justinmind.com/usernote/tests/47701328/47743935/47746211/index.html
If you have further questions please contact:
goldenstatewarriors735@gmail.com
Thanks,
Akshay Mahajan, CTO
Built With
justinmindprototyper
Try it out
github.com | LocalGigs | People need help with chores, there is no easy way to connect to local people that need help, and it is hard to make side money. | ['Akshay Mahajan', 'Aaditya Mahajan'] | [] | ['justinmindprototyper'] | 20 |
10,355 | https://devpost.com/software/ice-in-case-of-community-emergency | Inspiration
A recent Federal Emergency Management Agency (FEMA) survey found that nearly 60 percent of American adults have not practiced what to do in a disaster by participating in a disaster drill or preparedness exercise at work, school, or home in the past year. Further, only 39 percent of respondents have developed an emergency plan and discussed it with their household. This is despite the fact that 80 percent of Americans live in counties that have been hit with a weather-related disaster since 2007, as reported by the Washington Post. Additionally, 48% of Americans report having seen at least some news they thought was made up about the recent coronavirus. With dangerously large amounts of false information in the general public regarding how to prepare for the coronavirus (52%), nearly 61% of Americans having not prepared an emergency plan in case of a widespread emergency in their local area, and no basic technologies present to sufficiently provide information to local authorities so people can help one another, I felt the need to develop this app as a tool for both the average household and authorities, so that local authorities can plan better, make better decisions, and provide knowledgeable information in order to save lives. Additionally, I have been personally affected by a similar situation, giving me better insight into the experience.
What it does
I have created a hybrid mobile application with three primary pages, each with its own purpose. On the home page, there are three different feeds that alert the user with reliable information from reliable sources based on the subject criteria. These three information feeds cover the recent COVID-19 pandemic, announcements from the government/city, and the National Weather Service and other information relating to extreme weather. For the second page, the user must enable location features in order to have full access to the app. The app uploads the user's current location to my backend in real time, where it is viewable to local authorities with authentication to the backend. This can be used in many instances: in a fire, to see whether any people are inside; in other natural disasters, to help rescue people based on these given locations; and to alert others around you for help.
Your location on the map will show up to people around you (within a certain radius). The second page also has a search option where the user can input a location and find the estimated number of people at that location (using a self-created algorithm), the risk level (in terms of COVID), the number of people in "help status" near that location, and the number of COVID cases in your state. These features are very helpful during the current pandemic, as essential locations still need to be visited; they help users find the time with the fewest people present and the fewest cases recorded.
On the third page, there are verified resources to help inform you what to do when various kinds of emergencies occur and how to prepare for them in advance.
Overall, these functions will inform users of correct information, how to be prepared, and how to stay informed with reliable information in these categories; help them avoid contact with and contraction of the novel coronavirus; and help them stay safe in natural disasters and similar emergencies by sending data to nearby users and local authorities for help.
How I built it
I built this app using the hybrid application development platform React Native, with Expo for faster testing and a better managed workflow. I wrote the entire app in JavaScript. As for the main features: for the information feeds, I used a News API with several endpoints to gather information on each topic from reliable sources. For the search page, I used react-native-maps to gather the user's accurate location (longitude, latitude, geolocation) and create the map UI. Additionally, I used Google Firebase as my backend to store this data in a database where it is accessible to local authorities in real time. For the estimated people at a location and the risk level, I used an algorithm I created from multiple data points, such as the population of the residing city, the number of users recorded in the database, and the density of the city. For the last page, I used individual reliable sources to provide preparation resources for emergency situations.
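The estimation algorithm itself isn't published in detail, so this is only a hedged sketch of how city population, recorded users, and density might combine into a crowd estimate and a risk level; every weight and threshold below is a made-up assumption, not the app's real formula:

```python
# Hypothetical sketch of a crowd/risk estimate from the signals named
# in the writeup. All coefficients and thresholds are assumptions.

def estimate_crowd(users_at_location, users_in_city, city_population):
    # Assumed: registered app users are a sample of the city, so scale
    # the local sample up by the population-to-registered-users ratio.
    if users_in_city == 0:
        return 0
    return round(users_at_location * city_population / users_in_city)

def risk_level(estimated_crowd, density_per_km2, covid_cases_in_state):
    # Weighted score over crowd size, density, and statewide cases.
    score = (estimated_crowd * 0.1
             + density_per_km2 * 0.005
             + covid_cases_in_state * 0.001)
    if score < 25:
        return "low"
    if score < 75:
        return "medium"
    return "high"

crowd = estimate_crowd(users_at_location=3, users_in_city=1_200,
                       city_population=60_000)
risk = risk_level(crowd, density_per_km2=2_000, covid_cases_in_state=30_000)
print(crowd, risk)  # 150 medium
```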
Challenges I ran into
Overall, there were many challenges I faced over the course of this entire project. One of the earlier issues was tracking geolocation: using that data, uploading it to the backend, and then redisplaying it on the map. Although this may not have been visible during the demonstration, I wanted to ensure full functionality. The reason some of the data was not showing was my incorrect way of passing props and setting state, along with the overall scope of the project, which I eventually resolved. Another issue was with the News API feed, which had multiple failed requests and was not pulling through. The cause, I eventually figured out, was incorrect formatting and mapping of the data, which prevented it from loading properly. Additionally, I wasn't assigning keys properly.
Accomplishments that I'm proud of
Some of the accomplishments I am proud of include creating the algorithm to calculate risk level based on several data points, such as the population of the residing city, the number of users recorded in the database in that city, and the density of the residing city. I am also proud of creating an interface that maps location data to a backend, and of implementing my first multi-endpoint API in a React Native app.
What I learned
I learned a lot of new things during this project. I learned how to implement react-native-maps, how to implement basic functionalities such as webview and deep linking (letting me access certain websites for my prep resources tab), and how to troubleshoot under tight time constraints. Overall, the time constraint helped me work more efficiently and prioritize.
What's next for In Case of an Emergency
In the future, I hope to host this algorithm on an API endpoint rather than in the app itself. I would also like to create my own API endpoint for the resources, since I am using a website webview at the moment.
I also want to add more features to my backend, such as letting users tell the app they will be going somewhere so the algorithm adjusts accordingly, and much more user input in general. I look forward to that in the future and hope this can help the overall community in such times.
Built With
algorithm
api
firebase
google
javascript
native
news
node.js
react
react-native
react-native-webview
Try it out
github.com | ICE (In case of Community Emergency) | The ICE App, helping you stay cool during times of emergency | ['Om Joshi'] | ['1st Place Continuation Hack'] | ['algorithm', 'api', 'firebase', 'google', 'javascript', 'native', 'news', 'node.js', 'react', 'react-native', 'react-native-webview'] | 21 |
10,355 | https://devpost.com/software/artificial-insight | COVID Diagnosis
Normal Xray Diagnosis
Melanoma Diagnosis
Logo
We would really appreciate it if you gave our project a like!
Inspiration
For Cupertino, we created a mobile application dedicated toward diagnosing specific diseases and ailments, like skin cancer, pneumonia, and COVID-19. Our app is named Artificial Insight. One has to be aware of his or her own health in order to be civically engaged, as by being health-conscious toward oneself, one will be able to simultaneously protect others. This is especially prominent today with the current worldwide pandemic—citizens must be cautious of every single action they take once they step outside of their house in order to prevent the spread of the virus. This sort of perpetual cautionary action also applies to other contagious diseases, like pneumonia, which spreads via bacteria and viruses. And while skin cancer isn’t contagious, it is hereditary—about 10% of all people who get melanoma have a family history of the disease. Thus, if citizens have an easy way to find out if they have melanoma, they will be able to discern whether or not their close blood relatives have a higher risk of developing it as well, which also ties into the theme of civics. Knowledge is power, and this power can save lives.
Because of this, we decided to create an application that provides easy methods to determine whether one has skin cancer, pneumonia, and/or COVID-19. It is extremely easy for an average citizen to simply take a picture of a mole on their body and use the app to determine whether or not they have signs of skin cancer. Similarly, x-ray machines are readily available in community physician offices, urgent care clinics, and hospital emergency departments, and they can provide images for diagnosis rapidly. According to the UCLA Department of Radiology, chest imaging plays a very important role in the early diagnosis and the treatment planning for patients with suspected or confirmed COVID-19 or pneumonia chest infections. Thus, we hope that our mobile application will allow citizens to be able to diagnose certain diseases early so that they will be able to obtain the treatment they need more rapidly.
What it does
Artificial Insight accurately detects cases of melanoma, a form of skin cancer, through pictures of colored pigments in skin that often resemble moles. Our app also accurately distinguishes between chest x-rays that are either healthy, have pneumonia, or have the coronavirus. The user can choose to either select a photograph of their skin or an image of a chest x-ray. On the left, they can upload a picture of a mole on their skin and the app will tell them whether they have signs of melanoma. On the right, the user can upload an image of a chest x-ray, and the app will tell them whether the x-ray provides indication of pneumonia or COVID-19. The app has an extremely rapid diagnosis response for optimal user experience.
How I built it
We created Artificial Insight with Flutter. We trained our models using Machine Learning on many image datasets of melanomas and x-rays until we had functioning image classification models for both. We used TensorFlow Lite to deploy the models in our mobile application. With an intuitive UI, the experience is fluid for the user.
Challenges I ran into
Overall, the experience was challenging. We had lots of difficulties with our models and it took many attempts to get a well trained model. However, we believe we produced a great app that will be extremely useful to anyone who comes across it.
Accomplishments that I'm proud of
We're proud of how we were able to train models and reach accuracy in such a short timeframe.
What I learned
Overall, the process of creating this project was difficult, but fun. Time-consuming, but also enlightening, as we learned a lot about app development and machine-learning while creating our mobile application. Training our models and getting them to work within our app was extremely difficult, and we ran into countless problems, but we managed to finish Artificial Insight within the given timeline. This was the first time we worked with models and machine learning, and we definitely learned a lot.
What's next for Artificial Insight
We not only hope that our application is impressive to you, the judges, but we also hope that it will be a realistic way for people to diagnose diseases in the future.
Built With
dart
flutter
tensorflow
Try it out
github.com | Artificial Insight | Stay health-conscious to be civically engaged. | ['Philip Vu', 'Madhavi Vivek', 'Kathie Huang'] | [] | ['dart', 'flutter', 'tensorflow'] | 22 |
10,355 | https://devpost.com/software/civiccircl | Inspiration
Since we had to self-quarantine, I have had a hard time keeping track of major events happening in Cupertino. I have also had a hard time communicating with my friends and doing things I love, such as playing basketball. This inspired me to look for solutions to fix the problem. My inspiration for creating this app was to solve some problems that I have experienced during quarantine. I also realized that other people may be experiencing the same problems, and I wanted to find a solution that would help not only me, but many people in a similar situation.
What it does
My solution to all these problems was creating an app to allow people in a similar location to communicate with each other and inform them of any events happening in their area. It also allows the community to come together and do things together that they have in common, such as playing sports or topics like music. This even allows people to chat about things that they have in common, like a video game that many people enjoy playing or even interesting discussion topics. This will keep you informed and might teach you something new and interesting every day.
How I built it
I looked around for examples of chat applications that allow people to communicate, with the goal of creating localized communities that bring local people together. I decided to make this an Android app so people can have it on their phones.
Challenges I ran into
I did not know how to develop server code for my application. After struggling with this, I came across a tool called Firebase that allowed me to develop the app without requiring me to write the server code myself. This helped me get over the hurdle and develop the app.
Accomplishments that I'm proud of
I was proud that I could figure out a way to authenticate users and verify them in the app, and then send them messages or announcements. Even though this is not the full-fledged app that I wanted to develop, it gives me a good basis to keep adding features.
What I learned
I learned to develop a complex Java app with Firebase for authentication and database. I also learned about client-server architecture for these kinds of messaging apps.
What's next for CivicCircl
Add ability to verify the address and make sure the users are local.
Ability to create multiple groups depending upon interests and needs
Ability to provide notifications for updates and allow the group to send messages to each other.
Built With
android
firebase
java
Try it out
github.com | CivicCircl | This app allows live updates of any major events that are happening in my area and create groups that are local to the community. | ['Arnav Deshpande'] | [] | ['android', 'firebase', 'java'] | 23 |
10,355 | https://devpost.com/software/charcoal-pgrhf3 | What it does
Allows users to share their experiences about COVID.
How I built it
Flask, SQLite
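As a rough sketch of the Flask + SQLite combination described, here is the storage side using Python's built-in sqlite3 module; the table and column names are hypothetical, and an in-memory database stands in for the real one:

```python
# Minimal sqlite3 sketch of storing and listing shared experiences.
# Table/column names are made up; ":memory:" stands in for a real
# on-disk database file.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE experiences (id INTEGER PRIMARY KEY, author TEXT, story TEXT)"
)

def add_experience(author, story):
    # Parameterized query avoids SQL injection from user-submitted text.
    conn.execute("INSERT INTO experiences (author, story) VALUES (?, ?)",
                 (author, story))
    conn.commit()

def list_experiences():
    return conn.execute(
        "SELECT author, story FROM experiences ORDER BY id DESC"
    ).fetchall()

add_experience("anon", "Started baking bread during lockdown.")
add_experience("ml", "Learned Flask and databases from scratch.")
print(list_experiences()[0])  # newest story first
```

In the web app, each of these functions would sit behind a Flask route handler, and the connection would point at an on-disk database file.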
Challenges I ran into
Learning about flask and databases with little to no prior experience was extremely challenging.
Accomplishments that I'm proud of
Learning about databases, so I can apply that skill to other projects in the future.
What I learned
Flask, SQLite
What's next for Charcoal
Making a mobile app for mobile users.
Built With
flask
python
sqlite
Try it out
charcoalwebapp.herokuapp.com | Charcoal | Sharing your covid experiences | ['Michael Leong'] | [] | ['flask', 'python', 'sqlite'] | 24 |
10,355 | https://devpost.com/software/votevoice | Inspiration
I was inspired to make this app because of my work as part of the Sunnyvale Youth Public Policy Institute. There, I was tasked with helping make Climate Education in California a law. Working to make a law made me feel proud to be a part of a democracy, and I wanted to make a platform that recreates that for people on a daily basis. Imagine making a post about something on a social media application---and then watching it turn into a LAW! That feeling is incredible, along with the fact that the platform allows you to make legal change much more easily than current methods.
What it does
It's essentially a "Reddit for your Rights". You can create a post about anything--an idea or issue--relevant to you or others. These posts can be shared with government officials based on the regions selected (the app generates them for you) and can help show the officials that there is support for your claims with the upvote/downvote system.
How I built it
I used Python as the main language, with the tkinter library to create prototype graphics for the project. All data is stored on a server using MySQL.
Challenges I ran into
It was extremely hard for me to create the looping feed system. Each post is placed in a frame which is packed onto the screen, but storing variables in that frame required many workarounds because of the old fashioned design of tkinter. I also found it hard to create the upvote/downvote system because of difficulties interfacing not only tkinter and python, but also the database.
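Storing per-post variables in dynamically created frames is a classic place to hit Python's late-binding closures: every button callback ends up seeing the loop's last value. A framework-free sketch of the pitfall and the usual default-argument fix (names are illustrative, not from VoteVoice):

```python
# Late-binding closure pitfall when wiring up one callback per post,
# and the standard fix: bind the loop variable as a default argument.
posts = ["post-A", "post-B", "post-C"]

# Buggy: every callback closes over the same 'post' variable,
# which holds the last value once the loop finishes.
buggy = [lambda: post for post in posts]
print([cb() for cb in buggy])   # ['post-C', 'post-C', 'post-C']

# Fixed: 'p=post' snapshots the current value at definition time.
fixed = [lambda p=post: p for post in posts]
print([cb() for cb in fixed])   # ['post-A', 'post-B', 'post-C']
```

In tkinter the same fix applies when wiring buttons inside a loop, e.g. passing `command=lambda p=post: upvote(p)` to each post's Button.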
Because I was using a free database of only 5 MB, keeping users' data within the limit took a lot of effort, which provided some challenges as well.
Accomplishments that I'm proud of
I'm proud that I was able to create the feed generation interfacing tkinter, python, and the database. All three of them have different ways of working together, so getting them to cooperate was a challenge.
I'm also proud that I stuck with tkinter till the end no matter how much it frustrated me. I will definitely be moving on to a different graphical user interface next time around.
What I learned
I learned about special commands interfacing tkinter, the database, and python, and how to make tkinter look (a little bit) more visually pleasing.
I also learned a lot about time management, and about balancing time spent on graphics versus code.
What's next for VoteVoice
I want to add a "Groups" system where you can create groups with people to tackle issues that might require a little more work than just a regular post.
I also want to clean up the UI and create a nested comments system, so the app can host more forum-like discussions. Due to data and time limitations, I was not able to accomplish these yet.
Built With
mysql
python
smtplib
tkinter
Try it out
github.com | VoteVoice | The app is a platform in which users can create posts sharing ideas/issues in varying areas and sharing them with other users and political representatives generated based on the regions affected. | ['sanmathapathi Mathapathi'] | [] | ['mysql', 'python', 'smtplib', 'tkinter'] | 25 |
10,355 | https://devpost.com/software/byte-aid | Zoho desk
Basic, easy to navigate website
Inspiration
I know many people that struggle with helping elderly loved ones with tech support. That's why I created Byte Aid. We as high school students know technology well and know all the tips and tricks to help.
What it does
This is a platform for high school students to help seniors with tech support remotely. We have the ability to communicate via phone, email, or video call.
How I built it
The main interface is a basic, easy to navigate website that I built with HTML and CSS. On the back end is a powerful enterprise support tool called Zoho Desk. It lets all support agents see the cases neatly laid out, and cases can be assigned to different agents depending on the type of problem.
Challenges I ran into
This was my first time actually building a website, and I had to learn and figure a lot of stuff out. I already had some basic HTML knowledge, but not much else.
Accomplishments that I'm proud of
I built this entire website from scratch and I learned how to harness the power of Zoho's tools.
What I learned
I learned to put my basic HTML knowledge all together and to use CSS to format my website.
What's next for Byte Aid
Byte Aid will go on to be a widely used tool. Seniors will come and submit tickets and we will help them. Byte Aid will be recruiting high school students to come help with tech support. This is a great volunteer opportunity and way to help the community.
Built With
css
ecowebhostinguk
html
zohodesk
zohoforms
Try it out
www.byteaid.me | Byte Aid | Byte Aid is a group of high school students helping seniors with tech support remotely. We help with many common issues such as internet problems and uncommon issues such as storage space. | ['Rohan Vittal'] | [] | ['css', 'ecowebhostinguk', 'html', 'zohodesk', 'zohoforms'] | 26 |
10,355 | https://devpost.com/software/grow-me | Our Logo
Website
Plants Info Page
Plants Info Page
Plant Information
Plant Information
Plant Information Mobile View
Virtual Garden
Garden Log
Inspiration
Earth’s population is increasing, but its land is not. We decided to create a gardening app to help anyone create a sustainable farm in their very own home while supporting local farms.
What it does
Whether you are looking to escape the daily stresses of life or to become a master grower, Grow Me is for you. With information about 400,000 plants, you can grow virtually anything in your backyard! Grow Me helps you plan and track your garden efficiently and effectively with our virtual garden planner and daily reminders to water your plants.
How I built it
The front-end was built with HTML, CSS, React, and TypeScript. We used trefle.io as an API for our project. We also used RxJs for asynchronous state management. The back-end was a combination of Firebase and Java + Spring.
Challenges I ran into
The first challenge we ran into was finding an API or library with the data we wanted. Even after we found a good API, we needed to set up an RxJs store so that data calls could be made effectively and asynchronously. Finally, it was a challenge to make the virtual garden portion of our project.
Accomplishments that I'm proud of
We’re proud that we were able to fetch, display, filter, and sort the data, and allow users to create a virtual garden. We're also proud that we were able to develop both the front end and the back end.
What I learned
As a team, we learned how to use APIs to fetch data, and how to create a backend with Java and Spring. We also learned a lot about gardening and plants.
What's next for Grow Me
We would like to create a GrowMe community that will support and encourage new growers to turn their backyard into a mini farm. We’d also like to improve our database by adding more plant varieties and information. Ideally, we'd also like to generate the virtual garden from a satellite image and calculate the amount of sunlight, etc. using machine learning. We are also going to try to get an SSL certificate and make our website secure. We believe that with more detailed data and more features our project can become a real company!
Built With
css3
firebase
html5
java
javascript
node.js
nodemailer
react
rxjs
spring
typescript
Try it out
grow-me.us | Grow Me | An app to revolutionize the way gardening is planned and executed. | ['Siddhartha Chatterjee', 'Satvik Balakrishnan', 'Humza Dalal', 'Rohan Bodke'] | [] | ['css3', 'firebase', 'html5', 'java', 'javascript', 'node.js', 'nodemailer', 'react', 'rxjs', 'spring', 'typescript'] | 27 |
10,355 | https://devpost.com/software/cupertinobills | Cupertino Bills
A website designed to provide easily understandable and accessible information on local bills.
Who are we?
Our names are Ishani Das and Vaishanvi Kouru, and we are both rising sophomores at Cupertino High School.
Why did we create this site?
As is well known, the most powerful way to prevent infringement of citizens' civil rights is to keep ourselves informed.
Currently, staying up to date on local policies is tedious. It's our mission to make this information easily accessible to anyone who is interested. Right now, this website caters to people residing in Cupertino, CA.
What are some challenges we faced?
It was difficult picking a topic and communicating remotely, but the hardest part was definitely having to learn, understand, and implement new languages and databases in only a week. Working together and having clear communication helped us a lot.
Built With
css
glitch
html
javascript
mongodb
Try it out
github.com | Cupertino Bills | A website designed to provide easily understandable and accessible information on local bills. | ['Ishani Das'] | [] | ['css', 'glitch', 'html', 'javascript', 'mongodb'] | 28 |
10,355 | https://devpost.com/software/safemaps-2ydk7l | Safemaps Cupertino
In order to run
Make sure you have Python 3 and pip installed.
pip (or pip3) install -r requirements.txt
If prompted that Flask is not installed, please run pip install flask
python app.py
Inspirations
We were inspired to create Safemaps because we wanted to track the COVID-19 pandemic in Cupertino before we reopen our schools.
Learnings
We learned how to utilize a complicated API such as the Google Maps API.
Challenges
The challenges we faced were obtaining proper markers for the map, and collaborating online, because, for some of us, this was the first time we used GitHub in a collaborative sense.
Built With
flask
google-maps
html
python
Try it out
github.com | safemaps | A platform enabling users to view the number of covid-19 cases in Cupertino. | ['Jay Shah', 'Siddhant Hullur', 'Neil Deo', 'Aayush Goel'] | [] | ['flask', 'google-maps', 'html', 'python'] | 29 |
10,355 | https://devpost.com/software/market-safe | Market Safe
A COVID-19-related hackathon project that allows users to see AI recommendations for stores, reserve a spot to visit a store, and also functions as a contact-tracing site.
As the COVID-19 pandemic fades out, we will all eventually have to step out of our houses, whether it is to get groceries or medicine. However, we need to do it in a way that is carefully planned, or else we might face a second wave. Market Safe is a web application that allows customers and local businesses in the community to connect and plan in-person visits. It also functions as a contact-tracing app! Look below to see all the great things that can be done with Market Safe!
Customers can...
search for businesses in the community.
view a business.
get an AI recommendation about the business using data and machine learning.
reserve a spot to go to that store on a certain day.
ask the application to send alerts to people they have been in contact with in the case of testing positive.
view alerts on whether they may have been in contact with someone who tested positive.
Businesses can...
view and edit analytical data used by our machine learning algorithm.
see who has reserved a spot to visit their store in person.
open or shut down their stores.
view alerts on whether they may have been in contact with someone who tested positive.
Inspiration
The idea for Market Safe came from the current situation in the community, especially after hearing how various stores have different standards to prevent the virus. I wanted to keep the community safe, while still keeping local businesses and stores up and running.
How I Built It
Market Safe is a Django web application written in Python, HTML, and CSS. Although many of the files were pre-created by Django, I made all the templates and static files, as well as the views and URLs for each application. In addition, I used Bootstrap to display alerts. Last but not least, I used scikit-learn for the machine learning.
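The scikit-learn portion can be sketched roughly as below. The feature names and training data here are hypothetical stand-ins, not the analytics Market Safe actually collects:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: each row is a store's safety stats,
# [avg_customers_per_hour, pct_staff_masked, sanitizer_stations],
# and the label says whether a visit is considered low-risk.
X = [
    [10, 100, 3],
    [80,  40, 0],
    [15,  90, 2],
    [95,  30, 1],
]
y = [1, 0, 1, 0]  # 1 = recommend visiting, 0 = advise caution

model = DecisionTreeClassifier(random_state=0)
model.fit(X, y)

# Recommendation for a quiet, well-masked store.
print(model.predict([[12, 95, 2]])[0])
```

A real deployment would of course train on many more stores and features, but the fit/predict shape of the code stays this small.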
Challenges I Ran Into
This was my first time coding a fairly big project in Python, as I usually used Node.js. Because of this, I consistently got many errors involving Python syntax and Django usage. However, I was able to recover from all of these errors, and I believe it was a very good experience to code in a new language and a new way.
Accomplishments That I'm Proud Of
As I've said above, this was my first time coding a huge project in Python and Django. I'm proud of the fact that I was able to transition completely to Django, and was able to get past many of the errors involving Django and Python successfully. I believe that one of the best ways to learn a new activity is by doing it, and I agree that by having myself do Django, I learned a lot about it. In addition, this was my first project involving Machine Learning, and I am glad that I was able to implement the model first try without any errors.
What I Learned
Through this project, I learned the differences, benefits, and drawbacks between Django and Node.js. Although I am proficient in Node.js, by delving into Django, I now believe that Django has many more tools making it easier to code, especially about security and databases. In addition, this was my first time doing Machine Learning, and that was also a beneficial experience.
What's in the future for Market Safe?
allow customers to reserve a specific time slot rather than just a day.
allow businesses to customize the page that the customers see when they visit their site.
add more AI to help businesses create standards that will minimize the chance of spreading diseases.
incorporate a real database rather than the self-made database I used for this project.
allow customers to search for businesses that are near them first.
A Special Note to the Judges
From previous hackathon experience, I have always seen that people who do Machine Learning projects have a considerable advantage compared to those who don't. Why is this, though? After doing Machine Learning in this project, I realized that it only takes 10 lines of code, because you are just using a lot of the code that has already been written by others! If you would like to give me a prize for this project only because I did Machine Learning, I will say right now: I will refuse it. I urge the judges to please look at the creativity and the originality in the code (not just mine, but everyone's). Thank you.
Look at the code:
https://github.com/Ved-P/market-safe
Look at the video demonstration:
https://youtu.be/ON7kgEZEsRw
Unfortunately, I am unable to publish this project, so you will have to view the video to see the project. I am extremely sorry.
Built With
css
django
html
python
Try it out
github.com | Market Safe | A COVID-19 related hackathon project that allows users to see AI reccomendations to go to some store and reserve a spot to go to that store, and also functions as a contact tracing site. | ['Ved Pradhan'] | [] | ['css', 'django', 'html', 'python'] | 30 |
10,355 | https://devpost.com/software/virtual-skill-share-2020-hack-cupertino | Inspiration
My inspiration came from wanting to create something that could help my younger sister keep busy through the summer and the boredom of being unable to leave the house.
What it does
This website allows its users to create online classes which others can register for. Through these classes people can learn and share their skills.
Accomplishments that I'm proud of
I was able to successfully create a database and connect it to my frame.
What I learned
I learned how to use PHP and MySQL.
Future Updates for Virtual Skill Share - 2020 Hack Cupertino
Future updates for this website include giving volunteers the ability to send batch emails to students grouped by class, and a way of communication between students and the volunteers through the website.
Built With
html
jquery
mysql
php | Virtual Skill Share - 2020 Hack Cupertino | My idea is to create a website which allows people to create classes and teach others, or learn subjects from others. | ['Swetha Ashok'] | [] | ['html', 'jquery', 'mysql', 'php'] | 31 |
10,355 | https://devpost.com/software/tweets-molester-finder | . | . | . | [] | [] | [] | 32 |
10,355 | https://devpost.com/software/dbunk-ml-diov86 | Cover Photo
Browser extension detecting fake news
Breakdown analysis of a political article
Homepage
dbunk.ml
Inspiration
The advent of the Internet and the Information Age has transformed the way we receive and interact with information. Billions of people worldwide now rely on online news sources to stay up-to-date with current topics, and we are now able to spread information at speeds never seen before.
However, this comes with a catch. Fake news is becoming more and more prevalent on online news sites, and it is increasingly difficult to distinguish credible information from articles meant to spread misinformation. This issue has become a hot-button topic in recent news and election cycles. Often, this requires doing extensive research and cross-checking of sources, which takes an immense amount of time and effort.
Furthermore, being able to trust online information is especially important during critical situations like COVID-19. In a time like this, the truthfulness of news has become a public health issue. When citizens rely on news sources to keep themselves safe, it is extremely important that those sources are telling the truth.
We surveyed the residents of Cupertino, San Jose, Saratoga, and nearby cities, and found that over 60% of people expressed concern over fake and real news on the internet. What’s more, nearly 80% of people would be more engaged in current events, activism, and politics if they had a better way to identify credible information. Busy parents and workers expressed to us that they lacked the time and energy to keep up with the news on top of their already packed schedules. We wondered if there was a better, more efficient way to filter out misinformation. That better way did not exist, until now.
Our solution
Introducing dbunk.ml. Leveraging big data, modern machine learning frameworks like TensorFlow, and the massive computational power of Google's Cloud Tensor Processing Units, we can now train neural networks tens of thousands of times faster than we could before. Using this technology, we tailored our model to 10 million news articles from over 1000 different online news websites in order to classify news articles as completely fake, largely political, or credible. After iterating on network design and training for multiple days, our model can now correctly categorize news articles 94% of the time, a result comparable to that of existing fact-checking watchdogs, but our fully-automated solution lets users know whether or not to trust articles with a single click.
Our state-of-the-art system is displayed using a browser extension that clearly displays the credibility of a certain website. While browsing the web, our extension will automatically detect applicable news sites, and the extension icon will light up. By clicking the icon, the article is then sent to our servers, which analyze the article using our model and send the result back to the user within seconds.
dbunk.ml is:
Instant.
Our extension gives you instant insights into news articles as you browse, with a single click.
Accurate.
Powered by state-of-the-art machine learning technology, our model delivers 94% accuracy across thousands of news sites.
Detailed.
Our algorithms deliver comprehensive analysis and political bias indicators from hundreds of news sites instantly in a simple user friendly interface.
How we built it
We used Python and TensorFlow to train our model on FakeNewsCorpus, a dataset of 10 million news articles from 1000 different news websites. We trained our model using Google's Cloud TPUs, which deliver over 100 Petaflops of performance (that's a huge amount of computational power!) Our model is based on the LSTM architecture, which has proven time and time again to be excellent for text classification and sequences.
We used Flask to build our API which communicates with our machine learning model. Our website and extension is built with HTML, CSS, and JS, and we used Vue.js for reactive framework. We also used the chrome extension API to get the popup, tabs, and to store user settings.
Challenges we ran into
We ran into a lot of trouble throughout the process. The first was the dataset. We first used a dataset hosted on kaggle, which gave us really good results (99.8%) really easily. This sounded too good to be true, so we did some investigating and found that that dataset was really skewed and did not apply to real world articles.
So we went on the hunt for another dataset. This time, we found
https://github.com/several27/FakeNewsCorpus
, a dataset of 10 million articles from across 1000 different news sites, all clearly classified. It looked amazing! But they didn't have a released version because it was too large, so we had to write code to retrieve the dataset ourselves. It took a while, but we finally got it working.
Now, with this new dataset, our model was originally only getting around 70% accuracy, which is good, but not as good as we hoped. We realized that we were trying to classify into too many different categories, like conspiracy, pseudoscience, and rumors. This was too much for the network to handle and we ended up making it only distinguish between credible, political, and fake news, which proved to be a lot easier for the network to learn. We ended up with a 94% accuracy tested using 1000 real articles from the internet after training for a few days.
That might not sound like much, but it took a lot of iterations to get here. We first tried using just a simple GRU, but it didn't work, so we tried an LSTM. The LSTM worked better, but it still wasn't performing well because of vanishing gradients, so we needed our own special implementation that solved the vanishing-gradient problem. Stacking two of these special LSTM layers on top of each other allowed us to achieve better accuracy.
We also needed to fine-tune our model many times to prevent it from overfitting (which is where it just memorizes the input data and can't generalize to the real world). We used regularization and dropout to prevent overfitting, and although the training accuracy went down from 99.3% to 96%, it proved to work a LOT better in the real world than before, getting a 94% validation accuracy compared to the earlier 80%.
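Independent of the framework, the first step in a pipeline like this is turning article text into fixed-length integer sequences that an LSTM can consume. A minimal, dependency-free sketch of that tokenize-and-pad step (the vocabulary size and sequence length here are illustrative):

```python
def build_vocab(texts, max_words=20000):
    """Map the most frequent words to integer ids; 0 is reserved for padding."""
    counts = {}
    for text in texts:
        for word in text.lower().split():
            counts[word] = counts.get(word, 0) + 1
    ranked = sorted(counts, key=counts.get, reverse=True)[:max_words]
    return {word: i + 1 for i, word in enumerate(ranked)}

def encode(text, vocab, seq_len=200):
    """Convert text to a fixed-length id sequence, truncating or zero-padding."""
    ids = [vocab.get(w, 0) for w in text.lower().split()][:seq_len]
    return ids + [0] * (seq_len - len(ids))

vocab = build_vocab(["fake news spreads fast", "credible news cites sources"])
print(encode("fake news cites sources", vocab, seq_len=6))  # → [2, 1, 6, 7, 0, 0]
```

The resulting id sequences are what get fed into an embedding layer followed by the stacked LSTM layers.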
Accomplishments that we're proud of
This project started out as a simple idea, but it turned out to actually work really well, even better than we imagined at the start. All those hours and hours of hard work fine tuning the model really paid off! We expected it to work a lot better in theory than in practice, but after testing it on a few hundred links, we realized that it applied really well to the real world.
Many times, hackathon projects turn out to be just a "prototype", which doesn't necessarily work well yet, but we're proud that this time we actually finished a very great product that can already be applied and used by people to help them identify fake news.
What we learned
We learned many things here. Going in, only one team member had a lot of experience with machine learning, one had some but not much, and one had none at all. Everyone was able to learn by working on it together. Even the person who had no experience at all now knows many of the fundamental principles and how to tune the model.
We also learned how to make browser extensions. Before, we thought about making it just a website, but realized a browser extension was perfect. No one had really made a legitimate browser extension (besides just a timer).
Most of all, we learned good ways to work together even remotely. Before, it would have been really hard to work together this efficiently, and at the start, it was really confusing to everyone. But by the end, we'd learned to work together REALLY efficiently and could get everything done quickly.
What's next for dbunk.ml
We want to improve this even more. Right now, some articles can't be analyzed because they are behind a paywall or the news site doesn't let robots view the site to get the text. This is only a really small portion of news sites, but it is still a problem that we need to deal with. We can experiment with taking the text directly from the user's browser instead of just the URL, but that might cause some privacy issues.
We also want to integrate it with more services, such as mobile devices. One idea is to make a browser app that functions exactly like the built in one, but with added news checking features.
Thank you, and welcome to dbunk.ml.
Unfortunately, chrome makes you pay to publish the extension on the web store, so we aren't able to do that. but you can view the code at GitHub.
Built With
chrome
machine-learning
tensorflow
Try it out
dbunk.ml
github.com | dbunk.ml | Advanced fake news analyzer powered by Deep Learning | ['Riley Kong', 'Oliver Ni', 'kyle he'] | [] | ['chrome', 'machine-learning', 'tensorflow'] | 33 |
10,355 | https://devpost.com/software/cubba | Main Picture
Why?
How It Works
Setup?
What Happens After Detection?
Who doesn't like memes, right?
Inspiration
We all deal with potholes on a daily basis considering we all need to use the roads of the cities we live in, and for sure, we all hate them. Major Californian cities such as San Diego, San Jose, San Francisco and Los Angeles have over 64% of their roads in mediocre or damaged condition. The United States of America is estimated to have approximately 55 million potholes. A comprehensive study made in 2018 by AAA shows that over the past five years around 16 million drivers across the States have suffered damage from potholes.
The pervasive potholes in question wreak havoc on drivers' car suspensions and cause a considerable amount of traffic issues. Well, that is a problem we need to fix.
On average, a little over 3 million drivers in the US suffer pothole-related damage every year. This can be anything, from popping a tire, to bending a rim, to blowing out a shock absorber. The direct financial cost of fixing these damages adds up to nearly 120 billion dollars for America's drivers. Even worse than a financial annoyance, a pothole can cause various problems and crashes for even the most experienced drivers. Of approximately 33,000 traffic deaths a year, as many as one third are attributed to poor road conditions like potholes. These losses must be avoidable.
So, what are our options? Can't we just build new roads? Building roads is insanely expensive, and roads are not the easiest thing to manage in a big, crowded city. Designing better foundations for our roads featuring improved drainage wouldn't decrease the frequency of potholes, and on top of that these improved foundations are much more expensive to build. Maintaining America's large four-million-mile road system while enabling our level of travel would be a serious burden to carry. The American Society of Civil Engineers, who know a thing or two about roads, estimates that the cost of maintaining the United States' roads properly is between 150 billion and 200 billion dollars a year for the next 50 years. But the budget is limited to only 60 billion dollars. Now I may have only taken math for the humanities in high school, but that seems a few bucks short of what roads need.
What it does
The Raspberry Pi records video footage while you are driving your car. The recorded video is then converted to images, and image classification decides whether each image contains a pothole or not.
How I built it
A Raspberry Pi (or any small computer that can fit in the front of a car) and a camera. When you start your car, power goes to the Raspberry Pi, which automatically runs a Python script that records the video.
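The video-to-images step only needs to sample frames at a fixed interval rather than classifying every frame. A hypothetical helper for picking which frame indices to save (the actual script also drives the camera and decodes the video):

```python
def frames_to_save(total_frames, fps, every_seconds=2):
    """Return indices of the frames to extract: one frame every `every_seconds`."""
    step = max(1, int(fps * every_seconds))
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, sampling one frame every 2 seconds:
print(frames_to_save(300, 30, every_seconds=2))  # → [0, 60, 120, 180, 240]
```

Each selected frame is then written to disk and passed to the image classifier.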
Challenges I ran into
The Raspberry Pi is compact but not as powerful as needed (at least before the Raspberry Pi 4).
What I learned
Well, if you do want the pictures of potholes or road disorders ahead, maybe you shouldn't (you definitely shouldn't) put a camera in front of your mom's car.
Even though image classification is not hard to code, it still requires a decent amount of study.
What's next for CUBBA
A decent change of name, it's in Turkish.
Better dataset and images
Object detection
GPS and stats
a lot of stats
...
Built With
camera
python
raspberry-pi
tensorflow
Try it out
github.com
cubba.ml | CUBBA | Dedect potholes with raspberry pi and camera! | ['Umut YILDIRIM'] | [] | ['camera', 'python', 'raspberry-pi', 'tensorflow'] | 34 |
10,355 | https://devpost.com/software/dermi | Dermi focuses on helping the community by having
early diagnosis
of skin diseases so that treatment can happen sooner, reducing deaths, and it is less expensive (reducing the burden on systems such as Medicare).
Inspiration
Many die and become disabled every year from skin diseases. If early detection can solve the issue, we need to make it more available. I aim to do that using deep learning.
What it does
Suggests skin disease from image.
How I built it
First, I collected a dataset using a web scraper. Then I created a ResNet50 model using PyTorch.
Afterwards, I created a Flask server for inference and used Ngrok to make it publicly available.
Created an Android app using Kotlin and Room for on-device database.
Note: long press on a diagnosis allows you to delete it.
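A minimal sketch of what such a Flask inference endpoint can look like (the route name and the predict placeholder are assumptions; the real server runs the PyTorch/fast.ai model on the uploaded image):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def predict(image_bytes):
    # Placeholder for the real model call; a fast.ai learner would decode the
    # bytes into an image and run inference here.
    return {"label": "rash", "confidence": 0.9}

@app.route("/predict", methods=["POST"])
def predict_route():
    if "file" not in request.files:
        return jsonify({"error": "no file uploaded"}), 400
    result = predict(request.files["file"].read())
    return jsonify(result)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Exposing this server through an ngrok tunnel is what makes it reachable from the Android app.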
Accomplishments that I'm proud of
I am proud of the application and the speed at which I built it.
What I learned
Learning how to run an ngrok tunnel from Colab itself.
What's next for Dermi
Larger dataset
Using several models to be able to create the most accurate diagnosis.
The current app uses 1 model, however, I previously created another model with another dataset, which I chose not to use for this hackathon due to the time restriction.
Using other available datasets (this app can integrate with other models, making it extensible)
iOS application
Built With
android
fast.ai
kotlin
pytorch
room
Try it out
github.com | Dermi (rash edition) | Simple but powerful skin rash classifier. | ['Vijay Daita'] | [] | ['android', 'fast.ai', 'kotlin', 'pytorch', 'room'] | 35 |
10,355 | https://devpost.com/software/one-life | Inspiration
Dear Judges, in order to gain a better understanding of our project and its strong connection with civic engagement, we would love for you to glance over page 6 of this WHO report :)
https://apps.who.int/iris/bitstream/handle/10665/252071/WHO-MSD-MER-16.6-eng.pdf;jsessionid=BF40C14B3E1CB512CFC9C8F12D998DAE?sequence=1
Problem
Suicide and depression are major national public health issues in countries such as the United States. Just between 1999 and 2014, the average annual U.S. suicide rate, adjusted for age, increased by a staggering 24%. Furthermore, according to the NIH, an estimated 17.3 million adults in the United States have had at least one major depressive episode. This number represents 7.1% of all U.S. adults. These shocking statistics are what make suicide and depression a major public health issue across the nation, one that demands an effective solution at this very moment.
According to research from the WHO, Preventing suicide can have a positive impact on communities by:
Promoting health and well-being of community members
Empowering communities to identify and facilitate interventions
Building capacity of local health-care providers and other gatekeepers
However, the problem is that most people lack the preparation and confidence to truly help someone with suicidal tendencies and make a difference.
Civic Engagement and Community Importance
Governments all across the world need to take a lead in suicide prevention in order to develop and implement comprehensive multi-sectoral national suicide prevention strategies.
However, research from the WHO suggests that variations in the suicide rates within countries indicate that top-down suicide prevention must go hand-in-hand with local bottom-up processes. Hence, communities play an essential role in suicide prevention when they provide bridges between community needs, national policies and evidence-based interventions that are adapted to local circumstances.
Prevention of suicide cannot be accomplished by one person or institution alone; it requires support from the whole community. The community contribution is essential to any national suicide prevention strategy. Communities can reduce risk and reinforce protective factors by providing social support to vulnerable individuals, engaging in follow-up care, raising awareness, fighting stigma and supporting those bereaved by suicide.
More importantly, communities can help by giving individuals a sense of belonging. It is essential to understand that the community itself is best placed to identify local needs and priorities.
Confronting and helping someone dealing with these challenging problems can be difficult and daunting, and few people have experience with it. Thus, we aimed to improve community knowledge on the topics of suicide and depression, and to prepare members in case they ever need to help someone through a tough time.
Solution
Therefore, we decided to create OneLife, a health web application aimed at equipping community members with the necessary tools to help people in their community suffering from suicidal or depressive thoughts. Furthermore, with our get help and forum features, our application fosters a positive and supportive community, which is the key to preventing suicides.
What it does
OneLife supports people with the tools to help someone in their community suffering from depression and suicidal thoughts, using machine learning to identify suicidal and depressive thoughts in messages, which can be pasted into a text box or uploaded through an image. Through the get help page, the community member or the suicidal person can find local therapists through one quick press of a button. The forum page fosters a collaborative and supportive community for community members to engage in conversations, as well as communicate together with therapists.
OneLife also has a twitter bot which identifies suicidal and depressive messages from social media chats, and sends consoling messages to victims with the National Suicide Prevention Hotline.
How I built it
In order to build our web application and twitter bot, we used:
Flask
HTML/CSS/JS
Python
Google Cloud: Places and Maps API
Machine learning: Bayesian classifier
Data scraping from various subreddits to create a custom dataset
Socket.io for the real time chat
Twitter API for the twitter bot
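A Bayesian (naive Bayes) text classifier like the one described can be sketched from scratch in a few lines. This toy version, trained on made-up phrases rather than our scraped Reddit data, only illustrates the math: word likelihoods with Laplace smoothing combined with class priors.

```python
import math
from collections import Counter

def train(samples):
    """samples: list of (text, label). Returns class priors, word counts, vocab."""
    labels = [lbl for _, lbl in samples]
    priors = {lbl: labels.count(lbl) / len(labels) for lbl in set(labels)}
    words = {lbl: Counter() for lbl in priors}
    for text, lbl in samples:
        words[lbl].update(text.lower().split())
    vocab = {w for counter in words.values() for w in counter}
    return priors, words, vocab

def classify(text, priors, words, vocab):
    scores = {}
    for lbl, prior in priors.items():
        total = sum(words[lbl].values())
        score = math.log(prior)
        for w in text.lower().split():
            # Laplace smoothing avoids zero probabilities for unseen words.
            score += math.log((words[lbl][w] + 1) / (total + len(vocab)))
        scores[lbl] = score
    return max(scores, key=scores.get)

data = [
    ("i feel hopeless and alone", "at_risk"),
    ("nothing matters anymore", "at_risk"),
    ("had a great day at the park", "ok"),
    ("excited about the weekend", "ok"),
]
priors, words, vocab = train(data)
print(classify("i feel so alone", priors, words, vocab))  # → at_risk
```

Log-probabilities are used instead of raw products so that long messages don't underflow to zero.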
Challenges I ran into
The first big challenge we ran into was finding an appropriate dataset to train our machine learning model. After a lot of research and learning how to web scrape effectively, we were able to scrape information from the r/Depression, r/SuicideWatch, r/CasualConversation, and r/All subreddits. Furthermore, the Reddit API only allowed a maximum of 100 posts, so we had to learn how to use a wrapper tool called PRAW in order to scrape 1000 posts from each subreddit. Another challenge was creating the chat feature in real time; we learned how to use Socket.IO to build the forum and chat feature so that people can message each other in real time.
Accomplishments that I'm proud of
We are very proud to have developed a complete working web application. We are also proud to have learned to integrate new technologies such as Socket.IO, web scraping, and Google Cloud's Places and Maps APIs.
What's next for One Life
We hope to polish off some of the bugs from our code, and conduct more testing of our platform. Then, we plan to deploy it on a server, and release it as an open source project for people all over the world to build upon and learn from!
References
WHO, Preventing suicide: A community engagement toolkit, 2016 -
https://apps.who.int/iris/bitstream/handle/10665/252071/WHO-MSD-MER-16.6-eng.pdf;jsessionid=BF40C14B3E1CB512CFC9C8F12D998DAE?sequence=1
Praw -
https://praw.readthedocs.io/en/latest/
SuiSense Team
Try it out
github.com
github.com
onel1fe.herokuapp.com | One Life | Helping support community members to assist people suffering from suicidal and depressive thoughts through machine learning and automated bots. | ['Veer Gadodia', 'Nand Vinchhi'] | ['Third Place Overall'] | [] | 36 |
10,355 | https://devpost.com/software/garbagedetector | Recycle.AI - The Smart Cleaner
Why we need Recycle.AI?
Recycle.AI Logo
RecycleAI
Introducing Our Initiative, recycle.AI!
Our Initiative
Recycle.AI is a multiphased initiative utilizing modern technologies such as Machine Learning, robotics, and game development to encourage the responsible usage and consumption of our natural resources around the world.
We noticed that most recyclable materials and products are not actually recycled, but rather thrown into landfills. Note that roughly 80% of the rubbish in landfills is recyclable, which is, honestly, way too much!
Our initiative focuses on the youth, households, organizations, and the government, aiming to encourage recycling amongst our local and the global community.
Youth Phase
Introduction
The youth phase is a recycling-based game where children can score points by correctly identifying if an object is recyclable or not, helping them understand recycling from a young age. The game is built for any setting and simply works by clicking the right bin for the item that is to be disposed of. It can be used to teach children how to recycle in classrooms or can be an educational activity children can do with their parents.
How it is built
Using C# and the Unity game engine; the CAD was made in Autodesk Inventor.
Households and Society Phase
Introduction
We built a tool targeted at small organizations and households that can identify whether an object is recyclable or not. The tool is implemented on our website, where users can read about our mission as well as use the tool to ensure they are disposing of items responsibly.
How it is built
We used machine learning and HTML: the tf.keras framework to build a convolutional neural network, and Flask to connect the Python backend with the HTML frontend. In essence, we trained a deep convolutional neural network to classify images from a dataset, labelling them based on one-hot encoded values.
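The one-hot encoding mentioned above can be sketched very simply. This is an illustrative stand-in, not the project's actual code, and the six category names are assumed from common trash-classification datasets:

```python
# Hypothetical label set for a 6-class trash classifier.
CLASSES = ["cardboard", "glass", "metal", "paper", "plastic", "trash"]

def one_hot(label, classes=CLASSES):
    """Return a one-hot vector with a 1 at the label's index."""
    vec = [0] * len(classes)
    vec[classes.index(label)] = 1
    return vec
```

A network with a 6-unit output layer, like the model shown later, is then trained against these vectors.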
Issues and how they were overcome
The main issue with the network's performance was addressed by making the network far bigger, although we did not have enough time to trial-and-error the design, so we were not able to improve on our second iteration. Another issue was the padding of images uploaded by the user, which we fixed with a Pillow implementation that added padding:
```python
from PIL import Image, ImageOps

if test:
    inputData = Image.open('test/' + testfile)
else:
    inputData = Image.open(testfile)

desiredSize = (512, 384)
im = inputData
old_size = im.size
# Scale the image so its longest side matches the target, keeping aspect ratio.
ratio = float(max(desiredSize)) / max(old_size)
new_size = tuple([int(x * ratio) for x in old_size])
im = im.resize(new_size, Image.ANTIALIAS)
# Pad symmetrically to reach the exact target dimensions.
delta_w = desiredSize[0] - new_size[0]
delta_h = desiredSize[1] - new_size[1]
padding = (delta_w // 2, delta_h // 2, delta_w - (delta_w // 2), delta_h - (delta_h // 2))
new_im = ImageOps.expand(im, padding)
im = new_im.resize(desiredSize, Image.ANTIALIAS)
im.show()
inputData = im
```
The model
The model can be seen below:
```python
from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.Conv2D(32, (4, 4), activation='relu', input_shape=(384, 512, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (4, 4), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (4, 4), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (4, 4), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (4, 4), activation='relu'))
# Flatten the final feature maps and classify into the 6 output categories.
model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(6))
```
This is a multilayered convolutional neural network using 4x4 filters and the ReLU activation function; our loss metric was mean squared error.
Society Phase
Introduction
This concept involves utilising robotics to locate and manoeuvre small recyclable materials to recycling bins, as these bins are often not as readily available as regular dustbins. The robots should eventually be autonomous; as of now, the robot has just finished construction and is able to pick up objects of up to 8 inches in diameter, with the idea being to install a bin bag in the large empty space to store the objects.
How was it built
As evident in the CAD file, it was built using the VEX Robotics V5 system. As of now, the components do not have the computational power to fully implement an algorithm as computationally intensive as YOLO, so we chose not to port it to the system.
Complications
The robot only finished construction about 5 minutes before the video was made, so it could not be showcased fully, but the CAD renders are available on this page.
Quick overview
- The intake flaps increase the contact between the target and the bot
- The rubber treads on the intakes increase the traction of the intakes
- The 8:1 gear ratio of the drivebase ensures the robot operates at maximum speed and efficiency
Future plans for the robot
The final aim of this phase of the initiative is to implement the YOLO algorithm alongside the robot. This algorithm draws bounding boxes around the objects it is interested in, in real time. The next step would be to implement PID control so that the robot can reach its target without overshooting, by slowing down as it approaches, or alternatively to use a gyroscope, or even odometry (i.e. position tracking), to manage the robot's movements.
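The anti-overshoot behaviour described above can be illustrated with a toy PID loop. This is a Python sketch (the robot itself runs on the VEX V5 system, not Python), and all gains and the velocity-command model are made-up illustrations, not tuned values:

```python
# One PID update: error -> proportional + integral + derivative terms.
def pid_step(target, position, integral, prev_error,
             kp=0.5, ki=0.0, kd=0.1, dt=1.0):
    error = target - position
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, integral, error

def drive_to(target, position=0.0, steps=50):
    """Simulate driving toward a target; output acts as a velocity command."""
    integral, prev_error = 0.0, target - position
    for _ in range(steps):
        output, integral, prev_error = pid_step(target, position,
                                                integral, prev_error)
        position += output  # the derivative term damps the approach
    return position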
Built With
bootstrap
flask
html
keras
python
tensorflow
Try it out
github.com
recycleAIdemo.ved07.repl.co
sharemygame.com
drive.google.com | Recycle.AI | Responsible consumption, reduced depletion | ['Vedaangh Rungta', 'Emmanuel Ma', 'Ishan Baliyan', 'Mahad Ali Khan'] | [] | ['bootstrap', 'flask', 'html', 'keras', 'python', 'tensorflow'] | 37 |
10,356 | https://devpost.com/software/clove-nlui4w | We are Clove - A personalized recipe-sharing social media platform with weekly meal planning and automatic cart creation functionality.
The home dashboard. You can search for and view recipes, look up accounts, and plan ahead on meals out.
Viewing the details of a recipe to add it.
Looking up channels or popular accounts for their recipes. Clicking on any account gives a pop-up with their recipes.
Populating the dashboard with recipes for the week using the add functionality.
Checking out through the preferred grocer or getting a pdf list of ingredients for personal use.
This is the kroger authentication page which connects your kroger account to Clove.
This page shows if the process of adding items to the cart was a success or not and then redirects to the grocer's page for secure checkout.
This was our initial UI design. Of course, as we progressed our layout evolved to meet user convenience.
This was our initial back-end plan, and with the help of Lambda functions and Dynamo DB, we made it very adaptive to any form of data input.
Inspiration
Clove was inspired by an issue shared by all its members: grocery shopping is too tedious. The process of finding a recipe, hunting for the hundreds of ingredients and commuting to multiple stores - especially during a pandemic such as now - is a pain for anybody. We aimed for a simple, but impactful solution. By integrating a social-media platform for recipe sharing with an automated grocery compiler in a user-friendly web app, we felt that we could meet the needs of those who just don’t have the time to spare for long grocery trips and inefficient meal-planning or those who simply look for a new culinary adventure every day. With a goal to create an application with a touch of playfulness and powerful functionality, we envisioned Clove as an all-around user-friendly web application for day-to-day convenience.
What it does
Clove serves to fulfill one task: make grocery shopping easier. Biding by this mentality, Clove brings meal planning and the joy of finding new recipes together in an intuitive interface. Users can use Clove to discover exciting recipes while Clove works behind the scenes to create a shopping cart around the ingredients used in your recipes. Clove also takes your list and automatically searches for the products you need in order to make your grocery shopping simple and efficient. Clove takes the tedious job of compiling a list of groceries and transforms it into an enjoyable task as numerous features simplify the job at hand while providing an elegant UI.
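The "cart built behind the scenes" idea above amounts to merging the ingredient lists of the week's recipes, summing duplicate quantities. A minimal sketch follows; the recipe data shape is illustrative, not Clove's actual DynamoDB schema:

```python
from collections import defaultdict

def compile_cart(recipes):
    """Merge ingredient quantities from several recipes into one cart."""
    cart = defaultdict(float)
    for recipe in recipes:
        for name, qty in recipe["ingredients"].items():
            cart[name] += qty
    return dict(cart)
```

The real app then maps each merged ingredient to a grocer product via the Kroger/Walmart APIs before checkout.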
How we built it
Clove was built on a variety of technologies. Initially, front-end design was done in Figma, focusing on the major parts of the app. From there, our team split the front-end and back-end work amongst ourselves. The front end was built primarily on React, Material-UI, and CSS, with data storage in Redux. This front end was then set to communicate with the backend, running multiple services such as DynamoDB and Lambda. The line also blurred between front-end and back-end integration as we included authentication technologies through AWS Cognito and the Amplify Framework. Communication was done through jQuery and XMLHttpRequest, both of which helped facilitate multiple HTTPS requests. In order to achieve the automated cart functionality, we integrated our recipe-finding and weekly-planning services with the public-access APIs created by grocery companies such as Kroger and Walmart.
Challenges we ran into
Clove ran into a few challenges working with the AWS API. Our team was mostly new to using AWS so we saw a steep learning curve that took some time away from functionality. The React development process was especially slow at first due to some issues with project setup in the beginning, however, as we progressed, we soon gained momentum. In addition to that, we struggled with the grocer APIs. The documentation was clean and easy to read, but our lack of professional development experience made it a little difficult to understand the reasoning behind and the usage of certain important features. We lost a large amount of time in trying to explore some very new territory and argued about giving up that functionality, but ultimately decided that we would keep trying as exploring new territory is what a hackathon is all about! :) Eventually we were able to figure out how to use the APIs and it turned out even better than we expected. With cooperation and teamwork between all our members, we were able to turn difficult situations into memorable ones.
Accomplishments that we are proud of
Clove was actually the first time a few of us encountered or actively used technologies like DynamoDB and Lambda, and that quickly became a fulfilling experience in itself. Furthermore, considering the time span, it was both stressful and exciting to learn these technologies to get to development as soon as possible while retaining the information for later use. The UI design aspect in particular shined in our project, which was an equally pride-worthy moment. In addition to that, we were very, very happy to get the API functionality down for adding automatic cart functionality to our app. It was a struggle, but the fun and worry in pulling something off at the very, very last second was an amazing experience for us all.
What we learned
As a team, we undeniably learned many new things about newer technologies that shaped the course of our project. Outside of our code, we learned more about the time constraint and the importance of planning and sticking to the plan. We saw ourselves drifting towards more ambitious goals, but learning to stick to the essentials and create something that simply works was a great lesson. In essence, we learned the importance of making a product that is impressive, scalable, and dynamic in order to maximise effectiveness. In terms of technologies, we learned how to use a multitude of new APIs, and we learned how to use AWS Lambda functions, DynamoDB, and S3 alongside GitHub Pages in order to host our web application and store data in the back end. Alongside that, as we were starting to run out of time, our front-end developers worked with a lot of back-end and vice versa - so it served as a chance for everyone to explore something new and out of their comfort zones.
What's next for Clove
Clove definitely has the capacity to go much further than it is now in the form of integrations, sophisticated data storage, and further tweaking. Integrations-wise, we were looking to have a more privatized database based on authentication tokens to provide a more personal experience. Data-storage-wise, we were looking to separate the stores for users and recipes to create a more centralized feed that could provide recipes based on your interests. Also, we can add extra user functionality that makes it easier for users to find recipes and add them to their planner. On top of that, we were only able to access the public APIs, so we are looking into how we could integrate our current product with restricted-access APIs of companies such as Instacart in order to further develop Clove's ease of use and functionality.
Built With
amazon-amplify
amazon-dynamodb
amazon-lambda
amazon-web-services
cognito
css
figma
html5
javascript
jquery
material-ui
node.js
react
redux
Try it out
charansriram.github.io
github.com | Clove | A personalized recipe-sharing social media platform with weekly meal planning and automatic cart creation functionality. | ['Mohit Sahoo', 'Henry Castillo', 'Kaushik Akula', 'Charanyan Sriram'] | ['Everest Winner'] | ['amazon-amplify', 'amazon-dynamodb', 'amazon-lambda', 'amazon-web-services', 'cognito', 'css', 'figma', 'html5', 'javascript', 'jquery', 'material-ui', 'node.js', 'react', 'redux'] | 0 |
10,356 | https://devpost.com/software/slice-yk38l9 | Inspiration
Given the current pandemic, many have switched to working from home, and a common feeling is that it's easier to be distracted in this new work environment. There's the constant temptation of wanting to check Facebook, browse Reddit, or watch a quick YouTube video. Our productivity tool provides a clean and easy solution!
What it does
Slice enables users to set time limits for how long they can spend on unproductive sites (e.g. YouTube, Facebook). Once the time limit has been reached, users get a notification from Chrome reminding them that they have reached their quota of "distractedness". Aside from the notifications, the extension also provides a settings page where users can choose which sites go on their "distraction blacklist" and how long the limits for those sites should be, as well as an analytics page that provides users with data insights on their productivity and time-usage habits.
How we built it
Whenever a new Chrome tab is opened, a timer is created, and a function runs periodically to update data and check whether the user has exceeded their allotted time limit, notifying them if so.
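The periodic check described above boils down to per-site time accounting. Here is the logic sketched in Python for clarity (the extension itself is JavaScript); the site names and limits are illustrative:

```python
def tick(usage, limits, active_site, elapsed_seconds):
    """Accumulate time on the active site; return True once its limit is hit."""
    usage[active_site] = usage.get(active_site, 0) + elapsed_seconds
    limit = limits.get(active_site)  # sites off the blacklist have no limit
    return limit is not None and usage[active_site] >= limit
```

In the extension, this check would fire on each periodic callback, triggering a Chrome notification when it returns true.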
Challenges we ran into
There were a lot of challenges we ran into along the way, including figuring out how chrome extensions worked and navigating the API, investigating gnarly bugs, etc.
Accomplishments that we're proud of
Having the tool work from end to end, from being able to set time limits and customize the blacklist of sites, to receiving notifications when we spend enough time on those sites, to seeing the data displayed in our graphs!
What we learned
How to build a chrome extension from scratch! Also, none of our team members were very familiar with HTML/CSS/JS before this, so we learned a lot about those as well.
What's next for Slice
Add to our collection of graphs to provide users with more insights.
Built With
bootstrap
chrome
css3
extension
html5
javascript
Try it out
github.com | Slice | As working from home becomes the new norm, maintaining productivity and reducing distractions is key. Our tool notifies users who have been unproductive for too long and provides insightful analytics. | ['Ellen Huang', 'Bimesh De Silva'] | [] | ['bootstrap', 'chrome', 'css3', 'extension', 'html5', 'javascript'] | 1 |
10,356 | https://devpost.com/software/c-care | When our app worked, Satisfied
Inspiration
During the current COVID-19 pandemic, I see health workers curing patients, doctors developing new medicines, police controlling crowd movement, and even bus drivers helping people get back home. As a future engineer, I felt that my contribution was none, so I was motivated to do my part, try to bring about a positive change, and make sure my product can also be used in a future pandemic.
The problem our project solves
Offices and workplaces are opening up, and as the lockdown loosens we have to get back to work. But there is a massive possibility that infection can spread in our workplaces: when a person is infected, they can be asymptomatic for up to 21 days and still be contagious, so the only way to contain the spread is by wearing a mask and maintaining hand hygiene. WHO and CDC reports have said that if everyone wears a mask and maintains hygiene, the number of cases can be reduced threefold. But HOW will we do that? How can we make everyone habituated to following these safety precautions so that they become normalized? We have come up with a solution called C-CARE, the first-ever preventive habit maker, which will bring a positive change.
What our project does
Our app is a first-of-its-kind safety awareness system built on the Google geofencing API. It creates a geofence around the user's home location: whenever the user leaves home, they get a notification in the C-CARE app ('WEAR MASK'), and as they return home they get another notification ('WASH HANDS'), ensuring the full safety of the user and their family. It is also loaded with additional features such as a HOTSPOT WARNING SYSTEM, in which a user entering a COVID hotspot region is alerted to maintain 'SOCIAL DISTANCING', and a statistics board where the user can see how many times they have visited each of these geofences. With repeated notifications, we will make people habituated to wearing masks, washing hands, and social distancing, which will make each and every one of us a COVID warrior. We are not only protecting ourselves but also protecting others, only with C-CARE.
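Under the hood, a geofence is just a radius check around a fixed point. The platform geofencing API handles this for the app, but the idea can be sketched with a haversine distance test (coordinates and the 100 m radius below are illustrative assumptions):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(home, point, radius_m=100.0):
    """True while the user is within the fence; an exit triggers 'WEAR MASK'."""
    return haversine_m(*home, *point) <= radius_m
```

Crossing from inside to outside this boundary is the "exit" transition that fires the mask reminder; the reverse crossing fires the hand-washing one.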
Challenges we ran into
1.) We lacked financial support, as we had to make this app from scratch.
2.) We had problems collecting data on government-certified hotspots, and we also had to do a lot of research on the spread pattern of COVID-19.
3.) Due to a lack of mentors, whenever the app stopped working we had to figure out how to correct the error by ourselves.
4.) It took us a long time to test it in real time, since during lockdown it was too hard to go outside; finally, after the lockdown loosened a bit, we tested it and it gave an excellent result.
5.) We didn't know much about geofencing before this, so we had to learn it from scratch using YouTube videos.
Accomplishments that we're proud of
WINNER at Global Hacks in the category of HEALTH AND MEDICINE.
WINNER at MacroHack As the best Android Application.
WINNER at MLH Hackcation in the category ( Our first Hackcation ).
TOP 5 in innovaTeen hacks.
TOP 10 in Restartindia.org and Hack the crisis Iceland.
What we learned
All team members of C-CARE were able to grow their area of competence by participating in the whole process of idea definition, market research, validation, prototyping, and presentation. Through different mentor sessions, we learned that problems could be approached by many means, but most importantly our mission should be clear.
What's next for C - CARE
COVID cases are increasing every day, and chances are low that we can create a vaccine immediately, apps like C-CARE will play a crucial role in lower the spread of infection till a proper vaccine is made.
Our app can also be used for seasonal diseases such as swine flu or bird flu, or for possible future pandemics such as hantavirus, the G4 virus, bubonic plague, or monkeypox.
Built With
android-studio
geofence
google-maps
java
sqlite
Try it out
drive.google.com | C - CARE | C - CARE An app that makes every person a COVID warrior. | ['Anup Paikaray', 'Arnab Paikaray'] | ['Track Winner: Health and Medicine'] | ['android-studio', 'geofence', 'google-maps', 'java', 'sqlite'] | 2 |
10,356 | https://devpost.com/software/foodie-c7fez0 | Amid the coronavirus pandemic, grocery shopping is extremely difficult and we want to make shopping to ingredients easier. Therefore, we decided to build our Foodie! application to aid people during these trying times. We automatically built a shopping list with ingredients needed for the week's meals and provide corresponding recipes to use up those ingredients. Users will be saving time and money with our application.
The app generates recipes for the user to try. It provides a GUI that prompts the user for dietary restrictions and preferences, then chooses tailored recipes from over 600,000 online recipes. We also provide the grocery shopping list to make shopping that much easier.
How we built it
We leveraged the Spoonacular API, which contains functions for searching recipes and ingredients.
Challenges we ran into
Quite a few front-end challenges.
Accomplishments that we're proud of
We're very proud of the app's appearance and functionality.
What we learned
Python.
What's next for Foodie
More features, including giving more recipes.
Built With
python
spoonacular
tkinter
Try it out
github.coecis.cornell.edu | Foodie! | Grocery shopping and meal prepping made easy with automation. We automate a shopping list and recipes for the week and present it all in one place. | ['Junlin Yi', 'Daniel Zhan', 'Jackson Keel-Atkins'] | [] | ['python', 'spoonacular', 'tkinter'] | 3 |
10,356 | https://devpost.com/software/fridgeremindr | Auto generated grocery list based on items needed
Suggested recipes
air quality measurements showing a spike in particulate values (someone coughed or sneezed)
Essential groceries list
Expiry date recognition
Product identification
the air quality workshop which inspired us
attending the rust workshop for the learning and the bonus points
at this point we were wondering if we could build it cheaper and better
air quality workshop
the sps30 sensirion sensor up close
sensor basic hardware setup
Inspiration
Like it or not, grocery shopping is a major, unavoidable chore. Especially in these circumstances, shopping can be a pain, so we decided to automate the tedium of checking expiry dates, what's in the fridge/pantry, and also whether we should wear a mask indoors (sometimes we should, especially if there is company). The workshop on Saturday inspired us to build a cheaper, better-quality air sensor which can even detect cough and sneeze droplets indoors.
What it does
There are two main sets of features:
1: shopping list generation and expiry date detection. All you have to do is take pictures of what’s in your fridge and kitchen, FridgeRemindr takes care of the rest. The app detects the products present in the images using machine learning and generates a list. It lets you add essential items to a list along with their expiry dates. The app keeps track of the dates and generates your grocery shopping list for you. FridgeRemindr also generates a shopping list for you based on your cravings. All you have to do is name the dish you want to prepare. You could exploit this feature to help you prepare for a potluck, Thanksgiving dinner or any special occasion. FridgeRemindr compares the list of ingredients required with the items you have at home and generates the list for you.
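The craving feature above comes down to comparing a dish's ingredient list against what is already at home and emitting only what still needs to be bought. A minimal sketch, with illustrative data rather than the app's actual representation:

```python
def shopping_list(required, at_home):
    """Return the ingredients still missing, sorted for a stable list."""
    return sorted(set(required) - set(at_home))
```

The app feeds `at_home` from the items it detected in the fridge photos, and `required` from the named dish's recipe.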
2: active air quality monitor
The price of the PurpleAir sensor was shockingly high for its functionality, so we "borrowed" a Sensirion SPS30 sensor, which is probably the most accurate retail-available air quality sensor, at less than $45 retail. Paired with a DHT11 (less than $3 retail) and a cheap microcontroller with WiFi (ESP32, less than $7 retail), it is possible to build our sensor for a hardware price of under $55. This setup senses particles from PM10 levels all the way down to PM0.5 levels. This is important because respiratory droplets, airborne pathogens and micro-pollutants can be detected at this level of granularity (we actually tested whether a cough or sneeze can be detected, and our sensor can do this: video here - ). Our setup uses a DHT11 for temperature and humidity, but we could also use a better sensor like a BME280.
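The cough/sneeze detection described above can be sketched as a spike test on the particle readings: a sudden jump relative to the recent baseline is flagged as an event. This is an illustrative Python stand-in (the device itself runs on a microcontroller), and the window size and threshold factor are made up, not calibrated:

```python
def detect_spikes(pm_readings, window=5, factor=3.0):
    """Return indices where a reading jumps above `factor` x the rolling mean."""
    events = []
    for i in range(window, len(pm_readings)):
        baseline = sum(pm_readings[i - window:i]) / window
        if baseline > 0 and pm_readings[i] > factor * baseline:
            events.append(i)
    return events
```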
basic video
https://youtu.be/QD7vP-gXwbU
hardware video 2:
https://youtu.be/iIwDxE88KfY
How we built it
hard work and perseverance and no sleep
Machine learning: used models from GCP Cloud Vision and Clarifai
OCR: used Google Cloud and Tesseract
Servers: used Python Flask and GCP Functions
React Native to build a cross-platform app
Adobe Illustrator for designing the logo, assets, UI/UX
Hardware: Sensirion SPS30 particle sensor, DHT11 temperature and humidity sensor, Arduino, Raspberry Pi (didn't make the UI, ran out of time :( )
Ngrok tunnels everywhere
Node.js/Express: push notifications server
Scientific references
https://www.sensirion.com/fileadmin/user_upload/customers/sensirion/Dokumente/9.6_Particulate_Matter/Datasheets/Sensirion_PM_Sensors_SPS30_Datasheet.pdf
https://www.ncbi.nlm.nih.gov/books/NBK143281/#:~:text=Published%20data%20have%20suggested%20that,the%20same%20number%20as%20talking
Challenges we ran into
Coding for the SPS30 was challenging (we had to adapt some things directly from the datasheet).
Accomplishments that we're proud of
The things all work.
What we learned
It's hard work integrating everything.
What's next for FridgeRemindr
Better integration of components
More robust hardware and casing
Better notification system
Placing orders automatically for essential groceries
Built With
adobe-illustrator
arduino
clarifai
dht11
raspberry-pi
react-native
sps30
Try it out
github.com | FridgeRemindr | Automated grocery shopping | ['Ebtesam Haque', 'Muntaser Syed'] | [] | ['adobe-illustrator', 'arduino', 'clarifai', 'dht11', 'raspberry-pi', 'react-native', 'sps30'] | 4 |
10,356 | https://devpost.com/software/postup | The Post Up! Logo
Inspiration
In this Age of Information, the Internet possesses undeniable power in shaping lives around the globe. Each post has the ability to make an impact, but we often struggle to find the right words to match each photo. That's why we created
Post Up!
, an app designed generate just the right caption for any picture you desire.
What it does
With
Post Up!
, the user has the ability to upload a photo of their choice, and with just a few easy clicks, a caption will automatically be generated for direct use or inspiration!
How we built it
The primary software we used in creating the app was Android Studio. We used the Google Vision API to scan images and label objects. Using those labels, we were able to use the Paper Quotes API to generate a quote to serve as the picture's caption.
Challenges we ran into
This app required many new skills our team was not used to, so it was difficult to code and debug the errors we faced. The Google Vision API is not documented well and we spent hours trying to figure out the authentication process.
Accomplishments that we're proud of
We're proud that we were able to create a clean-looking, easy-to-navigate app that accomplishes what we aim to do, despite being relatively unfamiliar with the tools during the start of the project.
What we learned
Because this project contained many unknown elements, our team was able to learn a lot about working with Android Studio, utilizing JSON, and Google Vision API.
What's next for
PostUp!
?
We here on the
Post Up!
Development Team recognize the importance of innovating and keeping up with technological advancements, so we have a lot of ideas on how to improve this prototype! For example, we aim to expand the app's abilities to include generating hashtags and posting to other social media sites directly from the app, and in the future, we may include features that help automatically edit a photo to better match your feed.
Built With
android-studio
google-vision
java
json
xml
Try it out
github.com
github.com | PostUp! | An Android application that automatically generates photo captions. | ['Michael Zhao', 'Grace Liu'] | [] | ['android-studio', 'google-vision', 'java', 'json', 'xml'] | 5 |
10,356 | https://devpost.com/software/raven-g6xt92 | . | . | . | [] | [] | [] | 6 |
10,356 | https://devpost.com/software/taptask-werable-vg30uo | Inspiration
The need of the hour is to cut away from our screens and turn to effective solutions for automating daily tasks. We also wanted to explore the integration of embedded systems and wearable technology.
What it does
It uses a 4x4 keypad matrix made of conductive thread sewn onto the glove, enabling the user to assign a task to each key via the IFTTT app: switching on a bulb, starting a favourite playlist, calling a family member, scheduling a tweet, and much more. In the demo video, I've shown the prototype with a functional use case for blind people, who can use this device to manage phone calls through simple gestures.
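The key-to-task mapping above can be sketched as a lookup from a tap position on the 4x4 matrix to a pre-configured action. This is a Python illustration (the glove's firmware is C++ on the Arduino), and the task names and IFTTT event slugs are invented for the example:

```python
# Hypothetical assignment of finger regions (row, col) to automated tasks.
TASKS = {
    (0, 0): "ifttt-event:lights_on",
    (0, 1): "ifttt-event:play_playlist",
    (1, 0): "ifttt-event:call_family",
    (1, 1): "ifttt-event:schedule_tweet",
}

def task_for_tap(row, col):
    """Return the task assigned to a tap, or None if the key is unassigned."""
    if not (0 <= row < 4 and 0 <= col < 4):
        raise ValueError("tap outside the 4x4 matrix")
    return TASKS.get((row, col))
```

On the device, the returned event name would be sent to the matching IFTTT webhook to trigger the task.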
How I built it
I built it using conductive thread connected to an Arduino Uno programmed in C++. It also has a GSM module connected to the MCU, along with a microphone and an in-ear speaker.
Challenges I ran into
The major problem I faced while making the glove was stitching the conductive thread in different channels, as the threads often tend to touch each other through small stray yarns. Apart from this, a number of errors and problems came my way, but I tried to overcome them and continued to work on this idea.
Accomplishments that I'm proud of
Applying technology for a cause can empower the most vulnerable across the world. We have had strong people in the past, like Helen Keller, who blazed the trail with their achievements without such devices and support. How exciting it is, then, to think that such people can be equipped with technology in our times and achieve extraordinary feats. I look forward to adding features to my device such as smart assistants and GPS navigation, and ultimately empowering the user.
What I learned
How to integrate conductive thread into a physical circuit and further shaping it as a wearable
What's next for Taptask Werable
Refining the prototype into a seamless and lightweight glove for the masses to use.
Built With
arduino
c#
conductive-thread
esp8266
glove
php
Try it out
github.com | Taptask Werable Glove | It is a smart wearable glove allowing users to assign pre-configured functions to automate daily tasks, and further run them using a tap on finger regions. | ['Praveen Kumar', 'Albert Seins'] | [] | ['arduino', 'c#', 'conductive-thread', 'esp8266', 'glove', 'php'] | 7 |
10,356 | https://devpost.com/software/mrspinnacle | This is an example of what the extension displays on Chrome, except the caption now says seconds instead of ms.
Inspiration
We wanted to create an "online mom" of sorts, hence the name Mrs. Pinnacle, that could automate your productivity. The goal was for the product to track your browsing time online and provide suggestions on what activities to perform in real life rather than continuing to surf the web.
What it does
We created a Chrome Extension that tracks the time spent on websites and displays this in a bar graph.
How we built it
We used javascript with some help from open source code. More details are in the github repo linked below.
Challenges we ran into
Some of us were relatively inexperienced with javascript, so writing and debugging the code was challenging, and we were unable to have all the features we originally hoped for or to have it working as smoothly as we desired.
Accomplishments that we're proud of
We are proud that, despite being relatively inexperienced, we could create a project over the course of 40 hours that is the foundation for a stronger product.
What we learned
We learned a great deal of javascript, as well as lessons on teamwork, communication, and friendship.
What's next for MrsPinnacle
We would like to have the project track the user's selected tab more effectively, improve the visuals, and also provide suggestions on what activities to perform in real life rather than continuing to surf the web.
Built With
canvas
chrome
css
html
javascript
Try it out
github.com | MrsPinnacle | Automate and track your time spent online | ['ayc12345', 'Andrew S', 'Edward Lee'] | [] | ['canvas', 'chrome', 'css', 'html', 'javascript'] | 8 |
10,356 | https://devpost.com/software/blind-braille-board | Inspiration
m
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for b
Built With
swift | b | m | ['Maninder Singh'] | [] | ['swift'] | 9 |
10,356 | https://devpost.com/software/splitnews | SplitNews - in search of truth
About SplitNews
SplitNews Article View
Inspiration
This year alone, our team has spent hours scrolling through news applications, reading a stream of updates on the constant barrage of events in the world. We've seen firsthand how hard it can be to get a balanced view on a topic and how easy it can be to forget to even try. During these times, it's especially important that we have an easy and quick way to get information on a topic - with arguments coming from multiple perspectives.
To this end, we wanted to automate the process of searching for truth in the news - an application that automatically finds sources with varying biases, summarizing each position to allow the user to learn something about a topic without effort.
What it does
Our web application, SplitNews, allows the user to search for a specific topic, (for example, affirmative action), and quickly and easily see a summary of the topic, from both left and right leaning sources. We automate the process of looking for sources with varied biases, presenting the user with a list of articles from reputable sources from both sides of the aisle, as well as a short and a long summary for each of the articles. To spread awareness of implicit biases in news sources, each news source is color coded blue for left leaning and red for right leaning.
How I built it
We were able to construct our website using Flask for our backend and React in combination with HTML and CSS for our frontend. To distinguish biased news sources, we used a combination of website heuristics and a custom transformer neural network, trained using TensorFlow and Hugging Face's Transformers library on over 5000 news articles, achieving 75% accuracy. We used word and sentence tokenization in NLP to summarize the articles, and the Bing News Search API to obtain links and information relevant to the articles.
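The summarization step can be sketched with a simple frequency-based extractive summarizer; the scoring below is an illustrative stand-in for the tokenization pipeline described, not the project's actual code:

```python
import re
from collections import Counter

def summarize(text, n_sentences=2):
    """Pick the n highest-scoring sentences by word frequency (extractive summary)."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    words = re.findall(r'[a-z]+', text.lower())
    freq = Counter(words)
    # Score each sentence by the summed frequency of its words.
    scored = [(sum(freq[w] for w in re.findall(r'[a-z]+', s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:n_sentences]
    # Restore original sentence order for readability.
    return ' '.join(s for _, _, s in sorted(top, key=lambda t: t[1]))
```

A production summarizer would add stop-word removal and length normalization, but the shape of the computation is the same.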
Challenges I ran into
Getting a large enough dataset to train our neural network proved to be difficult - we ended up using Selenium and BeautifulSoup to scrape news articles from various sources, on a variety of topics, eventually ending up with about 5000 articles.
Accomplishments that I'm proud of
It was so much harder to coordinate with each other remotely - we're proud that we were able to work with each other through Zoom and of course we're especially proud of our Trello board.
What I learned
This project involved many first experiences for our team, from our first time working with React to our first time using the Hugging-Face transformer library.
What's next for SplitNews
For phase two, we're hoping to make a SplitNews Chrome Extension that automatically helps you any time you're reading the news, whether it's something you Googled or a link you received from someone else.
Built With
azure
flask
keras
react.js
spacy
tensorflow
Try it out
github.com | SplitNews | Automating the search for truth | ['Edward Li', 'Aditya Kannan', 'chess leopard', 'Robin Han'] | [] | ['azure', 'flask', 'keras', 'react.js', 'spacy', 'tensorflow'] | 10 |
10,356 | https://devpost.com/software/stratischaritytracker | Whole Story
This app was developed for the Stratis hackathon. It is a blockchain-based charity app deployed on the Ethereum test network.
The core purpose of this app is to provide a transparent charity platform on the blockchain. Since the rise of the coronavirus, a huge number of fake charities have emerged to exploit people during these tough times. As thousands of people are diagnosed with COVID-19 and are unable to work, many are finding it hard to make ends meet and are asking for donations. At the same time, scammers are creating fake GoFundMe pages designed to tug at your heartstrings and empty your wallet.
This issue has been solved and addressed by a blockchain-based transparent charity web app to ensure people do not get cheated as there is transparency in each and every step. By minimizing administrative costs through automation, providing more accountability through traceable giving milestones, and allowing donors to see more clearly where their funds are going, blockchain may help restore some of the lost credibility to charities that prove worthy of the public’s trust.
This web app is backed by a smart contract written in C#. The Stratis platform, along with web3, has been used to integrate the smart contract with a React frontend, and the DApp has been deployed on the Ethereum network. The first dashboard page shows the list of open charities that a user can contribute to. These are sample charities that have registered on the platform. An organization can register itself on the network by giving details about itself and setting a minimum contribution that it wants from each donor.
Anyone willing to donate can view a charity and get all information about it. People can also contribute Ether to charities using a basic MetaMask account. Those who contribute more than the minimum contribution set for that particular charity automatically become approvers and later have a say in how their money is spent. This ensures transparency and prevents fraud. The organization can then make requests to spend the money for welfare or other purposes. All the approvers who have contributed to the charity can see each request, and a majority vote of the approvers is required to execute the transaction. This whole logic is backed by a smart contract and works on trusted consensus. End-to-end transactions have been implemented with the help of MetaMask, and the whole web app is deployed on the Ethereum Rinkeby test network.
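The contribution-and-approval flow described above can be modeled in a few lines. This Python sketch only illustrates the logic (the actual contract is written in C# on Stratis), and all names are ours:

```python
class CharityCampaign:
    """Toy model of the approval flow: contributors above the minimum become
    approvers, and a spending request executes only with a strict majority
    of approver votes."""

    def __init__(self, minimum_contribution):
        self.minimum = minimum_contribution
        self.balance = 0
        self.approvers = set()

    def contribute(self, donor, amount):
        self.balance += amount
        if amount >= self.minimum:
            self.approvers.add(donor)   # large donors get voting rights

    def execute_request(self, amount, votes_for):
        # Majority consensus: more than half of all approvers must vote yes.
        approved = len(votes_for & self.approvers) > len(self.approvers) / 2
        if approved and amount <= self.balance:
            self.balance -= amount
        return approved
```

On-chain, the same checks would run inside the contract so no single party can bypass them.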
Built With
c#
ethereum
metamask
react
stratis
web3
Try it out
github.com | StratisCharityTracker | This app is a blockchain-based charity app deployed built using the Stratis platform, which instills transparency and prevents frauds. | [] | [] | ['c#', 'ethereum', 'metamask', 'react', 'stratis', 'web3'] | 11 |
10,356 | https://devpost.com/software/voyageur-be0zw5 | Inspiration
We had won the grand prize at an APAC-level hackathon and got the chance to go to a startup conference in Barcelona this summer. But due to the pandemic, it was canceled. Now, we are not sure we'll go even if the conference is held next year. We realized this would be the case for many travelers, both leisure and business. One of the major challenges would be helping travelers regain the confidence to travel.
We decided to do something to encourage people to travel, by assuring them of safety.
Our solution addresses these problems:
Hygiene & Health
Hack the very definition of travel in ways that address the health concerns long after the COVID-19 slowdown has passed. These inventions must demonstrate how the well-being of travelers and businesses is preserved or improved during travel rather than put at risk.
Sustainability & Relief
With pre-pandemic travel practices as your starting point, and new business models and social/environmental impacts as your guide, invent new responsible, socially impactful products and services that drive exceptional travel experiences and economic growth. Also, with public health and employment concerns at the forefront now, demonstrate how your invention measurably enhances economic conditions in regional and local destinations around the world.
What it does
We have created software services for hotels, airports, parks, restaurants, museums, theatres, and other enclosed private tourist spots. Our system will automatically detect whether people are following social distancing and whether they are wearing masks or not, from CCTV footage. The owners of the place will be alerted if someone is not following the rules.
These places can advertise that they’re using an automated system to ensure safety, and this will attract more tourists.
The other facet of our solution is an Android app for travelers/tourists. Users can pick a destination and a date of interest. We will show them the updates of that area, and give the estimated number of cases. This estimation is based on a predictive ML model.
This will help users make an informed decision and they can postpone their trip well in advance, without losing out money on cancellation charges. This will also help air travel companies and hotels, who have to bear losses if a person cancels their stay.
How we built it
We took a sample recording of CCTV camera footage. An image-processing model detects and classifies bounding boxes based on the distance between people in the video. We also have a mask-detection algorithm, built using a CNN, which checks whether people are wearing masks and draws a bounding box around each face, so the viewer knows the number of people violating the norms.
These models were built in Python.
The website for private owners (of the hotel, market, tourist spot) was built using React, Firebase, and Node. The mobile app for users (tourists) was built using Android Studio and Firebase.
Challenges we ran into
The main challenge was integrating everything into an app, along with designing a robust business model. Maintaining data privacy was another major challenge. To ensure privacy, we only display aggregate numbers, i.e., the index of people maintaining social distancing and the index of people wearing masks. As this will be sold as a service, there will be no intervention from our end with the data.
Tech used
We used machine learning to detect whether people are following COVID-19 norms: social distancing and wearing masks. We used the YOLO model to detect people. Once that task is achieved, we calculate the Euclidean distance between the bounding boxes (the output of the ML model). We check whether the distance falls below a threshold that we have defined; if so, we mark such a group of people with a red bounding box. For mask detection, we made a custom CNN model: we annotated the mask images and trained the convolutional neural network on them. The model detects the presence of masks in the frame and draws a bounding box around them.
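The distance check between detected people can be sketched as follows, assuming YOLO has already produced bounding boxes; the box format and threshold here are illustrative:

```python
import math

def flag_violations(boxes, min_distance):
    """Given bounding boxes as (x1, y1, x2, y2), flag pairs whose centroids
    are closer than min_distance (the social-distancing threshold)."""
    centroids = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in boxes]
    violating = set()
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            # Euclidean distance between the two box centers.
            if math.dist(centroids[i], centroids[j]) < min_distance:
                violating.update({i, j})   # these boxes would be drawn in red
    return violating
```

In a real deployment the pixel threshold would be calibrated against the camera's perspective so it corresponds to a physical distance.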
For COVID19 trend prediction, we have used data of a place named Mumbai in India. We have used Recurrent neural networks(RNN) for such prediction because of the fact it can remember past trends and predict future trends based on past knowledge.
For the website, React, Node, and Firebase were used. For mobile apps, Android Studio, Google SDK, Covid19 API, and Firebase were used.
Accomplishments that we're proud of
We are proud that our project will help many travelers, both leisure and business, in the aftermath of this pandemic by helping them regain the confidence to travel. We are really happy to be part of the change that will encourage people to travel safely.
What we learned
We learned to zero down our idea and focus on one domain.
What's next for Voyageur
The next plan would be to host our entire application on the cloud. The ML models and the backend will be deployed on the cloud. In phase 1, we would like to try out this solution locally. We will tie-up with local hotel chains and tourist spots in Mumbai and devise a basic billing plan to start earning revenue. We will also release our app for tourists on the play store. After these iterations and learning from the results, we would like to partner with more places and or a company like Trivago which can in turn sell these services to its partners.
Built With
android-studio
angular.js
firebase
keras
machine-learning
tensorflow | SafeT | Artificial intelligence-powered mobile application-Making people believe in travel again. | ['Vedant Kumar'] | [] | ['android-studio', 'angular.js', 'firebase', 'keras', 'machine-learning', 'tensorflow'] | 12 |
10,356 | https://devpost.com/software/cartshare | Signin Up
Creating Items
Neighbors' Wishlists
Notifications
Inspiration
In this day and age, many people are unable to go out and get their necessities, especially due to COVID-19.
What it does
It allows a user to create a shopping list, and have certain items wishlisted. These wishlisted items will be available for neighbors to see, so that they can lend a helping hand and purchase them for the person in need. It also creates notifications for the user when an item is bought for them, thereby organizing the neighborhood as it helps its vulnerable population.
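The wishlist-and-notification flow can be sketched as a toy model; this illustrates the behavior described, not the actual Go server code, and all names are ours:

```python
class Neighborhood:
    """Toy model: neighbors see each other's wishlisted items, and buying
    one removes it from the wishlist and notifies its owner."""

    def __init__(self):
        self.wishlists = {}       # user -> set of wishlisted items
        self.notifications = {}   # user -> list of messages

    def wishlist(self, user, item):
        self.wishlists.setdefault(user, set()).add(item)
        self.notifications.setdefault(user, [])

    def visible_items(self, viewer):
        # Neighbors see every wishlisted item except their own.
        return {u: items for u, items in self.wishlists.items() if u != viewer}

    def purchase(self, buyer, owner, item):
        if item in self.wishlists.get(owner, set()):
            self.wishlists[owner].remove(item)
            self.notifications[owner].append(f"{buyer} bought {item} for you")
```

The real application performs the same bookkeeping server-side, which is why most of this functionality is "hidden from users but working for them behind the scenes."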
How we built it
We built the server in Go and the application in Vue.js.
Challenges we ran into
Getting the Vue.js application to communicate with the Go server, especially with CORS errors.
Accomplishments that we're proud of
Building a full-stack application in a day, and building an application with advanced functionality which is mostly hidden from users but working for them behind the scenes.
What we learned
We learned about different methods of problem solving and debugging for web apps and servers.
What's next for CartShare
Adding a better UI and more security features to be able to expand it to the world.
Built With
bootstrap
css
go
html
javascript
vue
Try it out
github.com
github.com | CartShare | A full-stack application that focuses on automating the process of asking for help from the neighborhood when purchasing items, which is especially helpful for the elderly and immunocompromised. | ["Sa'ar Lipshitz", 'Ethan Davis', 'Darryl Yeo', 'Giancarlo Garcia Deleon'] | [] | ['bootstrap', 'css', 'go', 'html', 'javascript', 'vue'] | 13 |
10,356 | https://devpost.com/software/smarttracker-covid19 | Inspiration :
Nowadays the whole world is facing the novel coronavirus. This Android app was created to track the spread of the virus country-wise (confirmed cases, deaths, and recoveries) and to spread awareness about COVID-19.
What it does :
The Android app, named 'SmartTracker-Covid-19', was created to spread awareness about the COVID-19 virus. The app includes the following functionality:
CoronaEx Section -
This section has the following sub-components:
• News tab: The latest news updates. Fake news seems to be spreading just as fast as the virus, but since we integrate only official sources, users can steer clear of fake news.
• World Statistic tab: Real-time dashboard that tracks recent COVID-19 cases across the world.
• India Statistic tab: Coronavirus cases across different states in India, with the relevant death and recovery counts.
• Prevention tab: Preventive measures to follow in order to defeat the coronavirus.
CoronaQuiz section - a quiz that helps people learn about the coronavirus and its effects on the human body. It chooses random questions, shows the correct answer for each, and at the end the user sees their highest score.
Helpline Section - As this application is made particularly for Indian citizens, all Indian state helpline numbers are included.
Chatbot Section - A self-service bot made to help people navigate the coronavirus situation.
Common questions: Start screening, What is COVID-19?, What are the symptoms?
How we built it :
We built it using Android Studio. For the quiz section we used an SQLite database, and live news data is integrated from the News API. For the coronavirus statistics we collected data from Worldometer and Coronameter.
Challenges we ran into :
Integrating the chatbot into the application.
Accomplishments that we're proud of :
Though it was our first attempt at creating a chatbot, we managed to raise our skills to a new level.
What's next for SmartTracker-COVID19 :
To improve the quality of its conversations, we will keep working on the chatbot.
Built With
android-studio
chatbot
java
news
quiz
sqlite
Try it out
github.com | SmartTracker-COVID-19 | Android app to track the spread of Corona Virus (COVID-19). | ['Pramod Paratabadi', 'Supriya Shivanand Madiwal .'] | ['Best Use of Microsoft Azure'] | ['android-studio', 'chatbot', 'java', 'news', 'quiz', 'sqlite'] | 14 |
10,356 | https://devpost.com/software/eagle-sight-cbny23 | This is a cool website.
Try it out.
Built With
css3
html5
Try it out
www.eaglesight.tech | Eagle Sight | Cool Eye checking game. | ['Senuka Rathnayake'] | [] | ['css3', 'html5'] | 15 |
10,356 | https://devpost.com/software/pop-up | Inspiration
When approaching recruiters, you have to print those long resumes that people barely glance through... Recruitment and building your professional network can be a challenging thing. There is a limit to the number of Linkedin requests you make in a day and people barely glance at your projects and your experience. Our idea is to create a new personalized AR Business Card that will be an add-on to your Linkedin profile; showcasing components such as videos of project work, links, AR headers (gifs), and much more!
What it does
There are two components to this hack:
There is a Web App for the AR component (where you can view the AR)
There is a Mobile App for creating and generating your personalized QR code.
Our app adds you to our AR 'professional network' once you sign up. It asks for your personal information, including:
Name, Description/Title, Quick Links (Github, LinkedIn, etc), Contact-Links (email, phone), Optional Headers (gifs which showcase who you are or a project), Optional Experience/Status
For those who don't have the mobile app, they can still view the user's info by scanning the QR code with their camera app.
How I built it
Our mobile app uses Apache Cordova, and our forms were all built with Vue and React JS
The AR Web Component is built with AR.JS & React
Challenges I ran into
Watch the video! We describe all of them
Accomplishments that I'm proud of
Our app works well!
What's next for Pop Up
We want to turn this into an extension for LinkedIn. LinkedIn has its own QR code component, so creating an AR Business Card overlay on top of the LinkedIn QR code would be our next step. Also, we want to extend the scope of our current form so that it is more customizable.
Built With
ajax
apache
google-cloud
react
vue
Try it out
github.com
pop-up-ar.web.app
qrcodes-app.web.app | Pop Up | Business Cards are boring :( Create interactive AR/Static Webpage Business Cards that encompass all professional user information including: resume, LinkedIn, etc. | ['Aman Adhav', 'Roman Koval'] | [] | ['ajax', 'apache', 'google-cloud', 'react', 'vue'] | 16 |
10,367 | https://devpost.com/software/earthquake-watch | Motivation
Social media, most notably Twitter, has played a key role in the distribution of information during earthquakes. People use Twitter to alert and inform other citizens, and this is where media outlets source much of their early-stage information. Yet, in many less developed countries across the world, this source of information is underutilized. Emergency response systems are often alerted hours or days after the earthquake takes place.
Solution
Earthquake Watch is a real-time worldwide earthquake monitoring platform that collects data from Twitter, extracts relevant information through Natural Language Processing, and predicts the locations and relative magnitudes of earthquakes through Latent Dirichlet Allocation and deep learning models. It can be used as a tool to allow the appropriate respondents, such as disaster relief agencies and humanitarian organizations, to more quickly act upon earthquakes across the world.
Challenges I ran into
No matter how much relevant information there is on Twitter, there is always an equal amount, if not more, of irrelevant thoughts, advertisements, and even misinformation. One of the major challenges of this project was to process these tweets and extract the most relevant data through Natural Language Processing and learning models.
How I built it
Earthquake Watch is entirely built on Open Source software. I gathered data with Twitter's API, used TensorFlow for modeling, and nltk and gensim for Natural Language Processing inference. I served up my application with Flask.
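The relevance-filtering step can be illustrated with a deliberately simplified keyword scorer. The real pipeline uses nltk/gensim and trained models, so the term list and threshold below are purely illustrative:

```python
import re

EARTHQUAKE_TERMS = {"earthquake", "quake", "tremor", "shaking", "magnitude"}

def relevance_score(tweet):
    """Fraction of earthquake-related terms in a tweet -- a (very) simplified
    stand-in for the NLP relevance filtering described above."""
    tokens = re.findall(r"[a-z]+", tweet.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in EARTHQUAKE_TERMS)
    return hits / len(tokens)

def filter_relevant(tweets, threshold=0.1):
    # Keep only tweets scoring above the relevance threshold.
    return [t for t in tweets if relevance_score(t) >= threshold]
```

Topic models such as LDA replace the fixed term list with learned topic distributions, but the filtering structure is the same.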
What's next
As well as improving the accuracy of Earthquake Watch's predictions, I plan to migrate the application to AWS infrastructure in order to improve performance and scalability.
Built With
tensorflow
twitter | Earthquake Watch - Bithacks 2020 | Real-time worldwide earthquake monitoring by analyzing tweets with NLP and deep learning models | [] | ['2nd-6th Overall'] | ['tensorflow', 'twitter'] | 0 |
10,367 | https://devpost.com/software/bitbuzz | Inspiration
Learning
What it does
hackathon
How I built it
python
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for BitBuzz
Built With
python | BitBuzz | A cool Bithack Project | ['Sambhav Kumar Thakur'] | [] | ['python'] | 1 |
10,367 | https://devpost.com/software/endangared-defender | We Won Best AR/VR Hack Award!
Try It Yourself
Want to try out EndangARed Defender yourself? Check it out at
https://andrewdimmer.github.io/endangared-defender/
. Note: Due to time constraints, we currently only support AR on iOS 11 or higher. Also, we recommend using smaller images on mobile devices, as the TensorFlow model has high CPU usage.
Inspiration
Human activity on Earth is putting dozens of animal species in imminent danger of extinction. Whether it’s because of poaching, or loss of habitats caused by climate change, pollution, and human encroachment, many species’ very survival hangs in the balance.
The good news about these human-caused problems is that they have the potential to be human-solved before it’s too late.
Unfortunately, one-size-fits-all universal solutions aren’t effective because of the huge range of different animals and habitats. A solution to help critically endangered Sumatran elephants, for example, probably wouldn’t help Hawksbill sea turtles. Therefore, the very first step in trying to save an endangered species is finding out how many animals of that species are left, and where they’re currently living. This is a hugely time-consuming and massively labor-intensive process, which requires enormous amounts of money and great numbers of highly organized, highly trained conservationists to locate, track, and monitor the animals.
And that’s exactly why we built EndangARed Defender. Our web app combines the nearly unlimited power of crowdsourcing and AI object detection to easily and inexpensively locate, track, and monitor endangered species. This has the double benefit of freeing specialized wildlife protection organizations to focus more of their time, money, and resources on specific conservation efforts, while simultaneously increasing public awareness of endangered species.
What it does
EndangAR Defender allows civilian volunteers, conservationists, and tourists to help track the range and population size of endangered animals without needing specialized training. All they need to do is take pictures of target species in the wild, and upload the pictures to our web app. Then a machine learning object detection model identifies and logs the animal sightings based on what was in each picture. We do this by pulling the geotagging information from the picture, then display the sightings over time to the user via Google Maps.
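Once the geotag is pulled from a picture's EXIF data, the degrees/minutes/seconds values must be converted to the signed decimal degrees that Google Maps expects. Here is a minimal sketch of that conversion (the EXIF reading itself, typically done with a library such as Pillow, is omitted):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS coordinates (degrees/minutes/seconds plus an
    N/S/E/W reference) to signed decimal degrees for map display."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal
```

For example, `dms_to_decimal(40, 26, 46.0, "N")` gives roughly 40.4461 degrees north.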
We also help provide users with information about the animals that they have photographed, as well as provide information about how they can get involved in that animal’s preservation. This includes providing the current endangered status, a 3D model so users can see it up close, and links to organizations that help conserve that animal.
How we built it
We started by building our machine learning model in Google Auto ML Vision. To do this, we first collected as many different images of the sample animals as we could. Then, we wrote python scripts to handle things like bulk renaming and CSV generation, before labeling each image and training the model. In particular, we used an object detection model so we can identify, count, and track multiple animals (of the same or different types) all in one image.
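The CSV-generation step might look like the sketch below. The column layout follows the AutoML Vision object-detection CSV format as we understand it (split, image path, label, then normalized box corners with padding columns), and the bucket paths and labels are hypothetical:

```python
import csv
import io

def automl_csv(rows):
    """Build AutoML-style labeling rows. Each input row: dataset split,
    GCS path, label, and a (x_min, y_min, x_max, y_max) normalized box."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for split, path, label, (x_min, y_min, x_max, y_max) in rows:
        # Empty fields are placeholders for the unused corner columns.
        writer.writerow([split, path, label, x_min, y_min, "", "",
                         x_max, y_max, "", ""])
    return buf.getvalue()
```

A script like this lets you label thousands of images in bulk rather than entering boxes one at a time in the console.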
From there, we built a web app to allow users to upload photos and view information about the animals in the photos. We also connected Google Maps to display information about where that animal has been sighted recently. This allows us to track the range and estimate the size of the population over time as users upload more images.
Finally, we used echoAR to upload, host, and display 3D models of the animal(s) identified in the picture so that users can see them up close, even from the comfort of their own homes.
Challenges we ran into
It took us forever to figure out how to access file properties after the files were uploaded. Eventually, we figured out that the best way to handle this was to not do the tag processing on the front end, but rather to send it to a server where we could use a better FileReader to access that information.
Accomplishments that we're proud of
We were really happy with the accuracy with which our machine learning model works. In addition, we’re really happy that we completed all of the main features that we wanted to make over the course of the hackathon.
What we learned
We learned a lot about file tags and how we can access file properties like the date taken and any attached geotagging data. We also learned more about each of the different types of image classification/detection models, and when to use each one.
What's next for EndangARed Defender
We’d like to expand the machine learning model to include more species of animals. We’d also like to see if we can integrate with the Image-Based Ecological Information System (IBEIS) project which analyzes animal photos scraped from internet sources such as Flickr and Facebook and applies computer vision and active learning methods to detect the animals, identify the species, and even identify individual animals. Their AI techniques can identify unique animals as long as they have stripes, wrinkles, or other unique textures.
We’d also like to add some other aspects to the machine learning model to see if we can identify items in the picture that might indicate the presence of hunters, farmers, or poachers. We’d also like to add features to the tracking system to see if the growth of human settlements has influenced the habitat, as we can track the range over time.
Built With
echoar
google-auto-ml
google-maps
material-ui
react
typescript
Try it out
andrewdimmer.github.io
github.com | EndangARed Defender | Using global cooperation to promote global conservation | ['Andrew Dimmer', 'Nathan Dimmer'] | [] | ['echoar', 'google-auto-ml', 'google-maps', 'material-ui', 'react', 'typescript'] | 2 |
10,367 | https://devpost.com/software/hiv-drug-prediction-model | Logo
Inspiration
Our inspiration was our professor, who described the current worldwide scenario of HIV infections and the population diagnosed with AIDS. The constant aim of developing something for social good was a further source of motivation.
What it does
Our project is an HIV Regulator that can detect HIV drugs from their molecular structure. It performs feature extraction by constructing a graph structure representing each molecule and calculating 1D, 2D, and 3D descriptors for the molecules.
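As an example of a 2D descriptor computed from a molecular graph, here is a dependency-free sketch of the Wiener index (in practice a toolkit like RDKit supplies this and many more descriptors); atom numbering and bond lists are illustrative:

```python
from itertools import combinations

def wiener_index(n_atoms, bonds):
    """Wiener index: the sum of shortest-path distances between all atom
    pairs in the molecular graph, computed here with Floyd-Warshall."""
    INF = float("inf")
    dist = [[0 if i == j else INF for j in range(n_atoms)]
            for i in range(n_atoms)]
    for a, b in bonds:
        dist[a][b] = dist[b][a] = 1   # each bond is an edge of length 1
    for k in range(n_atoms):
        for i in range(n_atoms):
            for j in range(n_atoms):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return sum(dist[i][j] for i, j in combinations(range(n_atoms), 2))
```

For a three-carbon chain (propane's skeleton, atoms 0-1-2) the pairwise distances are 1, 1, and 2, giving a Wiener index of 4.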
How I built it
We used specialized libraries such as PyTorch, TensorFlow, RDKit, and Torch to build the model on appropriate datasets. All references are provided in the documentation.
Challenges I ran into
We faced various issues, such as choosing which classification algorithm to use for the prediction; even after cross-validating various classifiers, settling on a particular one was difficult, as there hasn't been much prior work on this.
Accomplishments that I'm proud of
We are proud to have achieved promising accuracy and better F-scores, and to have worked out how we wanted the data to be processed and augmented.
What I learned
We had a great time learning about the various molecules and their structures. Implementing graph structures on top of 3D molecules was a particularly great thing to learn.
What's next for HIV drug prediction model
We will improve the presentation by using a notebook such as Jupyter or Colab to present all the files together in a clean format, and work on improving the algorithm with a CNN classifier along with Class Activation Maps.
Built With
python-package-index
pytorch
rdkit
tensorflow
wbp-systems-torch
Try it out
github.com | HIV Regulator | Perform feature extraction by constructing graph structure representing the molecules and calculating 1D 2D and 3D descriptors for the molecules.Our aim is to develop a model to detect the HIV drugs. | ['Abhik Chakraborty'] | [] | ['python-package-index', 'pytorch', 'rdkit', 'tensorflow', 'wbp-systems-torch'] | 3 |
10,367 | https://devpost.com/software/smarttracker-covid19 | Inspiration :
Nowadays the whole world is facing the novel coronavirus. This Android app was created to track the spread of the virus country-wise (confirmed cases, deaths, and recoveries) and to spread awareness about COVID-19.
What it does :
The Android app, named 'SmartTracker-Covid-19', was created to spread awareness about the COVID-19 virus. The app includes the following functionality:
CoronaEx Section -
This section has the following sub-components:
• News tab: The latest news updates. Fake news seems to be spreading just as fast as the virus, but since we integrate only official sources, users can steer clear of fake news.
• World Statistic tab: Real-time dashboard that tracks recent COVID-19 cases across the world.
• India Statistic tab: Coronavirus cases across different states in India, with the relevant death and recovery counts.
• Prevention tab: Preventive measures to follow in order to defeat the coronavirus.
CoronaQuiz section - a quiz that helps people learn about the coronavirus and its effects on the human body. It chooses random questions, shows the correct answer for each, and at the end the user sees their highest score.
Helpline Section - As this application is made particularly for Indian citizens, all Indian state helpline numbers are included.
Chatbot Section - A self-service bot made to help people navigate the coronavirus situation.
Common questions: Start screening, What is COVID-19?, What are the symptoms?
How we built it :
We built it using Android Studio. For the quiz section we used an SQLite database, and live news data is integrated from the News API. For the coronavirus statistics we collected data from Worldometer and Coronameter.
Challenges we ran into :
Integrating the chatbot into the application.
Accomplishments that we're proud of :
Though it was our first attempt at creating a chatbot, we managed to raise our skills to a new level.
What's next for SmartTracker-COVID19 :
To improve the quality of its conversations, we will keep working on the chatbot.
Built With
android-studio
chatbot
java
news
quiz
sqlite
Try it out
github.com | SmartTracker-COVID-19 | Android app to track the spread of Corona Virus (COVID-19). | ['Pramod Paratabadi', 'Supriya Shivanand Madiwal .'] | ['Best Use of Microsoft Azure'] | ['android-studio', 'chatbot', 'java', 'news', 'quiz', 'sqlite'] | 4 |
10,367 | https://devpost.com/software/findmyevent-i46hf1 | Inspiration
For our project, we wanted to focus on an important issue, Social Activism, that affects people in both our community and across the world. We realized that many people in this area are oblivious to the events occurring in today’s society and we wanted to create something that can help us change this narrative.
What it does
FindMyEvent provides event coordinators everywhere with a free service to organize events and recruit individuals for a team in order to improve their communities.
How we built it
We used a Bootstrap template to create a user-friendly front end, and used JavaScript to transfer information between the client and host. This information included event details such as name, date, and location.
Challenges we ran into
We faced many challenges while developing the website; from getting a functional map to properly storing event information. Along with this, we also struggled to adapt to the workflow of a team environment, since we had conflicting schedules and were unfamiliar with aspects of the project development process. However, by conducting more research and establishing better communication, we were able to overcome these hardships and successfully build our website.
Accomplishments that we’re proud of
We were able to develop this project with skills we were previously inexperienced with. By finding the resources necessary, we were able to obtain the skills needed in order to successfully accomplish our goals. This hackathon allowed us the opportunity to grow, not only as computer scientists, but also as contributors to our communities.
What we learned
Team collaboration and communication is critical in order to efficiently develop a project. Without the strong team-based environment we created for ourselves, we would not have been able to complete this project in such a short time frame.
What's next for FindMyEvent
Polishing and expanding FindMyEvent in order to make it readily accessible for communities around the world.
Built With
bootstrap
css
html
javascript
mapbox
Try it out
findmyevent.netlify.com | FindMyEvent | A simple new way to organize and find events near you. | ['Abhi Nayak', 'Skyler Gao', 'Jay gandhi'] | ['Think Boards'] | ['bootstrap', 'css', 'html', 'javascript', 'mapbox'] | 5 |
10,367 | https://devpost.com/software/airwrite-xdo781 | The drawing of the first letter commences!
Inspiration
From hours of long calculus homework to intense days of trying to input a diagram into Google Docs, we could take no more. Handwritten work is hard to upload and poorly supported by many online platforms, yet typing everything out is no substitute. With many high school and college students in mind, we wanted to change how teachers can share handwritten notes online, and how students can submit work the same way. COVID has driven roughly a twelvefold increase in the use of online learning for schools all across the US, and in an effort to make remote education work better, we pursued AirWrite!
What it does
Our program runs on a computer and tracks your finger's location frame by frame through the computer's wide-angle camera. You are then able to draw and create new diagrams, letters, words, and numbers, simply with the movements of your fingers.
How we built it
For this project, we used one of our personal favorite programming languages: Python. We chose OpenCV to build our computer vision model and NumPy arrays for frame-by-frame image processing, feeding each frame into the model.
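The core of the per-frame processing can be sketched without a camera: threshold a frame into a mask and take the centroid of the masked pixels as the fingertip position. This is an illustrative sketch, not the project's actual code; in the real pipeline the mask would come from OpenCV (e.g. `cv2.inRange` on live video frames), and `fingertip_centroid` is a hypothetical helper name.

```python
import numpy as np

def fingertip_centroid(frame, thresh=200):
    # Return the (x, y) centroid of pixels brighter than `thresh`,
    # or None if nothing in the frame passes the threshold.
    ys, xs = np.nonzero(frame > thresh)
    if xs.size == 0:
        return None
    return int(xs.mean()), int(ys.mean())

# Simulate a 10x10 grayscale frame with a bright 2x2 "fingertip"
frame = np.zeros((10, 10), dtype=np.uint8)
frame[3:5, 6:8] = 255
print(fingertip_centroid(frame))  # (6, 3)
```

Appending each frame's centroid to a list gives the stroke; drawing line segments between consecutive centroids renders the writing on screen.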
Challenges we ran into
At first, we were unsure of how to start the program and what we needed to do to even recognize movements using OpenCV. However, over countless hours of research and Stack Overflow articles, we were able to create the final product that is currently in the GitHub repository linked below. We still struggle with the frame rate of our machine's camera output, and have a difficult time finding lighting conditions under which our algorithm can run at its best.
Accomplishments that we're proud of
We are glad we were able to create a functional algorithm that detects writing in the air! We truly believe this is the next revolutionary idea in the education industry and that algorithms like these can make writing in the classroom so much easier. We hope this project can gain tons of outreach, and we can teach kids the power of AI, and Computer Vision and how it can affect almost every aspect of our lives.
What we learned
We learned a great amount about functionalities of the Python modules, OpenCV and NumPy. The fact that we could create such a complex algorithm with just three imported modules shows how technology has come to progress over the years. We loved the way that the simplicity of different libraries, can come together, and we love how we were able to incorporate them in this way.
What's next for AirWrite
We hope to implement this algorithm into an application that can be used by teachers and students across the world. This way, the classroom can once again be revolutionized as new technology takes the world by storm, following the internet age. The future of writing is truly here!
Built With
collections
numpy
opencv
python
Try it out
github.com | AirWrite | Magically, the future of writing is here! | ['Shrey Jain', 'Shashank Vemuri'] | [] | ['collections', 'numpy', 'opencv', 'python'] | 6 |
10,367 | https://devpost.com/software/decentralized-pandemic-reserve-dpr | DPR Logo
Voting on Need using R-Value
Inspiration
As hospitals and essential businesses compete for necessary supplies, the identification of need is skewed by racial, political, and socio-economic biases. Policies are being driven by short-sighted and inaccurate data with little understanding of the virus's real-time effective reproduction number, leaving communities scrambling for an effective plan of action. We are inspired by the DAO's ability to function without hierarchical management, removing biases and giving a voice back to people, rather than skewed policy leaders, during a time of global crisis.
What it does
The Decentralized Pandemic Reserve (DPR) aims to create an autonomous supply-chain consortium that matches individual and manufacturer resources with the areas most in need. Our end-to-end solution addresses data storage, data retrieval, data validity, supply chain, and governance. We use a predictive model to identify the coronavirus's real-time effective reproduction number, train that model through a DAO supported by decentralized voters, and assess proposals from hospitals or other entities in need of resources against data models of activity (healthcare.gov/covid or healthdata.org/covid) to deliver supplies.
How we built it
PySyft is a Python library for secure and private deep learning. PySyft decouples private data from model training using Federated Learning, Differential Privacy, and Encrypted Computation (such as Multi-Party Computation (MPC) and Homomorphic Encryption (HE)) within the main deep learning frameworks like PyTorch and TensorFlow.
We deployed a Jupyter notebook and used PySyft, OpenMined, and PyGrid to implement Differential Privacy and Federated Learning on the model.
For the DAO component, we utilized Aragon and its smart contracts for ease of deployment, token functions, and the voting mechanism. A React front end ties together the data component, the DAO component, and the R values and other data from the Jupyter notebook and the OCEAN protocol.
Challenges we ran into
Initially, we were too consumed with figuring out the supply chain component, but quickly realized we could leverage cohorts such as https://make4covid.co/ and others across the globe to handle supplies and manufacturing production to meet demand. Since they are a growing coalition of designers, engineers, and manufacturers, we could shift our project to focus on distribution and the areas of greatest need.
Accomplishments that we're proud of
We were able to streamline the data used to understand COVID-19, moving from the daily count of reported cases to the coronavirus's real-time effective reproduction number, that is, its actual ability to spread at a particular time. Using this data, we can continuously train the model with a DAO, putting the power of choice back into the hands of the people. A modified voting mechanism informs the community about relevant data points such as the supply recipient, location, (R) infection rate, and reputation.
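The submission does not include the estimator itself, but the idea of a real-time reproduction number can be illustrated with a deliberately naive sketch. The function name, serial interval, and case numbers below are ours, not the project's; production estimators (e.g. Cori et al.) use a full infectivity profile rather than a single-lag ratio.

```python
def naive_rt(daily_cases, serial_interval=4):
    # Crude R_t proxy: new cases today divided by new cases one
    # serial interval (roughly one infection generation) earlier.
    rts = []
    for t in range(serial_interval, len(daily_cases)):
        prev = daily_cases[t - serial_interval]
        rts.append(daily_cases[t] / prev if prev else float("nan"))
    return rts

cases = [10, 12, 15, 20, 26, 33, 40, 52]
print(naive_rt(cases))  # every value well above 1: the outbreak is growing
```

A DAO could vote on which estimator and parameters to trust, then route supplies toward the regions with the highest current values.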
What we learned
We learned that people are afraid and feel powerless as decisions are made that they have little say in or control over. We also learned that our project is modular and can be replicated for more than just this use case. In effect, we have centrally built a decentralized, global data science department.
What's next for Decentralized Pandemic Reserve (DPR)
We hope to gain feedback and insight from the Devpost community and continue building our project, as we believe it will have long-term, scalable impact. We believe DPR has the potential to decentralize voting, enforce decision-making and price transparency, and build the data economy of the future. It can be used for a variety of supply chains; more importantly, with enough training it will eventually reduce human bias in the decision chain.
Presentation: https://docs.google.com/presentation/d/1JOKOpt5lD0M9_JQbWPOhPo9YH1qLv3inADb9C5DItlQ/edit?usp=sharing
Built With
aragon
dao
juypter
ocean
python
react
Try it out
github.com
indigotheory.invisionapp.com | Decentralized Pandemic Reserve (DPR) | DPR is an R-value backed DAO matching individual and manufacturer resources with the areas most in need. We're solving pandemics with data driven consensus and governance. | ['RISHABH CHAKRABARTY', 'Alex Gardner', 'William Sterling', 'Ron Stoner', 'Rahul Bishnoi'] | ['1st place'] | ['aragon', 'dao', 'juypter', 'ocean', 'python', 'react'] | 7 |
10,367 | https://devpost.com/software/covid-heal | Home page
Information on how to stay healthy
Face Touch Reminder with Artificial Intelligence
Live news
Remedies, memes, quotes, and videos
Inspiration
As a result of the current situation, the COVID-19 pandemic, our fellow family and friends are in a state of urgency. As of right now, the problem is that we have not been able to find a cure for COVID-19. With lockdowns happening everywhere, people are staying at home, quarantined, we realized that the world is in a dire situation and requires a lot of help. So, we (three high school students) decided to help out by creating a web app.
What It Does
COVID-HEAL encompasses many different and helpful functionalities:
•Built-in Artificial Intelligence that detects when you are touching your face while at your computer and then notifies you. This functionality works even in the background and is accurate enough to catch you picking your nose.
•The latest news about the COVID-19 virus based on your location.
•A live meter of the number of cases, deaths, and recoveries.
•A relax page that takes away the "corona-anxiety" that everyone is getting. This page includes light-hearted jokes and memes, special music that is proven to heal disease, and motivational quotes. On top of that, this page provides helpful tips and strategies credited by nurses and doctors to help you stay healthy at home.
How We Built It
This app is built using many languages and frameworks. The front end is built with HTML, CSS, jQuery, and Bootstrap, and the back end with Node.js. The frameworks and APIs we used include TensorFlow, Smartable, Bootstrap, Express.js, and Forismatic. These allowed us to make the web app responsive, use accurate COVID-19 data, and apply Artificial Intelligence effectively.
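The face-touch check itself reduces to a geometric test once detection models have produced bounding boxes. Below is a minimal sketch of that final step with made-up coordinates; in the real app the boxes would come from TensorFlow face and hand detectors running on webcam frames, and the function name is ours.

```python
def boxes_overlap(a, b):
    # Axis-aligned overlap test between two (x1, y1, x2, y2) boxes.
    # An overlapping hand box and face box would trigger the
    # "you touched your face" notification.
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

face = (100, 80, 220, 240)
hand_far = (300, 300, 360, 380)
hand_near = (180, 200, 260, 300)
print(boxes_overlap(face, hand_far))   # False
print(boxes_overlap(face, hand_near))  # True
```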
Since we couldn’t meet each other, we planned through voice chats & voice calls. For the version control system, we used Git, and we deployed the web app using Heroku.
We decided to divide and conquer when building the app in order to use our time efficiently. Labdhi worked on a majority of the frontend, which involved designing the web app. Ashay worked on the backend and Artificial Intelligence feature, which involved implementing the Machine Learning models. Sohum worked with the APIs that gave us data on news, statistics, and memes.
Challenges We Ran Into
We found that it was a bit difficult planning & coding when we weren’t able to see each other in person, but we became flexible and learned to use Google Docs, voice calls, and Git.
Often, we came into issues when we weren’t able to merge/combine our code without conflicts. We solved this problem by using Git and using an iterative process when combining our code.
Accomplishments That We Are Proud Of
Within 4 days, we accomplished many coding feats and incorporated unique features. We were able to use machine learning models that could effectively detect when someone touches their face. We were able to pull data from numerous sources using APIs and display this information. Most importantly, we were able to deploy the web app, obtain a custom domain name for the app, covidheal.org, and spread awareness.
What We Learned
Throughout the 4 days we worked on COVID-HEAL, we had the opportunity to learn a lot. Coding-wise, we sharpened our knowledge on the application development process, from brain-storming to deploying. We learned how to effectively use Git to develop a web app. We also learned how to deploy the app on Heroku and add a custom domain name to the app.
Along with that, we learned how to use our coding skills to help out and spread awareness. We got to ask people what they really need to fight COVID-19. In the current situation, no cure exists for COVID-19 patients, and that has caused a lot of unnecessary fear. We made it our goal to take this fear away and provide a sense of positivity and security to people around the world. This is a new and unique idea, different from the existing media already in place, because it gives a positive vibe rather than a cynical one.
What's Next For COVID-HEAL
Even though our web app is already up and running, there are several future steps we have in mind. We hope to make the web app more specific and interactive in future versions. The website could be made more specific to the user by giving more local news, such as coronavirus-related events occurring in the user's hometown. More information about clean practices that limit the spread of the virus for people in quarantine would be beneficial, as would more references to phone numbers that can be called with further questions about the virus. We also believe that adding more at-home precautions and treatments would spark a lot of interest.
Built With
bootstrap
css3
html5
javascript
node.js
opencv
smartable
tensorflow
Try it out
covidheal.org | COVID-HEAL | An all-in-one tool, COVID-HEAL reminds you every time you touch your face, will keep you updated to the latest news based on your location, and provide helpful tips and strategies to avoid COVID-19. | ['Labdhi Jain', 'Sohum Bhole', 'Ashay Parikh'] | ['Challenge Winner'] | ['bootstrap', 'css3', 'html5', 'javascript', 'node.js', 'opencv', 'smartable', 'tensorflow'] | 8 |
10,367 | https://devpost.com/software/mood-match | Logo
Sample conversation
Inspiration
Many times, it is very difficult to pick a song that is perfect for your current mood. Especially during such an unprecedented time in our world, it is hard for people to understand how they feel, let alone understand how to feel better. A plethora of research studies have proven that music is truly one of the best ways to brighten someone’s day and MoodMatch aims at doing just that. This issue of not being able to understand one’s true emotions and grasp what is needed to acquire a more positive mindset inspired me to create MoodMatch.
What it does
MoodMatch is a Facebook Messenger chatbot that recommends songs based on a user's current mood. MoodMatch gauges the user's current emotions by asking several personalized questions, then applies sentiment analysis algorithms to deliver several songs that will cheer the user up.
How I built it
Before this project, I was very new to the ideas of creating a project. This project introduced me to the end-to-end programming pipeline. I started by first mapping out what I had to do:
Obtain a list of songs, artists, and lyrics of the songs
Perform sentiment analysis on the songs to understand what kinds of feelings each song exhibits
Create a messenger bot
Program the messenger bot to ask the user questions
Process the results (sentiment analysis on the chosen responses)
Recommend songs with similar, or slightly higher, sentiment levels to cheer the user up
I followed this process thoroughly and used the Microsoft Azure Cognitive Services Text Analytics API to perform sentiment analysis. I also used various other APIs, including Spotify and Genius Lyrics.
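The recommendation step can be sketched as a nearest-neighbour search in sentiment space. Everything below is illustrative: the song titles, scores, and the 0.1 "uplift" offset are invented, and in the app the scores would come from the Azure sentiment API.

```python
def recommend(user_score, songs, k=3):
    # Aim slightly above the user's sentiment (0..1 scale) to gently
    # lift the mood, then pick the k songs closest to that target.
    target = min(user_score + 0.1, 1.0)
    return sorted(songs, key=lambda s: abs(s[1] - target))[:k]

songs = [("Blue Rain", 0.2), ("Steady On", 0.5),
         ("Good Day", 0.75), ("Sky High", 0.95)]
print(recommend(0.4, songs, k=2))  # [('Steady On', 0.5), ('Good Day', 0.75)]
```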
Originally, I had meant for MoodMatch to pop up a Messenger Webview and encourage users to fill out a form that would then allow for the processing of data, however, Facebook’s innovative Quick Replies module on Messenger allowed me to make MoodMatch more interactive and personalized for users.
Furthermore, all of the data regarding songs and the various components associated with it are stored using AWS S3, making this application a perfect model of cloud-based computing. Through this project and several past instances, I understood the growing power of cloud-based applications in our world and how leveraging the cloud through applications such as AWS and Azure will undoubtedly ameliorate your app.
Challenges I ran into
As I was very new to the end-to-end programming process, I ran into many challenges, including often not understanding what was wrong. My first challenge in completing MoodMatch was deploying my Messenger bot on a Heroku server: many times I received an Error 400 or Error 500 stating that the GET request could not be found. Once I got over this hump by fixing a few lines of code, I was faced with several other tough mountains:
Obtaining results from the HTML form and using ExpressJS to deliver POST requests
Returning a list of songs and executing my algorithm inside an async function
Transitioning from a bland messenger webview to more interactive Quick Replies
I often asked friends or even posted on StackOverflow to get some of my questions answered and get over these challenges. This project truly wouldn’t have been possible without others who helped me get over these challenges.
Accomplishments that I'm proud of
Creating a bot was not something I could have imagined myself doing a few months ago, but now I have fully learned one example of the end-to-end programming pipeline. Getting over the aforementioned challenges and not giving up throughout the process has not only given me more technical knowledge, but has also made me more confident in my coding ability.
What I learned
I learned an end-to-end programming pipeline in full. In particular, I learned the ins and outs of the various APIs I worked with and how to deploy and manage server-based applications like a Messenger chatbot.
What's next for MoodMatch
A lot is in store for MoodMatch, from automating song updates to adding more factors that may influence someone's choice of song. I plan to auto-refresh the list of songs already scraped from the web so that users continuously get different songs. An account system in which users are never recommended the same song twice is also on the list. Lastly, many factors go into one's feelings and current emotions, which definitely cannot all be captured by the 2 questions MoodMatch currently asks. I plan to add more questions, including an open-ended one where users type in a response and emotion analysis is performed on their input. A few of these are already in the works, so be on the lookout for upcoming updates ;)
Built With
billboard
facebook
genius
html
javascript
node.js
python
spotify
Try it out
www.messenger.com | MoodMatch | Ever feel down, but don't know how to cheer up? Ever partying, trying to have fun but not knowing what song to play? Fear no more, MoodMatch is here, giving you the right song at the right time. | ['Viren Khandal'] | ['Track Winner: Entertainment and Silly'] | ['billboard', 'facebook', 'genius', 'html', 'javascript', 'node.js', 'python', 'spotify'] | 9 |
10,367 | https://devpost.com/software/test-dv8pka | World map
Inspiration
Since COVID-19 has caused lockdowns around the whole world and more people have become aware of the seriousness of the pandemic, we chose to use this hackathon to build a website highlighting the importance of social distancing in preventing the pandemic's spread.
What it does
In our application, people can check whether the trend of new COVID-19 cases within a country is increasing or decreasing. This can help prevent new coronavirus cases within countries: if people are aware of a potential increase in new COVID-19 cases in the near future, they are more likely to follow lockdown rules and won't travel to friends or relatives. Following lockdown rules will, in turn, help establish a decreasing trend in new coronavirus cases. Stay safe and let's take care of each other!
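The increasing/decreasing label can be derived with a heuristic as simple as comparing two consecutive weekly averages. This sketch is ours, not the site's Keras model; the function name and case numbers are illustrative.

```python
def trend(daily_cases, window=7):
    # Compare the mean of the most recent `window` days with the
    # mean of the `window` days immediately before it.
    recent = sum(daily_cases[-window:]) / window
    previous = sum(daily_cases[-2 * window:-window]) / window
    if recent > previous:
        return "increasing"
    if recent < previous:
        return "decreasing"
    return "flat"

two_weeks = [50, 48, 52, 49, 51, 50, 47, 40, 38, 35, 33, 30, 28, 25]
print(trend(two_weeks))  # decreasing
```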
How we built it
We wanted to use data to help us understand COVID-19 better, so we researched where people were traveling during lockdowns. Using the Google Global Mobility Report, we found that as soon as lockdown started, people began traveling more to friends' or relatives' houses as restrictions on grocery trips and train travel became stricter. Lockdowns are difficult, and sometimes good company is all you need; while lockdowns are still in effect, remember to follow CDC guidelines for staying safe. It's hard for many people not to be social, so we wanted to help as many people as possible stay safe.
Challenges we ran into
The main challenge in implementing this project was combining each teammate's work, written in different programming languages, into the website platform.
Accomplishments that we're proud of
Through this project, we combined machine learning, data science, front end, back end, and UI work. The front-end and back-end sides also needed good communication to decide how best to present our ideas to the public, and we are proud to have established a productive environment.
What we learned
We learned how to combine different professional programming skills into a whole project, such as front-end and back-end development, UI design, machine learning, and data science. Teamwork is important to keep a project like this consistent.
What's next for CoPrevent
We can make the website's front end smoother and improve the machine learning component to predict more COVID-19 case data. Cloud services could streamline continuous delivery of up-to-date datasets, ensuring the analytics and predictions stay current.
Built With
bokeh
css
flask
heroku
html
javascript
jquery
jvectormap
keras
numpy
pandas
python
tensorflow
Try it out
coprevent.herokuapp.com
github.com | CoPrevent | We create an interactive website for people to recognize the importance of social distancing to prevent COVID-19 | ['Justin Viola', 'Chiao Wang', 'Illia Halych'] | [] | ['bokeh', 'css', 'flask', 'heroku', 'html', 'javascript', 'jquery', 'jvectormap', 'keras', 'numpy', 'pandas', 'python', 'tensorflow'] | 10 |
10,367 | https://devpost.com/software/helmprotech | In the making of our machine learning model, this is us testing the helmet/motorcycle detection.
The beginning of our responsive web application. This is the home page with a brief introduction to the software and mission.
More reasoning on our home page as to why you should use our service :)
This is where we're going to have users upload images to be detected, later to be a real time video camera system.
This is a section of the results page where users get to see scanned license plates, helmets, motorcycles, and statistics.
Inspiration
A big problem in countries such as India is road danger due to people not wearing helmets or other appropriate safety gear. With over 37 million motorized two-wheelers representing 75% of the operational vehicles in India, and roughly 34 million motorized two-wheelers in China, the number of people not wearing helmets and not taking proper safety precautions is higher than ever. According to reports from the Indian government, at least 98 two-wheeler riders without helmets died daily in 2017. That's over 35,000 lives in a full year that could have been saved just by wearing a helmet. Furthermore, according to China's Ministry of Public Safety, brain injury was the leading cause of death in the 80% of fatal traffic accidents involving motorized two-wheelers, and wearing a helmet can reduce the risk of such a fatality by about 60 to 70 percent.
What it does
Our Flask application takes as input images of motorcycle riders on the street. With these images, we use YOLO to count the motorcycles and the helmets in the picture and compare the two counts. If there are more motorcycles than helmets (signifying that some riders are not wearing the proper gear), we map the license plates to the motorcyclists without helmets and use OCR to read and save the plates that violated the rules.
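The counting logic described above is straightforward once YOLO has produced labelled detections. Here is a sketch with made-up labels and confidences; the actual class names and threshold depend on how the model was trained, and `helmet_violations` is a hypothetical helper.

```python
from collections import Counter

def helmet_violations(detections, min_conf=0.5):
    # Count confident detections per class; the shortfall of helmets
    # relative to motorcycles approximates riders without helmets.
    counts = Counter(label for label, conf in detections if conf > min_conf)
    return max(counts["motorcycle"] - counts["helmet"], 0)

frame = [("motorcycle", 0.92), ("motorcycle", 0.88),
         ("helmet", 0.81), ("person", 0.95), ("helmet", 0.42)]
print(helmet_violations(frame))  # 1 -> scan license plates in this frame
```

A non-zero result is what would trigger the license plate OCR step for that image.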
How I built it
To analyse the images, we used:
YOLO, for finding the objects in the picture.
OpenCV, for reading and parsing the images.
OCR in UiPath Studios, for reading the license plates.
In addition, we created an accompanying web application using:
Flask
HTML, CSS, and JS
TensorFlow, Keras, Scikit Learn
Pandas, Numpy, Matplotlib
Google Cloud Platform, App Engine
Challenges I ran into
We were working across different time zones, which made it difficult to coordinate, especially when combining our code at the end. It was also difficult to find working models for license plate identification, especially since motorcycle license plates are incredibly small, and we went through several libraries and models before settling on reading the plates with OCR. We also spent a decent amount of time perfecting our dataset for detecting motorcyclists without helmets, due to a lack of resources and images online. Another big challenge towards the end was deploying our web application: since our machine learning model files were so heavy, we knew we had to use a robust web service. We started learning and following the documentation for Google Cloud Platform's App Engine; the docs weren't the clearest, so it took a lot of trial and error to perfect our config files to support our specific app with its heavy machine learning files.
Accomplishments that I'm proud of
We're proud that we were able to create an application that uses computer vision to identify motorcycles and helmets and read license plates. We're also proud to have put our app together within the time frame and created an impactful product for our communities. Another big accomplishment was our live deployment, as we figured out how to utilize Google Cloud Platform to host our web app.
What I learned
All of our team members had limited knowledge of OpenCV and YOLO before the hackathon; we used this project to expand our computer vision skills. In addition, three of our teammates learned how to collaborate on GitHub using the version control software Git, including techniques for avoiding merge conflicts in a shared coding project. We also learned to follow Google Cloud Platform's docs more efficiently and find the right information in a shorter period of time.
What's next for HelmProtech
Our current end product is a functional proof of concept, but it doesn't yet connect to outside security cameras. Our next step is to make the software work with real-time video cameras and footage to help law enforcement keep track of riders without proper safety gear.
Built With
css3
favicon
flask
google-app-engine
google-cloud
html5
javascript
keras
matplotlib
numpy
opencv
pandas
python
scikit-learn
scss
tensorflow | HelmProtech | A health web application that detects if users are wearing helmets and captures an image of their license plates for law enforcement in an effort to promote safe transportation. | ['Veer Gadodia', 'Agnes Sharan', 'Shreya C', 'Nand Vinchhi'] | ['Track: Best Advanced Hack - Second Place'] | ['css3', 'favicon', 'flask', 'google-app-engine', 'google-cloud', 'html5', 'javascript', 'keras', 'matplotlib', 'numpy', 'opencv', 'pandas', 'python', 'scikit-learn', 'scss', 'tensorflow'] | 11 |
10,367 | https://devpost.com/software/again-vui0w1 | Inspiration
A few days before the start of the quarantine in Morocco, we were walking down the street and saw a homeless man trying to find food. Walking back home, we wondered what he could do if a quarantine were imposed on us Moroccans. A few days later, that is exactly what happened: we were quarantined. Thinking about the man we had seen, we started brainstorming solutions we could build, as computer science enthusiasts, to help him and many others in the same situation find shelter, especially during this tough time when they can easily be infected by the virus, and just as easily spread it. After seeing Covidathon, we believed this was our chance to make our solution reach more people and to take the first step in making an impact.
What it does
Again is a solution that aims to secure shelter for homeless people during the lockdown by matching associations and organizations that work with homeless people with house donors.
The solution also creates jobs for people who have lost theirs, by recruiting them as application reviewers (more details about this below).
To secure shelter for homeless people, the application allows users to create accounts as an association, a house owner, or an application reviewer. Each type of user enters useful information about themselves when registering (details about the registration information required from each type can be found on the demo site):
As a house owner: anyone who possesses a house or multiple houses can donate them via the application by filling a house donating application. The application asks for information about the house/s that the user would like to donate. This information includes the location of the house, the area, but most importantly a document proving that the user owns that house. The purpose of this proof is to reduce the wasted time after matching an association with a user that does not really possess the house. This proof document will be processed by an AI system that will either validate it or not. If the document is validated, it will be available to applications' reviewers to match it with an association. If not, the donor’s application will be withdrawn. After the donated houses have been matched with an association or more associations (if there are many houses that a lot of associations can use), the contact of the donor is given to the associations so that they coordinate to finalize the donation process.
As an association: after registering in the application, associations can submit applications asking for matching with a donor. An approximate number of homeless people who will benefit from the donation should be specified in the application. It is then the job of applications' reviewers to review the application and decide on a match with a donor.
As an application reviewer: applications’ reviewers are people recruited through the application in order to review the associations’ applications and match them to house/s donors. To be an applications' reviewer, one must apply to the job through the website (applications are available in case of need when the amount of applications is too much). Applicants must provide their personal information, but most importantly, proof of losing their job because of the pandemic. This proof can be of any kind: a screenshot of an email of firing (the email should be forwarded later to make sure it comes from a recruiter, a document..). This proof of losing a job, plus the first-come, first-served basis, and the description of the need in the application are the factors that the admins are going to rely on when assessing applications. Each applications' reviewer will get associations’ applications on a weekly basis. Their job is to assess the need for associations and match them with house donors in the same locations. They also have to distribute the houses in an optimal way taking the need and the impact into consideration. Applications reviewers get paid from donations to the web application. These donations have nothing to do with the house/s donations, they are monetary donations that can be done through the web application to a specific bank account for this purpose. Anyone can donate including people not registered under any type in the application. More on how application reviewers get paid in the section below.
Payment Policy
Application reviewers get paid from donations. Since donations are unpredictable, our team came up with the following solution. Reviewers receive a token for each application reviewed, and thus for each association matched with a donor. The value of a token changes weekly depending on the donations received. Here is a hypothetical scenario: we have 3 application reviewers who have reviewed 10 applications each, meaning each has earned 10 tokens, or 30 tokens in total. The donations received that week total $300, implying that a token is worth $10; each reviewer therefore receives $100 for the week. However, this method breaks down if a given week's donations are very high. Suppose, in the same scenario, donations total $30,000: a token would be worth $1,000 and each reviewer would earn $10,000 for a single week. That would be unfair to reviewers who join in later weeks, when donations are much lower. To solve this problem, we cap the maximum worth of a token, so that when donations are high we save the surplus for later weeks.
Going back to our scenario, if we set the maximum worth of a token to be $20, and having 30 tokens to issue, we will spend $600 and save $29,400 for upcoming weeks.
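The payout rule described above can be sketched as a small calculation (a minimal sketch; the function and variable names are illustrative, and only the $20 cap comes from the text):

```python
def weekly_payouts(tokens_per_reviewer, donations, max_token_value=20.0):
    """Split this week's donations across reviewers' tokens, capping
    the per-token value and carrying any surplus to later weeks."""
    total_tokens = sum(tokens_per_reviewer.values())
    token_value = min(donations / total_tokens, max_token_value)
    payouts = {name: n * token_value for name, n in tokens_per_reviewer.items()}
    carryover = donations - token_value * total_tokens
    return payouts, carryover

# The scenario from the text: 3 reviewers, 10 tokens each, $300 donated
# -> each reviewer earns $100, nothing is carried over.
payouts, carry = weekly_payouts({"a": 10, "b": 10, "c": 10}, 300)

# High-donation week: $30,000 with a $20 cap spends $600 and
# saves $29,400 for upcoming weeks.
payouts, carry = weekly_payouts({"a": 10, "b": 10, "c": 10}, 30000)
```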
Important notes:
Before associations submit their applications, they have to agree to some terms and conditions. An important condition is that the associations should engage the beneficiaries in society by making them help either by doing a job, volunteering or helping other homeless people. The goal of the application is not only to find shelter for these people but to try to engage them in society especially during these tough times when we all have to unite.
Link to the document about using AI in Again:
[
https://docs.google.com/document/d/1RNNpGf3MIhp-lksVtGzXkH7Tb91Ilw4gRw7AJmu27bA/edit?usp=sharing
]
How we built it
To build our web app Again, we (team members) divided the work into three parts:
The front-end part (Mohamed Moumou): designing each web page in the app, writing the story of Again and all the scripts in the web app, and building the actual front end using the React framework.
The back-end part (Ouissal Moumou): This part consisted of designing the database and building the actual back-end of our web app using the express.js framework, MongoDB(for the database), and APIs.
Deployment (Ouissal Moumou & Mohamed Moumou): We used Heroku to deploy both the back-end and the front-end apps.
Accomplishments that we're proud of
The Again team is very proud to be thinking about homeless people while everyone else is thinking about the problems of those who have homes. This does not mean those problems are not urgent, but there is a huge part of society that has struggled and is now struggling even more because of the COVID-19 outbreak, and that needs urgent help and re-integration. Another accomplishment we are proud of is that our idea provides jobs for people who have lost theirs.
What's next for Again
1- Implementing AI solutions in our App,
2- Adapting the services offered by the app to every country's laws,
3- Making our web app available in many languages (Arabic, French...).
Helpful hints about running the application in our demo site:
http://againproject.herokuapp.com/
If the page returns an error message from Heroku, just refresh the page and it will work.
Here are some login credentials for quick testing of the application:
For an association:
email: tasnimelfallah@gmail.com
password: Tasnim123
For a house/s donator:
email: mohamedjalil@gmail.com
password: yay yay
For an application reviewer:
email: badr@again.com
password: Badr123
Note: the information and metrics shown on our app are fictional.
Built With
heroku
javascript
mongodb
node.js
react
rest-apis
uikits
Try it out
againproject.herokuapp.com
againbackend.herokuapp.com
github.com
github.com
docs.google.com | Again | Again is a solution that aims at securing a shelter for homeless people during the lockdown by matching associations and organizations that deal with homeless people and house donators. | ['Mohamed MOUMOU', 'Ouissal Moumou'] | ['The Wolfram Award'] | ['heroku', 'javascript', 'mongodb', 'node.js', 'react', 'rest-apis', 'uikits'] | 12 |
10,367 | https://devpost.com/software/find-a-peaceful-protest-fi9ka2 | Inspiration
When I watched the news, I noticed how many protests around the world support different communities and movements. However, I also noticed many protests had violence, which often was not related to the movement itself. Violence is a reason many people who want to support their movement don’t attend these protests. For this hackathon, I decided to build something to help people connect and find peaceful protests.
What it does
Find a Peaceful Protest is an interactive map with user-generated content. It allows users to add new protests and find existing ones. It also has a review system where users can add their own account of the protest and images of the protest.
To reduce the number of fake protests and reviews, Find A Peaceful Protest has a smart system that calculates your reliability. The more reliable you are, the greater the impact of your new protests and reviews.
Additionally, a color-coded map navigation system makes it easy to see which protests are right for you. Markers on the map are color-coded according to how peaceful the protest is rated, and the opacity of each marker is set according to how reliable it is.
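One way to derive the marker style described above can be sketched as follows (the thresholds, rating scale, and opacity floor here are hypothetical; the real app applies this styling client-side in Leaflet.js):

```python
def marker_style(peaceful_rating, reliability):
    """Map a 0-5 peacefulness rating to a marker color and a 0-1
    reliability score to marker opacity."""
    if peaceful_rating >= 4:
        color = "green"      # consistently peaceful
    elif peaceful_rating >= 2.5:
        color = "orange"     # mixed reports
    else:
        color = "red"        # reported violence
    # Less reliable reports render more transparent, with a floor
    # so markers never disappear entirely.
    opacity = max(0.2, min(1.0, reliability))
    return {"color": color, "opacity": opacity}

# A well-reviewed protest reported by a highly reliable user:
style = marker_style(4.6, 0.9)   # -> {"color": "green", "opacity": 0.9}
```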
How I built it
For the map, I used Leaflet.js for the user interface, OpenStreetMap for the map content, and ESRI for geocoding and reverse geocoding. For the backend, I used Firebase Realtime Database to store data and Firebase Storage for image storage. For the general structure of the website, I used HTML/CSS and JavaScript, as well as W3.CSS. I used Firebase Hosting to host my site.
Challenges I ran into
I was not familiar with the Leaflet.js library when starting the project, so it took some time to understand and learn it. Additionally, I had some problems with getting an image URL from Firebase Storage, but eventually figured it out after reading the documentation.
Accomplishments that I'm proud of
I am proud of figuring out how to implement the Leaflet.js map without prior experience in using the library.
What I learned
I learned a lot about Leaflet.js, as well as Geocoding and accessing geolocation in the browser.
What's next for Find A Peaceful Protest
I plan to implement an account system for Find A Peaceful Protest to further increase reliability of data and repeated reviews from the same person.
Built With
esri
firebase
html5
leaflet.js
openstreetmap
Try it out
github.com
find-a-peaceful-protest.web.app | Find A Peaceful Protest | Not attending protests because of violence? Find a Peaceful Protest today. | ['Benjamin Man'] | [] | ['esri', 'firebase', 'html5', 'leaflet.js', 'openstreetmap'] | 13 |
10,367 | https://devpost.com/software/well-beings | WQS self assessment test layout
Media kit
Mind map
Messenger screen #1
Messenger screen #2
Inspiration
Mental disorders affect one in four people. Treatments are available, but nearly two-thirds of people with a known mental disorder never seek help from a health professional. The stigma around mental health is a big reason why people don’t get help. This needs to change. By changing the attitude towards mental health in a community setup, we believe we can create a domino effect of more people opening up as a result of increased social and sympathetic views on mental health.
Our Solution - Wellbeings: A Community
Wellbeings is a Mental Health Community. Unlike most mental health communities, Wellbeings is inclusive to even people that are unaware of mental health problems. This community is called Wellbeings because we want to de-stigmatize mental health.
Our solution to the problem is to provide access to vital information so that people can educate themselves on types of mental health problems, identify any warning signs by a quick self-assessment, information, and resources including helplines, advice on helping someone else, tips on wellbeing, etc.
We want this done in the most interactive way possible, which we believe we can achieve by creating a chatbot and a community that is synonymous with peer support groups. We want to focus on the idea that people with mental illnesses are not abnormal or some isolated group of people, but as many as 1 in 4 people in the world will be affected by mental disorders at some point in their lives. By creating a community, we want to reach out to the victims as well as the general public because they are likely to know someone who suffers from mental illness.
Collectively in a community setup, we harness a "me too" feeling and help members become advocates of mental health.
To sum up, we aim to
advocate the importance of mental wellbeing,
make information accessible and available,
tackle stigma,
empower community,
support people by aiding recovery through early identification & intervention.
Who are we?
We are a team of 4 people - which consists of a developer, a designer, and 2 doctors. All of us share a common vision to improve the intricate health system with the use of revolutionary technologies. Mental health is one of the issues we feel strongly about.
How we built it
Our messenger bot is powered by wit.ai to handle all the NLP tasks. The webhook managing all the backend logic and scoring is built with flask. For the bot flow, we have used Chatfeul. And for the self-diagnosis of disorder, we have used the WQS standardized test.
Challenges we ran into
Most people don't even know that they are suffering from some kind of mental distress, so they usually don't engage with apps and bots marketed as self-diagnostic/self-help apps. To reach that naive user, we have taken a community approach: we engage them through a welcoming community that they come to see as an answer to problems they didn't know they had. Once engaged, we can help them use our bot to take the assessment and learn about their mental well-being.
Most self-diagnostic tests available are lengthy or monotonous, so implementing them in a bot makes for a poor experience and a high user drop-out ratio. Our team of health professionals therefore selected the WQS from various standardized tests and modified it to be more interactive and less negative to increase the conversion ratio. The questionnaire has around 50 questions, but we made it dynamic so that users are not given disorder-specific questions if their response to the screening question is negative. For a typical user, the effective number of questions is around 15-20, which improves the number of people who complete the test.
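The branching behaviour described above can be sketched as follows (the question text is invented for illustration; the actual WQS content is not reproduced here):

```python
def questions_to_ask(sections, screen_answers):
    """Return the effective question list: each section starts with a
    screening question, and the disorder-specific follow-ups are
    included only when the screening answer was positive."""
    asked = []
    for section, positive in zip(sections, screen_answers):
        asked.append(section["screening"])
        if positive:
            asked.extend(section["followups"])
    return asked

# Hypothetical sections standing in for the real questionnaire.
sections = [
    {"screening": "Have you felt down most days?", "followups": ["f1", "f2", "f3"]},
    {"screening": "Do you often feel anxious?", "followups": ["g1", "g2"]},
]

# A user screening negative on both sections answers only the two
# screening questions instead of all seven.
short_path = questions_to_ask(sections, [False, False])
```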
What's next for Well Beings
We don't stop here. We aspire to engage as many people as we can and bring this to every person who is unknowingly possessed by this demon. We also want to educate the community about mental well being so that they can understand its importance and observe the silent cues of people in distress.
We plan on scaling this solution in the following ways
Incorporate health care professionals to help members with an accurate diagnosis
Add a database of country-wise helplines
Work on suicide prevention
Improve the self-help questionnaire
Make our bot even smarter (Thanks to wit.ai)
Incorporate CBT (Cognitive Behavioural Therapy) to assess and help people with mild symptoms here in our community only.
Built With
chatfuel
flask
glitch
messenger
wit.ai
Try it out
www.facebook.com
glitch.com
m.me
mm.tt
www.mindmeister.com
www.figma.com | Well Beings | Creating an organic space to nurture mental well being! | ['yash aggarwal', 'Uday Upreti', 'Pallavi Thakur', 'Banipreet Kaur'] | [] | ['chatfuel', 'flask', 'glitch', 'messenger', 'wit.ai'] | 14 |
10,367 | https://devpost.com/software/sistema-de-historico-de-pacientes-publicacao | mapa de telas
We developed a project for the public health sector aimed at serving the most underprivileged population. The project deals with managing patients' historical data. The idea is to develop a Data Lake (a repository of structured, semi-structured, and unstructured data) that stores patient data. The advantage of a Data Lake is that structured data (an existing database) can be associated with unstructured data such as exam images, etc., and machine learning techniques can be applied on top of this data. It is a data-integration environment without all the cost of multidimensional databases such as Data Warehouses and Data Marts. From this Data Lake, the plan is to develop a system capable of managing and querying the data: in short, an up-to-date, reliable, secure, managed database with data and information for patients and their doctors. Using Data Lake technology means this project can be used the same way anywhere in the world. Finally, we can use public health data for globally coordinated disease control, directly help those who need it most during a pandemic, and project future disease outbreaks through artificial intelligence and mathematical projection models.
Built With
data
flutter
Try it out
xd.adobe.com | Sistema de Historico de Pacientes | Take your life with you wherever you go! | ['Jose Alexandro Acha Gomes'] | [] | ['data', 'flutter'] | 15 |
10,368 | https://devpost.com/software/predent | Home Page
Government Page
Upload Crash Data
Generated Heatmap
Map with Risk Reports
Drivers Page
Submit Witnessed Accident
Resident Report Map
Data Visualization Page (Please look at Website)
Intensity Heatmap
FAQs
Inspiration
Throughout the last few months, our team has received our permits and begun driving for the first time. We are now able to experience how dangerous the roads and infrastructure are. With our eyes finally opened to such an understated and risky problem, we set out to help solve it using technology.
Car crashes remain the leading cause of death for people under 30, meaning it is incredibly critical to understand and attack this problem. Specifically, we were incredibly shocked to find out that during COVID-19, several states have found an increase in fatal car crashes. We noted that most technological solutions combating this problem focus on driver education and safety, and while this is incredibly important, we focused our efforts towards a less addressed approach.
Thus, we took a different approach from the common hackathon project.
Instead of creating an application meant for general use, we developed an application specifically for state and city governments. We plan to implement our software as part of a nationwide government plan to promote smarter designing of road infrastructure. Since governments often utilize outside developers to build applications, we believe our website fills a normally unoccupied niche, and projects like this should be encouraged in the hackathon community. However, we still made our website useful to normal drivers with useful features.
Thus, we developed PreDent, which analyzes road data through a machine learning algorithm to identify high-risk crash sites.
What it does
PreDent is a unique progressive web application that identifies the accident-prone areas of a city through machine learning. The core of our project is an ML model that inputs static features (speed limits, road signs, road curvature, traffic volume), weather (precipitation, temperature), human factors, and many other attributes to ultimately output a map of city roads with hotspots of where collisions are likely. Note that our demo shows the process, but because our model is incredibly complex and large, the only way for us to deploy it is to get access to expensive, high-powered servers. Our model will work on any city’s dataset, but the data would have to be collected or provided to us.
First, government officials can upload a csv file of their collected traffic data, which many already have in private storages. This file is uploaded to Google Cloud, and we then input it into our model. Once we finish processing their data, we notify them via email. Our model then outputs: 1) coordinates of crash sites, 2) specific issues at each crash site, and 3) a heat map overview of the city. Additionally, using the model-generated coordinates, we create an interactive map using the Google Maps API.
With this information at hand, city designers can informatively improve their roads by determining where to fix roads, add additional signs, adjust speed limits, and more. This information is essential for promoting safer roads and infrastructure.
We also have a page for common drivers. Residents from partner cities can find a map with the hotspots of where crashes are likely. These heatmaps change on an hourly basis and by time of year to account for rush hours and temperature/weather. The common pedestrian or driver can also help improve the efficiency of our model by inputting data about crashes in their neighborhoods, interactively placing pins on the map, which we aggregate with already collected data using Firebase.
Lastly, we have a Data Visualization page, where we show our process of analyzing data and determining which factors are important. We show our exploratory data analysis process and visualizations of key attributes. We used GeoPandas and Fiona to render these images. Instead of just uploading plots and graphs, we rendered our data into real dimensional visualizations and maps.
How we built it
After numerous hours of wireframing, conceptualizing key features, and outlining tasks, we divided the challenge amongst ourselves by assigning Ishaan to developing the UI/UX, Adithya to connecting the Firebase backend, Ayaan to managing, training, and developing the ML model and creating heatmaps, and Viraaj to developing our map system and integrating our heatmaps.
We coded the entire app in 4 languages: HTML, CSS, JavaScript, and Python (Python3/iPython). Developing and optimizing our geospatial ML model was done through Jupyter Notebook and AI Fabric from UiPath. We used JavaScript to create our maps and Google Cloud to store our data. We hosted our website through Netlify and GitHub.
After reading documentation, we developed our model and tested it on open-sourced data from Utah roads (from Medium) and produced the heatmaps. We also created a web scraper to collect data from state databases to create our training sets. We scraped weather and road-infrastructure databases to add to our available data. We pinpointed thousands of crash sites as our positive samples, and randomly sampled negatives from locations where crashes never occurred. We trained two models, a gradient boosting model and a neural network, and found that the gradient boosting model performed better. We documented all our progress in our Jupyter Notebook, which we recommend reading.
Challenges we ran into
The primary challenge that arose for us was training and deploying our model. It was incredibly difficult to find data; we only were able to find one publicly available dataset from Utah. In addition, since we have never created a geospatial ML model, developing our model and creating maps with hotspots was our main challenge. We read lots of documentation to learn how frameworks like ArcGIS work. While we were not able to deploy our model due to not having an affordable yet high-computation web server, we were able to make it functional regardless of the dataset, meaning as long as cities give us data, we can create heatmaps for them.
Accomplishments we are proud of
We are incredibly proud of how our team found a distinctive yet viable solution to revolutionize road development and driving. We are proud that we were able to develop one of our most advanced models so far, which was mostly possible through UiPath training. We are extremely proud of developing a solution that has never been previously considered or implemented in this setting and developing a working model.
What we learned
Our team found it incredibly fulfilling to use our Machine Learning knowledge in a way that could effectively assist governments in assessing roads and finding ways to make them safer, especially when there aren’t quick and effective ways to do so currently. Seeing how we could use our software engineering skills to impact people’s daily lives and safety was the highlight of our weekend.
From a software perspective, developing geospatial models was our main focus this weekend. We learned how to effectively build models and generate descriptive heatmaps. We learned how to use great frameworks for ML such as AI Fabric from UiPath. We grew our web development skills and polished our database skills.
What is next for PreDent
We believe that our application would be best implemented on a local and state government level. These governments are in charge of designing efficient and safe roads, and we believe that with the information they acquire through our models, they can take steps to improve roads and reduce risks of crashes.
In terms of our application, we would love to deploy the model on the web for automatic integration. Given that our current situation prevents us from buying a web server capable of running the model, we look forward to acquiring a web server that can process high level computation, which would automate our service.
Our Name
PreDent has a few different meanings, which we’ve listed out below:
“Pre” means prior to an accident
“Dent” refers to denting a car during an accident
“Dent” is also short for a car accident, which we try to avoid
“PreDent” is very similar to “prevent”, which is the primary goal of our system
Built With
ai
css
esri
fiona
firebase
geopandas
geospatial
google-cloud
google-maps
html
javascript
jupyter-notebook
keras
machine-learning
pandas
python
sci-kit
tensorflow
ui
uipath
xgboost
Try it out
github.com
predent.tech | PreDent | Using ML to promote safer driving by predicting crash hotspots. | ['Adithya Peruvemba', 'Ishaan Bhandari', 'Ayaan Haque', 'Viraaj Reddi'] | ['1st Place (Sponsored by PEC)', 'Best Web Application', 'MacroTech Sponsored Prize', '3rd Place - Airpods', 'First Overall'] | ['ai', 'css', 'esri', 'fiona', 'firebase', 'geopandas', 'geospatial', 'google-cloud', 'google-maps', 'html', 'javascript', 'jupyter-notebook', 'keras', 'machine-learning', 'pandas', 'python', 'sci-kit', 'tensorflow', 'ui', 'uipath', 'xgboost'] | 0 |
10,368 | https://devpost.com/software/fund-predictor | poster
Analysis of Wildfires in USA
Analysis of number of burned acres every year
Analysis of total of funds have been allocated in each activity in millions
Analysis of funds allocated to Fire Operations every year
Analysis of funds allocated to fight wildfire every year
Analysis of funds allocated to Preparedness every year
Analysis of funds allocated to Other fire Operations every year
Inspiration
There are more than 50,000 wildfires in America every year. The recent wildfires in California and Australia showed how important it is to have a disaster plan handy. While analyzing the wildfire data from past 15 years including number of fires, area burnt, wildlife damage, structural damage and funds required for suppression and fire operations, we found that the data followed a certain pattern.
We trained a machine learning model using Python to learn this pattern and predict how much damage there will be in the current year and how much funding will be required.
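A sketch of that pattern-learning step: fitting a simple regressor on fifteen years of (year, acres burned) → suppression-cost pairs. The history below is made up for illustration; the real model is trained on the cleaned federal wildfire datasets.

```python
import numpy as np

# Hypothetical history: year, million acres burned, suppression cost ($M).
years = np.arange(2005, 2020)
acres = np.array([8.7, 9.9, 9.3, 5.3, 5.9, 3.4, 8.7, 9.3, 4.3, 3.6,
                  10.1, 5.5, 10.0, 8.8, 4.7])
cost = (120.0 * acres + 15.0 * (years - 2005)
        + np.random.default_rng(1).normal(0, 30, 15))

# Least-squares fit of cost as a linear function of acres burned and year.
A = np.column_stack([acres, years - 2005, np.ones_like(acres)])
coef, *_ = np.linalg.lstsq(A, cost, rcond=None)

def predict_cost(acres_burned, year):
    """Predicted funds needed ($M) for a given burn area and year."""
    return coef @ np.array([acres_burned, year - 2005, 1.0])

estimate = predict_cost(7.5, 2020)
```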
Also, one of the major problems people have while donating for a good cause is that they don't know where their funds are being used. To solve this, we added a funds tracker so taxpayers and donors can see exactly how and where their money is being used.
Apart from this, we made a wildfire detector that can alert the Forest Service as soon as a fire starts.
What it does
The front page contains analysis of data from wildfires from past 15 years including Heat maps, Bar graphs and Pie charts. On the Funds page, you can see how much damage is predicted from wildfires this year and where the funds were used last year. Also, you can view ongoing wildfires and the funds that are predicted will be necessary to contain and repair any damage caused by those fires along with a distribution of funds in the following categories:
Fire Operations
Preparedness
Suppression
Other fire Operations
Hazardous Fuels
Other activities
FLAME Account
Additional Wildfire Appropriations
Users can donate using the donate button and see how their donations will used in a fair and transparent way.
The fire sensors can be installed at strategic locations throughout the forests so that if there is a wildfire, Forest Services get notified as soon as possible. Since the wildfires spread easily, a quick alert system can help reduce the damage and provide rapid response.
How we built it
We used Python for cleaning the data and making the machine learning models, D3.js and Matplotlib to make graphs, heat maps, and pie charts, Figma for UI designing, and Netlify for hosting services. Also, the fire sensor was made leveraging the thermistor on a Circuit Playground Express board.
Challenges we ran into
We had trouble collecting and cleaning the data as most of the data for funds allocated was in form of pdf files so we had to convert it to csv format for proper usage.
Accomplishments that we're proud of
This was the first hackathon for two of our teammates. We're quite proud of how the website looks; once it started coming together, it was so gratifying seeing it work as a coherent whole. We also learnt how to collaborate with each other smoothly despite being in different timezones.
What we learned
We learnt Data Visualization, Machine Learning, Data Analyzing, hosting websites and using Figma and Netlify.
What's next for Fund Predictor
We'll add a way for people to volunteer in case of calamities and to provide non-monetary donations.
Domain
Our entry for best domain name is
FundFireFrom.space
Built With
circuitplayground
d3.js
figma
heroku
kaggle
matplotlib
mu
netlify
python
Try it out
fundfirefrom.space
www.kaggle.com
www.kaggle.com
github.com
fundfirefromspace.netlify.app | Fund Predictor | Analyses and Predicts required fund for a disaster via a Machine Learning Model trained past 15 years of data. | ['Vishal Kumar', 'Jatin Dehmiwal', 'Luis Silva', 'Saad Rehman'] | ['Second Overall'] | ['circuitplayground', 'd3.js', 'figma', 'heroku', 'kaggle', 'matplotlib', 'mu', 'netlify', 'python'] | 1 |
10,368 | https://devpost.com/software/dotacounters | DATA VISUALIZATION CHALLENGE!!
Extended Elevator Pitch
Dota 2 is a highly complex, highly acclaimed computer game. Before each game, 10 players, distributed across two opposing teams, take part in a several-minutes-long drafting phase in which they choose heroes.
Both novice and expert players of this game struggle to learn new ability combos that will surprise their opponents and help them earn victories and game experience. We have developed a dataset and a visualization tool that is capable of rendering the over 7000 relationships between the 119 heroes and guiding each one of the trillions of possible Dota 2 draft phases (Combinations(119, 10) ≈ 10^14).
We focus on the hard problem omitted by other Dota 2 pickers which is "why a hero is more effective than another" rather than just create a ranking of possibly effective heroes.
Problem
Dota 2 is a multiplayer online battle arena (MOBA) video game developed and published by Valve. It is an internationally acclaimed game for its rewarding gameplay and is regarded by many top-notch gaming magazines as one of the top games of all times.
Dota 2 has been criticized for its complexity and the steep learning curve required to master the game. One of its most complex parts is choosing a hero to play and identifying abilities that are effective against the enemy heroes. Before a match starts, there is a "drafting phase" when the 10 players take turns choosing which heroes to play. Usually the decision is guided by the player's prior experience of which heroes are effective against similar hero combos. Since there are 119 heroes to pick from, this is a daunting task, and very often players don't get the chance to discover during the drafting phase which abilities might work against other heroes, much less during the game.
Solution
The visualization tool "Dota 2 Counters" we have developed aims to instruct not only which heroes are effective against other heroes but, more importantly, WHY. Related work on Dota counter pickers focuses exclusively on sorting the best hero choices according to their statistical likelihood of victory against a particular enemy hero combo; it does not instruct the player on which opponent heroes to consistently target during the match or what tactics will be effective.
Methodology
We have developed a graph-based Counter Picker technology that enables the user to pick a hero, quickly evaluate strategies that might or might not work, and learn how to play the game more effectively.
The Counter Picker graph has heroes as its nodes, and its edges carry the abilities a hero can cast against enemy heroes. We had to build a dataset about how heroes are related to each other via their abilities. The existing Dota 2 datasets target the same hero-picking problem via win-loss ratios of some hero combos against others, rather than explaining those ratios by attributing match outcomes to the usage of particular abilities.
The dataset was built using information on
https://dota2.gamepedia.com
. An example of Dota 2 counters webpage for just one (Bristleback) of the 119 heroes is available
https://dota2.gamepedia.com/Bristleback/Counters
.
We have distilled the information on a subset of the heroes on dota2.gamepedia.com and organized it in a graph fashion. We used the D3 JavaScript visualization library to render the graph together with the library of heroes.
The graph is visible on the lower part of the screen while the user can directly access detailed information about each hero by clicking on the upper part of the screen.
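The node-and-edge structure described above can be represented minimally as follows (a toy excerpt; "HeroX" and the ability lists are illustrative, while the real hero and ability names come from the gamepedia dataset):

```python
# Nodes are heroes; a directed edge (a, b) carries the abilities of
# hero `a` that are effective against hero `b`.
counters = {
    ("Bristleback", "HeroX"): ["Quill Spray"],   # illustrative only
    ("HeroX", "Bristleback"): ["Silence"],
}

def abilities_against(hero, enemy_team, graph):
    """For each enemy, list which of `hero`'s abilities counter them -
    i.e. the WHY behind picking this hero in the draft."""
    return {enemy: graph.get((hero, enemy), []) for enemy in enemy_team}

matchups = abilities_against("Bristleback", ["HeroX"], counters)
```

The d3 rendering then draws each key of `counters` as an edge between hero nodes.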
Future work
Support for all the heroes in Dota 2 is planned as well as edge and node occlusion for unselected heroes. This work was completely developed during this hackathon and it represents 100% original work.
Built With
css
html
javascript
Try it out
github.com | Dota 2 Counters | We have developed a dataset and a visualization tool that is capable of rendering the over 7000 relationships between the 119 heroes and guide each one of the trillions of possible Dota2 draft phases. | ['Camelia D. Brumar', 'Iulian Brumar'] | ['Third Overall'] | ['css', 'html', 'javascript'] | 2 |
10,368 | https://devpost.com/software/stance-taking-a-stand-against-hate-speech | Title
Data Visualizations
Tools Used
Main Chart
Inspiration
In today's hectic online landscape, toxicity and harassment can stop people from expressing themselves. I want people to be able to have conversations online without feeling like they are being harassed.
What it does
This application takes in comments that the user wants to categorize. It can tell the user how likely it is that the comment falls under certain categories that are toxic.
How I built it
I built the machine learning model using sklearn, a Python library. For the front end, I used Flask to create a web interface to interact with the model. The visualizations were created using LIME (https://arxiv.org/abs/1602.04938).
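A minimal sketch of that pipeline: a TF-IDF + linear classifier of the kind sklearn provides, trained to score how likely a comment is to be toxic. The tiny corpus below is invented for illustration; the real model is trained on a labelled toxicity dataset, and LIME is then pointed at the pipeline's `predict_proba` to explain which words drove each score.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for the real labelled dataset (1 = toxic).
texts = ["you are awful", "i hate you", "great point, thanks",
         "nice argument", "shut up idiot", "thanks for sharing"]
labels = [1, 1, 0, 0, 1, 0]

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# Probability that a new comment falls in the toxic category;
# LIME would highlight the words responsible for this score.
p_toxic = pipeline.predict_proba(["you idiot"])[0, 1]
```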
Challenges I ran into
There were issues with the front end, especially with displaying the data visualizations. I was not able to get all the visualizations to display on one page and resorted to using separate pages. There was also some difficulty with sklearn, but this was solved fairly easily thanks to the large online community on Stack Overflow and similar sites.
Accomplishments that I'm proud of
Getting the machine learning portion to work was something I am quite proud about. I am also very happy with how my front end turned out, particularly with how my data is shown.
What I learned
I am very new to machine learning, so I am very happy that I now know how to use sklearn for machine learning, as well as Flask to interface with my back end.
What's next for Stance: Taking a Stand against Hate Speech
The next step would be to port it into an app. Another possibility would be to have it as a browser extension, or even a moderating tool that online forums can use to curb hate speech.
I also definitely want to try other more complex machine learning algorithms to improve the performance.
Built With
flask
html5
javascript
lime
pandas
python
sklearn
Try it out
drive.google.com | Stance: Taking a Stand against Hate Speech | In today's hectic online landscape, toxicity and harassment can stop people from expressing themselves. Stance is my solution. | ['Michael Li'] | ['Best Data Visualization'] | ['flask', 'html5', 'javascript', 'lime', 'pandas', 'python', 'sklearn'] | 3 |
10,368 | https://devpost.com/software/data-lit-bot | Inspiration
A few things inspired this bot: first, the message from a recent data science conference speaker who talked about the importance of cultivating data literacy in our society; second, the usefulness of InsideSherpa's free and virtual Data Programs and third, wanting to code a bot for the very first time!
What it does
This Twilio-powered bot matches the user to one of InsideSherpa's free Virtual Data Programs.
How I built it
I used my working knowledge of JavaScript, ES6 and the Twilio platform to create this chatbot. For simplicity, I decided to demo it using their built-in simulator.
Challenges I ran into
Initially, I could not get my function to communicate with the "tasks" portion of the bot. Eventually I moved a few lines around and went through my code line by line (as the debugger wasn't particularly helpful--maybe I need to get better at using it) and the problem was fixed.
Accomplishments that I'm proud of
Coding my first bot using Twilio!
What I learned
I learned how to "glue" various parts of my code in such a way that they communicated with each other even though they did not reside in the same location.
What's next for Data Lit Bot
Many improvements can be made but the first would be obtaining a Twilio number so that my bot can be accessed through it and second, challenging myself to make an intermediate version of this simple bot. I believe the latter can be accomplished as I have the basics down.
Built With
autopilot
javascript
json
twilio | Data Lit Bot | This Twilio-powered bot helps you identify which of the free InsideSherpa Virtual Data Programs is the best for you. | ['Adriana C'] | ['Best Beginner Hack'] | ['autopilot', 'javascript', 'json', 'twilio'] | 4 |
10,368 | https://devpost.com/software/wavebeat | Home Page
Mathematical Game Page
Table with all Songs & Performance Data
Brainwave graphs generated from Song
Attention Graph Enlarged
Inspiration
Wavebeat is inspired by the idea that songs influence our performance and brain activity throughout the day. Therefore, Wavebeat was born to see which songs are the most impactful for our performance and mind.
What it does
Wavebeat collects brain wave and performance data from an electroencephalogram (EEG) headset while the user plays a mathematical challenge and listens to a song. This data is then processed and stored to make comparisons with other songs and provide a personalized dashboard for the user. After finishing the challenge, the user has access to all of their brain wave and performance data through intuitive tables and modern graphs. The user can compare how one song stacks up against another and find the most impactful song for their performance and brain in a matter of seconds!
How we built it
Wavebeat was built solely with Python and JavaScript. The platform receives the brain wave data with a script that listens for Bluetooth signals from the NeuroSky MindWave Mobile 2 hardware. These signals are parsed by our Python script and recorded using file streaming. The server polls the file stream to keep the brainwave values updated in real time.
For one song, Wavebeat received more than 12,000 data points covering all the focus, relaxation, and brain wave values. Therefore, we saved this data in JSON files and then summarized it into a table and 10 charts.
The animations and the mathematical game were built in JavaScript.
Challenges we ran into
With only 24 hours on the clock, we struggled most with making sense of our data. For one simple song, we would get an overwhelming amount of data that we needed to process, store, and turn into an appealing visualization. In addition, we had challenges sending the data from our Flask server directly to JavaScript to process the information and generate the graphs.
Accomplishments that we're proud of
We are proud of completing a data-themed hackathon using hardware and brain wave reading. The time was really limiting, and Wavebeat's team is proud of finishing an entire platform that works well and is interesting. Besides that, we are proud of translating 12,000 data points per song into 10 modern graphs!
What we learned
This project taught us the importance of patience, communication, perseverance, and resilience in the times we were facing difficulties. Besides that, we learned how to start manipulating EEG data!
Built With
bluetooth
bootstrap
css3
flask
hardware
html5
javascript
mindwave
python | Wavebeat | Using brain wave reading to identify songs that improve performance on quantitative reasoning and problem-solving tasks. | ['Nathan Kurelo Wilk', 'Mauricio Costa'] | ['Best Hardware Hack presented by Digi-Key'] | ['bluetooth', 'bootstrap', 'css3', 'flask', 'hardware', 'html5', 'javascript', 'mindwave', 'python'] | 5 |
10,368 | https://devpost.com/software/where-am-i-online | Inspiration
Our project was originally inspired by a similar project, Sherlock, a command-line tool that "hunt[s] down social media accounts by username across social networks". We were interested in this concept and the overall idea of using the immense amount of data we all put out on the internet. However, instead of just checking whether an account exists, we also wanted to show other data about the account depending on which platform it is on.
What it does
Our final product is a website that takes the user's inputted username and searches various popular websites for accounts that match that username and then displays various metadata about each of the accounts associated with that username. As of the end of this hackathon, we support searching for accounts on Instagram, Reddit, Stack Overflow, Twitter, Twitch and Youtube.
How we built it
Backend
The backend of our website is written in Python using the Flask framework. When a user sends a request to our backend, it uses a patchwork of different methods ranging from webscraping to official APIs to search popular websites for accounts with the provided username.
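The lookup idea behind the backend could be sketched like this: build a candidate profile URL per site and map HTTP status codes to a verdict. The URL templates here are illustrative approximations; the real backend mixes official APIs and scraping, with per-site parsing of the returned metadata.

```python
# Hypothetical URL patterns for a few of the supported sites.
SITE_TEMPLATES = {
    "reddit": "https://www.reddit.com/user/{u}",
    "twitter": "https://twitter.com/{u}",
    "twitch": "https://www.twitch.tv/{u}",
}

def candidate_urls(username):
    """One candidate profile URL per supported site."""
    return {site: tpl.format(u=username) for site, tpl in SITE_TEMPLATES.items()}

def interpret_status(status_code):
    """Naive mapping from an HTTP status to an account-existence verdict."""
    if status_code == 200:
        return "found"
    if status_code == 404:
        return "not found"
    return "unknown"  # rate limits, redirects, login walls need per-site handling

urls = candidate_urls("octocat")
```

The actual request for each URL (and the error handling around it) is where most of the per-site work described below comes in.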
Frontend
The frontend of our website is written in Typescript using the Next.js framework and Tailwind and blocks.css as UI libraries. It accepts user input and displays the accounts that the server finds. Since the metadata associated with the account is different depending on which platform the account is associated with, the frontend also contains an algorithm to automatically identify what type of value a given metadata field holds. For example, it can identify URLs and format them as links as opposed to plaintext and also can guess when a number actually represents a UNIX timestamp and format it accordingly.
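The real field-type guesser is TypeScript; here is the same heuristic sketched in Python for illustration. The timestamp cutoffs (roughly the years 2001-2286 as UNIX seconds) are an assumed rule of thumb, not the project's actual thresholds:

```python
def guess_field_type(value):
    """Guess how a metadata field should be rendered."""
    if isinstance(value, str) and value.startswith(("http://", "https://")):
        return "url"  # render as a link instead of plaintext
    if isinstance(value, (int, float)) and 1_000_000_000 <= value <= 9_999_999_999:
        return "unix_timestamp"  # plausibly a date: format it accordingly
    if isinstance(value, (int, float)):
        return "number"
    return "text"
```

The UNIX-timestamp check works because follower counts and similar metadata rarely fall in that ten-digit range, so a ten-digit integer is far more likely to be a date.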
Challenges we ran into
One of the primary challenges was the time-consuming task of writing or finding scrapers or APIs for each social media website. Each website posed its unique challenges in terms of data parsing and error handling.
One later issue that went unresolved was that our scraper for Instagram didn't work once we hosted our backend on Heroku because Instagram had blocked associated Heroku IPs, probably because of other scrapers hosted by Heroku. In the future we would use Instagram's official API or have our own hosting to eliminate this problem.
Accomplishments that we're proud of
We're proud of being able to deliver a fully working, accessible product, especially given the amount of busy work that had to be done researching and implementing the best ways to gather data from each website.
What we learned
We learned a lot about the varying degrees of accessibility that social media sites provide to their users' data. Some websites had open APIs, others had APIs with varying levels of authentication, and for others we had to rely on web scraping. In this age where data is being generated at blazingly fast speeds, it has become increasingly relevant, both politically and socially, to see how different companies approach their stewardship of that data.
What's next for Where Am I Online?
The most clear next step for our project is to continue to add to the number of websites we support so our tool can provide a more comprehensive image of a user's online data footprint.
Built With
api
nextjs
python
tailwind
webscraping
Try it out
whereiam.online
github.com | Where Am I Online? | Visualise a user's online social media data footprint | ['cswil Wilson', 'Chris c3a', 'Kai Chang', 'David Brown'] | ['Best Domain Name from Domain.com'] | ['api', 'nextjs', 'python', 'tailwind', 'webscraping'] | 6 |
10,368 | https://devpost.com/software/shopper-bot | Shopper Bot Logo
Inspiration
Comparison shopping for the best prices has never been quick or easy. No two brick-and-mortar stores have exactly the same items or the same prices. Online shopping is even worse, with dozens of e-stores offering even more combinations of options and prices. It’s a difficult and time-consuming process to find exactly what you need at the lowest price.
Then COVID-19 came along and made everything tougher. Massive layoffs have made personal budgets tighter than ever before. Store closures and supply chain disruptions have made things even harder to find. And with the number of new COVID-19 cases skyrocketing in many parts of the country, many people simply don’t want to take the risk of going into a store even if it does have what they want.
What it does
Shopper Bot is a one-stop online shopping hub that scours the internet to find exactly what you need at the lowest prices, while letting you stay safe at home. By scraping different retailers' websites, Shopper Bot helps you find the lowest price for the item you're searching for, without needing to do the comparison shopping yourself. It also filters out items that are sold out, so you don't waste time wading through listings for products you can't even buy.
How we built it
The entire project was built in UiPath. We built six modules: an input module, two store search modules, two store scraper modules, and an output module. The user first enters a search term, which is handed off to each of the store search modules (Walmart and Amazon at the moment). Those analyze the search results pages to extract all of the product page links, which are then handed to the store scraper modules. Each of those modules opens each product link and extracts the relevant information, such as product name, price, and product image (if there is one); all of that data is then standardized, packed, and sent to the output module. From there, the output module combines the data sent by the scrapers with a pre-built website template that handles everything from sorting to filtering to data display, and finally that gets loaded into a document for the user to save. The only thing the user needs to do after entering their search is save that document as a web page and open it in their browser.
Challenges we ran into
We had a hard time scraping the data from the internet using UiPath. While setting up the search was easy enough, getting it to parse through each of the different search results was much harder. We basically needed to write our own HTML DOM tree parser to process the results once we had extracted the search results container in UiPath.
We also had a hard time normalizing the different data we got from the different websites in UiPath. To solve that, however, we actually built that logic into the web page the user saves and loads instead of putting it in UiPath. This meant that more of our heavy-lifting coding was done in JavaScript where we were more familiar with the functions and tools available.
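The normalization logic described above lives in JavaScript inside the saved web page; an equivalent sketch in Python shows the idea: parse each store's price string into a number, then merge listings from all stores and sort by price. The store names and price formats here are assumptions for illustration.

```python
def parse_price(text):
    """'$1,299.99' -> 1299.99; returns None for 'Sold out' and similar."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    try:
        return float(cleaned)
    except ValueError:
        return None

def merge_listings(*store_results):
    """Combine listings from every store, drop unbuyable items, sort by price."""
    combined = [item for results in store_results for item in results
                if item.get("price") is not None]
    return sorted(combined, key=lambda item: item["price"])

walmart = [{"name": "Widget", "price": parse_price("$19.99")}]
amazon = [{"name": "Widget Pro", "price": parse_price("$1,299.99")},
          {"name": "Widget (used)", "price": parse_price("Sold out")}]
cheapest = merge_listings(walmart, amazon)[0]
```

Doing this in the output page rather than in UiPath is exactly the trade-off described above: the heavy lifting lands where the team was most comfortable.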
Accomplishments that we're proud of
This was our first time creating a web scraper, and we were really proud of the fact that we got everything completely working!
What we learned
We learned a ton about web scraping, particularly with how to select different elements of the web page that may occur in different numbers or not at all depending on the search results.
What's next for Shopper Bot
In the future, we’d like to expand the comparison shopping to other stores, such as Kroger, Costco, Target, etc. We'd also like to add more functionality to both the scraper and the front end to do things like filter by brand or pull and display the product reviews
Built With
html5
javascript
uipath | Shopper Bot | The easiest way to save the most time and money! | ['Nathan Dimmer', 'Andrew Dimmer'] | ['Best UiPath Automation Hack'] | ['html5', 'javascript', 'uipath'] | 7 |
10,368 | https://devpost.com/software/write-it | Words are displayed on the screen.
Users hold up their work to have it read
If users spell the word incorrectly, they lose one of their three attempts
Using Google Cloud's Handwriting Detection, we find what was written
Users can write the word displayed
There is a leaderboard to see how users did in comparison to others
HandRight - HackMann 2020 Project
Inspiration
We were inspired to create HandRight after witnessing firsthand the struggles of parents trying to teach their kids how to handwrite words at home. COVID-19 has exacerbated these difficulties, as teachers and educators are unable to meet with and teach young students due to quarantine restrictions. And with parents working full-time, students find it hard to stay motivated and are not able to practice their handwriting skills. Left alone, with no one to guide them, they are sacrificing their learning. We wanted to help struggling students deal with this global education crisis.
What it does
HandRight is a fun and captivating game that uses computer vision to help students practice their handwriting without the presence of parents or other mentors. Furthermore, HandRight offers 3 core features:
Allows students to write with real writing implements such as pens and pencils
Allows students to get instant feedback (the kind they can't get from their teachers during COVID-19)
Gamifies the process of handwriting by assigning scores and points, and a leaderboard, motivating students to practice
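The three-attempt scoring described above could be sketched like this; the point value is an assumption, and in the real game the recognized text comes from Google Cloud Vision rather than being passed in directly:

```python
def check_attempt(target_word, recognized_text, attempts_left, score):
    """One turn of the game: returns (correct?, attempts left, new score)."""
    correct = recognized_text.strip().lower() == target_word.lower()
    if correct:
        return True, attempts_left, score + 10
    return False, attempts_left - 1, score

# Correct spelling (recognizer noise like extra whitespace/case is forgiven):
ok, left, score = check_attempt("apple", "APPLE ", 3, 0)
# A misspelling costs one of the three attempts:
bad, left2, score2 = check_attempt("apple", "appel", 3, 0)
```

Normalizing case and whitespace before comparing keeps OCR quirks from unfairly costing students an attempt.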
How We built it
We used:
Flask for the backend and for handling the serving of the documents.
HTML, CSS, Javascript for building an aesthetically-pleasing frontend.
Python as the primary language for the functionalities and backend.
OpenCV for Computer Vision and reading the video.
EAST deep learning text detector to locate text in the video.
Google Cloud Vision Text Recognition for reading the words on the paper.
Challenges We ran into
We struggled to find or create a good handwriting detection library. We went through many options, including pytesseract and a TF1-based model, until we settled on Google Cloud, which works incredibly well.
In addition, as shown in
our issue
, VS Code's built-in terminal on a Mac doesn't ask for video permission, which causes the app to abort. Lastly, time was definitely one of our largest problems. Due to the sheer amount of work that we had to do in this short time period, we had to organize ourselves really well and work non-stop.
Accomplishments that I'm proud of
We're proud that we were able to finally find a working handwriting-identification library. In addition, we were able to resolve the Abort Error issue, so our app was functional for all our team members. Finally, we're proud that we got the application functional and ready for potential use! Making a complex game with ml vision, a right/wrong functionality and a realtime leader board was very tough, but we persevered through it and are extremely proud of our final results.
What We learned
We learned about different OCR platforms, having experimented with many options. We also gained a lot more experience with Flask, learning how to link videos inside of Flask and have them operate at the same time as the app itself without causing problems. Not only did we learn a lot about programming, we also learned how to work together really well in a high pressured environment. Since this was a virtual hackathon, we had a lot of difficulty at the start keeping track of each other and what we were supposed to do. But then, we started assigning roles, keeping track of our project through discord and Range.cc (tools that none of us had experience with) and regularly talking with each other on discord. After that, our project began flowing much more smoothly and by the end, we were able to complete it!
What's next for HandRight
Next, we hope to clean the game up a little bit, then push it out to the public. In addition, we hope to insert an audio feature to have a student learn spelling as well. Finally, we hope to make the website more kid-friendly, inserting characters and music to make the website compelling to children.
Team
Veer Gadodia
- Veer#7244
Shreya Chaudhary
- GenericPerson#6928
Mihir Kachroo
- Mihir#7285
Dhir Kachroo
-dhir2907#7695
Built With
css3
east-deep-learning
flask
google-cloud-vision
html5
javascript
opencv
python
sass
Try it out
github.com | HandRight | Teaching students how to handwrite using computer vision. | ['Veer Gadodia', 'Shreya C', 'Mihir Kachroo', 'Dhir Kachroo'] | ['MacroTech Sponsored Prize', 'Best use of Google Cloud'] | ['css3', 'east-deep-learning', 'flask', 'google-cloud-vision', 'html5', 'javascript', 'opencv', 'python', 'sass'] | 8 |
10,368 | https://devpost.com/software/fun-with-cartoonista | Inspiration
As we know, data is all around us and is an integral part of our day-to-day lives. People nowadays are very active on social media and use it for many purposes: better reach, entertainment, communication, social image, and more. The "socially active" person loves to send and receive photographs and loves experimenting with them, using filters to modify the look and feel of the pictures! We have taken this opportunity to create a filter that converts a photograph of a person, animal, scenery, etc. into a "cartoonized" version!
What it does
We allow the user to upload a picture of his/her choice and convert the picture into a cartoon. This converted picture can be downloaded and shared on other platforms. This gives users a platform to share different memories and experiences in the form of cartoons!
How we built it
We used OpenCV Python for processing the image and converting it to a cartoon form. The backend is handled by Django and for the frontend Bootstrap, JavaScript, HTML and CSS have been used.
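Most of the "cartoon" look comes from flattening colors into a few bands and darkening edges. This NumPy sketch shows only a simplified color-quantization step; the actual OpenCV pipeline would typically use a bilateral filter for edge-preserving smoothing plus an adaptive-threshold edge mask, which this does not reproduce.

```python
import numpy as np

def posterize(image, levels=4):
    """Quantize 0-255 pixel values into `levels` flat color bands."""
    step = 256 // levels
    return (image // step) * step + step // 2  # snap each pixel to its band center

img = np.array([[0, 60, 130, 250]], dtype=np.uint8)
cartoonish = posterize(img)  # nearby shades collapse into the same band
```

Applied per channel to a real photo, this collapse of nearby shades is what gives the flat, comic-book coloring.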
Challenges we ran into
Both of us were beginners in Django and Computer Vision. It was very difficult to upload and save images using Django. The integration of OpenCV and Django seemed difficult at first!
Accomplishments that we're proud of
We created something fun and implemented a unique blend of computer vision and entertainment. We got to learn many new things during this journey.
What we learned
We learned to integrate OpenCV and Django. We learned to use time judiciously and have fun with coding!
What's next for Fun with Cartoonista !
Users will have the opportunity to upload their fun-filled memories for other users to view using personalised user accounts.
Built With
bootstrap
django
html
javascript
opencv
python
Try it out
github.com | Fun with Cartoonista! | Convert your image into a cartoon with just a click! | ['Trusha Talati', 'Akshat Bhat'] | [] | ['bootstrap', 'django', 'html', 'javascript', 'opencv', 'python'] | 9 |
10,368 | https://devpost.com/software/amazon-s-videogame-recommender-system | the results
the model
the database
BERT encoding in action!
accuracy
layers!
Inspiration
We wanted to challenge ourselves in this hackathon, so we decided to try building a recommender system, since they are all the rage nowadays. Passionate about video games, we looked for a dataset that could help us recommend our next big hit for this quarantine 😊 We took inspiration for the logic from this paper:
https://arxiv.org/pdf/1708.05031.pdf
What it does
Our system takes the user’s data (past videogame purchases, rating scores, product descriptions, platform, etc.) and outputs a list of recommended titles for a given user.
How we built it
Taking inspiration from the paper Neural Collaborative Filtering by He et al. (2017), we decided to start with a simple architecture that combines generalized matrix factorization (GMF) and multi-layer perceptron (MLP) components to learn separate user and item embeddings, combining the two models by concatenating their last hidden layers. We also augment this model with text information from the product brand, description, and category (including device).
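The GMF half of that architecture reduces to an interaction between a user embedding and an item embedding. This NumPy sketch uses tiny random embeddings purely for illustration; in the real model they are learned jointly with the MLP branch, and the interaction feeds further layers rather than a raw dot product.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 5, 7, 4
user_emb = rng.normal(size=(n_users, dim))  # learned jointly in the real model
item_emb = rng.normal(size=(n_items, dim))

def gmf_score(user_id, item_id):
    """GMF-style interaction: element-wise product of embeddings, summed."""
    return float(np.dot(user_emb[user_id], item_emb[item_id]))

def recommend(user_id, k=3):
    scores = user_emb[user_id] @ item_emb.T    # score every item at once
    return list(np.argsort(scores)[::-1][:k])  # top-k item ids

top3 = recommend(2)
```

Scoring all items with one matrix product is what makes ranking the full catalog for a user cheap at inference time.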
Challenges we ran into
Our initial model only took into account the user ID, the product ID, and user-item ratings. We had a lot of text data (reviews, product titles, etc.) that we wanted to incorporate into the recommender system, and we found a simple way to do so. We took all the relevant text data available, concatenated it, tokenized it with BERT, and fed it to an LSTM (long short-term memory) component of the model. In the end, we merged the LSTM output with the outputs from GMF and MLP before feeding it to the last dense layer. We honestly didn't know if it would work, but it did push the MSE of our model from 0.95 to 0.95 :D ! However, due to time and RAM restrictions, we could not run the updated model on all user-item pairs and get recommender-system relevance metrics.
Also, we were using Google Colab, so every time we ran out of memory we had to restart the full session, which was time-consuming.
Accomplishments that we are proud of
We successfully overcame many hurdles related to data preprocessing. Besides that, our simple model did improve the metric we optimized. We believe that with more effort, we could get an even better result.
What we learned
It was a lot of work getting this system running, let alone cleaning and preparing the data. We learnt how to use BERT embeddings and augment the architecture of a given neural network.
What’s next for Amazon’s Videogame Recommender System
We lacked the team members to build an app. It would have been great to publish this system in a webpage or small app.
Built With
bert
huggingface
keras
python
tensorflow
Try it out
drive.google.com | Amazon's Videogame Recommender System | Using Amazon’s public videogame dataset we have created a recommender system that takes into account the user’s past purchases and the behavior of similar users. | ['Lourdes Crivelli'] | [] | ['bert', 'huggingface', 'keras', 'python', 'tensorflow'] | 10 |
10,368 | https://devpost.com/software/webcred-o6q8fh | 1. Home | Landing Page
2. Job Listing Checker
3. News Article Checker
4. Website Checker
5. Sample Results
6. Python Natural Language Processing Flowchart
Our Inspiration
The internet is a universe in itself with vast amount of data, networks, and resources. Along with this tremendous global facility comes user accountability. As internet usage increases, people with malicious intent will also naturally increase. So, it’s extremely important to keep every internet user safe; especially the more vulnerable. For example, one of the most prevalent ways that people are entrapped into giving away financial or personal details is with Fake Job listings.
How it works
General.
On the main page the user is prompted to enter a URL for a news article, job listing, or any general website. We then use HTTP requests with Beautiful Soup to parse and extract the relevant details from the given website. These details are transferred to our back-end through Flask; Three Natural Language Processing Neural Networks will then extract various text features and present them to the user.
How the Natural Language Processing Works.
Between the 3 NLP models, we used 125,000+ units of data (strings) to train and validate the networks. These strings are tokenized (mapped to a unique integer), padded (truncated and concatenated to have a common size), and passed into a recurrent neural network for training. After training, the model is exported and used for future predictions.
When Flask passes a string, tokenization and padding is applied. The padded sequence is then passed to the trained model and Predictions are made. These predictions and the associated confidence is then returned to the user through Flask.
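The tokenize-and-pad preprocessing described above can be sketched without Keras; each word maps to a unique integer and sequences are truncated or zero-padded to a common length before entering the network. The `maxlen=5` here is an arbitrary illustrative choice, not the models' real sequence length.

```python
def fit_tokenizer(texts):
    """Map each word to a unique integer; 0 is reserved for padding."""
    vocab = {}
    for text in texts:
        for word in text.lower().split():
            vocab.setdefault(word, len(vocab) + 1)
    return vocab

def pad(seq, maxlen):
    """Truncate, then zero-pad to a common length."""
    seq = seq[:maxlen]
    return seq + [0] * (maxlen - len(seq))

def texts_to_padded(texts, vocab, maxlen=5):
    return [pad([vocab.get(w, 0) for w in t.lower().split()], maxlen)
            for t in texts]

vocab = fit_tokenizer(["this job is real", "this job is a scam"])
padded = texts_to_padded(["this is a scam"], vocab)
```

The same fitted vocabulary must be reused at prediction time, which is why the tokenizer is saved alongside the exported model.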
Our data came from the famous IMDB Dataset for sentiment analysis, the "Employment Scam Aegean Dataset" from the University of the Aegean's Laboratory of Information & Communication Systems Security for fake job listing detection, and the "Fake and real news dataset" from Clément Bisaillon on Kaggle for fake news detection.
(See Python Natural Language Processing Flowchart)
Challenges we ran into
Our initial NLP model used a simple Dense Neural Network (DNN) following embedding. With this technique we were observing around 70-80% accuracy on our validation data. Although this level of detection is statistically significant, it results in a relatively high chance of a false prediction. We reasoned that this was because word order was not a factor in the network's predictions. To solve this issue we implemented a Recurrent Neural Network (RNN) with bidirectional LSTMs (Long Short-Term Memory) to allow all words in the sentence to affect each other. After making this modification, our validation accuracy increased to 96%+, a significant improvement over our simple DNN.
We wanted to help visitors visually understand our models, but we didn't know how to represent such an abstract concept. Eventually we were able to integrate a TensorBoard Embedding Projector, which provides data visualization by mapping the labels to values in vector space.
What's next for WebCred
We want to be able to expand our site to handle a broader range of data sources in order to ensure user dependability of our site. We’d also like to add a classification system (using graphs, charts, lists) that informs the users of what percent of the source is credible. We believe this feature can improve the overall awareness of different types of online sources and can allow users to decide whether or not to use their preferred websites.
Built With
css3
flask
google-cloud
html5
javascript
keras
python
tensorflow
tpuv3
Try it out
github.com
projector.tensorflow.org | WebCred | AI Powered Internet Credibility Checker | Includes: Job Listings, News Articles, and General Websites | ['Maanav Singh', 'Harsha Siripurapu', 'Pranish Pantha', 'Aditya Singh'] | [] | ['css3', 'flask', 'google-cloud', 'html5', 'javascript', 'keras', 'python', 'tensorflow', 'tpuv3'] | 11 |
10,368 | https://devpost.com/software/fundit-4mti2o | Login
Highlights
Startups Pitchs
Payments
fundIt
A platform that democratizes access to capital for small businesses via crowdfunding
Inspiration
Startup founders often don't have the connections or the profits to get funding, and especially in a year full of uncertainties, many big investors are scared to invest in small businesses. And not all startups make millions of dollars in their beginning years.
Meanwhile, most people are not as rich but still want to invest. So we want to build a platform that benefits both businesses of color (the majority of which are quite small) and investors. Startups post video pitches to help investors make a decision about the startup, and an investor can make an appointment with the business to learn about its future goals before investing.
What it does
fundIt is an app for small businesses to get crowdfunding from retail investors in exchange for equity.
Users can login and authenticate their credentials via Apple/Google/Email
Startups can post data such as PDFs, Images, and Text to supplement their crowdfunding campaign and help investors to make investment decisions
Investors can browse all campaigns via a Tab view
The most unique feature of this platform is the highlighted businesses of the month. Underrepresentation and discrimination are huge problems in business investment, so we want to represent those businesses by giving them a dedicated page.
Investors can schedule a virtual meeting with the representative of startup that will help investor know about the future plans of the business
Investors can pay as little as $10 for a share in the startup’s equity offered in the crowdfunding campaign
Investors can view their past investments & their total investments on a profile view
Startups can checkout the funds raised from the crowdsourced campaign via Apple/Google Pay to Apple/Google Wallets in a virtual FundIt card
How I built it
Flutter: Dynamic Mobile Applications that runs both on Android and iOS.
Firebase: For authentication
Square: Payment Processing
SQL: For storing the Business and Investor Information
UiPath: For automating the process of displaying startups to investors according to their search history
Potential Users
Retail investors - who will be investing in the companies that are listed on our platform
Startups - they sign up for crowdfunding in exchange for equity.
Challenges I ran into
Payment Processing using Square
Automation with UiPath
Making a dynamic user interface for startups took some time to get right
Accomplishments that I'm proud of
Able to build a working platform with great teamwork in such a short time.
What we learned
Learned how to divide tasks as a team and be accountable for them, setting regular report times
How to do payment processing
What's next for fundIt
We are planning to reach small businesses and small investors who could benefit from each other: small businesses by getting money, and small investors by getting returns on their investments with as little as 10 dollars.
Domain.com
FundIt.space
Built With
dart
firebase
sql
square
uipath
Try it out
github.com | fundIt | A platform that democratizes access to capital for small businesses via crowdfunding | ['Rupakshi Aggarwal', 'Sulbha Aggarwal', 'Rishav Raj Jain'] | ['1st Place'] | ['dart', 'firebase', 'sql', 'square', 'uipath'] | 12 |
10,368 | https://devpost.com/software/blink-3a5n80 | The Problem
The COVID-19 pandemic has left nearly all of us working remotely. This means that we’re spending more time than ever in front of computer screens. Research shows that this can lead to increased eye strain and fatigue. As students who will be taking online classes for extended hours in the near future, we saw a chance to solve this problem with data and optimize our daily grinds.
The Solution
To tackle the problem of screen fatigue, we combined the computer vision technology of OpenCV with the usability of a web app. With a Flask backend connecting the Python computer vision code to a user-friendly front end, Blink was born.
Blink is a data-driven wellness and productivity platform that tracks your eye movements and creates intelligent recommendations about optimal times for work and rest. Blink tracks you while you work to see how drowsy you get and how often you leave your screen to rest. If your eyes are getting strained, or if you're not taking enough screen breaks, Blink will send a gentle reminder to your browser notifications.
Blink’s landing page visualizes the user’s eye aspect ratio, which is a measure of their alertness. This graph updates live, showing the user how their data is used in recommendations. Your anonymous eye strain data is constantly stored in our database, which means that Blink is also able to update live and provide notifications when they are most necessary.
With a menu in the top right corner, you can navigate to a video with overlaid information displaying what Blink collects as data. Since Blink uses facial recognition, this video helps you to understand how data is collected from them, and how it relates to the graph displayed on the landing page.
Lastly, the Recommendation page is where Blink provides specific recommendations to users. These recommendations are bolstered by our system of notifications, allowing them to take breaks at optimal times, coming back to their online work well rested, happier, and more productive.
Technology, Tools, and Process
Blink utilizes OpenCV, a Python library that uses artificial intelligence to recognize images. Blink tracks eye movement and the frequency at which each user blinks, gathering data on their individual blinking habits. From this data, it calculates measures of eye strain and recommends breaks when signs of screen fatigue appear. With statistical analysis in pandas, Blink is also able to identify times in the day when users are most and least alert, and provide suggestions on how users should tailor their work schedules to their energy levels. Blink’s backend is built using Flask, and the data collected is stored in PostgreSQL. The frontend is built in HTML/CSS and vanilla Javascript.
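The eye aspect ratio that Blink graphs is the standard alertness measure from facial-landmark blink detection: the ratio of the eye's vertical landmark distances to its horizontal span, which drops toward zero as the eye closes. The landmark coordinates below are made up for illustration; in Blink they would come from the face-tracking pipeline.

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR for six eye landmarks; p1/p4 are the horizontal eye corners."""
    d = math.dist
    return (d(p2, p6) + d(p3, p5)) / (2.0 * d(p1, p4))

# Made-up landmark coordinates for an open and a nearly closed eye:
open_eye = eye_aspect_ratio((0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1))
closed_eye = eye_aspect_ratio((0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1))
```

Counting how often the ratio dips below a threshold gives a blink rate, and a sustained low average signals the drowsiness that triggers Blink's break reminders.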
Learning and Challenges
Building Blink was an amazing learning experience for all of our members. This hackathon challenged us to level up our product thinking, systems design, and technical knowledge under time pressure.
Technical Knowledge
One of our team members learned OpenCV, PostgreSQL and Flask for the facial recognition and backend elements of this project, tools that our entire team had no experience in. Another of our team members learned about Javascript and statistical analysis in Python with pandas to create the recommendation engine. Our final team member learned about integrating multiple applications to create a full-stack app and analysing large amounts of data in chart.js to create elegant data visualizations.
System Design
We began intending to create this app offline, so that OpenCV would be the only entirely new software we’d have to learn. However, as time went on, we became more ambitious and decided to host Blink online. This meant that we not only had to learn about databases and backends, but also create a robust, full-stack web application, communicating between our front- and backend.
Next Steps
It would be nice to deliver a more complete analysis of wellness and productivity; we’d like to include different measures of fatigue, such as poor posture, to expand Blink’s functionality and enable it to become more robust in its recommendations. Given enough time and data, we'd also like to leverage machine learning for the recommendation engine. This would be a huge undertaking, though, and we'd likely have to generate much of the training data ourselves.
Data security is a very real concern. To ensure that user data is secure, we could add encryption to Blink, ensuring that our users’ data is safeguarded and protected with the best security technology.
Finally, we would like to deploy Blink as a fully-fledged web application on Heroku, enabling anyone who wants to use it to run it whenever they want to. Since Blink can be used by anyone, we want it to be accessible to everyone. Making Blink freely available would help us increase productivity and boost wellness for more people, a goal we’re passionate about and hope to see realized.
Built With
chart.js
flask
javascript
opencv
pandas
postgresql
python
Try it out
github.com | Blink | The future of wellness tracking, powered by computer vision. | ['Matthew Varona', 'Pedro Velasquez'] | [] | ['chart.js', 'flask', 'javascript', 'opencv', 'pandas', 'postgresql', 'python'] | 13 |
10,368 | https://devpost.com/software/tremor-therapy | home-gameplay
json data
registering
gameplay
gameplay
game over
firebase auth
game
therapist instructions
data received from iot
login
app
app
app
app
Inspiration
We were inspired to build this project by the growing frustration of therapists and doctors who are unable to treat their patients, especially children and teens, because of the current lockdown and COVID-19 safety measures. Thousands of children recovering from accidents, brain surgery, Parkinson's disease, and similar conditions are stranded with no help to continue their recovery exercises. We saw an opportunity to build something for the future. Our research showed that games are among the most effective ways to support children's and teens' recovery, because they make the process fun and enjoyable. This works at the subconscious level: gaming makes them more inclined to recover faster than with the usual doctor-supervised training. So we wanted to build a gaming system with a hardware device at the patient's end that the patient uses to play. The IoT device captures movement data in real time and helps the patient play games. This data is then collected and provided to the doctor or therapist for analysis, so the therapist can track the patient's recovery over time and access all related health data through an app or the web. This lets the therapist assess the child closely from a remote location, even under restrictions like the ones we have now. It also enables doctors across the world to treat patients and helps improve the medical network.
What it does
We have an IoT device at the patient's end, which the patient wears on their hand during therapy. The patient logs in to the system with an email and password, then either taps the Learn button for instructions on different exercises or taps Play. We categorized the exercises into different levels to make the interaction fun for patients. For the demo we built only one level, which we plan to expand into multiple levels with time-based features. As the patient performs an exercise, the IoT device captures the data and sends it in CSV format; we convert it to JSON and parse it into a dictionary in our system, so the software can track the movements made by the patient and help them move objects, balance, and so on. Severe jumps or level failures are detected and noted. The data generated from each session is then added to a Firebase database. The therapist can pull this data from any remote location and analyze it, which gives them a clear picture of how the patient is improving and informs how to proceed with future treatment.
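The CSV-to-JSON-to-dictionary step described above can be sketched in plain Python. This is an illustrative example, not the project's actual code; the column names (`ax`, `ay`, `az`), the sample values, and the jump threshold are all assumptions.

```python
import csv
import io
import json

# Sample accelerometer readings as the IoT device might stream them (CSV).
raw_csv = """t,ax,ay,az
0.00,0.02,0.01,0.98
0.05,0.03,0.02,0.99
0.10,0.85,0.40,1.60
0.15,0.04,0.01,0.97
"""

JUMP_THRESHOLD = 0.5  # assumed cutoff for a "severe jump" on any axis

def parse_samples(text):
    """Parse CSV sensor rows into a list of dicts with float values."""
    reader = csv.DictReader(io.StringIO(text))
    return [{k: float(v) for k, v in row.items()} for row in reader]

def severe_jumps(samples):
    """Flag samples whose acceleration change exceeds the threshold."""
    flagged = []
    for prev, cur in zip(samples, samples[1:]):
        delta = max(abs(cur[a] - prev[a]) for a in ("ax", "ay", "az"))
        if delta > JUMP_THRESHOLD:
            flagged.append(cur["t"])
    return flagged

samples = parse_samples(raw_csv)
print(json.dumps(samples[0]))  # round-trip to JSON as in the pipeline
print(severe_jumps(samples))   # timestamps of severe movements
```

The flagged timestamps are the kind of per-session detail a therapist could review in the companion app.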
How I built it
We are 3 developers building this project:
Rafi Rasheed started with the hardware, making the components communicate the way we wanted. He integrated a NodeMCU ESP8266 with an MPU6050 sensor to track the actions performed by patients, and sent the readings to Siddharth for the game actions. Rafi used MicroPython, a language new to him, for much of the initial work, and also wrote some parts in Python due to gaps in the MicroPython documentation. Siddharth worked entirely on game development; he had never done game development before and had never used the GDScript language or the Godot framework. He studied them for a day and built the game the next. Siddharth also integrated Firebase Auth with Godot and handled sending and receiving patient data with Anas. Anas worked entirely on the UI/UX of the game, built the Android app for the doctor/therapist, and integrated the Firebase backend. We also used EchoAR to integrate an AR video into the gaming system that serves as an instruction guide for the kids.
Challenges I ran into
Overall we faced a lot of challenges. The biggest was that each of us was using a language, or building a kind of platform, that we had never touched before: Siddharth had never worked on game development, Rafi had never worked with MicroPython, and Anas was new to game design. Adapting to all of this was a big challenge. Beyond that, we struggled to authenticate Firebase from a game engine like Godot due to the lack of existing libraries, and to integrate augmented reality into the gaming system.
Accomplishments that I'm proud of
We never thought we could build an entirely new piece of software, without prior experience, in a span of 2 days. We planned this project only when the hack started, but in the end we are proud that we learned a ton and can add this work to our resumes.
What's next for Tremor Therapy
For this demo we took the JSON and automated the gameplay from it due to lack of time. Next we plan to integrate real-time gameplay with a video-calling feature for therapists. We also plan to integrate Firebase ML Kit for in-depth data analysis and better decision making by doctors. Finally, we plan to make the project open source so that other developers can fork and contribute to it, and anyone in any part of the world can help heal another person.
Built With
android
ar
arduino
firebase
gdscript
godot
java
Try it out
github.com | Tremor Therapy | Tremor Therapy is an interactive game developed for helping children and teens with their therapy process for Tremors(Shaky Hands). It gives the therapist complete analytics about the patients | ['Siddharth M', 'Rafi Rasheed T C', 'ANAS DAVOOD TK'] | [] | ['android', 'ar', 'arduino', 'firebase', 'gdscript', 'godot', 'java'] | 14 |
10,368 | https://devpost.com/software/plant-buddy | Detection of plant Leaf Disease in real time
Result of the prediction
Inspiration
The inspiration for the project was to help society and the agriculture sector.
I was recently listening to an interview with the Prime Minister of India, in which he said that we need to try to help our agriculture sector, the backbone of the Indian economy.
What it does
We built a webpage where we can perform a live detection of plant leaf diseases.
How I built it
For this I acquired the data from a website named PlantVillage, which has about 9K images across 33 different plant classes.
Then I built a CNN model in Keras and trained it on Google Colab.
Link to the model
https://colab.research.google.com/drive/1CsPXjFyFnBwWM28xSSREJ3Z3ncev7Ucn?usp=sharing
The model reached an accuracy of around 98%.
I then integrated the model with Flask and OpenCV to perform live detection on camera images.
Then I used pandas to attach information to each prediction, such as what the disease is, how it spreads, and any organic controls that can be applied.
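The step of mapping a prediction to disease information can be sketched with a small lookup table. This is an illustrative example, not the project's code: the disease names, columns, and remedies here are made up, and plain Python stands in for the pandas-based lookup the project used.

```python
import csv
import io

# A tiny stand-in for the disease-information table (hypothetical rows).
disease_info_csv = """label,disease,organic_control
0,Tomato healthy,None needed
1,Tomato early blight,Copper-based fungicide spray
2,Potato late blight,Remove infected leaves promptly
"""

def load_info(text):
    """Index disease-info rows by the model's predicted class label."""
    reader = csv.DictReader(io.StringIO(text))
    return {int(row["label"]): row for row in reader}

def describe(predicted_label, info):
    """Turn a predicted class index into a human-readable result."""
    row = info[predicted_label]
    return f"{row['disease']}; organic control: {row['organic_control']}"

info = load_info(disease_info_csv)
print(describe(1, info))
```

In the real app, the class index would come from the CNN's argmax over its softmax output, and the table would hold the full "what, how, and organic controls" text shown to the user.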
Challenges I ran into
Model accuracy: at the start, the accuracy was no more than 70%
Making the product perform live detection of plant diseases
Accomplishments that I'm proud of
I am proud to say that I was able to reach an accuracy of more than 90%, and I had a blast working on this.
What's next for Plant buddy
Integrating it with hardware such as drones
Built With
css
css3
flask
html
learning
machine
machine-learning
python
Try it out
colab.research.google.com | Plant buddy | Plant buddy is an idea to help in improve plant Health . | ['shantam sultania'] | [] | ['css', 'css3', 'flask', 'html', 'learning', 'machine', 'machine-learning', 'python'] | 15 |
10,368 | https://devpost.com/software/social-story | Web App Logo
Vision for future Country pages
Firebase Storage
Firestore
Inspiration
We were intrigued by the statistics on various social issues around us. While there is an abundance of data in this era of technology, data can still appear scarce and scary. We are interested in the story that data tells about our society and want everyone to explore that story with us through this project. For instance, there has been an increase in the number of sexual harassment cases reported since the #MeToo movement, so we decided to illustrate the data trends for sexual harassment cases reported all over the world and see how those trends change with time.
What it does
Social Story is a web application that hopes to tell stories about our society through data. The initial launch hopes to inform the public about the social issue of sexual harassment and how reported cases have risen as more people speak up about their stories. Related data and visualizations for the U.S. are provided. As data is scattered everywhere (especially regarding social issues), we want this platform to act as a means to consolidate it. This will happen both through scraping popular news websites for information and through the open source community. As an open source project, people can contribute data attached to trusted sources, which will then be reviewed by the project's maintainers. There is also a list of resources and help centers for victims of sexual harassment, which could likewise use more community support to cover a larger part of the world.
How I built it
We used React as the main front-end framework for our web application and hosted the database and media storage on Google Firebase. After creating a world map via d3.js , we allow navigation to a country of choice, where the sexual harassment statistics for that country are displayed. We created a user authentication system and a contribution form that loads the information provided by an authenticated user to be stored in Firebase.
Challenges I ran into
We had trouble working with asynchronous functions in Javascript and getting certain elements to render properly in the DOM. There were difficulties extracting the data from CSV files and storing them into the Firestore, as well as fetching those data from the database and displaying them visually (i.e tables, graphs, charts). We especially struggled with data visualization when attempting to use the D3 React library because of structuring the layouts.
Accomplishments that I'm proud of
We managed to create our own authentication system and prohibit contribution to non-users. The data from the Firebase database was successfully fetched from our back-end and applied to our visualizations on the front-end. We were also able to render maps and stacked bar charts using d3.js!
What I learned
We learned about integrating Google Firebase with our React app and how to interact with the Firestore and Storage from the front-end through APIs. We got more practice programming in Javascript and understood why Promises are so important to the functionality. Our team had no prior experience with d3.js and we learned a lot about it.
What's next for Social Story
Social Story depends hugely on public contributions. Our web application currently only features data on the United States, so we are issuing a call to action to raise awareness of sexual harassment statistics. We plan to scrape more data from various trusted sources and expand data availability to all countries, not just the U.S. From there, we think we could integrate other social issues related to sexual harassment.
Built With
antd
d3.js
firebase
react.js
Try it out
github.com
manyaagarwal.github.io | Social Story | Visualize the progression in data trends for sexual harassment cases all over the world | ['Manya Agarwal', 'Deniz Acikbas', 'Cindy Pham', 'yoodee'] | [] | ['antd', 'd3.js', 'firebase', 'react.js'] | 16 |
10,368 | https://devpost.com/software/covid-tracker-pj0h98 | Inspiration
With the recent resurges in COVID activity, it has become apparent that many people do not understand the extreme growth rate of this pandemic. We created this app to notify people of the status of COVID-19 in their area, so that they know and feel acutely the presence and danger of the pandemic.
What it does
Shows you data about COVID cases in your state and analyzes whether the growth is exponential, allowing people to perform a better risk assessment before venturing outside.
How we built it
Using React.js, a framework for web apps.
Challenges we ran into
Analysis of the data.
Accomplishments that we're proud of
The mathematical model used to determine whether case rate growth is exponential or not.
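One way such an exponential-growth check could work is a log-linear fit, sketched below. This is an illustrative model, not necessarily the team's actual one: if case counts grow exponentially, their logarithms grow linearly, so a very high R² for a line fit through log(cases) suggests exponential growth. The 0.99 threshold is an assumption, and over short windows even modest growth can look log-linear, so this is only a heuristic.

```python
import math

def is_exponential(cases, r2_threshold=0.99):
    """Heuristic: least-squares fit of log(cases) against time, then
    report whether the fit is nearly linear (i.e. growth looks exponential)."""
    xs = list(range(len(cases)))
    ys = [math.log(c) for c in cases]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # Coefficient of determination (R^2) of the log-linear fit.
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return r2 >= r2_threshold and slope > 0

print(is_exponential([10, 20, 40, 80, 160]))         # doubles daily -> True
print(is_exponential([10, 20, 35, 45, 50, 52, 53]))  # flattening -> False
```

A positive slope is also required, so a curve that is shrinking exponentially is not flagged as growing.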
What we learned
Start early.
What's next for Covid-tracker
Stats on specific counties and cities, more robust analysis of rates of growth and the inferred risks.
Built With
css3
html5
javascript
react
Try it out
ivanaway.github.io | Covid-tracker | A tracker for COVID in your state. | ['Rohan Damani', 'Anushka Saxena', 'Ivan Zabrodin', 'Chase Rensberger'] | [] | ['css3', 'html5', 'javascript', 'react'] | 17 |
10,368 | https://devpost.com/software/ethical-hacking-using-python | ethical hackimg using python
Inspiration
My inspiration is Elon Musk.
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Ethical hacking using python
Built With
python-package-index | Ethical hacking using python | we can hack the unknown passwords using ethical hacking we are using a tool known python | ['satish019 Bokka'] | [] | ['python-package-index'] | 18 |