hackathon_id: int64 (1.57k to 23.4k)
project_link: string (lengths 30 to 96)
full_desc: string (lengths 1 to 547k)
title: string (lengths 1 to 60)
brief_desc: string (lengths 1 to 200)
team_members: string (lengths 2 to 870)
prize: string (lengths 2 to 792)
tags: string (lengths 2 to 4.47k)
__index_level_0__: int64 (0 to 695)
10,159
https://devpost.com/software/sugrfree
Inspiration COVID-19 has transformed life for everyone, changing the way we live, love, work and, most importantly, eat. With irregular food supplies and on-and-off lockdowns, it becomes crucial to maintain a healthy diet and eating habits within what is available. The worst hit are those with poor immunity who, not coincidentally, overlap with diabetes patients. And the hassle of having to manage a daily diet on top of the day-to-day struggles of the new lifestyle we are forced to adopt is outright cumbersome. Our product solves the problem of having to manually manage diet charts and daily calorie intake. This is particularly appealing to users who are watching their waistline, building muscle mass, trying to lose weight or battling diabetes. What it does Our AI system can track your daily calorie intake with nothing but a picture of the food you are eating right now. It can also suggest foods you can make at home with the ingredients you have, and the closest healthy alternative to any chosen food that compromises neither taste nor nutrition. You take a picture; we manage the rest. How I built it We use PyTorch in two places: an on-board PyTorch model that detects the food class in the React Native app, and an on-cloud PyTorch model that verifies and/or updates the food image's detected class. This powerful combination can acquire more data and process it to improve the model over time. Once the food class is detected, our graph database of 4,000 Indian foods takes over, predicting the calories and estimating the best substitute dish using our algorithm. All of these interactions occur via GraphQL, making the entire experience seamless and fast. Challenges I ran into The major challenges we faced were: a) An ML image recognition system that outperforms comparable classifiers and can operate on low-powered devices in real time. 
b) An AI recommendation system that learns and adapts to your lifestyle and diet using graph technology. c) Integrating a graph database and graph algorithms with traditional mobile systems without losing the power and richness of the relationships between the data. With resilient effort and teamwork, we were able to overcome these hurdles and provide a unique experience like no other. Built With django fastai graphql neo4j python pytorch reactnative Try it out drive.google.com
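The calorie-lookup and healthy-substitute step described above can be sketched in a few lines. This is a toy in-memory stand-in for the real system (which queries a Neo4j graph of ~4,000 Indian foods via GraphQL); the food names, calorie values, and dictionary shape here are all illustrative assumptions.

```python
# Toy sketch of the "find the closest healthy substitute" step.
# The real app walks a Neo4j food graph; this dict is purely illustrative.

FOOD_GRAPH = {
    "samosa":       {"calories": 262, "substitutes": ["baked samosa", "dhokla"]},
    "baked samosa": {"calories": 180, "substitutes": []},
    "dhokla":       {"calories": 160, "substitutes": []},
}

def best_substitute(food):
    """Return the lowest-calorie substitute for a detected food class.

    Falls back to the food itself when the graph lists no substitutes.
    """
    entry = FOOD_GRAPH[food]
    candidates = entry["substitutes"] or [food]
    best = min(candidates, key=lambda f: FOOD_GRAPH[f]["calories"])
    return best, FOOD_GRAPH[best]["calories"]

print(best_substitute("samosa"))  # -> ('dhokla', 160)
```

In the real pipeline the detected class from the on-device model would be the `food` argument, and the candidate set would come from graph relationships rather than a hard-coded list.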
SugrFree
An AI-based diet planning and management app that helps cut down on calories and recommends healthy eating habits.
['Vignesh Srinivasakumar', 'Sainath Ganesh', 'Siddarth Sairaj']
[]
['django', 'fastai', 'graphql', 'neo4j', 'python', 'pytorch', 'reactnative']
89
10,159
https://devpost.com/software/pi-ke4nfz
Screenshots: Logo Auto Cropping, Landscape Colorization, Deep Painterly Harmonization. Inspiration Harmonies is an online photo editor that aims to simplify the process of editing photos. Now you can use the same advanced tools as Photoshop by easily dragging and dropping onto the canvas. We take advantage of computer vision to help our users edit photos appropriately. What it does The Harmonies app helps designers and editors create a full, rich experience for their users and customers. In addition to the regular editing tools (like crop, rotation, drawing, and shapes), we provide the user with three powerful computer vision techniques to cut, color, and add images. Image Colorization: Harmonies takes a grayscale (black and white) input image and produces a colored image that represents the semantic colors and tones of the input. Image Segmentation: Harmonies uses image segmentation to extract parts of an image and return a PNG photo. Deep Painterly Harmonization: Harmonies produces significantly better results than photo compositing or global stylization techniques, enabling creative painterly edits that would otherwise be difficult to achieve. How we built it Front-End: We used React for front-end development. It let us build a single-page application (SPA) with a clean, modern design that is easily maintainable. Back-End: The technologies used in the backend are Node.js, Express, and MongoDB. We secured frontend-backend communication using JWT tokens. RESTful APIs: We used the Flask library to create a web API for both the segmentation and the coloring models. This API takes base64 images as input, runs preprocessing on them, and then feeds them to each model depending on the request URL. The API was then deployed to Azure web services via a git repository and integrated with the front-end editor. 
Image Colorization: We reimplemented Colorful Image Colorization in PyTorch for automatic image colorization. Image Segmentation: We reimplemented Rethinking Atrous Convolution for Semantic Image Segmentation in PyTorch for auto-cropping a person from an image; we used the same concept of image segmentation but, instead of adding masks, return a PNG photo. Deep Painterly Harmonization: We reimplemented Deep Painterly Harmonization in PyTorch to add harmonies to the adjusted element. The computer vision technical details are fully described in our GitHub repository. Challenges we ran into The biggest challenge was that the team worked together remotely, spread over different time zones. It was also difficult to: create a complete machine learning web application using React and Flask while dealing with different APIs and data types; develop a fully automated application workflow. Another big challenge was not having an NVIDIA GPU on our own devices; we solved this by using the cloud for testing and inference. Accomplishments that we are proud of We are proud to have participated in this competition against people from all over the world. This hackathon also helped us meet other incredibly talented people, work as a team, and take on challenges that put our problem-solving skills to the test. This was our first time working as a team, and we successfully created a full MVP during the hackathon. Moreover, our model has been deployed as a real-life project and can easily be used. What we learned How to integrate a backend API with a frontend and secure it with JWTs. Deploying a machine learning model to the cloud. Working as a team and sharing ideas. Working completely remotely with a team for the first time. 
What's next for Harmonies The future of Harmonies rests on two main pillars: technical and commercial. On the technical side, we will retrain our models on a bigger dataset to get better results for image colorization and auto-cropping, and we are studying additional computer vision features such as image enhancement and converting images into pictures. We will also optimize the code to reduce run time. On the commercial side, we are considering how to cover the cloud costs by adding ads to the service, offering premium packages, or making use of the imaging data. In the future, we will develop a mobile version of our website to support more users. Built With amazon-web-services azure express.js flask google-cloud jwts mongodb node.js pytorch react reactstrap torchvision Try it out harmonies.studio github.com
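The API described above takes base64-encoded images as input. A minimal, standard-library-only sketch of that decoding step is below; the `"image"` key and the payload shape are assumptions for illustration, not the project's actual request schema.

```python
import base64

def decode_image_payload(payload):
    """Decode the base64-encoded image field of a JSON request body into raw bytes.

    The real service would hand these bytes to PIL/torchvision preprocessing
    before feeding a model; the 'image' key is an assumed payload shape.
    """
    return base64.b64decode(payload["image"])

# Round-trip example with dummy bytes standing in for a PNG file.
raw = b"\x89PNG...dummy"
payload = {"image": base64.b64encode(raw).decode("ascii")}
assert decode_image_payload(payload) == raw
```

In a Flask handler, `payload` would come from the parsed JSON request body, and the decoded bytes would be preprocessed and routed to the segmentation or coloring model depending on the request URL.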
Harmonies
Because your images need some magic
['Mohamed Amr', 'Mahmoud Yusof', 'Mohamed Abdullah', 'AbdelRahman Emam', 'Ahmed Samir']
[]
['amazon-web-services', 'azure', 'express.js', 'flask', 'google-cloud', 'jwts', 'mongodb', 'node.js', 'pytorch', 'react', 'reactstrap', 'torchvision']
90
10,159
https://devpost.com/software/progressive-image-inpainting
Recently, learning-based algorithms for image inpainting have achieved remarkable progress in dealing with squared or irregular holes. However, they fail to generate plausible textures inside the damaged area because surrounding information is lacking there. A progressive inpainting approach is advantageous for eliminating central blurriness: restore well, then update the mask. In this paper, we propose a full-resolution residual network (FRRN) to fill irregular holes, which proves effective for progressive image inpainting. We show that a well-designed residual architecture facilitates feature integration and texture prediction. Additionally, to guarantee completion quality during progressive inpainting, we adopt an N Blocks, One Dilation strategy, which assigns several residual blocks to one dilation step. Correspondingly, a step loss function is applied to improve the performance of intermediate restorations. The experimental results demonstrate that the proposed FRRN framework for image inpainting performs much better than previous methods, both quantitatively and qualitatively. Built With jupyternotebook pix2pix python pytorch Try it out drive.google.com
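The "restore well, then update masks" idea above can be illustrated on a plain grid: each step fills only the ring of hole pixels that border already-known pixels, then shrinks the mask. This is a conceptual sketch of progressive mask dilation only (the paper's FRRN does the restoration with residual blocks); the function and its grid representation are assumptions for illustration.

```python
# Illustrative sketch of progressive inpainting's mask update:
# fill the boundary ring of the hole, shrink the mask, repeat.

def progressive_fill_steps(mask):
    """mask: 2D list where 1 = missing pixel, 0 = known. Returns steps to fill.

    Assumes the hole borders at least one known pixel.
    """
    h, w = len(mask), len(mask[0])
    steps = 0
    while any(1 in row for row in mask):
        ring = [(i, j) for i in range(h) for j in range(w)
                if mask[i][j] == 1 and any(
                    0 <= i + di < h and 0 <= j + dj < w and mask[i + di][j + dj] == 0
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))]
        if not ring:
            raise ValueError("hole has no known neighbours to grow from")
        for i, j in ring:
            mask[i][j] = 0      # "restore" the boundary ring
        steps += 1              # one dilation step ("N Blocks, One Dilation")
    return steps

# A 1x5 row with a 3-pixel central hole fills from both sides in 2 steps.
print(progressive_fill_steps([[0, 1, 1, 1, 0]]))  # -> 2
```

In FRRN, each such dilation step is assigned several residual blocks, and a step loss supervises the intermediate restoration at each ring.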
Progressive Image Inpainting
Dealing with squared or irregular holes
['Jeevesh Verma']
[]
['jupyternotebook', 'pix2pix', 'python', 'pytorch']
91
10,159
https://devpost.com/software/unlock-jewtxs
Inspiration me What it does people crazy How I built it 98 Challenges I ran into Accomplishments that I'm proud of professionally hacking What I learned will see What's next for Unlock everything Built With better live-matrix Try it out zeewest.zendesk.com
Unlock
Home
['Zdenek Gazi']
[]
['better', 'live-matrix']
92
10,159
https://devpost.com/software/generative-imposter
Inspiration I've always been inspired by the video of the news anchor reading out news and was amazed when I first heard that it was actually an AI that had synthesized that voice. I'd like to make this project in the same spirit. Built With pytorch
.
.
['Aneesh Chawla']
[]
['pytorch']
93
10,159
https://devpost.com/software/pytorch-yolov3
Detection examples: messi, traffic, dog, giraffe. Inspiration We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that’s pretty swell. It’s a little bigger than last time but more accurate. It’s still fast though, don’t worry. At 320 × 320, YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric, YOLOv3 is quite good. It achieves 57.9 AP50 in 51 ms on a Titan X, compared to 57.5 AP50 in 198 ms by RetinaNet: similar performance but 3.8× faster. What it does Uses pretrained weights to make predictions on images. The table below displays the inference times when using images scaled to 256×256 as inputs. The ResNet backbone measurements are taken from the YOLOv3 paper. The marked Darknet-53 measurement shows the inference time of this implementation on my 1080 Ti card. How I built it numpy torch>=1.0 torchvision matplotlib tensorflow tensorboard terminaltables pillow tqdm Challenges I ran into One of the challenges was implementing complex mathematical formulae in Python to derive certain parameters. However, the Python package on A2019 made it really easy to call Python functions inside the code and get the output. Creating a waypoint plan for the autopilot was also a challenge initially; using the product documentation, I was able to create the flight plans with the Log to File package, making all the values dynamic. Accomplishments that I'm proud of I am happy that I was able to use my academic knowledge of ML, my work expertise in AI/ML, and my curiosity to explore new possibilities in creating this solution, which can help the world in this traumatic situation. I am also happy that I was able to bring out the true sense of "Torch" in the name "PyTorch". What I learned It was a great experience to develop this project through a lot of research, trial and error, and new learning. 
I was able to start off with ML and I'm looking forward to trying out further possibilities. What's next for PyTorch-YOLOv3 PyTorch Built With pytorch yolo Try it out pjreddie.com
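The AP50 numbers quoted above are computed against the IoU (intersection-over-union) detection metric at a 0.5 threshold. A minimal stand-alone IoU for axis-aligned boxes given as `(x1, y1, x2, y2)`, included here as an illustrative sketch rather than the repo's actual implementation:

```python
# IoU for two axis-aligned boxes (x1, y1, x2, y2); the metric behind AP50.

def iou(a, b):
    # Intersection rectangle (clamped to zero when boxes don't overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping in a 5x10 strip: IoU = 50 / 150.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # -> 0.3333...
```

A detection counts as a true positive at AP50 when its IoU with a ground-truth box of the same class is at least 0.5; the same function also drives non-maximum suppression during inference.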
PyTorch-Real-Time-Object-Detection
A minimal PyTorch implementation of Real-Time Object Detection, with support for training, inference and evaluation.
['Sambhav Kumar Thakur']
[]
['pytorch', 'yolo']
94
10,159
https://devpost.com/software/torchtoolbox
Inspiration I like PyTorch, and I found that there is code I rewrite many times, and that some new, useful work and tricks are not yet available as PyTorch code. What it does The toolbox has two main parts: tools and implementations of recent work. I made this project hoping it helps you write PyTorch more easily and find the good things you need. How I built it I wrote it with Python 3 and PyTorch. Challenges I ran into I think the only challenges were the algorithms and reproducing papers correctly. Accomplishments that I'm proud of Clean code and correct paper reproductions. What I learned Through this I strengthened my understanding of PyTorch and improved my Python skills at the same time. What's next for TorchToolbox I'll continue to update it as I find good tools and new work. Built With python pytorch Try it out github.com
TorchToolbox
Aiming to make your PyTorch code easier to write, more readable, and concise. You can also regard this as an auxiliary toolbox for PyTorch, containing the tools you use most frequently.
['Devin Yang']
[]
['python', 'pytorch']
95
10,159
https://devpost.com/software/insurance-cost-prediction-using-linear-regression
We're going to use information like a person's age, sex, BMI, number of children, and smoking habits to predict the price of yearly medical bills. This kind of model is useful for insurance companies to determine the yearly insurance premium for a person. The dataset for this problem is taken from: https://www.kaggle.com/mirichoi0218/insurance We will create a model with the following steps: Download and explore the dataset. Prepare the dataset for training. Create a linear regression model. Train the model to fit the data. Make predictions using the trained model. Built With basics: linear logistic minimal): pytorch regression regression: Try it out jovian.ml
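The steps listed above (prepare data, create a linear model, train, predict) can be sketched without any framework. The project itself uses PyTorch; this framework-free, one-feature toy (a single scaled feature predicting a yearly bill) is illustrative only, and the data is synthetic.

```python
# Linear regression by gradient descent on mean squared error:
# fit y = w*x + b to (x, y) pairs.

def train_linear(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by full-batch gradient descent on MSE."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Synthetic data generated from y = 3x + 1, standing in for scaled features.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 4.0, 7.0, 10.0]
w, b = train_linear(xs, ys)
print(round(w, 2), round(b, 2))  # close to 3.0 and 1.0
```

With PyTorch, the same loop becomes `nn.Linear(in_features, 1)`, an MSE loss, and an optimizer step, which is what the project's notebook walks through.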
Insurance cost prediction using linear regression.
In this project we’re going to use information like a person’s age, sex, BMI, no. of children and smoking habit to predict the price of yearly medical bills.
['Garima Singh']
[]
['basics:', 'linear', 'logistic', 'minimal):', 'pytorch', 'regression', 'regression:']
96
10,159
https://devpost.com/software/data-documentary-of-sound
Screenshots: choose a theme, generated poems, give a prompt for poems. Inspiration In a world overloaded with information and misinformation, could we just have a pure moment of joy reading kids' poetry? What it does Kids Poetry Slam generates kid-style poems based on user input or user-chosen keywords. How I built it Using PyTorch, Flask, and GPT-2. Challenges I ran into Combining a fine-tuned GPT-2 model with rule-based processing to achieve an ideal result; deployment of the app. Accomplishments that I'm proud of Being able to transform existing text into user-customized generative prose while retaining the original style. What I learned Deep learning: text generation. Deployment: web apps. What's next for Kids Poetry Slam We could further fine-tune poems for different moods. Built With bootstrap flask python pytorch Try it out github.com
Kids Poetry Slam
A web app that generates the cutest poems ever
['Echo Liu', 'Boyu Jiang']
[]
['bootstrap', 'flask', 'python', 'pytorch']
97
10,159
https://devpost.com/software/leepi
Screenshots: letter and word gesture demos, pitch slides (ppt1–ppt6), user flow. Pitch Leepi wants to make learning sign language accessible and free for everyone! Learning happens with the help of an avatar who performs precise hand movements for everything from basic letter symbols to complex gestures for daily use. It is the only app where you can perform the hand gestures in front of the camera and get feedback in real time. Our learning system uses short, crisply designed walkthroughs of lessons, and uses badges and stars as rewards to motivate you through your journey! Inspiration When our team interviewed target users (students) for our app, we encountered a common situation: they had never understood the structure of the English language. They have a hard time practicing gestures around others. They don't understand books without illustrations. Sometimes the school also lacks the infrastructure to provide an advanced setup for gesture learning. If a camera-based application were available where the user could practice while learning the English language with graphics, it would not be a boring task. What it does The app uses on-device machine learning models to analyze hand gestures. The hands and fingers are detected in the input video from the front camera and then fed to another model for gesture recognition. All this happens in real time thanks to pruned models. The video is inferred entirely on the phone without any need to send it to the cloud. Beyond self-paced learning, the app's major features are speedy feedback, built-in privacy, and zero network data usage. 
Challenges we ran into Data collection. Training on-device models. UX for the application. Scalability across all words, signs, and languages. Accomplishments that we're proud of Interactive learning structure. Pocket-size lessons for a fun experience. Gamification for motivation: badges and stars. All processing happens on the phone, so no data usage is required. Protects your privacy, as the data never leaves the device. What's next for Leepi Learning a new language depends mostly on the inclination and ability of the user. If the learning procedure is fun and prompts the user to practice continuously, the process is seamless. The app will keep trying to improve that way of learning. The backend is a machine learning model; it will improve with its users through continuous training and validation. On the business side, partnerships with communities involved in the field and growing the user base across different languages will be the priority. Announcement Leepi (लिपि) was released on 15 June as one of the winners of the Google Android Developer Challenge 2020. With support from Google's technical and design teams, Leepi (लिपि) was built with privacy, scalability, reliability, and the user learning experience in mind. Google has been promoting the application by showcasing it around the world. Various reviewers have received customized packages and the Android Developer magazine as part of the promotion. An online landing page was also made for it. Here are the related articles: https://developer.android.com/helpful-innovation#leepi-section https://www.blog.google/products/android/developer-challenge-winners/ https://android-developers.googleblog.com/2020/06/dev-challenge-winners.html Built With mediapipe pytorch tensorflow Try it out play.google.com princep.github.io
Leepi
Our app helps people learn American Sign Language. It gives instant feedback on the signs you make in front of the camera; it is the only app that does this.
['Prince Patel']
['All Participants']
['mediapipe', 'pytorch', 'tensorflow']
98
10,161
https://devpost.com/software/missing-justice-ke5rfx
Missing Justice is a solidarity campaign to raise awareness about murdered and missing Indigenous women, girls, trans and Two-Spirit people here in Montreal. We assist the Indigenous community by supporting them with the logistics of holding vigils and memorials to commemorate their loved ones. We once had a website that worked okay, but when our outreach coordinator left our organization, the site was taken offline and we had no access to fix that. Vision: A user-friendly, bilingual (maybe one day with an option for Indigenous languages) website, pages for online memorials, a shop where we can sell hoodies to fundraise, and a place to eventually create a timeline of MMIWGT2S activism in Montreal. Built With odoo Try it out www.missingjustice.ca github.com
Missing Justice
New Built Website for Missing Justice Campaign for the Center for Gender Advocacy
['Carms Ng']
[]
['odoo']
0
10,161
https://devpost.com/software/accm-aids-community-care-montreal
The Teacher's Sex-Ed Toolkit is a collection of free, downloadable lesson plans on sexual health topics for elementary and high school teachers in Quebec. These lesson plans are written to be as inclusive as possible, particularly for queer and trans youth, while giving teachers access to free materials that are appropriate for the Quebec Sexuality Education curriculum. They also offer sexual health lesson plans that conform to Quebec’s elementary level sexuality education program. This project aims to redesign and rebuild The Teacher's Sex-Ed Toolkit Wordpress website. The website should be made as accessible as possible to expand the toolkit’s reach by creating an updated, user-friendly site that’s clean and modern. The website also needs to include a French version and be optimized for desktop, mobile, and tablet. This project entails designing and building on a new Wordpress theme, migrating all content from the current site to the new one and making sure both the front and backend are optimized for the user. Expertise Required: Website design, Wordpress experience, website development, website migration. Language: English
ACCM (AIDS Community Care Montreal)
Teacher's Sex-Ed Toolkit
['Rob Gordon', 'Sangwoo Park', 'Michael James Doyle']
[]
[]
1
10,161
https://devpost.com/software/conseil-lgbtq-extranet
The Conseil québécois LGBT (CQ-LGBT) is the central reference in Quebec for defending the rights of lesbian, gay, bisexual and trans people. The Conseil québécois LGBT seeks to consolidate the rights of LGBT people in Quebec, in addition to campaigning for rights yet to be acquired, so that no one is left behind in the recognition of sexual and gender diversity. Requirements for the Conseil québécois LGBT extranet Characteristics • Simple • Intuitive • A platform for consultation, not collaboration Features Administrator (Conseil québécois LGBT) The administrator lets users create a profile (with username and password) and must validate it. The administrator can exclude users from the platform. The administrator is the only one who can publish documents (videos, texts, etc.) on the platform. User profile (one profile per organization) Name of the organization; profile picture (organization logo); short description of the organization. Three sections AGENDA: sharing of events, webinars and more. Users can directly edit this section. RESOURCES: sharing of documents, articles, reports, press releases, etc. These resources will be sorted by source and date. Resources will be added exclusively by the Conseil québécois LGBT (the administrator) after users complete a form to submit documents, if they wish. Some documents can be downloaded. TRAINING: training videos with descriptive text (webinar sharing, for example), with the option to watch videos in full-screen mode. Optional section A newsfeed allowing a group discussion (a bit like Facebook groups), showing the latest documents added, with the ability to comment. Built With css html5 javascript wordpress
Conseil LGBTQ Extranet
Design & implementation of a space/portal/extranet for the Conseil LGBTQ
['Francis L']
[]
['css', 'html5', 'javascript', 'wordpress']
2
10,161
https://devpost.com/software/sherbourne-health
Sherbourne Health’s “Supporting Our Youth” (SOY) program is a set of health promotion services and programming centered on supporting the health and well-being goals established by LGBT2SQ youth and young adults, many of whom are homeless, racialized and newcomers to Canada. When the COVID-19 crisis hit, Sherbourne made the difficult but necessary decision to put our SOY programs (and many of our other social groups and drop-in programs) on hold until we could find a safe and effective way to serve our group participants. SOY halted programming in mid-March and staff began to work from home and support participants remotely with one-on-one sessions by phone. We also began to provide an online weekly “Staying Connected” video series covering mindfulness activities, stories, and other fun activities to help participants stay connected with their community. Sherbourne has now grown these forms of virtual support by expanding our online programming and creating “Community Check-Ins” through platforms like Zoom and by taking our annual Pride Prom online as well to, again, ensure that participants stay connected and supported. Even with these successes, Sherbourne knows that we have more work to do in order to engage all of our clients virtually, especially those who have difficulties accessing online services. Sherbourne would like assistance from PrideHacks to advise us in further developing online platforms for SOY programs and in the creation of a project plan that will allow us to better engage all of our clients including those who face technological barriers. PrideHacks’ expertise will also prove invaluable when researching the types of technology and security measures Sherbourne needs to invest in to run SOY programs effectively online. Try it out docs.google.com
Sherbourne Health
Sherbourne SOY Tech Update Project
['Min Zhang']
[]
[]
3
10,161
https://devpost.com/software/interligne
Interligne is a first response centre that provides help and information to those concerned with sexual orientation and gender diversity. Their services are accessible 24 hours/day, meaning they are able to offer support to the LGBTQ+ communities, their friends and family as well as to service providers in the health, education and social service sectors. Through their outreach and awareness activities, they also promote greater openness to LGBTQ+ realities. This project centres around optimizing Interligne’s current Wordpress website to allow for their members to create and access their profiles on the site to update and add information to the LGBTQ+ resources guide. Currently this is a manual process on Interligne’s part, and this project seeks to streamline this system to allow organizations to register themselves and be able to upload content to the guide. The goal of this project is to create a system to make adding new members to the site autonomous and simple, to create permissioned member profiles to give them the ability to edit content in the guide, and to improve the filters, SEO tags, etc. to make the site more visible. Expertise Required: Wordpress experience, SEO experience, CRM/CMS experience. Language: French
Interligne
Website Update
['Nicolas Barrière-Kucharski']
[]
[]
4
10,161
https://devpost.com/software/dignity-network-canada
Dignity Network Canada is a national network of 37 organizations interested in advancing global LGBTIQ human rights. Dignity Network acts as a Canadian hub for communication and knowledge-sharing across organizations on global LGBTIQ and sexual orientation, gender identity and expression and sex characteristics (SOGIESC) human rights issues, especially emphasizing the perspectives of international partners. They convene and host gatherings, conferences and meetings on Canada and global SOGIESC human rights, encourage and support public awareness, education and research related to global SOGIESC human rights issues, and engage and support advocacy efforts both nationally and internationally to advance SOGIESC human rights and inclusion. This project aims to first create a complete assessment of Dignity Network’s current technological reality. Then, based on this assessment, to create a brief report outlining Dignity Network’s technology and communication landscape, as well as their needs and recommendations. This project will need a team to conduct an assessment of the organization's current tech and communications reality and provide a set of recommendations for implementation of a roadmap. This consulting team will assess Dignity Network’s needs and then provide recommendations for the best course of action. Expertise Required: tech consulting, project management, IT, budgeting. Language: English
Dignity Network Canada
Tech and Comms Assessment
['Kirk Brown']
[]
[]
5
10,161
https://devpost.com/software/fondation-emergence
Projet PrideHacks Fondation Émergence: Remise de Prix La Fondation Émergence a pour mission d’éduquer, d’informer et de sensibiliser la population aux réalités des personnes LGBTQ+ au travers de différents programmes parmi lesquels la journée internationale contre l’homophobie et la transphobie. Le gala de la Fondation Émergence remet des prix à ceux et celles qui sont impliqué.es pour la cause LGBTQ+. Pour l’édition de cette année, et celles à venir, la Fondation Émergence aimerait se dotée d’un micro-site (Wix) afin de vendre des tickets pour l'événement. Idéalement, le micro-site pourra: Permettre l’achat de billets pour le Gala selon plusieurs tarifs. Afficher des informations générales à propos du Gala. Afficher une description des 3 trois différents prix. Afficher les archives de qui a gagné quel prix dans le passé. Il est important que le site puisse être entretenu par la Fondation Émergence pour les années à venir, d’où le choix d’un micro-site avec Wix, une plateforme connu par l’équipe de la Fondation Émergence. Expertise Requise: Wix, UI/UX. Langue: Francais Projet PrideHacks Fondation Émergence: Formulaire de Commande La Fondation Émergence a pour mission d’éduquer, d’informer et de sensibiliser la population aux réalités des personnes LGBTQ+ au travers de différents programmes parmi lesquels la journée internationale contre l’homophobie et la transphobie. La Fondation Émergence envoie gratuitement du matériel pédagogique à travers le Canada et offre le téléchargement de ce matériel gratuitement à l’international, par l’entreprise de leur site web, créer sur Wix. Par contre, leur formulaire de commande actuel ( https://fondation-emergence.membogo.com/fr/commandes ) n’est pas user-friendly et n’est pas responsive. Une refonte de ce formulaire est donc nécessaire afin de faciliter la prise de commandes et la gestion celles-ci par l’équipe de la Fondation Émergence. 
Fondation Émergence therefore needs a new order-management system in addition to a new form on their website. This system should ideally support inventory management and report printing. The form itself must be bilingual and allow users to make donations. Required expertise: Wix, UI/UX. Language: French
Fondation Émergence
Awards Ceremony & Order System
['Julien Ouellet']
[]
[]
6
10,175
https://devpost.com/software/boomerang
Inspiration Imagine you've just been rejected for what feels like the 1000th time. Why were you rejected again? You've been practicing day and night, and could've sworn you aced that job interview. But you were rejected again. You ask, what did I do wrong? What am I missing? You're asking all these questions, but nobody is answering them. Although interview feedback is highly valued, it is difficult for candidates to get. A research study by LinkedIn found that of the 94% of candidates who want feedback, only 40% actually receive it. And of the 60% of candidates who don't receive feedback, 77% never got a response from their interviewer. Even among those lucky enough to receive feedback, 76% of candidates found it useless. "For the most part, it was the same 'try again...there was just a stronger candidate than you this time.'" Of the 94% who value candidate feedback, 90% are dissatisfied with the candidate experience because current forms of feedback and post-interview processes are not effective in addressing candidates' goals. Ineffective feedback is a major problem that impacts too many people to be left unaddressed. What it does Companies don't give feedback because of three main obstacles: Lack of time and resources due to hundreds of applicants and interviews Fear of legal implications from misinterpreted feedback Complex hiring processes that prevent the person who wrote the feedback from sharing it with candidates However, all three obstacles can largely be solved through a more streamlined feedback process. Boomerang addresses all of these obstacles by facilitating the exchange of feedback between candidate and company. This works because companies can gain immense value from giving feedback once the obstacles are addressed. Candidates ...
Will share a positive experience with peers and networks, improving the company's reputation among potential employees Will improve their skills to become better potential employees Are 4X more likely to consider future opportunities with the company Using Boomerang, we can bring value to both parties by trading feedback. Candidates complete a feedback form for the company in order to view the feedback the company has for them. Boomerang facilitates this exchange with specific legally compliant forms that address all of the obstacles companies face. Our key features are: Easy-select options & reusable feedback forms - A comprehensive database of feedback options with multi-select, plus stock feedback forms, allows feedback to be given faster Pre-approved feedback options - A database of vetted options prevents discriminatory language, and our platform mandates that no legal action can be taken Interviewer-provided feedback - Instead of going through recruiters, interviewers can directly give feedback to the candidate For a better visualization of how our features are different, please view our demo and Figma. How we built it We are progressing through the phases of the product cycle, including user research, product ideation, and prototyping. After conducting user research through secondary testimonies and surveying members of LinkedIn, we developed mockups using Figma. We also worked with industry professionals to get feedback on our product design. Next, we will implement our web app with our outlined features. Challenges we ran into Providing value/incentives for companies to integrate with our platform Designing an intuitive user flow Accomplishments that we're proud of Watching our team go from early ideas being tossed around to a well-thought-out product through the use of mockups, prototyping, user personas, interviews, and user research was very rewarding.
Getting positive feedback from our mentor, Michael Barnes, has excited us for when we begin to share our MVP and get user feedback, which will allow Boomerang to reach its full potential! What we learned Each of our team members came in with different backgrounds and experiences, so we all learned not only from the product cycle we followed and the steps we took, but also from each other. As a group we have all gained a ton of insight into how a product manager operates and thinks, as well as the type of work they do, which will help us as we look for future opportunities! What's next for Boomerang First, we plan on finishing up the MVP so we can begin our beta testing. After using what we learn from testing to further improve the MVP, we will start marketing our product and work on forming partnerships with universities. Eventually, we hope to expand from SWE internships to a variety of internships and a variety of full-time jobs as well! Additionally, here is a list of post-MVP features we would like to add: Features: Scrape job descriptions for evaluated skills View organization of candidates by role Allow creation of multiple surveys per role Data visualization for candidate exp. feedback In-platform feedback reminder notifications Company admin editing privileges Built With figma Try it out www.figma.com www.figma.com docs.google.com
Boomerang
Provide growth opportunities to companies and job candidates through simple, enlightening feedback.
['Lucy Liu', 'Randy Shao', 'Vicky Liu', 'Luke LeVasseur', 'Jacob Brown']
['First Place']
['figma']
0
10,175
https://devpost.com/software/tecto
Inspiration An Airbnb-type service for electronics What it does Connects peers to meet up and get expensive electronic equipment like cameras, DJ controllers, and drones for cheap by renting it How I built it Miro and Figma, with my team Challenges I ran into Working under a time constraint, working with a team with varying schedules Accomplishments that I'm proud of Learning the entire process of building a product from the ground up What I learned How to use Figma and Miro, how to think in the eyes of a PM What's next for Tecto Expand markets and gather a larger category of equipment Built With figma miro
Tecto
Electronics Rental Platform
['Vijay Laxmi .', 'Alexander Epstein', 'Hong Dang', 'yogeshsarathy', 'Akshita Agrawal']
['Second Place']
['figma', 'miro']
1
10,175
https://devpost.com/software/cumulus
Our platform utilizes existing resources on the internet to guide students through a college and career exploration journey. Our Pitch Click here to watch the video of our pitch presentation! Make sure you set the video to 1080p for the best viewing quality. Inspiration We came together as a team because we all experienced a similar struggle: a lack of exposure to, and exclusion from, the tech industry. Before we decided that the tech industry was where we wanted to grow our careers and that product management was a path we wanted to pursue, we were high school and college students feeling around in the dark. We didn't know what majors were out there when applying, and our scopes of knowledge were limited to and influenced by what our friends, families, and schools knew. Tech also wasn't something most of us considered, since we didn't see people who looked like us in the industry. The tech industry is saturated with Caucasian and Asian males, lacking representation and inclusion of females, LGBTQ+, Black, Latinx, and many more identity groups. Our mission through this Project Jam was to conceive a platform that our high school selves would have needed and found helpful in the major and career search. What it does Cumulus offers three main features: exploration, mentorship, and community. We give students an approachable way to explore various careers and roles in the tech industry through what we call explorations. A student simply takes a curated, 10-question aptitude quiz and is suggested matched exploration paths. On these exploration paths, a student is guided through a curriculum that uses existing online resources to learn skills related to that path, watch videos and testimonies, read articles, and join Slack communities. Students can also reach out to mentors in that field to humanize the explored career path. Through conversations with mentors, students make purposeful connections and get a head start on building their networking skills.
Cumulus also has community groups that attract specific identities and career interests. This allows students to connect with and see other students who look and think like them represented in the field. The communities will be facilitated and led by mentors so that students also have individuals to look up to within certain identities. How we built it Our prototype is built in Figma. It is a functioning, interactive prototype, and we encourage you to play around with it! We performed user testing sessions with some high school students using the prototype we built, and the overall response was incredibly positive. Feel free to take a look at our intuitive design here. Note: This prototype works best on a laptop/desktop rather than a phone. Also, make sure you open the link in incognito mode; it loads better that way. Challenges we ran into One of the biggest challenges we faced was scheduling. As an international team, it was tough at times to communicate across time zones, set deadlines, and arrange meetings. However, being an international team also proved to have its positives. We had really diverse and unique perspectives from everyone, and that is what elevates our platform and idea to what it is now. Accomplishments that we're proud of The most rewarding part of this process was receiving positive feedback from our mentor, Arinze (shoutout!), and from users during testing! We had poured so much hard work and thought into every button, layout, and feature, and hearing users respond with compliments and affirmations that this is a platform they would use and are looking for made everything worth it. What we learned We learned a lot about the product-development cycle thanks to the way the Project Jam was formatted. Even more importantly, we learned a lot from each other. Since we were a diverse group of majors and skillsets, we often had a more experienced person lead us through each stage of the process. This really helped expand each person's skillset.
Lastly, we learned how important it is to ask the question: "If I were a user, would I use this?" There were some features and aspects of our design we had brainstormed but eventually cut out because we really wanted to buckle down and focus on user-centered design and thinking. What's next for Cumulus Cumulus is only just beginning! We've got a solid analysis of the market, a tested and reliable prototype solution, and user research to back the demand and features of our platform. We'd love to add more features (mentioned in our pitch) in the future and even look into developing and deploying this platform to make it a reality! Built With figma Try it out bit.ly
Cumulus
Unclouding the tech industry by helping underrepresented minorities in high school explore the limitless opportunities in higher education.
['Cristie Huang', 'Kim Tran', 'Natasha Hsu', 'Sharif Abbasi', 'Joyce He']
['Third Place']
['figma']
2
10,175
https://devpost.com/software/wander-op749k
Inspiration We were inspired to work on this project by our shared passion for traveling and our desire to create a platform where travelers can explore, share, and plan their trips. What it does Through Wander we want to help travelers, first, personalize their travel exploration; second, make collaboration among friends simple and fun; and third, inspire each one of them to explore. How we built it We went from user research, to feature prioritization, to design sprints, to development. Challenges we ran into We ran into some design/tech discrepancies and estimation issues early on, but were able to resolve those issues through working sessions and retros. Accomplishments that we're proud of We're so proud of what we were able to accomplish in the past two months, going from design to a tangible prototype which we're excited to test with potential users. What we learned The whole two-month process was a great learning experience. With all the team members sitting in different time zones, empathizing with each other's schedules was the most important thing. We followed the product life cycle adapted from the Nielsen Norman Group's Design Thinking 101 framework, and it helped us a lot in coming up with a structured roadmap. Apart from that, the development team also learned a lot by putting into practice the technology they've been studying. What's next for Wander Currently, we are focusing on the pre-trip phase of the application, and the phase 1 proof of concept has features focused on planning and collaboration. We want to continue testing and iterating on our current features and release a beta later this year. Try it out www.canva.com
Wander
Wander, making group trip planning simple and fun.
['Abhishek Mundra', 'mel duong', 'Ahana Khandelwal', 'Karishma Muni', 'Shoban Singh']
['Runner-Up']
[]
3
10,175
https://devpost.com/software/news-buds
Inspiration This past spring showed us the extraordinary influence of media: news media and social media. Differences in beliefs between people led to conflicting reports on the same topics. Differences between stories told on mainstream news media and stories shared on social media propelled social movements forward. Challenges we ran into We changed our solution to address the issues in news media twice, because we realized that fake news and misinformation are very complex issues to tackle. We had to take a few steps back to see the overall picture of how these individual issues fit together in the scope of news consumption, and to reevaluate what both our own user research and published research journals/articles conveyed about these issues. Accomplishments that we're proud of We're a global team with members from the USA and Ireland. The four of us were not familiar with each other, yet we coordinated meetings across 3 different time zones and stayed motivated to work on this project throughout its 3-month timeline. It's amazing how far we've gotten in our final prototype, because we only had 2 weeks to design and create it after deviating from our previous solution. Our first user research survey on news consumption gathered 180+ voluntary responses, so we know many people are interested in our project! Product Description News Buds is a mobile platform redesigning the way people interact with current news by bridging the gap between different styles of news intake and tackling modern issues in news consumption. Our mission is to empower our users to stay informed and engage in civil conversations on current events. We hope to address emerging issues such as fake news and bias, and to engage various users through a strong user experience.
We are passionate about this problem because current advancements in social technology have led us to access news from so many different sources without being educated about fake news and media bias. We hope that our product continues to raise serious questions about how we take in information in this age of information. The News Buds Team Ilene Kang, LinkedIn: /in/ykang1001 Janice Liu, LinkedIn: /in/jjaniceliu/ Julia Hawley, LinkedIn: /in/julia-hawley/ Soundarya Senthilnathan, LinkedIn: /in/soundarya-senthilnathan/ Acknowledgements: We would like to thank our mentor, Nikhil, for his honest feedback on our project! We would also like to thank Product Buds for organizing this Project Jam so that we could practice our PM skills. Built With figma qualtrics
News Buds
Stay informed and engage in civil conversations on current events
['Ilene Kang', 'Janice Liu', 'Julia Hawley', 'Soundarya Senthilnathan']
['Runner-Up']
['figma', 'qualtrics']
4
10,175
https://devpost.com/software/pandabox
Inspiration We were first-gen students who had no idea what to expect in college besides class. This leaves first-gen students at a disadvantage when it comes to the things nobody tells you about: networking, internships, etc. We aim to bridge that gap and set first-gen students up for success! What it does You give us some info -> we create a profile -> resources are catered to your needs and interests -> we help you every step of the way -> profit How I built it With love <3 Challenges I ran into Don't try to run before you can crawl; sometimes we got ahead of ourselves (it's ok, we were excited) Accomplishments that I'm proud of We know what we're building and who we're building for What I learned TBD What's next for PandaBox A $1 billion valuation Built With love Try it out docs.google.com
PandaBox
Pandora's Box for First-Gen students, taking the legwork out of researching for school
['Joel Montano']
['Runner-Up']
['love']
5
10,175
https://devpost.com/software/helpmates-qjbepi
Inspiration Mental illness is the leading cause of disability in the world and can affect anybody, regardless of their background. While there has been a reduction in mental health stigma around the world, there is still a lot that needs to be done. Mental health among youth, for example, has been rapidly deteriorating over the years. This led us to explore possible solutions that can improve mental health. We devised a solution that focuses on maintaining and improving well-being for the otherwise healthy individual (i.e. someone who may not have been diagnosed with a mental illness yet), because we believe all problems should be tackled proactively. What it does Helpmates scores individuals on a well-being scale and takes an inventory of their interests and of what kind of person they would want to connect with. Using machine learning algorithms, users are given recommended profiles to connect with for 1:1 communication. This idea stemmed from scientific research revealing that something as simple as talking to another person can improve the state of your mental health. In addition to this pairing to encourage communication, individuals can keep track of their well-being score over time and will be provided with tips to increase it. There will also be a newsletter feature that all users can access, which will include resources and information to learn about mental health and how to improve it. Our primary goal is to encourage communication and provide information so that users can learn and take control of their own mental well-being. How I built it We completed several phases of the product cycle, including user research, product ideation, and prototyping. We used Figma to design the app, sent it out to our prospective customers for feedback, and improved the design. The next step in our product cycle is to implement the algorithms and equip Helpmates with all the capabilities outlined in our deck.
Challenges I ran into One of the major challenges we faced was scheduling all-hands meetings, since our members are spread across different countries. However, we feel that being from different countries enhanced the diversity of our team! We faced some challenges throughout these months in different phases of our product development, but we were able to overcome them thanks to collaborative discussion and our mentor! Accomplishments that I'm proud of We are proud that we were able to identify a gap in society that affects nearly everyone and that we created a concrete plan for a tangible solution to bridge this gap. What I learned As a team, each of us gained soft and technical skills throughout this process and experienced the different phases of product development. The guidance provided by the Product Buds community gave us knowledge about product management and equipped us with the resources needed to create a product that can make a great social impact. What's next for Helpmates Now that we have validated our idea through market research, we plan to begin the backend work to create this product. We plan to build out the basic functionalities and incorporate algorithms to match users for 1:1 communication. In the future we see ourselves partnering with mental health experts (e.g., psychologists, clinicians, etc.) to provide consultations for individuals directly in the app and personalized tips that can help users improve their mental health. All in all, we want to see Helpmates grow so that it can improve well-being for lives across the globe! Built With figma Try it out docs.google.com
Helpmates
Improving Lives Through Communication
['Hannah Shimoga', 'amrita-suresh', 'Larai Audu', 'Akshay S']
['Runner-Up']
['figma']
6
10,175
https://devpost.com/software/project-ant-q2fip5
We are inspired by the rapid pace of innovation and the evolution of the workplace. Constantly evolving technology has brought new ways to work and learn, such as working from home, collaborative online projects, MOOCs (Massive Open Online Courses), and digital freelancing. Freelancing has for the longest time been just a part of the gig economy that serves to generate a side income, and we feel that this is a wasted opportunity. It can be so much more. Drawing from our personal experiences: in this competitive and highly skill-centric economy it is extremely tough for students to get meaningful work experience that, alongside being a source of income, also develops their professional careers and enhances their skills. Such opportunities are currently rare and hard to get, especially for students without relevant experience. Moreover, since Covid-19, any job or task that could go virtual has now been forced there. The digital revolution has been accelerated by decades and there is no going back to the old way. Remote work is here to stay, as are online conferences and digital hackathons, and Project ANT will be facilitating this digital transformation for individuals and businesses. Therefore, our platform, ANT, will capitalize on the rise of the gig economy and the increase in corporations hiring temporary labor. We will offer students an opportunity to build their portfolios through meaningful real-world digital marketing projects and also give them offers and recommendations to improve their skillsets. An overview of our technical infrastructure is attached in the slides. Built With amazon-web-services angular.js bubble express.js figma hubspot mailchimp mean meanstack mongodb node.js Try it out www.figma.com www.projectant.io www.figma.com
Project ANT
A freelancing platform that takes a long term interest in the growth and development of its users. ANT is a full cycle solution to Earn, Learn, and Grow within the platform itself.
['Areeb Mianoor']
[]
['amazon-web-services', 'angular.js', 'bubble', 'express.js', 'figma', 'hubspot', 'mailchimp', 'mean', 'meanstack', 'mongodb', 'node.js']
7
10,175
https://devpost.com/software/fern-4ovbgi
Inspiration We developed Fern because of our mutual interest in plants and our passion for helping out local businesses. We also have relatives who enjoy gardening, sell a lot of plants from their homes, and have issues with marketing/expanding their business. Two of us have experience ordering flowers online, and both experiences were poor (bad customer service/response rate and no real-time tracking). Challenges Building a two-sided marketplace was, to say the least, very difficult. We had to conduct plenty of user research on both ends (buyer and seller). Getting responses was hard, so we needed to continuously update our Typeforms (changed to Google Forms later) to make it easier for people to fill them out on the go. Two of us had full-time jobs, so we could only meet up on weekends to discuss our project. What it does People can sign up as either buyers or sellers. Our plan is to help sellers gain more publicity, especially those who cannot afford additional marketing or their own website. Buyers are able to create a profile where they can list specific colors/flowers they are looking for in order to curate the marketplace to their interests. Sellers are able to list the general location of their store/home (for privacy reasons their actual location isn't given until the purchase is confirmed). They can also create listings, adjust pricing, and view everything easily compared to selling via social media. How we Built it We drew out the user flow in InVision, then built the prototype in Adobe XD. We demoed our prototype to potential users to gain feedback and edited our features based on it. All of our local businesses were found through Yelp, with the exception of family members. What we Learned and What's Next We all come from different careers: aspiring PM, consultant, and software developer. One big thing we learned was the basics of product development and the product development cycle.
We realized how difficult it was to generate user research and to continuously reach out to people to gain more insight. We plan to further develop this product, so we are currently researching back-end tools! We are also hoping to create something solid enough to pitch to investors :) Then, we can focus on more features. View our pitch deck here: https://docs.google.com/presentation/d/1WlfJK5_nZXeCm6kC9l7HBcJwQkodZ3wrxAQAzitJzl4/edit?usp=sharing Pitch deck demo video: https://drive.google.com/file/d/1HP8uZ5sLp9QlCA9DLfiaRX_czSk0GDWS/view?usp=sharing Built With xd Try it out xd.adobe.com
Fern
Fern is a marketplace catered to local plant/flower businesses and plant enthusiasts.
['Celyna Su']
[]
['xd']
8
10,175
https://devpost.com/software/asian-grocery-finder
Inspiration A lot of our friends, us included, have been cooking a lot at home during quarantine, so we wanted to find a way to help improve the cooking process. We realized that a lot of people want to cook more ethnic cuisines but can't find the ingredients. Also, due to COVID, Asian markets and neighborhoods have been affected (CT/Flushing/etc.), so we want to bring business back to these areas and drive exposure by bridging the gap between younger generations and local markets. What it does The first digital platform providing exposure for small Asian supermarkets. A website that helps users locate stores which have the ingredients they need. Sources Asian ingredients and lowers the barrier to cooking Asian cuisines. Saves time and minimizes the uncertainty of going into a store without knowing whether the product is there. Cultural preservation: there's high consumer demand to cook more ethnic dishes, but small Asian markets lack a digital presence. This site closes the gap between the younger generation and smaller Asian grocers caused by lack of general awareness and language barriers. How we built it Just Adobe XD. Challenges we ran into The majority of the team dropped out halfway, 4 weeks into the project, and we also got stuck on narrowing down our problem statement and figuring out a solution for it. Accomplishments that we're proud of We really didn't think we'd finish, but we're glad we pulled through! What we learned JUST DO IT. Don't think too much. Doing something is better than nothing! Ideate, launch, test, and repeat! What's next for Asian Grocery Finder Actually building this site with a dev would be nice. Partnering with actual markets and getting their inventory. Pitch Deck Here PDF Pitch Deck Built With adobe-xd Try it out drive.google.com
Asian Grocery Finder
Search for your ingredient and we’ll tell you which stores near you have it!
['Sunny Xu', 'Tiffeny Chen']
[]
['adobe-xd']
9
10,175
https://devpost.com/software/soundfull-streams-nogiy9
Onboarding question for HoH personalization "Accessibility Section" in settings "Report inaudible content" action button (like the current "Report explicit content") Popup when a user reports that the song has an inaudible section Alternative solution proposing Spotify's current equalizer tool (which a lot of HoH users didn't know about) Request for more feedback Added "inaudible content" badge for reported songs "Lyrics" search filter Problem: Lack of Inclusive Design Leads to Disrupted Listening Experiences. Inspiration Our team was joined by a common interest in and passion for music. We were inspired to pursue this idea after seeing the Deaf and HoH community portrayed in movies and TV shows, on YouTube, etc. Seeing their struggles motivated us to delve deep into this question of accessibility on music streaming platforms, especially on a platform with a highly personalized community and global reach such as Spotify. What it does SoundFULL Streams leverages community reporting and existing design to optimize data: we recommend flagging (I) songs that have lyrics, for easy filtering, and (II) songs that have parts that aren't audible to the reporting user (much like flagging songs with explicit content). Sample user story: "As a Hard of Hearing User, I want more inclusive in-app customization so that I can enjoy the music curated just for me with little disruption." Core Principle: Leveraging Community and Existing Design to Optimize Data and Increase Accessibility How we built it We're working off of Spotify's pre-existing search/filtering algorithm infrastructure, just adding a few more queries to enhance the Hard of Hearing user experience while gathering even more useful, personalized data for Spotify's machine learning processes. Challenges we ran into Research - Because this was a new world of users and experiences, we got a little lost trying to figure out where to start. Do we go with fully Deaf users? Partially Deaf?
Which threshold should we assume for our persona? Breaking Barriers in a Marginalized Community - When we first conducted customer research, we were wandering into unknown territory (the Deaf and Hard of Hearing communities). People from the Deaf/HoH communities are understandably frustrated: they've repeatedly been treated like animals in a zoo by hearing people who would only reach out to propose a brand new "quick fix" tool for problems they've spent their whole lives working around. As hearing team members, we wanted to be as respectful as possible, to make sure we were proposing a new feature for a product that's ACTUALLY been used by them and captures THEIR needs. Accomplishments that we're proud of We're proud that we could engage with Hard of Hearing users to get pain points in their own words. Also, we found alignment between stakeholders (e.g. the Spotify sound engineer mentioned in the demo) and loyal users who are Hard of Hearing to determine a logical, viable solution. What we learned LEARN from your customer (especially those who come from a marginalized community - listen more so that they can feel understood as well as heard when they walk through the use case and bring up pain points). EMPATHIZE with your customer. ALIGN your user's needs with the company's mission and business goals to target the problem (Hard of Hearing users wouldn't have fallen under some stakeholders' radar, but it's actually going to be a growing problem as the world ages and loses its hearing, especially among the 54.4% majority of Spotify users aged 18-35). (Assistive tech has been more pressing on the company's mind, especially with the hunt for a senior Accessibility PM for Spotify's NYC office over the past two months.) SIMPLIFY the steps a user needs to take to give you the data you need. What's next for SoundFULL Streams a. Medium case study article b. Scrollable prototype for user interviews and feedback c. More iteration d.
Final verdict from users whether or not this is a go e. Focus on the feature most prioritized by target users f. Develop the PRD and develop epics g. Re-prioritize around a roadmap h. Continuously check target KPI goal (user churn) for the target segment once sufficient data is added Thank you very much for your consideration!
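The extra search queries described under "How we built it" can be illustrated with a toy filter over track metadata. This is a minimal sketch, assuming hypothetical fields (`has_lyrics`, `inaudible_reports`) rather than Spotify's real schema:

```python
def filter_tracks(tracks, require_lyrics=False, hide_inaudible=False):
    """Return tracks matching the Hard of Hearing accessibility filters.

    Each track is a dict with hypothetical metadata fields:
    'has_lyrics' and 'inaudible_reports' (community report count).
    """
    results = []
    for t in tracks:
        if require_lyrics and not t.get("has_lyrics", False):
            continue  # user asked for lyric-filterable songs only
        if hide_inaudible and t.get("inaudible_reports", 0) > 0:
            continue  # skip songs the community flagged as partly inaudible
        results.append(t)
    return results

catalog = [
    {"title": "Song A", "has_lyrics": True, "inaudible_reports": 0},
    {"title": "Song B", "has_lyrics": False, "inaudible_reports": 3},
    {"title": "Song C", "has_lyrics": True, "inaudible_reports": 1},
]

hoh_friendly = filter_tracks(catalog, require_lyrics=True, hide_inaudible=True)
```

With both filters on, only "Song A" survives; with neither, the full catalog comes back unchanged, which mirrors how an opt-in accessibility filter should behave.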
SoundFULL Streams
Marrying Spotify's big data with inclusive design to make music more accessible to the Hard of Hearing
['Karen Kim', 'irene chang', 'Ivy Lee', 'Sabreena Yang', 'Xavier Gonzalez']
[]
[]
10
10,175
https://devpost.com/software/empire-m3x5h1
Expand your empire by completing focus missions and create buildings Co-pilots help users stay focused by texting the user to go back to studying if they stop the mission A variety of buildings, missions, and time focus sessions to choose from! Unlock each planet with more missions. Each planet comes with new terrains and new buildings Inspiration 🌌 Being college students, we know firsthand how difficult it can be to stay motivated and productive. We wanted to create a fun solution to help students work towards becoming their best selves with the help of those closest to them while making 'staying focused' exciting! Empire was created to solve the lack of motivation to study by adding accountability partners, which is scientifically proven to increase performance. Having friends, or anyone who cares about your performance, encourage you to study when you lose focus really works, and Empire makes that communication seamlessly automatic. What it does 👩‍🚀 Empire aims to make the task of staying focused easier for students with the help of gamification and accountability partners. Students enrol as cadets and enlist their accountability partners as co-pilots. Co-pilots join our cadets on different focus missions to keep them on track. The more missions you complete, the bigger your galactic empire becomes! How I built it 🚀 We conducted user interviews and surveys and carried out secondary research to learn more about our target users, their pain points, motivations and goals. We went through numerous design sprints to produce our MVP with the help of our mentor. We went through multiple iterations in Figma, starting from sketches, to lo-fi designs, and now our prototype. Challenges I ran into ☄️ After conducting our initial user research survey we realized that some of our assumptions were not correct. Also, coordinating work was a challenge at times as some of us were operating in different time zones, but we were able to work through that.
Accomplishments that I'm proud of 👽 We’re proud of our MVP prototype and the cool story and adventure it brings the user on. We received really positive feedback on the prototype from our mentor and users which was very encouraging. We are also proud of how we were able to analyse the data from our initial survey and quickly pivot towards a solution that addresses a real user need. What I learned 👾 Coming from Tech and Marketing backgrounds this project was definitely a crash course on UI/UX Design and designing with the user in mind. We also got more familiar with Figma which was a huge bonus. As well as improving our design skills we got to practice showing user empathy, carrying out customer interviews, conducting research and designing a system at a high-level. This project has been a great way to learn about the product development lifecycle. What's next for Empire ☀️ We hope to run some more user usability tests on our current prototype and update it based on more user feedback Built With figma Try it out www.figma.com drive.google.com
Empire
Expand your space empire with accountability co-pilots and space focus missions 🚀
['Megan Lau', 'Sharon Olorunniwo', 'Phyllis Njoroge']
[]
['figma']
11
10,175
https://devpost.com/software/doctobot-disease-diagonsis-using-ai
Inspiration We built this product based on several pain points that people are facing during this COVID-19 period, mainly doctor consultation. What it does A chatbot feature for disease diagnosis that can handle 1000 user requests at a time; it also provides online doctor consultation, ticket booking, a query translator, and a feature for storing patient medical history online. What I learned I came to understand the roadmap of a product from scratch to the prototype stage, and also gained some PM skills. Try it out www.canva.com
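As a toy illustration of the kind of keyword triage a diagnosis chatbot might start from (the symptom map below is invented purely for illustration and is not the project's actual logic, nor medical guidance):

```python
# Hypothetical symptom-to-condition map, for illustration only.
SYMPTOM_MAP = {
    "fever": ["flu", "covid-19"],
    "cough": ["flu", "covid-19", "bronchitis"],
    "rash": ["allergy"],
}

def triage(message):
    """Return candidate conditions ranked by how many symptoms mention them."""
    counts = {}
    for symptom, conditions in SYMPTOM_MAP.items():
        if symptom in message.lower():
            for c in conditions:
                counts[c] = counts.get(c, 0) + 1
    # Sort by descending match count, then alphabetically for stable output.
    return sorted(counts, key=lambda c: (-counts[c], c))

suggestions = triage("I have a fever and a bad cough")
```

A production bot would replace the keyword match with NLP intent detection and would hand anything uncertain to the online-doctor-consultation flow rather than guessing.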
Doctobot
Disease diagnosis using bot also includes other features to ease the hospital procedures.
['AMAL JOSE']
[]
[]
12
10,175
https://devpost.com/software/the-mentor-buddy
Currently, there is a lack of access to available and affordable resources to students applying for a Masters degree We, at "The Mentor Buddy", hope to stick around with you in this journey, helping you streamline your preparation and put your best foot forward in the application essays. All this, ensuring affordability, accessibility and customizability. Try it out docs.google.com
The Mentor Buddy
Affordable, Accessible & Customisable mentorship for your Master’s application
['Shubham Roy']
[]
[]
13
10,175
https://devpost.com/software/the-break-room
Homepage See who's online and join the chat room! Play games with your coworkers Port over your existing groups from Microsoft Teams Inspiration With the recent shift to remote work, we've realised that a lot of us face the same challenges working from home. One common thing was the feeling of isolation, where a lot of us missed the casual social interactions in the workplace. What it does The Break Room provides a virtual lunch and coffee room experience for anyone to connect with their colleagues easily. It is integrated into Microsoft Teams, which allows people to easily see who's online and quickly jump in for a quick chat or catch-up! How we built it We went through several sprints in the product development cycle, from user research to product brainstorming to user testing. Challenges we ran into Meeting with our end users was more challenging due to the fact that we couldn't meet face to face. Accomplishments that we're proud of We're proud of the clean user interface of the prototype we designed, and the feedback received from testing with our end users. What we learned Each coming from different backgrounds, we've all learnt a lot more about the product development cycle (user research and interviews, gaining user empathy) and developed a host of product management skills. What's next for The Break Room We hope to further develop the app and put it on the Microsoft Teams app store! Built With figma Try it out docs.google.com www.figma.com
The Break Room
A Virtual Break Room Experience for Casual Social Interactions
['Gautam Venugopal', 'Michael Setyawan', 'Brandon Hunt', 'Ashly Lau', "Nicole O'Keefe"]
[]
['figma']
14
10,175
https://devpost.com/software/digital-friend-y51xr9
Homepage Website Page 2 Website page 3 Inspiration About a year ago, I had a full-time job, I was a part-time student, and I belonged to a fun start-up. My sleeping schedule was a mess but my productivity was off the charts. I was making time for all my activities. 🗿 I wasn't consciously deciding to be productive, I just had to be. I can't remember a more productive time in my life. I tried to recreate that experience on my own but I wasn't completely successful. What I noticed was missing was that I wasn't surrounded by awesome people, crystal clear goals, and tons of deadlines. What it does Digital Friend helps creative students find an accountability partner to accomplish a short-term goal. How I built it Figma, Webflow, Typeform, Google Spreadsheet, Airtable. Challenges I ran into Time management. Lack of structure. Accomplishments that I'm proud of I like how I designed the "prototype" and the other good-looking parts of the project. What I learned I need to work for an expert first to have a better experience leading a team. I learned that I wasn't ready for it and that took a toll on me. What's next for Digital Friend More tests in different universities and figuring out a way to automate the sign-up and matching process. Built With airtable google-spreadsheets typeform webflow Try it out docs.google.com
Digital Friend
Platform that helps students achieve short-term goals.
['Faiza Younis', 'Dirghayu Kaushik', 'Misha Espinoza', 'Ilda Pogaci']
[]
['airtable', 'google-spreadsheets', 'typeform', 'webflow']
15
10,180
https://devpost.com/software/offline-movement
App Logo GIF App Demo App Ad Poster Offline Movement Offline Movement is a phone application that allows users to connect to others without using cellular data or the Internet. This offline messaging feature is secured by a peer-to-peer mesh network (Bluetooth) that allows for direct communication between smartphones. To enforce privacy the app includes encrypted messages and tools to blur individuals' faces in photos and videos. In order to ensure safety the app has a voice-activated video recording system installed. Offline Movement is a savior during disaster situations and mass gatherings. Vision & Inspiration As protests against police brutality have swept across the nation following George Floyd's death, protestors are looking for apps to ensure their safety. In order to guarantee the safety of friends and family going out to protest, we looked into existing apps and came upon the Firechat offline messaging app. Firechat uses a peer-to-peer mesh network that allows users in close proximity (400-500 ft) to message each other without cellular data or the Internet. It inspired us to develop an app for use in disaster situations and mass gatherings. Users may contact nearby individuals for help without cellular data or the Internet. Additionally, they may use voice-activated features (code words) to enable voice-activated video or voice recording if they're enduring abuse or violence and cannot physically turn on their phone's camera. Finally, to guarantee that their cellular device is not used against them, a voice-activated shut-down feature will also be implemented.
How we built it Design UI/UX Code UI/UX in Android Studio Secure peer-to-peer mesh network (offline messaging) with the Bridgefy SDK in Android Studio Add Android Studio voice capabilities to allow for voice-activated commands Utilize the OpenCV SDK for facial recognition tech to blur individuals’ faces in photos and video messages in Android Studio Challenges We Ran Into Bridgefy is a developer-friendly SDK that can be integrated into Android and iOS apps (including messaging apps) to make them work without the Internet. However, when this SDK was implemented into our app we noticed that many of the methods the Bridgefy SDK used have been deprecated. This may be due to the fact that the last update to the repository was 12 months ago. We attempted to replace those methods with newer versions, but some methods no longer had replacements, so we were unable to implement the SDK like we wanted to. Additionally, many of us are better equipped in Python and web development and know less about app development. We were unable to go into iOS app development and test the iOS Bridgefy SDK because not all members of the team owned a Mac. Our team faced difficulties in communicating with each other due to time zone differences. With the time zone difference, some of us had to compromise by sacrificing our sleeping schedules. In addition, sometimes the Internet connection sucked. However, we did not want time zone differences, sleep deprivation and a bad Internet connection to hold us back from participating in this hackathon. Accomplishments We are Proud of Utilizing the knowledge we gained from the workshops to create our application Being able to work together and produce something despite our major time zone differences and limited time. Being able to bring all of our unique educational backgrounds to produce a product. Learning that this is a novel idea that others have not created before.
What we learned We learned how to use Android Studio for the first time, and how to work together with different skill sets. In addition, we learned that many tools like the Bridgefy SDK and OpenCV SDK exist that can be used to make our app a reality. Built With Bridgefy SDK - SDK used to set up offline messaging Android Studio - Developing App Figma - Designing UI/UX OpenCV SDK - Facial Recognition for Face-Blur Java - Coding Language Getting Started/Set Up Guide In GitHub, click the "Clone or download" button of the project you want to import, then download the ZIP file and unzip it. In Android Studio, go to File -> New Project -> Import Project, select the newly unzipped folder, and press OK. In Android Studio, create an Android Virtual Device (AVD) that the emulator can use to install and run your app. In the toolbar, select the AVD that you want to run your app on from the target device drop-down menu, then click Run. OR use an Android device: on the device, open the Settings app, select Developer options, and then enable USB debugging. Authors Mualla Argin - App Development: Back: Full Stack Development | College sophomore in Computer Science - margin25 Victoria Nguyen - UI/UX Design | 42 Silicon Valley Biology/Computer Science - VictoriaNguyenMD Agnes Sharan - App Development: Back End Development | Student in CS - agnes-sharan Layan Ibrahim - Video and PPT | Junior at Emory majoring in Neuroscience and Behavioral Biology - layibr What's Next for Offline Movement In the future, we are hoping to fully implement the voice recognition tool so that it only recognizes registered users' voices and others cannot misuse the app. Additionally, we are looking to make the program give users advice according to the contents of their audio or voice recordings. We also hope to better implement our idea with a full understanding of the Bridgefy SDK and OpenCV SDK. Built With android-studio bridgefy-sdk figma java opencv-sdk Try it out github.com
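The voice-activated code-word feature can be sketched independently of any SDK as a function mapping a speech transcript to an app action. The code words and action names below are hypothetical (in the app they would be user-registered):

```python
# Hypothetical code words; in the app these would be user-configurable
# and gated behind voice recognition of the registered user.
CODE_WORDS = {
    "sunflower": "start_video_recording",
    "midnight": "start_audio_recording",
    "blackout": "shut_down_device",
}

def detect_action(transcript):
    """Return the first action whose code word appears in the transcript."""
    words = transcript.lower().split()
    for code, action in CODE_WORDS.items():
        if code in words:
            return action
    return None  # no code word heard: do nothing

action = detect_action("Please help sunflower over here")
```

Matching on whole words (rather than substrings) keeps ordinary speech from accidentally triggering recordings, which matters for a safety tool of this kind.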
Offline Movement
Stay Safe And Let Your Voice Be Heard!
['Layan Ibrahim', 'Mualla Argin', 'Agnes Sharan', 'Victoria Nguyen', 'Victoria Nguyen']
['Grand Prize']
['android-studio', 'bridgefy-sdk', 'figma', 'java', 'opencv-sdk']
0
10,180
https://devpost.com/software/alz-vision-639nrj
Home Page. Upload Page to upload and describe a memory. Redescribe page to redescribe an uploaded memory. Detailed statistics page. Analytics page displaying graphs. Alz.vision Inspiration Nearly 50 million people worldwide fall victim to memory impairments such as dementia. We personally have met people who struggle with Alzheimer's and have forgotten critical information and cherished memories, even going as far as forgetting a loved one. And according to a report from Alzheimer's Disease International, nearly 75% of Alzheimer's patients worldwide go undiagnosed. Currently, when doctors diagnose dementia, they lack concrete data on a patient’s decline of memory, relying on a combination of brain scans, memory tests, and interviews with family members. What it does We decided to create Alz.vision, a web application that uses machine learning to help potential Alzheimer's and dementia patients. The app has 3 core components. First, it prompts users to upload memories each day with a single sentence describing the memory. Second, as time passes, the app prompts users to describe memories they have uploaded in the past again. Each of these descriptions is analyzed by a machine learning algorithm and assigned a score. Third, using state-of-the-art machine learning algorithms, such as Random Forest regression, Support Vector regression, and clustering to name a few, the app analyzes the scores and displays compelling graphs and statistics in real time for users and their doctors to view. By using machine learning to analyze many memories over time, Alz.vision is a data analysis tool that analyzes a user’s memory for signs of memory loss and provides doctors with key data to make a more accurate and informed diagnosis. The user uploads photos and videos to the application, along with a description of the event portrayed. As the user continues to upload photos and videos, they will be prompted to recall the memories by writing another description.
Our similarity algorithm will analyze the two descriptions to determine their similarity. Using this data and algorithms such as Random Forest regressions, Support Vector regressions, linear regressions, and natural language processing, the app provides graphs and visual aids to show a user's memory decline. In addition, it will search for any potential outliers in their memory loss and common keywords associated with those outliers. Overall, by analyzing and performing data analytics on the user’s memory over time, Alz.vision is able to use state-of-the-art machine learning algorithms to detect powerful trends as well as create compelling and easy-to-understand graphs, helping users take a more active role in their health and helping doctors make a better and more informed Alzheimer's and dementia diagnosis. With more testing of our algorithm, we plan to expand our application to warn users if their decline in memory suggests Alzheimer's or dementia and recommend that the user visit a doctor. Accomplishments that we're proud of We got the entire web application working together! We were really proud of how our application is currently functional and accurately creating graphs in response to user descriptions in real time. We can successfully upload memories and display them for redescribing. Also, our machine learning analysis is integrated into our app, so we're really happy about how everything is coming together. We should be ready to pitch our app and put it out to production. How we built it We used Flask, HTML, CSS, JS, and Bootstrap for the frontend and backend of this project. We used Flask-MongoDB and MongoDB Atlas as our database to store user information, images, descriptions, and scores. For the frontend, we used HTML/CSS/JS with Bootstrap.
We implemented a text similarity algorithm based on Levenshtein distance, along with linear regressions, Random Forest regressions, Support Vector regressions, and natural language processing to analyze the descriptions and create compelling and meaningful graphs for both patients and doctors. Challenges we ran into and What we learned It was particularly difficult for us to upload images through MongoDB; we learned a lot about handling images with MongoDB and discovered that there was a library which handled MongoDB with Flask (we were originally using only the original MongoDB library). It was also difficult for us to figure out how to return images from our machine learning algorithms to display on the website. We finally learned how to convert the graphs to base64 images, which we could then display on the website using HTML. For both the outlier detection and the sentence similarity scores, we tested a few models before reaching our final decision on the model which worked best. It was our first time using MongoDB Atlas and MongoDB, so it took some time to learn the API. It was also a little difficult to work together online, but we used Discord as our platform and made sure to periodically check in on each other. From a non-technical standpoint, our team also learned a lot about Alzheimer's. We spent two weeks researching the disease to learn more about how it is diagnosed and listening to real user stories. Business Model Market According to the WHO, there are approximately 10 million new cases of dementia per year worldwide; furthermore, according to Alzheimer's Disease International, 75% of people with dementia have not received a diagnosis. This makes our total market size over 40 million people. Revenue Model In terms of our revenue model, we will make revenue in 2 key ways: advertisements, and a premium subscription offering advanced data analytic features and providing greater insight into a user’s change in memory.
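The Levenshtein-based similarity scoring can be sketched in pure Python. Normalizing the edit distance by the longer string's length yields a 0-to-1 score; the project's exact normalization isn't stated, so this is one plausible choice:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between strings a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[len(b)]

def similarity(a, b):
    """Normalized similarity: 1.0 for identical strings, 0.0 for disjoint."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

# Comparing an original memory description with a later redescription.
score = similarity("we went to the beach", "we went to a beach")
```

Tracked over weeks, a downward drift in these scores for the same memory is the kind of signal the app's regression models then look for.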
Competitive Advantage Finally, the competitive advantage. Current methods of diagnosing Alzheimers and Dementia include Interviewing family members, and conducting memory tests and brain scans, but there is no concrete data. Alz.vision on the other hand, analyzes images and memories over time to measure the change in memory, Uses ML and neural network to detect trends and patterns, and most important, provides concrete data. Next Steps We hope to share our app with local doctors to get their feedback on our app. We will adjust accordingly and then continue the design process to create a finished product. We especially want their feedback on how to display the data which will be most convenient to them. Then, we will pitch this product to local hospitals and clinics and hopefully collaborate with them to make this product a tool for patients to collect data to help doctors to diagnose dementia and Alzheimer's better. Overall we hope to see if we can make it a startup and push it out for our community and the world to use. Built With bootstrap css3 flask html5 javascript machine-learning mongodb natural-language-processing ntlk numpy python sklearn
Alz.vision
A web application that analyzes the memory of users to determine signs of memory loss and provide key data for their doctors to make a more accurate and informed Alzheimers diagnosis.
['Veer Gadodia', 'Shreya C']
['Finalist Prize']
['bootstrap', 'css3', 'flask', 'html5', 'javascript', 'machine-learning', 'mongodb', 'natural-language-processing', 'ntlk', 'numpy', 'python', 'sklearn']
1
10,180
https://devpost.com/software/sage-abu6yc
Sage Inspiration An hour of planning can save you ten hours of doing Planning is a very essential part of our lives. We spend a lot of time planning, and rightly so, because nothing gives more contentment than efficient planning gone right. What can really help us achieve this is data-driven planning. Having had the opportunity to closely work with the management of a business, I came across innumerable situations when we wanted to plan ahead. One of the ways of doing this was to use our historical data to project the future trend of various aspects like monthly sales, inventory stock-outs, customer traffic, manpower availability etc. With the advent of the digital world, many small and large businesses have the data but do not have the technical know-how to use this data for such forecasting. There is no universal solution which can be used to provide predictions on various aspects in real time and, at the same time, be cost effective. What it does Sage is, as I call it, an absolutely simple web app which can be used to forecast any univariate time series with a few clicks and within seconds! The app takes in historical data of the variable to be predicted and provides the option to either let the app tune the forecasting model by itself or use your domain knowledge and do it on your own. How I built it The forecasting model used in Sage is built using Facebook's Prophet. Prophet is a forecasting procedure implemented in Python. The first phase of the prototype was to try Prophet with a few datasets. The next step was to interface it with a UI. The UI of the web app is built on Dash - a Python web application framework written on top of Flask, Plotly.js, and React.js. Challenges I ran into The biggest challenge was to make the app easy to use. The forecasting model should be flexible and at the same time accurate. What really came in handy here was Prophet: it provided accurate, fast and tunable forecasts. Another challenge I faced was getting the UI up to the mark.
Even though Dash has detailed documentation, there were a few complications involved with the callback architecture of Dash. Accomplishments that I'm proud of This was the first time I was developing a project for a hackathon, and all on my own at that. I am really happy I could do this. The forecasting done is of good accuracy, and at the same time I was able to capture my objective of making the solution easy to use for even a non-technical user. This was very fulfilling. What I learned While developing this web app, I got an opportunity to sharpen my Python skills. It was also the first time I was working with an ML-based library - fbprophet. I was able to learn other cool tech like Dash and Plotly. What's next for Sage I am very excited to have developed Sage up to this level and am looking forward to developing additional features to make it an even more exciting product. Next in my plan is: Integrate Sage with online univariate time series datasets to be able to use Sage out of the box. This can be useful for performing forecasting on publicly available data. Provide the capability to compare multiple forecasts, e.g. compare sales forecasts of item A and item B. Allow user-specific profiles to save parameter preferences. Better model diagnostic metrics to provide greater visibility into improving the model. Built With amazon-ec2 dash fbprophet plotly python Try it out github.com 13.234.77.145
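Prophet handles trend and seasonality automatically, but the core idea behind forecasting a univariate series (fit a trend to the history, then extrapolate it) can be sketched in pure Python. This is a drastic simplification of what Prophet actually does, shown only to make the mechanism concrete:

```python
def fit_trend(series):
    """Least-squares straight line through (0, y0), (1, y1), ...

    Returns (slope, intercept). Prophet fits a far richer model
    (piecewise trend plus seasonality); this is only the core idea.
    """
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def forecast(series, horizon):
    """Extrapolate the fitted trend `horizon` steps past the history."""
    slope, intercept = fit_trend(series)
    n = len(series)
    return [slope * (n + h) + intercept for h in range(horizon)]

sales = [10, 12, 14, 16, 18]   # toy monthly-sales history
future = forecast(sales, 3)    # project the next 3 periods
```

On this perfectly linear toy history the projection continues 20, 22, 24; real business data is noisy and seasonal, which is exactly the part Prophet's richer model takes care of.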
Sage
A simple (absolutely simple) web-app for forecasting any univariate time series.
['Milind Shah']
['Beginner Award']
['amazon-ec2', 'dash', 'fbprophet', 'plotly', 'python']
2
10,180
https://devpost.com/software/detectit-s6r14c
Examples of Computer Vision Tasks Progressive Web App Shark Classification Example Inspiration Being curious about how face recognition tools work, I wanted to learn about their implementation and dig deeper into the machine learning field, so I decided to pursue object classification for this project, adding a touch of offline capability to it. Being able to use an app like this offline allows people with no internet access to still classify objects; it could also let doctors in areas with poor connectivity identify and classify types of parasites, bacteria, or other organisms, which can help save people's lives. What it does The web app lets users classify images either taken live through their device's camera or uploaded locally. The app uses a trained model converted to the TensorFlow.js format to provide predictions for each image, with percentages shown for its confidence in each inference. The service worker also lets users use the app offline. How I built it I built it using React and TensorFlow for the web application and the model, with HTML and CSS for the front end. I used IBM Watson's ML resource platform for the model. Most importantly, I made use of service workers, making it a progressive web app that can provide offline functionality, as shown in the demo. Challenges I ran into Challenges included integrating the model into the React app and getting used to IBM Watson ML's platform, being new to it. Figuring out how to allow the camera device to be used was another challenge I ran into. Accomplishments that I'm proud of I am most proud of tackling the ML field that I have recently been curious about, focusing on computer vision, and also using new technologies such as the Watson ML platform and PWAs. What I learned I learned how to make a machine learning app that classifies images, and integrate that within a React app.
I also learned how to make a progressive web app using service workers. What's next for Visionary Next, I will take this progressive web app one step further and explore the facial recognition field. Also, I want to make Visionary into a mobile app for even easier access for users. Built With css html indexeddb javascript python tensorflow Try it out github.com
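Confidence percentages like the ones the app displays are typically obtained by running a softmax over the classifier's raw output scores. A minimal sketch, with placeholder class names and logits (not Visionary's actual model outputs):

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_predictions(class_names, logits, k=3):
    """Pair class names with confidence percentages, best first."""
    probs = softmax(logits)
    ranked = sorted(zip(class_names, probs), key=lambda p: -p[1])
    return [(name, round(100 * p, 1)) for name, p in ranked[:k]]

labels = ["shark", "dolphin", "whale"]     # placeholder classes
preds = top_predictions(labels, [2.0, 0.5, 0.1])
```

In the browser, TensorFlow.js performs the same step on-device, which is what lets the percentages keep working with no network connection at all.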
Visionary
A progressive web app for offline image recognition and classification
['Alex W']
['Popular Choice']
['css', 'html', 'indexeddb', 'javascript', 'python', 'tensorflow']
3
10,180
https://devpost.com/software/ptwpus-protecting-those-who-protect-us-1oalqz
, Try it out devpost.com
.
.
['Mohamed Hany']
[]
[]
4
10,180
https://devpost.com/software/neural-artistic-style-transfer
Inspiration After reading a blog post about automated music generation through machine learning, I really wanted to apply my knowledge of machine learning to develop an automated means of generating some form of artwork, like poetry or drawing. While I was searching online for direction on how to do so, I found out that some people had used neural networks to transfer artistic style, which appeared to be an extremely fascinating project to work on. What it does The web application takes in an image from the user and a stylistic choice as input. The user image is then saved to a local folder, resized, and reshaped. Finally, through a deep neural network, the stylistic choice of the user (e.g. Van Gogh's Starry Night) is transferred onto the original user image. You can test your own images via https://styletransfer1.pythonanywhere.com/ !! How I built it I obtained the deep neural network used for the artistic style transfer through tensorflow_hub, a library containing open-source deep learning models. I then used numpy and tensorflow to resize the user image and the style image that would be input into the deep learning model. The user image, style image, and the transferred style image would then be saved locally into another folder. I used the Flask web framework in Python to design the web application, which was built using HTML and styled using CSS, Bootstrap and JavaScript. Challenges I ran into I had extreme difficulty integrating the Flask web application with the deep learning model’s predictions. It was hard to find a way to convert form data from the user into a format that the machine learning model could predict on. Another issue was that, for some unknown reason, after the machine learning model predicted on the first user image and style choice, it seemed to have cached the user image, the stylistic choice, and the transferred/merged image. So after the first input, the output was always the same.
I solved this issue by saving the final model prediction into a single file and appending a random integer to the end of the filename. Accomplishments that I'm proud of I am very proud that I was able to successfully integrate the Flask web application with the deep learning model. Before, I had only worked on small machine learning-related projects. What I learned I learned that in Flask, the file paths are very different from other web design frameworks. What's next for Neural Artistic Style Transfer In this project, I utilized the neural style transfer model provided by tensorflow_hub. Next, I plan on training my own machine learning model for neural style transfer. In addition, I plan on using React.js to provide a more user-friendly interface. Built With bootstrap css3 flask html5 javascript python Try it out github.com styletransfer1.pythonanywhere.com
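The random-integer cache-busting fix described above can be sketched as a small helper; the directory and filename stem here are illustrative, not the project's actual paths:

```python
import os
import random

def unique_output_path(directory, stem, ext="png"):
    """Build a result path like 'static/results/styled_483921.png'.

    A fresh random suffix per prediction stops the browser (and any
    caching layer) from serving a stale copy of the previous styled image.
    """
    suffix = random.randint(0, 999_999)
    return os.path.join(directory, f"{stem}_{suffix}.{ext}")

path = unique_output_path("static/results", "styled")
```

The Flask view would save the stylized output to this path and render the template with it, so each request's image URL is new and never collides with a cached one.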
Neural Artistic Style Transfer
Web application that utilizes a deep neural network to provide a real-time means of styling user images with various famous artwork styles
['Bill Sun']
[]
['bootstrap', 'css3', 'flask', 'html5', 'javascript', 'python']
5
10,180
https://devpost.com/software/nighttime-cards
Quick View of Nighttime Cards Inspiration I have trouble falling asleep at night after a long day of learning to code, but I find it really easy to fall asleep in the morning when I'm taking my scheduled breaks watching ASMR videos, animal videos and reading animal facts. What it does There's so much more I can add to the current project. It kind of looks incomplete at the moment. I can add more cards and videos and make it more interactive. Create different sections. Maybe branch out to include not just animal facts. It's really easy to use: the app will display animal cards that users can view, interact with, and read when they're trying to fall asleep. They're all wholesome facts. Really interesting. How I built it I built it using HTML5 and CSS3. Challenges I ran into I'm new to coding. I ran into some trouble trying to make the page responsive. I wanted to add a search bar and more than 3 cards, but I miscalculated the time I needed to create the project. Accomplishments that I'm proud of I'm proud of using my existing knowledge to build the project. I also learned some more CSS and HTML tricks along the way. What I learned I learned to make cards using HTML and CSS. What's next for Nighttime Cards I will add other cards and cards containing animal videos. I will also make sure it can be viewed in the dark without blinding anyone's eyes and consequently decreasing melatonin, which would make it more difficult to sleep. I will also include buttons that users can select to calibrate the desired results. Built With css3 html5
Nighttime Cards
If you have trouble falling asleep at night, this may be of help to you
['Joyce G. Lee']
[]
['css3', 'html5']
6
10,180
https://devpost.com/software/globaldeveloperchallenge
Inspiration The business-to-consumer side of product commerce (e-commerce) is the most visible business use of the World Wide Web. The primary goal of an e-commerce app is to sell goods online. This application was inspired by the recent disease outbreak: it aims to solve the problem of people having to go into local stores to buy their daily needs. Local store owners can advertise their produce on the application, and users get to buy things from their own local stores and have them delivered fast and safely. What it does This project deals with developing an e-commerce app for local businesses. It provides users with a catalog of the different local produce available close to them for purchase. To facilitate online purchases, a shopping cart is provided to the user. The system is implemented using a 2-tier approach, with a backend database and an application interface. How I built it I started with the UI/UX design, making a prototype first before going ahead to build an MVP. Challenges I ran into One of the challenges was getting user feedback on what they felt the app should look like. The local buyers wanted something that imitates their local shopping system, so it was a challenge to make it fit in. Accomplishments that I'm proud of We are very proud of what we have achieved so far; it took a collective effort to get to this point. We have an MVP that users are able to interact with. What I learned We learnt a lot of new technologies, and we learnt how to solve problems with the locals in mind. What's next The next thing is to keep building until we have a finished product, launched for local stores to use as a means of goods distribution. Built With kotlin Try it out www.figma.com github.com
Swachev
Development of an e-commerce app for local markets
['David Sunday', 'Amit Dandawate']
[]
['kotlin']
7
10,180
https://devpost.com/software/maskit-zv8ji2
Inspiration We are inspired by the struggle many of us face in keeping safe during the current public health crisis. That is why we have designed a system to keep track of mask usage and to check that people wear masks whenever they enter public spaces. What it does The system detects in realtime whether a person is wearing a mask and then responds with a mechanical output, i.e. a servo motor movement. How we built it Our system uses a Raspberry Pi camera to capture live images, then sends the captured image to a custom HTTP Linux server. On the server, we use a TensorFlow model (from AIZOO) to examine the image from the Raspberry Pi. If the image contains a person with a mask, the object detection model returns true and sends a request for the Raspberry Pi to move a servo ("i.e. open the door"). Challenges we ran into We faced many challenges during the completion of the project. We tried to do all the image processing on the Raspberry Pi; however, the Pi unfortunately did not have the necessary speed or space to handle the complicated image processing in OpenCV and TensorFlow. Therefore, we had to design an HTTP server using Python to send the image to a Linux computer where the image is processed. Finding the right model for mask detection was also difficult. We tried several different frameworks before finding an efficient TensorFlow mask detection model from AIZOO. Remote work also proved difficult, especially for a hardware project where only one of us could see the immediate results. Accomplishments that we're proud of We are proud of integrating the hardware and ML software. There were many difficult aspects of processing and sending the data between the server and the Raspberry Pi.
What's next for MaskIt We would like to extend our technology to new situations, e.g. large groups, and to greater mechanical output, e.g. doors opening. Hopefully, our technology could be beneficial in public spaces to check that visitors are wearing masks. Built With python raspberry-pi tensorflow Try it out github.com
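The server-side decision flow described above can be sketched like this. `detect_mask` is a stand-in stub for the TensorFlow model, and the function and field names are illustrative assumptions, not the project's actual API:

```python
# Minimal sketch of the MaskIt request flow: the Pi sends an image,
# the server runs detection, and the response tells the Pi whether
# to move the servo ("open the door").

def detect_mask(image_bytes: bytes) -> bool:
    """Stub for the mask detector: pretend any payload starting with
    b'MASK' contains a masked face. The real project runs a
    TensorFlow object detection model here."""
    return image_bytes.startswith(b"MASK")

def handle_frame(image_bytes: bytes) -> dict:
    """What the server would return to the Raspberry Pi."""
    masked = detect_mask(image_bytes)
    return {"mask_detected": masked, "open_door": masked}

print(handle_frame(b"MASK..."))  # {'mask_detected': True, 'open_door': True}
print(handle_frame(b"NOMASK"))   # {'mask_detected': False, 'open_door': False}
```

Splitting the work this way is what lets the underpowered Pi act only as camera and actuator while the Linux server carries the model.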
MaskIt
Mask detection Raspberry pi with machine learning technology to better manage public and private spaces
['Allen Mao', 'Hans Gundlach']
[]
['python', 'raspberry-pi', 'tensorflow']
8
10,180
https://devpost.com/software/exercise-together
Live Video Streaming Video Room Youtube enabled Live Data Syncing Search Bar Authentication DynamoDB Home Inspiration We know that physical activity and social interaction have immense benefits*. During lockdown, many people aren't able to go to the gym or see any of their friends in person. I wanted to create an app to help people get their endorphins up and see their gym buddies across the world. * https://www.cdc.gov/physicalactivity/basics/pa-health/index.htm , https://www.mercycare.org/bhs/services-programs/eap/resources/health-benefits-of-social-interaction/ What it does Exercise Together is a web app that allows 3 people to share video while watching the same Youtube exercise class and log their exercise activity. It works like this: A user visits the website and either creates an account or logs in. Amazon Cognito is used for authentication. Once authenticated, the user is directed to a dashboard depicting the amount of time spent exercising with Exercise Together. The user clicks join room and enters a room name. Up to 3 of their friends enter the same name to join the same room. The users enter a video chat room and can search together for a Youtube exercise video using the search bar. Once everything is ready, they click start exercise to begin! When the video ends, the user returns to the dashboard and their time spent exercising is logged. Exercise Together is helpful when you want to exercise with your friends; it simulates an exercise class you could do at the gym, like yoga or pilates. This way people can work out with their friends all over the world! How I built it I used React and Redux to build the front end of the project. For the backend, I used serverless functionality like Cognito, AWS Lambda, S3, DynamoDB, and AppSync. Cognito verifies the user so that I can log exercise data for every user separately. All data is stored in DynamoDB.
When people enter a room, Agora.io livestreams everyone's video to each other, so they can see each other's faces, while React is used to display everyone's video. Every change you make to the search bar, and every click on a Youtube video, is logged to DynamoDB and synced to all the other clients in the same room through AppSync. As a result, everyone in the room sees the same view at the same time. When you finish the workout, the data is sent to DynamoDB with the email you logged in with as the key for the data. On the dashboard, a get request is made back to DynamoDB, so that you can see your exercise data for the whole week. Challenges I ran into I used a wide variety of services that I wasn't experienced with previously, like Agora.io, AWS Amplify, and AWS AppSync. Learning them was difficult and I went through a lot of troubleshooting with those services in the code. Moreover, syncing all these services together into one application was a large challenge, and I kept trying different pieces of code one at a time to try to get them to work together. Accomplishments that I'm proud of I was finally able to learn how to use web sockets (AWS AppSync uses web sockets), which I'm really excited to use for my future projects! Web sockets are especially crucial for online games, which I want to make. What I learned I learned how to use a multitude of services and link them together. For example, I learned web sockets, Agora.io, AWS Amplify, and AWS AppSync. All these services will be immensely useful for my future projects, so I believe that I really benefited from creating this project. What's next for Exercise Together Some extensions I'd like to make include: Adding Fitbit and Apple Health functionality so that users of those services can see their data logged onto the website. Adding a sidebar that people could use to see which of their friends are currently online and join a room with them.
In order to implement that, I would have to use AWS Neptune, which uses the same graph technology that Facebook uses for Facebook Friends. Creating a phone app using React Native; I feel that more people would rather use a phone app than the website. There are still many bugs, especially with the video streaming, since I'm using a third-party API and a free account for it. For example: The video streaming only works in Chrome. Entering the video room with more than one person is a buggy process; the way I get it to work is by duplicating the tab for each user entering and closing the previous tab. The Cognito verification link redirects to localhost, but will still confirm the account. Built With agora.io amplify appsync cognito cookie dynamodb graphql javascript lambda materialize-css node.js react redux s3 serverless ses websocket Try it out exercisetogether.rampotham.com github.com www.youtube.com
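The exercise-logging idea (minutes keyed by login email, summed for the weekly dashboard) can be sketched with an in-memory stand-in for the DynamoDB table. The table layout and names are assumptions for illustration, not the app's actual schema:

```python
from collections import defaultdict
from datetime import date

# In-memory stand-in for the DynamoDB table: workout minutes keyed
# by the email the user logged in with.
workout_log = defaultdict(list)

def log_workout(email: str, minutes: int, day: date) -> None:
    """Called when a Youtube exercise video finishes."""
    workout_log[email].append((day, minutes))

def weekly_total(email: str, week_start: date) -> int:
    """Total minutes in the 7 days starting at week_start, as the
    dashboard would display."""
    return sum(m for d, m in workout_log[email]
               if 0 <= (d - week_start).days < 7)

log_workout("a@example.com", 30, date(2020, 7, 6))
log_workout("a@example.com", 45, date(2020, 7, 8))
log_workout("a@example.com", 20, date(2020, 7, 20))  # outside the week
print(weekly_total("a@example.com", date(2020, 7, 6)))  # 75
```

In the real app the email is the partition key, so Cognito's verified identity is what keeps each user's history separate.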
Exercise Together
Exercise Together is a webapp that simulates your own group fitness class online with your friends
['ram potham']
['The Wolfram Award']
['agora.io', 'amplify', 'appsync', 'cognito', 'cookie', 'dynamodb', 'graphql', 'javascript', 'lambda', 'materialize-css', 'node.js', 'react', 'redux', 's3', 'serverless', 'ses', 'websocket']
9
10,180
https://devpost.com/software/reading_log
Inspiration I am an avid reader, so when the Covid-19 pandemic hit, losing access to the public library really impacted me. Furthermore, my eyes are very sensitive to light, so I can't read at a computer screen for long. With this app, I wanted to be able to keep track of which parts of a book I've read so that I can take breaks and pick up where I left off. In addition, this website allows me to write down my thoughts about what I read and then share them publicly if I choose to. Background, Development, and Challenges This is my first ever hackathon and my first web development project that I have hosted. I knew about Flask and HTML before, but to make this project great I had to expand my technical knowledge. Along the way, I learned: Bootstrap to create the front-end; HTML response codes, website routing, and security; Flask; Jinja2 to dynamically create HTML; and database relationships. The biggest challenge was rigorously testing the responsive front-end elements. Before this project, I had never paid much attention to front-end design. But because this was something I was actually going to use in my personal life (and wanted to share with my friends), I made sure all the pages looked great, were easy to use, and made use of Bootstrap's full capabilities. Built With bootstrap html jinja python sqlalchemy Try it out github.com amodug1664.pythonanywhere.com
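The core idea of picking up where you left off can be sketched as a tiny data model. The real site uses Flask with SQLAlchemy models; the field names below are simplified assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ReadingEntry:
    """One tracked book or article: progress plus optional notes."""
    title: str
    last_page: int = 0
    notes: list = field(default_factory=list)
    public: bool = False  # whether the notes are shared publicly

    def read_to(self, page: int) -> None:
        """Record progress; an earlier bookmark never moves you back."""
        self.last_page = max(self.last_page, page)

entry = ReadingEntry("Dune")
entry.read_to(40)
entry.read_to(25)  # re-reading an earlier section keeps the bookmark
entry.notes.append("Loving the world-building so far.")
print(entry.last_page)  # 40
```

In the actual app each `ReadingEntry` would be a table row related to the logged-in user, which is where the "database relationships" lesson comes in.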
Reading Log
A website made in Flask to keep track of all the books and articles you read. Makes it easy to keep track of everything.
['Abhinav Modugula']
[]
['bootstrap', 'html', 'jinja', 'python', 'sqlalchemy']
10
10,180
https://devpost.com/software/covid19-infobot
Inspiration People are under great misconceptions and believe myths regarding Covid-19, so I wanted to create a bot that can be a fact machine for everybody. What it does This info bot can tell you the latest data on Covid-19 cases in India and facts regarding Covid-19. How I built it It's built using Dialogflow and a Node.js webhook. Challenges I ran into Implementing the webhook and creating a bot were very new to me; this is my first chatbot. Accomplishments that I'm proud of I have made the bot capable of busting Covid-19 myths. What I learned Dialogflow, Node.js and Express.js. What's next for Covid19 Infobot I want to support languages like Hindi and make it more capable of giving facts. Apart from that, I want it to give more details on live Covid-19 data so it can fetch data for every city. Built With bot covid19 dialogflow express.js heroku node.js react redux Try it out bot.dialogflow.com souravdey777.github.io github.com github.com
Covid19 Infobot
The idea is to create an awareness chatbot that is easily accessible through different apps and assistants, or via the website. It shares knowledge and can tell you the latest info on Covid-19 cases in India
['Sourav Dey']
[]
['bot', 'covid19', 'dialogflow', 'express.js', 'heroku', 'node.js', 'react', 'redux']
11
10,180
https://devpost.com/software/baldchecker
The initial menu screen when opening the app A menu that shows up when Pick Image is selected. One of the options is Use Camera, which allows for direct camera access My head of hair, as captured by a Galaxy 7. The picture was taken with the front-facing camera and flash After a picture is taken or selected, the app goes back to the main menu By pressing Select Hair Region, the user can use a little selector square, which adjusts in shape and position, to select a patch of hair The same is done for a skin patch by pressing Select Skin Region By pressing Analyze in the main menu, a quantitative analysis of the differences between the two image patches is shown Inspiration I am very concerned about my hair health. Well, not that concerned, but I do recognize that some people may be concerned. And I may end up concerned. The problem is that I have not yet found an app out there that addresses tracking hair follicle liveliness, at least not in a simple way. However, a modern cellphone, with a camera and flash, may be able to address this problem. By directing the flash of a cellphone onto a hair patch, the captured image will look different depending on whether the hair patch is less or more dense. This difference can be captured by the cellphone's camera, and further useful, trackable information about the hair's status can be investigated. What it does As such, I've built a simple app that takes a hair patch, as captured by a cellphone's camera, and compares that hair patch to an image of the skin. By comparing the hair patch to the skin patch, I can come up with an index that represents how the hair is doing, which can be tracked over time in order to say something about hair loss or hair quality in a very quantitative way. How I built it I used TypeScript with the Ionic framework, in order to build a good-looking app from the start, with minimal time spent on the look of the app and maximum time spent on how the app works.
Challenges I ran into One of the challenges I faced was how to convert the base64-encoded images obtained in TypeScript (a JavaScript look-alike), which is primarily used for front-end logic rather than as a bespoke image-analysis framework, into a usable RGB format. Apart from this little challenge, the rest of the project was smooth sailing. Accomplishments that I'm proud of I was able to carry out the entire software development myself (and I am not a software developer), which was pretty empowering. What I learned I learned about the TypeScript environment, as well as the Ionic framework environment. Add to that the current cellphone-based strategies out there for baldness detection and tracking. What's next for BaldChecker I think that, as a Hackathon submission, the app is good enough. If more comes out of it, that's also mighty acceptable. Built With ionic javascript typescript Try it out github.com
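The hair-vs-skin comparison can be illustrated with a toy version: treat each patch as a list of (R, G, B) pixels and define a "baldness index" from the brightness gap between the two patches. The formula here is an illustrative assumption, not the app's actual one:

```python
# Toy patch comparison: dense dark hair photographs much darker than
# the bare skin lit by the flash, so the relative brightness gap is a
# simple trackable index.

def brightness(patch):
    """Mean per-pixel brightness of a list of (R, G, B) tuples."""
    return sum(sum(px) / 3 for px in patch) / len(patch)

def baldness_index(hair_patch, skin_patch):
    """0.0 when the hair patch is as bright as skin (no dark hair
    visible), approaching 1.0 when hair stays much darker than skin."""
    hair, skin = brightness(hair_patch), brightness(skin_patch)
    return max(0.0, (skin - hair) / skin) if skin else 0.0

dark_hair = [(30, 25, 20)] * 4
skin = [(200, 170, 150)] * 4
print(round(baldness_index(dark_hair, skin), 2))  # 0.86
```

Tracking this single number over time is what makes the comparison quantitative: a slowly rising brightness of the hair patch relative to skin would pull the index down.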
BaldChecker
This app aims at tracking changes in hair density by comparing 2 images. The two images are compared, and a baldness index is provided.
['Alexandre Dumont']
[]
['ionic', 'javascript', 'typescript']
12
10,180
https://devpost.com/software/bloggie-where-every-one-is-a-blogger-gi9enq
Inspiration With the abundance of resources for starting a blog, people often get confused about how to start one, or they lack the resources to run one. To overcome this problem, we wanted to create a platform so that anyone can publish articles and earn money without the need to set up or maintain a blog. What it does It provides a platform for people who want to start a blog but are unable to afford the expenses of running and managing one. Here in Bloggie, a person can publish articles and generate passive income just by sharing them with others. The more influential the article is, the more they can earn. Key/Important Features: Simple Signup Process - It takes less than 5 minutes to create an account and get started. Single Click Publishing Process - Anyone can publish articles in a matter of minutes with just a single click. Trusted Authentication - Bloggie uses Firebase (by Google) for authentication. User Friendly Interface - The simple and clean interface ensures that the platform is easy to use and intuitive: publish an article and start earning. Categorized Explore - Explore articles by all publishers based on your interested category. Hand-Picked Articles - Read featured articles hand-picked by us. How we built it We used Flutter (a framework written in Dart) to develop cross-platform hybrid apps, and Firebase as the backend. We used the Provider state-management technique to persist and manage the global state of the app. What's next for Bloggie - where every one is a blogger Web App Major Improvements More New Features Rewards For Readers Built With dart firebase flutter Try it out github.com
Bloggie - Where every one is a blogger
A platform where anyone can be a blogger and can make money
['Mohammed Mohsin', 'Habeeb Ullah']
[]
['dart', 'firebase', 'flutter']
13
10,180
https://devpost.com/software/armageddon-5nk8ir
Inspiration All the role-playing games, or even first-person experience games in general, never felt good enough; they always restricted movement and the things you could do. At the end of the day we were still playing through our computer screens and not really doing anything. With lockdowns in countries across the globe, we wanted to make something everyone could enjoy from the confines of their home. So we decided to bring the gaming experience right into our homes using Augmented Reality What it does It's a first-person story experience. Our game allows you to play the role of a detective who has to sort through clues to save the world. Our game gives a whole new definition to RPG, as we allow players to experience the game environment right in their homes using Augmented Reality. When the app starts you are presented with a menu screen; you can choose to press Play, or press Help to learn how it works. Once you press Play, our introductory cutscene starts and you learn the back story and what you have to do. After the cutscene your game starts, and when you point your camera towards the ground you will be prompted to place your environment. After placing your environment you can start your adventure. Solve clues to keep progressing. How I built it We used the Unity Engine to build the base, along with Vuforia for implementing Augmented Reality. We used Voxal and Audacity for audio morphing and editing. We used Cinemachine in Unity to create the cutscenes. Animated characters were taken directly from Mixamo. Challenges I ran into This game required a lot of narration and direction, and I had to do the voice-overs. Being someone who hasn't done much of this, it was really challenging. Accomplishments that I'm proud of We made a professional-level game and we did not compromise any part of the story due to lack of knowledge. We learnt new things and we powered through. What I learned I learned voice morphing, audio editing and shader graphs in Unity.
Apart from technical skills, I learnt product research and collaborative working. What's next for Armageddon We plan on using cloud anchors to make our game live multiplayer and adding a lot more levels to the story. This is the first game of its genre, and we plan on diversifying it into educational, historical and other kinds of storytelling and experience. Built With audacity mixamo unity vuforia Try it out drive.google.com
Armageddon
Bringing a new dimension to first person gaming experiences
['Abhijeet Swain', 'Akash Jha', 'Aaditee Juyal']
[]
['audacity', 'mixamo', 'unity', 'vuforia']
14
10,180
https://devpost.com/software/covid-19-health-center-qogj8t
Doctor Inspiration I took inspiration for this project from the growing crisis, and from how any help related to the current scenario can yield something better. What it does This is a GUI application with which the user can check for symptoms with the help of a rule-based chatbot, which then gives recommendations based on an evaluation of their health. The user can also read all the latest worldwide news related to COVID-19. This digital health center also helps the user keep track of the statistics: the total cases, total recoveries, total deaths, and total active and critical cases of any country. It allows the visually impaired to listen while the machine speaks the text. Furthermore, the user can read or listen to the precautions given by the WHO to keep safe. How I built it I built it using tkinter to give users an easy interaction. A text-to-speech library, pyttsx3, has been used to also help the visually impaired to some extent. There is a rule-based chatbot that evaluates the user's health by asking certain questions; the data for evaluation has been adapted from DOH guidelines. To keep track of the statistics I have used the "Coronavirus map" API, and the other data has been obtained from the WHO website. Challenges I ran into I ran into certain challenges while making the chatbot, since pyttsx3 and tkinter were not syncing together, so I decided to use multithreading to some extent. Other than this, the design and functionality needed a lot of thinking and effort. Accomplishments that I'm proud of I'm really proud of the fact that I did something about the current crisis that might benefit people, and especially that I considered the visually impaired while building this. I am also proud of the fact that I used OOP throughout to build this project. What I learned I have learnt various libraries and I think I have gained hands-on experience in using most of them, like tkinter, BeautifulSoup and pyttsx3.
Other than this, I have gained a fair knowledge of information related to COVID-19. What's next for COVID-19-Health-Center I really plan on using machine learning and training the chatbot to answer smartly. There are still some things that can be added to the GUI. Built With beautiful-soup python pyttsx3 tkinter Try it out github.com
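The pyttsx3/tkinter syncing problem mentioned above is typically solved by running speech in a worker thread fed by a queue, so the GUI main loop never blocks. The sketch below uses a stub `speak` function in place of pyttsx3 so it stays self-contained; the queue-and-sentinel pattern is the point:

```python
import queue
import threading

spoken = []

def speak(text: str) -> None:
    """Stand-in for pyttsx3: real code would call engine.say(text)
    followed by engine.runAndWait()."""
    spoken.append(text)

def speech_worker(q: "queue.Queue") -> None:
    """Drain the queue forever; a None sentinel shuts the worker down."""
    while True:
        text = q.get()
        if text is None:
            break
        speak(text)

q = queue.Queue()
worker = threading.Thread(target=speech_worker, args=(q,), daemon=True)
worker.start()

q.put("Total cases updated.")  # the GUI thread just enqueues and returns
q.put(None)
worker.join()
print(spoken)  # ['Total cases updated.']
```

Because `runAndWait()` blocks until speech finishes, keeping it off the tkinter thread is what keeps buttons and the chatbot responsive while the app is talking.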
COVID-19-Health-Center
This is a GUI application for keeping track of almost everything related to coronavirus. It is also made to help the visually impaired, so that they can keep track with just one click.
['maryamnadeem20 Nadeem']
[]
['beautiful-soup', 'python', 'pyttsx3', 'tkinter']
15
10,180
https://devpost.com/software/moodsmart
MoodSmart Logo Business Model Inspiration There are a multitude of pressures permeating the current climate of fear in our society, including loss, unemployment and isolation. All of these problems and tragedies undoubtedly have an effect on our mental wellbeing; in fact, a recent survey of 800 people in the UK living with mental illness found that 80% felt that their mental health was much worse due to the pandemic's impact. Furthermore, mental illnesses often go undiagnosed and untreated, resulting in an emotional and economic cost. The third Sustainable Development Goal promotes good health and well-being for all, and we aim to propagate this mission with our app, which addresses the question: 'How might we use AI to improve the mental health of people aged 14 onwards, during the pandemic and beyond?' What it does MoodSmart is a cross-platform mobile application that aims to alleviate cases of mental health problems. It does this by analysing a user's social media posts and detecting early indicators of negative mental health. In particular, we focused on depression for our prototype, as it's an extremely pertinent illness affecting 264 million people worldwide. How we built it The back-end, a Python Flask API, uses a machine learning algorithm that employs Natural Language Processing in order to perform sentiment analysis on tweets. The front-end was created with Flutter. Challenges we ran into We found it difficult to query the Flask API from Flutter, but we accomplished it in the end. In addition, communication between all team members was hard to manage, but we got around this too. Accomplishments that we are proud of We are proud of our appealing UI design and a machine learning model with 99% accuracy. Moreover, it was greatly insightful to research mental health and sentiment analysis. What we learned (Demi) I learnt how to be resilient when things don't go as planned.
I also learnt how to be a great leader and allocate roles accordingly. (Grace) I learnt so many new techniques using Python, Flask and Flutter, even completing a Coursera course! Additionally, I learnt how to manage my time effectively to produce an effective prototype. (Dinali) I learnt how to research effectively, find statistics and data relevant to our topic of mental health and depression, and develop part of a concise script for a pitch. (Criofan) I learnt how to create a distinct logo and an effective video for a pitch. What's next for MoodSmart Future plans for MoodSmart include integrating it with other social media apps such as WhatsApp, Instagram and Facebook. We also want to include online therapist sessions for users on a free-trial basis to assist them during these hard times. Furthermore, we would like to improve our model to detect other conditions such as anxiety. Built With flask flutter python Try it out github.com
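The post-analysis idea can be sketched with a toy, lexicon-based stand-in for the sentiment analysis the Flask API performs. The real project trains an ML/NLP model; this word list and flagging rule are purely illustrative assumptions:

```python
# Toy sentiment scoring: count negative vs positive words and flag a
# user whose recent posts trend negative, as an early indicator worth
# a gentle follow-up.

NEGATIVE = {"hopeless", "alone", "worthless", "tired", "sad"}
POSITIVE = {"happy", "grateful", "excited", "calm", "proud"}

def sentiment_score(post: str) -> int:
    """Positive score means an upbeat post, negative the opposite."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def flag_user(posts) -> bool:
    """Flag when the average sentiment across recent posts is negative."""
    scores = [sentiment_score(p) for p in posts]
    return sum(scores) / len(scores) < 0

print(flag_user(["feeling hopeless and alone", "so tired today"]))  # True
```

Averaging over several posts rather than reacting to a single one is what separates a trend (the early indicator MoodSmart looks for) from an off day.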
MoodSmart
We seek to improve the mental health of our community using AI.
['Demi Oshin', 'Dinali J', 'Grace Sodunke', 'Madeleine Salem', 'Criofan']
[]
['flask', 'flutter', 'python']
16
10,180
https://devpost.com/software/fancio
Contractor Workflow Laborer Workflow What Inspired Me As someone who helps out with my family's construction business, I often notice that my father has difficulty finding the right construction workers for a given contract. Many times, due to lack of time, he ends up having to drive around neighborhoods to pick up workers who wait for contractors to offer them a temporary job. An analysis of the most recent survey data by the National Association of Home Builders' Housing Economics shows that immigrant workers constitute nearly 25% of the overall construction workforce; however, those workers tend to have difficulty obtaining jobs due to legal paperwork. On top of that, 70% of construction workers have a HS diploma, with the second most common education level being none at 23%. This is a large issue for these workers, as in today's society high-wage jobs often reflect your level of education and professional network. Solution This is why Fancio was developed. Fancio is a web app that CONNECTS labor workers with contractors looking for workers to help finish the job. This bridge between workers and contractors allows a seamless and friendly connection, and provides both ends with a platform tailored for them, unlike other services websites (like Fiverr, Upwork, Craigslist) where the target audience is extremely broad and it is difficult to land a gig. On top of that, because of the Coronavirus pandemic, many of these laborers have lost their jobs, especially those who physically go out to find work. With Fancio, workers and contractors can present themselves by showing their professional profile to other individuals, and keep in touch while looking for or posting different opportunities online, without the shady interactions on other job sites.
Tech Stack | Design What I learned This was my first time creating an application from scratch using any of the technologies in the MERN stack. I was able to learn: How to create and connect the frontend and backend Integrating APIs, using Twilio to help with a feature How to deploy a full-stack application Organizing and modularizing code with React components/routers Using a cloud database service to assist with API requests/DB schema creation Challenges Faced A frustrating learning curve on how to organize the different components of a React app (worrying about the passing of data). Learning/debugging the API endpoints I created, as it was my first time using Node/Express/Mongoose. On top of that, there was also a lot of refactoring of the API requests due to request errors/incorrect handling, but I was able to learn from and understand a lot of the mistakes that can easily occur (eg. Login/Signup Auth, Worker + Contractor scheduling requests, etc.) Built With css express.js firebase heroku html mongodb mongoose node.js react.js twilio Try it out github.com
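The core matching idea (contractors post a job in a trade, workers list their trades, and the platform pairs them up) can be sketched as follows. Every name and field below is a hypothetical assumption for illustration; Fancio's actual MongoDB schema and Express endpoints are not shown in the source:

```python
# Illustrative worker/job matching, as a what-the-platform-does sketch.
workers = [
    {"name": "Luis", "trades": {"plumbing", "construction"}},
    {"name": "Marta", "trades": {"mechanics"}},
]

def match_workers(job_trade: str):
    """Return the names of workers whose listed trades cover the job."""
    return [w["name"] for w in workers if job_trade in w["trades"]]

print(match_workers("plumbing"))  # ['Luis']
```

In the real app this filter would be a database query against worker profiles, with the contractor's posting supplying the trade.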
Fancio
Fancio is a web application that connects labor workers (mechanics, construction, plumbing, etc.) with contractors looking for workers to help get the job done
['salman siraj']
[]
['css', 'express.js', 'firebase', 'heroku', 'html', 'mongodb', 'mongoose', 'node.js', 'react.js', 'twilio']
17
10,180
https://devpost.com/software/eagle-sight
Inspiration I wanted to build a mini game to entertain people, and I also wanted to add a lot of design to it. So I got the idea of making a quiz-like game that tests your eyes. What it does Eagle Sight is a website that is mobile-friendly too, so it can be used on any device with an internet connection. At Eagle Sight, you get 10 questions that are all related to your eyesight. The questions are like "How many numbers do you see?" or "Do you see a circle?" It is not really easy, but anyone with very good sight can get 10 out of 10... After answering the 10 questions, your result will come... It will say how good your eyes are, or whether your eyes aren't really good. Then you can either play the game again or share your results on Facebook by pressing the "share score" button. How I built it Firstly, I wrote down what sort of questions to add and how to make the questions more difficult as you go. Then I designed the graphics needed for the questions and the website. After that, using HTML, I coded the basic website with no CSS or JavaScript. I made all the pages except for the homepage and results page. Then I started designing the homepage. I used a lot of CSS to design the homepage and also added a beautiful floating colorful balls design using JavaScript. After that I started adding CSS effects to the buttons and text of each and every question page. Finally, I created the results page with a similar design to the homepage. The domain was obtained for free through Freenom, and hosting was done through InfinityFree. I also used Cloudflare and secured my website... Challenges I ran into As this was my first time using 100% code and no website builders at all, this was difficult. I ran into so many challenges. Whenever I couldn't achieve a beautiful design I would use an open-source library's help. But most of the code in open-source libraries didn't work really well on mobile; either the text would not show up or it would be really ugly.
So I had to try out so many of these snippets... Making the site mobile-friendly was a similar challenge: sometimes my own code made the site ugly on mobile, and sometimes I had to make pages all over again because they worked well on PC but not on mobile. But finally I made the website really mobile-friendly. Accomplishments that I'm proud of I'm really proud of building a website with this much code... Especially this much CSS; I didn't even know something called CSS existed until this week, but now I have a website with a lot of it... Also, I am proud of how my website functions: when I told my family and a few friends to try it out, it was perfect. Everyone got accurate results... What's next for Eagle Sight Now Eagle Sight is live and anyone can visit it... I have two main goals for this new website: Bring in more people to my website and make it known among people. Although I have published my website, it doesn't mean I am done; it's now that the real game starts. I will have to promote it on my social media accounts as well as on other sites. Develop my site further. I need professionals to review my site so I can learn how to improve it. Built With css html javascript Try it out www.eaglesight.tech github.com
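The 10-question flow described above boils down to checking answers against a key and mapping the score to a verdict. The answer key, thresholds, and messages below are illustrative assumptions, not the site's actual content:

```python
# Toy version of the Eagle Sight scoring logic.
ANSWER_KEY = ["3", "yes", "7", "no", "2", "yes", "5", "no", "4", "yes"]

def grade(answers):
    """Count correct answers and map the score to a result message."""
    score = sum(a == k for a, k in zip(answers, ANSWER_KEY))
    if score == 10:
        verdict = "Eagle sight!"
    elif score >= 7:
        verdict = "Sharp eyes."
    else:
        verdict = "Your eyes could be sharper."
    return score, verdict

score, verdict = grade(["3", "yes", "7", "no", "2", "yes", "5", "no", "4", "no"])
print(score, verdict)  # 9 Sharp eyes.
```

On the real site this logic runs in JavaScript across the question pages, with the result page rendering the verdict and the Facebook share button.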
Eagle Sight
Play this online game and see how sharp your eyes are... Are they as sharp as an Eagle's or weak as a bat's...
['Senuka Rathnayake']
[]
['css', 'html', 'javascript']
18
10,180
https://devpost.com/software/give-me-entertainment
Give-Me-Entertainment Boredom can be a major issue, especially during quarantine. Give Me Entertainment can give you something to do, or it can give you an idea of what you might want to do. Contents Motivation List of sections Code Screenshots Additional Notes Framework Credits Contact me about this project Motivation During quarantine, many people have found themselves bored. In order to keep a level of enthusiasm, I wanted to make a website with multiple tasks for people to do, as well as recommending new things to try out. 😄 List of sections In this website, there are different sections for each category. These include: Music In this section, the top tracks from Spotify, Apple Music and Deezer are listed Games In this section, you can play a game embedded from Wanted5Games Cooking In this section, you can press a button to get a random recipe Puzzle/Riddle In this section, you can do a puzzle (or recommend a riddle which might be added) Netflix In this section, the top Netflix shows of the year are given through YouTube videos Other Shows In this section, the top shows from popular streaming platforms are given through YouTube videos Chat In this section, you can contact me via email to leave feedback on the website Code Check out the HTML Check out the CSS Check out the JavaScript Logo Screenshots At the top of the web page, you are greeted with the welcome section, introducing the user to the website. In the next section, if you click on the name of a category, you will be taken to that section. This is what the music section looks like. All of the other sections are similar. At the bottom, there is a section for users who would like to contact me. Additional Notes Make sure your ad blocker is turned off: in order to play games in the games section, it must be disabled. Framework I used Atom , a text editor, to develop my website. Programming languages include: HTML CSS (with Bootstrap) JavaScript (with a little jQuery!) 
Credits Things/Websites that helped me make this website: Meal Generator Games for games section Puzzle in riddles section Icons Bootstrap Logo Maker Spotify, Apple Music and Deezer - embedded playlists YouTube - embedded YouTube videos Contact me about this project If you want to contact me about this project, email givemeentertainment@yahoo.com Built With atom css html javascript Try it out thecodingcrystal.github.io github.com
Give Me Entertainment!
This is a website created to give people something to do if they are bored.
[]
[]
['atom', 'css', 'html', 'javascript']
19
10,180
https://devpost.com/software/color-search-0euak6
Inspiration A friend of mine wanted to make a collage of their photo and a photo of the sky matching the color of their dress. That gave me the idea that maybe there's a website that finds photos like this. Even though there are websites that can be used to search based on color, there wasn't a single website that took a photo as an input. So, I decided to build one. The main algorithm used in the website is heavily influenced by this blog post What it does You can submit a photo as an input and then the website looks for photos (among the indexed photos from Unsplash) and gives 100 photos that have the closest color palette. So, if you search using a photo that's mostly red, you'll probably get photos like the sky at dusk, etc. How I built it I indexed the RGB values of a certain number of photos and stored them in a SQLite database. I created the website using Flask, where I take the photo and extract its RGB values. Then I convert the RGB values to CIELAB values and find their distance. The photos are then sorted by distance to show the closest matches. The reason behind using the distance between CIELAB and not RGB values is that CIELAB better represents human color perception. Using RGB values would probably lead to a color appearing more often because it's closer in the RGB spectrum but wouldn't actually be closer in human perception. Challenges I ran into Oh boy, A LOT of challenges. When I first built it, I was getting the color blue as a match more often than other colors. The reason behind this was that when I was indexing the photos, I was clustering their colors to find the most dominant ones. In this process, the clustering isn't very accurate if the image quality isn't high enough. I still haven't figured out why this is the case, but after I spent 2 entire days of frustration and troubleshooting, I figured it out and fixed it by increasing the image quality when indexing (should've realized the bug sooner). 
Moreover, I had some trouble hosting the website. That's probably because of how new I am to web hosting. Accomplishments that I'm proud of I actually completed building an app that uses data science in it!! I've never done anything like this before. I've tinkered around with data science and machine learning libraries before but never made a product with them. I was really proud when I saw it work for the first time. What I learned I learned a bit more about web hosting. There's still a long way to go. This was the first complete project I built in Flask. Previously I only had experience with Django. So, it's nice that I can now use a lightweight web framework too. What's next for Color Search I need to learn how to make this website more scalable. I did build a search engine, but it's more of a brute-force search over the whole database instead of using any efficient algorithm. When I indexed 10,000+ photos, it took around 40-45 seconds to return the results, which was way too long. So, I had to drastically reduce the number of indexed images to make it more usable. However, I want to add way more images to it in the future. So, maybe I'll learn how to build better SQL tables or learn how to use NoSQL databases (like MongoDB) to make this more efficient. I also want to add a secondary text search engine on the page that displays the matches, to filter through the results using the images' metadata. Built With bootstrap flask opencv python pythonanywhere scikit-learn unsplash-api Try it out colorsearch.pythonanywhere.com github.com
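The RGB-to-CIELAB comparison described under "How I built it" can be sketched in a few lines. This is a minimal pure-Python version assuming sRGB input, the D65 white point, and the simple CIE76 distance; the function names are illustrative, not the app's actual code:

```python
import math

def _srgb_to_linear(c):
    """Undo sRGB gamma for one 0-255 channel value."""
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(rgb):
    """Convert an (R, G, B) tuple to CIELAB via linear RGB and XYZ (D65)."""
    r, g, b = (_srgb_to_linear(c) for c in rgb)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def delta_e(lab1, lab2):
    """CIE76 distance: plain Euclidean distance in Lab space."""
    return math.dist(lab1, lab2)
```

Sorting the indexed photos by `delta_e` against the query photo's dominant colors then yields the perceptually nearest matches.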
Color Search
A Search Engine for images and colors
['Imtiaz Ahmed Khan']
[]
['bootstrap', 'flask', 'opencv', 'python', 'pythonanywhere', 'scikit-learn', 'unsplash-api']
20
10,180
https://devpost.com/software/sms-genie
Responses come in an SMS Inspiration "The global scale and speed of the current educational disruption is unparalleled" - Audrey Azoulay, Director-General, UNESCO As the world is still trying to cope with the unprecedented Coronavirus pandemic, education has come to a complete standstill for over 290 million students across the globe, reports the United Nations . Although students in urban areas are somewhat continuing to learn thanks to internet services, students in many other parts of the world don't have access to a stable internet connection, and are also restricted to feature phones instead of smartphones. A closure of schools has resulted in a closure of education for them. Our solution targets the demographic of students who have an iron will to continue learning, but are unable to do so due to an extremely poor internet connection or their monetary inability to purchase a smartphone. We realized that although the internet is still a commodity of leisure in many parts of the world, cellular connections are omnipresent, and people at least own a feature phone. We decided to leverage these thoughts for education in such uncertain times, and thus built SMS Genie - A WAY TO ACCESS THE INTERNET, WITHOUT THE INTERNET! What it does SMS Genie is a platform-agnostic service that can help students continue their learning via simple cellular SMSs, irrespective of internet presence and feature phone limitations. Using just SMS, students will now be able to: 1. Get subject-related search results (Example: "Search who is Mahatma Gandhi", "search how many inches in a foot?") 2. Get help in solving math problems (Example: "Solve 2x+4=3") 3. Get news headlines to stay up to date with current affairs (Example: "Current Affairs please") 4. Get facts to satiate their inquisitiveness (Example: "Get me a space fact") 5. 
Get language-related answers when learning new languages (Example: "Translate I am here in Hindi", "Define Metamorphosis", "Example sentence using the word trial", "Word of the day") 6. Get instructions on precautionary actions against the Coronavirus (Example: "Suggest some precautions for Corona") 7. Get Coronavirus stats via SMS (Example: "Tell me the recent Corona stats") The student receives answers via SMS. Clean. Simple. Efficient. No need for the internet to continue studying now! How we built it We used Twilio to set up the SMS service, and Dialogflow for intent extraction. Next up, we hooked up each intent with relevant APIs to answer the student's question in the best possible manner. The query results are then sent to the student via Twilio. We have used a wide range of APIs to best cater to student queries, such as DuckDuckGo and NewsAPI. Challenges we ran into We had initial challenges integrating such a wide array of APIs with the service in such a short span of time. Also, while working remotely from 3 major cities of India (Delhi, Hyderabad, Bangalore), we ourselves experienced slow internet speeds. While the United States is going forward in R&D for 5G, urban India still struggles with 4G networks, which still need to be massively improved. This struggle is prevalent across all third-world countries as well. Accomplishments that we are proud of Given such a short time-frame, we were still able to integrate a lot of relevant and crucial services. Also, our product is production-ready, and could easily achieve 100% penetration of the education market very quickly, since it's platform-agnostic. We firmly believe that this SMS service has the ability to bring about a huge change in education at such uncertain times. What we learned From the tech aspect, we have gained a lot of experience while working on this service. We have been able to implement our premise very well, i.e., "ACCESSING THE INTERNET, WITHOUT THE INTERNET". 
But what I wish to highlight and emphasise more is that we got to understand a lot about this particular economic sector, and its immediate requirements. What's next for SMS Genie We wish to make SMS Genie a worldwide phenomenon, to help students learn, uninhibited, any time of the day. We wish to increase its appeal by adding MMS capabilities to support multimedia files as well. Built With dialogflow javascript python twilio Try it out github.com
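To make the flow described under "How we built it" concrete, here is a minimal sketch of the routing step. In the real service Dialogflow performs the intent extraction, so this keyword router and its handler names are stand-in assumptions for exposition, not the project's code:

```python
import re

# Keyword patterns standing in for Dialogflow intents; handler names are
# illustrative assumptions, not the actual service's identifiers.
INTENTS = [
    (re.compile(r"^search\b", re.I), "web_search"),
    (re.compile(r"^solve\b", re.I), "math_solver"),
    (re.compile(r"^translate\b", re.I), "translator"),
    (re.compile(r"corona stats", re.I), "covid_stats"),
    (re.compile(r"current affairs", re.I), "news_headlines"),
]

def route_sms(body):
    """Map an incoming SMS body to the handler that should answer it."""
    for pattern, handler in INTENTS:
        if pattern.search(body):
            return handler
    return "fallback"
```

In the deployed service, the chosen handler would call the matching API (DuckDuckGo, NewsAPI, etc.) and the answer would be sent back to the student as an SMS through Twilio.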
SMS Genie
Want to continue learning, but stuck at home with a poor internet connection? Now you can remotely study and browse the internet, WITHOUT THE INTERNET! Remote education, now just one SMS away!
['Diptark Bose', 'Priyankar Kumar', 'Akashraj R', 'Pramod Shenoy']
[]
['dialogflow', 'javascript', 'python', 'twilio']
21
10,180
https://devpost.com/software/faco-fight-against-corona-jfcza9
GIF Confusion matrix for our final model INSPIRATION A diagnosis of respiratory disease is one of the most common outcomes of visiting a doctor. Respiratory diseases can be caused by inflammation, bacterial infection or viral infection of the respiratory tract. Diseases caused by inflammation include chronic conditions such as asthma, cystic fibrosis, COVID-19, and chronic obstructive pulmonary disease (COPD). Acute conditions, caused by either bacterial or viral infection, can affect either the upper or lower respiratory tract. Upper respiratory tract infections include common colds, while lower respiratory tract infections include diseases such as pneumonia. Other infections include influenza, acute bronchitis, and bronchiolitis. Typically, doctors use stethoscopes to listen to the lungs as the first indication of a respiratory problem. The information available from these sounds is compromised, as the sound has to first pass through the chest musculature, which muffles high-pitched components of respiratory sounds. In contrast, the lungs are directly connected to the atmosphere during respiratory events such as coughs. PROBLEM STATEMENT In this difficult time, a lot of people panic if they show signs of any of the symptoms, and they want to visit the doctor. It isn't always necessary for patients to visit the doctor, as they might have a normal fever, cold or other condition that does not require immediate medical care. A patient who might not have COVID-19 might contract the disease during a visit to a Corona testing booth, or expose others if they are infected. Most diseases related to the respiratory system can be assessed by the use of a stethoscope, which requires the patient to be physically present with the doctor. Healthcare access is limited—doctors can only see so many people, and people living in rural areas may have to travel to seek care, potentially exposing others and themselves. 
SOLUTION We provide point-of-care diagnostic solutions for tele-health that are easily integrated into existing platforms. We are working on an app to provide instant clinical-quality diagnostic tests and management tools directly to consumers and healthcare providers. Our app is based on the premise that cough and breathing sounds carry vital information on the state of the respiratory tract. It is created to diagnose and measure the severity of a wide range of chronic and acute diseases such as corona, pneumonia, asthma, bronchiolitis and chronic obstructive pulmonary disease (COPD) using this insight. These audible sounds, used by our app, contain significantly more information than the sounds picked up by a stethoscope. The app's approach is automated and removes the need for human interpretation of respiratory sounds; in addition, the user's disease can also be detected by measuring their heart rate through the smartphone camera. The application works in the following manner: The user downloads the application from the app store and registers himself/herself. After creating his/her account, they go through a questionnaire describing their symptoms like headache, fever, cough, cold etc. After the questionnaire, the app records the user's coughing, speaking, breathing and heart rate in the form of a video from the smartphone. After recording, the integrated AI system will analyze the sound recording and heart rate, comparing them with a large database of respiratory sounds. If it detects any specific pattern inherent to a particular disease in the recording, it will enable the patient to contact a nearby specialist doctor. The doctor then receives a notification on a counterpart of this app, for doctors. The doctor can view the form, review the recording, and also read the report given by the AI of the application. 
The doctor, depending upon the report of the AI, will develop a diagnosis, suggest medicines, or recommend a hospital visit if the person shows symptoms of corona or another serious condition. In cases where the AI detects a very seriously ill patient, it will also enable the physician to call an ambulance to the user's location and continuously track the user. HOW WE ARE GOING TO BUILD IT We will take a machine learning approach to develop highly accurate algorithms that diagnose disease from cough and respiratory sounds. Machine learning is an artificial intelligence technique that constructs algorithms with the ability to learn from data. In our approach, signatures that characterize the respiratory tract are extracted from cough and breathing sounds. We start by matching signatures in a large database of sound recordings with known clinical diagnoses. Our machine learning tools then find the optimum combination of these signatures to create an accurate diagnostic test or severity measure (this is called classification). Importantly, we believe these signatures are consistent across the population and not specific to an individual, so there is no need for a personalized database. The app will take the following steps: Receive an audio signal from the user's phone microphone Filter the signal so as to improve its quality and remove background noise Run the signal through an artificial neural network which will decide whether it is a usable breathing or cough signal Convert the signal into a frequency-based representation (spectrogram) Run the signal through a suitably trained artificial neural network that will predict the user's condition and possible illness Store features of the audio signal when the classification indicates a symptom IMPACT FACO will help patients get themselves tested at home, supporting areas where tests and access to tests are limited. 
This will help democratize care in hard-to-reach or resource-strapped areas, and provide peace of mind so that patients will not overwhelm already stressed healthcare systems. Doctors will be able to prioritize patients with an urgent need related to their speciality, providing care from the palm of their hand, limiting their exposure and travel time. CHALLENGES WE RAN INTO No financial support Working under quarantine measures Working in different time-zones Scarcity of high-quality data sets to train our models with One feature-related problem: legal shortcomings we might face when adding the patient-tracking feature ACCOMPLISHMENTS We went from initial concept to a full working prototype. We got a jumpstart on organizational strategy, revenue and business plans—laying the groundwork for building partnerships with healthcare providers and pharmacies. On the creative side, we built our foundational brand and design system, and created over 40 screens to develop a fully working prototype of our digital experience. Our prototype models nearly the entire app experience—from recording respiratory sounds to reporting to managing contact, care, and prescriptions with physicians. Technologically, we successfully developed an algorithm for disease classification and have begun the application development process—well on our way to making this a fully functional product within the next 20 days. You can explore the full prototype here or watch the demo (and check out our promo gif )! WHAT WE'VE DONE SO FAR We wanted to show that the project is feasible. The scientific literature has shown that audio data can help diagnose respiratory diseases. We provide some references below. However, it is unclear how reliable such a model would be in real situations. For that reason, we used a publicly available annotated dataset of cough samples: it is a collection of audio files in wav format classified into four different categories. 
We wrote code in Python that converts those samples into MEL spectrograms. For the time being we are not using the MEL scale, just the spectrograms. We did several kinds of pre-processing of the signals, including data augmentation, then converted all pre-processed signals, along with their categories, into a databunch object that can be used for training artificial neural networks created in the fastai library. The signals within the databunch were divided into training and validation sets. Because the dataset was small, we used transfer learning . That is, we used previously trained networks as a starting point, rather than training from scratch. We treated the spectrograms as if they were images and used powerful models pre-trained to classify images from large datasets. In particular, we tried two variants of resnet and two variants of VGG, differing in their depth (number of hidden layers). This approach implied turning the spectrograms into image-like representations and normalizing them according to the statistics of the original dataset our models were trained on (imagenet). We first changed the head of the networks to one that would classify according to our categories and trained only that part of the net, freezing the rest. Later on we unfroze the rest of the net and trained it further. We finally compared the different models by the confusion matrices that we obtained from the validation set. We finally settled on a model based on VGG19 . We exported the model for later use in classifying audio samples through the pre-existing interface of our mobile app. The results are promising, especially considering the small amount of data that we have available at this moment. We have included an image of the final confusion matrix that shows how our current network can correctly classify all four categories of signal about 50% of the time, far better than the random level of 25%. 
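As an illustration of the spectrogram step described above, here is a minimal NumPy sketch: a plain magnitude spectrogram with log compression, not the MEL-scaled fastai pipeline we actually used, and the function names are ours for exposition only.

```python
import numpy as np

def spectrogram(signal, n_fft=512, hop=128):
    """Magnitude spectrogram via a Hann-windowed short-time FFT."""
    window = np.hanning(n_fft)
    frames = [
        np.abs(np.fft.rfft(signal[start:start + n_fft] * window))
        for start in range(0, len(signal) - n_fft + 1, hop)
    ]
    return np.array(frames).T  # shape: (freq_bins, time_frames)

def to_db(spec, eps=1e-10):
    """Log-compress so the network sees a dynamic range closer to hearing."""
    return 20.0 * np.log10(spec + eps)
```

The resulting 2-D array is what gets treated as an image and fed to the pre-trained convolutional networks.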
We conclude that wav files obtained through a phone mic provide information that can be useful for diagnosing respiratory conditions. We are confident that we can vastly improve both the sensitivity and the specificity of our model if we can gain access to larger, more representative datasets. We provide an image of the final confusion matrix for our model in the gallery. This is a repository that contains the most important pieces of our work, including some code, the confusion matrix image and the exported final model. SUMMARY We are developing digital healthcare solutions to assist doctors and empower patients to diagnose and manage diseases. We are creating easy-to-use, affordable, clinically validated and regulatory-cleared diagnostic tools that only require a smartphone. Our solutions are designed to be easily integrated into existing tele-health solutions, and we are also working on apps to provide respiratory disease diagnosis and management directly to consumers and healthcare providers. Feel free to click on our website for more information. We developed this website using JavaScript, HTML, CSS, Figma, and integrated it with Firebase to manage hosting and our database. Thank you for reading, and don't hesitate to reach out if you have any questions! REFERENCES Porter P, Claxton S, Wood J, Peltonen V, Brisbane J, Purdie F, Smith C, Bear N, Abeyratne U, Diagnosis of Chronic Obstructive Pulmonary Disease (COPD) Exacerbations Using a Smartphone-Based, Cough Centred Algorithm, ERS 2019, October 1, 2019. 
Porter P, Abeyratne U, Swarnkar V, Tan J, Ng T, Brisbane JM, Speldewinde D, Choveaux J, Sharan R, Kosasih K and Della, P, A prospective multicentre study testing the diagnostic accuracy of an automated cough sound centered analytic system for the identification of common respiratory disorders in children, Respiratory Research 20(81), 2019 Moschovis PP, Sampayo EM, Porter P, Abeyratne U, Doros G, Swarnkar V, Sharan R, Carl JC, A Cough Analysis Smartphone Application for Diagnosis of Acute Respiratory Illnesses in Children, ATS 2019, May 19, 2019. Sharan RV, Abeyratne UR, Swarnkar VR, Porter P, Automatic croup diagnosis using cough sound recognition, IEEE Transactions on Biomedical Engineering 66(2), 2019. Kosasih K, Abeyratne UR, Exhaustive mathematical analysis of simple clinical measurements for childhood pneumonia diagnosis, World Journal of Pediatrics 13(5), 2017. Kosasih K, Abeyratne UR, Swarnkar V, Triasih R, Wavelet augmented cough analysis for rapid childhood pneumonia diagnosis, IEEE Transactions on Biomedical Engineering 62(4), 2015. Amrulloh YA, Abeyratne UR, Swarnkar V, Triasih R, Setyati A, Automatic cough segmentation from non-contact sound recordings in pediatric wards, Biomedical Signal Processing and Control 21, 2015. Swarnkar V, Abeyratne UR, Chang AB, Amrulloh YA, Setyati A, Triasih R, Automatic identification of wet and dry cough in pediatric patients with respiratory diseases, Annals Biomedical Engineering 41(5), 2013. Abeyratne UR, Swarnkar V, Setyati A, Triasih R, Cough sound analysis can rapidly diagnose childhood pneumonia, Annals Biomedical Engineering 41(11), 2013. FACO APP VIDEO DEMO LINK FACO PRESENTATION LINK FACO 1st Pilot Web App LINK Built With android-studio doubango fastai firebase google-cloud google-maps java machine-learning mysql numpy pandas python pytorch sklearn sound-monitoring-and-matching-api spyder webrtc Try it out github.com
FACO: Fight Against Corona
A contactless digital healthcare solution to assist doctors and empower patients to diagnose and manage diseases
['Archit Suryawanshi', 'Oghenetejiri Agbodoroba', 'Ntongha Ibiang', 'Sahil Singhavi', 'Ruthy Levi', 'Navneet Gupta', 'Mohamed Hany', 'Prachi Sonje', 'GAVAKSHIT VERMA', 'Shraddha Nemane', 'snikita312', 'Gauri Thukral', 'udit agarwal', 'Francisco Tornay', 'Rubén Aguilera García']
['1st place', 'The Best Women-Led Team']
['android-studio', 'doubango', 'fastai', 'firebase', 'google-cloud', 'google-maps', 'java', 'machine-learning', 'mysql', 'numpy', 'pandas', 'python', 'pytorch', 'sklearn', 'sound-monitoring-and-matching-api', 'spyder', 'webrtc']
22
10,180
https://devpost.com/software/itutor-bntd5h
Inspiration Education involves gaining knowledge and skills that people are expected to have in a society. Like all other fields, technology has also gained importance in education and training. In the 21st century, programming is an essential skill; it teaches one how to think. Learning programming might be easy for some of us. YouTube and programming courses are there, but not everyone learns in the same way. Some of us require individual attention and line-by-line guidance. But there is no platform where we can learn programming one-to-one remotely. The existing solutions are tedious and time-consuming, which takes away the essence of learning programming. What it does Gives a common online platform for live video chat, programming, and real-time connection without using different software like TeamViewer, Skype and a text editor. How I built it HTML :- Hypertext Markup Language, a standardized system for tagging text files to achieve font, colour, graphic, and hyperlink effects on World Wide Web pages. CSS :- CSS describes how HTML elements are to be displayed on screen, paper, or in other media. JAVASCRIPT :- JavaScript is a scripting or programming language that allows you to implement complex things on web pages. FRAMEWORKS AND LIBRARIES BOOTSTRAP :- Bootstrap is the most popular CSS framework for developing responsive and mobile-first websites. jQuery :- The purpose of jQuery is to make it much easier to use JavaScript on your website. NodeJS :- A runtime environment that allows JavaScript to run outside the browser. Express :- A web application framework that provides a robust set of features for web and mobile applications. MONGODB :- A NoSQL database that stores data in the form of JSON. DATABASE (mLab) :- mLab is a fast hosting service for MongoDB. FILES (Heroku) :- Heroku is a platform as a service that enables developers to build, run, and operate applications entirely in the cloud. 
Challenges I ran into I ran into the problem of presenting data in the front end in a beautiful manner. What I learned I learned how to build complete end-to-end applications. Built With bootstrap css express.js heroku html javascript jquery m-lab mongodb node.js Try it out itutorsanjay.herokuapp.com
iTutor
To provide an online platform to learn and teach programming one to one remotely
['Sam J']
[]
['bootstrap', 'css', 'express.js', 'heroku', 'html', 'javascript', 'jquery', 'm-lab', 'mongodb', 'node.js']
23
10,180
https://devpost.com/software/the-impossible-tic-tac-toe-game
Inspiration We are a group of beginners to programming and we've learned the basics of Python from resources on the internet. Although we learned it, we wanted to create something that validates and progresses our learning experience in the real world and serves as a motivation to learn more. What It Does It is a basic Tic Tac Toe game, with a GUI built using the tkinter module. In addition to playing with a friend, it also has an option to play against and try to beat the impossible computer. How We Built It In the beginning our only intention was to learn to create a GUI that can take inputs from the mouse button and call functions accordingly. In other words, initially we had meant it to be a simple Tic Tac Toe game which you could only play with your friend. When we finally reached the point where that goal was accomplished, we showed it to our family and friends, who were supportive, but on the whole wouldn't prefer it over playing on a piece of paper, i.e. it was not engaging enough. That is when we decided to create a computer to play against. Initially the computer would just choose any random box among the empty boxes. Then we coded the computer to stop the human player from winning by choosing the third winning box if two were taken by the human player, and to choose the third box if the computer had already taken a winning two. But still, that was nowhere near interesting enough. So we taught the computer to perform deadlocks that can guarantee a win, but there was always a way around that if the human could perform those deadlocks first. Next up, we taught the computer to counter human deadlocks. Then again we tested it by making it go against very smart people. The computer did lose, but each time it did, we noted the moves (collected data) which ended in the computer losing and taught it how to counter them. This went on a few times until no one we knew of could beat the computer. 
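The win-and-block step described above can be sketched like this. It is an illustrative reconstruction, not our actual game code; here the board is a flat list of nine cells holding "X", "O" or " " for empty:

```python
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def find_critical_move(board, mark):
    """Index that completes a line for `mark`, or None if there is none."""
    for a, b, c in WINS:
        line = [board[a], board[b], board[c]]
        if line.count(mark) == 2 and line.count(" ") == 1:
            return (a, b, c)[line.index(" ")]
    return None

def choose_move(board, ai="O", human="X"):
    """Take the winning square if one exists, otherwise block the human's."""
    move = find_critical_move(board, ai)
    if move is None:
        move = find_critical_move(board, human)
    return move
```

The deadlock (fork) strategies we added later layer on top of this basic rule, which is why each new strategy risked breaking the old ones.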
Challenges We Ran Into Whenever we taught the computer something new, it often broke the old strategies the computer was programmed to use. We had to code in a way that didn't affect the previous algorithm. Accomplishments That We're Proud Of As this was our first real project that involved any programming, it is for us a peek at the possibilities and opportunities that programming can offer. It required hard reasoning and logical thinking, and we are really happy that we didn't give up in between, although there were times when we thought this wouldn't work. Used math we learned in grade 11, which many people said we would never XD. What We Learned Learned how to create a GUI using tkinter. Learned how to take user inputs in the form of buttons, and call functions accordingly in real time. Learned how to object-orient (although we are nowhere near perfect, we have an idea now at the very least). What's Next For The Impossible Tic Tac Toe Game Can be developed further, with a bit more attractive UI; mainly we only focused on the back end for this project. The front end has a lot of improvements to be made. Can add an option to enter players' names and store their scores in a database. Can add an option for the player to choose between X and O; for now the player is X and the computer is always O. Note While trying it out, please make sure you have the 'Pillow' module installed. Built With photoshop python tkinter Try it out github.com
The Impossible Tic Tac Toe Game
A computer that can never be beaten in a Tic Tac Toe game.
['Amal Prakash', 'Andrew Chan', 'Deep Chandra', 'Manish Varrier']
['Best Design', 'Good Project']
['photoshop', 'python', 'tkinter']
24
10,180
https://devpost.com/software/covid19-outbreak-and-npi-prediction
Coronaob.ai - Pandemic Outbreak and mitigation prediction 1.Overview Coronaob.ai is the ultimate tool for predicting epidemic trends. It has been built with the help of artificial intelligence and statistical methods. This epidemic forecasting model helps give a rough estimate of the future scenario and also helps suggest non-pharmaceutical/mitigation measures to control the outbreak with minimum effort. This gives a head start in the preparations that are made to curb the pandemic before it takes people's lives. Note: An NPI is the same as a mitigation measure. 2.What exactly is the problem? During any pandemic, it's difficult to scale up the implementation of mitigation measures, often because of the chaos that is caused during the pandemic. It often becomes an unprecedented situation wherein the authorities lack sound judgment on which step to take next, which makes the situation even worse. It's not always necessary to implement the strongest mitigation measure, as a medium-strength mitigation can get the job done, thus giving more weight to economic stability and other concerns. 3.What can be done to tackle this issue? A strategy that can give a rough picture of the future scenario, describing the number of cases and the area of spread, can give an insight into what could best be done to reduce the effect in an easy and cost-effective manner. Also, having a record of previously taken successful steps can provide a big boost to this strategy. 4.Our Goals a. To give an estimate by forecasting the number of cases, trends in the spread, etc., which will give a good picture of how the scenario would be. b. To suggest/predict the best suitable mitigation measures, according to previously taken successful steps, thus saving resources and not creating chaos. c. 
To make this approach a robust one, so that any agency working on

5. Milestones
Prototype stage: we have completed our first stage of training and testing on the COVID-19 data, achieving over 90% accuracy in predicting the new cases on the immediate next day and over 85% accuracy in predicting the long-term scenario. On the mitigation-prediction part, we achieved an accuracy of 91.8% and brought the Hamming loss down to as low as 8.2%.
Accuracy: our method is among the most accurate at predicting such trends.

6. Specifications
Our submission is a script containing the machine-learning models, which can be paired with a UI as shown in the gallery picture.

7. Technical details
Major tools used:
a. Kalman filter: an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe.
b. Regression analysis: a set of statistical processes for estimating the relationships between a dependent variable (often called the 'outcome variable') and one or more independent variables (often called 'predictors', 'covariates', or 'features').
c. Scikit-learn: a free machine-learning library for Python. It features various classification, regression and clustering algorithms, including support vector machines, random forests, gradient boosting, k-means, and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy.

8. Dataset description
Some details regarding the columns of the master prediction sheet. Each row is an entry/instance of a particular NPI being implemented.
Country: the country to which the entry belongs.
Net migration: the difference between the number of immigrants (people coming into an area) and the number of emigrants (people leaving an area) throughout the year.
Population density: the number of individuals per unit geographic area, for example per square meter, per hectare, or per square kilometer.
Sex ratio: the ratio of males to females in a population.
Population age distribution: also called age composition; in population studies, the proportionate numbers of persons in successive age categories in a given population (0-14 yrs / 60+ yrs %).
Health physicians per 1000 population: the number of medical doctors (physicians), including generalist and specialist medical practitioners, per 1,000 population.
Mobile cellular subscriptions per 100 inhabitants: subscriptions to a public mobile telephone service that provides access to the PSTN using cellular technology.
Active on the day: the number of active COVID-19 cases in that country on the day the NPI was implemented.
The seven-day, twelve-day and thirty-day predictions are for active cases from the date the NPI was implemented, and the implementation date is converted to whether it was a weekday or a weekend to make it usable for training. The last column represents the category to which the implemented NPI belonged.

9. I/O
Input: the epidemic data, such as the number of infected people, demographics, travel history of the infected patients, the dates, etc., up to a certain date.
Output:
1) A prediction of the number of people who will be infected in the next 30 days.
2) The countries that will be affected in the next 30 days.
3) The mitigation/restriction measures to enforce, such as curfew or social distancing, predicted so as to control the outbreak with minimal effort.

10.
Dividing the measures into categories:
Category 1: public-health measures and social distancing.
Category 2: socio-economic measures and movement restrictions.
Category 3: partial/complete lockdown.

To categorize the NPIs we followed a 5-step analysis:
Step 1: we chose 6 different countries that have each implemented at least one of the above-mentioned NPIs.
Step 2: for each, we chose a particular date on which one of the NPIs was implemented.
Step 3: from that chosen date, we calculated the 5-day, 8-day and 12-day growth rates in the number of confirmed cases in that country.
Step 4: according to 1) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4327893/ and 2) https://www.worldometers.info/coronavirus/coronavirus-incubation-period/, we took as a reference that over 50% of the people infected on day 1 show symptoms by day 5, over 30% show symptoms by day 8, and the last 20% start showing symptoms by day 12. Assuming they get a checkup as soon as they show symptoms, we calculated a cumulative growth rate.
Step 5: this cumulative growth rate was not very accurate because the countries' population densities differ, so we normalized the scores obtained in step 4 by population density. That gave us the following results (more information can be found here: link):

[(896.4961042933885, 'CHINA', 'SOCIAL DISTANCING'),
 (720.7571447424511, 'FRANCE', 'PUBLIC HEALTH MEASURES'),
 (578.0345389562175, 'SPAIN', 'SOCIAL AND ECONOMIC MEASURES'),
 (527.7087251438776, 'IRAN', 'MOV RESTRICTION'),
 (484.1021819976962, 'ITALY', 'PARTIAL LOCKDOWN'),
 (207.67676767676767, 'INDIA', 'COMPLETE LOCKDOWN')]
Ex: (cumulative growth rate (normalized), country name, measure taken)

The analysis above lists the NPIs in decreasing order of growth rate and increasing order of strength. It is not very accurate, for various reasons, but it gives a rough estimate of the effectiveness/strength of the NPIs.

11. Working
a.
The inputs regarding the previous days' record of the outbreak are first filtered by the Kalman filter; the modified inputs are then sent to the regression model, which predicts the scenario with better accuracy than a plain regression model.
b. The predictions from the above models are fed into the machine-learning model that predicts the mitigations to use, based on the history given in the literature (e.g., social distancing).
c. We performed 10-fold cross-validation by dividing our dataset into 10 chunks and running the model 10 times. For each run, one chunk is designated for testing and the other 9 are used for training, so that every data point appears in both testing and training.

12. Conclusions
This method can help the authorities develop and predict mitigation measures that control the outbreak effectively, with minimum effort and chaos.

13. What did we learn?
a. This project was challenging in terms of conceptualization and data collection; there was no direct data available. We learned how to take relevant data from different datasets, engineer it, and use it for our purpose.
b. The regular regression algorithms failed to give accurate results, so we had to try something different to increase accuracy. This led us to the idea of using the Kalman filter, and with these updated inputs we achieved better accuracy.
c. Since we could only use regions with more than 1,000 cases (for data quality), the overall dataset became small and deep-learning models failed, which made us switch to classical machine-learning algorithms.
d. We also used clustering algorithms, which gave us a deep understanding of why these work better in some situations.
e. Due to some constraints, it was also exciting for us to use both R and Python in a single notebook, adding to our learning.

14.
The drawbacks of our approach
a. The above approach has several drawbacks; one of them is an incomplete dataset.
b. There are no good differentiating features in the dataset.
c. Our approach cannot decide the effectiveness of, or a go-to plan of action for, deploying NPIs. All the data points are very similar to one another, so it is difficult for the algorithm to learn.

15. What improvements do we want to make further?
a. A set of strong differentiating features in the dataset would make generalization easier.
b. The NPIs could be categorized further for better implementation.
c. The dataset could also be combined with economic parameters to understand the economic feasibility of NPI implementation.
d. It could further be used to predict the decrease in growth rates once an NPI is implemented, to track the real-time effectiveness of NPIs in a particular demographic.

16. References
a. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4327893/
b. https://www.worldometers.info/coronavirus/coronavirus-incubation-period/
c. https://archive.is/JNx03
d. https://archive.is/UA3g14
All the other references are mentioned in the submission notebook at every step.

17. Product Roadmap
The coronaob.ai team has the core functionality of the platform in place. We are currently bringing our front end up to speed with our UX designer's wireframes. Our post-hackathon product roadmap:
a. New security features
b. Admin dashboard
c. Analytical graphs

18. The team
a. Saketh Bachu - Machine learning
b. Gauri Dixit - UI/UX development
c. Shaik Imran - Medical expert/Design

Built With kalmanfilter matplotlib numpy pandas scikit-learn
Try it out github.com
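The "Working" step (a), Kalman filtering followed by regression, can be sketched in miniature. Everything below is an illustrative assumption on our part (a scalar random-walk filter, hand-picked noise constants, a toy case series), not the project's actual model:

```python
# Sketch: smooth noisy daily case counts with a scalar Kalman filter,
# then fit a least-squares line to the smoothed series for a next-day forecast.

def kalman_smooth(series, q=1.0, r=4.0):
    """Scalar Kalman filter: q = process noise, r = measurement noise."""
    x, p = series[0], 1.0          # initial state estimate and variance
    out = [x]
    for z in series[1:]:
        p += q                     # predict step (random-walk state model)
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # update with new measurement z
        p *= (1 - k)
        out.append(x)
    return out

def fit_line(ys):
    """Ordinary least-squares fit of y = a*t + b over t = 0..n-1."""
    n = len(ys)
    ts = list(range(n))
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys))
    den = sum((t - t_mean) ** 2 for t in ts)
    a = num / den
    return a, y_mean - a * t_mean

cases = [100, 130, 125, 160, 210, 205, 260]   # toy daily active-case counts
smoothed = kalman_smooth(cases)
a, b = fit_line(smoothed)
next_day = a * len(cases) + b                  # forecast for the next day
```

The real pipeline uses scikit-learn regressors on the filtered inputs; the point of the sketch is only the order of operations: filter first, regress second.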
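Steps 3-5 of the NPI categorization can likewise be sketched. The exact weighting formula is our assumption: we combine the 5-, 8- and 12-day growth rates using the 50%/30%/20% symptom-onset shares cited above, then divide by population density; the case series and density are toy values:

```python
# Hypothetical sketch of Steps 3-5 of the NPI categorization.

def growth_rate(cases, start, days):
    """Percent growth in confirmed cases over `days` days from `start`."""
    return 100.0 * (cases[start + days] - cases[start]) / cases[start]

def normalized_score(cases, start, density):
    g5 = growth_rate(cases, start, 5)
    g8 = growth_rate(cases, start, 8)
    g12 = growth_rate(cases, start, 12)
    cumulative = 0.5 * g5 + 0.3 * g8 + 0.2 * g12  # symptom-onset weighting
    return cumulative / density                    # Step 5: normalize

# Toy series: confirmed cases for 13 days after an NPI was implemented.
cases = [100, 120, 150, 185, 230, 280, 340, 410, 500, 600, 720, 860, 1030]
score = normalized_score(cases, start=0, density=0.5)
```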
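The 10-fold cross-validation described in "Working" step (c) amounts to the following split logic; the helper below is a minimal, dependency-free stand-in for scikit-learn's KFold:

```python
# Every data point lands in the test chunk exactly once
# and in the training set for the other 9 runs.

def k_fold_indices(n_samples, k=10):
    """Yield (train_idx, test_idx) pairs for k contiguous folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

folds = list(k_fold_indices(50, k=10))
```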
Coronaob.ai - Pandemic Outbreak and mitigation prediction
Coronaob.ai is built with the help of AI and statistical methods. This model helps in forecasting the number of cases and also predicts mitigation measures to control the outbreak.
['saketh bachu', 'Sandeep Kota Sai Pavan']
['Third Place']
['kalmanfilter', 'matplotlib', 'numpy', 'pandas', 'scikit-learn']
25
10,180
https://devpost.com/software/enhance-ny9pou
Inspiration
Have you ever had your image compressed beyond belief by an app? Have you ever found that perfect wallpaper on Google only to discover it's 720p? Have you ever just wanted to be able to "enhance", like in CSI? As a student studying deep learning, I had to create my own solution.

What it does
Enhance is a web application that takes low-resolution images as input and upscales them 4x with deep learning, increasing image resolution and making features more distinguishable.

How I built it
Enhance uses a Generative Adversarial Network trained on two diverse high-resolution image datasets: the DIV2K dataset and the Microsoft COCO dataset. I did the modeling with TensorFlow and Keras and, to keep my computer from catching fire, trained it on Google's Cloud ML Engine. I did preprocessing with basic libraries like NumPy and Matplotlib and used a VGG model for content loss. I then built an easy-to-use, intuitive web application in Flask so anyone can use my model.

Challenges I ran into
Just trying to download and process all the training data was enough to almost brick my computer. Thankfully, Google Cloud Platform was there to save the day. I ended up doing my training in the Cloud ML Engine, and was definitely glad I did.

Built With keras tensorflow
Try it out github.com
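The VGG content loss mentioned above compares feature maps rather than raw pixels, so blurred edges are penalized even when per-pixel error is small. As a dependency-free illustration, the toy gradient extractor below stands in for the pretrained VGG layer the real project uses:

```python
# Sketch of perceptual "content loss": MSE between feature maps, not pixels.
# `toy_phi` is a hypothetical stand-in for a fixed VGG feature extractor.

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def toy_phi(image):
    """Toy feature extractor: horizontal gradients (edge-like features)."""
    return [image[i + 1] - image[i] for i in range(len(image) - 1)]

def content_loss(sr_image, hr_image, phi):
    """Perceptual loss: compare extracted features of the two images."""
    return mse(phi(sr_image), phi(hr_image))

hr = [0, 0, 10, 10, 0, 0]   # "high-res" 1-D signal with a sharp edge
sr = [0, 2, 8, 8, 2, 0]     # upscaled output: edges slightly blurred
pixel_loss = mse(sr, hr)
perceptual = content_loss(sr, hr, toy_phi)
```

In this toy case the perceptual loss exceeds the pixel loss, which is exactly why SRGAN-style training favors it: it pushes the generator toward sharp, plausible detail instead of blurry averages.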
Enhance
Upscale your images on the web with deep learning
[]
[]
['keras', 'tensorflow']
26
10,180
https://devpost.com/software/patient-health-history-record-system-f8iyxt
Screens Maps

- Control of epidemics
- Patient control
- Control of vaccination campaigns
- Expert needs control by location
- More reliable and secure data according to the new personal data protection law
- More objective targeting of public health policy
- Faster reaction to public health emergencies
- Doctors with access to all health history and more accurate patient information, including emergency care
- Patients with an updated and accessible health history when they need medical care
- Public authorities using mathematical models to make projections with greater reliability
- Patients using mathematical models to make personalized projections and predictions
- Access for the vulnerable population to technologies previously available only to people with private health insurance
- Government access to information on people using private health insurance
- People with a private health plan can switch to the public health system without losing data
- More accessible and reliable data than in DATASUS
- Data held by patients
- Data released only to those the patient wishes to give access
- Health history vs. digital health record
- Simplification of the database used today, using data-lake methodology

Built With datalake flutter ionic react
Try it out github.com
Patient Health History Record System
We have developed a project for the public health area that aims to serve the poorest population.
['Jose Alexandro Acha Gomes']
[]
['datalake', 'flutter', 'ionic', 'react']
27
10,180
https://devpost.com/software/myfacemaskapp
Prototype model 1 "Dany" pink opaque Prototype model 1 "Dany" white opaque

AiRFace.it (App): a new customizable, economical and ecological mask tailored to you.

Millions of disposable masks are thrown out every day; besides their significant environmental impact, they also represent a high monetary cost. To meet these needs, the AiRFace.it App team has developed a project that allows the customization of the mask, including transparent masks that adapt to your features, by scanning the face in 3D. This is AiRFace.it App, a mobile application that can be downloaded for free on iOS and Android devices. The scanning process is quick and easy: it happens through the mobile phone's camera. In addition, the biodegradable and hypoallergenic material used to produce the masks allows repeated use, as the masks can be sterilized by washing at high temperatures. The mask is ergonomic and environmentally friendly, because you only need to change the filters. The 3D model can also be printed from the comfort of your home, without leaving it. Being custom-built, the mask reduces the marks that normal masks leave after hours of use. Considering the difficult and delicate situation we are experiencing, AiRFace.it App ensures protection and prevention with zero impact on the environment while meeting individual needs. It is an optimal solution: no need to go out and buy a mask if you already have a 3D printer; otherwise, we print it and deliver it directly to your home! The app is free, as are the various basic 3D models.

Ergonomic: AiRFace.it, with its 3D scanning process, allows you to create ergonomic masks suited to every feature of your face. This custom fit reduces the marks left by normal masks after hours of use.

3D printer: the 3D design allows anyone to create the masks independently.
In fact, a 3D printer and a mobile phone are enough to quickly create personalized templates in the desired quantity.

Ecological: the material of these masks is fully ecological, a very important feature considering the high daily consumption, especially in some sectors. These masks can be reused several times after washing at high temperatures, which sterilizes them.

Transparent: one of the main features of AiRFace.it is transparency. The idea of making these masks from a transparent material was born mainly from awareness of the importance of lip reading for deaf people.

Privacy: for us your privacy comes first, which is why AiRFace.it complies with all GDPR (https://gdpr.eu) regulations and no personal data will be transmitted.

The problem solved by the project: with AiRFace.it App we want to solve the problem of the availability of personal protective equipment for everyone. Thanks to this app, anyone with a 3D printer can print their own mask, following our guidelines for the use of safe and biodegradable materials, provided they are a certified member of our network (to ensure the quality and safety of the mask produced, according to all applicable regulations). This way you have a mask for you and your family that can last through this complicated period we are living.
The solution you bring to the table: AiRFace.it can be installed for free on all iOS and Android devices that meet the compatibility requirements; in a simple, automated way you can create your own 3D mask and send it directly to a certified 3D printer.

The impact of the solution on the crisis: the idea behind this project is to create a safe, cost-effective product that respects nature, for us and for future generations.

The needs to continue the project: this project needs funds to create the best mask with the best materials, which requires a significant investment in research and development, in addition to making and printing masks for those who do not have the opportunity to own a 3D printer.

The value of your post-crisis solutions: we strongly believe in this project because it is a valuable help to anyone who cannot always have a disposable mask, and the mask can be reused even after this crisis (which we hope ends as soon as possible) in any sector that needs personal protective equipment.

The AiR net blockchain: we are creating a network of certified makers to print masks for doctors and nurses for free, to thank them for their valuable help. We are working with other startups to build this network of certified makers using the Ethereum blockchain and smart contracts, in order to verify the entire network transparently. We are also creating a decentralized system based on the EOS blockchain to ensure full transparency in the certification of the materials used to print the masks. Our team currently also collaborates in the creation of protective equipment for doctors and nurses, to thank them for their difficult work.

Built With blockchain c# html5 java javascript kotlin objective-c python swift
Try it out airface.it bitbucket.org
AiRFace.it App
A new customizable and ecological mask tailored for you
['Massimiliano Pizzola', 'Daniela Tabascio']
[]
['blockchain', 'c#', 'html5', 'java', 'javascript', 'kotlin', 'objective-c', 'python', 'swift']
28
10,180
https://devpost.com/software/delhi-metro-covid-19-booking-system
ticket info the list of stations time slots the home page The Delhi Metro map

Inspiration: METRO TICKET BOOKING SYSTEM

THE CURRENT SITUATION
In times of the COVID-19 pandemic, our government is easing restrictions on the movement of people and the operation of various businesses in order to reboot the Indian economy. Doing so will require restarting public transportation, as a large percentage of Indians do not have private vehicles. Even though bus services have restarted in the capital, their capacity has been reduced to one-third, resulting in long waiting queues. It is therefore imperative for metro services to restart in a restricted manner.

THE PROBLEM
The challenges in restarting metro services are:
- Limiting the number of passengers in a given train
- Ensuring that stations do not become crowded
- Ensuring social distancing in the train and on the stations
- Ensuring smooth transitions between different metro lines

THE PROPOSED SOLUTION: AN APP FOR BOOKING METRO TRAIN TICKETS

What it does
Through the app, users can book train tickets at their desired departure time. The departure timings at each station would have a gap of 5 minutes (as opposed to the current gap of 3 minutes); this increased gap would avoid overcrowding at the stations. The halting time of trains at each station would also be increased from 30 seconds to 90 seconds. The seating capacity of each train would be restricted: no more than 20 people would be seated in each coach (keeping social distancing in mind), and our booking system keeps track of this. To validate tickets and check whether passengers are adhering to their selected time and station of departure, a QR code will be placed at the police checkpoints.

How I built it
I built a multi-threaded Python server and hosted it on GCP. Then I designed an app with an interactive UI and implemented it in Flutter.
The entire process of checking seat availability, finding possible routes, and booking tickets is done on the server. The server receives a request from the app in the format y04,b07,0900:
- y04 means station 4 on the yellow line
- b07 means station 7 on the blue line
- 0900 is the selected departure time
After receiving this information, the server queries the NoSQL database hosted on Firebase to check seat availability. From the database the server receives a list corresponding to the train that would leave from the start station at the selected time. Each index of the list represents the number of seats occupied up to the station that index corresponds to; for example, station 4 corresponds to index 3 (the start index is 0). If there is a seat, the system books it.

Challenges I ran into
HURDLES: suppose you want to go from station A to station C, but to do so you need to change trains at station B. The problem is how to ensure that, at the time you arrive at station B, the train you take from B to C has a vacancy, and if there is no vacancy, how to choose an alternative route.

Accomplishments that I'm proud of / What I learned
I learned how to use Firebase and how to host on GCP. I also learned how to build a multi-threaded server.

What's next for Delhi Metro COVID-19 booking system
Next, I would like to add the contact tracing API to the app.

Built With firebase flutter python
Try it out github.com
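The request format and seat-availability check described above can be sketched as follows. The helper names are ours, and for simplicity the sketch books a single leg on one train, ignoring the line-change hurdle discussed under Challenges:

```python
# Sketch of the server-side booking check. A request like "y04,b07,0900"
# means: board at station 4 (yellow line), alight at station 7 (blue line),
# 09:00 departure. Each train record is a list where index i holds the
# number of seats occupied up to station i+1 (station 4 -> index 3).

CAPACITY = 20  # max passengers per coach under distancing rules

def parse_request(req):
    src, dst, time = req.split(",")
    return (src[0], int(src[1:])), (dst[0], int(dst[1:])), time

def has_seat(occupancy, board_station, alight_station):
    """A seat exists only if every hop of the leg is under capacity."""
    leg = occupancy[board_station - 1:alight_station - 1]
    return all(n < CAPACITY for n in leg)

def book(occupancy, board_station, alight_station):
    """Book one seat for the leg, updating the per-hop occupancy counts."""
    if not has_seat(occupancy, board_station, alight_station):
        return False
    for i in range(board_station - 1, alight_station - 1):
        occupancy[i] += 1
    return True

(src_line, src_st), (dst_line, dst_st), dep = parse_request("y04,b07,0900")
train = [3, 5, 8, 19, 19, 12, 7, 4]   # toy occupancy for the 09:00 train
booked = book(train, src_st, dst_st)
```

A second booking for the same near-full leg would be refused, which is the behavior that keeps each coach within the 20-passenger limit.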
Delhi Metro COVID-19 booking system
A Metro Booking System which would place restriction on the number of people using the train and monitor the number of people in the train in real time to avoid overcrowding amid the COVID 19 pandemic
['siddhant sharma']
[]
['firebase', 'flutter', 'python']
29
10,180
https://devpost.com/software/co-help-a-web-portal-for-covid19
Introduction
During this pandemic, almost every country is in a lockdown phase and everyone is at home, so it is very difficult for people to get daily essentials and other things. Co-Help is a web portal where every essential service is offered so that social distancing can be maintained properly.

What it does
Co-Help is a web portal with tons of features. It provides a platform for people to get their essentials, learn about the coronavirus, and stay aware. We made this site so that people do not have to leave their homes to get what they need, which will help reduce the spread of the coronavirus. We have also introduced a hospital section so that people can find the containment zones near their residence and visit COVID hospitals if they feel they have symptoms.

Sections of Co-Help
Co-Help has sections for every group of people. These are discussed below:
Hospital section:
1. It shows the nearest COVID hospital so that people suffering from COVID-19 can be isolated there. It uses the Google Maps API to show the hospitals. We have also listed hospital contact numbers and common state government helpline numbers.
2. It has a map, thanks to Google Maps, that shows the containment zones in a particular city. It helps people stay away from those zones, which will reduce the spread of the coronavirus.
3. Corona test centers are also marked so that people can go there for testing.
Daily Essentials section: for buying essential items, which will be delivered to the doorstep by government authorities so that there is no mass movement of the public in the market. All products have been categorized for smooth delivery.
Information portal: for updates from the WHO.
COVID Help section: for donating funds and volunteering.
In the donation section, people can donate money to the relief fund so that the government can use it to provide food to the poor. In the volunteering section, jobless people can apply for jobs such as sanitization work and earn money from it.
i-Education: for providing free education to students. There are sections for class notes on various subjects, videos so that students can understand each concept properly, and free online courses so that students can learn during this quarantine.
Corona Go app: used to track COVID-19 stats and give realtime updates from the Ministry of Health & Family Welfare (India) and the WHO's Twitter feed and website. It also shows COVID-19 stats for the world as well as for India.
Online Doctor: a section where you upload your queries and the doctors linked to it call you, so they can find out what you are suffering from. This helps people get medical care directly from home.

How we built it
HTML, CSS, Google Maps API, Java (for the Android app)

Team name: PIYSocial
Members: Saswat Samal & Sanket Sanjeeb Pattnaik

Links
Website: https://cohelp.netlify.app/
GitHub: https://github.com/PIYSocial-India/Co-Help
CoronaGo: https://bit.ly/piyappstore

Built With css3 google-maps html5 java javascript
Try it out github.com cohelp.netlify.app
Co-Help | A Web Portal for COVID19
A web portal to help the general public during this pandemic!
['Saswat Samal', 'Sanket Sanjeeb Pattanaik']
[]
['css3', 'google-maps', 'html5', 'java', 'javascript']
30
10,180
https://devpost.com/software/viralcheck-social-media-app
Web app Built With python
ViralCheck
Web app
['Jeremy Nguyen', 'Gideon Grinberg', 'Ritvik Irigireddy', 'Nand Vinchhi']
[]
['python']
31
10,180
https://devpost.com/software/again-vui0w1
Inspiration
A few days before the start of the quarantine in Morocco, we were walking down the street and saw a homeless man trying to find food. Going back home, we wondered what this man could do if a quarantine were imposed on us Moroccans. A few days later, that is exactly what happened: we were quarantined. Thinking about the man we had seen, we started brainstorming solutions we could build, as computer science enthusiasts, to help him and the many others in his situation find shelter, especially during this tough time when they can easily be infected by the virus, and just as easily spread it. After seeing Covidathon, we believed this was our chance to make our solution reach more people and to take the first step toward making an impact.

What it does
Again is a solution that aims to secure shelter for homeless people during the lockdown by matching associations and organizations that work with homeless people to house donors. The solution also creates jobs for people who have lost theirs, as application reviewers (more details on this below).
To secure shelter for homeless people, the application allows users to create accounts as an association, a house owner, or an application reviewer. Each type of user enters useful information when registering (details about the registration information required from each type can be found on the demo site):
As a house owner: anyone who owns one or more houses can donate them via the application by filling in a house donation application. The application asks for information about the house(s) the user would like to donate, including the location, the area, and most importantly a document proving that the user owns the house. The purpose of this proof is to avoid wasting time matching an association with a user who does not really own the house.
This proof document will be processed by an AI system that will either validate it or not. If the document is validated, it becomes available to application reviewers to match with an association; if not, the donor's application is withdrawn. After the donated houses have been matched with one or more associations (if there are many houses that several associations can use), the donor's contact is given to the associations so that they can coordinate to finalize the donation.
As an association: after registering, associations can submit applications asking to be matched with a donor. An approximate number of homeless people who will benefit from the donation must be specified in the application. It is then the job of application reviewers to review the application and decide on a match with a donor.
As an application reviewer: application reviewers are people recruited through the application to review the associations' applications and match them with house donors. To become one, you must apply for the job through the website (applications open when the volume of submissions is too high). Applicants must provide their personal information and, most importantly, proof of having lost their job because of the pandemic. This proof can be of any kind: a screenshot of a dismissal email (the email should be forwarded later to verify that it comes from an employer), a document, etc. This proof, together with the first-come, first-served basis and the description of need in the application, are the factors the admins rely on when assessing applications. Each application reviewer receives associations' applications on a weekly basis. Their job is to assess the associations' needs and match them with house donors in the same locations, distributing the houses optimally with need and impact taken into consideration.
Applications reviewers get paid from donations to the web application. These donations have nothing to do with the house/s donations, they are monetary donations that can be done through the web application to a specific bank account for this purpose. Anyone can donate including people not registered under any type in the application. More on how application reviewers get paid in the section below. Payment Policy Applications reviewers will get paid from donations. Since donations are uncontrollable, our team came up with an adequate solution. Applications reviewers will get a token for each application reviewed and thus an association matched with a donor. The value of a token changes on a weekly basis depending on the donations received. Here is a hypothetical scenario: we have 3 applications' reviewers who have reviewed 10 applications each, this means that each applicant has earned 10 tokens, making 30 tokens in total. The amount of donations received in this week is 300 $, implying that a token is worth 10 $. In this case, each reviewer will receive 100$ for this week. However, this method is not good if the amount of donations for a certain week is very high, let’s suppose that in the same previous scenario, the amount of donations is 30000 $, then a token will be worth 1000$. This also means that an applicant will earn 10000 $ for a single week. This might be not fair for other applicants who will join in the coming weeks, and when the donations will be very much lower. To solve this problem, we decided on having a maximum amount that a token cannot exceed so that if the amount of donations is high, we save it for later weeks. Going back to our scenario, if we set the maximum worth of a token to be 20$, and having 30 tokens to issue, we will spend 600 $ and save 29400$ for upcoming weeks. Important notes: Before associations submit their applications, they have to agree to some terms and conditions. 
An important condition is that associations must engage the beneficiaries in society by having them help: doing a job, volunteering, or helping other homeless people. The goal of the application is not only to find shelter for these people, but to try to reintegrate them into society, especially during these tough times when we all have to unite.
Link to the document about using AI in Again: https://docs.google.com/document/d/1RNNpGf3MIhp-lksVtGzXkH7Tb91Ilw4gRw7AJmu27bA/edit?usp=sharing

How we built it
To build Again, we (the team members) divided the work into three parts:
The front end (Mohamed Moumou): designing each web page, the story of Again, and all the scripts in the web app, plus building the actual front end using the React framework.
The back end (Ouissal Moumou): designing the database and building the actual back end using the Express.js framework, MongoDB (for the database), and APIs.
Deployment (Ouissal Moumou & Mohamed Moumou): we used Heroku to deploy both the back-end and front-end apps.

Accomplishments that we're proud of
The Again team is very proud to be thinking about homeless people when everyone is thinking about the problems of those with homes. This does not mean those problems are not urgent, but there is a large part of society that struggled before and now struggles more because of the COVID-19 outbreak, and that needs urgent help and reintegration. Another accomplishment we are proud of is that our idea provides jobs for people who have lost theirs.

What's next for Again
1. Implementing AI solutions in our app.
2. Adapting the services offered by the app to each country's laws.
3. Making our web app available in more languages (Arabic, French, ...).
Helpful hints about running the application on our demo site: http://againproject.herokuapp.com/ If the page returns an error message from Heroku, just refresh the page and it will work. Here are some login credentials for quick testing of the application: For an association: email: tasnimelfallah@gmail.com password: Tasnim123 For a house donator: email: mohamedjalil@gmail.com password: yay yay For an application reviewer: email: badr@again.com password: Badr123 The information and metrics shown on our app are fictional. Built With heroku javascript mongodb node.js react rest-apis uikits Try it out againproject.herokuapp.com againbackend.herokuapp.com github.com github.com docs.google.com
Again
Again is a solution that aims at securing a shelter for homeless people during the lockdown by matching associations and organizations that deal with homeless people and house donators.
['Mohamed MOUMOU', 'Ouissal Moumou']
['The Wolfram Award']
['heroku', 'javascript', 'mongodb', 'node.js', 'react', 'rest-apis', 'uikits']
32
10,180
https://devpost.com/software/truth-seekers-5ofvy2
Inspiration News articles found online are often from untrusted sources and are unverified. The community is at high risk of falling into the trap of believing fake news articles. Spreading fake news can have severe repercussions such as heavy fines and possibly even jail time. Especially during this period of the pandemic, fake news can lead to mass panic. What it does Our application aims to curb the spread of fake news using the EOSIO blockchain platform, ensuring only reliable articles from trusted sources are promoted with the help of validation experts and by providing a community-driven platform. How we built it The web application was developed primarily using Node.js and Bootstrap with the EOSIO JavaScript API for integration with the EOSIO-based blockchain. The mobile application was developed using the Flutter SDK. Challenges we ran into One of the initial challenges we faced was the installation of the EOSIO CDT and the software platform. Furthermore, we also spent time familiarizing ourselves with the EOSIO blockchain platform and the recommended IDE, EOSIO Studio. But soon after, the platform proved easy to use, making it straightforward to connect and integrate smart contracts with our application. Accomplishments that we're proud of We were successfully able to integrate the EOSIO blockchain platform into our application and thus ensure transparency through the voting transactions and, more importantly, provide a secure platform to store the articles and their respective categories. Additionally, we were able to develop fully functional cross-platform web and mobile applications built using Node.js and Flutter respectively. What we learned We learned extensively about the EOSIO blockchain platform and came to understand its salient features compared to other blockchain vendors. Moreover, we have realized the vast amount of research necessary in this field, especially during this time of the pandemic, in order to enable reliable news from trusted sources. 
What's next for Truth Seekers We aim to partner up with sponsors to grow our reward and badges system. More importantly, we want to find a more optimal strategy to back up our scoring process, as well as provide a global platform for qualified validators and users to combat fake news as a community. Built With adobe-creative-sdk blockchain eosio flutter node.js Try it out github.com
Truth Seekers
Curb the spread of fake news, ensuring only reliable articles from trusted sources are promoted with the help of validation experts and by providing a community-driven platform.
['Rikesh Makwana', 'Aaishwarya Khalane', 'Alister Luiz', 'Bhargav Modha']
['The Wolfram Award']
['adobe-creative-sdk', 'blockchain', 'eosio', 'flutter', 'node.js']
33
10,185
https://devpost.com/software/glucocheck
Inspiration Having a way for users to easily determine which treatment plan is right for them, without the hassle of dealing with multiple procedures, can be heavily desired. What it does We utilize neural networks to determine the optimal treatment plan based on user input given in a survey provided on a website and a mobile application. How I built it We used the Google Colaboratory application and Visual Studio Code to code the neural networks and the backend/frontend of the survey web application, respectively. We also used Xcode and Android Studio to code the applications for the survey and treatment plans. Challenges we ran into Some challenges we ran into include getting the PHP code to compile properly, and we also had trouble setting up the neural networks, but we were able to obtain good accuracy once we found the optimal parameters. What's next for GlucoCheck We plan to make an API using this algorithm and publish and host it on a server to allow people all over the world to access our program. Built With android-studio css html5 machine-learning php python xcode Try it out github.com github.com
GlucoCheck
To Utilize Artificial Intelligence to Provide Diabetic Patients With The Best & Most Advanced Treatments Possible
['divyanshrajeshjain Rajesh Jain', 'Samarth314 Shah', 'Rohan Patra', 'Ishaan Bansal']
['$2000 in DigitalOcean Credits']
['android-studio', 'css', 'html5', 'machine-learning', 'php', 'python', 'xcode']
0
10,185
https://devpost.com/software/rapidcare-tomju4
RapidCare home page Inspiration When I heard the tragic news two years ago of my close family friend passing away due to cancer, I knew I needed to act. Diligently, I spent hours analyzing complex research papers on the various types of cancer, its causes, the treatments, and scientists' numerous attempts to find a cure for this disease, as it has taken away countless lives. Additionally, I wrote comprehensive outlines on my newfound cancer research knowledge in order to someday code the cure for cancer. In the present day, our world is plagued with a new deadly virus that has also taken many lives, COVID-19. The dire need to create an equitable, accessible, and sustainable solution to aid those more susceptible to COVID-19 has never been more prevalent. Cancer patients, in particular, are among those at high risk of serious illness from the infection, due to immune systems weakened by cancer and its treatments. In the realization of these scientific facts, I desire to combat COVID-19 by developing a convenient tool that revolutionizes the way we can quickly diagnose cancer before it's too late. I also now realize that life is so short. Every day, I wish I could go back in time to see my friend again to talk with her about my passions for writing, piano, hiking, and much more. I can't change the past. However, I can change the way we diagnose cancer patients, in an accessible manner that scientific researchers and even patients themselves can easily utilize. What it does RapidCare uses features from the University of California, Irvine's Machine Learning Repository Breast Cancer Wisconsin (Diagnostic) dataset. Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe the characteristics of the cell nuclei present in the tumor image. Some of the features include, but are not limited to, the average radius, texture, symmetry, and more diagnostic features. 
RapidCare takes these features, inputs them into a logistic regression machine-learning algorithm, and outputs a prediction of whether the breast mass image features are malignant (cancerous) or benign (non-cancerous). How I built it For RapidCare, I used Python for the machine-learning model. I utilized a lot of libraries such as sklearn, NumPy, pandas, pickle, and seaborn to aid with data cleaning and complex mathematical computations. I built the model using fundamental machine-learning processes, which include splitting the data into two parts for training and testing, analyzing overfitting (underestimating the true error rate of the model), and then fitting the data into a logistic regression model. I used logistic rather than linear regression because this dataset is categorical, with each diagnosis labeled malignant (M) or benign (B). When given new features to predict a label for, the model outputs either a 0 or a 1: 0 represents a malignant tumor, whereas 1 represents a benign one. Then, I converted the logistic regression model into a byte stream using Python's "pickling" method from the pickle module in order to properly import the model into my Flask web application. Through commands, Flask effectively loaded the model after running "flask run" in the command terminal. I designed the web application using HTML5 and CSS3. Challenges I ran into A significant challenge I ran into was getting the CSS3 to load with my HTML5 template for Flask; it would not load unless it existed within a folder called "static". I also struggled with cleaning the dataset. There are so many little, hardly noticeable mistakes a coder could make. Accomplishments that I'm proud of I'm most proud of my consistency and diligence throughout the entire coding process. I was able to efficiently build a machine-learning model and deploy it using Flask. Prior to this hackathon, I had never used Flask before. 
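The predict-then-pickle step described above can be sketched in miniature. This is an illustrative stand-in, not RapidCare's actual code: the weights, bias, and three-feature vector are made up, and the real model comes from sklearn rather than a hand-rolled sigmoid; only the 0/1 label scheme and the use of the standard-library pickle module follow the text.

```python
import math
import os
import pickle
import tempfile

def sigmoid(z):
    """Logistic function mapping a linear score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, features):
    """Return 0 (malignant) or 1 (benign), mirroring the label scheme above."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if sigmoid(z) >= 0.5 else 0

# Serialize the fitted parameters (plain data pickles safely) so a Flask app
# could load them once at startup instead of retraining.
model = {"weights": [-0.8, -0.5, 0.3], "bias": 2.0}
path = os.path.join(tempfile.gettempdir(), "rapidcare_model.pkl")
with open(path, "wb") as f:
    pickle.dump(model, f)

# What the Flask app would do on startup, then per request:
with open(path, "rb") as f:
    loaded = pickle.load(f)
label = predict(loaded["weights"], loaded["bias"], [1.2, 0.4, 0.9])
```

An sklearn estimator can be pickled the same way, since `pickle.dump` accepts any picklable Python object.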
Through hard work and a lot of Googling, I managed to code a working Flask application that uses my machine-learning model seamlessly. What I learned Ultimately, I learned more about critical thinking. Throughout coding this project, I realized that coding is more than just sporadically typing lines of code. I realized and applied the value of the "D.R.Y." software engineering principle, the "think before you code" methodology, and a more solid foundation in machine-learning basics. In a short amount of time, I acquired newfound knowledge of deploying machine-learning models using Flask and how to analyze outliers in a dataset. What's next for RapidCare In the future, I plan to incorporate all variations of cancer, not just breast cancer. Additionally, RapidCare will have an image classifier system where a user inputs a picture of a cell body to automatically determine whether it's a cancerous tumor or not. Despite RapidCare already being efficient for this case, the image classifier will automate the task a step further. I also want users to be able to become more informed about cancer in a convenient, engaging way. Hence, I aspire to code an interactive cancer research journal within the app that highlights confusing words and redefines how we perceive cancer as an incurable disease. Breast Cancer Research Works Cited https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3678677/ Built With css3 flask html5 machine-learning python sklearn sublime-text Try it out github.com
RapidCare
Transforming and automating the way breast cancer patients are treated in these unprecedented times.
['Abigayle Peterson']
[]
['css3', 'flask', 'html5', 'machine-learning', 'python', 'sklearn', 'sublime-text']
1
10,185
https://devpost.com/software/garden-fresh
Inspiration: Basically, I was inspired by the Zomato app, which performed very well in the food-delivery market; they acquired Uber Eats India. What it does: My app is a delivery app (for vegetables, fruits, etc.). How I built it: I am developing it in the Swift language, with the backend in Firebase (Firestore, Realtime Database, Firebase Auth). Challenges I ran into: Basically, my app is still in development mode; the challenges will come when I launch it in the market. Accomplishments that I'm proud of: getting my app running in the market. What I learned: Firebase database handling, Realm (local database). What's next for Garden Fresh: I am looking forward to selling grocery products on my app. Built With firebase realm swift
Garden Fresh
Basis idea is to sell online Vegetables
['Inderpal Singh']
[]
['firebase', 'realm', 'swift']
2
10,185
https://devpost.com/software/modulus-7i30cv
Inspiration In light of the recent COVID-19 crisis, we’ve seen staggering demand for online courses as students grapple with a reality in which education is now delivered over the internet. But traditional e-learning platforms like Khan Academy struggle to keep up with the pace of demand, while LMS platforms like Canvas, which require teachers to sign up as part of large, wealthy organizations such as school districts, are difficult to use and lock out small independent teachers who just want to continue teaching. And on top of all that, each platform relies solely on one medium of teaching, such as Udemy through videos and Edmodo through text, without regard for user learning preferences. What is Modulus? Modulus is an online education platform, similar in concept to Canvas or Blackboard, both of which are used by schools and universities around the nation. But unlike existing platforms, Modulus directly integrates the VARK learning styles - a psychological framework for teaching - into an incredibly simple-to-use, modular course structure that anyone can use to teach anything. The result is a fairer, more accessible, and more equitable online education for everyone. Modulus Features Modulus includes VARK profiles, which are charts that display the proportions of different learning styles for a course or a user. Across the entire user interface, the colors and learning styles used in the profiles are consistent, which means you can tailor your education to your learning preferences. Fast, responsive, and intuitive, with no bloatware, unlike other LMS solutions that disadvantage those with poor hardware, slow internet connections, and little tech-savviness. Peer-to-peer: our platform lets anyone create, upload, and share courses, with the idea that we can recreate the Montessori model of learning in a digital environment. How is Modulus used? 
Modulus is used to create a digital classroom online, where teachers can post courses, assignments, lectures, and tests to share with students anyplace, anytime. Our goal is to recreate the best parts of modern educational methods, from VARK learning models to Montessori peer-to-peer instruction, in an online environment, so that as a society we can continue to make progress in the field of education, even from home during quarantine. How we built it We used React to develop the front end for the web application, while integrating with the Google Firebase service for backend database operations. For the landing page, we used Bootstrap, and React for the web app educational platform itself. Challenges we ran into This was the first time our team used Firebase Google Cloud services for user authentication and data storage, so it was difficult to integrate them into our web app, which is written in React, a web framework we had learned for our first hackathon only two weeks ago. We thus encountered lots of issues merging these new technologies together and deploying them successfully on Heroku. Accomplishments that we’re proud of Despite having just learned Firebase, and only having two weeks of experience with React and Bootstrap, we managed to build the following: A fully functional web platform, with an intuitive and extremely fast design. Full integration with a cloud-hosted database backend that tracks course enrollment for our individual users. Automated emailing for password recovery. Integrated course creation in the platform. Anti-bot services like reCAPTCHA. What's next for Modulus Our team hosts a tutoring service for middle school and high school students who either want to catch up or get ahead during this difficult time, so we plan on using this platform ourselves to promote education for all. 
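The VARK profile charts mentioned above boil down to computing style proportions over tagged content. Here is a hypothetical sketch (the function, the single-letter tags, and the sample course are ours for illustration, not Modulus's actual code): given course modules each tagged with one VARK style, it returns the fraction of each style that a profile chart would display.

```python
from collections import Counter

def vark_profile(module_styles):
    """Proportion of each VARK style (Visual, Aural, Read/write,
    Kinesthetic) across a list of per-module style tags."""
    counts = Counter(module_styles)
    total = sum(counts.values())
    return {style: counts[style] / total for style in ("V", "A", "R", "K")}

# A made-up 8-module course: 2 visual, 1 aural, 2 read/write, 3 kinesthetic.
course = ["V", "V", "A", "R", "K", "K", "K", "R"]
print(vark_profile(course))
```

The same function works for a user profile by feeding it the style tags of the modules a user has completed.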
Who we are High School Juniors from Seven Lakes High School, in Houston, Texas Daniel Wei - danielwei15#3016 Ryan Ma - GoblinRum#8553 Haoli Yin - Nano#4890 Built With bootstrap cmd css3 express.js firebase google heroku html5 javascript node.js npm react recaptcha research Try it out modulusplatform.site github.com
Modulus
An online education platform that directly integrates VARK learning styles for efficient online learning
['Haoli Yin', 'Daniel Wei', 'Ryan Ma', 'Mohamed Hany']
['Best Educational Impact']
['bootstrap', 'cmd', 'css3', 'express.js', 'firebase', 'google', 'heroku', 'html5', 'javascript', 'node.js', 'npm', 'react', 'recaptcha', 'research']
3
10,185
https://devpost.com/software/sesame-mobile-app-upz9bv
Inspiration After talking to managers at stores in Lenox Mall, one of the largest shopping malls in the Southeast, we learned that retail stores face two primary problems: either there are too many customers for the limited store capacity created by social distancing guidelines, or stores lack customers altogether. Since the start of the pandemic, retail has faced the largest decline on record, with CNN estimating an 8.7% decrease in sales during March alone. Since consumer spending and retail comprise 70% of U.S. economic growth according to the U.S. Bureau of Economic Analysis, we wanted to find a safe way to encourage customers to return to retail stores while preventing lines. What it does Sesame is a mobile app that benefits both consumers and corporations by allowing customers to reserve a timeslot for entry at retail stores in exchange for rewards. Consumers gain guaranteed entrance to stores upon arrival, B2C rewards, and confidence that their health is prioritized. Customers can enjoy features such as sanitation ratings, scheduling calendars, and seed rewards. Corporations can open without door attendants, automatically track crowd density, eliminate long lines, and galvanize new customer interest. Features like automatic people counting and QR code entry will help maintain a safe store capacity without a door attendant. How we built it We designed the app in Figma, which is a digital prototyping tool. This allowed us to make the UI for Sesame so that we can demonstrate how consumers and businesses will use our app. Challenges we ran into One of the challenges we ran into was considering the ability of all stores to use the app to determine the number of people in their stores. We were mainly working on the automatic capacity counter feature for larger department stores or stores in the mall. These larger stores would have security cameras and struggle to place attendants at multiple entrances. 
However, we realized that some retail stores do not have security cameras. In response, we decided to create a manual count feature for stores without preexisting security cameras, allowing attendants to keep track of crowd density and limit reservations if necessary. Accomplishments that we're proud of We’re really proud of our app’s ability to eliminate attendants at the door. The automatic capacity counting feature, using live security camera feeds, can be combined with QR code entry requiring a reservation to enter. Currently, stores have attendants at the door who manually count the number of people entering and serve as bouncers. Large stores have even closed certain exits to limit the number of attendants required; limiting the number of entrances and exits could create dangerous traffic flow by placing people in close proximity to each other. We think that our app is a great way for large department stores to avoid using attendants and utilize more entrances and exits. What we learned Through our research, we learned that there are two different problems for stores: social distancing measures are generating long lines at certain stores, while customers are sparse at others. Overcrowded stores pose a significant risk of COVID-19 transmission, but undercrowded stores risk bankruptcy. Sesame simultaneously solves both ends of the spectrum. We also learned that managers at companies like Guess, L'Occitane, and UGG are interested in offering generous benefits to customers through reward platforms in order to tackle these problems. What's next for Sesame Mobile App Next, we hope to release it on the App Store so that businesses and customers can begin to benefit from Sesame. We will launch a beta version for use at Lenox Mall in Atlanta and, if successful, will expand to other retailers and to major US cities like New York, Los Angeles, and Houston. 
Beyond word of mouth, we plan to expand our consumer base with more advertising and with referral codes that give users benefits for inviting their friends to the app. We will also release a Google Play version of Sesame to reach a larger consumer base around the US. After successfully implementing the retail version of our app, we will create spinoffs of the Sesame app for schools, leisure, fitness, parks, and personal care services. Built With figma Try it out www.figma.com docs.google.com
Sesame Mobile App
As brands go bankrupt, the key to saving retail is the Sesame App. Sesame benefits consumers and corporations through time slot reservations for entry at retail stores in exchange for rewards.
['Rohan Paturu', 'Rebecca Kalik', 'Ilene Lei']
['2nd Place Winner - New Projects Category']
['figma']
4
10,185
https://devpost.com/software/healthplus
Splash Page Login SignUp Calendar/Reminders UI Settings Share App Contact cards Resources Emergency Page Edit Your Emergency Profile Edit Emergency Contacts Careplan Add Medicines Add Activities Add Vitals Inspiration As we know, great ideas and companies have started at home. Similarly, I can trace the inspiration back to my old grandparents, who usually forgot to take their medicines. It is too sad for me to say, but my grandfather recently died because he hadn't taken his blood pressure meds properly and no one could keep a check on his punctuality. Also, when the medical team found him, he could have survived if they had gotten information about him - like his medical history, blood group, and more - on time, which they unfortunately didn't! What it does HealthPlus helps you keep track of meds! This is a must-have pill tracker and reminder app for your health. It helps you take care of your loved ones at risk by reminding them from time to time and preventing overdoses caused by forgetfulness. HealthPlus works well for medication management and for tracking vitamins, supplements, birth control, conditions, and medication; later plans also include tracking symptoms, nutrition, activity, daily vitals, therapies, pregnancy, baby symptoms, notes, etc. Reasons why this is the next startup CREATE A CARE PLAN -- Used as a treatment & pill organizer. Don't just create a dull medication list which you would forget about days later! Add drugs, meds, vitamins, minerals, natural remedies, therapies, fitness & nutrition as part of your care plan. Set dose form and dose color & set medicine reminders quickly!!! ADD YOUR CARE TEAM The most unique feature!!! Save CareContact information for future reference. Add CareContact pill alerts for missed dosages; have someone help you stay on track with your goals, because together we can overcome anything!!! Allow CareContacts to view the care plan. Giving a point of contact for emergency teams. 
Save & Share Health Appointments. Future: chatbot-based GCS SOS button and voice-based actions. EMERGENCY BUTTON An easily accessible emergency button to call the respective authorities right from the app, which also sends an SMS to a CareContact with location coordinates and the basic details filled in on the EMERGENCY FORM. How I built it The UI is built using Flutter, with the Google Location Services API from GCP for coordinates and a Python/Django-based backend API for handling requests. The database is on MongoDB. Challenges I ran into Many APIs don't have documentation for Flutter. Accomplishments that I'm proud of The UI components. What I learned Working with Flutter. What's next for HealthPlus Releasing it on app stores after scaling up! Domain.com dontgivecovid.space Built With dart google-maps java kotlin makefile mongodb objective-c swift Try it out github.com
HealthPlus
Not just some random healthcare app!
['Dipti Modi']
[]
['dart', 'google-maps', 'java', 'kotlin', 'makefile', 'mongodb', 'objective-c', 'swift']
5
10,185
https://devpost.com/software/sentiment-analysis-of-books-impact-on-children-using-dl
Inspiration I wanted to do something for my university from my first day there. The purpose of this project was to help society with the power of AI, and it is actually helping our university publishers detect vulgar books. Learning factors Before starting the project, I thought of making an intelligent storage system inside a data lake, but I could not find any specific solution until now. Books are the most essential utility for human beings to enhance their knowledge. Using vulgar words in books is not new; people have been using them for years, but we were never concerned about their impact on a child. There are a lot of publishers out there who print books containing vulgarism without knowing the side effects of those books on children. The reason for doing this research is to help publishers and the general public by identifying those words using the power of deep learning. How I built it The toughest part of the project was getting the dataset. The data was collected from an online platform, and the data chosen is unsupervised. The data itself came in PDF format; it is an adult story book for children. The project was done using Jupyter Notebook. The steps are shown below: 1. Changing format – The data was taken in PDF format and then converted to text format, because I did not know about PyPDF. 2. Data pre-processing and visualization – In order to clean the text file, the pandas library was chosen to fit the text into a data frame. First, all the text was lower-cased using a Python function and split using a simple Python function. To clear the stop words and do the stemming, the NLTK library was used. Through this, it can show us the specific words that are in the book; this book contains 227 distinct words. To visualize the words used in the book, WordCloud was imported and the matplotlib library was used to plot those words. 
Seeing those words, the decision on the dataset was affirmative. 3. LSTM – The model was trained on 157 samples and validated on 40 samples; total params were 96,337, trainable params were 96,337, and non-trainable params were 0. All three epochs gave the same accuracy, 82.50%. After evaluating the model on the test set, the loss was 67.2% and the accuracy was 62.9%. 4. Naïve Bayes – Each feature was taken separately to determine the proportion of previous measurements belonging to class A that have the same value for this feature only. For this project, 300 subjective documents and 300 objective documents were taken to train and test our model. The accuracy of the model was 80%. After getting the accuracy, VADER (NLTK) was used to test the sentiments. VADER is a parsimonious rule-based model for sentiment analysis of text. For the data that we chose, our model was able to tell that there is 18% negativity, 77.3% neutrality, and 4.7% positivity. Challenges I ran into At the very beginning of the project, I struggled to get a dataset, and I had no idea how NLP works. I had to spend hours and hours on very minor problems. I tried to implement the project using Gensim (Word2Vec) as well as BOW (bag of words), but I kept getting errors. This project helped me gain better knowledge of unsupervised data, the NLTK library, sentiment analysis, and ML algorithms. 
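The pre-processing pipeline from step 2 above (lower-casing, splitting, stop-word removal, stemming, then counting distinct words) can be sketched without NLTK. The tiny stop-word set and the crude suffix-stripping "stemmer" below are stand-ins for NLTK's stopwords corpus and PorterStemmer, purely for illustration:

```python
import re
from collections import Counter

# Tiny stand-in for NLTK's English stopwords corpus.
STOP_WORDS = {"the", "a", "an", "and", "is", "are", "were", "of", "to", "in"}

def naive_stem(word):
    """Crude suffix stripping; NLTK's PorterStemmer does this properly."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text):
    tokens = re.findall(r"[a-z]+", text.lower())         # lower-case + split
    tokens = [t for t in tokens if t not in STOP_WORDS]  # drop stop words
    return [naive_stem(t) for t in tokens]               # stem

# Counting distinct stems is how the "227 distinct words" figure was obtained.
words = preprocess("The book of vulgar books")
print(Counter(words))
```

The resulting counts can be fed straight into WordCloud or a matplotlib bar plot for the visualization step.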
Accomplishments that I'm proud of After the project, my university's publisher approached me to talk about the future of the project so that they can put it into practice in their system, and I have been called by Maybank, a top financial institution, for my internship as a Data Scientist because of my particular expertise in NLP. What I learned Time management, problem-solving skills, critical thinking, datasets, machine learning, deep learning, NLP, Python libraries, neural networks, sentiment analysis, pre-processing unsupervised data. What's next for Sentiment Analysis of Books & impact on children using DL At the very beginning, Word2Vec was implemented to get the sentiments of the data. By using Word2Vec, classification of words is easy to get, but difficulties were faced while getting the accuracy: the model gave an error message saying that the weights of the words were initially sorted whereas the weights of the data were not. The Gensim library was used in this case. Bag of words (BOW) was also used to separate specific words and get the accuracy of the model. The future plan for this project is to get good accuracy using different ML algorithms. Built With deep-learning lstm machine-learning naive-bayes natural-language-processing python python-package-index sentiment-analysis Try it out github.com
Sentiment Analysis of Books & impact on children using DL
An unsupervised data where a sentimental analysis is done for finding the vulgar words from a text-based file. This research is to help by identifying vulgarism using the power of DL.
['Atif Chowdhury']
[]
['deep-learning', 'lstm', 'machine-learning', 'naive-bayes', 'natural-language-processing', 'python', 'python-package-index', 'sentiment-analysis']
6
10,185
https://devpost.com/software/resumatch-co6bua
Our video forgot to include this, but clicking on the Job Title shows the LinkedIn page. This is the architecture of our project. Another logo design. They're all so beautiful! And another one! (Note: Please look at the first image we've uploaded alongside the video for a feature we missed in the demo.) Inspiration Although the new cases of COVID-19 are terrible, our team noticed that the swathes of unemployed persons in the United States affected by quarantine parallel the Great Depression of the 1930s, and the damage might take years to fix. The people affected by job loss usually aren't white-collar workers: fast food employees, cashiers, and mall workers are being put out of their jobs by the millions. However, the one shining beacon is that these workers are highly adaptable and can fit many job descriptions! We realized that the best way to find new, compatible jobs for these workers is by analyzing the soft skills in their resumes that they've gained through experience. What it does Our application allows users to simply drag-n-drop their .pdf resume onto our site. From there, our NLP model will tag its soft skills, search for jobs that use those skills in a massive dataset of jobs and Google Jobs, and then recommend those jobs to those employees. How we built it The following summarizes the steps taken to provide smart job recommendations to applicants based on their uploaded resume: When the resume is uploaded onto our website as a PDF, we extract relevant information from the resume, such as the applicant’s skills and experience, and parse it as text. We trained a Bidirectional Long Short-Term Memory network (BiLSTM) from scratch to categorize job descriptions into job titles like “business analyst” and “accountant”. With this model, we then predict the top 5 job titles that the skills and experiences listed in the resume are likely to fall under. 
This helps reduce the number of requests we have to make to the LinkedIn API, while still broadening the options of the applicant and not restricting them to a single job title. We query our MongoDB to see if we have stored the job listings for each of the 5 job titles in our database. If not, we make a request to the LinkedIn API to get job postings that are relevant to the top 5 job titles found. With the job postings, we use a state-of-the-art Natural Language Processing model, the Universal Sentence Encoder, to encode the job descriptions into high dimensional vectors and capture the nuances behind each word in the description. We also encode the resume information into high dimensional vectors and use cosine similarity to measure the similarity of the applicant’s resume with the job description. We return the top 10 most similar job postings that match the applicant’s skill sets. Impact of our Project We hope our project can help ease people back into jobs following the COVID-19 pandemic by providing them with more avenues to look for jobs. We hope this will simplify the job seeking process for them and open them up to more opportunities, both in terms of finding new suitable job titles and also finding more relevant job postings that are suited to their individual experiences and soft skills through the power of machine learning. Challenges we ran into Finding a suitable dataset -- datasets containing job postings and resumes are difficult to find as they are not commonly used in machine learning. Furthermore, the dataset we found was noisy so we had to do data preprocessing to clean it. Accomplishments that we're proud of What we lacked on the frontend, we made up heavily on the backend with not one but two machine learning models! We were surprised to see that our BiLSTM model worked really well despite the limited training data, achieving over 90% accuracy on the test set. 
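The cosine-similarity ranking step described above can be sketched with plain Python. In the real app the vectors come from the Universal Sentence Encoder; here they are tiny made-up embeddings, and the function names are illustrative:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_matches(resume_vec, job_postings, k=10):
    """Rank (title, vector) job postings by similarity to the resume vector
    and return the k most similar titles, mirroring the top-10 step above."""
    ranked = sorted(job_postings,
                    key=lambda job: cosine_similarity(resume_vec, job[1]),
                    reverse=True)
    return [title for title, _ in ranked[:k]]

resume = [0.9, 0.1, 0.0]
jobs = [("accountant", [0.0, 1.0, 0.2]),
        ("business analyst", [0.8, 0.2, 0.1]),
        ("cashier", [0.1, 0.1, 1.0])]
print(top_matches(resume, jobs, k=2))  # most similar job titles first
```

Cosine similarity is a natural fit here because it compares the direction of the embeddings rather than their magnitude, so a short resume and a long job description can still score as similar.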
What we learned Our team had to do a lot of research on the structure of .pdf files in order to extract relevant information from them. What's next for ResuMatch Our website is live at https://resumatch.online/ ! We plan to collect more data to improve the classification of the job titles and increase the range of job titles available. Built With flask machine-learning mongodb natural-language-processing react tensorflow Try it out resumatch.vercel.app github.com
ResuMatch
Helping the unemployed find work during the COVID19 Pandemic using NLP! A simple resume drop is all you need.
['Borna Sadeghi', 'Haohui Liu', 'Ansh Gupta', 'Eric Andrechek']
['Best use of MongoDB Atlas', 'Best COVID-19 Hack', '1st Place: Charity Donation']
['flask', 'machine-learning', 'mongodb', 'natural-language-processing', 'react', 'tensorflow']
7
10,185
https://devpost.com/software/tcdr
Inspiration Scientific literature too complicated to understand? Flooded with too many news articles? Use TC,DR! to simplify the information overload about COVID-19! Theme Our project's theme is: Collecting, visualizing, and sharing information. What it does Our web app has three components: News Feed - Displays all relevant news articles relating to COVID-19, along with a credibility score calculated by a machine learning model trained from scratch. News Search Bar - Confused about whether to trust an article? Paste the article URL to find the credibility of the article! Scientific Literature Search and Summary - Unsure about the current progress in the scientific literature? Uncomfortable with reading complicated research papers? Enter your query to receive links to relevant papers and a summary of the findings! How we built it The web app is built using two machine learning models: firstly an xgboost model that has been trained on a dataset we collected for this hackathon to predict the credibility of news articles, and secondly the BART summarizer, a state-of-the-art seq2seq natural language processing model finetuned on scientific research papers that distils information from the scientific research papers into simple summaries that are easily understandable by audiences from all backgrounds. For the news feed, we call the Google News API to fetch news articles related to COVID. Then, we call the machine learning model to predict the credibility score of the article. For the news search bar, we use the newspaper library to scrape the article from the link and obtain the text of the article. Then, we call on the machine learning model to predict the credibility score of the article. For the scientific literature search and summary, we call the PubMed API to return scientific papers relating to the COVID query.
Then, we use the BART summarizer to summarize the text of the scientific paper and display it along with the title and link of the paper. Impact of our Project We hope our project can help clear up the large amount of uncertainty and unease surrounding the COVID-19 pandemic by keeping users informed about the latest developments through the news feed, while also helping them make more informed decisions as to whether to trust an article or not. Furthermore, the scientific literature search and summary aims to empower people who do not have an extensive background in academia to easily understand what is going on in the current research community. This will help prepare us for the future, to fight against misinformation and empower people to take charge of their situation. Challenges we ran into We had issues connecting a Flask backend to a React frontend due to the very little documentation available. This took up most of our time. Since we are a two-person team, it was difficult to accomplish so many things: coding the frontend, the backend, training the ML models, and finally connecting the Flask backend to the React frontend. Accomplishments that we are proud of We are proud of giving something back to the tech community that helps us with everything from the smallest bugs to the biggest technology we want to learn. We are also proud to contribute to the fight against the COVID-19 pandemic. What we learned We learnt to perform many tasks in a short period of time. Being in different time zones made it difficult for us to collaborate, but we managed really well. What's next for TC,DR! - Too Complicated Didn't Read We hope to deploy the website online so it is accessible for everyone to use. How to go about our project There are two GitHub repositories. One contains the frontend, which runs on port 3000 when started.
The other contains the backend code and the Flask server, which runs on port 5000. Built With bert flask machine-learning natural-language-processing react xgboost Try it out github.com github.com
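The description doesn't spell out which features the xgboost credibility model uses, so the sketch below only illustrates the general shape of such a pipeline with hypothetical surface features (exclamation density, all-caps ratio, average word length); the real TC,DR! feature set may differ entirely.

```python
def extract_features(text):
    # Hypothetical surface features for a credibility classifier;
    # the actual features used by the TC,DR! model are not documented here.
    words = text.split()
    n = max(len(words), 1)
    exclaim_ratio = text.count("!") / n
    caps_ratio = sum(w.isupper() and len(w) > 1 for w in words) / n
    avg_word_len = sum(len(w) for w in words) / n
    return [exclaim_ratio, caps_ratio, avg_word_len]

sensational = "SHOCKING cure FOUND!!! Doctors HATE this!"
measured = "Researchers report preliminary results from a controlled trial."
print(extract_features(sensational))
print(extract_features(measured))
```

A trained classifier such as xgboost would then map feature vectors like these to a credibility score shown next to each article in the feed.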
TCDR - Too Complicated Didn't Read
Scientific literature too complicated to understand? Flooded with too many news articles? Use TCDR to simplify the information overload about COVID!
['Aarush Bhat', 'Haohui Liu']
[]
['bert', 'flask', 'machine-learning', 'natural-language-processing', 'react', 'xgboost']
8
10,185
https://devpost.com/software/telemedml
(Screenshots: TeleMedML logo, system architecture, deep neural network diagram, TensorFlow training output, the iOS app flow, and the website transaction view.) Inspiration With the recent COVID-19 pandemic, the healthcare industry has been subject to unprecedented amounts of strain. Hospitals worldwide have been filled beyond capacity, and numerous medical professionals are being worked to exhaustion. As a result, any innovation that has the potential to save time, reduce exposure to the virus, and secure personal medical data in the process has the potential to save countless lives. What it does TeleMedML provides a basic disease diagnosis for a user given a list of their symptoms. The user first selects which symptoms they are experiencing from a predefined menu. This information is then passed into a deep neural network, which analyzes the given data using a trained model, and returns the diagnosed disease as well as a confidence score. The user can also enter a short list of phrases describing their symptoms, which are sent to the server and analyzed using natural language processing, returning a diagnosis. Each data transaction is added to a series of pending transactions, which are subsequently verified using a proof-of-work algorithm and annexed to a block in the blockchain. How we built it App The iOS app begins with a secure login and signup page for users. Patients are able to view the entire history of previous transactions encrypted with SHA-256 hashes and secured on a decentralized blockchain network. A user is also able to fill out a symptoms form and write additional symptoms, which is encrypted and sent to the server for analysis.
If the returned values indicate a potentially significant diagnosis, a doctor and patient are connected for real time communication, ensuring that a doctor’s valuable time is spent on those who need them the most. Server The flask-based server handles secured requests from clients and encodes all data points into blocks for storage in the blockchain network. Every time a client-side transaction occurs, the data is given a unique hash, encrypted, and linked to a previous block, chaining the data set together. A recursive algorithm was developed to work backwards through the blocks and decrypt the transaction data to securely access the full data set. Alongside the server, a novel TensorFlow-based deep neural network machine learning model was also built and trained on a relevant data set. The data is reshaped on a 0 to 1 scale, regularized, and sent through 4 layers of decreasing node counts with relu and sigmoid activation functions to predict disease diagnoses based on inputted symptoms. This prediction was combined with Amazon Comprehend Natural Language Processing algorithms to analyze the connotation of written symptoms to quantify the severity of the diagnosis. Website A website was built as a healthcare portal to view all transactions made by clients. After a secure login, doctors are able to view a table containing full transaction data accessed and decrypted from the server using the recursive data access function. Challenges we ran into One of the issues we encountered early on was establishing the infrastructure for the blockchain. Most of our group had relatively little experience with blockchain prior to the project, so figuring out how to implement each block, compute its hash, and connect it to the overall blockchain with a proof-of-work validation algorithm was challenging. Another difficulty we ran into was with sending requests to the server. 
We received internal server errors since the server we were hosting on was outdated and insufficiently large for the TensorFlow modules that our program incorporated, so we updated the server’s Python environment and upgraded its storage capacity. Accomplishments that we’re proud of We are extremely proud of being able to successfully configure and train a deep neural network with an expansive dataset. This allows us to more accurately tailor the user’s diagnosis to his or her individual symptoms, creating a more personalized experience. We are also proud of managing to create a decentralized blockchain network using Python and encoding each transaction through a cryptographic hash function based on the current data and previous hash. The chaining functionality and the proof-of-work algorithm were also difficult to implement, so it was rewarding to be able to do so. What we learned Given the current circumstances with the pandemic forcing us to work remotely, we had to learn how to communicate effectively and collaborate virtually in order to produce the desired end result. We also learned about the workings of blockchain technology, including the hashing and validation algorithm involved to ensure the security of the chain. What's next for TeleMedML There are several future options that we can pursue with TeleMedML. Among these is the creation of a convolutional neural network that can analyze user images. The addition of this feature would be useful in diagnosing diseases with physical symptoms, broadening the abilities of our detection software. We could also integrate the project with certain hardware components, such as sensors that would measure the user’s heart rate, respiratory rate, and other vital signs to improve diagnosis accuracy. Built With ai-applied-sentiment-analysis amazon-ec2 amazon-web-services blockchain flask keras machine-learning python sashido swift tensorflow Try it out github.com telemedml.macrotechsolutions.us
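The hash-chained blocks and proof-of-work validation described above can be sketched in a few lines of Python. This is a stripped-down illustration (the field names are made up for the example, and the real system additionally encrypts the transaction data before chaining):

```python
import hashlib
import json

class Block:
    def __init__(self, index, transactions, previous_hash, nonce=0):
        self.index = index
        self.transactions = transactions
        self.previous_hash = previous_hash
        self.nonce = nonce

    def compute_hash(self):
        # SHA-256 over the block's full contents, serialized deterministically.
        payload = json.dumps(self.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def proof_of_work(block, difficulty=3):
    # Increment the nonce until the hash starts with `difficulty` zeros.
    block.nonce = 0
    h = block.compute_hash()
    while not h.startswith("0" * difficulty):
        block.nonce += 1
        h = block.compute_hash()
    return h

genesis = Block(0, [], "0")
genesis_hash = proof_of_work(genesis)

# Each new block links back to the previous block's validated hash.
block1 = Block(1, [{"patient": "p1", "symptoms": ["cough", "fever"]}], genesis_hash)
block1_hash = proof_of_work(block1)
print(block1_hash.startswith("000"))  # → True
```

Tampering with any stored transaction changes that block's hash and breaks the link to every later block, which is what makes the chain tamper-evident.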
TeleMedML
A mobile application that employs machine learning and AI-applied sentiment analysis to assist in the telemedical diagnosis of respiratory illnesses through a blockchain-secured backend.
['Jack Hao', 'Elias Wambugu', 'Sai Vedagiri', 'Arya Tschand']
[]
['ai-applied-sentiment-analysis', 'amazon-ec2', 'amazon-web-services', 'blockchain', 'flask', 'keras', 'machine-learning', 'python', 'sashido', 'swift', 'tensorflow']
9
10,185
https://devpost.com/software/study-buddy-0ui9bf
Inspiration Due to COVID-19, students have had to transition to online learning. In fact, 1.9 BILLION children are out of the classroom because of this. This comes with many challenges because students are now more than ever distracted with technology. It is difficult for students (especially children) to not be distracted and to self-motivate. Studies show that students are 25% more likely to multitask in non-academic work when enrolled in online courses compared to their peers in face-to-face courses. Therefore, we wanted to address this issue in order to help students optimize their studying and to alleviate the stress of parents. What it does This program, Study Buddy, allows students to stay focused while studying with a user-friendly icon. It will keep track of what a student uses during their "homework" period. It will remind the student to not go on websites or use programs that are non-educational or that they/their parent "forbid" during this study period. It also provides study techniques, resources to children, and reminds students to take breaks as well. In other words, it acts as the "parent" on the computer but a friendlier version. Students will be able to choose between different icons such as their teacher, an anime character, animals, monsters, etc. Think of a Pomodoro clock on steroids! How we built it This was built using script commands & Python on a Windows 10 computer. This program will later be an app/exe file that runs in the background and is built using Unity, which is a game development/animation software. In addition, we hope to also implement AI to be more real-life and interactive with the user to mimic more of a "buddy" like feel. Challenges we ran into Chrome extensions are written in JavaScript, and many of us are better equipped in Python and know less about JavaScript. Therefore, we had to look into Python-to-JavaScript converters like bython.
However, in the end, we ended up using solely Python and scripting languages, disregarding JavaScript for now. We were able to find a great animation that mimics the kind of animation we want to do in the future. There were a few errors with running the script language since we had wanted to use a few commands in Python code. We eventually realized that we had to use a system call to run these commands. In addition, there are not many examples out there that show an appropriate version of this kind of desktop animation. The last well-known one was Microsoft's "Clippy." Because there is not much guidance online regarding desktop character animation, we had to simplify the project. A mentor had recommended figuring out how to access the background processes of the computer to see what programs are running and then notify the user to close a program while it is running. In addition, there were no mentors who had experience in creating these desktop animation characters, so a lot of the time was spent figuring out how to even start the project. Accomplishments that we're proud of a) Utilizing the knowledge we gained from the workshops to create our application b) Being able to work together and produce something despite our major time zone differences and limited time c) Being able to bring all of our unique educational backgrounds to produce a product d) Learning that this is a novel idea that others have not created before What we learned We learned how to call system commands from a Python script, how to work together with different skill sets, and gained more insight on how transitioning to online learning has affected students and parents. In addition, we learned that Unity is a tool that could be used to create animations, similar to the Everly-chan animation you see in the video.
What's next for Study Buddy In the future, we are hoping that the icons utilized by the student can regularly encourage the student with positive affirmations, can set reminders, and can include a to-do list. Additionally, we are looking to make the program run so that the icon is voice activated and can be used for homework help with different APIs. Built With bash-script python script-commands Try it out github.com
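The mentor's suggestion above, watching background processes and notifying the user when a forbidden program is running, reduces to a simple matching step once a process snapshot is available. The process names and blocklist below are purely illustrative; on a real system the snapshot would come from something like `tasklist` on Windows via a system call:

```python
# Hypothetical blocklist; in the app this would come from the
# student's or parent's "forbidden" list for the study period.
FORBIDDEN = {"game.exe", "discord.exe", "tiktok.exe"}

def find_distractions(running_processes, forbidden=FORBIDDEN):
    # Return the forbidden programs currently running, so the buddy
    # icon can remind the student to close them.
    running = {p.lower() for p in running_processes}
    return sorted(running & {f.lower() for f in forbidden})

snapshot = ["chrome.exe", "Discord.exe", "python.exe"]
print(find_distractions(snapshot))  # → ['discord.exe']
```

A surrounding loop would poll such a snapshot every few seconds during the homework period and have the buddy icon display a reminder whenever the returned list is non-empty.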
Study Buddy
Start studying more effectively!
['Layan Ibrahim', 'Mualla Argin', 'Victoria Nguyen']
['Education Track Runner-up']
['bash-script', 'python', 'script-commands']
10
10,185
https://devpost.com/software/algedi
Inspiration According to the CDC, hand-washing is one of the most effective methods of preventing the spread of coronavirus. However, restrooms are not always easy to find, especially in public areas. Thus, I decided to build a website that would allow anyone with an internet connection to find the closest restrooms in a city. What it does Algedi displays a map with pointers indicating nearby restrooms. Restrooms can be found either by searching near the user as determined by geolocation, or by inputting a city name. The city name can be inputted through typing or by voice recognition. Hovering over a restroom pin gives more information like how to access the restroom and a link to the address on Google Maps. How I built it I used React for the front-end, with a wrapper for the Google Maps API. Restroom info was received from the refugeerestrooms.org API through Express. Voice recognition was achieved through the Web Speech API from MDN, which uses ML models to interpret speech. Challenges I ran into I had much difficulty dropping map points based on data received from the restroom API. Thus, I found a wrapper class for React that allowed me to add points on a map by updating state. In addition, I also struggled with displaying information about restrooms after they had been received. The fix was to store the corresponding information of each point in the state and change the displayed information upon hovering over a point. The voice recognition also required the testing of several different APIs before I settled on the MDN API, which combined the power of ML with an easy implementation. Accomplishments that I'm proud of I'm proud of being able to tie together React and Express in an app that runs smoothly and is able to interact with 3rd party APIs. I hope this tool is able to assist others. What I learned I learned that React has powerful features that allow for well crafted and dynamic websites.
I also learned about how to use Express. What's next for Algedi Including directions from your location to nearby restrooms. Adding user feedback to restrooms. Built With express.js react refugeerestrooms.org Try it out algedi-2020.herokuapp.com
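The "search near the user" feature presumably orders restrooms by distance from the user's geolocation; a common way to compute that distance is the haversine formula. The sketch below is an illustration with made-up pins, not code from Algedi (which does this on the client in React):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometers.
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest(user, restrooms, k=3):
    # Sort restroom records by distance from the user's location.
    return sorted(restrooms,
                  key=lambda rr: haversine_km(user[0], user[1], rr["lat"], rr["lon"]))[:k]

user_loc = (40.7484, -73.9857)  # illustrative coordinates
pins = [{"name": "A", "lat": 40.7527, "lon": -73.9772},
        {"name": "B", "lat": 40.7061, "lon": -74.0087}]
print([p["name"] for p in nearest(user_loc, pins)])  # → ['A', 'B']
```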
Algedi
An intuitive and accessible tool for finding publicly available restrooms.
[]
[]
['express.js', 'react', 'refugeerestrooms.org']
11
10,185
https://devpost.com/software/covid-hero
Inspiration Since the outbreak of the fast-growing COVID-19 pandemic, it has become important for each of us to keep a minimum distance from other people, what we call social distancing. It has also become important to know the precautions, and people face many doubts and queries related to COVID. To keep us aware and to answer COVID-related queries, we developed a chatbot called COVID-hero that will guide you through this global disaster. How we built it This app lets you know the coronavirus case counts in your country. It's more than statistics: it also helps the user learn the do's and don'ts of this pandemic, as well as the symptoms of COVID-19. It lets people who cannot read a notice (for example, blind or illiterate users) scan a QR code and have the notice read aloud to them, and it reminds users (for example, those with Alzheimer's) when to take their medication. There is also an option for self-screening: you can use it to test whether you may have coronavirus or not. Challenges we ran into This project deals with APIs and Dialogflow. The API provides the exact data for each country. Most members were new to working with APIs: they knew what an API was but had never used one in a practical project, which cost us time. Most importantly, we were in different time zones, so coordinating with each other was difficult. This project needed a lot of attention. Accomplishments that we're proud of In the end, we planned our whole work and decided which team member had to do what. Despite the time zones, we established a common time when we all met and discussed our work. We successfully created the chatbot with teamwork and consistency, even though most of us had never experienced a hackathon before. Everyone showed their teamwork, and in the end we created our project at our best. What we learned With teamwork we accomplished our project. We learned team collaboration and remote working. We all learned new technology, like the trending chatbot.
We learned HTML/CSS and JavaScript. We dealt with APIs and Dialogflow, and we learned how the frameworks work. Most importantly, we learned the consistency and teamwork that made this project successful. Our connections grew through this hackathon. What's next for COVID-hero We are planning to introduce this website on a large scale. We will be adding more features like doctor consultancy, showing data through graphs, and informing the nearest hospital via SMS. We will work on the website and make it more useful by adding a sign-in feature. We will also create a chatbot for hospitals. With the data collected, we will make the chatbot more useful and intelligent so it can anticipate users' illnesses. Built With css dialogflow html javascript node.js Try it out covid-hero.herokuapp.com github.com
covid-hero
BE AWARE STAY SAFE WITH YOUR TRUE COVID-FRIEND --Covid-hero
['SOULEYMANE TOURE', 'Rahul Sinha', 'Subhayu Kumar Bala', 'apmit2704 Mittal']
[]
['css', 'dialogflow', 'html', 'javascript', 'node.js']
12
10,185
https://devpost.com/software/litfit
(Screenshots: login screen, achievements, calorie calculator, tips and health resources, steps/main page, and the LitFit M5StickC wearable with heart rate and blood oxygen sensor.) Inspiration Obesity is one of the deadliest killers in society today. About 1.9 billion people over the age of 18 are overweight (a quarter of the entire planet), and 650 million of them are obese. Obesity alone causes 2.8 million deaths every year. It is safe to say that obesity is one of the most common killers of today, and is heavily overlooked. We sought to facilitate weight loss and encourage healthy living by creating a product that encourages you to be mindful of your daily activity. During the lockdown, people are more sedentary than ever. LitFit aims to eliminate a sedentary lifestyle and encourages users to adopt a healthier one. What it does Our app has multiple features that are all geared toward adopting a healthier lifestyle, whether through increased exercise or better eating habits. Our app tracks your movements and exercise using the Step tracker, which tracks your daily, weekly, and total steps. Total steps are then translated into Exercise points, which earn you achievements, giving the user a sense of accomplishment and motivating them to live even healthier and be more active. The Calorie Calculator utilizes a novel machine learning algorithm to detect the food calories as well as the type of food given an image uploaded by the user. The Health tab tracks your pulse and oxygen saturation, and gives you tips on how to live a more active lifestyle: the three golden rules of LitFit. We also provide useful resources on quick workouts that users can quickly jump into. All of these features contribute to an amazing application that promotes healthy living. How we built it We built the front end entirely in Flutter, a multiplatform mobile application SDK. The hardware base for this project is an M5StickC microcontroller.
The M5StickC has an ESP32 as its base with an in-built 6-axis IMU sensor. The IMU takes the accelerometer and gyroscope readings, which are then used to calculate the number of steps taken by a user. External hardware plugins were also used to gather surrounding data. These sensors provide us with ambient temperature (outside temperature), ambient humidity, ambient air pressure, and heart rate (pulse). All the readings from the wearable device are then relayed to an open-source mobile app, giving the user real-time data with a user-friendly interface. The backend servers are implemented as APIs running on a virtual server and using MongoDB as the primary database. Challenges we ran into One challenge we ran into was uploading the image onto a server. James, the frontend designer, had trouble figuring out how to stream the data back from the output of the backend into the UI asynchronously; however, he figured it out after a few hours of trial and error. The M5StickC is a 2019 product, so there is little to no online support for troubleshooting. Working with such new hardware and navigating through its library to get useful functionality out of it was a huge challenge. Accomplishments that we're proud of We are proud to have actually accomplished what we set out to do in the beginning. Our initial goals were to build the ML algorithm for calorie detection and track steps, heart rate, and blood oxygen. However, we went above and beyond our initial goals through hard work and implemented further capabilities, such as the achievements page, login, additional resources, inspirational quotes, and more. Linking up the hardware with the software felt amazing because it seemed like everything was clicking into place. What we learned I learned a lot about how servers communicate with clients. I learned how to use HTTP GET and POST requests, as well as how to send images to a server in a multipart request. I also learned how to create a better and simpler UI for a better UX.
- James What's next for Lit Fit We hope to develop LitFit further to be more lightweight, as well as polish the Front-end and add more features. Some of the features are not fully functional, such as Login and achievements, but those features can be implemented with more time. We believe LitFit can actually be a revolutionary weight-loss miracle product and hope to pursue that goal of promoting a healthy lifestyle. Built With arduino flutter mongodb python Try it out github.com
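Step counting from the IMU readings described above is typically done by thresholding the acceleration magnitude. The sketch below is a deliberate simplification of whatever filtering runs on the M5StickC, using a hypothetical threshold and synthetic data:

```python
import math

def count_steps(samples, threshold=1.2):
    # samples: (ax, ay, az) accelerometer readings in g.
    # Count upward crossings of the magnitude threshold; each
    # crossing is treated as one step (a deliberate simplification).
    steps = 0
    above = False
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and not above:
            steps += 1
            above = True
        elif mag <= threshold:
            above = False
    return steps

# Synthetic trace: resting near 1 g with three spikes.
trace = [(0, 0, 1.0), (0, 0, 1.5), (0, 0, 1.0),
         (0, 0, 1.6), (0, 0, 0.9), (0, 0, 1.4), (0, 0, 1.0)]
print(count_steps(trace))  # → 3
```

A production counter would also low-pass filter the signal and enforce a minimum interval between steps to reject vibration noise.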
LitFit
Revolutionizing Weight Loss Tactics
['James Han', 'Muntaser Syed', 'Dayna AC', 'Prashant Bhandari']
['Best Health Hack']
['arduino', 'flutter', 'mongodb', 'python']
13
10,185
https://devpost.com/software/auto-mask
Inspiration When mask-wearing became required due to COVID-19, I empathized with the doctors and nurses who have always had to wear itchy, hard-to-breathe-through face coverings. Realizing that face coverings will always be uncomfortable no matter the material or design, I thought to make masks easy to take on and off so catching my breath in a grocery store would be effortless. What it does Auto Mask features an eye shield to protect from infected saliva, touchless control to minimize bacteria transfer from hands, and even a sneeze detector! An electrode on the abdomen activates the mask just in time to catch a cough or sneeze. How I built it I designed the 3D printed headpiece and combined the Arduino microcontroller with an ultrasonic sensor, muscle sensor, and a pair of servo motors. Built With 3dprinting arduino c++ cad Try it out www.thingiverse.com
Auto Mask
Normal masks are uncomfortable to wear all day. Auto Mask features touchless mask on/off control and abdominal muscle sensing sneeze detection to catch coughs in time.
['Taliyah Huang', 'Calista Huang']
['Hardware winner']
['3dprinting', 'arduino', 'c++', 'cad']
14
10,185
https://devpost.com/software/virtuquiz
Inspiration There are so many students around the world who struggle with their studies. Many children don't like traditional learning or the e-learning method. Although there are many learning apps, there are some features I thought of that no other app has. Including all these features, I wanted to create a learning app, and thus Virtuquiz, which is not limited to quizzes, was born.... What it does Virtuquiz is a learning app which anyone can download on their mobile phone to start learning. This app is recommended for students of grades 6-12, but other grades will be added soon. Virtuquiz has 2 main sections: one is learning and the other is quizzes. Learning Section The learning section features 3 sub-categories, which include videos, a homework checker and an extra knowledge bot. Videos There are thousands of videos under different topics which you can refer to. The video section consists of videos from the online school Khan Academy. Watching videos here is simple: scroll down the topic list, select the topic, then select a video and watch it. All videos are in English. Homework Checker This is a feature where anyone can submit their homework for a re-check before submitting it to a teacher. You can send either a picture or a document. Then we will re-check it with automated as well as manual systems, and send back whether the work is correct or point out the mistakes and analyze them. We have clearly said that no one can use this feature for cheating. Extra Knowledge Bot This bot, called the Virtubot, can be used for learning good qualities and learning about society. This is also an essential part which education systems have missed out on today. The Virtubot is still under development; it only has 3 questions yet. Using it is simple: the bot asks questions, for example, how you would handle a situation where your friend is scolding you for something you didn't do.
There will be some options of what you can do. You will have to choose the wisest solution. You will be judged and given feedback ("You are rude", or "Very good, you are generous"). Quizzes After you have learned using the video feature, you can check your knowledge using the quizzes. There are 20 quizzes with 10 questions each at the moment; more will be added too. Quizzes fall under 5 main topics (Science, History, Technology etc.). Answering questions in quizzes is simple: all the questions are multiple choice, so you just have to select the answer and press next. Finally, after finishing all 10 questions you will get a report on your performance. The pass mark for all quizzes is 70%. How I built it The Virtuquiz app was built using different app building platforms, and the questions were created by me with the help of online articles. The video feature was added in collaboration with Khan Academy Videos. The Virtubot was built using the virtual bot creator. The app was finally compiled using Android Studio. Challenges I ran into There were many challenges. The first was finding videos; I couldn't make all the videos myself, but finally I found a Khan Academy feature which allows you to add the videos which belong to them. Another challenge was creating quizzes: I had to make 200 questions and add different answers. This was all done within 12 hours. Also, the Virtubot was difficult to create. I failed at creating the bot and integrating it successfully at the beginning, but later I was successful. Accomplishments that I'm proud of I am proud of adding a bot, which is a unique feature and also one that plays a role in social good. I am also proud of successfully creating this app. What I learned While building my app, I had to read many educational articles, and I gained a lot of education through this. Also, this was one of the most difficult apps I have built; it really taught me a lot about programming.
What's next for Virtuquiz I have to let people know about my app; although it's good and working, many don't know that something like this exists, so I need to promote it. I will also have to develop this app more in the future. Built With android-studio appsgeyser appy-pie gimp Try it out github.com play.google.com
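The quiz flow described above (10 multiple-choice questions, a performance report, and a 70% pass mark) reduces to simple scoring logic. The function and data below are illustrative, not code from the app:

```python
def grade_quiz(answers, key, pass_mark=0.70):
    # Compare the selected choices against the answer key and
    # report the score plus pass/fail at the 70% threshold.
    correct = sum(a == k for a, k in zip(answers, key))
    score = correct / len(key)
    return {"correct": correct, "total": len(key),
            "percent": round(score * 100), "passed": score >= pass_mark}

key = ["b", "a", "d", "c", "a", "b", "c", "d", "a", "b"]       # 10 questions
answers = ["b", "a", "d", "c", "a", "b", "c", "a", "a", "c"]   # 8 correct
print(grade_quiz(answers, key))
# → {'correct': 8, 'total': 10, 'percent': 80, 'passed': True}
```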
Virtuquiz
The Ultimate Learning App, quizzes, video lessons and even problem solving bots included...
['Senuka Rathnayake']
[]
['android-studio', 'appsgeyser', 'appy-pie', 'gimp']
15
10,185
https://devpost.com/software/professor-ranker-34x5ej
Inspiration At the end of every quarter/semester, there comes the time to sign up for classes for the next academic term, and when doing so every student races to Rate My Professors to see which professor is the best one to take in terms of easiness and other students' experiences with that professor. I always find myself spending time trying to rank these professors on which one is going to give me the best time and the best learning experience, and I end up researching many professors' websites to find the best one. To eliminate that, I built a program that does it for me and gives me a ranking at my convenience. What it does My program takes in two inputs: the school you go to, and the course you are about to take. Using these inputs, it finds all professors from that university who have taught that course before and ranks those professors from best to worst in terms of student comments and the overall rating on the website. It uses sentiment analysis on every comment for a professor, analyzes whether it is a good or bad comment on the professor, and assigns it a favorability value. Summing up those values, every professor is assigned a score for the comments they received from students on ratemyprofessors.com. Finally, it sorts the professors from best to worst in terms of student comments and easiness, and produces a list of professors sorted from best to worst in terms of the overall rating. How I built it I built this program in a Jupyter notebook, and due to time constraints I couldn't develop a website for it or add additional features.
However, I used the NLTK toolkit and libraries such as TextBlob to train the analyzer to distinguish a good comment from a bad comment and, based on that, give a value between -1 and 1, then sum all those values to give an overall student rating for that professor by comments. Then I added all those professors to a dictionary and sorted them from highest rating to lowest. The highest indicates more positive comments towards the professor (students' favor) and the lowest indicates students' aversion towards the professor. Using Beautiful Soup I was able to extract information for every professor who taught a certain course at a certain university, along with all statistics on that professor. I used the Google search library to extract all links from Google regarding professors who taught that course. Challenges I ran into A challenge I ran into was organizing information and parsing data. It took quite an amount of time to figure out how to extract the data and to write an algorithm to analyze it. The main challenge, however, was ranking the data. Since I am a beginner in programming, ranking was difficult, but with the help of a mentor I was able to overcome that and rank the data I extracted after analysis. Accomplishments that I'm proud of I am especially proud that this is my first CS project ever and that I was able to build something in just 36 hours with no prior knowledge but a simple ambition. What I learned I learned to research and analyze data using ML algorithms and, lastly, to rank data. I learned the process of extracting, analyzing, and displaying data for useful results. What's next for Professor ranker The next step for this project is turning it into a website or an app with more features. My plan is to build an algorithm that extracts study methods for a given professor solely using this website and, based on more inputs, gives students a more customized result.
For example, a student can indicate that they learn best in a more lecture-oriented way, and the program can analyze professors and rank them based on their lectures, giving a list of professors from the best lecturer to the worst. Furthermore, I would like to implement a form where a student can give more parameters, and using the tags on the website my program can report a more detailed, thorough ranking. For example, a professor can have a tag on the website saying they are a strict grader, and my program would output some study methods highlighted in the comment section. Essentially, with more time, I would like to give study tips for every professor in addition to their ranking. The bottom-line vision is that it can become a universal ranking for every professor across a variety of parameters, with more sources to parse from. Built With beautiful-soup nltk python textblob vader Try it out github.com github.com
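The scoring-and-ranking step described above can be sketched in a few lines of Python; in the real project TextBlob supplies each comment's polarity in [-1, 1], which is hard-coded here for illustration:

```python
def rank_professors(comment_scores):
    """comment_scores: {professor: [polarity, ...]} -> (professor, total) pairs, best first."""
    totals = {prof: sum(scores) for prof, scores in comment_scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical per-comment polarity values (TextBlob would produce these).
scores = {
    "Prof. A": [0.8, 0.5, -0.1],   # mostly positive comments
    "Prof. B": [-0.6, -0.2, 0.1],  # mostly negative comments
    "Prof. C": [0.3, 0.4, 0.2],
}
print(rank_professors(scores))
```

Sorting the summed totals in descending order yields the best-to-worst list the description mentions.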
Professor ranker
A professor ranker which ranks a professor from best to worst for a certain course in a certain university using ratemyprofessors.com and sentiment analysis to rank the professors from best to worst.
['Rohit Ganti']
[]
['beautiful-soup', 'nltk', 'python', 'textblob', 'vader']
16
10,185
https://devpost.com/software/tosyno-media-project
Inspiration on media project What it does to make life meaningful How I built it on interest of man kind Challenges I ran into well God is with me any way Accomplishments that I'm proud of on my achievement of today of who I am What I learned on media productivity What's next for tosyno media project to update the system more
tosyno media project
my idea is in cinematography and photoshoots including audio production
['08062081951']
[]
[]
17
10,185
https://devpost.com/software/covid-19-infobot
Inspiration The surge of misinformation has reached such new heights during the COVID-19 pandemic that UNESCO itself recognizes it as a major threat and has termed it a "disinfodemic" accompanying the pandemic. Under such conditions, people often believe fake information, which spreads further and causes mental strain and anxiety. False information and myths like "drinking potent alcoholic drinks as a cure" can not only overshadow the correct precautionary measures but also have adverse long-term effects. However, if we could collect information from credible sources like WHO and UNESCO in real time to answer users' queries and supply credible information, this could reduce the spread of the "disinfodemic". In the long run this would: 1) Reduce experiments with unproven cures/remedies 2) Reduce the high level of fear 3) Prevent amplification of false information 4) Reduce anxiety levels and preserve mental health 5) Prevent contamination of true facts What it does The InfoBot provides the user with credible and updated information regarding the number of cases, recoveries, precautionary measures, and a lot more. The InfoBot receives the user's input via voice commands. Depending on what the user requires, the InfoBot fetches the latest information from the internet via web scraping and provides it to the user in the form of audio responses. It has the following features: 1) Audio responses and speech recognition. [User commands] 2) Choice of male/female voice assistant. 3) Advanced error handling conditions. 4) Mental health support. 5) Includes basic support for North American slang. 6) Accepts voice commands for a hands-free experience. 7) Accessible to blind and deaf people. [Provides output as both audio and text] 8) Includes exception handling for personal questions. 9) Includes the ability to search for off-topic questions.
How I built it The InfoBot is primarily a web scraping project integrated with a preliminary response handling algorithm, speech recognition APIs, and a text-to-speech conversion library [pyttsx3]. The web scraping is done using the newly released Selenium Python support. ALGORITHM: The InfoBot uses an algorithm that I built to deliver appropriate responses. I call this the "KeyMatter" algorithm, as it identifies the key ideas behind the user's query and uses them to deliver appropriate responses. The algorithm breaks down the user input into separate words and converts them to lower case [i.e. makes it case insensitive]. Then it matches these words against the keyword-tags database which I created. Based on the tags identified in the user's query, the algorithm obtains the latest information through web scraping and delivers customized outputs through the device audio. Challenges I ran into I had never used Selenium with Python before and hence ran into many problems during web scraping. In particular, selecting the desired element using the source code of the website was very confusing. Furthermore, as I had to create my own database and algorithm, I ran into many logical fallacies which took great effort to fix. In the end, though, it all paid off! Accomplishments that I'm proud of I am very proud of successfully creating and implementing my own response generation algorithm and implementing a fully functional exception handling backend. What I learned I learned to use Selenium with Python for web scraping and to create a single program with three major cross-functioning libraries and/or APIs, namely pyttsx3, the Google speech-to-text API, and Selenium. What's next for COVID-19 InfoBot The next stage is to merge the COVID-19 InfoBot with ML algorithms to better handle off-topic questions and provide an even more enhanced, human-friendly response. Built With google-web-speech-api python pyttsx3 selenium webscraping Try it out github.com
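The tag-matching core of the "KeyMatter" algorithm described above might look like the following sketch; the keyword-tag entries here are illustrative, not the actual database:

```python
def match_tags(user_query, keyword_tags):
    """Lower-case and split the query, then return every tag whose keywords appear in it."""
    words = set(user_query.lower().split())
    return {tag for tag, keywords in keyword_tags.items() if words & set(keywords)}

# Illustrative keyword-tag database (the real one is hand-built and larger).
KEYWORD_TAGS = {
    "case_count": ["cases", "infected", "confirmed"],
    "precautions": ["precautions", "prevent", "safety"],
    "mental_health": ["anxious", "stressed", "scared"],
}
print(match_tags("How many confirmed cases are there", KEYWORD_TAGS))
```

Each matched tag would then drive a Selenium scrape and a pyttsx3 audio response.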
COVID-19 InfoBot
The spread of fake information about covid has surged so much that UNESCO has termed it as "disinfodemic". To clear this cloud of misinformation, my InfoBot delivers credible and updated information.
['Hetav Pandya']
['The Wolfram Award']
['google-web-speech-api', 'python', 'pyttsx3', 'selenium', 'webscraping']
18
10,186
https://devpost.com/software/don-t-forget-to-social-distance
continuation of the elevator pitch Each time they lose a different amount of health, since the effects of not social distancing are different each time. Inspiration This game was inspired by Covid-19. It has impacted many people in unimaginable ways. I am extremely grateful that my family is healthy. Each day I social distance because of my family and grandma. Some people, though, don't social distance. Each time someone gets Covid, their health is impacted differently. Thus I made the health bar with a random value to show how people are affected differently every time. What it does The game shows what happens if people stop social distancing. How I built it I built the game using Java and with help from our awesome Milton teachers Challenges I ran into During the Hackathon my teammate and I were using different IDEs. It took us quite a while converting all of the code. Additionally, my teammate and I knew different languages, so in the end we chose to work individually. That took a lot of time in the beginning. There were also some challenges such as placing the correct pictures and finding the right ones. Accomplishments that I'm proud of I am really proud of everything I learned today, but also of everything I learned in the whole year that I have been taking Computer Programming 1. My teammate did not know a lot about Java, so I had to explain a lot of concepts multiple times. That really helped me truly understand everything we learned this year. I am also really happy that I was persistent in finishing this Hackathon since my mom told me some really sad news about my dog's health today. That made me sad, but I still wanted to finish this the best I could. I gave my best and I really put a lot of thought into the idea. What I learned I learned a lot of new things in Java. I made new levels. I also learned how to make multiple interactions. What's next for Don't Forget To Social Distance! I would like to make multiple levels in this game.
With more levels, the game could become more addictive. If the game is addictive, more people will play it. The more people see the good message the game is promoting, the better the outcome will be. Built With java
Don't Forget To Social Distance!
This project's idea is promoting social distancing. Each of the characters are supposed to be social distancing. If they are not social distancing and get too close to each other they lose health.
['Dina-Sara Custo']
['Most Addictive Game']
['java']
0
10,186
https://devpost.com/software/tutorme-k0y752
Subject info sheet to be entered into the database Tutor info sheet to be entered into the database Tutor or Student Subject choices Google Sign in Inspiration: Many are not getting the proper attention and help they need in the standard classroom environment. The inspiration came from a friend who has lost all motivation to complete math work. He has fallen behind and does not believe he can catch up. What it does and How We Built It: Our program uses your Google login to create an account. When an account is created, you may decide whether you are a tutor or a student. If you choose tutor, you can then decide which subjects you feel comfortable teaching. From here the tutor can put in credentials and info such as name, age, GPA, price, test scores, phone number, and whether or not they can tutor online and/or in person. All of this information is stored in a tutor object, which is then stored in the Firebase database for future logins. On the other hand, if you are a student, you choose subjects, then fill in different information. This includes name, age, price range, and whether or not you want to be tutored in person or online. This information is also stored in the database for future logins. With all information stored, matches can be made using a HashMap. Based on the student's subject, price range, and preferred mode of communication, tutors are suggested. Challenges we ran into: This was both of our first times using Android Studio and incorporating a database. The learning curve left us little time to make our app look pretty and to do the debugging that would have made the app run smoothly. Accomplishments that we're proud of: Incorporating a database and becoming well versed in Android Studio. I was glad I was able to incorporate my knowledge of object-oriented programming into an app I can utilize at future hackathons. What we learned: Incorporating a database and becoming well versed in Android Studio.
What's next for TutorMe: Getting it to run smoothly. The outline is done, we just need to add some connections to make it all flow and look nice. Built With android-studio authorization database Try it out drive.google.com
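The matching step described above can be sketched as follows (in Python for brevity, though the app itself is Java; all field names are illustrative):

```python
def suggest_tutors(student, tutors):
    """Suggest tutors sharing the student's subject, within budget, and in a compatible mode."""
    return [
        t["name"] for t in tutors
        if student["subject"] in t["subjects"]
        and t["price"] <= student["max_price"]
        and t["mode"] in (student["mode"], "both")
    ]

tutors = [
    {"name": "Alice", "subjects": ["math", "physics"], "price": 25, "mode": "online"},
    {"name": "Bob", "subjects": ["math"], "price": 40, "mode": "in-person"},
    {"name": "Cara", "subjects": ["history"], "price": 20, "mode": "both"},
]
student = {"subject": "math", "max_price": 30, "mode": "online"}
print(suggest_tutors(student, tutors))  # -> ['Alice']
```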
TutorMe
With online teaching taking over many students have difficulty adapting to the new style. Teachers become overwhelmed and students need more sources for help.
['Aidan Scharnikow', 'Ethan Weilheimer']
['Most Educational']
['android-studio', 'authorization', 'database']
1
10,186
https://devpost.com/software/fastimage
Example of the result of us super-scaling a 4x compressed image, and the subsequent image quality. Inspiration Throughout the coronavirus pandemic, we have seen the effect that boredom has on people and the subsequent media they generate. Our goal was to pivot from traditional compression and use AI-based compression to encode photo media. What it does We have built a photo storage app. When users submit a photo, we use bicubic interpolation to downsample the image by a factor of 4 on each dimension, resulting in a 16× reduction in data. We store these latent downsampled states on a server, and when a user requests their photo, we upsample it using AI for super resolution. This returns the photo to its original state for the user. How We built it Our team split into three parts. One was responsible for the AI/backend scripts, one was responsible for the AI and the server architecture, and one was responsible for UI/UX and rounding things out. To combat the problem of low quality, we applied the SRGAN (Super Resolution Generative Adversarial Network) model [ https://arxiv.org/pdf/1609.04802.pdf ] to upscale the image. A generative adversarial network is a setup where two separate neural network models are trained in competition with each other. In our case, one model learned to upsample images and the other model learned to detect upsampled images. These two models compete against each other, each making the other improve. We fine-tuned this algorithm through further training and wrote additional Python scripts to process the data for ready use by a Flask backend. This Flask backend then worked with a Node server to deliver our speedy image-text/DM experience to an iPhone app. Our iPhone app also includes a thumbnail feature, where users view previous photos in low-quality thumbnail form until they click on a picture and access the high-quality version. Challenges I ran into We ran into a lot of challenges with time.
We originally planned to serve the SRGAN locally through the user's phone; however, integration with Swift Lite was a problem. This left us short on time for the rest of our work. Accomplishments that I'm proud of We are proud of the idea and tech that we have presented. We believe the compression we present here is very promising, as it can keep being pushed further. While other image compression algorithms such as JPEG are hard-coded and have relatively met their ceiling, our AI-based compression is just beginning and will only improve along with the state of AI. What's next for Limitless We are very interested in the idea of using this compression to reduce bandwidth when sending image media. We originally wanted the models to be present on the phones so that someone could send and receive compressed images and resample them. We are also very interested in applying this tech to video streams. Explaining our Code Structure We have two repositories: the server-side backend is nicknamed FastImage, and the UI/client is called Limitless. Built With flask heroku linux node.js python swift tensorflow xcode Try it out github.com github.com
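The data saving from 4× per-dimension downsampling is simple arithmetic: the stored latent keeps 1/16 of the original pixels. A quick check:

```python
def stored_fraction(width, height, factor=4):
    """Fraction of pixel data kept after downsampling each dimension by `factor`."""
    return (width // factor) * (height // factor) / (width * height)

print(stored_fraction(1024, 768))  # 0.0625, i.e. 1/16 of the original data
```

The SRGAN's job is then to reconstruct the missing 15/16 of the detail on request.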
Limitless
You ever keep running out of server space for your product because of user media? We are using AI based compression to help.
['Zan Huang', 'andrewbrodriguez', 'Alex Rodriguez', 'zankner Ankner']
['Most Technical', 'Best Overall Hack']
['flask', 'heroku', 'linux', 'node.js', 'python', 'swift', 'tensorflow', 'xcode']
2
10,186
https://devpost.com/software/isometric
data visualization in dashboard numerical analysis Inspiration As the pandemic has progressed, we've seen family members and friends feel the impact of COVID-19 both health-wise and economically. However, small retail businesses and restaurants have especially felt the consequences of prolonged lockdown. We thus wanted to create a tool that allows business owners to track the overall health of their businesses and adjust their business model accordingly. How it works/How we built it Business owners first enter various statistics about their company in a spreadsheet; we have attached a link to a template for this spreadsheet. They can then upload their data to our website, where our program converts the Excel template into a pandas DataFrame and visualizes graphs of growth and vitality using Python and Plotly. Different slices of the visualization and analytics can be selected using the drop-down menu, which uses a callback function to update the information. Challenges that we faced A few challenges included formatting the spreadsheet template, figuring out how to create a drop-down menu, and deciding what analytics should be presented. Integrating Plotly with HTML was difficult, and we had to familiarize ourselves with the basics of web development, as this was the first time for most of us. We also had a hard time creating graphs, but it worked once we got the hang of it. What we're proud of This was the first hackathon for most of our teammates. Entering this project, only one of us had used Plotly before, and we all had varying skill levels with Python. We all gained a degree of familiarity with Plotly, web development, and concepts such as callbacks. As we worked using Visual Studio Code Live, we were able to edit the program simultaneously. At the end of the day, we are happy that we were able to create our product within such a short time span.
Built With dash heroku plotly python Try it out github.com docs.google.com isometra.herokuapp.com
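A growth metric like the ones graphed in the dashboard reduces to simple period-over-period arithmetic; here is a minimal sketch in plain Python (the real dashboard uses pandas and Plotly, and the revenue figures are made up):

```python
def growth_rates(values_by_period):
    """Period-over-period growth from an ordered series, e.g. monthly revenue."""
    return [
        (cur - prev) / prev
        for prev, cur in zip(values_by_period, values_by_period[1:])
    ]

print(growth_rates([100, 110, 99]))  # roughly [0.10, -0.10]: +10% then -10%
```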
Isometra
Dashboard for businesses (specifically small retail) to measure growth/vitality
['JaneJinjinMo Mo', 'azhao20', 'jshephard22', 'Daniel Wang', 'DKatz']
['Most Scalable']
['dash', 'heroku', 'plotly', 'python']
3
10,186
https://devpost.com/software/cap-count
our logo Future graphics: choose a mode Future graphics: Sign Up/Log in View for Shopper Future graphics: Sign Up/Log in View for Shop Owner Future shopper view: Browse stores near you and bookmarked stores Future shopper View: View store profiles Future shop owner view: tally number of customers in stores to update shopper view Actual view of searching stores by location Click on stores to pull address documentation of our process staples motivation: team members cannot eat their ice cream until they finish their job Inspiration We were inspired by the many stores reopening after the coronavirus pandemic. We noticed many stores have had to reduce the number of shoppers at any given time which has left many customers waiting. What it does Our app has two modes: shop owner and shopper. Shop owners can use a simple screen to add and subtract guests as they enter and exit the store. This information is then used to calculate the percent capacity of the store which is updated live and posted for shoppers. Shop owners can also provide updated store hours, PPE requirements, and other information about their stores. If the store is at 100% capacity, store owners can also provide wait times for shoppers. On the shopper mode, customers can scroll through their favorite stores and find accurate and updated information about wait times. Shoppers can share their location to better locate low capacity stores in their area. Customers can also favorite stores to quickly access their favorite places. How we built it We used Codename one in Java to build the skeleton of our app. We then pulled different APIs from google to find store and user locations. Challenges we ran into We had trouble pulling and storing data from an API. We also had difficulty pulling the location from our users. We also struggled with combining code from different computers. Accomplishments that we're proud of We are proud of our creativity and our teamwork. 
We worked very well together and communicated efficiently considering the circumstances. Our group had a wide range of coding abilities, and we effectively divided up the work to ensure every team member participated. We also learned a lot of new skills along the way and are proud of our growth. What we learned We learned about accessing APIs and storing the information pulled in a database. What's next for Cap Count We plan on making our app more aesthetically pleasing. We will also develop a way to verify store identities, a way to search for stores, and algorithms to provide the most relevant stores for each individual user. Built With google-maps google-places java
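The owner-side tally and live percent-capacity figure described above amount to a small counter; a minimal sketch (the class and field names are ours, and the real app keeps this state in Java behind the add/subtract screen):

```python
class StoreCounter:
    """Running tally a shop owner updates as customers enter and exit."""

    def __init__(self, max_capacity):
        self.max_capacity = max_capacity
        self.count = 0

    def enter(self):
        self.count += 1

    def exit(self):
        self.count = max(0, self.count - 1)  # never go negative

    @property
    def percent_full(self):
        """Live percent capacity posted to the shopper view."""
        return round(100 * self.count / self.max_capacity)
```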
Cap Count
To follow social distancing guidelines, we created an app to allow store owners to monitor and post their capacities while allowing shoppers to view live updates for store capacities and wait times.
['Victoria Choo', "Ella O'Hanlon"]
['Best Novice Hack']
['google-maps', 'google-places', 'java']
4
10,186
https://devpost.com/software/busy-beach
The main page of the app shows the local weather obtained from NOAA's api and a scrolling list of beaches nearby (from Google places api ) The Map Screen lets the user find nearby beaches and check how busy they are. This screen shows data about a specific beach and gives the user the ability to "Go to the Beach". This app is designed to help local governments maintain proper social distancing at beaches and help citizens figure out which beaches will be less crowded. As summer approaches, everyone is going to flock to the beaches and may find themselves on a very crowded beach, unable to maintain safe social distancing. A massive crowd of people is the best way for COVID-19 to spread, and therefore crowds must be limited. This app helps to solve the problem of overcrowded beaches in two ways. The first is that by asking people to sign in to the beach, the app can give state officials an accurate count of the people on the beach, and they can keep the beach from getting too crowded. The second is by giving people the ability to find less crowded beaches as well as showing where people are on the beach. For example, there is a several-mile-long beach near my house, and this app will let users know which end is more crowded at any given time. It will also give users the ability to plan ahead, know that a beach is too crowded before arriving, and choose to go to a different beach. I was inspired to make this app because my town has closed all the town beaches, having no way to ensure proper social distancing. Making this app was challenging because I had never used Firebase before and had to learn it as I made the app. The app uses a Firebase database to keep track of how many people are on each beach, as well as where they are located on the beach. When the user clicks "Go to the Beach", the app verifies that the user is actually on the beach, then saves their location in the database.
When the user clicks "Leave the Beach," their location is removed from the database. Built With dart firebase flutter google-places nooa openmapapi Try it out photos.app.goo.gl
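The "verify the user is actually on the beach" step before writing to Firebase can be approximated with a great-circle distance check; a sketch (the radius and coordinates are illustrative, and the real app is written in Dart/Flutter):

```python
import math

def within_beach(user, beach_center, radius_m=300):
    """True if the user's (lat, lon) is within radius_m of the beach center (haversine)."""
    lat1, lon1 = map(math.radians, user)
    lat2, lon2 = map(math.radians, beach_center)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance_m = 2 * 6371000 * math.asin(math.sqrt(a))  # Earth radius ~6371 km
    return distance_m <= radius_m

print(within_beach((41.300, -72.920), (41.301, -72.920)))  # ~111 m away -> True
```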
Busy Beach
Helps people and governments limit contact and over crowding at beaches and stop the spread of coronavirus.
['Oliver Eielson']
['Best UX/UI']
['dart', 'firebase', 'flutter', 'google-places', 'nooa', 'openmapapi']
5
10,186
https://devpost.com/software/waysafe
homepage Inspiration Living in the city, the street outside is always somewhat busy, even during this pandemic. Such constant activity makes it hard to find a time when it's safe to go outside, or a place where it's even possible to maintain adequate social distance. After finding that Google Maps already has easily accessible data describing the population density of certain areas at certain times, we decided to use our preexisting knowledge of web apps and APIs to build a streamlined interface for retrieving this data. The purpose of Waysafe is to help users find the safest time and place to go outside through a simple, user-friendly website. What it does When a user finds their location on the map on Waysafe, a request is made to Waysafe's server with the latitude and longitude of the user's area and the radius of visible space on the map. The server fetches the necessary data from Google Maps and returns it to the client, which constructs a heatmap of safe places on the user's map. The heatmap has accurate results for population density in specific areas at multiple times throughout the day. How we built it When we began building the site, we found a Python package that retrieved the necessary Google Maps data. Writing a script to do so was simple, and sending output from the Python script to our Node server was relatively simple as well, using a Node module called PythonShell . The backend was built with Express.js to handle the API, a virtual Python environment to fetch the data from Google Maps, and many Node modules that help tie them together and complete the functionality. The frontend was built with React for client-side rendering and routing. Much of the styling is done with Bootstrap and SCSS to provide beautiful and consistent design. Challenges we ran into The first challenge we encountered was communicating between a Python script and our Node.js server.
We overcame this challenge by first writing a test Python script and setting up the PythonShell Node module to ensure that our objective was possible. To make sure it would run correctly, we needed to set up a virtual Python environment. Once we were certain it would work, we wrote our Python script to fetch the data and connected it to the server. Another challenge we faced was integrating our Bootstrap theme and setting up the rendering of the heatmap. Because the Google Maps API we were using for rendering was a React implementation, it was hard to find complete documentation on how to manipulate it. However, a little bit of experimentation and a lot of Googling got us through this issue. Accomplishments that we're proud of We're proud to have built a beautiful, user-friendly website that can help people safely get a breath of fresh air during this pandemic. It's always reassuring, with respect to the usefulness of a project, when its inspiration comes from a problem we have faced ourselves. We hope other people will find Waysafe as useful as we will. What we learned We learned from the challenges we faced how to use Python in a web app and how to use the React implementation of the Google Maps API. We also got plenty of practice writing both backend code and frontend React code, and perhaps most importantly, we were able to practice working coherently as a team, helping each other out without compromising our version control. What's next for Waysafe As much as we believe this project will be useful during the pandemic and as long as social distancing is advised, we are uncertain how to make it helpful afterwards, once it is no longer dangerous to interact with large groups of people at close range.
We do not plan to remove our main feature, but we do hope to find a way to use this same Google Maps data to provide assistance to its users under regular circumstances; perhaps we can add features to help users easily find parking, or an uncrowded restaurant to eat at, or a quiet park at which a parent can play with their children. Such features should be easily implemented and ready for use by the time the Covid-19 pandemic is under control. Built With express.js google-maps node.js python react scss
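Once hourly population-density data is in hand, picking the safest times to suggest is a small sort; a sketch with made-up popularity scores (the real data comes from Google Maps via the Python script):

```python
def safest_hours(popularity_by_hour, top_n=3):
    """Return the top_n least-busy hours from a 24-entry list of hourly popularity scores."""
    hours = range(len(popularity_by_hour))
    return sorted(hours, key=lambda h: popularity_by_hour[h])[:top_n]

# Made-up popularity scores for hours 0..23 (higher = busier).
popularity = [30, 20, 10, 15, 8, 5, 7, 25, 60, 80, 90, 95,
              100, 95, 90, 85, 80, 75, 70, 65, 60, 55, 50, 40]
print(safest_hours(popularity))  # -> [5, 6, 4]
```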
Waysafe
Find the safest time and place to get a breath of fresh air.
['Robert May', 'Sequoyah Sudler']
[]
['express.js', 'google-maps', 'node.js', 'python', 'react', 'scss']
6
10,186
https://devpost.com/software/algedi
Initial interface afterwards Inspiration According to the CDC, hand-washing is one of the most effective methods of preventing the spread of coronavirus. However, access to restrooms is not always easy to find, especially in public areas. Thus, I decided to build a website that would allow anyone with an internet connection to find the closest restrooms in a city. What it does Algedi displays a map with pointers indicating nearby restrooms. Restrooms can be found either by searching near the user as determined by geolocation, or by inputting a city name. The city name can be entered by typing or by voice recognition. Hovering over a restroom pin gives more information, like how to access the restroom and a link to the address on Google Maps. How I built it I used React for the front-end, with a wrapper for the Google Maps API. Restroom info was received from the refugeerestrooms.org API through Express. Voice recognition was achieved through the Web Speech API from MDN, which uses ML models to interpret speech. Challenges I ran into I had much difficulty dropping map points based on data received from the restroom API. Thus, I found a wrapper class for React that allowed me to add points on a map by updating state. In addition, I also struggled with displaying information about restrooms after it had been received. The fix was to store the corresponding information for each point in the state and change the displayed information upon hovering over a point. The voice recognition also required testing several different APIs before I settled on the MDN API, which combined the power of ML with an easy implementation. Accomplishments that I'm proud of I'm proud of being able to tie together React and Express in an app that runs smoothly and is able to interact with third-party APIs. I hope this tool is able to assist others. What I learned I learned that React has powerful features that allow for well-crafted and dynamic websites.
I also learned about how to use Express. What's next for Algedi Including directions from your location to nearby restrooms. Adding user feedback to restrooms. Built With express.js react refugeerestrooms.org Try it out algedi-2020.herokuapp.com
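Shaping the restroom records returned by the API into map markers with hover info, as described above, reduces to a small transform; a sketch (in Python for illustration, while the site does this in React state, and the field names are assumptions about the API's JSON):

```python
def to_markers(restrooms):
    """Turn restroom records into marker dicts: a map position plus hover text."""
    return [
        {
            "position": (r["latitude"], r["longitude"]),
            "info": f"{r['name']} (accessible: {r['accessible']})",
        }
        for r in restrooms
    ]

sample = [{"name": "City Library", "latitude": 40.7, "longitude": -74.0, "accessible": True}]
print(to_markers(sample))
```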
Algedi
An intuitive and accessible tool for finding publicly available restrooms.
[]
[]
['express.js', 'react', 'refugeerestrooms.org']
7
10,186
https://devpost.com/software/ambulant
What Inspired Us We were inspired by our teammate Grace, who said that the pandemic has made it difficult for her to regulate her sleep and exercise schedules. To help combat this, we created Ambulant with the intent of giving users advice on the best times to walk and sleep. What We Learned While working on the project, we learned a lot about HTML, CSS, and JavaScript. We learned how to display a live clock on the page, and we also learned about implementing cookies on our website. We also improved our remote collaboration skills, which will be incredibly helpful in a post-COVID world. How We Built It We collaborated on the project in Visual Studio Code using an extension that allowed all of us to work on it at the same time. Alex then screen-shared the website on Zoom so that we could all see live updates to the website as we made edits in real time. Challenges One challenge we faced was implementing the API. It was very difficult at first, but we got a lot of great experience trying to work around it. Built With css html javascript Try it out abao929.github.io
Ambulant
In a post-COVID 19 world, people will need to regulate their exercise and sleeping habits. Ambulant helps users keep track of their sleep and walking schedule to improve their physical health.
['Grace Smith', 'vtao21', 'Nathan Trudell', 'Drew Hesp', 'Alex Bao', 'Wyatt Ellison']
[]
['css', 'html', 'javascript']
8
10,186
https://devpost.com/software/rona-alert
Autofill Menu Home Page Option 1: Low Risk Option 2: High Risk Prior to this competition, most of this team had worked on another project with a much smaller time frame and a less creative idea. The premise was the idea that students of different skill levels have different skills to share, none less valuable than another. Our team stretches across skill levels and specialties, but we all specialize in video game design. Previously we stuck to what we knew and it turned out fairly well, but we wanted to try something that was new to all of us: Codename One. We came up with the idea to educate people based on the recent rise in power of the media and the realization that it often skews facts to get its point across. With our application we hope to use accurate, real-time data to give people an idea of the effect COVID-19 has had on any country of their choosing, in relative terms, so they can properly scale the effect. Along the way, we spent more time researching and learning the syntax of Codename One than we spent programming, due to ambitions that soared higher than our actual abilities. In the future, however, we all hold faith that we can further our understanding of Codename One and add new methods of displaying data to show people this information in ways they can understand. In our project, we used various Codename One layouts to create a reasonably visually appealing menu that packs as much information into each page as possible. Using a manually hardcoded database, an autofill menu is used to increase the ease and usability of the project for its user. The inputted data from the menu is then saved and used as a search parameter for a free API called "covid19api" that contains the recent and total deaths, recoveries, and confirmed new cases for each of the 186 different countries in the menu.
Rather than telling users a concrete answer, we give them the information directly from the API to decide for themselves, in order to establish a sense of credibility. We all recognize that it's hard to find reliable information that hasn't been skewed in some way, so we found it and displayed it ourselves, along with our suggestion on what to take away from it. Built With codename-one covidapi intellij-idea java Try it out github.com
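The core data step — turning a country's daily series from the covid19api into recent and total figures — is sketched below in JavaScript for brevity (the app itself is written in Codename One Java). Field names follow the covid19api convention (`Confirmed`, `Deaths`, `Recovered`); the function name and sample data are illustrative:

```javascript
// Summarize a chronological array of daily country records into the
// totals and new-case counts the app displays.
function summarize(days) {
  if (days.length === 0) return null;
  const latest = days[days.length - 1];
  // With only one record, treat the previous day as all zeros.
  const prev =
    days.length > 1
      ? days[days.length - 2]
      : { Confirmed: 0, Deaths: 0, Recovered: 0 };
  return {
    totalConfirmed: latest.Confirmed,
    totalDeaths: latest.Deaths,
    totalRecovered: latest.Recovered,
    newConfirmed: latest.Confirmed - prev.Confirmed,
  };
}
```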
Rona Alert
Our app is meant to educate the public about the actual data surrounding COVID-19 in a country of their choosing in real-time.
['Rukevwe Omusi', 'Philip Okafor', 'Zachary Rahaman']
[]
['codename-one', 'covidapi', 'intellij-idea', 'java']
9
10,186
https://devpost.com/software/pandemic-jobs
Home page Listing a job Login with Google Looking for a Job Our inspiration came from multiple months of sitting at home, looking for ways to use our programming experience but unable to find opportunities to do so. We also know that hundreds of thousands have been laid off from their jobs and are now looking for work in an extremely dry job market. We want to help these groups of people, and many more, find ways to use their skills to support themselves. Our website allows users to post and view job listings, connecting people who need help with those who can provide it. We built this website in VueJS. We have multiple different views, with a login system through Google. We also have a database (Firebase) which allows us to save jobs that users add and show relevant jobs to users. Lastly, we use a distance equation to help users view in-person jobs that are near them (within a range that they can set). One of our team members had never built a website, so he faced some syntax and logic challenges. Other than that, we spent a long time solving an account issue with the Google Developer Portal that prevented us from making API calls. Lastly, time was a very important factor, and as we continue building this website we think it will improve vastly. We are proud of the look of our website, which feels very simple and yet looks very nice. Also, our Login with Google and Location Autocomplete features took a lot of time, and we are very happy that we were able to include them. We learned a lot about using databases and interactions between different types of users. Also, working over Zoom presented some challenges, but we did really well with sharing our screens to fix bugs, and with helping each other as much as possible. We plan to keep working on Pandemic Jobs until it is completely functional. At that point, we might ask people if they would use our website, and update it to be more helpful to everyone.
More specifically, we want to make better matches between employers and job seekers, and also add more skills and categories for people to choose from. Built With firebase google-places vuejs Try it out github.com
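The "distance equation" mentioned in the description is, plausibly, the classic haversine formula for great-circle distance between two lat/lng points — a standard illustration rather than the site's actual implementation. A range filter built on it could look like this (function names are illustrative):

```javascript
// Mean Earth radius in kilometres.
const EARTH_RADIUS_KM = 6371;

// Haversine great-circle distance between two (lat, lon) points.
function haversineKm(lat1, lon1, lat2, lon2) {
  const toRad = (deg) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
}

// Keep only in-person jobs inside the user's chosen radius.
function jobsWithinRange(jobs, user, rangeKm) {
  return jobs.filter(
    (j) => haversineKm(user.lat, user.lon, j.lat, j.lon) <= rangeKm
  );
}
```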
Pandemic Jobs
Pandemic Jobs connects students and adults who are stuck at home without a source of income with others who need help with household chores, manual labor, software development, and other jobs.
['Zane Bookbinder', 'Max Litvak']
[]
['firebase', 'google-places', 'vuejs']
10
10,186
https://devpost.com/software/cleanout-for-a-cause
Inspiration We were inspired by the loss of jobs during the pandemic. This company could create jobs as well as raise money and awareness for certain organizations. What it does Our project is essentially an online thrift store. It takes donations of items and sells them while benefitting a charity. How we built it We used HTML and basic CSS. The zip code finder was built using Python. Challenges we ran into One of our team members wasn't as familiar with HTML and learned throughout the day. Accomplishments that we're proud of We were content with the finished project, as it was fully completed. What we learned We learned more about HTML and CSS and the ways in which they can be used to create a web page. What's next for Cleanout for a Cause This page could be made more visually appealing and put into motion. It could become a real page and benefit people and charities. Built With css3 html5 python Try it out github.com
Cleanout for a Cause
An online thrift store that encourages users to donate their old items. Proceeds from the sales will be donated to charity.
['amelia-otto otto']
[]
['css3', 'html5', 'python']
11
10,186
https://devpost.com/software/date-maker
Home Screen About Page OAuth Authentication 1 OAuth Authentication 2 Profile Page Profile Page w/ preferences Discover Page Inspiration Have you ever struggled to pick a restaurant with your friends? Are you frequently unable to reach a consensus on where to go? If so, Date Maker is the site for you. The theme for this hackathon was "Life After COVID-19," and after COVID-19, we'll be allowed to eat out again. The opening of new restaurants will surely cause many arguments about where to eat out and about convenience, so we made Date Maker. What it does Date Maker chooses the closest, highest quality restaurants to you and your friends. The app eliminates the need for argument and indecision in times of hunger. Just enter the type of food you want and who you want to go with and Date Maker does the rest and finds you restaurants nearby that you can agree on! How we built it We used a standard webstack with Firebase, as well as the Yelp and Google Direction APIs. Challenges we ran into On the backend, we had a bit of trouble implementing Google Cloud Functions. What's next for Date Maker Turn webapp into open source project and get more developers on board. Have them sign a Contributor License Agreement that allows them to retain intellectual ownership of contributions but Date Maker can freely use their software for proprietary purposes. Create project on Jira to help manage open source workflow under Agile development framework. Break up tasks into sprints and follow SCRUM principles. Follow "lean" product development methodology and develop an MVP. Use user feedback to help guide development of future product iterations. Potential features may include: Implementing restaurant recommendations as a premium feature using user preference data. Enable advertising of local restaurants' promotional deals to users as a source of revenue. Enable "matching" where people are notified of nearby bachelors who are also looking to eat. App sets up their first date. 
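The core idea above — picking restaurants convenient for everyone in the group — can be sketched as: average the group's locations, then rank candidates by closeness to that midpoint. This is a hypothetical simplification using plain Euclidean distance on lat/lng (reasonable at city scale); the actual app relies on the Google Directions API for real travel distances:

```javascript
// Average position of everyone in the group.
function midpoint(people) {
  const n = people.length;
  return {
    lat: people.reduce((s, p) => s + p.lat, 0) / n,
    lng: people.reduce((s, p) => s + p.lng, 0) / n,
  };
}

// Rank candidate restaurants by distance to the group midpoint.
function rankRestaurants(people, restaurants) {
  const m = midpoint(people);
  const dist = (r) => Math.hypot(r.lat - m.lat, r.lng - m.lng);
  return [...restaurants].sort((a, b) => dist(a) - dist(b));
}
```

A quality score (e.g. Yelp rating) could be folded into the sort key to trade off distance against restaurant quality.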
Built With css3 firebase google-directions javascript typescript
Date Maker
After COVID-19, we will be able to eat together again! A web app to find the most convenient place between people trying to eat.
['Sebastian Park', 'Mikhail Dmitrienko']
[]
['css3', 'firebase', 'google-directions', 'javascript', 'typescript']
12
10,187
https://devpost.com/software/xray-eye-nwysqt
Android Application Web Application Model Accuracy Graph Results Wireframe Inspiration There are over 20 million X-rays taken every year, and a staggering 40% of them, over 8 million, are chest X-rays. Scientific studies have argued that clinically major errors in radiology have a 2 to 20 percent chance of occurring. Other research has also pointed out that mammograms for detecting cancer can be misdiagnosed 61% of the time. In the industry, there has also been an increasing number of vacancies in radiology positions: there is a shortage of over 100,000 radiologists worldwide, which is predicted to grow. In light of this information, the increasing vacancy in radiologist positions, and current events around the world such as the Covid-19 pandemic, we asked ourselves: how can we fill this gap, speed up the process, and prioritize vulnerable patients coming to hospitals for X-ray diagnostic testing, while aiding health professionals in performing their duties accurately and consistently? What it does We have developed our app, "XRay-Eye", as a decision support tool that uses machine learning to expedite the way frontline care workers identify lung-related abnormalities, which are typically associated with conditions such as Covid-19 infections, pneumonia and other diseases. Our app empowers clinicians, from emergency doctors to nurses to administrative staff to radiologists, by providing an immediate diagnosis of a chest radiograph along with the results as percentage confidence levels, enhancing their ability to form an accurate diagnosis at the time treatment is prescribed. How we built it We used deep learning, a machine learning application of neural networks, to train our image classifier model on 1,000 open-source samples each of Covid-19, Pneumonia and Normal X-rays, reaching accuracy close to 95% as shown in the attached images.
Challenges we ran into The biggest challenge was to find the right datasets for each of Covid-19, Pneumonia and Normal chest X-rays. The second was to train our machine learning model to high accuracy with minimal loss, as shown in the attached graph results, and the last challenge was to integrate our trained model on two platforms: an Android app and a web application. Accomplishments that we're proud of The biggest accomplishment was that we were able to train our machine learning model to 95% accuracy in a matter of 2 days, even though the open-source data we got is not that big (1,000 samples each). Secondly, we were able to make functional prototypes on two platforms, Android and web. What we learned Honestly, during this competition, we learned a lot about product ideation in solving the real-world problem of speeding up the testing process in situations like the Covid-19 pandemic using machine learning technology. What's next for XRay-Eye We hope to win this competition and get a chance to meet expert entrepreneurs in person during office hours to learn and expand our idea into a startup. Please check our full 4:35 min demo pitch video, including the functional apps demo starting at [03:38 mins], at https://vimeo.com/424477958 Thank you, Team XRay-Eye Built With android-studio flask jupyter machine-learning python
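The "percentage confidence level" display described above typically comes from applying a softmax to the classifier's raw scores. A sketch of that step follows — written in JavaScript for consistency with the other examples here, whereas the project itself uses Python/Flask; the function name and sample scores are illustrative:

```javascript
// Convert raw classifier scores (logits) into percentage confidence
// per class label via a numerically stable softmax.
function confidencePercent(logits, labels) {
  const max = Math.max(...logits); // subtract max to avoid overflow
  const exps = logits.map((x) => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return labels.map((label, i) => ({
    label,
    percent: (100 * exps[i]) / sum,
  }));
}
```

The percentages always sum to 100, so the UI can show them side by side for Covid-19, Pneumonia and Normal.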
XRay-Eye
Developed an App called “XRay-Eye” as a Decision support tool that uses machine learning to expedite the way frontline care workers identify lung-related abnormalities associated in Covid-19 pandemic
['Tanush Verma', 'Tushar Chauhan', 'Fabio Santos']
['Best Tech (1st place)']
['android-studio', 'flask', 'jupyter', 'machine-learning', 'python']
0