hackathon_id int64 1.57k 23.4k | project_link stringlengths 30 96 | full_desc stringlengths 1 547k ⌀ | title stringlengths 1 60 ⌀ | brief_desc stringlengths 1 200 ⌀ | team_members stringlengths 2 870 | prize stringlengths 2 792 | tags stringlengths 2 4.47k | __index_level_0__ int64 0 695 |
|---|---|---|---|---|---|---|---|---|
10,080 | https://devpost.com/software/thisorthat-jqmfnc | Inspiration
We were inspired by the deep question that drives us all: "What is the best thing?"
What it does
ThisOrThat gives users a choice between two things. They vote on which is better. Their vote is then saved.
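The voting flow amounts to recording matchup winners and ranking by wins. The project itself is plain HTML/CSS/JavaScript, so the following Python sketch is only a model of that logic, with invented names:

```python
# Minimal model of ThisOrThat's voting logic (names are illustrative):
# a matchup shows two things, the user's vote is recorded, and a running
# tally lets us rank the things later.

votes = {}  # thing -> number of matchups it has won

def cast_vote(winner, loser):
    """Record that `winner` beat `loser` in one matchup."""
    votes[winner] = votes.get(winner, 0) + 1
    votes.setdefault(loser, 0)  # losers still appear in the ranking

def ranking():
    """All voted-on things, best first."""
    return sorted(votes, key=votes.get, reverse=True)
```

This is also the shape of the "view item ranking" feature mentioned under "What's next".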
How I built it
We used basic HTML, CSS, and Javascript to do things.
Challenges I ran into
It took a long time to get the Flickr API working so that random related images would load.
Accomplishments that I'm proud of
The graphic design. Graphic design is my passion.
What I learned
We all came with different levels of experience. In general, we all learned a lot about how to use Javascript!
What's next for ThisOrThat
We want to add functionality for users to view item rankings, and to be able to add a thing to the list of things.
Built With
css
flickr
html
javascript
Try it out
github.com | ThisOrThat | People have been asking themselves what is "the best thing" for decades. By crowdsourcing data from numerous participants, our software is able to determine what the best thing truly is. | ['Emily Gitlin'] | ['Best use of Comic Sans MS'] | ['css', 'flickr', 'html', 'javascript'] | 15 |
10,080 | https://devpost.com/software/regular-terminal | one such lovely terminal opportunity
alacritty has gpu support but the shaders are boring
Built With
glsl
rust
Try it out
github.com | Regular Terminal | gpu terminal emulator | ['Nolan Munce'] | ['mmmmm gpu support'] | ['glsl', 'rust'] | 16 |
10,080 | https://devpost.com/software/onlysibs | What it does
Auto-swipes people based on pictures of their siblings to find the perfect match.
How I built it
Python and OpenFace's API to place facial landmarks on pictures.
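Once landmarks are placed, "looks like your sibling" can be approximated by comparing the two landmark sets. A hypothetical sketch of that comparison step (the distance metric and threshold are assumptions, not the project's actual code):

```python
# Hypothetical sketch of the matching step: OpenFace-style landmarks are
# (x, y) points, so similarity to the uploaded picture can be modeled as
# the mean distance between two aligned landmark lists. The threshold is
# a made-up number.
import math

def landmark_distance(a, b):
    """Mean Euclidean distance between two equal-length landmark lists."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def should_swipe_right(candidate, reference, threshold=5.0):
    """Swipe right only if the faces' landmarks are close enough."""
    return landmark_distance(candidate, reference) < threshold
```

A real pipeline would first align and normalize the landmarks (scale, rotation) before comparing.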
Challenges I ran into
Building this app was really challenging on our morality and there was a certain point where we were just numb to the atrocities we were committing.
Accomplishments that I'm proud of
It works and the UI is great. And we didn't get arrested by the FBI.
What I learned
Unfortunately too much
What's next for OnlySibs
Hopefully nothing
Built With
css
flask
html5
nginx
openface
python
tinder-api
Try it out
github.com | OnlySibs | Tinder Auto-Swiper that only swipes on people who look like your siblings(or really any picture you upload) 'love, just one step away' | ['Dominic Matthew Morales', 'Andre Vallestero', 'Winston Van', 'Julie Nguyen'] | ['Best Dating App'] | ['css', 'flask', 'html5', 'nginx', 'openface', 'python', 'tinder-api'] | 17 |
10,080 | https://devpost.com/software/complexity | Inspiration
Since this is terriblehacks, where the submissions are the opposite of serious, I thought that I would create a project that appears serious, but actually contains nothing.
What it does
Absolutely nothing. | Complexity | Very very complex and serious hack | [] | ['Best Devpost Submission'] | [] | 18 |
10,080 | https://devpost.com/software/virtual_museum | VR View 3
VR View 2
VR View 1
Non VR View 1
Non VR View 2
Inspiration
Covid-19 has resulted in an upheaval of everyone's way of life. Suddenly, all the things that were easily accessible before are no longer within our reach. Our aim with this project is to make accessible one of the most important learning environments for everyone: the museum. We hope that this immersive experience will serve as an interesting way for people of all ages to get exposure to the various facets of history.
What it does
VR_Museum will place you in the simulation of a museum where you can walk around the different historical halls, view the exhibits in each hall and hear more information about the pieces.
How we built it
We first made 3D models of the various halls we wanted and their architecture to set up in Unity 3D. We then added the visuals for the exhibits, researched information regarding each piece, and fed the textual content thus obtained to the Alexa Text-to-Speech service to generate the audio files. We then augmented the audio files on top of each exhibit and added play and pause functionality.
Challenges we ran into
Some of the challenges we faced in the process are as follows:
Color blending for a milder, more educational environment setup
Positioning the reticle pointer; we could not accomplish our objectives with the gaze pointer
Accomplishments that we're proud of
We are proud to have completed a full audio-visual VR simulation despite our relative newness to the field.
What we learned
We learned to render 3d models and augment audio to visuals in Unity 3D and use extensions such as Google VR to add our desired features.
What's next for VR_Museum
In the future, we hope to add more halls and pieces to the framework, as well as innovate the way in which we present the information to our users.
Built With
3d
unity
vr
Try it out
github.com
drive.google.com | VR_Museum | Experience the worlds of the past from your home! | ['Nitesh Bharti', 'Agnes Sharan', 'Mohit Kumar'] | ['Best Serious Hack'] | ['3d', 'unity', 'vr'] | 19 |
10,080 | https://devpost.com/software/spark-k6txdh | Inspiration
- Needed a place to track and plan our projects.
- Frustrated over apps that are specialized to do only one thing resulting in having to keep many tabs/apps open at the same time.
- Saw a need for an all-in-one productivity app.
- Believed it had the potential to increase the productivity of those affected by COVID-19.
What it does
- Features: Calendar, Tasks, Team Administration, Project Tracking, Messages, Meetings and Zoom Integration, Discussion Board
- A cumulative productivity app that puts all the apps you need into one
- Assists teams and organizations by improving productivity and tracking the progress of projects
- Highlights team collaboration with SparkRooms to coordinate team members
How we built it
- Written in HTML/CSS/JS, Python
- Written in Visual Studio Code
- Utilized Travis for continuous deployment and autonomous configuration
- Divide and Conquer - Frontend, Backend
- Git, VSCode Live Share, and Discord
Challenges we ran into
- Responsiveness & Mobile compatibility
- Rendering iframes on the dashboard
- Saving arbitrary user data within Firestore
- Git collisions: Committing changes to the same lines at the same time
Accomplishments that we're proud of
- Firebase for hosting, database, and authentication
- Fully functional login system with Google Oauth 2.0 Authentication
- Dashboard to render iframes to show lots of content in a single page
- Travis CI/CD
- Lots of backend and Javascript to process website
What's next for Spark
- Expand Spark for full enterprise and educational usage
- Increase responsiveness of site to enable mobile usage
- Create a way to send personal messages to team members
- Create more tools for users, e.g. a personal file storage method
Built With
bootstrap
css3
flask
fullcalendar
google-cloud
html5
javascript
jquery
node.js
python
travis-ci
Try it out
sparkapp.cf
github.com
docs.google.com | Spark | An intuitive and empowering all-in-one productivity application. | ['Raadwan Masum', 'Rohan Juneja', 'Safin Singh', 'Aadit Gupta'] | ['Wolfram Honorable Mention', 'Track Winner: Work & Economy'] | ['bootstrap', 'css3', 'flask', 'fullcalendar', 'google-cloud', 'html5', 'javascript', 'jquery', 'node.js', 'python', 'travis-ci'] | 20 |
10,081 | https://devpost.com/software/cresh-dkbzej | App Logo
Alyssa Tan
Olufemi Adefila
Courvoisier Hopkins
Team idcs3
Inspiration
Covid-19 and quarantine have caused a dramatic shift in how everyone works and learns. Now more than ever, people are looking for ways to connect online and maintain relationships with their friends and family. Additionally, virtual learning and the "working from home" lifestyle can have negative impacts on someone's physical and mental health if they forego physical activity. By creating Cresh, we can help Big Brothers and Big Sisters connect with their Littles while also promoting a healthy lifestyle that is crucial for virtual learning and a healthy mental state.
What it does
Cresh helps users connect with one another and participate in exercise challenges known as "Cresh Battles". Two users connect online and choose an exercise such as push-ups or squats. They then compete synchronously and try to complete as many reps of the exercise as possible within the time limit, with the user that achieves the most reps being crowned the winner.
In addition to Cresh battles, users can keep track of reps by themselves, with their stats being displayed on their profile. Lastly, users can create and join group chats to look for exercise partners or simply keep in touch with one another.
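The battle mechanic described above boils down to counting reps inside a time window and comparing totals. An illustrative Python model (the app itself is written in Swift; all names here are assumptions):

```python
# Illustrative model of a Cresh Battle: each rep detected from the pose
# skeleton is logged with a timestamp (seconds from the start), and only
# reps inside the time limit count toward the score.

def battle_score(rep_timestamps, time_limit):
    """Number of reps completed at or before `time_limit` seconds."""
    return sum(1 for t in rep_timestamps if t <= time_limit)

def battle_winner(players, time_limit=60):
    """players: {name: [timestamps]} -> winner's name, or None on a tie."""
    scores = {name: battle_score(ts, time_limit) for name, ts in players.items()}
    best = max(scores.values())
    leaders = [name for name, s in scores.items() if s == best]
    return leaders[0] if len(leaders) == 1 else None
```

The same per-user scores could then be written to the profile stats store the description mentions.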
How we built it
The majority of development was in Swift using Xcode, while utilizing CocoaPods to manage dependencies. In order to handle Cresh battles and track a user's movements, we used PoseEstimationML to create a rough skeleton from a user's video feed. Lastly, we created a database server using Parse to handle statistics and user interaction, such as group chat messages or scores from Cresh battles.
Challenges we ran into
One of the challenges we ran into was maintaining good version control within GitHub, as we had to do some reflogging at certain points. Additionally, we had to dig into the Parse and Swift documentation in order to properly store and retrieve objects from the database.
Third Party Apps Used & Other Links
"PoseNet"
model from the CoreML framework:
Built With
github
parse
poseestimationml
swift
xcode
Try it out
github.com | Cresh | Stay connected and make learning better with machine learning | ['Jamel Hopkins', 'Alyssa Tan', 'Olufemi O. Adefila'] | ['1st Place Entry'] | ['github', 'parse', 'poseestimationml', 'swift', 'xcode'] | 0 |
10,081 | https://devpost.com/software/eduar-vne0hq | This is our Subject Selection, where students are able to choose their learning.
This is our User Profile where Bigs are able to see the progress their Little has made.
This is our Learning Hub where Littles are able to utilize AR objects to learn and be able to call their Bigs.
Our main feature is our one-to-one video sessions, where Littles and Bigs are able to connect and interact with AR objects together.
Team Member: Eric Sanchez
Team Member: Rahul Athreya
Team Member: Jesel Reyes
We are EduAR, we want to change the way kids learn.
Inspiration
While coronavirus continues to spread across the globe, many countries have decided to close schools as part of a social distancing policy in order to slow transmission of the virus.
However, these school closures have affected the education of more than 1.5 billion children and youth worldwide.
One of our teammates' siblings is one of them: because of Covid-19, she couldn't attend her classes. Later on, her school started holding video-call classes, but they were far less effective because of the missing practical experience, which is at the heart of teaching children.
Covid-19 has been very tough for everyone, but it can have an even more severe effect if we let it continue hampering the education of kids.
So I thought: how can I help narrow the learning gap?
As the majority of us are students ourselves, we decided to make this app, which can bring the practical element back into teaching while on a video call.
What it does
Augmented reality can help make classes more interactive and allow learners to focus more on practice instead of just theory. As AR adds virtual objects to the real world, it lets students train skills using physical devices.
Our app essentially does that: it helps students learn about things like shapes, colours, etc. using AR, helping children get the basic practical experience they are missing right now in their video-call classes.
How I built it
We used Flutter with the Agora SDK for video calls and ARCore for the augmented reality.
We chose Flutter as it speeds up the development of mobile apps, allows lightning-fast transitions, and comes packed with design elements that fit right into the native feel of both Android and iOS.
Challenges I ran into
It was fun to carefully design the UI/UX of our app keeping children as our target customers: selecting icons or pictures that feel more appealing to children, making sure the interactive objects are big enough for small hands, etc. Also, integrating AR into the video call was pretty challenging.
Accomplishments that I'm proud of
One of our teammates was super proud to say that he showed this app to Mr. Manish, the head of a school called Wisdom, and got valuable feedback to work further on the app, like adding the name tags; Mr. Manish also expressed interest in helping us integrate our app into their school.
What I learned
Honestly, it was a roller coaster ride, and we would like to thank the Agora team, as this project helped us push our limits. Working on an app catering to children can be pretty challenging, and we also learned about the implementation of AR and the impact it can have on our education system.
What's next for EduAR
We want to launch it in the play store and tie-up with a school to implement our product. We would love to be able to apply a machine learning aspect to our application so it can be customized to Littles and Bigs if more time were to be allotted to us. We are awestruck with our product's future potential and would like to be one of the pioneers of bringing change in children's education especially during these testing times.
Built With
agora
dart
firebase
flutter
Try it out
github.com | EduAR: Changing how kids learn | EduAR is an AR designed to help ease the transition from in-person classes to virtual classes through our very own AR objects and lessons. | ['Rahul Athreya', 'Eric Sanchez'] | ['2nd Place Entry'] | ['agora', 'dart', 'firebase', 'flutter'] | 1 |
10,081 | https://devpost.com/software/project-protect-equip | Project Protect & Equip
Arnav Aggarwal: Team Member
Pranay Prabhakar: Team Member
Pavan Pandurangi: Team Member
Kinllen Peng: Team Member
Arnav Batta: Team Member
Harmon Bhasin: Team Member
Inspiration: After learning of our task and how we could help the Big Brothers Big Sisters of America organization, we were instantly inclined to combine machine learning in our efforts to help littles pair with bigs, as so many littles remain unpaired today. This is when we truly realized the goal and scope of our project, inspired especially by a meeting with one of our mentors.
What it does: Our project provides an in-depth, comprehensive survey for littles in a mobile setting, addressing many problems that may plague littles currently. We then collect the data obtained from the survey in our database, hosted by Firebase. Of course, this information will be vital in pairing littles with bigs and saving some valuable time. Furthermore, there is an admin portal feature in which the survey's results are displayed on a map of the United States. This map shows where bigs need to focus their attention most across the country, allowing them to spread and utilize their resources accordingly.
How I built it: Our team built this by making a JSON file of questions and question types (keys and values) that we could read in from another file. Incorporating Firebase to keep track of the data obtained allowed us to store valuable information provided by the littles. Based on the database collection and a series of other inputs (such as happiness of a certain region, COVID-19 impact on that region, and average survey response on a scale from 1-10), we were able to produce a machine learning model, displayed in the form of a map of the United States, in which the needs of the littles are highlighted accordingly. Finally, to make the UI appealing for our audience full of littles, we used some basic free CSS templates and styled it using html and css elements.
Challenges I ran into: Some challenges that we ran into were how to organize all the survey questions that we had compiled earlier. Initially, we actually hard-coded all the survey questions into an html file. Realizing how inefficient we were, we applied the idea of loose coupling, refactoring our survey questions to a JSON file and allowing the logic of our survey to simply run through the data. It was challenging to think of a way to make our code more efficient in that manner, but once we refactored it we made our lives a lot easier!
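The loose-coupling refactor described above can be sketched in a few lines: questions and their types live in JSON as key/value pairs, and the survey logic simply iterates over the data. The file contents and widget names below are illustrative, not the project's real survey:

```python
# Sketch of the JSON-driven survey: {question prompt: question type}
# pairs are loaded from data, so adding a question means editing JSON,
# not HTML.
import json

QUESTIONS = """{
  "How are you feeling this week?": "scale_1_to_10",
  "Do you have reliable internet access?": "yes_no",
  "What is your zip code?": "text"
}"""

def load_questions(raw):
    """Parse the JSON into an ordered list of (prompt, type) pairs."""
    return list(json.loads(raw).items())

def render(prompt, qtype):
    """Map each question type to a placeholder widget."""
    widgets = {"scale_1_to_10": "<slider>", "yes_no": "<radio>", "text": "<input>"}
    return f"{prompt} {widgets[qtype]}"
```

The rendering loop then just does `for prompt, qtype in load_questions(...)`, which is the "logic runs through the data" idea the team describes.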
Accomplishments that I'm proud of: I'm proud that our team was able to come together and finish a hackathon together. The odds were against us, given all of our busy schedules and the remote nature of this hackathon (and all work in general). Additionally, a lot of us were participating in our very first hackathons, so I'm glad that we were able to develop something as a team, something that is effective and an innovative solution to the problems littles face. Accomplishing our first hackathon as a group was just a start, and I hope we, as a group, can work towards bigger and better solutions in the near future.
What I learned: From this experience, we learned many valuable skills and processes. First and foremost, the ability to collaborate as a team on this project taught us how efficient good communication can be and expanded our horizons on what we can accomplish as a team. We also learned and improved on many skills such as debugging (especially someone else’s unfamiliar code), integration of useful external resources such as firebase for user authentication and High-charts for incorporating our machine-learning model, being able to work with GitHub from the command-line, etc. It is also worth mentioning that we learned to persevere and stay determined, given that it was easy to give up and quit at many times given our busy college lives and other setbacks/delays.
What's next for Project Protect & Equip: We hope that our project can connect as many littles to bigs as possible, and that we can have a positive impact on the way Big Brothers Big Sisters of America organization provides aid for littles' development and growth. We seek to participate in more hackathons in the future, continuing in our growth, learning from new experiences, and contributing positively to society.
Built With
css
firebase
highcharts
html
javascript
python
vscode
webstorm
Try it out
github.com | Project Protect & Equip | Our group is interested in using big data to solve the problems littles deal with. We proposed an app that connects big and littles in a survey for littles to gain insight into the little’s situation. | ['Arnav Aggarwal', 'Pranay Prabhakar'] | ['3rd Place Entry'] | ['css', 'firebase', 'highcharts', 'html', 'javascript', 'python', 'vscode', 'webstorm'] | 2 |
10,081 | https://devpost.com/software/hobby-boc-dubx10 | Stella Wang
Tuan Lam
Tyrell Robbins
Inspiration
As we stay in quarantine to protect others and ourselves, our lives have definitely become more mundane. Children have felt this effect more than others, as they cannot go to school to see their friends or participate in any outdoor activities to stimulate their minds.
To help bring some excitement to children across America and provide learning opportunities for both Little and Big, our team envisioned Hobby Boc.
What it does
Users fill out a form and the information they enter is saved in our MongoDB database.
We then query the database to get important information such as shipping addresses, names, donations, etc.
How I built it
My team used the MERN stack to develop this web app.
M - MongoDB is an open-source database
E - Express is the web application framework for Node.js
R - React is a front-end Javascript library to build user interfaces
N - Node.js is a run-time environment that executes Javascript code on a server.
What I learned
At this hackathon my team learned about the MERN stack and discovered which end of development each of us liked. I personally enjoyed creating the backend server and the schemas that were needed for the database. Stella enjoyed working on the front end and making the app responsive and pretty. Tyrell enjoyed both aspects and in the future wants to be a full-stack developer.
There were some roadblocks along the way that we had to troubleshoot individually, leading us to research our own problems and come to solutions ourselves.
We developed and polished our soft skills such as communication, in order to get this project done on time.
What's next for Hobby Boc
Right now our web app has a great foundation, but there are some things the team would like to add to make it even better.
Add a resource page with activities littles can do while they wait for their box
Increase the types of boxes that we have
Build full native iOS and Android apps
Built With
bootstrap
mern
Try it out
github.com | Hobby Boc | Learning in a Box | ['Tuan Lam', 'Stella Wang', 'Tyrell Robbins'] | ['Technical Innovation Award-Sponsored by MindTree'] | ['bootstrap', 'mern'] | 3 |
10,081 | https://devpost.com/software/codelinc-76wde1 | virtual team photo!
Inspiration
Our goal was to blend the accessibility of Khan Academy with the enjoyment of social media.
What it does
Our app allows the user to browse and save events, resources, and workshops posted by the nonprofit. If a location is included, the user can reference an embedded google maps function. If a time is included, the user can automatically add the event to iCal.
How we built it
The events are stored in Google's firebase, with a collection for each tab in the app. The app itself was constructed in Xcode, using Swift as the language. The code is attached to this submission.
Challenges we ran into
Securing the firebase database from adversaries while allowing the app to access it seamlessly, formatting the different screens to be aesthetically pleasing, and choosing appropriate event categories were all important challenges that we needed to overcome.
Accomplishments that we're proud of
The embedded google maps and calendar functions are the two accomplishments that we are most proud of.
What we learned
We learned to always keep the target audience of our design in mind. It's easy to get lost in the code, but we tried to always be goal-oriented in our work so we could better serve the non-profit.
What's next for CodeLinc
We're excited for another opportunity to showcase our skills!
Built With
firebase
ical
swift
xcode
Try it out
github.com | Study Buddy | Link students with resources, events, and workshops to increase participation and accessibility. | ['Mark Laborde', 'Sydney Essler', 'Ian Thakur'] | [] | ['firebase', 'ical', 'swift', 'xcode'] | 4 |
10,081 | https://devpost.com/software/speech-assist-0e5ckm | Inspiration
Many people fear public speaking. But if they had someone to talk to before their speech, whether one listener or a group of audience members who could check their speech and give advice, they might fare better in a public speaking situation. This is what motivated us to build this app: a speech and debate practice app that helps Big Brother and Big Sister members overcome each other's public speaking fear by pairing them up and letting them practice their speech and debate skills with each other.
What it does
What makes the app unique is our matchmaking system, which pairs people up into real-time video/audio chats where they can practice speech and debate with each other, and our AI, trained to catch common stutter words such as "uh", "ah", and "and", so that users can practice speech and debate even when offline.
How I built it
We built the AI with Python and CMUSphinx.
We built the matchmaking backend and frontend with Node.js, WebRTC, and WebSocket. The matchmaking is built with WebSocket and the video/audio chat is built with WebRTC.
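The matchmaking side reduces to a first-come, first-served queue: a user who connects is either paired with whoever is already waiting or left to wait. A hypothetical sketch of that pairing logic (the real version runs over WebSocket in Node.js; this Python model only shows the queue behavior):

```python
# First-come, first-served matchmaking queue: the first unpaired user
# waits, and the next distinct user to join is paired with them.
from collections import deque

waiting = deque()

def join_queue(user):
    """Pair `user` with the longest-waiting user, or enqueue them.

    Returns the (partner, user) pair once a match is made, else None.
    """
    if waiting and waiting[0] != user:
        return (waiting.popleft(), user)
    waiting.append(user)
    return None
```

On a match, the server would then hand both peers the signaling information needed to open the WebRTC call.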
Challenges I ran into
Researching WebRTC and all the technology stacks used for a peer-to-peer real-time video/audio chat.
Researching how to build an AI that can catch stutter phrases. We settled on CMUSphinx eventually.
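Once CMUSphinx has produced a transcript, flagging the stutter words becomes simple token matching. An offline illustration of that step (the filler list and report format are assumptions based on the description above, not the project's code):

```python
# Count filler ("stutter") words in a speech transcript. The filler set
# is illustrative; a real trainer would tune it per language and speaker.
FILLERS = {"uh", "ah", "um", "and"}

def filler_report(transcript):
    """Return the filler count and the filler rate per spoken word."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    count = sum(1 for w in words if w in FILLERS)
    return {"fillers": count, "rate": count / max(len(words), 1)}
```

The rate (fillers per word) is a more useful practice metric than the raw count, since speeches differ in length.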
What's next for Speech Assistant
We will subdivide it into more categories, such as political debates, speech practice, impromptus, etc. And students will be leveled according to their skill, and they will be matched with people of their own level.
Built With
javascript
Try it out
github.com | Speech Assist | A speech trainer app that pairs up members of Big brother and Big sister | ['Justin Li', 'Devin Han', 'Max Chen'] | [] | ['javascript'] | 5 |
10,081 | https://devpost.com/software/phone-it-we-got-it | GIF
GIF
GIF
GIF
GIF
GIF
GIF
GIF
GIF
GIF
GIF
GIF
GIF
GIF
GIF
Inspiration came from the love and support of our families and community.
What it does: it helps people find a trusted WiFi hotspot, and/or helps by giving them further resources.
How I built it: by using multiple APIs hosted on AWS S3.
There was no challenge hard enough for us.
What I learned: teams shouldn't cram on the same day their project is due.
What's next for Phone it, we got it: we're going to market in 2021.
Built With
ai
amazon-web-services
dialougeflow
google
google-geocoding
twilio
Try it out
github.com | Phone it, we got it | Low Cost, Accessible way for everyone and anyone to get connected and plugged | ['Franklin Maggay', 'Anthony Zayas', 'paige arnold'] | [] | ['ai', 'amazon-web-services', 'dialougeflow', 'google', 'google-geocoding', 'twilio'] | 6 |
10,081 | https://devpost.com/software/quarantine-coders-codelinc-7 | The main class to display, fetch, and add the messages.
Messages endpoints reference
Decks endpoints reference
Cards endpoints reference
Inspiration
We wanted to learn new technologies, create an innovative project, and help a larger cause. Hunter's background in Japanese learning was an inspiration to build an app focused on memory retention and to help people learn in the ways he did.
What it does
Bigs can create flashcards for the littles to study, or the littles can make flashcard sets themselves to study. The bigs and littles can communicate with each other through a messaging system implemented in the web app.
How I built it
The frontend was done using React, our API was created using Spring Boot, and our database was created using MySQL. React was used for the ease of using components and their ability to individually update without the page refreshing. Spring Boot was used to create an API that connected to our database. This was done by creating several GET and POST endpoints. Finally, MySQL was used to create our database server.
Challenges I ran into
Our challenges were competing demands, expected availability, React component updates, and a memory leak. Competing demands and expected availability were challenges due to our schedules involving school, jobs, and the need to learn new web development concepts. React component updates were a challenge to learn because they were so impactful to our website; we had to learn all of the concepts around useState and Hooks in React and connect these with GET and POST requests. The memory leak was a challenge because connecting our web application to the database caused our application to recursively perform API calls, using much of our computer's memory.
Accomplishments that I'm proud of
We're proud of learning all of these new computer science concepts and applying them to the same project. Moreover, juggling all of our schedules together was a difficult task but being able to overcome it was a valuable experience.
What I learned
We learned the React framework, Java Springboot, and mySQL server. We learned how to use these all together and we were exposed to many new concepts such as components, useState, dependency injections, and request mapping.
What's next for Quarantine Coders codeLinc 7
Learning how to use these frameworks better and working on new projects with them. Utilizing these concepts with Amazon Web Services and possibly buying a server to use would be a great opportunity.
Built With
css3
html5
javascript
json
mysql
node.js
react
spring
Try it out
github.com | Quarantine Coders codeLinc 7 | To improve the challenge of learning loss and connecting bigs / littles, our team created a web application that supports messaging and a flash card studying system. | ['Hunter Hawkins'] | [] | ['css3', 'html5', 'javascript', 'json', 'mysql', 'node.js', 'react', 'spring'] | 7 |
10,089 | https://devpost.com/software/breachdirectory | BreachDirectory.tk
Inspiration
I have been hacked before, and being interested in cybersecurity, I decided to figure out how it was done. After some research, I found that the specific username and password combination stolen from me was exposed in a public data breach.
What it does
It uses a neural network to automatically detect, log, and index public data breaches, allowing users to enter an email, username, or password and find out which of their credentials were leaked, and from where.
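The lookup side of such a service can be modeled as an inverted index from breach to leaked credentials. A simplified sketch of that query path only (the neural-network detection and indexing are out of scope here, and the breach names and entries below are invented for illustration):

```python
# Simplified model of the breach-lookup query: each indexed breach maps
# to the set of credentials it exposed, and a query returns every breach
# containing the queried value. A production service would store hashes,
# not plaintext credentials.
breach_index = {
    "ExampleSite 2019": {"alice@example.com", "hunter2"},
    "OtherApp 2020": {"alice@example.com", "bob"},
}

def lookup(credential):
    """All breaches in which `credential` appears, sorted by name."""
    return sorted(name for name, leaked in breach_index.items()
                  if credential in leaked)
```

In the real site this index lives in PostgreSQL rather than an in-memory dict, so the query becomes a SQL lookup over indexed columns.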
How I built it
The neural network is written in Python using a dataset I produced myself from previous public databases.
The database was built on PostgreSQL.
The website was built with AngularJS, HTML5, CSS3, and Bootstrap. We have integrated Google's reCAPTCHA v3 to prevent illicit usage.
What's next for BreachDirectory
The API is currently under development due to new features such as added source details and an update to the neural net dataset.
Built With
angular.js
css3
html5
postgresql
python
Try it out
breachdirectory.org
github.com | BreachDirectory | Secure Your Online Future | ['Rohan Patra'] | ['1st Place'] | ['angular.js', 'css3', 'html5', 'postgresql', 'python'] | 0 |
10,089 | https://devpost.com/software/uncle-jon-gatekeeper-your-digital-a-i-security-guard | Security
Built With
security | Security | Security | [] | [] | ['security'] | 1 |
10,089 | https://devpost.com/software/emission | This showcases the initiation code, which projects the walkthrough screens however does NOT repeat after the user has used the app once.
This code enables our machine learning implementation. Users open their camera then we run the ML model to identify the brand.
This is the code which implements our algorithms. It displays tips according to your answers to the questions and your carbon footprint.
Inspiration
Reducing your emissions is, frankly, hard. Buying from sustainable companies, tracking how much carbon you output, realizing the environmental mistakes you’re making are all very tedious tasks. We wanted an app which allows for an easy way to identify how to be more environmentally responsible, and in turn help make sustainable choices. We wanted to know how much carbon we output, what companies to buy from, and how to reduce our carbon footprint on a personal level, so we got to work!
What it does
Our app has 3 main components:
1) Carbon Calculator
Carbon Calculator allows you to take a short survey and then uses our algorithm to help calculate how much carbon you output.
2) Product tracker
Our product tracker uses machine learning and computer vision to identify companies and provide details about their environmental practices.
3) eMISSION tips
We use data from our carbon calculator to provide customized tips to help you lower your carbon footprint, including unique ideas which are not hard to do in your daily lives.
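The carbon calculator described in component 1 is, in essence, a weighted sum over survey answers. A hypothetical sketch of that shape (the activities and emission factors below are placeholders, not the app's research-derived values, and the real app is written in Swift):

```python
# Hypothetical shape of the carbon-calculator algorithm: each survey
# answer is multiplied by a per-activity emission factor and the results
# are summed. Factors are illustrative placeholders (kg CO2 per unit).
EMISSION_FACTORS_KG = {
    "miles_driven_per_week": 0.4,
    "meat_meals_per_week": 2.5,
    "loads_of_laundry_per_week": 0.6,
}

def weekly_footprint_kg(answers):
    """Estimated weekly CO2 output in kg from the survey answers."""
    return sum(answers.get(activity, 0) * factor
               for activity, factor in EMISSION_FACTORS_KG.items())
```

The same per-activity breakdown is what lets the tips feature target the largest contributors for each user.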
How we built it
We built our entire application using Swift, a programming language meant for iOS development. We placed a huge emphasis on good user interface, thus we used the UIKit and SnapKit frameworks to make better, more appealing screens within our application.
As for our machine learning model which is used to detect the company logos, we used CoreML and CreateML to build the models to identify companies by their logo. This took tons of labeling, image scraping and data gathering but allowed us to build an effective model which provided accurate predictions.
Finally, our carbon tracker was built using Firebase and Swift, including logging user data and running our algorithm to classify and quantify information to determine the user’s carbon output. The research aspect was lengthy, as we had to do many conversions and deep searches to find bits of information which combined to make an accurate algorithm.
Challenges we ran into
We ran into challenges processing with our machine learning model, specifically making a pre-processed image which we could run our machine learning model on top of. We eventually figured out how to make a pixel buffer which would easily allow us to run our model, and this simplified our implementation a great deal.
What's next for eMISSION
In the future, we hope to add machine learning to our carbon-tracking algorithm, using prior data to make better and more precise calculations. We also want to expand our computer vision model to include more companies so users are not restricted to our selection.
Built With
computer-vision
coreml
ios
machine-learning
swift
uikit
xcode | eMISSION | Help users make more environmentally sustainable choices using machine learning, surveys and swift! | ['Anish Kataria', 'Krish Malik'] | [] | ['computer-vision', 'coreml', 'ios', 'machine-learning', 'swift', 'uikit', 'xcode'] | 2 |
10,089 | https://devpost.com/software/tidal-predictions-kxrijb | Inspiration
"Britons warned to prepare for power blackouts in coronavirus lockdown" is a headline we came across recently.
Upon reading the article that followed, we realised that the electricity demand is at its peak due to people being at homes all day and the approaching summer. As our team's interest lies in renewable energy systems, we thought of an idea to maximise power generation from existing systems such as tidal lagoons.
What it does
A tidal lagoon has key components such as a sluice gate (to allow water to enter), a turbine and a generator. Currently, predicted tidal heights are often far from the heights observed on the day.
The total observed height on any given day is the sum of the predicted tide and a residual term, also called a surge tide. This residual term is brought about by weather changes, winds, the moon's position, etc., and is hard to predict. However, it changes observed water levels and leads to a loss in power (in megawatts) that could otherwise have been generated.
To solve this, at a time when power output is needed most due to lockdown, we attempt to predict the total observed height by predicting the residual term. Essentially, this would help generate more power than is currently possible.
We have used Machine learning to solve this problem.
Our predictor predicts the total observed height on the day with 93.59% accuracy, hence maximising power output.
How we built it
We built it using python.
A 7 step process was used.
1) Data collection: We downloaded this dataset from the British Oceanographic Data Centre website for a site named "Newport". The data contains values from 2015-2019 at 15-minute intervals.
2) Data preparation and visualization: Performed by cleaning the data and creating probability density plots and pairplots.
3) Model Selection: Upon evaluation, random forest turned out to offer the best combination of accuracy and speed.
4) Model Training: Did an 80:20 train:test split and trained our random forest model on the training data. Then used this model to predict values based on test data inputs.
5) Evaluation: Compared the accuracy of the predicted values to the actual values.
6) Parameter tuning: Tuned parameters such as n_estimators, which is the number of decision trees the model uses. Found the optimum number to be 128.
7) Accuracy: Accuracy is 93.59% and Mean Absolute Error (MAE) is only 0.23 degrees.
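As a sketch, steps 4-7 above might look like the following. The data here is synthetic (the real inputs come from the BODC "Newport" records) and the feature columns are hypothetical; n_estimators=128 is the tuned value from step 6.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 15-minute tide records (the real data comes from
# the British Oceanographic Data Centre "Newport" site, 2015-2019).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))  # hypothetical features, e.g. wind, pressure, predicted tide
y = X @ np.array([0.5, -0.2, 0.1, 1.0]) + rng.normal(scale=0.1, size=1000)

# Step 4: 80:20 train/test split, then train the random forest.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=128, random_state=0)  # step 6: tuned value
model.fit(X_train, y_train)

# Steps 5 and 7: evaluate predictions against held-out observations.
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"MAE: {mae:.2f}")
```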
Challenges we ran into
Certain challenges we ran into included collecting and preparing the dataset. Additionally, figuring out the correct model took time and effort, as each model would take minutes to run.
Accomplishments that we're proud of
We are proud that we were able to increase the accuracy of our model from 42% to 93.59%.
What we learned
We learned and applied lots of Machine learning!
What's next for Tidal Predictions
Looking at fine-tuning parameters to further increase accuracy and make the program run quicker. We also wish to develop an app/website that could provide a close-to-actual prediction for anybody seeking to know tidal heights, e.g. fishermen.
Built With
python
Try it out
github.com | Tidal Predictions | Maximising energy harvest in a time where electricity usage is at peak due to lockdown. | ['Arushi Madan', 'Rohan Chacko', 'Arun Venugopal'] | [] | ['python'] | 3 |
10,089 | https://devpost.com/software/prediction-of-forest-fires | Inspiration
Our idea originated from the bushfire crisis in Australia. The number of forest fires in the world is increasing and the destruction it causes is immense. It endangers the lives and livelihoods of local communities.
What it does
The Python code uses machine learning to train a model on historic data of input factors such as wind, temperature, drought moisture code, etc. that may or may not have resulted in a fire. The code can then predict whether there will be a fire based on a new, unique combination of these input factors.
The web app is used to demo the prediction. The user enters various values and can choose which model they want a prediction from. We have trained five models: Linear Regression, RANSAC, Random Forest, Support Vector Regression and Stochastic Gradient Descent.
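A minimal sketch of how such a five-model comparison could be wired up; the feature names and synthetic data are placeholders, not the project's actual forest-fire dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, RANSACRegressor, SGDRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Placeholder data: columns might be wind, temperature, drought moisture code, humidity.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The five models mentioned above, each evaluated by RMSE on held-out data.
models = {
    "Linear Regression": LinearRegression(),
    "RANSAC": RANSACRegressor(random_state=0),
    "Random Forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "SVR": SVR(),
    "SGD": SGDRegressor(random_state=0),
}
rmse = {}
for name, m in models.items():
    m.fit(X_train, y_train)
    rmse[name] = mean_squared_error(y_test, m.predict(X_test)) ** 0.5
print(rmse)
```

The model with the lowest RMSE would then be served as the default in the web app.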
How I built it
Using Python, HTML, CSS.
Challenges I ran into
Integrating the web app to the model.
Accomplishments that I'm proud of
Learning how to use HTML and CSS and integrating Machine Learning Models into the web app.
What I learned
Lots of tricks in coding + developed my skills in Python, HTML and CSS.
What's next for Prediction of Forest Fires
As of now the RMSE is relatively high, we believe this is due to over-fitting of data. We aim to sort this out in the next version of the same.
Built With
css
html
json
python | Prediction of Forest Fires | Machine Learning to Predict Forest Fires and Demo using manual input of values | ['Arushi Madan', 'Arun Venugopal', 'Aerica Singla'] | ['3rd Place'] | ['css', 'html', 'json', 'python'] | 4 |
10,089 | https://devpost.com/software/empowering-fashion-designers-across-the-globe | Inspiration
There are numerous fashion brands making a strong social impact but restricted to their local markets. Because we are aware of the role that fashion plays in our lives and the lives of those that create it, we decided to support these brands and bring them to a global mainstream market via our app Fashion : Design to the nines. Additionally, due to COVID-19, fashion designers and their plans have been thrown off track as fashion weeks have been cancelled.
What it does
Our solution is an application that allows fashion designers to collaborate, design, and launch their clothing lines, and gives them a chance to showcase their lines at our virtual fashion week. It is a platform bringing together people with diverse social and cultural identities through their designs, giving them a chance to interact with each other, investors, manufacturing establishments, apparel companies, and design firms.
How I built it
We used C++, Adobe XD
Expected impact our solution will have on Culture and Creative Industries
We are not oblivious to the fact that the creative industry contributes 8% of the national income. The solution we have come up with aims not only to increase revenue and employment but also to empower local brands across the globe.
What's next for Empowering fashion designers across the globe
Our aim would be to bring together all design needs in one app, and so, we would wish to integrate design software (CAD software) within our app and allow for real-time editing with collaborators. We wish to execute the application and launch it in the near future on the app store and play store. We would like fashion designers to try it out and provide us with feedback that can be further incorporated into improving the app.
Built With
adobe
c++ | Empowering fashion designers across the globe | Empowering the fashion designers all across the globe during and after COVID-19. | ['Arun Venugopal', 'Aerica Singla', 'Arushi Madan'] | [] | ['adobe', 'c++'] | 5 |
10,089 | https://devpost.com/software/machine-learning-to-predict-end-of-season-crop-yield | Vision for the iOS application
Inspiration
When CodeWizards presented their challenge of improving sustainability, we as a team were determined to tackle it. Back home in India, we have heard of numerous cases of farmers committing suicide due to crop failure and crop loss. Alongside this, the amount of resources (pesticides etc.) invested in growing crops that eventually fail is huge, both in terms of cost and environmental impact.
To tackle this issue, we wanted to use machine learning to help the government and farmers predict their yield at the end of each season depending on various factors such as area, season, weather conditions, methane levels, soil quality, etc.
What it does
Using Machine learning, we have trained the Random Forest Regressor model on the yield that was produced in previous years as a result of input parameters such as area and season. The model has been tested for accuracy and can be used to approximate the yield for future years. The model can be embedded into a web or mobile application but due to shortage of time we have not been able to embed it yet. The concept has been demonstrated through mobile application graphics.
We wish to delve deeper than the top-level prediction approach we have used and make this a personalised service that lets each farmer monitor their own farm. We aim to introduce more parameters beyond the existing 'area' and 'season', such as soil quality (pH sensor), that would help farmers detect issues early on and accordingly invest resources (e.g. pesticides, irrigation methods) to fix avoidable issues.
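Since 'season' is categorical, one way to feed it into a Random Forest Regressor is one-hot encoding. The sample rows below are invented for illustration and do not come from the project's dataset:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Invented sample rows standing in for the historical yield data.
df = pd.DataFrame({
    "area":    [120.0, 80.5, 200.0, 150.0, 95.0, 175.0],
    "season":  ["kharif", "rabi", "kharif", "summer", "rabi", "kharif"],
    "yield_t": [300.0, 180.0, 520.0, 310.0, 200.0, 450.0],
})

# One-hot encode the categorical 'season' column so the forest can use it.
X = pd.get_dummies(df[["area", "season"]], columns=["season"])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, df["yield_t"])

# Predict the yield for a new hypothetical plot, aligning its dummy columns
# with the columns seen during training.
new_plot = pd.DataFrame({"area": [140.0], "season": ["rabi"]})
new_X = pd.get_dummies(new_plot, columns=["season"]).reindex(columns=X.columns, fill_value=0)
pred = model.predict(new_X)
print(pred)
```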
How we built it
We built it using python, html, css and graphics.
Challenges we ran into
We got stuck merging ML code written in Python with an iOS Swift app.
Accomplishments that we're proud of
First Hackathon and a working code!
What we learned
Through the presence of the sponsors and experienced hackers, we learnt quite a few tricks to do things quicker!
What's next for Machine Learning to predict end-of-season Crop Yield
Delve deeper and make it a personalised service for each farmer to access and monitor their own farms!
We also wanted to address the 'Goose challenge' using our ML skills, so we wish to develop a program trained on images of the AstonGoose. We then hope to add a scanner and goose sensor to detect geese on the farm. If and when detected: CHOP, CHOP, CHOP.
Built With
css
django
html
python
Try it out
github.com | Machine Learning to predict end-of-season Crop Yield | Using Machine learning to predict the production of crops at the end of each harvest season to better plan crop growth and reduce crop losses. | ['Arushi Madan', 'Arun Venugopal'] | ['Code Wizards Challenge'] | ['css', 'django', 'html', 'python'] | 6 |
10,089 | https://devpost.com/software/3d-health-hack | this is a photo of my mother Dr ibtissam and I am very proud of her/ shes my secret power
Doctors using our face shield
production face shields
3D printed face shield
mass production with the help of the Lebanon response team
The main coronavirus center in Lebanon, Rafic Hariri Hospital, accepted the design of the face shield and needs large quantities
Superokk logo
some web photo
Some web photo
We have better quality videos in the YouTube channel
https://www.youtube.com/watch?v=QcwDAETHzvg
What are the main solutions proposed?
1_ Self-employment job opportunities (during and after the COVID-19 period)
2_ Provide hospitals with personal protective equipment (especially in countries in a bad economic situation)
Story:
The progress is divided into phases; we are now at phase 4.
Phase 1:
After countries locked down due to COVID-19, shipping was not an option. Solutions had to be cheap and manufactured in-country. We used 3D printing to solve this issue. (You may think that 3D printing is slow and not efficient for mass production, but we invented a 3D model that reduces the printing time of a personal protective equipment face shield from 6 hours to ~2 minutes.)
phase 2:
After that, our country Lebanon was not able to afford to buy face shields and ventilators along with other medical equipment. Here a team of professional engineers was formed (Lebanon response team), and we proudly implemented most of the projects, including the face shield, using 3D technology. We were able to support the main coronavirus center in Lebanon with the needed equipment. It was all done remotely, which is the key to phase 3.
phase 3:
After that a separate team was formed (France, Holland, Lebanon) to participate in a hackathon sponsored by the Friedrich Naumann Foundation. The idea now is how to transmit the same success to other countries, remotely. And we did it: we won, and we were able to get sponsored. We are building a platform (website) that links 3D printer owners to people who need 3D printed things.
Advantage of such a website: we will be able to:
1_ Provide self-employment job opportunities during and after the COVID-19 period.
2_ Help countries facing a bad economic situation to provide 3D-printable personal protective equipment without shipping raw material!
3_ Create a worldwide community that is interested in 3D technology.
The estimated time for the website to be done is ~4 weeks from today.
Meanwhile we should proceed to phase 4 :
Phase 4 (currently we are here): Marketing strategy and social media:
_YouTube channel:
_We realized after studying the market that we need the process to be very interactive to succeed.
Since we are more into the "polling" business-model concept than the "pushing" concept, we believe that building an educational platform on YouTube will generate leads for the mechanism sustainably. So the main idea now is to provide high-value content on the YouTube platform to generate leads and build a brand.
Techniques to be used (after studying the way YouTube promotes its videos, based on views and how interactive they are):
1_ Animation:
Higher efficiency (click-through rate).
2_ Interactive videos:
To maximize the interactive part, we will include the new technology of interactive video.
3_ Business startups:
We will provide users with methods for effective use of the program.
4_ Multi-linguistic:
We will upload in several languages so we can spread the message to the maximum.
5_ Daily uploads:
YouTube promotes the videos of users who upload more (high-quality content).
Main points
How will the website help the medical field?
The website will locate institutions and people who need 3D printed solutions (for example, face shields).
What differentiates us?
• The marketing strategy that we developed this weekend.
• The sustainability of the business model.
• Community: we are bringing people with the same goals together on this platform.
_ One of our goals is to help the medical field: for example, a doctor who needs a 3D model of a patient's teeth.
Team and how we started, briefly:
Results clearly started to appear: hospitals showed a need, and we were able to provide them with what they needed.
So a team of experts was formed to make this solution achievable in other countries.
Team: Ezzedin Ayoubi, Senior Lead in development at KLM, dev team mentor
Hassan Hallal, Team Leader at ASSYSTEM E&I, business and management mentor
Ali Hussein: mechanical engineering student, founder of Superokk.
3D printer owners: we have a big database of people who are willing to start supplying hospitals and even individuals.
Anyone skilled in anything who thinks they can add value: kindly don't hesitate to contact us.
Accomplishments that we're proud of:
We are proud that we were able to provide the main COVID-19 center in Lebanon with the face shields needed last week, and that we are able to make some profit to make it sustainable.
Won several competition
Ranked in the top 15 in the worldwide IEEE hackathon
What we learned:
How to turn a weakness into a point of strength.
Group work while social distancing.
Tech to use in execution:
_ In labs, doing experiments for compatibility if someone wants to add a new product (like a sterilizing test)
_ The majority of it will be spent on software that will make the process easier. We list:
_ Upgrade an animation software called **** (I have it, but we need to upgrade to the enterprise level and we need the color versions)
_ A powerful animation app called **** that will give us the ability to make characters move (and to tell stories, to make the content extremely easy to understand)
_ This software is extremely powerful (specifically for us). 3D printing means customization, and this gives us the ability to customize any video for all our viewers!!
_ It will help translate, with text to speech and speech to text, and will join every slide to the inserted voice without the suffering and time loss of meshing
_ Intro videos finalized into highly professional videos in a matter of minutes (the first time we did that, it took days!!!)
we want to make a difference in the digital age.
Digital health is extremely important at this time.
_ It will simplify and accelerate the processes.
_ No shipping requirements, which fits especially countries in a bad economic situation.
_ Higher efficiency, lower time
_ More job opportunities
Never forget: dream big, you can fly.
Environmentally friendly
The 3D printing materials we are using are almost 100% biodegradable and recyclable (good for nature). In addition, we have other backup plans.
Mentors and sponsors
Thank you for the great effort you are putting in; without these competitions we would not have been able to progress this far.
Built With
angular.js
firebase
java
www.superokk.com
Try it out
www.superokk.com
drive.google.com
drive.google.com
drive.google.com
drive.google.com | superokk Health | website link between 3d printer owners and people who need 3d prints | [] | [] | ['angular.js', 'firebase', 'java', 'www.superokk.com'] | 7 |
10,089 | https://devpost.com/software/divoc-e0fywm | Flow chart depicting the working of the whole system.
Homepage of the application
Teacher Login
Student Login
Teacher Dashboard
Student Dashboard
Canvas as a blackboard
Asking question in middle of a lecture
Tab Change alert to gain students attention to the lecture
Inspiration
There is an old saying,
The Show Must Go On
, which kept me thinking about finding a way to connect teachers and students virtually, allowing teachers to take lectures from home, and to develop a completely open-source, free platform different from the other major paid platforms.
What it does
This website is a completely open-source and free tool to use
This website whose link is provided below, allows a teacher to share his / her live screen and audio to all the students connected to meeting by the Meeting ID and Password shared by the teacher.
Also this website has a feature of Canvas, which can be used as a blackboard by the teachers.
In addition, this website contains a doubt box where students can type their doubts or answers to the teacher's questions while the lecture is going on.
This website also has a tab-counting feature, in which the tab-change count of every student is shown to the teacher. This helps ensure that every student is paying attention to the lecture.
Also, teacher can ask questions in between the lecture, similar to how teacher asks questions in a classroom.
How I built it
1) The main component in building this is the open source tool called WebRTC i.e. Web Real Time Communication. This technology allows screen, webcam and audio sharing between browsers.
2) Secondly Vuetify a very new and modern framework was used for the front end design.
3) Last but not the least NodeJS was used at the backend to write the API's which connect and interact with the MongoDB database.
Challenges I ran into
The hardest part of building this website was finding an open source tool to achieve screen and audio sharing. This matters because the COVID crisis has affected most countries' economies due to lockdown; hence, it is of utmost importance that schools and colleges do not need to pay to conduct lectures.
Accomplishments that I'm proud of
I am basically proud of developing the complete project from scratch and the thing that anyone who has the will to connect to students and teach them can use it freely.
What I learned
I learned a new technology called WebRTC which I believe that is going to help me more than I expect in future.
What's next for Divoc
Integrating an exam module and allowing teachers to take exams from home.
Built With
mongodb
node.js
vue
webrtc
Try it out
divoc.herokuapp.com | Divoc | DIVOC - An Antidote For - COVID | ['Sanket Kankarej'] | [] | ['mongodb', 'node.js', 'vue', 'webrtc'] | 8 |
10,089 | https://devpost.com/software/carsale | CarSale
CarSale Web System. JSP Servlet with MySQL
Built With
hibernate
java
jsp
Try it out
github.com | CarSale | CarSale Web System. JSP Servlet and MySQL with JDBC | ['Sachith Mayantha Fernando'] | [] | ['hibernate', 'java', 'jsp'] | 9 |
10,089 | https://devpost.com/software/virtuquiz | Thousands of videos, understand your lessons clearly.
Inspiration
There are so many students around the world who struggle with their studies. Many children don't like traditional learning or existing e-learning methods.
Although there are many learning apps, there are some features I thought of that no other app has. I wanted to create a learning app including all these features, and thus Virtuquiz, which is not limited to quizzes, was born....
What it does
Virtuquiz is a learning app which anyone can download on their mobile phone to start learning. The app is recommended for students of grades 6-12, but other grades will be added soon.
Virtuquiz has 2 main sections, one is learning and the other is quizzes.
Learning Section
The learning section features 3 sub-categories which include videos, a homework checker and an extra knowledge bot.
Videos
There are thousands of videos under different topics which you can refer to. The video section consists of videos from the online school Khan Academy. Watching videos here is simple: scroll down the topic list, select a topic, then select a video and watch it. All videos are in English.
Homework Checker
This is a feature where anyone can submit their homework for a re-check before submitting it to a teacher. You can send either a picture or a document. We will then re-check it with automated as well as manual systems and report whether the work is correct, or point out and analyze the mistakes. We have clearly said that no one may use this feature for cheating.
Extra Knowledge Bot
This bot, called the Virtubot, can be used for learning good qualities and learning about society. This is an essential part which education systems have missed out on today. The Virtubot is still under development; it only has 3 questions yet. Using it is simple: the bot asks questions, for example, how will you handle a situation where your friend is scolding you for something you didn't do?
There will be some options for what you can do. You will have to choose the wisest solution. You will be judged and given feedback ("You are rude", or "Very good, you are generous").
Quizzes
After you have learned using the video feature you can check your knowledge using the quizzes. There are 20 quizzes with 10 questions each at the moment, more will be added too. Quizzes are under 5 main topics. (Science, History, Technology etc.)
Answering questions in quizzes is simple: all the questions are multiple choice, so you just have to select an answer and press next. Finally, after finishing all 10 questions, you will get a report on your performance. The pass mark for all quizzes is 70%.
How I built it
The Virtuquiz app was built using different app-building platforms; the questions were created by me with the help of online articles. The video feature was added in collaboration with Khan Academy videos. The Virtubot was built using the virtual bot creator. The app was finally compiled using Android Studio.
Challenges I ran into
There were many challenges.
The first was finding videos; I couldn't make all the videos myself. But finally I found a Khan Academy feature which allows you to add the videos that belong to them.
Another challenge was creating the quizzes: I had to make 200 questions and add different answers. This was all done within 12 hours.
The Virtubot was also difficult to create. I failed at creating the bot and integrating it successfully at the beginning, but later I was successful.
Accomplishments that I'm proud of
I am proud of adding a bot which is a unique feature and also a feature that plays a role in social-good.
Also I am proud of successfully creating this app.
What I learned
While building my app, I had to read many educational articles and gained a lot of knowledge through this. It was also one of the most difficult apps I have built; it really taught me a lot about programming.
What's next for Virtuquiz
I have to let people know about my app; although it's good and working, many don't know that something like this exists. So I need to promote it.
I will also have to develop this app further in the future.
Built With
android-studio
appsgeyser
appy-pie
gimp
Try it out
github.com
play.google.com | Virtuquiz | The Ultimate Learning App, quizzes, video lessons and even problem solving bots included... | ['Senuka Rathnayake'] | [] | ['android-studio', 'appsgeyser', 'appy-pie', 'gimp'] | 10 |
10,089 | https://devpost.com/software/handsfree-basin | Inspiration
This is a pandemic situation. Be a part of the mission against the coronavirus.
What it does
It can be used in public spaces, slum areas, or army/medical camps. It is also useful for isolated people.
How I built it
Using Arduino, hardware components, sensors, etc., and coding
Challenges I ran into
Lack of components or accessories
Accomplishments that I'm proud of
Really helpful
What I learned
Proud to be an Indian, using my knowledge for the nation
What's next for Handsfree basin
Helping others..
Built With
knob
machine-learning
mechanical-parts-like-spring
pedal
Try it out
drive.google.com
github.com | Defender (Handsfree basin and Corona Rakshak) | Restricted the transmission of Coronavirus from infected person to healthy person through surface. | [] | [] | ['knob', 'machine-learning', 'mechanical-parts-like-spring', 'pedal'] | 11 |
10,089 | https://devpost.com/software/providing-vulnerable-workers-with-legitimate-job-postings | Inspiration
The COVID-19 pandemic is affecting economies on every continent. Unemployment rates are spiking every single day: the United States reports around 26 million people applying for unemployment benefits, the highest recorded in its long history; millions have been furloughed in the United Kingdom; and thousands have been laid off around the world.
These desperate times provide a perfect opportunity for online scammers to take advantage of the desperation and vulnerability of the millions of people looking for jobs. We see a steep rise in these fake job postings during COVID-19.
In the grand scheme of things, what may start off as a harmless fake job advert has the potential to end in human trafficking. We are trying to tackle this issue at the grassroots level.
What it does
We have designed a machine learning model that helps distinguish fake job adverts from genuine ones. We have trained six models and have drawn a comparison among them.
To portray how our ML model can be integrated into any job portal, we have designed a mobile application that shows the integration and can be viewed from the eyes of a job seeker.
Our mobile application has four features in particular:
1) Portfolio page: This page is the first page of the app post-login, which allows a job seeker to enter their employment history, much like any other job portal/app.
2) Forum: A discussion forum allowing job seekers from all around the world to share and gain advice
3) Job Finding: The main page of the app which allows job seekers to view postings that have been run through our Machine learning algorithm and have been marked as real adverts.
4) Chat feature: This feature allows job seekers to communicate with employers directly and discuss job postings and applications.
How we built it
We explored the data and provided insights into which industries are more affected and what are the critical red flags which can give away these fake postings. Then we applied machine learning models to predict how we can detect these counterfeit postings.
In further detail:
Data collection: We used an open source dataset that contained 17,880 job post details with 900 fraudulent ones.
Data visualisation: We visualised the data to understand whether there were any key differences between real and fake job postings, such as whether fraudulent postings used fewer words than real ones.
Data split: We then split the data into training and test sets.
Model Training: We trained various models such as Logistic regression, KNN, Random Forest etc. to see which model worked best for our data.
Model Evaluation: Using various classification parameters, we evaluated how well our models performed. For example, our Random Forest model had a roc_auc score of 0.76. We also evaluated how each model did in comparison to the others.
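For illustration, the split/train/evaluate cycle above, with a class imbalance echoing the roughly 900 fraudulent out of 17,880 postings, could be sketched like this. The synthetic features stand in for the real text-derived ones:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data: ~5% positive (fraudulent) class.
X, y = make_classification(n_samples=4000, n_features=10, weights=[0.95, 0.05],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# roc_auc is robust to class imbalance, which is why it makes a sensible
# headline metric for fraud detection.
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"ROC AUC: {auc:.2f}")
```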
Immediate Impact
Especially during, but also after, COVID-19, our application aims to relieve vulnerable job seekers of the fear of fake job adverts. By doing so, we would be re-focusing the time spent by job seekers onto job postings that are real, and hence increasing their chances of getting a job. An immediate consequence of this would be decreasing traffic to fake job adverts, which would hopefully discourage scammers from posting them.
Police departments don't have the resources to investigate these incidents, and it has to be a multi-million-dollar swindle before federal authorities get involved, so the scammers just keep getting away with it. Our solution therefore saves millions of dollars and hours of investigation, while protecting workers from being scammed into fake jobs and having their information misused.
Revenue generated
Our Revenue model is based on:
1) Premium subscription availability to job seekers to apply for jobs
2) Revenue from the advertisements
3) Commission from the employers to post the jobs
Funding Split
1) Testing and Development: $ 10,000
2) Team Hire Costs: $ 2000
3) Patent Application Costs: $ 125
4) Further Licensing conversations: $ 225
TOTAL: $ 12,350
Future Goals
We would hope to partner up with LinkedIn or other job portals in a license agreement, to be able to integrate our machine learning model as a feature on their portal.
Built With
adobe
python
Try it out
github.com
xd.adobe.com | Providing vulnerable workers with legitimate job postings | Preventing vulnerable workers from the trap of fake job posting scams | ['Arushi Madan', 'Arun Venugopal', 'Aerica Singla'] | [] | ['adobe', 'python'] | 12 |
10,089 | https://devpost.com/software/faco-fight-against-corona-jfcza9 | GIF
Confusion matrix for our final model
INSPIRATION
A diagnosis of respiratory disease is one of the most common outcomes of visiting a doctor. Respiratory diseases can be caused by inflammation, bacterial infection or viral infection of the respiratory tract. Diseases caused by inflammation include chronic conditions such as asthma, cystic fibrosis, COVID-19, and chronic obstructive pulmonary disease (COPD). Acute conditions, caused by either bacterial or viral infection, can affect either the upper or lower respiratory tract. Upper respiratory tract infections include common colds, while lower respiratory tract infections include diseases such as pneumonia. Other infections include influenza, acute bronchitis, and bronchiolitis. Typically, doctors use stethoscopes to listen to the lungs as the first indication of a respiratory problem. The information available from these sounds is compromised, as the sound has to first pass through the chest musculature, which muffles high-pitched components of respiratory sounds. In contrast, the lungs are directly connected to the atmosphere during respiratory events such as coughs.
PROBLEM STATEMENT
In this difficult time, a lot of people panic if they have signs of any of the symptoms, and they want to visit the doctor.
It isn’t necessary for the patients to always visit the doctor, as they might have a normal fever, cold or other condition that does not require immediate medical care.
The patient who might not have COVID-19 might contract the disease during his visit to the Corona testing booth, or expose others if they are infected.
Most of the diseases related to the respiratory systems can be assessed by the use of a stethoscope, which requires the patient to be physically present with the doctor.
Healthcare access is limited—doctors can only see so many people, and people living in rural areas may have to travel to seek care, potentially exposing others and themselves.
SOLUTION
We provide point-of-care diagnostic solutions for tele-health that are easily integrated into existing platforms. We are working on an app to provide instant clinical-quality diagnostic tests and management tools directly to consumers and healthcare providers. Our app is based on the premise that cough and breathing sounds carry vital information on the state of the respiratory tract. It is created to diagnose and measure the severity of a wide range of chronic and acute diseases such as corona, pneumonia, asthma, bronchiolitis and chronic obstructive pulmonary disease (COPD) using this insight. These audible sounds contain significantly more information than the sounds picked up by a stethoscope. Our approach is automated and removes the need for human interpretation of respiratory sounds; in addition, the app can measure the user's heart rate through the smartphone camera as an extra diagnostic signal.
The application works in the following manner:
User downloads the application from the app store and registers himself/herself.
After creating his/her account, they have to go through a questionnaire describing their symptoms like headache, fever, cough, cold etc.
After the questionnaire, the app records the user's coughing, speaking, and breathing sounds, and captures heart rate as video through the smartphone camera.
After recording, the integrated AI system will analyze the sound recording and heart rate, comparing them with a large database of respiratory sounds. If it detects any specific pattern inherent to a particular disease in the recording, it will enable the patient to contact a nearby specialist doctor.
The doctor then receives a notification on a counterpart of this app, for doctors. The doctor can view the form, listen to the audio recording, and also read the report generated by the app's AI.
The doctor, depending upon the report of the AI, will develop a diagnosis, suggest medicines, or recommend a hospital visit if the person shows symptoms of corona or other serious condition.
In cases where the AI detects a very seriously ill patient, it will also enable the physician to call an ambulance to the users’ location and continuously track the user.
HOW WE ARE GOING TO BUILD IT
We will take a machine learning approach to develop highly-accurate algorithms that diagnose disease from cough and respiratory sounds. Machine learning is an artificial intelligence technique that constructs algorithms with the ability to learn from data. In our approach, signatures that characterize the respiratory tract are extracted from cough and breathing sounds. We start by matching signatures in a large database of sound recordings with known clinical diagnoses. Our machine learning tools then find the optimum combination of these signatures to create an accurate diagnostic test or severity measure (this is called classification). Importantly, we believe these signatures are consistent across the population and not specific to an individual, so there is no need for a personalized database. Following are the steps the app will take:
Receive an audio signal from the user's phone microphone
Filter the signal so as to improve its quality and remove background noise
Run the signal through an artificial neural network which will decide whether it is a usable breathing or cough signal
Convert the signal into a frequency-based representation (spectrogram)
Run the signal through a conveniently trained artificial neural network that would predict the user's condition and possible illness
Store features of the audio signal when the classification indicates a symptom
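To illustrate step 4 of the pipeline above, a magnitude spectrogram can be computed with plain NumPy. This is only a sketch with assumed frame and hop sizes, not the app's actual preprocessing code:

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Split a 1-D audio signal into overlapping frames and return the
    magnitude of each frame's FFT as a (time x frequency) matrix."""
    window = np.hanning(frame_len)  # Hann window reduces spectral leakage
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequencies of a real signal
    return np.abs(np.fft.rfft(frames, axis=1))

# Toy example: 1 second of a 440 Hz tone sampled at 8 kHz
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # → (61, 129)
```

The resulting matrix is what would be fed (after any MEL scaling) to the classifier network in the later steps.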
IMPACT
FACO will help patients get themselves tested at home, supporting in areas where tests and access to tests are limited. This will help democratize care in hard-to-reach or resource-strapped areas, and provide peace of mind so that patients will not overwhelm already stressed healthcare systems. Doctors will be able to prioritize patients with an urgent need related to their speciality, providing care from the palm of their hand, limiting their exposure and travel time.
CHALLENGES WE RAN INTO
No financial support
Working under quarantine measures
Working in different time-zones
Scarcity of high-quality data sets to train our models with
One Feature Related Problem- Legal shortcomings we might face when adding the tracking patient feature
ACCOMPLISHMENTS
We went from initial concept to a full working prototype. We got a jumpstart on organizational strategy, revenue and business plans—laying the groundwork for building partnerships with healthcare providers and pharmacies. On the creative side, we built our foundational brand and design system, and created over 40 screens to develop a fully working prototype of our digital experience. Our prototype models nearly the entire app experience—from recording respiratory sounds to reporting to managing contact, care, and prescriptions with physicians. Technologically, we successfully developed an algorithm for disease classification and have begun the application development process—well on our way to making this a fully functional product within the next 20 days.
You can explore the full prototype here, or watch the demo (and check out our promo gif)!
WHAT WE'VE DONE SO FAR
We wanted to show that the project is feasible. Scientific literature has shown that audio data can help diagnose respiratory diseases. We provide some references below. However, it is unclear how reliable such a model would be in real situations.
For that reason, we used a publicly available annotated
dataset
of cough samples:
It is a collection of audio files in wav format classified into four different categories.
We wrote code in Python that converts those samples into MEL spectrograms. For the time being we are not using the MEL scale, just the spectrograms. We did several kinds of pre-processing of the signals, including data augmentation, then converted all pre-processed signals, along with their categories, into a databunch object that can be used for training artificial neural networks created in the fastai library. The signals within the databunch were divided into training and validation sets.
Because the dataset was small, we used transfer learning. That is, we used previously trained networks as a starting point, rather than training from scratch. We treated the spectrograms as if they were images and used powerful models pre-trained to classify images from large datasets. In particular, we tried two variants of resnet and two variants of VGG differing in their depth (number of hidden layers). This approach implied turning the spectrograms into image-like representations and normalizing them according to the statistics of the original dataset our models were trained on (ImageNet). We first changed the head of the networks to one that would classify according to our categories and trained only that part of the net, freezing the rest. Later on we unfroze the rest of the net and further trained it. We then compared the different models by the confusion matrices that we obtained from the validation set, and finally settled on a model based on VGG19. We exported the model for later use in classifying audio samples through the pre-existing interface of our mobile app.
The results are promising, especially considering the small amount of data that we have available at this moment. We have included an image of the final confusion matrix that shows how our current network can correctly classify all four categories of signal about 50% of the time, far better than the random level of 25%. We conclude that wav files obtained through a phone mic provide information that can be useful for diagnosing respiratory conditions. We are confident that we can vastly improve both the sensitivity and the specificity of our model if we can gain access to larger, more representative datasets.
We provide an image of the final confusion matrix for our model in the gallery.
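The 50%-versus-25% comparison can be read straight off a confusion matrix. The sketch below uses hypothetical counts (not FACO's actual validation numbers) just to show the computation:

```python
def accuracy_from_confusion(matrix):
    """Overall accuracy = trace / total for a square confusion matrix
    whose rows are true classes and columns are predicted classes."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Hypothetical 4-class counts (illustrative only)
cm = [
    [12,  3,  2,  3],   # true class 0
    [ 4, 10,  3,  3],   # true class 1
    [ 3,  4,  9,  4],   # true class 2
    [ 2,  3,  4, 11],   # true class 3
]
acc = accuracy_from_confusion(cm)
print(round(acc, 3))  # 42/80 = 0.525, versus the 0.25 chance level for 4 classes
```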
This is a repository that contains the most important pieces of our work, including some code, the confusion matrix image and the exported final model.
SUMMARY
We are developing digital healthcare solutions to assist doctors and empower patients to diagnose and manage diseases. We are creating easy to use, affordable, clinically validated and regulatory cleared diagnostic tools that only require a smartphone. Our solutions are designed to be easily integrated into existing tele-health solutions and we are also working on apps to provide respiratory disease diagnosis and management directly to consumers and healthcare providers.
Feel free to visit our website for more information. We developed this website using Javascript, HTML, CSS, Figma, and integrated it with Firebase to manage hosting and our database. Thank you for reading, and don't hesitate to reach out if you have any questions!
REFERENCES
Porter P, Claxton S, Wood J, Peltonen V, Brisbane J, Purdie F, Smith C, Bear N, Abeyratne U,
Diagnosis of Chronic Obstructive Pulmonary Disease (COPD) Exacerbations Using a Smartphone-Based, Cough Centred Algorithm, ERS 2019, October 1, 2019.
Porter P, Abeyratne U, Swarnkar V, Tan J, Ng T, Brisbane JM, Speldewinde D, Choveaux J, Sharan R, Kosasih K and Della, P,
A prospective multicentre study testing the diagnostic accuracy of an automated cough sound centered analytic system for the identification of common respiratory disorders in children,
Respiratory Research 20(81), 2019
Moschovis PP, Sampayo EM, Porter P, Abeyratne U, Doros G, Swarnkar V, Sharan R, Carl JC,
A Cough Analysis Smartphone Application for Diagnosis of Acute Respiratory Illnesses in Children, ATS 2019, May 19, 2019.
Sharan RV, Abeyratne UR, Swarnkar VR, Porter P,
Automatic croup diagnosis using cough sound recognition, IEEE Transactions on Biomedical Engineering 66(2), 2019.
Kosasih K, Abeyratne UR,
Exhaustive mathematical analysis of simple clinical measurements for childhood pneumonia diagnosis, World Journal of Pediatrics 13(5), 2017.
Kosasih K, Abeyratne UR, Swarnkar V, Triasih R,
Wavelet augmented cough analysis for rapid childhood pneumonia diagnosis, IEEE Transactions on Biomedical Engineering 62(4), 2015.
Amrulloh YA, Abeyratne UR, Swarnkar V, Triasih R, Setyati A,
Automatic cough segmentation from non-contact sound recordings in pediatric wards, Biomedical Signal Processing and Control 21, 2015.
Swarnkar V, Abeyratne UR, Chang AB, Amrulloh YA, Setyati A, Triasih R,
Automatic identification of wet and dry cough in pediatric patients with respiratory diseases, Annals Biomedical Engineering 41(5), 2013.
Abeyratne UR, Swarnkar V, Setyati A, Triasih R,
Cough sound analysis can rapidly diagnose childhood pneumonia, Annals Biomedical Engineering 41(11), 2013.
FACO APP VIDEO DEMO
LINK
FACO PRESENTATION
LINK
FACO 1st Pilot Web App
LINK
Built With
android-studio
doubango
fastai
firebase
google-cloud
google-maps
java
machine-learning
mysql
numpy
pandas
python
pytorch
sklearn
sound-monitoring-and-matching-api
spyder
webrtc
Try it out
github.com | FACO: Fight Against Corona | A contactless digital healthcare solution to assist doctors and empower patients to diagnose and manage diseases | ['Archit Suryawanshi', 'Oghenetejiri Agbodoroba', 'Ntongha Ibiang', 'Sahil Singhavi', 'Ruthy Levi', 'Navneet Gupta', 'Mohamed Hany', 'Prachi Sonje', 'GAVAKSHIT VERMA', 'Shraddha Nemane', 'snikita312', 'Gauri Thukral', 'udit agarwal', 'Francisco Tornay', 'Rubén Aguilera García'] | ['1st place', 'The Best Women-Led Team'] | ['android-studio', 'doubango', 'fastai', 'firebase', 'google-cloud', 'google-maps', 'java', 'machine-learning', 'mysql', 'numpy', 'pandas', 'python', 'pytorch', 'sklearn', 'sound-monitoring-and-matching-api', 'spyder', 'webrtc'] | 13 |
10,089 | https://devpost.com/software/smart-ventilators-pq2nwa | Side View Of the final Ventilator Attachment assembly in FUSION 360
Top View Of the final Ventilator Attachment assembly in FUSION 360
Isometric View Of the final Ventilator Attachment assembly in FUSION 360
Back View Of the final Ventilator Attachment assembly in FUSION 360
this is the actual 4T arrangement sitting inside the model
Valve with inlet and outlet pipe
Inspiration
As we know, the world needs a helping hand in this difficult time, and the ventilator shortage is a really big issue across the globe. I come from India, where we have only 20,000 ventilators for a population of 1.3 billion, almost 1 ventilator for every 65,000 people. This condition inspired me to do something for humanity; since I am studying mechanical engineering, I targeted mechanical ventilators and tried to increase their efficiency to the fullest.
What it does
I have made a special attachment that can be fitted to any ventilator. The attachment is simply a 4T pipe that splits the ventilator outlet into four branches, so a single ventilator can be used for multiple patients. The hack is that I have attached a unidirectional flow-control valve at the end of each junction; this allows us to control the pressure in each pipe and also stops cross-contamination, which enables the system to be used in any situation irrespective of the type of patient we have.
Also, all these flow-control valves are connected to a microcontroller, which is in turn connected to an app. This enables a single doctor to regulate many ventilators, and thus many more patients, with a tap of a button in the app.
How I built it
I used Autodesk Fusion 360 to design the 4T pipe and the flow-control valve, and used the same software for the assembly of the two.
I then made an entire assembly that reflects exactly how the real-world attachment will look.
I ran flow simulations in ANSYS, using the ANSYS Fluent tool for computational fluid dynamics of the flow and extracting the different pressures and calibrations needed to achieve a viable scale for the app.
The IoT part is done on an Arduino microcontroller with IoT modules such as the NodeMCU for controlling the valves.
I am making an app for this setup so that the doctor can control everything through the app.
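To illustrate the kind of pressure/flow calibration involved, here is a small Python sketch using the standard electrical analogy for parallel branches (flow = pressure / resistance). The supply pressure and valve resistances below are made-up numbers for illustration, not values from the ANSYS runs:

```python
def branch_flows(p_supply, resistances):
    """For parallel branches fed from one pressure source
    (Q = ΔP / R, the electrical analogy), return the flow through
    each branch and the total the ventilator must deliver.
    Units are arbitrary here."""
    flows = [p_supply / r for r in resistances]
    return flows, sum(flows)

# Hypothetical: one ventilator at 20 cmH2O feeding a 4T splitter whose
# valves are throttled to different resistances per patient
flows, total = branch_flows(20.0, [2.0, 2.5, 4.0, 5.0])
print([round(q, 2) for q in flows], round(total, 2))  # [10.0, 8.0, 5.0, 4.0] 27.0
```

The same relation, run in reverse, is how the per-valve resistances would be chosen to hit a target flow for each patient.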
Challenges I ran into
Finding appropriate pressure statistics for a ventilator and developing correct pressure calculations.
Since I am a mechanical engineer, I don't have much knowledge of IoT; although I have made a simple IoT model in Proteus, I wasn't able to write very good code for it.
The biggest challenge, which I am still struggling with, is building the app. Having no app-development knowledge, I am searching for someone who can help me with the IoT coding and the app-building part of the project.
What I learned
A lot more about fluid dynamics, building equations and getting results out of them; tackling new challenges like stopping cross-contamination without any additional part; and shaping an idea so that it meets every requirement, like cost and application.
What's next for Smart Ventilators
Building a really good app for it and refining the IoT code, so as to make the system more effective and easier to use.
Built With
autodesk-fusion-360
cfd
iot | Smart Ventilators | The idea is to optimize a single ventilator multiple patient system and reduce the need of ventilator assistance doctors by providing an app to control these valve from anywhere any time | ['Akash Pandey'] | ['Wolfram Honorable Mention'] | ['autodesk-fusion-360', 'cfd', 'iot'] | 14 |
10,089 | https://devpost.com/software/masked-ai-masks-detection-and-recognition | Platform Snapshot
Input Video
Model Processing
Output Video Saved
Output Video Snapshot
Inspiration
The total number of Coronavirus cases is 5,104,902 worldwide (Source: Worldometers). The cases are increasing day by day and the curve is not ready to flatten, and that's really sad!! Right now the virus is in the community-transmission stage, and taking preventive measures is the only option to flatten the curve. Face masks are crucial now in the battle against COVID-19 to stop community-based transmission. But we are humans, and lazy by nature: we are not used to wearing masks when we go out in public places. One of the biggest challenges is people not wearing masks in public places, violating the orders issued by the government or local administration. That is the main reason we built this solution: to monitor people in public places via drones, CCTVs, IP cameras, etc., and detect people with or without face masks. Police and officials are working day and night, but manual surveillance is not enough to identify people who are violating rules and regulations. Our objective was to create a solution that relies less on human surveillance to detect people who are not using masks in public places. An automated AI system can reduce the manual investigations.
What it does
Masked AI is a real-time video analytics solution for human surveillance and face mask identification. Our main feature is to identify people with masks that are advised by the government. Our solution is easy to deploy in Drones and CCTVs to “see that really matters” in this pandemic situation of the Novel Coronavirus. It has the following features:
1. Human Detection
2. Face Masks Identification (N95, Surgical, and Cloth-based Masks)
3. Identify human with or without mask in real-time
4. Count people each second of the frame
5. Generate alarm to the local authority if not using a mask (Soon in video demo)
It runs entirely on the cloud and does detection in real-time with analysis using graphs.
How we built it
Our solution is built using the following major technologies:
1. Deep Learning and Computer Vision
2. Cloud Services (Azure in this case)
3. Microservices (Flask in this case)
4. JavaScript for the frontend features
5. Embedded technologies
I will be breaking the complete solution into the following steps:
1. Data Preparation:
We collected more than 1000 good-quality images of multiple classes of face masks (N95, surgical, and cloth-based masks). We then performed data pre-processing, labeled all the images using labeling tools, and generated PASCAL VOC and JSON annotations.
2. Model Preparation:
We used one of the most popular deep learning-based object detection algorithms, YOLOv3, for our task. Using darknet and YOLOv3, we trained the model from scratch on a machine with 16 GB RAM and a Tesla K80 GPU. It took 10 hours to train the model. We saved the model for deploying our solution to the various platforms.
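One detail every YOLO-style pipeline depends on is non-maximum suppression (NMS), which collapses overlapping candidate boxes for the same object into one detection. A minimal pure-Python sketch of the idea (not the darknet implementation) looks like this:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep the highest-scoring box, drop boxes overlapping it above
    `thresh`, and repeat on what remains."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < thresh]
    return keep

# Two near-duplicate detections of one face, plus one distinct detection
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
print(nms(boxes, [0.9, 0.8, 0.7]))  # → [0, 2]
```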
3. Deployment:
After training the model, we built the frontend, which is totally client-based, using JavaScript and the Flask microservice. Rather than saving input videos to our server, we send our AI to the client's side, and we use Microsoft Azure for deployment. We have both on-premise and cloud solutions prepared. At the moment we are on a trial, so we can't provide the URL.
After building the AI part and frontend, We integrated our solution to the IP and CCTV cameras available in our house and checked the performance of our solution. Our solution works in real-time on video footage with very good accuracy and performance.
Challenges we ran into
There are always a few challenges when you innovate something new. The biggest challenge is the Novel Coronavirus itself: we can't leave home for the hardware and embedded parts. We are working virtually to build innovative solutions, but as of now we have very limited resources and can't go out to buy hardware components or IP and CCTV cameras. Another challenge was that we were not able to validate our solution with drones in the early days due to the lockdown, but after getting permission from the officials, that was no longer a problem.
Accomplishments that we're proud of
Good work brings appreciation and recognition. We have submitted our research paper to several conferences and international journals (awaiting publication). After developing the basic proof-of-concept, we went to the local government officials and submitted our proposal for a trial of our solution for better surveillance, because the lockdown is about to be lifted. Our team is also participating in several hackathons and tech events virtually to showcase our work.
What we learned
Learning is a continuous process. We mainly work in the AI domain, not with drones. The most important part of this project was learning new things: we learned how to integrate Masked AI into drones and deploy our solution to the cloud. We added embedded skills to our profile and are now exploring more features in that area. The other learning was taking our proof of concept to the local administration for trials. All these government procedures, like writing a research proposal and meeting with the officials, were a first for us, and we learned several protocols for working with the government.
What's next for Masked AI: Masks Detection and Recognition
We are looking forward to collaborating with the local administration and the government to integrate our solution into drone-based surveillance (currently in trend for monitoring internal areas of cities). In parallel, improving the model is the main priority: we are adding Action Recognition and Object Detection features to our existing solution for an even more robust system, so decision-makers can make ethical decisions, because surveillance using Deep Learning algorithms is always risky (bias and errors in judgment).
Built With
azure
darknet
flask
google-cloud
javascript
nvidia
opencv
python
tensorflow
twilio
yolo | Masked AI: AI Solution for Face Mask Identification | Masked AI is a cloud-based AI solution for real-time surveillance that keeps an eye on the human who violates the rule by not using face masks in public places. | [] | [] | ['azure', 'darknet', 'flask', 'google-cloud', 'javascript', 'nvidia', 'opencv', 'python', 'tensorflow', 'twilio', 'yolo'] | 15 |
10,089 | https://devpost.com/software/covidcentral-u21txv | Landing Page
Landing Page - Contact Us Section
Signup Page
Login Page
Content Summarizer
Comparison of 4 Types of Content Summarizer
Text Insights
Preprocessing
Inspiration
This year has been really cruel to humanity.
Australia is being ravaged by the worst wildfires seen in decades, Kobe Bryant passed away, and now this pandemic due to the Novel Coronavirus, which originated in the Hubei province (Wuhan) of China. Coronavirus disease (COVID-19) is an infectious disease caused by a newly discovered coronavirus. More than 3 million people are affected by this deadly virus across the globe (Source: Worldometers). There have been around 249,014 deaths already, and counting. 100+ countries are affected by this virus so far. This is the biggest health crisis in many years.
Artificial Intelligence has proved its usefulness in this time of crisis. The technology is one of the greatest soldiers the world could ever get in the fight against coronavirus. AI, along with its subsets (Machine Learning), is driving significant innovation across several sectors to win against the pandemic. After Anacode released "The Covid-19 Public Media Dataset", we took this as an opportunity to apply Natural Language Processing to its articles. According to Anacode, "It is a resource of over 40,000 online articles with full texts which were scraped from online media in the timespan since January 2020, focussed mainly on the non-medical aspects of COVID-19. The data will be updated weekly". Anacode further says, "We are sharing this dataset to help the data community explore the non-medical impacts of Covid-19, especially in terms of the social, political, economic, and technological dimensions. We also hope that this dataset will encourage more work on information-related issues such as disinformation, rumors, and fake news that shape the global response to the situation."
Our team leveraged the power of NLP and Deep Learning and built "CovidCentral", a PaaS (Platform as a Service). We believe our solution can help media people, researchers, content creators, and everyone else who is reading or writing articles or any kind of content related to COVID-19.
What it does
Our tagline: "Stay central with NLP powered text analytics for COVID-19". CovidCentral is a one-of-its-kind NLP-driven platform for fast and accurate insights. It generates summaries and provides analytics of large amounts of social and editorial content related to COVID-19. STAY CENTRAL INSHORTS.
It does three things:
1. CovidCentral helps users understand large contexts related to COVID-19 in a matter of minutes. Through the platform, get actionable insights from hundreds of thousands of lines of text in minutes. It generates an automated summary of large content and provides word-by-word analytics of the text, from total word count to the meaning of each word. The user can either enter a URL to summarize, or paste the complete content directly into the platform.
2. Large volumes of text are very difficult to analyze; manual analysis takes many hours. CovidCentral helps people get insights within minutes. Media people, researchers, or anyone with internet access can use our platform to get insights related to COVID-19.
3. Humans are lazy by nature and want to save time. This platform can generate a content summary within minutes from a single URL, using NLP and Deep Learning technologies. Very helpful for getting short facts related to COVID-19.
Why Use CovidCentral?
1. Fast
2. Ease of Use (User-friendly)
3. High Accuracy
4. Secure (No content or data is saved on the server; instead, we send the NLP to you at the frontend.)
How we built it
We built CovidCentral using AI technologies, Cloud technologies, and web technologies. This platform uses NLP as a major technique and leverages several other tools and techniques. The major technologies are:
a. Core concept: NLP (Spacy, Sumy, Gensim, NLTK)
b. Programming Languages: Python and JavaScript
c. Web Technologies: HTML, CSS, Bootstrap, jQuery (JS)
d. Database and related tools: SQLITE3 and Firebase (Google's mobile platform)
e. Cloud: AWS
Below are the steps that will give you a high-level overview of the solution:
1. Data Collection and Preparation:
CovidCentral is built on mainly using “Covid-19 Public Media Dataset” by Anacode. A dataset for exploring the non-medical impacts of Covid-19. It is a resource of over 40,000 online articles with full texts related to COVID-19. The heart of this dataset are online articles in text form. The data is continuously scraped from a range of more than 20 high-impact blogs and news websites. There are 5 topic areas - general, business, finance, tech, and science.
Once we got the data, the next step is obviously “Text Preprocessing”. There are 3 main components of text preprocessing:
(a) Tokenization (b) Normalization (c) Noise Removal.
Tokenization is a step that splits longer strings of text into smaller pieces, or tokens. Larger chunks of text can be tokenized into sentences, sentences can be tokenized into words, etc. Further processing is generally performed after a piece of text has been appropriately tokenized.
After tokenization, we performed normalization, because before further processing the text needs to be normalized. Normalization generally refers to a series of related tasks meant to put all text on a level playing field: converting all text to the same case (upper or lower), removing punctuation, converting numbers to their word equivalents, and so on. Normalization puts all words on equal footing and allows processing to proceed uniformly.
In the last step of our text preprocessing, we performed noise removal: removing characters, digits, and pieces of text that can interfere with text analysis. Noise removal is one of the most essential text preprocessing steps.
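A minimal sketch of those three steps in plain Python (the stop-word list here is a tiny illustrative stand-in for a real one such as NLTK's, and this is not the production pipeline):

```python
import re
import string

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in"}  # illustrative only

def preprocess(text):
    """Normalize (lowercase, strip punctuation/digits), tokenize,
    and remove noise (stop words, empty tokens)."""
    text = text.lower()                                                   # normalization
    text = re.sub(r"[%s\d]" % re.escape(string.punctuation), " ", text)   # noise removal
    tokens = text.split()                                                 # tokenization
    return [t for t in tokens if t not in STOPWORDS]

print(preprocess("COVID-19 is spreading in 100+ countries, say the media."))
# → ['covid', 'spreading', 'countries', 'say', 'media']
```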
2. Model Development:
We have used several NLP libraries and frameworks, including Spacy, Sumy, Gensim, and NLTK. Apart from a custom model, we also use pre-trained models for the tasks. The basic workflow of our COVID-related NLP-based summarizer and text analytics engine is: (1) text preprocessing (remove stop words and punctuation); (2) build a frequency table of words (a word frequency distribution: how many times each word appears in the document); (3) score each sentence depending on the words it contains and the frequency table; (4) build the summary by joining every sentence above a certain score limit.
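That workflow can be sketched as a tiny frequency-based extractive summarizer (illustrative only; the platform itself builds on Spacy, Sumy, and Gensim):

```python
import re
from collections import Counter

def summarize(text, ratio=0.4):
    """Frequency-based extractive summary: build a word frequency table,
    score each sentence by its average normalized word frequency, and
    keep the top `ratio` fraction of sentences in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    top = freq.most_common(1)[0][1]          # highest raw count, for normalization
    scores = []
    for s in sentences:
        toks = re.findall(r"[a-z']+", s.lower())
        # average score per word so long sentences don't win by length alone
        scores.append(sum(freq[t] / top for t in toks) / len(toks) if toks else 0.0)
    k = max(1, int(len(sentences) * ratio))
    keep = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:k])
    return " ".join(sentences[i] for i in keep)

text = ("Masks reduce transmission. Masks are cheap. "
        "The weather was nice today.")
print(summarize(text, ratio=0.34))  # → Masks reduce transmission.
```

Off-topic sentences score low because they share few high-frequency words with the rest of the document, which is exactly the effect the score limit exploits.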
3. Interface:
CovidCentral is a responsive platform that supports both i.e. Mobile and web. The frontend is built using web technologies like HTML, CSS, Bootstrap, JavaScript (TypeScript, and jQuery in this case). We have used a few libraries for validation and authentication.
On the backend part, it uses python microservice “Flask” for integrating the NLP models, SQLITE3 for handling the database, and Firebase for authentication and keeping records from the User forms.
4. Deployment:
After successfully integrating the backend and frontend into one platform, we deployed CovidCentral to the cloud, where it runs 24*7. We deployed our solution on Amazon Web Services (AWS), using an EC2 instance as the system configuration.
Challenges we ran into
Right now, the biggest challenge is the Novel Coronavirus itself. We are taking this as a challenge, not as an opportunity. Our team is working on several verticals, whether medical imaging, surveillance, bioinformatics, or CovidCentral, to fight this virus.
There were a few major challenges:
The time constraint was a big challenge: we had very little time to develop this, but we still pulled CovidCentral together in that short span. The dataset of more than 40K articles is pretty messy, so we had difficulty dealing with it at first, but after learning how to handle that kind of data we eliminated the challenge to some extent. We also hit challenges deploying our solution to the cloud, but managed to do so, and we are still testing our platform and making it robust.
Accomplishments that we're proud of
Propelled by the modern technological innovations, data is to this century what oil was to the previous one. Today, our world is parachuted by the gathering and dissemination of huge amounts of data. In fact, the International Data Corporation (IDC) projects that the total amount of digital data circulating annually around the world would sprout from 4.4 zettabytes in 2013 to hit 180 zettabytes in 2025. That’s a lot of data!
With such a big amount of data circulating in the digital space, there is a need to develop machine learning algorithms that
can automatically shorten longer texts and deliver accurate summaries that can fluently pass the intended messages.
Furthermore, applying text summarization reduces reading time,
accelerates the process of researching information, and increases the amount of information that can fit in a given space.
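As a toy illustration of extractive summarization (deliberately simple, and not CovidCentral's actual NLP pipeline), sentences can be ranked by the summed frequency of their words:

```python
import re
from collections import Counter

def summarize(text, n=1):
    """Return the n highest-scoring sentences, scored by summed word frequency."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    score = lambda s: sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))
    top = sorted(sentences, key=score, reverse=True)[:n]
    # Re-emit the chosen sentences in their original order for readability.
    return " ".join(s for s in sentences if s in top)

text = "Graphs are useful. Graphs are useful for data. Cats sleep."
print(summarize(text))  # prints: Graphs are useful for data.
```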
We are proud of the development of CovidCentral and to make it Open Source so anyone can use it for free on any kind of device to get important facts related only to COVID-19.
What we learned
Learning is a continuous process of life. I tell my young and dynamic teammates (Sneha and Supriya) to keep on learning every day.
In this lockdown situation, we are not able to meet each other but we learned how to work virtually in this kind of situation. Online meeting tools like Zoom in our case, GitHub, Slack, etc helped all of us in our team to collaborate and share our codes with each other.
We also
strengthened our skills in NLP (BERT, spaCy, NLTK, etc.)
and learned how to integrate our models with the front end for end users. We spent a lot of time on the interface so people can use it without getting bored. From design to deployment, many parts of the project helped us improve our technical skills.
We learn new things around us every day, and going forward we will keep learning new concepts and adding more relevant features to our platform.
What's next for CovidCentral
We are adding features like a "Fake News Detector" to flag fake news related to COVID-19 very soon on our platform. CovidCentral's aim is to help content creators, media people, researchers, etc. to
read only what matters the most
in a short time. APIs will be released soon, so anyone who wants to add these features to an existing workflow or website can do so without using our platform directly; they can just use our APIs.
We are also in discussion with
some text analytics companies to collaborate
and bring an even more feasible, robust, and accessible solution. In the near future, we will make CovidCentral a general NLP-powered text analytics platform, free for anyone to use from anywhere on any kind of device (mobile, web, tablet, etc.).
Built With
amazon-web-services
bootstrap
css
firebase
flask
html
javascript
natural-language-processing
nltk
python
sqlite
Try it out
covidcentral.herokuapp.com | CovidCentral | CovidCentral is one of its kind NLP driven platform for fast and accurate insights. It generates a summary and provides analytics of large amounts of social and editorial content related to COVID-19. | [] | [] | ['amazon-web-services', 'bootstrap', 'css', 'firebase', 'flask', 'html', 'javascript', 'natural-language-processing', 'nltk', 'python', 'sqlite'] | 16 |
10,089 | https://devpost.com/software/covnatic-covid-19-ai-diagnosis-platform | Landing Page
Login Page
Segmentation of Infected Areas in a CT Scan
Check Suspects using Unique Identification Number (New Suspect)
Check Suspects using Unique Identification Number (Old Suspect)
Suspect Data Entry
COVID-19 Suspect Detector
Upload Chest X-ray
Result: COVID-19 Negative
Upload CT Scan
Result: Suspected COVID-19
Realtime Dashboard
Realtime Dashboard
Realtime Dashboard
View all the Suspects (Keep and track the progress of suspects)
Suspect Details View
Automated Segmentation of the infected areas inside CT Scans caused by Novel Coronavirus
Process flow of locating the affected areas
U-net (VGG weights) architecture for locating the affected areas
Segmentation Results
Detected COVID-19 Positive
Detected Normal
Detected COVID-19 Positive
Detected COVID-19 Positive
GIF
Located infected areas inside lungs caused by the Novel Coronavirus
Endorsement from Govt. Of Telengana, Hyderabad, India
Endorsement from Govt. Of Telengana, Hyderabad, India
Generate Report: COVID-19 Possibility
Generate Report: Normal Case
Generated PDF Report
Inspiration
The total number of Coronavirus cases is
2,661,506 worldwide
(Source: Worldometers). Cases are increasing day by day and the curve shows no sign of flattening, which is really sad. Right now the virus is in the community-transmission stage and rapid testing is the only option to battle it. McMarvin took this opportunity as a challenge and built an AI solution to provide a tool for our doctors. McMarvin is a DeepTech startup in medical artificial intelligence, using AI technologies to develop tools for better patient care, quality control, health management, and scientific research.
There is a current epidemic in the world due to the Novel Coronavirus, and
there are limited testing kits for RT-PCR and lab testing
. There have been reports that kits show variations in their results and that false positives are increasing heavily.
Early detection using Chest CT can be an alternative to detect the COVID-19 suspects.
For this reason, our team worked day and night to develop an application that can help radiologists and doctors by automatically detecting and locating the infected areas inside the lungs using medical scans, i.e. chest CT scans.
The inspirations are as below:
1. Limited kit-based testings due to limited resources
2. RT-PCR is not accurate enough in many countries (recently reported in India)
3. RT-PCR test can’t exactly locate the infections inside the lungs
AI-based medical imaging screening assessment is seen as one of the promising techniques that might lift some of the heavyweights of the doctors’ shoulders.
What it does
Our COVID-19 AI diagnosis platform is a fully secured, cloud-based application to detect COVID-19 patients using chest X-rays and CT scans. Our solution has a centralized database (like a mini-EHR) for Corona suspects and patients. Each and every record is saved in the database (hospital-wise).
Following are the features of our product:
Artificial Intelligence to screen suspects using CT Scans and Chest X-Rays.
AI-based detection and
segmentation & localization of infected areas inside the lungs
in chest CT.
Smart Analytics Dashboard
(Hospital Wise) to view all the updated screening details.
Centralized database (only for COVID-19 suspects) to
keep the record of suspects and track their progress
after every time they get screened.
PDF Reports,
DICOM Supports
, Guidelines, Documentation, Customer Support, etc.
Fully secured platform
(Both On-Premise and Cloud)
with the privacy policy under healthcare data guidelines.
Get Report within Seconds
Our main objective is to provide a research-oriented tool to alleviate the pressure from doctors and assist them using AI-enabled smart analytics platform so they can
“SAVE TIME”
and
“SAVE LIVES”
in the critical stages (Stage-3 or 4).
Followings are the benefits:
1. Real-world data on risks and benefits:
The use of routinely collected data from suspects/patients allows assessment of the benefits and risks of different medical treatments, as well as the relative effectiveness of medicines in the real world.
2. Studies can be carried out quickly:
Studies based on real-world data (RWD) are faster to conduct than randomized controlled trials (RCTs). Data from patients infected with the Novel Coronavirus will aid research into this and similar outbreaks in the future.
3. Speed and Time:
One of the major advantages of the AI-system is speed. More conventional methods can take longer to process due to the increase in demand. However, with the AI application, radiologists can identify and prioritize the suspects.
How we built it
Our solution is built using the following major technologies:
1. Deep Learning and Computer Vision
2. Cloud Services (Azure in this case)
3. Microservices (Flask in this case)
4. DESKTOP GUIs like Tkinter
5. Docker and Kubernetes
6. JavaScript for the frontend features
7. DICOM APIs
I will be breaking the complete solution into the following steps:
1. Data Preparation:
We collected more than 2000 medical scans, i.e. chest CTs and X-rays of 500+ COVID-19 suspects, from European countries and from an open-source radiology data platform. We then performed validation and labeling of CT findings with the help of advisors and domain experts: doctors with 20+ years of experience. You can get more information in the team section of our site. After careful data preprocessing and labeling, we moved on to model preparation.
2. Model Development:
We built and tested several algorithms. We started with a CNN classifier and checked its score against different metrics, because creating a COVID-19 classifier is not easy: variations in the data can bias the results. We then used U-net for segmentation and achieved a very impressive accuracy and a good IoU score. For the detection of COVID-19 suspects we used a CNN architecture, and for segmentation we used a U-net architecture. We achieved 94% accuracy on the training dataset and 89.4% on the test data. For false positives and other metrics, please go through our files.
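The IoU metric used to evaluate the segmentation can be sketched as follows for flattened binary masks (a minimal pure-Python version; the actual pipeline would compute it on image tensors):

```python
def iou(pred, target):
    """Intersection-over-Union of two binary masks given as flat 0/1 sequences."""
    inter = sum(p and t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    return inter / union if union else 1.0  # two empty masks agree perfectly

pred   = [0, 1, 1, 1, 0]   # pixels the model marked as infected
target = [0, 0, 1, 1, 1]   # pixels the radiologist marked as infected
print(iou(pred, target))   # prints: 0.5 (2 overlapping pixels over 4 in the union)
```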
3. Deployment:
After training the model and validating with our doctors, we prepared our solutions in two different formats i.e. cloud-based solution and on-premise solution. We are using EC-2 instance on AWS for our cloud-based solution.
Our platform will only help and not replace the healthcare professionals so they can make quick decisions in critical situations.
Challenges we ran into
There are always a few challenges when you innovate something new. The biggest challenge is "The Novel Coronavirus" itself.
Another challenge is obtaining validated data from different demographics and CT machines.
Due to the lockdown in the country, we are not able to meet and discuss it with other radiologists. We are working virtually to build innovative solutions, but as of now we have very limited resources.
Accomplishments that we're proud of
We are in regular touch with the State Government (Telangana, Hyderabad Government). Our team presented the project to the Health Minister Office and helping them in stage-3 and 4.
Following accomplishments we are proud of:
1. 1 patent (IP) filed
2. 2 research papers
3. Partnership with several startups
4. In touch with several doctors who are working with COVID-19 patients. Also discussing with Research Institutes for R&D
What we learned
Learning is a continuous process. Our team learnt
"the art of working in lockdown"
. We worked virtually to develop this application to help our government and people. The other learning experience was taking our proof of concept to the local administration for trials. Going through government procedures such as writing a research proposal and meeting with officials was a first for us, and we learned several protocols for working with the government.
What's next for M-VIC19: McMarvin Vision Imaging for COVID19
Our research is still going on and our solution is now endorsed by
the Health Ministry of Telangana
. We have presented our project to
the government of Telangana for a clinical trial
. So the next step is to run trials with hospitals and research institutes. On the solution side, we are adding more labeled data under the supervision of doctors who are working with COVID-19 patients in India. Features like
Bio-metric verification, Trigger mechanism to send notification to patients and command room
, etc. are under consideration. There is always scope for improvement, and AI is a technology that learns on top of data. Overall, we are dedicated to taking this solution into real-world production for our doctors and for CT and X-ray manufacturers so they can use it to fight the deadly virus.
Built With
amazon-web-services
flask
google-cloud
javascript
keras
nvidia
opencv
python
sqlite
tensorflow
Try it out
m-vic19.com | M-VIC19: McMarvin Vision Imaging for COVID19 | M-VIC19 is an AI Diagnosis platform is to help hospitals screen suspects and automatically locate the infected areas inside the lungs caused by the Novel Coronavirus using chest radiographs. | [] | ['1st Place Overall Winners', 'Third Place - Donation to cause or non-profit organization involved in fighting the COVID crisis'] | ['amazon-web-services', 'flask', 'google-cloud', 'javascript', 'keras', 'nvidia', 'opencv', 'python', 'sqlite', 'tensorflow'] | 17 |
10,094 | https://devpost.com/software/graphex | Compound nodes which contain expanded and collapsed compound nodes
GIF
Animated edges when hovered on table
Show results of installed query as JSON as well
GIF
Windows that can be dragged on top of each other like operating system windows
Inspiration
I'm working at
i-Vis Research lab
for more than a year and a half, developing a generic graph visualization tool. When I checked the "Graph Studio" inside
https://tgcloud.io/app/solutions
, I saw that it looks fancy. The gradient colors and highlighting effects look really nice, but I also saw room for improvement. I didn't find it well suited to exploration and semi-automated graph analysis. For example, compound nodes and edges might make a big difference. And instead of loading a query, I would like to write one directly, run it, and see the results immediately.
I think I'm already doing graph visualization, so I can use my experience to make an alternative or supportive tool to "Graph Studio".
Also, I think I can support other graph databases such as Neo4j easily and make a database-agnostic open-source visualization tool.
In addition, I was also thinking about a graph editor for visualizing algorithms with rich styles. So I'm planning to add editing features to this project.
What it does
Rich customization of graph visualization styles with using
cytoscape.js styles
.
Run
Interpreted and Installed GSQL
queries and see the results as a graph or as a table or as JSON.
Show results as a table and a graph at the same time.
Use clustering, compound nodes, compound edges, and 12 different layout algorithms for complexity management.
It tries to maximize the space for graph rendering. For this, I use operating-system-like windows which can be dragged and resized.
How I built it
I used Angular 10 and Angular Material for the application, and cytoscape.js for rendering graphs. A Node.js server handles requests to and responses from the TigerGraph Cloud database.
Challenges I ran into
GSQL is hard to learn. It treats edges as second-class citizens, which it shouldn't, because edges also store data: just like nodes, they can have properties. Also, I expect to run queries and see the results directly. Interpreted queries are like SQL queries, but normal queries are like SQL stored procedures: you have to install them first.
Retrieving data with GSQL is really hard, but the language looks great for writing very complex and genuinely useful queries.
Accomplishments that I'm proud of
Complexity management with compound nodes/edges and clustering.
Rich and customizable styles.
Letting the user see data as a table and a graph at the same time.
Maximizing the space for graph rendering by using resizable/draggable windows.
What I learned
I learned about GSQL and more things about user interface development. I learned the logic of the RDF database.
What's next for Davraz
Many things can be done.
giving the user the ability to edit the graph and save changes to the database
supporting other graph databases such as Neo4j
using a data template, more advanced and customized features can be added, such as querying the database through a user-interface component so that a layperson who knows nothing about querying databases can still do so; with this UI component, database queries can be generated from UI controls
using and viewing graph-theoretical properties (degree, in-degree, etc..) and changing visualization with respect to the values of graph-theoretical properties
closeness centrality etc...
using more advanced clustering algorithms
page rank etc..
adding
bird's eye view
for big graphs so that the user won't get lost inside the graph
showing statistics about the current graph
hide/show elements by their types
more support for time-based filtering and exploration of graph
Built With
angular.js
cytoscape.js
material-theme
node.js
Try it out
github.com | Davraz | Graph visualization and exploration software. Leverages cytoscape.js and provides rich and customized graph visualizations. Aims ultimate complexity management, customization, and user-friendliness. | ['Yusuf Canbaz'] | ['1st Place Reward', 'First 50 Qualified Submission', 'General Submission'] | ['angular.js', 'cytoscape.js', 'material-theme', 'node.js'] | 0 |
10,094 | https://devpost.com/software/plume-cpg-analysis-library | CPG of a basic program from the demo
CPG of an if statement from the demo
CPG of a while loop from the demo
CPG of a switch statement from the demo
Inspiration
Plume is a library built as part of my post-graduate research, inspired by the work done by Fabian Yamaguchi, Eric Bodden, Johannes Spath, and Karim Ali and commercialized by
ShiftLeft
. Modelling static analysis problems as graph reachability problems has been done since the 90s, but using graph databases for large programs and deep analytics is relatively new.
What it does
Plume allows one to extract a code property graph (a combination of a program's abstract syntax tree, control flow graph, and program dependence graph) from JVM bytecode and store it in a graph database. The storage backend (graph database) is pluggable and Plume currently supports TinkerGraph, JanusGraph, and TigerGraph. Plume has yet to complete a full interprocedural CPG extraction and thereafter will support program analysis.
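Conceptually, a CPG overlays several edge layers (AST, control flow, data dependence) on one set of nodes. A toy sketch of that idea for the snippet `x = 1; if (x) y = x;` (illustrative only; this is not Plume's actual schema or API):

```python
# Nodes are statements; each edge set is one layer of the code property graph.
nodes = {1: "x = 1", 2: "if (x)", 3: "y = x"}
ast_edges = {(2, 3)}           # the if-statement syntactically contains node 3
cfg_edges = {(1, 2), (2, 3)}   # control may flow 1 -> 2 -> 3
ddg_edges = {(1, 2), (1, 3)}   # x defined at node 1 is used at nodes 2 and 3

def reaches(src, dst, edges):
    """Depth-first reachability over one layer: the core of graph-based analysis."""
    seen, stack = set(), [src]
    while stack:
        n = stack.pop()
        if n == dst:
            return True
        if n in seen:
            continue
        seen.add(n)
        stack.extend(b for (a, b) in edges if a == n)
    return False

print(reaches(1, 3, cfg_edges))  # prints: True
```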
The end goal is to perform an array of static analysis on a given program such as dataflow analysis and typestate analysis.
How I built it
Plume is built as a three-part library (only two parts are available until the analysis component is added), composed of the driver, the extractor, and the analysis component respectively. The libraries are written in Kotlin, using Gradle as the build tool and TravisCI + Codecov to run tests and measure code coverage.
The driver exposes a generic interface and domain to enforce the use of the CPG schema; implementing classes communicate with and configure their assigned graph database appropriately. Soot is used to read and analyse the bytecode, extract the control-flow graph, and build call graphs.
Challenges I ran into
For a large part of the year, the graph was constructed using ASM to build it directly from the bytecode, which is complex; this took too much time, so I opted to use Soot instead. The driver also went through a few iterations before I settled on an appropriately generic set of methods to unite all the functions necessary for constructing the graph, irrespective of which database is used.
Accomplishments that I'm proud of
The ability to create an intraprocedural code property graph, and open-sourcing a fairly polished static analysis tool with easy-to-follow documentation in a domain where few code property graph tools exist, while supporting multiple graph databases.
What I learned
I started learning Kotlin and have begun to master it through the creation of this library. Before this project I had not used TravisCI and Codecov to the degree that I have here. I also made use of TigerGraph's built-in endpoints for the first time and created my own TigerGraph image so that I could effectively test, and load my CPG schema, during the CI/CD pipeline.
What's next for Plume CPG Analysis Library
Right now the library only creates intraprocedural CPGs and the next steps are:
Create an interprocedural CPG with a sound call graph hierarchy
Perform dataflow analysis
Perform alias aware typestate analysis
Support Neo4j and Amazon Neptune as graph databases
Benchmark all supported graph databases in extraction and analysis speeds
Publish my findings in a research paper
Built With
gradle
gremlin
gsql
janusgraph
kotlin
tigergraph
tinkergraph
Try it out
github.com | Plume CPG Analysis Library | A Soot-based, open-source code-property graph (CPG) analysis library to project and incrementally analyze the CPG of programs in graph databases. | ['David Baker Effendi'] | ['2nd Place Reward', 'First 50 Qualified Submission', 'General Submission'] | ['gradle', 'gremlin', 'gsql', 'janusgraph', 'kotlin', 'tigergraph', 'tinkergraph'] | 1 |
10,094 | https://devpost.com/software/tigergraph-js | TigerGraph.js
Inspiration
A few weeks ago, I was looking into integrating TigerGraph with an application I was creating with a Node.js backend. However, as a beginner, I found it difficult to work out how exactly TigerGraph could be used with JavaScript. After learning the process, I created a library, similar to pyTigerGraph, that allows full-stack developers and other users to easily interact with TigerGraph: my NPM library, TigerGraph.js.
By creating this library, I think TigerGraph will become easier to use and more compelling to full-stack developers, as well as to beginners who are brand new to JavaScript. (JavaScript was my first language.)
What it does
I created a library (TigerGraph.js) and provided documentation for TigerGraph.js to allow users to more easily use and integrate TigerGraph with Node.js. From the website, you can look at particular commands and use the library to query your graph.
To use it, you can install the library with:
npm install tigergraph.js
Generate a token, then create a connection and then code whatever you please.
How I built it
I used Node.js to query the TigerGraph REST API, and I simplified each request down to a single function call. To make the queries, I used the standard https library, so importing external libraries wasn't necessary. createToken is exported as a single function, while the connection is created through an exported class. Finally, I created an NPM account to publish the library. When designing the commands, I tried to make them similar to pyTigerGraph so going back and forth doesn't pose a problem. For the documentation, I used MkDocs with their Material theme and published the website on GitHub Pages.
Challenges I ran into
Dealing with Promises and callbacks was difficult for me, and it took a few days until I got it working. In addition, figuring out how to construct the REST calls (especially the headers) proved difficult. At first I constantly got bad-request errors and didn't understand why; after a few days, I realised the token wasn't being sent: I couldn't pass it through the URL and instead had to send it in a header.
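The fix, putting the token in an Authorization header rather than in the URL, looks the same in any language. Here is a hedged Python sketch of the pattern (the library itself uses Node's https module; the host and token below are placeholders):

```python
import urllib.request

TG_HOST = "https://example.i.tgcloud.io:9000"  # placeholder instance URL
TOKEN = "example-token"                        # placeholder REST++ token

def build_query_request(endpoint):
    """Build an authenticated REST++ request; the caller would pass it to urlopen()."""
    req = urllib.request.Request(TG_HOST + endpoint)
    req.add_header("Authorization", "Bearer " + TOKEN)  # token goes in the header
    return req

req = build_query_request("/query/MyGraph/exampleQuery")
print(req.get_header("Authorization"))  # prints: Bearer example-token
```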
Accomplishments that I'm proud of
I'm proud that I got the opportunity to create and publish an NPM library. In addition, when I was experimenting with TigerGraph.js and Discord.js, I found it cool how my work is a
real
library, which can work with other libraries and be used by others. I also found it mind-blowing how I could actually use it with projects.
What I learned
I learned how to use TigerGraph's REST API and pass a header in a GET request with https. In addition, I learned how to create an NPM library, and I've gained a thorough respect for those who create libraries now. I learned how to deal with the asynchronous ways of Node.js by using callbacks and promises.
What's next for TigerGraph.js
I'm going to create more functions and add more example projects (and finally finish the project that inspired me to build this library!). I also hope to explore more about using this library with JavaScript in the browser by using Browserify. In addition, I also want to maintain the library and explore the gsql endpoints.
Try it out
github.com
genericp3rson.github.io | TigerGraph.js | A Javascript wrapper for TigerGraph aimed to simplify the TigerGraph-JavaScript development process | ['Shreya C'] | ['3rd Place Reward', 'First 50 Qualified Submission', 'General Submission'] | [] | 2 |
10,094 | https://devpost.com/software/mixpose-web | Tigergraph Scheme
Tigergraph Explorer
Inspiration
We are building a yoga platform because yoga has helped our families get out of depression. As a side effect, it has made us more flexible. Throughout COVID-19, people are required to social distance and loneliness has become a big problem. We want to empower instructors to be able to produce better quality content and allow people to do yoga at home, and if possible, with friends in aid of creating community and battling loneliness.
What it does
We are building a live-stream yoga class web application. What makes our app special, unlike other live-streaming apps, is that we use A.I. pose tracking and stick figures to provide a feedback loop from teachers to users. Students are able to see each other, and instructors can view all of the students. TigerGraph on the backend provides fast analytics tools so instructors can run their classes better.
How I built it
We used TigerGraph and GSQL for data analytics, exporting Firebase data directly into TigerGraph. For the hackathon itself we created 3 vertices (Lesson, User, and Instructor) and 5 different edges: users being friends with each other, a user attending a class, a user giving feedback on a class, a teacher teaching a class, and a user following a teacher. We also created additional GSQL queries to support the analytics tools.
We used Agora's Real-Time Engagement Video SDK, with TensorFlow A.I. pose detection running on top; once we get the skeleton points, we can draw the stick figure through augmented reality. Since you can't run inference on top of the HTML video element, we create a canvas, redraw the livestream onto it, and run inference on the canvas itself. After detection, we draw the stick figure as an AR overlay on top of the user's live video feed in real time.
We also give users the choice to join the public channels, their own private channel, or a channel they create so friends can take the yoga class together. The instructors are subscribed to all the channels; this way students can protect their privacy from other students while still allowing the teacher to guide them.
Because we use the Agora SDK across all platforms, Android users can now see web users and vice versa, with instructors seeing everyone indistinguishably.
Challenges I ran into
Getting A.I. to run on top of live video feed from Agora’s Video SDK proved to be a little more difficult than we thought, but we were able to solve the problem by redrawing the video feed onto a canvas then doing the inference on top of the canvas itself.
GSQL was another tool to learn; the detailed step-by-step experience is documented at
https://www.hackster.io/364351/how-to-use-tigergraph-for-analytics-e476fa
We are writing down our AI solution on
https://www.hackster.io/mixpose/running-ai-pose-detection-on-top-of-agora-video-sdk-d812ce
Another challenge is some users don’t really want to turn on their camera, so we created a private mode trying to accommodate their privacy concerns via Agora’s SDK.
Accomplishments that I’m proud of
We’ve launched web app on
https://mixpose.com
and we are now testing it with actual users. This is much scarier, because we want to ensure our users have the best experience using our application.
Another accomplishment we are very proud of is that we actually have the license to use the music in the demo video :)
What I learned
We used GSQL for the first time, and running graph SQL turns out to be really powerful.
What’s next for MixPose Web
We are ready to take this idea forward and turn it into a startup. The three of us co-founders have quit our jobs to work on it full steam ahead.
Built With
agora
ai
ar
augmented-reality
firebase
tensorflow
tigergraph
Try it out
mixpose.com
github.com
www.hackster.io
www.hackster.io | MixPose Web App | MixPose is a live streaming platform for yoga classes. We use A.I. on the Edge to do pose detection for the users and to send feedback to the yoga instructors. | ['Peter Ma', 'Sarah Han', 'Ethan Fan'] | ['First 50 Qualified Submission', 'General Submission', 'Most Popular', 'First Place (1)'] | ['agora', 'ai', 'ar', 'augmented-reality', 'firebase', 'tensorflow', 'tigergraph'] | 3 |
10,094 | https://devpost.com/software/tiger-nlp | TigerNLP
Generate GSQL from human sentences.
Currently a prototype. Built for the Tigergraph Graphathon challenge
https://tigergraph2020.devpost.com/
TigerGraph GSQL generation based on human input text. Open source, with a deployed version available.
Built to improve accessibility and build tooling for TigerGraph.
Limitations
Note that this project currently isn't a complete representation of the GSQL language. Sentences are expected to have one subject and possibly multiple direct objects or actions. TigerNLP will currently identify the following constructs:
Vertices
Directed edges
Undirected edges
Vertex properties
Edge properties
Graph
An item is considered a property if it is not used in a vertex capacity; i.e., an element introduced with 'has a' is treated as a vertex if we can derive an edge from it, and as a property otherwise.
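That decision rule can be sketched in plain Python (purely illustrative; the real implementation derives these sets from spaCy dependency parses, and the example words are hypothetical):

```python
def classify(has_a_items, edge_sources):
    """Split 'has a' objects into vertices (an edge can start from them)
    and properties (they never act as a subject elsewhere)."""
    vertices = {item for item in has_a_items if item in edge_sources}
    properties = set(has_a_items) - vertices
    return vertices, properties

# "A person has a friend. A person has a name. A friend has a pet."
has_a_items = {"friend", "name", "pet"}
edge_sources = {"person", "friend"}  # words that appear as subjects elsewhere
vertices, properties = classify(has_a_items, edge_sources)
print(sorted(vertices), sorted(properties))  # prints: ['friend'] ['name', 'pet']
```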
Client
The client is the user-facing website.
From the
tiger-nlp
client directory:
yarn
yarn start
Server
The backend runs a Flask server that serves the model for generating GSQL from English sentences.
From the
./server
directory:
pip install -r requirements.txt
python3 -m spacy download en_core_web_sm
flask run
Dev Notes
Get started with Tigergraph:
https://docs.tigergraph.com/start/get-started-with-tigergraph
Get started with spaCy:
https://spacy.io/
Tigergraph GSQL language spec:
https://docs-beta.tigergraph.com/dev/gsql-ref/querying/appendix-query/complete-formal-syntax-for-query-language
Define the schema:
https://docs.tigergraph.com/start/gsql-102/define-the-schema
References
https://www.researchgate.net/publication/258650012_Generating_UML_Diagrams_from_Natural_Language_Specifications
Try it out
github.com | TigerNLP | NLP-powered GSQL. Generate GSQL code from human sentences. | ['Chris Buonocore'] | ['First 50 Qualified Submission', 'General Submission', 'Most Creative'] | [] | 4 |
10,094 | https://devpost.com/software/tigergraph-net-libraries-and-building-blocks-for-graph-apps | The TG.NET CLI
TG.NET proxy running as a container on RedHat OpenShift
Ingesting data from the Windows event log
Cross-origin browser test using Bridge.NET
Inspiration
.NET is one of the most popular enterprise developer technologies and as graph databases become more mainstream, vendors like
Neo4j
and
DGraph
already ship .NET native libraries for their products which allow .NET developers to program graph databases in their own language and connect their existing apps and data sources without relying on manual HTTP calls to an API.
At the same time .NET, C# and F# have unique strengths as a platform and languages for developers, which make .NET an appealing choice for building graph-powered apps that can run both as traditional client-server apps and also as HTML-only SPA apps, together with interactive data-analysis notebooks in Jupyter
using
C# and F#. Frameworks like
Blazor
, which can compile .NET code to WebAssembly, can simplify the development of multi-target applications by allowing code to be reused across a solution.
Right now the first-choice client library for TigerGraph is
pyTigerGraph
which is widely used across the TigerGraph ecosystem.
However I wanted to build a .NET native client library for TigerGraph using the following requirements:
Cross-platform
No external dependencies
The same library can be used in CLI and server apps and also target JavaScript and WebAssembly
Connect to Windows data sources like the Windows Event Log
In addition I wanted to build components that implement common design patterns and practices to solve challenges developers commonly face using the TigerGraph server. My own interest in graph databases is for security and endpoint protection and I want to build open-source apps that can ingest data from a wide set of sources on Windows and Linux and be deployed quickly and easily.
What it does
TigerGraph.NET is a set of libraries, tools and components for building multi-target graph-powered applications using C# and F#. There are several sub-projects under the TG.NET umbrella:
CLI
The
CLI
project provides a cross-platform client for querying and monitoring TigerGraph servers, including free-tier server instances. It talks to the REST++ and GSQL endpoints and does not rely on the Java-based GSQL client.
Proxy
The
Proxy
project is a proxy server for TigerGraph that provides common app services like caching and mitigates some of the limitations of using free-tier TigerGraph instances for browser-based apps. The proxy is a small .NET Core app that can run on most Linux environments, such as an AWS micro-instance or as a container on RedHat OpenShift, and provides a transparent proxy for REST++ and GSQL API requests from client-side code with the following features.
Authentication: You can set environment variables for your
TG_TOKEN
,
TG_USER
and
TG_PASS
credentials on the server so you don't have to expose these in your client app code.
CORS: The server supports CORS headers and CORS pre-flighting requests so you can make calls to your TigerGraph server API from your JS browser code. Normally you would have to configure the TigerGraph Nginx server using
gadmin
to enable this support, but this isn't available for free-tier instances.
Keep-alive: By default the proxy server pings the
echo
endpoint of the backing TG server every 15 minutes. By default free-tier instances shutdown after about 90 minutes of inactivity and there is no way of restarting them automatically.
Caching: The proxy server implements a simple memory-cache which caches graph data requests using the URL requests as cache keys. Apps that use graph data can avoid hitting the TG server on every request. More sophisticated caches and schemes can be implemented pretty easily using the ASP.NET Core libraries and middleware.
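The cache described here is simple enough to sketch. The proxy itself is .NET, but the idea — a memory cache keyed by the request URL with a time-to-live — looks roughly like the following illustration in Python (this is not the project's code, just a sketch of the pattern):

```python
import time

class UrlCache:
    """Toy URL-keyed memory cache with a TTL (illustration only)."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # url -> (expiry_timestamp, response)

    def get(self, url):
        entry = self._store.get(url)
        if entry is None:
            return None
        expiry, response = entry
        if time.monotonic() > expiry:   # stale entry: evict and miss
            del self._store[url]
            return None
        return response

    def put(self, url, response):
        self._store[url] = (time.monotonic() + self.ttl, response)

def fetch(cache, url, backend):
    """Serve from cache when possible; fall back to the backend call."""
    cached = cache.get(url)
    if cached is not None:
        return cached
    response = backend(url)   # in the real proxy, a REST++ request
    cache.put(url, response)
    return response
```

Because the call site only sees `fetch`, a more sophisticated cache or scheme can be swapped in later without touching the apps that use it.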
TigerGraph.Base
This project contains common data models and code that is shared across projects and can be compiled to both .NET IL code and JavaScript.
Deployment
Deployment scripts are provided for deploying the server to OpenShift.
What's next for TigerGraph.NET: Libraries and building blocks for graph apps
I will post more updates and videos as I continue work on this.
Built With
.net
c#
f#
jupyter
tigergraph
Try it out
github.com
www.nuget.org
github.com | TigerGraph.NET: Libraries and building blocks for graph apps | Cross-platform libraries and tools for building graph-powered browser, server, desktop, and notebook apps in C# and F# with TigerGraph. | ['Allister Beharry'] | ['First 50 Qualified Submission', 'General Submission', 'Most Technical'] | ['.net', 'c#', 'f#', 'jupyter', 'tigergraph'] | 5 |
10,094 | https://devpost.com/software/tigergraphcli | Motivation
Having always held an interest in graph databases and their unique applications in the realm of machine learning and data science, I was intrigued by TigerGraph. When I was onboarding onto the service, however, I found that there was a lot of overhead to simply get started. For example, I had to manually download a
jar
file to interact with a TigerGraph server, and use Docker to start my own instance of TigerGraph.
Thankfully,
tgcloud
offers free cloud instances, and
pyTigerGraph
offered a high-level client library to interface with the
jar
CLI and REST++ endpoints.
I wanted to make the onboarding experience even smoother. With other enterprise tools such as Kubernetes, a simple-to-use CLI is often the go-to choice for simple commands (for example, listing all the deployments and their status). I wanted to build something similar for TigerGraph.
Implementation
The CLI was built in Python, leveraging the
pyTigerGraph
library to interface with TigerGraph servers. I used the
typer
library, which makes it easy to build command line applications in Python.
TigerGraphCLI
is on its first stable release, which is also available on PyPI. See installation instructions in the repo.
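The shape of such a CLI is roughly as follows. The real project uses typer; since the sub-command pattern is the same, this sketch uses only the standard library's argparse, and the command names and options are invented for illustration — they are not TigerGraphCLI's actual interface:

```python
import argparse

def build_parser():
    # Hypothetical command layout; the real CLI's commands differ.
    parser = argparse.ArgumentParser(prog="tgcli")
    sub = parser.add_subparsers(dest="command", required=True)

    vertices = sub.add_parser("vertices", help="count vertices of a type")
    vertices.add_argument("--graph", required=True)
    vertices.add_argument("--type", required=True)
    return parser

def run(argv):
    args = build_parser().parse_args(argv)
    if args.command == "vertices":
        # A real implementation would call pyTigerGraph here,
        # e.g. conn.getVertexCount(args.type); we just echo the request.
        return f"counting '{args.type}' vertices in graph '{args.graph}'"
```

The appeal of typer over raw argparse is that the sub-commands and options above collapse into plain Python functions with type hints.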
Next Steps
I would love to continue to make the development experience for TigerGraph better by contributing to both pyTigerGraph as well as TigerGraphCLI. I'm hoping to receive user feedback so I know what features to develop/bugs to squash!
Built With
python
Try it out
github.com | TigerGraphCLI | An easy-to-use CLI for interacting with TigerGraph databases. | ['Frank Jia'] | ['First 50 Qualified Submission', 'General Submission', 'Best Documented'] | ['python'] | 6 |
10,094 | https://devpost.com/software/tg-bot | Bot in action
Inspiration
Since Discord is the main communication platform for TigerGraph, I thought it'd be cool to have a TigerGraph integration with a Discord bot.
What it does
It looks through an API to pull out text similar to the question asked and suggests related articles that could be helpful.
How I built it
Use GET and POST requests to query the Discourse API
Push the Discourse API into a Graph
Query the Graph to see how similar the inputted message is to the statements
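To illustrate the similarity step, here is a simple token-overlap (Jaccard) score for ranking candidate articles against a question. This is only a stand-in metric for illustration, not necessarily the one the bot's graph query actually computes:

```python
def jaccard(a, b):
    """Token-set overlap between two pieces of text (0.0 to 1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def most_similar(question, articles):
    """Pick the article title that overlaps most with the question."""
    return max(articles, key=lambda title: jaccard(question, title))
```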
What I learned
It was my first time using pyTigerGraph, so I learned how to use the library.
I learned how to query the Discourse API.
What's next for TG-bot
Currently, I'm just about to explore RASA with someone else! | TG-bot | A TigerGraph-chat bot model | ['Shreya C'] | ['First 50 Qualified Submission', 'General Submission'] | [] | 7 |
10,094 | https://devpost.com/software/frc-data-analytics | For Inspiration and Recognition of Science and Technology, or FIRST, is a global robotics community preparing young people for the future and the world's leading youth-serving nonprofit advancing STEM education. FIRST Robotics Competition, or FRC, is an extracurricular activity where high-school students under strict rules, limited time, and resources design and build a robot that competes against other schools and communities. Each year, teams of high school students, coaches, and mentors work to build robots capable of competing in that year's game that weigh up to 125 pounds (57 kg). Robots complete tasks such as scoring balls into goals, placing inner tubes onto racks, hanging on bars, and balancing robots on balance beams. The game, along with the required set of tasks, changes annually. While teams are given a kit of standard set of parts during the annual Kickoff, they are also allowed and are encouraged to buy or make specialized parts.
FRC has a unique culture that is built around two values. "Gracious Professionalism" embraces the competition inherent in the program, but rejects trash talk and chest-thumping, instead embracing empathy and respect for other teams. "Coopertition" emphasizes that teams can cooperate and compete at the same time. The goal of the program is to inspire students to be science and technology leaders.
One way in which “coopertition” is achieved is how the events are set up:
-There are two portions of an event: the qualification matches and the elimination matches
-During qualifications, teams are randomly assigned to each other in each match such that there are two alliances, each made up of 3 teams
-Alliances play against each other in matches and move up or down the rankings depending on how well they did during these matches
-Once all of the qualification matches are over, the top eight teams have the opportunity to choose their other two alliance members
-Using snake draft ordering (the 1st seeded team chooses, then the 2nd seeded, etc., then the 8th seeded team chooses twice in a row, followed by the 7th, 6th, and so forth until it's back to the 1st seeded team), the alliances are created and compete in bracket-style elimination games where matches are played best 2 out of 3.
-The team that wins the bracket moves on to the world competition
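The snake-draft pick order is easy to generate programmatically. This small sketch (our own illustration, not part of the dashboard code) lists which seed picks at each turn:

```python
def snake_draft_order(num_alliances=8):
    """Seeds 1..n each pick once in order, then again in reverse."""
    forward = list(range(1, num_alliances + 1))
    return forward + forward[::-1]

# With 8 alliances: [1, 2, 3, 4, 5, 6, 7, 8, 8, 7, 6, 5, 4, 3, 2, 1]
```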
Knowing how well a team performed during the qualification matches helps teams in the top 8 determine which teams to pick for their alliance in elimination matches.
Since 2015, FRC has provided more match data through their API and thus allowed teams to gather and analyze this data. In addition, other independent organizations like The Blue Alliance have created a website and app for teams to make this data more accessible. Our team decided to try our hand at analyzing this data, experimenting with the Tiger Graph Cloud, and developing a dashboard with Plotly Dash.
Built With
dash
plotly
python
Try it out
frc-analytics.herokuapp.com
github.com | FRC Data Analytics | Analyzing FIRST Robotics Competition, or FRC, data using Tiger Graph for the back-end and Plotly Dash for the front-end. | ['dataNerd23 Seto', 'Renzo Viccina', 'evrangel94', 'Courtney Ngai', 'Andrew Matsumoto'] | ['First 50 Qualified Submission', 'General Submission'] | ['dash', 'plotly', 'python'] | 8 |
10,094 | https://devpost.com/software/power-bi-app-over-tigergraph-database | Patient Distribution Analysis
Patient Trend Analysis
Inspiration
Tiger Graph Database, together with its vertices, edges and attributes, becomes a rich source of data. But I couldn't find any way to explore this data for analysis using a conventional BI tool. Therefore, the main inspiration was to get the required chunk of data from the graph database and pull it into a BI tool for data analysis.
What it does
This Power BI application pulls query data using REST API endpoints from TigerGraph Cloud. The pulled data is generally not in a format directly suitable for running analytics in Power BI. However, it can be transformed into a suitable format by applying some Power Query skills. This application shows one such example. It can serve as a template or sample for BI developers and analysts, thereby empowering them to perform analytics over a portion of a graph database without GSQL.
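As an illustration of that transformation step: a REST++ query typically answers with nested JSON, where each vertex carries its attributes in a sub-object, and charting requires lifting those into flat rows. Here is a rough Python equivalent of what the Power Query step does (the field layout follows the usual REST++ vertex shape of `v_id`/`v_type`/`attributes`; treat that as an assumption):

```python
def flatten_vertices(results):
    """Turn a REST++-style vertex list into flat rows for a table."""
    rows = []
    for vertex in results:
        row = {"id": vertex.get("v_id")}
        row.update(vertex.get("attributes", {}))  # lift nested attributes
        rows.append(row)
    return rows
```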
How I built it
TigerGraph Cloud has a Covid-19 Analysis starter base, which is used to create a solution. In GraphStudio, data is loaded into the solution and a GSQL query is written to get only the Patients data. The installed query has a query endpoint which can be used by an external application (Power BI here). Authentication is taken care of by generating a token with the help of a secret created at the Admin User Management portal. Once the endpoint and authorization credentials are ready, it is all about connecting using the Power BI Web Connector. Data received from the connection is transformed to a conventional table format so that Power BI visualizations can be created. Note that these visualizations are a first step towards effective data analysis and insight study.
Challenges I ran into
Had to learn about TigerGraph REST API, authorization and a basic GSQL.
Connecting Power BI through a web connector demanded proper mapping of authorization credentials. It took some time to understand.
Accomplishments that I'm proud of
I think this app was a first step towards establishing a talk between Power BI and TigerGraph.
What I learned
TigerGraph Cloud, TigerGraph GraphStudio, GSQL Basics and overall how TigerGraph Database works.
What's next for Power BI App Over TigerGraph Database
Like I said earlier, this app is a first step towards establishing a talk between Power BI and TigerGraph. So far, it is selective and uni-directional. The next big step should be establishing smooth two-way communication between Power BI and TigerGraph. And the only possible way to do this will be creating a custom connector in Power BI for TigerGraph.
Built With
powerbi
tigergraph
Try it out
github.com | Power BI App Over TigerGraph Database | Enabling Data Analyst to analyse Graph Database based on query | ['Kaustubh Mande'] | ['First 50 Qualified Submission', 'General Submission'] | ['powerbi', 'tigergraph'] | 9 |
10,094 | https://devpost.com/software/gsql-real-time-editor | Login Screen (Light Mode)
Login Screen (Dark Mode)
Home Screen (Dark Mode)
Inspiration
Over the summer, our team explored TigerGraph, utilizing its power to build projects like a Similarity Search Engine, a Gene Mapping System, and a Personalized Patient Dashboard. While we worked on these projects, we often ran into problems while writing queries and wanted to collaborate simultaneously. Screen sharing was a good way to collaborate on these complex projects. However, using such a platform always resulted in various network issues and an extremely slow stream. Screen sharing also prevented us from actively interacting with each other and the program we wrote, which reduced our overall productivity. The only way to interact was to guide the person sharing the screen which was quite ineffective. Our efforts to solve this problem led us to the idea of building a real-time GSQL editor, similar to Google Docs, that allows developers to collaborate simultaneously and write queries with relative ease.
What it does
It is a shareable GSQL editor that allows users to collaborate in real-time when writing queries by simply sharing the box credentials.
How we built it
Web sockets are the core principle behind this app. Sockets are extremely efficient at dealing with real-time data sharing. We used the Flask-SocketIO Python library to build the server backend and the socket.io library with HTML and JavaScript to build the client side. Sockets are responsible for adding users to specific rooms based on their box credentials and also for emitting data to all the users in a room when someone makes a change in the editor.
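The room mechanics can be modeled without any framework: a set of box credentials maps to a room id, and an edit is emitted only to the other members of that room. The following is a hypothetical stand-in for what Flask-SocketIO handles for us — an illustration of the semantics, not the app's actual code:

```python
import hashlib

def room_id(host, graph):
    """Derive a stable room id from the box credentials users share."""
    return hashlib.sha256(f"{host}/{graph}".encode()).hexdigest()[:12]

class Rooms:
    """In-memory model of join/emit semantics (illustration only)."""

    def __init__(self):
        self._members = {}   # room -> {client_id, ...}
        self.delivered = []  # (client_id, text) pairs, for inspection

    def join(self, client_id, room):
        self._members.setdefault(room, set()).add(client_id)

    def emit(self, room, text, sender):
        # Broadcast the edited text to everyone else in the same room.
        for client in self._members.get(room, set()) - {sender}:
            self.delivered.append((client, text))
```

In the real app, Flask-SocketIO's room support plays the part of the `Rooms` class, and the browser-side socket.io client replaces the `delivered` list.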
Challenges we ran into
Initially, we weren't sure how to go about building a real-time editor, but we eventually stumbled upon web sockets. We spent a lot of time tinkering with sockets to learn more about how they worked. The first version was just a simple shareable text editor, without any room access. This was our first roadblock. We spent a lot of time with the socket library to add users to rooms based on their box credentials. We managed to get the rooms working and made requests to the code check endpoint behind every box. Subsequently, we ran into an issue here due to Cross-Origin Resource access. Ultimately, we were able to get all the features to work and it was working as we expected on localhost. However, we ran into a boatload of problems when attempting to deploy it. First, we had no clue how to deploy a Flask app. We somehow got it to deploy on Heroku and were excited to see it work. Unfortunately, it only worked on some occasions. In addition to that, we also ran into an XHR Polling issue, and it was impossible to find the solution. After hours of debugging and searching, we were able to add some configurations to the Flask app and got everything to work.
Accomplishments that we are proud of
First of all, we are proud to make a working version of the app. It feels great to build something so helpful and useful for not only us but also for the TigerGraph community. When we began, we weren't sure how to even start. It seemed impossible to make a real-time editor like Google Docs, and we're proud of the fact that we were finally able to build something very similar to it. Moreover, we are excited to continue to work on the app and add more features to it.
What we learned
We learned a lot about network connections and sockets. More importantly, we learned how to share data in real-time between two devices. We also learned how to make secure requests. Learning these concepts sparked many other ideas, and we're excited to build on them. Apart from all the technical skills we learned, we learned to be patient and never give up. For instance, we almost decided to give up and just use the localhost version of our project for the Graphathon submission, but we didn't let our laziness conquer us as we were patient and kept debugging, eventually fixing all of the bugs.
What's next for GSQL Real-Time Editor
Currently, there are some bugs when multiple users edit the same text, we are planning to apply operational transformation and improve the debouncing mechanism to enhance the user experience. We also want to add an install query functionality and allow developers to run them. This would greatly increase the value of the app. In more general terms, we want to continue working on this app to fix any bugs and also keep adding features to make it a more useful application.
Github Repository:
https://github.com/rohanshiva/sharable_gsql_editor
Built With
api
css
gsql
html5
javascript
python
rest
tigergraph
Try it out
gsqleditor.herokuapp.com | GSQL Real-Time Editor | A Shareable GSQL Editor that Allows Real-Time Collaboration | ['Akash Kaul', 'Charles Shi', 'Rohan Shiva'] | ['First 50 Qualified Submission', 'General Submission'] | ['api', 'css', 'gsql', 'html5', 'javascript', 'python', 'rest', 'tigergraph'] | 10 |
10,094 | https://devpost.com/software/one-click-dashboard | Inspiration
Making fully functional, clean dashboards is a hard task, even for experienced developers. For people with no coding experience, it’s even more difficult. But, dashboards are extremely useful for visualizing data, especially for graph databases where the data is sometimes hidden inside a vertex or edge attribute. So what if anyone, regardless of experience, could deploy their own dashboard without touching a single line of code? What if anyone could unlock the secrets of their data without doing any analysis themselves? This was the inspiration for the one-click dashboard. To make a clean, functional, and easily-deployable data visualization.
What it does
The dashboard has 2 components. The first is the actual dashboard, which grabs data from TigerGraph and displays it. This dashboard specifically uses the Covid-19 starter kit and displays patient information. The second component is the launcher. The launcher is a separate page that asks for the user information and does all of the necessary configurations behind the scenes to allow the dashboard to be deployed. This is the 2-step framework that we created. This has many applications. For example, from TigerGraph’s end, you can prepackage the starter kits with these dashboards, and GraphStudio have an option to deploy the dashboard. Thus anyone with the starter kit can have access to a visual representation of the data. This also has applications for developers, especially for proof-of-concept dashboards. For example, if a developer wants to show a draft dashboard to clients or to their team of developers, they can use this same framework to easily share the dashboard so everyone can access it with one click.
How we built it
The actual dashboard was built entirely with Python using the Streamlit package. This package makes creating dashboards very easy and reduces complex html/css to one-line Python statements. We also used the pyTigerGraph Python package to allow us to more easily connect to a graph server and install / run the streamlit query. For the launcher, we created an input form using Python widgets in Google Colab, a cloud virtual python environment. This was also made entirely with Python.
Challenges we ran into
One major challenge we ran into was actually deploying the Streamlit app. Normally Streamlit is run via localhost. However, when using a cloud environment like Google Colab, the localhost is not actually accessible. Thus, we had to actually deploy the app on an accessible server. To do this, we created a secure tunnel to an ngrok server. This was somewhat tricky to implement, especially since the tunnel doesn't always get created (if the server is busy or if you try to create too many tunnels), so we had to implement a fail-safe to ensure a tunnel was created.
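The fail-safe amounts to a bounded retry loop: try to open a tunnel, back off, and try again if the server is busy. A generic sketch follows, where `open_tunnel` is a hypothetical stand-in for the actual ngrok call (this is the pattern, not the project's exact code):

```python
import time

def with_retries(open_tunnel, attempts=5, delay_seconds=2.0):
    """Call open_tunnel until it succeeds or attempts run out."""
    last_error = None
    for attempt in range(attempts):
        try:
            return open_tunnel()
        except Exception as err:   # busy server, too many tunnels, ...
            last_error = err
            time.sleep(delay_seconds * attempt)  # simple linear backoff
    raise RuntimeError(f"tunnel not created after {attempts} tries") from last_error
```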
Accomplishments that we're proud of
I think we’re proud of making this project as a whole. When we first started, we didn’t know if this kind of deployment system was even possible. So, having created a system that actually works as we intended is really amazing. Even though this project is more of a starting point than a final draft, having made a version that works opens up a lot more opportunities for improving and modifying what we made already.
What we learned
We learned a lot about making dashboards in python, specifically using the Streamlit package. We also learned a lot about Google Colab and the hidden functionalities it offers (ie widgets). We also learned how to configure a secure tunnel to an ngrok server and deploy streamlit from Google Colab, which a lot of people don’t even know is possible.
What's next for One Click Dashboard
The One-Click Dashboard is more a framework than a specific dashboard/launcher combination. So, the next step would be working on other ways to implement this dashboard. For example, we could use Dash, a Python package offered by Plotly, to make the dashboard instead of using streamlit. We could also create a dashboard manually using HTML/JS/CSS and configure the communication between the launcher and the dashboard. On the flip side, we can also switch up the deployment method. For example, we could have one streamlit app with the form for user input, and once validated the user is redirected to another streamlit dashboard (separate page or separate server) with the actual visuals. With the general framework down, we can work on trying different combinations for the dashboard visuals and for the launcher to see what works best.
Built With
colab
python
pytigergraph
streamlit
Try it out
medium.com
github.com | One Click Dashboard | Easy to use shareable dashboard launcher | ['Akash Kaul', 'Rohan Shiva'] | ['First 50 Qualified Submission', 'General Submission'] | ['colab', 'python', 'pytigergraph', 'streamlit'] | 11 |
10,094 | https://devpost.com/software/automatic-demo-loader | Inspiration
Imagine you just watched a TigerGraph Graph Gurus episode and are really excited to try out the implementation for yourself. Well, there are multiple steps before you get to start working with the demo you saw. You have to create an account, create a blank box, copy all of the code you just saw being used, upload all of the data, and install all of the queries. Only after all of that is done do you get to play around with the graph. What if there was an easier way? What if you could install the demo without having to touch the code yourself? What if you could install that demo with the click of a few buttons? This is what inspired the automatic demo loader: an interactive Jupyter notebook that lets you grab demos from the TigerGraph GitHub and automatically upload them to your server.
What it does
This code creates an interactive Jupyter notebook environment (using Google Colab) that allows a user to automatically upload TigerGraph demos onto their own server. After the user enters their server information (hostname, username, password, etc.), the code connects to the user’s graph. It then pulls up a list of demos for the user to choose from. The user can choose one, and all of the scripts corresponding to that demo (create schema, load data, create queries, etc.) are run. Then, the user now has access to the full demo without having to do any work on their own (besides typing in their info and clicking some buttons). This solution is handy because it automates the process of creating a graph, and makes it much easier. Also, each user will have their own copy of the interface, so there is no need to worry about storing private data or handling user requests. Each user gets their own, personal package to work with. Finally, all of the code is provided on the interface itself, so if a person is curious they can click a button and see all of the code.
How I built it
The UI and all the backend were written with Python. The UI was created using Python widgets, and the backend (connecting to the TigerGraph server) was done using the pyTigerGraph package for Python.
Challenges I ran into
One of the biggest challenges I ran into was actually loading the data. I used pyTigerGraph, but there was no method or functionality available for directly loading data via the ddl REST endpoint. So, I had to engineer it myself. Using the Python requests library along with the TigerGraph Docs, I managed to figure out how to attach the appropriate headers and filenames to upload the data based on a given loading query. However, this only works if the loading script uses the
Define filename f=‘some/file’
. This brings me to the biggest challenge I faced. The demos available in the TigerGraph ecosys are very different, and there is no common pattern between them. Additionally, some of them just provide the graph tar file and not the scripts, which to my knowledge you can not upload by remotely accessing the server -if it is possible, I’d really like to know how (: But, for the demo loader that I had, it can be generalized to work on any demo (not just the one I included in the sample code) if all of the demo folders have the following:
A README detailing what the demo is and how it works (not important for uploading, but important for giving information to the user
A folder with all of the data files
A bash .sh file listing all of the scripts needed to be run (in order)
The loading jobs to all be of the format
Define filename f=‘some/file’
. So, no scripts that load the file like this:
load f=‘some/file’ to …
. In actuality, the loading files just need to be consistent, and either format works as long as it’s consistent. But, the first (define) format is much easier to work with.
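Keying off the define format means the loader can discover, for each loading job, which file variable maps to which data file. Here is a sketch of that extraction step (a plausible reconstruction for illustration, not the project's exact code):

```python
import re

# Matches lines like: DEFINE FILENAME f1 = "./data/persons.csv";
DEFINE_RE = re.compile(
    r"define\s+filename\s+(\w+)\s*=\s*['\"]([^'\"]+)['\"]",
    re.IGNORECASE,
)

def find_file_definitions(gsql_script):
    """Map each file variable in a loading job to its data file path."""
    return {name: path for name, path in DEFINE_RE.findall(gsql_script)}
```

With that mapping in hand, each discovered file can then be posted to the ddl endpoint under the matching loading-job tag.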
Accomplishments that I'm proud of
My biggest accomplishment was figuring out the direct ddl load. When I first encountered that problem, I was fully expecting to rely on pyTigerGraph. But I was shocked to see that the functionality wasn’t even available. So, once I finally figured it out I felt really proud of myself for semi-inventing the functionality I was looking for.
What I learned
The main thing I learned was how to use Python widgets. These are tools I never even knew existed until I started working on this project. They are really handy for creating proof of concepts and showing potential workflows for products. Also, since it’s entirely coded in Python, I didn’t have to learn any new languages (just a new library).
What's next for Automatic Demo Loader
The next step for the Auto Demo Loader is to add additional functionalities available in Graph Studio. For example, we could add a query creator/editor, a schema editor (that runs schema-change jobs in the background), or even a 3D visualizer. I actually made a sample query editor (with the help of Jon Here) which you can view here. In essence, we could recreate Graph Studio, with potentially more functionality (thanks to the flexibility of Python), in a Jupyter notebook environment. This will especially help data scientists (one of TigerGraph's biggest user groups) get introduced to the basics of graphs and GSQL while in the comforting language of Python. It also helps automate the process, so there's very little work needed on the side of the user (no need to find the demo, copy it/recreate it in Graph Studio, etc.), which will help in the transition to using graphs and GSQL. Additionally, I think it really helps reinforce that the graph database at the end of the day is a database, and Graph Studio is not the actual database but a visual representation that helps make the process easier. This is a concept that I struggled with when first introduced to TigerGraph, so hopefully seeing graphs in that new environment will help reinforce that idea.
Built With
google-colab
ipython-widgets
python
pytigergraph
widgets
Try it out
github.com | Automatic Demo Loader | This project is a Google Colab notebook that can automatically upload a demo(from the TG demos folder in ecosys on Github). The notebook comes with widgets so no coding is required(just click buttons) | ['Akash Kaul'] | ['First 50 Qualified Submission', 'General Submission'] | ['google-colab', 'ipython-widgets', 'python', 'pytigergraph', 'widgets'] | 12 |
10,095 | https://devpost.com/software/edusphere-1bfkem | this is the link to the website and the password (Website is not fully functional. All links in site header are accessible. All social links are accessible. You can click on ‘contents’ under ‘physics’ in ‘your subjects’ in ‘students’. You can click on ‘make it’ in ‘classes’ in ‘teachers’. NO OTHER LINKS/BUTTONS WORK)
copper-crocodile-l22t.squarespace.com
password : tnrs
EduSphere:
The coronavirus pandemic and the crises it has caused, has taught us many things. Of
these things, the most apparent is the fragility of our education system. To
create a more robust system in the face of this pandemic, we must bring education into the 21st
century. Distance learning has been a major symptom of the pandemic; various shortcomings of
existing methods have been exposed.
It is impossible to transport the experience of being in a physical classroom to an online platform
which is what apps like zoom, in the form of webinars have tried and failed to do. The issue is
that students cannot be expected to retain the same focus and concentration on a web platform
as they would in a physical environment. The downgrade in student-teacher interactions cannot
be understated.
Attentiveness and engagement of students have always been a critical part of the classroom. We, at
EduSphere, have combated that issue by creating an education suite loaded with multiple features
to promote ease of learning. The online class has now transformed from a mere video conference to an
emulation of interpersonal face-to-face classrooms.
Often it is hard, for not only students but also teachers, to keep track of all their deadlines.
Here, at EduSphere, we build a database of all deadlines, assignments and materials required
for all members of the institution which is seamlessly integrated into their profile, thereby
ensuring that no deadline is missed. Furthermore, we have provided features such as validation
of course content, grading and testing of assignments online and even the ability for seamless
transfer of student profile between schools and colleges. We have created not just a service, but
a timeless platform.
We believe that the standard webinars are not the answer. We believe that easy to use interactive
courses which can be easily created by teachers is the way forward. The system also allows for
the uploading and submission of assignments by students, as well as a platform to measure and
calculate their GPA and progress reports.
The main aim of our platform is to use technology to effectively propel education into the 21st
century. We understand that traditional education cannot be replicated online in the same
capacity, which is why we have focused on completely revamping distance learning.
What sets us apart from similar private websites used by institutions is that while such websites
have to be custom-built from scratch by schools and colleges at high costs and with a
requirement of a large IT department, with EduSphere, your schools can build these systems
quickly and easily, effectively making them immune to situations such as these.
EduSphere allows students, teachers and even institutions to collaborate,
which results in a higher quality of education. Students can create projects, cross-examine one
another and even create student handbooks full of tips and tricks for others to use. Teachers
can help one another with grading, testing and creating course content, thereby improving
efficiency. Institutions can collaborate to build courses suited to their strengths, thus creating
courses that are superior to ones that would be created by individual institutions.
Here at EduSphere, not only do we aim to inculcate an environment of learning, but we
also strive to create an ecosystem that allows teachers, students and institutions all to
thrive. With our interactive system and platform that offers access to tools for every
need, education has never been easier.
Our features
Teachers can easily make subject-specific courses that combine videos, quizzes,
tests and interactive puzzles.
Students can complete assignments and make submissions which are compiled
and kept in the platform's database.
Students can keep track of videos, courses, and assignments etc they may have
undertaken.
Schools may take into account the scores and performance of students by viewing them
under each student's profile.
Individual teachers can now maintain a record of the coursework and tutorials they put out.
We are creating an ecosystem that strives to do four things:-
1) Make online education easier for schools to implement
2) Make online learning enticing and more engaging for students by implementing
interactive content
3) Allow collaboration of multiple institutions to facilitate higher quality education
4) Create a student database that allows networking, scheduling and seamless transition of
data between schools, colleges
Built With
squarespace
Try it out
copper-crocodile-l22t.squarespace.com | EduSphere- team 13(Don) | A reinvention of online education by empowering schools to build their own custom online infrastructure. | ['Pranav Iyengar', 'Othman Ghani', 'aadith ved', 'advaya dutta'] | [] | ['squarespace'] | 0 |
10,095 | https://devpost.com/software/conexion | Inspiration
We took inspiration from Omegle, Tinder, and LinkedIn. We all want to be able to meet people in a professional setting, which allows both students and entrepreneurs to make connections, improve their experience, and even set up potential future business ventures.
What it does
Conexion allows people to connect with the people they are looking for: people who can help them grow, people who can teach them skills, and people who could provide insider help later on in their professional careers.
How we built it
We used the design and prototyping tool Figma to make a design for our app.
Challenges I ran into
We had trouble fully developing our idea and coming up with a final design while trying to sort out the features we really wanted to make the app completely cohesive.
Accomplishments that we are proud of
We are proud of the design and how we developed our idea of Conexion while working professionally.
What I learned
We learned management skills, design skills, and prototyping skills.
What's next for Conexion
Conexion will one day be a platform that everyone can use and benefit from.
Built With
figma | Conexion | A platform where students and professionals can make meaningful connections with the people who can help them accomplish their goals through 1on1 video chats, swiping left or right, and sending texts. | ['Rohan B', 'Edward Wei', 'Akash Abraham', 'Anirudh Hemige'] | [] | ['figma'] | 1 |
10,095 | https://devpost.com/software/coronahacks | Inspiration
The amount of unproductive time we have spent as students in the past few months led us to find an innovative solution to change our life in quarantine for the better.
What it does
Using Imaginei5's ideation process, we realized that the biggest issues stalling people's productivity are not knowing where to start with something they want to accomplish, a lack of structure in their daily life, and an absence of social interaction. Thus, we created CoronaHacks to tackle these issues by providing students with useful tools to reach their goals. Our app gives them structure and purpose for their days, and also supplies students with the option of learning through social interaction.
How I built it
We used Android Studio with the help of the workshop and our previous knowledge. We used Java for the logic and XML to format the layout. JSON along with Firebase would be used to pull real-time courses, events, articles, and educational materials from the internet for each field of interest and level of difficulty.
To be specific, we incorporated familiar UI elements like buttons, textviews, edittexts (in the search bar), and spinners, and divided and organized them by color and difficulty. We used Intents to move from one page to another and to the browser, and we used a ListView (with adapters), specifically formatting each item to make it pleasing to the eye with all the required information at hand. We also showed our UI to our parents and friends for feedback, and used different layouts (LinearLayout, RelativeLayout, ScrollView...) with specific positions and layout weights for each item
to make the app as user-friendly as possible.
Challenges I ran into
Formatting and making the UI seamless (only after rigorous feedback and do-overs)
Less time to work on, especially since most of us are from Europe and one of our team members was in America (we pulled an all-nighter)
Coming up with an original idea that solves all of our issues that we listed above and focused on the positives rather than the negatives of confinement (we spent more than 2 hours on ideation and planning)
Accomplishments that I'm proud of
Successfully creating a working demo
Making a presentation in one night
Working amazingly with team members and using each of our strengths to our benefit
What I learned
Android development skills
How to present an app
How to ideate
How to work on a time crunch
How to collaborate!
What's next for CoronaHacks
With more time, we would use JSON parsing to access courses, events, articles, seminars, and other educational materials from the web to give real-time suggestions to the user, so they can pursue their interest in any subject they desire. To expand this idea even more, ideally we would also include an email-verified account login in order to save users' preferences, personalize their experiences, and track their progress by making a customized schedule and achievable goals.
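The JSON parsing step described above could be sketched as follows; the feed shape (a list of course objects with `title`, `field`, and `difficulty` keys) is a hypothetical schema for illustration, not the app's real data model.

```python
import json

def filter_courses(payload, field, difficulty):
    """Return course titles matching a field of interest and difficulty level.

    `payload` is a JSON string; the key names used here are assumptions
    made for illustration, not the app's actual schema.
    """
    courses = json.loads(payload)
    return [c["title"] for c in courses
            if c["field"] == field and c["difficulty"] == difficulty]

# A tiny hypothetical course feed to exercise the filter.
feed = json.dumps([
    {"title": "Intro to Sketching", "field": "art", "difficulty": "beginner"},
    {"title": "Figure Drawing", "field": "art", "difficulty": "advanced"},
    {"title": "Basic Guitar", "field": "music", "difficulty": "beginner"},
])
print(filter_courses(feed, "art", "beginner"))  # -> ['Intro to Sketching']
```

In the real app, the same filtering would run against the Firebase-backed feed instead of an inline string.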
Our Project Video (if needed):
https://drive.google.com/file/d/1j1JTNACP16dy4TIoSxdH2GUkcnMPSwvx/view?usp=sharing
Built With
android
android-studio
app
development
firebase
github
java
json
xml
Try it out
github.com | CoronaHacks | Replace the attitude of “I can’t” with “I will” with CoronaHacks, an app for students to easily pursue all of their hobbies and interests. | ['Pranav Sreedhar', 'Remeek', 'Vignesh Sreedhar', 'Isha Sinha'] | [] | ['android', 'android-studio', 'app', 'development', 'firebase', 'github', 'java', 'json', 'xml'] | 2 |
10,095 | https://devpost.com/software/mediknow | Unique Selling Points
Business Details
Backend work
Real-world implementation ideas
Unique Selling Points
AI-based scraping of the web for accurate medical data in a summary-like, annotated form or a quiz format with gamification elements
+
Tools to process webinar recordings to find important sections, keywords, definitions, diagrams in an automated, hassle-free fashion to gain skills and rewards. This means that the user doesn't have to attend the webinar live, but our software can be the "ears" and identify key points and further process recordings.
+
Hosts and attendees can also view the sentiments, interest, and activity of others and the host can adapt their style accordingly.
Inspiration
The dire situation of medical burnout and the continuous churn in medical education, especially during COVID-19, compelled us to embark on this project. Physician burnout was an epidemic BEFORE the Covid-19 pandemic. According to a 2018 study, 400 physicians die by suicide each year – double the rate of the general population. Many studies have pointed out the lack of effective tools to balance education and work time. This is especially true during the surge of COVID-19 cases: 5.31M confirmed cases. Even worse, education on COVID-19 is continuously fluctuating as new information comes in, and many frontline healthcare workers, midwives, and nurses aren't kept up to date on these alterations and could be fed misinformation without appropriate education. In fact, on the African continent many health post workers still believe that the virus does not exist. New temporary healthcare workers are being employed, and they need to at least keep abreast of COVID-19 medical news. A few weeks back, more than 160 employees at Berkshire Medical Center in New England were furloughed for quarantine after possible exposure to the coronavirus from patients who tested positive. A temporary agency was asked to quickly hire 54 nurses who specialize in medical/surgical, intensive care, and emergency services. Hence, they need smart educational tools to quickly but effectively learn COVID-19 information in a remote way.
Hence, we became passionate about creating potential solutions to the question: "How can we improve medical education for medical personnel without comprehensive expertise, while still not chewing into their precious time?"
What it does
This app is a comprehensive tool to enable medical education in an active but time-efficient manner given the high burden of all healthcare workers and medical personnel. The medical education deals with drug treatment options, vaccine candidates, and other medicinal products and symptom-treating OTC drugs. The app leads to lists of the particular product category with search, filter options to help navigate. When an item is clicked, a COVID-19 dataset with reliable sources like Harvard Medicine and WHO is searched for the key word, and NLP is used to discover the overall sentiment of the drug in the medical world: "Positive", "Negative", or "Neutral." You get further insight on the most crucial information spotted in these huge articles, helping professionals to avoid wasting large amounts of time.
The information is curated and labelled automatically to pick out relevant phrases, keywords, sentiments, concepts, etc without manually reading through the millions of articles especially due to time constraints and the misinformation present on the web. The app also generates quizzes by looking for keywords in the articles and picking out sentences that will be turned into a question using our question algorithm (see below). Quizzes are a scientifically-proven model for active learning that ensures optimum retention by the user of the app.
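The sentence-to-question step of the quiz generation described above can be sketched roughly like this; the keyword-selection algorithm itself is the app's own, so this only illustrates turning a chosen sentence and keyword into a fill-in-the-blank item, with hypothetical function names.

```python
import re

def make_blank_question(sentence, keyword):
    """Turn a sentence containing `keyword` into a fill-in-the-blank item."""
    if keyword.lower() not in sentence.lower():
        raise ValueError("keyword not found in sentence")
    # Blank out the first occurrence of the keyword, case-insensitively.
    question = re.sub(re.escape(keyword), "_____", sentence, count=1,
                      flags=re.IGNORECASE)
    return {"question": question, "answer": keyword}

q = make_blank_question(
    "Remdesivir was among the first drugs trialled against COVID-19.",
    "Remdesivir")
print(q["question"])  # -> "_____ was among the first drugs trialled against COVID-19."
```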
Since a lot of medical education is now being delivered through webinars/web conferences/workshops for all types of people, there will be a page to register a webinar, and attendees can join through the app. Furthermore, the app is equipped with a feature that can connect to the recording of a webinar (in real time too!) and look for medical keywords during the webinar, fetching their definitions as well as visual diagrams using AI or host requests. The host can also see the engagement of his/her attendees through facial analysis features (if applicable), chat activity, the occurrence of asking and answering questions, etc., give reward/skill badges accordingly, and possibly adapt his/her teaching style based on this feedback. Another prospect (not fleshed out in this prototype) is to enable disabled people to leverage webinar sessions through text-to-braille and text-to-sign-language translators that have already been built by Microsoft researchers.
How we built it
The mobile app development was done using Android Studio using Java. IBM studio Watson Discovery helped to get pre-enriched data on the medical information requested through a special, reliable COVID-19 database. The related-words Datamuse database was called using Python. A model for facial analysis was built through Custom Vision AI Models that can be easily integrated to the app. The voice-to-text translation would be used for the keyword extraction and recording processing of the medical webinars.
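The related-words lookup mentioned above might look like the following in Python. The endpoint and its `ml` (means-like) and `max` parameters are Datamuse's public API, but the function names are our own, and the response parsing is split into its own helper so it can be seen (and exercised) without a network call.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

DATAMUSE_URL = "https://api.datamuse.com/words"

def parse_related(payload):
    """Extract just the words from a Datamuse JSON response body."""
    return [entry["word"] for entry in json.loads(payload)]

def related_words(term, limit=5):
    """Fetch up to `limit` words related in meaning to `term` from Datamuse."""
    query = urlencode({"ml": term, "max": limit})
    with urlopen(f"{DATAMUSE_URL}?{query}") as resp:
        return parse_related(resp.read())
```

For example, `related_words("vaccine")` would return a short list of semantically related terms ranked by Datamuse's score.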
Challenges I ran into
Many network connectivity issues with IBM Cloud services required a lot of testing and trying various online suggestions. Additionally, we had to find many images, from websites like Pixel, to complete the facial analysis model.
Accomplishments that I'm proud of
Ideated, brainstormed, researched, and created a prototype app in 24 hrs!
What I learned
Teamwork and work allocation; using Custom Vision AI to easily create image classification models; making an effective pitch; different coding SDKs and APIs
What's next for MediKnow
Personalised account and dashboard for quiz performances, webinar notes, and user preferences.
Language translation options
Diversifying the type of questions (currently only mcq and fill-in-the-blank)
Using automatically updating database
Implementation Plan
We hope to target mid-June to implement and test as many features and edge cases as possible. This might require expanding our team to get an extra developer who is familiar with these technologies. We have also started surveying doctors in our locality and got a 92% positive response. However, a more robust needs assessment will be carried out in parallel with the development phase.
KEY DEMOGRAPHIC
Mainly healthcare workers who require training and education like nurses and midwives who don’t have such comprehensive prior training.
Common people who want to seek medical advice
KEY RESOURCES
Along with Internet-available news resources, establishing connections with medical researchers, educators, and labs would be useful so they can champion and upload content to the app.
KEY PARTNERSHIPS
Some useful partnerships need to be also pursued for example collaboration with Google to have a medical education specific tool (although some of the tools developed here have the advantage of being applied in all types of education fields) as well as services from language translation services (possibly from Duolingo).
DIFFERENTIATION
There are many educational websites and webinars but our AI, NLP, Watson, Gamification, and Voice Processing tools lead to unparalleled, active, and time-effective learning, which is very urgent for many healthcare workers. There are no apps that provide short but relevant and processed medical information, and most people need to read through big articles and resources to find answers. There are no tools to carry out the webinar trends and helpful automated recording processing that we have performed.
BETA TESTING
We will also need to have a beta testing phase in which we will collect a lot of feedback from preliminary users about potential new features and failing functionalities.
MARKETING
Will have to initially invest to market our app mainly through social media platforms like Facebook, Twitter, and Instagram.
COST AND REVENUE
This app could also leverage significant partnerships by integrating advertisements for educational medical tools using a pay-per-click model (matchmaking of two services). According to InMobi, this could be around $2 to $5, or possibly 10% of each sale made through the app. Paying for the app itself might place a barrier to acquiring customers; hence, the app would have a freemium model, charging for additional analytics options (personalised dashboards, for example). Lastly, the app could forge a partnership with big hospital chains (like Apollo Hospitals) to promote participation in drug trial testing. Some portion of the money gathered here could be donated to coronavirus-related funds, especially research labs.
The main costs would be incurred while developing the app, marketing and outreach, using premium service offerings, incentives to make people our app champions and upload content to our platform.
Built With
android-studio
custom-vision
ibm-watson
java | MediKnow | A revolutionary medical education tool to enhance learning in a time-efficient and reliable way through AI-based simplification of COVID-19 medical information. | ['Zahrah Imani', 'sanah imani'] | [] | ['android-studio', 'custom-vision', 'ibm-watson', 'java'] | 3 |
10,095 | https://devpost.com/software/quarantimes-classroom-4jegl5 | Inspiration
There are many who don't have the privilege of attending online classes. India and many other countries have children who don't have the resources to attend school from home. Recently my housemaid mentioned how her son could not attend classes because they didn't have enough money to buy a laptop. This made me want to do something for him and kids alike.
What it does
Quarantimes Classrooms is a solution that can help many students. Most households in India have a smartphone with mobile data, but many of these families don't have enough money to afford a laptop and WiFi. Quarantimes Classroom is a platform that uses voice-over along with other interactive features such as a chat box, polls, and document sharing, but uses less bandwidth than other platforms and hence can be accessed by most people around the world. Even if you have only mobile data and no WiFi, you will be able to attend class.
How I built it
I thought about the problems people have been facing more recently. I realised that there are many people in India who don't even get the privilege to attend classes and thought that instead of creating something that might better our learning experience, providing that basic experience to the millions around the world would be more valuable to me even though it may be harder. I thought of the resources that such kids might have and came up with a solution using the limited resources.
Challenges I ran into
The biggest challenge was thinking of a solution that could help children with fewer resources. Solutions such as Quarantimes classrooms can bring the classroom home in so many households.
Accomplishments that I'm proud of
My biggest accomplishment was being able to come up for a solution for people with lesser resources and even though this may be hard to achieve, it can help so many kids around the world.
What I learned
I really liked the web design workshop I attended. The prompt also made me think about my privileges and the problems people might be facing daily because of Covid-19.
What's next for Quarantimes Classroom
Making Quarantimes Classroom more accessible is probably next. Reaching lower bandwidth is definitely the goal so that more and more children can attend classes. | Quarantimes Classroom | It is an easily accessible classroom for everyone! | ['Aarushi Dutta'] | [] | [] | 4 |
10,095 | https://devpost.com/software/bubble-p4hw6u | Sample Project
Built With
android-studio
java
xml | Random | Random | ['Chandrachud Gowda', 'Saksham Gurung'] | [] | ['android-studio', 'java', 'xml'] | 5 |
10,095 | https://devpost.com/software/quaranteen-aipkyn | Home screen
One of the pages
Contact info
Inspiration
We were inspired by our classmates who started an NGO to help underprivileged people during this pandemic
What it does
Gives an overview and sufficient information on anything related to COVID-19, quarantine, time management, personal skill development, etc.
How we built it
We used Adobe XD - a prototype creator to portray our idea with visuals
Challenges we ran into
1. Figuring out a new and alternate solution to all the available options that help education and communication about COVID-19
2. The prototype is under-developed due to time constraints
3. The video for the app demo took a lot of the given time
Accomplishments that we're proud of
Great teamwork and fresh design and thinking
What we learned
We learned how to use Adobe XD , and learnt great communication skills from this summit
What's next for Quaranteen
Developing an app based on the prototype and publishing it for general awareness
Built With
adobe-xd
Try it out
xd.adobe.com | Quaranteen | Run to the Roar | ['Manav Muthanna', 'Akash Kamalesh', 'indraneel acharya', 'Tarran Sidhaarth'] | [] | ['adobe-xd'] | 6 |
10,095 | https://devpost.com/software/meetsecured | Application Demonstration:
https://streamable.com/wvv36g
Demo
rohanpatra.com/MeetSecured
MeetSecure
Free and Secure Blockchain Video Conferencing - Based on the Jitsi SIP Framework
What?
MeetSecure is an open-source video conferencing platform. It is encrypted and running on the Ethereum Blockchain network for state-of-the-art security and privacy.
Video is transferred in two ways: the main video feed is transferred via the WebRTC protocol for a peer-to-peer conference, and compressed for added streaming speed using Google's Brotli algorithm.
Subsidiary video feeds are streamed via a decentralized system through Ethereum's blockchain.
All data is stored in the users' cache, and individual users are identified via hardware identifiers such as OS, PGP key, viewport, etc.
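The compress-before-transfer idea can be illustrated with a simple round trip. This is a hedged sketch, not the platform's actual pipeline: stdlib `zlib` stands in for Brotli (which requires a third-party package in Python), and the function names are our own.

```python
import zlib

def compress_frame(frame: bytes, level: int = 6) -> bytes:
    """Compress one video frame's bytes before sending it to a peer."""
    return zlib.compress(frame, level)

def decompress_frame(blob: bytes) -> bytes:
    """Reverse the compression on the receiving side."""
    return zlib.decompress(blob)

# Highly repetitive sample data compresses well, mimicking redundant frames.
frame = b"fake frame data " * 1024
wire = compress_frame(frame)
assert decompress_frame(wire) == frame  # lossless round trip
print(f"{len(frame)} bytes -> {len(wire)} bytes on the wire")
```

Brotli's `compress`/`decompress` calls have the same shape, so swapping the codec is a one-line change.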
Features
Free
Unlimited Users
Completely Private
No Logs
Fast Streaming
Screen Share
Technologies Used/Credits
Backend: NodeJS, webRTC, web3.js, Ethereum Blockchain, PostgreSQL
Frontend: Bootstrap, ReactJS
Google Brotli Compression
SRTP-DTLS Encryption
8x8's Jitsi SIP Framework
Frontend React UI Library
In-Browser Video Processing/Decryption/Decompression
ethereum-connect.js
Cloudflare Rate-Limiting and DNS
DigitalOcean Kubernetes Cluster (Increase Room Sizes)
LivepeerJS
Communicate with the Ethereum network and implement live, instantaneous video transfer over blockchain
Coming Soon...
Mobile Apps
One-Time Use Links
Secure In-Chat File Sharing
Realtime Closed-Captioning
Text-to-Speech Via Chatbox
REST Api
reCaptcha V3
AI-Based Compression Algorithm Decisions on-the-fly
Flask Universal App
Built With
blockchain
bootstrap
brotli
html
javascript
livepeer
lua
node.js
react-native
shell
webrtc
Try it out
github.com
streamable.com | MeetSecured | Free and Secure Blockchain Video Conferencing - Based on Jitsi SIP Framework | ['Rohan Patra'] | ['First Place'] | ['blockchain', 'bootstrap', 'brotli', 'html', 'javascript', 'livepeer', 'lua', 'node.js', 'react-native', 'shell', 'webrtc'] | 7 |
10,095 | https://devpost.com/software/hackathon-covid-19-life-planner | Cover for our website
Inspiration - We wanted to keep peoples life organized.
What it does - It keeps your day on track.
How I built it - We built it through HTML.
Challenges I ran into - We couldn't get the font we wanted.
Accomplishments that I'm proud of - Star scout in Boy Scouts, FLL robotics, Football, and Basketball.
What I learned - I got a better understanding of HTML.
What's next for Hackathon Covid 19 Life Planner.
Built With
html
Try it out
hackathon-project--dhruvsuresh.repl.co | Hackathon Covid 19 Life Planner | We will build a website that keeps people organized and plans their daily routine out for them in this covid pandemic. | ['Deepak Ananth', 'Sahil Gandhi'] | [] | ['html'] | 8 |
10,095 | https://devpost.com/software/quarantimes-classroom-yb9n3z | Inspiration
Bad internet making Zoom calls unusable
What it does
Allows teaching over slow internet speeds without loss of quality.
How I built it
Its a concept as of now
Built With
node.js | Quarantimes Classroom | A text based Classroom | ['Arth Gupta'] | [] | ['node.js'] | 9 |
10,095 | https://devpost.com/software/godonate-food-donations | Home Page Screen
Finding Location Screen
User Profile
Donation Request Sample Screen
When the hackathon began, I immediately focused on the portion of the prompt revolving around serving underserved populations and fostering communication among them. Among the populations that suffered the most from the virus, as mentioned by guest speaker Kameron Rodrigues, minorities living in rural areas across the globe not only face the adverse health effects of the virus, but also face a major issue of food shortages.
That is why I set out to come up with an idea for solving this issue by leveraging technology that everyone can use, so that anyone can participate in solving global hunger without leaving their home. Once the hackathon began, I sat down and started designing the "GoDonate" mobile app.
Here is what I created since the Hackathon began:
Using my mobile app, people can send their donation request to nearby registered volunteers as well as to nearby food banks and other non-profit organizations, based on their choice. Once the organization or individual volunteer receives the request on their mobile phone, they will send representatives to come pick up the food items and provide them to a homeless shelter, food bank, or other institution that feeds food-insecure people. The process to donate is efficient and secure, and I believe this is a very unique concept that may be the first application of its kind on mobile. In under two minutes, anyone across the globe can fill out a food request form and have a nearby volunteer in their area come and pick up their food items.
Using my non-profit organization ShareandChange.org and its own mobile application that is currently fostering blood donations, I have access to a wide breadth of partnerships with reputed organizations such as VITAS hospice and the Shepherd’s Gate Foundation. Since I had only very short time to come up with an incredible idea of solving global hunger issues by leveraging a technology that everyone can use , I developed this food donation module as an extension of my already developed application called “Donate Blood”. This allowed me to implement and deploy it directly into the existing app infrastructure. I minimized my implementation cost and timeline by leveraging open source technologies.
As previously mentioned, I used Swift Language, Apple X-Code, Google Places API, Mongo DB Database, Apache Tomcat Server, Amazon Web Services (AWS), and Photoshop to develop my project.
Specifically, I used the Xcode IDE from Apple to develop the application using the Swift programming language. I additionally utilized the open source MongoDB to store the information for easy retrieval and global scalability. Another open source application server from Apache, called Apache Tomcat Server, was also implemented. I used the Google Places API, which enables users to find their current location as well as search for nearby food banks and non-profit organizations. I developed the front-end UI screens with Adobe Photoshop. I secured database access in AWS against hackers so that no one other than me can access the data, ensuring security for potential users of my service.
Built With
amazon-web-services-(aws)
android-studio
apache
apache-tomcat-java-implementation-as-our-rest-web-service-to-pass-all-our-data
apache-tomcat-server
apple-x-code
google-places
mongo-db-database
mongo-db-database-program
photoshop
secure-socket-layer-(ssl)
swift-language
xcode | GoDonate Food Donations | COVID-19 has elevated global food shortages and caused disruptions in the supply of food to underserved population across the world. My GoDonate app will resolve this issue and enrich their lives. | ['Rahul K'] | [] | ['amazon-web-services-(aws)', 'android-studio', 'apache', 'apache-tomcat-java-implementation-as-our-rest-web-service-to-pass-all-our-data', 'apache-tomcat-server', 'apple-x-code', 'google-places', 'mongo-db-database', 'mongo-db-database-program', 'photoshop', 'secure-socket-layer-(ssl)', 'swift-language', 'xcode'] | 10 |
10,095 | https://devpost.com/software/coronalert-mi58ka | Inspiration
Not many countries use this type of method to alert people about COVID-19. We can use technology to inform more people and since Google Maps is a very commonly used app, using google maps API might have been a good idea. If people become more aware, the spread will be decreased.
What it does
Reads CSV files and places markers on Google Maps with transparent circles around them. It also shows alerts depending on user location. The circles are color-coded red, yellow, and blue; red is the most critical while blue is the least.
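A minimal sketch of the proximity-alert logic, in Python rather than the app's Java: the haversine formula gives the distance between the user and a marked zone, and the severity-to-color mapping is an assumed encoding of the red/yellow/blue scheme described above, not the app's actual data format.

```python
import math

SEVERITY_COLORS = {3: "red", 2: "yellow", 1: "blue"}  # assumed severity scale

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def should_alert(user, zone, radius_km=1.0):
    """True when the user is inside a marked zone's alert circle."""
    return haversine_km(user[0], user[1], zone[0], zone[1]) <= radius_km
```

Each CSV row would supply a zone's coordinates and severity; the app then draws the circle in `SEVERITY_COLORS[severity]` and calls `should_alert` against the user's current location.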
How I built it
We used the Android Studio and Google Maps API. We would all be in a meeting and keep discussing what to do further.
Challenges I ran into
For some reason, we couldn't read any CSV files, which was very annoying. We googled for hours and hours to fix this issue and finally managed to do it. We think the reason was that since the app runs on an Android phone, the path to the file is different from the path on the development computer.
Accomplishments that I'm proud of
We spent hours and hours trying to read a CSV file. After googling for a long time, we finally fixed the issue, which was very refreshing. Furthermore, the fact that we made this with something we had no experience with was fun.
What I learned
We learned quite a lot. We had never used Android Studio or the Google Maps API before; this was our first project involving Google Maps. This was also our first hackathon, which was very interesting. We also learned that not all group members have to be developers to make something cool.
What's next for Coronalert
Since it reads from CSV files, it might be better if we could use a website or some other programs that helps us add data to CSV files. Right now, we would have to add data manually.
Built With
android-studio
google-maps
java | Coronalert | Coronalert gives alert when user is near a zone where COVID-19 carrier was at. | ['Jihoon Hwang', 'Jongheon (Marco) LEE', 'Hyeok Kwon'] | [] | ['android-studio', 'google-maps', 'java'] | 11 |
10,095 | https://devpost.com/software/anonymous-hope-platform-team-17 | Anonymous Hope Home Page
Code for the Google Maps API
Code for the Links to Sidebar + Navigation Bar
Inspiration - The huge issues occurring around job loss and mass layoffs, causing massive financial and emotional struggles for millions of Americans, inspired us to create a platform to aid them and provide support in the form of food, shelter, or education. We also ensured that the platform involves the entire community, as everyone can access it and learn something, and also donate and do their part for society. We also noticed that such a platform isn't very comprehensive or effective anywhere on the internet, so we wanted to start our own initiative to fill that gap. With the help of other young leaders, we can truly improve and expand our platform with the donations and support of the community.
What it does - Our website provides people in need with information for their local food banks and shelters. It provides a streamlined interface for extending and adding new features to allow everyone to connect and virtually help. It incorporates location and necessity for those using the website to provide them with local places they can use to get resources and necessities they cannot afford. Finally, it provides multiple resources for educational purposes that are credible sources that spread the right information.
How I built it - I used Repl.it with HTML and CSS, utilizing Bootstrap and JavaScript to maximize the features available for us to use. To track the location of the user, with their consent of course, we used the geolocation feature of HTML and embedded Google Forms and the Google Maps API. Additionally, we incorporated links to external videos to consolidate information on our website and provide gateways for our users to find proper, credible information.
Challenges I ran into - The largest challenge was thinking of an innovative, creative, and feasible idea that has not been implemented yet while also making an impact on society. Another challenge was incorporating the different aspects such as education and communication into our project. Formatting on repl.it also was at times very problematic and the server crashed many times. Finally, we also had trouble with the formatting of images and coding different special features such as menus and images. Figuring out the geolocation and embedding different API's should also be an honorable mention here since it took lots of researching and very tedious work. The nature of html and the tedious focus necessary also made all of these tasks more challenging.
Accomplishments that I'm proud of - We are proud that we were able to use more advanced concepts such as API's and learning the different special aspects of html such as bootstrapping that we never knew of before but made our lives so much easier. In addition, we are proud of just spending the day coding and learning and expanding on our knowledge of html and css. Finally, our innovative idea and the effort we spent thinking of and implementing this idea to help others made us very proud.
What I learned - I learned more about html and css and the special features that we incorporated into our project. In addition, we also researched into the effects of the coronavirus and that inspired us to create a website oriented towards helping others who have not been so fortunate. The workshops were also very interesting and useful in helping us combat not only the problems we faced today, but more advanced concepts for coding that we may need to learn in the future.
What's next for Anonymous Hope Platform - Team #17 - We plan to continue expanding our website if it does gain traction on the internet and continue advocating for and promoting helping others and society in general. The website has a lot of growth potential and there are many more innovative ideas that can be implemented that would just take time and effort. Recruiting more people and potentially even allowing organizations to utilize our website to take it to the next level are all ideas that we can use in the future to help others and improve society.
Built With
css
google-form
google-maps
html
Try it out
anonymoushope--harshilsrome.repl.co | Anonymous Hope Platform - Team #17 | Keep Calm and Participate in the Charity | ['harshilsrome Shah', 'Siddharth Bid', 'Aniket Sheth'] | [] | ['css', 'google-form', 'google-maps', 'html'] | 12 |
10,095 | https://devpost.com/software/a-brighter-tomorrow | An interactive and visually appealing title page
A reward system to keep students engaged
Interactive place where students can communicate with teacher or each other
Website plans and sorts student assignments so students don't procrastinate
Inspiration
All of us have worked with special-ed students in the past or have family/friends who are in the Special-Ed program. We know that students with learning disabilities often take a lot of effort to teach. In the midst of this Covid-19 situation, we realized that traditional classroom platforms like Google Classroom and Zoom don't serve Special-Ed students well and hinder their ability to concentrate. Special-ed students need more support (and a better platform), especially during remote learning, and this inspired us to take action to help these students.
What it does
The website is meant to provide students with learning disabilities a platform to work on assignments and projects, as well as a place where teachers can easily monitor progress and assign work. The website allows these students to communicate in more ways than one so that even though they might not be able to speak, they can express how they feel to teachers. Additionally, the website is interactive and allows students to communicate personally with their teachers and classmates. The website is also meant to help users stay focused with reward systems and daily relaxing exercises that help students stay on task and finish work on time.
How we built it
We used a web design tool called Figma to design a user-friendly interface and used some HTML and CSS to make the website more functional for pressing buttons and chatting. It is definitely a prototype right now.
Challenges we ran into
The process of making the website was long, the platform had multiple components, and we needed time to express our ideas and communicate fully, although we did so in the end.
Accomplishments that we're proud of
The video came out how we wanted to, and the web page looked exactly as we envisioned. We are proud of our initiative to help these students who deserve the same quality education we do.
What we learned
We learned how to design web pages, and we also learned how to implement ideas into HTML and CSS.
What's next for A Brighter Tomorrow
Bringing the idea to fruition would be a great next step; it would take time to fully build out in HTML, but then we would be able to publish the website and help special-ed students around the country. Another step would be to implement it for students at our school, Dougherty Valley High School, and eventually spread it across our district, SRVUSD, and our state.
Built With
css
davinciresolve
figma
html
Try it out
www.figma.com | A Brighter Tomorrow | Making Online Special Education Possible | ['Nivedha Kumar', 'Vaishnavi Himakunthala', 'Tyler Dee', 'Aniket Dey'] | [] | ['css', 'davinciresolve', 'figma', 'html'] | 13 |
10,095 | https://devpost.com/software/quik-study | Online learning poses many challenges for students due to the unorthodox learning environment and the lack of face-to face communication. Many students struggle with the steep learning curve and end up falling behind in their classes. We decided to combat this issue by creating a customized resource database. It pinpoints the areas that the student is struggling with, and provides materials to help the student understand the concept. It is a four-step process. First, the student takes a diagnostic test which is created by their teacher. This ensures the content is curriculum specific and catered to the student. Next, our website identifies the questions they got wrong and informs the user of the concepts they need to work on. Then, the website provides resources like videos, forums and websites about the areas of improvement that were identified. Finally, the student can retake the quiz after using our materials and reflect on their progress. We used Python and HTML to program our website. One challenge we ran into was programming the website in such a way that it directs the user to resources for the appropriate concept. We had to write code to associate the questions with certain concepts to achieve that. We are proud of our idea and we believe it will make a positive impact on our community. We learned how to work well as a team and build off of each other's ideas to create a design we are all passionate about. Some future developments to our website could be to add practice games, this would cater to students who are interactive learners. We were also considering creating some form of a points/currency system, where correct answers are rewarded. The student would then be able to spend the coins their website avatar. Finally, adding a class/teacher discussion board embedded in our website could be a future prospect for our project.
Built With
html
python | Quik Study | A website that is a customized resource database. | ['Mitali Mittal', 'Sahana Ravula'] | [] | ['html', 'python'] | 14 |
10,101 | https://devpost.com/software/clubhouse-l203nm | Participate in after-school activities in a safe and secure manner
Interact with like-minded individuals in a space moderated by teachers
Build student communities in a club-like setting
Inspiration
With COVID-19, schools have had to move online. As a result, there has been a loss of the social aspects that come with school. My brother told me all about his new learning situation. Students from all over a school board were placed together in classes, making it difficult to connect with new classmates. Students rarely interact with each other during class. And when the school day ends at 3pm, there are no after-school activities.
Students become bored after school and feel isolated. Parents are worried about younger children, needing them to stay safe and busy while they finish up their work day. Teachers have ideas and the ambition to help students, but don’t know where to start or how to organize activities.
How might we connect students and create a sense of “togetherness” in a school setting while protecting everyone’s privacy?
What it does
ClubHouse is a secure, online platform meant to build student communities within school boards. It allows students to meet and interact with other like-minded students in a club-like setting that is supervised by teachers; just like in-person clubs.
Students can meet individuals that they never would’ve had the opportunity to meet before and bond over shared interests by creating mini-communities, or “clubs”. In addition, any student can request to create a club, so they are free to express their passions. Overall, ClubHouse revives students’ sense of belonging and connection, contributing to better physical and mental health.
The second key problem is privacy. Students can meet each other through clubs and each club is approved and supervised by a teacher within the board. In addition, no student information is collected other than what is already collected by their school board, such as student IDs and emails. Students’ usernames and profile pictures are also anonymized to make it even safer. To top it off, students under 16 years old will require parental consent, ensuring that their parents/guardians know what they’re up to.
For parents, ClubHouse reassures them that their children are making friends and learning from their peers in a secure manner, almost like a bubble where only students and teachers with the school board are allowed in. For parents of younger students, they gain back an “extra hour” of their day since they don’t have to watch over them while they’re on ClubHouse, which is much needed for parents who are stressed and overworked especially while working from home.
For teachers wanting to go the extra mile to help their students out, ClubHouse presents an easy way for teachers to get involved in a high impact way. Teachers directly help by moderating the clubs students create to ensure a safe space for all individuals, no matter who they are.
How we built it
Initially, we explored a variety of problem spaces including online proctoring, caring for patients in hospitals, and domestic abuse. We took a problem-centric approach so that we knew that when we came to a consensus on a topic, we wouldn’t be submitting to solutionism. The one that stuck out to us was educational technology, or ed tech. As high school students not too long ago, we thought of how online schooling and the lack of extracurriculars would have negatively impacted our wellbeings. There’s no sense of community or the support that comes with it.
We spoke to younger students and searched online to see how schools and school boards have dealt with this social aspect of school in an online setting, only to find no existing, cohesive solution. Using this, we crafted user personas and stories of what students, teachers and parents were looking for. This helped us identify different goals, pain points, and needs that we need to address in our design.
To narrow the scope of our project and prioritize features, we followed a structure similar to the Kano model by first listing out the basic features that users essentially expect from our platform. We created wireframes to visualize all these ideas and then as a team of 5, we created the prototype using Figma.
Challenges we ran into
With the prompt of privacy, we initially ran in circles trying to find a problem space we wanted to tackle that would fit the criteria. It made it difficult to move past the phase of defining a problem we could tackle in the limited time frame. When we began to focus more on the connection aspect of the prompt, this helped open our eyes to the possibilities we could tackle. We thought some more about our current situation and own experiences to settle on designing a solution for middle schools and high schools.
Another challenge we ran into was the abundance of features we wanted to prototype for our product idea. With the limited time we had for this designathon, we had to prioritize the features we wanted to create. This helped us set a direction for creating the foundation of our product and communicate the essential features.
Accomplishments we are proud of
We are proud of completing a high fidelity prototype with several microinteractions and illustrations to build our product. Also, we are proud of finding and bringing together a unique combination of features to fill a gap in the market.
What we learned
We learned that the research and ideation phases are very important parts of the design process. While we spent a lot of time in these phases, it was very helpful in assisting us to identify the problem space we ultimately wanted to tackle through a privacy-centric approach.
What’s next for ClubHouse?
ClubHouse can be expanded further to incorporate more features. We’d like to explore how ClubHouse can be used for more established clubs and organizations outside of schools. We’d like to see if this is something we can work on and eventually bring to market.
Built With
figma
Try it out
www.figma.com | Clubhouse | Clubhouse is a secure, online community platform for students to meet and interact with like-minded friends in an after school club setting which is supervised by teachers. | ['Julia Sim', 'Jayden Hsiao', 'Krystal Truong', 'Emily Louie', 'Leon Han'] | ['1st place team'] | ['figma'] | 0 |
10,101 | https://devpost.com/software/bondfire | Bondfire is a real-time interactive platform for isolated college students that builds meaningful connections through the power of storytelling
. We aim to transform the way modern conversations are held through the inspiration of in-person, personal campfire experiences, which positions this concept as highly differentiated and viable (compared to all current social media and communication platforms).
Figma Prototype:
https://www.figma.com/proto/HlxvV5LhGUpNmVLLRePcaO/Bondfire?node-id=16%3A77&viewport=-89%2C98%2C0.10701365023851395&scaling=scale-down
Pitch Video:
https://www.youtube.com/watch?v=L1hdqBZjCjU
Pitch Deck:
https://www.dropbox.com/sh/xdtc6g70w4u8q8n/AAASWr7sj9r_UivRelIMTBoUa?dl=0
To summarize, Bondfire incorporates the following privacy functionalities into the solution through various privacy frameworks:
Visual privacy policy to make the features salient and simple for users to understand and consent to at the outset
Multi-level authentication to funnel through a network of verified and trustworthy users to the platform
End-to-end encryption of data to ensure that what is said at the campfire, stays at the campfire
Machine learning algorithm (through Natural Language Processing) to filter out profanity and negative behaviour in Campsites
Privacy reminders at every touchpoint to reassure users that we really care about the safety of the campfire environment; removal of any dark patterns
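The NLP filter above is a design-stage idea; as a crude stand-in, even a simple keyword check conveys the shape of the moderation hook (the word list and names are made up):

```javascript
// Crude placeholder for the proposed NLP moderation: flag messages that
// contain blocked words. A real system would use a trained model instead.
const blockedWords = ["jerk", "loser"]; // illustrative list

function moderateMessage(text) {
  const lowered = text.toLowerCase();
  const flagged = blockedWords.some((w) => lowered.includes(w));
  // Flagged messages are withheld; clean ones pass through unchanged.
  return { flagged, text: flagged ? null : text };
}
```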
From a user experience perspective, Bondfire draws further parallels to the real-life campsite concept by integrating categories, story prompts, interactive elements (e.g. smores), and options to add more firewood to keep the deep talks going!
Built With
figma
Try it out
www.figma.com
www.dropbox.com | Bondfire | Lighting the Flame for Meaningful Conversations | ['Vedant Patel', 'Elyssa Smith', 'Daniel Hanick', 'Sam Wong', 'Disha Kanekar', 'Sherry He'] | ['2nd place team'] | ['figma'] | 1 |
10,101 | https://devpost.com/software/secretum | We provide a detailed description of our prototype in our Github repository and a clickable mock-up of our interface. The mock-up can be found in our repository or at:
https://www.figma.com/proto/s1ChoOeVNbsq36lBEMDh6U/Secretum?node-id=1%3A263&viewport=748%2C345%2C0.19068248569965363&scaling=min-zoom
The Github repository can be found at:
https://github.com/mpafla/secretum.git
Built With
figma
powerpoint
Try it out
github.com | Secretum | Secretum is an open source, decentralized social media platform FOR and controlled BY users. Our goal is to make digital activism safe and sustainable for everyone in COVID19 times and beyond. | ['Stanley Valitchka', 'Marvin Pafla', 'Alessandra Luz', 'Utsav Das'] | ['3rd place team'] | ['figma', 'powerpoint'] | 2 |
10,101 | https://devpost.com/software/talpa | Introduction
In a world where generational change happens cyclically and continuously, intergenerational conflicts arise due to the systems and stereotypes that we face. Alongside this, we can benefit from the knowledge sharing found in intergenerational collaboration and learning. Social media helps provide a way to bridge the gap between all people, regardless of social context (such as a global pandemic) or generational difference. If you desire to connect with someone, it can feasibly be done without much effort.
But that can come with a cost, and a hefty one at that.
In order for its algorithms to function optimally, social media platforms like Facebook, Twitter, and more require a plethora of publicly available information. That information, while unassuming on the surface, can wreak havoc in an unprepared and unwitting user’s life. And the worst part? They might not even be aware they’re giving the social media platforms they use all of the permission they need to do it. Privacy settings are currently clumsily arranged and often thick with unintelligible jargon that is inaccessible to lay folk. Tech savvy users who have the time to custom tailor their privacy settings while still receiving an optimal social media experience may be able to navigate this problem with ease, but that’s not something that you can expect across the broad spectrum of users that each of these social media platforms enjoy.
Enter Talpa
Talpa is a privacy-conscious product that reduces the negative impacts of the increased use of social media due to the physical and mental barriers the people are facing during the pandemic and social distancing requirements. The negative impacts caused by the heightened use of social media sites like Instagram, Facebook, Twitter, and more revolve around the increase in data being shared and the increasing importance of privacy literacy among all social media users. This is especially true for users who are not tech-savvy and require more assistance to decipher the plethora of social media sites and each of these users’ privacy settings.
Using social media can connect all of us to the people we want to be connected with, and block those that are trying to harm us with a few critical tweaks to the privacy settings found on each platform.
How does it work?
Talpa is simple. Users begin by choosing a social media platform of their choosing and answer a brief questionnaire that asks them about their various privacy tolerances in a way that is easy to understand and not filled with legalistic jargon. Users then receive a Privacy Risk Profile assessment. They can then take action on this assessment by changing their own social media privacy settings. Or users can pay for a simple and continuously improving automation system that will automatically detect changes in privacy settings, update their settings according to their Privacy Risk Profile assessment, and change their assessment at any time.
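To illustrate the assessment step, a Privacy Risk Profile could be derived by weighting questionnaire answers; all question names and weights below are hypothetical:

```javascript
// Hypothetical scoring sketch: each risky setting the user tolerates adds
// a weight; the total maps to a risk label shown in the assessment.
const questionWeights = { publicProfile: 3, locationSharing: 4, adTracking: 2 };

function riskProfile(answers) {
  // answers: e.g. { publicProfile: true, locationSharing: false }
  const score = Object.entries(questionWeights)
    .filter(([question]) => answers[question])
    .reduce((total, [, weight]) => total + weight, 0);
  const label = score >= 6 ? "high" : score >= 3 ? "medium" : "low";
  return { score, label };
}
```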
Who is Talpa for?
We are primarily catering our product to two extremes: Younger users who are tech-savvy but are not experienced enough to understand the consequences of their privacy insecurity and more elderly users who may have the experience to understand the consequences of being too public with your personal information but do not know how to reflect this sentiment to their social media accounts.
How does Talpa tackle privacy at its core?
Our app is proactive and has privacy intentionally embedded into its user flow and overall design. It does not need any personal data from the user up-front in order to work. Its core functionality will work even when on Incognito mode because the information being provided by the user creates generalized and openly available resources that users can act upon on their own.
The only time the app does require personal information from the user is during the automation phase, wherein users can choose to pay for varying degrees of social media privacy security through automated privacy settings customizations. At this point, it is the legal responsibility of our company to provide confidentiality and protection to our paying users, and we will detail the privacy impact of our paid services as well as provide a risk assessment of the potential drawbacks of using our software to automate their privacy settings. These risks will be mitigated through a chain of security principles and data anonymization.
According to the GDPR, anonymous data constitutes “information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable.” Our software will only require the typical billing details required for users to process their payments, the user’s email, and access to ONLY the user’s privacy settings on each of their authorized social media platforms. The user can choose to turn these settings on or off at will and change their privacy risk profile at any time without incurring any fees or penalties.
Within our platform, only one critical failure point exists and has been intentionally added into the design of the paid version of the product: The automation. The privacy setting automation requires users (in a hypothetical API supported version of each social media platform) to provide permission to change their privacy settings. No other data needs to be gathered aside from purchase information and billing details and the information provided to get their Privacy Risk Portfolio assessment which, in itself, is not inherently telling of a user’s individual personal information. It only requires users to provide information about HOW our software should interact with their social media privacy settings (settings which are typically dials and knobs and does not require information that can be specifically cross-referenced back to the user, hence anonymization of the data), not the specific individual information that the user has in their social media platform.
Built With
figma
miro
Try it out
www.figma.com
miro.com | Talpa: Privacy Settings Made Simple | Our product is a privacy-aware software that enhances the privacy of its users according to their unique needs and preferences through custom-tailored privacy setting recommendations. | ['Daphne Lai', 'Kalil Magtoto', 'Grace Enns', 'Emma Tian', 'Shannon C'] | [] | ['figma', 'miro'] | 3 |
10,101 | https://devpost.com/software/the-big-other | Inspiration
We found inspiration for our concept from the game “Among Us” and social media giants. We strive to provide a safe place where students can learn about the internet and how to protect their data privacy.
What it does
The goal of our project is to provide children with interactive learning on data privacy online. We chose to focus on this demographic because as younger generations are born and raised in the social media revolution, it is important for them to be aware of the privacy breaches and data collection that happens on social platforms.
How we built it
Our prototype was made in Photoshop and Figma. We found this was the most efficient and user-friendly way to work in teams while in different locations.
Challenges we ran into
Doing background research about privacy regulations, as well as having teammates in different parts of the world made the meeting times and work periods more challenging than usual.
Accomplishments that we're proud of
After doing thorough research and consulting with our mentors our final prototype is something that we are very proud of. We polished our design after going through different stages of prototyping, starting from low fidelity hand-drawn wireframes to a high fidelity MVP which is optimal for the users.
What we learned
This design challenge has given us the opportunity to go more in depth on how to incorporate responsible data collection into our interface. We believe that users have a right to their data being kept private and this project is a step in that direction. We polished our skills in different types of prototyping, as well as had a chance to collaborate with people in different time zones and produce an MVP within a short period of time.
What's next for The Big Other
In the future we would like to add more levels and difficulties to the game. It would be ideal to touch on more topics so that the user is well informed on all aspects of data privacy. Beyond this hackathon, we would also love to continue working on our idea and pitch it for elementary and middle school education in Canada and internationally. From a young age, children in Canada are introduced to their rights. The right to online privacy is something missing in the curriculum, and it should not be any less important than the rest.
Built With
figma
paper
pen
photoshop
Try it out
www.figma.com | The Big Other | Multiplayer Interactive Game That Solves The Problem Of Loneliness And Builds The Fundamentals Of Privacy Consciousness. | ['Amina Makhmudova', 'Grace Yip', 'Valerie Nault', 'Montse Herrera', 'Irfan Mostafa'] | [] | ['figma', 'paper', 'pen', 'photoshop'] | 4 |
10,101 | https://devpost.com/software/tertu | Inspiration
Tertu comes from the Spanish word for gathering and conversation, which is our vision for the app. We were inspired by our own nostalgia for meeting strangers and becoming fast friends. We want to evoke the same sense of spontaneous conversation and discussion offered by coffee shops prior to COVID 19, while giving our users complete control of their personal information.
What it does
Tertu is a platform that enables connections between complete strangers based on their shared interests. It's part chat app, part friend finder, and entirely anonymous. To see how it works, let's follow the story of IN24M56 and 21SD54G, two members of the Tertu community.
Both of these Tertu members have been assigned unique IDs registered to the Tertu public directory, neither of them had to provide an email, phone number or even a name to join this community. However, 21SD54G chose to provide their email address, allowing Tertu to partly verify their account.
Neither of these members have met, but both have indicated their interests in DJ Khaled and music history. Once they hit the find a new friend button they are quickly matched based on their interests. At the top of their screen they see a prompt based on their shared interests, 'What do you think DJ Khaled would say to Antonio Vivaldi?' 21SD54G is a quick texter and sends a message about their thoughts on DJ Khaled and The Four Seasons. This is the spark of a new friendship, and twenty messages later, both of these members have hit the Become Friends button.
Once both of these friends hit the button, their chat is no longer anonymous, the information they've chosen to give and make visible is shared. 21SD54G becomes Taylor and IN24M56 is revealed as Alex.
Tertu has brought two people together from across the world, based on their shared interests. Taylor and Alex continue to talk to each other through Tertu because they know that Tertu is both private and secure. Taylor feels empowered by the fact that he has total control over his private data. Every piece of personal information is optional to give and revocable at any time. Alex enjoys the security provided by Tertu through end-to-end encryption. All messages are encrypted and sent directly to their recipient, stored only on that recipient's device.
Tertu is built on the principles of bringing people together while ensuring they have complete control of their privacy.
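Two ideas from the story above, random member IDs that carry no personal information and matching members on shared interests, can be sketched roughly like this (all names illustrative, not the product's actual code):

```javascript
// Hypothetical sketch: an anonymous 7-character member ID (in the style of
// "IN24M56") and a shared-interest match between two members.
const ID_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

function newMemberId() {
  // IDs are random, so registering one reveals nothing personal.
  return Array.from(
    { length: 7 },
    () => ID_CHARS[Math.floor(Math.random() * ID_CHARS.length)]
  ).join("");
}

function sharedInterests(memberA, memberB) {
  // Interests the two members have in common, used to pair them up.
  return memberA.interests.filter((i) => memberB.interests.includes(i));
}
```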
How we built it
Tertu is built by a team of five members from around the world using Miro for research and wireframing, and Figma for design and prototyping.
Challenges we ran into
The greatest challenge for the team was creating a system that allowed for private and anonymous conversations while still being a safe and welcoming community. Our solution for this challenge was a unique per-contact verification system. Our team chose to make anonymity the default: every piece of personal information is freely given and revocable by the user. Personal information is used to verify the identity of our community members, increasing the safety and trust of every community member. On Tertu we all begin anonymously; an email or phone number verification lets the community know we're human, and by providing additional personal information, the community's trust in you increases. Members can search by trust level, knowing they remain anonymous until they hit the Become Friends button.
Accomplishments that we're proud of
We are proud to have created an online experience that combines community with superior privacy settings. Through Tertu, anonymity is a vehicle to friendships. We created an experience that empowers people and returns them control of their personal information. Tertu turns the table on data privacy and returns control to the people.
What we learned
Creating Tertu has taught us so much. We learned about cryptography and how to protect anonymous users. We researched ways to provide user verification while collecting as little data as possible. To create Tertu we had to rethink our assumption on privacy, social media and personal information. We learned that people can and should have the power to control their personal information and the giving of personal information should be a revocable choice.
What's next for Tertu
Tertu has the potential to create millions of new connections between complete strangers from all around the world. Our team has created a potential marketing plan using influencer marketing to drive app downloads. Imagine connecting randomly with celebrities and influencers from around the world. We hope that Tertu shows people that privacy is a right. Instead of being hidden, privacy settings should be easily accessible. Privacy policies should be written in plain English, not as a boring legal piece. Our personal information is our currency in this digital age, and we should have the right to give it and take it back.
Built With
figma
material
miro
Try it out
www.figma.com | Tertu | Tertu is a secure messaging platform where anonymity is the default | ['Sabine Kwan', 'Ethel Zlotnik', 'Kira (Shiqi) Xie', 'Peter Pearse-Elosia'] | [] | ['figma', 'material', 'miro'] | 5 |
10,101 | https://devpost.com/software/facebowl | Carl, a major jerk
Alex, an impressionable kid
Evil Detective, a mysterious entity
Susan, a prickly conservative
Kat, a half-fledged adult
Inspiration
Our inspiration was the way Facebook often quietly enacted decisions that made users' information less private. We were also inspired by a variety of anecdotal examples of the power of surveillance capitalism: for example, when a teenage girl received pregnancy ads before she knew she was pregnant herself, or the time GameStation changed their Terms & Conditions so that shoppers gave away their souls.
What it does
It's a game! A social game. The players can search for good privacy measures, or be beaten by the dark pattern powers of Facebowl.
How we built it
Screens were built in Figma, then animated in After Effects in place of a prototype for demonstration.
Challenges we ran into
Narrowing our focus was difficult, since the idea of a social game could mean a number of different things and different methods of gameplay. Also, making sure the criteria that led to a win state was equal for both sides of the game was difficult, since it always seemed skewed to one side.
What we learned
Watching the introductory talks introduced us to dark patterns and defining the prevalence of them in our day-to-day lives. The ethics in privacy workshop was particularly enlightening in taking considerations of our users, and the messages, both implicit and explicit, we send with our work.
What's next for Facebowl
Like real-life social media, we would want to develop the advertisements more so that they would pertain to a very specific facet of a user's identity. Developing a variety of posts that might enter the news feed would lengthen the time period of each round and give further insight into each character.
The main select screen UI, a gamebook-like onboarding, and having informative losing and winning screens would be the next components to work on as well.
Built With
figma | Facebowl | a social game | ['Krystle Tang', 'Rachel Xu', 'Amanda Ding', 'Amanda Du'] | [] | ['figma'] | 6 |
10,101 | https://devpost.com/software/fitnet-a-network-for-fitness-lovers-presented-by-stopp | Inspiration
We are all fitness lovers who have struggled to stay active and share our passion for fitness during the lockdown, so our communal passion for fitness led us to create FitNet, a network for fitness lovers! On top of this, we also added a redesign of terms and conditions because we are equally passionate about data rights and privacy. Terms and conditions documents have been too confusing for far too long; we hope that our innovations in terms and conditions will help structure future policies around these documents.
What it does
FitNet is a community for fitness lovers to share their passion for fitness through workouts, recipes, inspirational posts, and more! What separates this app from any other fitness app is its anonymity: all your data is completely secured, and there is no location tracking of any kind. Your identity is totally secure, so you can enjoy your healthy lifestyle while maintaining peace of mind about your data!
How we built it
We collaborated using Zoom and Figma to accomplish all of our design goals. We split up into two sub-teams: one for app design and one focusing on the terms and conditions. This split allowed us to accomplish our work effectively.
Challenges we ran into
Idea generation: finding an effective way to work around the problem statement so that we could tackle both aspects of the project we envisioned and wanted to take on.
Accomplishments that we're proud of
Designing this app from start to finish as a team was a huge step in our design careers for all of us. It was a great experience, thinking through a unique problem and challenge such as data privacy. We are extremely proud of the product that we created, and our accomplishments in innovation for app security and privacy as well as terms and conditions.
What we learned
How to work effectively in teams online as well as how to collaborate via social distancing on a design project. For some of us it was our first time using Figma, so that is another skill to add to our arsenal!
What's next for FitNet - a Network for Fitness Lovers (presented by STOPP)
If there's a demand for something like this or for ideas that we generated through this project (in regards to privacy) then the next steps would be to conduct further research and pitch this to investors or government officials to see how our ideas can be adopted as policy.
Built With
figma
Try it out
www.figma.com | FitNet - a Network for Fitness Lovers (presented by STOPP) | FitNet is a community for fitness lovers to share their passion for fitness through: workouts, recipes, inspirational posts, and more! | ['Austin Jack', 'Ryan Lott', 'Meera Dabhi', 'Gorja Vasudev', 'Jenny C'] | [] | ['figma'] | 7 |
10,101 | https://devpost.com/software/tapcare | Slide 1
Slide 2
Slide 3
Slide 4
Slide 5
Slide 6
Slide 7
Slide 8
Slide 9
Slide 10
Slide 11
Wireframes
Mood Board
Prototype
Inspiration
Our original concept was inspired by an insight from a Reply All podcast episode. They reported that hotline services have become increasingly popular with seniors in long-term care after COVID hit, as they were more comfortable using phones than other modern technologies to connect with people when lonely in lockdown. The application was designed to serve the needs of residents in long-term care facilities who were isolated by COVID-19 lockdown restrictions. The lockdown also limited student volunteering experiences, since events that could have helped students gain hours to graduate or professional experience for their resumes had been canceled. TapCare connects the two through a secure and feasible application that helps residents in long-term care facilities feel wanted. The app interface is designed to be a simple tool to manage, while its aesthetics are inspired by the nostalgic interfaces of older technology to match the mental models of our target audience: senior citizens living in long-term care.
What it does
The app connects seniors in long-term care with volunteers who apply for experience or volunteer hours. It is managed by admins, workers at their respective long-term care residences, to ensure security and prevent the seniors from being scammed, a problem that was brought to light in the podcast.
How we built it
We used Miro to collaborate on the initial flow of the application. We moved onto Figma where the wireframing and prototyping took place.
Challenges we ran into
Overall, the biggest challenge was narrowing our idea down to one strong concept, as we were not allotted a comfortable time frame to flesh out various ideas. We were required to explore as much as we could in as little time as possible, which caused some conflict. Ultimately, we were able to find an idea that everyone supported and used this concept to inspire our product.
Accomplishments that we're proud of
We were glad to have finished the submission in time for the deadline, as well as to have overcome the challenges of collaboration and, hopefully, found a solution to the design challenge.
What we learned
One of the most important things we learned from this hackathon was how to work together as a team. Everyone had their own ideas and inputs but even though they sometimes clashed with one another, we were all able to come up with a compromise that satisfied everybody.
Along with teamwork, we learned more about how to protect the privacy of others. With our app, TapCare, it was important that we make it safe and simple for seniors to use. Our mentor, Mike Accettura, guided us through the process and gave notes that made the app safer for our users.
Built With
adobe-illustrator
figma
Try it out
www.figma.com
www.figma.com
www.figma.com | TAPCare | A digital solution to help residents living in long-term care homes feel connected during lockdown. | ['Mikayla Garcia', 'Angelina Tran', 'Bhavya Shah', 'Samantha Kristen Astles', 'Colman Tsang'] | [] | ['adobe-illustrator', 'figma'] | 8 |
10,108 | https://devpost.com/software/alpapp-457ao1 | Screenshots from interactive wireframe
Prototype demo show/index books components
Prototype demo barcode scanner
Diagram of application architecture
ALPapp
The African Library Project App for Library Management
Inspiration
When reading about the African Library Project's mission, our first thought was about how our skills as software engineers can really make an impact. We started asking the team questions about the ALP's operations and discovered that most libraries in Sub-Saharan Africa do not have regular access to computers and reliable internet. This inspired us to develop a library management system that does not require either of these things.
What it does
ALPapp is a mobile-first library management system that allows libraries to scan a book's barcode to add it to their inventory, check it in or out, get information about the book, back this data up, and share it with their community. ALPapp requires only a simple Android phone with a camera and occasional low-bandwidth internet access.
How we built it
We designed an interactive wireframe in Figma, then divided into frontend, backend, and flex teams. Through daily stand-ups we broke the task of building an MVP down into tickets and completed 25% of them within the week. We decided to go with an Apollo API for the backend so it could handle multiple API queries while allowing fine-tuned control over the size of our response data. On the frontend we went with React Native to quickly implement a prototype.
Challenges we ran into
So far we have not run into any major challenges, since we are still early in the development process. Because we are using an Agile approach, we start with the simplest, most straightforward features and build up. The really challenging and complicated features will be in our beta version.
Accomplishments that we're proud of
We are confident that this can be a very useful product for the ALP and libraries in developing countries generally. There are some really exciting challenges down the road for this project, especially when it comes to implementing our big ticket item (from a technology standpoint) which is an interlibrary loan system.
What we learned
Throughout the process we focused on using technologies we are familiar with, but everyone was also exposed to something new from their teammates. For example Olga and Louis got to work with Apollo Client for the first time and Zohaib had his first experience with React-Native.
What's next for ALPapp
If we are selected to move forward with the project, we would like to discuss our prototype design documents with the ALP team and connect with librarians who would be using the app. Then we would use September to finish the app, October for alpha testing, then release a beta in November.
If you haven't already, please watch our pitch video which includes demos, diagrams, and more!
Pitch video:
https://youtu.be/iRboHsHs5x0
Pitch deck:
https://bit.ly/344fTOI
Built With
apollo
expo.io
graphql-yoga
node.js
postgresql
react-native
sequelize
Try it out
github.com | ALPapp | The only mobile first library management system designed for the unique challenges faced by communities in developing nations. | ['Zohaib Khaliq', 'Jonathan Gil Yaniv', 'Olga Smirnova', 'Louis Velazquez'] | ['Best Education/Literacy Hack'] | ['apollo', 'expo.io', 'graphql-yoga', 'node.js', 'postgresql', 'react-native', 'sequelize'] | 0 |
10,108 | https://devpost.com/software/donateaplate | Home Page
Account Signup
Add Donation
Adding Category
Adding Custom Items
Inspiration
Every year, an estimated 1.3 billion tonnes of food is wasted globally, amounting to 2.6 trillion dollars annually, which is more than enough to feed all 815 million hungry people in the world ten times over. Although inefficient consumer habits are contributors, the majority of food waste comes from the supply chain, mainly distributors, retailers, and restaurants. As 12th-grade high school students from Bangalore, India, we envisioned our app DonateAPlate, which allows local restaurants, supermarkets, and individual donors to donate, daily or weekly, their excess unused food by setting up highly customizable donations through the app.
What it does
DonateAPlate allows local restaurants, supermarkets, and individual donors to donate, daily or weekly, their excess unused food by setting up highly customizable donations through the app. In addition, NGOs and other charity organizations can view and sort nearby donations for pickup by requesting donations from the donors. The system allows feasible communication between the two parties directly through the app, and also allocates points for each successful donation, calculated based upon distance, food weight, etc.; users can view monthly leaderboards to see how their social work stacks up against other users.
How we built it
The app was developed in Android Studio on the Gradle build framework, with 20,000+ lines of code in Java, Kotlin, and XML. Various APIs, such as the Google Maps and Places APIs, were integrated into the app. The backend data storage was built on Firebase, using Firestore, the Firebase Realtime Database, Firebase User Authentication, and Firebase ML.
Accomplishments that we're proud of
Initially we were unsure whether we would be able to complete the app and fully implement it in time for the submission. But in the end, everything turned out well, and we have a stunning video accompanying our completed product.
What's next for DonateAPlate
Since our app is theoretically scalable to all across the world, with virtually no constraints, we hope to implement this product in Africa, following which we could expand it globally and hopefully bring about a major change in terms of reducing food wastage & solving the food insecurity problems amplified by the Covid-19 pandemic.
Built With
android
android-studio
firebase
gradle
java
kotlin
machine-learning
xml | DonateAPlate | Are you ready to take a Bite out of Hunger? | ['Rohit Kanagal', 'Chandrachud Gowda'] | ['Best Health/COVID-19 Hack'] | ['android', 'android-studio', 'firebase', 'gradle', 'java', 'kotlin', 'machine-learning', 'xml'] | 1 |
10,108 | https://devpost.com/software/locust-build | Logo
Webpage
A look into the Locust Locator map (representing every locust spotting over a time period)
AfriHack
GROW N TRACK
Our Vision:
Equipping every single farmer with accessible resources and aiding them in making the right choices.
Q1. What is a locust attack/invasion/plague ?
When locusts start attacking crops and thereby destroy an entire agricultural economy, it is referred to as a locust plague or locust invasion. Plagues of locusts have devastated societies since the Pharaohs led ancient Egypt, and they still wreak havoc today. Over 60 countries are susceptible to swarms.
Q2. Types of locusts -
There are four types of locusts that create a plague – desert locust, migratory locust, Bombay locust, and tree locust. The desert locust is a notorious species. Found in Africa, the Middle East, and Asia, this species inhabits an area of about six million square miles, or 30 countries, during a quiet period, according to National Geographic. During a plague, when large swarms descend upon a region, however, these locusts can spread out across some 60 countries and cover a fifth of Earth's land surface.
Q3. How and when do locusts become harmful?
During dry spells, solitary locusts are forced together in the patchy areas of land with remaining vegetation. This sudden crowding pushes the locusts into close contact. Then, when rains return, producing moist soil and abundant green plants, locusts begin to reproduce rapidly and become even more crowded together. In these circumstances, they shift completely from their solitary lifestyle to a group lifestyle in what’s called the gregarious phase. Locusts can even change colour and body shape when they move into this phase. Their endurance increases and even their brains get larger. Locusts have huge appetites: one of these insects can eat its own weight in food in a single day. And they're devastating crops in East Africa, where millions of people are already considered food-insecure.
Q4. What is a locust swarm?
Locust swarms are typically in motion and can cover vast distances—some species may travel 81 miles or more a day. Locust swarms devastate crops and cause major agricultural damage, which can lead to famine and starvation. A swarm of desert locust containing around 40 million locusts can consume (or destroy) food that would suffice the hunger need of 35,000 people, assuming that one person consumes around 2.3 kg of food every day. In 1954, a swarm flew from northwest Africa to Great Britain, while in 1988, another made the lengthy trek from West Africa to the Caribbean, a trip of more than 3,100 miles in just 10 days.
Locust swarms devastate crops and cause major agricultural damage, which can lead to famine and starvation. Locusts occur in many parts of the world, but today locusts are most destructive in subsistence farming regions of Africa.
Q5. Locust Effect on Africa ?
The worst locust outbreak in generations has descended upon East Africa and the Horn of Africa. Without immediate action, 4.9 million people could face starvation this summer. This disaster comes at the worst possible time for countries like Somalia already facing the double emergency of food shortage and COVID-19. Seven facts about the situation on the ground:
1. Desert locusts are extremely dangerous –
These migratory insects inflict insurmountable damage in minutes. Even a tiny swarm consumes the same amount of food in one day as 35,000 people. Swarms have already destroyed hundreds of thousands of hectares of crops and pastureland in eight countries—Kenya, Uganda, South Sudan, Ethiopia, Somalia, Eritrea, Djibouti and Sudan—and threaten to spread wider.
2. Five million people are at risk of hunger and famine-
As of March, the locust infestation in East Africa has already damaged more than 25,000 square kilometers of cropland. Without swift intervention, populations will face mass starvation this summer.
3. A new swarm is hatching –
A fourth generation of locust eggs is now hatching, which experts predict will create a locust population 8,000 times larger than the current infestation.
4. Somalia will likely be hit hardest –
The Somali government was first in the region to declare a nationwide emergency in response to the desert-locust crisis. Without humanitarian assistance, 3.5 million people are projected to face food crisis between July and September. The region is already overwhelmed by cycles of widespread violence, drought, floods, chronic food shortages, and disease.
5. This is the worst outbreak in 70 years –
Without expedited preventative measures, swarms will migrate from East Africa to West Africa. “This is the worst locust invasion we have seen in our generation,” says Sahal Farah of Docol, an IRC partner organization. “It destroyed pastures, contaminated water sources and [has] displaced many pastoral households. The worst of all is that we do not have the capacity to control it, and so far we have not received any external support.”
6. Women face increased risk –
If harvests fail, the IRC estimates that 5,000 households, especially those led by women, will need urgent humanitarian assistance by August. As food prices skyrocket, women and girls will face an increase in violence and theft as their partners are forced to travel in search of food and work. Additionally, women will be forced to take on additional responsibilities in managing existing farms or small businesses, even as they tend to the needs of their families.
7. More funding is necessary to stop widespread famine –
The IRC is calling for $1.98 million to alleviate the desert-locust emergency in Somalia in 2020. We are also appealing to the United Nations and affected countries to continue technical analysis of locust movements along with continued information sharing—before it is too late.
Q6. Crop Failure and Famine in Africa
In Africa, hunger is increasing at an alarming rate. Economic woes, drought, and extreme weather are reversing years of progress so that 237 million sub-Saharan Africans are chronically undernourished, more than in any other region. In the whole of Africa, 257 million people are experiencing hunger, which is 20% of the population.
Successive crop failures and poor harvests in Zambia, Zimbabwe, Mozambique, and Angola are taking a toll on agriculture production, and food prices are soaring. In the past three growing seasons, parts of Southern Africa experienced their lowest rainfall since 1981.
As a result of these dire events, 41 million people in Southern Africa are food insecure and 9 million people in the region need immediate food assistance. That number is expected to rise to 12 million as farmers and pastoralists struggle to make ends meet during the October 2019 through March 2020 lean season.Close to five million people in East Africa could be at risk of famine and hunger as the ‘worst locust invasion in a generation’ continues to destroy crops, contaminate water sources and displace thousands of households, a new report has warned.The infestation, which first appeared in the region last June and has already passed through a number of generation cycles, is feeding on hundreds of thousands of hectares of crops across at least eight countries.
HISTORY OF FOOD FAMINE –
• 2011 to 2012 — The Horn of Africa hunger crisis was responsible for 285,000 deaths in East Africa.
• 2015 to 2016 — A strong El Niño affected almost all of East and Southern Africa, causing food insecurity for more than 50 million people.
• 2017 — 25 million people, including 15 million children, needed humanitarian assistance in East Africa. In September, inter-communal conflict in Ethiopia led to more than 800,000 people becoming internally displaced.
• 2018 — Africa was home to more than half of the global total of acutely food-insecure people, estimated at 65 million people. East Africa had the highest number at 28.6 million, followed by Southern Africa at 23.3 million, and West Africa at 11.2 million.
• 2019 — Food security is deteriorating and expected to worsen in some countries between October 2019 and January 2020.
Locusts attack across the world
By the end of 2019, there were swarms in Ethiopia, Eritrea, Somalia, Kenya, Saudi Arabia, Yemen, Egypt, Oman, Iran, India, and Pakistan
As of January 2020, the outbreak is affecting Ethiopia, Kenya, Eritrea, Djibouti, and Somalia. The infestation "presents an unprecedented threat to food security and livelihoods in the Horn of Africa," according to the United Nations Food and Agriculture Organization.
Kenya has reported its worst locust outbreak in 70 years, while Ethiopia and Somalia haven’t seen one this bad in quarter of a century.
They are now heading toward Uganda and fragile South Sudan, where almost half the country faces hunger as it emerges from civil war. Uganda has not had to deal with a locust infestation since the 1960s and is already on alert, and there is concern about whether experts on the ground will be able to deal with it without external support.
In a country like South Sudan, where already 47% of the population is food insecure this crisis would cause devastating consequences.
Q7. How can locust swarms and attacks be prevented?
Weather patterns and historical locust records help experts predict where swarms might form. Once identified, an area is sprayed with chemicals to kill locusts before they can gather.
Historically, locust control has involved spraying organophosphate pesticides on the night resting places of the locusts.
Intervention in the early stages of a locust outbreak is generally advised. This reduces the amount of pesticide to be applied, because the locusts are localized over a relatively small region.
As an outbreak continues to develop, first into an upsurge and then into a plague, more and more countries are affected and much larger areas need to be treated. Nevertheless, a preventive strategy may not always be effective. Access to infested areas may be limited due to insecurity; financial and human resources can’t be mobilized quickly enough to control an outbreak in time; or weather and environmental conditions are unusually favourable for locust development, so the national control capacity is overwhelmed.
So, what can be done?
HERE COMES THE USE OF LOCUST LOCATOR
Locust swarm attacks can be prevented with early monitoring of the insects' breeding grounds. The United Nations is already doing this work. Through various ground, air, and satellite surveillance techniques, image processing methods, data analysis, and a diversified modus operandi, scientists, researchers, and biologists are working day in and day out to build a model or method so that these attacks can be prevented before they grow to wreak massive destruction and havoc.
But the common man cannot comprehend the need or purpose behind all this.
This is a situation where experts with years of experience, and with modern technological software, methods, and tools at their disposal, are still baffled by this year's unusually large outbreak of locusts.
So what can we expect an ordinary, let alone an illiterate, person to do?
How can they know how to save themselves from this raging menace?
How can we ensure that they, the pillars of support of this entire urbanised culture and people, survive and continue to prosper?
Here’s where our application is useful.
By making an application in their local language and making it easy to use, we remove any challenges the locals might face while taking advantage of our app.
Q8. But why did we do this?
Being fortunate enough to be able to use technology amidst the comfort of our living conditions, we were discussing the havoc that this year had bestowed upon humankind, starting with the Australian bushfires and continuing to COVID-19.
And we yearned to do something to make the world a slightly better place than it was. We knew that we couldn't be frontline warriors against the coronavirus alongside doctors and other personnel, since none of us has a medical background. But we believed that, using our knowledge in the fields of data science, database management, and app development, to name a few, we could at least try to give something back to society. Thus was born GROW N TRACK.
So, while browsing for things we could do, we stumbled upon this idea and saw the wonderful initiative Microsoft and the African Library Project had taken in organising this Hack for Africa global event.
Q9. What do we do?
Essentially, we track locusts and send warning messages to registered users.
From the available satellite data, we obtain the locusts' locations.
We keep a record of each user's location, and when locusts enter the user's vicinity we warn them via text and WhatsApp.
For now we used WhatsApp, but if we can implement the project with funding and resources, we plan to use normal text messages.
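The alerting logic described above can be sketched as a simple geofence check. This is only an illustrative Python sketch; the function names, user records, swarm coordinates, and 50 km radius are our own assumptions, not the project's actual code:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def users_to_warn(users, swarm_lat, swarm_lon, radius_km=50.0):
    """Return the users whose registered location lies within radius_km of a swarm sighting."""
    return [u for u in users
            if haversine_km(u["lat"], u["lon"], swarm_lat, swarm_lon) <= radius_km]

# Hypothetical registered users and a hypothetical swarm sighting
users = [
    {"name": "A", "lat": 0.35, "lon": 37.58},   # a few km from the sighting
    {"name": "B", "lat": -1.29, "lon": 36.82},  # Nairobi, well outside the radius
]
print([u["name"] for u in users_to_warn(users, 0.30, 37.60)])  # → ['A']
```

A real deployment would run this check against live satellite sightings and hand the matched users to an SMS or WhatsApp gateway.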
Q10. How is warning them useful?
It helps them take the necessary precautions to protect themselves from such adversities.
It also plays a vital role in the formulation of future plans.
We implemented machine learning in our tracker to predict the direction of movement a couple of days before it happens, and to try to predict the next possible mass breeding spots.
We also plan to add a feature in which users can mark a place where they spot locusts; if we get the same marking from several users within a specified radius, we alert the concerned authorities and mark the place on our map.
By analysing the data, we found that locusts infested only specific crops, and only during specific periods of the year. By correlating that with the pH of the soil in those areas, we built an algorithm that helps farmers decide the best crop to plant according to the soil's pH, so that they can yield the maximum profit from their crops while being protected from locusts ruining their hard work.
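The pH-based crop-selection idea can be sketched as a range lookup. The crops and tolerated pH ranges below are purely illustrative assumptions, not the project's actual data; real recommendations would come from agronomic sources:

```python
# Hypothetical tolerated soil-pH ranges per crop (illustrative values only)
CROP_PH_RANGES = {
    "maize":   (5.5, 7.0),
    "cassava": (4.5, 6.5),
    "sorghum": (5.5, 8.0),
    "teff":    (6.0, 7.5),
}

def recommend_crops(soil_ph):
    """Return crops whose tolerated pH range contains the measured soil pH."""
    return sorted(crop for crop, (lo, hi) in CROP_PH_RANGES.items()
                  if lo <= soil_ph <= hi)

print(recommend_crops(4.8))  # → ['cassava']
```

The same lookup could be extended with season, rainfall, and locust-infestation filters to match the correlation described above.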
As of April 2020, efforts to control the locusts are being hampered by ongoing restrictions on travel and shipping due to the COVID-19 pandemic, contributing to the global coronavirus food crisis. Hence, if we can implement Grow N Track, we can take a huge leap toward bringing the whole world back to normalcy, as nations slowly return to their pre-disaster food production levels and resume trading in food and other products.
Q11. Who are we?
Visit the developers page to learn more about us and contact us.
We love to work on projects that help improve people's lives and leave a good impact on this world.
Regards
Nima Pourjafar Kartik Agarwal Anush Krishnav.V Indrashis Mitra
Credits
Video editing: Aaditya VK. Contact:
aaditya.v.krishnan@gmail.com
Built With
css
flask
html
javascript
python
shell
Try it out
grow-n-track.herokuapp.com | Grow N Track | A simple but efficient web app that provides farmers make better choices and save their work from locust attacks . | ['anush krishna v', 'Kartik Agarwal', 'Nima Pourjafar'] | ['Best Out of the Box Hack'] | ['css', 'flask', 'html', 'javascript', 'python', 'shell'] | 2 |
10,108 | https://devpost.com/software/bookonnect-5i796p | Personal Message
Credentials
Confirmation
BooKStore Dashboard
Random BooKStore Dashboard
Cart
Home Page
Inspiration
Our inspiration is the children of Africa who have minimal access to books. They were reason enough for us to take action for such a cause.
What it does
BooKonnect is a book donation platform that allows book donors to choose which books to donate to young readers in Africa. Through swiping right and swiping left, a donor gets to choose which books to give, and in doing so gets to shape the outlook of an African child, possibly affecting that child's future! In partnership with non-profit organizations, these books will be sent to a child in Sub-Saharan Africa along with contact details and a personal message from the donor. We aim to establish a connection between the donor and the receiver, hence the name BooKonnect!
How I built it
We used JavaScript, HTML5, and CSS3 to build the website, along with several templates and pieces of source code to put the entire site together quickly. We used GitHub for collaboration.
Challenges I ran into
We had trouble with different timezones; my teammate and I are literally on opposite sides of the globe, and that was a challenge.
Accomplishments that I'm proud of
The aesthetics of the website, which I intended to parallel the African Library Project's website, and, of course, the possibility that I am part of something bigger than myself, something that would help Sub-Saharan Africa.
What I learned
I learned how to execute plans and ideas to help others.
What's next for BooKonnect
We are really looking forward to implementing this for the people of Africa. We look forward to materializing the partnerships we are aiming for with publishing houses and, hopefully, the African Library Project.
Built With
bootstrap
css
html5
javascript
Try it out
github.com | BooKonnect | Connecting the Children Readers of Africa to the World! | ['Ramnick Francis Ramos'] | ['Literacy/Education Runner Ups'] | ['bootstrap', 'css', 'html5', 'javascript'] | 3 |
10,108 | https://devpost.com/software/kitabu | Landing Page
beginning flow chart
MVP
Inspiration
In this past year, the world has had to undergo a countless number of changes in response to the Covid-19 pandemic. With social distancing being one of the most crucial parts to fighting the virus, we felt it would be important to ensure that charities like the African Library Project would still be able to efficiently run their organizations by moving their operations online.
To help ALP continue to make the donating and collecting process as seamless as possible, we’ve created an online portal that allows users to digitally log information on the status and availability of books to be donated. As people have increasingly become dependent on digital means to sustain their routine activities, this platform will help make the donation and collection process easier, make book drive management more seamless, and eliminate the need for frequent in-person contact.
In doing so, the portal will help broaden ALP’s donor and collector base, and develop an incentivized point-based donation process that facilities consistent use - one that could help ALP effectively oversee activity, even beyond Covid-19.
What it does
Broaden ALP's donor and collector base through a digital portal
Develop an incentivized, points-based donation process that facilitates consistent use
Create a seamless donation system for new and returning donors
Create a seamless cataloguing experience for collectors that can be expanded to librarians
Help donors and collectors visualize the impact of their donations
Make donating easier during COVID-19 by digitizing the donation process
Bring awareness to black owned businesses
Generate socially conscious partnerships that further strengthen incentive/point system
How We built it
For more info:
https://github.com/jml0123/alp-spring-client
(Client Repo)
https://github.com/gonsaje/Kitabu
(API Repo)
Challenges we ran into
Packing data into a QR code was more difficult than we thought, so we built out routes to work around it.
Time constraints
Accomplishments that we're proud of
Deepened backend knowledge
Learned new design framework (Material UI)
What's next for Kitabu
We built Kitabu to be scalable. For example, the collector model can eventually be extended to a librarian, where they may scan QR shipment codes to catalogue all books within a box to avoid having to write down the books manually.
Collectors may eventually also have points for every collection that gets successfully scanned by a library in Africa. Kitabu is meant to facilitate a self-sustaining donation and point system that encourages both donors and collectors to donate and host book drives.
The points are intended to be redeemed with partner businesses. As this is a proof-of-concept, we don't actually have partnered businesses! But the intention is to create relationships with small black-owned businesses and influencers globally in order to create a sustainable model of consumption that gives partnered businesses more visibility and creates potential income streams for ALP's shipping efforts (e.g., if businesses decide to donate a portion of their profits to the African Library Project).
Built With
barcode-scanner
express.js
google-books
material-ui
mongodb
node.js
react
Try it out
kitabu-client.vercel.app | Kitabu | Kitabu is a digital platform to donate, collect and catalogue books. | ['Jae Song', 'Miguel Lorenzo'] | ['Literacy/Education Runner Ups'] | ['barcode-scanner', 'express.js', 'google-books', 'material-ui', 'mongodb', 'node.js', 'react'] | 4 |
10,108 | https://devpost.com/software/tuber-0i9seq | Inspiration
According to the World Health Organisation, tuberculosis and malaria are among the top 10 most deadly diseases worldwide, and TB is the leading cause of death from a single infectious agent. In 2018 alone, there were over 10 million registered cases of TB and 1.5 million TB deaths, plus 228 million cases and 405,000 deaths due to malaria. Unfortunately, over 95% of those deaths were in low-income nations in Africa, Asia, etc. Even though these diseases are preventable and curable, why do they cause so many deaths? The major cause is the lack of infrastructure and human resources required in low-income nations to diagnose and treat these diseases. We believe that machine learning can be used to combat this global issue, improve the efficiency of the healthcare system, and reduce the dependence on human resources.
What it does
Tuber is a machine learning based diagnostic tool and patient management system targeted towards hospitals and medical clinics, that enables them to instantly diagnose Tuberculosis from chest X-Ray scan images, and malaria from a blood sample image, with a high accuracy. This web app aims to fully replace the diagnosis aspect of doctors in low income nations. This would improve the efficiency of the current system as it is faster and less heavy on resources compared to a human being. Our system collects basic information and an image, and then returns quick and accurate results. We use advanced machine learning algorithms to classify the user uploaded image into Normal or Infected (diagnosed with Tuberculosis or malaria), and then automatically generate a CSV medical report. We also have a patient data management system, as well as a data visualization page, displaying various analytics regarding patients. Lastly, we developed an AI based questions page, where the user can ask any questions about Tuberculosis, and get instant results.
Tuber significantly reduces the human resource and expertise required for diagnosing Tuberculosis and makes the entire system far more efficient and less prone to error. Tuber would have a huge impact in rural areas in continents such as Africa, where there is acute lack of qualified doctors to detect such diseases.
How we built it
Python Flask for the web app and HTML/CSS/JS templates for the front-end.
Keras with TensorFlow back-end for the image classification models.
TB model was trained using the Shenzhen Hospital Tuberculosis X-Ray dataset. Link:
https://lhncbc.nlm.nih.gov/publication/pub9931
(93.2% accuracy)
Malaria model was trained using a free-access dataset on IEEE. (95.5% accuracy)
Python NLTK and TF-IDF Vectorizer model for AI search page.
Google Firebase for storing patient data.
Google charts for data visualization.
Heroku free tier for web hosting
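The from-scratch TF-IDF question engine can be sketched roughly as follows. This is an illustrative toy, not the production NLTK pipeline: the corpus, whitespace tokenization, and weighting details are all assumptions.

```python
import math
from collections import Counter

# Hypothetical FAQ corpus standing in for the real TB/malaria question data.
DOCS = [
    "tuberculosis is spread through the air when an infected person coughs",
    "tuberculosis is treated with a six month course of antibiotics",
    "malaria is transmitted by mosquito bites and causes fever",
]

def tf_idf_vectors(docs):
    """Build a TF-IDF vector (dict of term -> weight) for each document."""
    tokenized = [doc.split() for doc in docs]
    n = len(tokenized)
    df = Counter(term for tokens in tokenized for term in set(tokens))
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append({t: (c / len(tokens)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question, docs):
    """Return the document most similar to the question."""
    vectors = tf_idf_vectors(docs + [question])  # vectorize query with the corpus
    query, doc_vecs = vectors[-1], vectors[:-1]
    scores = [cosine(query, v) for v in doc_vecs]
    return docs[scores.index(max(scores))]
```

A query like `answer("how is tuberculosis treated", DOCS)` picks out the treatment document, because shared rare terms ("treated") carry more weight than common ones ("is").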
Challenges we ran into
The first major challenge we ran into was getting good-quality training data for the ML models. After hours of research, we were able to compile a dataset that allowed us to provide fast and accurate classifications. As it was our first time with Firebase, we also struggled with integrating Firebase to store patient data. Lastly, hosting web applications with data-intensive ML models such as this one was a huge hurdle, which we tried our best to approach using Heroku. Making the NLTK corpora work with Heroku was also a significant challenge.
Accomplishments that we're proud of
We are proud to have developed relatively fast and efficient models that predict with over 95% accuracy on the test dataset. We are proud to have learned Firebase in a relatively short amount of time, and to have connected a fully functional Firebase database to our web app. We are also proud to have built an AI-powered question-answering engine from scratch using a TF-IDF vectorizer and natural language processing. Lastly, we are proud to have created a feature-rich web app and clean UI within the time constraint.
What we learned
We learned how to use TensorFlow and Keras for image classification, how to make an NLP-based search engine feature, and, for the first time, how to set up a Firebase database and integrate it with a Flask web app.
What's next for Tuber
We hope to develop a login system for specific hospitals to access their patients' data securely. We also want to make a more sophisticated data visualization page, to help provide more detailed analytics. We anticipate getting a domain name for the web app, as well as expanding to a mobile app. Due to memory limitations, we were not able to host our machine learning model, and it only works offline. We would like to change this by using a more premium web service such as AWS or Gcloud. Lastly, we hope to experiment with more robust front-end frameworks such as React.
Built With
css
firebase
flask
google-chart
heroku
html
keras
python
tensorflow
Try it out
github.com
tubercheck.herokuapp.com | Tuber | Using Machine Learning to fight Malaria and Tuberculosis. | ['Nand Vinchhi', 'Veer Gadodia'] | ['Health/COVID-19 Runner Up (1)'] | ['css', 'firebase', 'flask', 'google-chart', 'heroku', 'html', 'keras', 'python', 'tensorflow'] | 5 |
10,108 | https://devpost.com/software/operation-100-for-africa | Login Page
Main Page
Main Page after signing up, and where you can record your voice and being assessed by speech recognition
About Us page
Inspiration
After analyzing why Hack for Africa is held, and as Africans, we felt driven by the desire to help Africa through this Microsoft challenge.
We chose to Hack for Africa on the Literacy & Education track, because we know that Education is Power, and we are sure that we Learn Today to Lead Tomorrow, as our logo expresses.
Many students do not have direct access to books like they do during the school year.
This prevents students from continuing to improve as readers, and also has the potential to set them behind their expected reading levels without inspiration and motivation.
Many online services that provide a platform for students to continue reading outside of school require paid accounts, but we believe that literacy isn't a paid privilege and that everyone should have the chance to fall in love with reading books and stay motivated.
What it does
We explained it properly and clearly in the Video demo.
How we built it
We built our project using:
Express;
React;
Node;
PostgreSQL;
JavaScript;
Html;
CSS;
API and with Visual Studio Code as editor;
Code version control
GitHub.
Challenges we ran into
We are in 4 different countries (Ethiopia, India, Kenya and South Africa), so we encountered the time zone difference problem, and with almost 75% of the team coding for the first time, it was difficult to code quickly, put the ideas together, and deliver on time.
Accomplishments that we are proud of
Being able to submit a project for Africa that gave us sleepless nights and long sessions of brainstorming.
What we learned
Patience in teamwork makes us reach common goals no matter the difficulties.
And we have been able to Hack for Africa, because we have had access to Literacy and Education.
What's next for Operation 100 For Africa
More sophisticated and responsive platform to meet the requirements of the users.
Built With
api
css
express.js
github
html
javascript
node.js
postgresql
react
visual-studio
Try it out
github.com | Operation 100 For Africa | Making Africans highly interested and motivated to reach the level of 100% of Literates. | ['YABAHA SOLO MARIUS BAMBA', 'Kaleab Melkamu', 'Lyle D', 'Ann Maina'] | [] | ['api', 'css', 'express.js', 'github', 'html', 'javascript', 'node.js', 'postgresql', 'react', 'visual-studio'] | 6 |
10,108 | https://devpost.com/software/tellusone | Inspiration
The amount of elective procedures in the United States and other countries has dropped significantly as a result of COVID-19. Consequently, doctors who are not actively working with COVID-19 patients are seeing far less work throughput. Meanwhile, Africa has a quarter of the global disease burden but only 2% of the world’s doctors. Further, in developing countries, making behavior changes to reduce the reproduction rate is significantly harder, making COVID-19, among other infectious diseases, significantly more potent. Doctors in the U.S. could help those in developing countries suffering from pandemics such as Ebola or COVID-19, but often lack the ability or funding to actually connect with and help them.
Our platform directly connects American physicians, virtually, to patients in Africa, enabling them to perform telemedicine in the added “free time” created by the decrease in elective procedures in their own country. With our platform, American doctors can see patients and prescribe critical treatments, even from thousands of miles away. This process also alleviates congestion in healthcare systems by leaving doctors physically present in those countries to focus on the high-priority cases, leaving no one behind. Further, crucial and time-critical data like X-rays and blood test results can be analyzed quickly, saving lives.
TellusDoc bridges the gap between the surplus of American doctors and the shortage of doctors in Africa.
What it does
In the modern age of telemedicine, it's vital that the services provided by so many doctors around the world can be afforded by those who require them. We provide a self-contained platform for this entire telemedicine process, which also breaks the information and literacy barrier which many of the patients in Africa face. This includes evaluating the patient's clinical status, scheduling them with an appropriate specialist, and providing a video and message interface with built-in transcription and language accommodations. Our user portals allow patients and doctors to keep track of their past and future appointments and also provide an avenue for patients to upload pertinent medical files for their doctor’s reference. We also ensure that the patients who need the most urgent care are matched up with doctors who can best help them through our state-of-the-art triage system. By incorporating both a severity classification, through our preliminary AI diagnoser, and a specialist matching system, we make this possible.
How we built it
We leveraged Python Flask for web app development, using HTML, Jinja, and CSS tools to design the various website pages.
Our platform’s more advanced functions involved Python and JavaScript API calls, as well as several JS scripts for functions on the page, while the backend relies on Firebase.
Our self-sustainable video chat feature runs in the browser through websockets and a locally hosted server. We use the WebRTC and RecordRTC APIs to stream audio and video.
The text-to-speech functionality uses the Mozilla Developer Network Speech Synthesis API.
We used the Google Speech-to-Text API for our video call transcription, the Microsoft Azure Translator API for web page translation, and the API-Medic symptom-checking API for the preliminary diagnoses.
Challenges we ran into
Synchronizing scheduling pages via Firebase
Refining our Triage System algorithm
Incorporating the self-sustainable video chat feature
Streaming in browser audio in appropriate format to google speech API
UX of calendar scheduling
Accomplishments that we're proud of
Conquering Barriers:
Translation and transcription services allow our site content to be translated and spoken in the user’s native language.
An in-browser video calling service allows digital appointments to be held directly on our web app.
A direct messaging system allows patients to speak to past and current doctors, and doctors to speak to all of their patients.
Instant file transfer allows doctors and patients to securely and reliably send medical information.
Quick AI diagnostics allow patients to receive a free, instant diagnosis and doctors to confirm their medical recommendations.
An integrated patient portal allows doctors to access all of their current patients’ key information (name, condition, condition’s severity, AI diagnosis, appointment date) in one concise interface. Additionally, clicking on a patient’s name redirects the doctor to their recent messages, and clicking on the condition redirects to a medical article (e.g. WebMD, Healthline) on the topic.
Appointment scheduling algorithm: matches patients to doctors, factoring in schedule compatibility, doctor’s specialty/patient’s condition, and severity.
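The scheduling idea can be sketched in simplified form: serve patients in order of severity and pair each with a free doctor of the right specialty. The field names, severity scale, and greedy matching order below are illustrative assumptions, not the production implementation (which also factors in schedule compatibility).

```python
def match_appointments(patients, doctors):
    """Greedily match the most severe patients first to an unbooked specialist."""
    booked, matches = set(), []
    for p in sorted(patients, key=lambda p: -p["severity"]):
        for d in doctors:
            if d["specialty"] == p["condition"] and d["name"] not in booked:
                booked.add(d["name"])
                matches.append((p["name"], d["name"]))
                break
    return matches

patients = [
    {"name": "Amara", "condition": "cardiology", "severity": 2},
    {"name": "Kofi",  "condition": "cardiology", "severity": 5},
]
doctors = [
    {"name": "Dr. Lee",   "specialty": "cardiology"},
    {"name": "Dr. Okoro", "specialty": "cardiology"},
]
print(match_appointments(patients, doctors))
# [('Kofi', 'Dr. Lee'), ('Amara', 'Dr. Okoro')]
```

Sorting by severity first means that when doctors are scarce, the highest-need patients are the ones guaranteed a slot.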
What we learned
How to build intricate web applications with API integration, highly efficient backend logic in Firebase, and a user-friendly frontend interface via Flask.
What's next for TellusDoc
We are planning on building out a full fledged nonprofit, with additional features such as: order and delivery system for doctors to send critical health supplies based on telemedicine consultation and nurses portal.
Built With
azure
flask
google
html/css
javascript
node.js
python
Try it out
github.com | TellusDoc | Bringing cutting-edge healthcare to high need patients in Africa | ['Roshan Warman', 'Rajat Doshi', 'Sukesh Ram', 'Ashwin Agnihotri'] | [] | ['azure', 'flask', 'google', 'html/css', 'javascript', 'node.js', 'python'] | 7 |
10,108 | https://devpost.com/software/smart-class | Landing
Bot Enters the meeting
Live Attendance
Features
Attentiveness Tracker
Note Taker
Inspiration
Due to the worldwide pandemic, the education sector is one of the most affected sectors, and in this situation online learning is the only hope. These days, online learning has emerged as one of the leading ways to deliver education, and governments are looking for ways to shift education to online platforms due to the pandemic. It also becomes difficult for administrations such as schools and colleges to get unbiased student feedback for the faculty.
What it does
Our solution, the SMART CLASS application, helps professors better interact with those in their class and track their students' comprehension of the material, with numerous ways to collect more data about classroom engagement: the total number of hands raised on a particular question, class attendance scheduled at a specific time, an attention analyzer for students, and student feedback via face recognition.
Our SMART CLASS bot joins the online meeting on Zoom and collects information from the browser client in the background of the host's computer. It analyzes the behaviour of the students/members, and with the power of the Smart Class app, teachers can also write/draw in the air; the result is shown on the screen and streamed live to the other students' screens.
How we built it
The data gathered using our python + selenium component is fed into our python + tkinter interface that is displayed on the host's computer, alongside their Zoom client.
We built a bot using python and selenium to join the call (headless-ly) and collect all the information from the browser client in the background of the host's computer.
Note taking feature using web-speech-api.
Used CanvasJS for the attentiveness-analysis graphs.
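As an illustration of how the scraped participant data could be tallied before being displayed in the tkinter interface, here is a minimal sketch. The `"(hand raised)"` suffix is a hypothetical format for the scraped entries, not Zoom's actual markup.

```python
def tally_participants(entries):
    """Count attendance and raised hands from scraped participant strings."""
    present, hands = [], 0
    for entry in entries:
        name = entry.replace("(hand raised)", "").strip()
        present.append(name)
        if "(hand raised)" in entry:
            hands += 1
    return {"attendance": len(present), "hands_raised": hands, "names": present}

scraped = ["Alice (hand raised)", "Bob", "Chandra (hand raised)"]
print(tally_participants(scraped))
# {'attendance': 3, 'hands_raised': 2, 'names': ['Alice', 'Bob', 'Chandra']}
```

In the real pipeline the entries would come from the selenium component polling the Zoom participant panel on each refresh.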
Challenges we ran into
Zoom has no API for accessing a lot of the features we wanted to use, like the number of people raising their hands, the ability to send messages, the ability to get current users, etc.
2. While we had success with recognizing facial expressions, making a machine learning model that is accurate was a tough task.
Accomplishments that we're proud of
Built a self-contained, fairly full-featured client to interface with the Zoom client headless-ly and providing some features which are not provided by Zoom.
What we learned
Throughout the hackathon we learned to work with APIs and use them properly, and to apply machine learning despite being unfamiliar with it.
What's next for SMART CLASS
Feedback Expression Analyzer which uses face recognition and gives the automated feedback of the students.
Creating a more accessible online classroom with its closed-captioning service. This allows users with limited hearing to follow along more closely, which improves usability.
Built With
canvas
css
google-cloud
html
javascript
machine-learning
python
selenium
tkinter
Try it out
github.com | SMART CLASS | SMART CLASS Application helps professors better interact with students in their class and track their classroom engagement | ['Ashutosh Kumar verma', 'Arpit Agarwal', 'Atishay Srivastava', 'Yashashvi Singh Bhadauria'] | ['Best Tool For Educators'] | ['canvas', 'css', 'google-cloud', 'html', 'javascript', 'machine-learning', 'python', 'selenium', 'tkinter'] | 8 |
10,108 | https://devpost.com/software/protea-rcxgql | protea
Tentative Backend for protea, a freelancer platform with opportunities for internship
Built With
android
flutter
javascript
mongodb
Try it out
github.com | protea | A freelancing platform with internship opportunities | ['Akinboluwarin Akinwande', 'Manish Chandra', 'benedictha pam', 'Happy Neutron'] | [] | ['android', 'flutter', 'javascript', 'mongodb'] | 9 |
10,108 | https://devpost.com/software/vaccine-distributor-jzh5ar | As the world’s population ages, more people than ever are living with chronic medical conditions that need regular monitoring for the patient to stay healthy and live a longer life.It is designed to help patients monitor their chronic health conditions, such as diabetes, with a simple, voice-controlled interface. It also notify the nearby hospital if the patients health got serious and hospital can send the VACCINES TO THE PATIENTS ACCORDING TO THIER MEDICAL CONDITIONS AND DISEASES. This project can even alert relatives if the patient’s health takes a turn for the worse. It acts as a sort of virtual in-home healthcare provider, offering helpful reminders to users to take medication or test their blood glucose levels.
Built With
bootstrap4
css3
ejs
html5
javascript
jquery
mongodb
node.js | HEALTHCARE-ASSISTANCE | we are building a heatcare assistant web app | ['Aditi Goyal', 'Kunal Gupta', 'Atishay Srivastava', 'Shreya Asthana'] | [] | ['bootstrap4', 'css3', 'ejs', 'html5', 'javascript', 'jquery', 'mongodb', 'node.js'] | 10 |
10,108 | https://devpost.com/software/project-connect-cmex8w | Inspiration
After seeing all these high school students searching for volunteer opportunities during the coronavirus lockdown, I came up with a way of providing opportunities remotely.
What it does
Connects groups of individuals looking for volunteer opportunities from around the world to remotely help advance literacy skills in rural Africa, while also encouraging the growth of a community of local volunteers in sub-saharan Africa.
How I built it
I used adobe after effects to create the video and I designed the app UI through figma.
Challenges I ran into
Accomplishments that I'm proud of
I was a
What I learned
I learned a lot about the communities of rural sub-saharan Africa, through my research, and the immense impact that donating books can have. I also learned a great deal about app design
What's next for Project CONNECT
I plan to partner with the African Library Project to help bring my program and app to fruition. | Project CONNECT | Project Connect will grow a new community of people committed to helping develop literacy in Rural Africa, focusing greatly oral methods of teaching, through the connect app and local program | ['Rohit Shetty'] | [] | [] | 11 |
10,108 | https://devpost.com/software/covid-healthcare-data-management | All details for installing metamask and running the project have been given in the Project Readme on github :
https://github.com/World-Hackers/Covid-health-data-storage
Inspiration
This application will help users/patients connect with their doctors and get immediate, remote assistance for their health. It may also act as a health tracker.
The application aims to solve the problem of doctors’ availability at this time of crisis. As most countries are going through lockdown at the moment, and doctors are working on finding a cure for COVID-19, it has become very difficult for patients to get a checkup even if they have common flu.
What it does
This application will help solve this problem, by providing a platform for patients and doctors to
interact and share details and prescriptions so that the patient can get assistance anytime.
This application will not only be useful at such critical times but can also act as a health
monitoring device for patients who require regular monitoring and checkup.
How we built it
Steps performed by the patient:
● The patient registers himself as a patient on the Registration tab
● After registration, he can upload his health care data in the form of Excel files, PDF, DOCX, or images. This data can be recorded on any smart device that the patient may have, or can be compiled by the patient or any relative.
● Once the patient uploads the data, he will get a unique hash that points to his data on
the IPFS blockchain. The data is encrypted using AES encryption so that it is not
understandable by anyone.
● The patient can then share this unique hash with his doctor (who is also registered on
this platform).
● The doctor can then view this data and send the prescription accordingly.
● The doctors are incentivized using the token mechanism (HealthToken) which we have
introduced.
Steps performed by the doctor:
● The doctor registers himself as a doctor with his fee (the fee to be paid by patients to get diagnosed and receive a prescription).
● After registration, he can view the files which his patients have sent him and then he can
diagnose and send the prescriptions accordingly.
● After the doctor sends the prescription, he will receive his fee in HealthToken.
Incentive mechanism:
● We introduced native ERC20 tokens, named HealthToken.
● On first registration, users (both patient and doctor) may receive 1000 free tokens
(owner can send it to each user only once) so that they can use the platform (this may
act as ICO)
● When the user sends his data to the doctor, the doctor’s fee is automatically deducted
from his account and is stored securely in the Smart Contract.
● The stored fee is only released and sent to the doctor only when he sends the
prescription to the patient.
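The incentive flow above can be illustrated with a plain-Python sketch. This simulates the escrow logic described — fee locked on data submission, released on prescription — and is not the actual deployed Solidity contract; all names and the token amounts other than the 1000-token registration grant are hypothetical.

```python
class HealthTokenEscrow:
    """Toy simulation of the HealthToken fee-escrow flow."""

    def __init__(self):
        self.balances = {}
        self.escrow = {}  # (patient, doctor) -> locked fee

    def register(self, user):
        # Each user receives 1000 free tokens on first registration only.
        self.balances.setdefault(user, 1000)

    def send_data(self, patient, doctor, fee):
        # The doctor's fee is deducted from the patient and held in escrow.
        if self.balances[patient] < fee:
            raise ValueError("insufficient balance")
        self.balances[patient] -= fee
        self.escrow[(patient, doctor)] = fee

    def send_prescription(self, doctor, patient):
        # The locked fee is released to the doctor only with the prescription.
        self.balances[doctor] += self.escrow.pop((patient, doctor))
```

Holding the fee in escrow rather than paying immediately is what guarantees the doctor is paid only after the patient actually receives a prescription.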
Challenges we ran into
We overall faced 3 major challenges:
1) Storing web3 and contract instances in local storage because of their cyclic references. We found a workaround by removing the cyclic objects.
2) Deploying smart contracts on the Matic Network. Solved after discussing the same with the
Matic Team.
3) CORS error with IPFS storage. Found a solution on google.
Accomplishments that we're proud of
This application will help solve this problem, by providing a platform for patients and doctors to
interact and share details and prescriptions so that the patient can get assistance anytime.
This application will not only be useful at such critical times but can also act as a health
monitoring device for patients who require regular monitoring and checkup
What we learned
Use of:
● IPFS:
○ IPFS is a data storing platform and is used to store the patient’s data.
○ Storing patient data on-chain would be very costly and not very efficient, which made us choose IPFS for data storage.
○ The IPFS hashes are generated according to the content of the files so it is
practically impossible for any random user to find the unique hash.
○ By any chance, if any hacker finds out the file hash and tries to steal the data
from IPFS, then he will have to decrypt the AES encrypted file which is another
practically impossible task.
○ The patient can share his file hashes with the doctor. To view those files, the
doctor will have to download files using our platform only so as to decrypt it.
Hence allowing only legit users to access the sensitive data.
● Truffle:
○ Solidity dApp development framework.
● Metamask:
○ Browser extension that acts as a bridge to connect ethereum network with
browser.
● Matic:
○ It is a Layer 2 scaling solution that achieves scale by utilizing sidechains for
off-chain computation.
○ It is used to remove the gas price for every transaction and to make the
transactions faster and efficient.
○ Because of Matic Network, the application doesn’t have any extract cost at the
moment (0 gas for every transaction).
What's next for Covid Healthcare Data Management
Payment system for doctors, chat system for real time chat with doctors.
Built With
bootstrap
ethereum
ganache
matic
metamask
react
solidity
Try it out
github.com
tender-mclean-a3d648.netlify.app | Covid Healthcare Data Management | This application will help users/patients to connect with their doctors and getting immediate and remote assistance for their health. This may also act as a health tracker. solve doctors’ availability | ['Shivay Lamba', 'Suraj Singla', 'Pulkit Midha', 'rahul garg'] | [] | ['bootstrap', 'ethereum', 'ganache', 'matic', 'metamask', 'react', 'solidity'] | 12 |
10,108 | https://devpost.com/software/syllabase | Home Screen
Create A New Entity + Profile Settings
Browse Entities
Browse Entities
Browse Syllabi
Edit Your Own Syllabi
Inspiration
Teachers from all over the world, including my own previous teachers, have been lacking resources. They want to interact with other teachers to get course material or inspiration for their own curriculum, but they have no easy way to do so. They cannot attend professional development conferences everyday, and are only limited to educators in their geographical vicinity in many cases. That is why I created Syllabase, to connect educators across the world so they can instantly access the world's educational resources through technology.
What it does
Syllabase is the premier "database of syllabi". It connects educators across the world with each other to share their teaching material and inspire others to incorporate what they found successful in the classroom into their classrooms. Anyone can create an account. Users create entities, that can be anything they want to share with the world's educator community, including book recommendations, presentations, curriculum ideas, links to resources, worksheets, and much more. If a user sees an entity they like, they can add it to a syllabus, an organized collection of entities. Anyone can create a syllabus and anyone can view it. Syllabase is a perfect platform for teachers to communicate and share their best resources with the world to make it a better place.
How I built it
I used Google Firebase on the backend, with Firebase Authentication, the Cloud Firestore database, and Google Cloud Storage. I made the frontend using React JS.
Challenges I ran into
This was my first time using a Backend as a Service for a backend instead of creating my own backend. I was not familiar with the Firebase API and had to do some extensive debugging with the database and the authentication services. However, I believe it was worth the time for I gained experience in using this new branch of modern technology. I also have to worry less about scaling and the backend, because Firebase takes care of much of that for me and I could focus on making the app.
Accomplishments that I'm proud of
I'm proud that I could navigate the Firebase API and get the basics of the database and authentication working.
What I learned
I learned about how teachers could really benefit from a sharing platform for educational material, especially in places where teaching resources are limited and literacy rates are low, like in Africa. Programming-wise, I learned how to integrate a BaaS (Backend as a Service) into React, and how to use this new technology in my future projects.
What's next for Syllabase
Syllabase is built to scale and becomes more useful as more people use it; therefore, its future is brighter than ever. I plan to add messaging functionality between users, enhance the speed of the application, and scale it to all continents so that educators in Africa have access to material from US educators and educators across the world.
Attached Below are the links to the demo site and the github repo
Built With
css3
firebase
google-cloud
html5
javascript
react
Try it out
youthful-morse-2d6bb7.netlify.app
github.com | Syllabase | The World's Education at Your Fingertips | [] | [] | ['css3', 'firebase', 'google-cloud', 'html5', 'javascript', 'react'] | 13 |
10,108 | https://devpost.com/software/odogwu | Inspiration
A lot of students have trouble accessing education resources because of limitations in internet access. In Africa, the prevalence of social media platforms such as WhatsApp and Facebook has made internet bundle access for such tools easier and more cost-effective, but broader internet coverage is still expensive. Odogwu seeks to bridge the gap between students and teachers who have resources and those who don’t by leveraging the internet, social media platforms such as WhatsApp, and USSD to reach the most affected students.
What it does
Upload resources/stores in the cloud
Whatsapp bot searches
USSD searches
Server view
People
Brian Ntanga, Hillary Tamirepi
How I built it
We used React (JavaScript) for the frontend and Node.js for the backend. We also used Twilio for creating the WhatsApp bot, and Africa's Talking's USSD services for our USSD features.
Challenges I ran into
We had trouble implementing an effective search for books. We ultimately decided to use an external library (Fuse.js) to make the search process easier.
It was hard finding USSD service providers and a WhatsApp API (which is not easily accessible). Luckily Twilio came through and we were able to use its API to make bots.
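The fuzzy book search idea can be approximated with Python's standard library. In this sketch, difflib's similarity ratio stands in for Fuse.js's scoring (the two algorithms differ, so treat this as an analogy), and the titles are made up.

```python
import difflib

# Hypothetical catalogue standing in for the real book index.
BOOKS = ["Things Fall Apart", "Half of a Yellow Sun", "Weep Not, Child"]

def fuzzy_search(query, titles, cutoff=0.3):
    """Rank titles by similarity to the query, dropping weak matches."""
    scored = [(difflib.SequenceMatcher(None, query.lower(), t.lower()).ratio(), t)
              for t in titles]
    return [t for score, t in sorted(scored, reverse=True) if score >= cutoff]

print(fuzzy_search("things fall apart", BOOKS)[0])  # Things Fall Apart
```

The value of fuzzy matching here is that a misspelled query such as "thngs fall aprt" still ranks the intended title first, which matters when searches arrive over USSD or WhatsApp from small phone keypads.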
Accomplishments that I'm proud of
Building an MVP in less than a week
What I learned
The value of teamwork .
What's next for Odogwu
Customization for specific organizations to allow stakeholders to use the platform in order for clients to access resources offline.
Built With
firebase
fuse.js
javascript
react
twilio
Try it out
github.com | Odogwu | Bridging gaps between learners and resources for a brighter future | ['Brian Ntanga', 'Hillary Tamirepi'] | [] | ['firebase', 'fuse.js', 'javascript', 'react', 'twilio'] | 14 |
10,108 | https://devpost.com/software/telebrary-257vq0 | Inspiration
With more permissive internet bundles (e.g. social network bundles) and cheaper entry-level smartphones becoming available, even rural areas in Africa can access some form of messaging app.
This was the inspiration for building a library system for the African Library project on telegram as a bot.
What it does
It enables a person running a rural library to manage the library through the bot: viewing all the books in the library, seeing overdue books, lending books to children, and a few more administrative tasks.
The app would also have a dashboard showing all these changes in a city or town where internet access is more available, keeping track of the various "telegram libraries" across rural locations.
How I built it
The Telegram bot is a Python script, whilst the API to control the library is built in Flask.
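A minimal sketch of the lending and overdue logic such bot commands could sit on is shown below. The 14-day loan period and the data shapes are assumptions for illustration, not Telebrary's actual code.

```python
from datetime import date, timedelta

class Library:
    """Toy lending ledger behind the bot's lend/overdue commands."""

    def __init__(self, books):
        self.books = set(books)   # titles currently on the shelf
        self.loans = {}           # title -> (borrower, due date)

    def lend(self, title, borrower, today, days=14):
        self.books.remove(title)
        self.loans[title] = (borrower, today + timedelta(days=days))

    def overdue(self, today):
        return [t for t, (_, due) in self.loans.items() if due < today]

lib = Library({"Things Fall Apart", "So Long a Letter"})
lib.lend("Things Fall Apart", "Chipo", date(2020, 6, 1))
print(lib.overdue(date(2020, 6, 20)))  # ['Things Fall Apart']
```

In the bot, each Telegram command would map onto one of these methods, with the Flask API syncing the same state to the town dashboard.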
Challenges I ran into
I initially started building the API and dashboard in Django but realized I would not be able to finish before the hackathon ended. Switching to Flask came a bit late, and I did not complete the application before the deadline anyway.
Deployment of the telegram bot, currently it is running off my local machine.
Accomplishments that I'm proud of
Built a responsive telegram bot
What I learned
Learned how to use the Telegram API with Python.
What's next for Telebrary
Rebuilding the backend in Django, completing the admin dashboard and deploying the bot to a production server.
Built With
flask
python
sqlalchemy
telegram
Try it out
gist.github.com | Telebrary | A library system on Telegram | ['Kacha Mukabe'] | [] | ['flask', 'python', 'sqlalchemy', 'telegram'] | 15 |
10,108 | https://devpost.com/software/library-heros | Building out our scene
Inspiration
The African Library Project is an incredible social venture, but the members of my team had never heard of it. Knowing that more publicity translates to more donations for African literacy, we tried to think of ways we could use our unique backgrounds and skillsets to enhance African Library Project's marketing and outreach strategy. Our team specialized in the VR and 3D production pipeline.
What it does
Library Heroes is a VR web app that can be used in browser. In a virtual environment, users can meet and learn about important people in African literature. It is an interactive way to learn more about literary heritage. It allows users to read about famous African writers and choose to buy their books online. A portion of the proceeds made through the app benefit The African Library Project. The webxr app also allows users to donate straight from the experience.
The experience is meant to be added to the African Library Project's website. It can be opened in a normal browser, as well as in a virtual reality headset, where you will be immersed in the experience and can interact with famous African writers.
How we built it
We used Unity, Maya, Substance Painter and Avatar Maker.
Maya for modeling all our assets.
Substance Painter to paint the assets.
Avatar Maker to make avatars.
Unity for all the scripting.
Challenges we ran into
Our submission video that we worked on for so long got deleted last minute and we couldn't submit it.
Other challenges: working with WebXR is new for every developer! It is a new feature, and it was hard to set it up and make sure it works.
Accomplishments that we're proud of
One of the accomplishments that we are most proud of is building this application in WebXR, which is a cutting-edge API that enables virtual reality headsets to integrate with the web. This technology is still extremely new.
What we learned
We learned how to develop for WebXR ! We never used it before.
What's next for Library Heros
Speech interactivity - more authors
Built With
avatarsdk
c#
lightroom
maya
substance-painter
unity
webxr
wix
Try it out
jazzimms.wixsite.com
github.com | Library Heros | Engaging with Africa's rich literary heritage in XR for the greater good | ['Ines Said', 'Austin Stanbury', 'jeffrey zimmerman'] | [] | ['avatarsdk', 'c#', 'lightroom', 'maya', 'substance-painter', 'unity', 'webxr', 'wix'] | 16 |
10,108 | https://devpost.com/software/smesbus | Landing page
service provider page
signup page
login page
Inspiration
Whenever we travel to a new location and wish to have a haircut, access tailoring services or simply want to eat, the challenge comes in trying to figure out the best among tens or hundreds of such services. It would usually involve a lot of "trial and error" which is economically exacting, at best.
This problem inspired the idea of Smartcityz!
What it does
Smartcityz is an online platform where service providers and their potential customers meet. Service providers showcase their services while customers (service consumers) come to look up good services.
Service consumers rate and submit reviews on any service they patronize. Subsequent consumers use such ratings to infer the quality of the providers in question.
For service providers, Smartcityz provides tips and recommendations on methods to improve the quality of their services.
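The rating mechanic described above can be sketched as follows (a toy in-memory illustration with invented provider names; the real project implements this in Django with PostgreSQL):

```python
from statistics import mean

# Toy review store keyed by provider; the real data would live in PostgreSQL.
reviews = {"Tunde's Barbershop": [5, 4, 5], "Quick Tailors": [3, 2]}

def add_review(provider, rating):
    """Record a consumer's 1-5 star rating for a service provider."""
    reviews.setdefault(provider, []).append(rating)

def average_rating(provider):
    """The average rating later consumers use to judge service quality."""
    scores = reviews.get(provider)
    return round(mean(scores), 2) if scores else None

add_review("Quick Tailors", 5)
print(average_rating("Quick Tailors"))  # → 3.33
```

A production version would also weight by review count so a single 5-star review does not outrank a provider with hundreds of 4-star reviews.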
How we built it
Smartcityz was conceived and developed by a team consisting of back-end, front-end and UI/UX developers.
For the back end, the following were employed:
Django
Python
Postgresql
For the front end, the following were used:
JQuery
Bootstrap
HTML
CSS
Challenges we encountered
Trying to get an API for detecting a user's location via their IP address with reliable results was difficult. Different free APIs were adopted, but the locations generated were not consistent.
Accomplishments that we're proud of
The team is quite proud of its capacity to actualize an idea that initially seemed almost impossible to accomplish for a set of junior developers.
The prospect of how businesses in Africa and beyond stand to benefit from our innovation is yet another plus to be excited about.
What we learned
The team learned a great deal about the methods and tools used for collaboration between disparate developers, which is necessary for building solutions, even as it was engaged in building an enterprise application.
Prior to this project, most of the members had little to zero experience integrating location and maps into applications.
What's next for Smartcityz
Although we are still making modifications to the project, we are aiming at:
acquiring an office space
properly hosting our web app
organizing seminars and running radio adverts to raise awareness of Smartcityz
fully launching our company and generating remarkable profit
Built With
bootstrap
django
jquery
python
Try it out
smartcity090.herokuapp.com | Smartcityz | Smartcityz is an online platform for small and medium scale service providers to showcase their services, boosting customers' confidence on opting for service providers irrespective of their location. | ['Bello Shehu', 'Luqman sani', 'Fasina Ifelajulo', 'Opeyemi Ajala'] | [] | ['bootstrap', 'django', 'jquery', 'python'] | 17 |
10,108 | https://devpost.com/software/coachally-interactive-virtual-classroom-video-calling-app | Assist feature
CoachAlly Home page
User's can easily seek guidance , report bugs
Seek guidance with in-app screenshot&doodle feature instantly
Video Call
AR Classroom
Broadcast Mode
Inspiration
During these pandemic days, our team, too, faced issues while learning through online portals. So our team took a
step forward in resolving the common issues and further improving on them.
What it does
CoachAlly application helps in creating interactive virtual classrooms using the latest technologies like
Augmented Reality
and creates room for the virtual classroom through
high-quality video calling
with a low-latency experience.
Augmented reality in education is surging in popularity in schools worldwide. Through AR, educators are able to improve learning outcomes through increased engagement and interactivity. AR features aspects that enhance the learning of abilities like problem-solving, collaboration, and creation, to better prepare students for the future. Teachers can include custom AR objects and pre-recorded lecture videos, which help students view course materials at the ease of their home.
Live sessions can be held virtually through the class meet option. We have designed a one-step join-meeting flow keeping young students in mind. The app asks only for the meet code and doesn't collect other credentials, thus improving end-user privacy.
We have also integrated an
ASSIST
feature which guides users step-by-step if they either need a walkthrough of a feature or encounter a bug. The main advantage of this feature is that users can make use of an in-app screenshot tool with an on-board doodle option to contact the admin/developer hassle-free.
How I built it
Came across the
Flutter
technology recently and since then was caught up with it. We are
amateurs
and this is our first big step upfront on solving the problem with it.
We approached our problem with Flutter, which makes the app run natively on all platforms. The UI is made with the help of Google's Material UI. The video call runs seamlessly with Agora as the backend. The feedback and assist features are done with the help of Wiredash, which relays instant messages from end-users.
We would like to thank our sponsor echoAR, which helped us integrate AR seamlessly with our app.
CoachAlly is a light-weight app which is available across various platforms
-
Mobile platforms- IOS, Android
Desktop app-MacOs, Windows, Linux
Web app- Across all browsers
Challenges I ran into
We came across many challenges, as this was our first big project using Flutter. We thank the mentors who took the time to help us. Students get insights into concepts and better understanding with AR, and I am proud to contribute to the global community.
Accomplishments that I'm proud of
We are very proud of the big leap we dared to attempt, which came out as a bug-free working app in a short span of hours. We have learned many skills since the start of the hack, and learned to face the challenge of short deadlines to give the best outcome for our app.
What's next for CoachAlly -Interactive Virtual Classroom & Video Calling app
We aim to increase security, add feature-rich content and make our app more accessible to all age groups. We plan to improve our app consistently for the best end-user satisfaction.
Built With
agora
ar
cupertino-ios
dart
echoar
flutter
materialui
Try it out
github.com | CoachAlly -Interactive AR Virtual Classroom & Video Call app | CoachAlly application helps in creating interactive virtual classrooms using the latest technologies like Augmented Reality and creates room for the virtual classroom through high-quality video calls. | ['Sudir Krishnaa RS'] | [] | ['agora', 'ar', 'cupertino-ios', 'dart', 'echoar', 'flutter', 'materialui'] | 18 |
10,108 | https://devpost.com/software/hack-for-africa-wyd6k4 | GIF
Greetings from
Team Cypher
,
We are glad to present our idea in
Hack for Africa
. It is called
"Shield Africa"
and we believe it will help millions of people from the African continent.
Inspiration for doing the project:
We went through various case studies and journals which told us more about the people of Africa and their lifestyle. This helped us in giving the idea of the hack we can develop for them. This also fascinated our team to develop a solution that empowers them to fight COVID-19. We were cautious of the limited testing resources in Africa and wanted to develop something that prevents COVID-19 from spreading and would be widely acceptable.
What it does:
We have created an app that helps the users maintain social distancing through an immersive AR experience visualizing their safe zone in the real world. The app also helps the authorities to trace the COVID-19 spread. We also provide a direct link between the people and local businesses to help them grow. We prevent the onslaught of misinformation by providing real-time COVID-19 updates.
How it works:
Registering Users to the Database:
Each user upon registration gets a UID and the device is registered to the database.
Detecting nearby devices:
If the user comes in contact with another, they exchange their UIDs. This is achieved by BLE technology.
Have your personal AR safe zone:
This is implemented with Unity and Google ARCore. Users can maintain social distance by visualizing their virtual safe zone in the real world.
Access nearby stores:
This is made possible by integrating Google Maps API. Users can approach local stores. This is a win-win situation for both the users and the businesses.
Get real-time updates on COVID:
Users can check real-time data country-wise as well as worldwide in order to prevent the onslaught of misinformation.
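The UID exchange in the first two steps amounts to building a contact graph. A minimal model of that idea (illustrative only; the real app exchanges UIDs over BLE and stores them in the cloud, and the UIDs below are invented):

```python
from collections import defaultdict

# Maps each user's UID to the set of UIDs their device has come near.
contact_log = defaultdict(set)

def record_contact(uid_a, uid_b):
    """Called when two devices exchange UIDs over BLE."""
    contact_log[uid_a].add(uid_b)
    contact_log[uid_b].add(uid_a)

def trace(uid):
    """UIDs of everyone to notify if `uid` tests positive for COVID-19."""
    return sorted(contact_log[uid])

record_contact("user-1", "user-2")
record_contact("user-2", "user-3")
print(trace("user-2"))  # → ['user-1', 'user-3']
```

Because only opaque UIDs are stored, the authorities can trace the spread without the log revealing anyone's identity directly.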
Challenges we ran into:
It was a big challenge for us to develop a hack for a community to which we have very little exposure. Besides this, we also faced some technical problems, like integrating a cross-platform Unity application into Android and working on plane detection with ARCore.
Accomplishments that we're proud of:
We are proud to have built upon technologies like Augmented Reality (AR) and Bluetooth Low Energy (BLE), which were more or less alien to us, and to have worked under a short window of time to give life to this idea of ours by building an industry-ready platform from scratch.
What's next for Shield Africa:
Applying diameter-change functionality for the safe zone.
News section regarding the African continent integrated with COVID Application Program Interface(API)
Enhanced Navigation so that people can know about the status of not only stores but also government offices while avoiding crowds.
Meet our team :
Deepak Chaturvedi
Github
Linkedin
Kartik Gupta
Github
Linkedin
Kushagra Goel
Github
Linkedin
Saurabh Chaudhary
Github
Linkedin
Built With
android-studio
arcore
c#
covid-api
google-cloud
google-maps
java
unity
Try it out
github.com | Shield-Africa | To prevent COVID-19 outbreak in Africa, we made an app that assists in tracing individuals while helping them maintain social distancing with an immersive AR experience and info about local businesses | ['Kartik Gupta', 'saurabh chaudhary'] | [] | ['android-studio', 'arcore', 'c#', 'covid-api', 'google-cloud', 'google-maps', 'java', 'unity'] | 19 |
10,108 | https://devpost.com/software/lite-library | Inspiration
I was initially inspired to participate in this Hackathon because I've always wanted to do more to impact Africa, especially Kenya where I'm from. I was further inspired by what the African Library Project is already doing in Africa. I looked into the way they support literacy in Africa and looked for ways to enhance their reach. I realized that librarians that partner with the African Library Project have to document their transactions on paper which can be very tedious and frustrating, so I came up with an app that would alleviate that stress.
What it does
This app is meant to make it easier for independent librarians to document transactions rather than using pen and paper. Anyone can create an account, log in, and keep track of books that are checked out.
It is a platform for librarians. The ability to log in is necessary for multiple librarians to have access to the product.
How I built it
I used React Native and referenced React Navigation's library. I began development in snack and migrated to android studio when I had a basic skeleton.
Challenges I ran into
This was my first time using React Native, and I spent most of the week just setting up the project. However, once I got the project working, React Native allowed for quick development. I used Firebase's authentication and database for logging users in and storing data.
Accomplishments that I'm proud of
I'm proud that I managed to implement my idea to some degree. I'm also proud that I can say I've built an app, though it barely works.
What I learned
I learned that expo is really annoying. I also learned that I love developing apps and would like to improve my skills.
Built With
firebase
javascript
react-native
Try it out
github.com | Lite Library | An android app that helps independent African librarians manage their libraries. | ['Dan Chepkwony'] | [] | ['firebase', 'javascript', 'react-native'] | 20 |
10,108 | https://devpost.com/software/sh-renet | Home Page
Inspiration
"Everybody has something, nobody has everything" inspired by this quote I thought of
Sh@reNet
. A system through which everyone can not only help others but get help when they require it as well.
What it does
Sh@reNet creates a network between various organizations to share resources. It is an e-platform which provides space for uploading the inventory of equipment or items one has and, at the same time, requesting the equipment and items one needs. Institutions and organizations can register on this platform with the common intention of multiplying resources by way of sharing.
How I built it
We first created the pipeline for the project, then set up the databases. Then we worked simultaneously on the display pages and the functionality of the website. On the last day, we styled the website.
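The core matching idea, pairing one organization's requests with another's spare inventory, can be sketched like this (a toy illustration; the project implements this in Django, and the organization names and items are invented):

```python
# Toy data: what each organization has spare, and what each one needs.
inventory = {"School A": {"microscope", "projector"}, "Clinic B": {"wheelchair"}}
requests = {"School C": ["projector", "wheelchair", "generator"]}

def find_matches(requester):
    """Map each requested item to the organizations able to supply it."""
    matches = {}
    for item in requests[requester]:
        suppliers = [org for org, items in inventory.items() if item in items]
        if suppliers:
            matches[item] = suppliers
    return matches

print(find_matches("School C"))
# → {'projector': ['School A'], 'wheelchair': ['Clinic B']}  (no one has a generator)
```

In the web app, the unmatched items would stay visible as open requests until some organization uploads matching inventory.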
Challenges I ran into
Working as a team was a challenge due to differences in the versions of our Python packages. Eventually, we figured out a way to work together by creating a virtual environment and sharing parts of code to be added, instead of whole folders.
Accomplishments that I'm proud of
I am proud that, irrespective of the hurdles we faced, we were able to follow through and make a full-scale project with all edge cases thought of.
What I learned
I learned a lot about Django considering this was my first full-scale website with it.
What's next for Sh@reNet
The mission of Sh@renet is to enable sharing and pooling of resources among, and between, various institutions and organizations like health, education, and public works department etc. for the wellbeing and common good of the society. This will help the economically challenged African countries to optimally utilize the resources and pursue the path of growth. In the future, we wish to integrate as many categories of organizations as possible to widen our network.
Built With
bootstrap
css3
django
html5
javascript
python
Try it out
github.com | Sh@reNet | The mission of Sh@renet is to enable sharing and pooling of resources among, and between, various institutions and organizations like health, education, and public works department etc. | ['Aaditya Yadav', 'William Xu'] | [] | ['bootstrap', 'css3', 'django', 'html5', 'javascript', 'python'] | 21 |
10,108 | https://devpost.com/software/african-library-project | Knowledge Library
Front Page of WebApp
Inspiration
There are three major problems in the African continent:
1. Lack of internet access
2. Lack of tech access
3. COVID-19 spread
I believe these problems make it hard for libraries to operate. Hence this project attempts to provide a free platform for African libraries to connect, as well as to make it easy for librarians to handle library operations. The most important aspect is that it can carry out 90% of operations without internet access, which is a major constraint in Africa.
What it does
This project is a responsive web app that can be operated easily on any device. It attempts to resolve the issue of organizing the library and handling hard copies of book-issue forms on both the student and the library end. On the student end, students can find the location of books and sort them according to their level. The librarian end deals with three main things: first, adding new books to the existing directory; second, issuing book-lending forms which can be easily stored in the portal; and lastly, building a strong connection between libraries by enabling a link through which one library can request or send books to another. Hence there are four major benefits of this web app:
Responsive(operate on any device)
Internet needed for only 10% operations
Reduced paper workload
Reduce unnecessary movement in the library hence limit COVID-19 spread
Accomplishment that I am proud of
I am proud of the fact that I have been able to complete the entire front end in a passage of just one week. I have been able to do this independently and I am really satisfied with how user-friendly and responsive the project is. Lastly, I am 100% sure that this project can be implemented without any major struggle or initial investment.
What I learned
To be honest, this hackathon provided a major opportunity for me to learn front-end development in its entirety. Whether it was implementing cards or designing navbars, it helped me learn responsive designing. I am quite confident with React now, and the project increased my interest in web development.
Improvements
I believe I did amazing work within the given time frame, and once the backend is done I am confident that this project will be ready to use.
Built With
bootstrap
css
materialui
react
vscode
Try it out
github.com | Knowledge-Library | Responsive Web App Library to facilitate the African library Project | ['HamaylAfzal Afzal'] | [] | ['bootstrap', 'css', 'materialui', 'react', 'vscode'] | 22 |
10,108 | https://devpost.com/software/corona-protective-smart-hat | CORONA PROTECTIVE SMART HAT
ART WORK DIAGRAM
We all know that the coronavirus is a very dangerous virus, and we are all worried about it. The coronavirus enters the human body through the eyes, nose and mouth via our contaminated hands. When we go outside our homes, we unconsciously touch our eyes, nose and mouth with our contaminated hands.
The effective module introduced here is a personal protective intelligent hat. The hat is integrated with a small circuit made of a Hall sensor, resistor, diode, transistor, buzzer, battery and switch. The manual switch is incorporated so the module can be turned on and off as required. The novel coronavirus is capable of entering our body when we touch our eyes, nose and mouth. This special type of smart hat is based on the working principle of a Hall sensor and a neodymium disc-magnet ring. The Hall sensor can detect the magnet within a distance of 3-3.5 cm, and this detection range can be changed according to requirements.
When the neodymium ring comes close to the Hall sensor, the magnetic field creates a voltage difference in the sensor; this voltage difference is known as the Hall voltage. From this Hall voltage an output current is generated in the sensor. This output current flows through the resistor, diode and transistor. Once the base of the transistor is activated, the current flows to the buzzer and makes it sound. The sound of the buzzer alerts the person against unconsciously touching their eyes, nose and mouth.
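The trigger logic of the circuit can be modelled in a few lines of Python (a software simulation of the hardware behaviour described above, using the stated 3-3.5 cm detection range; the distances are illustrative):

```python
DETECTION_RANGE_CM = 3.5  # Hall sensor detects the ring magnet within ~3-3.5 cm

def buzzer_on(hand_distance_cm):
    """True when the magnet ring on the hand is close enough to trigger the buzzer."""
    return hand_distance_cm <= DETECTION_RANGE_CM

# A hand approaching the face: the buzzer stays quiet until within range.
print([buzzer_on(d) for d in (10.0, 5.0, 3.0)])  # → [False, False, True]
```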
This invention claims its importance on the basis of the present world scenario, and it is an obvious candidate to introduce to industry.
Built With
battery
buzzer
diode
hall
magnet
resistor
sensor
wire | CORONA PROTECTIVE SMART HAT | Reduce the chances of unconsciously touches of eyes, nose and mouth by contaminated hand. | ['Sarthak Chatterjee'] | [] | ['battery', 'buzzer', 'diode', 'hall', 'magnet', 'resistor', 'sensor', 'wire'] | 23 |
10,108 | https://devpost.com/software/hack-for-africa | Inspiration
I gained much inspiration from my two idols, Odion Ighalo and Sadio Mane, especially for what they have done for Africa.
What it does
It does everything for Africa: these two people have the ability not only to change their respective countries but to change the world's whole perspective on Africa.
How I built it
I dedicated myself to my two idols and did what i had to
Challenges I ran into
defeating liverpool and man city
Accomplishments that I'm proud of
Most goals in CONCAF
What I learned
Need to do a lot more
What's next for HACK FOR AFRICA
Kalidou Koulibaly
Built With
dos
java
python
tres
uno
Try it out
www.sadio-mane.com | HACK FOR AFRICA | Love Africa, Save Africa, EVERYTHING FOR AFRICA | ['Saurav Sharma'] | [] | ['dos', 'java', 'python', 'tres', 'uno'] | 24 |
10,108 | https://devpost.com/software/readup-qz0cat | Software architecture diagram
Logo
ReadUp
Inspiration
The Covid-19 pandemic has highlighted the adversities faced by disadvantaged groups, who are disproportionately affected in multiple ways. Coming from tech and STEM backgrounds with a strong belief in education as a driver of change, we would like to contribute our expertise and ideas to alleviate the suffering of the disadvantaged and improve their future prospects one way or another.
What it does
We designed a progressive web app which acts as a personalised library to complement the literacy and education efforts of the African Library Project. The app is built with an integrated and centralised book cataloguing system covering the African Library Project's existing libraries.
How we built it
We built a UI design using Figma and present our final products in Prezi
Challenges we ran into
The major challenge is the limited technological infrastructure currently available in these Afrophone countries, which is further exacerbated by the current pandemic. Another big hurdle is our lack of personal, practical experience with the background and culture of Afrophone countries. As we progressed through the project, we realised that having a humanities or social-science background, which the team lacks, would be a huge advantage in customising our features to the learning patterns of Afrophone people.
Accomplishments that we're proud of
ReadUp is our pride and joy. It is a simple, elegant and promising forward-thinking product. We believe this idea holds a lot of potential to establish a foundation for tackling the current lack of library management, as well as for literacy improvement, in Afrophone countries.
What we learned
We appreciate the multiple levels of adversity faced by the Afrophone people and the corresponding challenges of designing for them.
What's next for ReadUp
We are excited to have the beta version of ReadUp to be developed and implemented. As we are still very new to the software development process as well as the Afrophone situation, some initial data could guide us toward greater improvements for the app that better cater to the unique learning environment in these countries. Hopefully, in the near future, our collected users and book loan database generated from ReadUp could facilitate customised library and teaching in a region-relevant and age-dependent manner.
Built With
figma
imovie
miro
prezi
Try it out
www.figma.com
docs.google.com | ReadUp | ReadUP! There's more than one way up! | ['Quyen Do', 'Ava Chan', 'Do Quyen'] | [] | ['figma', 'imovie', 'miro', 'prezi'] | 25 |
10,108 | https://devpost.com/software/knowpool | Platform Architecture
KnowPool
Source Code :
https://bitbucket.org/Xonshiz/knowpool/src/master/
APK :
https://bitbucket.org/Xonshiz/knowpool/downloads/com.companyname.KnowPool.apk
Demo Video :
Architecture :
https://bitbucket.org/Xonshiz/knowpool/src/master/Architecture.png
Documentation :
https://bitbucket.org/Xonshiz/knowpool/src/master/Xonshiz%20-%20KnowPOOL%20%5BAmazon%20Aurora%20Database%20Challenge%5D%20Documentation.pdf
Presentation :
https://bitbucket.org/Xonshiz/knowpool/src/master/Amazon%20Aurora%20Hackathon%20Presentation.pdf
KnowPool is an innovative platform where users can easily share their programming expertise with other learners by means of professional blogs and videos. We think there is a lot of great study material related to programming, computers and the internet in general; however, this material is scattered across various platforms, making it harder for people to get the best content in one place.
Our Solution :
Various people have their own blogs or YouTube channels, and some have dedicated websites for the same. However, it is hard to keep up with these various platforms simultaneously. There is also the friction of switching between different websites and getting confused by the content, because the way data is represented differs from platform to platform.
This creates an unnecessarily complicated architecture consisting of modules or silos. Thus, instead of creating another silo in this complex hierarchy, we aim to unify and simplify the problem.
Platform Architecture
How Platform Works :
Currently, if you want to learn a particular skill, you go through various websites or sources to find content with good ratings. Various platforms have various criteria for judging content. Sometimes there are people who want to share their knowledge about a certain topic, but they find it cumbersome to manually create a blog, channel or website to host their content. We wish to give these users a way to avoid going to such lengths.
Whoever wants to share their programming knowledge can simply sign up on our platform as an "instructor" and start writing blogs or sharing tutorial videos. This way, anyone can teach whatever content they have superior skills in.
Learners will have the ability to rate, comment on and share those posts. This will also give new learners a unified review of a particular content source. We also wish to keep this platform completely free, because we ourselves as students believe that education can be free.
However, we will be having a donation section for maintaining server and architecture costs.
We've used C# (the Xamarin framework) to develop the client side, and on the back end we have developed our own PHP-based APIs to POST and GET data to/from the database. For the database, we've used MySQL.
You can take a look at our application's "Workflow" or "Wireframe" in this zipped folder itself.
Running This Project :
Let us remind you that the minimum Android OS you need to run this project is Lollipop (API Level 21), so make sure you satisfy the minimum requirements first. Otherwise, your handset won't be able to parse the APK file.
Permissions Required :
This application requires you to provide few permissions to it, in order to work properly. Here's the list of permissions that the application needs :
Internet Access
View WiFi Connections
Storage (Read/Write Perms For Cache)
Read Google Service Configuration
Instructions For Direct APK Installation :
If you want to run this application on your android phone, please move over to the "
Release
" section and download the latest stable APK build for your android phone. You do not need any external libraries or application.
Instructions For Developers/Testers :
If you're a developer or any user who wishes to test this application and run this android project, it's recommended to install Visual Studio with Xamarin Support and Android SDKs on your system. Remember that Android SDKs should be in your local path for you to be able to compile the project properly. You can find the source code in the "
SOURCE
" directory.
If you do not happen to have Visual Studio, it is recommended to get it because it'll download all the required packages on its own, if they're not present. You can use Visual Studio's Free Community Edition. It'll work, as we've developed this application on it.
But, if for some reason, you don't want to or can't install Visual Studio, you will need to have .NET, Xamarin, Android SDK and required Packages in your system's local path for you to be able to compile and execute this application project.
Built With
amazon-auto-scaling
amazon-rds-relational-database-service
c#
mysql
php
xamarin
xaml
Try it out
bitbucket.org
bitbucket.org
bitbucket.org
bitbucket.org
bitbucket.org | KnowPool | KnowPool is a platform where users can share their programming expertise with learners via blogs & videos. | ['Dhruv Kanojia'] | ['Best Aurora Serverless'] | ['amazon-auto-scaling', 'amazon-rds-relational-database-service', 'c#', 'mysql', 'php', 'xamarin', 'xaml'] | 26 |
10,108 | https://devpost.com/software/hfa | Inspiration
Enhancing the aims of the ALP
Promoting Libraries
Enhance training of local teacher-librarians
Starting Libraries
What it does
The application has 3 different user classes
Libraries
Librarians
Learners - Those accessing the libraries
There are different functionalities for the different user classes.
Libraries:
- Requesting for Books(based on topical needs)
- Initiating a partnership with ALP
Librarians:
- Accessing resources posted by ALP
- Accessing Information regarding ALP training sessions.
Learners:
- Subscription-based service to daily SMSes on a topic of choice
- USSD based learning
How I built it
The first step was generating a project overview, then the database design. Next followed setting up the codebase and creating the application views.
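The USSD-based learning flow can be sketched as a menu handler (a hypothetical illustration: Africa's Talking delivers the user's accumulated input as a *-separated text string, and replies start with CON to continue the session or END to finish it; the menu items below are invented):

```python
def ussd_handler(text):
    """Route a USSD session based on the user's accumulated input."""
    if text == "":
        return "CON Welcome to hfa\n1. Daily SMS lessons\n2. Training info"
    if text == "1":
        return "CON Choose a topic:\n1. Reading\n2. Maths"
    if text in ("1*1", "1*2"):
        topic = "Reading" if text == "1*1" else "Maths"
        return "END You are subscribed to daily {} lessons.".format(topic)
    if text == "2":
        return "END Ask your local ALP office about the next training session."
    return "END Invalid choice."

print(ussd_handler("1*2"))  # → END You are subscribed to daily Maths lessons.
```

Because USSD works on any feature phone, this keeps the learning content accessible without a smartphone or data bundle.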
Challenges I ran into
There wasn't a lot of time to develop the application.
What I learned
I mainly learned about building PWAs with Flask. I have done that in the past with JS technologies, but using Flask was new.
What's next for hfa
Building a chatbot using IBM Watson to answer FAQs.
Built With
africa's-talking
flask | hfa | A project that enhances the promotes and enhances literacy via accessible platforms. | [] | [] | ["africa's-talking", 'flask'] | 27 |
10,108 | https://devpost.com/software/self-tutor | Inspiration
Inspiration was to help eradicate illiteracy from Africa.
What it does
Identifies the objects shown to it, counts them, and spells them out. This helps an illiterate person learn to count, identify objects, and learn the alphabet through spelling.
Displays a series of images of alphabets on screen and asks the person to identify it. Also tells whether the identification is correct or not.
Speaks an alphabet for the user to write it on the paper, captures the image of the alphabet and checks whether it matches with the asked alphabet or not.
Helps the person to read a book by taking in the image of the captured page of the book and reading it aloud.
How I built it
The front end was made entirely with PySimpleGUI. YOLO was used as the object detection system, Pyttsx3 for text-to-audio, and the Google speech API for audio-to-text.
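As an illustration of the counting-and-spelling step, the sketch below turns YOLO-style detection labels into the prompt the app would read aloud. The prompt wording is my own assumption; the real app would pass the resulting string to pyttsx3, and only the text-building part is shown here.

```python
from collections import Counter

def spelling_prompt(labels):
    """Turn detected object labels into a counting-and-spelling prompt.

    e.g. two 'apple' detections -> "2 apples. apple is spelled a, p, p, l, e."
    """
    parts = []
    for word, count in sorted(Counter(labels).items()):
        plural = word + "s" if count > 1 else word
        letters = ", ".join(word)  # "cat" -> "c, a, t"
        parts.append(f"{count} {plural}. {word} is spelled {letters}.")
    return " ".join(parts)

# In the real app, the prompt would then be voiced, e.g. with pyttsx3:
#   engine = pyttsx3.init(); engine.say(prompt); engine.runAndWait()
```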
Challenges I ran into
This was my first time working with PySimpleGUI. Designing multiple windows through a master-slave architecture was a challenge. Apart from that, getting the speech-to-text conversion API to work was also a tough task, as the application crashed every time it was used, but the issue was solved in the end.
Accomplishments that I'm proud of
I am proud of bringing my idea to life in a given period of time and experimenting with new technologies.
What's next for Self Tutor
Support for other languages can be added
Other interactive teaching methodologies can be added.
Built With
google-speach-api
pillow
pysimplegui
pytesseract
python
pyttsx3 | Hack for Africa | .. | ['kinster007', 'Chirag Goyal'] | [] | ['google-speach-api', 'pillow', 'pysimplegui', 'pytesseract', 'python', 'pyttsx3'] | 28 |
10,108 | https://devpost.com/software/reader-s-ear | Reader's Ear
The following is my submission for the
Hack for Africa: A Microsoft Challenge
under the
Best Education/Literacy Hack
category.
Preface
I joined the
Hack for Africa
competition late. Unfortunately, a working prototype is all but impossible given my time constraint (< 5 hours) and personal abilities.
Nonetheless, I recently stumbled upon these
Hackathons
, am overjoyed at their existence, and had to start right away!
That being said, I present what I'd consider an executive summary for an application that aims to assist new readers with learning their fundamentals and correcting errors when reciting passages of text.
Inspiration
A while ago I watched one of Google's I/Os where they went over a fairly complex feature. Their
"Okay Google"
voice-powered assistant was able to generate, using sound, automated messages for things like answering the phone. This technology has existed for companies, who use it to automate parts of service desk jobs.
The bread and butter that interested me was the ability to deconstruct a given sentence, then construct one as a response. Is there a way to gather a person's
broken
sentence? This is to say, for instance, if someone new to reading English was to butcher a sentence could this technology identify the mistake(s) and provide an alternative?
What it does
With the above stated, I can begin to explain more where my original idea of a "new reader assistant" comes into fruition.
From here on out, a
user
refers to a person using the application with the intention of improving their reading and pronunciation abilities. An
instructor
refers to a person offering academic support to their users.
Core idea
The core idea is for an application to deploy a neural network. This network would be trained on the proper pronunciation of pre-chosen passages. When this network is introduced to a new reading of a passage, given by speaking through a device's microphone, it will compare this reading to its trained counterpart.
Any differences the network detects would be signs of the user reading the passage improperly (see
Challenges I foresee
below for a caveat). The application would keep track of the errors encountered while the user is reading. At the end of the passage, the user will go back through their errors with the application. The application will provide information as to why it considered an error an error. In addition, the application would offer a number of tools for the user to correct their issue. This would chiefly include a correct reading from the application and the ability for the user to correct themselves, by rereading their mistake, while the application affirms this correction.
Extending the platform
The biggest area for an extension I see is letting instructors control certain instances of the application. This would be especially useful for the initial setup and configuration of the app.
For the user, this would eliminate any language-related prerequisites they may encounter when using the application. The user may only need to press a "Begin" button and start reading. Completing a passage with a certain level of correctness may prompt them to graduate to the next configured passage.
This relationship between a user and an instructor would let the instructor track an individual's, and community's, progression. Letting them identify problematic passages or words and overall make more informed decisions when offering help.
How I built it
N/A
Challenges I foresee
As I said before, a caveat I foresee is the network's responsiveness to differences in diction resulting from geographic and language barriers. By that I mean: would the network need to be trained on speakers from an area similar to the one where it's deployed, so as to avoid unwanted false-positive errors?
Another area of concern I have is how the application aims to replace a face-to-face instructor and user interaction. I can't possibly imagine the app overshadowing that kind of relationship. So instead, the focus I had in mind was for areas where individualized help is hard to provide. Or, where the application can act as a tool for instructors to identify a user who requires additional support.
I've elected to leave out more technical challenges to save myself speculating too much on the idea.
Accomplishments that I'm proud of
I'm proud of my initial idea, given that I came up with it in about half an hour. I feel that, for my first participation, it is something I'd have enjoyed building.
What I learned
While I wasn't a part of this particular Hackathon long enough to move past the planning step, I have discovered Hackathons in general! I will certainly be participating in more to sharpen my skills and help make an impactful difference in the world.
What's next for Reader's Ear
The next thing I would do is begin planning a proposed technical architecture. Laying out the various features and what will be needed to accomplish them. I'd also go about creating some designs in a prototyping software such as Adobe XD or Figma. | Reader's Ear | An app using a neural network to judge and provide feedback for a person reading out-loud to it. | ['Ryan Lockard'] | [] | [] | 29 |
10,108 | https://devpost.com/software/expan | Inspiration
It is necessitated by the current coronavirus pandemic, during which the government has put in place measures/guidelines for people to keep safe. However, people must find a proper way to continue with their lives and work, including travelling. Passengers using public transport make up a huge majority of travellers and are at high risk of contracting the virus. Hence there is a need for a passenger-manifest solution, especially in Africa, where mobile phone penetration is high but technology literacy is lower. The approach is to use USSD, which is widespread and familiar to most users. This enables easier contact tracing in the event of a positive case being identified among passengers who, in most cases, don't know each other.
What it does
This project aims at creating a solution that addresses that by:
Using the passenger's phone number/MSISDN as a reliable contact detail
Saving time in the collection of customer details
Digital storage of the manifest in the cloud
Easy access to the manifest, i.e. a downloadable PDF
Usage
Dial *483*129# on your phone (Kenya). Enjoy!
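One of the fiddly parts of this flow, stepping a passenger's USSD session through manifest capture, can be sketched as a tiny state machine. Here a plain dict stands in for the Redis session store, and the captured fields and prompts are illustrative assumptions, not Expan's actual flow:

```python
SESSIONS = {}  # session_id -> captured fields; Redis plays this role in production

STEPS = ["name", "id_number", "destination"]
PROMPTS = {
    "name": "CON Enter your full name",
    "id_number": "CON Enter your national ID number",
    "destination": "CON Enter your destination",
}

def ussd_step(session_id, user_input):
    """Advance one passenger's manifest-capture session by a single USSD reply."""
    record = SESSIONS.setdefault(session_id, {})
    if user_input:
        record[STEPS[len(record)]] = user_input
    if len(record) == len(STEPS):
        SESSIONS.pop(session_id)  # in production, persist the record to the manifest database
        return "END Saved to the trip manifest. Safe travels!"
    return PROMPTS[STEPS[len(record)]]
```

Each incoming USSD reply advances the session one field; once all fields are captured, the session is closed and the record would be written to the trip manifest.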
Challenges I ran into
Connecting services in the cloud, e.g. the database, Redis memory store, running containers, and networking
Working with Docker and deploying successfully
USSD set up. First time working with USSD
Handling user session and proper routing during USSD use
Creating a frontend. First time for everything
Accomplishments that I'm proud of
Handling a user session from start to finish
Dialing USSD code and seeing it working
What I learned
Design with users in mind
Importance of the design phase
Reading product documentation is important
What's next for Expan
Trying a simple trial run / proof of concept with a local Sacco in town
Built With
bootstrap
docker
flask
google-cloud
postgresql
redis
vue
Try it out
expan-one-lfw4jdwenq-ew.a.run.app
muchogoc.github.io
github.com
github.com | Expan | Contact tracing and tracking spread of the COVID-19 pandemic | ['Charles Muchogo'] | [] | ['bootstrap', 'docker', 'flask', 'google-cloud', 'postgresql', 'redis', 'vue'] | 30 |