Dataset schema (one column per entry; ranges are the min and max value or string length):

hackathon_id: int64 (1.57k to 23.4k)
project_link: string, lengths 30 to 96
full_desc: string, lengths 1 to 547k
title: string, lengths 1 to 60
brief_desc: string, lengths 1 to 200
team_members: string, lengths 2 to 870
prize: string, lengths 2 to 792
tags: string, lengths 2 to 4.47k
__index_level_0__: int64 (0 to 695)
10,395
https://devpost.com/software/ridhima-s-covid-app
This is the downloaded app on the phone.

Inspiration: COVID-19.

What it does: Self-declaration for COVID.

How I built it: I built it with my imagination and the situation the world is in right now.

Challenges I ran into: The code kept disappearing.

Accomplishments that I'm proud of: That I finished a nice project, even if I do not win.

What I learned: How to code better than I used to.

What's next for Ridhima's COVID App: Enhance it with more features for sending alerts and notifications.

Built With: code.org-local-school-database. Try it out: studio.code.org, github.com, docs.google.com
Ridhima's COVID App
Self Declaration for COVID
['Ridhimasgame Bembey']
[]
['code.org-local-school-database']
17
10,395
https://devpost.com/software/physiotherapy-aid-tjiage
Inspiration: During COVID, and in general, working people, especially IT professionals, can't find time for exercise, and sitting for long hours can cause physical issues. If we give them a virtual physiotherapist, they can work out at home and do exercises with proper guidance, which would be beneficial.

What it does: Our application uses a tensorflow.js (browser-based) model to make predictions on the state of the current user's pose. It has been trained on a dataset of images created by us (~300 images per pose) to predict whether the position is correct or incorrect, and what makes it so. I have used GCP Machine Learning Studio, a GCP machine learning tool, to train our models on the various physiotherapy poses. The GCP Speech-to-Text API was also used to make the application accessible to the visually impaired. The user can start their exercises via speech in various languages using the GCP Translator Speech API remotely, which is more convenient and easier to use for our target audience. The application utilizes GCP services for text-to-speech. This is useful for the visually impaired, as they can hear whether they are in the right position: the application will tell them to adjust their posture if it is incorrect. We also use the webcam to track the user's movement, which is fed as input to the posenet machine learning model and outputs a posture image on the user's body.

How I built it: This is fully supported on Desktop/Android Google Chrome.

What's next for Physiotherapy Aid: Make it available for gym enthusiasts.

About the project: AIDEN. Your physio assistant.
By Sanskar Jethi, Ankit Maity, Shivay Lamba

Access the live application at: https://aidenassistant.azurewebsites.net/
View our presentation slides at: https://docs.google.com/presentation/d/1wfyXXhWVZlDHjmuZIOpDDSzM61cEW7bwP6AOjpxJU_A/edit?usp=sharing
Our demo video: https://youtu.be/9HOEje4E2i8

AIDEN is a web app utilising tensorflow.js, a browser-based machine learning library, to enable accessible physiotherapy for the visually impaired and other people as well, talking through exercises by responding to users' postures in real time. AIDEN makes it easier for users not only to complete their exercises but to improve their technique independently.

How to use AIDEN: Allow browser access to the microphone and camera. Say "start exercises" or press "Start" (in any supported language, via translation). Try to do a "back bend stretch" approximately 8 feet away from the webcam with your whole body in frame, as in the demo video. (It only works in one orientation currently.)

Technology: Machine Learning with tensorflow.js. AIDEN uses a tensorflow.js (browser-based) model to make predictions on the state of the current user's pose. It has been trained on a dataset of images created by us (~300 images per pose) to predict whether the position is correct or incorrect, and what makes it so. We have used Azure Machine Learning Studio, an Azure Machine Learning tool, to train our models on the various physiotherapy poses. The Azure Cognitive Services Speech-to-Text API was also used to make the application accessible to the visually impaired. The user can start their exercises via speech in various languages using the Azure Translator Speech API remotely, which is more convenient and easier to use for our target audience. The application utilizes Azure Cognitive Services for text-to-speech. This is useful for the visually impaired, as they can hear whether they are in the right position: the application will tell them to adjust their posture if it is incorrect.
We also use the webcam to track the user's movement, which is fed as input to the posenet machine learning model and outputs a posture image on the user's body.

Key Azure services used in our product:
- Azure Storage Services: storing the machine learning model (TF)
- Azure Cognitive Services (inference): Text-to-Speech, Speech-to-Text, Custom Vision (to classify between correct and incorrect images), Translator
- Azure CDN (three.js and other libraries)
- Azure Web App with continuous deployment
- Linux Virtual Machine (for hosting the website)
- Azure CLI (for deployment)
- Azure Cloud Shell (for web app continuous deployment integration)
- Azure Pipelines (continuous deployment feature)
- Visual Studio Code (for all our life <3)

Supportability: This is fully supported on Desktop/Android Google Chrome.

Client folder: The web application is located in the clients folder and consists of two files, index.html and index.js. index.html contains all the HTML that forms the backbone of the website; we used the Bootstrap open-source CSS framework for our front-end development. index.js contains the JavaScript code for the web application; it works with the HTML to add functionality to the site, loads the model and metadata, and handles image data.

Built With: gcp, javascript, machine-learning, tensorflow. Try it out: github.com
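The posenet pipeline described above reduces each webcam frame to body keypoints before any correct/incorrect call is made. As a rough illustration of the kind of geometric check such a pose classifier relies on, here is a minimal Python sketch; the joint choice, coordinate convention, and 140-degree threshold are all hypothetical, not taken from AIDEN's code:

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by 2D keypoints a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

def back_bend_ok(shoulder, hip, knee, min_angle=140.0):
    """Hypothetical rule: the torso-to-thigh angle must stay open."""
    return joint_angle(shoulder, hip, knee) >= min_angle
```

In the real app this decision is made in the browser by the trained tensorflow.js model rather than a hand-written threshold; the sketch only shows what a single pose feature looks like.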
Physiotherapy Aid
Your physio assistant.
['Sanskar Jethi', 'Shivay Lamba', 'QEDK .', 'Pulkit Midha', 'rahul garg']
['Matchathon & Incubation']
['gcp', 'javascript', 'machine-learning', 'tensorflow']
18
10,395
https://devpost.com/software/diy-social-distance-tag
DIY Social Distance Tag: schematic, final product.

Inspiration: In Malaysia, we are under the Recovery Movement Control Order phase, and social-distancing compliance of at least 1 meter is very important.

What it does: It detects a distance below 1 meter in front of you and produces a beep sound indicating how near the person in front is.

How I built it: Tools that I used:

Hardware: Maker Nano (Arduino Nano compatible, with built-in LEDs and buzzer), Ultrasonic Sensor HC-SR04P, 7cm x 9cm perfboard, male pin header, female pin header, soldering tools, 9V battery holder, wires.

Software: Arduino IDE.

Challenges I ran into: Designing the layout to make it compact, and making a customised pin header to position the ultrasonic sensor. Another challenge was choosing the right microcontroller to keep the tag compact.

What I learned: The importance of board layout in making the device compact, and choosing a suitable microcontroller for this project.

What's next for DIY Social Distance Tag: Design an enclosure with a 3D printer, add an option for the user to power the tag from a power bank or a 9V battery, and optimise the code itself.

Built With: arduino, c++, maker-nano, ultrasonic-sensor. Try it out: github.com
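The HC-SR04P reports distance as the width of an echo pulse: sound travels out and back, so the one-way distance is half the round trip at roughly 0.0343 cm/microsecond. The project's firmware is Arduino C++, but the core arithmetic can be sketched in Python; the 100 cm limit matches the 1-meter rule above, and the exact speed-of-sound constant is an assumption (it varies with temperature):

```python
def echo_to_cm(duration_us, speed_cm_per_us=0.0343):
    """Convert an HC-SR04 echo pulse width (microseconds) to distance.
    The pulse covers the round trip, so halve it."""
    return duration_us * speed_cm_per_us / 2

def too_close(duration_us, limit_cm=100):
    """Beep condition: someone is inside the 1-meter social distance."""
    return echo_to_cm(duration_us) < limit_cm
```

On the Maker Nano, the same formula would run on the value returned by `pulseIn()` on the sensor's echo pin.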
DIY Social Distance Tag
A Tag to distance yourself
['Amir Hamzah']
[]
['arduino', 'c++', 'maker-nano', 'ultrasonic-sensor']
19
10,395
https://devpost.com/software/immunity-booster
Categories we are submitting for: Best Overall, Most Practical/Scalable, Best Impact, Best Original, Best Design.

Video link: https://youtu.be/27L9vhVPWjw

Greeting: Hello judges! Thank you for giving your time to judge all the HackTheLib submissions! We really appreciate all you do to make this hackathon possible. We hope you enjoy Immunity Booster, a connected Alexa skill and cross-platform app designed to help people get the right nutrients they need to strengthen their immune system. Below, we describe in depth how we made our project, including how it is scalable for the future, and what we like about the UI/UX design. Thank you again, Nathan and Andrew Dimmer.

Try It Yourself

Try the Web App: You can try Immunity Booster as a web app anytime at https://andrewdimmer.github.io/immunity-booster/ .

Try the Native App: If you already have Flutter installed on your computer, simply clone this repo, then in a command line switch to the flutter folder. Note: We built the app using the master channel to support desktop apps. As a result, you may need to run flutter channel master to switch into the master channel. This MAY not be required to test the app on either Android or iOS, but we've only used the master channel ourselves. Depending on the stability, if you want to try a desktop version, you may need to re-run some of the flutter create commands. From there, run flutter devices to get the deviceId of the device you want to run the Flutter app from. This should include your desktop, a web browser, any emulators, and any mobile devices you have plugged in. Finally, run flutter run -d <deviceId> to start the app.

Inspiration: We've spent a lot of time over the last few months sheltering in place and practicing social distancing.
While these measures, along with using extra hand sanitizer, antibiotic soaps, and bleach wipes, have helped people avoid exposure to the coronavirus, they've also accidentally left our immune systems weaker than ever before and desperately needing a boost. And with many parts of the country opening back up, and especially schools reopening in the fall, a weakened immune system is exactly the LAST thing anyone should have. Fortunately, we have the power and control to build up our own immune systems and microbiome so that when we're exposed to larger amounts of microbes again (for example, being around a lot more people), we'll have a much greater ability to fight them off. The most effective ways to do this are: get enough sleep, exercise, and eat the right foods. Everyone already knows when they're sleeping and exercising, but most people don't keep track of everything they eat and how it impacts their immune system. That's what inspired us to build Immunity Booster. Our app helps you keep track of, and remember to eat, all the healthiest foods you need to build a stronger immune system.

What it does: Immunity Booster is a cross-platform app designed to help you improve your immune system by suggesting healthy foods you should be including in your diet. There are five main categories of foods needed to improve your immune system: vitamin C rich foods, vitamin E rich foods, vitamin B6 rich foods, prebiotic foods, and probiotic foods. Our app recommends diverse foods that include these vital nutrients, ensuring you have them in your diet and helping you build a stronger immune system. Through our Alexa Skill, you can use our app to find out what you should be eating, and either eat or skip each item, all hands-free. This makes our app a great cooking companion and allows users to use Immunity Booster without interrupting what they are doing.
By also creating a cross platform app using Flutter, users can check what foods they should be eating from anywhere, on any device, through Web, Windows and Mac desktop apps, and mobile apps for iOS and Android. How we built it To begin with, we selected two main platforms for the front end. We built our web/mobile/desktop app in Flutter, and we built our smart home component for Amazon Alexa. These services allow our app to run on pretty much every device that people have, anywhere, so there are no accessibility issues with getting the data. The whole point of the app is to reduce stress, we don’t want the user to always need to come back to a single device to reduce their stress! We also started working with Google Assistant at the end, but we didn’t have enough time to get that fully integrated. From there, we next built out our database architecture using Google Firestore. This serves as the central connection point for all data throughout the app that is coming in or out from any device or interface. We then built a wide range of central database functions on top of the database as basically our own internal library, so that the database actions are identical regardless of the requesting device. In between these two layers (the front end and the database), we used Google Cloud Run to stand up a serverless Application Programming Interface (API) for each and every device that we support. These can range from a single massive function that handles all Alexa Skill responses, to a complex web of endpoints, helpers, and pass throughs to provide diverse and versatile functions in the Flutter app. Then, it just became a matter of connecting the front end to the associated API, which passed the data to the database, and everything was ready for use! 
Challenges we ran into We wanted to try supporting Desktop Applications using Flutter, and so we found out when setting everything up that Desktop Applications aren’t even available in the developer package (bleeding-edge distribution) of Flutter. If you want to compile to a desktop app, you need to be running the code directly from the master branch of the Flutter GitHub. Consequently, when it came time to connect to our database, there were exactly 0 libraries that supported compiling to desktop, so we kind of needed to build our own round-about method. This led to a lot of trial and error and troubleshooting with the data formats, as we were using Dart on one side of the connection, and TypeScript on the other! In the end though, we got it working, and we’re really happy about that. We also had a very wide range of input devices and systems for Immunity Booster, and so we needed common database handlers that would work for all devices, no matter the data format that we are getting. To do this, we first started by writing a cross-platform set of standard database functions, then we built out and exposed the same functions in different endpoints (depending on the device). This ensured that if we updated the database structure, all of our code for each of the devices would receive the update at the same time to avoid data mismatches and rewriting the same code for each device. Accomplishments that we're proud of Our main accomplishment is the User Interface and design. We’re proud that we got both an Alexa skill and a Flutter app up and running in the short time frame of the hackathon, and that both have very fully featured UI’s. The Alexa skill is fully featured, and can be used as a standalone app, and the Flutter UI uses expressive gestures to make using the app fluid. 
We’re also proud of how seamlessly all of the devices interconnect and share data, so this can truly be a cross platform service for however users want to access and interact with Immunity Booster. What we learned We’re fairly new to using Flutter for either Web or Desktop, and so we learned a lot about Flutter in trying to get that set up. It turns out that desktop is still so new that you need to run Flutter from the master branch of their GitHub, you can’t even run it in their developer bleeding edge download. As a result, we had to implement a lot of database connectivity that we usually use a library for ourselves, and we learned a lot about Dart vs. JavaScript data types, and some of the best practices to ensure connectivity between different systems. This was also our first time running Queries in Firestore. We’ve used Firestore extensively in the past, but only ever when we knew exactly where the data was that we needed. This time, we needed to run multi-step queries to things like filter by tag or rearrange the items to get the database to reflect the changes we made in the Flutter app. In the process of setting that up, we learned a lot more about not only Firestore Queries, but how NoSQL databases are structured and indexed behind the scenes. What's next for Immunity Booster In the future, we’d like to make the app more fully featured, so it can help people’s immunity past just the food they are eating, by recommending and tracking sleep and exercise. We’d also like to expand to include a Google Home app, along with the Alexa and Flutter apps. Built With alexa firebase flutter google-cloud-functions Try it out github.com andrewdimmer.github.io
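The multi-step Firestore queries mentioned above come down to a filter step (by tag) followed by a rearrangement. A minimal pure-Python sketch of that two-step logic; the field names, tag strings, and food records are hypothetical, not the app's actual schema:

```python
# The five nutrient categories named in the writeup, as hypothetical tags.
NUTRIENT_TAGS = ["vitamin-c", "vitamin-e", "vitamin-b6", "prebiotic", "probiotic"]

def foods_by_tag(foods, tag):
    """Filter by tag, then order the results: the same two steps a
    Firestore compound query (where + orderBy) performs server-side."""
    return sorted((f for f in foods if tag in f["tags"]),
                  key=lambda f: f["name"])

foods = [
    {"name": "yogurt", "tags": ["probiotic"]},
    {"name": "orange", "tags": ["vitamin-c"]},
    {"name": "kimchi", "tags": ["probiotic"]},
]
```

In the real app this runs as a Firestore query behind the shared database library, so every client (Alexa, Flutter) gets identical results.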
Immunity Booster
Choosing superfoods to supercharge your immune system!
['Nathan Dimmer', 'Andrew Dimmer']
[]
['alexa', 'firebase', 'flutter', 'google-cloud-functions']
20
10,395
https://devpost.com/software/pocketbook
Dear Reader,

Thank you for checking out Pocketbook! This project was inspired by an application I use called Notepad. The application was useful, but it lacked something: it was old and boring. That's why I decided to make Pocketbook! Pocketbook's features include cross-platform usage, a user-friendly UI, cloud saves, file cleaning, and many more planned, such as widgets, multi-user note-taking, and plagiarism checkers. But the road to success is not easy. I came across a plethora of problems when developing this software, causing me to scrap my first basic version because of a bug that would not let me store a lot of info at once. But using the Python dictionary method, I was able to overcome this. I built this using Jupyter Notebook, and I have posted a GitHub page with the full code. If I had a chance to further develop this project, I would start by learning a whole lot more about coding and UI for about a year, then revamp all the basic visuals to a more appealing, modern-looking state. I would also start work on a proper save system, since mine only saves while the program is running. Also, having more features would be essential to success. So, if you are interested, please help me accomplish my goals by voting for me as a winner or a finalist. Every vote counts, and I will be grateful for your dedication to my cause and for fueling the fire for what's to come.

Thanks for reading!
Rishi Suresh, Creator of Pocketbook

Built With: python. Try it out: github.com
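The "Python dictionary method" the author credits for fixing the storage bug can be sketched as a title-to-body mapping. This is a hypothetical reconstruction, not Pocketbook's actual code; it is in-memory only, matching the note above that saves last only while the program runs:

```python
class Pocketbook:
    """Minimal sketch of a dictionary-backed note store (names hypothetical)."""

    def __init__(self):
        self.notes = {}  # title -> body; lives only while the program runs

    def write(self, title, body):
        self.notes[title] = body

    def read(self, title):
        return self.notes.get(title, "")

    def edit(self, title, body):
        if title not in self.notes:
            raise KeyError(title)
        self.notes[title] = body

    def delete(self, title):
        self.notes.pop(title, None)
```

A dict avoids the "can't store a lot of info at once" problem because each note is keyed independently; a proper save system would serialize `self.notes` to disk.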
Pocketbook !
Pocketbook is an easy way to write, read, edit, and store files, all in one program! It's easy to use and has limitless potential for its rise to the top!
['Rishi S']
[]
['python']
21
10,395
https://devpost.com/software/walkie_talkie-gknw24
Inspiration: I'm a second-year undergrad student. Arranging all the notes is a tedious task if I have to write them myself :( Another possible solution is asking seniors and batchmates, but almost everyone does this and asks the other person for the same thing. Also, everyone is working on some sort of thesis or project that others want to understand, but being asked about the same thing again and again irritates everyone. There are WhatsApp groups on which information is shared, but it is limited to the particular group and difficult to transmit between juniors and seniors. Also, most people clear their WhatsApp history after exams, so a lot of work is wasted. Thus I've tried to close that gap for important data sharing in college.

What it does: My website has (actually, I'm working on it) various pages for each year and branch, and also for extra activities such as robotics and coding, so that particular information can be uploaded to particular pages. We provide the facility for everyone to upload a Google Drive or Dropbox link to their notes, resources, or papers they have written.

How I built it: I used Django to build the backend, including a 'blog' app for sharing information and a 'user' app for users to create their accounts.

Challenges I ran into: Creating the database was the most difficult task, as I'm a student and cannot afford too much for creating the website. Other than that, I used AWS S3 and Gmail user authentication for the first time, among many other small things. I still have to work a little to make it completely usable and add all my college-mates to the website.

Accomplishments that I'm proud of: Learned Django, learned how back-end development works, learned how to deploy a website, and solved one of the problems I faced in my college life.

What I learned: Django, JavaScript, deploying a website.

What's next for Walkie_Talkie: Bug fixes, and adding all the pages for classes and clubs.
Built With bootstrap css django html5 javascript Try it out walkie-talkie-10.herokuapp.com github.com
Walkie_Talkie
Data sharing web-app for college classes.
['Khushi Agarwal']
[]
['bootstrap', 'css', 'django', 'html5', 'javascript']
22
10,395
https://devpost.com/software/package_collector_anshuman
Hello Judges, I hope you have had a wonderful day today. My name is Anshuman Bhamidipati, I am a rising sophomore from James B. Conant High School, and I am really delighted to present my project to you today. This is my first time participating in a hackathon, especially a virtual hackathon. For this hackathon, I would like to enter all five categories: "Best Overall", "Most Practical/Scalable", "Best Impact", "Best Original", and "Best Design".

The game I have created is called Package Collector. In this game, the car is controlled using the arrow keys, and the main goal is to collect the packages that are scattered all over. The unique thing about this game is that the roads are continuous: driving the car off the screen causes the screen to switch to the next frame full of roads. The game is split into 6 separate levels, and each level has one package, located on one of the frames. This package needs to be collected in order to progress to the next level.

To start the game, load the HTML file in any browser, preferably Chrome, then click the "Start Game" button. This activates the car on the screen. Using the arrow keys, the car can be moved between the frames to search for and collect the package. Successfully collecting all 6 packages leads to the final winning screen.

Built With: css, html, javascript. Try it out: github.com
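The continuous-roads mechanic described above (driving off one edge switches to the adjacent frame) can be sketched as a wrap-around position update. The game itself is JavaScript; this Python sketch uses assumed values for the screen width and frame count:

```python
def wrap_frame(x, frame, width=800, frames=4):
    """Driving off one screen edge moves the car to the adjacent frame.
    width and frames are assumptions, not the game's actual values."""
    if x < 0:
        return width + x, (frame - 1) % frames
    if x >= width:
        return x - width, (frame + 1) % frames
    return x, frame
```

Called every tick after the arrow-key movement is applied, this keeps the car's on-screen coordinate valid while tracking which frame of roads is currently shown.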
Package_Collector_Anshuman
This is my project for the HackTheLib Hackathon.
['Anshu B']
[]
['css', 'html', 'javascript']
23
10,395
https://devpost.com/software/agrocare-f09xod
Inspiration: Farmers are unable to get new machinery and related services for purposes like harvesting, land preparation, crop care, sowing, irrigation, etc. We wanted to build a platform to make these processes easier, keeping in mind the safety measures and restrictions of the global COVID-19 pandemic. We also included another feature in our project to solve the problem consumers face in buying required items like fruits and vegetables from their nearby shops.

What it does: Our idea is to develop an Android application to help farmers exchange their products with the help of available services. We also added a shopping feature so that people's needs are easier to satisfy. This application provides an interface which connects buyers and vendors across the various available services.

How we built it: XML and Java, with Firebase as the database, using Android Studio.

Accomplishments that we're proud of:
* Helps farmers in farming by buying and lending machinery and manpower, and by selling their goods.
* Helps consumers get quality goods from farmers and shop for their needs.
* Educates farmers on all schemes and insurance details, and helps them earn income from their land via rent or by planting crops after soil testing.
* Helps consumers and users learn more about agriculture and encourages them to grow a plant.
* Helps farmers learn the procedure to prepare manure from household waste.
* Our service to farmers does not end with providing machinery; we also help them sell their goods. We buy goods from farmers and sell them via door-delivery services.
* Service costs are applicable.
* Another exciting feature: in this application you can learn how to plant crops like cereals and spices, and how to prepare manure with household things.
* People can ask queries and share their thoughts with others, for common queries and knowledge transfer. Users can add posts and use this as a friendly blog.
What's next for AGROCARE: Develop it into an iOS app.

Built With: android-studio. Try it out: drive.google.com, github.com, drive.google.com
AGROCARE
Our idea is to develop an android application to help the farmers to exchange their product with the help of available services.
['Megala J', 'HARIHARAN M', 'SHANMUGA RAJA RAJESHWARAN M', 'Hartika Nagarajan', 'Aruneshwar V']
[]
['android-studio']
24
10,395
https://devpost.com/software/snippet-i047go
Inspiration: As a frequent web developer, I always find myself copying code from old projects, either to speed up the process or because I have forgotten it. I developed this website to organise those random code snippets in one place.

What it does: Snippet allows you to keep all of your frequently used (or forgotten) code in one easy-to-access place. You are also able to search and discover others' snippets, to learn tips and tricks.

How I built it: I built it with Django as an API, and Vue.js with Vue Router for the front-end. This allowed me to make it an SPA. Other libraries include jQuery, and Axios for requests.

Challenges I ran into: Authentication! This was the first time I had used authentication for an SPA; I used tokens to verify users. It also has a very complex database structure. The code editor was also a challenge, as normal HTML inputs don't preserve formatting, so I had to make a parser to convert the text to HTML (newlines to <br>, spaces to &nbsp;).

Accomplishments that I'm proud of: It looks really nice! I like the clean UI, and it is very responsive. I also did the main part of it in about 24 hours over 2 days.

What's next for Snippet: A better sense of being online and sharing (searching for other snippets, and filtering for popular ones!). Also, the 'snip' feature isn't yet used, where users can save others' snippets to reference in the future.

Built With: django, restframework, vue, vue-router. Try it out: github.com
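The formatting-preserving parser described above might look like the following sketch. The mapping (newline to <br>, space to &nbsp;) is inferred from the writeup, and the real Snippet code is Django/JS, so this Python version is only an illustration:

```python
import html

def to_html(text):
    """Escape HTML-special characters first, then replace the whitespace
    that a normal HTML input would collapse (assumed mapping:
    '\n' -> <br>, ' ' -> &nbsp;)."""
    escaped = html.escape(text)
    return escaped.replace("\n", "<br>").replace(" ", "&nbsp;")
```

Escaping before substitution matters: otherwise a snippet containing `<br>` literally would be rendered as a line break instead of displayed as code.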
Snippet
The best code snippet organiser!
['Connor George']
[]
['django', 'restframework', 'vue', 'vue-router']
25
10,395
https://devpost.com/software/meetingai
MeetingAI Logo

Inspiration: Time and time again, we see the value of meetings becoming diminished. Many people find meetings "pointless and boring", and the tedious work of transcribing a meeting is extremely difficult. While meetings can become tedious, seeing and conversing with people is the essence of human existence; without it we would be nowhere, and our civilization would simply be survival of the fittest. To bring people back to the scene, we created MeetingAI.

What it does: Our project takes an audio file as input and returns an immediate transcription of it, broken down by who spoke and exactly what they said. This technology includes speech-to-text and speaker diarization, and is useful for meetings.

How we built it: The logic behind our project was created in Python, using GCP's APIs and Flask for the server. In addition, we used SpectralCluster for the clustering algorithm and Resemblyzer for the voiceprint library. The front end was built in HTML5, CSS, and JavaScript, simply allowing the upload of a file and the return of a transcript.

Challenges we ran into: At first, we had trouble getting transcription and speaker diarization (separating between multiple speakers) working on the same platform. Eventually, we were able to organize the output based on who spoke (using a numbered speaker list) and for what time interval they spoke. We did this by iterating through tuples of the speaker value, the start time, and the end time. This allowed us to split the audio file into parts and then create transcriptions of what each speaker said. Once we incorporated that aspect, the entire logic came together very nicely, and it was smooth sailing from there.

Accomplishments that we're proud of: We are super proud of the machine learning aspect, and we learned a ton about clustering, speech-to-text APIs, and using voiceprints.
What we learned: We learned a ton of machine learning and the biology behind voiceprint techniques, as well as how to apply them to this project.

What's next for MeetingAI: Next we hope to launch on the App Store. Be on the lookout for that!

Built With: css3, flask, gcp, html5, javascript, pydub, python, resemblyzer, spectralcluster, wave. Try it out: github.com
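The diarization step described above iterates over (speaker, start, end) tuples and transcribes each audio slice. A minimal sketch of that loop, with the transcription call stubbed out; in the real project, the stub would slice the audio (e.g. with pydub) and send it to GCP Speech-to-Text:

```python
def build_transcript(segments, transcribe):
    """segments: iterable of (speaker, start_s, end_s) tuples, as produced
    by diarization; transcribe: callable returning text for one slice."""
    lines = []
    for speaker, start, end in segments:
        text = transcribe(start, end)
        lines.append(f"Speaker {speaker} [{start:.1f}-{end:.1f}s]: {text}")
    return lines
```

Keeping diarization and transcription as separate passes over the same timeline is what lets the two systems coexist on one platform, as the writeup describes.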
MeetingAI
The revolution needed by all: a new and improved meeting transcriber.
['Shrey Jain', 'Shashank Vemuri']
[]
['css3', 'flask', 'gcp', 'html5', 'javascript', 'pydub', 'python', 'resemblyzer', 'spectralcluster', 'wave']
26
10,395
https://devpost.com/software/corono-info-corner-amvh0p
[App screenshots: symptoms checker; feature navigation; front page with an email icon for emergency contact; website with all the COVID details and a "hospital near me" feature; helpline numbers; donation; government-issued FAQs; nearest-hospital locator; fake news reporter; friendly chatbot]

Inspiration: Nobody wants to keep separate apps to access necessary functionality in a time of emergency. There are hundreds of useful apps to help us through this time of turmoil; what if we had many such useful features in the same app? This is exactly what our app gives the user: features that might come in handy.

What it does: These are a few features we have worked on.

Daily use:
Fake news verification: With the number of rumours making the rounds on the internet, it is important to know which news is fake. The app allows you to view fake news and also to report news that might be fake. Reported news will be cross-verified by the authorities before being uploaded.
Shop trackers: Since it is difficult for people to go outside and buy groceries, our app redirects to online grocery services like BigBasket, who will deliver at your doorstep with no-contact delivery.

Emergency:
Hospital tracker: The location and distance of the closest hospital is provided to the user, using existing map data from OpenStreetMap (an open-source world map). Information from the GPS of the user's device, along with the location of hospitals on the map, allows for seamless navigation.
Symptoms check: A questionnaire is given to the user, based on which an assessment of vulnerability to corona is provided. This is not definitive proof of corona; the app will only recommend that the user consult a doctor if they appear vulnerable to the coronavirus.
Helpline: A list of all the helpline numbers in India can be found here.
Miscellaneous:
Corona cases tracker: Graphical representation of the overall effect of the virus on a country-wide scale, with complete analytics, keeping the user well informed through an easy-to-understand dynamic interface.
FAQs: The most commonly asked questions and important news related to the coronavirus.
Donation: A direct portal to donate to the PM CARES fund through any UPI app of your choice.
Queries: An easy way to send your queries to the official government body that is dealing with the virus.
Chatbot: Answers all queries regarding the FAQs.

How I built it:
Android Studio: to develop and test the app.
HTML: for the webpage layout.
CSS: Cascading Style Sheets, a style sheet language used for describing the presentation of a document written in HTML.
Google JavaScript Maps API: for the user's location.
JavaScript: to get the distance and direction between the user and a hospital and process it as required, and to make the web application dynamic and user-friendly.
A chatbot which answers all queries regarding the coronavirus is implemented. All features are accessed within a single app with multiple uses.

Challenges I ran into: Google had temporarily stopped new registrations for Google Cloud, so we had to implement using the open-source APIs that were available. We also hit a few errors while creating the GUI for the Android app.

Accomplishments that I'm proud of: Within this short duration, we were able to come up with an app with all the features a common person requires. The whole team played an important role in achieving such a big task. Link to the website: https://covid-19iit.000webhostapp.com/

What I learned: Teamwork: how to split up the work and complete the task within the given deadline.

What's next for Corono info corner: In the future, we are trying to integrate a feature where a police officer can use a mobile camera to check social distancing in public areas, and also help the police track people quarantined at home.
Put the app on the Play Store and App Store. Built With android android-studio css firebase html5 java javascript Try it out github.com
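The hospital tracker described above boils down to a nearest-point search over hospital coordinates. The project does this in JavaScript with OpenStreetMap data; here is a minimal Python sketch of the same distance logic, with hypothetical hospital names and coordinates:

```python
# Illustrative sketch (not the project's actual JavaScript): compute the
# great-circle distance from the user's GPS fix to each hospital so the
# nearest one can be suggested.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearest_hospital(user, hospitals):
    """Return (name, distance_km) of the closest hospital to the user."""
    name, (lat, lon) = min(hospitals.items(),
                           key=lambda kv: haversine_km(*user, *kv[1]))
    return name, haversine_km(*user, lat, lon)
```

The same nearest-neighbour idea applies whether the map data comes from OpenStreetMap or the Google Maps API.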
Corono info corner
Our application aims to bring all possible utilities under one platform and quench people's curiosity while under lockdown due to corona.
['rakshit ks', 'PRATEEK JANAJ', 'Amogh Kuruwatti']
[]
['android', 'android-studio', 'css', 'firebase', 'html5', 'java', 'javascript']
27
10,395
https://devpost.com/software/listicle-mv7k6x
Listicle Hello Judges! I am Aditya Gupta, an incoming freshman at Fremd High School. I have recently taken an interest in coding and am an avid learner. I currently code in HTML, CSS, and JS. This is my first hackathon! Bio: I love playing tennis in the summer. I currently play chess and am rated 2000+, the equivalent of a chess expert. I am also an Illinois Warren Junior Scholar. I have been the Illinois Math League Champion twice, and I received a Gold Medal in the 2018 MathCon. I have reached state level twice in the Mathcounts competition, and I qualified for the AIME in 8th grade. I also have a math blog, ThePuzzlr.com; be sure to check it out if you would like to learn more about me. Categories: Best Overall, Most Practical/Scalable, Best Impact, and Best Design. Link To Video: https://drive.google.com/file/d/1916DMLOAbWdrOxzVieH10Y5B8MmV_m73/view?usp=sharing Project Intro: Have you ever had that moment when you forget that one thing you needed to buy from the store? Or when you forgot to join a pre-scheduled Zoom meeting? Well, Listicle is an application that works to make sure moments like these don't happen. With a simple user interface, users can add any items they would like to remember by typing them in and pressing the "Add" button. To mark something complete, just double-click it. Additionally, once you have completed your tasks for the day, you can press the "Clear Completed" button to erase the tasks you have completed so you can focus on what remains. To make it easier, I have also added an "Empty List" button that clears the whole list. Now comes, in my opinion, the most important feature: the save button. You can close the tab, reload, or anything else, and as long as you have pressed the save button, your list will still be there when you come back! Additionally, we have added a timer feature that can be used to set a goal to complete your tasks by.
Once your time is over, there will be a pop-up alerting you to finish your tasks if you have not. Future modifications: We will be adding a feature that lets you set a specific time for each task (separately) and make the UI cleaner. Built With css html javascript Try it out github.com
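The save button's behaviour described above is essentially serialize-on-save, restore-on-load. Listicle runs in the browser; as a language-neutral illustration of the same logic, here is a Python sketch in which a JSON file (hypothetical name) stands in for the browser's storage:

```python
# Sketch of Listicle's save/restore and "Clear Completed" logic, with a
# JSON file standing in for whatever the browser actually persists to.
import json
import os

TASKS_FILE = "listicle_tasks.json"  # hypothetical path

def save_tasks(tasks, path=TASKS_FILE):
    """Persist the list; tasks is a list of {"text": ..., "done": ...} dicts."""
    with open(path, "w") as f:
        json.dump(tasks, f)

def load_tasks(path=TASKS_FILE):
    """Restore the saved list, or start empty if nothing was saved."""
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return json.load(f)

def clear_completed(tasks):
    """The 'Clear Completed' button: keep only unfinished tasks."""
    return [t for t in tasks if not t["done"]]
```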
Listicle
Listicle Remembers, You Can Forget
['Aditya Gupta']
[]
['css', 'html', 'javascript']
28
10,395
https://devpost.com/software/no-touch-disinfectant-wipes-dispenser-x0cmdy
The problem Hey, I am Tanya Rustogi, and I got the idea for the wipes dispenser when I was thinking of how Covid-19 is affecting developing countries. My first thought was that to open a wipes container like Lysol you need to touch at least two surfaces, which can spread coronavirus. Additionally, having a container of wipes per person in an office or school is not realistic due to the shortage of disinfecting wipes. Then came the idea of an affordable, easy disinfecting wipes dispenser that can be used everywhere, from classrooms to day cares to shopping carts. The solution When an object such as your hand comes within ten centimeters of the sensor, the motor starts moving. The motor is connected to a rod with rolled-up wipes on it, so its rotation unrolls the wipes and feeds them out of the container. How to build Each of the pins except ground and VCC on the motor driver is connected to a pin on the Arduino, as defined in the code. The trigger and echo pins on the sensor are also connected to the Arduino and defined in the code. The ground and VCC of both the motor and the sensor are connected to the ground and VCC of the Arduino, which is connected to power. The sensor measures distance by timing how long it takes a sound wave to come back. The code on the Arduino checks whether the sensor detects something within 10 centimeters of it; if so, it runs the stepper function, which causes the motor to run. The container is made from a Lysol container, hopefully making it cheaper for developing countries. The container has two holes: one for the wipes to come out of and one for the motor. The motor is attached to the container with tape. The rod connects to the motor and is held on the other side through the hole already provided in the Lysol container, so when the motor rotates, the rod rotates as well.
What’s next This is just a prototype; with more material, the final product would look cleaner, with a box covering the circuits and PCBs connected to the container. What did I learn I think the most important thing I learned through this experience is time management, due to the constraint of two days to build the whole thing, as well as perseverance to try again no matter how many times the circuit and the code did not work as they were supposed to. Built With arduino stepper-motor Try it out github.com
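The sensor-to-motor control flow described above is short enough to sketch. The real firmware is Arduino C; this Python version (an illustration, not the project's code) mirrors the same logic: convert the ultrasonic echo time to a distance and dispense only under the 10 cm threshold.

```python
# Sketch of the dispenser's trigger logic. 0.0343 cm/us is the speed of
# sound at room temperature; the 10 cm threshold matches the write-up.
SPEED_OF_SOUND_CM_PER_US = 0.0343
TRIGGER_CM = 10

def echo_to_cm(echo_duration_us):
    # The pulse travels to the object and back, hence the division by 2.
    return echo_duration_us * SPEED_OF_SOUND_CM_PER_US / 2

def should_dispense(echo_duration_us):
    """True when a hand is within the trigger distance, so the
    stepper should run and unroll the wipes."""
    return echo_to_cm(echo_duration_us) < TRIGGER_CM
```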
No-Touch Disinfectant Wipes Dispenser
A prototype of a no-touch dispenser that is easy and affordable to make and could be used from cleaning tables to disinfecting carts.
[]
[]
['arduino', 'stepper-motor']
29
10,395
https://devpost.com/software/eagle-sight-cbny23
This is a cool website. Try it out. Built With css3 html5 Try it out www.eaglesight.tech
Eagle Sight
Cool Eye checking game.
['Senuka Rathnayake']
[]
['css3', 'html5']
30
10,395
https://devpost.com/software/compass-yk84ej
Inspiration A friend What it does The compass will point you in the right direction when you are lost How I built it Using Python and a micro:bit Challenges I ran into Figuring out the angles Accomplishments that I'm proud of Participating in a hackathon What I learned The experience of meeting other coders What's next for Compass It could be made into an app, or used in watches Built With microbit python
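On the micro:bit, MicroPython exposes the magnetometer as a heading in degrees (0-359) via `compass.heading()`. As a sketch of one way such a reading could be turned into a direction to display (an assumption about the approach, not necessarily this project's code):

```python
# Map a compass heading in degrees to an 8-point direction.
DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def heading_to_direction(heading_degrees):
    # Each sector is 45 degrees wide and centred on its direction,
    # so offset by half a sector (22.5) before dividing.
    sector = int((heading_degrees + 22.5) // 45) % 8
    return DIRECTIONS[sector]
```

On the device itself the argument would come from `compass.heading()` after calibration.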
Compass
Don't get lost in woods
['Dharan K']
[]
['microbit', 'python']
31
10,395
https://devpost.com/software/mathcounts-countdown
Inspiration Mathcounts was a math competition that I participated in while in middle school, so I wanted to make something that would help other students practice for Mathcounts competitions while having fun (the countdown round is the fun one). What it does Alexa reads questions from the Mathcounts countdown round at the chapter and state level (160 problems total) one at a time, while keeping track of your score. You have 16 seconds to answer after each question is read, and the question is repeated twice. If your Alexa device has a screen or you are playing on the Alexa app, you can also see the questions while Alexa reads them. How I built it I built the specific utterances (the words the user would say) on the Alexa console. Using the history of the user's responses and their utterances, I determined Alexa's response with AWS Lambda (the Alexa device sends JSON containing the history and utterances to the Lambda, and the Lambda sends back JSON containing what words to say). The problem database was pulled from the Mathcounts website and stored in the code. Challenges I ran into Some of the problems on the website were in formats that Alexa couldn't read (such as fractions, division signs, special characters, subtraction signs, parentheses, and multiplication signs), so I had to manually edit some of the questions so that Alexa could say them. Moreover, optionally adding units to your answer was a problem at first, because sometimes Alexa wouldn't understand how to respond to an answer with units. So, I had to manually type out all the possible units that the user could say, so that Alexa could formulate the correct response. Accomplishments that I'm proud of I created a helpful and educational Alexa skill that could help anyone get better at math if they practiced with it!
What I learned I learned how to map utterances (what the user says) to specific intents (whether the user stated an answer, asked for the next question, said no...), which will be very helpful in future Alexa skills. What's next for Mathcounts Countdown I plan to get others to use my skill, so it can help anyone prepare for math competitions. Built With ai alexa hardware lambda mathcounts node.js Try it out github.com www.amazon.com
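The units problem described above amounts to normalizing the spoken answer before comparing it with the expected value. The skill itself is Node.js; here is a hedged Python sketch of that idea, with a hypothetical subset of the unit words the skill would recognize:

```python
# Sketch of answer checking that tolerates an optional trailing unit word.
# The unit list is a hypothetical subset, not the skill's actual list.
KNOWN_UNITS = {"dollars", "cents", "inches", "feet", "degrees", "percent"}

def answer_matches(spoken, expected_value):
    """True if the spoken answer equals the expected number,
    ignoring an optional trailing unit word."""
    words = spoken.lower().split()
    if words and words[-1] in KNOWN_UNITS:
        words = words[:-1]
    try:
        return float(" ".join(words)) == float(expected_value)
    except ValueError:
        return False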
Mathcounts Countdown
Alexa reads easy to difficult math problems to you and keeps track of your score.
['ram potham']
[]
['ai', 'alexa', 'hardware', 'lambda', 'mathcounts', 'node.js']
32
10,395
https://devpost.com/software/the-platformer-8abl3f
The platformer is a Javascript canvas game that has two objectives: stay atop the platform or risk losing health, and defeat the incoming wave of aliens looking to kill you. Tutorial: While in-game, press 'a' to move to the left, 'd' to move to the right, 'w' to jump (only have 2 jumps until you land atop a platform and regain them), and 'space' to fire your cannon. In the home screen, you will see a list of images (excluding the first one). These images show the different power-ups that can be found in-game. The description of each powerup can be found beneath the image. If a powerup appears on the screen and starts flying past, you will obtain the powerup for a short period of time if you run into it. All the powerups (except for the shield) are automatically activated and don't require user input to use. When the shield is obtained, you will need to press the 'shift' to use it. After a certain period of time, you will see red squares start appearing from the right side. These are aliens, and their goal is to kill you with their projectiles. Each alien has a certain amount of health, and it can be found above their characters. They will track your movement, and attempt to move to the same Y-coordinate as you so that they can fire their projectiles at you. As you progress through the game, the speed of the platforms approaching you gets increasingly faster. There is the possibility that you could get stuck behind a group of platforms, and only escape once the platforms go off the screen. One way to escape faster is to obtain the shield power-up, but you can also fire your projectiles and eliminate a platform, as they also have a certain amount of health. In case original video is not working: https://youtu.be/tBxJqQ-9Sso Built With html javascript Try it out github.com
The Platformer
The platformer is a Javascript canvas game that has two objectives: stay atop the platform or risk losing health, and defeat the incoming wave of aliens looking to kill you.
['Sanjeev Devarajan']
[]
['html', 'javascript']
33
10,395
https://devpost.com/software/c-care
When our app worked, Satisfied Inspiration During this current COVID 19 pandemic, I see health worker is curing the patients, doctors are innovating new medicine, the police is controlling the crowd movement and even bus drivers are helping people to get back to home. As a future engineer, I felt like my contribution is none, so I felt motivated to do my part and try to bring a positive change and to make sure my product can also be used in a future pandemic. problem our project solves Offices and workplaces are opening up and as the lockdown loosen we have to get back to work, but there is a massive possibility that infection can spread in our workplace as, When a person is infected he can be asymptomatic for up to 21 days and still be contagious, so the only way to contain the spread is by wearing a mask and maintaining hand hygiene. WHO and CDC report said that if everyone wears a mask and maintains hygiene then the number of cases can be reduced three folds. But HOW we will do that? , How can we make ever one habituated to the following safety precaution so the normalization can take place. So we have come up with a solution called C-CARE 1st ever preventive habit maker that will bring a positive change. What our project does Our app is 1st of its kind safety awareness system, which works on google geofencing API, in which it creates a geofence around the user home location and whenever the user leaves home, he will get a notification in the C-CARE app ( ' WEAR MASK ' ) and as the users return home he will get another notification ( ' WASH HANDS '), ensuring full safety of the user and their family. It is also loaded with additional features such as i.) HOTSPOT WARNING SYSTEM in which if the user enters into a COVID hotspot region he will be alerted to maintain 'SOCIAL DISTANCING' And it also has a statics board where the user can see how many times the user has visited each of these geofences. 
With repeated Notification, we will make people habituated of wear masks, washing hands, and social distancing which will make each and every one of us a COVID warrior, we are not only protecting ourselves but also protecting others, only with C-CARE. Challenges we ran into 1,) we lack financial support as we have to make this app from scratch. 2.) the problem in collecting data regarding government-certified hotspot and also we have to do a lot of research regarding the spread pattern of COVID-19. 3.) Due to a lack of mentors, whenever the app stop working we had to figure out by ourself, how to correct the error. 4.) It took us too long to use it in real-time as during lockdown it was too hard to go outside in the quarantine but finally, after lockdown loosens a bit we tested it and it gave an excellent result. 5.) we didn't know much about geofencing before that so we have to learn it from scratch using youtube videos. Accomplishments that we're proud of WINNER at Global Hacks in the category of HEALTH AND MEDICINE. WINNER at MacroHack As the best Android Application. WINNER at MLH Hackcation in the category ( Our first Hackcation ). TOP 5 in innovaTeen hacks. TOP 10 in Restartindia.org and Hack the crisis Iceland. What we learned All team members of C-CARE were able to grow their area of competence by participating in the whole process of idea definition, market research, validation, prototyping, and presentation. Through different mentor sessions, we learned that problems could be approached by many means, but most importantly our mission should be clear. What's next for C - CARE COVID cases are increasing every day, and chances are low that we can create a vaccine immediately, apps like C-CARE will play a crucial role in lower the spread of infection till a proper vaccine is made. Our app can also be used for seasonal diseases such as swine flu or bird flu or possible future pandemic such as Hantavirus, G4 Virus, bubonic flu, Monkeypox. 
Built With android-studio geofence google-maps java sqlite Try it out drive.google.com
C - CARE
C - CARE An app that makes every person a COVID warrior.
['Anup Paikaray', 'Arnab Paikaray']
['Track Winner: Health and Medicine']
['android-studio', 'geofence', 'google-maps', 'java', 'sqlite']
34
10,395
https://devpost.com/software/stock-vakri
Inspiration While existing stock trading platforms like Zerodha or Money Control are for individual users, our project is for a broker to manage multiple clients. Seamless experience across mobile, tablet and PC, the application is a one stop solution for all things stock. Built With angular.js express.js mongodb node.js tensorflow.js
Stock
A fully responsive MEAN stack stock broker dashboard that allows brokers to log client details, view real time prices, analyze predictions, watch market news, all in one place.
['Hrithik Sahu']
[]
['angular.js', 'express.js', 'mongodb', 'node.js', 'tensorflow.js']
35
10,395
https://devpost.com/software/weather_talkie
Weather_Talkie This Weather App is able to find out the live weather data in the current location of the device as well as the weather for any city you can think of! Demo Video: https://drive.google.com/file/d/1C8N9ItlgzlTHKuEGrNAWlFqvKYN2QDEw/view?usp=sharing Built With dart java objective-c ruby swift Try it out github.com
Clima- The Weather Talkie
This Weather App is able to find out the live weather data in the current location of the device as well as the weather for any city you can think of!
['Aakriti Agarwal']
[]
['dart', 'java', 'objective-c', 'ruby', 'swift']
36
10,395
https://devpost.com/software/rock-paper-scissors-h2jke5
Starting Area Gaming area Rock-Paper-Scissors A Simple Html Game using JS. Inspired To create the Rock paper scissors online. what I learned I learned the fade-out, in and maths random function when doing this project. how I built I build using HTML, CSS and JS, Here JS is playing an important role challenges I faced. I faced several challenges while doing the JavaScript but Googling help me to find explore the functions that I need. Todo In future, I planned to make an online multiplayer option, Winner Screen when reaching a score and to add sound effects when increasing score, etc Built With css html javascript Try it out github.com jobin-s.github.io
Rock-Paper-Scissors
A Simple Html Game using JS.
['Jobin-S S']
[]
['css', 'html', 'javascript']
37
10,395
https://devpost.com/software/e-learning-platform-for-students
Different AR Demos AR - Microscope Our App online meet on conducted on our sited Flowchart Report - After attempting test Wordpress - Google Cloud E-learning-platform-for-students Brief description of steps taken to complete the project The initial set up : 1. Bought Virtual Machine. 2. Installed Word Press on Virtual Machine. 3. Designed user-friendly Website. 4. Created AR (Augmented Reality) based app for visualisation Connections : 1. Virtual Machine IP Address linked to domain. 2. Generated SSL Certificate (https) for website 3. Developed the App The output : 1. Students can visualise 3D models in AR as well as Browser (e.g. Digestive system, Earth's Core, Microscope) 2. Teachers can mark student attendance and add exam marks on our portal. 3. Study material (Resources) for students to study during pandemic. 4. Parents can see their child's attendance and marks on portal by logging in. 5. Students can attempt proctor (webcam) based exams (ensures no cheating) 6. Teachers can see students online exam report in detail. (Face detected or not if not - screenshot , recording when noise detected) 7. Students can attend live lectures (classes) on our site itself. [To download app Click Here] Technology Stack For hosting website - Google Cloud For integrating Augmented Reality feature - echoAR and Unity Improve site performance (CDN) - Clouflare Domain service - .xyz For designing website - Wordpress & Elementor Built With .xyz cloudflare echoar elementor google-cloud unity wordpress Try it out github.com www.dscjscoe.xyz
E- Learning Platform + Augmented Reality + Login portal
Online portal for Student, Parents, Teachers to see attendance, marks, upcoming events ( Website & App) Portal + Augmented Reality Library (Created by us) for visualisation and better understanding
['Sanket Patil', 'Chaitanya Abhang', 'Tejas A', 'Mahesh Gavhane']
[]
['.xyz', 'cloudflare', 'echoar', 'elementor', 'google-cloud', 'unity', 'wordpress']
38
10,395
https://devpost.com/software/mask-notifier-app-kt071u
ideation Inspiration It has become mandatory to wear a face mask during this pandemic period. Wearing a face mask will help prevent the individual from contracting any airborne infectious germs. WHO advises everyone to wear ask, every time you move out to any public area. However, it’s quite common to forget. That’s where our app comes in rescue. What it does Mask notifier, automatically notifies you to wear a mask every time you move out of your predefined home territory. Here is how it functions, After a small walkthrough, you are requested to set a Geo-Fence. Where a marker appears on the map at your current location with a predefined radius. You can adjust this radius to fit the surroundings of your home. Next, you will move on to the dashboard. In the dashboard, you can enable the Automatic Mask Alerts. Moreover, a small live map showing your current position in the GeoFence will be displayed. So, using GeoFence you provided, every time you walk out of your home territory, you will get a notification to wear a mask. Also, all these notifications will be saved under the notifications section in the dashboard. Finally, you can edit your GeoFence anytime you need, from the dashboard itself. All these transactions will be taken place within the user’s local device there is no issue with privacy. How I built it The app is built using Android Native SDK tools with the following libraries. Google Maps API V2 - for GeoFencing ButterKnife - for View Binding Glide - image loading framework for Android FireBase - As backend Database for storing information about various shops OneSignal - Easy Firebase Notifications Alerter - For Material Style Alerts Challenges I ran into Initially, it has become very hard for me to establish a live location-based GeoFence. However, using Maps API provided by Google Cloud Platform it became super simple to implement all the features. 
Accomplishments that I'm proud of It took me just 2 hours to wireframe and I was able to build the whole functional prototype within 36hours and I am proud of it. What I learned I learn to implement Google Maps API v2 on Android with an active GeoFence. I also learned to implement live location detection on Android. What's next for Mask Notifier App Want to make the user-interface more intuitive and launch it onto the play store so that everyone can make use of it. Built With android firebase geo-fence google-cloud google-maps Try it out github.com
Mask Notifier App
No way to forget wearing mask, next time you move out of your home
['Sainag Gadangi']
[]
['android', 'firebase', 'geo-fence', 'google-cloud', 'google-maps']
39
10,395
https://devpost.com/software/grocer-community-app
ideation Inspiration COVID-19 pandemic panic buying has emptied shelves of basic needs in all supermarkets, leaving various individuals in the community without access to the most basic needs. Even though supply chains are implementing measures to increase their stock levels, but fear of shortages of food and services are making their stock on site fluctuate every week and supermarkets become high-risk infection areas. What it does Grocer Help App is a community driven common platform that helps users optimise their visit to the grocery shops, by informing them about item stock availability in their nearest shops, while reducing their exposure to the virus, eliminating unnecessary movements to risk areas. How I built it FireBase - As backend Database for storing information about various shops UiPath - For webscraping The app is built using Android Native SDK tools with the following libraries. Google Maps API V2 - for GeoFencing ButterKnife - for View Binding Glide - image loading framework for Android Okhttp - as android Http Client OneSignal - Easy Firebase Notifications Alerter - For Material Style Alerts Challenges I ran into Initially, Webscraping for data regarding various shops has become hectic. However, UiPaths powerful web scraping tool has made our job super simple. Also, Using Firebase provided by Google Cloud Platform as database helped us to save lot of time. Accomplishments that I'm proud of I was able to wireframe and design the whole app within a day. Later within the next 12hrs, I was able to complete designing the whole functional prototype. What I learned I learnt web scraping using UiPaths powerful web scraping tool. I also learnt to integrate Firebase in Android. What's next for GrocerHelp App Add several other stores like pharmaceuticals and restaurants where there is a high chance for crowding. Built With android firebase google-cloud uipath Try it out github.com
Grocer Community App
Community Driven Platform to help everyone find where the required Grocery and Daily needs are available right now.
['Sainag Gadangi', 'Nitish Gadangi']
[]
['android', 'firebase', 'google-cloud', 'uipath']
40
10,395
https://devpost.com/software/facerecognition-locker
Why This Project? We decided to try and write this project because of how annoying it is when someone else tries to access your computer without your presence. What it does Attempts to use facial recognition to see if the allowed user is present. How we built it We used Python, with the OpenCv library. Challenges we ran into Victor's laptop was too old to run modern OpenCv, and so he had to wait until his new one arrived to code. For me, I'm used to c++ so I had to learn the basics of python. Accomplishments that we're proud of We were able to get OpenCv working. What we learned How to use OpenCv What's next for FaceRecognition Locker Clean up code and make the main idea. Built With opencv python Try it out github.com
FaceRecognition Locker
Program that locks itself if an unknown face is identified
['Manuel Mateo', 'Victor Hernandez']
[]
['opencv', 'python']
41
10,395
https://devpost.com/software/secure-your-space-cx3hwd
allowing betteer healthcare Built With api
be safe
Empowering safety between people in order to make a safer world.
[]
[]
['api']
42
10,395
https://devpost.com/software/compostly
Identifying Properties Of Leaf Identifying Properties Of Paper Identifying Properties Of Dairy Inspiration One day I was doing some research and then I came across composting. The idea of using old food and other ideas intrigued me, and then next thing I knew a composter was on its way. Once it came I had no idea where to begin. There was no guide or on information on how to start. After that I started learning about composting and started doing it. I found that when people want to start they should not have to do research every time they want to compost something. That was when I came up with the idea Compostly, an app that uses machine learning to give you the best items to compost. What it does Compostly uses Apples CreateML machine learning software, to give suggestions on what to compost, and what not to compost bases on your items. All you have to do is scan the item with your phone and then it gives you instructions on if you should compost it or if you should not. How I built it I built this using CreateML, Apples machine learning software. Using pictures I manually collected, I created a dataset which I used to make my machine learning model. After that I programmed and designed the app using swift, then I imported my model into the app. Challenges I ran into I ran into a lot of problems, but I was luckily able to fix them in time. Some of these problems included my machine learning model not importing. I had to debug this by changing the app version. Figuring out these problems took me some time. What I learned I learned a lot about machine learning in total. There was not much information on the internet on how to implement this into an app, so I had to learn it from scratch. I also learned about the programming language swift, as I had never done anything related to machine learning with it. What's next for Compostly I want to add more functions to it so users can have an even better time. 
I am also looking to help add more data to the machine learning model so I can get even better results. Built With ai coreml createml machine-learning swift xcode Try it out github.com
Compostly AI
Incorporating Machine Learning, CreateML, and Swift, Compostly gives you the best recommendations to help you save the world by composting.
['Navadeep Budda']
[]
['ai', 'coreml', 'createml', 'machine-learning', 'swift', 'xcode']
43
10,395
https://devpost.com/software/destruction-simulator
Inspiration My favorite parts of a lot of video games that I played was creating lasting effects in the worlds I was playing in, I felt like the easiest way to implement thins was through a destructible environment. What it does The only part of the game that I was able to complete was the "infinity mode," in this mode in addition to destroying a objects, every objects that you destroy duplicates itself, for this part I did not have to design complex levels in order to have fun because this mode allowed me to have an opportunity to create a limitless supply of resources to experiment with and create a fun experience on my own. How I built it I C# to code the game and compiled all of my assets into the Unity game engine to build the game. I wrote all of the scripts myself except for the script which is used to split the cube into smaller fragments which I downloaded from GitHub ( https://gist.github.com/ditzel/73f4d1c9028cc3477bb921974f84ed56 ). For the player, I imported a character model and animations from Adobe Mixamo. Challenges I ran into I poorly managed my time during this hackathon, which prevented me from implementing all of the ideas that I wanted to implement into this game. These include: implementing character animations and modifying the cube destruction script to have the character punch the cubes into smaller fragment, not implementing levels where the player has to get from one point to another, and improving existing menu/pause backgrounds. Accomplishments that I'm proud of I am proud of both my character controller and menu system. What I learned I learned that before starting hackathons I should schedule my time, taking into account my strengths and weaknesses, so that I can be realistic about what content I should cut and what content I should keep. What's next for Destruction Simulator Built With adobe-mixamo c# github unity Try it out github.com
Destruction Simulator
Quick video game about having fun breaking things
['Zane Mohammad']
[]
['adobe-mixamo', 'c#', 'github', 'unity']
44
10,395
https://devpost.com/software/saferlife
Dashboard page - view everyone on your contact "list" here with new updates Search page - add a location/person to your personal list or have someone scan your QR code Inspiration Last week, I received an email from my school district discussing ways to reopen schools in the fall. They were considering having half of the students on campus for one semester and the other half on the second semester, having teachers move from class to class instead of students, and all other types of scheduling nightmares in order to trace who comes into contact with who. After talking to my parents, I found out that their workplaces were also in a similar state of confusion as to how they could reopen a physical location while keeping track of who contacts who. Later that same day, I opened up Snapchat and noticed how they used a personalized QR code to add users to a personal friends "list." I also realized that almost every student at my school has a mobile device that they carry around all day. At that moment, I got the inspiration to create an app that is able to trace everyone a person comes into contact with via personalized QR codes like Snapchat. What it does My app keeps track of the people you come into contact with when you leave the house. It can be used in schools, offices, restaurants, or any public place where you want to keep track of who you meet. Every time you enter a building, room, or meet a person, you find/they show you their individual QR code and you can, like Snapchat, scan it. The app then adds all the person(s) in that room/building (or the individual person you scanned) to your own personal contacted "list" where it monitors if they have COVID/have come into contact with it. (The list only keeps track of people you met in the last 2 weeks). If anyone you have met in the last 2 weeks gets COVID, indirectly through another person/contaminated area or directly face to face, you will be notified that you are at risk of COVID and recommended to self-isolate. 
How it helps Without contact tracing, schools and companies are walking blindly into a dangerous situation. They must be able to know who has met who and who is infected with COVID. This app helps them accomplish this goal. School districts, small businesses, and large companies can all use this app to monitor which people their employees/students/customers meet. When the app tells them someone has COVID or is at risk of it, they can ensure no one else meets them in order to prevent others from getting infected. How it works I used android studio and Java to create this app. I set up the basic layout with the dashboard, login, and "search" page using the android studio IDE template. I used the Zxing API to scan QR codes and the QRGEncoder API to generate custom QR codes. How the app works is, after scanning someone's/a location's QR code, it takes all the people that person/location had come into contact with and adds it to your personal list with a time stamp. It monitors all of the people on your list until one of them changes their status to infected. When this happens, the app will notify you, change your status to "At-Risk", and notify all the people on your list that you are potentially infected with COVID and to avoid you. What's next for QR Contact Tracing One feature I was working on was using Bluetooth for proximity detection to check if someone is within 6 feet of you. Instead of using QR codes to add people to your contact "list" for one on one meetings, the app would automatically do it on its own and buzz to let you know you have "contacted" someone. My Challenges I struggled to make this app initially because I could not figure out how to make a login page for the app. Another issue I had with it was making the QR code scanner work. A large majority of the time developing this app was spent figuring out how to get the custom Zxing camera to display instead of the default android one. 
Built With android-studio java qrgencoder zxing Try it out github.com
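The contact-list logic described above (time-stamped scans, a two-week window, and status propagation) is language-neutral; the app itself is written in Java, but here is a minimal Python sketch of that flow, with all names hypothetical:

```python
from datetime import datetime, timedelta

CONTACT_WINDOW = timedelta(days=14)  # only track contacts from the last 2 weeks

class User:
    def __init__(self, name):
        self.name = name
        self.status = "Healthy"   # "Healthy", "At-Risk", or "Infected"
        self.contacts = {}        # user -> timestamp of last contact

    def scan_qr(self, other, now=None):
        """Scanning someone's QR code records a mutual, time-stamped contact."""
        now = now or datetime.now()
        self.contacts[other] = now
        other.contacts[self] = now

    def prune(self, now=None):
        """Drop contacts older than the two-week window."""
        now = now or datetime.now()
        self.contacts = {u: t for u, t in self.contacts.items()
                         if now - t <= CONTACT_WINDOW}

    def report_infected(self):
        """Mark this user infected and flag everyone on their recent list."""
        self.status = "Infected"
        self.prune()
        for contact in self.contacts:
            if contact.status == "Healthy":
                contact.status = "At-Risk"  # the app would also push a notification

alice, bob = User("Alice"), User("Bob")
alice.scan_qr(bob)
bob.report_infected()
print(alice.status)  # At-Risk
```

In the real app the scan step is triggered by the Zxing QR reader and notifications go out over the network; this sketch only models the bookkeeping.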
SaferLife
Going back to school in the fall and want to trace who you meet? My app uses custom QR codes to trace and monitor everyone you contact in a day. Go back to school safe with SaferLife!
['Arnav Sangamnerkar']
[]
['android-studio', 'java', 'qrgencoder', 'zxing']
45
10,395
https://devpost.com/software/baking-converter
Inspiration Quarantine has had me exploring baking recipes from around the world, thanks to ample time spent indoors. However, living in the United States and attempting foreign recipes generally means the baking tools I have (Imperial units) do not match what the recipes are written in (Metric units). This clash of systems was particularly annoying when I would browse for a new recipe: I had no idea what 100g of flour is, and Googling for the proper Imperial conversion for every ingredient listed soon became tiresome. So, why not make an app that can quickly and easily convert Metric baking units into their Imperial counterparts, and vice versa? What it does As the title implies, the Baking Converter converts Imperial or Metric units into their alternate counterparts. Users select which system of units they wish to convert to, before selecting which ingredient + unit to convert. The app also allows for Imperial-to-Imperial unit conversions, so users can more easily convert between cups, tablespoons, and teaspoons. How I built it This app was built entirely using Android Studio, with Java for the back-end and XML for the front-end. Challenges I ran into Converting from Metric into Imperial Different ingredients have different weights, and thus all had different Imperial measurements. For instance, 120g of white sugar (1 half cup + 2 teaspoons + 1 quarter teaspoon) was a different conversion process from 120g of flour (1 half cup + 1 third cup + 2 tablespoons). Different calculations had to occur for each of the individual ingredients. Unfamiliarity with Android Studio I had barely touched Android Studio before tackling this project, and the field of app development was very new to me. Doing simple things, like adjusting available options for users depending on the current screen, was more difficult than anticipated. Accomplishments that I'm proud of I am proud of how much I learned about Android Studio in such a short time frame. 
In particular, I'm very proud of implementing the feature where the app auto-selects your units depending on your ingredients (i.e., liquid units will be in mL while dry units will be in grams). This sounds like a small thing to implement, but it was a detail of the app I was determined to get right, as I have always been frustrated with existing converters that do not do this. What I learned Working with Android Studio Working with XML files Jumping between different display screens, depending on the conversion Adjusting the available unit options, depending on the ingredient How to build a UI How to create a minimum viable product in only 3 days Connecting to GitHub through Android Studio What's next for Baking Converter In the short term, I would like to accommodate more ingredients and their weights, so this app could potentially be used for most common bakery goods. In the long term, I would like to include a feature where the user simply takes a picture of the recipe they would like to convert. The app would find the text and numbers within the image and automatically convert the recipe to the desired measurement system. Built With android-studio java xml Try it out github.com
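The per-ingredient conversion described above boils down to a grams-per-cup density table plus a greedy decomposition into Imperial units. The app itself is written in Java; as an illustration, here is a Python sketch under assumed, approximate densities (the real app also handles third and quarter cups, which this sketch omits):

```python
# Approximate grams per US cup; these values are assumptions and vary
# by ingredient brand and packing.
GRAMS_PER_CUP = {"flour": 120, "white sugar": 200, "butter": 227}

TSP_PER = {"cup": 48, "tablespoon": 3, "teaspoon": 1}  # 1 cup = 16 tbsp = 48 tsp

def grams_to_imperial(ingredient, grams):
    """Greedily decompose a metric weight into cups, tablespoons, teaspoons."""
    cups = grams / GRAMS_PER_CUP[ingredient]
    tsp_left = round(cups * 48)          # work in teaspoons, the smallest unit
    result = {}
    for unit, size in TSP_PER.items():
        result[unit], tsp_left = divmod(tsp_left, size)
    return result

print(grams_to_imperial("flour", 120))   # {'cup': 1, 'tablespoon': 0, 'teaspoon': 0}
```

With a finer unit table (adding third and quarter cups before tablespoons), the same greedy loop reproduces conversions like the app's 120 g of flour = 1/2 cup + 1/3 cup + 2 tbsp.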
Baking Converter
Converting between Metric and Imperial measurements has never been easier.
['Alexa Wang']
[]
['android-studio', 'java', 'xml']
46
10,395
https://devpost.com/software/paragon-fw2nbl
This image shows the program we used, Adobe Premiere Pro, to create and edit our video. This image shows the website we used, Canva, to create our designs, such as the background of our demo. Inspiration In light of COVID-19, the entire nation is filled with the cries of parents who hate mathematics struggling to home-school their children. An even less shocking revelation is that the United States does not invest sufficient resources in education. Consequently, children who attend schools that are not well funded tend to lack the basic fundamentals of a subject such as mathematics, which results in an endless cycle of below-average test scores. In fact, according to thebalance.com, the math skills of students across the nation have remained stagnant since at least 2000. This is reflected in U.S. mathematics test scores that fall well below the global average; globally, the U.S. is ranked 38th in mathematics. Looking at all of these issues, our team was inspired to create a solution that will not only help children improve their mathematical abilities and train them to process information at a higher rate, but also improve the standard of education starting from the fundamentals. To do this, we looked toward other websites such as Math Dojo and Khan Academy as inspiration for the setup of our project. What it does Paragon is a website where students can challenge themselves and their friends to answer as many questions as they can correctly within a time frame of sixty seconds. The website offers different aspects of mathematics: addition, subtraction, multiplication, and exponents. No matter which subject they choose, students can put their thinking abilities to the test and improve their mathematical skills with the high-paced structure of Paragon. How we built it Using the IDE Visual Studio Code, we coded the website using JavaScript, HTML, and CSS.
For the background of the website, we took advantage of the graphic design platform Canva and put our own creative twist on the background to make it fit the sleek yet playful aesthetic we set out to achieve. For the video itself, we used Adobe Premiere Pro with royalty-free music and stock footage, both free for commercial use. Challenges we ran into Our team members do not all use the same programming languages, so we had to compromise when creating the project and split our roles accordingly. At one point, we even struggled to place the background image because of a few barely noticeable syntax errors. Individually, we all had our own challenges to overcome depending on our knowledge of the different languages, ranging from inexperienced to advanced. Accomplishments that we're proud of We are extremely proud of ourselves for looking at a problem affecting our nation and coming up with and creating an accessible solution within a matter of days, as a highly diverse group of strangers turned teammates. Whether it be learning two programming languages in a matter of days or just fixing an error message after hours of trying, each of us also had our own accomplishments that allowed this project to come to fruition. What we learned The importance of communication is a major lesson we learned. From the beginning, the team members with less experience made it very clear that this was their first hackathon, and we listened to each other with patience, guided each other through the process, and never hesitated to ask each other questions to ensure our own growth. This taught us the valuable skill of communication, which is needed not only in the workplace but also in life. All of us also learned about different aspects of computer science, whether it be new languages and how to create projects with them or simply learning from our own mistakes.
What's next for Paragon We want to build on this project by adding free resources for students (and their parents) such as lessons/lesson plans and YouTube videos that go over each subject, and by expanding the subjects to cover each individual grade level from first grade through eighth grade. Built With css3 html javascript Try it out hackthelib.github.io github.com
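The site itself is plain JavaScript/HTML/CSS; as a language-neutral illustration, here is a minimal Python sketch of the sixty-second game loop described above (question ranges and operator symbols are assumptions):

```python
import operator
import random
import time

OPS = {"addition": ("+", operator.add), "subtraction": ("-", operator.sub),
       "multiplication": ("x", operator.mul), "exponents": ("^", operator.pow)}

def make_question(subject, lo=1, hi=12):
    """Generate one random question string and its numeric answer."""
    a, b = random.randint(lo, hi), random.randint(lo, hi)
    if subject == "exponents":
        b = random.randint(1, 3)          # keep answers manageable
    symbol, op = OPS[subject]
    return f"{a} {symbol} {b}", op(a, b)

def play(subject, seconds=60):
    """Ask questions until the sixty-second timer runs out; return the score."""
    score, deadline = 0, time.time() + seconds
    while time.time() < deadline:
        prompt, answer = make_question(subject)
        if input(prompt + " = ").strip() == str(answer):
            score += 1
    return score
```

The browser version drives the same loop from a countdown timer and form input rather than `input()`.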
Paragon
Feel like your math skills are diminishing? Exercise your pink memory muscle using Paragon!
['Rashmit Shrestha', 'Sofia Murillo Sanchez', 'Harshithr467 Revuru']
[]
['css3', 'html', 'javascript']
47
10,395
https://devpost.com/software/blind-braille-board-mjiac6
Inspiration What it does How I built it Challenges I ran into Accomplishments that I'm proud of What I learned What's next for
h
h
['Maninder Singh']
[]
[]
48
10,395
https://devpost.com/software/bio05_enteryourprojectname
Inspiration What it does How I built it Challenges I ran into Accomplishments that I'm proud of blabla What I learned What's next for BIO05_EnterYourProjectName
BIO05_EnterYourProjectName
blablabla
['Frank Valentin']
[]
[]
49
10,395
https://devpost.com/software/covid19-outbreak-and-npi-prediction
Coronaob.ai - Pandemic Outbreak and mitigation prediction 1. Overview Coronaob.ai is the ultimate tool for predicting epidemic trends. It has been built with the help of artificial intelligence and statistical methods. This epidemic forecasting model helps in giving a rough estimate of the future scenario and also helps in suggesting non-pharmaceutical/mitigation measures to control the outbreak with minimum effort. This gives a head start in the preparations made to curb the pandemic before it takes more lives. Note: An NPI is the same as a mitigation measure. 2. What exactly is the problem? During any pandemic, it is difficult to scale up the implementation of mitigation measures, often because of the chaos caused during the pandemic. Authorities frequently face unseen situations in which they lack clear judgment on which step to take next, which makes the situation even worse. It is not always necessary to implement the strongest mitigation measure, as a medium-strength mitigation can get the job done, thus giving more weight to economic stability and other concerns. 3. What can be done to tackle this issue? A strategy that can give a rough picture of the future scenario, describing the number of cases and the area of spread, can give an insight into what could be done to reduce the impact in an easy and cost-effective manner. Having a record of previously taken successful steps can also give this strategy a boost. 4. Our Goals a. To give an estimate by forecasting the number of cases, trends in the spread, etc., which will give a good picture of how the scenario would unfold. b. To suggest/predict the best-suited mitigation measures, according to previously taken successful steps, thus saving resources and not creating chaos. c.
To make this approach a robust one, so that any agency working on 5. Milestones Prototype stage: We have completed our first stage of training and testing on the COVID-19 data and have achieved over 90% accuracy in predicting the new cases for the immediately following day and over 85% accuracy in predicting the long-term scenario. On the mitigation-prediction part, we have achieved an accuracy of 91.8% and were successful in bringing the Hamming loss down to as low as 8.2%. Accuracy: Our method is one of the most accurate among the alternatives for predicting such trends. 6. Specifications Our submission is a script containing the machine-learning models, which can be boosted with an interesting UI as shown in the gallery picture. 7. Technical details Major tools used: a. Kalman filter: An algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. b. Regression analysis: A set of statistical processes for estimating the relationships between a dependent variable (often called the 'outcome variable') and one or more independent variables (often called 'predictors', 'covariates', or 'features'). c. Scikit-learn: Scikit-learn (formerly scikits.learn and also known as sklearn) is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means, and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy. 8. Dataset Description Some details regarding the columns of the prediction master sheet: Each row is an entry/instance of a particular NPI getting implemented.
Country: This column represents the country to which the entry belongs. Net migration: The net migration rate is the difference between the number of immigrants (people coming into an area) and the number of emigrants (people leaving an area) throughout the year. Population density: Population density is the number of individuals per unit geographic area, for example, number per square meter, per hectare, or per square kilometer. Sex ratio: The sex ratio is the ratio of males to females in a population. Population age distribution: Age distribution, also called age composition, is, in population studies, the proportionate number of persons in successive age categories in a given population. (0-14yrs/60+yrs %) Health physicians per 1000 population: Number of medical doctors (physicians), including generalist and specialist medical practitioners, per 1 000 population. Mobile cellular subscriptions per 100 inhabitants: Mobile cellular telephone subscriptions are subscriptions to a public mobile telephone service that provides access to the PSTN using cellular technology. Active on the day: The number of active cases of COVID-19 infections in that particular country on the day the NPI was implemented. Seven-day, twelve-day and thirty-day predictions are for active cases from the date the NPI was implemented, and the date implemented is converted to whether it was a weekday or a weekend to make it usable for training. The last column represents the category to which the implemented NPI belonged. 9. I/O Input: The epidemic data, such as the number of infected people, demographics, travel history of the infected patients, the dates, etc., up to a certain date. Output: 1) Prediction of the number of people who will be infected in the next 30 days. 2) The countries that will get affected in the next 30 days. 3) The mitigation/restriction methods to enforce, such as curfew, social distancing, etc., will also be predicted, to control the outbreak with minimal effort. 10.
Dividing the measures into categories: Category 1: Public-health measures and social distancing. Category 2: Social-economic measures and movement restrictions. Category 3: Partial/complete lockdown. To categorize the NPIs we followed a 5-step analysis: Step 1: We chose 6 different countries that have implemented at least one of the above-mentioned NPIs. Step 2: We chose a particular date on which one of the NPIs was implemented. Step 3: From that (chosen) date we calculated 5-day, 8-day, and 12-day growth rates in the number of confirmed cases in that country. Step 4: According to 1) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4327893/ 2) https://www.worldometers.info/coronavirus/coronavirus-incubation-period/ we used as a reference that over 50% of the people who are infected on day 1 show symptoms by day 5, over 30% of the people infected on day 1 show symptoms by day 8, and the last 20% start showing symptoms by day 12. Assuming that they get a checkup as soon as they show symptoms, we calculated a cumulative growth rate. Step 5: This cumulative growth rate was not very accurate because the population densities of the countries differ. So we normalized the scores obtained from step 4 by the population densities. That gave us the following results. More information can be found here: link [ (896.4961042933885, 'CHINA', 'SOCIAL DISTANCING'), (720.7571447424511, 'FRANCE', 'PUBLIC HEALTH MEASURES'), (578.0345389562175, 'SPAIN', 'SOCIAL AND ECONOMIC MEASURES'), (527.7087251438776, 'IRAN', 'MOV RESTRICTION'), (484.1021819976962, 'ITALY', 'PARTIAL LOCKDOWN'), (207.67676767676767, 'INDIA', 'COMPLETE LOCKDOWN')] E.g.: (cumulative growth rate (normalised), country name, measure taken). So the above analysis shows the decreasing order of growth rates and increasing order of strength; however, this is not very accurate due to various other reasons, but it gives a rough estimate of the effectiveness/strength of the NPIs. 11. Working a.
The inputs given regarding the previous days' record of the outbreak are first filtered by the Kalman filter, and then the modified inputs are sent to the regression model, which predicts the scenario with better accuracy than any simple regression model. b. The predictions from the above models are then fed into the machine-learning model, which helps in predicting the mitigations to be used, based on the previous history given in the literature (e.g., social distancing). c. We performed 10-fold cross-validation by dividing our data set into 10 different chunks, then running the model 10 times. For each run, we designate one chunk for testing and the other 9 for training. This is done so that every data point appears in both testing and training. 12. Conclusions This method can help the authorities develop and predict various mitigation measures that will help in controlling the outbreak effectively with minimum effort and chaos. 13. What did we learn? a. This project was challenging in terms of the conceptualization and data-collection parts; there was no direct data available. We learned how to take relevant data from different datasets, engineer it, and use it for our purpose. b. The regular regression algorithms failed to give accurate results, so we had to think of something different that could increase accuracy. Thus, we came across the idea of using the Kalman filter, and using these updated inputs we could achieve better accuracy. c. Since, for the effectiveness of the data, we could only take regions having more than 1000 cases, the overall dataset became small and deep-learning models failed. This made us switch to machine-learning algorithms. d. We also used clustering algorithms, which gave us a deep understanding of why these work better in some situations. e. It was also exciting for us to use both R and Python in a single notebook, adding to our learning. 14.
The drawbacks of our approach a. The above-mentioned approach has many drawbacks, one of which is an incomplete dataset. b. There are no good differentiating features in the dataset. c. In our approach, we are not able to decide the effectiveness and a go-to plan of action for deploying NPIs. All the data points are very similar to one another, hence it is difficult for the algorithm to learn. 15. What improvements do we want to make further? a. There could be a set of strong differentiating features in the dataset, which would make generalization easy. b. There could be a further categorization of NPIs for better implementation. c. The dataset could also be combined with economic parameters to understand the economic feasibility of NPI implementation. d. It could further be used to predict the decrease in growth rates once an NPI is implemented, to track the real-time effectiveness of the NPIs in a particular demographic. 16. References a. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4327893/ b. https://www.worldometers.info/coronavirus/coronavirus-incubation-period/ c. https://archive.is/JNx03 d. https://archive.is/UA3g14 All the other references are mentioned in the submission notebook at every step. 17. Product Roadmap The coronaob.ai team has the core functionality of the platform in place. We are currently in the process of bringing our front end up to speed with our UX designer's wireframes. Below is our product roadmap post-hackathon submission: a. New security features b. Admin dashboard c. Analytical graphs 18. The team a. Saketh Bachu - Machine learning b. Gauri Dixit - UI/UX development c. Shaik Imran - Medical expert/design Built With kalmanfilter matplotlib numpy pandas scikit-learn Try it out github.com
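The pipeline in section 11 — Kalman-filter the noisy case series, then regress on the smoothed values to forecast ahead — can be sketched in a few lines. The team's implementation uses scikit-learn; this dependency-free Python sketch with assumed noise parameters and made-up case numbers only illustrates the idea:

```python
def kalman_smooth(series, q=1.0, r=10.0):
    """Scalar Kalman filter over a noisy daily-case series.
    q = process noise, r = measurement noise (assumed values)."""
    x, p, out = series[0], 1.0, []
    for z in series:
        p += q                       # predict step: uncertainty grows
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update with measurement z
        p *= (1 - k)
        out.append(x)
    return out

def fit_line(ys):
    """Ordinary least-squares trend line over day indices 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

cases = [100, 130, 115, 160, 170, 165, 210, 230]  # illustrative noisy actives
smoothed = kalman_smooth(cases)
slope, intercept = fit_line(smoothed)
next_day = slope * len(cases) + intercept          # one-step-ahead forecast
```

The real model replaces the hand-rolled least squares with scikit-learn regressors, evaluated with the 10-fold cross-validation described in step c.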
Coronaob.ai - Pandemic Outbreak and mitigation prediction
Coronaob.ai is built with the help of AI and statistical methods. This model helps in forecasting the number of cases and also predicts mitigation measures to control the outbreak.
['saketh bachu', 'Sandeep Kota Sai Pavan']
['Third Place']
['kalmanfilter', 'matplotlib', 'numpy', 'pandas', 'scikit-learn']
50
10,395
https://devpost.com/software/covid-19-rescue
This is a screenshot of the Corona Relief App running on Microsoft Power Platform Inspiration I saw that many people who are poor or belong to middle-class families are facing food scarcity due to the COVID-19 pandemic, because their income source has stopped due to the lockdown. There are many pregnant women, children, and old-aged people who are facing the same situation; they cannot afford food because they are short of money and helpless. I saw that many organizations are helping these kinds of people by taking live calls on television channels to hear people's problems and helping as required. Only one call can be taken at a time on live television. There is also the problem of busy networks, and those organizations can't reach many people this way. So I thought of developing an application so that people can report their problem and the organizations will get to know about it and help them as needed. What it does Basically, my app minimizes and speeds up the process of addressing people's problems so that they can be fixed as soon as possible in this pandemic. When the app runs, users enter their name and choose whether they want to request Food Help (Basic Need Relief) or a Corona Check (an additional feature added in the app). When they choose Food Help (Basic Need Relief), they are asked several questions such as total family members, number of pregnant women, kids, and old-aged people, and remaining food stock, with an option to send an attachment. When people choose the Corona Check option, they are asked several questions such as gender, what symptoms the person being checked shows, whether they have come from abroad, and so on.
Now, when the user submits this information, the data is recorded in Microsoft SharePoint, where the admin or the organization can view the problems of people along with the GPS coordinates and date that are automatically sent when submitting the problem inside the app. From this information, the organization can go directly to the needy families with the necessary relief using the GPS coordinates sent through the app. How I built it I built it using the Microsoft Power Platform. Challenges I ran into I had learnt some Flutter, and I was trying to make this app using Flutter, but my laptop got some ransomware and I lost everything, so I reformatted it and installed Flutter again. I did make the forms, but the conditional logic, GPS location, and so on were very complex for me as a beginner to Flutter, and learning every concept from the beginning and developing the app would have been really difficult and time-consuming. Then I heard about Microsoft Power Apps. I already had an idea and just needed to implement it properly, so I started learning Microsoft Power Apps and tried to bring my idea to life. Using Power Platform, I also had some difficulties with the conditional forms for Corona Check and Basic Need Relief, but I finally figured it out after some research. Accomplishments that I'm proud of I don't know whether the project I have made will be used by organizations that are really running these kinds of relief programmes, but I think this app will be really helpful for them, because if they can address many people's problems in less time, then they can provide relief to as many people as possible, and the poor and needy, as well as middle-class people who are deprived of food, pregnant women, and kids who are not getting nutritious food due to the COVID-19 pandemic and the lockdown in the country, will get relief and not die of hunger. Because we don't know how long this pandemic is going to last.
What I learned I was a complete beginner to the Microsoft Power Platform, but I had an idea that needed to be implemented to help people, so I started learning in earnest and got to know many things about Microsoft Power Apps. I think it also improved my creative skills and logical thinking. What's next for Covid-19 Rescue Currently this app runs only under the Microsoft Power Platform, but it needs to be platform-independent. I am planning to make it platform-independent very soon if I can find some good mentors to support my project. I have also planned many other features for this app that can be really beneficial and helpful for people facing the COVID-19 pandemic. I think these kinds of apps must be developed to address the problems of poor people, who can then be reached by those in a position to help them. Problem sending the app demo link Since the app is dependent on the Microsoft Power Platform, I require your email to show it to you. It must be opened within the Microsoft Power Platform, so I could not send the link, but I have included an image of how it looks when the app is running and a demo video of how it works. Built With powerplatform
Corona Relief
My idea is to let the poor and middle class families get chance to request food help as well as Corona Check within the app so the helping organizations or teams can reach them with the solutions.
['Naseeb Dangi']
['The Wolfram Award']
['powerplatform']
51
10,395
https://devpost.com/software/wecare-5l9dgi
Home Screen of app, which allows you to report your symptoms, check the status of your circle, and get daily personalized tips. Map Screen of app, which allows you to see hotspots around you and your Care Circle. Care Circle screen of app, which allows you to see the health conditions of your loved ones. Web interface, which can be used to update the symptoms. It is synced with the app. New logo. Update with a key. Hotspots for countries. Options from the start. Questions about your health. Hot spots. App design As the outbreak of COVID-19 continues to spread throughout the entire world, more stringent containment measures, from social distancing to city closure, are being put into place, greatly stressing the people we care about. To address the outbreak, there have been many ad hoc solutions for symptom tracking (e.g., UK app ), contact tracing (e.g., PEPP-PT ), and environmental risk dashboards ( covidmap ). However, these fragmented solutions may lead to false risk communication to citizens, violate privacy, add extra layers of pressure to authorities and public health, and are not effective for following the conditions of our loved ones. Unless made mandatory, these technologies have not seen large-scale adoption by the crowd. Until now, there has been no privacy-preserving platform in the world to 1) let us follow the health conditions of our loved ones, 2) use a statistically rigorous live hotspot mapping to visualize current potential risks around localities based on available and important factors (environment, contacts, and symptoms) so the community can stay safer while resuming normal life, and 3) collect accurate information for policymakers to better plan their limited resources. Such a unified solution would help many families who are not able to see each other due to self-quarantine and enable early detection and risk evaluation, which may save many lives, especially among vulnerable groups.
These urgent needs will remain for many months, given that quarantine conditions may be in place for the upcoming months, that the outbreak is not yet reported to have occurred in Africa, the potential arrival of second and third waves, and COVID-19's potential reappearance next year at a smaller scale (like seasonal flu). There is still uncertain information about immunity after being infected with and recovering from COVID-19. Therefore, it is of paramount importance to address these needs using an easy-to-use and privacy-preserving solution that helps individuals, governments, and public health authorities. WeCare Solution WeCare is a cross-platform app that enables you to track the health status of your loved ones. Individuals can add their family members and friends to a Care Circle, track their health status, and get personalized daily updates on best prevention practices. In particular, individuals can opt in to fill out a simple questionnaire, supervised by our epidemiologist team member, about their symptoms, comorbidities, and demographic information. The app then tracks their location and informs them of potential hotspots for them and for vulnerable populations over a live map, built using the opt-in reports of individuals. Moreover, the symptoms of individuals are tracked frequently to enable sending a notification to the Care Circle and health authorities once the conditions become more severe. We have also designed a citizen-points system, where individuals get badges based on their contributions to solving the pandemic by daily checkups, staying healthy, avoiding high-risk zones, protecting vulnerable groups, and sharing their anonymous data. WeCare includes a contact tracing module that follows the guidelines of the Decentralized Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) project. It is an international collaboration of top European universities and research institutes to ensure the safety and privacy of individuals.
What we have done during the weekend We have been in contact with other channels in Brazil and Chile. We have updated the pitch (extended version), the app design, and the backend connection of the app this week. New contacts in Chile and Singapore. We have also done some translation work on the app, shared more about the project on social media, and connected to more people on Slack and LinkedIn. We have also refined the concept of the Care Circle and how to add/remove individuals. Now, the app is very easy to use, with minimal input (less than a minute per day) from the user. We are proud of the achievements of our team, given the very limited time and all the challenges. Challenges we ran into The Hackathon brought together plenty of people of different expertise and skills. We faced some very unique challenges, juggling a variety of communication platforms on top of open-source development tools. Online Slack workspaces, Zoom meetings, and webinars presented challenges in the form of inactive team members, cross-communication, information bombardment across several separate threads and channels in Slack, and online meetings of strangers coordinated across different time zones. In developing the website and app for user input data, our next challenge was preserving the privacy of user information. In developing the live hotspot map, our biggest challenge was to ensure we do not misrepresent risk and prediction in our live mapping models. Also, for the testing of the iOS version, we ran into the new App Store restriction on COVID-related apps, which must be backed by a health authority or governmental entity. The solution's impact on the crisis We believe that WeCare would help many families who cannot see each other due to self-quarantine and enable early detection and risk evaluation, which may save many lives, especially among vulnerable groups.
The ability to check up on their Care Circle and the hotspots around them substantially reduces the stress level and enables a much more effective and safer re-opening of communities. Also, individuals can get a better understanding of the COVID-19 situation in their local neighbourhood, which is of paramount importance but not available today. The live hotspot map enables many people in at-risk groups to have their daily walk and exercise, which are essential to strengthening their immune system, yet sadly almost impossible today in many countries. The concept of the Care Circle motivates many people to invite a few others to monitor their symptoms on a daily basis (incentivized also through badges and notifications) and take more effective prevention practices. Thereby, WeCare enables everyone to make important contributions toward addressing the crisis. Moreover, data sharing would enable not only a better visual mapping model for public assessment, but also better data collection for the public health authorities and policymakers to make more informed decisions. The necessities to continue the project We plan to continue the project and fully develop the app. However, to realize the vision of WeCare we need the following: Public support: a partnership with authorities, and potentially being a part of government services, to be able to deploy the app on the App Store. It would also make WeCare more legitimate. This would increase the level of reporting and therefore give a better overview and control of the crisis. Social acceptance: though this was confirmed by a small customer survey, we need more people to use the WeCare app and share their data, to build a better live risk map. We would also appreciate more fine-grained data from the health authorities, including the number of infected cases in small city zones and municipalities. Resources: So far, we are voluntarily (and happily) paying for the costs of the servers.
Given that all the services of the app and website would be free, we may need some support to run the services in the long run. The value of your solution(s) after the crisis The quarantine conditions and strict isolation policies may still be in place for the upcoming months and years, given that the outbreak is not yet reported to have fully occurred in Africa, the potential arrival of second and third waves, and a possible COVID-19 reappearance next year at a smaller scale (like seasonal flu). Therefore, we believe that WeCare is a sustainable solution and remains very valuable after the current COVID-19 crisis. The URL to the prototype We believe in open science and open-source development. You can find all the code and documentation (so far) at our Website . Github repo . Pitch: https://youtu.be/7fMrVqxoPKY Pitch extended version: https://youtu.be/Vo0gs3WlptU Other channels. https://www.facebook.com/wecareteamsweden https://www.instagram.com/wecare_team https://www.linkedin.com/company/42699280 https://youtu.be/_4wAGCkwInw (new app demo 2020-05) Interview: https://www.ingenjoren.se/2020/04/29/de-jobbar-pa-fritiden-med-en-svensk-smittspridnings-app Built With node.js python react vue.js Try it out www.covidmap.se github.com
WeCare
WeCare is a privacy-preserving app & page that keeps you & your family safer. You can track the health status of your cared ones & use a live hotspot map to start your normal life while staying safer.
['Alex Zinenko', 'Sina Molavipour', 'Ania Johansson', 'Hossein S. Ghadikolaei', 'Christian M', 'Seunghoon HAN', 'Tomasz Przybyłek', 'Mohamed Hany', 'Alireza Mehrsina']
['1st Place Overall Winners', '2nd Place']
['node.js', 'python', 'react', 'vue.js']
52
10,395
https://devpost.com/software/contactless-medicine-dispenser
Inspiration Since COVID-19 cases are increasing day by day, it is our responsibility and duty to protect our COVID warriors who are fighting in the hospitals. So we thought of making a contactless medicine dispenser. What it does Our idea is a contactless medicine dispenser. • The medicine dispenser will be placed near the bed of the patient and the medicines required for COVID patients will be loaded into the dispenser. • The dispenser has an alarm which will start beeping at the time scheduled by the nurse for the patient to take the medicine. • The nurse can set the alarm using the app (contactless) and at the right time the patient will get a reminder to take the medicines. • This will not only reduce human involvement but will also help patients to recover sooner by taking proper medication How I built it The project was divided into parts. Some of us worked on sensor interfacing and app development. Some worked on the prototype body (made of plywood). Challenges I ran into Building the user-friendly interface for the end-users. Accomplishments that I'm proud of Being able to build a prototype that could help the COVID warriors. What I learned Adjusting ourselves according to the situation. Team and time management. What's next for Contactless Medicine Dispenser We are trying to integrate the machine with an ESP8266 module so that anyone from anywhere can use the app to set up the reminders to take medicines for COVID patients treated in hospitals. Built With app arduino embedded rtclib.h softwareserial.h wire.h Try it out github.com
Contactless Medicine Dispenser
Our idea is a contactless medicine dispenser: it is placed near the bed of the patient, the medicines required for COVID patients are loaded into it, and the user gets a reminder at the scheduled time.
['rakshit ks', 'G A Ankit', 'charan shivamurthy']
[]
['app', 'arduino', 'embedded', 'rtclib.h', 'softwareserial.h', 'wire.h']
53
10,395
https://devpost.com/software/noq-app-35qhvm
Slot booking Inventory Shopkeeper Dashboard Customer Dashboard Home Team Dejaavu has developed a web-app called NoQ for the Hackathon ‘Crack the Covid-19 Crisis’. During this Covid-19 pandemic, it is advised to avoid gatherings and avoid contact with people, as it can lead to an increase in the spread of the disease. The government only allowed shops which sold essential commodities to remain open for a certain number of hours throughout the day. People followed social distancing when they went shopping for essential commodities, and this led to long waiting lines outside the shops. The long lines meant a longer waiting period, and thus the customers had to wait outside with a large number of people, increasing the risk of contracting the virus. It also led to discomfort among customers, and difficulty in managing these lines when multiple such shops were close by or had limited space outside. The NoQ app – ‘Say No to Queues’ developed by team Dejaavu solves this problem of gathering in queues. The NoQ app is a Progressive Web App made using the Flask framework. Customers are given a provision to book one of the available slots for a shop. When booked, the customers receive an auto-generated confirmation message via the Twilio API. This message acts as a ticket to enter the shop. Each slot has timings and a maximum number of people according to the size of the shop; this helps avoid overcrowding. The customers are shown the inventory of items available in each shop, which is filled in by the shopkeeper. Another benefit of the NoQ app is that we can keep track of the people that were in the shop at a specific time; this helps trace the spread of the virus if a person is found to be a carrier, so tracking the possibly affected people becomes easy. An alternative solution to implement social distancing is doorstep delivery, which would reduce the number of people gathering at places like shop fronts.
However, doorstep delivery is not feasible in the current scenario. Migrant workers returning to their villages and towns have led to a reduction in the logistics manpower required for deliveries. Also, people are scared of taking delivery jobs due to the fear of coming into contact with an unknown infected person. The solution provided by us is also very feasible and easy to implement. Further, NoQ can be extended to all kinds of shops and also to different sectors. Built With bootstrap css3 flask html5 javascript jinja mysql progressive-web-app python twilio Try it out noq-app.herokuapp.com github.com
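The slot-with-capacity logic described above can be sketched in plain Python. This is an illustrative sketch with invented names and numbers, not the project's actual code; in the real app a Twilio SMS confirmation would be sent at the point of booking.

```python
# Hypothetical sketch of NoQ-style slot booking with per-slot capacity limits.
class Slot:
    def __init__(self, start, capacity):
        self.start = start          # slot start time, e.g. "10:00"
        self.capacity = capacity    # max people allowed, based on shop size
        self.bookings = []          # phone numbers of booked customers

    def book(self, phone):
        """Book a customer into this slot; return a ticket dict, or None if full."""
        if len(self.bookings) >= self.capacity:
            return None
        self.bookings.append(phone)
        # The real app would send a Twilio SMS here as the entry ticket.
        return {"slot": self.start, "ticket_no": len(self.bookings)}

def contacts_at(slots, start):
    """Contact-tracing helper: who was booked for a given slot time."""
    for s in slots:
        if s.start == start:
            return list(s.bookings)
    return []
```

Because every booking is recorded against a slot, the same structure that enforces the capacity limit also yields the contact-tracing list the description mentions.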
Noq-app
NoQ helps in reducing the risk of infection both in stores and in queues outside by using virtual queuing system
['Tejas Khanolkar', 'DEBDATTA KUNDU']
[]
['bootstrap', 'css3', 'flask', 'html5', 'javascript', 'jinja', 'mysql', 'progressive-web-app', 'python', 'twilio']
54
10,395
https://devpost.com/software/pianotunesar
Playing along! Enter a YouTube URL and launch AR camera after getting our marker Building muscle memory! Intro Inspiration Have you ever tried learning the piano but never had the time to do so? Do you want to play your favorite songs right away without any prior experience? This is the app for you. We wanted something like this even for our personal use. It helps immensely in building the muscle memory to play any song you can find on Youtube directly on the piano. What it does Use AR to project your favorite digitally-recorded piano video song from Youtube over your piano to learn by following along. Category Best Practicable/Scalable, Music Tech How we built it Using Unity, C# and the EasyAR SDK for Augmented Reality Challenges we ran into Some Youtube URLs had cipher signatures and could not be loaded asynchronously directly App crashes before submission, fixed barely on time Accomplishments that we're proud of We built a fully functioning and user friendly application that perfectly maps the AR video onto your piano, with perfect calibration. It turned out to be a lot better than we expected. We can use it for our own personal use to learn and master the piano. What we learned Creating an AR app from scratch in Unity with no prior experience Handling asynchronous loading of Youtube videos and using YoutubeExplode Different material properties, video cipher signatures and various new components!
Developing a touch gesture based control system for calibration What's next for PianoTunesAR Tunes List to save the songs you have played before by name into a playlist so that you do not need to copy URLs every time Machine learning based AR projection onto piano with automatic calibration and marker-less spatial/world mapping All supported Youtube URLs as well as an Upload your own Video feature AR Cam tutorial UI and icons/UI improvements iOS version APK Releases https://github.com/hamzafarooq009/PianoTunesAR-Wallifornia-Hackathon/releases/tag/APK Built With augmented-reality c# easyar unity Try it out github.com
PianoTunesAR
Use AR to project your favorite digitally-recorded piano video song from Youtube over your piano to learn by following along!
['Zoraiz Qureshi', 'Farrukh Rasool', 'Ahmed Farhan', 'Hamza Farooq']
['1st Place Winning Team: 1,000 € + Wallifornia MusicTech + Les Ardentes 2021 Tickets + Cloudsploit Cloud Security Services']
['augmented-reality', 'c#', 'easyar', 'unity']
55
10,395
https://devpost.com/software/productivity-app-h84lxu
Inspiration Leading up to the hackathon, I knew I wanted to build a productivity app because I had notes, reminders, and to do lists scattered everywhere. They were on my cellphone, desktop, and laptop. It even got to the point where I would send myself reminders on messenger on my phone before I sleep so I would see it when I would wake up on my computer. All of this culminated in me wanting to make my own personalized productivity app. What it does This productivity app is built on the web to ensure that you can use it on any device, anywhere as long as you have internet or mobile data access. There are accounts for different users and each user can make to do lists, notes, and alerts. The alerts will be sent to you via text as I've found conventional phone reminders to be ineffective at times. While you need internet to set the alert, you don't need it to receive it which is what's really powerful about that compared to other options. How I built it Currently I only have the design and some functionality completed but it's built using the MERN stack. MongoDB, Express.js, React.js, and Node.js. The dependencies I have installed in the backend are as follows: bcrypt for password hashing, body-parser to read requests, dotenv for config variables, express-session to create sessions, mongoose for MongoDB schemas, passport and passport-local for authentication, and session-file-store to create sessions. The dependencies I have installed in the frontend are as follows: axios for local api calls and jQuery. Challenges I ran into I wasn't able to complete the project within 72 hours because we had a small team. Accomplishments that I'm proud of The design looks very good in my eyes and might be one of my best works so far. What I learned While we didn't learn much in the coding, we learned a lot more on the design side of things as we decided to focus more of our time there. What's next for Productivity App What's next is to finish building the app with functionality. 
It shouldn't take more than a week to finish all of the functionality and then we can start using it for ourselves! :) Built With css3 express.js html5 javascript mongodb node.js react sass Try it out github.com
Productivity App
To do lists, notes, alerts, organization system
['Edward Chang', 'Véronique Delage']
[]
['css3', 'express.js', 'html5', 'javascript', 'mongodb', 'node.js', 'react', 'sass']
56
10,395
https://devpost.com/software/liquay
Model of the Liquay Inspiration As I was relaxing at my desk, watching a youtube video on different Asian snacks, one part of the video got my attention. As the vlogger was talking about the mountain of snacks piled on the checkout table, I noticed that instead of directly handing money to the cashier, he placed it on a tray. The cashier then took the money and placed some coins on that same tray. Teeming with curiosity, I did a quick Google search. What it does The Liquay offers a place to put money so that the cashier and the customer don't need to directly touch each other to complete an in-person transaction. This system of putting money in trays originally comes from Japan, but I am making my own version with a few changes due to the coronavirus. In addition, it's meant to be cleaned at the end of each day because of all the money that it touched. How I built it I first made a model of the tray in Autodesk Fusion 360, then I made a simple website to display some of the information about my project. Then after I made a presentation, I published it to youtube and began learning how to edit the video well. Challenges I ran into Since it's been a long time since I've used Autodesk Fusion 360, I had to relearn the basics and even some advanced techniques to bring out the best in the model. Plus, my computer's GUI isn't optimal for Fusion 360, so there was a plethora of crashes and problems that I ran into. Accomplishments that I'm proud of I'm proud of launching my first, complete, individual project on DevPost. Plus I'm really proud of relearning some design and implementation techniques in Autodesk Fusion 360. What I learned I learned about basic and advanced techniques in Autodesk Fusion 360. I learned how to solve some of the problems with my GUI and learned a little more about computer hardware.
What's next for Liquay All I'm really looking to do is to inspire someone more qualified to release products, and I hope that the community improves on this idea. I just hope that the negative side effects of the coronavirus become alleviated through our hard work and determination. Built With autodesk-fusion-360 css3 html5 javascript w3s-css Try it out rashstha.netlify.app
Liquay
A CAD-designed cash tray to avoid direct contact in places like stores
['Rashmit Shrestha']
[]
['autodesk-fusion-360', 'css3', 'html5', 'javascript', 'w3s-css']
57
10,395
https://devpost.com/software/heart-o-k
probability of heart disease in the individual. patient entering the information to check for heart disease. Information of the patient that is being entered. When recommending medicines, the view will be something of this kind. login page sign up page home page from where the patient can jump directly to report/check/logout. What it does My project provides a simple UI where you can check the probability of heart disease and get corresponding recommendations for medicines and drugs. How I built it I built a simple prediction model using the best ML algorithms available, like GLM, GBM, SVR, etc. After that I prepared my own dataset of medicines by scraping data from Medlife, etc., and give recommendations using collaborative filtering. Challenges I ran into I faced difficulty while building the medicines dataset. Also, results were not that accurate at first, so I had to work a lot. Accomplishments that I'm proud of The website that I designed provides a very simple UI that anyone is able to operate. Results are promising. What's next for HEART-O-K I would like to extend this project to other diseases like thyroid, etc. Built With css google-web-speech-api html javascript parse r rshiny server
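To illustrate the collaborative-filtering step, here is a minimal item-based sketch in Python with invented data — not the project's actual R/Shiny implementation. It recommends the medicines whose user-rating vectors are most similar (by cosine similarity) to one the patient already uses:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(ratings, liked, top_n=1):
    """ratings: {medicine: [one rating per user]}; liked: a medicine the
    patient already uses. Returns the top_n most similar other medicines."""
    scores = {m: cosine(ratings[liked], v)
              for m, v in ratings.items() if m != liked}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

With toy ratings such as `{"med_a": [5, 4, 0], "med_b": [5, 5, 0], "med_c": [0, 0, 5]}`, asking for recommendations near `"med_a"` surfaces `"med_b"`, since the same users rated both highly.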
HEART-O-K
Predicting heart disease in an individual and getting recommendations for medications and drugs.
['Deepshi Sharma']
[]
['css', 'google-web-speech-api', 'html', 'javascript', 'parse', 'r', 'rshiny', 'server']
58
10,395
https://devpost.com/software/good-morning-girlfriend
One stop solution to all your relationship problems. Kshitij Dhyani 👽 I pump iron during the day and smash my keyboard in the night.🐙 Goodmorning Girlfriend 🐕 Link to the Live website : https://morningrobot.herokuapp.com/ Link to the Github Repo : https://github.com/wimpywarlord/Goodmorning_Girlfriend “Does your girlfriend complain about you not wishing them good morning or good night? Are you losing the spark in your relationship? Or have you just become too lazy to type a long ass message every morning and night?” Well not anymore! Get delicately drafted messages just a tap away. Subscribe to the Pro version for wishing your loved one each and every day, hassle free, with no effort whatsoever (yet to be released). A smarter way to a long-lasting relationship. Getting Started 🎧 This project utilizes the benefits of multiple technologies such as Node and Vue.js. It's better if we familiarize ourselves with these technologies. Prerequisites There is no necessity for any software for running the project! The editor and package manager are all at your discretion. Installing☔ It's pretty straightforward : Clone the repo : git clone <repository Url> Install all dependencies npm install Deployment 💡 Run the application npm run dev Built With 🎯 A lot of love and a little JavaScript Node.js - The web framework used Vue.js - Dependency Management Contributing Make pull requests which improve the functionality of the application in any way. They should conform with the following conditions: Clear, short, crisp description of the PR. Should add to the value of the application. Become my distraction (Social Media)🏅 I am From New Delhi Kshitij Dhyani MIT © Kshitij Dhyani Acknowledgments 💖 To my family👪 and friends 👫 who always kept me motivated. To the community of computer science 💻. Built With css3 ejs html5 javascript node.js npm vue.js Try it out morningrobot.herokuapp.com
Good-morning Girlfriend
“Does your girlfriend complain about you not wishing them good morning or good night? Are you losing the spark in your relationship? Well not anymore! Get delicately drafted messages just a tap away.
['Kshitij Dhyani']
[]
['css3', 'ejs', 'html5', 'javascript', 'node.js', 'npm', 'vue.js']
59
10,395
https://devpost.com/software/again-vui0w1
Inspiration A few days before the start of the quarantine in Morocco, we were walking down the street and we saw a homeless man trying to find food. Going back home, we wondered what this man could do if a quarantine got imposed on us Moroccans. A few days later, that was exactly what happened: we were quarantined. Thinking about the man we saw the other day, we started brainstorming solutions that we, as computer science enthusiasts, could build to help him and many others in the same situation find shelter, especially during this tough time when they can easily be infected by the virus and just as easily spread it. After seeing Covidathon, we believed that this was our chance to make our solution reach more people and to take the first step in making an impact. What it does Again is a solution that aims at securing shelter for homeless people during the lockdown by matching associations and organizations that deal with homeless people with house donators. The solution also creates jobs for people who have lost their jobs by making them applications' reviewers (more details about this below). To secure shelter for homeless people, the application allows users to create accounts as an association, a house owner, or an applications' reviewer. All of the different types of users enter useful information about themselves when registering (details about the registration information required from each type can be found on the demo site): As a house owner: anyone who possesses a house or multiple houses can donate them via the application by filling in a house donating application. The application asks for information about the house/s that the user would like to donate. This information includes the location of the house, the area, but most importantly a document proving that the user owns that house. The purpose of this proof is to reduce the time wasted by matching an association with a user who does not really possess the house.
This proof document will be processed by an AI system that will either validate it or not. If the document is validated, it will be available to applications' reviewers to match it with an association. If not, the donor’s application will be withdrawn. After the donated houses have been matched with an association or more associations (if there are many houses that a lot of associations can use), the contact of the donor is given to the associations so that they coordinate to finalize the donation process. As an association: after registering in the application, associations can submit applications asking for matching with a donor. An approximate number of homeless people who will benefit from the donation should be specified in the application. It is then the job of applications' reviewers to review the application and decide on a match with a donor. As an application reviewer: applications’ reviewers are people recruited through the application in order to review the associations’ applications and match them to house/s donors. To be an applications' reviewer, one must apply to the job through the website (applications are available in case of need when the amount of applications is too much). Applicants must provide their personal information, but most importantly, proof of losing their job because of the pandemic. This proof can be of any kind: a screenshot of an email of firing (the email should be forwarded later to make sure it comes from a recruiter, a document..). This proof of losing a job, plus the first-come, first-served basis, and the description of the need in the application are the factors that the admins are going to rely on when assessing applications. Each applications' reviewer will get associations’ applications on a weekly basis. Their job is to assess the need for associations and match them with house donors in the same locations. They also have to distribute the houses in an optimal way taking the need and the impact into consideration. 
Applications reviewers get paid from donations to the web application. These donations have nothing to do with the house/s donations; they are monetary donations that can be made through the web application to a specific bank account for this purpose. Anyone can donate, including people not registered under any type in the application. More on how application reviewers get paid in the section below. Payment Policy Applications reviewers will get paid from donations. Since donations are uncontrollable, our team came up with an adequate solution. Applications reviewers will get a token for each application reviewed and thus an association matched with a donor. The value of a token changes on a weekly basis depending on the donations received. Here is a hypothetical scenario: we have 3 applications' reviewers who have reviewed 10 applications each; this means that each reviewer has earned 10 tokens, making 30 tokens in total. The amount of donations received in this week is 300 $, implying that a token is worth 10 $. In this case, each reviewer will receive 100 $ for this week. However, this method is not good if the amount of donations for a certain week is very high. Let’s suppose that in the same scenario the amount of donations is 30000 $; then a token will be worth 1000 $. This also means that a reviewer will earn 10000 $ for a single week. This might not be fair to other reviewers who will join in the coming weeks, when donations may be much lower. To solve this problem, we decided on having a maximum amount that a token cannot exceed, so that if the amount of donations is high, we save it for later weeks. Going back to our scenario, if we set the maximum worth of a token to be 20 $, and having 30 tokens to issue, we will spend 600 $ and save 29400 $ for upcoming weeks. Important notes: Before associations submit their applications, they have to agree to some terms and conditions.
An important condition is that the associations should engage the beneficiaries in society by making them help, either by doing a job, volunteering, or helping other homeless people. The goal of the application is not only to find shelter for these people but to try to engage them in society, especially during these tough times when we all have to unite. Link to the document about using AI in Again: [ https://docs.google.com/document/d/1RNNpGf3MIhp-lksVtGzXkH7Tb91Ilw4gRw7AJmu27bA/edit?usp=sharing ] How we built it To build our web app Again, we (the team members) divided the work into three parts: The front-end part (Mohamed Moumou): This part consisted of designing each web page in the web app, the story of Again, and all the scripts in the web app, as well as building the actual front end using the React framework. The back-end part (Ouissal Moumou): This part consisted of designing the database and building the actual back-end of our web app using the Express.js framework, MongoDB (for the database), and APIs. Deployment (Ouissal Moumou & Mohamed Moumou): We used Heroku to deploy both the back-end and the front-end apps. Accomplishments that we're proud of The team behind Again is very proud to be thinking about homeless people when everyone is thinking about the problems of those who have homes. That does not mean that those problems are not urgent, but it means that there is a huge part of society that struggled, and is now struggling more because of the COVID-19 outbreak, that needs urgent help and re-integration. Another accomplishment we are proud of is that our idea provides jobs for people who have lost theirs. What's next for Again 1- Implementing AI solutions in our App, 2- Adapting the services offered by the app to every country's laws, 3- Make our web app available in many languages (Arabic, French...).
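The weekly payout rule from the Payment Policy section above can be written down in a few lines of Python. This is a sketch of the described rule only — the function and variable names are ours, not from the actual app:

```python
# Sketch of the weekly token-payout rule: token value = donations / tokens,
# capped at a maximum; any surplus is carried over to later weeks.
def weekly_payout(donations, tokens_per_reviewer, token_cap=20.0):
    """Return (list of payouts, one per reviewer; amount carried over)."""
    total_tokens = sum(tokens_per_reviewer)
    if total_tokens == 0:
        return [], donations          # no reviews this week: save everything
    token_value = min(donations / total_tokens, token_cap)
    payouts = [t * token_value for t in tokens_per_reviewer]
    carry_over = donations - sum(payouts)
    return payouts, carry_over
```

Running the scenario from the description: with 300 $ in donations and three reviewers holding 10 tokens each, a token is worth 10 $ and each reviewer gets 100 $; with 30000 $ in donations and a 20 $ cap, 600 $ is paid out and 29400 $ is saved for upcoming weeks.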
Helpful hints about running the application in our demo site: http://againproject.herokuapp.com/ If the page returns an error message from Heroku, just refresh the page and it will work. Here are some login credentials for quick testing of the application: For an association: email: tasnimelfallah@gmail.com password: Tasnim123 For a house/s donator: email: mohamedjalil@gmail.com password: yay yay For an application reviewer: email: badr@again.com password: Badr123 (The information and metrics shown on our app are fictional.) Built With heroku javascript mongodb node.js react rest-apis uikits Try it out againproject.herokuapp.com againbackend.herokuapp.com github.com github.com docs.google.com
Again
Again is a solution that aims at securing a shelter for homeless people during the lockdown by matching associations and organizations that deal with homeless people and house donators.
['Mohamed MOUMOU', 'Ouissal Moumou']
['The Wolfram Award']
['heroku', 'javascript', 'mongodb', 'node.js', 'react', 'rest-apis', 'uikits']
60
10,395
https://devpost.com/software/corona-protective-smart-hat
CORONA PROTECTIVE SMART HAT ART WORK DIAGRAM We all know that the coronavirus is a very dangerous virus and we are all worried about it. The coronavirus enters the human body through the eyes, nose and mouth via our contaminated hands. When we go outside our home, we unconsciously touch our eyes, nose and mouth with our contaminated hands. This effective module is introduced as a personal protective intelligent hat. The hat is integrated with a small circuit made of a hall sensor, resistor, diode, transistor, buzzer, battery and switch. A manual switch is incorporated to turn the module on/off as required. The novel coronavirus has the capability to enter our body when we touch our eyes, nose and mouth. This special type of smart hat is based on the working principle of a hall sensor and a ring set with neodymium disc magnets. The hall sensor has the ability to detect the magnet within a distance of 3-3.5 cm. This detection distance can also be changed according to requirements. When the neodymium ring comes close to the hall sensor, the magnetic field creates a voltage difference in the hall sensor; this voltage difference is also known as the hall voltage. From this hall voltage, an output current is generated in the hall sensor. This output current goes through the resistor, diode and transistor. After the base of the transistor is activated, the current flows to the buzzer and activates it. The sound of the buzzer alerts the person against unconsciously touching their eyes, nose and mouth. This invention claims its importance on the basis of the present scenario of the world, and it is an obvious candidate to introduce in a new industry sector. Built With battery buzzer diode hall magnet resistor sensor wire
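The trigger behaviour described above — the hall sensor detecting the magnet ring within roughly 3-3.5 cm while the manual switch is on — can be expressed as a tiny software model. This is purely illustrative (the real device is an analog circuit, and the threshold value is taken from the description):

```python
# Software analogue of the hat's trigger logic. The buzzer sounds only when
# the switch is on AND the magnet ring on the hand is within sensing range.
DETECTION_RANGE_CM = 3.5  # per the description, adjustable as required

def buzzer_on(hand_distance_cm, switch_on=True):
    """True when the hall sensor would trigger the buzzer."""
    return switch_on and hand_distance_cm <= DETECTION_RANGE_CM
```

So a hand approaching the face to within 2 cm triggers the alert, while a hand at 10 cm, or any hand while the switch is off, does not.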
CORONA PROTECTIVE SMART HAT
Reduces the chances of unconsciously touching the eyes, nose and mouth with a contaminated hand.
['Sarthak Chatterjee']
[]
['battery', 'buzzer', 'diode', 'hall', 'magnet', 'resistor', 'sensor', 'wire']
61
10,397
https://devpost.com/software/flikk-flokk-ofwumh
Inspiration It is a global problem to sometimes be unsure about how things are supposed to be recycled. It is easy to ignore the problem and simply throw stuff in the trash, but it should be just as easy to know exactly how everything should be recycled. What it does It scans a product that you do not know how to recycle and tells you exactly how it should be recycled. How we built it We built an app. We built a server. We connected them to a database. We connected everything to data provided by the organizers. And then, we saved the world. Challenges we ran into It was a challenge to create the grading system that says how environmentally friendly a product is. Accomplishments that we’re proud of We are proud of our project as a whole with respect to how it will help save the planet with added recycling. The slick user interface of the app is also worthy of our admiration. What we learned We improved our teamwork and got better at communicating our ideas and feelings about each other's designs and code. What's next for Flikk Flokk To reach out to companies for collaboration on gathering data on how to recycle products. Built With barcode-scanner flutter google-maps node.js postgresql Try it out github.com
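The scan-and-look-up flow can be sketched as a simple barcode-to-instructions mapping. The barcodes, bins, and grades below are invented for illustration — the real app resolves scans against the organizers' data through its Node.js server and PostgreSQL database:

```python
# Toy lookup table: barcode -> recycling instructions + environmental grade.
PRODUCTS = {
    "5690000000017": {"name": "milk carton",
                      "bin": "paper (rinse and flatten first)",
                      "grade": "B"},
    "5690000000024": {"name": "plastic bottle",
                      "bin": "plastics (remove cap)",
                      "grade": "C"},
}

def scan(barcode):
    """Return recycling instructions for a scanned barcode."""
    info = PRODUCTS.get(barcode)
    if info is None:
        return "No recycling data yet - please check with your local centre."
    return f"{info['name']}: {info['bin']} (grade {info['grade']})"
```

Unknown barcodes fall through to a helpful default, which is also where outreach to companies for more product data (the "what's next" above) would pay off.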
Flikk Flokk
Flikk Flokk is a mobile app that makes recycling effortless. It reduces the time needed to find information about how to recycle a product.
['Þórður Ágústsson', 'Róbert Ingi Huldarsson', 'Þórður Friðriksson', 'Davíð Phuong Xuan Nguyen']
['Best data project']
['barcode-scanner', 'flutter', 'google-maps', 'node.js', 'postgresql']
0
10,397
https://devpost.com/software/greenbytes-nd0psl
Inspiration Overordering in restaurants leads to food waste, a lot of food waste. In Iceland, we throw away 5,840 tons of food every year. Food waste has a huge environmental impact that we cannot afford in the midst of climate change. Food waste releases greenhouse gas emissions during farming, processing, shipping, distribution, storage, and disposal. Food waste has a very large carbon footprint, water footprint, and land usage. We are dedicated to improving the wellbeing of our planet. After having personally seen buckets and buckets of food thrown away through a decade's worth of experience in the foodservice industry, we knew we had to do something. What it does GreenBytes reduces food waste by telling restaurants how much food they should be ordering. We have developed a cloud-based solution that helps food retailers reduce food waste and increase profits by breaking down restaurant menus and predicting future food consumption using our newly adapted machine learning algorithms. Our solution takes daily sales data into account to predict the amount of food necessary for the upcoming days. We use a Recurrent Neural Network (RNN) to predict the amount of each product that will be sold in the upcoming days based on past sales data as well as weather forecasts for the area a restaurant is located in. How we built it The weather data that was incorporated into the model was obtained from Veðurstofnun. This data was used with our already existing data from a restaurant in Reykjavík. All of the data was cleaned and visualized. The correlation between total sales and each weather variable was assessed and the relevant elements were included as parameters to train the RNN. After the RNN was trained, predictions were made for a predetermined number of days into the future. The amount of each ingredient required to fulfill the predicted orders was determined. To validate the predictions, two steps were taken. 1) The predictions were compared to the true number of items sold.
The average Root Mean Square (RMS) error for the predictions was 2.5, with an average product number of 6.6. 2) The known amount of food that was ordered for the restaurant was compared to the amounts that we predicted would be necessary to fill the order. The amount of waste produced over those three days was 648 kg. With the predictions, the amount of waste that would have been produced was -19 kg; the negative indicates that the model slightly underestimated for some items. Challenges we ran into We ran into challenges getting the predictions into a data frame to compare how much food is needed. Accomplishments that we're proud of We are proud of raising awareness about food waste and working to reduce it. We understand that food waste has an impact on our planet and our society, making it a problem worth solving. We are proud to provide a solution that can help restaurants benefit the planet and stay in business. We are living through a complex time where climate change and a pandemic are constantly on our minds. We think of GreenBytes as the contribution we can make to help chip away at the climate change problem while helping restaurant owners stay in business. At the core of GreenBytes is our belief in our local communities and the individual's ability to effect environmental change. If we can empower domestic businesses to improve the food supply chain, we can help Iceland serve as an international role model for better consumption and production practices throughout Europe. Europe throws away 88 million tonnes of food per year, which costs €143 billion; reducing this by 20% can save Europe €28.6 billion. The GreenBytes team has the industry knowledge and business model to be part of the conversation that starts with improving Iceland's domestic food supply chain. Our solution can help Iceland reach its Paris Agreement goals, as well as our domestic goal of becoming carbon neutral by 2040.
GreenBytes can help fulfill the following UN Sustainable Development Goals: Responsible Consumption and Production (12), Climate Action (13), Zero Hunger (2), Gender Equality (5), and Reduced Inequalities (10). What we learned Over the past week, we learned about the correlation between weather and food sales. The climate factors with the greatest correlation to total sales are gusts, max wind speed, precipitation, snow cover, day of the week, and sunlight hours. We learned that there are many datasets available in Iceland that could aid us in more accurately predicting the food requirements of commercial food retailers. With some more research, we can more accurately predict the quantities of food and have a greater impact on reducing food waste. What's next for GreenBytes The next steps for GreenBytes will be to further analyze the algorithms, conduct lifecycle assessment analyses, and conduct full-service trials. Now that we have seen an improvement in our overall algorithm accuracy through the introduction of external factors such as weather, we will seek out other datasets and see if we can draw further connections between food demand and external factors. We want to better understand the impact we are making in the restaurants we work with. In order to do so, we need to accurately quantify the waste we reduce and analyze what that means in terms of reduced carbon emissions, water usage, and land usage. Lastly, we need to test our full service in restaurants. Testing means having a restaurant input their menu, stock, and distributors and allowing our progressive web application to make order suggestions without our interference. Testing the web application will allow us to understand the scalability of our project. Built With jupyter python Try it out github.com
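The two validation steps described above, the RMS error of the dish-level predictions and the ingredient totals those predictions imply, reduce to a couple of small computations. A minimal sketch with made-up dishes, recipes, and numbers (not GreenBytes data):

```python
import math

# Hypothetical predicted vs. actual dishes sold over three days.
predicted = {"burger": [21, 18, 25], "salad": [10, 12, 9]}
actual    = {"burger": [23, 17, 24], "salad": [9, 14, 10]}

def rms_error(pred, true):
    """Root-mean-square error over all (dish, day) pairs."""
    diffs = [p - t for dish in pred for p, t in zip(pred[dish], true[dish])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative recipes: kg of each ingredient per dish served.
recipes = {"burger": {"beef": 0.15, "bun": 0.08},
           "salad":  {"lettuce": 0.20}}

def ingredient_order(pred):
    """Total kg of each ingredient needed to cover the predicted sales."""
    totals = {}
    for dish, days in pred.items():
        for ingredient, kg in recipes[dish].items():
            totals[ingredient] = totals.get(ingredient, 0.0) + kg * sum(days)
    return totals

print(round(rms_error(predicted, actual), 2))
print(ingredient_order(predicted))
```

Comparing `ingredient_order` totals against what the restaurant actually ordered is what yields the waste figures (648 kg vs. -19 kg) reported above.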
GreenBytes
Reducing food waste in restaurants using AI and weather forecasts.
['Renata Bade Barajas', 'Jillian Verbeurgt']
['Improved solution']
['jupyter', 'python']
1
10,397
https://devpost.com/software/hjolad-fyrir-umhverfid
GreenBike System architecture Inspiration Global greenhouse gas emissions have become a major problem that we must tackle together. Road transport is a large part of the problem and one of the areas where the public can have a direct impact. We know that through education and encouragement we can change people's attitudes towards transport, as with other issues. Every trip made by bicycle instead of by private car matters. What it does Our solution, Hjólað fyrir umhverfið (Cycling for the Environment), is a motivational cycling app that shows the user the carbon footprint saved by cycling instead of driving. It also sends a daily message of encouragement tailored to the next day's weather forecast. For example, the message for a fine-weather day might be: "Tomorrow morning will be great cycling weather, enjoy the ride," while for a cold, wet day it might be: "We won't let the weather stop us. Remember a rain jacket tomorrow; it will be an invigorating ride." The app's emission calculations are always based on the shortest driving route between two points, while the route guidance for the cyclist always follows bicycle paths and the best cycling route. How we built it We used draw.io to sketch the app's architecture and designed its look and flow in Figma. Challenges we ran into Our main challenge was keeping the idea simple and not adding too many features to the design. Accomplishments that we're proud of We have a realistic idea that can be implemented and built upon. The app's look and interface are clean and user-friendly. What we learned We gained good experience in ideation, teamwork, and user-interface design. What's next Development work to bring the Hjólað fyrir umhverfið app into general use. Built With draw.io figma opendata openstreetmap osrm Try it out www.figma.com
Hjólað fyrir umhverfið
Hjólað fyrir umhverfið (Cycling for the Environment) is a mobile app that tracks carbon savings and encourages cycling.
['Marta Björgvinsdóttir', 'Guðjón H. Kristinsson', 'Karítas Sif Halldórsdóttir']
['Best idea']
['draw.io', 'figma', 'opendata', 'openstreetmap', 'osrm']
2
10,397
https://devpost.com/software/towards-a-better-future-qfxe6k
Inspiration Climate change has long been an ignored problem, especially so in our country. Hence we found it heartwarming that Iceland is working to better it. We looked at the preliminary data and concluded that emissions continue to be the biggest problem. Why hasn't it been solved yet? We saw that there are already many potential solutions in the literature. What it does An interdisciplinary approach to a combined solution aimed at reducing emissions: broad policy-level changes whose validity and ease of execution we have discussed amongst ourselves. How I built it Each member of the team took on a different approach for finding a solution. Despite our busy schedules, we have made it a point to devote time to detailed brainstorming sessions and discussions. Challenges I ran into The first was of course getting the data itself. It was so hard to pull data from the links. None of us are data scientists or coders. In fact, we met over a study group for Coursera classes. So yeah, we discovered another website and started working off it. Next, we all had different ideas which had to be streamlined. Accomplishments that I'm proud of Pulling off the whole project with consensus. What I learned Learned many things about sustainable energy and environmentally friendly practices.
Also, that you do not need to be a data scientist or a huge personality to understand things like climate change and critically analyze them. And to check out things like how to make a video beforehand. What's next for Towards A Better Future We are going to refine it for sure, and go into detailed case-by-case analysis. Our presentation is here: https://drive.google.com/drive/folders/1xmEIRYfdwhUkolCzCSdFioBtoKsBAqZw?usp=sharing Built With ggplot2 r Try it out drive.google.com
Towards A Better Future
Reducing Emissions from Multi-Pronged Approach
["Alister-Dcruz D'cruz", 'Nevil Abraham Elias', 'SHAHANA BEEGAM S', 'Varun Chaturvedi', 'Ayushi Rashi']
[]
['ggplot2', 'r']
3
10,397
https://devpost.com/software/hemp-pack
When I started this journey this summer I did not know which direction I was headed. With Covid-19 crippling my ability to pursue my current self-employed line of work in tourism, I noticed a new piece of legislation being passed by the Icelandic government: the legalization of importing and growing industrial hemp. With very little background in agriculture but a strong determination not to be discouraged by the pandemic, I set out to be amongst the first group of local Icelandic pioneers to grow hemp, and four weeks later, after extensive research, I was planting seeds on my grandmother's land. Three months later I have partnered with the University of Akureyri Bio-Chemistry department in a joint venture to create a sustainable, cost-effective, and fully compostable bio-plastic product that is local. Our main focus is producing PHB, a bio-plastic created from microbes and industrial hemp. Our product replaces petroleum-based plastics and imported bio-plastics, thereby reducing the harmful toxins released by petroleum-based plastics both in recycling plants and in nature, while also reducing the carbon footprint of imported plastics. How is it made? The two main components of our PHB product are microbes and industrial hemp. After the hemp has been harvested, it is broken down into a carbon substrate through the use of enzymes. The microbes are then put under stress conditions and "starved". Under such conditions, and mixed with the bio-mass, they produce PHB, which can then be put into injection molding casts (such as traditional plastics use) and molded into any shape. The biggest challenge we've encountered has been how to present our solution in reverse: we are entering the Hackathon with an already formed solution, so our challenge has been finding a problem within the given dataset to match it. Other challenges I encountered have been learning how to farm hemp and connecting with the right people to tie it all together.
Despite these challenges, it has allowed me to step out of my comfort zone and learn a new skill set that I am very proud of. What I learned I learned that forming connections with the right people is important to develop an idea into a tangible and feasible project. All in all, through my research I have learned a substantial amount about sustainable products, how they are manufactured, and ultimately how they are utilized. I have developed a true passion for sustainability and for creating an environmentally friendly solution for businesses to improve their carbon footprint and achieve their corporate social responsibility. What's next for Hemp Pack Hemp Pack is a startup company stepping into its early stages of research and development. Our next step is to find out where the application of PHB would best suit the Icelandic market, either consumer or industrial. Optimization of the growing process and sourcing of microbes, as well as overall cost-effectiveness, is needed before the project can reach the next level. There are exciting applications of PHB within both markets: for the consumer market we are looking at plastics such as cups, cutlery, clothing, and packaging for other retail items. Within the seafood industry, there is a great need for compostable packaging to replace traditional bulk food shipping boxes and the plastic bags used in the fishing industry. The solutions mentioned above need to be explored during the next stage of business development. All prize money will go directly into working with the University of Akureyri on research and development to move our business forward. Built With baccillius cellulose enzymes hemp lignins teamwork
Hemp Pack
To create an Icelandic company in the sustainable packaging industry.
['Robert Francis', 'Kristofer Freysson']
[]
['baccillius', 'cellulose', 'enzymes', 'hemp', 'lignins', 'teamwork']
4
10,397
https://devpost.com/software/meniga-carbon-index-gom1ar
Although Icelanders enjoy heat and electricity from renewable energy sources, Iceland's carbon footprint is high by international comparison. This is partly explained by the "imported carbon footprint" of goods and services Icelanders buy from abroad. Iceland's greenhouse gas emissions grew by 30% between 1990 and 2018. In 2018, the average Icelander's carbon footprint was 12 tonnes, the 23rd-highest per-capita footprint in the world. Iceland has signed the Paris Agreement, and the government has set the goal of making the country carbon neutral by 2040. For Iceland to meet the Paris Agreement targets by 2030, emissions in certain sectors must fall by 30-40% relative to 2005 levels. Iceland participates in the EU and Norway's goal of a 40% reduction in greenhouse gas emissions by 2030, relative to 1990 levels. An action plan has been published for how Iceland will reach these targets. Unless Iceland makes a major effort to cut greenhouse gas emissions, it faces significant costs from purchasing emission allowances; how high depends on the market price of allowances. With our solution, people can become aware of the carbon footprint of their personal consumption, see how it has developed over time, and take realistic, simple steps to minimize it by changing their purchasing behaviour. Built With azure python streamlit Try it out drive.google.com
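At its core, a consumption-based carbon index of this kind multiplies a person's spending in each purchase category by an emission intensity for that category. A minimal sketch with invented categories and factors (not Meniga's actual intensities):

```python
# Illustrative emission intensities in kg CO2e per 1,000 ISK spent,
# per purchase category (made-up numbers, not Meniga's factors).
INTENSITY = {"fuel": 9.0, "groceries": 1.2, "clothing": 2.5}

def carbon_index(transactions):
    """Sum kg CO2e over a list of (category, amount_isk) transactions."""
    total = 0.0
    for category, amount_isk in transactions:
        total += INTENSITY.get(category, 0.0) * amount_isk / 1000
    return total

# One month of a user's categorized transactions.
month = [("fuel", 12000), ("groceries", 55000), ("clothing", 8000)]
print(round(carbon_index(month), 1))  # kg CO2e for the month
```

Tracking this monthly total over time gives the historical development of the footprint that the solution describes.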
Meniga Carbon Index
We built a carbon index that lets people see the carbon footprint of their personal consumption.
['Egill Vignisson']
[]
['azure', 'python', 'streamlit']
5
10,397
https://devpost.com/software/l-rvsgc4
Inspiration We originally looked at whether Iceland could handle full electrification of its car fleet, i.e. whether electricity production would be sufficient if everyone switched to electric cars, with the side question of how much an individual's carbon emissions would fall by swapping a petrol/diesel car for an electric one. But after reviewing the many calculators that claim to compute users' carbon footprints, we found that most share one big problem: they lack a baseline against which to compare yourself to the average person. We therefore changed the project to rebuilding these calculators to include the average person in the calculations. What it does The calculator lets the user enter their licence-plate number, or their fuel consumption in L/100 km and how much they drive per month, to work out their carbon footprint. How we built it We at &L used pandas to take a database from the National Energy Authority (Orkustofnun) and clean it thoroughly into usable data. Our code turns such "dirty" databases into clean, tidy ones that are easier to work with. Challenges we ran into What does a "dirty database" mean? Simple: we found that Orkustofnun had not followed all the rules of table formatting, for instance putting commas inside a CSV (comma-separated values) file, which broke it completely, as well as omitting units, among other things. For unrelated reasons, our web developers also dropped out of the project. Accomplishments that we're proud of We are very proud of how well our code cleans databases. We are also proud of the idea itself and its future potential. What we learned We learned that many institutions tend to forget units when recording data, and that it can take long phone calls with several parties to find out which unit is being used. We also discovered that a city bus consumes 41 L/100 km of diesel, and that electricity puts remarkably little CO2 into the air. We also learned to build a general data-cleaning script that should be easy to reuse and adapt to other data. What's next for &L &L's future plans are to finish building a website and to sell data to advertising agencies to better market electric cars, and to add more items to the carbon footprint calculation, such as international flights. Built With python Try it out github.com
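The kind of cleaning described, stripping stray units and normalizing decimal commas from a broken CSV export, can be sketched in pure Python (the team used pandas; the sample rows, column names, and the petrol emission factor of roughly 2.31 kg CO2 per litre are illustrative assumptions, not Orkustofnun data):

```python
import csv
import io

# Illustrative "dirty" export: a quoted comma inside a field, a unit glued
# onto one value, and an Icelandic decimal comma (not real Orkustofnun data).
dirty = io.StringIO(
    'plate,model,consumption\n'
    'AB123,"Yaris, Toyota",5.1 L/100km\n'
    'CD456,Golf,"6,3"\n'
)

def parse_consumption(value):
    """Normalize e.g. '5.1 L/100km' or '6,3' to a float in L/100km."""
    return float(value.replace("L/100km", "").replace(",", ".").strip())

clean = [{**row, "consumption": parse_consumption(row["consumption"])}
         for row in csv.DictReader(dirty)]

# The calculator itself then only needs consumption and monthly distance:
KG_CO2_PER_LITRE = 2.31   # approximate figure for petrol
monthly_km = 1000
monthly_kg = clean[0]["consumption"] / 100 * monthly_km * KG_CO2_PER_LITRE
print(clean, round(monthly_kg, 1))
```

Computing the same figure for the fleet average would supply the missing baseline the project set out to add.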
&L
We at &L want to help Icelanders see why going green is better, both financially and in terms of pollution.
['Huldar H', 'Kjartan Bjarmi Árnason Klein', 'Bjarki Björgvinsson']
[]
['python']
6
10,397
https://devpost.com/software/hark-umhverfissja
Team name Team HARK Members Þorri Líndal Guðnason - thorrilindals@gmail.com Ari Björn Ólafsson - ari@dynamo.is Lárus Óskarsson - larusoskars@gmail.com The problem the solution solves Few parties see a benefit in using Iceland's open data and bearing the cost and effort of publishing it outside public institutions. What the HARK environment viewer solves is automating the use and presentation of such data. The next step would be a comprehensive system to encourage people and companies to support green projects (forestry, production, and cultivation) or to invest in green projects. The solution is submitted in the category Best Data Project. Description of the solution The HARK environment viewer (Umhverfissjá) is a tool for publishing and processing public data with the latest mapping and data-processing technology. The solution is hosted on our own website and built with various tools, listed both on the site and in the submission form. The past week The past week went into examining environmental and economic data from various institutions from a geographic and temporal perspective. Tools for processing data, automating that work, publishing over the web, connecting such systems, and targeting the datathon submission were all major parts. The last twenty-four hours then went into assembling what we had and building a presentation site for the project. The solution's environmental impact If the project's next steps were taken seriously, with investment or the interest of government institutions, environmental awareness in society could grow, along with contributions to green projects. The economic impact could be considerable if enough people take part. The carbon footprint would fall accordingly and domestic turnover in green commerce would increase. What is missing For the project's later chapters to happen, we would simply need more time, pay for those working on the project, and better data to work with. For example, the Environment Agency's air-quality measurements over one year are fairly accurate and numerous, but the monitoring stations are so few that it is not possible to produce a visual representation showing the distribution of air quality. With many more stations the data could be put to far better use. There are more such examples in the data the institutions offer. Built With angular.js cartodb excel firebase html5 javascript json postgresql typescript Try it out hark.team github.com
HARK Umhverfissjá
Umhverfissjá - Visual presentation of public map data
['Ari Olafsson', 'Lárus Óskarsson', 'thorrilindal Þorri Líndal Guðnason']
[]
['angular.js', 'cartodb', 'excel', 'firebase', 'html5', 'javascript', 'json', 'postgresql', 'typescript']
7
10,397
https://devpost.com/software/eno-zgm8no
Inspiration Data connections are hard to use for most users, and problems occur when sharing reports or implementing new data sources. What it does Eno connects directly to data from public data sources and stores that data in v2 blob storage. Users upload their own data (or, in a future version, connect to their own data sources) via the web app, which has a web API that connects to the Excel add-in. How we built it Using a web application front end, SQL and blob storage, and an Excel add-in we developed. Challenges we ran into Gathering information from public data sources, which was sometimes a hectic and tedious process, hence our solution. Accomplishments that we're proud of Getting the MVP working in a timely manner. What we learned Public data is really not accessible and easy to use. What's next for Eno Eno requires additional/improved database structures so that it can eventually generate revenue through users connecting their own systems or databases and receiving that data within Excel via the add-in. Built With .net excel react sql
Eno
Eno changes the way you work with, share and report your data while giving users access to all publicly available datasources, without ever having to leave Excel.
['Ingimar Baldursson', 'Auður Líf Benediktsdóttir']
[]
['.net', 'excel', 'react', 'sql']
8
10,397
https://devpost.com/software/skrefinu-framar-2yund7
Inspiration "I go to all the climate strikes, but nobody is listening to us, and sometimes I lose hope." People love challenging each other on social media. Children in the Reykjavík Work School took part in environmental challenges this summer but felt they had no power to bring about lasting change. How could we challenge those who have little interest in environmental issues but do hold power in society? What it does The app Skrefinu Framar (One Step Ahead) encourages companies and individuals to be more environmentally conscious through fun short-term challenges and clear presentation of data about their impact. Progress for the individual, the company & the environment How we built it We used simple AI to calculate the carbon footprint of a specific behaviour; we chose driving and cycling. This was done to draw attention to the environmental impact of our decisions. We also gathered interesting data on Iceland's environmental performance and presented it in a readable, understandable way. Challenges we ran into We had difficulty processing our chosen data because its layout was awkward for the code. Accomplishments that we're proud of We are proud of the positive responses to the app from both companies and individuals in our market surveys, and of the data processing we completed in this short time. We are therefore proud to have a realistic solution that can benefit the environment. What we learned We learned that there is plenty of data in the world, but a huge need to use and market it better. What's next for Skrefinu Framar The next steps are better cooperation with companies, since they play a key role in the app. We also need to apply to the Innovation Fund and look for investors so more time can go into developing the project. More detailed market surveys are also planned. Built With bootstrap dash html5 marvelapp python xlrd
Skrefinu Framar
Progress for the individual, the company & the environment
['Kristín Sóley', 'Zoe Sands', 'Brynja Dagmar Jakobsdóttir', 'Sandra Karlsdóttir']
[]
['bootstrap', 'dash', 'html5', 'marvelapp', 'python', 'xlrd']
9
10,397
https://devpost.com/software/netzero
NetZero.is home page NetZero calculator Select offset method Success! Inspiration The use of data to assess personal lifestyle choices in the context of the climate crisis is what inspired our project. Another source of inspiration is the perceived lack of tangible options for individuals to offset their carbon footprint beyond what is possible through individual actions. What it does We created a solution that enables individuals to assess their carbon footprint and provides the opportunity to offset their emissions in a quantifiable way. How we built it First, we had to familiarize ourselves with the available datasets and find any missing data from reputable sources to fill the gaps, so we could work out our methodology for calculating the carbon footprint. Once we had found the data, we created emission factors to use in the carbon calculator. The factors included averages per person and more complex factors based on user input, such as emissions per portion of red meat. We also had to familiarize ourselves with the available carbon offset solutions to see which options we wanted to offer and which options are already available to the public. We used a production-ready, modern tech stack that we are familiar with and could use to develop the project further, in favor of simpler short-term solutions which would perhaps have allowed us to deploy a prototype with less effort. The front end is implemented in React and the back end in GraphQL and Postgres. The application is deployed on AWS. Challenges we ran into We ran into several challenges. The most notable involved the datasets, including figuring out a way to source the available data, format it, and merge different datasets in a meaningful way. Another facet of the data challenge was creating the emission factors and ensuring that the calculations used the data correctly.
Accomplishments that we're proud of We are proud of the design and user experience of the website and the usefulness of the calculator. Another point of pride is the incorporation of datasets both provided by the Datathon and sourced elsewhere; we managed to merge them into a useful format for the emissions calculator. We cannot forget to mention how important it was to choose what we consider valid offsetting options: something we feel has been missing from the offsetting market currently available to the public. What we learned We now have a deeper understanding of the most significant contributors to an individual's carbon footprint in Iceland and what areas are ripe for improvement. Adding the option to calculate the effect of owning horses on one's footprint is not present for its element of humor alone; part of the intention behind that choice is to highlight the disproportionate impact of humans' use of animals on total emissions. Equally important is the realization that individual lifestyle choices alone are not sufficient to lower emissions. In our solution, we highlight offsetting. However, it is hugely apparent that large-scale restructuring at the government and intergovernmental level is necessary to lower emissions on a scale beyond the abilities of individual choices. What's next for NetZero The next step is to establish a working relationship with Carbfix, access the ETS market to offer credits for sale on NetZero, and implement a functional purchasing site. We would like to expand the calculator to use reliable data for more complex calculations covering imports and exports, refrigerants, and waste disposal. Moving forward, we envision that NetZero could also function as a personal monitoring system for its users, in which it is possible to assess and address lifestyle choices to lower emissions. Built With amazon-web-services amplify graphql lerna postgresql react typescript Try it out www.netzero.is
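A calculator of the kind described boils down to per-activity emission factors (including, yes, horses) summed over the user's inputs, with purchased offsets subtracted. A minimal sketch with invented factors (not NetZero's real numbers):

```python
# Illustrative annual emission factors in kg CO2e (made-up, not NetZero's).
FACTORS = {
    "km_driven": 0.18,          # per km in a petrol car
    "red_meat_portions": 3.0,   # per portion
    "flights": 600.0,           # per short international flight
    "horses": 1100.0,           # per horse kept
}

def annual_footprint(activities, offset_kg=0.0):
    """Footprint in kg CO2e; purchased offsets are subtracted, floored at 0."""
    gross = sum(FACTORS[name] * qty for name, qty in activities.items())
    return max(gross - offset_kg, 0.0)

me = {"km_driven": 8000, "red_meat_portions": 100, "flights": 2, "horses": 0}
print(annual_footprint(me))                 # gross footprint
print(annual_footprint(me, offset_kg=500))  # after buying 500 kg of offsets
```

The `offset_kg` term is where quantifiable offsets such as purchased carbon credits would plug in.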
NetZero
NetZero is a solution that enables individuals to assess their carbon footprint and offset their emissions in a quantifiable way.
['Gunnar Sturla Ágústuson', 'Marissa Pinal', 'Halldór Reynir Tryggvason']
[]
['amazon-web-services', 'amplify', 'graphql', 'lerna', 'postgresql', 'react', 'typescript']
10
10,397
https://devpost.com/software/svifryksspa
Particulate pollution is a problem Particulate pollution harms people's health, and in Iceland's urban areas cars and everything that comes with them are the biggest polluters. Only part of the pollution from traffic comes out of the exhaust pipe; according to experts at the Environment Agency, road wear and the resuspension of dust are also among the largest sources. So despite the energy transition in transport, particulate matter will not become a thing of the past, and there is an opportunity to use data and technology for solutions. Forecast model Today, loftgæði.is offers various information on particulate and air pollution. Real-time measurements from around the country can be seen there, but the data are all observations and therefore only describe what has already happened; they cannot say what tomorrow will look like. Our solution aims to forecast the amount of particulate matter in the air several days ahead. The forecast shows predicted particulate values for the next 72 hours, as well as a daily forecast for the next 7 days. Users will be able to sign up for a mailing list to be notified in advance if the forecast particulate level exceeds reference limits. This raises public awareness of air pollution while encouraging people to use eco-friendly transport, thereby reducing particulate pollution. Building the forecast model The forecast was built on data from the Environment Agency and the Icelandic Met Office. Historical weather-observation, particulate, and air-pollution data from 2015 to the present day were combined, and real-time data with the latest measurements are also used through APIs from the two agencies. The latest weather forecast at any given time is then added to the observation data. The forecast model itself is based on AI built in Python, using a neural network trained on the historical data. The weather forecast is then used in place of historical weather observations to predict particulate and air pollution into the future. The dashboard is built in Node-RED. On the dashboard you can view forecast values for the pollutants. In the future it would also be possible to sign up for a mailing list to be notified when heavy particulate and air pollution is forecast. The dashboard is hosted on AWS and open to everyone; it can be viewed here. Next steps The team's next steps are to work with the Environment Agency to merge the forecast dashboard with the existing dashboard on loftgæði.is, so people can see both measured and forecast values in one place. We also want to work with Strætó to explore offering residents discounted or free bus rides when heavy particulate pollution is forecast. Built With amazon-web-services csv json keras node-red python tensor-flow ust.api.is vedur.is xml Try it out 52.49.106.50 github.com
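The first step the team describes, combining historical weather observations with particulate measurements before training the network, is essentially an inner join on the hour. A minimal sketch with made-up values (the real data come from the Environment Agency and Met Office APIs):

```python
# Made-up hourly series keyed by ISO timestamp (not real agency data).
weather = {"2020-01-01T10": {"wind_ms": 3.2, "temp_c": -1.0},
           "2020-01-01T11": {"wind_ms": 1.1, "temp_c": 0.5}}
pm10 =    {"2020-01-01T10": 18.0,
           "2020-01-01T11": 41.0,
           "2020-01-01T12": 33.0}   # no weather observation for this hour

def training_rows(weather, pm10):
    """Inner-join the two series: one (features, target) pair per shared hour."""
    rows = []
    for ts in sorted(weather.keys() & pm10.keys()):
        w = weather[ts]
        rows.append(([w["wind_ms"], w["temp_c"]], pm10[ts]))
    return rows

rows = training_rows(weather, pm10)
print(len(rows))   # only hours present in both series survive the join
```

The same feature layout is then fed to the neural network, and at forecast time the weather-forecast values take the place of the observations.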
Svifryksspá
Svifryksspá combines weather-observation data and particulate data so that weather forecasts can be used to predict airborne particulate matter in the coming days.
['Snorri Tómasson', 'Kristjan Theodor Sigurdsson', 'Örn Dúi Kristjánsson']
[]
['amazon-web-services', 'csv', 'json', 'keras', 'node-red', 'python', 'tensor-flow', 'ust.api.is', 'vedur.is', 'xml']
11
10,397
https://devpost.com/software/nuloft
Núloft Núloft provides information on air quality and forecasts particulate pollution in Reykjavík. Núloft's AI model lets users stay aware of air quality over the coming days. The data The air-quality model is built on air-quality data from the Environment Agency, weather-forecast data from the Icelandic Met Office, weather-forecast data from Bliku.is, and historical data from the European Centre for Medium-Range Weather Forecasts (ECMWF). The environment The public has a strong claim to good information about particulate matter and other pollution in the capital area. This is not only because of the positive effect that awareness of air quality has on the fight against global warming, but also because the authorities have broad powers to counter poor air quality. Núloft thus both plays a role in the fight against global warming and keeps the authorities accountable on environmental matters. The team The project team consists of Irma Leinerte, Sveinn Gauti Einarsson, and Þorkell Einarsson. Built With love Try it out svifryk.is
Núloft
Núloft shares information on, and forecasts, air quality in Reykjavík.
['Þorkell E.']
[]
['love']
12
10,397
https://devpost.com/software/fuglabjarg-bni7so
Inspiration We both have a burning interest in data processing, so we decided to sign up for the Gagnaþon together. We feel that much of the data specialists need to work with lives in many different places, e.g. CSV files and databases, across many institutions. We therefore wanted to design a solution that helps specialists fetch the data they want to work with from all these different places and institutions and put it in one central location, whether that is a database, CSV files, or something else, where they can play with the data. We believe this can save specialists a great deal of time, and thereby money, since specialists' hourly rates are not cheap. What it does Simplifies specialists' lives by fetching the data they want from many places and putting it in one central location for them to work with. How we built it We used Python exclusively; a fuller description of the work can be found here: https://github.com/stulli103/gagnathon2020 Challenges we ran into The format of the data we received at the start. The data we were given (the CSV files) had to be fixed before we could work with it. Accomplishments that we're proud of Working with a lot of heterogeneous data in a short time, i.e. from Hagstofa Íslands (Statistics Iceland), Umhverfisstofnun, Orkustofnun, and Veðurstofan. What we learned It can be done. If you have a plan. What's next for Fuglabjarg Build a FastAPI service with Python, a database, and a website; allow uploading CSV files with settings saved to the database; offer more kinds of data sources, such as direct database connections, with the same settings; also store data in NoSQL form if users want that option too Built With python
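A minimal sketch of the central idea — pulling rows from several institutions' CSV sources into one place, tagged with their origin. The institution names reuse those mentioned above; the columns and values are hypothetical:

```python
import csv
import io

def centralise(sources):
    """Merge rows from several CSV sources into one list, tagging each
    row with the institution it came from."""
    merged = []
    for institution, text in sources.items():
        for row in csv.DictReader(io.StringIO(text)):
            row["source"] = institution
            merged.append(row)
    return merged

sources = {
    "Hagstofan": "year,value\n2019,42\n",
    "Umhverfisstofnun": "year,value\n2019,17\n",
}
rows = centralise(sources)  # one combined list, each row tagged with its source
```

In a real deployment the `text` payloads would come from downloaded files or APIs, and `merged` would be written to the central database instead of kept in memory.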
Fuglabjarg
Simplify data gathering across a number of systems.
['Arnar Ingi Njarðarson', 'Sigurður Sturla Bjarnason']
[]
['python']
13
10,397
https://devpost.com/software/marea
Inspiration We were inspired by the UN Sustainable Development Goals, more specifically Goal 14, Life Below Water. What it does We are developing a new material that is completely biodegradable and provides a tangible solution for consumers and businesses to replace single-use plastic items. How we are building it By combining marine resources with technology, innovation, creativity and sustainability. Challenges we are running into We have an extensive and ambitious list of our product's properties that includes challenging milestones such as resistance to moisture, bacteria repellence, the efficient preservation of food items... We have checked many items off our list and are still working our way through the remainder of our criteria. Accomplishments that we're proud of We are well under way in the development and design of our prototype. What we learned Consumers are becoming exponentially mindful of packaging and its impact on the environment. What's next for MAREA To continue tackling the challenges and bring our product to market! Built With databases Try it out www.marea.is
MAREA
Bioplastics instead of single-use plastic: our project would solve the pollution problem stemming from single-use plastic in Iceland.
['Julie Encausse']
[]
['databases']
14
10,397
https://devpost.com/software/global-field-balancing-centres
Early understanding to be evolutionary for the environment A sense of service Inspiration Our inter-connected responsiveness for climate change or global warming is gaining vital importance, as we see environments are getting affected, wherein the stand to conserve and preserve life is a road ahead. Iceland is vitally important for the rest of the world, it indicates the balance for climate change mitigation and acts as an influencer for ecosystem preservation and virtual congregation of knowledge. . Iceland has a top priority to design Evolutionary thinking that matches procreative dimensioning with formative dimensioning to mitigate risks to its environments. What it does The Global Field Balancing Centre helps develop a Continuous Relationship Table with autonomic self-management for all the ecosystems involved. The Holistic sense of purpose: The insight is to achieve Acceleration for problem solving Questions used for the problem solving a. What are the elements of the weather in Iceland? b. What are the features of the climate of Iceland? c. What are the causes behind the changes in weather? d. Does the sun set at the same time and in all months of the year? e. Are the days and nights (virtue wise) equal in length in all months of the year? f. At what time or in which month, is the temperature of the day maximum? g. In which region or in which month, is the temperature of the day minimum? h. Are there mentionable adaptations shown by man? What is the risk? i. Are there mentionable adaptations shown by animals? What is the risk? j. Are there mentionable adaptations shown by the climate influenced environment and what are its frequent risks? k. What can mitigate climate change, depending upon the differences between weather and climate in Iceland? i. What are the sources of knowledge about the climate of Iceland or its regions? What are the characteristic features of the sources of knowledge? j. 
Are there plants or animal life more commonly seen depending upon the climate of a region? k. What are the weather forecasting systems? What are the weather reporting systems? m. What are the weather warning systems? n. What are the weather alerts that matter? o. Are satellites, radars, weather or climate watch systems active to broadcast details or accelerators for problem solving? p. Can winds affect these systems? q. What is the self-help available or relief available? What are the sense and respond systems available to manage climate change or climate extremity? r. Can snowfall or climatic extremes affect telephones, communication systems, soil systems, flora/trees, important or vital for functioning “facilities/buildings/infrastructure”? s. What is the diameter of a region? How is the boundary decided upon or defined by layouts? t. Is air movement or changes in air pressure a source of change for risk mitigation? o. Will a pendulum and its movement remain fixed at all times of the year? Whether time for development & growth of specific living things is constant? The virtual adaptation is described in the framework. p. Whether the water cycle is predictable? r. Whether the food chain is predictable? s. What are the natural, time related or foreign element decomposers ? t. What are the natural, time related or foreign element balancers? u. What happens to animal droppings, plant waste, man’s litter or disposal of waste, sewage, sanitation specific waste, dwelling specific waste or any other dynamic living entity’s waste? What is the main source of water, seas, springs, melting ice, glaciers, ground water, man-made wells? Are there agricultural systems? Are there waste water treatment plants? Are there sewage treatment plants? What is done to the sludge produced? Whether composting or bio-gas systems are operable? What is done to paints, oils, solvents, pesticides/ insecticides/hazardous chemicals? v. 
What is the interaction between the soil, water, air and living organisms? w. Does nature regenerate at all times? x. Does education cognitively enable people to sense & respond? y. Does healthcare cognitively assist people in health, wellness and disease prevention? z. Does economic development have nominators and denominators? Does there exist any sense & respond system for demand/supply? Does there exist, any definitive inventory of life support systems and resources? Does there exist, any dynamic demand conditioning? Does there exist, any Social performance conditioning? Are there surveys and assessments for any PRT counselling? PRT stands for Problem-solving, Remedial involvement and Transformational approaches. Are there Difference-to-Conservation-Strategy programmes in Iceland? Are there any Core and Complementary systems to decide on priority areas such as epidemiological study, clinical development, bio-informatics for a rational use of drugs, long term pharmacology, narrow therapeutic window and adverse drug event guidance? Have there been any condition monitoring systems incorporated to mitigate and help during the COVID-19 crisis? Can lifecycles of people, elements of the society & industry change from basic, to managed, to adaptive, to autonomous approaches, as or whenever identified by PRT counselling? Is there profit sharing to help Risk admittance and achieve more Quality in triple bottom line benefits & biz empowerment? Can Surveyed Data Integration (SDI) help for any Global Field Balancing of cognition myriads & ecosystems? 
4.a Technical details: (a) Data augmentation via a Continuous Relationship Management Table of continuous and discontinuous change (b) Definition of inhibitors, activators and inducers for process science, relief work versioning & environmental problem solving (c) Design of Integrators (Conscious Leaf) for eco-system therapeutic windows, sustainable ownership and “learning” grounds (d) Design of Perimeter protraction / inter-linking principles for achievable & proportionate problem solving, conservation and preservation (e) Global Field Balancing Centres to drive quarantining or responsiveness at the macro level 4.b Architecture: PPAS-SDI framework with Cognition myriads, inter-related ecosystem nominators & denominators, augmented procreation or balancing. PPAS stands for Permeation, Perpetuity, Advancement and Showcasing of nominators and denominators. How I built it The Global Field Balancing Centre is a framework that will be showcased for implementation via the landing page www.venkataoec.wixsite.com/globalresponsiveness The solution building introspects on associations of common importance such as (1) Relief work versioning for natural and man-effected disasters (2) Value chaining for environment or globally showcased conservation and augmenting pro-creation for different sectors and areas of businesses or industries (3) Balancing responsiveness via Functions such as + Risk reduction where versioning drives problem solving or involvement + Process science implantation, proportionate objectivity and responsibility for eco-system preservation + Standardizing of Tools and Techniques for ecosystem conservation or pro-creation + Knowledge harvesting for (ownership specific) self-preservation and networking of strategy + Autonomic Quality and Future Reasoning, where Problem-solving, Remedial endeavor and Transformational responsibilities save ecosystems. 
Versioning inter-connected macro solutions with dimensioning & future reasoning can help decision-makers step towards Autonomic Quality in Asset building, fulfillment of sustainable development & risk, and mitigating of risk & vulnerability. The dimensioning could include Targeted dimensioning Benchmarked dimensioning Fundamental space dimensioning where there are dimensions for Political, Scientific, Social and Spiritual liability Universal value dimensioning in outside perimeter responsibility Organizational value dimensioning for Code of Conduct and Agility to show Liability mindfulness, Resilience and Contingency planning The links provided build the foundation for the ideation. Endeavor India Proof of concept URL http://www.venkataoec.wixsite.com/ourindia2020 Help Integrate Africa Proof of concept URL http://www.venkataoec.wixsite.com/helpintegrateafrica Ensuing solutions for Manufacturing: http://www.venkataoec.wixsite.com/procreation Sustainable roadmap for tree conservation http://www.venkataoec.wixsite.com/treeconservator Connecting people and environments via http://www.venkataoec.wixsite.com/consciousleaf Challenges I ran into In Iceland and for that matter India, interconnecting of perimeters is a functional knowledge for us as we see states, cities and regions interconnected via geographies of both “micro level division and macro level connection”. To manage climate change and configurations that enable & augment pro-creation, the next step is about protracting further to inter-link ecosystems for achievable or proportionate problem-solving and principle of support for culture, sustainable development and growth (in an effort to be self-sustaining). All inter-linking is a transition, as not all elements are proportionately conservative & preservative or functionally similar for the time ahead. Accomplishments that I'm proud of The insight to develop a Continuous Relationship Table with autonomic self-management for all the ecosystems involved. 5.1. 
Connected ideation for Endeavour India, where the Issue of sustainable development & growth is being addressed via an accelerator called Inter-ecosystem balancing. 5.2. Submission to Help Integrate Africa via Data integration for Cognitive Education, Cognitive Healthcare and self-driven economic development via SMART Agricultural systems Proof of concept URL www.venkataoec.wixsite.com/helpintegrateafrica 5.3. Risk mitigation at a knowledge and strategy level Developing a Conscious Leaf framework for being physically distanced and socially connected Proof of concept URL www.venkataoec.wixsite.com/consciousleaf 5.4 Developing a tree conservation roadmap Proof of concept URL www.venkataoec.wixsite.com/treeconservator What I learned As per insightful quality practices, the Continuous Relationship Table will need to help the governing authorities and/or interested parties for social-accountability (1) manage continuous change (that is termed as a time related sense & respond function) and (2) manage discontinuous change (that is termed as an incidence related sense & respond function). To manage the aforesaid elements that are two different ends of our equation in managing climate change with sustainable development & growth or ends by themselves for responsiveness, we concern ourselves with autonomic sense and respond functions that can manage this type of change. The ideation states that, to do this, we conceptually need to develop or deploy Global Field Balancing Centres for scenario based inter-connections to help drive or quarantine responsiveness. 
What's next for Global Field Balancing Centres This is a Centre (like any Centre for Industry development) that can help seed, condition and develop more sustainable ownership for different role building associations (for all the current and emerging missions of India) , where there is Permeation, Perpetuity, Advancement and Showcased balancing for different Trust levels (like being Standardized, SMART (Specific-Measurable-Achievable-Relevant-Time-specific), Sustainable, Pro-creative (in showing Fundamental development or recovery)). Necessities to continue the project Deployment of FASTBIZ systems for fast biz empowerment and profit sharing to help Risk admittance and achieve more Quality in triple bottom line benefits Baseline Proof of concept URL www.venkataoec.wixsite.com/fastbiz Built With microsoft wix Try it out www.venkataoec.wixsite.com www.venkataoec.wixsite.com
Global Field Balancing Centres
The Global Field Balancing Centre introduces an Asset Profile to help Evolutionary Dynamics in Iceland and other climate affected countries
['Venkatram K']
[]
['microsoft', 'wix']
15
10,397
https://devpost.com/software/jofnum-okkur
Inspiration We put our heads together and thought about what we could build that would really make a difference for the environment. We figured the solution needed to be technically fairly simple, to increase the odds of building a proper prototype in a week, and thereby show that the real product would not be too big an undertaking to take further. What our solution does Jöfnum okkur is a website (link) where the public can offset their carbon footprint and see how far the country's households have come in offsetting their emissions. It also shows the standing of the municipalities, the contributions of the most active participants, and statistics on the carbon emissions of the Icelandic economy. How we built Jöfnum okkur We started by building the foundation of the website in React, with Bootstrap and ChartJS. The backend was written in Django/Python on top of a Postgres database, hosted on a virtual machine at DigitalOcean. We used Facebook login to speed up development. The datasets we used were both from Hagstofan (Statistics Iceland): population figures by municipality and settlement (link), and greenhouse-gas emissions from the Icelandic economy (link). Challenges in development It all went reasonably well, but the main risk factor was the race against time. Both team members have full-time jobs and households to look after. But since the hackathon's deadlines were clear, our spouses showed patience with our participation, even though it cost most possible moments together during the week. What we're proud of We are mainly proud of how robust a system we managed to build in our spare time this week. It was also great how well the collaboration went: we divided the work between us in an organized way, knew our strengths and weaknesses, and acted accordingly. What we learned We both learned various things, both programming-related and organizational.
One of the programmers had never touched React, so there was a bit of a learning curve, but he did splendidly and deserves a Heineken. The one handling the backend does not work as a backend programmer and had to brush up on some old moves. As for organization, we simply saw how essential it is to divide the work sensibly; otherwise team members risk working on the same things, which creates both confusion and frustration. What's next for Jöfnum okkur We naturally hope we win the prize for the best data project, which would be a springboard for the project and would hopefully mean that the institutions that could support it would see its merits and help get it off the ground. Built With bootstrap digitalocean django github githubpages javascript python react Try it out jofnumokkur.xyz
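For illustration only (the figures are made up and this is not the project's actual formula), combining population counts of the kind published by Hagstofan with a per-capita emissions estimate gives a municipality's offsetting progress:

```python
def offset_progress(offset_tonnes, population, per_capita_emissions_tonnes):
    """Share (%) of a municipality's annual emissions covered by registered offsets."""
    total_emissions = population * per_capita_emissions_tonnes
    return 100.0 * offset_tonnes / total_emissions

# e.g. 1500 t offset in a town of 5000 people at 3 t CO2e per person per year
progress = round(offset_progress(1500, 5000, 3.0), 1)  # → 10.0
```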
Jöfnum okkur
A carbon-offsetting portal that tracks the status and progress of carbon offsetting by the country's households.
['Simon Örn Reynisson']
[]
['bootstrap', 'digitalocean', 'django', 'github', 'githubpages', 'javascript', 'python', 'react']
16
10,397
https://devpost.com/software/ananke-t2we3g
Inspiration Necessity to include the environment in decisions toward a sustainable future Creating a solution that builds bridges between various entities (and datasets) Keeping a simple dashboard to provide an overview of a complex matter: sustainability What it does Aggregates data from various sources Gives a comprehensive reading of the sustainability data, including environmental data Ananke allows you to compare sustainability indicators with each other. Challenges we ran into Normalising and aggregating data Managing project scope (keep it simple) and priorities (demo quality over data quantity) Accomplishments that we are proud of Finding a solution that does not exist as such, and that mentors would be really interested in. Managing to produce a functioning demo in such a short time What we learned We learned about environmental issues, and how they affect society, directly and indirectly. Managing individuals' skills, hopes and ambitions. What's next for Ananke Normalise the data sets that were set aside Build comparisons with other countries Built With dash pandas plotly python Try it out github.com
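A small sketch of one common way to handle the normalisation challenge mentioned above — min-max rescaling so indicators in different units share a [0, 1] axis on the dashboard (the values are hypothetical, not Ananke's actual data):

```python
def min_max_normalise(series):
    """Rescale an indicator series to [0, 1] so different units are comparable."""
    lo, hi = min(series), max(series)
    if hi == lo:
        return [0.0 for _ in series]
    return [(v - lo) / (hi - lo) for v in series]

co2 = [4500, 4700, 4400, 4600]       # kt CO2e, hypothetical
normalised = min_max_normalise(co2)  # e.g. 4400 maps to 0.0 and 4700 to 1.0
```

Normalised series from several sources can then be plotted side by side with Plotly/Dash without one indicator's units dwarfing the others.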
Ananke
Dashboard of sustainable indicators for decision makers. Comprehensive reading of data for a sustainable strategic vision.
[]
[]
['dash', 'pandas', 'plotly', 'python']
17
10,397
https://devpost.com/software/gagnathon-fyrir-umhverfid
Inspiration . What it does . How I built it . Challenges I ran into . Accomplishments that I'm proud of . What I learned . What's next for Gagnathon fyrir umhverfid . Built With communication devpost google slack teamwork
Gagnathon fyrir umhverfid
Gagnaþon's purpose is to increase the utilization & visibility of open data. Participants process the data of various organizations to find solutions to environmental issues by using Icelandic open data.
['Stefán Örn Snæbjörnsson', 'Kristjana Björk Barðdal', 'idunn-a123', 'Anna Lóa Vilmundardóttir']
[]
['communication', 'devpost', 'google', 'slack', 'teamwork']
18
10,397
https://devpost.com/software/greenhouse-monitoring-system
(Gallery: wiring diagrams for the NodeMCU and sensors, application screenshots, the enclosure, and the soil moisture sensors.) Inspiration I was inspired to make this project because I would like to contribute a very useful project to the environment of all of us, and my contribution to the planet that we occupy today, namely our Earth. What it does This project helps you monitor your greenhouse. It consists of sensors: DHT11 (temperature and humidity), MQ-135 (air quality), and a soil moisture sensor. How I built it Step 1: Scheme The DHT11 sensor data pin is connected to the NodeMCU via pin D0. The soil moisture sensor data pin is connected to the NodeMCU via pin D1. The MQ-135 sensor data pin is connected to the NodeMCU via pin A0. The VCC pins on the sensors are connected to the VIN pin on the NodeMCU, and the GND pins to the GND pin on the NodeMCU. Step 2: Arduino IDE libraries The required libraries are: FirebaseArduino.h, dhtnew.h, ESP8266WiFi.h, ArduinoJson.h. Step 3: Google Firebase Google Firebase stores the data collected on the NodeMCU. This data can then be used on websites, in mobile applications, and anywhere with Internet access. Sign in using your Google Account and follow these steps: Click "+ Add project", fill in the information, and click "Create". After loading, click "Develop" in the left navigation bar,
then click "Database". Click "Create database", check "Start in test mode", and click "Enable". Next to the "Database" title, select "Realtime Database" in the drop-down menu. Click the "Rules" tab; in the code, replace "false" with "true". Back on the "Data" tab, copy the link of your database and insert it in the Arduino code. Click the gear icon (left navigation bar), choose "Project settings", click "Service accounts", and choose "Database secrets". On the right, copy the "Secret" code and insert it in the Arduino code. Now the NodeMCU and Google Firebase are connected. Step 4: NodeMCU The NodeMCU is a board that can connect to the Internet; in addition, it has several digital pins and one analog pin, making it excellent for projects that need Internet connectivity. The code required to connect to the Internet and to Google Firebase is shown below: #include <FirebaseArduino.h> #include <dhtnew.h> #include <ESP8266WiFi.h> #include <ArduinoJson.h> #define FIREBASE_HOST "firebase_link" #define FIREBASE_AUTH "firebase_secretcode" #define WIFI_SSID "wifi_name" #define WIFI_PASSWORD "wifi_password" void setup() { Serial.begin(9600); WiFi.begin(WIFI_SSID, WIFI_PASSWORD); while (WiFi.status() != WL_CONNECTED) { delay(500); Serial.print("."); } Serial.println(""); Serial.println("WiFi Connected!"); Serial.println(WiFi.localIP()); Firebase.begin(FIREBASE_HOST, FIREBASE_AUTH); } void loop() { } Step 5: DHT11 sensor Binding the dhtnew.h library to the pin the sensor's data pin is connected to is done with the following commands: int dhtPin = 0; DHTNEW dhtsensor(dhtPin); Reading the temperature and humidity: dhtsensor.read(); float t = dhtsensor.temperature; float h = dhtsensor.humidity; Finally, sending the data to Google Firebase: Firebase.setFloat("t", t); Firebase.setFloat("h", h); Step 6: Soil moisture sensor Initializing the pin for reading the soil moisture value: int soilPin = 1; Reading the data and sending it to Google Firebase:
int soilData = digitalRead(soilPin); Firebase.setInt("soilData", soilData); Step 7: MQ-135 sensor Initializing the pin for reading the air quality value: int airPin = A0; Reading the data and sending it to Google Firebase: int airData = analogRead(airPin); Firebase.setInt("airData", airData); Step 8: Android application The application was created in Android Studio. The part of the app that reads the sensor data from Google Firebase is shown below. For temperature, humidity and air quality: dref = FirebaseDatabase.getInstance().getReference(); dref.addValueEventListener(new ValueEventListener() { @Override public void onDataChange(@NonNull DataSnapshot dataSnapshot) { temp = dataSnapshot.child("t").getValue().toString(); text_temperature.setText(temp + "°C"); } @Override public void onCancelled(@NonNull DatabaseError databaseError) { } }); dref = FirebaseDatabase.getInstance().getReference(); dref.addValueEventListener(new ValueEventListener() { @Override public void onDataChange(@NonNull DataSnapshot dataSnapshot) { hum = dataSnapshot.child("h").getValue().toString(); text_humidity.setText(hum + "%"); } @Override public void onCancelled(@NonNull DatabaseError databaseError) { } }); dref = FirebaseDatabase.getInstance().getReference(); dref.addValueEventListener(new ValueEventListener() { @Override public void onDataChange(@NonNull DataSnapshot dataSnapshot) { air = dataSnapshot.child("airData").getValue().toString(); text_airquality.setText(air); } @Override public void onCancelled(@NonNull DatabaseError databaseError) { } }); For soil moisture (an if-else decides whether watering is necessary or not): dref = FirebaseDatabase.getInstance().getReference(); dref.addValueEventListener(new ValueEventListener() { @Override public void onDataChange(@NonNull DataSnapshot dataSnapshot) { soilmoisture = dataSnapshot.child("soilData").getValue().toString();
int soilData = Integer.parseInt(soilmoisture); if (soilData == 0) { text_soilmoisture.setText("No watering required."); text_soilmoisture.setTextColor(col2); } else { text_soilmoisture.setText("Watering required!"); text_soilmoisture.setTextColor(col1); } } @Override public void onCancelled(@NonNull DatabaseError databaseError) { } }); Part of the code linking the text defined in activity_main.xml (displayed in the application) to Google Firebase is in the attachment MainActivity.java; the xml file is also in the attachments. My GreenHouse Monitoring System is powered by a power bank: the power bank + is connected to VIN on the NodeMCU and - is connected to GND on the NodeMCU. Challenges I ran into There is no difficulty for us if we are indeed serious and trying to create our own project. Accomplishments that I'm proud of I am proud that I was able to contribute this project for the environment and for this hackathon event. What I learned Working with the DHT11 (temperature and humidity), MQ-135 (air quality), and soil moisture sensors. What's next for GreenHouse Monitoring System I hope my work can win as the best Google Cloud usage, and in the future I hope this project is useful so that I can provide new features in future versions of this work. Built With android-studio arduino c c++ google-firebase Try it out github.com
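The soil-moisture branch in the Android app above can be restated in a few lines; the sketch below mirrors the same decision in Python for clarity (the digital sensor reads 0 when the soil is moist):

```python
def watering_message(soil_data: int) -> str:
    """Mirror of the app's decision: a digital soil sensor reads 0 when moist."""
    return "No watering required." if soil_data == 0 else "Watering required!"
```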
GreenHouse Monitoring System
Android application monitoring system for GreenHouse, several sensors for reading data and send to application.
['Nur Rimba Fadil Muhammad']
[]
['android-studio', 'arduino', 'c', 'c++', 'google-firebase']
19
10,398
https://devpost.com/software/hack-my-tree-house
(Gallery: logo and t-shirt designs, prototype platforms, clamps and deck connections, Rhino scripts, Matterport and LIDAR scans, and computer vision for trunk detection using cloud segmentation and Hough Circle Transforms.) We're building tree houses! We are using tree houses to explore current, relevant, difficult challenges in the architecture, engineering, and construction (AEC) industry. The objective is a software tool and product definition that will allow parents and kids to configure a tree house that can be easily built, either with parents or independently by kids. We'll follow offsite construction principles, use simple materials, and treat the process as an exploration of "productization" in the built environment. Our hack is like a "professional art project" - our goal is to make you think by creating objects that defy current industry standards. What it does Our app uses reality capture, photogrammetry, and design automation to allow a user - hopefully a kid - to design and configure a tree house in their neighborhood. Pointing a phone or tablet's camera at the environment captures surfaces. Algorithms detect likely tree trunks and a user selects three trees to support their structure. The structure is shown in augmented reality on the phone camera display.
In a design mode, a user can configure platform and rail heights. The app is a deployment of a Unity project, which includes other algorithms and processes geared for desktop computing. For example, we can display point clouds, detect trees, and create networks of tree houses. How we built it We built it on Unity, using code we wrote from scratch (except for a few basic algorithms, like cylinder detection from a set of points). We used Matterport scans, publicly available LIDAR data sets, and a data set from Hexagon. The tree house was designed by us and detailed by engineers on the team. All graphic design was done by our team. Challenges we ran into A kid wanted a zip line, so we added a zip line. We initially designed based on a triangular platform but our self-imposed limitation of only using 2x4's limited our capacity. We considered using a team member's SPOT robotic dog by Boston Dynamics, but the LIDAR data from the nav unit is not yet accessible (but will be later this summer). Accomplishments that we're proud of We deployed an AR app! We're doing reality capture on very limited devices, like Android phones with no depth sensing. We created funny videos. We exceeded our expectations and used the talents of everybody who was willing to join our team. The design worked and resulted in great insights into offsite construction. No kids got hurt! We designed an awesome t-shirt! We did a home project with only TWO trips to Home Depot and ONE of those trips was because we needed zip line parts! That's amazing and a credit to software-based bill-of-materials generation. What we learned The stress of a complex hackathon project mimicked a real project and made us think hard about decisions and outcomes. Overall, the use of software to improve prefabrication outcomes was validated in a number of ways. For example, construction waste, fabrication errors, and field-driven design changes were eliminated. Jake Olsen is a pretty good singer. 
What's next for Hack My Tree House We think we can dominate the market for tree houses by adding just a few more tree house typologies. We created a website and may find others who can contribute to a growing software code base. Built With c# desktop github lidar matterport mobile unity Try it out treehousehacker.com github.com www.linkedin.com
Hack My Tree House
Our project provides kids with a tech-enabled process for tree house design, procurement, and installation. The goal is to take kids beyond Legos and Minecraft while exploring offsite construction.
['Shuai Cao', 'Brett Young', 'GastonBC', 'jmiller9001', 'Jake Olsen', 'Todd Elkins', 'Victor Varela', 'Mercedes CARRIQUIRY', 'Luke Gehron']
['BEST OVERALL PROJECT']
['c#', 'desktop', 'github', 'lidar', 'matterport', 'mobile', 'unity']
0
10,398
https://devpost.com/software/hey-henry
Hey Henry Logo Joke Where and When A construction voice assistant that leverages Google Assistant to answer project schedule questions in real time on mobile devices during team meetings and on the move. By querying activity, date, duration and location information from an XML export of a project schedule, our solution can be used for quick reference to help keep meetings on track and projects on schedule. Our team developed this solution entirely during the 2 weeks of the hackathon event. We had never met or worked together before the event. Our team combines years of experience in scheduling, construction project management, technology and programming. After daily meetings and brainstorming sessions, we learned to work together with new unfamiliar faces, and to respect each other's relative levels of expertise while pushing ourselves to learn a new set of technical and presentation skills. Our MVP solution leverages a real-life construction schedule exported from Primavera P6 to XML, hosted on Google Sheets, using the Sheetdb.io API, Google Dialogflow as the natural language processing engine, and the Google Assistant mobile app as the user interface. Built With google google-assistant google-sheets ios javascript p6 sheetdb.io Try it out github.com www.notion.so skanska.box.com
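A minimal sketch of the lookup behind a "when does X start?" question. The row shape here is hypothetical (in the real solution the rows come from the P6 XML export hosted on Google Sheets via the Sheetdb.io API, and intent matching is handled by Dialogflow):

```python
# Hypothetical rows as they might come back from the schedule API,
# one dict per activity exported from the P6 XML.
schedule = [
    {"activity": "Pour Level 3 Slab", "start": "2020-08-03", "duration_days": 4},
    {"activity": "Install Curtain Wall", "start": "2020-08-10", "duration_days": 12},
]

def answer_when(activity_name, rows):
    """Resolve a 'when does X start?' intent against the schedule rows."""
    for row in rows:
        if activity_name.lower() in row["activity"].lower():
            return (f'{row["activity"]} starts on {row["start"]} '
                    f'and runs {row["duration_days"]} days.')
    return "I couldn't find that activity in the schedule."

reply = answer_when("pour level 3", schedule)
```

The voice layer only needs to pass the matched activity name down to a query like this and speak the returned sentence.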
Hey Henry
Your project schedule now has a voice.
['Evan Reilly', 'Hassan Emam', 'Daniel Fahmi Soliman ★', 'Fernanda Benezra', 'Larissa Bianchini', 'James Norris', 'Omar Meky', 'Brooke Gemmell', 'Mauricio Santos Filho', 'Melissa Frydlo']
['BEST SOLVES A BIG PROBLEM PROJECT', 'BEST STARTUP PROJECT']
['google', 'google-assistant', 'google-sheets', 'ios', 'javascript', 'p6', 'sheetdb.io']
1
10,398
https://devpost.com/software/apatos-reshoring
APATOS Reshoring = Augmented Placement & Analysis Tool for Optimal Structural Reshoring Initial mind map session Outlining the risks and where they come from Existing workflow process map Name origin Data collection architecture mind map Example drawing for structural Inspiration for Hack One of our team members, Jason, was a concrete formwork designer for 4 years at a concrete contracting company. Jason spent most days analyzing structural concrete slabs, beams, and columns within vertical high-rise buildings to determine the best procedure to install the temporary formwork to pour the concrete in a safe and cost-effective manner. These structural calculations would take weeks to perform for high-rise buildings, and the data was difficult to communicate to others effectively. It is critical to determine the early-age load strength of the floor slabs to avoid the possibility of partial or total failure of the structural system due to construction overload. Decisions regarding the removal of forms and relocation of the shores are too often made without the benefit of a proper analysis of the structural effects, or in many cases, without any analysis at all. Load Distribution Methodology Used Apatos Reshoring automates the 'Simple Method' of concrete analysis by Grundy and Kabaila (1963), which includes the following assumptions: The deformations of concrete slabs are considered as elastic (shrinkage and creep of concrete are neglected) The shores are infinitely stiff relative to the supported slabs The reactions of the shores are assumed as uniformly distributed The lowest level of shores and reshores are supported on a rigid foundation at the beginning of the construction The loads applied to the slab/form system are distributed between the supporting slabs in proportion to their relative flexural stiffnesses.
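The last assumption is the heart of the Simple Method and is easy to sketch. The numbers below are illustrative only, not from a real analysis:

```python
def distribute_load(applied_load, slab_stiffnesses):
    # Simple Method (Grundy & Kabaila, 1963): interconnected slabs share an
    # applied construction load in proportion to their flexural stiffnesses.
    total = sum(slab_stiffnesses)
    return [applied_load * k / total for k in slab_stiffnesses]

# e.g. 100 kN shared by two young slabs and one mature (stiffer) slab
shares = distribute_load(100.0, [1.0, 1.0, 2.0])
```

Repeating this distribution at each construction stage (pour, strip, reshore) produces the per-slab demands that the visualization colors by demand/capacity.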
Construction Load Safety Factors Recognized ACI 318 specifies load factors for specific combinations of design loads used in design of the permanent structure. It does not, however, specify construction load factors. ACI 347 does not specify construction load factors. ANSI 10.9 recommends a combined load factor of 1.3 for both dead and live construction loads. SEI/ASCE 37 specifies a minimum load factor of 1.4 on dead load when combined with only construction and material loads, 1.2 for all other combinations, and a load factor of 1.6 for construction live loads. What our tech does Our first ribbon command is "Create Visualization View", which generates new 3D-geometry within the existing model by effectively tracing all structural elements (slabs, beams, columns) and calculating the respective construction live load for each section. The command then generates a visualization based on our calculated distribution of loads carried by the concrete structure, based on the estimated strength of the concrete members to resist the construction loads imposed. Our second ribbon command, 'Place Temporary Shoring', analyzes the imposed loads and determines an appropriate layout of reshores based on the specific conditions per point; schedules are generated. Our third ribbon command, "Create Reshoring Layout Sheets", analyzes the building data by level as well as all levels above, and generates a layout of reshores using dynamic families to place and extend the reshores to the site-specific conditions. Lastly, the fourth ribbon command, "Create Pour Sheets", generates layout plans for each pour (based on pre-defined Revit scope boxes) by floor. These are field deliverables for the actual deployment of the reshores. How we built it Due to the complexity of vertical high-rise structural concrete and temporary formwork, a LOT of requirements discussion with our team revealed a very challenging workflow that could be automated by using BIM data effectively.
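As a rough sketch of the SEI/ASCE 37 combinations cited above (a simplified reading for illustration only, not a code-compliant check):

```python
def factored_load(dead, construction_live):
    # Two combinations from the text: 1.4*D (dead with only construction and
    # material loads), versus 1.2*D + 1.6*L_construction. The larger governs.
    return max(1.4 * dead, 1.2 * dead + 1.6 * construction_live)

# e.g. 10 kN/m2 dead load with 5 kN/m2 construction live load
demand = factored_load(10.0, 5.0)
```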
While working with the team to better understand the design requirements, we spent extra time putting together some deliverables for the field. We wanted to ensure we spoke to one of the most immediate issues we identified: ensuring the field gets the information, not just the office. We used SimpleMind to map out our ideas and understanding of the existing workflows, and our research into the engineering codes, risks, standards and commercially available guides. We leveraged Trello to aggregate our initial thoughts and resources, and later moved to SharePoint. When the complexity of the design problem was better understood, we were able to leverage the Revit API and C# programming to generate some pleasing visuals in testing that seemed valuable for conversations about where demand is most heavily applied. Features Automated Structural Analysis and Temporary Reshoring placement based on Simple Method Generation of color-coded visualization sheets, indicating demands and capacity Automates Reshoring Layout Sheets (by level), with Dimensions and Schedules for Loads and Reshoring Bill of Materials Automates Pour Sheets (by pour sequence), with Dimensions and Schedules for Loads and Reshoring Bill of Materials Benefits Clear visuals, indicating where additional load is applied as load is distributed down the building Time Savings - Weeks of effort, automated into minutes Less manual data entry, less potential room for human error Model-based takeoffs and deliverable documents for field installation Challenges we ran into We attempted to use PowerApps to build a data architecture to interface with for collecting building structural data, but ran into issues building the appropriate connections. Accomplishments that we're proud of We solved the problem we aimed to tackle! We also built new Revit families of reshoring poles that dynamically adjust to the model's respective site conditions for the Clear Shore Height (CSH), which did not previously exist.
Built With .netframework assemble c# powerbi revit revitapi xaml Try it out github.com
Apatos Reshoring
Apatos Reshoring is a structural analysis and placement tool which assists complex building design by generating 3D-model load elements which visualize the demand/capacity of each concrete element.
['Tim Egan', 'Jacqueline Schklar', 'Jason Nicholas', 'Lee Posey', 'YanaBe Berkovich', 'Z S']
['BEST MASHUP PROJECT']
['.netframework', 'assemble', 'c#', 'powerbi', 'revit', 'revitapi', 'xaml']
2
10,398
https://devpost.com/software/twin-peeks
Embedded live IoT gauges IoT data simulation Choose data source for simulation 3D object insertion Tag data management Embedded quizzes Physical trailer: hydronics training module Inspiration We built a virtual training platform with live IoT visualization and simulations via a Digital Twin. We were inspired by UA's idea to provide an immersive experience for skilled trades training, and the potential for improving facility management by using reality capture. We want to simplify data management by centralizing access to IoT data, service records data for training and maintenance, as well as student knowledge - for reality capture on Matterport, 3D models on Forge and standalone 3D files etc. What it does Our virtual training center recreates in-person training and monitoring through rich interactive content in a digital twin, live IoT data visualization and simulations. We combine Matterport reality capture with sensor data (Fluke and other data sources), 3D objects, quizzes and educational content. Admins or instructors can: link their matterport scans upload any gltf/glb files link BIM360 / forge (not implemented) Admins can then: annotate the scans with rich content, embed videos, links, streaming video, etc. create virtual manuals, set up IoT sensors and data sources run simulations on IoT data, play and pause for teachable moments insert 3D objects to communicate complex information Authorized viewers or students can access the virtual training center via a web app and experience the digital twin in VR. This prototype uses an existing UA mobile training center. The end goal is to network 300 training centers into a virtual super classroom that provides training on any type of system relating to the plumbing and pipefitting trades. This solution is extensible to other use cases that can benefit from reality capture, IoT, data visualization and training. Examples include facilities management and other skilled trades.
How we built it We built a React web app to view and manage reality capture files and associated IoT data. It begins with OAuth2 for Admin and Viewer access. On both Admin and Viewer dashboards, users have the option to view their matterport scans, or upload any gltf/glb files - which we show in A-frame. Admins can insert 3D objects in both viewers, implemented with A-frame and the Matterport SDK. The application runs on Google cloud compute and uses a realtime database (Firebase) to store IoT sensor data, virtual manuals, educational content and quizzes for students. We used a simple graph library to simulate live IoT data visualization from historical data (CSV files). For all other data sources, we used Grafana (open standards) for IoT data visualization of big data (this prototype only used public datasets). We tried to add the app to a Knox container for additional security, but couldn't get approved for the partner program in time. Challenges I ran into There was no API for Fluke sensor data and the data was exported as excel sheets. We loop through xls data to simulate a live scenario. The Matterport scan, when exported as an .obj and converted to gltf/glb, lost a fair amount of resolution -- limiting our usage of A-frame. Applied for the Knox platform for biometric authentication 3 days before the deadline and didn't get approved in time. Definitely tried to add a lot of features. We are primarily backend engineers, so front end work was a bit slow and caused some delays and limited the fidelity of the final solution. Being new to web 3D, it took us some time to come up with an idea, and it took finding UA and understanding their needs to get going. Accomplishments that we are proud of Our team was very new to web 3D until 3 weeks ago and we were proud to have picked up the technologies, industry needs and direction in time for the hackathon.
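Replaying exported readings as a pseudo-live feed, as described in the challenges above, can be as simple as a generator. Field names and values here are hypothetical, not the actual Fluke export schema:

```python
# Sketch: replay rows exported from a sensor spreadsheet (no live API, per
# the text) so the UI receives them one at a time, as a live subscriber would.
def replay(rows):
    """Yield one reading per iteration from historical (timestamp, value) rows."""
    for t, value in rows:
        yield {"t": t, "co2_ppm": value}

history = [(0, 412), (5, 430), (10, 455)]
stream = list(replay(history))
```

In the real app each yielded reading would be pushed into the gauge/graph widget (optionally with a sleep between rows to mimic the original sampling interval).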
Support for multiple reality capture formats Works in web and VR Integrated Big data visualization libraries to help make sense of IoT data Ability to visualize historical data Ability to include educational content What we learned 3D web is learnable and fascinating. Mashup approach is a fast path to generating value What's next for Twin Peeks Knox and other secure containers -Biometric and badge authentication (RFID/QR) Access controlled areas in the scan Visualize IoT data in the 3D scene with heatmaps Work with UA to round out the educational/training platform LMS solution Understand the needs and infrastructure of niches like facilities management Focus on providing a simple solution to expand adoption of digital twins Peter, Arpita, Dane Built With aframe big-query bootstrap firebase fluke google-cloud grafana knox matterport react vr Try it out github.com docs.google.com
Twin Peeks
A platform for virtual training, IoT monitoring and facilities management via a Digital Twin
['Arpita Shrivastava', 'Peter Spannagle']
['BEST HACK FOR HUMANS']
['aframe', 'big-query', 'bootstrap', 'firebase', 'fluke', 'google-cloud', 'grafana', 'knox', 'matterport', 'react', 'vr']
3
10,398
https://devpost.com/software/wisdom-607kib
Team intro slide HACK_AdminAppForValidationOfAIHack HACK_MicrosoftTeamsIntegration HACK_DesignSoftwareNotification Inspiration Poor collaboration and knowledge transfer between the different AEC silos is one of the biggest problems in the AEC industry. This is a major factor in the industry's low digitization rating. We are creating the new superpowers of the AEC industry. What it does Our Hacks create bridges between AEC silos and therefore improve the industry's productivity. How I built it We have miner tools that mine data from the designs our engineers and architects produce daily. Then we have cloud AI engines that process that data and convert it into Wisdom. Our hacks are tools that connect to those AI engines, extract the Wisdom and then target the Wisdom, in various different ways, to the users when they need to consume that Wisdom. Challenges I ran into The datasets are huge: they amount to 600 years of design data, and the current data collection rate is 100 years of data per month. Training/validating the AI engines is a slow process. There were some issues when integrating the data consumption tools into the different tools that the engineers and architects use daily. But these issues were overcome. Accomplishments that I'm proud of The biggest accomplishment is that these tools will be deployed in the production environment next Monday and will be deployed to Sweco's project WISDOM ecosystem, which is now used by over 1500 people, and this number is growing very quickly. So these hacks will potentially help all those users. What I learned I have learned a lot of different tricks to integrate tools into the current design processes. This will be super useful in the future when I need to integrate other tools. In this Hackathon I also learned a lot of different tools/techniques for doing good visualization/marketing work.
What's next for Data2Wisdom The team members of Data2Wisdom are part of Sweco's dataScience team that has been working on Project WISDOM for a couple of years now. So all the things that we learned during the hackathon will be useful in our day-to-day work. We will also extend and build new AI engines to support our mission to use data to create the new superpowers in the AEC industry. Annexes Website built in the hackathon: https://sway.office.com/MPfNIItbG4dBHVY6?ref=Link Video link for the presentation final pitch: https://youtu.be/o91WxfqrBhQ Video for the HACK_AdminAppForValidationOfAIHack: https://youtu.be/iLkP5U1EyOU Video for the HACK_MicrosoftTeamsIntegration: https://youtu.be/W7wnkbtPsPA Video for the HACK_DesignSoftwareNotification: https://youtu.be/5ZAyx3WZvVw Others Unfortunately, all the systems we worked with are protected by a million firewalls and cannot be accessed outside Sweco. I tried to make some of them available, but I did not manage. It is also July, and most of the people here are now on holidays, so I could not get help with this. Anyway, I made videos of all the hacks, so you can get an idea of how they work. Built With .netcore3.1 apis azure c# powerbi python revit teklastructures webhooks wpf Try it out sway.office.com youtu.be youtu.be youtu.be
Data2Wisdom
Hacks that extract wisdom from a dataset containing 600 years of AEC design data which then target wisdom to the right person at the right time. Creating the new superpowers of the AEC industry.
['Ricardo Farinha']
['BEST HACK FROM A PAST EVENT']
['.netcore3.1', 'apis', 'azure', 'c#', 'powerbi', 'python', 'revit', 'teklastructures', 'webhooks', 'wpf']
4
10,398
https://devpost.com/software/bim-to-devs
CO2 Concentration in the office buildings Inspiration We have been looking for a way to convert BIM models for simulation and we were able to do so through the use of Autodesk Forge APIs. In doing so we were able to create a better and more meaningful visualization. What it does Extraction and translation: Starting with data from BIM360, our extraction tool takes the geometry of a model and converts it into a format called OBJ. This format is then voxelized and stored in a JSON file which can be run through a simulator. The output from the simulator is then parsed into a form which can be visualized. Simulation: Our lab's simulator, Cadmium, has a CO2 dispersion model which we used for this project. The model classifies the space as cells of different types: impermeable objects (used for walls), CO2 sources, workstations (used for chairs+desks) and air. Most of the model is occupied by air cells and all air cells start with a constant equilibrium value of CO2 concentration. The workstations start off unoccupied; a few steps into the simulation they are replaced by a CO2 source, and that is when the CO2 level changes can be seen. The CO2 sources occupy the workstations for a period of time to show what it looks like when the space has been fully staffed for an extended period, after which they return to being unoccupied for some time before the end of the simulation. This shows the amount of time it takes for the CO2 levels to return roughly to equilibrium. Visualization: It uses a THREE.js PointCloud. It reads the simulation output data in CSV format (using D3-fetch) and plots the x-coordinate and y-coordinate on the PointCloud. It uses ShaderMaterial, which provides various material properties such as point size, vertexColors and texture loading, with custom vertex and fragment shaders.
There is a color gradient legend (using D3.js) to reflect the color change of the CO2 concentration level in different time frames. How we built it We built it using a cloud-based developer tool, the Forge API. The programming languages we used are JavaScript, Python, and C++. Challenges we ran into Data extraction: A lack of precision when converting model information to Cadmium's input format causes some details to be lost. Some doors become too narrow while some passages are walled off. This was corrected by accessing the resultant file and modifying cells manually. Simulation Large 3D models take excessive amounts of time to run using Cadmium in its current state. To remedy this issue, we intercepted the model information after parsing and manually combined information from several layers into one. We then used this single 2D layer to run the simulator. This is out of scope for the hackathon. Visualization Autodesk Forge uses THREE.js version 71, so we had difficulties using the latest extensions and had to look around to implement newer elements. We went back and forth between PointCloudMaterial and ShaderMaterial for defining custom shaders. This was an issue because of the THREE.js version. Accomplishments that we're proud of Our team is very proud of the accomplishment, as undergraduate and graduate students made a significant contribution to the API's implementation. Our goal was to extract data from the BIM model in a format acceptable to our Cadmium simulator, and this was successfully implemented. The Cadmium simulator ran a large BIM model for the first time and took more time than practical to complete. But it performs a quick simulation on 2D models. We used the CO2 simulation results for a 2D model in order to visualize results in a timely manner. Though the extraction is not complete in its entirety, it is still a functioning prototype that will be developed further.
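The extraction step described earlier (geometry voxelized and stored as JSON for the simulator) can be sketched as follows. The cell types and JSON shape are simplified assumptions for illustration, not Cadmium's actual input schema:

```python
import json
import numpy as np

def voxelize(points, cell=0.5):
    # Snap each sampled surface point to a grid cell; occupied cells would
    # become impermeable wall cells, with everything else defaulting to air
    # (cell types per the simulation description above).
    idx = np.floor(np.asarray(points) / cell).astype(int)
    occupied = sorted({tuple(int(v) for v in row) for row in idx})
    return {"cell_size": cell, "impermeable": [list(c) for c in occupied]}

pts = [(0.1, 0.2, 0.0), (0.4, 0.1, 0.0), (1.2, 0.2, 0.0)]
grid = voxelize(pts)
payload = json.dumps(grid)  # serialized form a simulator-style input could use
```

This also shows why precision matters: with too coarse a `cell`, two wall points straddling a doorway can merge and wall the passage off, exactly the artifact described in the challenges.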
As a team, we are proud that we were able to integrate the connection between the simulator and BIM models during the hackathon. What we learned We learned that hackathons are a great resource of information and a hub for connecting with a variety of professionals. We were able to dive into technologies that provide a platform for processing and visualizing large amounts of data. The hacking experience was crazy, chaotic, and hectic, but we were able to meet our goals and have fun. What's next for BIM-to-DEVS Visualizing the simulation of CO2 levels or the spread of a pandemic in building models on a large scale like a campus will give contextual information to understand the phenomena. Our goal is to refine data extraction and make the file loading process take less time. Our future goal is to deconstruct large BIM models for appropriate data extraction and run a quick simulation. In the future, we hope to improve our simulator, Cadmium, to be able to handle large, 3-dimensional models in a much more time effective manner. Built With .obj bim bim360 c++ cadmium d3.js express.js forge javascript model-derivative-api model-derivatives-api python sqlite three.js Try it out github.com
BIM-to-DEVS
Data extraction and translation of BIM geometry to a voxel format for simulation and back for end user visualization.
['Vinu Subashini Rajus', 'Cristina Ruiz Martin', 'Saif Rahman', 'Mitali Patel', 'Gabriel Wainer', 'Zijun Hu', 'Griffin Barrett', 'Bruno St-Aubin', 'Tiago Ricotta', 'Thomas Roller', 'Kevin Henares Vilaboa']
['BEST AUTODESK BIM 360 PROJECT']
['.obj', 'bim', 'bim360', 'c++', 'cadmium', 'd3.js', 'express.js', 'forge', 'javascript', 'model-derivative-api', 'model-derivatives-api', 'python', 'sqlite', 'three.js']
5
10,398
https://devpost.com/software/bimsocket
BIMSOCKet GIF How BIMSOCKets share data How to use it We use Google Firebase under the hood to communicate between all the different programs Example of connection between Unity and our custom Three js Viewer Example of connection between Unity, Revit, the Database and our custom Three js Viewer Example of connection between VIM and Rhino Example of connection between Revit and google spreadsheets BIMSOCKet is a tool designed to be the road where data flows, so our idea is to allow users to exchange any type of format that they want Inspiration We work with different AEC software every day, and the migration of data and collaborating with people who are not in the same office is a constant struggle, even more so in the current Covid times. What it does BIMSOCKet is a bidirectional cloud socket to connect any type of AEC software in real time. How we built it We used Google Firebase under the hood to create each socket, so every time there are changes on the database it will let the connectors know to update any new data. As an exchange format we used Json V3, but our idea is to keep our solution agnostic so any format can be used with it. As a starting point we developed our own custom viewer using Three js and Javascript and started modifying the database from multiple computers at the same time. After we made sure that the connection with the database was working in real time, we decided to develop connectors for Revit, Rhino, VIM, Unity, Google spreadsheets and Power BI. Challenges we ran into -We discovered there are so many incompatibilities between different 3D engines: units, scale, normals, faces, vectors, 3D coordinates, languages, etc. -We started using just one Document per model in Firebase, but we faced problems due to the size limitations of said Documents. Our testing models exceeded Firestore's 1 MiB per-document limit, which led us to rethink the whole process.
-Checking for nulls is something to worry about in almost any programming language. We had to deal with multiple platforms interconnecting in real time, and chasing those nulls was half of the work. -Transactional software such as Revit needs special care during modification of its elements. We developed creative solutions to deal with it, e.g. object changes during Revit events. -JavaScript, especially JS event handlers and triggering changes in Three.js. -While git submodules are an amazing way to connect repos, it's a struggle to keep them updated. Accomplishments that we're proud of We had a blast coding and we learnt many concepts that we didn't know! We got the solution working between all the different programs in real time, hooray!! What we learned -We learnt a lot about the way each different 3D software API works. -Building on top of a simple concept opened up a world of different possibilities. -KISS is the key. -It is important to learn to prioritize and be lean. -Talking to potential users made us change many things, even though we thought we had a clear path. -Good ideas don't grow in a vacuum; sometimes it is just connecting existing things in new ways. What's next for BIMSOCKet -Add more connectors to it and grow the ecosystem -Improve stability of the Three js Viewer -Include glTF as another format. -Get the community involved with the project -Improve the system to handle bigger files -Add the clash detection feature to our viewer Built With c# css firebase html javascript python Try it out bimsocket.rocks github.com viewer.bimsocket.rocks
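One common way around a per-document size cap is to shard the serialized model across several documents and reassemble it on read. A minimal, backend-agnostic sketch (the limit constant is a hypothetical value kept below the real cap to leave headroom for metadata):

```python
def chunk_model(payload: bytes, limit: int = 900_000):
    # Split a serialized model into pieces that each fit in one database
    # document; a connector writes them as doc_0..doc_n and readers
    # concatenate them back in order.
    return [payload[i:i + limit] for i in range(0, len(payload), limit)]

data = b"x" * 2_000_000          # stand-in for a serialized Json V3 model
chunks = chunk_model(data)
restored = b"".join(chunks)
```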
BIMSOCKet
A bidirectional cloud socket to connect any type of AEC software in real time
['Valentin Noves', 'Pablo Derendinger', 'Ivan Alexander Corral', 'jason ekensten']
['VIM Mashup Prize']
['c#', 'css', 'firebase', 'html', 'javascript', 'python']
6
10,398
https://devpost.com/software/a-ar-geo-video-b-align-and-display-site-camera-in-forge
Team: Cool People's Really Exclusive Club Members: Jack Hayes 2 projects (A) AR Geo Video problem: clients want scene/street context for the location of underground utilities. solution: record an AR video that overlays the BIM model onto the scene and is also geo-referenced, so clicking a map point shows the relevant video portion. link: 128.199.43.119/AR_Geo_Video/index.html usage: wait ~10s for the video to load, then click any map marker, then click "play here" to play the video at that location. workflow to create: 1. Trimble SiteVision 2. heavy duty gimbal (Zhiyun Crane 3) 3. screen recorder 4. auto-screen clicker app 5. duct-tape the SiteVision with phone into the gimbal 6. set the auto screen clicker to click the "draw line point" button in SiteVision every 5s (this records the GNSS in a .csv) 7. set the screen recorder to start recording 8. start walking the route of the BIM model 9. after recording, export both the screen recorder video and the csv of GNSS points to the server 10. the video and GNSS points correlate since they both use the common phone clock for timestamps 11. the index.html shows how the two data sources interact (B) Align and display site camera in Forge (using Autodesk Dublin office) problem: clients want to see a live view of the site against their BIM model; also, how to align the camera without being on-site? what if the camera moves a bit?
solution: pick corresponding pairs of 4 points on the Forge model and 4 points in a frame of the video feed, use OpenCV to solve the camera position and rotation, set the Forge Viewer virtual camera to match the solved real camera, and display the video as two HTML canvases with the Forge Viewer having a transparent background. the OpenCV solver is "PNP" using the "P3P" version: https://docs.opencv.org/4.4.0/d9/d0c/group__calib3d.html#ga549c2075fac14829ff4a58bc931c033d calibrate camera files: http://128.199.43.119/site_monitoring_Forge/align_camera_to_Forge_model/ view model with video background: http://128.199.43.119:3001/index.html workflow: 1. take photos of a chessboard and record the square size: the calibration_images folder contains images taken by my DSLR 2. feed the calibration images to the OpenCV calibration script: camera_calibration_cmd.py was used to create the camera_calibrated.yml file 3. pick 4 points on a frame/screengrab of the video feed: image_coordinates.html was used to pick 4 image points 4. pick 4 corresponding points on the Forge model: the 4 points were read directly from Forge using the raycaster and reading the intersection points in console.log using this page: http://128.199.43.119:3001/getObjPoints.html 5. the original_camera_align.py script was fed the input data in camera_align_real_params.txt to output the solved camera pose to solved_camera.txt usage: go to http://128.199.43.119:3001/index.html and click on the model of the Windmill Lane office. Bugs: the camera seems to have the right position, but I'm not sure if the Forge camera is actually being set to the correct field of view and rotation; the slider is stuck for some reason; sometimes the model viewer doesn't have a transparent background. To Do: fix video slider; check Forge viewer field of view and rotation parameters; add live video using RTSP protocol; integrate Gantt chart to show 4D model progress against video feed; integrate all camera calibration steps into a single webpage w/ API to Python OpenCV back-end
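solvePnP inverts the pinhole projection model, so a quick way to sanity-check a solved pose is to re-project known model points. A small numpy sketch of that forward model (the intrinsics here are illustrative, not the calibrated DSLR values from camera_calibrated.yml):

```python
import numpy as np

# Forward camera model that PnP inverts: x_img ~ K [R|t] X_world.
K = np.array([[800.0,   0.0, 320.0],    # fx, skew, cx  (illustrative)
              [  0.0, 800.0, 240.0],    # fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                           # camera axis-aligned, looking down +Z
t = np.array([0.0, 0.0, 5.0])           # model 5 units in front of the camera

def project(X):
    """Project a 3D world point to pixel coordinates."""
    p = K @ (R @ X + t)
    return p[:2] / p[2]

u, v = project(np.array([0.0, 0.0, 0.0]))   # world origin
```

If the 4 Forge model points, pushed through the solved `R`, `t` and calibrated `K`, land near the 4 picked pixel points, the pose (and the Forge virtual camera derived from it) is trustworthy.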
Here the video correctly occludes part of the Forge viewer: Built With autodesk-forge html5 javascript leaflet.js opencv python three.js trimble-sitevision Try it out 128.199.43.119 128.199.43.119 128.199.43.119
(A)AR Geo Video (B) Align and display site camera in Forge
(A)record AR video that overlays the BIM onto the scene and also geo-referenced so clicking map point shows the relevant video(B)Align and display site camera in Forge (using Autodesk Dublin office)
['Jack Hayes']
[]
['autodesk-forge', 'html5', 'javascript', 'leaflet.js', 'opencv', 'python', 'three.js', 'trimble-sitevision']
7
10,398
https://devpost.com/software/embodied-carbon-augmented-reality-visualization
Inspiration Needing a way of communicating the embodied carbon emissions of construction materials in a compelling way to building industry stakeholders and the general public. Make the invisible carbon emissions visible and relatable. What it does After doing a Whole Building Life Cycle Assessment for a given project, we can visualize the results using Augmented Reality. The sizes and scales used in this AR visualization are from real numbers on a 10 storey mixed-use project where I did an LCA study. Note that this demo is not filmed at the location of the building that was modelled, so it does not reflect the emissions from this library building where it was filmed. It visualizes the volume of concrete, steel, glass, insulation, etc. used on this project, per m2 of floor area. Additionally, it shows how much embodied carbon emissions results from 1 m2 of floor area on this project, represented by these black spheres, each representing the volume of 1 kgCO2e at standard pressure. For this project, the baseline building was 528 kgCO2e/m2, so 528 black spheres are shown. To show 12% emissions reduction on this project, which was achieved through low carbon concrete mixes and changes in insulation, we animate 12% of these black spheres falling, with physics properly simulated. This is a file that can be sent and viewed through any iPad gen 5 or iPhone 8 or later. It can be embedded in websites and viewed in Safari, as well as sent as a file directly to others via iMessage or email. This means hundreds of millions of iOS devices could launch this AR experience, making it a compelling way to convey this information to industry stakeholders as well as the general public. How I built it Used Blender to create a sphere Downloaded material texture stock images for different construction materials Used Apple's Reality Converter to apply material textures onto the spheres and export into suitable file format. Import spheres into Apple's Reality Composer app on Mac and iPad. 
Created animations/behaviours using Reality Composer. All spheres are scaled according to actual volume in m3, represented by the volume of the spheres. Created 1 kgCO2e as black spheres, properly representing the volume at standard pressure. Showcased the 12% reduction in embodied carbon by having 12% of the black spheres fall down, with proper physics simulated. All values (material volumes and CO2 emissions) are based on LCA results from a 10-storey mixed-use project I had modelled previously using One Click LCA. Challenges I ran into Learning Reality Composer, Reality Converter, and Blender. Accomplishments that I'm proud of Learning how to create interesting AR experiences using Reality Composer. What I learned Reality Composer and the capabilities of this software to create AR experiences. What's next for Embodied Carbon Augmented Reality Visualization Refine the narrative and visualization, then apply this to future projects and get feedback from clients, architects, developers, etc. Share publicly on LinkedIn. Built With blender reality-composer reality-converter Try it out www.dropbox.com
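The sphere counts and sizes in the visualization follow from simple arithmetic. A minimal sketch, assuming "standard pressure" means gaseous CO2 at 0 °C and 1 atm (ideal molar volume 22.414 L/mol); the function names are illustrative, not taken from the actual Reality Composer scene:

```python
import math

MOLAR_MASS_CO2 = 44.01        # g/mol
MOLAR_VOLUME_STP = 0.022414   # m^3/mol at 0 degC, 1 atm (assumed "standard")

def co2_sphere_radius(mass_kg=1.0):
    """Radius in metres of a sphere holding mass_kg of gaseous CO2 at STP."""
    moles = mass_kg * 1000.0 / MOLAR_MASS_CO2
    volume = moles * MOLAR_VOLUME_STP              # ~0.51 m^3 for 1 kg
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

def sphere_counts(baseline_kgco2e_per_m2=528, reduction=0.12):
    """Spheres to render per m2 of floor area, and how many fall."""
    total = round(baseline_kgco2e_per_m2)
    return total, round(total * reduction)

total, falling = sphere_counts()   # 528 spheres, 63 of them animated falling
radius = co2_sphere_radius()       # ~0.5 m, so each 1 kg sphere is about 1 m across
```

This is why the falling spheres read so viscerally: one square metre of floor area corresponds to hundreds of metre-scale gas spheres.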
Embodied Carbon Augmented Reality Visualization
AR Visualization of Embodied Carbon for a specific project, visualizing the volume of different construction materials, the embodied CO2 emissions per m2, and reductions achieved on project.
['Anthony Pak']
[]
['blender', 'reality-composer', 'reality-converter']
8
10,398
https://devpost.com/software/react-bim
REACT BIM Inspiration The AEC community must rely heavily on the features of proprietary modeling programs to coordinate a vast amount of both geometric and parametric data for every project. The tools are amazing, but regardless of the feature set of the modeling program, the ability to innovate becomes completely coupled with the ability of the software vendors to progressively add features. Firms are forced either to work within the limitations of the software or to build add-ons via the program's SDK, which simply become faster, automated ways of running the modeling program's user interface and do little to address the inherent limitations. These problems become particularly painful at the interface between describing the design intent and extracting enough manufacturing data to actually construct. The BIM model is inherently a generalist tool, and the lack of a common modeling language makes it difficult to use this data in a way that can then be useful for digital fabrication. Creating a framework that allowed building component manufacturers to enforce the parameters necessary to drive their products could remove a lot of the design churn with secondary, more specialized modeling applications. There are many similarities between crafting a user interface and modeling the built environment. On the web, for example, an application consists of a nested hierarchy of components. These components are generally reusable concepts from application to application, but vary widely from project to project. Just as two buildings may not have the exact same window style, there exists the idea of a window in a building, and these components need to be customized to fit the nearly infinite needs of the building program. This is very similar to how a web component may be styled from application to application (or instance to instance). Components must also respond to changes in a building program.
During construction, building elevations, wall lengths, and the overall parameters can often change. Smart building design allows these components to be driven by top-level parameters. This is very similar to how a responsive web application may reorder and scale components based on the browser or device size. What it does Our program displays a simple web-based user interface which generates an XML model of the configured building. This XML has all of the instructions to configure and place the resulting part families into a building assembly. The XML file is then read in by an add-in for Revit and also Inventor, which has the task of configuring and placing the parts. The idea is that the add-ins can be fairly 'stupid' and all of the logic can be pre-determined by the React syntax. How I built it The user interface and the XML generation are all done in React. We created a simple building component framework in React that allows a developer to write the logic of the building component layout just as they would if they were writing a web page. Challenges I ran into We decided early on that Forge would be an ideal environment for this. However, Forge was new to everyone on the team and we had some problems completing this integration. Accomplishments that I'm proud of Proud of how everyone worked together. All of the individual pieces work great: the UI and XML generation work well, and the add-ins for Revit and Inventor also work really well. What I learned Learned a lot about Forge and very excited to learn more. What's next for REACT-BIM Lots of ideas on how to make this a viable framework. Need to create a library of standard building parts and a way of registering any parametric CAD part with the framework. Also looking at adding a geometric constraint solver and part-numbering service capabilities. Also planning to start using this to help model VIFAB's stair system.
The framework is barely even an embryo now, but with industry help this could be a powerful and useful tool. Built With .net inventorsdk javascript react revitsdk Try it out github.com docs.google.com www.youtube.com
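The core pattern here — component logic that serializes to an XML file the Revit/Inventor add-ins consume — can be sketched minimally in Python. Element and attribute names below are assumptions for illustration, not the actual REACT-BIM schema:

```python
import xml.etree.ElementTree as ET

def building_to_xml(families):
    """Serialize configured part families into a placement XML that a
    'dumb' add-in could read and execute."""
    root = ET.Element("Building")
    for fam in families:
        el = ET.SubElement(root, "Family", name=fam["name"])
        for key, value in fam["params"].items():
            ET.SubElement(el, "Param", key=key, value=str(value))
    return ET.tostring(root, encoding="unicode")

xml = building_to_xml([
    {"name": "Window-Double", "params": {"width_mm": 1200, "sill_mm": 900}},
])
```

The add-in side stays simple on purpose: it only walks elements and configures parts, while all layout logic lives upstream in the component framework.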
REACT-BIM
The built environment is a user interface. Let's build it like one.
['Simon Biddle', 'Luis Alonso Otero', 'Raul Sanchez Barato', 'Sahil Tadwalkar', 'Aaron Rutledge']
[]
['.net', 'inventorsdk', 'javascript', 'react', 'revitsdk']
9
10,398
https://devpost.com/software/sullybee
SullyBee Inspiration I realized how important it is to have a family together when I started following my dad to work at his construction site. He knew that working in the industry is a risk, because at any given time you are exposed to danger. As I grew up I felt that becoming an engineer would help make the world a better place, and I did, until I got my first professional job in a top NY firm. In my first month a grave accident occurred at the project I was placed in: two men had fallen off an aerial lift. One was critically injured and one unfortunately died. The gentleman who died left behind a family, while the other was critically injured and needed a hip replacement. As everyone scrambled to get information on the equipment and the men, I realized that there is a lot of room for technology to be used to help prevent more injuries, or worse, deaths. That technology can be SullyBee. What it does SullyBee helps communicate rental equipment needs better and faster, and updates everyone on the project, letting supervisors know how many pieces of rental equipment are in use at any given minute and who is using them. How I built it We are building it with IoT- and application-based software using Soracom, a Raspberry Pi, a Huawei modem, and Firebase for the software application. Challenges I ran into We ran into the following challenges: working remotely, coordinating the meetings, and working on the hardware from different areas. Accomplishments that I'm proud of The team we have, the project we are aiming for, and that we agreed to continue together after the competition. What I learned How to use IoT and program a Raspberry Pi. What's next for SULLYBEE Get one piece of rental equipment working successfully, keep building IoT devices for construction, and help improve safety. Built With amplify cloud glide ionic iot Try it out github.com docs.google.com
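The supervisor-facing view — who holds which piece of equipment right now — reduces to a small check-out/check-in ledger. A sketch; the class and field names are illustrative, not SullyBee's actual data model:

```python
class RentalLedger:
    """In-memory stand-in for the equipment-tracking backend."""

    def __init__(self):
        self.in_use = {}   # equipment id -> worker currently using it

    def check_out(self, equipment_id, worker):
        self.in_use[equipment_id] = worker

    def check_in(self, equipment_id):
        self.in_use.pop(equipment_id, None)

ledger = RentalLedger()
ledger.check_out("lift-07", "J. Avila")
snapshot = dict(ledger.in_use)   # what a supervisor sees at this minute
ledger.check_in("lift-07")
```

In the real system the check-out/check-in events would come from the IoT devices over the cellular modem, with the ledger state pushed to the app.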
SULLYBEE
Let's help the supply chain in construction be smarter and safer when ordering and using equipment on job sites.
['Jhoan C Avila', 'Ana Margarita Gonzalez Vasquez', 'Kudzai Tunduwani', 'Angel Beltre', 'Joseph Cardozo']
[]
['amplify', 'cloud', 'glide', 'ionic', 'iot']
10
10,398
https://devpost.com/software/streamvr
Revit Add-in Configuration Form Selecting Materials Onto Material Palette Using a Wheelchair to Review Universal Design Placing Families in an Open Office Setting Reviewing Paint Selection in a Patient Room Model Server Web-Viewer Selecting Furniture for a Lobby Inspiration Having full-scale mockups allows owners to understand how their building will function in a way that renderings can't convey. Currently, this requires additional cost for everyone due to the time it takes to produce VR experiences on an individual project basis. What it does Our solution enables bi-directional streaming of data between a Revit model and our VR application. This allows clients, architects, and contractors to edit their Revit models in VR and synchronize those changes back to Revit in real time. Teaser Video Demo Video How we built it There are 4 moving parts to our solution. Message Bus The message bus is the simplest component of our solution. We are using a NATS server to allow for real-time communication between our other system components. This is a light-weight bus that separates traffic into queues and channels that can be published and subscribed to. Model Server The model server allows for the storage of exported Revit family geometry and materials. It acts as a simple API: the Revit add-in can POST OBJ and material data and the VR application can GET them as needed. This server also comes with a simple web portal to view and manage cached data. Revit Add-in This add-in allows the Revit model to connect to the message bus and receive commands from a connected StreamVR application. Upon receiving a request or command, the add-in executes a subroutine. These processes primarily include Getting, Setting, Painting, and Deleting. Also part of this add-in is the ability to export Family geometry and Material textures to a caching model server that the VR application will later access. VR Application The VR application is where BIM data are rendered and interacted with.
It first connects with the message bus and the Revit add-in using the provided configuration parameters. It then queries Revit for structural geometry such as walls, floors, and ceilings along with metadata pertaining to Families and Materials. After retrieving these data, the application then renders the VR space, downloading OBJ models and textures from the model-server as needed. Challenges we ran into We certainly ran into plenty of challenges along the way. One of our biggest hurdles was handling hosted families. We encountered problems with synchronizing and updating placements when these families included sub-components. Another challenge we faced was the Unity3D learning curve as we came into this hackathon as relative novices. Accomplishments that we're proud of We are proud of all the progress we made during this Hackathon. Some noteworthy achievements include our dynamic menu loading and our Revit Family caching and retrieval which allows for the use of any Revit Family in VR. What we learned Along the way, we learned many deeper features about Unity3D including how Coroutines work and some of the finer points regarding shaders and post-processing. What's next for StreamVR In the future, we would like to enhance our dynamic lighting and material rendering. We would also like to solve the challenges we faced with hosted families and allow for placing them and moving them between hosts. Additionally, we would like to incorporate the option to move non-family geometry such as walls, floor, and ceilings. Finally, we built our VR app using Unity's XR toolkit which allows for future adoption by other hardware including AR headsets. Built With c# nats node.js revtiapi unity Try it out github.com
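The message-bus pattern described above — publishers and subscribers decoupled by named subjects — can be illustrated with a stdlib-only stand-in. The real system uses a NATS server; this sketch only mirrors the pattern, and the subject name and payload are made up:

```python
from collections import defaultdict

class MessageBus:
    """Minimal in-memory pub/sub bus: subscribers register handlers on a
    subject, and publishers fan each message out to those handlers."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, subject, handler):
        self._subs[subject].append(handler)

    def publish(self, subject, message):
        for handler in self._subs[subject]:
            handler(message)

bus = MessageBus()
received = []
bus.subscribe("revit.commands", received.append)              # Revit add-in side
bus.publish("revit.commands", {"op": "paint", "element": 42}) # VR application side
```

Because neither side holds a reference to the other, the Revit add-in and the VR application can connect, disconnect, and evolve independently — the property that makes the bi-directional sync workable.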
StreamVR
Develop a Revit add-in that allows for bi-directional communication with a VR application
['Andreas Brake', 'Lisa-Marie Mueller']
[]
['c#', 'nats', 'node.js', 'revtiapi', 'unity']
11
10,398
https://devpost.com/software/remote-labor
Sweeping Bot Prototype Inspiration COVID, and the fact that labor is hard. It would be nice for physical laborers on a site to work from home. What it does Tele-operated broom with wheels. Allows a worker at home to remotely sweep up on a job site. How we built it Took some design from a previous hackathon and expanded on it. 3D-printed parts, T-slot framing. Challenges we ran into Simulating the robot was harder than expected. Remote physical labor is a new concept. The size of the construction industry itself makes targeting use cases very hard. Accomplishments that we're proud of We can drive a robot around and sweep a room. What we learned Customer development is hard. Robot development is hard. Different markets (US / Brazil) have very different needs. Understanding the mindset of workers and managers. What's next for Remote Labor We'll pitch at the Startup round as well for the learning experience. Built With arduino gazebo go godot ros
Remote Labor
Work from home for physical labor.
['Jaqueline Natália Guerra', 'Imran Peerbhai', 'Brenan Lundquist']
[]
['arduino', 'gazebo', 'go', 'godot', 'ros']
12
10,398
https://devpost.com/software/bim-enabled-bi-tool
Inspiration With the rise of new technologies like BIM, analytics, IoT, and artificial intelligence, the complexity of systems and the pace of learning have been rapidly increasing. Experienced AEC professionals have been facing the challenge of coping with the change in technology alongside their busy schedules and workload. This has inspired us to work on demystifying the complexity of these technologies and making them available to professionals whilst keeping the complexity under the hood. What it does The solution we have developed integrates BIM models in the form of IFC with project schedules and pricing into an ecosystem that provides construction project managers a holistic view of project performance without having to worry about the hassles of integration and technicalities. How I built it We produced a backend using Python and the Flask framework that provides a RESTful API. IfcOpenShell was used to handle IFC files and parse them into a MySQL database. The frontend used libraries like three.js for reading converted IFC files and displaying them on screen, and the D3.js library was utilised to integrate with the crossfiltering functions of the Power BI application. We also used the Microsoft Power BI framework PBIVIZ, which uses TypeScript. Challenges I ran into Handling the large amount of data in big models and parsing IFC files into the database was a big challenge, as was the conversion of geometry into a format readable by WebGL libraries whilst maintaining references to IFC GUID objects. Accomplishments that I'm proud of The overall integration worked and it is smooth in the way models are handled. What's next for BIM Enabled BI Tool Performance improvements to enhance response times on large model processing, and a potential opportunity to investigate integration of VR/AR with BI and BIM for future improvements. Built With 4d 5d bi bim ifc kpi-dashboard python typescript Try it out app.powerbi.com
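IFC files are STEP text under the hood, so the parsing stage can be illustrated without IfcOpenShell. A toy stdlib sketch that tallies entity types from STEP data lines — roughly the first step before rows land in MySQL; the sample lines and GUIDs are made up:

```python
import re

# A STEP data line looks like: #12=IFCWALL('GlobalId',...);
IFC_ENTITY = re.compile(r"^#\d+\s*=\s*(IFC\w+)\s*\(", re.IGNORECASE)

def count_entities(step_lines):
    """Count IFC entity types in STEP-file data lines."""
    counts = {}
    for line in step_lines:
        match = IFC_ENTITY.match(line.strip())
        if match:
            name = match.group(1).upper()
            counts[name] = counts.get(name, 0) + 1
    return counts

sample = [
    "#1=IFCWALL('2O2Fr$t4X7Zf8NOew3FLOH',$,$);",
    "#2=IFCWALL('1hOSvn6df7F8x7GcBWlRGQ',$,$);",
    "#3=IFCDOOR('0jf0rYHfX3RAB3bSIRjmmy',$,$);",
]
counts = count_entities(sample)
```

A real parser (IfcOpenShell) additionally resolves the `#n` references between entities and evaluates geometry, which is where the big-model performance challenges mentioned above come from.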
The BIM Machine - BIM Enabled BI Tool
Our project involves integrating BIM models with a BI dashboard to remove the barriers and complexities from utilising these technologies.
['Mohamed Abdelaal', 'Hassan Emam', 'Marjan Sadeghi, PhD']
[]
['4d', '5d', 'bi', 'bim', 'ifc', 'kpi-dashboard', 'python', 'typescript']
13
10,398
https://devpost.com/software/w-e-b-whatsapp-evercam-bot
Whatsapp Bot Frontend Whatsapp Bot User End Inspiration 'I got into construction so I wouldn't have to use a computer or stupid apps' - this was said to me a few years ago when running a training session for a Site Diary app, and it stuck with me. With so much technology being introduced to support the completion of a project, the question of necessity is sometimes overlooked. We need the most important information at our fingertips with the smallest amount of effort. This is what inspired W.E.B. (Whatsapp Evercam Bot), a simple-to-use bot that you can send a couple of texts to and get back clear, concise information or a visual snapshot - simplifying communication on-site through live images and reports. What it does W.E.B. connects to your Evercam account and Whatsapp to give you a live snapshot of your project at the touch of a button. There are two main elements that can be captured through communication: Camera Live View and Gate Reports. How we built it We built our W.E.B. bot around WhatsApp capabilities using a headless browser: the backend starts a Puppeteer instance in the background asking for QR authentication. After the authentication, the bot is ready to receive the user's messages. The bot uses the user's phone number to authenticate to the Evercam API; once the communication is established, the bot is able to answer the user's questions. The bot backend sends HTTP requests to get the user's responses following a well-defined flow. Challenges we ran into Keeping the chatbot connected (for that, we are storing the headless browser session in an S3 bucket). Integrating with Procore - this was mostly down to our own backend issues, which we couldn't resolve in time to get a live demo of the integration. Accomplishments that we're proud of Successfully proving that we can use WhatsApp chatbots (or any similar service) as a construction API client. What we learned Designing a chatbot communication flow can be tricky.
What's next for W.E.B, Whatsapp Evercam Bot Improved functionality and integrations with Procore and Autodesk. We aim to build this further and make it a seamless part of the site team's daily toolbox. Now that we've managed to keep the chatbot running, it's only a matter of time before we start pulling more information into one place. Built With amazon-web-services api node.js postgresql Try it out github.com
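The "well-defined flow" amounts to routing each incoming text to a handler. A minimal sketch — the menu wording and options are hypothetical, and the real replies would come from HTTP calls to the Evercam API rather than strings:

```python
# Hypothetical command router for the bot's conversation flow.
MENU = {
    "1": "camera live view",
    "2": "gate report",
}

def handle_message(text):
    """Map an incoming WhatsApp text to the bot's next reply."""
    choice = text.strip()
    if choice in MENU:
        return f"Fetching {MENU[choice]}..."
    return "Reply 1 for a live snapshot, 2 for a gate report."

reply = handle_message("1")
```

Keeping the flow a flat lookup like this is what makes the bot usable from a muddy site with one thumb — no app, no login screen, just a number.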
W.E.B, Whatsapp Evercam Bot
Create a simple-to-use open-source WhatsApp bot to push & pull site images and reporting information from Evercam and Procore, to improve communication between the site team and administration
["Brendan O'Riordan", 'Salah Eddine Taouririt', "Eoin O'Neill", 'Marco Herbst', 'Javier Calero de Torres']
[]
['amazon-web-services', 'api', 'node.js', 'postgresql']
14
10,398
https://devpost.com/software/test-tgbz25
File processed successfully (view or download report) COBie report summary COBie submission page COBie report detail Inspiration Verification and validation of COBie files at the time of handover is difficult for both contractors and facility managers. As these files can be both extremely large and complex, reviewing them manually can be time-consuming, expensive, and prone to human error. There is currently an open-source command-line and GUI application available, developed by Bill East. Please see the link to GitHub https://github.com/OhmSweetOhm/CobieQCReporter/releases for details on these applications. While these open-source tools on GitHub are great, they lack professional support and they are not always accessible to less technical AEC specialists. What it does As our contribution to this hackathon, we built an online, installation-free, free-to-use application to support the verification and validation of COBie files. How we built it Using the source code from the open-source project linked above, we adapted this Java Swing COBie Quality Control reporter, exposed its functionality as an API, and built a Spring Boot / Angular web application. Challenges we ran into The main technical challenge was merging the frontend and backend components, which took a bit more time than originally expected. Accomplishments that we're proud of We have taken a powerful open-source application that was not accessible to most people in our community and made it more accessible and free to use. What we learned We got the opportunity to explore and understand the complexity and challenges faced by contractors, BIM managers, and facility managers delivering and receiving COBie data. What's next for Nextgen CobieQC The next objective is to adapt the application to support the coordination and management activity of BIM managers during the design and construction phases in relation to COBie deliverables. Built With angular.js cobieqcreporter java linux spring-boot tomcat Try it out 86.101.228.173
Nextgen CobieQC
Building a web frontend to support verification and validation of COBie files.
['Timothy Kelly', 'Ralph Montague', 'Patrick Slattery']
[]
['angular.js', 'cobieqcreporter', 'java', 'linux', 'spring-boot', 'tomcat']
15
10,399
https://devpost.com/software/cammunication-du1jml
Inspiration All 3 of us sat down in front of Skype to brainstorm ideas for the hackathon. Since we were all in a mess, and are very shy people, we did not turn on our cameras. While suggesting ideas to each other, the other 2 would get lost, or we could not tell what the other 2 were feeling. Then Benjamin mentioned, "if only there was some way for me to know what you two are experiencing, then I would not need to keep asking 'do you guys get it?'" That was when we hit our Eureka moment. Hence, CAMmunication! What it does CAMmunication is a software application that uses a computer's camera to scan and predict emotions through artificial intelligence and machine learning algorithms.
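Whatever model produces the per-frame emotion scores, the final step is picking the strongest label to show the speaker. A tiny sketch — the label set and score vector are assumptions for illustration, not CAMmunication's actual classes:

```python
EMOTIONS = ["happy", "confused", "bored", "engaged"]

def top_emotion(probabilities):
    """Return the label with the highest model score and that score.
    `probabilities` is one entry per label in EMOTIONS."""
    best = max(range(len(EMOTIONS)), key=lambda i: probabilities[i])
    return EMOTIONS[best], probabilities[best]

label, score = top_emotion([0.1, 0.7, 0.1, 0.1])
```

In practice you would smooth scores over several frames before surfacing a label, so a single odd expression does not flash "confused" at the presenter.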
CAMmunication
Communicating emotions; remotely
['Tan Wei Qiang, Benjamin', 'Imtiaz Bin Yazdany', 'Hariharan Mohanabala Krishnan']
['First Place']
[]
0
10,399
https://devpost.com/software/collaborative-classroom-jaw2z7
Room Features Login System Private Messaging System Room Entering Quiz Feature What it does While there exist many apps that allow for some form of online collaboration, they fall short when it comes to creating an ideal online classroom environment. This app aims to provide TAs and students with all the features they would need to conduct a conducive classroom. Some notable features include a real-time whiteboard, a real-time quiz-conducting feature, and a personal messaging system. How I built it This website is built with React as the frontend, with a NodeJS backend server. Whiteboard: Consists of a classroom whiteboard and a private board. Allows two-way communication between students, such as collaboration between student and teacher, or in group discussion. Zoom does not have a two-way whiteboard-sharing feature, while Microsoft Teams allows screen sharing but no whiteboard sharing. Whiteboard sharing allows for innovative teaching. Video/Audio sharing: Allows for video and audio streaming as well as screen sharing. Facilitates communication and encourages student participation, allowing two-way interaction between student and teacher. Real-time quiz maker/taker: Allows the teacher to effectively test the knowledge of the students in real time, to reinforce learning. The quiz feature is split into 2 components: a quiz maker for the teacher to create quizzes and view the results of the students in real time, and a quiz taker and score system for students to answer questions in real time that keeps track of their results. Real-time messaging, both in-room and private: Room messaging allows instant communication between teachers and students in the same tutorial room, while private messaging allows users to communicate even if they are not in the same tutorial room. There is also unread-messages support, which allows users to track messages quickly. Unlike LumiNUS quizzes, Collaborative Classroom allows for interactive evaluation and discussion of answers.
Challenges I ran into Bugs. Lots of nasty bugs. As this application involves persistent data in the database as well as real-time communication between users, it tends to be vulnerable to bugs. Moreover, we gained further insight into software development and the challenges of implementing a real-time service. Through this experience, we have become more aware of the possible pitfalls and common bugs in developing our server, and will be able to better look out for them when writing code in future. Accomplishments that I'm proud of We believe that we have become better software developers through this experience. This website consists of many libraries weaved together in an innovative way, including the video/screen sharing, real-time whiteboard, React grid layout, and socket programming. We are proud that we have managed to bring these libraries together into a single working product for both students and teachers. What I learned We have become more familiar with web development and how bugs could have been avoided with better coding practices. Also, why websites are down sometimes: an accidental push to the production server can cause the public website to be down! What's next for Collaborative Classroom NUS should take us up on the idea, as this provides a conducive environment for students to work together just like in real tutorial rooms. In addition, other universities and schools in general could use our app. Try it out Education is fun! Built With heroku node.js react Try it out shengxue97.github.io
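The real-time quiz scoring boils down to checking each submission against an answer key and updating a per-student tally that the teacher's dashboard reads. A minimal sketch — class and field names are illustrative, not the app's actual schema:

```python
class QuizSession:
    """Toy model of the real-time quiz: answers arrive one at a time and
    per-student scores update immediately."""

    def __init__(self, answer_key):
        self.answer_key = answer_key   # question id -> correct option
        self.scores = {}               # student -> running score

    def submit(self, student, question, answer):
        correct = self.answer_key.get(question) == answer
        self.scores[student] = self.scores.get(student, 0) + (1 if correct else 0)
        return correct

session = QuizSession({"q1": "B", "q2": "D"})
first = session.submit("alice", "q1", "B")    # correct
second = session.submit("alice", "q2", "A")   # wrong
```

In the real app each `submit` would arrive over a socket event, and the updated `scores` dict would be broadcast back to the teacher's view.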
Collaborative Classroom
An app that aims to create a productive classroom environment
['yuhongtay Tay', 'Sheng Xue Sim', 'xiuqi sun']
['Second Place']
['heroku', 'node.js', 'react']
1
10,399
https://devpost.com/software/idpyr4
Inspiration We observed a problem whereby remote working hinders group dynamics and communication, especially in newly formed student groups. What it does We aim to solve this problem by aiding team-dynamics building through a series of multiplayer challenges that serve to improve cohesion within the team. How we would build it We would utilize the Unity game engine, an industrial-grade game development platform, coupled with the Myers-Briggs Type Indicator to develop avatars that the team members would be assigned to. We would be using Metabase and Redash for data analytics.
IDPYr4
The future of Remote Collaboration for every team
['Aiden Koh', 'Anthea Foong', 'Jeremy Ong']
['Third Place']
[]
2
10,399
https://devpost.com/software/culture-shock-absorber
Inspiration Having worked in companies before, we felt that culture was something that was difficult to interpret. In most cases, our colleagues had a different impression of the culture than we did! We felt that there was no concrete method used by companies (especially newer ones) for cultivating culture, and with it an inclusive community. What it does Culture Shock Absorber allows employers to monitor the cultural fit of their employees to ensure that they are integrating well into the company culture. Simultaneously, it allows employees to better understand the company culture and adjust to their job roles. It is an app that provides a template for employers to create questionnaires for their employees to answer and send back. The app then helps to process the answers and rank employees according to their cultural fit. The compiled results are displayed on a dashboard for the employer to see and make decisions on how to help their employees better integrate into their company. How we built it Used React for the frontend, hosted on S3. We then set up the backend on an EC2 instance using ExpressJS. MongoDB was hosted on another EC2 instance and connected to our backend as well. Lambda functions for analysis jobs were then declared and endpoints attached using Amazon API Gateway. Challenges One challenge was setting up the AWS Lambda function to allow for processing of analytics jobs. Data needs to be sent to the function in an efficient manner in order to avoid excessive delays in analytics. Accomplishments that we're proud of Effective brainstorming sessions in terms of the idea and the app architecture. We were able to ideate quickly and pivot our idea as time went by to better address the problem statement and to make it a more robust application :) What we learned We learnt new full-stack technologies such as React and ExpressJS through the creation of the application.
What's next for Culture Shock Absorber We hope to complete the application and get some companies on board to trial it. Maybe this will help to strengthen company culture in an uncertain and disjointed time such as this! Built With amazon-ec2 amazon-lambda amazon-web-services express.js javascript mongodb react rest-api
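The ranking step — turning questionnaire answers into a cultural-fit ordering — could look something like this. The 1-5 answer scale, question names, and scoring scheme are assumptions for illustration, not the app's actual model:

```python
def cultural_fit_ranking(target, responses):
    """Rank employees by how closely their answers (1-5 scale) match the
    employer's target profile; a fit of 1.0 means a perfect match."""
    def fit(answers):
        gaps = [abs(answers[q] - target[q]) for q in target]
        return 1.0 - sum(gaps) / (4.0 * len(target))   # 4 = max gap per question
    return sorted(
        ((fit(answers), name) for name, answers in responses.items()),
        reverse=True,
    )

ranking = cultural_fit_ranking(
    {"autonomy": 4, "pace": 5},
    {"dana": {"autonomy": 4, "pace": 4}, "eli": {"autonomy": 1, "pace": 2}},
)
# dana scores 0.875, eli 0.25, so dana ranks first
```

A job like this is small and stateless, which is exactly why it fits an AWS Lambda function triggered through API Gateway.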
Culture Shock Absorber
The app that strengthens company culture from a safe distance.
['Justin Foo', 'Rohit Rajesh Bhat', 'Raphael Joseph']
['4th Consolation Prizes']
['amazon-ec2', 'amazon-lambda', 'amazon-web-services', 'express.js', 'javascript', 'mongodb', 'react', 'rest-api']
3
10,399
https://devpost.com/software/remote-collaboration
Just A Collaboration Tool Inspiration Remote collaboration is the new norm thanks to COVID-19. Given the many challenges faced during remote work and learning, our idea aims to improve users' remote collaboration experience by taking those challenges into account. What it does Experience the advantages of face-to-face interactions with JACT, bringing you novel AR and VR experiences! Complete with the best functions inspired by other collaboration platforms, we aim to provide a one-stop solution for easier collaboration, without toggling between various platforms like we do now. How did we come up with the idea? Usually, from our brains..? On a more serious note, through personal experience utilizing remote collaboration tools like Zoom and Slack during this period for orientations as well as lessons. Even while working on Ideate 2020, we utilized Google Meet and shared documents as well. The Singaporean thing would be to complain about the many problems remote collaboration platforms give us, yet we don't usually innovate new solutions to solve these problems. Hence through this hackathon, we hope our idea will be found useful, and will be created into a real app soon! Challenges we ran into The usual challenges that come with remote collaboration - unstable Wi-Fi, lack of real face-to-face interaction, and being unable to feel that 'human touch' that remote collaboration simply lacks. Also, we had to take turns speaking, ensuring no one spoke over another, even though there were only 3 group members. Accomplishments that I'm proud of Everything! We're really glad to have gotten to know each other through this process too. Kudos to the whole group for being really efficient, completing this project in a total of 3 meetings - brainstorming, slide deck completion, and the final submission! What I learned We've learnt a lot about the current tools available, which we wouldn't have discovered if not for Ideate 2020.
It has been a very eye-opening experience for our group as we explore different ideas and look at how we might address the limitations of remote collaboration. We hope that our ideas could spark a change in remote collaboration! What's next for JACT Coming to all App Stores soon!
JACT - Just A Collaboration Tool
Experience the advantages of Face-to-Face interactions in remote collaboration through JACT with AR and VR technologies.
['Allard Quek', 'Verlyn K', 'Patrick Zou']
['5th Consolation Prizes']
[]
4
10,399
https://devpost.com/software/pomotracker
Inspiration Ever since the COVID-19 pandemic started, we have had to complete much of our collaborative work via virtual meetings. Oftentimes, meetings scheduled for 2 hours end up lasting 4 hours! Yet, no work has been completed. Virtual meetings have become so accessible and convenient that we often take time for granted, because we are able to "just extend the meeting". This is especially true in an informal setting, i.e. a discussion among peers. Thus, we decided to develop Pomotracker to help virtual meetings stay on track and on time! What it does Pomotracker allows you to: Input your tasks and allocate a time limit for each task. When you start the timer on Pomotracker, it reminds you of the task and counts down the time for the task and the meeting. You can also share the timer with others in your team so everyone is on the same page.
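The core of the timer — mapping elapsed meeting time onto the task list and its budgets — can be sketched as follows (the class, the minute-based interface, and the sample agenda are illustrative):

```python
class MeetingTimer:
    """Shared agenda countdown: each task gets a time budget, and the
    timer reports the current task and its remaining minutes."""

    def __init__(self, tasks):
        self.tasks = tasks   # list of (task name, budget in minutes)

    def status(self, elapsed_min):
        for name, budget in self.tasks:
            if elapsed_min < budget:
                return name, budget - elapsed_min
            elapsed_min -= budget        # this task's budget is spent
        return "overtime", 0

timer = MeetingTimer([("standup", 10), ("design review", 45)])
```

Because `status` is a pure function of elapsed time, every participant sharing the same agenda and start time sees an identical countdown — no state needs to be synchronized beyond the start signal.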
Pomotracker
Pomotracker is easy-to-use software that helps virtual meetings stay on track. Users get a shareable online countdown timer for their tasks. Pomotracker will surely “prompt your problems away”
['Cheryl Kek', 'Sean Gee']
[]
[]
5
10,399
https://devpost.com/software/zcomm
Inspiration During Zoom meetings, I sometimes feel that lecturers talk too fast, or that the internet connection is not great. However, transcripts, recordings and subtitles can solve these issues. What it does It creates subtitles on the screen when someone talks, and at the end of the meeting, the recording and transcript are sent to all the audience members. What I learned Innovation comes from improving on original products. Built With server
Zcomm
Implements an online meeting server with subtitle, transcription and recording features to help people with hearing disabilities and provide convenience to audiences.
['Chih Chieh Chen', 'Nurul Fatimah', 'Seow Denis']
[]
['server']
6
10,399
https://devpost.com/software/ring-ring-ring
Inspiration We created this taking into account our recent experience as our internships and lessons transitioned online. What it does It introduces features that address common pain points of remote work and collaboration on video conference calls.
Ring Ring Ring
Modernizing Video Conference Calls
['Ming Soon Kor', 'Jerryl Chong', 'Aerin Ng']
[]
[]
7
10,399
https://devpost.com/software/focused-learning-oriented-webinar-plug-in-f-l-o-w
Our group was frustrated by the awkward interruptions online during meetings and conferences when multiple people would speak at the same time or one would not know when to speak. We figured there had to be a better way and that is why we came up with F.L.O.W. Built With convnet imagenet keras pyaudioprocessing tensorflow
Focused Learning Oriented Webinar Plug-in (F.L.O.W)
F.L.O.W is a novel plug-in that eliminates unwanted interruptions in video conferences.
['Harriz Adry', 'Abdullah Tariq', 'Suraj R Jaya']
[]
['convnet', 'imagenet', 'keras', 'pyaudioprocessing', 'tensorflow']
8
10,399
https://devpost.com/software/collabplay
Inspiration We surveyed the general population and found that the demographics facing issues while collaborating remotely are students and working adults. What it does CollabPlay is a one-stop platform for conducting collaborative work between people. It is available on desktop, mobile and tablets. What's next for CollabPlay Wearable CollabPlay smart devices like watches, headgear, glasses and belts that collect data from the user. This data can be consumed by the platform to generate more accurate results for PlayIdea and improve other AI functionalities
CollabPlay
CollabPlay is a one stop platform for conducting collaborative work between people. It is available on desktop, mobile and tablets.
['Jun Hao Ng', 'J jef', 'Zhiming Zhong']
[]
[]
9
10,402
https://devpost.com/software/coronasim
Inspiration In today's society, the coronavirus plagues the globe. As much as direct mitigation of such a pandemic is crucial, what's more important is both the prevention of and control over another pandemic similar to the novel coronavirus. CoronaSim aims to provide a streamlined means to simulate a pathogen with sophisticated, yet adaptive, technology. What it does CoronaSim essentially simulates a modern city with the use of hidden nodes and agents. Several features have been added so the user can customize the type of environment and the factors the simulation is conducted on. This allows the user to see the effects of demographic factors as well as preventative measures on infection rates, such as quarantining a building. The user can also analyze the simulation in real time with a graph to gain a better understanding of the correlation between infected and recovered. How I built it All of CoronaSim was created with JavaScript. Node.js is used for processing data/backend and React was used for the UI/UX/frontend. In terms of the "science" behind it, CoronaSim is inspired by the Susceptible-Infected-Recovered (SIR) model. For the sake of technicality, the people being simulated are referred to as "agents". At initialization, each agent is assigned the "Susceptible" state, with the exception of the indicated number of "Infected" agents. As time passes, infected agents spread the pathogen to the rest of the simulated population and other agents get infected as well. After contact with infected agents, infected agents then switch into a "Recovered" or a "Dead" state based on probabilistic values. Each agent's state function and data is stored serverlessly for seamless computing and processing as well as to limit lag on clients. The probabilistic values are defined by a deterministic Markov chain. A Markov chain is a model utilized to describe possible event sequences that can occur in the future.
In CoronaSim, the Markov chain determines whether an agent is to be infected or dead based on contact. Additionally, all agent pathfinding is facilitated by a similar Markov chain between varying building nodes, which act as schools, supermarkets, and hospitals. The simulation’s clock operates as a function of ticks. Each tick, every agent is subject to a transition model optimized with the weights of the coronavirus. For example, susceptible agents can only transition to infected after contact, while infected agents can transition to recovered or dead. Additionally, the model is compiled based on the assumption that recovered agents have developed some extent of immunity to the pathogen. Also, to improve the user experience as well as public awareness/education, I did some analysis and compiled data on the Demographic Transition Model (DTM), a means to evaluate demographic conditions in varying regions of the world. I applied the DTM to CoronaSim and created pre-determined logarithmic weights for the major regions of the world, based on factors such as climate, population density, and technological/medical development. This makes the health applications of CoronaSim more broadly applicable, and provides characteristic details for researchers and analysts. Challenges I ran into Storing each individual agent's state and contact-tracing data proved to take up a lot of memory and processing power. Therefore, the code monitoring agent state had to be optimized for minimalistic operation while still providing precise real-time feedback. Real-time monitoring was also a struggle in terms of the way I had to think about approaching the solution itself.
Accomplishments that I'm proud of I had a hard time with real-time data analysis and plotting but, by managing data in individual agents, I was able to create a wireframe in which each agent's state was reported directly to the module plotting the data points/line on the graph. Although seemingly small, I think it definitely makes a difference to the overall user experience. What I learned I learned how to: create multi-step functions for simulation, use the Demographic Transition Model, create a React frontend, maintain responsive and intuitive UI navigation, create a Markov chain, and collect data in real time, all while supporting various frontends/backends to make a cohesive application. What's next for CoronaSim At this point, I had compiled world data from the Demographic Transition Model and applied the contrasts among different regions of the world to generate weights. Ideally, the DTM data would be collected through an API to provide relevant statistics and greatly increase the precision of CoronaSim simulations. This way, CoronaSim not only acts as a stand-alone simulation, but a full-fledged technology for everyone to use in any way possible. Built With css3 html5 javascript node.js react Try it out github.com
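The per-tick SIR-style transition described above can be sketched as a small Markov step. This is an illustrative sketch (in Python rather than the project's JavaScript): the probabilities are placeholder values, not CoronaSim's actual DTM-derived weights, and `step_agent` is an invented name.

```python
import random

# Illustrative transition probabilities -- NOT CoronaSim's real weights.
P_INFECT, P_RECOVER, P_DEATH = 0.3, 0.1, 0.02

def step_agent(state, in_contact_with_infected, rng=random.random):
    """Advance one agent by one tick: S -> I on contact, I -> R or D by chance."""
    if state == "S" and in_contact_with_infected and rng() < P_INFECT:
        return "I"
    if state == "I":
        r = rng()
        if r < P_DEATH:
            return "D"                   # dead is an absorbing state
        if r < P_DEATH + P_RECOVER:
            return "R"                   # recovered agents are assumed immune
    return state                         # R and D never transition back
```

Running this once per agent per tick reproduces the S -> I -> R/D flow the simulation is built on; a susceptible agent with no exposure simply stays susceptible.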
CoronaSim
An adaptive COVID-19 pathogen simulation for medical professionals, government officials, and public education/awareness
['Rohit Rajan']
['Overall Best in Hackathon (1st Place)']
['css3', 'html5', 'javascript', 'node.js', 'react']
0
10,402
https://devpost.com/software/smartpill
boom Built With javascript smartphone
Smart Pill Bottle
boom
['Kshitij Dhyani']
['Runner Up (2nd Place)']
['javascript', 'smartphone']
1
10,402
https://devpost.com/software/masterprofile
Screenshot 4 Screenshot 2 Screenshot 3 Screenshot 1 Screenshot 5 Screenshot 6 Inspiration We’ve all been part of the job process. It’s a long, gruesome process, where we have to create a resume, send it to the employer, and get a response many weeks later. We decided to create a simple, but revolutionary platform and standard to allow employers to receive information about potential employees. What it does MasterProfile is a web application that has one input and two outcomes. First, the user inputs their entire profile and portfolio. They add items such as jobs, achievements, projects, skills, their bio, and more. For the first outcome, we auto-generate a completely free portfolio website for them that they can customize. They can share this website with anyone. The second outcome is based on our token system. Each user is assigned a token, and with that token, companies and employers can access the profile and portfolio of the user through our API. Our API has extensive documentation that employers can use. Our API is also built on our standard. We standardize the way users’ portfolio data is presented. No more hand-reading resumes with missing information and everything in different places. Our standardized API creates opportunities for portfolio automation and analysis. Along with the standardization comes new business models and opportunities. For example, if an employer wants to find which candidate is the best match, they can use a product which offers a rating to each candidate based on the job requirements, through machine learning. There can be entire products that are focused on assigning ratings to job candidates! This presents endless opportunities for companies and users alike. How we built it This project required extensive backend and frontend development. The UI of our project was built using the Materialize CSS framework. Meanwhile, the backend of our project was built with Flask, the python-based web micro-framework. 
We are using Heroku to host our project, and we are also using a PostgreSQL database to handle user information and portfolios. Our UI design process was a multi-step process designed to ensure the quality of the UI and UX. First, we wireframed our UI using Balsamiq. Then, we designed the rough UI with HTML, CSS, and JavaScript, using Materialize CSS. After that, we combined our frontend and backend to finish the UI. Finally, we quality-controlled our UI and UX through testing. Our landing page was built with Bootstrap Studio, a WYSIWYG HTML editor. Creating the backend was also a multi-step process. First, we created database models using SQLAlchemy, and created forms using Flask-WTForms. We eventually did the login and register pages, and then the portfolio management pages. Soon after, we wrote the code that auto-generated portfolio websites for every user. Finally, we wrote the API which allowed companies and employers to get the data of users through the secure token. The API reference was generated using Stoplight. Challenges we ran into We ran into multiple challenges when creating MasterProfile. The first challenge we ran into was allowing users to customize their portfolio websites. We had to use an unfamiliar Python library, Colour, to handle the color-choosing part of our customization feature. Another challenge was designing the landing page, known as the “How it works” page. That page was not designed using Materialize CSS; rather, it was designed using Bootstrap Studio and a Bootstrap Studio template. We also had a particularly hard time designing the way the user’s portfolio would be displayed with Materialize carousels. For the backend, we had an issue developing the API and returning the data in JSON format, so we developed our own method of returning the user’s information in JSON format.
Accomplishments that we're proud of We are proud that we built a fully-functioning web application with an extensive backend and stunning frontend. We are also proud of our API, which allows employers to make data-driven decisions on who to employ. What we learned Delegation & Teamwork We learned how to delegate and assign tasks to different group members, so we could work simultaneously. For example, if Vishnu was working on the backend and the login and register pages, Lavan would be designing and building the frontend for those pages, and Pranav would be creating the associated database models. This delegation allowed us to work efficiently. Other We also learned how to wireframe our application with Balsamiq. Along with that, we learned how to create an API with a secure token system, as well as how to write API documentation using Stoplight. What's next for MasterProfile First, we plan on defining an elaborate standard for the way the portfolio and profile information of a user is presented. We will expand this standard from what we have now with multiple additions. We will also work towards patenting the standard and marketing it to expand its usage. Next, we plan on creating a machine learning integration for our product. We want to create a product that assigns ratings to potential candidates based on a list of requirements for a job. This will allow employers and companies to simplify their hiring process. Built With cloudinary css3 flask html5 javascript jquery python Try it out masterprofile.herokuapp.com github.com
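The token-to-profile lookup at the heart of the employer API can be sketched in plain Python. This is a hypothetical sketch, not MasterProfile's actual Flask code: `PROFILES`, `issue_token` and `get_profile_json` are invented names, and the real service stores profiles in PostgreSQL rather than a dict.

```python
import json
import secrets

PROFILES = {}  # token -> standardized portfolio dict (stand-in for the database)

def issue_token(profile):
    """Assign a user a secure, unguessable token for their profile."""
    token = secrets.token_urlsafe(16)
    PROFILES[token] = profile
    return token

def get_profile_json(token):
    """Return (JSON body, HTTP status), as the API endpoint would."""
    profile = PROFILES.get(token)
    if profile is None:
        return json.dumps({"error": "invalid token"}), 404
    return json.dumps(profile), 200

tok = issue_token({"name": "Ada", "skills": ["python"], "jobs": []})
body, status = get_profile_json(tok)
```

Because every profile comes back in the same standardized JSON shape, downstream tools (such as the candidate-rating products mentioned above) can consume it without parsing free-form resumes.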
MasterProfile
MasterProfile is a simple yet powerful web application designed to revolutionize the way employers get information about potential candidates.
['Pranav Rao', 'Lavan Surendra', 'Vishnu S.']
[]
['cloudinary', 'css3', 'flask', 'html5', 'javascript', 'jquery', 'python']
2
10,402
https://devpost.com/software/covidnet-jq97oa
Our site's homepage. Site homepage (ctnd). A diagram showing the relationship between the various components in our project. covidNet by Anirudh Kotamraju and Kailash Ranganathan - A Deep Learning Powered Coronavirus Visualization and Prediction Software Inspiration Coronavirus is undeniably the most pressing problem of current times, but it seems that quarantine and the effects it is having on the global market and on people’s lives are influencing a new wave of openings, and with it, an even stronger and more potent virus. People are not adhering to social distancing as much, and while we understand their desire to quickly resume their normal lives, we wanted to deliver an objective and informative way to show people that coronavirus is more potent and spreading faster than ever. To do this, we built covidNet, a realtime web app that uses up-to-date coronavirus data to display the growth of the virus in all US states and also uses deep learning to give predictions and insights into how coronavirus will grow over the next 30 days using recurrent neural networks. This gives better insight into how coronavirus could grow than the standard regression and curve-fitting models in place. What it does covidNet delivers a neat and clean UI straight to your device that gives visual data for all 50 US states on total cases and how they have grown since the start of the coronavirus outbreak. But the main attraction is the prediction curves, where we trained individual “recurrent neural network” models for each of the 50 states to analyze how the virus has grown over the past few months and continue the curve to predict the virus’s cases over the next 30 days, a lookahead from today. Quick highlights for predicted (total) cases tomorrow, in 3 days, and in one week are also visible for each of the states. The website is updated daily. Each day, a Google Cloud Virtual Machine automatically starts up.
It gets the new data for the day, and automatically trains each of the 50 models using the new Johns Hopkins data (not from scratch, but starting from the previous day’s trained models). It then uploads the new models’ predictions for the historical time and the next 30 days into a Github repository as well as the model files for the next day’s training. Whenever a user visits the website (hosted on Heroku), the predictions are taken from this Github repository. Thus, both our models and website are automatically always up to date with the newest trends in each state's fight against the pandemic. With this project, we are able to give well organized and accurate predictions using state-of-the-art deep learning methodologies of what the virus could become in each state if further action is not taken. We hope that this will help present the true danger of the uncontrolled virus and encourage preventive action to be taken in the future. How we built it We used Dash and Plotly for our frontend, libraries especially relevant for visualizing data science results. With this, we are able to efficiently and effectively integrate our neural network prediction backend with a clean frontend all in Python with some CSS formatting. The model updating happens on a Google Cloud Virtual Machine. It is configured with CRON to automatically start up at a certain time and run a startup bash procedure each day. This procedure begins training the models and pushing the csv and model files to our main github repository, where these results are integrated with our frontend. It features a streamlined python script that fetches data from Johns Hopkins, formats it and retrains 50 state models, and saves those to files as well as their historical and 30 day future predictions to a csv. The models are trained using 2 layer deep LSTM neural networks, with dropout rates of 0.2-0.3 implemented to prevent overfitting. 
The model files are saved so that on the next day, rather than starting training on the dataset and next day’s data from scratch, the training can load in the preexisting model and continue from there, making the model more accurate as time goes on. The website is live and anyone can access it at https://covid-net.herokuapp.com/. Because our training is scheduled, the website and models are automatically updated daily without us having to do anything. Challenges we ran into Because LSTMs are more complicated structurally (and time-dependent as opposed to normal ANNs) than normal feed-forward neural networks, our analysis of coronavirus time series from raw data was quite difficult. We spent a long time devising algorithms to convert the data from raw text (from the Johns Hopkins coronavirus dataset) to properly formatted and normalized tensors to run through our LSTM but were able to generalize this process and train a model given state data just by inputting the state name into our program. With this data our models devised, we were able to quickly create interactive graphs with it to display on our website. Furthermore, one of the most important parts is that the whole program updates daily automatically, meaning that we don't have to manually change anything and the models will automatically take data from the dataset, continue training from the previous day's models, and fit to the new data while becoming more accurate. This was extremely hard, as we had to go from a Python script that ran one model at a time to one that took data, formatted it, and trained and saved all models in an organized manner. Furthermore, integrating the frontend and backend using Github was also a challenge between configuring the GCP Virtual Machine and Heroku.
Accomplishments that we're proud of We are proud that we were able to write the neural network, train 50 individual models on 50 individual datasets (one for each state), and deliver it in a functioning and visually pleasing UI in the timeframe of the Hackathon. We were able to functionally link the backend neural network TensorFlow model with the frontend Dash and Plotly web app and in the end, could deliver a useful and easy-to-use visualization of current coronavirus statistics for all 50 states and our deep learning predictions for future cases starting with the next 30 days. Because daily updates from a virtual machine have zero room for error (or the whole website comes crashing down), our process for automatic updates is robust, reliable, and organized well so that connections between backend and frontend are clear (through GitHub). We are also proud of the completely automated model and data update process, which updates the models using bash scripts on Google Cloud, then writes them to GitHub, and the Heroku app accesses those updated data files on GitHub and presents them to clients in real time. What’s next for covidNet Our application and neural network still have some refining to go through, even though it works quite well for all 50 states as a prediction device. We wish to make a bigger, deeper, and more powerful LSTM network to detect more subtleties and insights in the coronavirus data, and also bring in more variables, such as the openings of states, different social distancing orders, and other factors that may affect the spread of the virus. We already expanded on this project by going from only 37 states in a static website initially to a realtime website, updated daily, with not only more accurate results but new data and extra training every single day.
Perhaps with more computational power and a multivariate system, we will be able to not only predict coronavirus cases, but simulate different paths for the virus using different societal and community parameters, such as level of lockdown, average interaction among crowds, and others. We hope you all stay safe and have fun :) Built With dash github google-cloud heroku numpy pandas plotly python scipy tensorflow Try it out covid-net.herokuapp.com github.com
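The data-preparation step behind the per-state models, converting a daily case-count series into the supervised (window, next value) pairs a sequence model trains on, can be sketched as below. This is an illustrative sketch only: the lookback of 3 and the toy numbers are invented, and covidNet's actual pipeline feeds Johns Hopkins data into 2-layer LSTMs with dropout.

```python
def normalize(series):
    """Scale a case-count series into [0, 1]; LSTMs train poorly on raw counts."""
    hi = max(series) or 1
    return [v / hi for v in series]

def make_windows(series, lookback):
    """Build supervised (input window, next value) pairs from a time series."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return X, y

cases = [10, 20, 35, 50, 80, 120, 170]   # toy daily totals for one state
X, y = make_windows(normalize(cases), lookback=3)
print(len(X))   # -> 4 training pairs
```

Forecasting then works by sliding this window forward: the model predicts the next value, that prediction is appended to the window, and the process repeats 30 times for the 30-day lookahead.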
covidNet
A Deep Learning Powered Coronavirus Visualization and Prediction Software
['Hoponga (Kailash Ranganathan)', 'Anirudh Kotamraju']
['Second Place']
['dash', 'github', 'google-cloud', 'heroku', 'numpy', 'pandas', 'plotly', 'python', 'scipy', 'tensorflow']
3
10,402
https://devpost.com/software/chit-chat-bek25y
Create Room Screen Our chat app with live texting feature Traditional Chat app Inspiration Has it ever happened that you are texting with someone and you have to wait very long for their texts? Or that they type for like 15 min just to send an "Okay"? If you also have anxiety issues like us, then you probably know that waiting for someone's text is the worst feeling in the world. We wanted to design a solution that would allow us to view texts as they are being typed and that's exactly what we did! What it does Meet Tardis, a truly real-time texting app that lets you see the message even before it's sent. Now you no longer have to get anxious over what someone would say or how they would react because you can read their texts as they are typing them. No more backspacing those emotions! A texting app so good that it would blur the lines between texting and calling. How we built it The app was designed in Figma and created in React Native, leveraging Firebase's Realtime Database. Challenges we ran into We had a lot of trouble updating the text in real-time as it was being typed. There were some hardware-related issues with Android Studio. Accomplishments that we're proud of We completed all the goals we had set for ourselves. We completed the hack in time and now have a texting app we are proud of. What we learned This was the first time for all of us with React Native. We learned how to create a nice-looking UI with user experience in mind, and how to connect everything with Firebase. What's next for Tardis We plan to add more features like integrating Giphy for sending gifs and attaching images during conversations. We are also working to make the idea into a fully-fledged product where video calls can be made using WebRTC. Built With figma firebase react-native Try it out github.com
Tardis
Don't just Text, Talk!
['Harshal sanghvi', 'blackcrabb Niyati', 'Jatin Dehmiwal']
[]
['figma', 'firebase', 'react-native']
4
10,402
https://devpost.com/software/save-our-soul-quo1pn
Join a room as a helper or a help seeker and share your specific issues Live video chat system using WebRTC and Socket.IO coupled with a text chat. Tested on both mobile and desktop. Realtime anonymous chat system Reporting suggestive nudity and content moderation via API Prompt system messages for user friendly feedback Responsive real-time video and chat UI Responsive landing page! Ready for the mobile experience. Inspiration Since the start of coronavirus, many people have been facing greater mental stress. This could be due to the untimely loss of a loved one to corona, being away from home, being unable to meet friends and family - the people that keep you positive - or because of some other deep issues. There are issues that at times you are unable to share with your closest friends or family members. Therefore, we wanted to create an application that not only allows us to connect to the people we regularly talk to but also to others whom we have never met. It helps create a safe space where people can talk to each other, discuss their problems, and look for advice. It's like a free therapy session! What it does Save our Soul is an anonymous Video and Chat Web Application developed to help such people. It allows users to anonymously connect with other users, tell them their stories, and ease their burden by sharing. The application allows users to chat with other users or share their stories via video call. It features advanced reporting and is primarily based on linking such helpers and help seekers together regarding life issues in these tough times of quarantine. Themes How do we connect people in unique ways to prevent loneliness when isolation is their only option? How do we help people engage with each other in safe ways when social distancing is not an option? The application tries to implement the following goals: Anonymity The application has been developed in such a way as to protect the anonymity of the users.
It does not show the IP addresses of the users, neither does the app keep a record of the messages being communicated between two users. Once the user clicks on next, the messages are removed from everywhere. Ability to Help others The application lets users choose the reason they are using the platform. It allows users to either seek help or help those in need. The application provides specific categories that the users can choose. While pairing the users, the server tries to match someone who is there to help with someone who wants to seek help first. If no such users are available, the server tries to match people seeking help within the same category first, before moving on to people in other categories. These categories are kept anonymous and the stranger cannot know the reason why a user is there. Nudity Detection One key challenge that most anonymous video chats face is that the users tend to use the platform to conduct sexual activities anonymously. We strongly discourage such acts on our platform and have tried to keep this platform a safe space for users of all ages. Therefore, we implemented a nudity detection feature within our app that uses a remote API to detect whether the stream contains nudity. The user has to report the stranger, and if the stream coming from a stranger contains nudity, the offending user is blocked. Safe Space To ensure the application is a safe space for such vulnerable users, the application has been designed with a UI that calms users. Bright colors, ads, animation, and any other feature that may irritate such users have been disabled. The application makes use of a grayscale model for its chat page, which helps calm the user. The landing page is dull and welcoming. Reassurance Whereas the members of the community will be trying to provide as much help to each other as they can, there are steps taken by the platform that ensure that the person visiting the page starts feeling better.
Therefore, apart from the design and other features, we are sending out motivational messages to the users. These are displayed on the screen and keep changing every few minutes. These include messages like 'This quarantine will end' and 'We are in this together' to try to provide a little reassurance to the person. These messages provide an additional feature, aiding in improving the mental health of the user. How I built it Using WebRTC for realtime video conferencing and Socket.IO to handle peer-to-peer intercommunication, as well as communication with the Node.js HTTP server, whose socket listeners mediate the waiting lists, block lists and the priority-based algorithmic matching of participants based on their preference (to seek help or to help) and problem category. ReactJS with HTML and CSS was used for the front end of the application, which reactively generates random positive messages and fetches the chat and online users in real time. Deployed on Heroku. Challenges we ran into Connecting the video between two clients was a tedious task. There were more than a few errors that resulted. These were sorted out. Audio issues with streams and video elements Intertwined socket pairing functions for WebRTC had to be traced over and over Deployment complications including stream disconnection Accomplishments that I'm proud of The application turned out better than expected. We randomly asked the opinions of our family members and the responses were all positive. The interface was better than we initially envisioned and we were able to add further functionalities. What we learned The most important outcome was the ability to work in teams. The application had its challenges and without the efforts of the complete team, this application would not have been developed. The use of RTC for video streams added another skill to our toolkit.
This is an extremely important component that is used in many websites and applications, and therefore developing video streaming from scratch increased our knowledge of such apps. What's next for Save our Soul The application is ready for deployment. The relevant licenses and agreements need to be obtained and the application will be ready to be deployed. We can create a mobile application that provides ease of access and better availability. Better responsiveness Reporting based on chat text using a trained model Built With css3 html5 jquery node.js react socket.io webrtc Try it out github.com save-our-soul.herokuapp.com www.saveoursoul.live
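The priority matching described above (opposite role in the same category first, then any category, then fellow seekers) can be sketched as a set of waiting queues. This is an illustrative sketch in Python rather than the project's actual Socket.IO server code; `waiting` and `match_or_wait` are invented names.

```python
from collections import deque

waiting = {}  # (role, category) -> deque of waiting user ids

def _pop(role, category=None):
    """Pop the longest-waiting user with this role (optionally in one category)."""
    if category is not None:
        q = waiting.get((role, category))
        return q.popleft() if q else None
    for (r, _), q in waiting.items():
        if r == role and q:
            return q.popleft()
    return None

def match_or_wait(user, role, category):
    """Return the matched partner's id, or queue `user` and return None."""
    other = "help" if role == "seek" else "seek"
    partner = _pop(other, category)        # 1) opposite role, same category
    if partner is None:
        partner = _pop(other)              # 2) opposite role, any category
    if partner is None and role == "seek":
        partner = _pop(role, category)     # 3) fellow seeker, same category
        if partner is None:
            partner = _pop(role)           # 4) fellow seeker, any category
    if partner is None:
        waiting.setdefault((role, category), deque()).append(user)
    return partner
```

In the real app the server would then create a room for the pair and hand both sockets the room id; the queues themselves never reveal a user's chosen category to the stranger, preserving the anonymity goal above.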
Save Our Soul
A platform where people seeking mental help can anonymously connect with others seeking help or willing to provide help.
['Zoraiz Qureshi', 'Fahad Farid', 'Saad Ullah']
[]
['css3', 'html5', 'jquery', 'node.js', 'react', 'socket.io', 'webrtc']
5
10,402
https://devpost.com/software/quarelief
Loading Screen Homepage Anonymous Chat Pornographic Related Videos Mental Health Related Videos Mental Health Related Articles Inspiration Quarantine, self-isolation and social distancing, due to COVID-19, have taken a negative toll on the mental and physical health of people around the globe. Everyone has been trying to overcome the stress, either by watching motivational and inspiring videos on YouTube or by reading meaningful articles on the internet. Although there are hundreds of videos and articles on the internet regarding mental health, pornography, yoga etc., people often find it extremely difficult to find the right video or the right article for themselves. In addition, many people are facing serious mental health issues and are too shy to share their condition with others. They tend to be afraid that people will make fun of their condition or not understand them. Many people just want to talk to mere strangers and discuss their life stories and COVID-19-affected lives. What it does Keeping in mind all the above-mentioned things, we thought of providing a platform for everyone around the globe to get their hands on the most popular and most relevant resources on serious topics like mental health, pornography, yoga etc. QuaRelief (Quarantine Relief) brings together quality YouTube videos and articles for people so that they don't have to go through the hassle of surfing the internet to identify a good article, or spend hours on YouTube to find a video specifically related to Overcoming Pornography . In addition to this, QuaRelief also provides a completely Anonymous Chat System for each of the categories available. People can talk to complete strangers and can open up about their feelings, without having the fear of being judged or being mocked. How I built it QuaRelief was built using Flutter and Firebase . Challenges I ran into We found it hard to gather the best of the resources available on the internet.
We spent hours finding not only relevant but also authentic resources for the app. We also found it difficult to build an Android app with Flutter, because we were working with it for the first time.

Accomplishments that I'm proud of
We are proud that we were able to make a good working demo of the app, given that we were using Flutter for the first time. We are also extremely proud of the research work that led us to extract some high-quality content for the app.

What I learned
We learnt Flutter from scratch, along with research skills.

What's next for QuaRelief
We will be working on making the UI more interactive. We will also work on adding new features to the app, e.g. picture and video upload in the chat and the ability to up- or down-vote a video or an article, and we will work towards making the app available on iOS too.

APK Releases https://github.com/farrukhras/QuaRelief/releases/tag/v1.0

Built With firebase flutter
Try it out github.com
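The anonymous chat described above needs each user to appear under a pseudonym that is stable within one category but not linkable across categories. A minimal Python sketch of one possible scheme; the word lists, handle format, and the idea of deriving the handle from a hash are our own illustration, not QuaRelief's actual implementation:

```python
import hashlib

# Hypothetical word lists -- not taken from the actual app.
ADJECTIVES = ["calm", "brave", "quiet", "kind", "bright", "gentle"]
ANIMALS = ["otter", "falcon", "panda", "lynx", "heron", "tiger"]

def anon_handle(user_id: str, category: str) -> str:
    """Derive a stable pseudonym for a (user, category) pair.

    The same user always gets the same handle within one category,
    but the hash mixes in the category name, so handles from
    different topics cannot be linked back to one account.
    """
    digest = hashlib.sha256(f"{user_id}:{category}".encode()).digest()
    adjective = ADJECTIVES[digest[0] % len(ADJECTIVES)]
    animal = ANIMALS[digest[1] % len(ANIMALS)]
    suffix = digest[2] % 100
    return f"{adjective}-{animal}-{suffix:02d}"
```

Chat messages could then be stored in Firebase keyed by this handle rather than by the real account ID, so nothing identifying ever reaches the shared chat documents.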
QuaRelief
Provide a platform where people can anonymously chat with other people on some sensitive topics like Mental Health, Pornography etc and also be able to find resources for such topics in one place.
['Farrukh Rasool', 'Ramez Salman', 'Hamza Farooq']
[]
['firebase', 'flutter']
6
10,402
https://devpost.com/software/melanomai-i3e1c7
Screenshots: Thanks Page; Home Page

Note
After the deadline we were able to integrate the AI and host the website. We couldn't host the AI model itself because the free tier wouldn't allow it and the models are quite large, but you can check everything else out. (There are some minor bugs related to styling, but if you enter an image and your email it should work!) The GitHub repo was updated accordingly. https://melanomai.herokuapp.com/

Note
The images needed for diagnosis are dermoscopy images, which have to be taken by a dermatologist, so the process isn't entirely remote. However, by providing a more effective and accurate analysis, MelanomAI could cut down on the number of trips to the dermatologist and be much more convenient and cheaper. It is still a much better alternative to current methods of diagnosis. There are ways for people to do self-dermoscopy, but those ideas are still in development. (This could be something we do in another hackathon, or after this one, to supplement and support this idea.)

Inspiration
Melanoma is a deadly skin cancer which affects all ages. It starts off as a cancerous growth but can spread to other parts of the body as well. The worst part is that melanoma has a 25 to 30 percent misdiagnosis rate, meaning roughly 1 in 4 people are misdiagnosed. Considering how dangerous and scary cancer can be, a 25% misdiagnosis rate is far too high; 1 in 4 people should not have to suffer due to an error that can be avoided. MelanomAI works to fix this problem by making melanoma diagnosis easy, fast and, above all, accurate.

What it does
MelanomAI is a six-layer convolutional neural network which can analyze an image to detect and classify melanoma. Convolutional neural networks are very good at analyzing images and giving accurate results. With enough time, our team is confident that we could bring accuracy into the mid-to-high 90s.
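One practical chore when building a fixed-depth CNN like this is sizing the final fully connected layer, which requires tracking the feature-map size through every convolution and pool: out = floor((in + 2*padding - kernel)/stride) + 1. The sketch below works the arithmetic for a hypothetical six-layer stack (3x3 convolutions with padding 1, each followed by a 2x2 max-pool, 224x224 input); the real model's hyperparameters aren't stated in the write-up, so these numbers are illustrative only:

```python
def conv_out(size, kernel=3, stride=1, padding=1):
    """Spatial size after a convolution: floor((in + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial size after a max-pool with no padding."""
    return (size - kernel) // stride + 1

# Hypothetical channel counts for the six conv layers (not from the source).
channels = [16, 32, 64, 128, 128, 128]

size = 224  # assumed input resolution
for _ in channels:
    size = pool_out(conv_out(size))  # same-size conv, then 2x2 pool halves it

# Input size of the final linear (classification) layer after flattening.
flattened = channels[-1] * size * size
```

With these assumptions the spatial size shrinks 224 -> 112 -> 56 -> 28 -> 14 -> 7 -> 3, so the classifier head sees 128 * 3 * 3 = 1152 features. Doing this on paper (or in a snippet like the above) avoids the shape-mismatch errors PyTorch raises when the first `nn.Linear` dimension is guessed wrong.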
Here's how it works for the user:
1. Go to our website
2. Upload an image of the suspected melanoma
3. Enter your email
4. Submit
5. Check your email for results!

Diagnosing melanoma accurately is just five steps away.

How we built it
We used PyTorch to build and train the AI model. We used Bootstrap, Django, CSS and HTML to create the website.

Challenges we ran into
One of our members lost half of their files three hours before the hackathon and had to recode all of them. It was a traumatic experience and a good lesson in how to store files and use GitHub. We also struggled to integrate the AI and the website due to some errors; given some more time we might have been able to fix them.

Accomplishments that we're proud of
We are proud of our accuracy: we were able to get an extremely high accuracy in such a short timeframe, and with more time we are confident we can push it above 90%. We are also proud of our website, as this was our first time making both an AI model and a UI to go along with it in a hackathon.

What we learned
How to work with AI (PyTorch). How to use Django to build websites. The benefits of using GitHub as source control and as a way to back up files.

What's next for MelanomAI
We want to integrate the standalone AI and the website; we faced issues with this part, but given more time we are confident we would have been able to integrate the two successfully. Once we have a complete product we want to host the website to make it available to all and, if it gains some popularity, possibly even try to implement it in the real world!

Built With css3 django html5 python pytorch
Try it out github.com
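The "check your email for results" step above implies a server-side routine that turns a model score into a message. A minimal sketch of what that could look like, using only the standard library; the sender address, subject line, wording, and the 0.5 decision threshold are all hypothetical, not taken from the site's actual code:

```python
from email.message import EmailMessage

def results_email(to_addr, probability, threshold=0.5):
    """Compose (but do not send) the results email for one upload.

    `probability` is the model's melanoma score in [0, 1]; the
    threshold below is an illustrative placeholder, not the
    deployed model's operating point.
    """
    if probability >= threshold:
        verdict = "possible melanoma - please consult a dermatologist"
    else:
        verdict = "no melanoma detected"
    msg = EmailMessage()
    msg["To"] = to_addr
    msg["From"] = "results@melanomai.example"  # hypothetical address
    msg["Subject"] = "Your MelanomAI result"
    msg.set_content(
        f"Model confidence: {probability:.1%}\n"
        f"Assessment: {verdict}\n\n"
        "This is a screening aid, not a medical diagnosis."
    )
    return msg  # hand off to smtplib.SMTP(...).send_message(msg)
```

Separating message construction from sending keeps the routine testable without an SMTP server, and the closing disclaimer matters here since the app supplements, rather than replaces, a dermatologist.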
MelanomAI
Detect and Classify Melanoma Effectively
['Gaurish Lakhanpal', 'Anish Karthik']
[]
['css3', 'django', 'html5', 'python', 'pytoch']
7
10,402
https://devpost.com/software/fridgeremindr
Screenshots: auto-generated grocery list based on items needed; suggested recipes; air quality measurements showing a spike in particulate values (someone coughed or sneezed); essential groceries list; expiry date recognition; product identification; the air quality workshop which inspired us; attending the Rust workshop for the learning and the bonus points (at this point we were wondering if we could build it cheaper and better); the Sensirion SPS30 sensor up close; basic hardware setup

Inspiration
Like it or not, grocery shopping is a major, unavoidable chore. Especially in these circumstances shopping can be a pain, so we decided to automate the tedium of checking expiry dates and what's in the fridge/pantry, and also of knowing whether we should wear a mask indoors (sometimes we should, especially if there is company). The air quality workshop on Saturday inspired us to build a cheaper, better-quality air sensor which can even detect cough and sneeze droplets indoors.

What it does
There are two main sets of features.

1. Shopping list generation and expiry date detection. All you have to do is take pictures of what's in your fridge and kitchen; FridgeRemindr takes care of the rest. The app detects the products present in the images using machine learning and generates a list. It lets you add essential items to a list along with their expiry dates, keeps track of the dates, and generates your grocery shopping list for you. FridgeRemindr can also generate a shopping list based on your cravings: all you have to do is name the dish you want to prepare. You could use this feature to help you prepare for a potluck, Thanksgiving dinner or any special occasion. FridgeRemindr compares the list of ingredients required with the items you have at home and generates the list for you.
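Once the vision and OCR steps have produced item names and expiry dates, the list-building described above (expiring pantry items plus missing recipe ingredients) reduces to simple set operations. A minimal sketch with made-up item data; the `soon_days` threshold and function names are our own illustration, not the app's actual code:

```python
from datetime import date, timedelta

def build_shopping_list(pantry, recipe_ingredients, today, soon_days=3):
    """pantry maps item name -> expiry date; returns a sorted shopping list.

    An item goes on the list if it has expired (or will expire within
    `soon_days`), or if a chosen recipe needs it and it isn't at home.
    """
    cutoff = today + timedelta(days=soon_days)
    expiring = {item for item, expiry in pantry.items() if expiry <= cutoff}
    missing = set(recipe_ingredients) - set(pantry)
    return sorted(expiring | missing)

pantry = {
    "milk": date(2020, 5, 20),   # expiring soon
    "eggs": date(2020, 6, 15),
    "flour": date(2021, 1, 1),
}
print(build_shopping_list(pantry, ["eggs", "flour", "butter"],
                          today=date(2020, 5, 19)))
# -> ['butter', 'milk']
```

In the real pipeline the `pantry` dict would be filled from the Cloud Vision / Clarifai product labels and the OCR-extracted dates, and `recipe_ingredients` from the dish the user names.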
2. Active air quality monitor. The price of the PurpleAir sensor was shockingly high, so we "borrowed" a Sensirion SPS30, probably the most accurate retail-available air quality sensor, at under $45 retail. Paired with a DHT11 (under $3 retail) and a cheap microcontroller with WiFi (ESP32, under $7 retail), our sensor can be built for a hardware cost of under $55. This setup senses particles from PM10 all the way down to PM0.5. That granularity matters, because respiratory droplets, airborne pathogens and micro-pollutants can be detected at this level (we actually tested whether a cough or sneeze can be detected, and our sensor can do it: video here - ). Our setup uses a DHT11 for temperature and humidity, but we could also use a better sensor like a BME280.

basic video: https://youtu.be/QD7vP-gXwbU
hardware video 2: https://youtu.be/iIwDxE88KfY

How we built it
Hard work, perseverance and no sleep.

Machine learning: used models from GCP Cloud Vision and Clarifai
OCR: used Google Cloud and Tesseract
Servers: Python Flask and GCP Cloud Functions
React Native to build a cross-platform app
Adobe Illustrator for designing the logo, assets and UI/UX
Hardware: Sensirion SPS30 particle sensor, DHT11 temperature and humidity sensor, Arduino, Raspberry Pi (didn't make the UI, ran out of time :( )
ngrok tunnels everywhere
Node.js/Express: push notification server

Scientific references
https://www.sensirion.com/fileadmin/user_upload/customers/sensirion/Dokumente/9.6_Particulate_Matter/Datasheets/Sensirion_PM_Sensors_SPS30_Datasheet.pdf
https://www.ncbi.nlm.nih.gov/books/NBK143281/#:~:text=Published%20data%20have%20suggested%20that,the%20same%20number%20as%20talking

Challenges we ran into
Coding for the SPS30 was challenging (we had to adapt some things directly from the datasheet).

Accomplishments that we're proud of
Everything actually works.

What we learned
It's hard work integrating everything.
What's next for FridgeRemindr
Better integration of components; more robust hardware and casing; a better notification system; placing orders automatically for essential groceries.

Built With adobe-illustrator arduino clarifai dht11 raspberry-pi react-native sps30
Try it out github.com
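The cough/sneeze detection mentioned under What it does amounts to spotting a sudden particulate spike against a rolling baseline of recent readings. A minimal Python sketch of that idea; the window size, spike factor and warm-up count are illustrative guesses, not the values from our firmware:

```python
from collections import deque

class SpikeDetector:
    """Flag PM2.5 readings far above a rolling-average baseline."""

    def __init__(self, window=30, factor=3.0, min_samples=5):
        self.readings = deque(maxlen=window)  # recent baseline readings
        self.factor = factor                  # how far above baseline counts as a spike
        self.min_samples = min_samples        # wait for a stable baseline first

    def update(self, pm25):
        """Feed one reading; return True if it looks like a cough/sneeze spike."""
        warmed_up = len(self.readings) >= self.min_samples
        baseline = sum(self.readings) / len(self.readings) if self.readings else 0.0
        # max(..., 1.0) guards against a near-zero baseline in very clean air.
        is_spike = warmed_up and pm25 > self.factor * max(baseline, 1.0)
        if not is_spike:
            # Spike readings are excluded so they don't inflate the baseline.
            self.readings.append(pm25)
        return is_spike
```

On the device itself, each reading would come from the SPS30 over the ESP32, and a detected spike would trigger a push notification through the Node.js/Express server, suggesting it might be time to mask up indoors.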
FridgeRemindr
Automated grocery shopping
['Ebtesam Haque', 'Muntaser Syed']
[]
['adobe-illustrator', 'arduino', 'clarifai', 'dht11', 'raspberry-pi', 'react-native', 'sps30']
8