hackathon_id int64 1.57k 23.4k | project_link stringlengths 30 96 | full_desc stringlengths 1 547k ⌀ | title stringlengths 1 60 ⌀ | brief_desc stringlengths 1 200 ⌀ | team_members stringlengths 2 870 | prize stringlengths 2 792 | tags stringlengths 2 4.47k | __index_level_0__ int64 0 695 |
|---|---|---|---|---|---|---|---|---|
10,526 | https://devpost.com/software/foodneighbour | 1. Intro
2. Problem
3. Solution
4. Implementation
5. Architecture
6. Team
Inspiration
The project was inspired by COVID-19 and the social distancing it caused.
What it does
The service suggests recipes based on the client's shopping list and the shopping lists of neighbours, and provides a convenient way to meet and cook together.
If some of the ingredients are missing, the service suggests visiting the closest Migros store or placing an order with Migros.
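The matching step can be sketched in a few lines. This is a minimal illustration with a hypothetical data model (recipes as ingredient sets), not the team's actual Go/Kotlin implementation: rank recipes by how well the combined shopping lists cover their ingredients, and report what is still missing (and would have to be bought at Migros).

```python
def suggest_recipes(client_list, neighbour_list, recipes):
    """Rank recipes by ingredient coverage across two shopping lists.

    Returns (recipe_name, missing_ingredients) pairs, best-covered first.
    Hypothetical data model: recipes is {name: set_of_ingredients}.
    """
    combined = set(client_list) | set(neighbour_list)
    ranked = []
    for name, ingredients in recipes.items():
        missing = ingredients - combined
        coverage = 1 - len(missing) / len(ingredients)
        ranked.append((coverage, name, sorted(missing)))
    ranked.sort(reverse=True)
    return [(name, missing) for _, name, missing in ranked]

recipes = {
    "pasta": {"spaghetti", "tomato", "basil"},
    "fondue": {"cheese", "bread", "wine"},
}
print(suggest_recipes(["spaghetti", "tomato"], ["basil"], recipes))
# [('pasta', []), ('fondue', ['bread', 'cheese', 'wine'])]
```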
Challenges we ran into
Android development with the recent and unreleased framework Compose
A hybrid team in which two of the team members are located at one of the most northern points of Russia (https://en.wikipedia.org/wiki/Teriberka), with a poor internet connection and a ping over 200 ms.
What we learned
Once again we learned that there is never enough time to implement everything we want at a hackathon, so the most valuable parts of the project have to be deliberately selected and implemented first.
What's next for FoodNeighbour
Links to the sources
https://github.com/gangozero/hackzurich2020-front
https://github.com/gangozero/hackzurich2020-be
Built With
android
azure
css3
go
html5
kotlin
react
Try it out
gangozero.github.io | FoodNeighbour | Use the most common and natural thing to bring people together: Food | ['Alexey Schebelev', 'Eugene Levenetc', 'Vladimir Aluferov', 'Andrey Prokopiev'] | [] | ['android', 'azure', 'css3', 'go', 'html5', 'kotlin', 'react'] | 63 |
10,526 | https://devpost.com/software/gartelme | Inspiration
Everyday life became more comfortable with the development of smart home systems ranging from vacuum cleaners to smart lights. As part of the smart home system, we developed a prototype for gartelme, the future of digital urban gardening. The goal is to make growing plants at home as easy and fun as possible, supporting the trend towards self-sufficient food supply and a nutrition-conscious lifestyle. Importantly, the app and hardware will support the survival of house plants over long durations when home owners are on holidays or business trips.
What it does
The gartelme app provides a web interface to control multiple wireless stations that are equipped with sensors and devices for precise irrigation. The energy-efficient Arduino stations are interconnected in a mesh network and centrally controlled by a WiFi-enabled Raspberry Pi. A locally-hosted web frontend allows the user to easily schedule irrigation tasks and integrate with public APIs for weather forecasting and crowd-based gardening services. A public REST API allows third-party developers to integrate with other smart home devices and external services. In the future, notifications will remind the user to refill water tanks, add additional nutrients or check on not-so-happy plants.
How we built it
We developed a web frontend using Bootstrap, JavaScript and Vue.js to support user input and a visual representation of the irrigation system. The web frontend communicates with a Python/FastAPI-based REST webservice that exposes a relational database storing the scheduling information. All components are hosted on a WiFi-enabled Raspberry Pi that acts as a controller for the wireless irrigation stations. Communication between the controller and the Arduino-based irrigation stations is powered by an nRF24 mesh network with custom-built dynamic address assignment. All components were selected for affordability and low energy consumption.
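The scheduling core behind such a service could look roughly like the following plain-Python sketch (invented class and field names; the real gartelme backend keeps schedules in a relational database behind a FastAPI REST service):

```python
from datetime import datetime, timedelta

class IrrigationSchedule:
    """Minimal in-memory stand-in for the relational schedule store."""

    def __init__(self):
        self.tasks = []  # each task: [station_id, next_run, interval]

    def add_task(self, station_id, first_run, interval_hours):
        self.tasks.append([station_id, first_run, timedelta(hours=interval_hours)])

    def due_stations(self, now):
        """Return stations due for watering and advance their next-run time."""
        due = []
        for task in self.tasks:
            if task[1] <= now:
                due.append(task[0])
                task[1] += task[2]  # schedule the next run
        return due

sched = IrrigationSchedule()
sched.add_task("balcony", datetime(2020, 9, 26, 8, 0), interval_hours=12)
print(sched.due_stations(datetime(2020, 9, 26, 9, 0)))  # ['balcony']
```

A controller loop on the Raspberry Pi would poll `due_stations` and forward watering commands into the mesh.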
Challenges we ran into
The biggest challenge we faced was integrating the independently developed components. In the end, we did not manage to implement all the features we had hoped for.
Accomplishments that we are proud of
We are proud to have built the basis for a complete framework that lets us control the irrigation system remotely. Even though a few bugs remain, we have managed to combine a number of programming languages and hardware components to build a semi-functional prototype.
What we learned
We learned how to programmatically set up the hardware and work with mesh networks. Furthermore, we deepened our understanding of how different layers interact with each other within local networks.
What's next for gartelme
We want to fix known remaining bugs and use the system to water our plants!
Built With
arduino
c
javascript
mysql
python
raspberry-pi
Try it out
github.com | gartelme | A wireless gardening tool for everyone | ['Nils Eling', 'Jonas Windhager'] | [] | ['arduino', 'c', 'javascript', 'mysql', 'python', 'raspberry-pi'] | 64 |
10,526 | https://devpost.com/software/now-you-see-me-af3cjh | The architecture of our solution.
Technical details of our implementation.
General pipeline for main functionalities.
Inspiration
Everyone takes trains, I mean, literally everyone. But what if rain, snow and darkness wreck your itinerary and travel plans? We have solutions to tackle them! You can arrive on time, whatever the weather is and whenever the time is!
What it does
It detects traffic lights through an AI-controlled system even when drivers are unable to physically observe them, such as during heavy rain or at night. As a result, drivers do not have to slow down their trains and peer through the haze to see where the traffic lights actually are.
It uses geo-location and visual information to convert an unclear scene under any condition into pictures of that location taken previously, so that drivers can use them as a reference to the environment and get some grasp of it even when they cannot observe it directly.
How we built it
React as the front end, with Tomcat and Python as the back end.
We used AI-aided approaches, with the help of geoinformation provided by Siemens, combined with traditional data science techniques.
Use GPS information from each image sample and GPS locations of all traffic lights to determine the real-time distance between the train and the next traffic light.
Preprocessing of night images using CLAHE filters, histogram equalization and adjustments in hue and saturation was implemented both as a preprocessing step for training YOLO from scratch and to visually assist the drivers during night shifts.
Use a CNN to detect traffic lights in real time so that drivers can see them more easily.
K-means clustering on good-weather images with GPS locations is performed along the railway so that a real-time image can be appropriately assigned to its respective cluster.
After clustering, we use autoencoders based on ResNet18 with PyTorch to extract feature vectors of that particular image, and use cosine similarity as a metric to retrieve a clear image of the same place in good weather.
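Once feature vectors exist, the retrieval step reduces to a cosine-similarity lookup. Here is a NumPy sketch on toy 4-dimensional vectors (the team extracted real features with a ResNet18 autoencoder in PyTorch):

```python
import numpy as np

def best_match(query, reference_vectors):
    """Return the index of the reference image whose feature vector
    has the highest cosine similarity to the query vector."""
    q = query / np.linalg.norm(query)
    refs = reference_vectors / np.linalg.norm(reference_vectors, axis=1, keepdims=True)
    return int(np.argmax(refs @ q))

# Toy features: imagine 3 stored good-weather images with 4-d embeddings.
stored = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.7, 0.7, 0.0, 0.0]])
live = np.array([0.9, 0.1, 0.0, 0.0])
print(best_match(live, stored))  # → 0
```

In the real pipeline the `stored` matrix would hold autoencoder features of the good-weather images in the cluster assigned to the train's current GPS position.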
Challenges we ran into
Inaccuracy of GPS
Difficulty augmenting dark images taken at night
Difficulty preprocessing images, as there are many similar sceneries
Low accuracy of the YOLO signal-detection implementation
Accomplishments that we're proud of
We could smartly solve the issue of bad weather conditions and night images with the right mix of advanced image processing techniques, transfer learning, clustering and deep learning approaches.
We are proud of each other that within this short duration we were able to try out multiple algorithms and approaches and built a reasonable solution architecture.
We tried challenging algorithms like GANs to convert night to day, and implemented YOLO object detection for signal/traffic light detection.
We made it through!
What we learned
Programming is FUN.
Working a long time is PAINFULLY FUN.
What's next for Now you see me!
Our initial solution is very promising, but we still have room for improvement, such as maintaining precision when the train takes a turn, further enhancing the driver's view in bad weather, and additional preprocessing approaches to better brighten up dark images. Moreover, we would like to incorporate AR techniques to make the system more supportive of train drivers. Also, the YOLO implementation was almost fully achieved, but with low accuracy. Finally, the CycleGAN implementation required more time, but will be used in the future to convert night images to their corresponding day ones using image-to-image translation!
Built With
opencv
python
pytorch
rest
sklearn
springboot
tensorflow
tomcat
Try it out
github.com | Now you see me! | Lifting the veil | ['Rosni k v', 'Christos Antoniou', 'Georg Kropat', 'Baxevanos Theologos', 'Hong Chul Nam'] | [] | ['opencv', 'python', 'pytorch', 'rest', 'sklearn', 'springboot', 'tensorflow', 'tomcat'] | 65 |
10,526 | https://devpost.com/software/the-legaltech-suite | Look of the desktop application
One of the notebooks with the input document
a notebook with an output document
Notebook with input and partial output
mobile view of the suite website
mobile view of the suite website with filter applied
Inspiration
The LegalTech Suite came as a natural response to the set of challenges presented by the Legal Tech Team.
Natural because when you set the User Experience (UX) as the most important feature of your project, beautiful things happen.
First, you realise that the user needs one, and only one point of access to the tools. She/he should be able to find all that is required to perform her/his tasks in a single place, something easy to infer when the challenges were presented.
Once the main objective was set, the tools were selected from the most up-to-date open-source options for ML, static web page development and cross-platform desktop app development.
Desktop and cross-platform because this is how lawyers perform this difficult job: on a laptop or a desktop computer. This aspect was carefully analysed and discussed with the Legal Tech Team.
What it does
Machine Learning and Jupyter join forces to analyse legal documents. It includes algorithms that:
Anonymise legal documents
Taking legal documents and using OpenCV for computer vision, it extracts the information that other ML tools use to identify names, companies, cities and countries, among other features, from the text, and anonymises those fields. This is a crucial task that takes lawyers and their assistants a lot of time; some of these documents contain 200 or more pages. So any help, particularly smart and automated help, can boost the productivity of any team and minimise mistakes that can have real consequences in business. Constant fine-tuning and more robust models will be the next step, but the prototype is fully functional and completes the task in less than a second.
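Once entity spans have been located (by whatever NER backend), the anonymisation itself is a span-replacement pass. A minimal sketch, independent of any particular ML library, with invented placeholder labels:

```python
def anonymise(text, entities):
    """Replace recognised entity spans with typed placeholders.

    entities: list of (start, end, label) character spans, e.g. as
    produced by an NER model. Spans are applied right-to-left so that
    earlier offsets stay valid while the text shrinks or grows.
    """
    for start, end, label in sorted(entities, reverse=True):
        text = text[:start] + f"[{label}]" + text[end:]
    return text

doc = "Alice Meier sold the plot in Zurich to Acme AG."
spans = [(0, 11, "PERSON"), (29, 35, "CITY"), (39, 46, "COMPANY")]
print(anonymise(doc, spans))
# "[PERSON] sold the plot in [CITY] to [COMPANY]."
```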
Smartify legal documents
Another script uses algorithms to identify keywords and names, similar to the previous section. In this case, however, the idea is to enhance the information that the lawyer has immediately available on the document: imagine a kind of "augmented reality" for your PDF.
The working prototype does not produce this enhanced document yet, but it does identify key terms and looks them up automatically on the internet. It also produces an HTML version of the paper where the user gets context, in the form of popups, directly in the text they are reading.
Aggregation of legal documents to identify patterns and generate templates
Created a simple mockup of how text detection and pattern recognition can be used to generate an ordered collection of documents with common features and concepts, allowing the identification of candidates for templates for similar cases. This also boosts the productivity of the users.
How I built it
It was built using Jupyter notebooks running in an Ubuntu virtual machine that serves as a local server on the user's machine.
The Electron framework was used to get a clean cross-platform design and deployment, while the Hugo framework was used to create the web app.
A desktop application is the best proposal for the current needs: it adapts to the user as much as technologically possible and minimises friction in the interactions. Throughout the design process it was critical to keep in mind that the learning curve for this product needs to be smooth.
Challenges I ran into
The ML technologies used are complicated. Nothing new there, but it is relevant to keep in mind that this kind of product will need constant improvement; it will learn from more and more cases, getting better with each iteration.
Accomplishments that I'm proud of
Genuinely proud of merging technologies and techniques as I never have before. Computer vision was the best of the learnings. The same approach will be applied in two other challenges, even after the event is over.
What I learned
I gained some insight into the dynamics of the future users of this suite: law firms and their employees. I had the opportunity to get direct feedback from the Legal Tech Team.
What's next for The LegalTech Suite
I want to address all the challenges in this collection, and also deploy some computing elements in the cloud. I must say this was not done yet because these documents contain sensitive information, and law firms do not let that information leave their premises, which is an extra reason for the desktop application approach.
Built With
atom
bash
css
electron
git
github
google
html
hugo
jupyter
linux
mac
machine-learning
matplotlib
ml
notebooks
numpy
opencv
pycharm
pytesseract
python
virtualbox
vm
Try it out
github.com
universidad.ch
nbviewer.jupyter.org | The LegalTech Suite | LegalTech Suite: A Machine-Learning-powered working desktop App prototype that put the user at the centre of its design and development. For professionals dealing with complex documents | ['Arturo Sanchez Pineda'] | [] | ['atom', 'bash', 'css', 'electron', 'git', 'github', 'google', 'html', 'hugo', 'jupyter', 'linux', 'mac', 'machine-learning', 'matplotlib', 'ml', 'notebooks', 'numpy', 'opencv', 'pycharm', 'pytesseract', 'python', 'virtualbox', 'vm'] | 66 |
10,526 | https://devpost.com/software/detective-carbon | Detective Carbon is analyzing the given repository
The analysis shows that the software has a lot of potential to be more energy efficient
A zoom-in to the actual findings and alternatives gives a big picture
The VS Code extension helps you in your daily work as a developer
Inspiration
Billions of computing devices run on the same bits of code: operating systems, popular applications, services and webpages. From servers in data centers to the phones in our palms – their power consumption adds immensely to the global carbon footprint.
Software developers will more and more ask themselves: how can I reduce the carbon impact of the code I write? What can I do to save computing power?
What it does
"Detective Carbon" is a tool that helps developers make code more energy efficient, reducing its carbon footprint across every machine that runs it. First, our algorithm scans the implemented algorithms and data structures and crawls dependencies to calculate the optimisation potential. This way you can see the estimated carbon footprint of your software. The second part of the solution is an intelligent helper directly in the IDE to find and fix those optimisations. In the best case, changing only a few lines of code can save a power plant's worth of energy.
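As an illustration of the scanning idea only (the actual Detective Carbon analysis runs in its Node.js backend), one classic "energy bug", a membership test against a list literal inside a loop, can be flagged statically with Python's standard-library `ast` module:

```python
import ast

def find_list_membership_in_loops(source):
    """Flag `x in <list literal>` tests nested inside loops, a pattern
    where switching to a set would cut lookup cost from O(n) to O(1)."""
    tree = ast.parse(source)
    findings = []
    for loop in ast.walk(tree):
        if isinstance(loop, (ast.For, ast.While)):
            for node in ast.walk(loop):
                if (isinstance(node, ast.Compare)
                        and any(isinstance(op, ast.In) for op in node.ops)
                        and isinstance(node.comparators[0], ast.List)):
                    findings.append(node.lineno)
    return findings

code = """
for item in data:
    if item in [1, 2, 3]:
        handle(item)
"""
print(find_list_membership_in_loops(code))  # → [3]
```

A real scanner would map each finding to an estimated energy saving and a suggested alternative.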
How I built it
Our solution consists of three parts. Each part serves a specific task:
Part 1 (Frontend):
The web app (Detective Carbon Web Analyser), built with react.js, serves as the main entry point for first-time users or occasional analyses. It is used to send an analysis request to our backend, where we use all our computational power to scan, cross-check and validate code. It then shows results about the quality, the environmental impact and possible improvements in a clean, modern style.
Part 2 (Backend):
The node.js backend does the heavy lifting and contains the "brain" of Detective Carbon. This is where we compute the majority of our analysis, both to keep response times as short as possible and to make sure we generate as low a carbon footprint as possible.
Part 3 (Plugin):
The plugin is a Visual Studio Code extension that can be used to analyse your code as you write it. It provides useful information about possible energy bugs and marks them, along with alternatives and improvements.
Challenges I ran into
We always love tackling big challenges, but in the area of climate there is just an unlimited number of possibilities to choose from. Finding the one piece where we really felt we were able to make a difference, with a big enough lever to influence behaviour change, was a challenge in itself.
Accomplishments that I'm proud of
We felt that our solution could not only work on a big scale, but it can make a difference in a time where we don't have much time left to act.
What I learned
We learned about the limitations of code analysis tools, and that so-called "energy bugs" are often just a sign of missing tools. Providing them in a nice, clean way is a gap that we're closing.
What's next for Detective Carbon
We strongly believe in our solution and "continuous sustainability" in specific. We really think it is absolutely mandatory to push this concept further and we definitely will! We would love to have you on board!
Built With
express.js
grep
javascript
node.js
npm
react
visual-studio-code
Try it out
gitlab.com
gitlab.com
gitlab.com
drive.google.com | Detective Carbon | Detective Carbon helps making software energy efficient. Analyse the carbon footprint with a source code scan, and identify greener choices for algorithms, data structures and dependencies. | ['Dennis Wehrle', 'Tobias Oliver Khan', 'Matt Koslowski'] | [] | ['express.js', 'grep', 'javascript', 'node.js', 'npm', 'react', 'visual-studio-code'] | 67 |
10,526 | https://devpost.com/software/smartify-legal-docs | Inspiration
Lawyers spend a lot of time reviewing contracts and other documents. These documents often contain information that requires outside context to fully understand the relevance of the document at hand (e.g. a purchase price listed in a foreign currency makes it difficult to tell, at first sight, whether a contract is important). The result is a constant back-and-forth between the document and Google and, as a consequence, a loss of time.
What it does
We created a service that enhances legal documents by recognizing important elements such as person names, company names, addresses and currencies, and adding a hover overlay which provides useful related information on these elements.
The user uploads their document (in PDF format for the moment, but eventually more document types could be added) to the service. The document is then processed to extract the relevant information. Finally, the result is displayed in a user-friendly web app interface that runs locally.
How we built it
The project consists of two main parts: front end and back end.
The front end part has the responsibility for displaying the content of the document, matching the tagged information with its position on the page and showing an overlay.
On the back end side, the text content of the document is extracted and important information is selected using Named Entity Recognition (NER). The elements that the program looks for are: name of persons, reference to money/currency, company names and IDs, addresses.
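As a toy stand-in for one of those entity classes, currency mentions can even be caught with a regular expression (a crude illustration only; the real back end uses a Flair NER model and also covers persons, companies and addresses):

```python
import re

# Matches e.g. "CHF 1'200'000" or "EUR 99.50"; a crude stand-in for
# the money/currency entity class that the NER model would tag.
MONEY = re.compile(r"\b(CHF|EUR|USD|GBP)\s?[\d'.,]+\b")

def tag_money(text):
    """Return (start, end, matched_text) spans for currency mentions,
    in the same span format the frontend overlay consumes."""
    return [(m.start(), m.end(), m.group()) for m in MONEY.finditer(text)]

contract = "The purchase price of CHF 1'200'000 is payable in EUR 99.50 instalments."
print(tag_money(contract))
```

The character spans are what matters: the frontend uses them to position the hover overlay on the rendered page.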
Challenges we ran into
Finding the right tools was a long process. In particular, we needed a free NER system that works reasonably well to use as a starting point. We tried several open-source libraries (NLTK, spaCy, Flair), and each has its advantages and disadvantages. The best results were produced by Flair, but recognition on one page takes 3.5 seconds on average on laptop hardware.
PDF is very unpleasant to work with programmatically, and it is quite difficult to make it render correctly in the browser. Not all aspects of PDF rendering in browsers are supported, and adding interactivity on top of the render was especially hard. If we had more time and manpower, we could probably have produced a more polished frontend, but we built a successful POC.
Accomplishments
We made a program that is able to process PDF documents and find where the relevant information is. The generated output is used to highlight the corresponding parts of the document, and the type of each entity is also indicated.
What I learned
I had never used an NER system before. I realised that there is a big diversity of such systems with different levels of functionality, from Microsoft's Azure to NLTK, which sit at opposite ends of the spectrum, and intermediate systems such as spaCy and MonkeyLearn. (Alexandre)
I learned a lot about NER, and was impressed by the pre-built classifiers provided with the Flair project. I think we also had an interesting experience working remotely (2 in Zurich, 1 in London). (Dmitrii)
What's next for Smartify Legal Docs
Improve the recognition of information, for example by fine-tuning the Named Entity Recognition models. The recognition of addresses in particular needs to be improved a lot.
Improve the quality and quantity of the extra data in the overlay.
Search for other elements to enhance, such as legal statuses and definitions, and anything that lawyers would find useful.
Improve the rendering of the output, especially the alignment.
Display more useful information based on the type of entity that is being highlighted.
Built With
flair
json
material-ui
ner
nltk
python
react
react-pdf
spacy
yarn
Try it out
github.com | Smartify Legal Docs | Improving the experience of reviewing legal documents | ['Vincent Kowalsky', 'Alexandre DeZotti', 'Dmitrii Dmitriev'] | [] | ['flair', 'json', 'material-ui', 'ner', 'nltk', 'python', 'react', 'react-pdf', 'spacy', 'yarn'] | 68 |
10,526 | https://devpost.com/software/empower-you | Logo
Team bonding
Profile
Explore the shop
Weekly goals
Optimizer
Purchases
More logos
More logos
Mission Statement
To be the trusted personal shopping assistant enabling smart and quick product purchasing decisions based on individual needs and preferences.
Inspiration
Real life was the inspiration for the idea behind this project. We tell the story using the fictional student Maxine Cannotcook:
Maxine has very simple eating habits and knows how to cook three different meals, which he switches between on a regular basis (spaghetti, riz Casimir, schnitzel).
Maxine does not have the time to research how to cook different, healthier or more sustainable meals, and does not know what's inside the products (nutrients, etc.).
Maxine spends a lot of time in front of the shelf analysing and comparing different products.
He is curious about alternatives and tries to compare different options, but it is too complicated, overwhelming, time-consuming and frustrating, because everything looks the same.
Maxine wastes time comparing different products, but always ends up buying the same ones.
Goals
More satisfied customers
Quicker decision making
Better cover to individual needs
Solve the paradox of choice: more choices do not result in more sales.
Customer profile
Health choices
Allergies
Sustainability
Price
What it does
M-power YOU
is a trusted personal shopping assistant enabling smart and quick product purchasing decisions based on individual needs and preferences:
it makes recommendations based on health goals, sustainability goals, nutrients, allergies, carbon footprint and price;
provides a recipe book to search and explore new recipes;
analyses previous purchases and recommendations to improve future purchases to better match your profile.
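Such a recommendation can be sketched as a weighted score over normalised product attributes, with allergy conflicts excluding products outright. This is a hypothetical Python model for illustration; the actual backend is Ruby on Rails talking to the Migros API:

```python
def rank_products(products, profile):
    """Score products against a user profile of attribute weights.

    products: list of dicts with normalised 0..1 attribute scores.
    profile:  dict of attribute -> importance weight, plus allergies.
    Any allergen overlap excludes a product outright.
    """
    def score(p):
        return sum(profile.get(attr, 0) * p.get(attr, 0)
                   for attr in ("health", "sustainability", "price"))

    allergies = set(profile.get("allergies", []))
    ok = [p for p in products if not (set(p.get("allergens", [])) & allergies)]
    return sorted(ok, key=score, reverse=True)

products = [
    {"name": "bio muesli", "health": 0.9, "sustainability": 0.8,
     "price": 0.4, "allergens": ["nuts"]},
    {"name": "oat flakes", "health": 0.8, "sustainability": 0.7,
     "price": 0.9, "allergens": []},
]
profile = {"health": 1.0, "sustainability": 0.5, "price": 0.2,
           "allergies": ["nuts"]}
print([p["name"] for p in rank_products(products, profile)])  # ['oat flakes']
```

The purchase-history feedback loop would then adjust the profile weights over time.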
How I built it
We decided to use Ruby on Rails for the backend to interact with the Migros API, and React Native for the frontend. We picked React Native to be able to develop one implementation for multiple platforms (iOS, Android, ...). We used Figma for the designs and mockups of the app interfaces.
Challenges I ran into
Building a prototype from scratch using technologies new to us, in a limited amount of time.
Accomplishments that I'm proud of
Didn't spill coffee on our laptops!
What I learned
New technologies (Ruby on Rails, React Native).
What's next for M-power YOU
Further develop the product and integrate it with existing Migros technological offering (website, scandit system, ...).
Code and server
GitHub repo
Example of API call
from our Ruby on Rails server
Built With
github
react-native
ruby-on-rails
Try it out
github.com
empower-listing-service.herokuapp.com | M-power YOU | Empowering YOU to make shopping easy | ['Filippo Broggini', 'Pascal Wieler', 'Nicole Jaquemet', 'Rohit Paudel', 'Shruti Srivastava'] | [] | ['github', 'react-native', 'ruby-on-rails'] | 69 |
10,526 | https://devpost.com/software/halofit-7i9hg3 | Select recipe
Recipe instructions
Inspiration
The HoloLens adds an additional dimension to our 3D physical world. We think it inspires people better than any other digital device because it allows users to get more understandable information about their surroundings without limiting their line of sight to a tiny digital screen.
Building on a device/platform we are passionate about, we want to help people develop a healthy, sustainable lifestyle. This is not only a personal advantage for a positive life, but also a contribution to an environmentally friendly society: for example, by controlling the amount of food the user consumes, waste is reduced.
What it does
It keeps track of the user's life routines, including their health condition and exercise habits, and documents the grocery items the user buys for meals. Based on the ingredients the HoloLens finds available in the fridge via image recognition, or on data provided by smart home devices, our app finds recipes for a meal that matches the user's diet habits and supports their health condition and exercise habits through nutrition suggestions. A ranking system is built from the user's extra input (e.g. personal preferences) and suggestions from Elena (the health-assistant chatbot built by the University of St. Gallen).
For the mature end product we have in mind, a QR code would be placed on the table in the kitchen area of the user's house. The HoloLens recognizes the code automatically when the camera captures it. If the local time is around mealtime, our app starts automatically and directly offers the user 3 recipe choices taken from the top 3 of the ranking system. Extra information, e.g. cooking time, calories, etc., would be displayed, personalised by the user in the settings of the app. After a recipe is chosen, the HoloLens displays 3D virtual objects of the ingredients needed by the recipe on the table. Through animations, the amount of each ingredient and operations like cutting or peeling required for every step would be shown in an understandable and interactive way. At the same time, Elena would provide health advice via voice, which is more user-friendly in a mixed reality setting.
How I built it
We built the app through Unity with MRTK.
Challenges I ran into
Despite a successful build of our code, we unfortunately could not test it on the real HoloLens 2 device provided by the University of St. Gallen. The reason is that to enter the development mode of the device we needed authentication from the University's IT department, but communication was sadly impossible over the weekend.
Combining virtual and on-site hacking has been challenging. On the communication side, we faced problems like low sound quality and loud environmental noise in Zoom calls. This led to low efficiency in exchanging thoughts and ideas, especially since we did not know our remote teammates at all before the event.
The HoloLens 2 is a brand new and unique product with very limited popularity, so there is neither much documentation nor much discussion online regarding development problems. We were short of help when we ran into problems.
Accomplishments that I'm proud of
We are one of only two teams at HackZurich that had the guts to claim a brand new challenge on a cutting-edge device, the HoloLens 2, that not many people have even heard of. We are an adventurous team!
We came up with a scenario that fits our original goal for the hackathon (we could try something new, and the project has practical value). We designed a good architecture for the app's development, even though half of our team members do not study computer science. We split the work well, so that everyone got technical things to investigate and had fun throughout the event.
What I learned
We learned some basics about app development for the HoloLens 2, and how to break a fairly big goal down into small, achievable tasks.
What's next for HaloFit
If we get a testable device, testing our current build will be the first step. Then, based on the performance, we might need to debug and improve the data flow efficiency. Next, we have to replace all our current fake data with real API calls. Finally, we would implement all the other functions we originally designed for the scenario.
Built With
unity | HaloFit | The app built for Hololens 2 is aimed for helping people realize a healthier life style. The goal is to be achieved through assist maintaining nutritious diet and regular exercise. | ['Xiao Jean Chen', 'Ashray Adhikari', 'Mahiem Agrawal', 'Artem OBOTUROV', 'Dora Liang'] | [] | ['unity'] | 70 |
10,526 | https://devpost.com/software/handewegapp | Flashscreen
Viewfinder
Product Landing Page
Allergen Information
Afraid of touching too many surfaces in current times?
Now, with the new HändeWegApp, there is no longer any need to grab the wrapping on the hunt for ingredients, Nutri-Score or allergen information. With image recognition, the product of interest is identified, and all helpful attributes are conveniently displayed on your mobile phone.
Built with Unity and deployed on Android for a plethora of phones.
During the development process, multiple challenges arose, starting with recognizing the product, through parsing the Migros API, to creating a mobile layout with Unity.
Nevertheless, we managed to code an app which solves the above-mentioned task for a selected range of products.
During the last two days, valuable hands-on experience with Vuforia, Unity and APIs was gained. Friendships were built, and maybe a couple of pounds added.
As next steps, better image recognition would improve the user experience, including a classifier for better product recognition, as well as a scalable layout to improve the fit for various phone screens.
Built With
c#
migros
phone
unity
vuforia
Try it out
github.com | HändeWegApp | Get all your shopping information without touching a single item | ['Costanza Maria Improta', 'Tobias Zumsteg', 'farkas93 Kalotay', 'Jan Leutwyler'] | [] | ['c#', 'migros', 'phone', 'unity', 'vuforia'] | 71 |
10,526 | https://devpost.com/software/digimatic | Digimatics allows users to store all their previous uploads in the cloud so they can eventually switch to a paperless workflow
This is a screenshot of the main screen, showing the output JSON
A manipulable graphing tool is offered to users so they can reärrange the schematics to better understand the interconnexions.
Inspiration
We were inspired by PhotoMath and what it represented for mathematics, and we thought it might be useful to have such a tool for schematics. Moving schematics from paper to digital form gives you the freedom to use and store them anywhere: easier integration with BMS (building management systems) and maintenance and asset-management software, and easier facilities management and control.
The user interface was based on WolframAlpha's, because we're striving to be a PhotoMath for schematics.
What it does
It recognizes the schematics from a picture and allows the user to see the digital connexions within them. The user can export the schematic as a JSON or GraphViz file. The user can also see the schematic as a manipulable graph.
How we built it
We used OpenCV for the recognition and VueJS for the UI. We do line and symbol detection then recognize the connections and the letters. This greatly simplifies the recognition task.
Challenges we ran into
Recognition of lines (connections) was hard: if you only segment lines, you get all lines, including those belonging to other symbols. Purely colour-based extraction was not possible either, because the hand-drawn images contained black lines inside the red ones, which made segmentation impossible, so we used a form of thresholding over all RGB channels in the image to extract the connection lines.
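The per-channel thresholding trick described above can be sketched with NumPy; the threshold values below are invented for illustration and are not the ones DigiMatics actually uses:

```python
import numpy as np

def red_line_mask(img, r_min=150, gb_max=100):
    """Keep pixels that look red across all three RGB channels.

    `img` is an H x W x 3 uint8 array in RGB order; the thresholds are
    illustrative guesses, not the team's tuned values.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (r >= r_min) & (g <= gb_max) & (b <= gb_max)

# A toy 1x3 "image": a red pixel, a black pixel, a white pixel.
img = np.array([[[200, 30, 30], [10, 10, 10], [255, 255, 255]]], dtype=np.uint8)
mask = red_line_mask(img)
# Only the red pixel survives: black fails the red minimum, white fails
# the green/blue maxima.
```

Requiring all three channels to agree is what separates a red connection line from the black strokes drawn inside it.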
Integrating Cytoscape into Vue
This was difficult because the model used by Vue was very different from Cyto's. It took several hours to get them to play together, even with the vue-cytoscape plugin, and involved jerry-rigging some CSS to guarantee the positioning is correct. This was probably trickier than trying to integrate AppKit stuff into SwiftUI.
Accomplishments that we're proud of
Getting DigiMatics up and running semi-remotely with a new team in very little time.
The quality of the user interface and the care given to the æsthetics
What we learned
Breaking the problem into smaller pieces and having an MVP
Using Vue.js and Vuex.
What's next for DigiMatics
We think the next thing to pursue would be to miniaturise our vision stack and try to implement it on a smartphone or tablet, so that it may be used on the go. Apart from that, we also think it would be interesting to generalise this solution, so that other schematics (e.g. electrical, gas) may be used. Another thing to consider would be allowing natural language queries to be processed on the graph (e.g. "is pump X connected to valve Y"), which would simplify any necessary cross-checking.
Built With
django
python
vue
Try it out
github.com
github.com | DigiMatics | Digital schematics from paper | ['Hachi Hotarota', 'Tarek Abdellatif', 'Benedict Lindner'] | [] | ['django', 'python', 'vue'] | 72 |
10,526 | https://devpost.com/software/green-energy | using blockchain technology to enhance and optimise production and distribution of renewable energies
Built With
blockchain | energychain | using blockchain technology to enhance and optimise production and distribution of renewable energies | ['Ali Mataei moghadam'] | [] | ['blockchain'] | 73 |
10,526 | https://devpost.com/software/design-for-duck-technology | Inspiration
Technologies such as augmented reality, digital fabrication, sensors and IoT enable new possibilities. If several such converging technologies are combined, the opportunities are almost limitless and enable new, unprecedented applications. As a team, we work with different technologies and were inspired by this idea of bringing several of them together. Within this project, we realized this vision and present a digitally fabricated, technology-enabled DUCK.
What it does
The duck is a 2-meter-high, digitally augmented wood sculpture that offers a unique user experience for events, parties, developers, or anybody who wants to connect with new converging technologies and ideas. The duck offers LED-driven, multi-color visual effects and provides a digital, augmented layer for user control and experience through Microsoft HoloLens 2 interfaces. With the duck, we show how multiple converging technologies can be combined in an unprecedented sculpture.
How We built it
The design of the duck was generated using modern algorithms for digital design and fabrication. We use custom generative design scripts to generate the nodes and struts of the duck and optimize them for fabrication. To build the wooden wireframe of the duck, we mount wood slabs with laser-cut nodes. After building each layer and section of the duck, we placed 100 meters of LED lights on the slabs, which illuminate the duck during night and day.
Challenges We ran into
To generate the design and layout plan of the duck wireframe, we could not use a manual CAD design process. Therefore, we had to program custom algorithms to generate all the strut and node geometries using Rhino/Grasshopper. To assemble the duck sculpture, we had to mount over 150 slabs with 100 nodes, which is not possible using standard drawings or measurement tools. Therefore, we used the augmented reality interface of a Microsoft HoloLens 2 to overlay the digital duck as guidance during assembly. This allowed us to assemble the duck sculpture in under 6 hours. Besides using augmented reality for duck assembly, we use the AR interface to also control the individual LED stripes of the duck. This allows us to generate new visual LED effects, which can be easily controlled by a user.
Accomplishments that We are proud of
1) Automated design and optimization of complex duck wireframe structure using generative design
2) Intelligent use of augmented reality (AR) interface to assemble wood structure
3) TCP/AR connection to, and interaction with, over 100 meters of LED lights
4) Fabrication of large scale, 2 meter high wood structure in under 6 hours
5) Great agile team achievement and implementation of our vision
What's next for Design for Duck Technology
Further optimization of technology enabled process and adaption for new geometries and sculptures
Transfer of playful ideas to real-world industrial challenges and applications
Publication in high impact journal (>10) and Instagram postings for the fame
Built With
agile
boschakkuschrauber
c#
dachlatten
design
digital
fabrication
grasshopper
lasercutter
led
lotkolben
microsoft-hololens
mrtk
python
raspberry-pi
rhino
tcp
unity
visual-studio | Design for Duck Technology | The duck connects multiple converging technologies, which merge in the physical and digital space. As a playful platform, it brings together people from different disciplines and inspires new ideas. | ['Patrick Beutler', 'Manuel Biedermann'] | [] | ['agile', 'boschakkuschrauber', 'c#', 'dachlatten', 'design', 'digital', 'fabrication', 'grasshopper', 'lasercutter', 'led', 'lotkolben', 'microsoft-hololens', 'mrtk', 'python', 'raspberry-pi', 'rhino', 'tcp', 'unity', 'visual-studio'] | 74 |
10,526 | https://devpost.com/software/corona-world | Public mood analyzed using Watson Tone Analyzer
Inspiration
Application that:
Provides realtime information from real people & reputable sources
Contains insightful analytics
Has easy-to-use interface and on-the-go
What it does
"Corona Scare Application" that will display a map with the (almost) real-time "Corona Scare Level" status of an area. Now it is include twitter posts,
It also displays mood levels powered by Watson, including levels of anger, disgust, fear and joy.
Events pulled from NASA EONET
Tweets from Twitter API
News articles from Google API
Weather from Watson Weather Insights
Public mood analyzed using Watson Tone Analyzer
Safety Tips to help users prepare for any given natural disaster.
Satellite view using NASA GIBS
Neural Network Analysis, discovering natural disasters trends around the globe; using 750+ reference data points from Glide Disaster Database
How I built it
Open Source
https://github.com/qwook/3rdRock
Try it for yourself!
https://tomkax.github.io/coronaworld/
Challenges I ran into
Accomplishments that I'm proud of
What I learned
Geolocation identification
Sentiment analysis
Twitter API
Watson
What's next for Corona World
We will polish this app and maybe try to add data from other social media (Instagram, etc.)
Built With
ibm-watson-tone-analyzer
ibm-watson-weather-insights
nasa-eonet
nasa-gibs
node.js
react
three.js
twitter/google-api
Try it out
tomkax.github.io
bit.ly | Corona World | "Corona Scare Application" that will display a map with the (almost) real-time "Corona Scare Level" status of an area. | ['Tamara Koliada', 'Mariia Koliada'] | [] | ['ibm-watson-tone-analyzer', 'ibm-watson-weather-insights', 'nasa-eonet', 'nasa-gibs', 'node.js', 'react', 'three.js', 'twitter/google-api'] | 75 |
10,526 | https://devpost.com/software/burning-expenses | Main Activity / Start Screen
Inspiration
Having worked remotely for quite some time, I know the annoying practice of keeping every receipt in order to get your expenses back. Furthermore, we see the potential to establish a receipt-based loyalty program for small to medium enterprises. That's how we had the idea to build an ML-based receipt scanner that seamlessly sums up your expenses.
What it does
Films a receipt, finds text & numbers, sums up the expenses, and saves a picture of the given bill with the matching information. Saves time.
How I built it
Thanks to our friends at Huawei we were able to test their HMS Core ecosystem on a device generously given to us for the duration of HackZurich. By using the ML Kit framework which lives in HMS Core we built a receipt scanner with ease in a language previously unknown to us, Kotlin.
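The summation step that follows text recognition is language-agnostic; below is a minimal Python sketch of that logic (the app itself is written in Kotlin on HMS ML Kit, and the regex and sample receipt lines are assumptions for illustration):

```python
import re

# Matches Swiss-style receipt amounts such as "3.95" or "12.50" at line end.
AMOUNT_RE = re.compile(r"(\d+\.\d{2})\s*$")

def sum_receipt(lines):
    """Sum the trailing price of every recognized receipt line.

    `lines` stands in for the text blocks an OCR step would return;
    lines without a trailing price are simply skipped.
    """
    total = 0.0
    for line in lines:
        m = AMOUNT_RE.search(line)
        if m:
            total += float(m.group(1))
    return round(total, 2)

receipt = ["Milch 1.5l  1.65", "Brot  2.20", "TOTAL"]
# sum_receipt(receipt) -> 3.85
```

Anchoring the pattern at the end of the line avoids picking up quantities like "1.5l" that appear inside product names.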
Challenges I ran into
Working with diverse previously unknown frameworks, languages and paradigms.
Accomplishments that I'm proud of
Working with diverse previously unknown frameworks, languages and paradigms.
But seriously, finishing an MVP on time in an area (ML / Computer Vision) we both were not acquainted with.
What I learned
Kotlin, Android Studio, HMS Core -- ML Kit and some Java.
What's next for Burning Expenses
We see a strong use case for our solution to the expense summation problem therefore we will be looking into further optimization and development of our application.
PS: We don't have video editing software, which is why we put our pitch and the demo together in a playlist -- we hope you don't mind!
Built With
android
android-studio
hmscore
kotlin
sqlite
Try it out
github.com | Burning Expenses | Handle your personal or your buisness expenses with ease | ['Noe Thalheim', 'Konrad Handrick'] | [] | ['android', 'android-studio', 'hmscore', 'kotlin', 'sqlite'] | 76 |
10,526 | https://devpost.com/software/circularasphalt | Inspiration
CO2 emissions from the construction sector make out a significant part of the global emissions and are often overlooked in the current sustainability discussion.
What it does
Our web application encourages the recycling of asphalt by matching suppliers and purchasers with the objective of minimizing the travel distance of the material. This is attractive from both a cost and an environmental perspective.
How I built it
The server part of the application processes and stores all incoming requests, while also solving the route optimization problem for a large number of different travel-distance minimization problems. In the front end, the user gets an overview of the existing supply and demand situation on a map and can enter their offers.
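The supplier-purchaser matching can be illustrated with a simple greedy nearest-match pass; this is only a sketch with straight-line distances and invented site names, not CircularAsphalt's actual optimizer, which would use road distances (e.g. from the Google Maps API):

```python
from math import hypot

def greedy_match(suppliers, purchasers):
    """Pair each purchaser with the nearest still-unused supplier.

    Both arguments map a name to an (x, y) coordinate tuple. A real
    system would solve the assignment globally; greedy matching is a
    first approximation.
    """
    free = dict(suppliers)  # copy: name -> (x, y)
    pairs = []
    for p_name, p_pos in purchasers.items():
        best = min(free, key=lambda s: hypot(free[s][0] - p_pos[0],
                                             free[s][1] - p_pos[1]))
        pairs.append((p_name, best))
        del free[best]  # each supply lot is used once
    return pairs

suppliers = {"site A": (0, 0), "site B": (10, 0)}
purchasers = {"road project": (9, 1), "car park": (1, 1)}
# greedy_match(...) -> [("road project", "site B"), ("car park", "site A")]
```

Replacing `hypot` with an API-backed travel-time lookup turns the same skeleton into the cost-and-CO2-aware matching the project describes.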
Challenges I ran into
Some of the challenges we faced included interfacing Python with external APIs (for example the Google Maps API) and orchestrating a network of Docker containers.
Accomplishments that I'm proud of
It was a pleasure to learn so many new tools and to take a deep dive into a business sector that was completely new to us. We enjoyed working hard in this novel team setup while also enjoying the experience of HackZurich.
What I learned
What's next for CircularAsphalt
Built With
docker
flask
javascript
json
python
react
rest
Try it out
github.com | CircularAsphalt | CO2 emissions from the construction sector make out a significant part of the global emissions and are often overlooked in the current sustainability discussion. | ['Balazs Pinter', 'Manuel Galliker', 'Sandro Lutz', 'Josefine Quack'] | [] | ['docker', 'flask', 'javascript', 'json', 'python', 'react', 'rest'] | 77 |
10,526 | https://devpost.com/software/mapie-fk87jq | Mapie
Inspiration
The Global Pandemic and Swisscom Workshop
What it does
This project helps reduce crowding across the city of Zurich for people's safety during the pandemic. We created an app that tracks the crowd density of various places in the city with the help of a map. The user logs into the app, where a chatbot collects data such as which place they are visiting and in which time frame. The app then suggests a different time frame or recommends similar points of interest to the user.
How we built it
For the front end: made in React.js, as the data for the homepage might get gigantic with use, and React handles a heavy number of components well.
The data that is rendered is fetched via APIs made in Flask
For backend: Rasa Core and Rasa NLU chatbot, Machine learning, python, pycharm
Challenges we ran into
Accomplishments that we're proud of
What we learned
What's next for Mapie
Built With
machine-learning
pycharm
python
rasa-core-and-rasa-nlu-chatbot
react
Try it out
github.com | Mapie for Social Good During this pandemic | An app that suggests the least crowded places for people to visit. It's very helpful application for social good during this pandemic. | ['Pratik Singh', 'Shalini D'] | [] | ['machine-learning', 'pycharm', 'python', 'rasa-core-and-rasa-nlu-cahtbot', 'rasa-core-and-rasa-nlu-chatbot', 'react'] | 78 |
10,526 | https://devpost.com/software/sustainable-price | Inspiration
It used to be so easy to distinguish the forbidden fruit. Nowadays however everything has gotten much more complex. Every fruit has a forbidden version.
Imagine you're doing your weekly grocery shopping and are standing in front of a shelf full of different tomatoes. Which tomato should you choose? The cheapest one or the premium version? The regional one or the world-traveler? The organic one or the conventional? A label? Which label: Bio, Demeter, Fairtrade, Vegan, TerraSuisse, Alnatura ... ? Or is your current focus actually to become your healthiest you - which is the healthiest decision? We're spoilt for choice.
We know your struggle.
And that's why we built Choicebia! We help you get over your fear of making the wrong choice by suggesting the
right
product based on what's important to
you
.
Challenge #1
,
Challenge #2
and
Challenge #18
What it does
Firstly, you create your profile by telling us what you care about: Sustainability, healthy eating, saving money. Or all of the above.
Then it's up to you: when you're in the Migros shop you can scan any product you're interested in, and we will suggest the product best suited for you by comparing it to all other similar products based on the profile we have of you.
How we built it
Firstly, we made a prototype with Figma.
Then we built an iOS app (Swift) and used the Scandit SDK to read the barcodes of the products. A request to our Python backend (Flask) is triggered, fetching the most suitable product for the user. The backend connects to the Migros API to receive the underlying product information, calculates the scores and product comparisons, and returns them to be displayed in the UI.
We chose sustainability, healthiness, and cost as the most important factors for the customer.
We computed our own sustainability score because we wanted it to be product-specific, which wasn't possible using 3rd-party APIs. The score takes into account distance travelled and sustainability certificates, and we would enhance this in the future.
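A score of this shape could look as follows; the label bonuses, weighting and distance normalization below are invented for this sketch and are not Choicebia's actual formula:

```python
def sustainability_score(distance_km, labels, max_distance_km=20000):
    """Score a product between 0 and 100: a shorter travel distance and
    recognized labels both raise the score.

    The bonus table and the 70/30 split between distance and labels are
    illustrative assumptions that a real app would calibrate on data.
    """
    label_bonus = {"bio": 15, "demeter": 15, "fairtrade": 10}
    distance_part = 70 * (1 - min(distance_km, max_distance_km) / max_distance_km)
    bonus = sum(label_bonus.get(l.lower(), 0) for l in labels)
    return round(min(100.0, distance_part + bonus), 1)

# A regional organic tomato outscores a long-haul conventional one.
regional = sustainability_score(50, ["Bio"])
imported = sustainability_score(11000, [])
```

Because every term is bounded, scores from different product categories stay comparable, which is what a shelf-side recommendation needs.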
It's deployed on the IBM Cloud (cloudfoundry).
Challenges we ran into
We wanted to use the Eaternity API suggested by IBM, but sadly they didn't provide the data we needed. For this reason we calculated our own sustainability score.
Not enough sleep
Accomplishments that we're proud of
Built a full working product in Python and Swift
Got the goodies we wanted - ALL the ovo rocks ;)
What we learned
Deployment IBM Cloud
Design Thinking Tools
What's next for Choicebia
Team up with Migros to publish an application which can be used in any grocery store to help find the best product for you.
The Migros App already contains the possibility to scan products. The next logical step would be to combine it with our functionality and include the possibility to pay the scanned items directly via the app.
Integrate analytics - challenge yourself and friends to become more sustainable.
Built With
cloud-foundry
figma
flask
github
ibmcloud
ios
miro
python
scandit-product
swift
Try it out
www.figma.com
github.com
github.com | Choicebia | Make more sustainable and healthier choices while shopping without breaking the bank - don't fear choices. | ['Omar Ahmed', 'Alexander Davis', 'Dana Shmaria', 'Oliver Brenner', 'Carla Jancik'] | [] | ['cloud-foundry', 'figma', 'flask', 'github', 'ibmcloud', 'ios', 'miro', 'python', 'scandit-product', 'swift'] | 79 |
10,526 | https://devpost.com/software/foodster | Overview page, my favorite recipes & all my grocery lists
Recipe tinder - swipe left for the next and right to add it as favorite
Take a closer look at the recipe
Use the grocery list and the shopping cart functionality to get detailed information about the products
Inspiration
Google Keep, Tinder, Chefkoch
What it does
The app allows users to manage their shopping lists and additionally select what they have already purchased (which represents the scanning of the barcode in reality). For curious users, we have also included the 'Recipe Browser', which offers Tinder-like functionality. Instead of searching for the perfect human counterpart, the user searches here for new favorite recipes.
How we built it
We used Google Flutter to create this project. This way, we are not constrained by Android, but can also publish the app to iOS and as a website.
Challenges we ran into
Due to time constraints and because we are only 2 programmers, we could not realize all the ideas we had, but only the most important ones.
Accomplishments that we're proud of
That we have created a fully functional application with live API data that is actually retrieved from the Migros server. Also, we are currently having a lot of fun finding new recipes that we can cook ourselves using the Recipe Tinder functionality. We are also proud that our limited (wo)manpower has gotten us this far in implementing the app.
What we learned
How to use the Elasticsearch API inside Flutter and we also discovered some new Flutter widgets.
What's next for Foodster
Publishing it as a website and adding more features, like using more of the offered data (e.g. showing product regionality: 30% of the products in your cart are from Switzerland, 10% from Austria, ...; a barcode scanner to scan products, ...).
Built With
affinity-designer
android
android-studio
elasticsearch
flutter
material
rest
Try it out
github.com
drive.google.com | Foodster | Tinder-like recipe browser combined with a grocery list manager and a virtual shopping cart | ['Sandra Brettschuh', 'Johannes Kopf'] | [] | ['affinity-designer', 'android', 'android-studio', 'elasticsearch', 'flutter', 'material', 'rest'] | 80 |
10,526 | https://devpost.com/software/carboncents | Inspiration
Several climate records were broken in 2019 across the world, including unprecedented temperature highs and extreme weather events.
Extensive use of personal vehicles contributed to the rise of individuals' greenhouse gas (GHG) emissions
What it does
Incentivise users to lower their individual carbon footprint through a reward-based system
Urge users to be more conscious of their lifestyle habits, driving towards sustainability
Impart the importance of immediate environmental protection in the collective consciousness of society
Users can access our CarbonCents App to:
track and actively reduce their carbon footprint
view air quality-related information
attend curated eco-festivals/events/webinars
Gamified Carbon Footprint Scoreboard
Users will be able to:
collect Carbon Cents which are reflected in their individual scoreboard
receive cashback rewards such as public transport vouchers and eCommerce discounts with our merchant partners
How I built it
Used Figma and React Native to build the prototype app with the help of Azure Functions
Challenges I ran into
Convert Figma to React Native, integration of Azure Functions with React Native
Accomplishments that I'm proud of
Successfully built a prototype app
What I learned
integration of Azure Functions with React Native, business aspect of an app's success
What's next for CarbonCents
We will keep working on this app to partner with other companies and reach every person's phone
Built With
python
react-native
Try it out
github.com | CarbonCents | Get rewarded for reducing carbon emission | ['Deepjyoti Paul', 'Ngoc Nguyen'] | [] | ['python', 'react-native'] | 81 |
10,526 | https://devpost.com/software/miplan | Swipe the menu!
Tired but at that point still alive.
Inspiration
Make it as easy as a couple of swipes to get a personalized menu plan and shopping list for the week.
What it does
Recommends recipes according to the user's preferences and restrictions, then has the user swipe through the recipes to choose what they want to cook.
How we built it
Designed the idea and illustrated it in Figma, then developed it in Android Studio with Flutter.
Challenges we ran into
Developing the code was quite cumbersome, especially when we had such a precise idea in mind.
Accomplishments that we are proud of
Our app, even if incomplete, and our frames on Figma.
What we learned
We need more software developers especially UX/UI.
What's next for MiPlan
Refine the implementation of the app, add the purchase history of the users to suggest recipes, refine the reporting.
Built With
android-studio
dart
figma
flutter
ios
Try it out
drive.google.com | MiPlan | Swipe your cooking plan for the week | ['Myriam Schönenberger', 'Chris Hardaker', 'Leopold Franz', 'Roger Juanola Jornet'] | [] | ['android-studio', 'dart', 'figma', 'flutter', 'ios'] | 82 |
10,526 | https://devpost.com/software/migros-me | App logo
Logo
Home Screen
GIF
Tinder-like swipe through the recipes
Show my progress
My Rewards
Offset my CO2
Notification services of Robo Basket
About Me - Settings
Inspiration
We believe that when it comes to nutrition, our supermarket (Migros) knows better than some random health app. We wanted to create a much more robust ecosystem around nutrition where Migros can come closer to its customers, giving them more control over their nutrition planning while rewarding them and helping them be sustainable by understanding the impact of their food on the environment and offering ways to offset their CO2. We wanted Migros to be the "Apple of Nutrition".
What it does
Migros Me App is a nutrition ecosystem run by Migros where customers can find all information about the food they consume:
It helps them create better nutrition plans and manage health goals
Robo Basket auto-orders food according to the customer's nutrition plans, while providing notifications about purchased food, such as food that is about to expire.
Migros Me syncs with a fitness tracker / local health data to show both the calorie intake of food consumed and the activity and calories burned (not many apps can do this well without manual user input).
Migros Me can also offer rewards based on accomplishing health goals; this can be a platform for Migros to offer services from its wider group. Rewards can also be sent as gifts to friends, making them a platform to gain more clients.
Migros Me uses the nutrition data to also show the impact of the consumed food on CO2 emission and sustainability, clients can use this information to opt-in planting trees and other sustainability activities.
How I built it
We used Kotlin, a cross-platform, general-purpose programming language with type inference that is designed to interoperate fully with Java, and we used Huawei Mobile Services to deploy our app. Throughout the app we used some of the more robust navigation patterns, such as Tinder-like swiping through the recipes, and we used some of the product data to build a list of recipes the user can interact with.
Challenges I ran into
App development was new to us (especially on HMS - Huawei Mobile Services); we ran into some rendering issues and app-flow bugs, but with a bit of googling we managed to overcome some of them. Still, we would need more time to build all the functionalities of our very ambitious app.
Accomplishments that I'm proud of
Designing a functioning Android app in such a short time is always a challenge; however, we were able to create the key areas and functionalities of our app. We worked together very well and learned a lot in the last 40 hours. We are very proud of our idea of building this nutrition ecosystem.
What I learned
We very much enjoyed working together, brainstorming user journeys, and finding ways to use the data Migros has to create innovative services. We are data scientists and app development was new to us, so we very much enjoyed creating the Migros Me app. We also learned that Migros has a striking amount of information on its products.
What's next for Migros Me
We hope Migros finds this nutrition ecosystem app of huge value to its business; we intend to continue working on our application and improving its usability and functionality.
Built With
android
java
javascript
kotlin
Try it out
github.com | Migros Me - Better Me | A better way to manage your nutrition, receive health rewards and have a positive impact on sustainability | ['Zhifei Yang', 'Siqi Dai', 'Makbule Asir', 'Dai Ling', 'WAEL WILLIAM'] | [] | ['android', 'java', 'javascript', 'kotlin'] | 83 |
10,526 | https://devpost.com/software/primeclime | snapshot of our code
snapshot of our (not yet finished) front-end webpage; (link attached)
Inspiration
Workshop #4: Climeworks & Accenture
PrimeClime
The Climate Crisis – we often speak about it – but also we often don't know what is the right choice to make in order to live in an eco-friendly way. Some people really care a lot, but again others have too few incentives to change their habits. So we thought it would be great to seek for a solution, where everyone would enjoy acting right for our planet. And what is better than a game to encourage everyone to join in and win?
We thought that using gamification and "challengification", we could develop an app that gives you "green reward points" for good eco-deeds. With an approach similar to the Bike To Work Challenge, we wanted to give you the opportunity to participate in different challenges. Weekly challenges, challenges with your friends or family, public ones and many more would create a big incentive to participate, mostly because we are all competitive in a certain way.
With this background idea, we also thought that using in-app purchases, ads and other tools we could raise money for Climeworks' mission. Selling those "green reward points" would directly help Climeworks bind CO2 from the atmosphere. And simultaneously, we give people a better feeling: they see and feel that they acted right and had an impact on the environment. It is important that people start to realize that their decisions have an impact on a global scale, so why not show them? With PrimeClime, this could be possible!
How we built it
The idea is the following: using an app with a user account, we let users record their deeds and actions. For the beginning, categories like consumption, transportation and active CO2 binding are the main target. Using CO2 footprint databases, we can calculate how much CO2 is produced by a user based on their inputs. To prevent users from entering only very eco-friendly behaviour, we created a system where all inputs result in reward points, but the less CO2 an input's activity produces, the more reward points are earned.
For the prototype, we focused on alimentary products from Coop: using an image of a Coop receipt, we extracted text data specifying product type and quantity in order to calculate the CO2 emissions of the purchased products. We compared the products with a database of approximately 200 basic alimentary products and their CO2 emissions per kg, all hosted on a server. Using the quantity, we calculated each product's CO2 emissions and assigned them a reward-point score, which was then uploaded to the user account.
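The inverse mapping from emissions to reward points can be sketched as follows; the cap, scaling and the two footprint values are made up for illustration and are not PrimeClime's actual tariff or database:

```python
def reward_points(product_kg, co2_per_kg, cap=25.0):
    """Translate a purchase into green reward points.

    Every purchase earns at least one point, but the fewer kilograms of
    CO2 it causes, the more points it yields, which discourages users
    from simply omitting less eco-friendly purchases.
    """
    emissions = product_kg * co2_per_kg
    return max(1, round(cap - emissions))

# Per-kg footprints from a tiny stand-in reference table (illustrative):
footprints = {"carrots": 0.4, "beef": 27.0}
# 1 kg of carrots: reward_points(1, footprints["carrots"]) -> 25
# 1 kg of beef:    reward_points(1, footprints["beef"])    -> 1
```

The floor of one point is the design choice described above: logging anything is rewarded, but low-emission choices are rewarded most.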
Challenges we ran into
Challenges were not rare, but given the short time we had, we were able to master some of them:
Starting at the beginning, extracting text data from a receipt (which is often printed very small and faded) was not easy. Even after cleaning up with some filters, product names and quantities were sometimes not extracted correctly. There were also problems with the non-uniform format of the product names and quantities: some products are given by a simple name and weight in kg on one line, others have special names and quantities like "1", and sometimes the quantity or weight is even given inside the product name.
Surely, with more time and resources, these problems can be solved. Using the barcode and the encoded receipt information would be a solution, but for that we would have to work closely with the retail company, as we would need access to their product databases.
Then, we had some difficulties comparing our basic alimentary database entries with the extracted text data. Sometimes a false character in the product name would defeat our comparison algorithm. Handling all the small irregularities in special or compound names was not trivial either. But in the end, we had a good approximation of the CO2 emissions caused by the purchased food.
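One standard-library way to survive a single wrong OCR character is tolerance-based lookup with `difflib`; the product list and cutoff below are invented for illustration and this is not the team's actual comparison algorithm:

```python
from difflib import get_close_matches

# A tiny stand-in for the ~200-product emissions database.
known_products = ["tomaten", "kartoffeln", "vollmilch", "zucker"]

def match_product(ocr_name, cutoff=0.6):
    """Map a possibly garbled OCR'd product name to a database entry.

    `cutoff` controls how dissimilar a name may be before we give up;
    similarity is difflib's ratio of matching characters.
    """
    hits = get_close_matches(ocr_name.lower(), known_products, n=1, cutoff=cutoff)
    return hits[0] if hits else None

# An OCR error ("Kartofeln") still finds the right entry:
# match_product("Kartofeln") -> "kartoffeln"
# match_product("Zahnpasta") -> None
```

Exact string comparison fails on one flipped character; a ratio-based match degrades gracefully instead.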
Accomplishments that we're proud of
Participating for the first time in a Hackathon, we were very proud of what we accomplished overall! Already extracting, treating and cleaning up the raw text data from a receipt was a big accomplishment, as was handling different image types like .HEIC/HEIF, .JPEG etc. The comparison and rewarding system uploading a point score to the user account was a milestone too. We are also very proud of our idea, which could, with some more time and effort, become an entertaining way to help save our world.
What we learned
We have definitely learned a lot! Text extraction was new to us, and we learned a lot about filtering and bug fixing! In general, the Hackathon was a great experience!
What's next for PrimeClime
PrimeClime is a very interesting project, but as we are all mid-studies, we would like to improve and further develop the prototype, though surely not as intensively as during the last few days!
Built With
amazon-web-services
html
javascript
json
mysql
python
Try it out
github.com
18.192.135.222 | PrimeClime | Get Green Credits by scanning your grocery store receipt and win prizes! | ['Daniel Reperant', 'errorplaye'] | [] | ['amazon-web-services', 'html', 'javascript', 'json', 'mysql', 'python'] | 84 |
10,526 | https://devpost.com/software/dyorka | Circuit AI
AI powered circuit recognizer application.
Tech stack
Backend
YoLoV3
python 3.7.9
ImageAI
for YoLoV3 models
RetinaNet
Google Colab
- used this workflow -
https://colab.research.google.com/drive/1v3nzYh32q2rm7aqOaUDvqZVUmShicAsT#scrollTo=I_AoWG4lHFME
Client
React
GoJS
We started our work by exploring computer vision. At first we looked at the ImageAI library, which is a high-level wrapper around TensorFlow. We used the YoLoV3 model, as ImageAI provided custom training only for that model.
As test data we used the provided images, as well as rotated, scaled & blurred copies of them. Annotation tool:
https://www.makesense.ai/
At first we tried to run training locally, but performant training on CUDA cores didn't work on any of our local machines, so we ran training for 12 hours on CPU.
The results were not quite satisfying, so we decided to move to the cloud - Google Colab, which provides GPU computing. We also decided to change the model from YoLo to RetinaNet, as its description stated it is more accurate, but a bit slower.
However, we encountered a problem of small dataset. Seemed like retina requires significantly larger dataset. We managed to train it, but the results were worse than in our locally trained YoLo version.
The results of the trained models (both YoLo and RetinaNet) were unstable and inaccurate, so we switched to OpenCV pattern recognition and a custom solution.
Siemens automation circuit recognizer
A special circuit recognizer, made for HackZurich 2020 and the Siemens "Graph the Building" challenge
Honors go to @mahmut-aksakallli for his image recognition studies, which this recognizer is founded on.
How it works
1) Potential components are segmented using a block adjacency graph
2) Components are identified via contour-based classification, using a support vector machine trained on HOG descriptors
3) Potential lines are detected using a line segment detector algorithm
4) Components and lines are merged into a common graph based on connecting coordinates
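Step 4 can be sketched as a coordinate-snapping pass: each detected line endpoint is attached to the nearest component within a pixel tolerance, and each attached pair becomes a graph edge. The data shapes (component centres as points, lines as endpoint pairs) and the tolerance value are illustrative assumptions, not the project's actual structures:

```python
import math

def nearest_component(point, components, tol=10.0):
    """Return the id of the component whose centre is closest to `point`
    (within `tol` pixels), or None if nothing is close enough."""
    best, best_d = None, tol
    for cid, (cx, cy) in components.items():
        d = math.hypot(point[0] - cx, point[1] - cy)
        if d <= best_d:
            best, best_d = cid, d
    return best

def build_graph(components, lines, tol=10.0):
    """components: {id: (cx, cy)}, lines: [((x1, y1), (x2, y2)), ...]"""
    edges = set()
    for p1, p2 in lines:
        a = nearest_component(p1, components, tol)
        b = nearest_component(p2, components, tol)
        if a is not None and b is not None and a != b:
            edges.add(tuple(sorted((a, b))))
    return sorted(edges)

comps = {"R1": (0, 0), "C1": (100, 0), "GND": (100, 100)}
wires = [((2, 1), (97, 0)), ((101, 3), (99, 95))]
print(build_graph(comps, wires))  # → [('C1', 'GND'), ('C1', 'R1')]
```

The tolerance trades off missed connections against false joins; in practice it would be tuned to the image resolution.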
Built With
gojs
imageai
opencv
python
react
Try it out
github.com | Dyorka | AI powered circuit recognizer application | ['Dmitrijs Minajevs', 'Antons Cornijs', 'Dmitrijs Čuvikovs'] | [] | ['gojs', 'imageai', 'opencv', 'python', 'react'] | 85 |
10,526 | https://devpost.com/software/scorona-1c3qli | Data visualizer
System architecture
Inspiration
A global pandemic has changed people's lives in 2020. We are surrounded by statistical data about the virus: the number of active cases, deaths, recovered patients... But how do people actually feel? How does today's news affect people in different regions? How do they react to the virus on social media? Let us model these scenarios from live data and help media companies feel closer to the crowd.
What it does
The project is split into 2 main parts:
Data provider
Aggregation service
- Fetches news data from various sources every 2 hours, along with prefiltered Twitter posts.
Corona filter
- Removes all non-coronavirus related data samples.
Emotion detection
- Pre-trained model to classify positivity / negativity of each news item.
Location extraction
- Detects location information in each data sample (the UK only) and aggregates them by larger geolocation (UK counties).
DB
- Processed metadata are cached in the database until new aggregation fetch is performed.
REST API
- Provides access to processed data to allow further processing or visualization.
Data visualizer
A static web front-end to visualize a live model of people's emotional response to the global pandemic. Each larger area (e.g. a county) has a certain opacity and colour. The opacity corresponds to the intensity of coronavirus-related news in that area, while the colour denotes the average emotion there, anywhere between green (positive) and red (negative).
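The colour/opacity mapping could look like the following sketch (Python here for brevity; the real front-end does this in JavaScript, and the exact scaling is an assumption):

```python
def region_style(avg_sentiment, news_count, max_count):
    """Map a region's average sentiment (-1..1) to a red-green colour and
    its news intensity to an opacity. Illustrative only."""
    t = (avg_sentiment + 1) / 2          # 0 = fully negative, 1 = fully positive
    red   = round(255 * (1 - t))
    green = round(255 * t)
    opacity = min(1.0, news_count / max_count) if max_count else 0.0
    return {"color": f"rgb({red},{green},0)", "opacity": round(opacity, 2)}

print(region_style(-1.0, 5, 10))  # → {'color': 'rgb(255,0,0)', 'opacity': 0.5}
print(region_style(1.0, 10, 10))  # → {'color': 'rgb(0,255,0)', 'opacity': 1.0}
```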
How I built it
Data provider
The layer is written in Python.
Aggregation service
- RSS is used for news access (BBC, DailyMail, SkyNews). Additionally, full article bodies are fetched for more precise location extraction and possibly emotion detection. The Twitter API is used to access Twitter posts.
Corona filter
- Text scanning using numpy
Emotion detection
- Pre-trained XLNet - fine-tuned with 25000 samples of IMDb reviews (no more appropriate dataset available). Binary classifier (positive / negative).
Location extraction
- Offline city matching with correlation to counties. Majority voting when multiple cities occurred. Edge case handling (too many cities => correlate to the whole country).
DB
- MongoDB Atlas
REST API
- Python Flask. Fetching cached data from DB.
Data visualizer
Mapbox API with custom polygon layers. AJAX requests to REST API.
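The location-extraction step described above (offline city matching with majority voting and a whole-country fallback) can be sketched like this; the city-to-county table is a tiny illustrative stand-in, not the real lookup data:

```python
from collections import Counter

# Tiny stand-in for the offline city-to-county table
CITY_TO_COUNTY = {"leeds": "West Yorkshire", "bradford": "West Yorkshire",
                  "oxford": "Oxfordshire"}

def extract_county(text, max_cities=5):
    """Majority-vote a county from city mentions; fall back to the whole
    country when too many distinct cities occur (assumed edge-case rule)."""
    words = [w.strip(".,;:!?") for w in text.lower().split()]
    cities = [w for w in words if w in CITY_TO_COUNTY]
    if not cities:
        return None
    if len(set(cities)) > max_cities:
        return "United Kingdom"
    counties = [CITY_TO_COUNTY[c] for c in cities]
    return Counter(counties).most_common(1)[0][0]

print(extract_county("New cases reported in Leeds and Bradford, one in Oxford."))
# → West Yorkshire
```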
Challenges I ran into
The actual performance of the XLNet model on news and Twitter posts was mostly not very good (misclassification). Fine-tuning the model took more than 5 hours, which slowed us down.
Accomplishments that I'm proud of
Working prototype and functional data processing pipeline with live data.
What I learned
How to work with and fine-tune the XLNet model. The difficulties of data analysis and data processing. Data storage in NoSQL (MongoDB Atlas).
What's next for SCorona
History data (infrastructure preparation), more data sources and correlations (e.g. increase/decrease of COVID-19 cases) and more available areas (outside the UK).
Built With
html5
javascript
kaggle
mongodb
numpy
python
pytorch
Try it out
github.com | SCorona | Map of coronavirus scare level based on live news and twitter data. | ['Theodora Konstantinou', 'PlatypusTheSlayer', 'Ales Kubicek'] | [] | ['html5', 'javascript', 'kaggle', 'mongodb', 'numpy', 'python', 'pytorch'] | 86 |
10,526 | https://devpost.com/software/pmi-smart-loyalty | Inspiration
User management is a critical aspect of a company's success. While it is easier to track online customers, what could be a possible approach for offline customers?
What it does
Customers who purchase products from offline shops get a printed receipt. The PMI Smart Loyalty app scans the receipt, fetches the itemized details and calculates loyalty points (10% of the total order value). These loyalty points are then added to the user's profile, and the data are sent to the company for further analytical purposes.
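The loyalty calculation itself is simple; a minimal sketch, assuming the Vision API extraction yields (name, unit price, quantity) tuples (an assumption about the normalized data shape):

```python
def loyalty_points(items):
    """items: list of (name, unit_price, quantity) tuples.
    Returns (points, total): points are 10% of the order value, whole points."""
    total = sum(price * qty for _, price, qty in items)
    return int(total * 0.10), round(total, 2)

receipt_items = [("Pack A", 12.50, 2), ("Pack B", 25.00, 3)]
points, total = loyalty_points(receipt_items)
print(points, total)  # → 10 100.0
```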
How I built it
I used React Native to build the cross platform app and used Microsoft Azure's vision API to get details from the receipt.
Challenges I ran into
Integrating the Azure Vision API, and normalizing the data, which is collected in different formats.
Accomplishments that I'm proud of
Successfully integrated the Azure Vision API into React Native and got the expected results.
What I learned
The business need for companies to track offline customers as well as online ones. Integration of the Azure Vision API.
What's next for PMI Smart Loyalty
Further improve with custom models and improve user experience.
Built With
azure
mongodb
python
react
react-native
Try it out
github.com | PMI Smart Loyalty | Smart scan and add points | ['Deepjyoti Paul', 'Ngoc Nguyen'] | [] | ['azure', 'mongodb', 'python', 'react', 'react-native'] | 87 |
10,526 | https://devpost.com/software/car-9d2rws | Inspiration
Reality is no longer what it used to be. New technologies like Augmented Reality (AR) are changing the way we experience the world. In this context, cAR was born from the quest for new experiences in racing and exploration games.
What it does
cAR is a single-player racing game. First, you build your own circuit at home by placing several cardboard checkpoints. Then, you place your radio-controlled car at the start of the circuit. When the video game starts, the screen shows the circuit from the car's point of view. You then drive the physical car through the obstacles, which are shown on the screen using AR.
Players can create their own rules, so be original! You will also be motivated by our random-stage algorithm, which generates new random (but solvable) challenges for you and your friends.
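A "random but solvable" stage generator can be sketched as rejection sampling: scatter obstacles on a grid, then keep the layout only if a BFS still finds a path from start to finish. This is a hedged illustration of the idea, not the game's actual algorithm (which is not public); grid size, density and the grid representation are assumptions.

```python
import random
from collections import deque

def solvable(grid, start, goal):
    """BFS over free cells (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and \
               grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def random_stage(rows=8, cols=8, density=0.25, seed=None):
    """Resample random obstacle layouts until one is solvable."""
    rng = random.Random(seed)
    while True:
        grid = [[1 if rng.random() < density else 0 for _ in range(cols)]
                for _ in range(rows)]
        grid[0][0] = grid[rows - 1][cols - 1] = 0  # keep start/goal free
        if solvable(grid, (0, 0), (rows - 1, cols - 1)):
            return grid

stage = random_stage(seed=42)
```

Rejection sampling is cheap here because, at moderate obstacle density, most random layouts are already solvable.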
How we built it
It was built using Unity and Arduino. The AR technology was implemented with Vuforia software.
Challenges we ran into
Controlling the car with the Arduino required a lot of effort. Designing and adjusting the AR targets also took some time.
Accomplishments that we're proud of
We had lots of fun coding and designing our project. We are proud of the final result.
What we learned
A lot! Not only about Unity and Vuforia but also about designing new game styles.
What's next for cAR
Multiplayer!
Built With
arduino
unity
vuforia
Try it out
drive.google.com | cAR | The new generation of racing games. | ['Victor Gomez', 'Álvaro Gómez Iñesta'] | [] | ['arduino', 'unity', 'vuforia'] | 88 |
10,526 | https://devpost.com/software/isustain | What's disposed
Analystics
Alternative
Product info
Inspiration
Finding reasonable methods to avoid organic waste
What it does
It monitors what you throw in your organic trash bin
It detects the amount of waste disposed
Provides analytics to motivate less waste
How I built it
Creating a Scale with Wifi
Using XD to present the prototype
PS:
We couldn't write the code, since we would need to develop the hardware first, which is easy but can't happen in this short time, so we presented the UI.
Challenges I ran into
Making sure the solution is super simple, requires little effort to adopt, and is cheap
Accomplishments that I'm proud of
Completing the idea
Easy to integrate in any Grocery store app
What I learned
Working hard with my team under stress
Patience
Passion for the idea we believe in
What's next for iSustain
Bringing it to real life
Built With
css
html
javascript
migros
photoshop
xd
Try it out
xd.adobe.com | iSustain | Reduce the wasted food and Encourage customers to buy the sustainable products. | ['Mina Moanes', 'Martina Mikhail'] | [] | ['css', 'html', 'javascript', 'migros', 'photoshop', 'xd'] | 89 |
10,526 | https://devpost.com/software/crispy-invention | The Council of Cubers
Rubik's Cube timer and leaderboard to keep hobbyists connected during lockdown
Inspiration
What it does
Challenges
Accomplishments
What we learned
Built With
flask
html
javascript
postgresql
python
Try it out
github.com
cubesoc.herokuapp.com | The Council of Cubers | Rubiks cube timer and leaderboard to keep hobbyists connected during lockdown | ['Jana Scholey', 'Morgan Baglin-Clarke'] | [] | ['flask', 'html', 'javascript', 'postgresql', 'python'] | 90 |
10,526 | https://devpost.com/software/green-ride | active ride
completed ride overview
offset purchase complete
GIF
how the app works
Inspiration 💡
To tackle the climate crisis, we need big policy changes. But we also need tools in our daily lives that help us do better.
Green Ride is here to make your life a bit more sustainable.
What it does ✅
Green Ride lets you track your everyday mobility and the emissions associated with it.
This insight lets you cut these emissions. And in situations where you can't, it offsets these emissions for you.
How we built it ⚙️
We used Flutter to build a cross-platform app that looks great on all devices.
We use the Cloverly API to get a precise estimate of the emissions that were associated with the ride.
The API also lets us offset these emissions. Additionally it provides us with detailed information about where the offset money was invested.
The app follows the KISS design principle (keep it simple, stupid).
This way we can make sure that the app is accessible for anyone.
All you need is a credit card.
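For illustration, the core of the emission estimate can be reduced to "distance travelled times an emission factor". The sketch below sums haversine distances between GPS fixes and applies an average petrol-car factor (~0.192 kg CO2/km); in the app the precise estimate comes from the Cloverly API instead, so both the factor and the track format here are assumptions.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def ride_emissions_kg(track, kg_co2_per_km=0.192):
    """track: list of (lat, lon) fixes in ride order."""
    dist = sum(haversine_km(*a, *b) for a, b in zip(track, track[1:]))
    return dist * kg_co2_per_km

# Three fixes across Zurich, a few hundred metres apart
track = [(47.3769, 8.5417), (47.3786, 8.5400), (47.3800, 8.5380)]
print(round(ride_emissions_kg(track), 3))
```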
Challenges we ran into 😒
Tracking location updates when the app is in the background (the phone is locked) is quite tricky and highly dependent on the host operating system.
Accomplishments that we're proud of 🏆
Our app is appealing to use, simple, and serves a clear purpose.
What we learned 👨🏽🎓
We improved our Flutter skill set especially around building animations and appealing UIs.
What's next for Green Ride 📈
Improving the payment flow to cut fees
Built With
cloverly
flutter
gns
gps
rest | Green Ride | It's best to not use your car. But if you have to, there's Green Ride. Offset the emissions of your commute. In three clicks. | ['Elia Bieri', 'Florian Burri'] | [] | ['cloverly', 'flutter', 'gns', 'gps', 'rest'] | 91 |
10,526 | https://devpost.com/software/fogproof-ja3yh5 | Inspiration
The challenge was proposed by Siemens Mobility. We accepted it because we believe a day with bad weather is not a bad day, and we really want to help train drivers who can't see the tracks clearly, a problem that often leads to slowing down and delays in the railway schedule. So we decided to tackle this interesting problem with the magic of Deep Learning and Computer Vision technology. We are here to help drivers see through the unclear!
What it does
It removes fog and rain effects from the driver's view of the track to let the driver see clearly, using Deep Learning and Computer Vision.
How we built it
Having previous experience with similar DL problems, we turned to the excellent DL literature and tried to apply available open-source solutions to our problem. We combined two different DL models:
The first is Deraindrop (Attentive Generative Adversarial Network for Raindrop Removal from a Single Image), a generative model used to remove the raindrop effect from images.
The second is an unfogging model based on AOD-Net (an end-to-end dehazing neural network).
With these two main parts, plus preprocessing and tuning, we achieved good results on the provided dataset.
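The chained-model approach needs pretrained weights, so as a self-contained illustration here is a classical stand-in for the dehazing half: a simplified dark channel prior, using a per-pixel dark channel instead of the usual patch filtering. This is NOT AOD-Net, just a sketch of the same haze-model inversion (estimate transmission, then recover the scene radiance); all constants are the commonly cited defaults, not the project's values.

```python
import numpy as np

def dehaze(img, omega=0.95, t0=0.1):
    """img: float32 HxWx3 array in [0, 1]. Returns a dehazed image.
    Simplified dark-channel-prior inversion of the haze model I = J*t + A*(1-t)."""
    dark = img.min(axis=2)                       # per-pixel dark channel
    # atmospheric light A: mean colour of the brightest 0.1% dark-channel pixels
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0).clip(1e-6, 1.0)
    t = 1.0 - omega * (img / A).min(axis=2)      # transmission estimate
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

# Synthetic bright, low-contrast ("hazy") image for demonstration
hazy = np.clip(np.random.rand(32, 32, 3).astype(np.float32) * 0.3 + 0.6, 0, 1)
clear = dehaze(hazy)
```

The learned AOD-Net replaces the hand-crafted transmission estimate with a network, which is why it copes better with real scenes.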
For the web app, we used Flask to render our content, with a simple but pretty UI (HTML/CSS/Bootstrap).
Challenges we ran into
The most important challenge was how to approach the bad-weather-effect removal problem: which methods should we use, and is there publicly available data for this purpose?
Choosing and implementing a backend framework was also important to provide easy access to our solution. Finally, we aimed for a clear and simple graphical user interface to give the driver a great experience.
Accomplishments that we're proud of
We are proud that we reached such good results on the vision problem posed in the challenge: our solution has very good accuracy for a prototype developed in 40 hours, and it has the capacity to be improved into a scalable solution deployed in a real-life project in which AI algorithms help people solve a real issue.
We are also proud of our attempt to build a complete system (frontend, backend and AI solution) to solve one of the important railway problems, all in less than 3 days.
What we learned
We learned a lot about models that can remove complex features from images, many of them GANs and autoencoders, which was a good opportunity to work closely with them. We also got better at deploying DL algorithms in bigger real projects from scratch in such a short time.
What's next for FogProof
We would be happy to collaborate with Siemens to improve our solution so it can become a really good tool for this challenge. We will try to get more data and train our model with more examples so that it generalizes well in the deployment environment.
We would also like to investigate whether people other than train drivers have the same issue; maybe we can help :)
Built With
css
dl
flask
html
numpy
opencv
python
pytorch
scipy
Try it out
github.com | FogProof | FogProof is a WebApp that uses to AI & DL technology to help train drivers to beat weather conditions and gives them the edge to see through a rainy and foggy atmosphere. | ['Yomna Magdy', 'Ahmed Nasr', 'Mohamed Ayman'] | [] | ['css', 'dl', 'flask', 'html', 'numpy', 'opencv', 'python', 'pytorch', 'scipy'] | 92 |
10,526 | https://devpost.com/software/level-up-eisq5r | Home screen
Home screen scrolled down
Community screen
Challenge screen
Challenge your friends
Tips screen for finding articles to help you improve yourself
Apply filters to only get the most relevant topics for you
Calculate your Co2 savings
Access the bite.ai API directly
Inspiration
We realized that personal growth and continuous self-improvement not only helps one progress in life, but can also be very fulfilling, healthy and sustainable. It is even easier when you find yourself in a community of like-minded people and encourage each other for example with friendly challenges and virtual achievements.
What it does
Level Up! allows you to track your activities, your diet, your sustainability efforts and your health. You can join custom user-created groups in the community and challenge each other in friendly competitions. It also provides articles that help you improve in the areas you are most interested in, such as how to follow a healthy diet or how to help the environment.
How we built it
We used Ionic Angular, defined different pages and distributed them among the team members to implement them. We also used Trello to further coordinate arising tasks and challenges.
Challenges we ran into
The biggest challenge of course was the short amount of time in combination with the lack of sleep. However, we isolated the most important features and managed to showcase them in our first prototype.
Accomplishments that we're proud of
Some of us came into contact with Ionic for the first time during this challenge and could learn a lot of new things.
We have also successfully integrated various APIs into our app.
What we learned
Building an app with Ionic is very time-efficient, and it was definitely the right decision in our framework evaluation process.
While researching the contents of our info articles on health topics we also noticed that the Hackathon lifestyle should only be lived to a limited extent ;)
What's next for Level Up!
There is a lot to do for Level Up! Here is a short summary:
More friendly challenges with a trophy system you can show off to your friends
Fitbit & Google Health integration
Adding more articles in the Tips section that contribute to personal growth in the categories health, sustainability and savings awareness.
Adding a new tab that showcases currently trending tweets on the relevant topics we already mentioned.
Improving the gamification feature, consisting of the personal avatar Beary, with additional animations depending on how you are progressing on your personal journey to a better self.
Built With
adobe-illustrator
angular.js
bite.ai
css
html5
ionic
triptocarbon
typescript
Try it out
github.com | Level Up! | Achieve your own best form with Level Up! in the areas of health, sustainability and savings awareness. Join a community of like-minded people and participate in friendly competitions. | ['Andrea Zirn', 'theJoeSen', 'Cloe Hüsser', 'Firstname Lastname'] | [] | ['adobe-illustrator', 'angular.js', 'bite.ai', 'css', 'html5', 'ionic', 'triptocarbon', 'typescript'] | 93 |
10,526 | https://devpost.com/software/intuit-es2tca | Our cute little prototype - Check it out!
Our beautiful product page ♥️
Our logo and the project credits
Inspiration
The workflows of digital artists are becoming increasingly complex. At the same time, AI-assisted tools, and even AIs creating art themselves, are emerging. Together with Logitech we are asking: what will the future of digital creation look like? And how can we bridge the gap between humans and AI?
What it does
With Intuit we want to bring back the incredible power of human intuition to the digital creation process.
We do this by removing the complex user interfaces, sliders, inputs and buttons, and enabling the human's most powerful crafting tool: the hand.
We use the latest in AI computer vision to detect the nuances of the creator's hand positions and movements, and use that as input for advanced AI tools able to generate music, images and 3D assets.
The feedback is immediate. Rather than using abstract interfaces, you simply change your hand posture, the AI changes the output accordingly, and you can SEE what feels right.
This is a whole new way for creators to shape their digital art.
How we built it
We use standard browser APIs to capture your webcam footage
Video frames are then fed to the TensorFlow Handpose JS model
The model returns prediction of each hand joint as 3D coordinates
We use React as our primary front-end library, which hooks up predictions to the application state
Then, using a mathematical sigmoid function, we map the received coordinates into abstract values on a scale from 1 to 100
Normalized data are used as inputs for any kind of AI or digital generators
We have built a little prototype to control the size and color of a Kawaii Cat (check out the demo)
We created a Landing Page to show the various use cases of the technology as a product demo for Logitech
Side note: the entire solution runs in the browser!
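The sigmoid normalisation step can be sketched as follows (Python for brevity; the app does this in JavaScript). The midpoint and steepness values are assumptions that would be tuned per gesture:

```python
import math

def to_scale(x, midpoint=0.0, steepness=10.0):
    """Squash a raw hand-joint coordinate through a sigmoid and
    rescale it to the 1..100 range used as AI-tool input."""
    s = 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))
    return 1.0 + 99.0 * s

print(round(to_scale(0.0), 1))  # → 50.5  (the midpoint maps to mid-scale)
```

The sigmoid keeps extreme joint positions from saturating the control range abruptly, which matters when the input is noisy webcam tracking.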
Challenges we ran into
No one in our team had any experience with AI or ML models
Finding sophisticated AI which could generate art in real time in the browser proved to be still a major hurdle
Creating the submission video took sooo much time
Accomplishments that we are proud of
We are happy that we managed to produce such a well-rounded package for the hack, with a prototype that shows the potential of the technology, looks cute and is actually functional!
What we learned
That it's possible to use quite sophisticated AI/ML models without having a background in that field.
What's next for Intuit
Create an SDK which gives developers simplified ways to work with hand-pose data.
Make the technology more broadly available for touchless UIs in public areas
Create a Chrome plugin to connect Intuit to any interactive web experience
Use specialised Hardware to get faster and more accurate inputs
Use models on the machine (not the browser) to improve the performance
Stack
React
TypeScript
TensorFlow
Continuous deployment with Netlify GitHub hooks
Special Thanks
To Alexandre DeZotti for the math tutoring ;)
Built With
handpose
javascript
kawaii-cats
react
tensorflow
Try it out
intuit.ocin.ch
goofy-ptolemy-c97c9b.netlify.app
github.com | Intuit | Bringing a human touch to AI generated art | ['Gleb Irovich', 'an nya', 'Arpita Mallik'] | [] | ['handpose', 'javascript', 'kawaii-cats', 'react', 'tensorflow'] | 94 |
10,526 | https://devpost.com/software/fondue | Logo
Frontend
Inspiration
Producing food that is not consumed results in unnecessary CO2 emissions, biodiversity loss and land and water consumption. Twenty-five per cent of Switzerland's nutrition-related environmental impact is caused by avoidable food waste. This equates to around half of the environmental impact of the country's motorised private traffic.
The environmental impact of one tonne of avoidable food waste varies greatly depending on its constituent products and where the wastage occurs in the value chain.
Food consumption
in Switzerland generates 2.8 million tonnes of avoidable food waste per year at all stages of the food chain, both in Switzerland and abroad. However, catering only accounts for 5% of the food being annually discarded in Switzerland; Swiss households account for nearly half of it (45%). The main reasons for the high level of avoidable household food waste are a general lack of awareness of the waste generated and of the value of food, insufficient knowledge about shelf life and storage, as well as insufficient knowledge about ways to make use of leftover food.
The two important questions we set out to answer are: what food is really better for our environment, and can there be a simple tool that helps us donate/share the food in our locality?
What does Fondue do?
As 45% of avoidable food waste comes from households, and given the importance of health, Fondue was created to encourage Swiss citizens to take care of their well-being while considering the environment. Unlike typical delivery apps such as Deliveroo, Foodpanda, GrabFood and Uber Eats, this app lets you monitor your well-being when choosing your favourite meal. Once the user's health information, such as pregnancy, glucose and blood pressure levels, is obtained, machine learning is employed to maintain a healthy diet in line with the user's preferences, including vegan and vegetarian diets. In addition, Fondue encourages you to walk to the nearest restaurant so that you reduce carbon emissions. Fondue also offers to donate excess food in the local surroundings, helping you reduce your carbon footprint.
Using the databases and APIs provided by Migros, IBM & Swiss Re, Roche Diagnostics and Eaternity, we set out to build Fondue, which helps a person visualize the environmental impact of each meal. Fondue also gives a person the opportunity to donate/share food with individuals who can consume it, thereby reducing his/her carbon footprint.
How did we build Fondue?
Front-end: Angular JS
Back-end: SpringBoot, Java, Maven, MySQL, REST
Machine-Learning: scikit-learn, Decision trees, Neural nets, TensorFlow, Keras
Challenges we ran into
Obtaining the data from the API.
Some categorising algorithms only support floats and integers.
The biggest, and still ongoing, challenge is the filtering of the Food Overview.
Joining all the different APIs in a simple and user-friendly app is daunting.
It was hard to collaborate, especially with each member in a different time zone.
Until the middle of the project, we didn't understand where all the data came from and how it should be put together in one piece.
Accomplishments that we're proud of
Ordering food with tips for your current health.
Giving the power to donate food in the local surroundings at the click of a button.
Showing your total food waste, not only at home but also at restaurants.
Exploiting different methods, for example using LabelEncoder rather than one-hot encoding, to build the app.
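The encoding choice mentioned above can be illustrated with a minimal stand-in for scikit-learn's LabelEncoder: each category becomes a single integer column instead of a one-hot vector, which tree-based models accept directly. The class below is a sketch, not the sklearn implementation:

```python
class SimpleLabelEncoder:
    """Minimal stand-in for sklearn.preprocessing.LabelEncoder."""

    def fit(self, values):
        self.classes_ = sorted(set(values))
        self._index = {c: i for i, c in enumerate(self.classes_)}
        return self

    def transform(self, values):
        return [self._index[v] for v in values]

    def inverse_transform(self, codes):
        return [self.classes_[i] for i in codes]

diets = ["vegan", "vegetarian", "omnivore", "vegan"]
enc = SimpleLabelEncoder().fit(diets)
codes = enc.transform(diets)
print(codes)                         # → [1, 2, 0, 1]
print(enc.inverse_transform(codes))  # → ['vegan', 'vegetarian', 'omnivore', 'vegan']
```

One caveat of this choice: the integer codes impose an artificial ordering, which is harmless for decision trees but can mislead linear models or distance-based methods.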
What's next for Fondue?
We will keep working on this app and partner with other companies to reach every citizen of Switzerland.
Built With
eaternity
java
maven
python
scikit-learn
tensorflow
Try it out
github.com | Fondue | Fondue is created to encourage the Swiss citizens to take care of their well-being while considering the environment and the fellow citizens around them. | ['Utkarsh Sharma', 'Roman Kathriner', 'Yashar ZoroofchiBenisy', 'KH Lee', 'Ahmed Gaber'] | [] | ['eaternity', 'java', 'maven', 'python', 'scikit-learn', 'tensorflow'] | 95 |
10,526 | https://devpost.com/software/screenless | screenless
estimates carbon footprint using provided data
Learns from past orders with ai to avoid food waste
Encourage CO2 reduction
Inspiration
COVID-19 is one of the biggest crises in the history of humanity.
Millions of people go to fast-food restaurants every day.
They use touchscreens for ordering food.
This can be quite dangerous.
What it does
Everyone has access to their phone's assistant (Siri, Google Assistant...).
We found a way to navigate any platform using this service.
We enhanced our solution to prevent food waste and to encourage people to reduce their carbon footprint.
How we built it
We used Firebase, Node.js, JavaScript...
Challenges we ran into
We found it quite difficult to set up the voice control in our website. Also, we put a lot of effort into the final video.
Accomplishments that we're proud of
The final video
The final product
Making something that can actually be useful
What we learned
Teamwork
Programming skills improved
What's next for ScreenLess
Save the world.
Go screenless.
Built With
css3
firebase
html5
javascript
node.js
Try it out
github.com | ScreenLess | Place orders in fast food restaurants remotely and much more! | ['Pablo Biedma', 'Omar Nassar'] | [] | ['css3', 'firebase', 'html5', 'javascript', 'node.js'] | 96 |
10,526 | https://devpost.com/software/potato-heroes |
Inspiration
Eating one extra chocolate cupcake a day costs a person an hour in the gym. From the perspective of a healthy lifestyle, it is more efficient and important to be able to control food intake in order to live healthily. One obstacle is low awareness of eating habits: people often do not pay attention to the ingredients or find it difficult to track and count calories. Our mobile app solves this problem, first, by providing a quick way to track food intake and the nutrients it contains, and second, by motivating a person to maintain a healthy lifestyle and achieve new goals.
What it does
The mobile app allows the user to take a photo of their food and, with the help of an external food-recognition API, provides information about its nutritional value and calories. Through interaction with the user, the app confirms the type of food and clarifies the portion size. Afterwards, the record can be saved in a diary that tracks the user's eating behaviour. The app builds the user's profile from survey questions such as age and lifestyle; in the future it will also be able to use mobile phone sensors, for example the activity level during the day. The health profile is used to estimate a daily calorie intake and assign a score. The score is translated into game points, motivating the user to maintain healthy behaviour by collecting streaks and rewarding consistent, persistent behaviour. The app maintains a positive mood using the potato theme, as positive emotions help to overcome temporary hurdles and stay on track.
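The daily calorie estimate from survey answers can be sketched with the Mifflin-St Jeor equation, a common choice for this purpose; whether the app uses this particular formula (and these activity multipliers) is an assumption.

```python
# Common activity multipliers applied on top of the basal metabolic rate (BMR)
ACTIVITY = {"sedentary": 1.2, "light": 1.375, "moderate": 1.55, "active": 1.725}

def daily_calories(weight_kg, height_cm, age, sex, activity="moderate"):
    """Mifflin-St Jeor estimate of daily calorie needs (kcal/day)."""
    bmr = 10 * weight_kg + 6.25 * height_cm - 5 * age
    bmr += 5 if sex == "m" else -161
    return bmr * ACTIVITY[activity]

print(daily_calories(70, 175, 30, "m"))  # → 2555.5625
```

The calorie score in the app would then compare logged intake against this target.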
We plan to extend the app in the future to include machine learning about the user personality (stable characteristics such as extraversion or openness) and also more transient states (such as mood swings during the day). The monitoring of the user's state will make it possible to select the most appropriate moments for the implementation of interventions in the form of notifications that provide healthy recommendations to the user (e.g. exercise tips, recipes, nutritional recommendations).
How I built it
We used React Native and Expo to build the prototype. In the future we plan to add a server backend (e.g. Node.js or Python), which will collect the information and run machine learning models to evaluate intervention times and power the recommendation system.
Challenges I ran into
In the beginning we investigated the possibility of using MobileCoach for our purposes, but then we had to change gears and switch to the Expo framework. It became clear to us that we wanted more flexibility and did not want to stay within the limits of a chat application. The other challenge was clarifying how the external API would work, but in doing so we got the full support of our partners from CSS.
Accomplishments that I'm proud of
We are proud of the fact that despite some technical challenges, we persisted and always worked as a team.
What I learned
We learned to work efficiently in a short time, we learned how to work with new APIs and got motivated to learn new things.
What's next for Potato Heroes
We want to integrate the information about physical activity into the feedback the app gives a user. We also want to train and use our own models so that the app can learn more user-specific information. The next sequel to Potato Heroes will make everyone believe that “if you are a couch potato, it's time to rise and be a hero".
Built With
expo.io
javascript
react-native
Try it out
github.com | Potato Heroes | Potato Heroes is a mobile app that helps to maintain a healthy lifestyle using gamification and computer vision tools. Join us in the journey from couch potato to becoming a potato hero. | ['Mikkel Schöttner', 'hyunjoo hong', 'Yury Shevchenko'] | [] | ['expo.io', 'javascript', 'react-native'] | 97 |
10,526 | https://devpost.com/software/bircle-9n6rey | Bircle
Focus: The Two Billion People Sustainability Challenge
Inspiration
First, we observe an increasing need for sustainable construction materials such as plastic. On top of that, a lot of recycling potential is wasted because the recycling process is complex for end users and industries. Oftentimes people don't have the time or means to personally bring their plastic waste to recycling facilities. More than that, in underdeveloped countries the recycling process is not yet structured in a way that tells users what to do with their trash. Considering all of that, we came up with an idea that bridges the gap between companies in need of recycling materials and end users unwilling or unable to take the effort of recycling. Our idea consists of a bottle-recycling service where users can easily enter the recyclable bottles they have at home and request a pickup service, which takes care of collecting them and bringing them to the nearest qualified recycling facility.
What it does
Bircle makes it easier for users to recycle their bottles. We make a bridge between recycling facilities and the consumer, increasing recycling rates and making the user’s life easier. The user can easily enter the bottles to be recycled by scanning their barcode. Once the user has collected enough bottles they can request the pickup service and that’s it! The pickup service is activated, which means the bottles will be picked up at the user’s place and dropped off at the nearest recycling facility.
How we built it
Our initial prototype consists of an iOS app backed by a centralized data platform. The users can choose to register themselves by entering the required information manually or by connecting Bircle with their Google account. The scanning feature is built on top of the Scandit API (inspired by the Migros supported tools). The scanning API gives us the scanned product’s EAN, or EAN-13, which stands for International Article Number (originally European Article Number). It is an extension of the UPC codes and you'll find them as barcodes on most everyday products. The EAN allows us to further search for and provide the user with more detailed information about the product. This step is done on top of the EAN XML API, which allows us to access information such as the name and category of the scanned products. The app provides the recycling historical data, which we display in the form of a plot grouped by month. This graph is built using the open-source front-end chart library Highcharts. Finally, the app has a wallet with the financial information (e.g., gained credit due to recycling, current balance) and the user profile containing all the personal information (e.g., name, email, pickup address). Regarding data storage, we rely on the combination of Realm and MongoDB 4.4, which is currently synchronized with Atlas, running an A10 Tier Cluster on top of AWS.
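Because every lookup starts from a scanned EAN-13, a code can be sanity-checked locally before querying the EAN XML API. The check-digit rule below is part of the EAN-13 standard; the helper function itself is our own illustration, not part of the Scandit API:

```python
def is_valid_ean13(code: str) -> bool:
    """Validate an EAN-13 barcode using its check digit.

    The first 12 digits are weighted alternately 1 and 3 (starting
    with weight 1); the check digit makes the total a multiple of 10.
    """
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    total = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits[:12]))
    check = (10 - total % 10) % 10
    return check == digits[12]
```

For example, `is_valid_ean13("4006381333931")` accepts a real EAN-13, while a code with a flipped digit is rejected, so malformed scans never reach the API.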
Challenges we ran into
One of the challenges we faced was regarding the categories from the EAN API. We expected the returned information about products to be more detailed, precise and consistent even between different brands. Our initial intentions with that was to better display the user’s products by categorizing them. However, as this was not the case, during the development we had to change the visualization and redesign what information to provide to the user.
Moreover, both members of our team are backend engineers with little mobile development experience. This made the development of an iOS application in such a short period of time challenging for us. Finally, our background does not include UX skills which made the process of designing the application’s flow demanding.
Accomplishments that we are proud of
We are proud to get out of our comfort zone and push ourselves to the limits. As a result of that, we managed to gain skills that we didn’t have before and solved problems which we wouldn’t have been exposed to on our daily basis. All of this effort was backed by our motivation to work on a sustainable solution that has a greater purpose and impact from different perspectives.
What we learned
Below we list our main takeaways from this experience.
Even after so many years, waste management and recycling remain a challenge with many open questions.
Industries which consume plastics as their raw material to construct roads are willing to pay and contribute in a way to close the loop of waste management.
So many people still are not motivated to recycle their waste.
What's next for Bircle
The current version of our application focuses on the end user spectrum of the problem. However, another application which will be used by companies or recycling facilities to receive the user requests is also necessary for this service to work. Therefore, our next main step would be developing a platform and application to cover this need. In order to ensure reliability of the recycling path information, we plan on integrating blockchain technologies into our applications.
When it comes to improving the current version of the developed application itself, we would like to refine our prototype as well as expand it to other platforms (e.g., web, Android). Improvements in the current version would include a better design of our user interface and more robust functionality, which involves an extensive planning and testing phase. Because we have already built the current version on top of MongoDB, scaling it for more users would be a simple step as we can rely on its features to do so. However, a scalability testing phase would be required before expanding our horizon.
Finally, we would like to find partners who are willing to invest in our idea and launch a testing phase where we can evaluate all the positive and negative aspects and further refine this idea based on real user experience.
Built With
ios
mongodb
realm
rest
scandit-product
swift
Try it out
github.com | Bircle | Bircle is a bottle recycling service which takes the recycling burden from end users and streamlines the collected waste to material aggregators. | ['Isabelly Rocha', 'Niloofar Gheibi'] | [] | ['ios', 'mongodb', 'realm', 'rest', 'scandit-product', 'swift'] | 98 |
10,526 | https://devpost.com/software/coffeebreak-ur1s0q | When working from home we lack the small social interactions that we have at work. This app allows us to simulate sharing a coffee break together with colleagues.
What it does
First, the user must create an account and log in. Then they can set a time for a coffee break and it will invite all of their colleagues. After accepting the invitation, the user is taken to a video chat room together with everyone else.
How we built it
The app itself is based on Android. The messaging system between the client and the cloud utilizes Google's Firebase Messaging System. The video chat is supported by Jitsi Meet.
Accomplishments that we're proud of
None of us have ever used Firebase before and we are proud that we built a working application that is able to pass data and invitation notifications via Firebase in such a short amount of time.
What's next for CoffeeBreak
The notifications and the video chat already support separate rooms for users. In a next step, we would introduce a way for users to create their own groups of friends or colleagues. Our initial prototype sends the push notification to all its users.
Built With
android
firebase
java
jitsi
Try it out
github.com | CoffeeBreak | A simple no frills way to organize impromptu video chats | ['Elias Huwyler', 'Markus Roth', 'Pascal Maillard'] | [] | ['android', 'firebase', 'java', 'jitsi'] | 99 |
10,526 | https://devpost.com/software/corona-scale-level-swiss | Map view and statistics
Tags cloud by popularity in articles
data exploring and processing in python notebooks
output result to UI
Inspiration
COVID-19 dashboards are everywhere, but this time we had an opportunity to research different insights than the usual infected/deaths statistics, although we were inspired by the existing COVID-19 dashboards out there.
What it does
It digests country-wide news data and outputs a map view of the "Corona scare-level": how "Corona" is trending in specific areas, or, in other words, how scared the crowd is of Corona.
How we built it
We split it into two main branches:
Data analytics - Exploration, clean-up, tagging location (German NLP model to extract "location" named entities), extracting COVID related words using our tailor-made terms list aiming for minimum false positives, filter only the relevant articles and then divide their count by the number of total articles for each specific location. Salting with public health data to find correlations. We also looked at IBM Watson tone analyzer but ran out of time.
User Interface - Journalist Dashboard - Web app built with ReactJS and the d3 visualization library. Map view and statistics view based on the output of the previous process. Tag cloud. The app is hosted on IBM Cloud.
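The scare-level metric described in the data-analytics branch above (COVID-related article count divided by total articles per location) reduces to a small aggregation once the articles are tagged. A minimal Python sketch of that step, with our own function and field names:

```python
from collections import Counter

def scare_level(articles):
    """Compute the share of COVID-related articles per location.

    `articles` is an iterable of (location, is_covid_related) pairs,
    e.g. produced by NER location tagging plus keyword matching.
    Returns {location: ratio} with ratios in [0, 1].
    """
    total = Counter()
    covid = Counter()
    for location, is_covid in articles:
        total[location] += 1
        if is_covid:
            covid[location] += 1
    return {loc: covid[loc] / total[loc] for loc in total}
```

Normalizing by the total article count per location is what keeps big cities with lots of coverage from dominating the map.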
Challenges we ran into
In terms of innovation, the challenge was pretty straightforward, so we were looking for ways to add more value for the journalist (equipping them with numerical statistics and a tag cloud alongside the map view).
We focused on the provided articles; we had to read about NLP tagging and explore this area, which was challenging. In addition, all the articles were in German rather than English.
Collaborating remotely.
Accomplishments that we're proud of
Successfully tagged around 7k articles and completed the PoC for the journalist dashboard.
What we learned
Working with a lot of data, NLP, data exploring, working with maps in d3js
What's next for Corona Scare-Level
There's a long way of research and work ahead to move it to real time, digesting millions of articles, tweets, and more.
Built With
agile
bluemix
carbondesignsystem
d3.js
geojson
ibm-cloud
javascript
jupyter
natural-language-processing
nltk
npm
pandas
python
react
spacy
Try it out
csl.eu-gb.mybluemix.net
github.com | Corona Scare-Level (BR) | Journalist dashboard for sensing the "crowd scare-level” of COVID-19 Epidemic based on country-wide news. Visualizing on easy to use map view and statistics view. | ['Bar Haim', 'Bar Haim', 'Rahul Jha'] | [] | ['agile', 'bluemix', 'carbondesignsystem', 'd3.js', 'geojson', 'ibm-cloud', 'javascript', 'jupyter', 'natural-language-processing', 'nltk', 'npm', 'pandas', 'python', 'react', 'spacy'] | 100 |
10,526 | https://devpost.com/software/hackzurich2020_viroda | Welcome to our HackZurich2020 project on Github.
We created a powerful AR mobile app to help a Migros customer make better decisions and get to the available information more easily while browsing goods at the store.
Our system uses image recognition to allow tracking of various goods in 3D space, with that we can show a lot of useful additional data without the need to touch a product and read the small print on the back.
We optimized our system to make the most important information easily available; as soon as the customer points their phone at products, we display action buttons that allow them to view the product's carbon footprint and calories. Furthermore, the capability to suggest relevant further products is available (based on historical purchasing data), as well as suggesting healthier alternative products.
Technology Stack
Unity with Vuforia allows a rapid development of an AR app which runs cross platform and offers a large selection of applications. We integrated Vuforia to detect and track the goods of the Migros warehouses and display data next to it in 3D space.
R (which we used through RStudio) is a powerful language for mathematical calculations, which we used to create a basket analysis model from a dataset provided by Migros. With that we were able to find connections between goods which consumers like to buy together, and base our recommender system on them. We found out that bananas go very well with everything and are universally desired.
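The core of this kind of basket analysis is measuring how much more often two products co-occur than independent purchases would predict, i.e. the classic "lift" statistic. A hedged Python sketch of the idea (the actual model here was built in R, and real association-rule mining would also filter by support):

```python
from itertools import combinations

def pairwise_lift(baskets):
    """Compute lift(a, b) = P(a and b) / (P(a) * P(b)) over shopping baskets.

    Lift > 1 means the two products are bought together more often
    than chance would suggest; those pairs feed the recommender.
    """
    n = len(baskets)
    item_count = {}
    pair_count = {}
    for basket in baskets:
        items = set(basket)
        for item in items:
            item_count[item] = item_count.get(item, 0) + 1
        for a, b in combinations(sorted(items), 2):
            pair_count[(a, b)] = pair_count.get((a, b), 0) + 1
    return {
        (a, b): (c / n) / ((item_count[a] / n) * (item_count[b] / n))
        for (a, b), c in pair_count.items()
    }
```

On a toy set of four baskets, a pair appearing in half the baskets while each item appears in three quarters of them gets a lift just below 1, matching the intuition that near-universal items (like the bananas above) pair with everything without being special.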
For our information gathering we used the Migros-provided REST API to be able to give consumers fresh information about the products they are looking at. Furthermore, healthier alternative products are provided by the EatFit service.
Further Steps
We think that, given more time, our app could be extended with various good additions:
In-store navigation could help customers find their products better and faster, using the store layout provided by Migros.
Direct comparison between products could help the customer be more informed about their choices, especially with regard to nutritional values and sustainability impact.
With text-to-speech and translation extensions, the app could help blind users and international customers find their way around Migros better.
Built With
c#
hlsl
r
shaderlab
Try it out
github.com
1drv.ms | VIRODA | Intuitive information access for a more sustainable and healthy world | ['Roger Siegenthaler', 'Vithushan Mahendran', 'kep1er Schlebusch'] | [] | ['c#', 'hlsl', 'r', 'shaderlab'] | 101 |
10,526 | https://devpost.com/software/foo-d-bar | Main screen
Item scanning
Kantönli Map
The climate crisis requires changes from everyone. We want to help consumers, to make a real impact with just their regular grocery shopping. With means of gamification and community influence, we encourage more sustainable and healthy choices, without any loss of convenience.
Our solution consists of three elements.
The first part is a self-scanning app, which is already a popular method of shopping in Switzerland. Additionally to the price of the scanned products, the users instantly get a sustainability and health rating which are based on the ingredients and the CO2 data from eaternity.
After the fully digital checkout, the user's personal score is updated in the second view of our app. Here, users can track their progress in reducing CO2 emissions and benefit from special rewards.
And by making local communities an integral part, we encourage everyone to take action and improve not only their personal choices but to also bring changes to the community as a whole.
But how did we do all of this? Our backend is built using NestJS. It stores anonymous information about the users in the database and queries the provided data APIs.
In order to obtain information about the scanned products, we query the Migros product API, which provides the name, images, ingredients and, for example, the country of origin. This information is then used to query the IBM eaternity API in order to get an estimate of the CO2 emissions the given product causes. All of this is then provided to the frontend when the EAN barcode of a product is scanned.
The color of the leaf changes according to the amount of CO2 released, and the user can obtain even more information by touching the product in the list. And since the phone already stores all information about the products being purchased, a QR code can be generated to provide the data to the checkout system. Even more importantly, this could be used as a universal standard, allowing for a more widespread use of one's own phone for self-scanning and self-checkout and keeping our CO2 emissions in check.
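The leaf-colour rating can be sketched as a simple threshold function over the eaternity CO2 estimate. The cut-off values below are illustrative placeholders of our own, not the thresholds the app actually uses:

```python
def leaf_color(co2_grams: float) -> str:
    """Map a product's estimated CO2 footprint (in grams) to a leaf colour.

    The cut-offs are illustrative; the real rating is derived from
    the eaternity estimate for the scanned product.
    """
    if co2_grams < 200:
        return "green"
    if co2_grams < 800:
        return "yellow"
    return "red"
```

The frontend only needs this single value per scanned item, which keeps the scan-to-feedback loop instant.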
We imagined exploiting the so-called Kantönligeist of Swiss people by creating a scoreboard showing which cities or cantons buy more ecological products. A map allows for direct comparisons with neighbouring cities, creating a competition between them.
Last but definitely not least, the first screen of the app displays the personal progress over time. Seeing the emissions drop after changing one's habit is a satisfying feeling which the usual actions for climate cannot provide since the result in the real world is only visible after a long period of time.
Built With
javascript
mongodb
nestjs
react
react-native
typescript
Try it out
github.com
github.com
github.com | Foo(d)bar | Creating CO2 awareness through Kantönligeist | ['Marc Berchtold', 'Nico Hauser', 'Marc Rettenbacher', 'David Schmid', 'Jonas Spieler'] | [] | ['javascript', 'mongodb', 'nestjs', 'react', 'react-native', 'typescript'] | 102 |
10,526 | https://devpost.com/software/bubblecoffee | Mindmap of features, USP
Concept & overview
Screenshot
Inspiration
Coffee breaks are an important part of the employee experience. It's where people get to know each other, talk about work, life or other things. In these times, this social gathering is much harder - Coffee breaks are now mostly digital, through Zoom or Teams. The experience there however isn't great - There can only be one conversation and many people don't want to speak up in front of everybody.
What it does
CoffeeBubble re-creates the physical break room experience in the digital world. Instead of one large conference call, people join a room, where they can move around. Depending on their location/proximity to others, they hear conversations louder or more silent. This way, multiple conversations can happen at the same time.
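The proximity effect described above, where conversations get quieter with distance, can be modelled as a per-peer gain applied to each audio stream. A minimal sketch with made-up radii (the real app would tune these and apply the gain to the WebRTC streams):

```python
import math

def audio_gain(listener, speaker, full_volume_radius=50.0, silence_radius=300.0):
    """Return a gain in [0, 1] based on 2D distance between two avatars.

    Within `full_volume_radius` units the speaker is at full volume;
    beyond `silence_radius` they are inaudible; in between the gain
    falls off linearly.
    """
    dist = math.hypot(listener[0] - speaker[0], listener[1] - speaker[1])
    if dist <= full_volume_radius:
        return 1.0
    if dist >= silence_radius:
        return 0.0
    return 1.0 - (dist - full_volume_radius) / (silence_radius - full_volume_radius)
```

Because the gain is continuous rather than an on/off room switch, walking between groups fades one conversation out and the next one in, which is what makes several simultaneous conversations workable.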
How We built it
We developed a React app that runs together with a NodeJS backend. Our responsive design guarantees compatibility on all possible devices. WebRTC is used for the communication between the peers. To orchestrate the server we use Azure.
Challenges We ran into
We had problems moving the video streams back and forth between the different tables. We imagined this task would be easier.
Accomplishments that we are proud of
We think that with our idea we can definitely add value to the social interaction within a company. Through casual entertainment, various innovative ideas for companies can be created that would not arise in any other context. Through our application, this social interaction is also possible with physical distance.
What We learned
We have learned that the creation of an application within a hackathon definitely works better on site. But we hope that our idea will give us a further added value.
What's next for BubbleCoffee
First and foremost, the app should be finalized. The WebRTC connections are not yet stable and the user interface still needs some work. We definitely want to improve the user experience, which is a key factor of our application. Depending on user feedback, we would further add analytics functions to help people connect even better or add chat functionalities.
Built With
azure
node.js
react
webapp
webrtc
Try it out
github.com | BubbleCoffee | Coffee breaks are crucial to employee experience, but in the home office, Zoom coffee breaks are bad. Let's recreate the good things of physical coffee rooms! | ['Manuel Keller', 'Pascal Zehnder', 'Dimitri Kohler'] | [] | ['azure', 'node.js', 'react', 'webapp', 'webrtc'] | 103 |
10,526 | https://devpost.com/software/lazy-lawyers | Inspiration
The challenges posted on the legal tech slack channel were the main source of inspiration for us.
What it does
The solution works as follows:
Recognizes names of companies, currencies, names of natural persons, events, dates, places, in PDF documents.
Attaches relevant information about the above concepts, like data about the companies, currency conversions and linkedin profiles about natural persons.
Helps lawyers find contract templates, by only specifying the type of operation for which they want a template, and their email address in order to receive the required template.
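The recognition step above is implemented as a Java NLP solution; purely as an illustration of the idea, some of the entity types it handles (currency amounts, ISO-style dates) can already be caught in plain text with regular expressions. The patterns and names below are our own and far cruder than the real pipeline:

```python
import re

# Rough illustrative patterns, not the production Java NLP models.
CURRENCY_RE = re.compile(r"\b(CHF|EUR|USD)\s?\d[\d'.,]*\b")
DATE_RE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

def extract_entities(text):
    """Very rough stand-in for the NER step: find currency amounts and dates."""
    return {
        "currencies": [m.group(0) for m in CURRENCY_RE.finditer(text)],
        "dates": DATE_RE.findall(text),
    }
```

Once such spans are located, the enrichment step (currency conversion, company data, LinkedIn profiles) can be attached to each match.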
How I built it
We built it around an NLP solution that smartly recognizes companies, persons, events, dates and places in PDF docs. The NLP solution is coded in Java, the front end of the application in JavaScript, and we also use a Python client for working with the backend that contains the NLP solution. Moreover, the template solution backend is written in Java, and it leverages Spring Boot functionalities for creating the services.
Accomplishments that I'm proud of
We had a lot of fun building it, we met a lot of nice people, exchanged ideas, and we had a very pleasant experience while developing it.
Built With
java
javascript
python
Try it out
github.com | Lazy Lawyers | Solution that allows lawyers to smartify contracts and templatize them, for a better management of contract's information and better reusability. | ['Dan T', 'Agustina La Greca', 'Mario Vasile', 'Laurentiu Raducu', 'Timea Nagy'] | [] | ['java', 'javascript', 'python'] | 104 |
10,526 | https://devpost.com/software/avanti-zqji5t | Our home screen
Comparing to more sustainable products
Comparing to healthier products
Comparing to better deals
Avanti IT landscape
Inspiration
Grocery shopping today is neither fun nor easy. Avanti will provide you a new and unique shopping experience!
Have you ever asked yourself if your favorite product is also a sustainable one?
How do you compare the healthiness of your products?
Is the biggest package of a product always the cheapest one?
Which recipes are available for my purchased product?
Avanti assists you in answering all these questions!
What it does
While shopping, Avanti allows you to scan products, and it automatically adds them to your basket.
However, it will also show you product alternatives according to your personal preferences (sustainability, healthiness, best price) and possible recipes around your product.
You are then free to exchange the scanned product, or keep it if you wish.
With the healthiness comparison we can motivate people to buy healthier food, as healthiness always consists of two parts: sports and nutrition!
In order to enhance user experience, we also provide a custom cell phone holder to attach it to any shopping cart. Shopping has never been easier!
How we built it
We have developed custom graph algorithms to crunch the product and recipe data and built a cross plattform App with Flutter. The Flutter App scans the product barcodes and receives recommendations from our graphs.
Challenges we ran into
Data cleansing was challenging as the structure and the quality of the product data varied heavily.
Also, retrieving data from multiple data sources and combining it in our database was underestimated, as the API definitions were not always up to date.
Accomplishments that we're proud of
We have a fully functioning App that we can deploy on our smart phones.
The design is highly customizable and can be adapted to any branding very quickly.
Also our machine learning models worked better than anticipated and the results of them are impressive.
Last but not least, we are really proud of having developed a functioning app which could decrease our carbon footprint and therefore make this world a better place.
What we learned
Developing custom graph algorithms to crunch product and recipe data. We also gained more experience in video editing, story telling and Mobile App development.
What's next for Avanti
We would like to establish contact with retailers and other businesses to hear their feedback and opinion about our project and are open for a possible continuation and implementation of our MVP.
Screenshots
To see our high quality screenshots, follow these links:
https://raw.githubusercontent.com/dpacassi/hackzurich2020/master/screens/screen-1.png
https://raw.githubusercontent.com/dpacassi/hackzurich2020/master/screens/screen-2.jpg
https://raw.githubusercontent.com/dpacassi/hackzurich2020/master/screens/screen-3.jpg
https://raw.githubusercontent.com/dpacassi/hackzurich2020/master/screens/screen-4.jpg
Built With
dart
docker
flutter
pandas
python
rest
Try it out
github.com
github.com | Avanti | Comparing the sustainability, healthiness and prices from offline products is not easy. Avanti helps you with that! | ['David Pacassi Torrico', 'RaulCatena Catena', 'Michelle Díaz', 'Vanessa Eberhard'] | [] | ['dart', 'docker', 'flutter', 'pandas', 'python', 'rest'] | 105 |
10,526 | https://devpost.com/software/megrow | ops
rivella > red bull
Inspiration
You want to do self-checkout with your mobile phone only?
You want to make an impact on the planet by helping reduce carbon footprints, but you don't have enough information about the products? MeGrow is just for you!
What it does
facilitates customer experience
suggests more environment-friendly product choices
keep customers in the loop with gamification
payback for loyal and environment-friendly customers
compare your eco-shopping with others (neighbourhood, age-class ...)
How we built it
backend: flask, requests
frontend: ReactJS
data analytics: python and pandas
data modeling and predictions: IBM cloud assistant
Challenges we ran into
accessible frontend from all platforms
data modeling
Accomplishments that we are proud of
engagement with all the market users (single parents, families, young students, seniors)
proven data predictions and estimation
APIs interface
What we learned
team play during stressful short-time deadline
problem solving and lightning speed
tackle the problems with the big picture in mind
What's next for MeGrow
collaboration with Migros and Climeworks
frontend refactoring (native apps: flutter)
more robust interface for 3rd party APIs
more data analysis
business development
Built With
eaternity
flask
ibm
ibm-cloud
ibm-watson
migros
react
swissre
Try it out
github.com | MeGrow | A more productive and sustainable shopping experience | ['Roman Oechslin', 'Thuong Tran', 'Stefano Fogarollo'] | [] | ['eaternity', 'flask', 'ibm', 'ibm-cloud', 'ibm-watson', 'migros', 'react', 'swissre'] | 106 |
10,526 | https://devpost.com/software/tree_born | TREE_BORN
Inspiration
Climate change is one of the major problems that can have the worst impacts on mankind and at the same time is one of the problems towards which we are the most ignorant, which makes it a top-level threat. Increasing carbon content in the environment is one of the biggest contributing factors to this.
What it does
This is where our team EnviroGeeks’ Android Mobile Application Tree Born comes into play.
Well, along with the user-friendly UI, our application helps make the users aware of the health of their environment by showing them the tree index which is essentially the number of trees in their city per person in the city. The tree count is calculated using our machine learning model which runs on the satellite view imagery of the location.
Also, the users can get to know about their carbon footprint in the Stats section where all their used modes of transports are displayed, all their monitored actions using the google fit API, and also the total carbon dioxide produced by them.
The info button is where they can educate themselves about the terms.
The “How can I reduce” button tells them how they can balance out their carbon output to help improve the environment. This is where Zurich Insurance can help users get the trees they plant insured, so that their contribution doesn’t go in vain.
How we built it
Using Android Studio with the Java language, with the help of Google and ACQI APIs.
Challenges we ran into
Integrating dependencies and APIs from valid, known sources and recognizing the correct and authentic ones was one of the biggest challenges for us.
Accomplishments that we're proud of
Successfully integrating the APIs and the beautiful, fluid UI to implement our desired user-friendly solution.
What we learned
Out of the box thinking with great teamwork was one of the best learning experiences.
What's next for TREE_BORN
Expanding our user base to the International level to bring more population into our Eco-Friendly community with better UI and backend features.
This is our small attempt towards the betterment of the environment.
With all of this said, This is EnviroGeeks signing out.
Built With
acqi
android
android-studio
firebase
google-geocoding
google-maps
java
machine-learning
opencv
python
Try it out
github.com | TREE_BORN | HEAR THE ECO | ['Yatharth Ahuja', 'Krishnam Dhawan', 'Kshitij Sinha', 'Milind Kaushal'] | [] | ['acqi', 'android', 'android-studio', 'firebase', 'google-geocoding', 'google-maps', 'java', 'machine-learning', 'opencv', 'python'] | 107 |
10,526 | https://devpost.com/software/smice | SMICE
The Smart MICE. Oh, That's Smice!
This is a project for HackZurich 2020. This is a daemon that moves the cursor between monitors, depending on where the user is looking. This is achieved by processing the camera feed.
The more monitors you have, and the bigger they are, the more annoying it is to move your mouse around. We really believe that focus is key; the fewer the distractions, the better the outcome of any project. This includes small papercuts like having to locate your cursor, and performing the swipe-lift-swipe-lift-swipe mouse gesture to cover the distance between your monitors.
To ensure we would have a working prototype we decided to re-use as much off the shelf components as possible. Most of our time was spent looking for already built frameworks for eye-tracking, and trying them out. After a while we settled on Antoine Lame's GazeTracking. We developed a first prototype operating on a single monitor, split in the middle. Once that worked, we rushed home and brought in a second monitor.
The final result is not even 200 lines of Python code, and using it feels really magical. It's got rough edges, and we'd love to polish those, and extend the project to work across computers! Using a single mouse and keyboard for two, three computers; how cool is that?!
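The core decision is essentially a threshold on the horizontal gaze ratio reported by GazeTracking (roughly 0.0 when looking right, 1.0 when looking left), mapped to a monitor index for pyautogui to jump to. A stripped-down, camera-free sketch of that logic (function name and clamping are our own; the actual `init.py` wires this to the webcam loop):

```python
def pick_monitor(horizontal_ratio, n_monitors=2):
    """Map a GazeTracking-style horizontal ratio to a monitor index.

    GazeTracking reports ~0.0 when looking right and ~1.0 when looking
    left, so the ratio is flipped to get left-to-right monitor order.
    Returns None when no gaze is detected (ratio is None).
    """
    if horizontal_ratio is None:
        return None
    flipped = 1.0 - horizontal_ratio
    index = int(flipped * n_monitors)
    return min(index, n_monitors - 1)  # clamp the ratio == 0.0 edge case
```

With two monitors, looking left selects index 0 and looking right selects index 1; the daemon then moves the cursor to the centre of the selected screen only when the index changes.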
Building
Enter the nix-shell or make sure you have the following system packages installed:
xorg (libSM, libXrender, libXext, libX11)
libstdc++
glib
cmake
Follow the installation instructions for GazeTracking
Follow the installation instructions for pyautogui
You're all set! Run `python init.py`. See lines 11 and 15 of that file for runtime options.
Built With
nix
opencv
python
Try it out
github.com | smice | Move your mouse across monitors, with your eyes. Using nothing but a webcam and two eyeballs. | ['Nicolas Mattia', 'maximilianmandl'] | [] | ['nix', 'opencv', 'python'] | 108 |
10,526 | https://devpost.com/software/diabetes-tracker | Inspiration
Our team's mission is to build tools that help everyone who needs to track their blood glucose levels.
While it's common knowledge that diabetes is a huge problem in the 21st century, we sought a solution that puts the problem at the forefront, starting with a habit almost everyone can build: daily tracking. We designed Diabetes Tracker to make it easy to figure out how much each patient's sugar levels deviate during the day. It's difficult to build daily habits; that is why we created motivational quotes which are revealed only when the user saves at least 9 records of their sugar levels. With Diabetes Tracker, we hope to motivate people to use the app on a daily basis by encouraging them to record their values more frequently. All of that is achieved by using simple but effective gamification.
What it does
Diabetes Tracker is a mobile app, which motivates its users every day to track their blood glucose levels, because the daily monitoring is essential for every person with diabetes to maintain a healthy and well-balanced lifestyle.
Apart from that, the app stores the values in Core Data and uses machine learning to analyse the recorded values. If the values are not in the correct range, the app sends a notification to its user, and it has a feature to directly contact the private doctor's assistant, who can perform a professional check and prevent the patient from future health problems.
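The range check that decides whether to notify the user can be sketched as a pure function. The 70–180 mg/dL band below is the commonly cited time-in-range target, used here only as an illustrative default; in the app the thresholds would come from the patient's doctor:

```python
def classify_glucose(mg_dl, low=70, high=180):
    """Classify a blood glucose reading against a target range.

    Returns 'low', 'in_range', or 'high'; anything outside the
    range would trigger a push notification in the app.
    """
    if mg_dl < low:
        return "low"
    if mg_dl > high:
        return "high"
    return "in_range"
```

Keeping the classification separate from the notification code also makes it trivial to unit-test the medical logic on its own.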
How we built it
The app is built on SwiftUI.
The data is stored in Core Data and JSON format
The notifications are triggered via Firebase
For prediction and analysis we used MySQL and Python. The ML models were built in Jupyter notebooks.
Challenges we ran into
The main challenge was to synchronise the back-end with the front-end. During the past hours we have created the app using SwiftUI, as well as the ML models which analyse the stored data and classify it as good or bad, but we didn't have enough time to integrate them into the app. As a separate project we tested the ML model on unseen data and it performed as planned. We also created the setup for the remote notifications, using Firebase, as well as the direct communication with the private doctor, but all of that was done within individual projects.
Accomplishments that we're proud of
We are very proud of the organisation which we managed to create. In the limited amount of time, our team managed to accomplish all individual to-do tasks, despite the fact that most of them were very complicated and challenging for some of us.
We are also very proud of the idea and the simplicity of the project, because nowadays there are many apps on the market, but most of them have many unnecessary features which are never used. Therefore some apps are very complicated for people with low education, especially in areas with elderly people who have limited experience with smartphones.
Our goal was to create an app that is simple in design and features, covers all the basics most needed by people with diabetes, and makes the user experience as intuitive as possible, so that everyone can use it, regardless of age or education.
What we learned
We learned today that sometimes the seemingly simplest things (dealing with app synchronisation for hours) can be the greatest barriers to building something that could be socially impactful. We also realised the value of well-developed, well-documented APIs for programmers who want to create great products.
What's next for Diabetes Tracker
The main goal of our team now is to finish the app by connecting the database with the ML model so that predictions can be executed, and by connecting the app with private doctors so that patients can have professional support or consultation if needed. We truly believe that the app can have a great impact, therefore we will do our best to complete it and bring it to people with diabetes as soon as possible.
Built With
ai
firebase
json
ml
mysql
python
swift
swiftui | Diabetes Tracker | Mobile app which motivate its users to daily track their blood glucose levels, by showing them daily motivational quotes as well as recording and sharing that data with a medical specialist. | ['Ivan Dimitrov', 'Dosi D', 'Lyuba Fileva'] | [] | ['ai', 'firebase', 'json', 'ml', 'mysql', 'python', 'swift', 'swiftui'] | 109 |
10,526 | https://devpost.com/software/gaming-the-environment-5gspm3 | Inspiration
We were inspired by many companies who helped us open our eyes to the problems they are facing and the problems climate change will bring to everyone in the next few years.
ACCENTURE & CLIMEWORKS, made us realize we need to act now!
CSS, AUTO-ID LABS, CAREUM, UNIVERSITY OF ST. GALLEN suggested how we can help people and the environment at the same time
HUAWEI gave us some tools to get great results
What it does
The Plastic App will let you challenge your friends to go finding & collecting plastic around.
While cleaning the environment you will also find out how many calories you burned and how far you walked.
Thanks to this, we are challenging people to get fit while doing the best for the world!
How I built it
We used:
react-native
for the app development
tensor-flow
for the ML object recognition with
coco-SSD
algorithm
HMS Health Kit
for calories detection and fitness activity tracking
Challenges I ran into
Setting up the HMS was quite difficult but in the end, we made it!
What's next for Gaming The Environment
We are still trying to add the following features:
QR scanning to join a game
Fitness activity sharing
Built With
huawei
react-native
tensorflow
Try it out
github.com | Gaming The Environment | Challenge your friends to clean the environment | ['Simone Battaglia', 'Gabriele Prestifilippo', 'Aida Dumi'] | [] | ['huawei', 'react-native', 'tensorflow'] | 110 |
10,526 | https://devpost.com/software/m-einkauf-get-woke | Inspiration
The daily buying decisions in a supermarket are a tradeoff between several factors and therefore hard for us to make. We at M-Einkauf are convinced that our children deserve a bright future. It simply should not be as hard as it is to make mindful and sustainable shopping decisions.
Does this product fit my values? Does it fit my body in terms of allergies, lactose intolerance or gluten intolerance? How can I choose the "right" product in one category out of hundreds?
What it does
Our no-download-or-update-needed web app helps us with the buying decision for a single product in the market and also tells you how your full basket fits to your spiritual values and physical needs after the purchase at home.
A customer can just go to
www.m-einkauf.ch
and link her/his Cumulus account to the m-einkauf.ch website. On the website it is also possible to enter how important things like
sustainability, health, spending budget and allergies
are to the customer. In the store it is easily possible to add product categories (e.g. "Schokolade") to the shopping list and get products that fit personal values and other settings the best.
After the purchase, she/he gets an email with a summary of how this purchase fitted their values, plus possible options for the next time.
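The product-matching idea can be sketched independently of the site's PHP implementation: score each product in a category against the customer's preference weights and sort best-first. All attribute names, values, and weights below are hypothetical, not real Migros API data:

```python
def rank_products(products, prefs):
    """Score each product by the user's preference weights (0..1 each)
    and return products best-first. Higher attribute value = better fit."""
    def score(p):
        return sum(prefs[k] * p[k] for k in prefs)
    return sorted(products, key=score, reverse=True)

# Hypothetical attribute scores (0..1) for products in one category.
chocolates = [
    {"name": "Choco A", "sustainability": 0.9, "health": 0.4, "budget": 0.3},
    {"name": "Choco B", "sustainability": 0.5, "health": 0.6, "budget": 0.9},
    {"name": "Choco C", "sustainability": 0.2, "health": 0.2, "budget": 1.0},
]
# A user who cares mostly about sustainability:
prefs = {"sustainability": 1.0, "health": 0.5, "budget": 0.2}
best = rank_products(chocolates, prefs)[0]
print(best["name"])  # -> Choco A
```

A budget-focused user (weight 1.0 on budget, 0 elsewhere) would instead see "Choco C" first.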
How we built it
We used the
API provided in the HackZurich 2020 - Migros Challenge
and PHP on our own server.
AND: We bought the domain
www.m-einkauf.ch
with our personal money for this hackathon ;)
Challenges we ran into
The provided API was not complete in all aspects, and the documentation was not always correct.
What we learned
We found out how important it is to build and document an API. Also, we should have looked at the raw data a lot earlier in order to become aware of this particular problem earlier.
What's next for M-Einkauf - Get Woke
The solution was designed to be easily integrated into Migros. However, it is not intended to only serve Migros customers. In order to make the world a better place, as many people as possible should be able to profit from less frustration when deciding what to buy and comparing products, especially regarding sustainability. The easier and the more engaging we make this for everyone, the brighter the future of our children will become.
Built With
html5
json
love
php
rest
storybrand
wordpress
Try it out
www.m-einkauf.ch | www.M-einkauf.ch - Get Woke | www.M-einkauf.ch is a web app for everyone, who wants to easily reflect her/his individual values, like healthiness and sustainability, in the daily shopping experience. | ['Kai Suchanek', 'Amr Abdelazeem', 'Benjamin Tal', 'David Pain'] | [] | ['html5', 'json', 'love', 'php', 'rest', 'storybrand', 'wordpress'] | 111 |
10,526 | https://devpost.com/software/carbocount | Carbocount
Score
Restaurants
Cart
Planter Interface
Item
Plant Tree
Donate
Extended Demo (3 mins)
https://www.youtube.com/watch?v=GMBeOQ2K6ec&feature=youtu.be&fbclid=IwAR3Pfovh4d-S3F_UUVPeDFWJtAh6HWM6jm1tWE8N84DGxJNTqpJkh4qgE7s&ab_channel=VincentOcchiogrosso
Inspiration
Global Warming is awful! While several greenhouse gases are responsible for it, CO2 is a major contributor. We wanted to come up with a solution that helps individuals become more aware of how their dietary habits add to it. We also wanted to provide them better alternatives and a way to make up for the damage.
What it does
Recommends ecofriendly alternatives to items in your cart; all you have to do is take a picture!
Shows ecofriendly restaurants in your area
Earn points by buying ecofriendly products and checking in at ecofriendly restaurants
Shows how many trees you need to plant to offset your carbon footprint
Donate to have a tree planted in your name
Shows trees planted in your name
Leaderboard to encourage individuals
How we built it
Eaternity API
Zomato API
React Native
AWS
Generated a score for each restaurant based on items on their menu
Calculated trees needed to offset carbon emissions
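The trees-to-offset calculation can be sketched as follows. The per-tree absorption figure is a rough assumption (real figures vary widely by species and climate), and the cart footprints are illustrative, not actual Eaternity API responses:

```python
import math

# Rough assumption: a mature tree absorbs ~21 kg of CO2 per year.
# (Figures vary widely by species and climate; illustrative only.)
KG_CO2_PER_TREE_PER_YEAR = 21.0

def trees_to_offset(total_kg_co2):
    """Whole trees needed to offset a given amount of CO2 over one year."""
    return math.ceil(total_kg_co2 / KG_CO2_PER_TREE_PER_YEAR)

# Hypothetical per-item footprints (kg CO2e) as an Eaternity-style
# API might report them:
cart = {"beef burger": 3.0, "cheese": 1.2, "lentil soup": 0.3}
total = sum(cart.values())
print(trees_to_offset(total))  # 4.5 kg -> 1 tree
```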
Challenges we ran into
Wanted to have an AR label for trees planted but the app crashed :(
Accomplishments that I'm proud of
It works
What we learned
A lot about carbon emissions
What's next for Carbocount
AR tagged trees
Personal carbon footprint based on household items, transport, habits
Payment integration
Built With
amazon-web-services
python
react-native
zomato
Try it out
github.com | Carbocount++ | Change your dietary habits, make up for your carbon footprint | ['Muntaser Syed', 'Ebtesam Haque', 'Vincent Occhiogrosso'] | [] | ['amazon-web-services', 'python', 'react-native', 'zomato'] | 112 |
10,526 | https://devpost.com/software/covid-doctor | Homepage
Services Offered by COVID Doctor
Live Stats of COVID-19
Real-Time Pulse and Breathing Rate Detection
Real-Time Pulse and Breathing Rate Detection
X-Ray Scan
X-Ray Scan Negative and Guidelines and Precautions
X-Ray Scan
X-Ray Scan Positive and Guidelines
Real-Time Self Assessment Chatbot
Real-Time Self Assessment Chatbot
Inspiration
In the current scenario, coronavirus is widespread across the entire world, but there are limited supplies and services for patients. At the same time, a large number of people who want to take a COVID test to assure themselves that they are safe go to hospitals, and the unnecessary crowds gathering there have a high chance of spreading the coronavirus. To stop this, I have developed COVID Doctor, a contactless COVID-19 detection web app which predicts a person's vital signs using just a camera and a browser. People can take the self-assessment test and run the health check to ensure that they are safe, without going to a hospital where there is a high chance of catching the virus.
In these difficult times, everybody has to be cautious for which they need to have a self-assessment tool which they can use for scanning themselves anytime.
What it does
Services Offered By COVID Doctor
Real-time Pulse Detection - Pulse detection plays a vital role in the detection of COVID. Respiratory rate is the number of breaths (inhales and exhales) you take in a minute and ranges from 12-20 breaths in a healthy person. Resting heart rate is the number of times your heart beats per minute and ranges between 60-90 in a healthy person, and heart rate variability is the time in between each heartbeat. So the change in pulse rate will help in early detection of COVID 19 and this can be tracked using just a camera and a browser.
COVID X-Ray Classification - With the help of this feature we can get an immediate result for our X-ray instead of going to the hospital and waiting for the outcome, which can lead to the spread of the deadly virus. Using Deep Learning technology we have built an X-ray classifier which classifies whether the patient is COVID positive or negative. This will save time and help stop the spread of the virus.
Live COVID 19 Stats - People can monitor the live stats of Coronavirus Confirmed, Active, Recovered cases, etc. in one place.
Real-Time Self Assessment Chatbot - People can take a self-assessment checkup by answering a few questions; based on those answers, the chatbot advises the user to quarantine, visit the hospital, or follow precautions if the user is healthy. This will help people ensure their safety.
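The core of camera-based pulse detection is finding the dominant periodicity in a skin-brightness signal. The sketch below is not the app's actual algorithm, just an illustration: a naive DFT search over the plausible heart-rate band, run on a synthetic signal instead of real camera frames:

```python
import math

def estimate_bpm(signal, fps, lo_bpm=40, hi_bpm=180):
    """Estimate pulse rate from a brightness signal by finding the
    dominant frequency in the plausible heart-rate band (naive DFT)."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    best_bpm, best_power = None, -1.0
    for bpm in range(lo_bpm, hi_bpm + 1):
        f = bpm / 60.0  # candidate frequency in Hz
        re = sum(x * math.cos(2 * math.pi * f * k / fps)
                 for k, x in enumerate(centered))
        im = sum(x * math.sin(2 * math.pi * f * k / fps)
                 for k, x in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_bpm, best_power = bpm, power
    return best_bpm

# Synthetic 10-second "skin brightness" trace at 30 FPS, 72 BPM pulse.
fps, true_hz = 30, 72 / 60.0
signal = [0.5 * math.sin(2 * math.pi * true_hz * k / fps) for k in range(300)]
print(estimate_bpm(signal, fps))  # -> 72
```

Real rPPG pipelines add face tracking, detrending, and band-pass filtering before this frequency step; the search above is only the final, simplest stage.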
How I built it
Tech Stack
Python 3
Open CV for image detection
Tensorflow and Keras for deep learning
Machine learning for pulse scanning models
Flask for web deployment
HTML and CSS for web frontend
Challenges I ran into
A healthcare product needs to be very accurate and requires immense medical knowledge. For real-time pulse and breathing rate detection, I had to do a lot of research into how blood flows and how to build a contactless system to stop the spread of the virus.
Accomplishments that I'm proud of
The Real-Time Pulse and Breathing Rate detection is 99.9% accurate.
What I learned
During this time, I learned how to work on a medical project and gained some insights into the human body.
What's next for COVID Doctor
In the future, I will try to connect the entire database of patients' diseases so that doctors can easily manage past disease records and provide better assistance by taking previous diseases into account.
Built With
automatic
css
html5
machine-learning
numpy
opencv
python
vision
Try it out
github.com | COVID Doctor | Smart Contact-Less Health System | ['Shubham Goel'] | [] | ['automatic', 'css', 'html5', 'machine-learning', 'numpy', 'opencv', 'python', 'vision'] | 113 |
10,526 | https://devpost.com/software/lipify-lipreading | Lipify - A Lip Reading Application
Project Dependencies:
Python>=3.7.1
tensorflow>=2.1.0
opencv-python>=4.2.0
dlib
moviepy>=1.0.1
numpy>=1.18.1
Pillow
matplotlib
tqdm
pyDot
seaborn
scikit-learn
imutils>=0.5.3
Note: All Dependencies can be found inside
'setup.py'
Project's Dataset Structure:
GP DataSet/
| --> align/
| --> video/
Videos-After-Extraction/
| --> S1/
| --> ....
| --> S20/
New-DataSet-Videos/
| --> S1/
| --> ....
| --> S20/
S1/
| --> Adverb/
| --> Alphabet/
| --> Colors/
| --> Commands/
| --> Numbers/
| --> Prepositions/
Dataset Info:
We use the GRID Corpus dataset which is publicly available at this
link
You can download the dataset using our script: GridCorpus-Downloader.sh
which was adapted from the code provided
here
To Download please run the following line of code in your terminal:
bash GridCorpus-Downloader.sh FirstSpeaker SecondSpeaker
where FirstSpeaker and SecondSpeaker are integers for the number of speakers to download
NOTE: Speaker 21 is missing from the GRID Corpus dataset due to technical issues.
Datset Segmentation Steps:
Run DatasetSegmentation.py
Run Pre-Processing/frameManipulator.py
After running the above files, all resultant videos will have 30 FPS and be 1 second long.
CNN Models Training Steps:
Model codes can be found in the directory
"NN-Models"
First you will need to change the common path value to the directory of your training and test data.
Run each network to start training.
Early stopping was used to help stop the training of the model at its optimum validation accuracy.
Resultant accuracies after training on the data can be found in:
Project Accuracies
or in the following illustration:
CNN Architecture:
All of our networks have the same architecture, with the only difference being the output layer, as shown in:
License:
MIT License
Built With
python
tensorflow
Try it out
github.com | Lipify | Your Lip Reading Application | ['Amr Khaled'] | [] | ['python', 'tensorflow'] | 114 |
10,526 | https://devpost.com/software/wecare-0fjkb9 | Summary: Home Screen of app, which allows you to report your symptoms, check the status of your circle, and get daily personalized tips.
Map Screen of app, which allows you to see hotspots around you and your Care Circle.
Care Circle screen of app, which allows you to health conditions of your loved ones.
Web interface, which can be used to update the symptoms. It is synced with the app.
The problem WeCare solves
As the outbreak of COVID-19 continues to spread throughout the entire world, more stringent containment measures from social distancing to city closure are being put into place, greatly stressing people we care about. To address the outbreak, there have been many ad hoc solutions for symptom tracking (e.g.,
UK app
), contact tracing (e.g.,
PPEP-PT
), and environmental risk dashboards (
covidmap
). However, these fragmented solutions may lead to false risk communication to citizens, while violating the privacy, adding extra layers of pressure to authorities and public health, and are not effective to follow the conditions of our cared ones. Until now, there is no privacy-preserving platform in the world to 1) let us follow the health conditions of our cared ones, 2) use a statistically rigorous live hotspots mapping to visualize current potential risks around localities based on available and important factors (environment, contacts, and symptoms) so the community can stay safer while resuming their normal life, and 3) collect accurate information for policymakers to better plan their limited resources.
Such a unified solution would help many families who are not able to see each other due to self-quarantine and enable early detection and risk evaluation, which may save many lives, especially for vulnerable groups. These urgent needs would remain for many months given that the quarantine conditions may be in place for the upcoming months, as the outbreak is not reported to occur yet in Africa, the potential arrival of second and third waves, and COVID-19 potential reappearance next year at a smaller scale (like seasonal flu). There is still uncertain information about immunity after being infected and recovered from COVID-19. Therefore, it is of paramount importance to address them using an easy-to-use and privacy-preserving solution that helps individuals, governments, and public health authorities. The closest solution is
COVID Aggregated Risk Evaluation project
, which tries to aggregate environment, contacts, and symptoms into a single risk factor. WeCare takes a different approach and a) visualizes those factors (instead of combining them into a single risk value) for more tangible risk communication and b) incentivizes individuals to regularly check their symptoms and share it with their Care Circle or health authorities.
WeCare Solution
WeCare is a digital platform, both app and website. Both platforms can be used separately, and with freedom of choice towards the user. The app, however, will give users more information and mobile resources throughout the day. Our cross-platform app enables symptom tracking, contact tracing, and environmental risk evaluation (using official data from public health authorities). Individuals can add their family members and friends to a Care Circle and track their health status and get personalized daily updates. In particular, individuals can opt-in to fill a simple questionnaire, supervised by our epidemiologist team member, about their symptoms, comorbidities, and demographic information. The app then tracks their location and informs them of potential hotspots for them and for vulnerable populations over a live map, built using opt-in reports of individuals. This map is accessible on the app and our website. Moreover, symptoms of individuals will be tracked frequently to enable sending a notification to the Care Circle and health authorities once the conditions get more severe. We have also designed a citizen point, where individuals get badges based on their contributions to solving pandemic by daily checkup, staying healthy, avoiding highly risky zones, protecting vulnerable groups, and sharing their anonymous data.
Our contact tracing module follows guidelines of Decentralized Pan-European Privacy-Preserving Proximity Tracing
(PEPP-PT)
, which is an international collaboration of top European universities and research institutes to ensure safety and privacy of individuals.
What we have done during the summer.
We have updated the app design and made new contacts in Brazil, Chile, and Singapore. We have also done some translation work on the app, shared more about the project on social media, and connected with more people on Slack and LinkedIn.
We have consolidated the idea and validated it with a customer survey. We then developed a new interface for
website
and changed the python backend to make it compatible with the WeCare app. We have also designed the app prototype and all main functionalities:
Environment: We have developed the notion of hotspots: a machine learning model maps the confirmed number of infected people in a city, together with the spatial distribution of the city population, to an approximate number of infected in everyone's neighbourhood.
Contact tracing: We have developed and successfully tested a privacy-preserving decentralized contact tracing module following the
(PEPP-PT)
, guidelines.
Symptoms tracking: We have developed a symptom tracking module for the app and website.
Care Circle: We have designed and implemented Care Circle where individuals can add significant ones to their circle using an anonymous ID and track their health status and the risk map around their location.
You can change what info you want to share with Care Circle during the crisis.
The app is very easy-to-use with minimal input (less than a minute per day) from the user.
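The neighbourhood-level hotspot estimate described under "Environment" is a machine learning model; a naive baseline for the same mapping would simply allocate a city's confirmed cases in proportion to neighbourhood population. All figures below are hypothetical:

```python
def neighbourhood_estimates(city_cases, neighbourhood_population):
    """Distribute a city's confirmed case count across its neighbourhoods
    in proportion to population -- a naive baseline for the hotspot map."""
    total = sum(neighbourhood_population.values())
    return {name: city_cases * pop / total
            for name, pop in neighbourhood_population.items()}

# Hypothetical figures for one city:
pops = {"Centrum": 50_000, "Norra": 30_000, "Sodra": 20_000}
est = neighbourhood_estimates(200, pops)
print(est["Centrum"])  # -> 100.0
```

The real model would correct this baseline with epidemiological factors (age structure, mobility, mitigations) that pure proportional allocation ignores.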
We are proud of the achievements of our team, given the very limited time and all the challenges.
Challenges we ran into
EUvsVirus Hackathon Challenge opened its online borders recently to the global audiences which brought together plenty of people of different expertise and skills. There were challenges that we faced that were very unique, as we faced a variety of communication platforms on top of open-source development tools.
Online Slack workspaces and Zoom meetings and webinars presented challenges in forms of inactive team members, cross-communications, and information bombardment in several separate threads and channels in Slack and online meetings of strangers that are coordinated across different time zones. In developing the website and app for user input data, our next challenge was in preserving the privacy of user information.
In the development of a live map indicating hotspot regions of the COVID-19 real-time dataset, our biggest challenge here was to ensure we do not misrepresent risk and prediction into our live mapping models. We approached Skill Mentor Alise. E, a specialist in epidemiology, who then explained in greater detail that the proper prediction and risk modelling should take into account a large number of factors such as population, epidemiology, and mitigations, etc., and take caution on the information we are presenting to the public. Coupled with the lack of official datasets available for specific municipalities for regions, we based geocoding data mining of user input by area codes cross-compared with available Sweden cities number of fatalities, infected and in intensive care due to COVID-19.
The solution’s impact on the crisis
We believe that WeCare would help many families who cannot see each other due to self-quarantine and enable early detection and risk evaluation, which may save many lives, especially for vulnerable groups. The ability to check up on their Care Circle and the hotspots around them substantially reduces the stress level and enables a much more effective and safer re-opening of the communities. Also, individuals can have a better understanding of the COVID-19 situation in their local neighbourhood, which is of paramount importance but not available today.
The live hotspot map enables many people of at-risk groups to have their daily walk and exercise, which are essential to improve their immunity system, yet sadly almost impossible today in many countries.
The concept of Care Circle motivates many people to invite a few others to monitor their symptoms on a daily basis (incentivized also through badges and notifications) and take more effective prevention practices.
Thereby, WeCare enables everyone to make important contributions toward addressing the crisis.
Moreover, data sharing would enable a better visual mapping model for public assessment, but also better data collection for the public health authorities and policymakers to make more informed decisions.
The necessities to continue the project
We plan to continue the project and fully develop the app. However, to realize the vision of WeCare we need the followings:
Social acceptance: though being confirmed using a small customer survey, we need more people to use the WeCare app and share their data, to build a better live risk map. We would also appreciate more fine-grained data from the health authorities, including the number of infected cases in small city zones and municipalities.
Public support: a partnership with authorities and potentially being as a part of government services, though not being necessary, to make it more legitimate. This would increase the level of reporting and therefore having a better overview and control of the crisis.
Resources: So far, we are voluntarily (and happily) paying for the costs of the servers. Given that all the services of the app and website would be free, we may need some support to run the services in the long-run.
The value of your solution(s) after the crisis
The quarantine conditions and strict isolation policies may still be in place for upcoming months and year, as the outbreak is not reported to occur yet in Africa, the potential arrival of second and third waves, and possible COVID-19 reappearance next year at a smaller scale (like seasonal flu).
Therefore, we believe that WeCare is a sustainable solution and remains very valuable after the current COVID-19 crisis.
The URL to the prototype
We believe in open science and open-source developments. You can find all the codes and documentation (so far) at our
Website
.
Github repo
.
Other channels.
https://www.facebook.com/wecareteamsweden
https://www.instagram.com/wecare_team
https://www.linkedin.com/company/42699280
https://youtu.be/_4wAGCkwInw
(new app demo 2020-05)
Interview:
https://www.ingenjoren.se/2020/04/29/de-jobbar-pa-fritiden-med-en-svensk-smittspridnings-app
Built With
node.js
python
react
vue.js
Try it out
www.covidmap.se
github.com | WeCare | WeCare is a privacy-preserving app & page that keeps you & your family safer. You can track the health status of your cared ones & use a live hotspot map to start your normal life while staying safer. | [] | ['2nd place', 'Best EUvsVirus Continuation', 'Best Privacy Project'] | ['node.js', 'python', 'react', 'vue.js'] | 115 |
10,526 | https://devpost.com/software/meetingzz | What do we do?
The Architecture
The UI Screenshot
Inspiration
All the meetings have been moved online and all of us work online. Many meetings are conducted every day, and recording the summary of a meeting is a major task that has to be taken care of by one of the team members. It is always cumbersome to record all the points discussed without missing anything. Most of the time, this is seen as a challenging task and people actually are not interested in recording the meeting summary/minutes. Here, we have provided an intelligent AI-based solution that accepts the MEETING AUDIO file as input and produces the complete summary with the persons identified. The transcripts can be downloaded, and we also give you complete statistics about how many speakers are identified, how many transcripts are generated, etc.
What it does
Upload the meeting audio file to the interactive system we built. Wait a few seconds: the complete meeting is analyzed by our system and we generate a transcript that has the summary of the meeting with speakers identified accurately.
How I built it
COVID-19 has changed the way we work every day. Almost everyone is working from home, and meetings happen over platforms like MS Teams, Zoom, or Google Meet. Taking the minutes of a meeting is a very important part of any meeting: everyone needs them for recall, and clients need them for the record. Someone always makes a manual note of the important points, which is cumbersome and at times goes wrong. We performed extraction, speech segmentation, and speaker diarization on the input file. Then we recognize who the speaker is and what the content is, and generate a sequential transcription for every second of the meeting. All of this is done with our intelligent AI approach. We used TensorFlow and deep learning techniques extensively to build this.
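The diarization itself was done with TensorFlow models. As an illustration of the clustering step only, here is a greedy pure-Python sketch that groups per-segment voice embeddings by cosine similarity; the embeddings and threshold are toy values, not outputs of the actual system:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def diarize(segment_embeddings, threshold=0.8):
    """Greedy speaker clustering: assign each segment's voice embedding to
    the first existing speaker it resembles, else open a new speaker.
    Returns one speaker id per segment."""
    speakers, labels = [], []
    for emb in segment_embeddings:
        for i, ref in enumerate(speakers):
            if cosine(emb, ref) >= threshold:
                labels.append(i)
                break
        else:
            speakers.append(emb)
            labels.append(len(speakers) - 1)
    return labels

# Toy embeddings: two distinct "voices" alternating across segments.
voice_a, voice_b = [1.0, 0.1, 0.0], [0.0, 0.1, 1.0]
segments = [voice_a, voice_b, [0.9, 0.2, 0.1], [0.1, 0.0, 0.95]]
print(diarize(segments))  # -> [0, 1, 0, 1]
```

Production systems typically use learned embeddings (e.g. d-vectors) and more robust clustering than this first-match-wins loop.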
Challenges I ran into
Identifying the speaker was a major challenge.
Summarizing the points of the meeting in order was a challenge and needed a lot of effort.
Accomplishments that I'm proud of
Our team is very proud of this as we have definitely eased the task of the MoM and used tech to the best extent possible.
What I learned
Perseverance and Smart Working.
Usage of TensorFlow for voice.
What's next for Meetingzz
To talk to Zoom, WebEx, etc. to see how we can integrate this as part of their tools.
Built With
bootstrap
cloud
css
deeplearning
flask
javascript
tensorflow
Try it out
drive.google.com | Meetingzz | Meeting Summary Made Easier | ['Shriram Vasudevan', 'Giridhararajan Rajarajan', 'Nitin Dantu'] | [] | ['bootstrap', 'cloud', 'css', 'deeplearning', 'flask', 'javascript', 'tensorflow'] | 116 |
10,526 | https://devpost.com/software/covidar-mlsewn | UI
world AR
India AR
AR GAME
BAR GRAPH
Inspiration
Matthew Halberg posted a video on Instagram about data visualization of COVID data in AR for the USA; that gave us the first impression and inspired us to build a COVID AR for India.
What it does
Visualization of coronavirus data in augmented reality.
How we built it
By learning the Unity engine, AR Foundation, JSON APIs, Android native plugins, and 3D object interaction: handling responses and the placement of 3D objects interacting with real-world entities.
Challenges we ran into
Focusing on
dynamic objects at run time
and making them interact with raycast technology.
Accomplishments that we're proud of
My friend and I implemented the interaction for dynamic loading and the server script for runtime-loaded objects with the
custom behavior scripts
provided by echoAR.
What we learned
echoAR and the usage of dynamically loaded objects.
What's next for COVIDAR
develop a featured product for
data processing for any kind of data with the API
symptom visualization in AR, AR chatbot.
Built With
api
ar
arfoundation
unity
Try it out
github.com | COVIDAR | Once it arrives, Augmented Reality will be everywhere like covid 19 | ['JOTHESH S P', 'Mageshwaran R', 'Prashanth S'] | ['Best AR Project', 'Best Education Project'] | ['api', 'ar', 'arfoundation', 'unity'] | 117 |
10,526 | https://devpost.com/software/smokefree-bot | Inspiration
Smoke-free future
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Smoky
Built With
image-processing
natural-language-processing | Smoky | Get rewards on your purchase of smoke-free products! | ['Leandro Camacho'] | [] | ['image-processing', 'natural-language-processing'] | 118 |
10,526 | https://devpost.com/software/hello-corona | Inspiration
The SRF workshop convinced us that this is an interesting project that will help many of their journalists report better on the outbreak.
What it does
Corona Scare Map parses the global Twitter data from crowdbreaks.org and creates a heatmap in a web application. Let's try to figure out how much attention the Covid-19 pandemic got in different countries.
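The country-level aggregation behind such a heatmap can be sketched simply: turn per-country tweet counts into normalized intensities. The log scaling and the counts below are assumptions for illustration, not the project's actual pipeline:

```python
import math

def heat_intensities(tweet_counts):
    """Turn per-country COVID tweet counts into 0..1 heat-map intensities.
    Log scaling keeps a few very loud countries from washing out the map."""
    logs = {c: math.log1p(n) for c, n in tweet_counts.items()}
    peak = max(logs.values())
    return {c: v / peak for c, v in logs.items()}

# Hypothetical daily counts from a crowdbreaks-style feed:
counts = {"CH": 1_200, "DE": 15_000, "IT": 48_000}
heat = heat_intensities(counts)
print(round(heat["IT"], 2))  # -> 1.0
```

These intensities could then be handed to a Leaflet choropleth layer for rendering.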
How we built it
We changed plans 5 times, but it's super easy to deploy! While we dabbled with Azure's very cool APIs and machine learning capabilities powered by blockchain, we constantly transformed our idea and pivoted to something simpler.
Challenges we ran into
Azure was more complicated than we thought. Our team lacked some web and data skills. And most importantly, there was too much focus on infrastructure and a lack of a clear initial vision.
Accomplishments that we're proud of
We have a working web application and it can be deployed to Azure in minutes.
What we learned
We improved our skills with Azure. It was our first hackathon and we realized that programming a prototype in a short time is very different from everyday programming.
Built With
azure
flask
leaflet.js
next.js
postgresql
python | Corona Scare Map | A helpful tool to visualize the perception of corona | ['Daan Boerlage', 'Anja Stuber', 'Matthias Weber'] | [] | ['azure', 'flask', 'leaflet.js', 'next.js', 'postgresql', 'python'] | 119 |
10,526 | https://devpost.com/software/perkmeup | PerkMeUp_demo_image
Inspiration
Just imagine that an honest Producer of goods wants to educate Consumers about a new product, its ingredients, and distinctive features, as well as to send some video advice. The usual consumer decision-making time is 2-5 seconds; the majority of people need some appealing arguments to pay more attention and learn about the product at the decision-making point. How to motivate them?
Give some perks! Send 20-30-50 cents directly to their mobile wallets immediately after watching the video.
Allocate more reward for healthier products thus strengthening consumer loyalty.
Banks are not involved. Retail chains are not involved. Only direct communication between the Producer and the Consumer.
What it does
A mobile app enabling producers of goods to instantly reward consumers for reading product info and watching short video on their mobile phones. More rewards for healthier products!
Changing consumer motivation and behavioral model.
How I built it
Pretty quickly in several weeks.
Just because my team found my idea pretty cool, and I had a clear vision on the app architecture... And we were extremely motivated: we want to learn about healthy products, see the difference, and get perks from Producers whose goods we usually buy.
Challenges I ran into
Due to COVID-19 I can't freely communicate with the team, so everything is done without leaving the kitchen.
Accomplishments that I'm proud of
Just in a few weeks I have a fully-functioning mobile app that changes the way a producer will communicate with consumers globally and build loyalty to healthier products.
What I learned
Build a fully-functioning app first. Don't waste your time on PowerPoint presentations, a website, or establishing a legal entity: no Producer will pay for those. They want to see a fully-functioning app.
What's next for PerkMeUp
To create the polygon version and implement more complex logic in the smart contract allocating rewards based on multiple factors important for the consumer, motivating sensible product choice and building loyalty.
Built With
atrifyapi
figma
solidity | PerkMeUp | A mobile app enabling producers of goods to instantly reward consumers for reading product info and choosing healthier products | ['Aliaksandr Lazerko', 'Vitaliy Chernov', 'Nikita Zasimuk'] | [] | ['atrifyapi', 'figma', 'solidity'] | 120 |
10,526 | https://devpost.com/software/source-hwmna1 | Whole picture!
Inspiration
Misinformation—both promoted deliberately and accidentally—is perhaps an inevitable part of the world in which we live, and is not a new problem. However, we currently live in an era in which technology enables information to reach large audiences distributed across the globe, and thus the potential for immediate and widespread effects from misinformation now looms larger than in the past. Recent examples include the misinformation regarding COVID-19, Trump's tweets, and how Facebook handled the hate speech and misinformation regarding the Rohingya in Myanmar. In recent years, tech companies have been criticized for their role in the spread of misinformation and for how they should take action against it. Furthermore, we also know that people judge source credibility as a cue in determining message acceptability, and will turn to others for confirmation of the truth of a claim. The flow of misinformation on social media is thus a function of both human and technical factors. So what if we could bring both of these factors together in a way that minimizes misinformation? A platform that is available to all and regulated by all, where what you share is fact-checked by people whose credibility depends on their real expertise, skills, and previous credit: this is exactly what we want to try and build!
What it does
The Source, to put it simply, can be regarded as a mix of social media such as Twitter or Facebook and a community-driven, collaborative platform such as Wikipedia.
The main features of our first version consist of:
Users can post and share news and facts in different categories.
Categories are pre-defined by the system and examples include politics, health, sports, etc.
Each post's credibility will be measured by users. Every user has a credit in each category and based on users' interaction with a post (the credit of the person posting it, approving/disapproving done by other users) the system will calculate a post's score (a measure for a post's credibility in our system). To get a better idea of the credit system, imagine something similar to StackOverflow.
A post with a score above a specific number is considered a verified fact.
The ultimate goal of the system is to show only the verified posts to users and thus, how the user's timeline is ordered or what kind of posts are considered more hot are all decided based on the post's score.
Users will gain credits by making positive actions (post a correct fact, help measure the facts)
Users will lose credits with negative actions (post a fake fact, approve a wrong fact)
Being good at sports doesn't mean you also know more about the COVID-19! Users will have separate credibility in each category and thus having a high credit in one category doesn't mean your voice will reach a bigger audience in other categories.
How we built it
We first started with a simple social network in which users can join, log in, post their facts/news, and see others' posts. We built this on REST with Django and React Native, Redux as our mobile state manager, and PostgreSQL as our primary database.
Then we implemented the approve/disapprove for posts. Every user (except the author) can approve/disapprove the post. After this, we had the main APIs in place.
In the next stage, we focused on our scoring and credit algorithms. A couple of scoring algorithms were tested against our test data. We implemented the best one for the first version, which computes the score based on the author's credit, the approvers' credits, and the disapprovers' credits. This algorithm creates a separate cluster for each group and finds the best mean between them. The score lies between -100 and 100: -100 means the fact is completely wrong, 0 means the fact is neutral, and 100 means the post is completely correct.
We then set the thresholds of >60 and <-60 for verified and unverified facts, based on our tests. This let us build a training model based on logistic regression. The model trains on the verified and unverified facts and their authors' and voters' credits. This process is completely automated and online. It then predicts the score of every fact to find the potentially correct and potentially wrong facts.
In the end, we built our credit system. This system uses the user's posts and votes to evaluate the credit of the user; the algorithm is not based on training models. We also added the feature of an "initial credit" for each user in each category. This helps us bootstrap the credit system; it should be filled in based on the user's degree, publications, previous works, etc.
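A hypothetical sketch of how such a credit-weighted score and threshold check might look. The function names, the normalisation, and the exact weighting here are illustrative assumptions, not the team's actual clustering algorithm:

```python
# Hypothetical sketch: a credit-weighted post score in [-100, 100].
# Assumes credits are non-negative numbers; NOT the project's real code.

def post_score(author_credit, approver_credits, disapprover_credits):
    """Positive weight from the author and approvers, negative weight from
    disapprovers, normalised by the total credit involved."""
    positive = author_credit + sum(approver_credits)
    negative = sum(disapprover_credits)
    total = positive + negative
    if total == 0:
        return 0.0  # no credit involved: treat the fact as neutral
    return 100.0 * (positive - negative) / total

def is_verified(score, threshold=60):
    # Posts scoring above +60 are treated as verified facts,
    # below -60 as unverified (mirroring the thresholds described above).
    return score > threshold
```

A post approved only by high-credit users would score close to 100 and pass the verification threshold, while a contested post would hover near 0 and stay unverified.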
Challenges we ran into
We had 3 main challenges:
The scoring system is complicated and quite new. Fact-checking publications and ML algorithms mostly focus on extracting the subject, the object, and the claim, and checking them against correct data from reliable sources. In our case, however, each source has its own credit and independently claims the falseness/correctness of a fact, so we have to determine the correctness factor of the fact from the credits of those sources.
The credit system also updates along with the scoring system and the actions of the users. These variables (the user's credit and the credits of the user's posts) are computed based on each other, which adds more complexity to the system.
How should we fill in the "initial credit" automatically for each user? This is very dependent on the category; for example, we could use publications for the health category. But sites like Google Scholar have no open API and cannot be used to verify the user's email.
Accomplishments that we're proud of
First of all, we managed to implement the first version of our MVP with only 2 people and in a short time: a social network with scoring and credit systems. We are very proud of this.
We also implemented some complex algorithms, designed and implemented a training model for predicting post scores, and automated the whole thing in the development environment (which is on Heroku).
In the mobile app, we also managed to implement the main features with a minimal UI/UX, just for showcasing.
Being rapid and having a working product is the most important thing to consider in a startup. To achieve it, we had to create a vision, break down the epic stories, assign priorities to them, and re-plan every 6 hours.
What we learned
We only realized the depth of the challenges of our idea when we started developing the system, especially the challenge of determining the initial credit.
Automating the training model on Heroku.
Working with a tight deadline and designing the interface on the spot.
What's next for Source
On the tech side, we need to improve the scoring and credit systems, gather more training data, and make our model better. Features like comments and different types of posts (facts, opinions, and self-made content) will also make the system more usable. We are also thinking the system could produce a digital currency based on users' credits, to be given to them at the end of each season. This currency is based on "data": the more popular the self-made content you generate, and the more you spread correct facts or help the system determine correct and wrong facts, the more currency you will receive.
The whole picture can be explained by the product name, Source! Source of verified facts, Source of expert's opinions, Source of any type of self-made contents, and Source of your income.
Built With
django
heroku
javascript
numpy
postgresql
python
react-native
redis
redux
scikit-learn
scipy
Try it out
github.com
github.com
source-backend.herokuapp.com | Source | Get your data from the Source | ['Farnood Massoudi'] | [] | ['django', 'heroku', 'javascript', 'numpy', 'postgresql', 'python', 'react-native', 'redis', 'redux', 'scikit-learn', 'scipy'] | 121 |
10,526 | https://devpost.com/software/aaye-apmbk2 | After logging in successfully
After booking Appointment
Appointment details
SignUp page for User
Appointment Booking Page
User Dashboard with users information and list of previously booked appointments
Appointment Reciept
Home Page
List of available doctors
LogIn Page
Online Medical Store
Doctor's Registration page
We have been developing a website for online booking of appointments with doctors. During this pandemic, many people are not able to go to the doctor, so on our portal people can book their appointments, receive prescriptions online, and have medicines delivered to their doorstep. We have also deployed an ML model for reading prescriptions and informing patients about them. In later stages, people will be able to chat directly with the machine to understand their medicines and minor symptoms before visiting the doctor.
Built With
bootstrap
css
html
javascript
jquery
php
sql
Try it out
sarwar1227.000webhostapp.com
github.com | AAYE | Appointment At Your Ease | ['Sarwar Ali', 'Abhishek Goel', 'Nancy Mangla'] | [] | ['bootstrap', 'css', 'html', 'javascript', 'jquery', 'php', 'sql'] | 122 |
10,526 | https://devpost.com/software/sparking-creative-innovation | Logo
Event Invite
Gamification
Inspiration
A couple of people on our team started new jobs in March, during the pandemic, in a remote setting. INCLUDOO is inspired by their experiences of coping with the situation. When you are new to a company, you do not know your colleagues well, and it is not easy to bond over pure video calls.
What it does
INCLUDOO is an intelligent meetup organizer application which allows companies to increase employee social interaction and creativity through intelligent team-bonding events. INCLUDOO tries to increase the strength of connections in the company by setting up appointments between weakly connected and strongly connected employees, suggesting an event that is interesting for both using a novel graph-based approach.
INCLUDOO collects information about the strength of relationships between employees based on the exchanges between them through email (GSuite, Outlook, etc.) and instant-message conversations (Slack, Teams, etc.). Additionally, each employee can select their preferences every week on what they want to do that week (watch a movie, do some sports, cooking, eat out, etc.), along with rough timeslots of when they want to do it (lunch, weekday after work, or weekend) and whether they would like to do it virtually or in person. Based on the strengths and their preferences, the employees are matched to ensure maximum connectivity within the company.
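A minimal sketch of the matching idea, assuming connection strengths and weekly interests are already available. The greedy strategy, the function name `suggest_pairs`, and the data shapes are all assumptions for illustration, not Includoo's actual graph algorithm:

```python
# Illustrative sketch: pair the most weakly connected colleagues who share
# at least one interest. Assumed data shapes, NOT Includoo's real code.
from itertools import combinations

def suggest_pairs(strengths, interests):
    """strengths: dict mapping frozenset({a, b}) -> connection strength
    (missing pairs count as 0, i.e. strangers).
    interests: dict mapping employee name -> set of preferred activities.
    Returns (a, b, shared_activity) tuples, weakest connections first."""
    candidates = []
    for a, b in combinations(sorted(interests), 2):
        shared = interests[a] & interests[b]
        if shared:
            w = strengths.get(frozenset((a, b)), 0)
            candidates.append((w, a, b, sorted(shared)[0]))
    candidates.sort()  # weakest connections get priority
    matched, pairs = set(), []
    for w, a, b, activity in candidates:
        if a not in matched and b not in matched:
            matched |= {a, b}
            pairs.append((a, b, activity))
    return pairs
```

Pairing the weakest links first is one simple way to raise the overall connectivity of the company graph; a production system would likely optimise a global objective instead of matching greedily.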
Another potential data point we explored for our integration is the use of the computer by the employees by tracking their usage of the Logitech keyboard & mice with their workflows & sensors. These data points could potentially indicate if some person is stressed out or requires attention of someone else in the team. These data points would not be sent to the cloud & only the inferences are shared from the local machine.
Some of the virtual events are also directly connected to external services like watching videos together or reservations to local restaurants.
To incentivise employees to utilize the platform, gamification is introduced in the system with points being allocated for each activity. These points are shown on the company leaderboard with tiered rewards including rewards for the entire company on the overall performance of the company.
There is a conscious effort to only collect the information that is necessary.
How I built it
We built the backend in Python using the Django Rest Framework. Additionally, we integrated with the GSuite to get the connection strength based on the email exchanges. We developed a Chrome extension that integrates Includoo on the GSuite platform including the calendar & the email.
Challenges I ran into
The main challenges were in implementing authentication with Google services.
Accomplishments that I'm proud of
INCLUDOO’s team is proud of creating a revolutionary app that will increase employee and company satisfaction & creativity.
We managed to integrate with the email platform and extract the contact strengths between employees based on their exchanges. The graph-based matching approach to increase the overall network strength in the company is also quite novel.
What I learned
INCLUDOO’s team learned a great deal about how to creatively measure employee connectivity in order to help increase social interaction, whilst being very mindful of privacy.
What's next for INCLUDOO
INCLUDOO is slated for upgrades in the following areas: automated meetup space reservations; "Quick Meets!", options to meet your colleagues quickly during the work day to inspire creativity; and hardware-based metrics to measure your performance and temperament and suggest events to keep you balanced.
Built With
INCLUDOO is built from Python with a fair amount of JavaScript and CSS.
Built With
apis
django
gsuite
postgresql
Try it out
github.com
www.figma.com | Includoo | Your Company’s Intelligent Meetup Organizer! INCLUDOO directly solves remote collaboration pain-points by intelligently matching people in the company based on their interests | ['Nithish Raghunandanan', 'Marcel Engelmann', 'Jeremy Huffman', 'Tim Weiland', 'Jiayao Yu'] | [] | ['apis', 'django', 'gsuite', 'postgresql'] | 123 |
10,526 | https://devpost.com/software/qwe-8a2wtn | Inspiration
The safest place to be in this pandemic would be at home, but going out is inevitable, whether it's for groceries or for other necessities. Going out during this pandemic carries the risk of getting infected with COVID-19. But exactly how much risk does going out pose to your health? With that question in mind, this project aims to give every user a perspective on their exposure levels to COVID-19 based on their daily activity of going out.
What it does
The app takes your home location and sets it as a safe zone where COVID exposure levels are 0. Using the data received from the AWS Data Exchange Enigma corona tracker, it classifies every place in the world into one of 3 zones. Zone 1 is a green zone where COVID cases are light to almost none, for example countries with few COVID cases or, in general, an empty space; Zone 2 is an orange zone where COVID cases are moderate; and Zone 3 is a red zone with a heavy concentration of COVID cases, which also includes public spaces.
The app determines which zone you are in and presents a risk analysis based on the time you spent outside and the zone you were in. It tracks how long you spent outside in each zone, and when you return home it gives you an analysis of all the places you have been, their associated zones, and a risk estimate of contracting COVID-19. It gives you a detailed analysis of your daily activity and your monthly exposure levels to COVID.
This gives users a visual understanding of their risk of contracting COVID, and the app furthermore compares the risk of wearing a mask vs. not wearing a mask, giving the user a perspective on how their exposure risk changes with their precautions. Lastly, the app presents a set of precautions to take based on your exposure levels during the time you spent outside.
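The zone-and-time risk model described above could be sketched roughly as follows. The zone weights, the mask reduction factor, and the function name are invented for illustration; the app's actual values and formula are not given in this description:

```python
# Rough sketch of time-weighted exposure accumulation across zones.
# ZONE_WEIGHT and MASK_FACTOR are made-up values, NOT the app's real ones.

ZONE_WEIGHT = {"green": 1, "orange": 3, "red": 8}  # assumed risk per hour
MASK_FACTOR = 0.3                                   # assumed risk reduction

def exposure_score(visits, wearing_mask=False):
    """visits: list of (zone, hours) tuples for one trip outside the home;
    the home itself counts as zero exposure (the 'safe zone')."""
    raw = sum(ZONE_WEIGHT[zone] * hours for zone, hours in visits)
    return raw * MASK_FACTOR if wearing_mask else raw
```

Summing per-zone time like this makes the masked vs. unmasked comparison a single multiplication, which matches the kind of side-by-side risk view the app presents.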
The app also contains a map that wasn't completed to its full potential, but was intended to give a zone view of the different exposure levels at different places within your location's radius.
How we built it
We built this app using Flutter as our frontend for presenting our data visualization and analysis, and FastAPI as our backend to compute our data from AWS and the data generated by the user. We used the AWS Data Exchange Enigma corona dataset for analysis on locations, classifying a location into a zone based on population, corona growth rate, and other metrics. Once a location is classified, an algorithm computes your risk exposure levels based on the time you spent in that zone. We hosted the application on an Amazon EC2 instance, with storage in S3 and ArangoDB for all the data from AWS and the user-generated data. Lastly, we used different APIs for map generation and news collection, and the Python library geopy to translate coordinates into an address the database understands for zone classification.
Challenges we ran into
Classifying a location into a certain zone took a lot of metrics, and in many cases the database didn't contain all the metrics needed, so we had to find other sources to fill in the missing values. The data visualization and analysis was another challenge we had to overcome while building the application.
Accomplishments that we're proud of
We are proud of our data analysis based on the user's daily input and the AWS Data Exchange dataset's insight for backend zone determination and risk assessment. Furthermore, we built a good model to predict COVID exposure risk levels from real-life tracking.
What we learned
We learned to work with different APIs, connecting all of them together, and certainly learned a lot about data modeling and visualization.
What's next for TrackMyCovid
The next steps for TrackMyCovid are creating a map view of the world with green, orange, and red zones, creating more personalized reports based on COVID exposure, and producing more accurate estimates. Furthermore, we plan on adding push notifications: when you enter a new zone, the app will give statistics on that zone and precautions for that exact zone.
Built With
amazon-ec2
amazon-web-services
arangodb
aws-data-exchange-enigma-dataset
boto3
covid19.org
fastapi
flutter
geopy
matplotlib
newsapi.org
nginx
numpy
pandas
scikit-learn
Try it out
github.com | TrackMyCovid | A mobile app which gives you a risk analysis of contracting COVID19 based on the places you been to and analyzing the amount of cases recorded and the time spent by you in each place. | ['Rohit Ganti', 'appidi abhinav', 'Arshdeep Singh', 'Abhishek Kumar', 'KrishNa Na'] | ['Honorable Mention', "People's Choice", 'Best AWS Project'] | ['amazon-ec2', 'amazon-web-services', 'arangodb', 'aws-data-exchange-enigma-dataset', 'boto3', 'covid19.org', 'fastapi', 'flutter', 'geopy', 'matplotlib', 'newsapi.org', 'nginx', 'numpy', 'pandas', 'scikit-learn'] | 124 |
10,526 | https://devpost.com/software/early-detection | Inspiration
Since the COVID-19 pandemic began, there has been quite a lot of research on AI/ML techniques that could help early diagnosis using CT scans, chest X-rays, medical prescriptions, sonar, etc. Yet there are few deployed models out there.
What it does
So the idea is to build a mobile app that provides information about the virus: the various reported symptoms, its spread across regions, and possible treatments, in addition to possible early classification of infection using CT-scan images, including severity classification.
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Early Detection
Currently looking for a team to implement this crazy idea.
I am an ML/data researcher and I have sometimes worked with backend devs; I'm not good at pitching. So if you're interested, get in touch on Slack or Devpost!
I am open to any idea refinements.
Built With
computer-vision
deep-learning
mobile | Awareness and Early Detection COVID19 | An app that help in diagnosis and awareness spread for COVID19 | ['Reem Abdel-salam'] | [] | ['computer-vision', 'deep-learning', 'mobile'] | 125 |
10,526 | https://devpost.com/software/challenge-13-no-title-yet-h7ngyl | created before the hackathon but perticipated in different project and couldn´t delete it
Built With
fancy-stuff | Template (not in use) | - | ['Benedict Lindner'] | [] | ['fancy-stuff'] | 126 |
10,526 | https://devpost.com/software/shopper-25x7nk | Shopper - Modern Shopping List
No need for a physical paper shopping list or bill book when you have this app in your hand!
List all your shopping/grocery items in one place! No worries, you can change your quantity later too!
Don't worry! If you checked off items during shopping, they will all be visible in the recycled checked list, and you can add them back!
Sometimes what happens is: you are buying some items, and your wife says it's already in the fridge! But now you can show her what's in the fridge!
I know it's hectic to keep track of all your past bills! No worries, we have solved your problem: now you can store your bills too, all in one app.
Inspiration
When I go shopping, I am quite curious: why do people always need to carry a physical paper shopping list with them every time? And sometimes, what happens with lazy people like me is that I go shopping for some necessary items but come back with unnecessary ones!
When I see this type of situation, My mind gets stuck, and ideas pop up in my brain.
What it does
Umm!! We can say that this app is a multitasker! But how? Let's see: a personalized shopping list, a recycled checked list, an inside-the-fridge list, and lastly, you can store and save your bills too.
Personalized shopping list: I know, carrying a paper list every time you go shopping is a headache and time-consuming. To solve this, we have converted the paper shopping list into an app. Now there's no need to worry: you can list item names as well as categories of items and quantities. Oh, you mistakenly added the wrong quantity? No worries, you can edit it later in your list.
So, once you have bought an item, just check it off in your shopping list as you would on a paper list, and it will be added to the checked list so you can review your items later when you come back home. If you want to rebuy the same things, just transfer them back to the shopping list.
Sometimes, when you are buying something, you forget whether you have already bought that item or not. To solve this, we have implemented a feature called the "inside my fridge" list, which shows you the items available in your fridge.
I know it's hectic to keep track of all your past bills! No worries, we have solved your problem: now you can store your bills too in our app!
_ Sharing of the shopping list is coming soon in next update _
Happy Shopper family!!
_ CHEERS!! _
How I built it
I first built this app in Python (terminal based, with MongoDB) to check whether the idea would work. Then I finalized my idea and started making it for Android!
I have made this app for Android devices only; I will implement it for iOS in the future. I developed the app in Android Studio, used Google Firebase for backend server services, and used SQLite to store user data safely.
Challenges I ran into
The main challenge was managing my 12th-standard studies and developing the app at the same time!
Personally, I took on the challenge of developing Shopper in 4 days, and I successfully achieved it!
All the other challenges were small and easy to overcome.
Accomplishments that I'm proud of
Proud to develop a complex app and help people with it.
What I learned
Things I learnt: how to use Adobe XD like a pro, advanced Java, advanced XML, SQLite databases, advanced Python, Firebase with Realtime Database, and many more!
I have learned many things outside programming, like time management, balancing studies and programming together, and keeping my patience while debugging the app (just kidding!).
What's next for Shopper
Sharing your shopping list with others. This update is ready to launch, and it will soon be released on the Play Store.
Website and Instagram
Let's Go
Instagram
Thanks,
Rudra Shah
(Student)
Built With
android-studio
firebase
java
sqlite
xml
Try it out
play.google.com | Shopper - Modern Shopping List | Enhance Your Shopping Experience | ['Rudra shah'] | [] | ['android-studio', 'firebase', 'java', 'sqlite', 'xml'] | 127 |
10,526 | https://devpost.com/software/encounter-tavern | Inspiration
While there are a lot of DM (Dungeon Master) tools with which you can play fully remote, there is a disturbing lack of tools that allow one to play face-to-face while still leveraging the advantages of technology. So we set out to build one ourselves, starting with a tool to manage encounters.
What it does
It allows the DM to create and manage encounters for their sessions and campaigns.
We wrote an algorithm to generate encounters based on the strength of the players.
How we built it
The application is split in into two parts:
Backend
The backend is a Spring Boot Application which exposes a REST-API. To fill in the vast amounts of Dungeons and Dragons data we needed, we used
the public 5e-srd-api
. To document the REST-API we used
Swagger
.
You can take a look at the repo
here
or our Swagger-documentation available
here
.
Frontend
The frontend is completely build in
Vue.js
with
Vuetify
on top of it. Because Vue.js is a single page application we also used
Vue Router
to simulated a more traditional webpage.
You can take a look at the repository
here
.
Challenges we ran into
We ran into three major challenges:
The public
5e-srd-api
did not return some of the data as we hoped it would. This made the further processing of this data a bit more complex than we would have liked.
Because we are both backend developers by trade, we have very little experience in coding a frontend on this scale, which made this an interesting endeavor, to say the least.
Because one of us had to quarantine himself on short notice, we had to switch to a fully remote participation instead of meeting up.
Accomplishments that we are proud of
We made an application that works and that we can actually use, even if there may still be some bugs here and there.
We are also proud of our teamwork. Even though it was a fully remote environment it worked flawlessly.
What we learned
We learned a lot, and we really mean a lot, especially about Vue.js and Vuetify.
Also that it is apparently possible to only sleep two hours during this entire time frame.
What's next for Encounter Tavern
There is a big list of ideas that we want to implement:
The possibility to create an account and keep the encounters private
A tool to create encounters from scratch without any generation
Better encounter management (tags, story elements...)
More possibilities in generating encounters (terrain, languages, alignments...)
A face lift for the UI :)
We are also looking to host this in a Kubernetes environment to take advantage of scalable container technology.
Built With
java
postgresql
rest
swagger
vue
vuetify
Try it out
encounter-tavern.github.io | Encounter-Tavern | A real time encounter management system for Dungeons & Dragons! | ['M Pfeuti'] | [] | ['java', 'postgresql', 'rest', 'swagger', 'vue', 'vuetify'] | 128 |
10,526 | https://devpost.com/software/climate-step | Inspiration
How do my conscious or unconscious decisions affect the climate? I want to see my impact! What if I could find new ways of contributing which I didn't even know were there? Plus, a little inspiration from the local community or competitions wouldn't hurt either.
Vision
A local community that encourages climate-friendly actions of ANY size, whether by sharing your own small steps, automatically calculating your contributions and showing you your success, or by creating friendly competitions supported by the government.
Challenges / Problems / Plan / Ideas / Features
Anything that will let the person know they are doing something whether consciously or unconsciously.
Estimate the contribution of a person.
Provide positive feedback on the contribution, and how much they helped the climate / environment.
Suggest new ways of contributing.
Show comparison of different options (bike / car / tram ...).
Display contribution of your local community, and how you, or other members, helped.
Areas of contributions
Transportation
Food
Waste
...
Examples
Estimate the transport the person is using, suggest new options people can use. Show them how much they contributed towards the local environment.
Show people that they can choose between walking, tram or bike, and what are the pros / cons of each one.
Last words
Lots of ideas are possible, lets choose one, try it out, repeat! Fail fast, follow the fun.
Expressing your idea costs you no effort; the benefit of discovering a new perspective is priceless.
I probably missed a lot of things, correct me!
Built With
anything | Climate Step | Help climate step-by-step, your presence is more than enough. | ['Vladimir Masarik'] | [] | ['anything'] | 129 |
10,526 | https://devpost.com/software/covida-9uxmhe | landing page
show data
about apk
load data
data provencial
Inspiration
The rampant virus outbreak has reduced all activities by 50% and has led people to quarantine at home. My thought for the future is that everyone should be provided with more detailed information about the COVID-19 outbreak, even at home.
What it does
Provide data and information about what the coronavirus is.
How I built it
I built it with:
1. Android Studio
2. PHP
3. JSON
4. phpMyAdmin for the database
5. Insomnia to see the JSON results
Challenges I ran into
As for challenges, I didn't face many, because I had already mastered 80% of the tools I used.
Accomplishments that I'm proud of
I don't have major achievements yet, apart from several projects I've worked on.
What I learned
android studio
java
php
dbms
json
insomnia
What's next for Covida
For now I only show data for Indonesia; in the future I will make it load data from all over the world.
Built With
android-studio
java
json
mysql
php
phpmyadmin
Try it out
github.com | Covida | My idea is that this application in the future can provide information about the Covid-19 outbreak in more detail or worldwide | ['Agus Kurniawan', 'Subhan assiddik'] | [] | ['android-studio', 'java', 'json', 'mysql', 'php', 'phpmyadmin'] | 130 |
10,526 | https://devpost.com/software/auto-mask-5uzcn8 | Air Quality Sensor
Muscle Sensor
Eye Protection from infected saliva
Inspiration
In the year 2020, due to the coronavirus pandemic and the California forest fires, I realized NO ONE is comfortable wearing a mask. There are many problems: 1. Difficulty breathing 2. Wearing it incorrectly 3. Non-hygienic mask practices 4. Injuries due to prolonged mask-wearing 5. Taking the mask off at critical moments 6. Not wearing one at all!
What it does
Auto Mask is a multi-functional gadget featuring an eye shield, touchless control, muscle sensor, and air quality sensor. Easily control your face covering with a wave by the proximity sensor at the ear. Auto Mask will also protect you when air quality is bad and will cover you just before a cough or sneeze!
How I built it
The headpiece is 3D printed and holds the Arduino hardware. The firmware is written in C++.
Challenges I ran into
Designing the headpiece to be as compact as possible while still having all those features was the most difficult part.
Built With
3dprinting
arduino
c++
Try it out
automask.wixsite.com
github.com | Auto Mask | Behold the latest technology designed for the coronavirus pandemic and California wildfires. With eye protection, touchless control, air and muscles sensors, this is Auto Mask. | ['Taliyah Huang', 'Calista Huang'] | [] | ['3dprinting', 'arduino', 'c++'] | 131 |
10,526 | https://devpost.com/software/18-climate-challenge-placeholder | our logo, a heart-shaped leaf
Placeholder
Inspiration
With most teams focusing on reducing their carbon emissions, we felt the need to flip the coin and focus instead on increasing the positive impact of our lifestyle on the planet. We deeply believe in the power of positivity to raise awareness and inspire people to take action. Therefore, we want to give humanity the opportunity to team up with plants, the carbon-consuming, oxygen-producing lifeforms that made our atmosphere habitable in the first place.
What it does
Plant-li strives to integrate a powerful plant-recommendation engine with a digital representation of people’s gardens and the ability to track the amount of CO2 absorbed by their plants from the atmosphere.
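The CO2 tracking described above can be sketched in a few lines. The per-species absorption rates below are hypothetical placeholders for illustration, not values from the real plant database:

```python
# Sketch of the garden CO2 tracker: sum estimated yearly CO2 uptake
# across the plants in a user's digital garden.
ABSORPTION_KG_PER_YEAR = {  # hypothetical kg CO2 absorbed per plant per year
    "oak": 10.0,
    "tomato": 0.2,
    "lavender": 0.1,
}

def garden_absorption(garden):
    """garden: list of (species, count) pairs -> total kg CO2 per year."""
    return sum(ABSORPTION_KG_PER_YEAR.get(species, 0.0) * count
               for species, count in garden)

# e.g. garden_absorption([("oak", 2), ("tomato", 10)]) -> 2*10.0 + 10*0.2
```

Unknown species simply contribute zero until a rate is added to the catalogue.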
How we built it
The backend is a Python server built on several common Python frameworks, and the frontend is built in ReactJS running on an Apache2 web server.
Challenges we ran into
It is very hard to come by large datasets that have all the values we needed. On top of this, estimating the CO2 absorption from the provided values proved harder than expected.
You can try it out for yourself on
plant.li
and get the source code on GitHub:
Frontend
Backend
Built With
progressive-webapp
python
react
Try it out
plant.bitter.li | plant.li | plant.li - A plant catalogue and recommendation system that tells you how much your garden reduces your carbon footprint | ['Marc Bitterli', 'xaverfunk', 'Arsalan Ahmed'] | [] | ['progressive-webapp', 'python', 'react'] | 132 |
10,526 | https://devpost.com/software/plantvita | Inspiration
THE IDEA: August 2020, a frustrating holiday in Italy as vegans leads to our idea of a 100% vegan e-commerce platform to promote and facilitate the sale of plant-based alternatives (meats, cheese, etc.) to retailers across Switzerland.
THE PROBLEM: Locally, we have found that restaurants have few or no dishes with plant-based alternatives in spite of a rapidly growing community of environmentally and sustainability-conscious consumers. Not only does this limit restaurants' client base, it indirectly excludes vegans and other individuals who are trying to eat a more plant-based diet for reasons related to health, animal welfare, the environment, or, more recently, the COVID-19 pandemic.
What it does
THE SOLUTION: We propose to fill this gap by connecting vegan food manufacturers (primarily small and medium-sized) with Swiss retailers (cafés/restaurants/supermarkets) via an e-commerce type platform, thus facilitating efficient and cost-effective transactions and distribution of plant-based alternatives to the end consumers.
How we built it
We seek your help to build a website! It will promote the benefits of a plant-based diet, e.g. through sustainability scores of food products, and contribute to the global fight against climate change and loss of biodiversity by reducing the carbon emissions resulting from meat and dairy consumption.
Challenges we ran into
Seeking to find founding team members who are vegan themselves
Language barrier (Switzerland is divided into French, Swiss-German and Italian speaking regions)
Accomplishments that we're proud of
In August 2020, we participated in PIRATE Live 2020, a virtual event bringing together start-ups, entrepreneur enthusiasts and leaders in digital innovation.
In September 2020, we conducted a food habits survey among the Swiss population (vegans and non-vegans) to measure the appetite for our start-up idea. From August onwards, we have been reaching out to and connecting with potential business partners and clients about our start-up plans.
What we learned
We're still learning many things around entrepreneurship and food-tech business eco-systems.
What's next for PlantVita
Launching our e-commerce platform; Capturing upstream value; organising events for retailers and end consumers to promote the products of the vegan food manufacturers; creating our own plant-based alternatives based on jackfruit/seitan, for example; starting a "foundation of the month" sponsorship programme.
Built With
css
html
php | PlantVita | Promoting sustainable plant-based alternatives to meat and dairy in Switzerland | ['Rahul Jha'] | [] | ['css', 'html', 'php'] | 133 |
10,526 | https://devpost.com/software/save-a-life-v2r6sg | Inspiration
The project was an inspiration from a similar project in Australia, where people register as first aiders. Once an emergency happens, an alert is sent to the nearest five people to respond, saving lives before the ambulance arrives.
What it does
It is a mobile app that allows users to send out a categorized emergency alert, which notifies the nearest qualified first aiders. The notified aider can then respond to the alert; otherwise, other first aiders will be notified if needed. The first aiders can then update their response status (responding/on-site, etc.) to keep the alert caller updated on who is coming to the emergency scene and their status. This enables vital medical care to be given at the earliest possible time, which is essential, especially when ambulance services are strained and cannot respond within the optimal timeframe.
How I built it
We worked with flutter and firebase to connect the front-end and back-end as efficiently as we could. We used Firestore to collect the data for the emergency and responder's locations while using cloud functions to search for the nearest responder to alert them of the nearby emergency.
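The nearest-responder search can be sketched as follows. This is an illustrative stand-in for the actual Cloud Functions logic; the haversine distance, the field names, and the `available` flag are all assumptions:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_responders(emergency, responders, k=5):
    """Pick the k closest available first aiders to alert first."""
    available = [r for r in responders if r["available"]]
    return sorted(available,
                  key=lambda r: haversine_km(emergency["lat"], emergency["lon"],
                                             r["lat"], r["lon"]))[:k]
```

If the first alerted responders decline, the same ranking yields the next candidates to notify.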
Challenges I ran into
We struggled with the cloud functions and node.js due to our inexperience and overestimation of their ease of use. It has also been challenging to work as a team remotely, from helping each other in code debugging to team organization so everyone could work productively.
Accomplishments that I'm proud of
We are proud of the work that we have done in this short amount of time despite the new tech stacks we had to deal with, regardless of the outcome. It was great to meet team members from different countries (Libya, UK, Iran, Germany, and Tanzania) and work together and interact online.
What I learned
Communication skills and team management are vital, especially when working with a diverse group of people, and of course, we enjoyed hacking new tech stacks on the fly.
Built with
Flutter and Firebase (FirebaseAuth, Firestore, CloudFunctions and Notifications)
Built With
andorid
backend
firebase
flutter
ios
node.js
web
Try it out
github.com
github.com
app.moqups.com | Save a life! | An app that connects first aiders to an emergency to provide help whilst waiting for an ambulance | ['Bahaeddin Sagar', 'Chris Hardaker', 'Esra Kashaninia', 'Ashery Mbilinyi', 'Caitlin Fotheringham'] | [] | ['andorid', 'backend', 'firebase', 'flutter', 'ios', 'node.js', 'web'] | 134 |
10,526 | https://devpost.com/software/home-cinema | Inspiration
So, basically the idea revolves around a cinema from home. I think that during times like these when most people can't really hang out with each other, this can be a great way for them to enjoy each other's company while having a fun binge watch.
What it does
Provides a way for people to have a digital cinema at home. Related to #6.
How I built it
Let's work on this.
Challenges I ran into
Let's see it together along with:
1) Being fast and secure
Built With
android
ios | Home Cinema (#6 BRINGING REMOTE TEAMS TOGETHER) | A mobile application (maybe cross-platform) which lets you interact with your friends/family through texting/video call/audio call while all of them watch the same piece of content. | ['Pradyuman Dixit', 'Alessandro Ruzzi'] | [] | ['android', 'ios'] | 135 |
10,526 | https://devpost.com/software/lift-the-veil-7-challenge | We are here to lift the veil!
Phone mockups
Tablet mockups
Inspiration
When driving in bad weather conditions, train drivers often cannot see signals and track-side equipment in front of their engine. They have to slow down in this situation simply because they can't see outside as usual. Now think about the passengers, who can't see the beauty outside either. Boring!
What it does
Using the provided geotagged video and data of the train route, we give drivers insight in bad weather. There will be a mobile app for their tablet. With it, they can see what's going on outside, what the nearest track sign is, how far they are from it, and where the train is on the map. The very same thing goes for a passenger who wants to see outside so they won't get bored seeing nothing, or maybe just a smart TV in the wagon for all!
How we built it
Since we needed to be as close to real time as possible and to handle limitations like internet outages along the way, we had to cache data on the mobile client as well as do some of the calculation there.
Alongside the heavy lifting in the mobile app, we designed a back-end system from which mobile clients fetch the files they need to cache.
Our Flask app is called to retrieve the new video and metadata from cloud storage (S3 in our case); it compresses the video to the proper settings, cleans invalid entries from the metadata, and saves both on the server. Nginx, on the other hand, serves these files to the clients.
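The metadata-cleaning step might look like the following sketch. The field names (`lat`, `lon`, `ts`) and the validity rules are assumptions for illustration, not the actual backend code:

```python
def clean_metadata(points):
    """Drop GPS samples with out-of-range coordinates or non-increasing
    timestamps, so clients only cache valid, ordered waypoints."""
    cleaned, last_ts = [], None
    for p in points:
        lat, lon, ts = p["lat"], p["lon"], p["ts"]
        if not (-90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0):
            continue  # invalid coordinate from the raw dataset
        if last_ts is not None and ts <= last_ts:
            continue  # out-of-order or duplicate sample
        cleaned.append(p)
        last_ts = ts
    return cleaned
```

Dropping duplicate timestamps also helps keep the video/location synchronization monotonic.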
Challenges we ran into
Different time zones: one of our developers went to bed right after sending the final demo and forgot to send the source code needed for submission! He woke up just minutes before the deadline!
Putting together the right team with the right set of skills until the last minute of the matching session.
Matching GPS location and Movies' location for synchronization.
Train speed may vary and image must stay synced.
Being real-time; therefore we need to cache on the mobile side efficiently.
There are internet connectivity outages in some places on the way.
GPS doesn't work in tunnels.
There are some invalid location data in the raw dataset.
Accomplishments we are proud of
Building a
hybrid diverse
(in terms of age, location, and sex)
team
of experienced hackers!
We had Vikram, our legendary mobile developer and a Swiss trains user; Paria, our brave backend dev, who is also a developer on a map platform with more than one million users; Mouhssen, our kind UI/UX designer; Walid, our sleepless, skillful hacker; and Mohammad Reza, our restless leader and software architect.
The first successful experience of participating in a hackathon with a hybrid team.
Making a badass video for the presentation.
What we learned
The importance of networking
Wide range of track signs used for trains and how they are actually used.
What's next for Lift the Veil
a smooth solution for constant frame and location synchronization
Built With
adobe-illustrator
adobe-premiere
adobe-xd
boto3
ffmpy
flask
gps
gpxpy
maps
nginx
python
s3
swift
xcode
Try it out
github.com
drive.google.com
github.com | Lift The Veil project (#7 challenge) | Empowering geotagged images and track data, We will give the drivers insight and passengers some entertainment in the bad weather conditions. this is a solution for the real world. | ['Vikram Kriplaney', 'Yassine Medjati', 'ABDELOUAHAD Mouhssen', 'MohammadReza Malekabbasi', 'Paria Kashani'] | [] | ['adobe-illustrator', 'adobe-premiere', 'adobe-xd', 'boto3', 'ffmpy', 'flask', 'gps', 'gpxpy', 'maps', 'nginx', 'python', 's3', 'swift', 'xcode'] | 136 |
10,526 | https://devpost.com/software/provisory-title-avoid-damage-before-the-crash | See
https://www.hackzurich.com/workshops
for workshop info.
The idea is to streamline this as a pretty standard Data Science project, exploring the individual characteristics of an earthquake dataset and its associated damages.
Built With
fast.ai
python
pytorch
sklearn
Try it out
github.com | Avoid damage before the crash | Optimize how buildings are organized to avoid damage using data science | ['Gabriel Fior'] | [] | ['fast.ai', 'python', 'pytorch', 'sklearn'] | 137 |
10,526 | https://devpost.com/software/use-and-earn-bitcoin-in-daily-life | Inspiration
In Switzerland there was a service called lamium.io which tried to solve this problem. It was really badly made: bad UX, bad UI. I want to do it better and help Bitcoin get adopted.
What it does
Users submit their bills to the platform. Another user picks a bill up on the platform and pays it in euros, CHF, USD, etc. The payer gets rewarded with the same amount of money, but in Bitcoin, plus perhaps a small fee.
How I built it
I have only a basic knowledge in Python/Flask and looking for people with better front/back/full stack and/ or UI skills.
Challenges I ran into
Technically, the big challenge is to make the UI/UX as easy as possible. On the business side, Swiss law should allow this system, but only within a small, specific legal scope.
Accomplishments that I'm proud of
What I learned
What's next for Use and earn Bitcoin in daily life | Use and earn Bitcoin in daily life | Connect people which want to use Bitcoin to pay their daily bills with people who want to pay those bills and get Bitcoin back for their service | ['Marc Steiner'] | [] | [] | 138 |
10,526 | https://devpost.com/software/food-for-thought-challenge-2 | Foodprint App
Start Screen
Menu
Environmental impact of Meals
Details on environmental impact
Process Flow
User-centered design brainstorm
Inspiration
Many people are realizing that climate change is real and that our daily choices have a direct impact on our future. Food production is responsible for approximately one-quarter of
the world's GHG emissions
so people willing to reduce their carbon footprint are becoming more open to trying a more sustainable diet. But, what food is really better for our environment? And can there be a simple tool that can help us make a more informed decision?
See also
Challenge #1, 2 and 18
What it does
By combining Migros' huge recipe database
Migusto
with
Eaternity
, the largest environmental database worldwide, we set out to build an app which helps a person visualize the environmental impact of each meal in a restaurant. In this way, he/she can make an informed decision and reduce his/her carbon footprint, with only a few taps on a phone.
How it works
Using our app, the user takes a picture of the menu with his/her phone. The app converts the photo into text and, using the recipe database as well as the Eaternity impact scores, it displays an environmental rating for each menu option. Additionally, information about the CO2 emitted, the water, forest, animal and seasonal impact is shown. This way, the user can have a better overview of the impact of that dish and make a more climate-friendly decision.
How we built it
Design process
In order to follow a user-centered design, we started by defining a persona. We gave this person a name, an age and a job. Then, we defined the Empathy Map of what this persona feels, thinks, says and does. Afterwards, we identified the pains and the journey/steps that this person takes all the way to the point of making a decision on what to order in a restaurant. Based on all this, we started thinking about how the app should function, its look and feel.
Implementation
On the technical side we use several OCR and computer vision techniques to de-noise the image and extract menu names. The menus are then passed to our Python backend hosted on Azure with an automatic CI/CD pipeline in place. In the backend, we prepare a customized query for the Elasticsearch cluster of Migusto so we can match the menus to their corresponding recipes as best as possible. Once we have the recipe, we can extract all ingredients of a given dish and pattern-match them with the products in the Eaternity database. We can then use the Eaternity API to get a wide variety of environmental indicators which we present to the user in a cross-platform mobile app.
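The recipe-matching step can be illustrated with a small stand-in: here `difflib`'s similarity ratio approximates the Levenshtein/Elasticsearch matching the backend actually uses, and the cutoff value is an arbitrary assumption:

```python
import difflib

def match_recipe(menu_item, recipe_titles, cutoff=0.5):
    """Return the recipe title most similar to an OCR'd menu line,
    or None if nothing is close enough (e.g. OCR noise)."""
    scored = [(difflib.SequenceMatcher(None, menu_item.lower(), t.lower()).ratio(), t)
              for t in recipe_titles]
    best_score, best_title = max(scored)
    return best_title if best_score >= cutoff else None
```

A fuzzy match like this tolerates the small OCR errors that an exact lookup would reject.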
Built With
android
azure
eaternity
elasticsearch
flask
itsdangerous
jinja
kotlin
markupsafe
migusto
ocr.-other-python-libraries:-requests
python
python-levenshtein-(text-distance)
tensorflow
werkzeug
Try it out
github.com | Foodprint App | This project helps a person assess the environmental footprint of different meal options when scanning a menu in a restaurant. | ['Dániel Simkó', 'Gabriela Felder', 'Susanne Keller', 'Fabian Wüthrich', 'Mina Rezkalla'] | [] | ['android', 'azure', 'eaternity', 'elasticsearch', 'flask', 'itsdangerous', 'jinja', 'kotlin', 'markupsafe', 'migusto', 'ocr.-other-python-libraries:-requests', 'python', 'python-levenshtein-(text-distance)', 'tensorflow', 'werkzeug'] | 139 |
10,526 | https://devpost.com/software/camera-based-nutrition-and-diet-app | Discovering new recipes matching your diet and your preferences
FoodAIe! Dieting has never been so easy
User profiles for enhanced user experience and engagement
Getting the nutrition values of a product. Detecting product using computer vision
Inspiration
According to healthdata.org, unhealthy eating accounts for approximately 680'000 deaths in the US alone and is a major risk factor for heart disease, diabetes and high blood pressure. But what exactly is unhealthy eating? The three major unhealthy eating habits are consuming highly processed food such as fast food, consuming too much sugar and sodium, and having little to no diversity in food intake. Most people have no idea how many calories they are eating in a whole day, let alone how many calories would be enough to maintain a healthy life. Second, for most it is too tedious and time-consuming to come up with healthy recipes. FoodAIe was born to tackle these problems, helping our fellows eat healthily in a simple, fun and entertaining way.
What it does
Our app has two main functionalities: First, making use of artificial intelligence and computer vision algorithms, FoodAIe lets you track the calorie intake and nutritional value of your meals by simply taking a picture of your food.
Second, FoodAIe makes use of its extensive knowledge, extracted from several APIs and databases, to give you recipe recommendations based on the food you have at home, what you have already eaten that day, as well as food restrictions, special diets and the calorie goal of the day.
In order to increase user experience and engagement, a networking platform is integrated where you can track how your friends are doing with their diet and you can unlock various achievements based on how healthy your eating is.
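The recommendation logic described above can be sketched as a simple filter over recipes. The recipe fields and the ranking rule here are illustrative assumptions, not the app's actual algorithm:

```python
def recommend(recipes, pantry, calories_left):
    """Recommend recipes whose ingredients are all in the pantry and
    that fit in the remaining calorie budget, lowest calories first."""
    pantry = set(pantry)
    fits = [r for r in recipes
            if set(r["ingredients"]) <= pantry and r["kcal"] <= calories_left]
    return sorted(fits, key=lambda r: r["kcal"])
```

Diet restrictions would just add further predicates to the filter.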
How we built it
The app was built using React-Native for the frontend framework and a Flask API for the backend.
To provide the users with large amounts of information and of AI algorithms, we have profited from the food recognition API generously provided by Bite.ai (
https://bite.ai/
).
Furthermore, we used Bite.ai and Migros APIs for Product and Recipe recommendations.
Challenges we ran into
None of our team had much experience with using React Native for mobile app development, which meant it took us a lot of time to get the frontend done. Furthermore, being a hybrid team (combining online and offline participants) was a challenging, though exciting, experience.
Accomplishments that we are proud of
We are proud that we managed to develop a first working prototype of the app in such a short amount of time.
What we learned
We learned how to coordinate and work in a group that is physically not in the same place. What's more, we learned how to build apps with React Native, which was not the best idea to try during a hackathon, but it worked.
What's next for Camera-based nutrition and diet app
Despite great progress, the scope of FoodAIe can easily be extended. The AI algorithms could be further refined in order to provide the user with even larger amounts of information about their food products and their nutrients.
Built With
computer-vision
flask
mobile-ui
python
react-native
Try it out
github.com
github.com | FoodAIe | Track your diet by simply submitting a picture of what you eat, get delicious and healthy recipe recommondations and see how your friends are doing | ['Samuel Beck', 'Angel Villar-Corrales', 'Digdarshan Kunwar'] | [] | ['computer-vision', 'flask', 'mobile-ui', 'python', 'react-native'] | 140 |
10,526 | https://devpost.com/software/fitnes | Inspiration
App stores are flooded with fitness and exercise apps. They charge money, people buy them, don't really stick to them and waste money, what if you got money back for actually working out?
What it does
Verifies exercise movements and gives back a portion of the subscription fee for doing that exercise.
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for FitneS$ | FitneS$ | App stores are flooded with fitness and exercise apps. They charge money, people buy them, don't really stick to them and waste money, what if you got money back for actually working out?q | ['Abhinav Khare'] | [] | [] | 141 |
10,526 | https://devpost.com/software/pay-by-fingerprint-jagw9y | Inspiration
I got inspiration from the many attempts to improve customer experience by making the payment process easier. Paying contactless with a card, phone, or smartwatch makes our lives easier by saving us precious time which we can use for more important activities. But what about having a bad day, when you forget to charge your phone and your watch, and you cannot find your wallet easily? What about using something unique to yourself, which you will always carry with you, such as your fingerprint?
What it does
This system would match each participant's fingerprint to their credit card. The businesses that enroll in the system would receive a device to scan fingerprints, which would be used to process payments. This would make the process more secure, and even faster than the existing payment methods.
What's next for Pay by Fingerprint
Currently looking for a team to implement this crazy idea. We would need a hardware wizard, otherwise we will use a phone for fingerprint scanning, therefore, we need android/ios developers.
Built With
java
spring | Pay by Fingerprint | A system that matches the fingerprints of its users with their credit cards. The businesses enrolled in the system, will receive a fingerprinter reader that will play the role of a payment processesor | ['Laurentiu Raducu'] | [] | ['java', 'spring'] | 142 |
10,526 | https://devpost.com/software/project-ipl0xj | Inspiration
We were inspired by our local shelter "Na Paluchu". Based on that, we decided to make our own web app that will help reduce homelessness among animals.
What it does
You can use our web app to register as a shelter or as an adopter. Being registered as a shelter allows you to post an offer for an active adoption so that other people can contact the shelter themselves. When logged in as an adopter, you can browse the adoption tab and search for the offers that suit you the most, depending on financial abilities etc. We provide 'preferences' that any adopter can update on their own, so the offers get better and better.
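The preference-based matching could work along these lines. This is a hypothetical sketch; the offer fields and the scoring scheme are not the actual app logic:

```python
def rank_offers(offers, prefs):
    """Score each adoption offer by how many adopter preferences it
    satisfies, and return offers from best to worst match."""
    def score(offer):
        s = 0
        if offer["monthly_cost"] <= prefs["max_monthly_cost"]:
            s += 1  # within the adopter's financial abilities
        if offer["size"] == prefs["size"]:
            s += 1  # preferred dog size
        if offer["good_with_kids"] and prefs["has_kids"]:
            s += 1  # household compatibility
        return s
    return sorted(offers, key=score, reverse=True)
```

As the adopter updates their preferences, re-ranking immediately reorders the visible offers.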
How we built it
First, we made a Django skeleton of our project. We set up the Django admin and Django databases as well. Then we wrote the front-end of the app in HTML, CSS and Bootstrap 4.
Challenges we ran into
Lots of problems with Django databases, as well as front-end difficulties.
Accomplishments that we're proud of
Making a fully responsive website with user databases and authentication.
What we learned
Solving group-based problems involving many technologies
What's next for Project
Who knows what will bring the future? We are looking forward to other hackathons :)
Built With
bootstrap4
css3
django
html5
python
Try it out
bitbucket.org | Adogt(me) | Helping and protecting those who can't protect themselves - making a preference-adopt web app for dogs in shelters. | ['Rafał W', 'Mateusz Mianowany'] | [] | ['bootstrap4', 'css3', 'django', 'html5', 'python'] | 143 |
10,526 | https://devpost.com/software/climate-change-project-rfla2j | Architecture
Inspiration
Let’s make Uber-eats greener.
I have a problem. For six months, I’ve been sitting, working at home, ordering Uber Eats for every meal. I’ve been having the same pad thai for lunch every single day.
Every day, I didn't realize the impact my restaurant choices had on the climate. What if I could not only measure that impact but also offset the produced carbon dioxide emissions for a minimal and fair fee?
Introducing PickUpr. Now, I can not only enjoy a delicious meal but also help save the planet! Every time I walk or bike to pick up my order, I reduce my carbon emissions. I even get a discount on my Zurich health insurance or Zurich LiveWell partners for doing the exercise!
If I do get delivery, PickUpr offers me the ability to offset my emissions with Climeworks. All I have to do is pick my restaurant of choice. It’s a win-win!
Soon, I’ll not only be able to get recommendations for the tastiest dish but also tell how sustainable it is based on the carefully picked ingredients.
Here at Pickupr, we have a global team of developers spread across three continents, ready to take over the world for the better.
So what are you waiting for? Register today, invest in you now, save tomorrow! Be part of the great Climate Change community
Problem Statement
The carbon footprint from food delivery is still high and increasing.
People stuck at home are not getting enough exercise
Difficult to offset carbon footprint, people are not aware of the possibility
What it does
PickUpr gives you the best suggestions about your restaurant choice and the best way to get your order to you, taking into account the most planet-friendly options. The app also gives you the right knowledge to understand how you can be very useful in the challenge of climate change!
How I built it
We built it using a combination of APIs to get the right information in real time about the user's location, nearby open restaurants, up-to-date delivery routes, and the carbon dioxide emissions caused by delivery. The backend in Node.js combines all the (proprietary) APIs in an easy and secure way. A React frontend uses this backend to visualize it for the user and maximize the user experience. Both applications are deployed on AWS.
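A back-of-the-envelope sketch of the emission and offset computation. The scooter emission factor and offset price below are invented placeholders, not values from the myclimate API:

```python
# Hypothetical figures, for illustration only.
SCOOTER_G_CO2_PER_KM = 70   # assumed delivery-scooter emission factor
OFFSET_CHF_PER_KG = 1.0     # assumed carbon-offset price

def order_impact(distance_km, pickup):
    """Return (grams of CO2 emitted, offset fee in CHF) for an order.
    Walking or biking to pick up the order emits nothing."""
    grams = 0 if pickup else distance_km * SCOOTER_G_CO2_PER_KM
    return grams, round(grams / 1000 * OFFSET_CHF_PER_KG, 2)
```

Showing the two numbers side by side is what nudges the user toward picking up their order.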
Challenges I ran into
Staying connected to teammates all over the world in multiple time zones. Joining all the different APIs in a simple and user-friendly app.
Accomplishments that I'm proud of
Work with 3 people on 3 different continents.
What I learned
Setting up a web server using modern JavaScript tooling (Node.js and React). Exploring the Google Places API.
What's next for PickUpr
Social media post based on the inspiration
Reduction in insurance premium by the carbon that would have otherwise been emitted
Time period in which other people can choose to order as well (group buying)
Also point out how much the restaurant is saving.
Story beginning to pitch before call to action and future plans (order and see environmental impact of orders)
Tracking people by getting them to take a geotagged photo when they arrive at the restaurant for their pick up
Google Voice assistant not as the interface (as well as the visual one)
Team
Willian Chan
Miel Verkerken
Leandro Benetton
Built With
amazon-web-services
google-maps
myclimate-api
node.js
react
Try it out
github.com
github.com | PickUpr | Order food online is convenient and should be safe for you and the planet | ['Miel Verkerken', 'William Chan', 'Leandro Benetton', 'Ritwik Agarwal'] | [] | ['amazon-web-services', 'google-maps', 'myclimate-api', 'node.js', 'react'] | 144 |
10,528 | https://devpost.com/software/flexr-7z2mhs | Inspiration
Based on implementing a token based on the same economics and incentives as the
Ampleforth token
What it does
It includes the token, which can be transferred and exchanged on swapr; users can provide liquidity on swapr, trigger the daily rebase, and earn rewards by providing liquidity on swapr and staking their liquidity tokens in the Geyser. A private key is required to submit data to the Oracle.
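The rebase mechanics can be sketched outside Clarity: balances are stored as fixed shares ("gons") and only the total supply moves, so every holder's balance scales proportionally with the daily rebase. This Python model is a simplification of the Ampleforth design for illustration, not the contract code:

```python
class RebaseToken:
    def __init__(self, supply, holders):
        # Make the gon pool an exact multiple of the initial supply so
        # initial balances convert without rounding.
        self.total_gons = supply * 2 ** 64
        self.supply = supply
        # Each holder owns a fixed number of gons (shares).
        self.gons = {h: amt * self.total_gons // supply
                     for h, amt in holders.items()}

    def balance(self, holder):
        # Balance is the holder's share of the current elastic supply.
        return self.gons[holder] * self.supply // self.total_gons

    def rebase(self, price, target=1.0):
        """Daily rebase: expand or contract supply toward the price target."""
        self.supply = int(self.supply * price / target)
```

Because the gon amounts never change, a rebase alters every balance by the same factor while ownership fractions stay constant.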
How I built it
hmm, 1 line at a time, while listening in to #futureproof... Explosive combination if you ask me!
Challenges I ran into
See
Gotchas section
Accomplishments that I'm proud of
Finishing with less than 2 hours to spare? Glad for the extended deadline.
What I learned
You don't need
ft-token
to implement a token in Clarity, and the economics and math are fascinating! The gotchas section also includes a few suggestions on how to improve the Clarity JS SDK, or Clarity itself.
What's next for flexr
Mainnet! With lots more testing...
Built With
clarity
node.js
Try it out
github.com | flexr | A Stacks v2 token with an elastic supply, works with swapr, incentivizes liquidity providers with token rewards, rebases daily based on price Oracle. | ['Pascal Belloncle'] | ['Grande Prize - $2,000', 'BONUS for the "Smartest" contracts out there!'] | ['clarity', 'node.js'] | 0 |
10,528 | https://devpost.com/software/uint256-lib | icon
Inspiration
Many blockchain-related services use uint256 math, whereas Clarity only supports uints up to 128 bits long. Initially, my idea was to build a library for elliptic-curve operations on finite fields that could be used by other algorithms, but that appeared to be pointless without uint256, as large field parameters are used for security reasons.
What it does
The library implements common operations on numbers. There are two approaches of its usage represented by
uint256-ecc-lib-callable.clar
and
uint256-ecc-lib.clar
contracts.
The code is also covered by automated tests.
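The core idea, representing one 256-bit number as four limbs like the `(i0 i1 i2 i3)` tuple, can be sketched in Python. The limb order (least significant first) and the 64-bit limb width are assumptions for illustration; the library itself implements this within Clarity's 128-bit uint limit:

```python
MASK64 = (1 << 64) - 1

def to_limbs(n):
    """Split a 256-bit integer into four 64-bit limbs, least significant first."""
    return [(n >> (64 * i)) & MASK64 for i in range(4)]

def from_limbs(limbs):
    """Reassemble the integer from its limbs."""
    return sum(limb << (64 * i) for i, limb in enumerate(limbs))

def add256(a, b):
    """Limb-wise addition modulo 2**256 with explicit carry propagation."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & MASK64)  # keep the low 64 bits in this limb
        carry = s >> 64         # carry the overflow into the next limb
    return out                  # final carry is dropped: wrap at 2**256
```

Using 64-bit limbs keeps every intermediate sum and product well inside a 128-bit uint, which is why this decomposition works in Clarity.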
How I built it
Docs
and
vs extension
were very helpful.
Challenges I ran into
Working with
Clarity
I felt the need for some syntactic sugar, standards and improvements that would make code more readable and the deployment process more comfortable:
Type declaration in order to clearly see the same types. For instance:
(type point (tuple (i0 u0) (i1 u0) (i2 u0) (i3 u0)))
Imports to avoid copying common code. For instance:
(load "/path/to/file")
Clearer logs. Currently only part of the call stack (the top-level function) is displayed.
Fix or document the recursive-call error generated by composition of the same function. To reproduce the error, replace
ecc-add
in
contracts/uint256-ecc-lib
by this one:
(define-public (ecc-add (p1 (tuple (x (tuple (i0 uint) (i1 uint) (i2 uint) (i3 uint))) (y (tuple (i0 uint) (i1 uint) (i2 uint) (i3 uint)))))
(p2 (tuple (x (tuple (i0 uint) (i1 uint) (i2 uint) (i3 uint))) (y (tuple (i0 uint) (i1 uint) (i2 uint) (i3 uint))))))
(if (is-zero-point p1)
(ok p2)
(if (is-zero-point p2)
(ok p1)
(if (and (uint256-is-eq (get x p1) (get x p2)) (uint256-is-eq (get y p1) (get y p2)))
(if (uint256-is-zero (get y p1))
(ok (tuple (x uint256-zero) (y uint256-zero)))
(let
((m (uint256-div
(uint256-mul-mod-short (uint256-mul-mod (get x p1) (get x p1) zk-p) u3 zk-p)
(uint256-mul-mod-short (get y p1) u2 zk-p))))
(let ((m1 (uint256-mul-mod m m zk-p)) (m2 (uint256-mul-mod-short (get x p1) u2 zk-p)))
(let ((x (uint256-sub
m1
m2)))
(ok (tuple (x x) (y (uint256-sub (uint256-mul-mod m (uint256-sub (get x p1) x) zk-p) (get y p1))))))))) ;; << CHANGES HERE
(if (uint256-is-eq (get x p1) (get x p2))
(ok (tuple (x uint256-zero) (y uint256-zero)))
(let ((mt (uint256-sub (get x p2) (get x p1))))
(let ((m (uint256-div (uint256-sub (get y p2) (get y p1)) mt)))
(let ((xt (uint256-sub (uint256-mul-mod m m zk-p) (get x p1))))
(let ((x (uint256-sub xt (get x p2))))
(let ((yt (uint256-mul-mod m (uint256-sub (get x p1) x) zk-p)))
(ok (tuple (x x) (y (uint256-sub yt (get y p1)))))))))))
))))
A code style standard. It's not clear how to indent and wrap code to make it readable for others. For instance, Golang has a utility,
fmt
that solves the issue by formatting code and teaching everybody to write standardized code.
Better docs and tutorials.
Developer-friendly deployment from
UI
. There is no way to load or save contract code in the sandbox (editing the samples is the only option).
Expanded deployment failure reasons. The current message is not really helpful:
This transaction did not succeed because the transaction was aborted during its execution.
Accomplishments that I'm proud of
I can read Lisp-like language and my eyes don't bleed anymore because of thousands of brackets. It was a funny experience.
What I learned
Lisp paradigm, interpreted approach for smart contracts, blockstack ecosystem related to smart contracts deployment, @blockstack/clarity sdk (BTW, awesome stuff, no pain).
What's next for Uint256-lib
It will stay open-source and will be able to serve the community.
It also needs more tests.
More functions may be implemented. For instance, according to this
API
.
As for Clarity in general, I am interested in creating some kind of tutorial with code examples, as it was slightly painful to navigate the docs and there are only a few contract examples. I like the Golang
approach
where the theory is illustrated by the code snippet that can be run immediately. Actually,
clarity.tools
can be a good example of the right part of the educational platform where the left side is extended by an explanation. Let me know if you are interested in it.
Built With
clarity
typescript
Try it out
github.com | Uint256-lib | A library for uint256 on-chain operations | ['Anastasiia Kondaurova'] | ['Second Place - $1,500', 'BONUS for the "Smartest" contracts out there!'] | ['clarity', 'typescript'] | 1 |
10,528 | https://devpost.com/software/stackstarter-kj9508 | Stackstarter
Stackstarter is a Clarity smart contract for crowdfunding on the STX blockchain. When a campaign is created, the fundraiser sets a goal in STX and a duration in blocks. In order for a campaign to be successfully funded, the campaign needs to receive investments matching or exceeding the funding goal before the target block-height is reached. The fundraiser can then collect the funds.
Investors can request a refund at any point while the campaign is still active. This ensures that the investors stay in control of their STX until the funding goal is reached. If the campaign is unsuccessful, investors can likewise request a refund.
In order to allow for nuanced investments and crowdfunding rewards, campaigns consist of tiers that the investors will choose from. Each tier has its own name, short description, and minimum cost.
Some campaign information is stored on-chain. It is conceivable that this information could be moved to Gaia end-points controlled by the fundraisers at some point.
A basic client implementation can be found in
src/stackstarterclient.js
.
This is a submission for the Clarity 2.0 hackathon.
Features
The smart contract implements all features you would expect for online crowdfunding:
Users can start campaigns and add a name, short description, a link, funding goal, and duration.
The campaign owner can update the short description and link.
The campaign owner creates tiers, each having a name, short description, and cost.
Investors choose one or more tiers to invest. They are required to pay at least the tier cost in order for the investment to be successful.
Investors can take their investment out as long as the campaign has not reached its funding goal.
The campaign owner can collect the funds once the campaign is successfully funded.
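The rules above can be captured in a minimal Python model (illustrative only, with names of my own invention; the real logic lives in the Clarity contract, and tiers are omitted here for brevity):

```python
# Minimal Python model of the crowdfunding rules described above.
class Campaign:
    def __init__(self, owner, goal, end_block):
        self.owner, self.goal, self.end_block = owner, goal, end_block
        self.investments = {}   # investor -> total STX invested
        self.collected = False

    def total(self):
        return sum(self.investments.values())

    def funded(self):
        return self.total() >= self.goal

    def invest(self, investor, amount, block):
        # active = not expired and the goal has not yet been reached
        assert block < self.end_block and not self.funded(), "campaign not active"
        self.investments[investor] = self.investments.get(investor, 0) + amount

    def refund(self, investor):
        # allowed while active, or after expiry if the goal was never reached
        assert not self.funded(), "goal reached: funds are locked for the owner"
        return self.investments.pop(investor)

    def collect(self, caller):
        assert caller == self.owner and self.funded() and not self.collected
        self.collected = True
        return self.total()
```

This mirrors the key invariant: investors stay in control of their STX (refundable) until the funding goal is reached, after which only the owner can collect.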
Read-only functions
get-campaign-id-nonce
Returns the current campaign ID nonce of the smart contract.
get-total-campaigns-funded
Returns the number of campaigns that were successfully funded.
get-total-investments
Returns the total number of investments. This number may go up and down a bit depending on refunds.
get-total-investment-value
Returns the total amount of STX invested. This number may go up and down a bit depending on refunds.
get-campaign(campaign-id uint)
Returns the campaign data for the specified campaign.
get-campaign-information(campaign-id uint)
Returns the campaign information for the specified campaign (short description and link).
get-campaign-totals(campaign-id uint)
Returns the campaign totals for the specified campaign (total investors and total investment).
get-campaign-status(campaign-id uint)
Returns the current campaign status (whether the goal was reached, the block height it was reached if successful, and whether the campaign owner has collected the funds).
get-is-active-campaign(campaign-id uint)
Returns whether the campaign is currently active (not expired and the funding goal has not yet been reached).
get-campaign-tier-nonce(campaign-id uint)
Returns the current tier ID nonce for the specified campaign.
get-campaign-tier(campaign-id uint, tier-id uint)
Returns the tier information for the specified tier.
get-campaign-tier-totals(campaign-id uint, tier-id uint)
Returns the campaign tier totals for the specified tier (total investors and total investment).
get-campaign-tier-investment-amount(campaign-id uint, tier-id uint, investor principal)
Returns the invested amount of an investor for the specified tier, defaults to
u0
if the investor has not invested.
Public functions
create-campaign (name buff, description buff, link buff, goal uint, duration uint)
Creates a new campaign with the provided information. Returns the campaign ID if successful.
update-campaign-information(campaign-id uint, description buff, link buff)
Updates campaign information for the specified campaign. Only the owner can do this. Returns
u1
if successful.
add-tier (campaign-id uint, name buff, description buff, cost uint)
Adds an investment tier to the specified campaign. Only the owner can do this. Returns the tier ID if successful.
invest(campaign-id uint, tier-id uint, amount uint)
Invest an amount in the specified tier. This will transfer STX from the
tx-sender
to the contract. An investment is only successful if the campaign is still active and the investment amount is equal to or larger than the tier cost.
refund(campaign-id uint, tier-id uint)
Request a refund for the specified tier. This will transfer STX from the contract to the
tx-sender
. A refund will only be processed if the campaign is still active or expired (unsuccessful), and if the
tx-sender
has indeed invested in the specified tier.
collect(campaign-id uint)
Collect the raised funds. This will transfer STX from the contract to the
tx-sender
. The campaign owner is the only one that can do this, and only if the campaign was successfully funded.
Testing
All tests are run on mocknet to get around limitations of the current Clarity JS SDK (STX transfers and advancing blocks). A
mocknet.toml
file is provided in the
test
folder to make things easier. Download and build
stacks-blockchain
, then run a mocknet node using the provided file:
cargo testnet start --config='/path/to/test/mocknet.toml'
There are two sets of tests: the "basic tests" cover specific function calls, and the "scenarios" simulate two campaigns with multiple tiers and investors (one that is successful and one that is not).
To run all tests:
npm test
To run a particular test, specify the file:
npm test test/0-stackstarter.js
One can query the contract balance
using the local node
. Be sure to restart the mocknet node if you want to rerun the tests.
Since the tests rely on mocknet, they have to wait for blocks to be mined. It means that tests are slow and time-sensitive. Sit back and relax.
Built With
clarity
javascript
Try it out
github.com | Stackstarter | A Clarity smart contract for crowdfunding on the STX blockchain. | ['Marvin J'] | ['Third Place - $1,000', 'BONUS for the "Smartest" contracts out there!'] | ['clarity', 'javascript'] | 2 |
10,528 | https://devpost.com/software/lightning-swaps | Risidio STX Lightning Swaps 2020
Fraud Proof Swaps
Enable Delegated Stacking via swaps of Lightning BTC to STX tokens using Lightning Service Authentication Tokens (LSAT/402) protocol. Our goals revolve around providing services to help small STX token holders participate in proof of transfer. Our project
rStack
is a fully decentralised application using Gaia for data storage.
The Clarity contract is part of a more ambitious project. Its purpose is to transfer STX tokens and register btc reward addresses, linked transparently to the STX holder address. One component of the solution enables fraud proof swaps of btc for stx over the Lightning network using the LSAT protocol.
The clarity contract here provides the transfer function of the stx tokens that have been purchased independently via a lightning transaction and also a register for the stackers reward address. The registration happens in conjunction with payment for STX tokens via LSAT generates and in so doing a macaroon is registered that proves the service level agreement by locking in some key information. Combining the macaroon with the Lightning payment preimage provides a proof of payment and locks in some meta data. Using this technique the user can authenticate their bitcoin address while purchasing stx tokens and then register this information with the Clarity contract.
In this application, rStack combines LSAT and Clarity to provide a fraud-proof swap. Here, a buyer can prove the transaction of Lightning network using the preimage and the Stacks 2.0 Blockchain. If the buyer doesn’t receive the STX, the evidence would be transparent e.g. via the Stacks 2.0 explorer. The user record of the payment and transaction details are stored via Gaia and available in their rStack transactions history page.
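As background on the proof-of-payment idea used here: a Lightning invoice commits to sha256(preimage), and revealing the preimage on settlement acts as the receipt. A minimal sketch of that commitment (illustrative names, not the rStack code):

```python
import hashlib
import secrets

# Sketch of Lightning proof-of-payment: the invoice publishes
# sha256(preimage); whoever holds the preimage can prove they paid.
def make_invoice():
    preimage = secrets.token_bytes(32)                 # revealed only on settlement
    payment_hash = hashlib.sha256(preimage).digest()   # published in the invoice
    return preimage, payment_hash

def proves_payment(preimage, payment_hash):
    return hashlib.sha256(preimage).digest() == payment_hash
```

In the LSAT scheme this proof is combined with a macaroon that locks in the service-level metadata, which is what lets rStack tie the btc payment to the registered STX reward address.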
Inspiration
We're inspired by the potential of the technologies and are excited to be involved in opening up and bringing new people into it.
Challenges
Working as early adopters of new technologies is always a challenge. We are still a long way from working out all the details here but feel this is a valuable project and look forward to making real contributions.
What we learned
We have learned a huge amount about Proof of Transfer and the Stacks 2.0 blockchain. The biggest win is the team of very bright people who are now starting to run with the ideas in the Blockstack and Blockchain ecosystems.
Check out our web application
rStack
.
Built With
bitcoin
blockstack
docker
java
javascript
lightning
vue
Try it out
stax.risidio.com | rStack | Fraud proof swaps of lightning BTC for STX tokens using the LSAT protocol | ['Mike Cohen', 'Camiel van der Beek', 'himanshu nair', 'Valentin Abrutin'] | ['Runners Up - $500', 'BONUS for the "Smartest" contracts out there!'] | ['bitcoin', 'blockstack', 'docker', 'java', 'javascript', 'lightning', 'vue'] | 3 |
10,528 | https://devpost.com/software/clarity-composable-token | Composable Token Contract
Motivation
When we create dapps, and especially games, on the blockchain, we become familiar with non-fungible tokens like pets, heroes, and monsters. Each entity in a game is designed as an NFT token.
Everything works well until we need to create an item system.
Items are also NFT tokens, and will be attached to heroes in games.
The nightmare begins when we try to attach many items to a hero, like swords, helmets, gloves, boosts, and so on; each item may also have many other items attached to it, and of course, those can be any NFT or FT tokens, too.
So if we use standalone NFTs, the attachment and detachment process and transferring items become painful as the item system grows large and complex, with so many transactions required. Game performance may suffer.
That is the motivation for creating a new type of token - a
composable-token
, where any NFT token can be attached to or detached from any other NFT token at any time. Thus we can easily and automatically transfer a whole hero along with all attached items in one transaction, or detach any items before sending it.
Specification
Each Composable Token is a NFT token.
Each token can have only one parent, and can have many children.
When attaching token A to token B:
token A != token B
token B's parent must be different from token A, to avoid recursive attachment.
token A and B must both belong to the same owner.
update the parent of token A and the children of token B.
When detaching token A from token B:
token A must be attached to token B.
update the parent of token A and the children of token B.
When transferring token A to a new owner:
all attached tokens will be transferred, too.
update the token count for the sender and the recipient.
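The rules above can be modeled in a few lines of Python (illustrative only; names are assumptions, the actual implementation is the Clarity contract, and, like the spec, the cycle check only inspects the immediate parent):

```python
# Minimal Python model of the composable-token attach/detach/transfer rules.
class ComposableTokens:
    def __init__(self):
        self.owner = {}     # token-id -> principal
        self.parent = {}    # token-id -> parent token-id (one parent max)

    def mint(self, token, owner):
        self.owner[token] = owner

    def attach(self, a, b):
        assert a != b, "cannot attach a token to itself"
        assert self.parent.get(b) != a, "parent of B must differ from A"
        assert self.owner[a] == self.owner[b], "both tokens must share an owner"
        self.parent[a] = b

    def detach(self, a, b):
        assert self.parent.get(a) == b, "token A is not attached to token B"
        del self.parent[a]

    def transfer(self, token, new_owner):
        # the whole subtree moves with its root in one call
        self.owner[token] = new_owner
        for child, p in self.parent.items():
            if p == token:
                self.transfer(child, new_owner)
```

Transferring the root recursively re-owns every attached token, which is the "one transaction for the whole hero" property described above.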
Implementation
Check out
contract
for more information.
Test
Check out
test-cases
for detailed test cases of the composable token.
Check out
test-client
for more information about the client of the composable token, which wraps all contract functions so we can easily query or invoke transactions.
Check out
test file
for the test implementation.
License
Built With
clarity
typescript
Try it out
github.com | clarity-composable-token | design a composable nft token which every nft token can be attached to or detached from any other nft token. Thus we can transfer whole token with all attachment in one transaction only. | ['Kevin Do'] | ['Runners Up - $500', 'BONUS for the "Smartest" contracts out there!'] | ['clarity', 'typescript'] | 4 |
10,528 | https://devpost.com/software/clarity-marketplace-bo50cm | Marketplace
First lines of marketplace contract
Clarity Marketplace
Clarity smart contract for a marketplace of tradable assets (some kind of basic NFTs)
(Inspired by the marketplace of monstereos.io)
This is a submission to the CLARITY HACK of the Stacks 2.0 Hackathon Series.
Smart Contracts
The main contract
market.clar
describes the marketplaces.
tradables.clar
defines the interface (
trait
) of tradable assets. There are two examples of tradable assets:
monsters.clar
: a simple Tamagotchi-like monster that needs to be fed every 6 blocks
constant-tradables.clar
: a dumb asset that always belongs to the contract; transferring it won't change ownership.
The marketplace supports all assets that are NFTs with uint keys.
Public functions of the marketplace
The marketplace provides two functions for a bidder for a token:
bid
: allows publishing a price for a tradable asset that is defined by a contract implementing the
.market.tradables-trait
and by an unsigned integer.
pay
: after the bid was accepted by the current owner, the bidder can pay for the asset using this function and the asset will be transferred to the bidder.
The owner of a tradable asset can choose to accept a bid from all current bids:
accept
: allows the owner to accept a bid for the tradable asset. The asset will be transferred to the marketplace until the bidder has paid the price.
cancel
: allows cancelling an accepted bid that has not yet been paid, e.g. because the bidder disappeared
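Putting the four calls together, here is a minimal Python model of the escrow flow (illustrative only; names are my own, and STX/NFT transfers are reduced to dictionary updates):

```python
# Minimal Python model of the bid -> accept -> pay escrow flow described above.
class Marketplace:
    def __init__(self):
        self.bids = {}      # asset -> (bidder, price)
        self.escrow = {}    # asset -> seller, while awaiting payment
        self.owner = {}     # asset -> current owner

    def bid(self, asset, bidder, price):
        self.bids[asset] = (bidder, price)

    def accept(self, asset, seller):
        assert self.owner[asset] == seller, "only the owner can accept a bid"
        self.escrow[asset] = seller          # asset held by the marketplace
        self.owner[asset] = "marketplace"

    def pay(self, asset, payer):
        bidder, price = self.bids.pop(asset)
        assert payer == bidder, "only the accepted bidder can pay"
        seller = self.escrow.pop(asset)
        self.owner[asset] = bidder           # STX price would go to the seller here
        return seller, price

    def cancel(self, asset, seller):
        assert self.escrow.get(asset) == seller, "no accepted bid to cancel"
        self.owner[asset] = self.escrow.pop(asset)  # return asset to the seller
```

The contract-as-escrow step is what lets the on-chain version benefit from Stacks post conditions on both the STX and NFT transfers.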
Used features of Clarity
The marketplace contract uses
a map to store offers.
the contract as an escrow for transferring assets.
traits
to handle any kind of tradables assets. It makes use of
contract-of
to store the contract of the assets.
stx transfers and nft transfers within contract calls that benefit from the post condition feature of the stacks chain.
Notes
Testing using the clarity-js-sdk is currently not possible as the latest SDK (0.2.0) does not support
contract-of
function. (Workaround: copy the most recent binary
clarity-cli
to the
node_modules/@blockstack/clarity-native-bin/.native-bin
folder of this project)
Deploying to testnet works (run
npx ts-node scripts/market.ts
), however, due to issue
#92
it is not possible to call functions with traits as arguments.
Simplified Marketplace
As a consequence, a simplified marketplace is defined in folder
contract-simple
. The marketplace can only
trade monsters (defined in
monsters.clar
). The functions are the same, the difference is that traits are not used.
To test the simple contract, run
yarn mocha test/market-simple.ts
.
While it is possible to initialize the VM with accounts holding set amounts of STX, the current SDK does not support this feature. Therefore, the test does not include successful calls to the
pay
function.
Deployment
There is a script in folder
scripts
called
market.ts
that can be used to deploy the contracts to the testnet. Replace the location of the keychain.json file at the top of the script. Then run
npx ts-node scripts/market.ts
Future Work
After the issues mentioned in Notes have been solved the following tasks should be done:
build a web ui for making and accepting bids
run a server that maintains the list of current bids for assets. The list is not maintained on-chain because the list can be easily recreated from the history of relevant transactions.
Built With
clarity
typescript
Try it out
github.com | clarity-marketplace | Clarity smart contract of a marketplace | ['Friedger Müffke'] | ['Runners Up - $500', 'BONUS for the "Smartest" contracts out there!'] | ['clarity', 'typescript'] | 5 |
10,528 | https://devpost.com/software/stacks-loans | Login to Stacks Loans
Stack Loans dashboard, showing the wallet and loans on the Blockstack account
Deposit the amount of Stacks tokens you want to stake
You can select the lock up period of your STX tokens
When the transaction is ready, the user sees the smart contract ready to be executed
The user gets the confirmation on the transaction being executed by the Clarity smart contract
The transaction details of the Clarity smart contract
Inspiration
Working on leveraging the Blockstack platform to build a DeFi solution by implementing loans based on staking STX.
What it does
A simple DApp that authenticates with Blockstack username and creates a Blockstack profile where we store all the loans that a user solicits.
We use a Clarity smart contract to broadcast the loan, but keep the amounts private so the user is not exposed on the transaction.
How we built it
This is built with React, and we use the Blockstack APIs to enable the interaction between the user and the loan provider.
Challenges we ran into
Getting to learn and leverage GAIA storage; also, the Clarity platform had a lot of issues with testnet availability for development purposes. The Blockstack PBC team was really helpful in the Discord channels and helped us through our noob questions.
Accomplishments that we're proud of
Getting to understand how decentralized storage works and how GAIA can help DApps provide a good level of privacy for end users.
Clarity is still quite new, but we had all the tools needed in order to deploy a simple contract for our loans DApp.
What we learned
The big potential of DeFi and the untapped potential of using Blockstack as the building platform for this type of solution.
What's next for Stacks Loans
Keep learning on how to build DeFi solutions on top of Blockstack.
Built With
clarity
node.js
react
Try it out
stacks-loans.herokuapp.com | Stacks Loans | Get a loan by locking up your STX and get the compounding interest up front, we will generate an IOU to free your STX once you pay back the loan. | ['RICHARD MICHEL ATECAS NOGALES', 'Jose Astrain', 'Joyce Lozano', 'Manuel Haro'] | ['Second Runners Up - $250', 'BONUS for the "Smartest" contracts out there!'] | ['clarity', 'node.js', 'react'] | 6 |
10,528 | https://devpost.com/software/clarity-hackathon-submission | Clarity-Hackathon-Submission
This is my submission for Blockstack's August 2020 Clarity Hackathon.
Features
This contract serves as a quiz answer verifier and distributor of funds. Ideally, this contract works best for fill-in-the-blank quizzes with many possible answers. Users submit answers (strings) to the smart contract. The contract takes u50 STX from their account and checks if they answered correctly. If so, they are added to the winners map and get a payout at the end.
Techniques
Originally I wanted to store the user principals in a list, but lists have a maximum storage capacity of 1 MB, which might be exhausted quickly since I am storing user principals. Therefore, I decided to use a map. However, Clarity does not allow iteration over maps. So, I maintain a hybrid list/map data structure: the keys of the map correspond to the ints in the list, and the values in the map are the winner principals. When I need to distribute the funds, I iterate over the list and look up the values in the winner map.
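The workaround can be sketched in Python (illustrative only; the actual contract is written in Clarity, and the names here are assumptions):

```python
# Sketch of the hybrid list/map workaround: Clarity cannot iterate a map,
# so an index list of integer keys is kept alongside it.
winner_ids = []   # append-only list of keys (this part is iterable)
winners = {}      # id -> winner principal (the map)

def add_winner(principal):
    new_id = len(winner_ids)
    winner_ids.append(new_id)
    winners[new_id] = principal

def distribute(pot):
    # iterate the list, look each key up in the map
    share = pot // len(winner_ids)
    return {winners[i]: share for i in winner_ids}
```

The list only stores small integers, so it grows far more slowly toward the 1 MB cap than a list of principals would.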
Built With
clarity
Try it out
github.com | QuizPayout | Participants answer a quiz question and deposit funds into the contract. The contract verifies if the participants' answers are correct and distributes the loser's deposits. | ['James Botwina'] | ['Second Runners Up - $250', 'BONUS for the "Smartest" contracts out there!'] | ['clarity'] | 7 |
10,528 | https://devpost.com/software/earn-chain | Earn Chain
Inspired by the Ponzi model. Just for demo, not for evil purposes!
Scheme
a member must pay 1000 STX for entrance
inviting someone to join the network earns 500 STX
withdrawal: allowed only after inviting at least 10 members
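The scheme can be modeled in a few lines of Python (demo economics only; names and bookkeeping are assumptions, not the contract's code):

```python
# Toy model of the earn-chain scheme described above.
ENTRY_FEE = 1000    # STX paid on joining
REFERRAL = 500      # STX credited to the inviter per new member
MIN_INVITES = 10    # invites required before withdrawal is allowed

balances = {}       # member -> referral earnings in STX
invites = {}        # member -> number of members they invited

def join(member, inviter=None):
    # the new member pays ENTRY_FEE into the contract (not tracked here)
    balances.setdefault(member, 0)
    invites.setdefault(member, 0)
    if inviter is not None:
        balances[inviter] += REFERRAL
        invites[inviter] += 1

def can_withdraw(member):
    return invites.get(member, 0) >= MIN_INVITES
```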
Built With
typescript
Try it out
github.com | earn-chain | clarity earn chain smart contract | ['CG Deviation'] | ['Second Runners Up - $250', 'BONUS for the "Smartest" contracts out there!'] | ['typescript'] | 8 |
10,528 | https://devpost.com/software/provenance-smart-contract | Inspiration
I wanted to learn a little bit about smart contracts and saw a hackathon by Blockstack to build a smart contract based on Clarity. I wanted a very simple project to tackle. Keeping provenance is a simple yet valuable service, so it felt right.
What it does
Keeps a record of the provenance of an asset and allows for transfer of ownership from one owner to the next.
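A provenance record of this kind reduces to an append-only ownership history per asset. A minimal Python model (illustrative names only; the actual contract is written in Clarity):

```python
# Minimal model of an on-chain provenance record: each asset keeps an
# append-only list of owners, and only the current owner may transfer.
class ProvenanceRegistry:
    def __init__(self):
        self.history = {}   # asset-id -> list of owners, oldest first

    def register(self, asset, owner):
        assert asset not in self.history, "asset already registered"
        self.history[asset] = [owner]

    def transfer(self, asset, caller, new_owner):
        assert self.history[asset][-1] == caller, "only the current owner may transfer"
        self.history[asset].append(new_owner)

    def provenance(self, asset):
        return list(self.history[asset])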
Thumbnail from The Noun Project By Manthana Chaiwong, TH
Built With
blockstack
clarity
typescript
Try it out
github.com | Provenance Smart Contract | A smart contract that keeps a record of provenance of an asset | [] | ['Second Runners Up - $250', 'BONUS for the "Smartest" contracts out there!'] | ['blockstack', 'clarity', 'typescript'] | 9 |
10,528 | https://devpost.com/software/splitzpay | Inspiration
SplitzPay is an app to facilitate splitting of payments/bills between family, friends or any two entities. The smart contract written in Clarity is to enforce this arrangement between two parties. The contract has features to enable splitting a payment as a percentage of the total amount.
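As an illustration of splitting a payment as a percentage with integer token amounts (a Python sketch, not the contract's Clarity code; the choice to give the division remainder to the second party is an assumption):

```python
# Sketch of percentage-based bill splitting with integer amounts.
def split_bill(total, pct_a):
    """Split `total` (e.g. in micro-STX) so party A pays pct_a percent."""
    share_a = total * pct_a // 100
    share_b = total - share_a   # remainder goes to B, so shares always sum to total
    return share_a, share_b
```

Computing the second share by subtraction rather than a second percentage avoids losing dust to integer rounding.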
What I learned
I learnt to write a smart contract in Clarity and was able to understand the behind the scenes situation of any Blockchain app.
What's next for SplitzPay
I would like to build a blockchain app to incorporate this smart contract and implement it for splitting bills between two entities.
Built With
clarity
mocha
node.js
typescript | SplitzPay | A smart contract to facilitate splitting payment between two or more entities | ['Jui T'] | ['Third Runners Up - $50'] | ['clarity', 'mocha', 'node.js', 'typescript'] | 10 |
10,528 | https://devpost.com/software/asset-tracker-supply-chain | Asset Tracker - Supply Chain
The asset tracker smart contract was built using the Clarity programming language, with the goal of deploying and using it on the Blockstack blockchain. This supply chain contract has the ability to create and transfer assets. Users also have the option to leave comments regarding the assets.
Built With
clarity
typescript
Try it out
github.com | Asset Tracker Supply Chain | Create and transfer assets on the blockstack blockchain. | [] | ['Third Runners Up - $50'] | ['clarity', 'typescript'] | 11 |
10,528 | https://devpost.com/software/counter-even-odd | Tested on Stacks 2.0 Testnet
Inspiration
Eager to learn about clarity language and work with blockstack.
What it does
It checks whether the counter is even or odd. It also increments and decrements the value of the counter.
How I built it
Used some of my own ideas and some from the Clarity language tutorial.
Challenges I ran into
Finding the syntax of the different functions.
Accomplishments that I'm proud of
I did it even though I am a newbie.
What I learned
I learned about smart contracts and more about the Stacks blockchain.
What's next for Counter-Even-Odd
Built With
clarity
Try it out
github.com | Counter-Even-Odd | Check whether counter is even or odd | ['AbdulBasit Hakimi'] | ['Third Runners Up - $50'] | ['clarity'] | 12 |
10,529 | https://devpost.com/software/storiology | Inspiration
Kenji has a rare auto-immune disease called "Coupe de Sabre" which has around 0.4 to 2.7 cases per 10,000 people per year. He had to visit the hospital, get several shots in one day, multiple times a month. On one occasion, Kierra decided to read a book to him so that she could help take his mind off of the shots and medications. Eventually, Kenji grew a love of learning and became advanced in that subject. In 3rd grade, he was already reading at a 9th grade level.
For this hackathon, Kierra, Kenji, and Keizou wanted to recreate that learning experience, promote mental health and make it available for all kids in hospitals. Instead of spending time in the hospital playing video games or watching movies, kids could be exposed to reading to further develop their skills at a young age. This provides equality for education for all kids in hospitals despite those setbacks.
Description
Our team made a project called Storiology, where kid volunteers read to sick kids in the hospital to promote mental health during these social distancing times. Why kid volunteers? Kids can connect with other kids in an instant. We made this because we want to give the kids in the hospital a chance to find the love of reading. A chance to form a love that will last for a lifetime. A chance to get through one more shot, one more chemo, just one more. We are the Storiology team and we are here to change a sick kid's life. One at a time.
What it does
There are 2 types of users: patients and volunteers. Patients are able to find volunteers that they can talk to and call with. Each volunteer page includes the volunteer's schedule and times they can meet, as well as a chat to familiarize the patient with them. There is an option to 'remember the volunteer' if a patient enjoys reading with them, and an option to 'report the volunteer' if the patient feels uncomfortable at any point.
In general, Storiology was made to connect patients with volunteers, who can read to young patients to distract them from their pain and to simultaneously help them grow academically.
What’s next for Storiology
In the future, we would like to improve by:
Reaching out to famous authors to spread knowledge of Storiology. This may be a big goal, but it will really bring Storiology to the next level in terms of popularity and the amount of users. By reaching out to big names and popular writers, Storiology can have a farther outreach and help affect the lives of more young kids.
Creating filters for the patients to better find volunteers. These filters will allow patients to choose the age of the reader, their gender, and the difficulty of the book. Overall, this filtering process will allow a more individualized platform for each user.
Built With
css
firebase
github
html
javascript
mailtrap
Try it out
www.figma.com
Storiology.kenjiw360.repl.co | Storiology | Calling out to all kids to volunteer to read for kids in hospitals and turn their frowns upside down! | ['Kierra Wang', 'Keizou Wang', 'Kenji Wang'] | ['Social Justice - 1st Place'] | ['css', 'firebase', 'github', 'html', 'javascript', 'mailtrap'] | 0 |
10,529 | https://devpost.com/software/social-justice | Democracy - Equity Black lives matters
A state of society characterized by formal equality of rights and privilege
Democracy! A supreme power invested in the people and directly exercised by them
We the people are, the blacks, the whites, and the reds
We the people are, the poor and the rich
We the people are the singers and stutterers
We the people are the Christians, the Muslims, and Atheists
We the people are Mexicans, the Italians, the British, the French
We the people are the Africans!
Inspired by African Democracy
What it does. A spoken word on Democracy ( Black lives Matter- Racism)
Challenges we ran into. We don't have the necessary equipment to shoot a better video.
Accomplishments that we're proud of: advocating for Black lives. #BlackLivesMatter #Racism
What's next for Social Justice Democracy...
Built With
poam
spokenword
video
Try it out
docs.google.com | Democracy ( BlackLivesMatters) | Democracy, Black Lives matter.. | ['Casper Okpara', 'Ogahlandlord Nigeria'] | ['Social Justice - 2nd Place'] | ['poam', 'spokenword', 'video'] | 1 |
10,529 | https://devpost.com/software/sanchar-aidyub | I
Built With
a | Sanchaar | A | ['Isha Mudgal', 'Harshita Sachdeva'] | ['Social Justice - 3rd Place'] | ['a'] | 2 |
10,529 | https://devpost.com/software/nutrigreen | User Dashboard
Weekly GHG emissions and Tips
Donation page and virtual forest
Nutrition logs
1) Discovery Process
In recent years, I have seen an increasing number of headlines and posts regarding climate change, yet I see a lot less being done to prevent it. Thus before I began the project, I started researching a lot about the global issue.
It turns out that the leading cause of climate change, greenhouse gas emissions, are highly related to the food we eat.
Here are some facts I found regarding nutrition and climate change:
Food accounts for 10% - 30% of a household’s carbon footprint
Food is responsible for approximately 26% of global GHG emissions
Worldwide, approximately 13.7 billion metric tons of carbon dioxide equivalents (CO2e) are emitted through the food supply chain per year
Given my research into this topic, I created NutriGreen which aims to integrate nutrition and environmental sustainability in a way that is also useful for the user.
2) About the Platform
NutriGreen is a revolutionary lifestyle platform designed to help you pursue healthy eating habits while striving to care for the environment.
We aim to utilize the power of technology to guide users in improving their diet, and provide awareness of their personal carbon footprint. Essentially, NutriGreen combines a useful dieting tool with environmental consciousness seamlessly in one application.
3) Features
Create logs of your meals fast and conveniently by simply taking a picture and writing a quick description
Image and natural language processing automatically determines the various ingredients and outputs the corresponding nutritional information and GHG emissions
User-friendly dashboard that displays daily and weekly information to help manage calorie intake and GHG emissions
Option to offset the carbon footprint by giving users an option to donate to build trees
Interactive virtual forest where users can visualize their contributions to curbing climate change
4) How it’s Built
Modern tech stack
MongoDB
ExpressJS
ReactJS
NodeJS
Google Cloud
Nutritionix API
Clarifai API
5) Next Steps/Roadmap
Short-term:
Create embedded donation payment processing system by partnering with an organization like Tentree who will plant trees with proceeds
Generate user awareness and user loyalty (promotions and marketing). Use social media and fitness forums
Continuously innovate to improve user experience and interface
Raise money through donations and sponsorships to upgrade database, storage, and APIs
Long-term:
Establish partnerships with environmental organizations for credibility, guidance, and monetary support
Launch platform in various geographic locations & become #1 nutrition platform globally
Create mobile application
6) Necessary resources to implement
Start-up funding
Marketing team
Mobile development team
Built With
express.js
mongodb
node.js
react
Try it out
github.com | NutriGreen | Nutrition platform for a healthier diet and a healthier planet! | ['Andy Chen'] | ['Environment - 1st Place'] | ['express.js', 'mongodb', 'node.js', 'react'] | 3 |
10,529 | https://devpost.com/software/h2o-armu85 | Inspiration
Floods are one of the most dangerous and frequent natural disasters in the world.
According to the World Resources Institute, over 80% of India's population, that is, 1.08 billion people, are at risk due to floods. Floods cause an economic loss of nearly 40 billion USD annually worldwide, and 15 billion USD per year in India alone. In the recent Kerala flood tragedy, over 100 people lost their lives, and over 15,000 houses and buildings were swept away in the torrential rains. These kinds of floods happen almost every single year in many prone areas, and cause major damage to life and property. The major cause of this is the overflowing of rivers, reservoirs and other water bodies due to extensive rainfall during the monsoon season.
Furthermore, climate change is increasing the risk of floods worldwide, particularly in coastal and low-lying areas, because of its role in extreme weather events and rising seas. The increase in temperatures that accompanies global warming can contribute to hurricanes that move more slowly and drop more rain, funneling moisture into atmospheric rivers like the ones that led to heavy rains and flooding in California in early 2019. Meanwhile, melting glaciers and other factors are contributing to a rise in sea levels that has created long-term, chronic flooding risks for places ranging from Venice, Italy to the Marshall Islands. As such, the risk and impact of floods is continuing to slope upwards.
I wanted to help solve this problem, so I created H2O, a web application to predict floods and their impact before they even happen. I then displayed this information in an interactive graphical format, making it compelling and easy to understand for citizens and governments alike.
What it does
H2O is my solution to floods in India. It is a web app that uses advanced machine learning algorithms to predict future floods based on weather forecast data – precipitation, wind speed, humidity, temperature, maximum temperature, cloud cover – while allowing users to effectively visualize current and upcoming floods. The app has 4 core components:
Plots
The 3 visualizations on the plots page are bubble plots that display flood predictions, damage predictions, and heavy rainfall predictions across India, taking in factors such as precipitation, wind speed, humidity, temperature, and cloud cover, as well as previous data history. Plots:
The first plot is our flood prediction plot, which shows our ML powered prediction of where a flood is going to occur, marked by red dots.
The second plot is a precipitation plot, showing the current precipitation data across the nation, with larger bubbles indicating more precipitation.
Lastly, the third plot is a damage analysis plot, which shows the estimated cost and damage for various places in India, based on the flood risk prediction and population size. The size of the bubbles indicate the extent of predicted monetary damage, measured in USD.
Heatmaps
The 3 heatmaps show flood predictions, damage predictions, and heavy rainfall predictions across India, taking in factors such as precipitation, wind speed, humidity, temperature, and cloud cover, as well as previous data history. Heatmaps:
The first plot is our damage analysis plot, which shows a cost and damage analysis, with the colorscale of the heatmap indicating the extent of predicted monetary damage, measured in USD.
The second plot is a precipitation plot, showing the current precipitation data across the nation, with the darker red areas indicating a greater volume of precipitation.
Lastly, the third plot is a flood prediction plot, which shows our ML powered prediction of where a flood is most likely to occur given the current environmental factors, marked primarily by the darker red spots on a continuous colorbar.
Satellite Images
Our satellite image analysis displays the volume of precipitation over various cities in India for different months. To create this feature, I analyzed netCDF4-formatted data from NASA's Global Precipitation Measurement project and produced geo-referenced plots using a combination of libraries, namely numpy, matplotlib, and cartopy. I then displayed the processed images on our web application for users and governments to view.
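The geo-referenced plotting step can be sketched as below. This is a simplified stand-in: the real pipeline reads netCDF4 precipitation grids from NASA GPM and plots them with cartopy, while here a synthetic lat/lon grid over India keeps the sketch free of those dependencies; the grid bounds and the precipitation field are illustrative assumptions.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Synthetic stand-in for a GPM precipitation grid over India.
lats = np.linspace(8, 36, 60)    # roughly spans India north-south
lons = np.linspace(68, 97, 60)   # roughly spans India east-west
lon_g, lat_g = np.meshgrid(lons, lats)
# Fake monsoon-like precipitation field peaking near the west coast.
precip = 50 * np.exp(-((lon_g - 73) ** 2 + (lat_g - 15) ** 2) / 40)

# Plot the field on lat/lon axes and save it for the web app to display.
fig, ax = plt.subplots()
mesh = ax.pcolormesh(lon_g, lat_g, precip, shading="auto")
fig.colorbar(mesh, ax=ax, label="precipitation (mm)")
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
fig.savefig("precip_july.png", dpi=100)
```

With real data, the `precip` array would come from a netCDF4 variable and the axes would be drawn on a cartopy map projection instead of plain lat/lon axes.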
Predict Page
On our predict page, the user simply enters the name of any city in the world. Our app then automatically fetches that city's weather forecast data in real time, runs it through our machine learning model, and gives instantaneous results, including the flood prediction, temperature, maximum temperature, humidity, cloud cover, wind speed, and precipitation.
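The predict-page pipeline can be sketched as follows. `fetch_weather` is stubbed with a canned response here (the real app calls a weather API over the network), and the threshold rule stands in for the pickled model, so the feature names and values are illustrative assumptions.

```python
# Features the model expects, in a fixed order.
FEATURES = ["temperature", "max_temperature", "humidity",
            "cloud_cover", "wind_speed", "precipitation"]

def fetch_weather(city):
    # Stand-in for the live weather-API call.
    return {"temperature": 31.0, "max_temperature": 35.5, "humidity": 78.0,
            "cloud_cover": 90.0, "wind_speed": 22.0, "precipitation": 41.0}

def predict_for_city(city):
    weather = fetch_weather(city)
    row = [weather[f] for f in FEATURES]  # feature vector for the model
    # With a trained model this would be: flood = model.predict([row])[0]
    # Simple threshold stand-in so the sketch runs without a pickle file:
    flood = weather["precipitation"] > 30 and weather["humidity"] > 70
    return {"city": city, "flood": bool(flood), **weather}

result = predict_for_city("Kochi")
print(result)
```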
How I built it
The dataset
Our goal was to predict floods from weather data using machine learning. For the dataset, I first scraped the website
http://floodlist.com/tag/india
using the Python BeautifulSoup 4 library. This website provided information about past and current floods in India, as well as their dates and locations. I then used the Visual Crossing weather API to obtain historic weather data such as precipitation, humidity, temperature, cloud cover, and wind speed in those areas and during those times. I also applied several data augmentation techniques to this dataset, which let me significantly increase the diversity of data available for our training model without actually collecting new data.
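The scraping step can be sketched with BeautifulSoup as below. The HTML snippet is a stand-in for a page like http://floodlist.com/tag/india; the real markup and class names may differ, so the `h2.entry-title` selector is an assumption.

```python
from bs4 import BeautifulSoup

# Stand-in for a downloaded flood-list page.
SAMPLE_HTML = """
<article><h2 class="entry-title">India - Floods in Assam, July 2019</h2></article>
<article><h2 class="entry-title">India - Kerala Flooding, August 2018</h2></article>
"""

def extract_flood_headlines(html):
    """Return the headline text of each flood report on the page."""
    soup = BeautifulSoup(html, "html.parser")
    return [h2.get_text(strip=True) for h2 in soup.select("h2.entry-title")]

headlines = extract_flood_headlines(SAMPLE_HTML)
print(headlines)
```

In the real pipeline, each headline's location and date would then be joined against the weather API to build the training rows.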
ML model
My machine learning model is based on the Python scikit-learn library. I used pandas to generate a data frame for the dataset, and then tried various machine learning models, from Logistic Regression to K-Nearest Neighbors to Random Forest Classification. After experimenting heavily with all of these models, the Random Forest Classifier gave the highest accuracy of 98.71% on the test set. I then saved the model in a pickle file.
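The training step can be sketched as below. The dataset here is synthetic (the real project used scraped flood records joined with Visual Crossing weather data), and the toy labeling rule is an illustrative assumption; only the scikit-learn workflow mirrors the description above.

```python
import pickle
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the compiled weather dataset.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "precipitation": rng.gamma(2.0, 10.0, n),
    "humidity": rng.uniform(30, 100, n),
    "temperature": rng.uniform(15, 40, n),
    "cloud_cover": rng.uniform(0, 100, n),
    "wind_speed": rng.uniform(0, 60, n),
})
# Toy label: heavy rain plus high humidity tends to mean flooding.
df["flood"] = ((df["precipitation"] > 25) & (df["humidity"] > 60)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="flood"), df["flood"], test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"test accuracy: {acc:.3f}")

# Persist the trained model the same way the app does.
with open("flood_model.pkl", "wb") as f:
    pickle.dump(model, f)
```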
Data Visualization
I first obtained a dataset of the major cities and towns in India (around 200 of them) along with their latitude, longitude and population. I then obtained the numerous weather factors in each city using the weather API and ran the data into our machine learning model. Next, I plotted the data from the model on various different types of maps, using Plotly chart studio. The maps represent various data such as flood prediction, precipitation analysis, and damage estimates, in the form of scatter plots, heat-maps, and bubble plots. The damage estimates were calculated based on flood prediction and population. I also produced geo-referenced satellite images for various cities in India, based on retrieved data from NASA's Global Precipitation Measurement project.
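The damage-estimate step can be sketched as below. The source says damage was estimated from the flood prediction and population size but does not give the formula, so the per-capita cost constant and the city figures here are illustrative assumptions, not the project's actual values.

```python
import pandas as pd

# Assumed average flood loss per affected person (hypothetical constant).
COST_PER_PERSON_USD = 150

# Stand-in for the ~200-city dataset with model-predicted flood risk.
cities = pd.DataFrame({
    "city": ["Mumbai", "Chennai", "Guwahati"],
    "population": [20_400_000, 10_900_000, 1_100_000],
    "flood_risk": [0.7, 0.4, 0.9],  # predicted flood probability
})

# Expected damage scales with population and predicted risk.
cities["est_damage_usd"] = (
    cities["population"] * cities["flood_risk"] * COST_PER_PERSON_USD
)
print(cities[["city", "est_damage_usd"]])
```

In the app, the `est_damage_usd` column would drive the bubble sizes and heatmap colorscales on the damage-analysis maps.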
Front-end and hosting
Our web app is based on the Flask Python framework. I rendered HTML templates – with CSS for styling and JavaScript for added functionality – and integrated them with our machine learning models and datasets via the Flask back-end. I then used Heroku's hosting service to host our web application for everyone to try!
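The Flask wiring can be sketched as below. The route, template, and `predict_flood` stub are hypothetical; in the real app the route renders full HTML templates and the predictor loads the pickled Random Forest and live weather data.

```python
from flask import Flask, render_template_string

app = Flask(__name__)

def predict_flood(city):
    # Stub: the real app fetches live weather for `city` and feeds it
    # to the pickled Random Forest model.
    return {"city": city, "flood": False, "precipitation_mm": 3.2}

# Inline template standing in for the project's HTML template files.
PAGE = "<h1>{{ r.city }}</h1><p>Flood predicted: {{ r.flood }}</p>"

@app.route("/predict/<city>")
def predict(city):
    return render_template_string(PAGE, r=predict_flood(city))

# Quick smoke test with Flask's built-in test client.
resp = app.test_client().get("/predict/Chennai")
print(resp.data.decode())
```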
Challenges I ran into
My biggest challenge was mining and collecting data to build our models and data visualizations. Given the extremely limited existing data available for floods and water-related factors in India, scraping quality data was difficult. I used a combination of weather APIs and scraping techniques to compile an accurate and effective dataset. I also struggled with integrating the plots into our web application, as it was my first time working with Plotly. Lastly, I faced many git merge conflicts caused by different encodings of CSV files and different pickle versions across computer platforms.
Accomplishments that I'm proud of
I am extremely proud of compiling and creating a dataset that accurately and effectively reflects the current situation of floods in India and allows me to make future predictions. I am also proud to have expanded my machine learning skills by testing out new models, ultimately implementing a model with over 98% accuracy. Lastly, I am proud to have combined various data augmentation, data mining, and data manipulation techniques with our model to create detailed and sophisticated, yet compelling and easy-to-understand, plots for data visualization.
Built With
api
css
flask
html5
machine-learning
python
Try it out
github.com | H2O | Too much water | ['C kavya Shree'] | ['Environment - 2nd Place'] | ['api', 'css', 'flask', 'html5', 'machine-learning', 'python'] | 4 |