hackathon_id int64 1.57k 23.4k | project_link stringlengths 30 96 | full_desc stringlengths 1 547k ⌀ | title stringlengths 1 60 ⌀ | brief_desc stringlengths 1 200 ⌀ | team_members stringlengths 2 870 | prize stringlengths 2 792 | tags stringlengths 2 4.47k | __index_level_0__ int64 0 695 |
|---|---|---|---|---|---|---|---|---|
10,415 | https://devpost.com/software/spacesuitsolutions-7fnm4k | Spacesuit Safety Setup: Warns astronauts when carbon dioxide levels are nearing the lethal limit, using an LCD screen and a gas sensor
Solar System Directory: Describes characteristics of planetary environments and advises astronauts to make corresponding adjustments to their space suits
Inspiration
Space is something I have always found extraordinary, and at times I am unsatisfied with how little knowledge has been gathered on such a wondrous topic. To think that 99.9999999999999999% of space is unexplored leaves me with a desire to know more! This inspired me to create a piece of software, the Solar System Directory, as well as a Spacesuit Safety Setup, which aims to make space suits adjustable to the environment they are in through programming and sensors.
What it does
The Spacesuit Safety Setup, a component of SpacesuitSolutions, detects the amount of carbon dioxide present in a spacesuit. Oxygen containers can last astronauts 6 to 9 hours; as the oxygen runs out, carbon dioxide builds up until the oxygen tanks are refilled. The Spacesuit Safety Setup warns the astronaut to return to their space shuttle as their suit nears lethal carbon dioxide levels.
The Solar System Directory asks the user to enter which planet they would like to set a course for, and provides the corresponding information about that planet that is necessary for the astronauts' survival, such as atmospheric pressure and temperature. Accordingly, they are instructed to adjust internal temperature and pressure for optimal exploration.
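The Directory's lookup-and-advise flow can be sketched in a few lines of Python; the planet figures below are rough, illustrative averages, not the project's actual data:

```python
# Hypothetical planet table -- rough published averages, for illustration only.
PLANETS = {
    "mars": {"temp_c": -63, "pressure_kpa": 0.6},
    "venus": {"temp_c": 464, "pressure_kpa": 9200},
}

def suit_advisory(planet_name):
    """Look up a planet and return a plain-text suit-adjustment instruction."""
    data = PLANETS.get(planet_name.lower())
    if data is None:
        return None  # unknown planet: no advisory
    return (
        f"{planet_name.title()}: surface temp {data['temp_c']} C, "
        f"pressure {data['pressure_kpa']} kPa. "
        "Adjust suit heating and internal pressure accordingly."
    )
```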
How I built it
To build it, I used TinkerCAD for the necessary circuitry and programmed an Arduino to display the necessary messages when certain conditions in the space suit were met, using a gas sensor and an LCD display (link below):
link
I also used Google Colab to code a Solar System Directory, which assessed all of the various environmental conditions on planets in our solar system, and gave corresponding commands to make space exploration safer by adjusting components of the space suit, such as internal temperature and suit pressure, link below:
link
Challenges I ran into
I initially attempted to use PyCharm to code the Solar System Directory, as I have limited experience with coding; however, because I kept receiving compatibility errors, I looked for alternatives and ultimately used Google Colab for the final product. Additionally, creating circuitry in TinkerCAD was quite confusing at times, but I ultimately overcame that challenge through relentless research for solutions, as well as experimenting with it myself.
Accomplishments that I'm proud of
As a beginner coder and a first-time hackathon participant, I truly believe I tried my best to submit a finished product, and I am proud to have participated.
What I learned
I learned about the necessity of time management, as well as different facts and conditions about planets in our solar system!
What's next for SpacesuitSolutions
Though this is just a hackathon project I created in a couple of days, I look forward to deepening my knowledge of space, as well as pursuing it as a career in the future!
Built With
googlecolab
hardware
python
tinkercad
Try it out
colab.research.google.com | SpacesuitSolutions | SpacesuitSolutions takes care of all of your space-exploration safety needs! | [] | [] | ['googlecolab', 'hardware', 'python', 'tinkercad'] | 74 |
10,415 | https://devpost.com/software/tothemoon-1l5eop | ToTheMoon
This is a simple space game I created for the To the Moon and Hack hackathon. I built it in about a day, so it is not a finished project.
The game uses React and JavaScript. I will add much more in the future.
Image by Free-Photos from Pixabay
Icons made by Icongeek26, fjstudio, Flat Icons, Eucalyp, and Freepik from www.flaticon.com
Built With
css
html
javascript
react | ToTheMoon | Simple Space Ship Game That I did not finish | ['Damon Marc Rocha II'] | [] | ['css', 'html', 'javascript', 'react'] | 75 |
10,415 | https://devpost.com/software/women-in-space-a-timeline-of-their-first-journeys | Inspiration
I wanted to recognize the amazing influential women astronauts all over the world who made huge impacts in space history.
What it does
It is a vertical carousel slider that you can scroll through, or click through via the navigation bar on the side.
How I built it
I utilized the horizontal carousel from Bootstrap and implemented more JavaScript to make it vertical. I also added twinkling star animations in the background and various transitions in each slide to make it visually appealing.
Challenges I ran into
Making the vertical carousel slider
Built With
bootstrap
css
html
javascript
jquery
Try it out
womeninspace.aliflores09.repl.co | Women in Space | A timeline displaying their first journeys in space and recognizing the impact women have made in space history | ['aliflores09 Flores'] | [] | ['bootstrap', 'css', 'html', 'javascript', 'jquery'] | 76 |
10,415 | https://devpost.com/software/the-space-screen | Render
Renders in its natural habitat
Vital levels 75% - 100%
Vital levels 50% - 75%
Vital levels 25% - 50%
Vital levels 0% - 25%
Physical Prototype
Physical Prototype when active
Space Screen
Inspiration
During spacewalks, astronauts within space suits have to be very cautious of things such as their oxygen levels, suit pressure, and suit temperature. They are constantly updated on their vitals by a crew member inside the ISS who can communicate directly with the astronaut over a microphone link. However, we believed the astronaut could be made more independent, and that the process of checking such important pieces of information should be made shorter and more convenient. Astronauts should possess the ability to check and monitor their vitals independently.
Our Vision
The space screen's goal is to cut out the 'middleman' and help the astronaut check important pieces of information, like remaining oxygen levels and suit pressure, on their own, without needing a crewmate to constantly read them out. The space screen is an LCD panel installed on the non-dominant arm of the spacesuit; it communicates with the same sensors the astronaut's crewmates use to monitor the suit. The screen displays all of the information directly to the astronaut, letting them monitor their vitals without needing someone else to monitor and relay them regularly.
The space screen would also have RGB capability which will alternate colours depending on vital levels.
How it works
The LCD (Liquid Crystal Display) panel, the main component of the space screen, is controlled and powered by an Arduino Uno microcontroller. The Arduino controls what is displayed and the automatic transitions between the various statistics. When the Arduino receives power, it begins communicating with the sensors around the spacesuit and constantly receives information from them. The code (shown on the GitHub linked below) programmed into the Arduino takes the information it receives from the sensors and displays it on the LCD panel.
There is also an RGB LED strip connected to the primary Arduino; the strip runs around the device, as shown in the renders, and lights up a colour depending on vital levels. The Arduino constantly checks for any change in value throughout the spacesuit and updates the screen accordingly. If the oxygen level is above 75%, the LED strip glows bright blue; below 75% but above 50%, it glows green; below 50% but above 25%, orange. Once it is below 25% and nearly empty, it turns bright red.
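The colour thresholds above amount to a small decision table. A minimal Python sketch of that logic (the real device would run Arduino C++; this is just an illustration of the thresholds):

```python
def vitals_colour(oxygen_percent):
    """Map an oxygen level (0-100) to the LED colour described above."""
    if oxygen_percent > 75:
        return "blue"
    elif oxygen_percent > 50:
        return "green"
    elif oxygen_percent > 25:
        return "orange"
    else:
        return "red"
```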
Challenges in Creation
Thinking of a project that we could do within the timeframe was the biggest challenge we encountered. We struggled in trying to come up with a product that would help better the lives of astronauts.
Time management was an issue, since at nearly every step of the way we encountered problems we didn't expect, setting us back. We had many setbacks and personal situations, and on top of that we had a hard time balancing our hackathon tasks with our household responsibilities.
Our Accomplishments
We were able to create a working prototype of the LCD panel and the RGB LEDs using nearly all of the same components we would use if we were implementing this product in a real spacesuit.
We were able to divide up the work so well that we were able to complete the project before the deadline.
What we Learned
We learned a ridiculous amount about LCDs, how they work, and the functions available in the LCD library, since this was the first time we had worked with LCDs in a project.
We learned the importance of doing the proper research and how vital it is to properly understand your topic so that you can design a solution that people would benefit from.
We learned the importance of working as a team and dividing up the work evenly so no one feels like they are doing too much or too little work.
Everyone’s Role
Aryam Sharma (Disc. @imaryamsharma#8716)
Main programmer; also assisted with the research. He did the simulation of the final product and researched spacesuits: how they work, where they can be improved, etc.
Matthew Simpson (Disc. @inferno#8410)
Electronics engineer and demo video producer. He prototyped a functional model of a simpler version of the final product and explained how it would work and what it would display. He also made the script, edited, and acted in the demo video.
Shahmeer Khan (Disc. @PotatoTheTomato#4133)
Main designer and modeller. He designed each of the components and then rendered them together as a final product, allowing each member of the group to visualize it. (Unable to join the Devpost project as an official creator, but still part of the group.)
Ishpreet Nagi (Disc. @Bapple_Boi#5294)
Documented everything going on within the project and was also a researcher. The work he documented was vital; without it we would not have been able to keep track of what happened when. He also greatly helped with the research by finding out the problems involved in making anything related to space, and he did the voiceover for the demo video.
Resources used in this Document
Dunbar, Brian. “What Is a Spacesuit?” NASA, NASA, 27 May 2015,
www.nasa.gov/audience/forstudents/5-8/features/nasa-knows/what-is-a-spacesuit-58.html
Built With
arduino
autodesk-fusion-360
devpost
github
pygame
python
tinkercad
Try it out
github.com | The Space Screen | Allowing astronauts to see their vitals without the need of others. | ['Matthew Simpson', 'Ishpreet Nagi', 'Aryam Sharma'] | [] | ['arduino', 'autodesk-fusion-360', 'devpost', 'github', 'pygame', 'python', 'tinkercad'] | 77 |
10,415 | https://devpost.com/software/we-gravitate | Inspiration
Being very interested in physics, we thought it would be interesting to provide a way to learn about gravitational mechanics in an easy-to-understand way. A sensitive, realistic game makes understanding the concept of gravitation easy.
What it does
We Gravitate is a hyper-realistic game that uses up-to-date information to simulate objects in the gravitational field of others. It also includes a character design tool for the easy addition of new characters by the user.
How we built it
We split the responsibilities of the project among the team surrounding the following elements:
1. Scraping
We used UIPath and its JavaScript SDK to scrape real-time data from online sources about the different celestial bodies.
2. Pixel art maker
Written in JavaScript using HTML tables: the user can colour in cells of a table to simulate pixel art being made. Features like the bucket tool were implemented using algorithms like flood fill. The image is then exported as a PNG for easy use in the game.
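The bucket tool's flood fill can be sketched as an iterative, stack-based traversal over the grid of cells (a Python sketch for illustration; the project itself is in JavaScript):

```python
def flood_fill(grid, row, col, new_colour):
    """Iteratively recolour the connected region containing (row, col)."""
    old_colour = grid[row][col]
    if old_colour == new_colour:
        return grid  # nothing to do
    stack = [(row, col)]
    while stack:
        r, c = stack.pop()
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == old_colour:
            grid[r][c] = new_colour
            # visit the four orthogonal neighbours
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid
```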
3. Game
We used the JavaScript library "Phaser" to create objects from the generated pixel art. Using physics equations, we update the position, velocity, and acceleration of every object several times per second.
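The per-frame update described — compute each object's acceleration from Newton's law of gravitation, then integrate velocity and position — might look like this Euler-style sketch (Python for illustration; the game itself uses Phaser/JavaScript, and the body values are arbitrary):

```python
import math

G = 6.674e-11  # gravitational constant (SI units)

def step(bodies, dt):
    """One Euler update. Each body is a dict with mass, x, y, vx, vy."""
    # First accumulate accelerations from every other body...
    for b in bodies:
        ax = ay = 0.0
        for other in bodies:
            if other is b:
                continue
            dx, dy = other["x"] - b["x"], other["y"] - b["y"]
            r = math.hypot(dx, dy)
            a = G * other["mass"] / (r * r)  # Newton's law of gravitation
            ax += a * dx / r
            ay += a * dy / r
        b["vx"] += ax * dt
        b["vy"] += ay * dt
    # ...then advance positions with the updated velocities.
    for b in bodies:
        b["x"] += b["vx"] * dt
        b["y"] += b["vy"] * dt
```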
4. Website
We uploaded the code to a github repository and linked it to the website using a CNAME DNS configuration
Challenges we ran into
Using Phaser was quite difficult; both of us had limited experience with JavaScript and no experience with Phaser.
Using physics equations that account for more than just basic variables led to some confusion (and apparently some planets that appeared to have negative mass)
Accomplishments that we're proud of
Learning how to code in a new language and also learning a library in the span of a couple of days is something that both of us are very proud of.
We are also proud of being able to provide a fun and educational resource to the MLH community.
Although not coding related, I'm quite proud of the logo I made for this hackathon.
What we learned
How to use UIPath to: create tables, create VB variables, scrape data from websites, use conditional statements, and write to a spreadsheet
How to use the javascript library Phaser to make games and visual representations of code
What's next for We Gravitate
We want to add more to We Gravitate so that it is even easier to learn from. We plan on including informational text to show equations and more information about the celestial bodies.
Domain.com Submission
wegravitate.space
(This may not work due to slow DNS configuration updates, but usage is shown in the video)
Built With
domain.com
javascript
phaser.js
uipath
Try it out
wegravitate.space | We Gravitate | A hyper-realistic gravity simulation game and character designer. | ['Adam Hassan'] | [] | ['domain.com', 'javascript', 'phaser.js', 'uipath'] | 78 |
10,415 | https://devpost.com/software/skybox-3ebh5z | Inspiration
We always wanted to program a 3d physics game, so we made one.
What it does
Allows the player to create planets and stars, and attempt to form a stable solar system
How I built it
The physics behind this game was built using real-life gravity math.
The 3d rendering was made using 3d vector translation math and pygame to draw the planets.
The display was made using pygame to render buttons.
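A common way to do the 3D-to-2D vector translation described above is a simple perspective divide. The following is a minimal stand-in sketch, not the team's actual module, and the parameter values are arbitrary:

```python
def project(point, viewer_distance=4.0, scale=200.0, centre=(400, 300)):
    """Perspective-project a 3D point onto 2D screen coordinates.

    Points farther along +z shrink toward the screen centre, which is
    what gives the illusion of depth when drawing with a 2D library.
    """
    x, y, z = point
    factor = scale / (viewer_distance + z)   # nearer points appear larger
    return (centre[0] + x * factor, centre[1] - y * factor)
```

Each projected point can then be handed to a 2D drawing call (e.g. a pygame circle) at the returned screen coordinates.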
Challenges I ran into
We originally planned on using PyOpenGL, but unfortunately it couldn't be drawn side by side with a normal pygame surface, so we had to resort to making our own 3D rendering module from scratch.
Moving the perspective that viewed the 3d objects was also difficult, as we were inexperienced with vectors.
Accomplishments that I'm proud of
We managed to show 3d objects using a 2d drawing library and a 3d rendering module made from scratch.
We also made the physics optional using the play and pause button.
What I learned
We learned how to collaborate on the same program with multiple people and understand each other's code.
What's next for SkyBox
We hope to add textures to the planets and add more types of celestial bodies (e.g. moons, asteroids, black holes, etc.).
Built With
math
pygame
python
random
Try it out
github.com | SkyBox | Sandbox in the sky(space) | ['Brandon Cheng', 'Roderick Wu', 'Nicholas Jano', 'Ahren Chen'] | [] | ['math', 'pygame', 'python', 'random'] | 79 |
10,415 | https://devpost.com/software/to-the-moon-and-hack | To-the-moon-and-hack
Submission for the hackathon: To the Moon and Hack.
We have always been interested in 2D game design, so we decided to learn an open-source game engine called Godot. We were thinking of making a Space Invaders or Asteroids clone, but then we thought of a game we used to play, Speedrunners, and decided to merge its grappling idea into a space game. We drafted ideas and decided we wanted a game where you race against another player, grappling onto asteroids to climb upwards and survive. We used a Creative Commons space-themed asset pack to start developing the game. Creating the moving asteroids and the player was initially easy, but developing a grappling hook seemed like a hard problem. We researched online, learning about collision layer masking, ray casting, and paths; however, implementing these caused many bugs to arise.
While the final game did have a lot of visual glitches, we are proud of what we made and we learned a lot.
Built With
godot
Try it out
github.com
brendankhoury.github.io | AstroHop | Play as a flying saucer which needs to cling to asteroids to avoid being thrown into the abyss. | ['Brendan Khoury', 'Nicholas Chite'] | [] | ['godot'] | 80 |
10,415 | https://devpost.com/software/tothemoonandhack | ToTheMoonAndHack
Building a solar system, with hopefully adjustable values to edit orbits, for a hackathon.
I got really sick this weekend and couldn't get very far. But the plan was to make a square solar system. I got gravity working and the sun working as a light source, and added a bunch of texture packs. My current health situation isn't ideal. The plan was also to make the whole system user-controlled in terms of adding planets, adjusting velocities, distances, and so on. I'm still pretty new to Unity, but I learned a lot, especially how to make the sun object a light-source emitter.
Built With
c#
Try it out
github.com | ToTheMoonAndHack | Building a a solar system with hopefully adjustable values to edit orbits for a hackathon | ['Sofia Bzhilyanskaya'] | [] | ['c#'] | 81 |
10,415 | https://devpost.com/software/alone-in-space | Landing page with a fun animation.
Advice from fellow astronauts.
A mission control panel with resources to combat boredom.
An online journal.
An uplifting quote generator.
During quarantine, we are all battling a feeling of loneliness as we are separated from our friends and family, a similar feeling for astronauts aboard the ISS.
We were inspired by our experiences in quarantine to create a website to help people combat feelings of isolation.
When building our website, Alone In Space, we used HTML, CSS, and JavaScript. Through these languages, we were able to incorporate a professional design and make the website user-friendly and interactive.
While making our website, we faced several challenges. For instance, on the mission control page, we initially had trouble centering the panel, making the buttons circular, and creating a text animation. Also, on the journal page, we had trouble with the JavaScript, as we wanted to personalize the page with the user's name.
We are proud of participating in our first hackathon and our efficiency working together as a team.
Through this process, we were able to enhance our skills in web development. We learned how to make a side navigation bar, create animations and a random quote generator, use flexbox, include gradient backgrounds, and more.
Built With
css
html
javascript
jquery
repl.it
Try it out
aloneinspace.ananyagollakota.repl.co | Alone In Space | During quarantine, we have had feelings of isolation, a similar feeling astronauts face. Our website provides resources such as, coloring, meditation, and more to help combat feelings of isolation. | ['Ananya Gollakota', 'Daniela Rangel'] | [] | ['css', 'html', 'javascript', 'jquery', 'repl.it'] | 82 |
10,415 | https://devpost.com/software/when-is-bedtime | Inspiration
I wanted to figure out what bedtime would be on any planet. I was inspired by Carl Sagan's quote about the Earth being a Pale Blue Dot: how everything you know and love is a small piece of the vast universe. I wanted to be able to show that, visualized.
What it does
This app lets you input a planet or a location and tells you when nighttime is, using the Skyfield astronomy API, which uses astronomical data to determine nighttime. Radar.io is used to figure out your location. Using the NASA API, you can also find the position of satellites above your current location or how far away the closest astronomical object is. You can put in your birthday and it will tell you what the weather was on Mars that day. Astropy is another API that provides access to astronomy data and data from NASA as well.
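Skyfield computes day and night from real ephemeris data; as a rough, self-contained illustration of the underlying astronomy (not the project's actual code), a standard declination/hour-angle approximation looks like this:

```python
import math

def solar_altitude_deg(lat_deg, day_of_year, solar_hour):
    """Approximate solar altitude in degrees; it is night when this is below zero.

    Uses the common declination approximation and ignores longitude and the
    equation of time, so it is only a crude stand-in for a real ephemeris.
    """
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)  # degrees away from solar noon
    lat, d, h = (math.radians(v) for v in (lat_deg, decl, hour_angle))
    alt = math.asin(math.sin(lat) * math.sin(d)
                    + math.cos(lat) * math.cos(d) * math.cos(h))
    return math.degrees(alt)

def is_night(lat_deg, day_of_year, solar_hour):
    return solar_altitude_deg(lat_deg, day_of_year, solar_hour) < 0
```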
How I built it
I used Flask, the APIs, and ngrok. The server and all the Python libraries (numpy, pandas, matplotlib) run on a Raspberry Pi. The NASA public API is used to find astronomical events close to a date you enter, such as your birthday. Radar.io helps track your location if you want to find satellites above your head. The Skyfield API is actually dependent on numpy and pandas for data manipulation, and Astropy was used for further information handling.
Challenges I ran into
Understanding how to handle input in Flask. Time management, and working from home when you have to go places. I don't know web development or JSON parsing, so figuring out how to show the data I am able to access was the hardest part. I am still learning how to take in data from APIs, parse it, and show it on a webpage. The standalone Python scripts can access the data and print it on the command line, but I wasn't sure how to translate that into input and output on a webpage. The biggest challenge was time, however; with enough time, exploring the NASA API and learning Flask development would make the project come together. Beyond that, the main obstacle was simply not knowing the HTML/CSS/JavaScript side of how to incorporate data onto a webpage.
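For reference, reading form input in Flask needs only a request-method check. A minimal, hypothetical sketch (the route and field names here are illustrative, not the project's):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=["GET", "POST"])
def index():
    # On POST, read the submitted form field; on GET, just show the form.
    if request.method == "POST":
        planet = request.form.get("planet", "")
        return f"You chose: {planet}"
    return '<form method="post"><input name="planet"><input type="submit"></form>'
```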
The domain.com redirect to my ngrok was initially not working for me for some reason, and I am still unable to enforce SSL from domain.com (while my ngrok page is secure), so I have kept a link to the ngrok as well. (https://palebluedotin.space/ is the domain I registered.) Not being able to enforce SSL causes a security prompt in Chrome saying "this is unsafe", even though my actual site is SSL secured.
Accomplishments that I'm proud of
I was able to incorporate the Skyfield Api which is specifically made for astronomy and at least show a proof of concept based on a hard coded location in the flask python script.
What I learned
Web development is hard. Most of the time is spent on understanding APIs and parsing output from GET/POST requests. Even with Flask, understanding HTML, CSS, and JavaScript is essential if you want your webpage to actually do something. I did learn how to use a data format specific to astronomy, and it was fun learning about that.
What's next for When is bedtime
Clean up the user experience and present the data in a nicer format.
I don't know if exposing a jupyter notebook publicly would be a good idea from a security standpoint. A lot of my initial testing was through a jupyter notebook.
Built With
astropy
flask
jupyter
nasa-astrophysics-data-system
ngrok
numpy
python
radar.io
raspberry-pi
skyfield-api
Try it out
f5b4a8c6a38c.ngrok.io
github.com
palebluedotin.space | Pale Blue Dot | Helps you figure out when to sleep and other interesting astronomical info based on your location and input date | ['Karan Naik'] | [] | ['astropy', 'flask', 'jupyter', 'nasa-astrophysics-data-system', 'ngrok', 'numpy', 'python', 'radar.io', 'raspberry-pi', 'skyfield-api'] | 83 |
10,415 | https://devpost.com/software/asteroid-impact-viewer-nuhwkt | Inspiration
I was inspired while looking through NASA's API page, where I found they have an API (JPL Sentry) to track asteroids with a non-zero probability of hitting us in the near future. I thought this was fascinating, especially since a comet was recently visible above the Earth.
What it does
AIV uses the API in two different ways. On the main page, it displays a subset of asteroids in the database with their name, diameter, and threat level shown on hover. The asteroids have physics programmed in and move away from the borders of the screen and from each other to keep them all in view. The bigger the asteroid drawn, the bigger the actual diameter, and the shape of each asteroid is randomly generated. The background is automatically pulled from NASA's APOD API. Clicking an asteroid shows all of that asteroid's "Virtual Impactors" (the possible trajectories/timeframes it could take). On this screen, each asteroid's distance to the Earth is a measure of how many days away its possible impact date is. Hovering over these VIs shows the name, impact date, impact probability, and impact energy (expressed in megatons of TNT and the equivalent number of atomic bombs). Asteroids close to the Earth have a fiery trail behind them. The ISS is also shown above the Earth, using the open-notify API to get the astronauts currently aboard.
How I built it
I built it on the Javascript library p5.js, which has a much simpler canvas drawing syntax and runs on a loop to draw. This is the main engine behind all of the graphics. To interpret the API date values, I used moment.js for date formatting and converting. The APIs were simple GET requests and JSON data parsing.
Challenges I ran into
I ran into a couple of challenges, the main one being how to generate the random asteroids. At first I attempted the QuickHull algorithm; however, it created shapes that were far too rectangular. Instead, I used my own algorithm: generate successive random points around a circle until a point gets too close to the initial one, then close the shape. Another challenge was displaying the asteroid radii. With a linear scale, some asteroids were far too small and others much too big. To solve this, I used a log scale to set the radius of each asteroid.
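Both fixes — the log-scaled display radius and the walk-around-a-circle polygon generator — can be sketched as follows (Python for illustration; the project uses p5.js, and all constants here are arbitrary):

```python
import math
import random

def display_radius(diameter_m, min_px=5, max_px=40, d_min=1.0, d_max=1.0e4):
    """Log-scale an asteroid diameter into a drawable pixel radius."""
    t = (math.log10(diameter_m) - math.log10(d_min)) / (math.log10(d_max) - math.log10(d_min))
    return min_px + max(0.0, min(1.0, t)) * (max_px - min_px)

def random_asteroid(radius, wobble=0.3, step_deg=30):
    """Generate a jagged closed polygon by walking around a circle,
    jittering the distance from the centre at each step."""
    points = []
    for angle in range(0, 360, step_deg):
        r = radius * (1 + random.uniform(-wobble, wobble))
        a = math.radians(angle)
        points.append((r * math.cos(a), r * math.sin(a)))
    return points
```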
Accomplishments that I'm proud of
I am proud of what I was able to do by myself in the short amount of time I had, and it was my first time making a visualization with an API like this. It was rewarding seeing the animation and the hover effects play out, since I had to build it all from the ground up.
What I learned
I learned that organizing and developing a mental model for your code is very important, as the files in this project cross-referenced a lot and it was easy to get mixed up. I also learned how multiple APIs can come together into one project because I usually only use one at a time.
Built With
javascript
moment.js
nasa-apod
open-notify
p5.js
sentry | Asteroid Impact Viewer | Visually explore NASA JPL's asteroid tracking system | ['Bryce Parkman'] | [] | ['javascript', 'moment.js', 'nasa-apod', 'open-notify', 'p5.js', 'sentry'] | 84 |
10,415 | https://devpost.com/software/hyperjump-countdown | Inspiration
I really wanted to create something that celebrated the many amazing films set in space. In particular, these films always seem to have a hyperjump scene with a star field. I wanted to make something based on the hyperjump, and decided to make a countdown timer where the end of the timer is signalled by the hyperjump occurring.
What it does
It takes an integer value from the user and, on the press of the 'Go!' button, counts down for the number of seconds defined by that integer.
How I built it
I started by taking the star field code developed by Daniel Shiffman from The Coding Train. As I am still pretty new to JS, and specifically p5.js, I spent an hour or so just messing with the code (changing variables, adding functions, reordering sections, etc.) to really get a sense of what every part of the code does.
I then went about making the speed change in accordance to a countdown. This was relatively simple.
I then added the functionality to allow the user to enter a number of seconds. This was tricky.
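Tying the star-field speed to the countdown can be as simple as a linear ramp; a hypothetical sketch of the idea (the project itself is p5.js, and the constants here are invented):

```python
def star_speed(seconds_left, total_seconds, base=2.0, max_speed=50.0):
    """Ramp the star-field speed up as the countdown approaches zero."""
    if total_seconds <= 0:
        return max_speed  # countdown finished: full hyperjump speed
    progress = 1.0 - max(0, seconds_left) / total_seconds
    return base + progress * (max_speed - base)
```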
Challenges I ran into
I found adding the functionality to allow the user to enter a number of seconds challenging. At first I planned to add the button in the HTML file, but I quickly found it difficult to feed input from the HTML file into the JavaScript file. I then moved on to creating the input box and button in the JS file, but this was not something I had done with JS before. To overcome this, I looked up some online examples and used the p5.js reference site.
Accomplishments that I'm proud of
When I was struggling to get the user input feature to work, I considered just leaving it out and making it run a pre-determined amount of time. Instead of giving up though, I decided to take a break and do something else for a while. I then came back with a clearer mind and this allowed me to solve the problem. I am proud that I didn't give up on the feature!
What I learned
I learnt how to integrate p5.js with HTML (prior to this I had only really worked in the p5.js web editor).
What's next for Hyperjump Countdown
I would love to make the app more aesthetically pleasing and add sound effects to enhance the experience. I would also like to host it so that other people can easily make use of it.
Built With
css
html5
javascript
p5.js | Hyperjump Countdown | Simple web based countdown timer that looks like a hyper jump! | ['Amy Hudspith'] | [] | ['css', 'html5', 'javascript', 'p5.js'] | 85 |
10,415 | https://devpost.com/software/grouber | groUber's initial branding!
Dashboard.
groUber's homescreen.
RSVP page.
The map showing the best routes solution.
Showing a long-distance trip.
groUber: schedule carpools, without the headache
Hello, world! We’re groUber, an app for organizing events in the 21st century.
groups + Uber =
groUber
About
groUber aims to help event planners create carpools, and is being built for To the Moon and Hack. If you're going to use this project to plan your event, remember to stay 6 ft apart!
Motivation
This project was built by a group of 5 students from UBC in Vancouver, BC, who love automating things. One horrific task all of us have run into when planning events is creating a workable carpool schedule.
It’s a great option for getting your group together: whether it be parents figuring out how best to get their kids to soccer practice, or friends accommodating those without access to a car, carpooling is common, but creating a plan can be painful, to say the least.
You finally come up with a workable schedule: everyone can make it to the event, no driver has to go in annoying, wasteful loops, and everything can start on time.
Then a driver with 4 seats drops out. And you have to do it all over again. No, thanks.
Introducing: groUber
With groUber, never go through that headache again. As an event organizer, create your event, send an invite link to your friends, and create a carpool schedule with one click. As a participant, simply receive a link, RSVP, and inform the organizer of how many seats you have available. You’ll receive a schedule on the day-of.
Using the Radar API and the Google Maps API, along with a bit of algo-magic, our app creates the most optimal carpool schedule for everyone involved. We were hesitant to take on this project at first; the idea of designing an algorithm to find the “best” carpool strategy was intimidating, to say the least. After some research, it turns out this is actually classified as an NP-hard problem. We didn't need to solve the problem in general, though, and were able to design a heuristic algorithm that computes a schedule with fairly good results. There are a few examples of scholarly work in this area. It took great teamwork, persistence, and a decent amount of caffeine to get this working.
Now, drivers won’t have to waste gas, and everyone will get there on time. Someone drops out? No problem, our app will allow you to adjust your schedule, painlessly.
groUber is ride-sharing for your group of friends, without all the expenses and overhead. Do a favor for the environment, and for your stress-levels, and start using groUber today.
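As a toy illustration of the kind of heuristic involved — a greedy nearest-driver assignment, not groUber's actual algorithm — a sketch might look like this:

```python
import math

def assign_carpools(drivers, riders):
    """Greedy heuristic: each rider joins the nearest driver with a free seat.

    drivers: list of dicts {"name", "x", "y", "seats"}; riders are the same
    minus "seats". Coordinates stand in for real geocoded locations.
    """
    plan = {d["name"]: [] for d in drivers}
    for rider in riders:
        best = None
        for d in drivers:
            if len(plan[d["name"]]) >= d["seats"]:
                continue  # this car is already full
            dist = math.hypot(d["x"] - rider["x"], d["y"] - rider["y"])
            if best is None or dist < best[0]:
                best = (dist, d["name"])
        if best is not None:
            plan[best[1]].append(rider["name"])
    return plan
```

A real scheduler would also weigh route detours and arrival times, which is where the problem's NP-hardness comes from.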
Here's our
whitepaper
where you can find a more in-depth justification for this hack! We also have a
slide deck
for the visual learners out there.
What we Learned
Many of us learned React hooks for the first time
This is the first time we applied our knowledge from UBC's data and algorithm design classes to a real-life scenario
We learned what NP-complete problems are and how to use heuristics to simplify them
We learned how to use firebase functions!!!
We're super proud of making a sexy, nice looking app within 36 hours!!!
Installation
Stack
TypeScript, React
Node.js, npm
Firebase, including authentication, hosting, and Firestore
Get it running
npm install
Install dependencies.
npm start
Run for development.
npm lint
Run linting over the project repository.
Deployment happens entirely via GitHub Actions; on any pushes to master, the app will be re-deployed to Firebase hosting.
Usage
Head to
grouber.online
. Sign-up using your Google account.
Create your event with all key details, and send an invite to your to-be attendees. Once they RSVP, you'll see their details on your event dashboard. Then, generate your event's carpool schedule in one click!
Participants must submit their address, whether they're driving, and if so, how many seats they have available in their vehicle.
Contributing
This will be updated after the hackathon! Stay posted for more.
Built With
css
firebase
gcp
google-maps
html
pwa
radar.io
react
typescript
Try it out
grouber.online
github.com | groUber [groo-bur] | 🚗 Schedule carpools without the headache, to save our environment. | ['Michael DeMarco', 'Philly Tan', 'Hasan Altaf', 'Liang Liu', 'Jack He'] | [] | ['css', 'firebase', 'gcp', 'google-maps', 'html', 'pwa', 'radar.io', 'react', 'typescript'] | 86 |
10,415 | https://devpost.com/software/instant-chats-7o32ul | home page
login signup
video calling
Inspiration
Various organisations were struggling to work together due to work-from-home norms, and family relationships were also being affected by COVID-19.
What it does
It bridges the communication gap that occurred in the community due to the pandemic through chat, voice calls, and video calls.
How we built it
We used HTML, CSS, and JavaScript for the front end, PHP with MySQL APIs for the back end, and Python and JavaScript for the chatbot.
Challenges we ran into
As our team members were working remotely, we were unable to help each other out efficiently. We also ran into some problems while setting up the video-call feature.
Accomplishments that we're proud of
Video-call/Voice-call/Attractive UI/Instant Chats
What we learned
We learned the concepts of communication and network engineering.
What's next for instant CHATS
We'll migrate our databases to a more efficient hosting service.
Built With
css3
html5
javascript
mysql
php
python
Try it out
instantchat.epizy.com | instant CHATS | We are aiming for a messaging app that a person can use for both personal and office work: messaging, video calls, and file sharing. An office can set up its own group where every member can share files. | ['Prakash Rajpurohit', 'Amit Singh', 'Shubham Nagpal'] | [] | ['css3', 'html5', 'javascript', 'mysql', 'php', 'python'] | 87
10,416 | https://devpost.com/software/theclimbingcrew | We both grew up in the Chesapeake Bay region, Richard in Maryland and Lu in Virginia. We are passionate about the Bay and Richard and I are forever inspired by the citizen scientists collecting data across the Chesapeake Bay Region. Thank you for your time, effort, and commitment to protect and restore the Chesapeake Bay watershed. We hope this analysis of data gaps not only opens opportunities for more measurement but, celebrates the incredible work you have done so far.
Built With
r
rstudio
Try it out
github.com | Richard and Lu, Dynamic Duo | Connecting the dots between state water quality goals and citizen science data | ['Richard Latham', 'Lu Sevier'] | ['Present to Industry Leaders'] | ['r', 'rstudio'] | 0 |
10,416 | https://devpost.com/software/the-chesapeake-bayes-addressing-challenge-2 | Inspiration: We are inspired by the extensive sampling efforts across the watershed, and motivated to help advance efforts to continue monitoring the health of the Chesapeake Bay.
What it does: Mind the Gap is an on-line data visualization tool created with carefully curated prioritization scales to help easily spot temporal and geospatial data gaps in water quality or benthic sampling collection across the Chesapeake Bay watershed, with further categorization for each HUC-12 based on land use, stream designation, and adjacency to active sampling stations. Users can see data sampling trends across the whole watershed, by state, by local area, by HUC or at a specific CMC or CBP sampling station. The tool can be used to prioritize sampling efforts or create “target plans” to activate new sampling efforts, or re-energize efforts in an area that has not had recent sampling. The code is dynamic to ensure this tool can be modified with additional sampling parameters and/or updated as data collection efforts continue.
How we built it:
Step 1: We created a gap analysis for each sampling variable from the CMC and CBP databases for Water Quality and Benthic data. The data was separated by organization, sample parameter within 5 year intervals, then aggregated and scored based on sampling totals. The analysis focuses on geospatial and temporal gaps across sample parameters, geography and source database. The code to complete this analysis was done using Python, and then results generated into individual geojson files to upload to the ArcGIS online tool for visualization.
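A stdlib-only sketch of that aggregate-and-score step might look like the following. The station IDs, interval boundaries, and score thresholds here are invented for illustration; the real analysis used the CMC and CBP databases:

```python
# Illustrative sketch of Step 1: aggregate samples into 5-year intervals per
# station, score the gaps, and emit GeoJSON-style features. Station IDs,
# years, and thresholds are invented, not CMC/CBP data.
import json
from collections import defaultdict

samples = [  # (station, sample year)
    ("ST-1", 2004), ("ST-1", 2007), ("ST-1", 2016),
    ("ST-2", 2017), ("ST-2", 2018), ("ST-2", 2019),
]

def interval(year, start=2000, width=5):
    lo = start + ((year - start) // width) * width
    return f"{lo}-{lo + width - 1}"

counts = defaultdict(int)
for station, year in samples:
    counts[(station, interval(year))] += 1

def gap_score(n):
    """Fewer samples in an interval -> higher (worse) gap score."""
    return 2 if n == 0 else 1 if n < 3 else 0

features = [
    {"type": "Feature",
     "properties": {"station": station, "interval": iv,
                    "samples": n, "gap_score": gap_score(n)},
     "geometry": None}  # the real files would carry station coordinates
    for (station, iv), n in sorted(counts.items())
]
geojson = {"type": "FeatureCollection", "features": features}
print(json.dumps(geojson)[:80])
```

Each feature then becomes one entry in a geojson file that ArcGIS Online can symbolize directly.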
Step 2: Further analysis was completed to create ‘prioritization scores’ at each HUC-12 level. Using QGIS, we expanded on the concept from existing point location analysis to generalize a descriptive HUC-12 prioritization scale using a prescribed algorithm. The algorithm was developed on a parameter basis, characterized by spatial and temporal sampling frequencies, drawing from geospatial characteristics; specifically, Land Use designation, 303d stream designation, and adjacency to HUC-12 active sampling stations. The resulting ‘prioritization score’ heat-map was layered into the ArcGIS online tool, complementing the geojson gap analysis layers.
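One plausible reading of a "prescribed algorithm" over those factors is a weighted combination. The weights and example factor values below are purely hypothetical and do not reproduce the team's actual scales:

```python
# Hypothetical HUC-12 prioritization score: a weighted sum over the factors
# named in Step 2. All weights and example values are invented here.
WEIGHTS = {
    "temporal_gap": 0.35,          # staleness of recent sampling, scaled 0-1
    "spatial_gap": 0.25,           # distance from active stations, scaled 0-1
    "impaired_stream": 0.25,       # 303(d)-listed stream present (0 or 1)
    "high_impact_land_use": 0.15,  # e.g. agricultural/developed share, 0-1
}

def priority_score(factors):
    """Return a 0-1 score; higher marks a HUC-12 as a better sampling target."""
    return round(sum(WEIGHTS[k] * factors.get(k, 0.0) for k in WEIGHTS), 3)

huc = {"temporal_gap": 0.8, "spatial_gap": 0.5,
       "impaired_stream": 1, "high_impact_land_use": 0.4}
print(priority_score(huc))  # 0.715
```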
Step 3: The ArcGIS online tool was created by combining the geojson layers from our CMC & CBP temporal & geospatial analysis with the qgz QGIS files from our geospatial prioritization value layers. Within ArcGIS, symbols and colors were specifically chosen to easily identify temporal or geospatial gaps further classified by the land-use, stream designation or adjacency to active HUC-12 sampling stations.
Challenges we ran into:
One fun challenge we ran into was inverted LAT/LONG data points which resulted in our first ArcGIS map being in Antarctica!
Accomplishments that we're proud of:
We are proud of our ability to come together as a team during a global pandemic, dedicating a collective 120++ hours to the development of this tool that we hope can help Chesapeake Bay Watershed sampling teams enhance their efforts.
What we learned:
We learned that sample collection across the Chesapeake Bay watershed is very extensive, but largely decentralized. We also learned how to use QGIS, video editing tools, and how to turn raster data and code into geojson files to generate a high quality, dynamic tool for scientific sample data.
What's next for Mind the Gap: We will continue to be an awesome team prepared to present our tool to the Booz Allen and CMC teams for immediate implementation!
Built With
esri
python
qgis
Try it out
www.arcgis.com
drive.google.com | Mind the Gap - an on-line data visualization tool | Mind the Gap is an on-line data visualization tool created to help easily spot temporal and geospatial data gaps in water quality or benthic sampling collection across the Chesapeake Bay watershed. | ['Kelson Shilling-Scrivo', 'Janice Cessna', 'Annie Carew', 'Amy Nyman'] | ['Present to Industry Leaders'] | ['esri', 'python', 'qgis'] | 1 |
10,416 | https://devpost.com/software/hack-the-bay | Please read our readme!
https://github.com/jacob-r-hassinger/Hack-the-Bay
Built With
python
tableau
Try it out
docs.google.com
public.tableau.com | Hack the Bay | How does a tributary of the South River compare with the Chesapeake Bay? | ['Joseph Geglia', 'Jacob Hassinger', 'David Taboh'] | ['Present to Industry Leaders'] | ['python', 'tableau'] | 2 |
10,416 | https://devpost.com/software/hack-the-bay-challenge-3 | Distance total nitrogen
catboost model shap values
correlation of total nitrogen
segmented model
Distribution of ph, salinity and temperature
nitrogen and phosphorus scatter plots
Indicator of nitrogen
Shap of huc12_ separation
feature importance huc12_ separation
Inspiration
The Chesapeake Bay’s pollution levels and water quality have flattened out in recent years; however, that quality was near only 47% in 2012. Excess nutrients such as nitrogen and phosphorus are leading causes of the current state of the bay. The goal was to predict total nitrogen (TN) in the bay, with a focus on the patterns the different features reveal about TN when examining the predictive model.
What it does
Predicts TN in the Chesapeake Bay watershed using land cover, weather, nitrogen oxide emission, and air nitrogen oxide monitoring data.
Data Sources:
Land Cover
- Multi-Resolution Land Characteristics (MRLC)
Check it out
Weather
- North American Regional Reanalysis (NARR)
Check it out
NO2 Pollution
- Point Source NO2 | EPA Air Quality System (AQS)
Check it out
- Air NO2 Monitoring Data | EPA National Enforcement and Compliance History Online Data Downloads (ECHO)
Check it out
Challenges we ran into
Finding relevant and complete data to use.
The lack of domain knowledge.
The major challenge was collecting land cover data before and after 2016. We had to rely heavily on the 2016 land feature data set, making the generalization that land features stayed the same from 2016 through the end of 2019.
Determining the important features to predict TN throughout the whole watershed. Some HUC12 areas had different correlated features than others. To achieve scalability, an overall model approach was taken, but to address the differences between areas, an ensemble model was also created.
Data sparsity across data sources - NO2 emissions data was provided in a yearly rather than a weekly/monthly format. Air NO2 monitoring stations were sparse across the watershed, and few recorded consistent values for the duration of the dataset.
TN values were highly skewed. It was difficult to determine which values should be treated as outliers without specific domain knowledge.
Accomplishments that we’re proud of
The team’s openness and tenacity to complete our first hackathon.
Data wrangling from multiple datasets given the data sparsity.
The insights gained from the data revealing the complex relationships of pollutant flow into the bay.
What we learned
We learned how different models reveal different relationships of how pollutants make it into the Chesapeake Bay
How to utilize and incorporate shap for explaining ML models
Making it better
One improvement to the model would be to collect land cover changes for each year from 2016 to 2020.
Collect more frequent measures of nitrogen oxide emissions instead of generalizing from a single yearly value.
Promote a more uniform method of collection, and encourage people who go into the field to label the type of body of water they are collecting from.
Built With
catboost
geopandas
geopy
matplotlib
pandas
python
qgis
shap
sklearn
urllib3
xgboost
Try it out
github.com
github.com | Shore Is Fun | Predicting total nitrogen which leads to dead zones and unlivable habitats for the local fauna using land features, air quality as well as nitrogen oxide. Using a robust and segmented model. | ['Bryan Dickinson', 'Berenice Dethier', 'Justin Huang', 'Jen Wu', 'Tim Osburg'] | ['Present to Industry Leaders'] | ['catboost', 'geopandas', 'geopy', 'matplotlib', 'pandas', 'python', 'qgis', 'shap', 'sklearn', 'urllib3', 'xgboost'] | 3 |
10,416 | https://devpost.com/software/just-some-eda | Inspiration
The volunteers who work every day to improve the watershed
What it does
It cleans the data and follows a nearest-neighbor imputing strategy, using a workflow designed to retain as much of the original data as possible.
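The project is built with scikit-learn, which offers sklearn.impute.KNNImputer for exactly this kind of fill. As a dependency-free illustration of the underlying idea, a toy 1-NN imputation (invented data, not the project's actual pipeline) could look like:

```python
# Minimal nearest-neighbour imputation sketch in pure Python: each missing
# value is copied from the complete row that is closest in the known columns.
import math

def impute_1nn(rows):
    """rows: equal-length lists with None marking missing values."""
    complete = [row for row in rows if None not in row]
    filled = []
    for row in rows:
        if None not in row:
            filled.append(list(row))
            continue
        def dist(other):  # Euclidean distance over this row's known columns
            return math.sqrt(sum((a - b) ** 2
                                 for a, b in zip(row, other) if a is not None))
        nearest = min(complete, key=dist)
        filled.append([b if a is None else a for a, b in zip(row, nearest)])
    return filled

data = [[7.1, 12.0, 4.0],
        [7.0, 11.5, None],  # one missing water-quality reading
        [8.5, 20.0, 9.0]]
print(impute_1nn(data))
```

Averaging over k nearest rows instead of copying from one, as KNNImputer does, smooths out noise from any single neighbor.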
How I built it
I started with the CMC/CBP data and filled in/added exogenous variables from publicly available NOAA datasets.
Challenges I ran into
Time constraints and missing domain knowledge
Accomplishments that I'm proud of
It was off to a good start.
What I learned
That there is a treasure trove of publicly available weather data.
What's next for Just some eda
If there is ever a call for further help in watershed pollution modeling, I'd like to take what I've learned and expand on it into something more complete and useful.
Built With
python
scikit-learn
Try it out
github.com | Just some eda | This is an unfinished project for problem 3 that would have needed much more work to have a decent model. There may be some useful code in the EDA/integration of data from NOAA sections. | ['Corey Ryan Hanson'] | [] | ['python', 'scikit-learn'] | 4 |
10,416 | https://devpost.com/software/upstream-one-stream-downstream | Inspiration
Problem Statement
Based on analysis of current report cards and visualization tools, we found three broad inadequacies that a human-centered design could address. These are:
Not enough people know about the issue of water quality and the bay, and how it impacts things that they care deeply about.
Not enough people can contextualize their role, either as contributor as potential change agent. They therefore do not feel invested in the issue, or feel powerless.
There is not enough data to tell a complete and robust picture about water quality in the bay watershed.
Existing Solutions
While report cards and visualizations do exist for the bay, they are not designed in a human-centered way, which limits their effectiveness in creating change. These drawbacks include:
They are often hard to understand, and clearly tailored to niche audiences like policymakers instead of average citizens.
The data is retrofitted within artificial boundaries so that one can only see it within the context of a “bubble,” and not within the giant network that it truly is. Users can’t engage with the data in such a way that they truly understand how the system interacts, and how inputs create tangible downstream impacts.
There are many compelling and human-centered stories and “hooks” which are not being told. Important and relevant data, from human (eg - demographics) to ecological (eg - bird habitation zones) geography is missing.
The data is black or white, with users not being able to see data gaps (in time or sample size), and therefore missing a key message with the need to collect better data.
Vision
We thought of an ideal end state for this project, where thousands of users were interacting with the data, conducting testing, sharing resources and research, and picking up calls to action based on the messages the data was telling in real-time. To get to this ideal end state, the following steps would need to be achieved:
Get critical mass of people to use the site
Get visitors to the site hooked on the issue
Get critical mass of people to become testers
Sustain and compound efforts
The second point was most relevant to this design challenge, but all would have to be achieved to be successful. The dashboard would make the data engaging and meaningful to a diverse audience, and be scaffolded and designed in such a way that it would create myriad opportunities for scaling.
What it does
Overview
The site would have a dashboard with data that is (1) visually compelling, (2) relatable, and (3) lends itself to calls to action.
1. Visually Compelling:
Users would enter the site, plug in their address, and then access a variety of visuals showing (1) all of the pollutants that come into their locality, and (2) all the pollutants that their locality adds downstream. The data would be visualized in one or a combination of formats, such as:
A sankey plot (with the river system visualized on a horizontal axis, with the user’s locality at the middle)
A network node visualization
A map showing the bay watershed, with (1) the counties contributing to the user’s water quality, and (2) the counties the user’s locality impacts.
The data would be toggleable based on a number of criteria, including:
Political subdivisions (eg - county level)
Pollutant inputs
Subdivisions of pollutants
Political demographics
2. Relatable:
The data would be visualized in a simple format, and framed in ways that allow the user to filter by area of interest. Users from a variety of interest areas could approach the data in the following ways:
Political: Users could focus the data by political boundaries to infer where policy failures are enabling water quality to be impacted.
Human: Users could see demographics of areas where pollution is occurring, and where it is having the worst effects.
Ecological: Users could visualize pollutants and see how they impact animals and plant-life.
The data would be made easy to understand, and put in real world translations. For example:
A user could click on one of the counties upstream, see the pollutants that that county is creating, and see where each manifests from
Eg: A user could see that “pollutant x comes chiefly from factory runoff,”
A user could click on an area of the map/visualization, see a breakdown of the pollutants, and be able to toggle over them to see how they impact (1) animal life, (2) plant life, (3) human quality of life
Eg : when toggling over “human quality of life” impacts, can see a list of things like “Swimming hazardous, water murky, E-Coli risk, cancerous plastics leaching into drinking water, etc.
Eg: when toggling over “animal life” can see list of things like “turtles will not live in this water, fish develop cancer, contributes to bird deaths, etc.
For each of these lenses, they should be as tangible as possible. Visuals should be included to enable users to emotionally and personally relate to what they are seeing.
Eg: users would see pictures of shorebirds when looking at how certain pollutants contribute to animal health.
Eg: Users could post pictures of rivers when doing testing, which would be uploaded to the site and be tagged as nodes alongside the data points. Users could then scroll through photos up and down river systems to get a visual of what the water looks like day-to-day. Someone could see how turbid the water is at the mouth of the bay, and someone could also see how pristine the mountain water looks compared to where they live.
3. Calls to Action:
The data would be presented in such a way that calls to action would be natural, and the users would actively seek them out. The following are how the site would enable calls to action:
The political boundary overlay, which allows the user to see an approximation of the delta/change their locality creates to the bay, lends itself well to political calls to action since the user could see (1) how they are being failed by poor leadership upstream, and (2) how their localities leadership impacts things the user cares about downstream. The political boundaries could be linked to political data (eg: county and state government offices and officials).
The user would be empowered with data.
The user will want to become a tester, because they can see how powerful the data is in telling the full story.
What's next for Upstream / One Stream / Downstream
We developed this idea for a prototype in a day and aren't designers - we know this isn't a complete project and there is plenty of room to build this out with real data and wireframes. There are a variety of potential long term spinoffs for the website if it were successfully implemented:
User-Driven Engagement:
Have a critical mass of people doing testing, and have a number of users also adding to the platform in other helpful ways. These could include: :
Integrating more data layers, such as business information (eg: firm-level data on farms and factories), ecological data (eg: bird netting zones), etc.
Adding supplemental information which empower political calls-to-action (eg: government contact information, if environmental positions exist or are unfilled, if bills or ordinances are up for a decision/vote, etc.)
Using the centralized data as a means to conduct independent research (eg: modeling theoretical economic concepts with real-world data, drafting data-informed policy proposals, etc.)
Replicability:
The website and its infrastructure could be replicated as a “plug and play” solution, and implemented for other regions/watersheds, domestic or international.
Youth Engagement:
The smoothest on-ramp to scaling testing is by having students (either college, high school, middle school, or elementary) engage in testing. This could be administered by nonprofit or government partners, and would have several benefits:
Relevant tie-in to existing curriculum
Public funding/infrastructure available (field trips, buses, teachers, technology access, etc.)
Strong investment (good opportunity to change minds early in life)
Builds user base at scale.
Focusing on youth would also create a strong potential synergy with the UN Sustainable Development Goals (SDGs). The SDGs are designed to give youth (“generation 2030”) the buy-in and agency to create solutions to the world’s problems. Goal 6 of the SDGs relates specifically to water quality, and ensuring “availability and sanitation for all.” Many of the world’s most polluted rivers are in the developing world, and the value-add for clean water policy implementation would be even greater if taken abroad. A human-centered design approach giving youth the agency to drive water cleanliness would align perfectly with the SDGs, and is worth strong consideration for scalability.
Gamification:
Sustaining testing at scale presents a major challenge, but that could potentially be solved with an application. An app could have the following benefits:
Users could track and contextualize their testing, getting the satisfaction of seeing their data directly impact the system’s data (like PokemonGo, but for science)
Users could show off their activism by enabling cross-posting to social media (showing that they are doing something, not just talking about it)
Users could receive potential rewards/compensation for tests. Funders could see how compensation impacts testing volume, and make informed funding decisions.
Sponsors would have natural tie-ins when it comes to corporate social responsibility (eg: Vegan Burger Co. provides free burger voucher for users who do their first test)
Users would have a potential social component, where they could engage with others over chat, share photos, etc.
The app could be integrated with a learning component, especially if targeting youth.
Built With
canva
Try it out
www.canva.com | Upstream / One Stream / Downstream | Visualizing a human-centered watershed network | ['Daniel Dowdy'] | [] | ['canva'] | 5 |
10,416 | https://devpost.com/software/chesapeake-bay-data-quadrant | Inspiration
What it does
Chesapeake Bay Data Quadrant (CBDQ)
is a data science project that supports
Chesapeake Bay
conservation by providing four interactive environmental & water quality reports.
CBDQ
contributes to the
Chesapeake Monitoring Cooperative (CMC)
water quality data initiative and consists of the following four components:
Water Quality Restoration Visualization
GIS Data Gap Map
Water Pollution Analysis
Water Quality Report Card
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Chesapeake Bay Data Quadrant
Built With
esri
python
Try it out
bitbucket.org | Chesapeake Bay Data Quadrant | Chesapeake Bay Data Quadrant (CBDQ) is a data science project that supports Chesapeake Bay conservation by providing four interactive environmental & water quality reports. | ['Warp Smith'] | [] | ['esri', 'python'] | 6 |
10,416 | https://devpost.com/software/bh | Homepage
Dial 1 - Water Quality
Dial 2 - Community
Dial 3 - Recreation
Dial 4 - Covid
Dial 5 - Whats new
Inspiration
As a frequent visitor to the Chesapeake Bay, understanding and providing awareness about the water quality and bay conservation is our inspiration.
What it does
This is an integrated platform to get all related information in one place. In addition to metrics measuring the health of the bay, this design adds in the community factor. Being interactive and engaging, it will gain more visitors and can be used to educate and involve the local user.
How we built it
This is built using Tableau. It is user-friendly and can be easily scaled to cover other regions of the bay. It can be published online and can use different data sources, both on-prem and in the cloud.
Challenges we ran into
Understanding the data and how it is measured is challenging. This being a new domain, time is required to understand and analyze the data.
Accomplishments that we're proud of
Putting together an interactive design that is both fun and informative. Adding maps that can be searched using Zipcode and click features enhance the user experience.
What we learned
A lot about the data behind water quality.
What's next
Plan is to integrate real time data sources including social media/news feed and add more measures for water quality.
Built With
tableau
Try it out
public.tableau.com
github.com | Water Quality ReportCard | A one-stop portal for the Chesapeake Bay community | ['Ranjani Chandran', 'Santhosh Kumar'] | [] | ['tableau'] | 7 |
10,416 | https://devpost.com/software/communicating-water-quality-information | Inspiration
I have noticed that many water quality and watershed group websites would be difficult for their intended audiences to understand. The Federal government Plain Language web site has statistics showing surprising low levels of scientific literacy in the population as a whole. Jargon and technical terms defined in the Clean Water Act are unlikely to be easily understood by the general public. Proper interpretation of graphs can be a challenge for some people. Others may encounter difficulties when information visualization relies on full color perception, ability to read small text, or tolerate low contrast. Likewise, video without closed captions, images without descriptions, and user interfaces that require physical dexterity can decrease accessibility.
What it does
This project is more of a proof-of-concept for enhancing a web Report Card or monitoring data page with descriptions intended for different audiences. This initial illustration is for Fecal Bacteria, written at the user's choice of Basic, General, or Expert level. The approach would also work for different languages, geographic areas, types of stakeholders, or other factors to help focus your message for portions of the intended audience.
How we built it
No code was written for this simple demo of the concept of alternate text.
Accomplishments that we're proud of
While the general idea was identified at the start of the Hackathon, team member recruitment was not sufficient for developing working code and UI within the event time frame. When the deadline was extended with allowance for partial completion, we revived the project concept with scope appropriate for the limited time. While not complete, it can serve as a resource or encouragement for others to adapt content for different groups.
What we learned
Even the drastically reduced scope did not fit easily within the deadline. Video production hit a late-night glitch that we hope to resolve soon.
What's next for Communicating Water Quality Information to Diverse Audiences
We are looking forward to feedback on the perceived benefits of further implementation.
It could lead to the creation of a repository (github or other) for plain language descriptions of report card and monitoring data.
Built With
design-concept-no-code
Try it out
github.com
EnvironmentalInformatics.com | Communicating Water Quality Information to Diverse Audiences | Greater outreach/education impact for the full range of personal interests and prior knowledge | ['J Campbell', 'Cynthia Campbell, Ed.D.'] | [] | ['design-concept-no-code'] | 8 |
10,416 | https://devpost.com/software/challenge-1-for-hackthon | Inspiration
I wanted to see if I could explore data science in the field of study when I was in college: "Civil Engineering"
What it does
I was able to create a pretty interesting Tableau viz that highlighted an abnormality in the electrical conductivity data
How I built it
Tableau and Python were utilized
Accomplishments that I'm proud of
I'm proud that I was able to play around with Tableau and learn a new tool
What I learned
Tableau
Built With
python
tableau
Try it out
github.com | Challenge 1 for Hackthon | The data set was cleaned using jupyterlab and then fed into tableau for analysis | ['Brian Tam'] | [] | ['python', 'tableau'] | 9 |
10,416 | https://devpost.com/software/challenge-2-concept-thoughts | Per the hackathon extension notice, I am submitting Challenge 2 concept ideas in lieu of a finished product.
It is commonly understood that there are spatial and temporal gaps in the existing Chesapeake Bay monitoring data. The first step would be to identify an area of the bay watershed, ideally the size of a few HUC12 subwatersheds or larger, that includes fairly diverse land uses and contains recent data coverage located above and below those land uses considered most impactful (e.g. agriculture, urban/developed, mining areas). Since this Challenge focuses on a restoration case study, it is important to identify consistently well-monitored locations upstream and downstream of land use that was formerly identified as impactful and more recently identified as less impactful, or vice versa. This series of data facilitates qualitative modelling of how negative or positive changes influence water quality.
Identification of the subwatershed area would be accelerated by using a fusion of R and GIS to 1) filter out any data that doesn't meet basic spatial and temporal thresholds (e.g. water quality monitoring points with <10 visits per year for the last five years), 2) filter out any monitoring stations that do not contain the basic subset of parameters (e.g. pH, water temperature, benthic, nitrogen, phosphorous, etc), and 3) visually accentuate the remaining monitoring points that received concentrated attention per year (e.g. 50+ visits per year for most recent 5 years). The more visits per year the better, as trends associated with weather events could be studied closely and the largest amount of observations is desirable for modelling purposes. Also, if possible, it may be useful to select a headwater area to avoid substantial contributions from upstream influences.
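The write-up proposes R and GIS; purely as an illustration of the step-1 threshold filter, a Python prototype might read as follows (the thresholds and station records are invented examples):

```python
# Illustrative station filter for step 1: keep only stations with at least
# 10 visits per year over the last five years. Records are invented.
from collections import Counter

REQUIRED_YEARS = range(2015, 2020)
MIN_VISITS = 10

# (station, year) visit records: ST-A is sampled steadily, ST-B only one year.
records = [("ST-A", year) for year in REQUIRED_YEARS for _ in range(12)]
records += [("ST-B", 2018)] * 30
visits = Counter(records)

def keep(station):
    """True if the station meets the visit threshold in every required year."""
    return all(visits[(station, year)] >= MIN_VISITS for year in REQUIRED_YEARS)

stations = {station for station, _ in records}
kept = sorted(station for station in stations if keep(station))
print(kept)  # only ST-A survives the filter
```

The parameter-subset filter in step 2 works the same way, just keyed on (station, parameter) instead of (station, year).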
After identification of the watershed region, additional geospatial data should be gathered including:
Multiple years of land use and land cover from the U.S. Geologic Survey National Land Cover Database;
Past weather data (e.g. precipitation, temperature) from the National Oceanic and Atmospheric Administration; and
Geology and/or soils from the U.S. Geologic Survey and/or Natural Resource Conservation Service.
These data sets would be joined to the existing monitoring data, likely through GIS overlay and extraction methods, by using the monitoring station geospatial locations.
With all data assembled in a single data file, the remaining data analysis steps would be completed in R. First, creating a few additional variables may be useful. For instance, knowing what land use or the length of stream lying immediately upstream from an existing monitoring station. Also, as with all data files, some tidying will be required to fit the intended use. In this case, if the multiple similar-named parameters (e.g. pH, pH.6, pH.9, etc) can be labelled the same, that would streamline the data set. Next, exploratory data analysis should be applied to evaluate basic univariate patterns, bivariate relationships, potential missing data, and any data oddities. Further tidying should be applied as appropriate.
After any restructuring and tidying, machine learning algorithms could be applied to evaluate which variables have relationships with the response variable. This concept plan assumes benthic rating as the response variable. Initially, a multiple linear regression could be applied, although since the response is categorical, a classification model is likely most appropriate. A suite of classifiers including multinomial logistic regression, SVM, LDA, QDA, and Random Forest could quickly be trialed through R's caret package. Further refinement could occur after variable selection analysis using Random Forest, PCA, or other methods, and evaluation of the changes or improvements in classification error.
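The concept names R's caret for quickly trialing a suite of classifiers; an equivalent loop, sketched here in Python's scikit-learn on synthetic stand-in data (QDA omitted for brevity), could look like:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for station features vs. a 3-class benthic rating.
X, y = make_classification(n_samples=300, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)

models = {
    "multinomial logistic": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "LDA": LinearDiscriminantAnalysis(),
    "random forest": RandomForestClassifier(random_state=0),
}
# 5-fold cross-validated accuracy for each candidate classifier.
results = {name: cross_val_score(clf, X, y, cv=5).mean()
           for name, clf in models.items()}
for name, acc in results.items():
    print(f"{name}: {acc:.2f}")
```

On real monitoring data, the comparison of these scores (and the later Random Forest variable importances) would drive the refinement step described above.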
Thank you for the opportunity to submit these thoughts as a rough concept. At minimum, it was exciting to visualize a possible approach and daydream about how fun the execution would be.
Built With
qgis
r | Challenge 1 Concept Thoughts | Analyzing how changes in land use or land cover have affected the Chesapeake Bay water quality through geospatial data synthesis and machine learning modelling. | ['Janice Cessna'] | [] | ['qgis', 'r'] | 10 |
10,416 | https://devpost.com/software/challenge-3-concept-thoughts | Per the hackathon extension notice, I am submitting Challenge 3 concept ideas in lieu of a finished product.
It is commonly understood that there are spatial and temporal gaps in the existing Chesapeake Bay monitoring data. The first step would be to identify an area of the bay watershed, ideally the size of a few HUC12 subwatersheds or larger that includes fairly diverse land uses and contains recent data coverage located above and below those land uses considered most impactful (e.g. agriculture, urban/developed, mining areas). Identification of this area would be accelerated by using a fusion of R and GIS to 1) filter out any data that doesn't meet basic spatial and temporal thresholds (e.g. water quality monitoring points with <10 visits per year for the last five years), 2) filter out any monitoring stations that do not contain the basic subset of parameters (e.g. pH, water temperature, benthic, nitrogen, phosphorus, etc), and 3) visually accentuate the remaining monitoring points that received concentrated attention per year (e.g. 50+ visits per year for most recent 5 years). The more visits per year the better, as trends associated with weather events could be studied closely and the largest amount of observations is desirable for modelling purposes. Also, if possible, it may be useful to select a headwater area to avoid substantial contributions from upstream influences.
After identification of the watershed region, additional geospatial data should be gathered including:
Land use and land cover from the U.S. Geologic Survey National Land Cover Database;
Past weather data (e.g. precipitation, temperature) from the National Oceanic and Atmospheric Administration; and
Geology and/or soils from the U.S. Geologic Survey and/or Natural Resource Conservation Service.
These data sets would be joined to the existing monitoring data, likely through GIS overlay and extraction methods, by using the monitoring station geospatial locations.
With all data assembled in a single data file, the remaining data analysis steps would be completed in R. First, creating a few additional variables may be useful. For instance, knowing what land use or the length of stream lying immediately upstream from an existing monitoring station. Also, as with all data files, some tidying will be required to fit the intended use. In this case, if the multiple similar-named parameters (e.g. pH, pH.6, pH.9, etc) can be labelled the same, that would streamline the data set. Next, exploratory data analysis should be applied to evaluate basic univariate patterns, bivariate relationships, potential missing data, and any data oddities. Further tidying should be applied as appropriate.
After any restructuring and tidying, then machine learning algorithms could be applied to evaluate which variables have relationships with the response variable. This concept plan assumes benthic rating as the response variable. Initially, a multiple linear regression could be applied, although since the response is categorical, a classification model is likely most appropriate. A suite of classifiers including multinomial logistic regression, SVM, LDA, QDA, and Random Forest could quickly be trialed through R's caret package. Further refinement could occur after variable selection analysis using Random Forest, PCA or other method, and evaluation of the changes or improvements in classification error.
Thank you for the opportunity to submit these thoughts as a rough concept. At minimum, it was exciting to visualize a possible approach and daydream about how fun the execution would be.
Built With
qgis
r | Challenge 3 Concept Thoughts | Analyzing how environmental factors including land use, land cover, weather, atmosphere, and geology influence water quality, through geospatial data filtration and machine learning modelling. | ['Janice Cessna'] | [] | ['qgis', 'r'] | 11 |
10,416 | https://devpost.com/software/action-oriented-report-card | Inspiration: I was inspired by customer experience dashboards created at the VA Veterans Experience Office. The way in which data is presented and is accessible has huge impact to how useful it can be.
What it does: The report card should relay comprehensive information in a way that is understandable, insightful, and action-oriented.
How I built it: I sketched out ideas and did the rough prototype using only Powerpoint!
Challenges I ran into: As I started to experiment with ideas, I started to recognize just how many potential ways the data could be presented and how exciting a deeper dive could be. My biggest challenge was not enough time!
Accomplishments that I'm proud of: Was fun to timebox myself to see what I could come up with in a short amount of time.
What I learned: Ideas that have weight can't be done in a vacuum. It's important to bounce ideas off of others and to include experts where you don't have the skillset.
What's next for Action-Oriented Report Card: Recruit UX designers and data scientists to prototype further, and then GET IN FRONT OF USERS. Designers can come up with beautiful creations, but until users verify that there is benefit in the creation, it means nothing.
Built With
powerpoint | Action-Oriented Report Card | To make data actionable, it must be accessible, digestible, and robust enough to draw correlations and relationships. | ['Cameron Hanson'] | [] | ['powerpoint'] | 12 |
10,416 | https://devpost.com/software/lithogeochemistry-and-water-quality-in-the-cb-watershed | Geologic map of MD and VA reproduced in Python with water quality sampling sites from CBP and CMC represented as black squares.
p-value heatmap for comparison of nitrate concentrations of the lithogeochemical rock types. Most rock types are significantly different
Inspiration
There has been ample work on how land use and land cover change affects water quality in the Chesapeake Bay Watershed (CBW), and recent studies have shown that spatial location is one of the most important factors determining pollution loads to the bay. One important spatial factor is near-surface bedrock geology, which comes into contact with groundwater, streams, and rivers throughout the CBW. Some rock types contain minerals and other components that are reactive towards water, and can thus change the chemistry of the water flowing through the watershed and into the Chesapeake Bay. Geologic maps display information on the spatial distribution of different rock types in a given region. However, traditional geologic maps emphasize age and stratigraphic relationships between rock types rather than their chemical reactivity. A study by the USGS devised a lithogeochemical geologic map where the map units are classified based on their composition, mineralogy, and texture. In this way, we can see the spatial distribution of rock types based on their potential effects on water chemistry.
This partial hackathon submission utilizes the lithogeochemical map produced by the USGS along with water quality data from the Chesapeake Bay Program and Chesapeake Monitoring Cooperative to determine if this lithogeochemical classification can be of use in predicting water quality across the CBW. As a partial submission, the main goal of this project is to draw attention to the importance of rock geochemistry in understanding and restoring the Chesapeake Bay Watershed, and to be a starting point for further studies that will utilize this information.
What it does
I performed hypothesis tests on the geology and water quality data, using nitrate as an example, to determine if pollution loads were significantly different at water sampling stations located in different lithogeochemical regions. This can allow researchers and policy makers to further understand whether geology is important in controlling water quality in the CBW. Using a lithogeochemical map rather than a traditional geologic map makes it easier to understand which physico-chemical mechanisms throughout the watershed are shaping water quality, because the map is devised with these processes in mind. This analysis can enable decision makers to decide where and how to plan land use based on geology.
How I built it
This preliminary analysis was built entirely in Python. The Python geopandas package was used to read in the geospatial data and to spatially join the water quality data to the geology data. The Python pandas package was used to manipulate dataframes, and the matplotlib and seaborn packages were used for data visualization. Finally, the scipy package was used to perform the statistical hypothesis tests. The test used was the Mann-Whitney U test, which is a non-parametric hypothesis test. P-values were obtained from this test with a significance level of 0.05 to determine whether nitrate concentrations differed significantly between the lithogeochemical rock types.
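A minimal illustration of the Mann-Whitney U comparison with scipy, on synthetic nitrate values rather than the project's data (rock-type labels are made up for the example):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
# Hypothetical nitrate samples (mg/L) at stations on two lithogeochemical units.
rock_type_a = rng.normal(3.0, 0.8, 50)
rock_type_b = rng.normal(1.5, 0.8, 50)

# Non-parametric test: are the two distributions significantly different?
stat, p = mannwhitneyu(rock_type_a, rock_type_b, alternative="two-sided")
print(p < 0.05)  # True: the two rock types differ at the 0.05 level
```

In the actual analysis, this test would be run pairwise across all lithogeochemical rock types to produce the p-value heatmap shown in the gallery.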
Challenges I ran into
Working solo was my biggest challenge, as I did not have teammates to bounce ideas off of or to answer any questions I had. As my knowledge of predictive modeling is limited, this severely constrained the progress of this project. Additionally, the complexity of working with three-dimensional data posed a challenge: where the effects of time and of spatial data (below and above ground) are important, it is difficult to determine the accuracy of the results.
Accomplishments that I'm proud of
Utilizing a unique geological dataset. This untraditional geologic map can bring a new perspective to modeling geographical influences of nutrient pollution and water quality in the CBW.
Working alone and figuring out solutions to problems that I encountered on my own.
Taking advantage of my knowledge as a geochemist and combining it with data science to perform an analysis.
What I learned
There are significant differences between the nitrate loads of different rock types classified lithogeochemically. This indicates that rock lithogeochemistry can be a useful predictor of water quality in the CBW when incorporated into predictive machine learning models. Some rock types do not have significantly different nitrate loads, and the reasons for this can be further investigated by determining what characteristics make these rock types react similarly.
What's next for A Potential Litho-geochemical Predictor of Pollution Loads
Collaboration: Team-up with data scientists and machine learning scientists to build a predictive model around geology
Utilize land use/land cover data sets to deconvolve other possible factors affecting water quality in differing lithogeochemical regions
Clean up data more so that temporal changes are taken into account
Determine how rock type affects other water quality indicators such as phosphate and pH
Understand why certain rock types do not have significantly different nitrate loads
Incorporate surficial rock types and groundwater data into analysis
Incorporate lithogeochemical rock types into predictive models of Chesapeake Bay water quality
Built With
geopandas
matplotlib
pandas
python
scipy
Try it out
github.com | A Potential Litho-geochemical Predictor of Pollution Loads | This preliminary analysis shows that pollution loads in the Chesapeake Bay watershed differs between rock types in the region, indicating that geology can be a viable water quality predictor. | ['Sydney Riemer'] | [] | ['geopandas', 'matplotlib', 'pandas', 'python', 'scipy'] | 13 |
10,416 | https://devpost.com/software/modeling-toxic-phosphorus-levels-on-the-potomac-river | Harmful Algal Bloom - the effects of phosphorus pollution
Inspiration
Harmful algal blooms are on the radar of state agencies and local communities alike. From producing toxins harmful to humans and aquatic animals, through forming a thick mat that prevents sunlight from reaching the lower layers, to depleting the oxygen levels needed by aquatic organisms to survive, the rapid growth of algae signifies an alarming level of water pollution. But the process, called eutrophication, starts way before we can see algae bloom on the water surface. Eutrophication in modern-day societies is sped up by land-use practices that lead to excessive amounts of nutrients entering the water body, causing a growth spurt in first the plant (such as algae), then the animal population. In this process, phosphorus as a key nutrient plays an important role both in producing and in controlling algae blooms. Phosphates are essential to cell reproduction. This means that the plant population can only grow to the extent supported by the amount of phosphates in the water, regardless of the availability of other nutrients. While, therefore, a high level of phosphorus stimulates rapid algae growth, controlling the level of phosphorus in the water helps maintain a healthy aquatic ecosystem.
What it does
The first step towards controlling the total phosphorus amount in the water body is to monitor when levels are reaching a critical point. Our model predicts total phosphorus in the Chesapeake Watershed from measured levels of:
active chlorophyll
dissolved oxygen
ammonium nitrogen
nitrate nitrogen
pH, corrected for temperature
orthophosphate phosphorus
salinity
turbidity (Secchi depth)
total alkalinity
total dissolved solids
total Kjeldahl nitrogen
total nitrogen
total suspended solids
turbidity (nephelometric method)
water temperature
as three distinct categories:
1) healthy amount,
2) increased amount that stimulates plant growth, and
3) problematic amount that projects unhealthy algae blooms.
Code
First, we built a random forest classifier model to predict the phosphorus levels on the Potomac River from the feature parameters. The testing and cross-validated accuracy were 97%.
Next, we extended the model to include the entire Chesapeake Watershed. The testing and cross-validated accuracy held at 97%, suggesting that the model scales well and generalizes well.
Last, we experimented with predicting chlorophyll (as a proxy for algal bloom) from all other parameters on the Potomac River using the RandomForestRegressor algorithm of the sklearn library. In theory, changes in the chlorophyll level do not follow changes in nutrient levels immediately. For this reason, we selected only those data points that contained measurements from consecutive days. Then, we shifted observed chlorophyll values back in time by 1 day, 3 days, and 7 days. This ensured that the model predicts chlorophyll levels 1 day, 3 days, and 7 days in the future. So far, the model evaluation metrics do not look promising. The cross-validated R-squared score consistently stays below zero, suggesting that the model, as it is, explains less of the variability in the chlorophyll level than the mean level of chlorophyll does.
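The target-shifting step can be sketched with pandas `shift` on a toy daily series (the real pipeline first restricted to consecutive-day measurements):

```python
import pandas as pd

# Hypothetical daily chlorophyll readings at one station.
ts = pd.DataFrame(
    {"chlorophyll": [2.0, 2.2, 2.5, 3.1, 3.0, 2.8]},
    index=pd.date_range("2020-06-01", periods=6, freq="D"),
)

# Shift the response back k days so today's features line up with the
# chlorophyll value observed k days in the future.
for k in (1, 3):
    ts[f"chl_t+{k}"] = ts["chlorophyll"].shift(-k)
```

Rows whose shifted target is NaN (the last k days) are dropped before training the regressor.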
Challenges we ran into
The most challenging part of building the model was dealing with the lack of consistency in data collection, the missing values, and cleaning and merging datasets, as well as the lack of subject matter expertise.
What’s next for Modeling Toxic Phosphorus Levels in the Chesapeake Watershed
Incorporate data sets with more features (such as benthic data and weather data).
Try a neural net classifier.
Built With
python
scikit-learn
Try it out
github.com | Modeling Toxic Phosphorus Levels in the Chesapeake Watershed | Elevated phosphorus level in water is a precursor to harmful algae blooms. Our model predicts total phosphorus in the Chesapeake Watershed Area from water quality metrics. | ['Bibor Szabo', 'Clay Carson'] | [] | ['python', 'scikit-learn'] | 14 |
10,416 | https://devpost.com/software/locating-chesapeake-bay-water-quality-study-areas | Proportion of HUC12's with Benthic Measurments 2017-2019
Mean-Aggregated Time Series Benthic Measurements, Little Seneca Creek and Middle Potomac-Catoctin and Rapidan-Upper Rappahannock
Site Time Series Benthic Measurements, Little Seneca Creek
Proportion of HUC12's with Benthic Measurements 2017-2019
Time Series Benthic Measurements, Chesapeake Bay
Hack the Bay Chesapeake Bay Water Quality Hackathon
Challenge 1: Develop a Restoration Case Study (Time Series / Visualization Challenge). Using data from CMC, the Chesapeake Bay Program, and supplementary sources, tell a story about how water quality has changed over time in the Chesapeake Bay watershed.
Contents:
Jupyter Notebook
Presentation pdf
Presentation Video
The Chesapeake Monitoring Cooperative (CMC) and The Chesapeake Bay Program (CBP) contain a variety of water quality measurements across time and across the entire Chesapeake Bay watershed. Because the Chesapeake Bay watershed is so important for tourism and fishing industries and is so heavily impacted by land use and population, these efforts have been created to attempt to paint a picture of the long-term health of the watershed.
The Middle Potomac-Catoctin and Rapidan-Upper Rappahannock HUC8's were selected for focus because they interface directly with the Potomac river which feeds into the Chesapeake Bay, and also because the regions contain a variety of land uses that reflects the overall variety of the Chesapeake Bay watershed.
Within this region, focus was placed on the HUC12's that contained benthic quality measurements from the past three years. One HUC12, the Little Seneca Creek subwatershed was identified as the single HUC12 in the region that displayed improvement in benthic quality over the past three years.
Using these benthic measurements, time series of mean benthic rating per HUC12 can be displayed for both Little Seneca Creek and the overall Middle Potomac-Catoctin and Rapidan-Upper Rappahannock subwatersheds. However, after plotting the measurement locations for each year of measurements, it becomes apparent that the measurements are generally not in the same locations year after year. For example, in 2006 a wider variety of locations were sampled, and this led to a severe decrease in the mean benthic rating for that year, even though the overall benthic quality of the subwatershed may not have changed in that way. In order to construct a true time series, it becomes necessary to examine only the locations with repeated measurements.
Plotting the time series of each individual repeated location offers a more complex picture than the aggregated mean for the entire watershed. This allows for locating the specific sites that may be improving or degrading over time. This analysis can be aggregated to the entire Chesapeake Bay watershed to display all time series of repeated measurements, which can further be clustered and divided to locate areas experiencing improvement or degradation.
The time series generated by restricting to repeated measurements can be differenced and summed to determine the overall trend of each site (positive for improving benthic rating; negative for degrading). Mapping this trend across the entire Chesapeake Bay watershed can illustrate the regions experiencing either positive or negative changes, directing attention to places that are benefitting from best management practices and places that may need to implement best management practices. Alongside this map of benthic rating trends, the locations of the CBP and CMC water quality measurements can be mapped and compared. Overall it appears that there is poor overlap between the benthic measurements and the water quality data, which presents the challenge of complicating any attempt to correlate water quality variables such as pH, water temperature, and salinity to the benthic macroinvertebrate ratings. It is recommended that the CBP and CMC focus on adding more water quality regions where benthic quality measurements are taken in order to provide a holistic view of watershed health.
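The difference-and-sum trend described here telescopes to last minus first rating for each site; as a pandas sketch with made-up ratings:

```python
import pandas as pd

# Hypothetical repeated benthic ratings at one site, ordered by survey date.
site = pd.Series([2.0, 2.5, 2.4, 3.0, 3.2])

# Differencing then summing telescopes to (last - first): the net trend.
trend = site.diff().sum()
print(round(trend, 2))  # 1.2 -> positive, so this site is improving
```

Computed per repeated-measurement site and mapped, the sign of `trend` separates improving from degrading locations across the watershed.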
Citations:
The following data sources were used in this analysis:
CBP and CMC Water Quality Data
Benthic Macroinvertebrates Data
USGS Watershed Boundary Dataset
The following Python libraries were used in this analysis:
Pandas
NumPy
GeoPandas
CartoPy
MatPlotLib
Built With
cartopy
geopandas
matplotlib
pandas
python
Try it out
github.com | Locating Chesapeake Bay Water Quality Study Areas | Exploring Benthic Macroinvertebrates Time Series in Chesapeake Bay Watershed | ['Marina Baker'] | [] | ['cartopy', 'geopandas', 'matplotlib', 'pandas', 'python'] | 15 |
10,416 | https://devpost.com/software/sean-isaac | Inspiration
It was an exciting opportunity to join this hackathon because we are both interested in geography and data science but had yet to take part in a geography-related data science challenge. We were inspired by the experts who introduced us to spatial visualization tools through the introduction videos.
What we did
Visualizing the data
The spatio-temporal data given to us was very complex and difficult to perceive, so we created an interactive visualization using Python Dash that allows us to see spatial relationships between pollution sources and sensor stations, as well as the time-series for each sensor station to see how associations change over time. This allowed us to discover some preliminary hypotheses such as the seasonality of nitrogen and pollution levels, as well as nitrogen pollution being higher near urban areas such as Baltimore.
Transforming the data
Then, we transformed the given data into relevant input features for machine learning modelling. We found it important to capture the spatial relationships (i.e. upstream and downstream) between HUCs, and hence used innovative data representations like directed acyclic graphs to represent HUC dependencies. We also tried to use the land use data by counting pixels of certain colors using QGIS, but as the files were huge we could only process land use data for 8 HUCs.
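A directed-acyclic-graph representation of HUC drainage can be as simple as a dictionary of downstream links. This stdlib sketch (with made-up HUC codes, not the real dependency data) recovers everything upstream of a unit:

```python
# Hypothetical downstream links: each HUC drains into exactly one HUC,
# so the links form a directed acyclic graph.
drains_to = {"HUC_A": "HUC_C", "HUC_B": "HUC_C", "HUC_C": "HUC_D"}

def upstream_of(huc):
    """All HUCs whose water eventually reaches `huc`."""
    ups = {u for u, d in drains_to.items() if d == huc}
    for u in set(ups):
        ups |= upstream_of(u)
    return ups

print(sorted(upstream_of("HUC_D")))  # ['HUC_A', 'HUC_B', 'HUC_C']
```

Features such as upstream land-cover totals can then be accumulated by summing over each station's `upstream_of` set.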
Modelling the data
We then used linear regression to investigate the statistical significance of the different predictors, then used XGBoost, a machine learning algorithm, to uncover non-linear relationships as well as variable importance. This settles our first aim of uncovering the underlying factors affecting nitrogen and phosphorus pollution levels. Then, for each point, we developed a time-series model (SARIMAX) with the goal of predicting future pollution using past data, splitting our data into train and test samples to ensure the generalizability of the fitted models. We were able to fit a model that makes suitably good predictions over the test data.
What we learned
We're happy to have learnt so many new skills and enhanced our data science arsenal, and also to have applied them to enhance the understanding of the environment and ecosystems. Hopefully this hackathon is just the start of promising data science careers for us!
(Note: Code and presentation PDF are in the public github repository https://github.com/thamsuppp/hackthebay)
Built With
dash
mapbox
plotly
python
qgis
Try it out
thamsuppp.pythonanywhere.com | Sean & Isaac hacked the bay | We created an interactive viz for the Chesapeake Bay, analysed the factors affecting nitrogen & phosphorous pollution with ML, and deployed time-series models to predict future pollution. | ['Sean Lim', 'Isaac Tham'] | [] | ['dash', 'mapbox', 'plotly', 'python', 'qgis'] | 16 |
10,416 | https://devpost.com/software/case-study-on-chesapeake-bay-water-temperature | Inspiration
The state and federal partners have invested heavily in restoration efforts, amounting to nearly $1.5 billion of state and federal funding in 2019 alone. I am interested in seeing whether those efforts have paid off. The indicator I picked is "Water Temperature": for example, has the summer or the winter water temperature been higher than usual?
What it does
Data Collection
I chose "Water Temperature" as the indicator. I used the CMC and CBP water quality data set provided by Booz Allen Hamilton.
For the water temperature, all of the filtered data sets for the analysis and visualization are from the CMC database.
(Link to the raw data)
Data Preprocessing
* Year: I used Python pandas to trim date and time elements from the Date parameter, and filtered 2015 - 2020 as the time window I am interested in.
* Season: I categorized May to August as Summer , and November to February as Winter.
Exploratory Data Analysis
* Methodology:
1. Stationarity:
Use ACF, PACF and the Augmented Dickey–Fuller test to check stationarity. The water temperature is stationary time-series data.
2. Summary Statistics:
After plotting water temperature from 2015 to 2020, I found that the winter temperature range has something interesting to look at, especially after 2017.
3. Geospatial Maps:
I used the 5-year water temperature for the color scale, so I would be able to see whether a certain range of temperature increases in a certain area. In 2020, there are several unusually high temperature points.
How I built it
Data Visualization
I created a seasonal overview dashboard in Tableau, using year, season, and state parameters to invite end users to take a look at the data set. Additionally, with Tableau's geospatial mapping feature, I was able to check on the land use in the above region, the Yellow Breeches Creek in Pennsylvania, which is a common fishing resort.
Challenges I ran into
Not all the states in the Chesapeake Bay area have comprehensive data in the 5-year window.
Most data sets are very large, and I hope to be able to access the data sets on the FTP in the near future
Accomplishments that I'm proud of
This project identified the unusually high water temperature in the 2020 winter in the Yellow Breeches Creek in Pennsylvania, which is a common fishing resort. Additionally, I designed an interactive Tableau dashboard to invite end users to check the year-on-year seasonal water temperature in the Chesapeake Bay area, and drill in to see the red-flagged regions with unusually high water temperatures.
What I learned
Throughout this project, I learned how to pre-process large quantities of geospatial data, how to create maps with the Python geopandas library, and how to design a Tableau dashboard on top of it. I was highly rewarded by the sense of achievement from learning and implementing new things.
What's next for Case Study on Chesapeake Bay Water Temperature
*Recommendation
Due to the time limit, I have not yet had time to dig into the residents' occupations, conductivity, recreation, and harvested products in the Yellow Breeches Creek in Pennsylvania. From the satellite images, I saw that most of the land use there is agricultural. I assume agriculture and fishing might be the major land and water uses there.
I should have checked the data collection times in the specific region, as well as weather conditions.
I had not checked the air temperature and should have mapped a correlation matrix to check the correlation between the air temperature and the water temperature.
*Conclusion
In conclusion, I was super thrilled to be able to touch on this topic and have a chance to look at the data sets, read the background documents, and work with my great teammates in the 2020 Hack the Bay Hackathon. While state and federal restoration efforts and funding have poured in since 2019, the water temperature in the summer of 2020 did not show high deviation from the previous year, though the winter temperature had several outliers. The two highest readings are in Pennsylvania, and the third highest is in Maryland. However, the highest is 43 degrees Celsius, while the second and third are 26 and 24 degrees Celsius. The "Yellow Breeches Creek, Pennsylvania Fly Fishing Reports & Conditions" website did mention high water temperature, but it stated that the water quality is good. As a result, I propose that the 43 degrees Celsius in the winter might not be a typo from data collection. I look forward to expanding this case study further in the days to come.
Built With
python
tableau
Try it out
github.com | Case Study on Chesapeake Bay Water Temperature | The state and federal governments have input large efforts in Chesapeake Bay restoration since 2019. I am starting this project with the hope to see has those efforts paid off? | ['Jen Chen'] | [] | ['python', 'tableau'] | 17 |
10,416 | https://devpost.com/software/effect-of-land-cover-on-pollution-in-chesapeake-watershed | 1. Visualizing positive and negative correlation showed groups of land cover types and correlation with nitrogen.
2. A boxplot of the nitrogen levels broken down by month across all locations shows seasonal change.
3. The average nitrogen by station, broken out by month.
4. Average nitrogen by station.
5. A comparison of hydrological unit code (HUC) sizes. HUC-8 is the largest, followed by HUC-10, with HUC-12 being the smallest.
6. Waterways upstream of sampling stations were selected from the NHD and used to identify which watersheds influenced water samples.
7. Table of R2 values for the models built on land cover extractions and basic data.
8. Table of R2 values for the models comparing different land cover extractions.
9. Feature importance graph for the gradient boosting regression model that used upstream land cover as input features.
10. The interface allows users to explore how land cover changes would affect nitrogen readings at the station nearest the provided zip code
11. Extracting land cover within the buffer zone around rivers may better predict nitrogen levels.
12. A Gaussian time series model captures seasonal variation.
13. Code was written to display heatmaps showing concentrations of land cover types.
Inspiration
We wanted to put our combined skill set to use for an environmentally important goal. Since we all live in and enjoy the Chesapeake Bay watershed, this hackathon had particular meaning for us. Given our background in machine learning, statistics, and GIS, we thought we would try the Challenge 3, the machine learning option. It was important to us that the output could go beyond simply being a model to being a tool for scientific communication with our community. That is why we built an online interface for users to see how the actions within the watershed may affect pollution and improve water quality.
What it does
Our online interface allows a user to predict nitrogen pollution while experimenting with the basic land cover parameters of forest, farmland, and development. The interface runs on a predictive model which uses gradient boosting regression. We trained this model on the land cover contained in the upstream area of sampling stations. In addition to the interface and model, our github submission provides the percent coverage of each land cover type in the HUC12 watersheds encompassing all the upstream waterways for a given station.
How we built it
We iteratively created a workflow to prepare public data for analysis, created and trained a machine learning model to predict nitrogen pollution at sampling sites based on land cover inputs, and then built an online interface for public interaction.
We focused on nitrogen pollution because it is one of the main drivers of ecological impairment in the Chesapeake Bay. Nitrogen pollution is also well predicted by land cover, for which there is a recent USGS provided dataset on which we can build a model. There are also well known and established mitigation measures for nitrogen pollution, such as stream buffers and wetland catchments, which are easy for the public to understand and begin to implement to improve water quality.
To explore relationships between nitrogen and different land covers, we made a heatmap of the Pearson correlation to examine the strength of relationship among the land use types as well as between land use types and nitrogen (Fig. 1). The larger squares along the diagonal indicate groups of similar land use types, such as development and forest. We also see that pasture and cropland are positively correlated with nitrogen and that some of the non-agrarian vegetation is negatively correlated with nitrogen levels. This finding supported our decision to use land cover as the main predictor of nitrogen, and to use forest, development, and pasture in our simplified user interface.
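The correlation step described above can be sketched with pandas; the columns and data below are synthetic stand-ins for the extracted land cover percentages, not the project's actual fields.

```python
import numpy as np
import pandas as pd

# Synthetic stand-ins for per-watershed land cover percentages and nitrogen.
rng = np.random.default_rng(0)
n = 200
cropland = rng.uniform(0, 60, n)
forest = rng.uniform(0, 80, n)
df = pd.DataFrame({
    "cropland_pct": cropland,
    "forest_pct": forest,
    # Nitrogen rises with cropland and falls with forest, plus noise.
    "total_nitrogen": 0.05 * cropland - 0.02 * forest + rng.normal(0, 0.3, n),
})

# Pairwise Pearson correlation matrix -- the numbers behind a heatmap.
corr = df.corr(method="pearson")
print(corr.loc["total_nitrogen"].round(2))
```

With seaborn or matplotlib installed, `sns.heatmap(corr)` (or `plt.imshow(corr)`) renders this matrix as a figure of the kind described.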
Additionally in our data exploration, we found a seasonal effect on nitrogen levels, with lower levels in the summer (Fig. 2-4). We also noticed a bias in that more samples, particularly in the bay itself, were taken in the summer. We began to explore this in our time series analysis using Gaussian models (Fig 12).
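One simple way to capture this kind of seasonality is a Gaussian process with a periodic kernel. The sketch below uses synthetic monthly data with a summer dip and is an illustration of the approach, not the team's exact model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

rng = np.random.default_rng(1)
# Synthetic monthly nitrogen readings with a summer dip, over five years.
t = np.arange(60, dtype=float)                      # month index
y = 2.0 - 0.6 * np.sin(2 * np.pi * t / 12.0) + rng.normal(0, 0.1, t.size)

# Periodic kernel with a 12-month period, plus observation noise.
kernel = ExpSineSquared(length_scale=1.0, periodicity=12.0) + WhiteKernel(0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t.reshape(-1, 1), y)

# Months 3 and 15 sit at the same point of the seasonal cycle,
# so the fitted model should predict similar (low) values for both.
pred, std = gp.predict(np.array([[3.0], [15.0]]), return_std=True)
print(pred.round(2))
```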
To prepare input data for the models, we extracted land cover within watersheds. We used the public National Land Cover Dataset (NLCD) 2016 Science product created by USGS, which provides land cover data at a 30 meter resolution. We used the USGS watershed basin boundaries (which are named using hydrologic unit codes, called HUCs) as boundaries to distinguish which land cover drained into which waterways. The number of pixels of each land cover type and their percentage of area within the HUC boundary were compared to the water quality sampling data taken at stations within those boundaries. We applied various statistical correlation tests and then input this data into a machine learning model using gradient boosting regression. We tested HUC sizes 8, 10, and 12, with 8 being largest regions and 12 being the smallest (Fig. 5).
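As a hedged sketch of this step, the per-boundary percentage tally and gradient boosting fit might look like the following; the class codes are NLCD-style but the rasters and nitrogen response are entirely synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
CLASSES = [41, 82, 24]  # NLCD-style codes: forest, cultivated crops, developed

def cover_percentages(pixels, classes):
    """Percent of pixels in each land cover class within one HUC boundary."""
    counts = np.array([(pixels == c).sum() for c in classes], dtype=float)
    return 100.0 * counts / pixels.size

# Fake per-HUC rasters; the nitrogen response is driven mostly by cropland.
X, y = [], []
for _ in range(300):
    pixels = rng.choice(CLASSES, size=500, p=rng.dirichlet([2, 2, 1]))
    pct = cover_percentages(pixels, CLASSES)
    X.append(pct)
    y.append(0.04 * pct[1] - 0.01 * pct[0] + rng.normal(0, 0.2))

model = GradientBoostingRegressor(random_state=0).fit(np.array(X), np.array(y))
print(model.feature_importances_.round(2))  # cropland should dominate
```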
At first, even using the smallest land cover grouping, the HUC 12 level of boundary, we did not find as strong a relationship between land cover and station sampling data as we expected given the causal relationship recorded in peer-reviewed literature. We considered this may be because the water station samples were influenced by upstream water basins that we had not included.
To address this concern, we additionally extracted land cover in HUCs upstream of each sampling station. This was done by using the National Hydrography Dataset to create an iterative tool to identify which waterways flowed downstream to each sampling station. We used the selected set of streams (for example, in Fig. 6) for each sampling station to identify which HUCs drained into these upstream waterways on their way to the sampling station. We then used the grouped upstream HUCS to examine the land cover type amounts and percentages for the waterways flowing into the sampling stations. The upstream input land cover was then modeled and compared to the three previous inputs that used land cover from single HUCs of different size.
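The upstream identification amounts to a reverse walk over flowline connectivity. A minimal sketch with an invented reach-to-HUC mapping (the team used the National Hydrography Dataset's actual flow tables):

```python
from collections import deque

# Each edge says "reach A flows into reach B" (NHD-style from -> to).
flows_into = [("r1", "r3"), ("r2", "r3"), ("r3", "r5"), ("r4", "r5")]
huc_of_reach = {"r1": "HUC-A", "r2": "HUC-B", "r3": "HUC-B",
                "r4": "HUC-C", "r5": "HUC-C"}

def upstream_hucs(station_reach):
    """Collect every HUC that drains into the reach holding a station."""
    upstream = {}  # downstream reach -> reaches feeding it
    for src, dst in flows_into:
        upstream.setdefault(dst, []).append(src)
    seen, queue = {station_reach}, deque([station_reach])
    while queue:  # breadth-first walk against the direction of flow
        for src in upstream.get(queue.popleft(), []):
            if src not in seen:
                seen.add(src)
                queue.append(src)
    return {huc_of_reach[r] for r in seen}

print(sorted(upstream_hucs("r3")))  # -> ['HUC-A', 'HUC-B']
```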
Results are shown in Fig. 7-8. Basic data inputs such as latitude, longitude, month, and year were used to benchmark the new models (Fig. 7). The R2 values showed the best fit was achieved when all extractions were considered; the second-best fit was achieved using only upstream tracing.
We then compared land cover inputs alone (Fig. 8). The R2 values showed the upstream HUC data was most predictive of nitrogen pollution, as suggested by the previous table. In the other three models, the more localized the area, the better the prediction. This makes sense, as the larger HUCs included land cover in the water basin that was relatively distant. This suggests that upstream land cover was as influential as the land cover immediately surrounding the sampling station, and more so than the land included in large HUC areas.
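This kind of benchmarking — fitting the same regressor on different feature sets and comparing R2 on held-out data — can be sketched generically; the column meanings and data below are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 400
# Invented columns: 0-1 basic (e.g. lat/lon), 2 local cover, 3 upstream cover.
X = rng.normal(size=(n, 4))
y = 0.5 * X[:, 2] + 1.0 * X[:, 3] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
feature_sets = {"basic": [0, 1], "local": [2], "upstream": [3]}
scores = {}
for name, cols in feature_sets.items():
    m = GradientBoostingRegressor(random_state=0).fit(X_tr[:, cols], y_tr)
    # R2 on the held-out split measures how predictive each input set is.
    scores[name] = r2_score(y_te, m.predict(X_te[:, cols]))
print({name: round(s, 2) for name, s in scores.items()})
```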
The feature importance graph (Fig 9) indicates that cultivated crops, open water, and wetlands all play an important role in predicting nitrogen levels.
The interface is built using ReactJS and Python. A simple visualization of nitrogen levels over time compares existing land cover from the 2016 NLCD extractions to the user's hypothetical scenario land cover (Fig. 10). These scenarios are then fed to the model (we selected the gradient boosting regression model with latitude, longitude, month, year, HUC12, HUC10, HUC8, and upstream HUCs) to project the impact on water quality indicators such as total nitrogen. The current pollution projection can then be compared to hypothetical pollution predictions. A video of the demo is viewable at:
link
. Upon request, the GUI can be hosted at:
http://baywatch.jbarrow.ai
.
Challenges we ran into
One challenge was subsetting the land cover upstream from the stations to reflect contributing runoff sources as closely as possible. First, the hydrography data needed to be downloaded in subsets that maintained an intact geometric system, meaning the lines representing streamflow had to be collected logically to indicate directionality. Then we had to learn to manipulate this data using the associated tools. The large number of stations meant this process had to be automated. Models were created to batch process our stream selection and to extract land cover for those streams.
Further challenges arose in finding the most appropriate statistical analysis. We needed to measure complex correlations between a large number of land cover types and thousands of samples. This took a great deal of literature review and expertise to select and test appropriate models.
Accomplishments that we are proud of
We are most proud of the combination of GIS, modeling methods, and interface into a usable community-focused tool for data exploration.
What we learned
We learned how the ecological systems of the Chesapeake Bay watershed can be better supported, in particular through the far reaching effects of decisions on human activity and land management. Additionally, we gained understanding of national public geospatial datasets, statistical analysis techniques, and machine learning methods.
What's next for 'Effect of Land Cover on Pollution in Chesapeake Watershed'
Even the smallest watershed units and upstream selection available for this study may be too coarse to capture the influence of land cover on nitrogen pollution in water samples. We started expanding the study, using a similar workflow, to land cover within a buffer area around streams rather than the entire HUC12 watershed unit (Fig. 11). Comparing the predictive power of different buffer sizes may support research suggesting that conservation of riverine buffers has disproportionate benefit, and may help direct conservation efforts. We have also started expanding the models used to include time series analysis (Fig. 12). This would leverage the seasonality found in the data exploration.
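A toy version of the buffer idea — tallying land cover only within a fixed distance of stream cells — can be sketched in NumPy. A real workflow would buffer NHD flowlines in GIS; the tiny grid and class codes here are invented.

```python
import numpy as np

def buffered_cover_pct(cover, stream_mask, cell_size, buffer_m, classes):
    """Percent of each class among cells within buffer_m of a stream cell."""
    rows, cols = np.indices(cover.shape)
    stream_cells = np.argwhere(stream_mask)
    # Squared distance from every cell to its nearest stream cell (brute force).
    d2 = ((rows[..., None] - stream_cells[:, 0]) ** 2 +
          (cols[..., None] - stream_cells[:, 1]) ** 2).min(axis=-1)
    inside = np.sqrt(d2) * cell_size <= buffer_m
    pixels = cover[inside]
    return {c: 100.0 * (pixels == c).sum() / pixels.size for c in classes}

# 5x5 grid of 30 m cells, a stream down the middle column.
cover = np.array([[1, 1, 2, 1, 1]] * 5)
stream = np.zeros((5, 5), dtype=bool)
stream[:, 2] = True
print(buffered_cover_pct(cover, stream, cell_size=30, buffer_m=30,
                         classes=[1, 2]))
```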
Acknowledgements
We would like to thank Han-Chin Shing and Sharon Cheng for their pandas expertise, data exploration, and model help.
Built With
esri
python
react
Try it out
github.com | Effect of Land Cover on Pollution in Chesapeake Watershed | Powered by open data, stream flow models, and machine learning, this user interface helps communities make informed decisions for future land development in the Chesapeake Bay Watershed. | ['Megan Maloney', 'Joe Barrow', 'Charlie E'] | [] | ['esri', 'python', 'react'] | 18 |
10,416 | https://devpost.com/software/d-c-potomac-river-report-card | Main
Data
Impacts
Introduction
Links
Inspiration
In my senior year of high school, I took AP Environmental Science and learned much about the Chesapeake Bay and the pollution it has been experiencing. I love its nature and ecosystem, and I do not want them to be destroyed by human activities. I hope that I can do something, or at least gain some experience in advocating for environmental protection. In the internet-based society we have today, I know that a website is really helpful for advocating and raising attention to environmental issues and impacts, and this is why I chose to design a report-card-style website.
What it does
It provides specific information regarding the watershed community, such as a basic introduction, data, impacts, quality standards, and how people could help.
How I built it
HTML, CSS, and JavaScript.
Challenges I ran into
I was stuck on choosing which dataset to work with, since there were lots of choices, but eventually I chose the data from a particular time frame, 2015-2020. I also spent a lot of time determining how to present the data, because the official website's representation was really vivid and I was not sure I could build something like that. Eventually I decided to simplify the chart and break it into multiple graphs.
Accomplishments that I'm proud of
This is my first time building a fully functional website, and I am glad that I successfully finished it. It has multiple pages and different small functions that I learned from the internet. And I am really proud of making something that could potentially help the environment or that particular community.
What I learned
I have gained experience in web development and some data visualization tools. I also learned things I had never learned inside the classroom, such as the details of government policies and how poor water quality is impacting the local community.
What's next for D.C Potomac River Report Card
To make it more user friendly and more specific by adding more functions that help users understand the data, and to appeal to more users to protect the community.
Built With
chart.js
css3
html5
javascript | D.C Potomac River Report Card | Water quality report card for the D.C. Potomac River community. This is a website-like report card that uses vivid data visualizations to describe the poor water quality and hopefully raise attention. | ['Tomoyoshi Kimura'] | [] | ['chart.js', 'css3', 'html5', 'javascript'] | 19 |
10,417 | https://devpost.com/software/wh-submission | Hackathons
Homepage
Inspiration
I am trying some web dev in quarantine.
What it does
It provides a beautiful website for WH
How I built it
I built it on HTML, CSS and JS
Challenges I ran into
I didn't know how to do some animations so I learnt them during this hackathon
Accomplishments that I'm proud of
I had finally made it.
What I learned
Animations
What's next for WH Submission
Jai Shree Ram🚩
Built With
css3
html
javascript
Try it out
whsubmission.netlify.app | WH Submission | Website for World hacks | ['Loveneesh Dhir'] | ['1st Place - Best Website'] | ['css3', 'html', 'javascript'] | 0 |
10,417 | https://devpost.com/software/world-hackers-website-0g9fbt | This is the landing page with a peaceful full screen video background and hamburger menu.
This is the projects section with an embedded youtube video of their project submission
This is a vertical-flipping 3D carousel showing their hackathon participations
This is the team section. Each card flips and shows information about the member upon hover.
This is the footer showing their location , quick links, and contact info.
It is a website showing the work and achievements of the world hackers team.
Built-using: Html, Css3, Javascript.
Built With
css
html
javascript
Try it out
github.com
swetadash0610.github.io | world-hackers-website | This is the world hackers website made with accordance to devpost rules of hackathon on the same by Sweta Dash. | ['Sweta Dash'] | ['Best Try', 'Public Voting'] | ['css', 'html', 'javascript'] | 1 |
10,417 | https://devpost.com/software/worldhackers-hackathon-team-website | HomePage
Participated Hackathons
Hackathons Complete List
Demo
Technology Stack
Materialize CSS 🎨
GSX2JSON - Google Sheet to JSON API 🚀
Website
Click
here
to see the demo website.
Built With
css3
html
javascript
materialize-css
Try it out
github.com
world-hackers.surge.sh | WorldHackers - Hackathon Team Website | Welcome to the Elite Hackathon Team's wbsite. | ['Sumit Banik'] | ['Best Try'] | ['css3', 'html', 'javascript', 'materialize-css'] | 2 |
10,417 | https://devpost.com/software/worldhackers-website | 💻 Check it out
Check out the site
here
.
✨ Features
Automatically updates your projects from your GitHub
New and improved logo and color scheme
Animated SVG logo
Animated scroll effects
Works on both desktop and mobile
🛠 How we built it
We developed your site using HTML, CSS, and JavaScript. We used GraphQL with the GitHub API to automatically update your projects from your GitHub.
Built With
css3
github-api
graphql
html5
javascript
Try it out
worldhackers.ml
github.com | worldhackers-website | A website created for the World Hackers elite hackathon team. | ['Raadwan Masum', 'Aadit Gupta', 'Rohan Juneja', 'Safin Singh'] | ['Best Try', 'Best UI Design'] | ['css3', 'github-api', 'graphql', 'html5', 'javascript'] | 3 |
10,417 | https://devpost.com/software/hackcation-mlh | Technology stack
Technology stack
Google Analytics
Google Data Studio
Wordpress
Google Cloud Monitering
Google Cloud for Hosting VM
Project Description:-
Used Google Cloud to host a virtual machine, then installed WordPress on that virtual machine. When I initially created the virtual machine its IP was dynamic, which I made static, as that was necessary. I used Google Cloud Monitoring to track the virtual machine's performance and other details like API requests (per sec), bytes received by the VM, and bytes sent by the VM. After that, I designed the whole webpage using WordPress. When my design was done, I decided to track user data using Google Analytics, for which I created a new project on Google Analytics and put five lines of JavaScript code in my webpage's header section to enable it. To go one step further (i.e., to generate useful data from Google Analytics), I connected Google Analytics as a data source to Google Data Studio, which helps in understanding users (the traffic that comes to the site) much better.
To see my work (demo), you can watch the above video
Google Cloud for Hosting VM - Screenshot
Google Cloud Monitering - Screenshot
Google Analytics - Screenshot
Google Data Studio - Screenshot
Wordpress by Bitnami - Screenshot
Built With
elementor
google-analytics
google-cloud
google-data-studio
wordpress
Try it out
github.com | Web designing & hosting using Google Cloud +Google Analysis | Used Google Cloud for hosting the Virtual machine. To track user's record (data) using Google Analytics. | ['pranita patil', 'Sanket Patil'] | ['Best Try'] | ['elementor', 'google-analytics', 'google-cloud', 'google-data-studio', 'wordpress'] | 4 |
10,417 | https://devpost.com/software/world-hackers-website | Meet the Team Page
Home Page
Inspiration
I was very driven to design this website for the hackathon team, as I had previously looked at this team's projects on GitHub. They were all very unique, and I was also astounded by the number of achievements they have in hackathons.
What it does
In this website, it has all the information a person needs about the "World Hackers" hackathon team. This includes a home page, achievements page, a "meet the team" page, and a page to learn more about the team.
How I built it
For the home page, I decided that the best way to go was with a type-writer animation, which immediately gives the user a sense of a "hacking" environment. I went with the red and black contrast, solely because of my opinion that it looked good. I debated doing neon green and black, to look like a normal terminal, but I ultimately decided against it. For the "meet the team" page, I used four equally-sized panels, with the GitHub profile picture of each team member, their name, their "position", so to speak, and a quick sentence about them, also based on each member's GitHub profile. For the achievements page, I did not want to pile all of their achievements into one column/table, so I split it into the two years, 2019 and 2020, in which they have participated in hackathons. When a user clicks one of the achievement texts, it goes to the "Learn More" page, since a user who clicks most likely wants to know more about the submitted projects and the team. Lastly, on the "Learn More" page, I have linked a few of their prominent projects, with their email information at the bottom. Additionally, I added media queries to each page so that it fits the screen not only on PC, but also on tablet and phone.
Challenges I ran into
At first, I thought that the design looked too simple. At the same time, I'm into a minimalistic look, and I believe that a website looks polished when it's simple, serves its purpose, and still has a stylish feel to it. Additionally, I had some trouble with the media queries, but I was able to fix them by experimenting in DevTools.
Accomplishments that I'm proud of
I'm proud of the type-writer animation, and the general look of the website. I also like the "Meet the team" button and page, as I feel that helps the user to get to know the team a bit better.
What I learned
I learned a lot about computer science in general, and HTML/CSS bits in particular. I learned that there are a lot of tweaks that have to be made to certain components, such as text and images, to make a website's orientation look perfect.
What's next for World Hackers Website
If the team likes my website design, I hope to publish it, and maybe change the look of it based on the feedback from the judges. Thanks!
Built With
css
html
Try it out
vardaansinha.github.io
github.com | World Hackers Website | This website was made for the hackathon team, "World Hackers". With exquisite HTML/CSS styling, the image of a hackathon team is portrayed through this simple yet well-functioning website. | ['Vardaan Sinha'] | ['Best Try'] | ['css', 'html'] | 5 |
10,417 | https://devpost.com/software/world-hacks-submition | .
Built With
css3
html5
javascript | . | . | [] | [] | ['css3', 'html5', 'javascript'] | 6 |
10,417 | https://devpost.com/software/mindcare-70rujc | About Us
Service
Featured
Inspiration
For our peace of mind we like to hang out in the park, mingle with nature, and share life with each other, but in the current situation we are absolutely stuck at home. There is no chance to go to the park, mingle with nature, or share your thoughts with anyone. We need human peace at this time.
What it does
○ Make professional counselling accessible and convenient – so anyone who struggles with life's challenges can get help, anytime, anywhere.
○ Through our counsellors, MindCare offers access to trained, experienced psychologists, therapists, clinical social workers, and professional counsellors.
○ We help people identify their most important concerns and provide them with the support they need to work through their mental health issues and achieve their personal goals.
How I built it
I used low-code HTML, PHP, JS, and CSS to rapidly prototype an application, which will result in a fully scalable cloud solution.
Challenges I ran into
Learning a new coding stack. For the front end, I learned some Flutter; on the back end, the Flask framework. This was an initial ramp-up challenge, but I was able to sail through.
Accomplishments that I'm proud of
I'm proud that I built this from scratch. I have used other frameworks like JavaScript in the past, but this website is written mostly with my own code. I am also proud of the stateless system allowing people to easily switch journeys on the website.
What I learned
Potential exchanges between people in need of mental healthcare and elective medical organizations with un-needed supplies are a potentially helpful area for short-term success, as with other industries. A large number of small contributions adds up quickly.
What's next for MindCare
I want to introduce more dynamic elements to our website – such as defining group constraints on size, gender, and minimum sign-up age
Built With
css
html
javascript
php
Try it out
github.com | MindCare - Friend of Your Thinking | MindCare websites platform deliver mental healthcare to the people who do not have access to mental health facilities. | ['Mahabuba Akter Mursona'] | [] | ['css', 'html', 'javascript', 'php'] | 7 |
10,417 | https://devpost.com/software/weather-report-o54xub | Inspiration our website
The purpose of a weather report is to provide as accurate as a possible prediction or the
actual temperature is and to prepare the people to act accordingly. Without actually
going out or waiting for the news to come in order to check what the temperature is
today or using google to know the temperature of other places which actually provides
everything but the required information in some corner. Having one specialised in the
required field is the best way. Simply, a website to find the weather at any place in India.
What it does
This website shows weather information about a location. It uses openweathermap.org's API to return the weather information to the user. It displays the humidity and the maximum and minimum temperature of the place entered by the user.
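The site's core data step — sending a city name to the OpenWeatherMap current-weather endpoint and reading humidity and temperatures out of the JSON it returns — can be sketched in Python. The field names follow OpenWeatherMap's documented response; the sample payload itself is invented and abbreviated.

```python
# A trimmed example of OpenWeatherMap's current-weather JSON response.
sample_payload = {
    "name": "Delhi",
    "main": {"humidity": 52, "temp_min": 297.1, "temp_max": 301.4},
}

def summarize_weather(payload):
    """Pull out the fields the site displays: humidity, min/max temperature."""
    main = payload["main"]
    return {
        "city": payload["name"],
        "humidity_pct": main["humidity"],
        # OpenWeatherMap returns Kelvin unless units=metric is requested.
        "temp_min_c": round(main["temp_min"] - 273.15, 1),
        "temp_max_c": round(main["temp_max"] - 273.15, 1),
    }

print(summarize_weather(sample_payload))
```

The live request would be a GET to `https://api.openweathermap.org/data/2.5/weather?q=<city>&appid=<key>`, with the parsed JSON handed to `summarize_weather`.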
How we built it
With HTML and jQuery. A function called main is used to connect to the API and call the weatherCallback function.
The weatherCallback function is used to gather the weather information received from the API and store it in variables.
We provide the name of the city as the input, and once the search button is clicked we call the live weather service at its given URL; this is achieved through jQuery and ajax.
Clicking the link of the weather service opens its page. There are two links on that first page, JSON and XML; the request we have written produces the JSON page.
In the ajax call, the URL that calls the weather service is given, along with the method 'get' to fetch the information, whose data type is 'json'.
The search button is found using a jQuery id selector, which executes the request code on click.
We need to retrieve the name of the city, so a variable is set from the id of the city textbox, using the val function to retrieve its value. This is the data that we send to the web server.
A jQuery ajax function is then called and its options are specified:
The url we want to call.
The method option, to issue a get request.
The data that we want to send.
The type of data that we expect back from the server, JSON.
A callback function (data) to run when the request successfully completes. The JSON object that we get back from the web server is received by this data parameter, so it contains all the weather information.
We want to display the main weather result within the 'result div', so its id is copied and another variable is set to that div element using a jQuery id selector.
If you type an 'abcdef' sort of string in the city box, we return to the same page again and report an error.
Challenges we ran into
Our website ran perfectly on our PC, but git and other hosts were not able to support it
Accomplishments that we're proud of
Our website is able to give the weather report of all the cities and also the local places
What we learned
We learned how to connect to the API with a main function that calls the weatherCallback function, and how the weatherCallback function gathers the weather information received from the API and stores it in variables.
What's next for the weather report
There are so many projects regarding weather reports and many apps available in the market; among them, if our project is to survive it needs some uniqueness. So as time goes by and we gain experience, we will have a better idea and then release our product. We hope that by then our hard work will get a good response.
Built With
html
Try it out
jsumanth2409.github.io | weather report | Our website shows the exact weather report of the particular location | ['J Sumanth', 'Bandi Mouna'] | [] | ['html'] | 8 |
10,417 | https://devpost.com/software/merrychristmas-master | Creative website with shopping cart
I used the jekyll-theme-cayman theme to create this website, along with a PayPal shopping cart
Everyone can enjoy the beautiful background and online shopping.
Built With
css3
html5
javascript
paypal
shoppingcart
theme:
Try it out
github.com | MerryChristmas-master | Creative website with online shopping | ['Elisabeth Luo'] | [] | ['css3', 'html5', 'javascript', 'paypal', 'shoppingcart', 'theme:'] | 9 |
10,417 | https://devpost.com/software/optimist-india-6wo1fb | A website that contains only optimistic news from around the world.
Register for weekly newsletters and updates about the website.
optimist-India
Given the current situation in the whole world, we wanted to do something good for society, so this inspired us to make a website where we can share positivity through news, so that in this critical situation people get to know something positive too. People can also sign up for weekly newsletters and updates about the website.
While working on this project we learned many new attributes and methods using HTML, CSS, JavaScript, and jQuery. We learned to work with APIs, and on the backend with Node.js, which was a little intimidating, but we learned a lot and, most importantly, we got to know about good things happening all around the world.
Built With
css
express.js
heroku
html
javascript
mailchimpapi
node.js
Try it out
optimist-india.herokuapp.com | optimist-India | Lets be optimistic in this pessimistic world! | ['Rushabh Runwal', 'ashwindamam'] | [] | ['css', 'express.js', 'heroku', 'html', 'javascript', 'mailchimpapi', 'node.js'] | 10 |
10,417 | https://devpost.com/software/deep-surveillance-using-a-i-deep-learning | On finding the suspect it will store the image of suspect to the database and inform to cops or personal security
Inspiration
Increasing the security of women and children in public areas, and detecting missing children
Spotting suspicious people and most-wanted criminals wandering around, especially in airports/public places
Surveillance of militant activities at the borders
Securing homes by identifying visitors
Making offices more secure by detecting unknown people moving around
What it does
This is a project building an A.I. cam for the detection of suspicious activities/people around us
If it finds a suspect, it stores the suspect's image in the database and informs the cops or personal security
How I built it
Using a deep learning model that I trained myself on a dataset of 35,887 images to predict suspicious activities.
Challenges I ran into
Collecting a dataset of 35,887 images
Training the model using a CNN
Accomplishments that I'm proud of
When it finds a suspect, it stores the suspect's image in the database and informs the cops or personal security
What I learned
A.I. is a boon to the world if used in the right way, adding humongous value to our daily activities
What's next for Deep Surveillance using A.I & Deep Learning
Using my skills, knowledge and expertise in the field of deep learning to build a better/innovative society.
Built With
cnn
deeplearning
keras
machine-learning
opencv
python
tensorflow
Try it out
github.com | Deep Surveillance using A.I & Deep Learning | This is a project on building an A.I cam for the detection of the suspicious people/activities in public areas/home | ['Mohit Venkata Krishna Ch'] | [] | ['cnn', 'deeplearning', 'keras', 'machine-learning', 'opencv', 'python', 'tensorflow'] | 11 |
10,417 | https://devpost.com/software/forest_fire_predictor | Forest_Fire_Predictor
Forest Fire Occurrence Probability is predicted using Machine Learning.
Models used- train_test_split and LogisticRegression
Modules used- Pandas, Numpy, sklearn, Pickle, Warnings
Technologies used- HTML, CSS, JavaScript, Python, Flask, Pycharm
The model is trained using logistic regression and dumped to a pickle file. It is then deployed on the web using Flask. Three attributes are taken into consideration: temperature, oxygen, and humidity. After submitting the values, output is generated showing the probability of a forest fire occurring for the given values.
The model's accuracy can be further increased by using a CSV file with a larger dataset.
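The train-pickle-predict pipeline described above can be sketched as follows. The data here is synthetic (the real project trains on its forest-fire CSV), and the file name is a placeholder.

```python
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
# Synthetic [temperature C, oxygen %, humidity %] rows and fire labels:
# hotter, drier conditions make fire more likely.
X = np.column_stack([rng.uniform(10, 45, 500),
                     rng.uniform(18, 23, 500),
                     rng.uniform(10, 90, 500)])
fire = (X[:, 0] - 0.3 * X[:, 2] + rng.normal(0, 3, 500) > 20).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, fire, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Dump to a pickle file, as the Flask app would load it at startup.
with open("fire_model.pkl", "wb") as f:
    pickle.dump(model, f)
with open("fire_model.pkl", "rb") as f:
    loaded = pickle.load(f)

# Probability of fire for one [temperature, oxygen, humidity] submission.
prob = loaded.predict_proba([[40.0, 21.0, 15.0]])[0, 1]
print(f"Fire probability: {prob:.2f}")
```

In the Flask route, the three submitted form values would be read from the request, passed to `predict_proba` on the unpickled model, and rendered back in the template.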
Built With
html
python
Try it out
github.com | Forest_Fire_Predictor | ML Project/Deployed in web using Flask/Forest Fire Occurrence probability is predicted using machine learning. Model used- Logistic regression ,Languages used-HTML,CSS,JS,Python | ['Shubhanshu Biswal'] | [] | ['html', 'python'] | 12 |
10,417 | https://devpost.com/software/hackocommune | Explore Page
Main Page
Profile Page
HackOCommune
Live at :
myhackocommuneapp.herokuapp.com
Built With
bootstrap
css
html
mako
python
Try it out
myhackocommuneapp.herokuapp.com | HackOCommune | Its a website for hackers where they can showcase their work. Their post will be live, so if anybody login can see each other posts.You can also delete your post. | ['Sneha Gupta'] | ['Best Website'] | ['bootstrap', 'css', 'html', 'mako', 'python'] | 13 |
10,417 | https://devpost.com/software/musicspeaks-tech | MusicSpeaks.tech-Home Page
Meet the Creators-MusicSpeaks.tech
Home Page-Last Moment Store
Best Selling Books-Last Moment Store
Inspiration
Vacation was the inspiration, so we decided to make software that can plan and book a vacation, give a virtual tour to the user, and entertain the user, all on a single platform
What it does
MusicSpeaks.tech lets the user plan, book, and schedule a vacation, get a virtual tour of the planned location, and entertain themselves using a portable piano made with an Arduino UNO, all on the same platform. This project also lets the user order essentials on the spot, which can be delivered within a day!
How I built it
The main website was built using Bootstrap studio
The Last Moment Store was made using wix.com.
The portable piano was made using Arduino UNO.
Challenges I ran into
Different time zones: everyone on the team belonged to a different country, so working on the project at the same time was difficult. We decided to divide the work among the teammates.
Bootstrap Studio was difficult for me to work with because of my lack of experience building websites.
Accomplishments that I'm proud of
Got to work with teammates across the world and got to know about their culture.
Used Bootstrap Studio for the first time so I got to know more about website development
What I learned
Teamwork
Used Bootstrap Studio for the first time so I got to know more about website development
What's next for MusicSpeaks.tech
Deploy an android app so that it would be easy for the user to use
Tie up with local stores so we can provide users one-day delivery.
Built With
arduino
bootstrap
css
html
javascript
wix
Try it out
hkhrapps.wixsite.com
keshavmajithia.github.io | MusicSpeaks.tech | Vacation is a breeze with MusicSpeaks.tech! | ['Keshav Majithia', 'Naseeb Dangi', 'Min Min Tan'] | [] | ['arduino', 'bootstrap', 'css', 'html', 'javascript', 'wix'] | 14 |
10,417 | https://devpost.com/software/posmino-ayxtl5 | Inspiration
Inspired by The Freecycle Network, Posmino is a community offering donate and request services, which helps reduce the amount of unused goods around the world.
What it does
The users are required to sign up/sign in before using the services. After logging in, they are able to request or donate items. When a donate/receive procedure is completed, an email is sent to both the contributor and the donee to keep them updated about the item.
How I built it
Designed a responsive single page application with Angular, Bootstrap, HTML and CSS.
Utilized Firebase to authenticate users and store the information of each item.
Used different Java libraries as well as Spring Boot framework to send users email after "donate/receive" confirmation.
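The confirmation step can be sketched as building one email per party. Below is a minimal, illustrative Python version using the standard library (the real project does this in Java with Spring Boot; the function name and message wording are hypothetical):

```python
from email.message import EmailMessage

def confirmation_emails(item, donor_addr, donee_addr):
    """Build the two update emails sent after a donate/receive confirmation.
    (Sketch only -- Posmino actually does this with Java libraries + Spring Boot.)"""
    messages = []
    for to, role in ((donor_addr, "donated"), (donee_addr, "received")):
        msg = EmailMessage()
        msg["To"] = to
        msg["Subject"] = "Posmino update: " + item
        msg.set_content("You have {} the item: {}.".format(role, item))
        messages.append(msg)
    return messages
```

The messages would then be handed to an SMTP client; building both messages before sending makes it easier to keep the "email both parties, then update the database" step atomic.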
Challenges I ran into
Sending email to both contributor and donee and updating the database at the same time.
Accomplishments that I'm proud of
The website works as expected!
What I learned
How the front end (Angular) communicates with the back end (Spring Boot and Firebase).
Built With
angular.js
bootstrap
css
firebase
html
java
javascript
springboot
typescript
Try it out
github.com | Posmino | A community for people to donate and request items. | [] | [] | ['angular.js', 'bootstrap', 'css', 'firebase', 'html', 'java', 'javascript', 'springboot', 'typescript'] | 15 |
10,417 | https://devpost.com/software/covidmity-me8vur | Inspiration
Our team was inspired to create the Covidmity website after witnessing the strength in our own communities during the Covid-19 pandemic. We wanted to spread that solidarity to other communities through creating a website with simple tips and checklists to help adults and children make better choices during this troubled time.
What it does
The tip randomizer lets users click a button for randomly generated tips (relating to Covid-19) in 4 different categories: “Helping Your Kids with Online Classes”, “Dealing with Teenagers' Concerns”, “Easing Your Child's Anxiety about Covid-19”, and “Self Care Tips.” Each randomly generated paragraph comes with its partner image. The COVID Resource Guide allows users to input the number of family members they have and accordingly be given the quantity needed of 3 necessities. Additionally, the web app sits on your desktop and can be clicked at any time of day to pull the best and most reliable news. After entering your personal info, the app can fetch either the best stocks of the day or the latest coronavirus updates. Its simple UI and easy-to-use features are what make it stand apart from the rest.
How we built it
The website aesthetic was built using HTML, CSS, and Javascript. The background images were from Videezy, a site that offers free-to-use backgrounds. The logic behind the randomizer involved Javascript. For each of the 4 categories, we stored the tips in 4 variables, created 4 corresponding images that were also stored, and then put them into 2 separate lists. Using Math.random we generated random selections and displayed them. For the resource guide, we designed the table and used Javascript to take the user input and translate it into a result on the table. The desktop app was built using Kivy, a Python GUI library. After laying out the framework for the app, I used a separate function to hold the financial news and a personal Robinhood account to get relevant information from the robin_stocks API. After this, NewsPuller.py was created using the NewsAPI, and the parameters for the news were tweaked to get the best and most relevant headlines; we had to whittle down over 300k of them. Eventually, the classes were put together, and a Grid class for UI purposes contained all the relevant event triggers.
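The paired tip/image selection reduces to picking one random index. A Python sketch of the same idea (the site does this in JavaScript with Math.random; the function name is hypothetical):

```python
import random

def random_tip(tips, images, rng=random):
    """Pick one random index so a tip and its partner image always stay in sync.
    Python analogue of the site's Math.random() logic."""
    i = rng.randrange(len(tips))
    return tips[i], images[i]
```

Using a single index for both lists is what keeps each paragraph matched with its partner image.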
Challenges we ran into
Initially, we planned to have one randomizer button on the homepage, and when the user clicked on it, they would be directed to a page with the same title as the random button text. That was impossible to do at the time, so we changed the project to have a randomizer for each of the 4 categories. In addition, this was our first time using moving background images, so it was a struggle to get all the other elements to display in their proper places. For the desktop app, the hardest part was perfecting the UI, which is something we were very new to. Eventually, however, we were able to learn from the documentation.
Accomplishments that we're proud of
We are very proud of how the design turned out. The moving background images especially were the icing on the cake, as well as the consistent style between pages. We are also happy that the randomizer works perfectly!
The web app was something new for our team and we are happy we were able to complete it so well in such a short time frame.
What we learned
We learned through designing the website how to randomize items in Javascript as well as utilizing movable background images smoothly in HTML files. We also learned a lot about the Kivy python library for making web apps.
What's next for Covidmity
What’s next for the covidmity website is to include more information in the randomizer, and include a general login/signup. The site would also include a price/store database in the resource guide for the user to find desired products most efficiently and to frequently display products the user needs based on their profile. We also want to clean up the UI for the web app to make it more visually appealing.
Built With
html
javascript
kivy
newsapi
python
robin-stock
robin-stocks
spyder
Try it out
swethatandri.github.io | Covidmity | Here at Covidmity, we believe that a culmination of tips, tools, and resources are key to making it past this pandemic. At Covidmity, we build community. | ['Swetha Tandri', 'Adarsh Bulusu', 'Lavanya Neti'] | [] | ['html', 'javascript', 'kivy', 'newsapi', 'python', 'robin-stock', 'robin-stocks', 'spyder'] | 16 |
10,417 | https://devpost.com/software/secure-your-space | Inspiration
What it does
How I built it
Challenges I ran into
Accomplishments that I'm proud of
What I learned
What's next for Best Beginner Prize -- Healthcare
Built With
api | Best Beginner Prize -- Healthcare | An app that helps healthcare | [] | [] | ['api'] | 17 |
10,417 | https://devpost.com/software/back-end-developer-this-is-the-name-of-my-project | Inspiration
I have an interest in hacking.
What it does
Hacking is a lot more for me.
How I built it
I built it by challenging my dreams.
Challenges I ran into
Hacking mobile phones and other things.
Accomplishments that I'm proud of
Father and mother
What I learned
Hacking
What's next for Back end developer this is the name of my project
Develops the back side of anything
Built With
nothing
Try it out
youtu.be | Back end developer this is the name of my project | Back end developer is the developer which develops back side of anything is called back end developer | ['Afroz Khan'] | [] | ['nothing'] | 18 |
10,420 | https://devpost.com/software/eyedog-cmsgja | Inspiration
Our mate Lillian had to bring large-font textbooks to school because she was visually impaired. These were double the size and weight of ours, but it meant that Lillian was able to be a normal student and use her textbooks like everyone else.
Problem
There are 575,000 people in Australia who have a visual impairment. They are unable to do everyday tasks that others take for granted, like reading menus and knowing their surroundings. These are things that most people can do independently, and we wanted to give those with visual impairment the same empowerment that others have.
Users
Those with macular degeneration.
Those with low vision.
Those who are illiterate.
Those with dyslexia.
Those with amblyopia.
Those with glaucoma.
Those with cataracts.
Those with diabetic retinopathy.
What EyeDog does
EyeDog is a mobile application that converts image to audio by identifying what is in the camera frame and reading any text that is shown.
How we built it
React
Python
Google Vision API
Google Text to Speech API
Google Translate API
YOLO Object Detection Algorithm
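The glue between recognition and speech is simple string composition. A sketch of that step in Python, where `labels` would come from YOLO / the Google Vision API and `ocr_text` from text detection (`describe_frame` is a hypothetical helper, not the project's actual function):

```python
def describe_frame(labels, ocr_text):
    """Compose the sentence that would be handed to text-to-speech."""
    parts = []
    if labels:
        # e.g. objects detected in the camera frame
        parts.append("I can see " + ", ".join(labels) + ".")
    if ocr_text:
        # e.g. text recognised in the frame (a menu, a sign, ...)
        parts.append("The text reads: " + ocr_text)
    return " ".join(parts) if parts else "Nothing recognised."
```

The resulting string would then be sent to the Google Text to Speech API for audio output.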
Challenges we ran into
Converting camera data into objects and text.
Implementing a working camera application.
Accomplishments that we're proud of
In depth use of Google APIs and YOLO.
Working application.
What we learned
Building a camera application.
Working collaboratively with new and challenging technology.
Integration of object recognition and text recognition.
What's next for EyeDog
Expand on our features to continue empowering those with vision impairment such as GPS and more detailed object description.
Built With
google-text-to-speech-api
google-translation-api
google-vision-api
python
react
yolo-object-identiciation
Try it out
github.com | EyeDog by GoodTeam | Eyedog maximises independence for those with visual impairment. EyeDog converts text and objects into audio output to assist those with visual impairment with reading text and object identification. | ['Jocelyn Hing', 'Matthew Olsen', 'Peter Kim', 'kevinz00', 'Bei Chen'] | ['Grand Prize'] | ['google-text-to-speech-api', 'google-translation-api', 'google-vision-api', 'python', 'react', 'yolo-object-identiciation'] | 0 |
10,420 | https://devpost.com/software/syncs-hack-2020 | Syncs Hack 2020
QR Tones is an app that encodes text into audio waveforms, similar to QR codes but for sound. When text is provided, the app converts it into a tune that encodes the data to be transmitted. Any other device listening will be able to decode the data and receive the text being transmitted.
The webapp was built using React.js. We used the web audio API to get user microphone input. A custom protocol and algorithm based on Fourier analysis was developed to both encode and decode the text into their respective QR Tones.
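The core idea, mapping each character to a tone and recovering it by measuring signal energy at each candidate frequency, can be sketched in pure Python. (The actual app is JavaScript with its own custom protocol; the sample rate, tone length, and frequency mapping below are illustrative, not the project's real parameters.)

```python
import math

SAMPLE_RATE = 8000   # Hz (illustrative value)
TONE_SECONDS = 0.1   # duration of one character's tone
BASE_FREQ = 400.0    # frequency assigned to ' ' (ASCII 32)
FREQ_STEP = 25.0     # spacing between adjacent characters' frequencies

def char_to_freq(c):
    return BASE_FREQ + (ord(c) - 32) * FREQ_STEP

def encode(text):
    """Turn each character into a short pure tone (a list of float samples)."""
    n = int(SAMPLE_RATE * TONE_SECONDS)
    samples = []
    for c in text:
        f = char_to_freq(c)
        samples.extend(math.sin(2 * math.pi * f * i / SAMPLE_RATE) for i in range(n))
    return samples

def tone_power(chunk, freq):
    """Signal energy at one frequency: a single-bin discrete Fourier transform."""
    re = sum(x * math.cos(2 * math.pi * freq * i / SAMPLE_RATE) for i, x in enumerate(chunk))
    im = sum(x * math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i, x in enumerate(chunk))
    return re * re + im * im

def decode(samples):
    """Split the stream into tone-sized chunks and pick the loudest printable character."""
    n = int(SAMPLE_RATE * TONE_SECONDS)
    chars = []
    for start in range(0, len(samples), n):
        chunk = samples[start:start + n]
        best = max(range(32, 127), key=lambda code: tone_power(chunk, char_to_freq(chr(code))))
        chars.append(chr(best))
    return "".join(chars)
```

A real implementation adds synchronisation and error handling on top of this, since microphone input is noisy and chunk boundaries are not known in advance.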
Made by:
Ben Braham, Matty Hempstead, Pranav Alavandi, Cory Aitchison, Tom Schwarz, Hamish Sullivan
Video demonstration:
https://drive.google.com/file/d/1zpL3c7FF4-2ldOvo2Jhdv0zrsnQNpCK6/view?usp=sharing
Built With
css
html
javascript
python
react
Try it out
github.com | QR Tones | Share links with friends - over sound! | ['Pranav Alavandi', 'Hamish Sullivan', 'Tom Schwarz', 'Ben Braham', 'Matty Hempstead', 'Cory Aitchison'] | ['Second Prize'] | ['css', 'html', 'javascript', 'python', 'react'] | 1 |
10,420 | https://devpost.com/software/balance-the-work-from-home-browser-assistant | Inspiration
In the wake of the pandemic, now more than ever professionals and students alike are blurring the lines between work and home. By this point we've all had a moment where we've struggled to get into a working mindset when we're only a few feet from our bedrooms or found ourselves unable to switch off and relax even when dinner is on the table. When you're working from home, it can be hard to separate your work time from your private time.
Balance does the heavy lifting of context switching for you, letting you set hard boundaries between your work life and home life.
What it does
Full chrome context switch from home mode to work mode (and back!) - including windows, bookmarks, blocked websites and newtab override.
Gentle introduction to the work day with easy access to your schedule and to-do list to plan your day in a drag and drop format.
Timer and notification system to help you make the most of your work day with an integrated Pomodoro system.
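The Pomodoro timing itself is just modular arithmetic over a repeating work/break cycle. A minimal Python sketch (the extension is written in JavaScript; the 25/5-minute defaults are the standard Pomodoro values, not necessarily Balance's settings):

```python
def pomodoro_phase(elapsed_minutes, work=25, short_break=5):
    """Return which phase of a repeating work/break cycle we are in."""
    offset = elapsed_minutes % (work + short_break)
    return "work" if offset < work else "break"
```

The extension would poll this on a timer tick and fire a notification whenever the phase changes.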
How we built it
React with TypeScript
Google Chrome Extension API
Atlassian's
react-beautiful-dnd
Webpack
Challenges we ran into
Utilising the Google Chrome Extension API, particularly the more finicky usages such as the bookmarks and tab context toggles. We were using features that existing extensions do not make use of.
Implementing drag and drop to build a reactive todo system from the ground up. Enabled complicated cross-list drag and drop functionality.
Accomplishments that we're proud of
Elegant, clean and simple UI.
Smooth, reactive drag and drop.
In depth use of Chrome Extension API.
What we learned
Building browser extensions.
Working collaboratively with complicated technology.
What's next for Balance - The Work From Home Browser Assistant
Expand on the feature set to make a more complete and robust system to benefit your work and home life e.g. in depth calendar integration, customisation for the timer and notification system, etc. Integrate more health and wellbeing functionalities such as meditation into the future.
Built With
javascript
react
Try it out
github.com | Balance | The all-in-one work from home browser assistant helping you balance your work life with your home life by cleanly separating the two. | ['Sebastian Chua', 'george fidler', 'Ryan Fallah', 'Tony Tang'] | ['Third Prize'] | ['javascript', 'react'] | 2 |
10,420 | https://devpost.com/software/marketio-okz1y5 | Inspiration
Is there anyone living in a modern city that hasn't experienced the bottomless frustration that comes with having to run back and forth down an aisle looking for the next elusive item on their shopping list?
We have! The solution? Introducing Marketio! Your one stop to cutting your shopping in half*.
*We haven't actually calculated this
What it does
Marketio is a highly-engineered tour guide for your local supermarkets! The app considers a shopping list entered by a user and then uses sophisticated algorithms, a database, and magic to chart an optimal path on the display. The path shows the user how to coast through a store, grabbing all of their shopping list items along the way - no fumbling back and forth; no getting lost. Locations are clickable, and they will reveal the items which will be collected on the adjacent shelf too.
How we built it
The algorithm itself was built in Java, and the GUI was made with the help of JavaFX. The algorithm is a greedy one that considers the general locations of all items as well as proximity to the user. It then runs these variables through a heuristic determination function which helps in choosing the optimal path to the next item(s).
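The combination described, breadth-first search for true walking distances plus a greedy choice of the next item, can be sketched in Python (the project itself is Java; the grid encoding with `'#'` for shelves is an assumption made for illustration):

```python
from collections import deque

def bfs_distances(grid, start):
    """Shortest walking distance from `start` to every reachable cell.
    `grid` is a list of strings where '#' marks a shelf and '.' is walkable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def greedy_route(grid, start, items):
    """Repeatedly walk to the closest remaining item: fast, but not always optimal."""
    pos, route, remaining = start, [start], set(items)
    while remaining:
        dist = bfs_distances(grid, pos)
        reachable = [item for item in remaining if item in dist]
        if not reachable:
            break  # some items cannot be reached at all
        pos = min(reachable, key=dist.get)
        route.append(pos)
        remaining.discard(pos)
    return route
```

Because the choice is greedy, the route is a heuristic approximation; the project's adjustable-weight heuristic function refines exactly this "which item next" decision.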
Challenges we ran into
The actual pathfinding algorithm was the biggest challenge - we had to use a combination of breadth-first-search and a greedy algorithm gaining its heuristic values from a function with adjustable weights for different variables it considers.
Accomplishments that we're proud of
Apart from getting the algorithm working, making the backend and frontend come together (kind of) seamlessly was quite the achievement! We struggled a bit making the app look the way it does now and are quite proud of it!
Along with solving technical problems, we're immensely proud of everyone effectively working together and looking out for one another during the development of this application!
What we learned
Honestly, all of us have experienced some newfound appreciation for the importance of management and documentation in a project involving a team larger than two people! We saw firsthand the effects of not having everybody on the same page and the setbacks it can bring. We also learned how to effectively write solutions under a restricted timeframe which involved a constant need for communication, highlighting to us how important interpersonal skills are in developing a sophisticated application.
What's next for Marketio
Plenty! The app has potential for near infinite expansion. More supported supermarkets, live GPS tracking for the customer, more shopping list features, more optimal algorithm!
Not only can this impact the customer base, but the algorithm can also be used from a business operational perspective: slight modifications can help workers navigate their stores efficiently when they need to stock items, ultimately increasing workplace efficiency! Thus, this app has the potential to provide both business-driven and customer-driven solutions!
Built With
blood
bugs
frustration
java
json
love
python
spaghett
sweat
tears
Try it out
github.com | Marketio | Your fast track to all your groceries! | ['Tiancheng Mai', 'Alexander Vaskevich', 'Marc Kohlmann', 'pzemp1', 'Woojin Lee', 'An Tran'] | ['Best Algorithm'] | ['blood', 'bugs', 'frustration', 'java', 'json', 'love', 'python', 'spaghett', 'sweat', 'tears'] | 3 |
10,420 | https://devpost.com/software/cancelme | Inspiration
Cancel culture has become incredibly prevalent in internet culture, with a huge spike in Google searches in the past year. The term was even Macquarie Dictionary's 2019 Word of the Year. Often public figures such as celebrities, politicians, social media influencers and other individuals in the public eye face incredible amounts of backlash due to posts or comments made online, often many many years ago, which no longer reflect the values they currently embody or may no longer be deemed politically correct. This causes significant issues for the person or group of people being "cancelled", largely surrounding their career, image and level of acceptance by society due the amount of hate being targeted towards them online. In many cases, they are not entirely at fault, and hence an easy way to help prevent or reduce the chances of being cancelled would be greatly beneficial. Not only would this help important public figures, but would also benefit people seeking jobs, or anyone wanting to tidy up the way they are presented to the world through social media.
What it does
#CancelMe is an online web app that allows you to connect your personal social media accounts to quickly and easily remove all traces of the old you that the world should no longer see. We do all the damage control you need in only the click of a button. Users can selectively delete posts based on topic, or remove all posts found by #CancelMe based on their search queries.
How we built it
The web app was built in React, and utilises the Twitter API and Facebook API in order to gather user data. Once the APIs are authenticated by the user, this gives us access to their post history and hence allows us to filter through them and selectively identify what could be problematic posts based on their search terms. From here we remove the unwanted posts by calling the APIs to update their user account. Our website interface was created using React's Material-UI library.
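The selection step reduces to a case-insensitive text filter over the fetched posts. A Python sketch of the idea (the app does this in JavaScript against posts retrieved via the Twitter and Facebook APIs; the function name is hypothetical):

```python
def find_problematic(posts, search_terms):
    """Return the posts whose text contains any of the search terms (case-insensitive)."""
    terms = [t.lower() for t in search_terms]
    return [p for p in posts if any(t in p.lower() for t in terms)]
```

The matching posts are then shown to the user, who can confirm deletion before the API is called to remove them.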
Challenges we ran into
Time was one of the main challenges we ran into. Due to the limited period we had to design, build and test the solution to a problem of our own choosing, this meant we could not fully implement every feature we wanted, or spent as much time on experimenting with different UI colours and layouts to enhance the user experience.
Another challenge we faced was refining our problem during the ideation phase, as there were many pros and cons of various different ideas we came up with. To really sell our product, we needed to flesh out the business model, directly address the target market, be clear about our intentions, and create a product that the world genuinely needs.
Accomplishments that we're proud of
Most of all, we are proud that we stuck through with this project and pushed our way through this hackathon till the very end. Although at many points we struggled with technical implementation, faced countless numbers of merge conflicts and debugged issue after issue, we built something over a weekend and learned a lot during the process.
What we learned
Although many of us had a little bit of previous experience with React, this project really helped to build on these existing foundational skills, and taught us how to create a new product that none of us had ever made before. It also gave us experience working with APIs, in particular for Twitter and Facebook, to fetch and delete post data. This is also something new we had not done before. We also got some really solid resume tips from some of the mentors.
What's next for #CancelMe
There are many additional features that could be implemented in this project, and existing features that could be improved on. One of the main ones would be to further improve the UI, making the website more aesthetically pleasing whilst also increasing usability and making the experience more intuitive for the user. Additionally we would also like to include many more different social media platforms, such as LinkedIn, Reddit, Instagram, and more. Covering a broader scope of platforms would significantly increase the value of the product. Furthermore there are more advanced natural language processing and machine learning techniques that could be used in order to further optimise our searches. While we currently only support the text format, in the future we would also like to bring images, videos, and comments on others' posts into the consideration for cancelling.
Built With
facebook
javascript
jsx
material-ui
react
twitter
Try it out
github.com | #CancelMe | Cancel the cancel culture. | ['annabelzh Zhou', 'liviaw Wijayanti', 'elizabethzhong', 'Vivian Shen'] | ['Best Pitch'] | ['facebook', 'javascript', 'jsx', 'material-ui', 'react', 'twitter'] | 4 |
10,420 | https://devpost.com/software/kidkit | KiDIY is a recommendation app for DIY projects based on user preferences
Built With
flask
javascript
python
react
Try it out
github.com | KidKit | KidKit provides fun DIY projects for all ages - so that you can have hands-on learning experience or just plain fun. | ['LongDangHoang Dang', 'Dylan Duy'] | [] | ['flask', 'javascript', 'python', 'react'] | 5 |
10,420 | https://devpost.com/software/study-in-uni | an empty classroom before the first student gets seated
Inspiration
People lost motivation to study while staying alone. As Universities go virtual, most students have been staying home alone and their mental health conditions fluctuate due to physical distancing and reduced face to face communication. With StudyInUni, we are boosting people's motivation and productivity.
What is StudyInUni?
StudyInUni is an app helping you get motivated and stay focused on your study in a virtual university setting.
Usage
Using StudyInUni is as simple as attending a virtual university. You can enter a building or a library and choose a classroom to study in. There are general rooms and course-specific classrooms. You can sit in a classroom to stay focused and have your study time counted, and you can see other students sitting in the classroom to gain motivation. You can move to other classrooms when you feel like changing your environment or study subject :D.
How it was built
Django for the backend, MySQL as database, Bootstrap and GreenSock for the frontend, Azure VM as the cloud server.
We scraped units of study and its associated time from USYD UOS site for name and countdown of the classrooms and stored them in MySQL database with a Django backend. A queue is invoked to ensure all students will have a place to study.
Bootstrap is used in the frontend for easy and faster development of User interface elements. GreenSock framework is leveraged to animate the web, make the interface vivid, and allow for a smoother user experience.
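The seating queue mentioned above can be modelled in a few lines. A minimal Python sketch of the idea that every student eventually gets a place (the class and method names are hypothetical, not the project's Django code):

```python
from collections import deque

class Classroom:
    """Seats students up to capacity; overflow waits in a FIFO queue,
    so every student eventually gets a place to study."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.seated = []
        self.waiting = deque()

    def enter(self, student):
        if len(self.seated) < self.capacity:
            self.seated.append(student)
            return "seated"
        self.waiting.append(student)
        return "queued"

    def leave(self, student):
        self.seated.remove(student)
        if self.waiting:  # promote the longest-waiting student
            self.seated.append(self.waiting.popleft())
```

A FIFO queue is the natural fit here because it guarantees fairness: the student who has waited longest is seated first.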
Challenges we ran into
Choosing an idea that could be implemented within 24 hours from many brilliant ones
User interface and graphics design
Implementing the random seated algorithm
Debugging :)
Minimize time spent on merging our codes
What we learned and are proud of
Our Tech Lead is capable of identifying the exact technical challenges quickly, embracing those challenges, breaking down and assigning tasks to each of us tailoring our specialties.
We managed to decide our technology stack of choice rapidly.
We collaborated incredibly well on exchanging ideas and helping each other to resolve technical difficulties.
We worked together intensively for 24+ hours!
Roles
Da Xie: Tech Lead, Full-stack Engineer, Database manager, Graphics designer, Pitch designer
Yue Pan: Back-end engineer, Database setup, Pitch designer
Yuanyuan Sheng: Project manager, Back-end engineer, Cloud server manager, Pitch speaker
What's next for StudyInUni
Develop a mobile app.
Features to be implemented.
- Visualization of buildings and enhance graphics design
- Enter self-study rooms with keys
- View others' public profile and send a sticky note.
- Reports on total focus time (sense of accomplishments)
- Compare their classroom study time with other students
- Blocking modes to help control phone usage within a set timeframe
- More on the way!
Expand to other universities by having a dropdown list for students to select, and eventually release to the general public with public study areas and libraries.
Monetization:
Assign users limited tickets to study in the classroom
A shop/store with limited edition items to purchase
Built With
azure
bootstrap
django
greensock
mysql
python
Try it out
github.com | StudyInUni | Stay focused, be motivated in the virtual classrooms | ['Da Xie', 'KimIron Kim', 'Yuanyuan Sheng'] | [] | ['azure', 'bootstrap', 'django', 'greensock', 'mysql', 'python'] | 6 |
10,420 | https://devpost.com/software/scheduwell | Inspiration
We have spent so much time creating planners and trying to fit in tasks. We longed for a more automated process for time management: one where you type in your tasks and their priority ratings, and get a planner automatically created for you.
What it does
ScheduWELL takes tasks and their respective priorities from users and automatically generates a schedule for them. It first accepts the user's fixed schedule (activities that happen at the same time each week). Then it takes user input for tasks they need to complete, along with how important each task is (priority rating). ScheduWELL creates a personal weekly schedule for the user, where their tasks are planned around the fixed ones.
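The core scheduling step, letting higher-priority tasks claim the earliest free slots around the fixed schedule, might look like this in Python (the project uses Node.js; the `(day, hour)` slot representation is an assumption for illustration):

```python
def build_schedule(free_slots, tasks):
    """Greedily place tasks into free hourly slots, highest priority first.

    free_slots -- ordered list of (day, hour) pairs not taken by fixed activities
    tasks      -- list of (name, duration_in_hours, priority) tuples
    """
    slots = list(free_slots)
    plan = {}
    for name, duration, priority in sorted(tasks, key=lambda t: -t[2]):
        if duration <= len(slots):
            plan[name] = slots[:duration]   # claim the earliest remaining slots
            slots = slots[duration:]
    return plan
```

Sorting by priority first means low-priority tasks are the ones dropped when the week runs out of free hours.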
How I built it
HTML, JS, CSS for the front end user interface. Node js for the server side computing.
Challenges I ran into
Linking the server side and front end
Accomplishments that I'm proud of
We are proud of our user interface design which was planned carefully for an optimal user experience. We are also happy with the handling of user input, where we processed it and created a valid schedule for the user which takes into account both task durations and priorities.
What I learned
We learned more about javascript, node.js, and HTML / CSS, as well as using these languages together in a single project. Most valuable is our improved abilities to use Git for version control.
What's next for ScheduWELL
We will now work on simply displaying the timetable in the GUI, as we ran out of time for that this weekend :)
Built With
express.js
node.js
timetable.js
Try it out
github.com | ScheduWELL | Have you ever longed for time management to be a more automated process? We do just that! ScheduWELL takes in user input regarding their schedule and tasks to do, and automatically creates a planner. | ['Pratham Purohit', 'Luis Alemany Traconis', 'Emily Brooks', 'Anosh Samsudeen', 'Rhizii Masurkar'] | [] | ['express.js', 'node.js', 'timetable.js'] | 7 |
10,420 | https://devpost.com/software/hackathon-mj2rzh | Hi-Fi Prototype
react-native IOS/Android/Web Lo-Fi
FLICKER
(Hackathon Project)
A low fidelity Tinder style movie matching system demo using react-native and api requests from movieDB. And a high fidelity prototype in AdobeXD.
Built With
adobexd
api-themoviedb
javascript
react-card-flip
react-native
react-swipe-card
snack.expo.io
ui-kit-https://gumroad.com/l/cknfj
Try it out
github.com
snack.expo.io
xd.adobe.com | FLICKER | Tinder style movie matching system demo using react-native and api requests from movieDB. | ['git-adamzhao Zhao', 'Cathy Ho', 'Madison Nguyen', 'Mehak Dhiman', 'Cameron Wang'] | [] | ['adobexd', 'api-themoviedb', 'javascript', 'react-card-flip', 'react-native', 'react-swipe-card', 'snack.expo.io', 'ui-kit-https://gumroad.com/l/cknfj'] | 8 |
10,420 | https://devpost.com/software/shefup-9wybms | Avatar customisation - 1
Completion of recipe
Avatar customisation - 2
Completed and favourited recipes
Filter list
Recipe list
Recipe Details
List of recipes available
Ingredients List
Camera access
What problem you are trying to solve?
Many people are unable to cook and do not enjoy cooking (which reinforces their inability to cook because they never practise). On the other hand, many people have trouble choosing what to cook because there are too many choices.
Who would use it?
Beginner cooks who need incentive to cook for themselves. People who enjoy gamification and would like to try out new recipes.
What is your prototype?
An android application called ‘ShefUp’.
Project Features:
Filtering: Filter option to show completed/ new/ favourited recipes (not yet implemented)
Favourites: Mark a recipe as a favourite
Completion mark: Shows if a recipe has been successfully completed before
Ingredients and steps checklist: A checkbox list of ingredients that acts as a grocery shopping checklist and a preparation list. The steps are to easily keep track of the user’s progress through the recipe.
Visual preview of ingredients
Progress bar: Visual indication of how much XP has been earned and how far the user is from levelling up. XP is earned upon successful completion of a recipe.
Level up features: Visual animation of the progress bar changing when XP is gained.
Coin reward system: Upon completion of a recipe, a set number of coins will be rewarded to the user.
Option to take a photo of the dish to save it
Avatar customisation: choose from hair and background. Coins are used to purchase a new avatar custom feature and the user’s avatar gets updated.
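The XP and level-up loop behind the reward features can be sketched in Python (the app is written in Kotlin; the 100-XP-per-level threshold is a made-up value for illustration):

```python
def add_xp(level, xp, gained, xp_per_level=100):
    """Apply recipe-completion XP; level up each time the progress bar fills."""
    xp += gained
    while xp >= xp_per_level:
        xp -= xp_per_level
        level += 1
    return level, xp
```

The same pattern drives the progress-bar animation: the UI animates once per loop iteration so big XP gains can trigger multiple level-ups.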
How does your prototype work? How did you implement your prototype?
After determining that we wanted to develop a mobile app, we decided to work with the Android platform because the team had experience with it. Initially, template screens were implemented straight from the wireframe to give everyone on the team a better grasp of the navigation of the app. This allowed the backend to slowly be implemented while the front end was being designed. The backend was given more importance in this application because this was a hackathon where a minimum viable product was necessary.
The backend was kept as simple as possible within the app with a repository pattern implemented that connected to the Room jetpack library (an implementation of SQLite). This allowed for persisting data on the local storage of the device with the app. There was no attempt at connecting up a remote database service due to lack of experience with the technology.
The frontend was implemented by the rest of the team in two parallel efforts. The designers iterated and created assets for the application, including mockups of the screens from the templates and image icons for the interface. Our front-end developer would then implement these changes as they came.
This allowed separate parts of the codebase to be managed and updated as we planned and developed. This was an attempt to reduce the complexity of any codebase merges, but we still experienced merge issues.
Reference to any third party material used
No explicit third party material was used besides the adapting of the banana bread recipe found here:
https://sallysbakingaddiction.com/best-banana-bread-recipe/?fbclid=IwAR3oRsp8_syYUmUzHgVNUm7FTykqMSb2TZj9sJ7Eu94sDHHS2XjlcFPLeF0
All image assets were made by the team ourselves. The code architecture was made referring to past projects.
Background music - Invincible by DEAF KEV (NCS)
Link to your repository:
https://github.com/SorenAlex/CookingApp
Images/ screenshots of your project:
https://drive.google.com/drive/folders/1-XNX4JHZKj7QBsWdeZmNcWI3Zv4xwjEl?usp=sharing
The roles of each participant in the project team:
Emily Ha: Content and asset creator,
Donna: Content and asset creator,
Rebecca: Content creator, Video editor,
Anh Tu: UX designer, Video editor,
Alex: Backend developer,
Vivian: Frontend developer.
Built With
android-studio
kotlin
xml
Try it out
github.com | ShefUp | Chef the F up and cook! | ['Rebecca Kung'] | [] | ['android-studio', 'kotlin', 'xml'] | 9 |
10,420 | https://devpost.com/software/call-grandma | Client interface for elderly
Inspiration
The other night I remembered it had been a month since I last called my grandma. I'm 100% sure it's not because she doesn't want to call, but I was so caught up with work that I forgot.
She cannot make a video call on Wechat; she also doesn't know how to dial international numbers.
She nearly has no way to reach out to me when she is missing me.
That's why she needs to wait for my calls, and sometimes the gap can be as long as months. And given the current lockdown situation, I'm sure there are many people who face the same situation. The mere thought of it is heartbreaking.
The idea of this app is simple: we make social media/messaging platforms just like Wechat/Facebook/Whatsapp etc. more user-friendly for the elderly, or any people who are not tech-savvy. They do not need fancy functions like sharing live locations or money transfers, but I believe they do deserve more attention and care than what they get.
What it does
This app bridges the communication gap by providing an extremely accessible and user-friendly interface to allow non-tech-savvy people to send voice messages and connect with family and friends. Features like huge buttons, a voice guide, and a minimum of operation steps make the app easier to use than a conventional messaging app.
How I built it
Fake it to make it. Building the actual app seemed easy at first, but tons and tons of bugs made it hard to build even with Python.
The basic idea is that I imitate the interface and logic using a Flask web app (which is actually very close to how an actual app on a smartphone would work). Then good old friend Bootstrap helped me lay out the user interface. Though it is still a pain to make the HTML pages.
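The one-tap flow could be sketched as a minimal Flask app; the route names, the contact model, and the in-memory storage here are assumptions for illustration, not the actual app's code.

```python
# Minimal sketch of the Flask prototype idea (route names, storage, and
# the contact model are assumptions, not the real app's implementation).
from flask import Flask, request, jsonify

app = Flask(__name__)

# In-memory message store: one list of voice-message records per contact.
messages = {}

@app.route("/send/<contact>", methods=["POST"])
def send_voice_message(contact):
    # The real app would store an audio blob; here we just record metadata.
    record = {"from": request.form.get("from", "grandma"),
              "audio_url": request.form.get("audio_url")}
    messages.setdefault(contact, []).append(record)
    return jsonify(ok=True, count=len(messages[contact]))

@app.route("/inbox/<contact>")
def inbox(contact):
    # Family members poll this to hear new voice messages.
    return jsonify(messages.get(contact, []))
```

The elderly-facing screen would need just one huge button wired to the `/send/...` endpoint, which is what keeps the operation steps to a minimum.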
Total time of researching and coding amounts to approximately 20 hours.
Challenges I ran into
Frontend -- It is just hard to get things right;
Backend -- I was thinking too much into details when I started this, like a working user-verification process. It ended up taking a lot more time than expected.
Accomplishments that I'm proud of
There is at least a working interface I can show off.
The backend logic is pretty much sound for the parts I finished.
I think the idea is great.
What I learned
I do not run as fast as I imagine in a hackathon.
Doing a hackathon alone is fun in a special sense. I found myself in the flow state when I tried to get the HTML pages right -- that is, I did nothing but typing and thinking for two whole hours.
What's next for Call Grandma
Iterate on the UI design with accessibility as the top priority
Consider other aspects like security and privacy
User-based testing
More functionality: personalized tutorials for the elderly, mini-games that encourage communication with the elderly, managing elderly social circles, etc.
More testing, then get one for my grandma.
Built With
bootstrap
python
Try it out
github.com | Call Grandma | Make tech easier for the elderly! | ['Shupeng Han'] | [] | ['bootstrap', 'python'] | 10 |
10,420 | https://devpost.com/software/floats | What is FLOATS?
Inspiration
In Australia, although only 4.5% of reported cases have been children with mild or no symptoms, they are still vectors for the spread of COVID-19.
Children are usually reluctant to follow instructions such as wearing a mask all day, washing their hands, and having to stay 1.5 meters away from their friends. In order to minimize infection, we need to create fun games encouraging children to actively engage in preventative measures!
We've taken inspiration from the classic Flappy Bird game and adapted it to a more relevant context: our current pandemic situation.
What it does
Our goal is not only to educate young children in particular, but also to spread awareness of the impacts this virus can bring to us. We have attempted to create a comedic game to do so.
Our idea is to create special mechanics for each item present in the game, each bringing cool effects for the player to enjoy and mirroring how these essential items can limit transmission of the virus!
How we built it
Our team designated roles from designing icons and backgrounds to be featured in our game, to coding in pygame and creating different game modes in order to appeal to our audiences.
Challenges we ran into
Most of us have been learning as we go! Communication and distribution of tasks may have also been a challenge we faced, but we've gotten there in the end!
Accomplishments that we're proud of
We've learnt new concepts and ideas, coded in pygame, designed cute icons and made new friends :)
What's next for FLOATS?
In the upcoming future, FLOATS has prospects for expansion and further updates. We want to work towards further education on other pathogens and their respective treatments. This would allow for society to become educated on what over the counter drugs they could purchase or when it is necessary to see a professional. Our aim is to lessen the burden on the healthcare system by reducing the incidence of preventable diseases and promoting precautionary measures. More modes, environments and a rewards systems via partnering up with other organisations will be coming soon to different platforms.
Submission
Git repo link:
https://github.com/fasfdasdasdasdas/FLOATS.git
Slides:
https://docs.google.com/document/d/1zAFgevOnSFQ4ZWTYMTJ3tMxPJig4_b0NkBzlLB8RgfU/edit#
Description of the project and its features:
We have designed a desktop COVID game specifically targeting young children in order to educate them and raise awareness of current and future pandemics. This game takes inspiration from ‘Flappy Bird’ and features 3 different game modes and speed levels in order to engage users. Developed using Pygame, we have intended to shift backgrounds as players advance further in the game, to reflect the different places affected by the virus.
The Mask Collector Mode is one game mode, with the objective of collecting 5 masks, promoting the necessity of masks. Each time you collect a mask, positive connotations and boosters speed up the player, making it more interesting for the kids. Mechanisms include jumping up and down and moving left and right via the space-bar and arrow keys, requiring both hands so that users minimise face touching. Each time the virus hits a random person moving on the ground, they will catch the virus. The aim is to make sure you don't give the virus to someone, by catching the required items. We hope not only to teach children, but also to make the lesson stick.
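The Mask Collector rules described above can be sketched as plain game logic, independent of the Pygame rendering; the event names and the exact speed-boost rule below are assumptions based on the description, not the actual game code.

```python
# Illustrative sketch of the Mask Collector scoring loop. The event names
# and the speed-boost multiplier are assumptions, not the real game's code.
MASKS_TO_WIN = 5
BOOST_PER_MASK = 1.2  # assumed multiplier: each mask speeds the player up

def play_round(events):
    """Process a list of events ('mask' or 'hit') and return the outcome."""
    masks, speed, infected = 0, 1.0, 0
    for event in events:
        if event == "mask":
            masks += 1
            speed *= BOOST_PER_MASK  # positive reinforcement for collecting
            if masks >= MASKS_TO_WIN:
                return {"won": True, "speed": speed, "infected": infected}
        elif event == "hit":  # the virus hits a person on the ground
            infected += 1
    return {"won": False, "speed": speed, "infected": infected}
```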
References to any open source or third party material used in development:
freeCodeCamp.org 2019, Pygame Tutorial for
Beginners - Python Game Development Course, YouTube, viewed 29 August 2020,
https://www.youtube.com/watch?v=FfWpgLFMI7w&t=201s
.
Tech With Tim 2018, Pygame Side-Scroller Tutorial #1 - Scrolling Background/Character Movement, YouTube, viewed 29 August 2020,
https://www.youtube.com/watch?time_continue=710&v=PjgLeP0G5Yw&feature=emb_title
.
Tech With Tim 2018, Pygame
Tutorial #2 - Jumping and Boundaries,
YouTube, viewed 29 August 2020,
https://www.youtube.com/watch?v=2-DNswzCkqk
.
Tech With Tim 2018, Pygame Tutorial #3 -
Character Animation & Sprites,
YouTube, viewed 29 August 2020,
https://www.youtube.com/watch?v=UdsNBIzsmlI
.
Tech With Tim 2018, Pygame
Tutorial #4 - Optimization & OOP,
YouTube, viewed 29 August 2020,
https://www.youtube.com/watch?v=xfnRywBv5VM
.
Roles of each participant in the project team:
Aaron Sandelowsky: Ideas and planning, digital asset design.
Anna Su: Ideas and planning, coding for game prototype, devpost descriptions
Clare Kuys: Ideas and planning, design for slides and game, scripting, devpost descriptions
Daniel To: Ideas and planning, coding for game prototype, devpost descriptions
Eric Zhang: Ideas and planning, coding for game prototype
Eugenie Kim: Ideas and planning, design for slides and game, scripting, devpost descriptions, pitch video
Built With
python
Try it out
github.com | FLOATS | Pandemics and People: Promote practises with Python! | ['Eugenie Kim', 'ckuy', 'sandelowsky2001', 'Anna Su', 'Daniel To'] | [] | ['python'] | 11 |
10,420 | https://devpost.com/software/ecokeen-6sftjp | Travel History
Grocery History
Settings
Home Page
Inspiration
Greenhouse gases are the gases that keep heat in the atmosphere, and carbon emissions from activities such as fuel use are the main contributing factors to rising levels. Many people do not realise how much their day-to-day activities impact the environment, and currently there are not many applications that allow users to easily understand this.
What it does
We have built an easy-to-use and interactive mobile app that encourages users to keep track of their carbon footprint and recommends alternative strategies to reduce it. The app will collect data through APIs and manual entry to calculate a user's carbon footprint score. This will include:
grocery vendors such as Woolworths and Coles to import grocery data
Google Maps to import travel distances
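A carbon footprint score of this kind could be computed roughly as follows; the emission factors, item names, and data shapes here are illustrative placeholders, not real figures or the app's actual data model.

```python
# Sketch of a carbon-footprint score from travel + grocery data.
# The kg-CO2 factors below are illustrative placeholders, NOT real figures.
KG_CO2_PER_KM = {"car": 0.2, "bus": 0.1, "train": 0.04, "walk": 0.0}
KG_CO2_PER_ITEM = {"beef": 27.0, "vegetables": 2.0, "milk": 1.9}

def footprint_score(trips, groceries):
    """trips: [(mode, km)]; groceries: [item]; returns total kg of CO2."""
    travel = sum(KG_CO2_PER_KM.get(mode, 0.0) * km for mode, km in trips)
    food = sum(KG_CO2_PER_ITEM.get(item, 0.0) for item in groceries)
    return round(travel + food, 2)
```

Trips would be imported from Google Maps and grocery items from the vendor APIs, with unknown modes or items contributing zero until a factor is known.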
How we built it
The mobile application is built with React Native and the back end uses Java Spring. We designed this app by thinking about how to engage users and make tracking their carbon footprint engaging.
Challenges we ran into
Learning Spring and React Native as new frameworks
Interactions and communication was difficult in a virtual environment
Accomplishments that we're proud of
The teamwork and the deliverables we produced for this project
Learnt new frameworks
What's next for Ecokeen
A feature that allows competitions with friends and have worldwide rankings for leaderboards
Partnering with supermarkets to allow a reward incentive for completing certain carbon footprint tasks or daily tasks
Integrating with Google maps to automate travel details input
Integrating with supermarket API to automate the grocery input
Recommendations for users on how to improve their carbon footprint
Built With
react-native
spring
Try it out
github.com | Ecokeen | An Application that keeps track of user's carbon footprint | ['Mohammed Mustafa Fulwala', 'Christopher Pidd', 'Brendon Lam', 'Shrawani Bhattarai'] | [] | ['react-native', 'spring'] | 12 |
10,420 | https://devpost.com/software/guess-whomst | Inspiration
Our project was inspired by the high demand for online board game alternatives as a result of COVID-19.
What it does
Guess Whomst? is an implementation of Guess Who in an online environment with fully customisable character cards which can be specified and uploaded by the users. The application is hosted on a web app and is capable of hosting different game sessions for different users simultaneously.
How I built it
The application back-end was built in Python using the Django web framework, and linked to an SQLite database for storage of active game sessions. The front-end was built in JavaScript using HTML5 Canvas.
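A framework-agnostic sketch of the game-session bookkeeping that the Django models and SQLite database handle in the real app; the field names and join-code scheme here are assumptions for illustration.

```python
import secrets

# Framework-agnostic sketch of hosting multiple game sessions at once.
# Field names and the join-code scheme are assumptions, not the app's models.
sessions = {}

def create_session(host, cards):
    code = secrets.token_hex(3)  # short code other players use to join
    sessions[code] = {"players": [host], "cards": list(cards), "turn": 0}
    return code

def join_session(code, player):
    sessions[code]["players"].append(player)

def next_turn(code):
    # Rotate whose turn it is to ask a question.
    s = sessions[code]
    s["turn"] = (s["turn"] + 1) % len(s["players"])
    return s["players"][s["turn"]]
```

Keying sessions by a random code is what lets many independent games run simultaneously on one server, with custom character cards stored per session.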
Challenges I ran into
A lot of time was spent figuring out how to design class models in Django to be compatible with storage in the database.
Didn't have enough time to fully interface the front-end and back-end to achieve our MVP.
Accomplishments that I'm proud of
Not sleeping.
What I learned
As our team had no prior experience with most of the technologies and frameworks used, across the course of the hackathon we learnt a significant amount about JavaScript, Django, databases and web programming.
What's next for Guess Whomst?
Complete front-end and back-end interfacing
Cleaner UI
Additional Game Functionality
Built With
django
javascript
python
sqlite
Try it out
github.com | Guess Whomst? | Customisable Online Guess Who | ['David Young', 'Lewis Watts', 'kimianassaj'] | [] | ['django', 'javascript', 'python', 'sqlite'] | 13 |
10,420 | https://devpost.com/software/moneyheist-2-0 | We help classify illegal and unsafe bitcoin transactions based on Bitcoin address
We help you identify safe (white) bitcoin transactions based on Bitcoin address
Inspiration
With the rise in cryptocurrencies such as Bitcoin, there has been a tremendous surge in ransomware attacks owing to the complexities of the system and the anonymity enjoyed by the illegal entities. Existing heuristic mechanisms in place to detect such malicious transactions suffer from several fallacies and have failed to deploy advanced techniques for detection purposes. Ransomware is malicious software that takes control of one's computer, affects it and releases the system upon securing a ransom payment. There are several reasons which make Bitcoin a hotspot for illegal transactions. Bitcoin continues to remain the only virtual currency which has a widespread user-base and is convertible in nature. Bitcoin transactions can be carried out anonymously by cybercriminals as the system does not require identity verification and only requires them to send a public Bitcoin address via anonymous networks. The entire process from wallet creation to accepting the payment and laundering it is fully automated and is additionally incontrovertible, meaning a payment once made can't be charged back. Furthermore, the usage of anonymous networks such as Tor and non-traceable payment networks makes it extremely difficult for law enforcement agencies to track down such ransom websites and shut them down. FBI agent Joel DeCapua has suggested that more than $144 million has been paid in Bitcoin ransom payments between 2013 and 2019, and similar figures have been estimated by a Google/Princeton study.
What it does
The aim of the proposed model is to leverage machine learning techniques such as supervised learning to identify malicious ransomware Bitcoin transactions, based on highly influential features of the Bitcoin addresses used that display high utility in detecting a ransom transaction.
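The supervised-learning setup could look like the following sketch: a classifier trained on per-address features. The feature names and the tiny synthetic dataset below are illustrative assumptions, not the project's actual features or data.

```python
# Hedged sketch of a supervised classifier over per-address features.
# Feature names and the synthetic training data are assumptions.
from sklearn.ensemble import RandomForestClassifier

# Assumed per-address features: [tx_count, total_btc_received, unique_peers]
X = [[5, 0.1, 3], [400, 90.0, 350], [8, 0.3, 4], [350, 70.0, 300]]
y = [0, 1, 0, 1]  # 0 = white (safe), 1 = ransomware-linked

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def classify_address(features):
    """Label an address's feature vector as 'white' or 'ransomware'."""
    return "ransomware" if model.predict([features])[0] == 1 else "white"
```

In practice the features would be engineered from the transaction graph of each address, and the model evaluated against labelled ransomware address datasets.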
How I built it
Using Jupyter Notebook.
Challenges I ran into
A lagging PC, lack of coffee to keep me awake the whole night, and several other challenges!
Accomplishments that I'm proud of
This will now be the world's first-ever model for such a task!
What I learned
Advanced ML techniques, time management.
What's next for MoneyHeist 2.0
Making it available to my fellow netizens! Yes.
Built With
jupyter
machine-learning
python
sklearn
Try it out
github.com | MoneyHeist 2.0 | Supervised Learning model for identification of ransomware transactions in a heterogeneous bitcoin networks | ['Soham Patil'] | ['ML/AI Theme Winner'] | ['jupyter', 'machine-learning', 'python', 'sklearn'] | 14 |
10,420 | https://devpost.com/software/physiotherapy-aid-tjiage | Inspiration
During COVID, and in general, working professionals (especially in IT) can't find time for exercise and sit for long hours, which can cause physical problems. A virtual physiotherapist would let them work out at home, do their exercises with proper guidance, and benefit from it.
What it does
Our application uses a tensorflow.js (browser-based) model to make predictions on the state of the current user's pose. It has been trained on a dataset of images created by us (~300 images per pose) to predict whether the position is correct or incorrect - and what makes it so. I used GCP Machine Learning Studio, a GCP machine learning tool, to train our models on the various physiotherapy poses. The GCP Speech-to-Text API was also used to make the application accessible to the visually impaired: the user can start their exercises via speech in various languages using the GCP Translator Speech API remotely, which is more convenient and easier to use for our target audience. The application utilizes GCP services for text-to-speech. This is useful for the visually impaired, as they can hear if they are in the right position and the application will tell them to adjust their posture if incorrect. We also use the webcam to track the user's movement, which is fed as input to the posenet machine learning model and outputs a posture overlay on the user's body.
How I built it
This is fully supported on Desktop/Android Google Chrome.
What's next for Physiotherapy Aid
Make it available for Gym enthusiasts
About the project :
AIDEN. Your physio assistant.
By Sanskar Jethi, Ankit Maity, Shivay Lamba
Access the live application at:
https://aidenassistant.azurewebsites.net/
View our presentation slides at:
https://docs.google.com/presentation/d/1wfyXXhWVZlDHjmuZIOpDDSzM61cEW7bwP6AOjpxJU_A/edit?usp=sharing
Our demo video:
https://youtu.be/9HOEje4E2i8
AIDEN is a web app utilising tensorflow.js, a browser-based machine learning library, to enable accessible physiotherapy for the visually impaired and other people as well - talking through exercises by responding to users' postures in real time.
AIDEN makes it easier for users to not only complete but to improve their techniques independently.
How to use AIDEN
Allow browser access to microphone and camera
Say “start exercises” or press “Start” or any particular language ( translation )
Try to do a “back bend stretch” approximately 8 feet away from the webcam with your whole body in frame, as in the demo video. (Only works in one orientation currently.)
Technology
Machine Learning - tensorflow.js
AIDEN uses a tensorflow.js (browser-based) model to make predictions on the state of the current user's pose. It has been trained on a dataset of images created by us (~300 images per pose) to predict whether the position is correct, or incorrect - and what makes it so.
We have used Azure Machine Learning Studio, an Azure Machine Learning tool, to train our models in the various physiotherapy poses.
Azure Cognitive Services Speech-to-Text API was also used to enable the application to be accessible by the visually impaired. The user can start their exercises via speech in various languages using Azure Translator Speech API remotely and this is more convenient and easier to use for our target audience.
The application utilizes Azure Cognitive Services for text-to-speech. This is useful for the visually impaired as they can hear if they are in the right position as the application will tell them to adjust their posture if incorrect.
We also use the webcam to track the user's movement which is fed as input to the posenet machine learning model and outputs posture image on the user's body.
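One simple way posenet keypoints could be turned into correct/incorrect posture feedback is a joint-angle check, sketched below; the keypoint choice, target angle, and tolerance are assumptions for illustration, not the trained model the app actually uses.

```python
import math

# Illustrative sketch: check a joint angle computed from pose keypoints
# against a target range. Keypoints, target, and tolerance are assumptions.
def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by points a-b-c, each an (x, y)."""
    ang = math.degrees(math.atan2(c[1] - b[1], c[0] - b[0])
                       - math.atan2(a[1] - b[1], a[0] - b[0]))
    return abs(ang) if abs(ang) <= 180 else 360 - abs(ang)

def pose_feedback(hip, shoulder, wrist, target=160.0, tolerance=15.0):
    """Return spoken feedback based on the angle at the shoulder joint."""
    angle = joint_angle(hip, shoulder, wrist)
    if abs(angle - target) <= tolerance:
        return "correct"
    return "raise your arms" if angle < target else "lower your arms"
```

The returned string is exactly the kind of short phrase the text-to-speech service could speak aloud for visually impaired users.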
Key Azure Services that have been used in our product:
Azure Storage Services - storing the machine learning model (TF)
Azure Cognitive Services (inference)
Text-to-Speech
Speech-to-Text
Custom Vision (to classify between correct and incorrect images)
Translator
Azure CDN (three.js and other libraries)
Azure Web App with continuous deployment
Linux Virtual Machine (for hosting the website)
Azure CLI (for deployment)
Azure Cloud Shell (for web app continuous deployment integration)
Azure Pipelines (continuous deployment feature)
Visual Studio Code (for all our life <3)
Supportability
This is fully supported on Desktop/Android Google Chrome.
Client Folder
The web application is located in the clients folder. The web application consists of two files: index.html and index.js.
Index.html
The index.html contains all the HTML that forms the backbone of the website.
We have used the bootstrap open-source CSS framework for our front-end development.
Index.js
index.js contains the Javascript code for the web application. This works with HTML to add functionality to the site.
Loads the model and metadata and handles image data.
Built With
gcp
javascript
machine-learning
tensorflow
Try it out
github.com | Physiotherapy Aid | Your physio assistant. | ['Sanskar Jethi', 'Shivay Lamba', 'QEDK .', 'Pulkit Midha', 'rahul garg'] | ['Matchathon & Incubation'] | ['gcp', 'javascript', 'machine-learning', 'tensorflow'] | 15 |
10,420 | https://devpost.com/software/vibecheckr | Text interface with prompt - we're detecting sadness in anthy's tone.
Text interface with prompt - detecting some anger, resentment.
Sad tone detected
Inspiration
We were inspired by the difficulties we encounter with online communication - unlike a conversation in person, online text-based communication is often roadblocked by not being able to fully understand the tone of the other person behind the screen. Because of this, we often neglect to reach out to our loved ones, despite it being easier than ever before with online communication platforms.
Although we don't believe a robot could do this better than we can, we wanted a way to celebrate mental health awareness, and encourage friends to reach out to each other. So we made vibecheckr!
What it does
vibecheckr is an addition to instant messaging platforms that augments the experience of online communication by facilitating the better understanding of the other person behind the screen. Typically, it would be deployed as an opt-in service for mental health events in order to spread awareness of being able to reach out to loved ones and how they're feeling.
As users talk in direct messages or group chatrooms, the tone of text in every user's message is analysed.
Over time, our algorithm detects when a user could be feeling a certain way, and then prompts the other users in the chat with information on how that user may be feeling, based on the tone of their messages. When users are informed of how their loved ones could be feeling, they can better navigate their online communication with them.
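The prompting behaviour could work roughly like the rolling-average heuristic below; the window size, threshold, and score format are assumptions, and real per-message scores would come from the Watson Tone Analyzer API.

```python
from collections import deque

# Sketch of the prompting heuristic: track recent per-message tone scores
# and prompt when a tone's rolling average crosses a threshold. The window
# size and threshold are assumptions, not the service's actual tuning.
WINDOW, THRESHOLD = 5, 0.6

class ToneTracker:
    def __init__(self):
        self.recent = {}  # tone name -> deque of recent scores

    def add_message(self, tone_scores):
        """tone_scores: e.g. {'sadness': 0.7}; returns tones to prompt on."""
        prompts = []
        for tone, score in tone_scores.items():
            window = self.recent.setdefault(tone, deque(maxlen=WINDOW))
            window.append(score)
            if len(window) == WINDOW and sum(window) / WINDOW >= THRESHOLD:
                prompts.append(tone)
        return prompts
```

Averaging over a window, rather than reacting to a single message, is what makes the prompt reflect a sustained mood instead of one-off wording.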
How we built it
The Juicy Trackpants used the IBM-Watson Tone Analyzer API to analyse tones of messages. To demonstrate vibecheckr's capabilities, we used ReactJS and ExpressJS to build an example messaging application implementing an algorithm for the tone analysing and prompting features of vibecheckr.
Challenges we ran into
We were met with all kinds of new challenges throughout this hackathon. We spent a lot of time choosing a suitable idea to start working on, so we didn't have much time for implementation. Many team members were unfamiliar with Node.JS and React, which made progress in some areas slow and a little frustrating. Even getting the development tools to work properly on all of our computers was the source of some headaches. The business side of the team faced difficulty with competitive analysis and market research due to the nature of our product.
Accomplishments that we're proud of
We're proud of the fact we were able to develop an initial idea into something more substantial. This involved finding a problem in the world that we could help with, and using it to guide the development and vision of our product. We're proud that we made a visually pleasing prototype, and we're proud of how each team member rose to the challenge by rapidly learning new skills and asking the right questions.
What we learned
For this hackathon, we gained a greater understanding of the frameworks ReactJS and Express, as well as tools alongside them, such as the atlaskit design system for our UI components. One thing we obtained a much greater understanding of was the use of APIs, with our extensive use of the Watson Tone Analyzer API from IBM.
Besides these technical skills, we also gained a much greater understanding of the planning and teamwork required to prepare a hackathon project - including defining our main idea, use cases, viability as a product and writing code.
What's next for vibecheckr?
vibecheckr intends to implement further partnerships and features to stand out more as a viable service to add to existing instant messaging services.
As a service, this includes limited-time roll-outs of vibecheckr through various messaging applications for mental health focused events, such as R U OK? Day through September. For example, a partnership with Facebook's Messenger platform could use vibecheckr to achieve R U OK? Day's goals of connecting with people who have emotional insecurity, addressing social isolation and promoting community cohesiveness.
Software-wise, this includes fun, insightful user profiles for common tones used in messages, and colour-coded message tones that give more insight into the feelings in messages. As the service develops, vibecheckr could become an almost viral online social connection medium that celebrates mental health awareness, especially in the context of online text communication.
Built With
atlaskit-design-system
express.js
ibm-watson-tone-analyzer
javascript
react
Try it out
github.com | vibecheckr | Start the conversation. vibecheckr is a service deployed by messaging applications. With an inbuilt tone analyser, vibecheckr allows users to better understand the other person behind the screen. | ['An Thy Tran', 'Bayse McCarthy', 'Angus-McIntyre', 'rlinn25'] | [] | ['atlaskit-design-system', 'express.js', 'ibm-watson-tone-analyzer', 'javascript', 'react'] | 16 |
10,420 | https://devpost.com/software/dlct-6rophg | GUI Input
GUI Output
Inspiration
When we don't get classes with our friends at university, we feel disengaged and lonely. It's already quite difficult for university students to maintain friendships, especially when you aren't in the same classes, so we have created the solution!
What it does
This innovative program allows each student in a friend circle to input their availabilities and preferences into a simple to use GUI. The students also provide their unit code, which allows the software to obtain all the class times. With some magic (and code), it will present the best classes to pick so that the greatest number of friends are able to be together according to their preferences.
Development Process
GUI
Our resident C# expert Allan Chan designed and created the interactive interface from scratch. He designed a way for the student inputs to be stored in their own database and to interact with the Python backend. After the algorithm has processed the data, the GUI retrieves it and outputs it in an intuitive manner on a simple-to-read timetable.
Backend
Using Python, we designed a way to synthesise the student input into a manageable collection of availabilities and preferences. We then parsed this information into the algorithm, which provided the best classes, prioritising keeping all the friends together and matching the highest number of student preferences.
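The core selection step might look like the following simplified sketch, which picks the class time that keeps the most friends together; the data shapes are assumptions, and the real algorithm also weighs preferences.

```python
# Simplified sketch of class selection: for each offered time slot, count
# how many friends are free, then pick the slot that keeps the most
# friends together. Data shapes are assumptions, and preference weighting
# from the real algorithm is omitted here.
def best_class(class_times, availabilities):
    """class_times: list of slots; availabilities: {friend: set of slots}."""
    def friends_free(slot):
        return sum(slot in free for free in availabilities.values())
    return max(class_times, key=friends_free)
```

Running this per unit, with each friend's scraped class times as the candidate slots, gives one recommended class per unit for the whole group.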
Challenges we ran into
1) Having the C# GUI interact with the Python backend.
2) Designing the algorithm to fairly calculate the required output.
3) Considering a lot of different situations friends can be in.
Accomplishments that we're proud of
1) Successfully finishing our first hackathon.
2) Successfully implementing the GUI and the Python backend.
3) Successfully using some Regexes, which we only learnt about last week.
What we learned
We learned how to create a functional program from start to end, with a working front-end and back-end. We also gained valuable skills in designing and creating within a team, including dividing the work, having regular team meetings and good commenting practices. We also gained substantial proficiency in the usage of Stackoverflow.
What's next for Friend+
1) Implementing a smarter algorithm that splits the group into smaller groups to decrease the chance of a friend being by themselves.
2) Make the program interact with the university database in order to scrape the class times more efficiently.
3) Account for multiple classes at once.
Built With
c#
python
Try it out
github.com | Friend+ | A collaborative university timetabling system, prioritising on getting classes with your friends. | ['Zhijie Cao', 'Damien Ambegoda', 'georgialewis00', 'Isuru Peiris', 'Allan Chan'] | [] | ['c#', 'python'] | 17 |
10,420 | https://devpost.com/software/tiktalk | TikTalk logo
TikTalk chat screen
Link to repository
Inspiration / Problem
TikTalk is aimed at anyone who is willing to put themselves out there and text in a way that is based in the heart and not in the head. Social media has become calculated and cunning, sending messages in a way that are aimed to undercut and undermine others rather than being honest. COVID-19 has exacerbated this issue as many people now communicate solely through text.
What it does
TikTalk is a messaging service that allows you to watch users type in real time. You're able to sign in, join a room and begin chatting with your friends straight away. This form of messaging mimics a real conversation and revolutionises traditional messaging applications, which rely on heavily edited thoughts. TikTalk lets everyone see what you think the moment something happens, not allowing for the superficiality of a turn-based text messaging app. As things happen, your organic reaction is captured and put on show - not an app for the faint-hearted! (and not banned in the US yet!).
Challenges
The biggest challenge we ran into was trying to decide what to develop. There are so many avenues and pathways we could've gone - for example, we thought to create a map with some sort of COVID tracing but decided against it due to time constraints. Coming to one decision about what choice was best for us to pursue was definitely a challenge! Aside from that, we had the typical software bugs along the way but with 6 people in the team, it was easy to put our heads together and try to work out what was wrong!
How we made it
TikTalk was written almost completely in raw HTML, JavaScript, and CSS. Firebase (platform as a service) was used to service the database and host the web application. These decisions were designed to minimise our time to market and ensure a high-quality, functional prototype we could continue to build on based on feedback!
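The real-time typing model the Firebase database enables can be illustrated (in Python, for brevity) as a shared per-user buffer that every keystroke updates and every client renders; the names here are assumptions, not the app's actual schema.

```python
# Sketch of live-typing state: each keystroke mutates a shared per-user
# buffer, and the resulting snapshot is what every client would render.
# Class and key names are assumptions, not the Firebase schema.
class Room:
    def __init__(self):
        self.buffers = {}  # user -> text they are currently typing

    def keystroke(self, user, key):
        text = self.buffers.get(user, "")
        # "\b" models backspace, so even deletions are visible to everyone.
        self.buffers[user] = text[:-1] if key == "\b" else text + key
        return dict(self.buffers)  # snapshot pushed to all clients
```

Because every keystroke (including deletions) updates the shared state, readers see the unedited thought process rather than a polished final message.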
Accomplishments we're proud of
It's easy to say that the biggest accomplishment is actually finishing this project in the given time period but, even further than that, it was our ability to problem solve. There were so many bugs to fix, and instead of throwing in the towel we thought outside the box and came up with more sophisticated, efficient, or streamlined solutions to whatever problem we faced - and we think that's pretty cool! It's a skill we'll bring into our subjects at uni and our future placements!
Built With
css
firebase
html
javascript
Try it out
syncs-hackathon-2020.web.app
github.com | TikTalk | A real-time text streaming chat app. | ['Sean Tran', 'sassaf01', 'Kevin Leung', 'ckesoglou'] | [] | ['css', 'firebase', 'html', 'javascript'] | 18 |
10,420 | https://devpost.com/software/wolfie | Wrath of AI
Level optimization
Promise of Wolfie
Inspiration
During these morose times I want to do my share of spreading joy and fun across the world.
I wanted to make a game to try my hand at advanced AI and spawn optimization techniques, and at the same time share the fun with the rest of the world.
What it does
The AI is based on a statistical model which helps predict the optimum position for a hit.
The spawn optimization techniques allow infinite levels without causing any major lag.
Character customization and a kill-based ranking system have been implemented.
The UX/UI has been designed to make gameplay intuitive.
Above all, it does its bit in adding joy to the world!!!
How I built it
The game-play was implemented using Unreal Engine with the assets created using Blender, Photoshop and Illustrator.
My experience in AI helped in modelling the optimum position seeking algorithms.
Challenges I ran into
The spawn optimization was very challenging. Initially the calculations for the navigation mesh to be built exceeded 10 million, as I didn't have time to add level-streaming functionality. As my computer is in no way capable of doing that in a day, I had to come up with an entirely different algorithm to enable infinite levels without using level streaming.
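The alternative algorithm isn't described in detail, but one classic approach to this kind of spawn optimisation is object pooling: recycling a fixed set of spawned actors instead of creating and destroying them per level. A hypothetical sketch (in Python rather than Unreal C++, with all names assumed):

```python
# Hypothetical object-pool sketch: a fixed pool of enemies is recycled,
# so an "infinite" number of levels never grows memory. All names and the
# oldest-first reuse policy are assumptions, not the game's actual code.
class SpawnPool:
    def __init__(self, size):
        self.free = [{"id": i, "active": False} for i in range(size)]
        self.active = []

    def spawn(self):
        if not self.free:            # pool exhausted: reuse the oldest
            enemy = self.active.pop(0)
        else:
            enemy = self.free.pop()
        enemy["active"] = True
        self.active.append(enemy)
        return enemy

    def despawn(self, enemy):
        enemy["active"] = False
        self.active.remove(enemy)
        self.free.append(enemy)
```

Capping the live actor count this way also bounds the navigation-mesh work per frame, which is the cost that originally exploded past 10 million calculations.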
Accomplishments that I'm proud of
The AI is pretty good considering this was my first AI project from a game development perspective.
The number of levels is infinite, achieved without level streaming and with no additional trade-offs - that was some cool last-minute thinking.
The greatest achievement of all is the smile I saw on my loved ones' faces as they enjoyed playing the game!!!
What I learned
I've tasted the fruits of game development and will definitely be back for more.
I've learned the do's and don'ts of game optimization.
I am left with a new outlook on statistical ML models.
What's next for Wolfie
Wolfie will be released on Windows, Android and iOS for free, of course.
The main goal of Wolfie has always been to give joy to all people no matter what their financial status is.
Wolfie will serve as the quirky debut of a very quirky animal RPG series, which I hope will be a cult game someday.
Built With
adobe-illustrator
c++
photoshop
unreal-engine
Try it out
github.com
drive.google.com | Wolfie | An animal RPG that uses AI search techniques and spawn optimization | ['Hrishikesh P'] | ['Gaming Theme Winner'] | ['adobe-illustrator', 'c++', 'photoshop', 'unreal-engine'] | 19 |
10,420 | https://devpost.com/software/commshare | Leaderboard
Items you can borrow from people in your community
Items requested by community members
Item description
Login page
Inspiration
In this day and age, consumerism is a big problem that we face as a society. It is all too common for people to purchase items which they may only use a few times. The idea for this project arose when one of our teammates was building and kitting out a van. There was a need for specialist equipment, but it wasn't feasible to purchase everything when it would only be used a few times. This got us thinking. How could people get access to these items for free, saving individuals money, but also reducing consumerism?
Enter CommShare!
What it does
CommShare is a unique marketplace that allows communities to connect and share items with one another. Users can list items they are happy to lend out to people in their community, as well as post requests for specific items they are hoping to borrow from their community. It provides a distinctive opportunity for communities as a whole to reduce their consumption of products that may only be used a few times. More than this, CommShare helps to build a strong, interconnected community, as it allows people to connect with those in their area and help one another out.
This strong community aspect is reinforced with the gamification of the app, where communities are given points for helping each other out, and can battle it out against other communities for the position of top community! We love the idea of facilitating a modern, Australian version of the Indonesian concept of Gotong Royong!
How we built it
We started out by building the frontend of the product, then later built in the backend. Using a UI package, we ensured the application had a consistent look and feel, in line with other applications.
Challenges we ran into
The main conceptual challenge we ran into was how to incentivise community members to list items they had and would be happy to lend out. We didn't want to add a monetary side, so instead we settled on the gamification of the app. This means that individuals earn points for their community when they lend out objects. This fosters an element of competition and encourages users to share more regularly within their community.
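The points mechanism described above could be sketched as a simple per-community tally. This is a hypothetical scheme (the point value and community names are invented; the real app is built on Node.js and React), written in Python only to show the idea:

```python
# Hypothetical points scheme for the community leaderboard.
POINTS_PER_LOAN = 10

def record_loan(leaderboard, community):
    """Credit a community's score each time one of its members lends an item."""
    leaderboard[community] = leaderboard.get(community, 0) + POINTS_PER_LOAN

def top_community(leaderboard):
    """The community currently leading the battle for top spot."""
    return max(leaderboard, key=leaderboard.get)

board = {}
record_loan(board, "Newtown")
record_loan(board, "Newtown")
record_loan(board, "Glebe")
print(top_community(board), board["Newtown"])  # Newtown 20
```

Crediting the whole community, rather than the individual, is what turns lending into a team sport.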
Accomplishments that we're proud of
We are really proud of pulling off everything we did within the timeframe. Initially, we were focused on creating a working frontend, and thought we would move on to the backend only if we had time. Now, we are looking forward to a functional backend including a database for users and products thanks to the hard work of the rest of the team!
More than this I (Jamie) am incredibly proud of how my team has tackled everything, even though collectively we didn't have much experience in web applications. I have learnt so much from my teammates and cannot thank them enough for taking me under their wings and helping me grow (Thanks guys!!).
Aside from this, we are really proud to put forward a product that can be so beneficial to our communities and the environment!
What we learned
We've learnt a lot, both technically and from a business viewpoint. Much of what you can see in the final product are things that individual teammates had very little experience in initially. From a business standpoint, we really had to think and learn how to incentivise users when the platform is not monetary based.
What's next for CommShare
Looking forward, CommShare can only grow from here. We would love to see this application in use in our communities, and see it expand further. As this happens, we will need to adapt and make changes, such as the way data is stored. We are also looking forward to expanding the community functionality such that users can form their own communities rather than be restricted to just a postcode. New community types may be based on street, a specific radius, or perhaps even similar interest communities where people list items specific to that interest!
Built With
creative-tim
css
html
javascript
json
node.js
react
Try it out
github.com | CommShare | Share Items, Reduce Consumerism and Battle it Out to Be the Best Community! | ['Mohit Chauhan', 'Lawko van der Weiden', 'Gubi Byambadorj', 'Ravi Jayaraman', 'Jamie Ramjan'] | [] | ['creative-tim', 'css', 'html', 'javascript', 'json', 'node.js', 'react'] | 20 |
10,420 | https://devpost.com/software/zenogloss-2j5onh | Architecture diagram
Using a switch, users can easily translate messages between their language and the other
Inspiration
We’ve found that most messenger app services are direct messaging in the form of text bubbles. This tends to facilitate quick, brief chats. It is usually quite difficult and awkward to maintain a lengthy conversation, and the nature of direct messaging pressures either party to make a rushed response. On the other hand, emails are usually used for long-form correspondence, but they are clunky by nature, and usually have non-intuitive mobile apps that make them frustrating to use. We are aiming at the niche that exists in the space between these two extremes. There is a previously unrealized method of communication that is intuitive and streamlined – akin to direct messaging – but retains the more thorough correspondence of email. Where direct messaging is too shallow, but email/snail mail too slow, Zenogloss can thrive. This middle ground of communication would be the perfect form to take advantage of for communication between pen pals. A pen pal – that is, someone closer to a stranger than a friend (currently) – is someone with whom deeper conversation would be required in order to form a bond. However, this is a relationship that would also benefit from the brevity provided by a streamlined messaging service. After all, not every day is interesting, and sometimes all that is needed is a quick update. With Zenogloss, the best of both worlds awaits!
How it works
For those who wish to create new cross-cultural friendships, it is possible to choose a random match. Potential pals with comparable interests are matched, and may begin their correspondence at their leisure and in their native languages. For those who wish to chat with known friends, it is also possible to add contacts by username. Just write and the translation service does the rest!
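One plausible shape for the translation toggle shown in the screenshots (purely illustrative; the actual app is built in React Native with the Google Translate API) is to store each message with both its original and translated text, so flipping the switch only changes which version is displayed:

```python
# Illustrative message shape: both texts travel together, so toggling the
# translation switch never requires a second call to the translation service.
def make_message(original, translated, src_lang, dst_lang):
    return {"original": original, "translated": translated,
            "langs": (src_lang, dst_lang)}

def display(message, show_translation):
    """Return whichever text the reader's language switch selects."""
    return message["translated"] if show_translation else message["original"]

msg = make_message("Bonjour, mon ami !", "Hello, my friend!", "fr", "en")
print(display(msg, show_translation=True))   # Hello, my friend!
print(display(msg, show_translation=False))  # Bonjour, mon ami !
```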
Prototype implementation
Our prototype is created in the React framework, using various React Native modules, including:
Gifted-Chat collection
Google Translate React Native API
Expo and various Expo modules
We plan to build out the backend on MySQL and integrate Auth0 for easy authentication and security.
Challenges we ran into
While most of the front-end was familiar to us, our development hit a wall when we started trying to implement the Gifted-Chat module for message entry. Since none of the developers were familiar with Gifted-Chat, it was a fast-paced learning experience. Because of our unfamiliarity with Gifted-Chat, we were unable to create a messaging UI that fully conveyed the position we envisioned for Zenogloss between direct message and email. Due to this, as well as the time constraint, we were unable to implement a backend in our prototype. The time constraints also forced us to implement some amount of hardcoding for text entries and displays in the demo prototype.
What's next for Zenogloss
First and foremost on the roadmap is the fulfillment of our envisioned User Interface design philosophy. Currently we have a stand-in UI that resembles a run-of-the-mill direct messaging service. We have planned a complete visual overhaul of the messaging section of our app, as well as increasing compatibility for iOS. After that, we will be looking into a way to make translations more eloquent. Anyone who has used Google Translate before is no stranger to the oftentimes...eccentric translations. Following that, we will be looking at encryption of user information and data streams, for added privacy and security. Overall, we're not at all finished with Zenogloss! Big plans on the horizon.
Built With
expo.io
gifted-chat
google-translate
javascript
react-native
Try it out
github.com | Zenogloss | Come to life, come to cultural conversations: Zenogloss is a correspondence app with an inbuilt translation, for cross-cultural connections | ['Liam Mills', 'Bangshuo Zhu', 'Kim Nguyen'] | [] | ['expo.io', 'gifted-chat', 'google-translate', 'javascript', 'react-native'] | 21 |
10,420 | https://devpost.com/software/hazmany-problems-an-interactive-game-for-covid-education | Inspiration
We wanted to create a fun game with many interactive elements such that it will appeal to a wide audience of gamers. In doing so, we wanted to use the game platform in order to educate users on COVID safety and mitigation strategies.
What it does
The user controls the Hazman who is immune to the disease. At the same time, an infectious person begins to spread the disease around the 'roamers'. The Hazman has to collect masks and hand sanitizers to give to the roamers to stop the disease from spreading! The roamers can also spread the disease to locations like the school and playground which means that the Hazman also has to clean these locations before time runs out. To win, the Hazman has to cure all the roamers in the given time frame and ensure that all locations are kept disease-free.
How we built it
This game was built using the Pygame module in Python.
Challenges we ran into
Some of the challenges we ran into included working out how to deal with the collisions between the objects and playing field as well as choosing the features we thought we had enough time to implement.
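The collision handling mentioned above boils down to axis-aligned rectangle overlap, the same test Pygame offers via `pygame.Rect.colliderect`. A pure-Python equivalent (illustrative, not the game's actual code) looks like this:

```python
def rects_collide(a, b):
    """Axis-aligned overlap test between two rects given as (x, y, w, h),
    equivalent in spirit to pygame.Rect.colliderect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

# The Hazman overlapping a nearby roamer, but not the distant school.
hazman = (0, 0, 10, 10)
print(rects_collide(hazman, (8, 5, 10, 10)))    # True
print(rects_collide(hazman, (50, 50, 10, 10)))  # False
```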
Accomplishments that we are proud of
A lot of us have never used Python before or never used the Pygame module so we are extremely proud of the programming skills we have obtained throughout the process.
What's next for Hazmany Problems
We have a lot more features and complexities we plan on adding to the game, and we are excited to keep working on it into the future.
How to play
Simply download the HazzyMan folder and then run the hazmat.py file (python3 hazmat.py):
https://github.com/evanmakjw/HazmanyProblems/tree/master/HazzyMan
Built With
pygame
python
Try it out
github.com | Hazmany Problems | An Interactive Game For COVID Education | ['Danielle Haj Moussa', 'Nicolina Bagayatkar', 'nicky-steel', 'Ji Sun Youn', 'Evan Mak', 'KhitNay'] | [] | ['pygame', 'python'] | 22 |
10,420 | https://devpost.com/software/famaapp | Login Page
Subscribe Page
Individual Article
Article Feeds
Settings
We were inspired by the recent invasion of Google, Facebook and other big tech companies into the journalistic landscape. We thought there should be a place where people can access news unfiltered by algorithms designed by big tech to make lots of money.
Built With
css
html
javascript
react
Try it out
github.com | Fama | Journalism without Big Tech | ['jamessacummins Cummins'] | [] | ['css', 'html', 'javascript', 'react'] | 23 |
10,420 | https://devpost.com/software/sahayak-wl8cu9 | not provided account error
uploading a document with a specific account
unix timestamp for the transaction given in the transaction hash
adding the file to be uploaded
faq section
team
Inspiration
We have seen various places where fake marks have given an unfair advantage to some students while hard-working people are left behind. We also wanted to change the way college and other counselling sessions are conducted because of the COVID-19 pandemic.
What it does
How we built it
Long story short, we divided the work into two sections: Vineet worked on the frontend and Hritwik took care of the blockchain backend.
Challenges we ran into
There were a gazillion bugs while making this project, and at some moments I just wanted to quit. To be honest, webpack was seriously annoying to work with.
We had bugs like "fs" not found, various missing webpack loaders, a 404 POST request error in Skynet, and so on.
Accomplishments that we're proud of
With this project, we have made it possible to archive all your documents in a safe and secure manner and transfer just a few hashes for the verification process.
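The "transfer some hashes for verification" idea can be sketched with a plain SHA-256 digest. This is a simplified illustration with invented document contents; the actual project stores files via Skynet and anchors transactions on Ethereum:

```python
import hashlib

def document_fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest of a document; only this short hash needs
    to be shared for verification, never the document itself."""
    return hashlib.sha256(data).hexdigest()

original = b"Marksheet: Maths 95, Physics 91"  # hypothetical document contents
digest = document_fingerprint(original)

# A verifier recomputes the digest over the copy they received and compares.
print(document_fingerprint(original) == digest)                            # True
print(document_fingerprint(b"Marksheet: Maths 99, Physics 91") == digest)  # False
```

Any tampering with the document, even a single changed mark, produces a completely different digest.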
What we learned
We learned a bunch of new stuff; this was our first project with web3.js, Skynet and Ethereum.
What's next for Sahayak
We are going to add video-calling functionality to make this platform a true replacement for offline verification, alongside secure document storage.
Built With
bootstrap
css3
html5
javascript
skynetjs
solidity
web3js
webpack
Try it out
education-doc-sahayak.netlify.app
gitlab.com
docs.google.com | Sahayak | Lets's just not trust each other to make this world trust itself.Our platform has a sole purpose to eradicate fake documents and make counselling /verification easy with online methods | ['Vineet Kumar'] | [] | ['bootstrap', 'css3', 'html5', 'javascript', 'skynetjs', 'solidity', 'web3js', 'webpack'] | 24 |
10,420 | https://devpost.com/software/uandi-dating-for-the-isolation-generation | What's wrong with online dating?
You've heard of USYD, UNSW and UTS Love Letters, right? For decades the meta has been unchanged, but on Thursday a UNSW student posted a game-changer! One post requested Google Form submissions from potential love interests, rather than browsing for heart reacts. But why didn't our UNSW poster use existing dating apps, like Tinder, Bumble and eHarmony?
The reality is online dating isn’t all sunshine and roses. A study from 2017 shows that 55% of online daters have experienced some sort of threat during their online dating experience, like uncomfortable or misleading dates and even online stalking. Dating apps are forcing us to give too much information to the wrong people before we even get to know them.
Meet UandI - the profile that you share how you like
UandI puts you in control of your dating experience. Users create profiles to ask their prospective dates questions, and share them where they choose. Their secret admirers enter answers and their phone number, and if the admired likes their answers, they have 30 texts to get to know each other over a completely anonymised number. The users have total control of their information, and share it on their schedule. With a completely serverless backend, our system can scale from 1 to 1000 users easily.
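The 30-text anonymised exchange could work roughly like the sketch below. This is hypothetical logic written to illustrate the flow, not the production code (the real system uses MessageBird on a serverless AWS backend):

```python
class AnonymousChat:
    """Relay texts between two real numbers behind an anonymised middle
    number, with a hard budget of 30 messages for the whole conversation."""

    LIMIT = 30

    def __init__(self, number_a, number_b):
        self.real = {"A": number_a, "B": number_b}
        self.remaining = self.LIMIT

    def relay(self, sender, text):
        """Forward a text to the other party, or None once the budget is spent."""
        if self.remaining == 0:
            return None
        self.remaining -= 1
        recipient = "B" if sender == "A" else "A"
        # The recipient only ever sees the anonymised number, never the sender's.
        return (self.real[recipient], text)

chat = AnonymousChat("+61400000001", "+61400000002")
print(chat.relay("A", "loved your answers!"))  # ('+61400000002', 'loved your answers!')
print(chat.remaining)                          # 29
```

Keeping the budget server-side means neither party can extend the conversation without the other's consent to move off-platform.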
Why use UandI?
The benefits of our platform are threefold. User anonymity facilitates superior privacy and security; the use of texting is familiar and convenient, reducing consumer frictions; and our uniquely open network means that no app download is necessary and profiles can be shared with communities of the users’ choosing.
What's next for UandI - Dating for the Isolation Generation
UandI is brimming with untapped potential. In the future, our priorities would be to improve authentication and our site’s anti-abuse functionality - involving ML and sentiment analysis. Then once our platform has been on the market for 18 months, we would seek to build in new features, and optimise our revenue model.
Pitch Link
Youtube Video
Built With
chalice
cloudformation
dynamodb
messagebird
python
react
Try it out
uandi.cc | UandI - Dating for the Isolation Generation | We're giving too much personal information to strangers we don't know and shouldn't trust. UandI allows you to create a profile and chat with others using anonymous phone numbers. | ['David Singleton', 'Alana Hua', 'Rebecca Gouveia Pereira', 'Nicholas Lukman', 'Cheryl Nie'] | [] | ['chalice', 'cloudformation', 'dynamodb', 'messagebird', 'python', 'react'] | 25 |
10,420 | https://devpost.com/software/watchdog-hyet0m | Dashboard - a visualisation of general data usage
Specific page
Inspiration
On average, Google holds 5.5 GB of personal data on each of their users. That is roughly equivalent to 3 million usable data points. *
When did they get hold of this data? What are they using it for? When are they using it?
Whilst these companies allow users to retrieve their data, we still have no idea what they are doing with it behind closed doors.
This is why we started the WatchDog initiative. We wanted to give users an opportunity to see how and when their data is being accessed by these large companies. Hence, transparency.
What it does
WatchDog is a webapp/chrome extension that gives users the ability to see how websites that have access to personal data are using it. It provides real time updates whenever a website uses our data for third party reasons such as advertising, market research etc…
The websites themselves must provide this information to us; doing so is optional, but their choices about sharing data culminate in a grade that we give them. This grade is a summary of how transparent a company/website is in sharing its data usage, giving users the ability to see exactly how and when their data is being accessed. With this information, users will be able to make informed decisions about whether they would like to continue to share certain information, such as location services.
By placing pressure on companies to be transparent about their data usage, we will be able to promote a culture of trust and honesty between a user and a large company. This also allows smaller companies to be able to use transparency as a way to improve public image and give consumers incentive to choose them over larger companies who may not have as high of a WatchDog grade.
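One way such a grade could be computed (a hypothetical scheme, not WatchDog's actual formula) is to score the fraction of data-usage categories a site chooses to disclose and map it onto a letter:

```python
# Hypothetical disclosure categories a website might report on.
CATEGORIES = {"advertising", "market_research", "third_party_sharing",
              "location", "analytics"}

def transparency_grade(disclosed):
    """Map the share of disclosed categories onto a letter grade."""
    ratio = len(set(disclosed) & CATEGORIES) / len(CATEGORIES)
    for cutoff, letter in [(0.9, "A"), (0.7, "B"), (0.5, "C"), (0.3, "D")]:
        if ratio >= cutoff:
            return letter
    return "F"

print(transparency_grade(CATEGORIES))                   # A
print(transparency_grade({"advertising", "location"}))  # D (2/5 = 0.4)
print(transparency_grade(set()))                        # F
```

A scheme like this rewards disclosure itself, so even a site that shares a lot of data can earn a good grade by being honest about it.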
How I built it
Backend: Python + Flask
Frontend: React.js
Chrome extension: HTML, CSS, JavaScript
Challenges I ran into
We used React to build our frontend for the first time and it was a challenge building a complex but simple to use website that was nice to look at. Only one person knew how to use React and having him only work on the frontend would be too demanding so two others had to learn the framework to supplement his work.
Accomplishments that I'm proud of
Learning new frameworks and programming languages
Not sleeping until 5 in the morning
Roles
Adam Ma: Full stack
Lyric Wang: Pitch, backend, O-O design
Matthew Juan: Chrome extension, backend
Jeremy Lim: Frontend, design
Vicky Hu: Pitch, frontend, design
Jeremy Chea: Pitch, frontend
What I learned
We’ve never built a chrome extension before and it turned out to be really similar to web development which was a task a few of us had done before. Learning the similarities and differences between the two was a fun and rewarding experience.
This was our second hackathon together and, learning from the previous experience, we were able to work together in a more efficient manner and build a bigger and better overall solution
What's next for WatchDog
The goal of WatchDog is to become the industry standard for monitoring data transparency between companies and users. We will reach out to large companies, especially social media companies, while refining and enhancing the features we offer users. We will also aim to provide more accurate and precise information, so users can see how their information and data are being accessed. Ultimately, we aim to create a culture in which large companies are transparent about their usage of data, so that users can be aware and make informed decisions about which companies they choose to interact with.
Built With
css
html
javascript
python
react
Try it out
github.com | WatchDog | What are they using our data for? When are then using it? I'll let them use it, but please keep me informed. WatchDog is a platform that allows users to visualise where and when our data are used. | ['Matthew Juan', 'Adammbq Ma', 'cheajer', 'vickyh4', 'Lyric Wang'] | [] | ['css', 'html', 'javascript', 'python', 'react'] | 26 |
10,420 | https://devpost.com/software/terner | What it does
Ternner is a job aggregation site catered to students. It does away with the clunky interfaces of traditional aggregation sites such as Seek and Indeed, reducing needless clutter and displaying the most important information for students seeking internship and grad roles.
How it does this
Jobs are scraped off job aggregation sites and large employers using a Selenium-based Python bot, and used to fill entries in an SQLite3 database. Extraction of information from the embedded HTML is done with beautifulsoup4. To gather further information, the bot performs additional trawling to obtain the full job description from the listing. This is then passed into Rake-nltk, a natural language processing algorithm that extracts keywords for searching and (future) recommendation purposes.
Everything is fed into a website for the users to search.
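A much-simplified, dependency-free sketch of that pipeline is below. The real bot uses Selenium, beautifulsoup4 and Rake-nltk; here a frequency count stands in for RAKE, and the table schema and sample listing are invented for illustration:

```python
import re
import sqlite3

STOPWORDS = {"the", "a", "an", "and", "or", "for", "to", "of", "in", "with",
             "is", "are"}

def extract_keywords(description, top_n=5):
    """Naive stand-in for Rake-nltk: rank non-stopwords by frequency."""
    counts = {}
    for word in re.findall(r"[a-z]+", description.lower()):
        if word not in STOPWORDS and len(word) > 2:
            counts[word] = counts.get(word, 0) + 1
    ranked = sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))
    return [word for word, _ in ranked[:top_n]]

def store_listing(conn, title, company, description):
    """Persist a scraped listing with its extracted search keywords."""
    conn.execute(
        "INSERT INTO jobs (title, company, keywords) VALUES (?, ?, ?)",
        (title, company, ",".join(extract_keywords(description))),
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (title TEXT, company TEXT, keywords TEXT)")
store_listing(conn, "Software Intern", "Acme",
              "Python internship working with Python data pipelines and testing")
print(conn.execute("SELECT title, keywords FROM jobs").fetchone())
```

Storing the keywords alongside each row keeps search a plain SQL query until a proper recommender replaces the frequency heuristic.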
Further development
Use of Rake-nltk is temporary, as we would like to train a machine-learning language-processing model, for example with scikit-learn, for better text parsing and keyword extraction.
Once the database has been sufficiently populated, a categoriser or machine-learning-fed recommendation algorithm will be used to assist students by ordering the listings in a manner that suits the specific student.
Mobile development would also suit the design, though using Bootstrap and Material-UI is a workaround for a mobile site.
Built With
beautiful-soup
css
html
nltk
python
selenium
Try it out
github.com | Ternner | Making internship searching easier | ['Bill Zhuang'] | [] | ['beautiful-soup', 'css', 'html', 'nltk', 'python', 'selenium'] | 27 |
10,420 | https://devpost.com/software/simplifica-9c4ldt | Simplifica
Many young students are often confused about which career would be best for them. Often this results in them entering fields they don't enjoy working in, all due to poor advice. With Simplifica, we aim to solve that problem. Simplifica asks the student a few simple questions aimed at getting to know what the student is like as a person. The algorithm then analyzes the collected data and comes up with a career field that would best suit that student. Simplifica also provides a one-stop-shop solution for students' information needs, as we have curated information pages, with relevant links, related to all career choices. We hope that students can make a wise and informed decision with Simplifica!
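The career test's scoring could look something like the toy tally below. The actual backend is Java; the questions, weights and career fields here are invented purely for illustration:

```python
# Hypothetical questions and career categories -- not Simplifica's real data.
QUESTIONS = [
    ("Do you enjoy solving logic puzzles?", {"engineering": 2, "arts": 0}),
    ("Do you like drawing or writing stories?", {"engineering": 0, "arts": 2}),
    ("Do you prefer working with people over machines?", {"medicine": 2}),
]

def best_career(answers):
    """answers[i] is True/False for QUESTIONS[i]; tally points per field."""
    scores = {}
    for answered_yes, (_, weights) in zip(answers, QUESTIONS):
        if answered_yes:
            for field, points in weights.items():
                scores[field] = scores.get(field, 0) + points
    return max(scores, key=scores.get) if scores else None

print(best_career([True, False, False]))  # engineering
```

Increasing accuracy then becomes a matter of adding questions and tuning the weights attached to each answer.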
Inspiration
We have gone through the phase in which we had to make a firm decision for our careers and have also seen many struggle through this phase. There are some students who fail to make the right decision and then regret for their choices. So we wanted to help the students and make this phase a bit easy for them.
How we built it
The front end was built using HTML, CSS and Bootstrap.
Back end was built using Java.
Spring Boot was used as the app framework.
JavaScript was used to link the front end and the back end.
Challenges we ran into
We were not familiar with backend development, so this was a big challenge for us.
Accomplishments that we are proud of
We managed to learn back end development during the hackathon and handled lots of stuff in HTML and CSS.
What we learned
We learned to use Spring Boot framework, Java and JavaScript. Also our front-end skills were polished. Besides these technical things, we learned to use our time efficiently.
What's next for Simplifica
Due to time limitations, we could only add a few fields and resources; we will incorporate more going forward. Simplifica's Career Test will also be upgraded: the number of questions will be increased along with the accuracy of the algorithm.
Built With
css
html
java
javascript
springboot
Try it out
github.com | Simplifica | Introspect what you want to be. Be Imaginative.Be Successful. | ['ANUSHKA SHRIVASTAVA', 'https://devpost.com/software/simplifica', 'anshika dubey', 'Ishwari Patil'] | [] | ['css', 'html', 'java', 'javascript', 'springboot'] | 28 |
10,420 | https://devpost.com/software/kietxp | purpose and objectives
features
Glimpse of website
Glimpse of website
Glimpse of website
technology used
our future goals
KXP: KIET XCHANGE PORTAL
What it does
As per our observations, many students at our college have to buy certain items 🧪 (lab coats 🥼, drafters, novels 📚, etc.) that are only required for a very short span of time, after which these products are clustered in their almirahs and remain of no use.
The problem is also faced by final-year students, who have to carry such unneeded stuff back home.
Thinking more about it, we decided to take a step forward to solve this issue. We learned a lot from the 3 R's principle, Reduce, Reuse, Recycle, and decided to apply it to this problem.
We formed an online exchange portal named KXP (KIET Xchange Portal), where students of our college can sell and buy such items.
This will definitely help students cut down on expenses, and will even allow some of them to earn from the platform. We value your money and will dedicate ourselves to providing seamless services.
Languages and technologies used
HTML5
CSS3
Python
Django
Bootstrap
JavaScript
MySQL
What we learned
Working with Backend technology was completely new to our team.
Making a solution that is intuitive, simple, and easy to use.
Learned web hosting, which was also something new to our team.
Problems we faced while making the website
Our team faced many problems while making this project.
A large number of errors arose when dealing with Django (our backend framework).
The toughest part was hosting on Heroku.
Built With
adobe-illustrator
bootstrap
css
django
html
j-query
javascript
python
Try it out
github.com | KIET XCHANGE PORTAL | Reduce, Reuse, Recycle. ==>> KIET EXCHANGE PORTAL | ['Anubhav Kulshrestha', 'anshuman shukla', 'Akashdeep Gupta', 'Utkarsh Singh'] | [] | ['adobe-illustrator', 'bootstrap', 'css', 'django', 'html', 'j-query', 'javascript', 'python'] | 29 |
10,420 | https://devpost.com/software/hearmeout-2og68b | Example Complaint Details
Forum Page
Form to Submit a Complaint
If you have time, please view our full presentation:
Full Presentation Video Link:
https://youtu.be/ms-ubBEVBDc
For our 2 minute Demo, please watch the video above or go to
2 Minute Demo Link:
https://youtu.be/yee6PLEOeD4
For HackMTY (originally in Spanish; English below)
We are studying Spanish at school, but it is not our first language, so we used Google Translate.
What we said at the beginning, in Spanish
At least a quarter of women suffer sexual harassment in the workplace, yet they are afraid to report it or to seek help. My sister will soon join the workforce, and I want her to feel comfortable reporting situations where she feels uncomfortable, so my friend and I developed HearMeOut. HearMeOut helps people in vulnerable positions quickly report cases of gender discrimination. If there is a dispute, users can send an email with information about the complaint and supporting evidence for removal. We act as litigants until the matter is resolved, or it goes to court if there is no compromise. We offer several multidimensional features for a comprehensive experience: a heat map of discrimination hotspots (company headquarters or specific branches, currently populated with sample data), a reporting tool, a feed of news, petitions and advice, and a way to view existing complaints. Here is a demo: the demo begins in the video.
English
Inspiration
Discrimination is commonplace for gender minorities. Whether it be in the workplace, while at the restaurant, or while at school, they can be constantly misgendered or harassed simply for being themselves. HearMeOut is an app that seeks to combat that. HearMeOut allows users to make reports and upload it to a database. In the case that one feels that they are being inaccurately portrayed, there will be a governance system where evidence can be submitted, and based on that, a report can get retracted from public view.
What it does
HearMeOut consists of 3 main parts: a reporting system (for both removals and allegations), complaint viewing (via a list and heatmaps), and information for users to be able to seek mental health counseling, legal counseling, and be able to contribute data to surveys or studies that allow society to gain a better understanding of the magnitude of discrimination.
The first news page allows users to view information in regards to discrimination in general, to bring about awareness.
The second page, Complaints, shows a feed of all the complaints submitted by users across the app, where they can view complaint details and make decisions based on that data.
The third page allows users to submit complaints regarding things they may have encountered in their workplace, such as foul comments or other forms of discrimination, which then become available to other users in real time on the Complaints Viewer page.
The fourth page is a forum that allows users to consult others in the community about what is happening to them and what to do in difficult situations, and serves as a general resource for connecting with others similar to themselves.
The last page is a heat map that accumulates the locations of all written complaints into a comprehensive and easily understandable view.
How I built it
We built the application using React Native, News API and Firebase.
Challenges I ran into
We completed this project in almost record time, while still dealing with many issues.
We dealt with a lot of issues with regards to the build system and the libraries we were using.
Accomplishments that I'm proud of
This was Vijay's (my) first time creating an application with React Native.
What's next for HearMeOut
To launch this to the public.
Built With
firebase
google-maps
react-native
Try it out
github.com | HearMeOut | Allowing gender minorities to report discrimination and harrassment publicly. | ['Vijay Daita', 'Om Joshi'] | ['Third Best Project'] | ['firebase', 'google-maps', 'react-native'] | 30 |
10,420 | https://devpost.com/software/multilingual-sentiment-analyser | Inspiration
Sentiments are something every human being has, and they should be valued. But we live in a world where many languages are spoken, and it is difficult to know them all. Though there are many translators, we cannot use one every time we want to understand what someone is saying. Something is needed to help people from one language background understand and value the sentiments of a person from a totally different one. There are nearly 6,500 languages around the world, and in a country like India over 22 languages are spoken officially. With so much diversity, not everyone can understand what you are trying to convey over the phone, and it is very difficult to grasp the sentiments and moods of people who speak different languages. That is where our project comes into play: it shows the sentiment of a person from their speech or text in any language. When a phone call is running or an audio file is played, our model displays the speaker's sentiment so that the receiver can understand easily.
What it does
The speaker can speak any language, and the receiver (whether or not they know the speaker's language) will be able to understand the tone and sentiments of the speaker through our model.
Our model determines a person's sentiment from their voice, speech, or any audio file, in any language.
It takes an audio file as input from the user, converts it into text, and passes the text through the translation model. After it has been converted to English, we run our sentiment-analysis model and display the speaker's sentiment through an emoji that depicts the respective sentiment.
How I built it
For the user interface (the front end), we used HTML, CSS, and JavaScript for the webpage.
The entire model was developed using Python. For capturing speech we used the PyAudio module and a speech recognition model. The speech was converted to text, after which we used a Python module to translate the given language to English using natural language processing. The text was then sent to the sentiment classifier model, and the respective sentiment was determined from the speaker's speech. After the sentiment was classified, the output was displayed as an emoji for the respective sentiment using Python's emoji module.
The code was deployed and the website created using Flask, a lightweight Python web framework that provides useful tools and features for building web applications.
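The last step of the pipeline described above, mapping the classified sentiment to an emoji, can be sketched in plain Python. The polarity scale, thresholds, and emoji choices here are illustrative assumptions; the description does not show the actual classifier's output format.

```python
def polarity_to_emoji(polarity: float) -> str:
    """Map a sentiment polarity in [-1.0, 1.0] to a representative emoji.

    The cut-offs below are assumed for illustration; the project's real
    classifier may use different classes or thresholds.
    """
    if polarity > 0.3:
        return "\U0001F604"   # grinning face for a positive sentiment
    if polarity < -0.3:
        return "\U0001F61E"   # disappointed face for a negative sentiment
    return "\U0001F610"       # neutral face otherwise

# A translated utterance scored at 0.8 would be shown as positive.
print(polarity_to_emoji(0.8))
```

In the real pipeline this function would receive the score produced by the sentiment classifier after the speech has been transcribed and translated to English.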
What I learned
Learnt to make a sentiment classifier model and a speech recognition model, and to build web applications using Flask.
What's next for Multilingual Sentiment Analyser
Will be built as a web app by us and can be used in parallel when a phone call is running or when audio is played, so that the receiver can see the sentiment of the speaker in emoji form.
It can be used in customer feedback centres, where call-centre employees can easily understand the sentiments of the customer via phone call.
Then any person in the world can easily understand the mood or sentiment of any other person, speaking any other language, through their speech or voice.
Built With
css
flask
html5
javascript
natural-language-processing
pyaudio
python
Try it out
github.com | Multilingual Sentiment Analyser | This is a project that will get the Sentiments of People using their Voice , Speech or Audio Files of Any Given Language and Display it Using Respective Emoji . | ['sreeram 2001', 'S.G Harish'] | [] | ['css', 'flask', 'html5', 'javascript', 'natural-language-processing', 'pyaudio', 'python'] | 31 |
10,420 | https://devpost.com/software/vihunt | # ViHunt #EnjoyDefeatingCovid
ViHunt
EnjoyDefeatingCovid
In December 2019 the first COVID case was encountered. Now that COVID cases have stopped climbing and schools and educational institutions have started to open up, a sudden rise in the number of cases among children has been observed. This was due to their lack of awareness, and it was the inspiration behind the project. Realising that children are most attracted to video games, we at ViHunt decided to create a game that teaches minors the methods to protect themselves and others from falling victim to the virus. The platform used to build the app was Unity.
In the due course of the project I learnt the C# language and game development in Unity.
The most challenging aspect of the project was the time restriction.
Built With
asp.net
c#
unity
Try it out
github.com | ViHuntDown | Have fun defeating the Coronavirus. | ['Pallab Paul'] | [] | ['asp.net', 'c#', 'unity'] | 32 |
10,420 | https://devpost.com/software/voice_controlled_pdf | Dark Mode
Help Section
File Menu
File Browser
Welcome to Voice control based PDFViewer
Voice control based PDFViewer is a speech recognizable GUI written using python3 various modules, but heavily based on tkinter which lets you view PDF and image files and Speech Recognition library for all the speech based commands.
Inspiration to build this project
The main motivation for making this project was to provide a good reading experience for people with limb amputations, who can control the application with just their voice. This is the targeted use case for our project. The software is also aimed at the general tech-savvy demographic, or simply at people too lazy to use the mouse to control a PDF viewer :D
What we learnt
This was a wonderful experience for all of us. We got to learn a ton of new things, not only technical but also social aspects of software development, that include efficient team co-ordination, communication and most important how to overcome hurdles.
In the technical aspect, we got to learn software development principles by actually applying software engineering practices like pair programming and an agile workflow. We learnt about speech recognition and about different modules in Python.
Overall, this was a challenging, fun and a nurturing phase.
Roles and Responsibilities
Throughout making this project, each member played a key role, right from ideation to design to development. A pair programming technique was employed, where each pair worked on the part they had undertaken. Various aspects of this project were carried out by all members, working as one under the guidance of the team leader. The roles played by each member are listed as follows:
Yashdeep - Team lead, motivator, main developer, testing
Shantanu - Co-developer, debugger, testing
Sreehari - UI design, code cleaning, testing
Vedant - Design, testing, documentation
Challenges faced
Right from ideation to development and deployment, we faced a lot of problems. The problems faced can be categorized as:
Technical:
In the development and testing phases, we encountered lots of bugs until we perfected the software and made it deployable. The biggest problem was faced during the integration of the speech recognition module with the UI design.
Another technical challenge we faced was fixing the speech recognition behaviour. A lot of other small bugs were frequently faced during the UI design work, but were quickly fixable.
Abstract:
The biggest abstract challenge we faced was selecting an idea. Amongst a swarm of ideas, we had to choose one that was unique, complex, and usable for the betterment of society. The next abstract challenge was faced in the design aspect of the project, as the UI had to be aesthetically pleasing and yet simple.
These were some of the major challenges faced by our team.
Built With
matplotlib
pdfplumber
pillow
pypdf2
python
pyttsx3
speechrecognition
tkinter
Try it out
github.com | Voice_Controlled_PDF_Viewer | Automation begins here | ['Shantanu Tripathi', 'Vedant Jadhav', 'Sreehari Premkumar'] | [] | ['matplotlib', 'pdfplumber', 'pillow', 'pypdf2', 'python', 'pyttsx3', 'speechrecognition', 'tkinter'] | 33 |
10,420 | https://devpost.com/software/pass-pulmonary-diagnosis-screen | deep learning enabled detection support for lung damage
GIF
Demo instant conversation with your virtual assistant chatbot
Inspiration
The problem with pulmonary fibrosis is that it can be caused by a wide variety of factors, and it is difficult for doctors to find out what is causing the problem. It may also increase the risk of severe illness from COVID-19.
What it does
A chatbot powered with machine learning algorithms to determine the patient's current lung health conditions based on a CT scan of their lungs.
How we did it
This model was trained on Google Colab using datasets from Kaggle.
What's next for PASS: Pulmonary diAgnosiS Screen
Provide an interactive and a more robust, comprehensive prediction of pulmonary health conditions to assist diagnosis.
Built With
express.js
html5
jquery
mongodb
node.js
pytorch
Try it out
github.com | PASS: Pulmonary diAgnosiS Screen | AI-aided detection support for lung damage | ['Penguin :)'] | [] | ['express.js', 'html5', 'jquery', 'mongodb', 'node.js', 'pytorch'] | 34 |
10,420 | https://devpost.com/software/hackathon2020_project | MDM's logo
Inspiration
COVID-19 was first confirmed in Australia in late January 2020 and there have been more than 25,000 confirmed cases. Wearing of masks is playing such a significant role in helping to prevent the spread of the coronavirus that all Victorians are now required to wear a face-covering when they leave home.
Under the Public Health Orders in NSW, all the business have to register as a COVID Safe business and follow the COVID-19 Safety Plan which strongly recommends everyone in the indoor venues wearing a face mask.
Self-Check-In System has many benefits such as freeing up reception staff time and better customer service.
What It Does
The Mask Detection & Management (MDM) project aims to track the percentage of people wearing face masks in indoor facilities where the infection can easily be spread, such as hospitals, restaurants and gym centres. We hope our project can help the managers/security departments at these venues to better prevent community transmission.
Our project is divided into 2 parts: a Self Check-in System with Face Mask Detection and a Management Application
Self Check-in System
At the check-in counter of the venue, the visitor is required to fill in details including name and contact number for later infection tracing, following the COVID-19 Safety plan under the Public Health Orders in NSW. All of the information will be stored confidentially and securely in the system's database for 28 days. In the meantime, the camera setup at the check-in counter allows the system to recognize real-time video face to detect whether the visitor is wearing a face mask or not. No image recording occurs at this stage to protect the visitor’s identification.
Management Application
If the visitor is not wearing a face mask, the system will record this visitor's status and send it to the Management Application. At this stage, the admin is notified of the situation and can decide what to do next. If this happens in Victoria, they can turn on the Auto SMS function in the app to let the system automatically send a warning message to the non-mask-wearing visitor. If not, the admin can simply keep an eye on the percentage of non-mask wearers in the building. On the Management App, the admin can see all of the information about the visitors during the day in the History tab. There is also a Summary Graph and a Calendar section to let the admin trace back if there are any confirmed cases on a specific date. The app allows the admin to send an SMS to visitors at high risk of infection easily, just by clicking a few buttons.
How We Built It
Backend: AWS lambda
Front-end: React
Automatic warning message: AWS SMS
Facial Recognition: Tensorflow.js, ML5.js
Database: DynamoDB
Server: S3 Basket
Challenges We Ran Into
There were many challenges we met while making this project, and one of them was doing real-time video face recognition. The second was expanding the self check-in system into mobile check-in; we ran out of time while doing that, so we ultimately gave it up. Privacy is also something we really needed to consider when developing this project.
Accomplishments
The self-check-in system helps to free up reception staff time as well as reduce unnecessary physical interaction to avoid community transmission.
The management app is a very effective tool for the managers to control the situation of the venues in real-time and easily traceback when necessary. Automatic SMS function helps the notification process become much easier.
These 2 parts work smoothly together to deliver the best result for our customers.
What We Learned
This project improved our ability to work in collaboration and manage time efficiently. We have learnt that creating a software product is not just about fancy front-end or complicated back-end. It is more about the value we can bring to our prospective customers.
Future Improvement
Integrate our self-check-in system with some kinds of temperature sensors for body temperature measurement
Expand the self-check-in system into mobile check-in to eliminate even further risk of infection
When The Pandemic Is Over?
Modify the system into emotion recognition for better customer service
Built With
amazon-web-services
css
html
javascript
react
Try it out
github.com | Mask Detection & Management | To help the managers/security department to track the percentage of people wearing face masks in indoor facilities and trace back when necessary | ['Suri Ho', 'Thang Chu'] | [] | ['amazon-web-services', 'css', 'html', 'javascript', 'react'] | 35 |
10,420 | https://devpost.com/software/symany | SYMANY - Fake News/Click Bait Detector
Google Chrome Extension
Facebook Warning Post
Inspiration
The majority of online media companies have revenue driven by user traffic and as such are pressured into producing sensationalist headlines that grab user attention. The resultant market has been saturated with articles that fall short on the sensationalism advertised, leaving users dissatisfied or misinformed.
Facebook has issued 40 million warnings that posts on its site may be misleading. The social detriment of this can be seen in the current pandemic: a BBC team tracking coronavirus misinformation has found links to arson, assault and death.
What it does
Symany is a Google Chrome Extension that displays warnings over fake news and clickbait posts on Facebook. We make sure that any external links on Facebook have been tested against our metrics, bringing you the most reliable news sources and keeping you up to date on the most recent real news.
How we built it
The Google Chrome Extension was built using JavaScript, HTML and CSS (which handled the front end user interaction) and connects to a Python Flask API (news classifier) that we built ourselves. Finally, the API is hosted on Google Cloud. There are a number of metrics we used to classify news as either being reliable or not:
The Python Flask API utilises Machine Learning to categorise headlines as either clickbait or not.
Comparisons against a well-update blacklist of known fake-news sites are made.
The classifier continually scans posts on Facebook and asks the API if they're fake or clickbait. If they are, it displays an overlay.
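The two metrics above (a blacklist lookup plus a headline classifier) can be combined in a simple server-side check. This is a minimal sketch: the domain names are fabricated placeholders, and the keyword heuristic is only a stand-in for the project's actual machine-learning model.

```python
from urllib.parse import urlparse

# Stand-ins for the real data: the project uses a well-updated blacklist
# and a trained ML classifier, neither of which is shown in the write-up.
FAKE_NEWS_DOMAINS = {"totally-real-news.example", "clickbait-central.example"}
CLICKBAIT_PHRASES = ("you won't believe", "what happens next", "doctors hate")

def is_suspect(url: str, headline: str) -> bool:
    """Flag a post if its domain is blacklisted or its headline matches
    a clickbait heuristic (standing in for the ML classifier)."""
    domain = urlparse(url).netloc.lower()
    if domain in FAKE_NEWS_DOMAINS:
        return True
    return any(p in headline.lower() for p in CLICKBAIT_PHRASES)

print(is_suspect("https://clickbait-central.example/a", "Local news update"))
```

In the real system, the Chrome extension would call an endpoint wrapping a check like this and draw an overlay when it returns true.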
Challenges we ran into
Time was the biggest challenge that we ran into. We found that our initial vision was too big and so had to dial it back given that there was only 40 hours to work on it. We found that working on a simple prototype and building upon that allowed us to combat this issue as we would end up with a product that worked if we ran out of time.
"Shoot for the moon. Even if you miss it you will land among the stars."
Accomplishments that we're proud of
Throughout this tight 40 hour time crunch, we are all proud of producing a functional Fake News/Click Bait Google Chrome Extension Detector that has the capability to filter out malicious news article swarming Facebook which have been causing harm to users globally. We are all especially proud of keeping our heads down, working through the 40 hours and never giving up on building a product that will one day help not only our friends and family but the wider community.
What we learned
As a team, no one had ever created a Google Chrome Extension before, with many of the members also not having experience in HTML, CSS, JavaScript or building an API from scratch. It was a challenging but rewarding experience grappling with new programming languages as well as setting up servers to host our API on Google Cloud. We also had the chance to develop our interpersonal skillsets as a few of the members had never met before this Hackathon, meaning we had the privilege to understand each others work styles and enhance our teamwork and communication skills.
What's next for symany
Introduce more features, such as a function that suggests better-sourced articles on the same topic
Expand to other social media (Twitter, Instagram) and build other explorer extensions
Add further classification metrics (scoring system)
Built With
chrome
flask
google-cloud
javascript
python
Try it out
github.com | Symany | A chrome extension to flag fake news in the Facebook news feed | ['Taron Ling', 'Liang Pan', "Daniel O'Dea", 'Charlie Lorimer', 'danlianzhao'] | [] | ['chrome', 'flask', 'google-cloud', 'javascript', 'python'] | 36 |
10,420 | https://devpost.com/software/stock-predictor-l902sy | Inspiration
It is tempting for a novice to start off with big dreams of reaping profits from the stock market within a short duration. Today, many people hope to become rich and find happiness through quick, easy, shortcut techniques. Here we discuss some interesting share market tips that beginners can effectively put into practice.
What it does
The main theme of the website is to make people aware of the stock market and its benefits in today's society by helping them predict stock values for the next day, based on stock values from the past few years. It shows the various stock values at the current point in time by redirecting to trustworthy websites. You can also view trending stock market news by redirecting to news websites covering stocks and their rises and falls in value.
How I built it
Using Visual Studio with HTML and CSS.
Challenges I ran into
Getting the predicted values of the stocks.
What's next for Stock Predictor
Predict the exact values of the stocks and update them.
Built With
css3
html5
visual-studio
Try it out
saiteja0909.github.io
0c7ouzfmq0z6hqxjlpywaq-on.drv.tw | Stock Predictor | Guide to manage and invest in stocks | ['Sai Teja V', 'Sreeram S'] | [] | ['css3', 'html5', 'visual-studio'] | 37 |
10,420 | https://devpost.com/software/pathik-xnykgj | ==
Try it out
github.com | == | == | ['Kritika Singh'] | [] | [] | 38 |
10,420 | https://devpost.com/software/quicktweet | Send tweet or Request hashtag by SMS
Sends latest hashtags and confirmations of tweets by SMS
Sent text has been tweeted
Inspiration
Ever since the recent COVID-19 pandemic, most of us, not surprisingly, have been spending a lot more time on the internet, including on websites like Twitter, YouTube, Facebook, Instagram, and many others. This inspired me to create a much faster and more convenient way of tweeting and finding tweets using only your phone's SMS, so that you don't have to open your browser, wait for the internet to connect, and log in every time you want to tweet something or find a tweet.
What it does
Simply send an SMS to an assigned number and it will tweet your message from your account; or, if you begin your SMS with '~hashtag', it will find the top 5 tweets for the hashtag given in the rest of your SMS. For example, '~hashtag tech' will find the top 5 tweets with the tech hashtag.
How I built it
I used the Twilio API to get information from a sent SMS and also to send an SMS to the user's phone.
I used the Twython library to post tweets and retrieve the hashtag tweets.
The whole project was coded using Python.
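The routing described above (plain messages are tweeted; '~hashtag' messages trigger a search) can be sketched as a small parser. The '~hashtag' prefix comes from the description; the tuple return format is an illustrative assumption.

```python
def parse_sms(body: str):
    """Split an incoming SMS into a (command, payload) pair.

    Messages beginning with '~hashtag' request the top tweets for a tag;
    anything else is tweeted verbatim, as described above.
    """
    text = body.strip()
    if text.lower().startswith("~hashtag"):
        tag = text[len("~hashtag"):].strip()
        return ("search", tag)
    return ("tweet", text)

print(parse_sms("~hashtag tech"))   # -> ('search', 'tech')
print(parse_sms("Hello world!"))    # -> ('tweet', 'Hello world!')
```

In the full app, a webhook handler would run this on the incoming message body, then call Twython to either post the tweet or fetch the hashtag results, and reply via Twilio.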
Challenges I ran into
It was a new experience working with the Twilio API as I had never worked with any SMS connected API before, so I ran into many challenges and setbacks in that area. The most challenging feat was to integrate the code that finds the tweet with a certain hashtag with the code that sends the SMS back to the user's phone.
Accomplishments that I'm proud of
I am proud of being able to integrate so many API's with little prior experience.
What I learned
I learnt how to work with different libraries and API's and overcome the challenges of integrating different sections of code.
What's next for QuickTweet
Right now, the find-tweet feature is very limited: it can only find tweets based on hashtags, and can only return 5 tweets per hashtag at a time. I am looking forward to extending that feature to find tweets by username, followers, recent tweets, trending tweets, etc.
Built With
flask
python
twilio
twitter
twython
Try it out
github.com | QuickTweet | A quick way to tweet and find tweets by sending an SMS to an assigned number | ['Shashank S I'] | [] | ['flask', 'python', 'twilio', 'twitter', 'twython'] | 39 |
10,420 | https://devpost.com/software/pathik-vibd2u | Inspiration
What it does
How we built it...
.
Challenges we ran into
.
Accomplishments that we're proud of
. learned
What's next for ...
Built With
javascript
Try it out
github.com | test | . | ['Kritika Singh', 'Akash Bajpai'] | ['4th Place'] | ['javascript'] | 40 |
10,420 | https://devpost.com/software/syncs-hackathon | Home Screen
Taking photo screen
Picture scoring
Leaderboard
Circular
Circular is a novel app designed to competitively motivate engineers, mathletes and anyone else to improve their freehand sketching for the sake of fame, glory and a sense of self-satisfaction. Our app allows users to take an image of their hand-drawn shapes and receive automatically generated feedback on the ideality of their submission and ways to improve.
HOW YOU BUILT:
Development was done using Kotlin as the primary language due to its type safety and ease of development when paired with Android Studio. Some key challenges that we came across were dealing with asynchronous tasks when sending pictures to the backend, as well as learning how to use Android's camera API to access the phone's camera hardware. These were resolved through intensive documentation reading.
We used a variety of Python packages, centred around OpenCV and Matplotlib. The shape detection algorithm relies on various Hough transforms and image preprocessing techniques to translate the image into a space where the user's desired shapes are revealed as prominent local maxima. To evaluate the score of a circle, the user's drawing is translated to an exact curve through LOWESS regression, and the score depends on its L1-norm distance from the desired circle in polar coordinates. Parallel lines are scored by the angle between gradients, determined by linear regression.
OUR STORY:
With our team being composed of engineers and math savants, drawing the perfect freehand circle is somewhat the holy grail of flexing on friends. This in addition to the fact that one of our members is petrified of failing his upcoming freehand sketching assignment, inspiration sprung forth.
In developing Circular, a lot of self-teaching was required as members wrangled with Android Studio, Kotlin and OpenCV for the first time. We learned of the struggles of transferring an app to a real phone, learning how to use Google Cloud functionality, and debugging code deployed to the cloud.
Most importantly, we learned of the power of git commit --amend to halve the Git tree when dealing with Google Cloud. If only we had learned that before the last couple of hours.
FUTURE WORK:
Features we would like to see in the future include an improved global leaderboard that detects cheating. To do this, we would have competitive video submissions so that fraudulent or non-freehand shapes cannot be submitted.
A personal statistics page with unlockable achievements would also help keep aspiring drawers motivated.
In our current solution, we have circles and parallel lines, and to add to this we would like to detect ellipses, right angles and parallel curves. Not only would this challenge the user that has already mastered the first 2 shapes but this would be very useful in assessing freehand engineering drawings that use all of these shapes.
Built With
api
async
googlecloudfunctions
kotlin
python
Try it out
github.com | Circular | Train to become the best at freehand circle drawing and more! | ['Jackie Wang', 'Andy Liu', 'Daniel Jiang', 'Amit Deep'] | [] | ['api', 'async', 'googlecloudfunctions', 'kotlin', 'python'] | 41 |
10,420 | https://devpost.com/software/spire-hagvsz | Minimalistic Login Screen
Assessment Centre
Tutorial Classroom
Hackathon Society Event
Networking Session at Atlassian
Inspiration
We were so sick of existing online video communication platforms failing to capture the humane attributes of real-life conversation. People seem distant, disengaged and separated. What we want in video communication instead is teachers and students, grandparents and grandchildren, interviewers and interviewees, speakers and listeners, not just isolated floating heads on a screen!
What it does
Welcome to Spire, the voice chat application simulating real-life conversations. Move around our virtual world through our web application, where your distance from other people determines exactly how loud you can hear them. Those awkward situations when you're talking over one another in Zoom? Sneakily pressing "leave meeting" when you know everyone can hear the exit sound? GONE, with our revolutionary application featuring a comment system, a platform for moving around and conversing via voice chat, and more!
More detail:
A faint circle outline will appear surrounding the avatar, illustrating the range of their voice. Any other avatar within this circle will have their video appear on a sidebar, as well as their audio streamed to the current user. This allows multiple group conversations to happen in the same call with absolute ease.
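The distance-to-loudness rule can be sketched as a simple gain function (written in Python here for illustration; the app itself would compute this in the browser). The linear falloff is an assumption: the description only says volume depends on distance and that audio cuts off outside the voice-range circle.

```python
import math

def voice_gain(listener, speaker, voice_range):
    """Gain in [0, 1] for a speaker's audio stream: full volume when the
    avatars overlap, fading linearly to silence at the range circle."""
    d = math.hypot(listener[0] - speaker[0], listener[1] - speaker[1])
    if d >= voice_range:
        return 0.0           # outside the circle: stream is not heard
    return 1.0 - d / voice_range

print(voice_gain((0, 0), (0, 0), 100))    # same spot -> 1.0
print(voice_gain((60, 80), (0, 0), 100))  # on the circle -> 0.0
```

Applying a gain like this to each peer's WebRTC audio track is what lets several conversations coexist in one call.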
Breakout rooms are clunky and tedious to operate, while Spire allows free roaming in the 2D world to simulate how an actual networking event or classroom tutorial operates! With casual backdrops for parties, formal backdrops for networking or interviews, working collision and so much more, Spire has limitless potential to deliver what the world really wants: effective, realistic online communication!
Alternate video demonstration!
link
How we built it
The frontend is constructed using ReactJS along with various packages such as:
Bootstrap for some upstanding UI
Axios for some awesome API calls
Socket.IO for some nice networking
React Router for some neat navigation
WebRTC for some vivacious voice connection
Challenges we ran into
Syncing sound together was very difficult.
Designing appropriate front end designs.
Implementing a comment section which works live for all clients connected
Accomplishments that we're proud of
Having multiple connections to the server simultaneously and updating each client live was really awesome to achieve.
Streaming sound was also very difficult to manage, but in the end client-to-client audio connections were achieved!
Aesthetically pleasing, contextually appropriate backdrop designs, i.e. classrooms, studios, assessment centres.
A fully functioning chat/comment section for the call was really awesome to implement as well!
What we learned
Most of the members were quite new to Vue and had only basic JS knowledge. As well as this, integrating sound streaming was a great challenge!
What's next for SPIRE
Allowing the host to control their own voice range.
Host control of participant avatar positions for classroom situations.
Audio equalizer to add room ambiance based on room size.
Built With
html
javascript
react
vue
Try it out
github.com
github.com | SPIRE | Isolated. Confined. Distant. Current video communication is LIFELESS! Spire rejuvenates video calls with an interactive UI and spatially accurate audio. Spire has REVOLUTIONISED virtual communication! | ['Judd Zhan', 'Adam Leung', 'Sean Gong', 'evaliu-jpg', 'Sashi Peiris', 'Zijun Hui'] | [] | ['html', 'javascript', 'react', 'vue'] | 42 |
10,420 | https://devpost.com/software/eddy-kl1mf5 | Opening screen
Enter topic for making the decision
Enter attributes used to evaluate options
Enter the options and attribute scores
Your options are evaluated!
Inspiration
It's really hard to make decisions sometimes. Our friend Eddy knows this struggle all too well. When there are too many options to choose from, and too many factors to consider, we suffer from paralysis by analysis.
So inspired by the latest psychological research into decision making, we designed this app to make decisions simple.
What it does
Eddy is a quantitative decision making app which tells you the best choice to make based on what's most important to you.
Users input the options available to them, the attributes they need to factor into making a decision, and assign scores for each attribute for each option. Then, our algorithm evaluates the suitability of each option based on these weighted attributes. This information empowers the user to make the best decision for their situation.
How we built it
We built the app using the Kivy library in Python, which allows us to easily run it on both Android and iOS. It uses a simple state machine which transitions from the menu screen, to the user's selection of topics, attributes priorities, and entering of options, to the program's decisions, and finally returns to the menu screen.
We experimented with variations of the choice evaluation algorithm to find the optimum method to appropriately weight the attribute priorities. The current algorithm divides the weightings on a linear scale between 1 and 2. For example, if there are 3 attributes, the weightings will be [2, 1.5, 1].
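The weighting scheme described above, with attribute weights spaced linearly between 2 and 1 in priority order, can be sketched directly. The single-attribute fallback and the weighted-sum ranking are assumptions for illustration.

```python
def attribute_weights(n):
    """Weights for n attributes, highest priority first, spaced linearly
    from 2 down to 1 (e.g. three attributes -> [2.0, 1.5, 1.0])."""
    if n == 1:
        return [2.0]  # assumed fallback; the description covers n >= 2
    return [2.0 - i / (n - 1) for i in range(n)]

def option_score(attribute_scores, weights):
    """Weighted total used to rank an option against the others."""
    return sum(s * w for s, w in zip(attribute_scores, weights))

w = attribute_weights(3)
print(w)                           # [2.0, 1.5, 1.0]
print(option_score([5, 3, 4], w))  # 5*2.0 + 3*1.5 + 4*1.0 = 18.5
```

Ranking options is then just sorting them by this score, so the attribute the user ranked first counts twice as much as the one ranked last.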
Challenges we ran into
Learning and working with Kivy during the hackathon. The library was new to the team and had very few examples online. There were many paths to achieving the same outcome with the library, so there was a large learning curve in figuring out the easiest option to implement in the given time frame.
Devising an algorithm to ensure factors are weighted reasonably when calculating an ultimate score to rank options based on user priorities.
Accomplishments that we're proud of
Reaching our minimum viable product which embodies a majority of the functionalities and design of the app in a short time frame, especially with a library that our team has not encountered before.
What we learned
How to design a user-focussed app to reduce analysis paralysis, a common problem given the number of choices we have in modern society.
How to effectively work as a team in developing and pitching the app, given that this particular app had no modularity, something to take into consideration in our future development.
What's next for Eddy
More features to enhance user experience!
For example, ranking significance of attributes using a drag feature and ability to save and edit decision attributes.
Introduce social component to making decisions.
To boost user engagement and allow users to learn from others, we will add a community page to the app. Here, users will be able to share their topics, and the attribute groups they decided to use. It will be easier than ever to come up with attributes, and users can be connected to a community of people facing similar choices.
Premium version, with updated algorithms to suit more complicated attribute analysis.
This would help larger organisations inspire more confidence in decisions and make the factors behind decisions more transparent across the workplace. Ultimately, a company’s value is just the sum of the decisions it makes and executes. According to the McKinsey Global Survey, on average 61 percent of respondents say most of their decision-making time is used ineffectively. For managers at an average Fortune 500 company, this could translate into more than 530,000 days of lost working time and roughly $250 million of wasted labour costs per year. Knowledge work is a powerful sector of the economy.
Built With
kivy
python
Try it out
github.com
drive.google.com | Eddy | Decisions made simple. | ['Angeni Bai', 'Giuliana D', 'Edwina Adisusila', 'Ada Luong', 'Andre Georgis'] | [] | ['kivy', 'python'] | 43 |
10,420 | https://devpost.com/software/harmonics | The landing page
Register
Sign up
Match
Out of matches
Mutual connections
User profile
Harmonics
Need someone to play your new song?
Want to contribute as a guitarist to produce an original song?
Or do you just want an original song to play on your brand new guitar?
Regardless of whether you are a music professional or just want to jam, Harmonics can match you with the right person to collaborate with.
Inspiration
Finding a band or someone to jam with is a problem many small artists and hobbyists have. Whilst platforms such as "BandMix" exist, these websites don't cater to different proficiency levels. On the other hand, while applications such as "Facebook", "Gumtree", "find-a-musician.com" and "Musolist" allow people to create advertisements, advertisements are neither efficient nor effective, as they are often a slow way to find matches. More shockingly, none of the listed sites allows the user to preview a demo clip! Isn't music all about listening and hearing?
To address these problems, our team set out to design a website that allows users to efficiently search through a wider range of artists while simultaneously listening to the artists' demo tracks. This greatly streamlines the process of matching artists with high musical compatibility. Our website is heavily inspired by the swiping mechanism popularised by "Tinder" and "Bumble", which entirely revolutionised how matchmaking was done.
What it does
Harmonics matches the user with a tight selection of artists based on their preferences, including level of engagement, roles, and genre of music. Instead of spending an hour reviewing irrelevant profiles, users can listen to up to 50 relevant artists in that hour, swiping left and right to find a jamming partner or a band.
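As a rough illustration of that preference-based filtering (field and preference names are assumptions, not taken from the project):

```python
# Hypothetical schema: each artist is a dict, prefs holds the user's choices.
def candidate_pool(users, prefs):
    """Return artists matching the user's engagement level, role and genre."""
    return [
        u for u in users
        if u["role"] in prefs["roles"]
        and u["genre"] in prefs["genres"]
        and u["engagement"] == prefs["engagement"]
    ]

artists = [
    {"name": "Ava", "role": "guitarist", "genre": "rock", "engagement": "hobby"},
    {"name": "Ben", "role": "vocalist", "genre": "jazz", "engagement": "pro"},
]
prefs = {"roles": {"guitarist"}, "genres": {"rock"}, "engagement": "hobby"}
print([u["name"] for u in candidate_pool(artists, prefs)])  # ['Ava']
```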
How we built it
Concept
Design
Front end
Back end
Demonstration
Design
The user interface and experience, as stated above, are influenced by "Tinder" and "Bumble", which revolutionised how matchmaking was done - i.e. swipe right and swipe left. The general style of the website adopts a contemporary, minimal layout that lets the user focus all their attention on the artists and their creations.
Our design team used MS Paint to create initial wireframes and detailed the mockups using Adobe Illustrator and Adobe Photoshop. Using the mockups, the design team and Front End Developers worked together to build the website interface.
Front end
The Front End Developers learnt how to write in HTML and utilise bootstrap. Afterward, they were given mockups and wireframes by the design team to implement into the final project. Next, the Front End Team and the Back End Developers merged the project, which is fully functional on a local device as well as the VPS, hosted on DigitalOcean. Later, the Front End team was merged into the design team which worked together to create the presentation.
The front-end developers' stack mainly consisted of:
JS
HTML
CSS
Bootstrap
Back end
Through communication with the front-end team, the back-end team implemented the business logic and relevant databases, giving the project its functionality. After the first meeting with the front-end team on the implementation of buttons and UI/UX features, the back-end team developed the logic that allowed users to sign up, log in, and match with other users.
These features were implemented using a range of technologies. The main back-end technologies used in this project consist of:
PHP
MySQL
Digital Ocean Droplet (Ubuntu 20.04) running LAMP
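The real back end is PHP with MySQL, but the core mutual-match logic can be illustrated in Python with SQLite; the SQL translates directly to MySQL. Table and column names here are assumptions, not taken from the project.

```python
import sqlite3

# In-memory stand-in for the MySQL database (schema assumed for illustration).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE swipes (
    swiper INTEGER,
    swipee INTEGER,
    liked  INTEGER  -- 1 = swiped right, 0 = swiped left
);
""")
conn.executemany(
    "INSERT INTO swipes VALUES (?, ?, ?)",
    [(1, 2, 1), (2, 1, 1), (1, 3, 1), (3, 1, 0)],
)

# Two users match when each has swiped right on the other.
matches = conn.execute("""
    SELECT a.swiper, a.swipee
    FROM swipes a
    JOIN swipes b ON a.swiper = b.swipee AND a.swipee = b.swiper
    WHERE a.liked = 1 AND b.liked = 1 AND a.swiper < a.swipee
""").fetchall()
print(matches)  # [(1, 2)]
```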
Accomplishments that we're proud of
Before this hackathon, most of us had never met. However, within hours, we became a fully-functional unit. Each person had their role and excelled at it. A big accomplishment, as a result, is completing a working prototype.
What we learned
Teamwork and collaboration skills.
Technical skills required for website development.
Team management and development.
Our team members learnt multiple programming languages, these include:
HTML
CSS
JS
Bootstrap
PHP
MySQL
Project Management
This project was done with the scrum (agile) method in mind, as it required little documentation and allowed us to rapidly prototype for this hackathon.
We created a Trello Board to delegate tasks and create deadlines.
We assigned J.T. and Marco to maintain the Trello Board
We had a stand-up meeting every 3 hours to report back what progress had been made and what still had to be done.
To collaborate, we used a mixture of Google Drive, Google Docs and Google Slides, and for the front and back end we used the Git version control software facilitated through GitHub.
Challenges we ran into
An obvious challenge was the limited time frame to complete the task. While the event was marketed as a 40-hour hackathon, many team members had other commitments and needed time off to rest and recharge.
Another major challenge we faced was the lack of common technical knowledge. Whilst Shaan and Jacky, the programming leads, know desktop development, neither had experience in website development before the hackathon. Three of our members are from non-software-development backgrounds and had to pick up programming concepts and technologies in an incredibly short timeframe, e.g. the Git version control system, the scrum (agile) method, and how to rapidly create websites using Bootstrap. Additionally, the majority of our members were either new to hackathons or new to software-oriented hackathons, making preparation for the hackathon, and the hackathon itself, harder.
Our third big challenge was the lack of resources. Due to time and budget constraints, we opted to make this hackathon as cheap as possible for us; as such, we weren't able to start development on other hackathon ideas such as dynamic video generation.
What's next for Harmonics
Develop a mobile app:
Aim to develop a mobile app for ease of access and portability. Through making this application we can gain a larger target audience & market share.
Implement chat features:
Through the implementation of chat features, we will be able to audit, monitor and control our platform better, allowing us to ensure a safer and better experience for our users.
Implement file upload:
After the implementation of the chat feature, we aim to implement file uploading to allow artists to better collaborate through the platform.
Implement listeners:
After the implementation of the base business requirements, we intend to add the ability for listeners to search and discover new music and artists.
Monetization:
Artists, producers Freemium (Current Plan)
- Limitations on the number of people you can swipe on
- Have ads
- Cannot "unswipe" or undo a mis-swipe
- Cannot "boost" profile, increasing visibility
VIP - ($5-$10 Monthly)
- No ads
- Ability to "boost" profiles, increasing visibility
- Can "unswipe" or undo a mis-swipe
Built With
bootstrap
css
digitial-ocean
html
javascript
mysql
php
Try it out
github.com | Harmonics | Meet. Jam. Create. Connecting artists to artists to create music. | ['Marco T', 'Alex Titchen', 'Athena Kam', 'Melody Ong', 'Jacky Wu', 'Shaan Khan'] | [] | ['bootstrap', 'css', 'digitial-ocean', 'html', 'javascript', 'mysql', 'php'] | 44 |
10,420 | https://devpost.com/software/eatsexplore | cafe visit with friend, quests for ubereats voucher
minigame example, also prototype
game skins purchase
visiting restaurants virtually
Inspiration
News about businesses affected by COVID; games, however, are on the rise, as is the desire to visit aesthetically pleasing restaurants and cafes!
What it does
Explore surrounding restaurants in VR with friends and support local businesses. Users play minigames to earn in-game currency and UberEats vouchers for that restaurant; the in-game currency can buy skins and accessories.
How I built it
A Java PApplet (Processing) project built with Gradle.
Challenges I ran into
Lots of bugs at first, making the code succinct, and I couldn't manage to export it.
Accomplishments that I'm proud of
making the prototype game in ~2hrs!
What I learned
doing this in a short period of time, thinking fast
What's next for EATSExplore
Going 3D, adding bonuses for repeat purchases, expanding into a full RPG, achieving the partnerships, adding tiers for restaurants, more interaction with friends and a lot more!
how to use the prototype
Java version 8: install with SDKMAN by running "sdk list java" and then "sdk install java 8.0.252-amzn"
Gradle: install by running "sdk list gradle" and then "sdk install gradle (version number)"
Built With
gradle
java
Try it out
github.com | EATSExplore | A 3D virtual experience game partnering with UberEats with restaurant-specific minigames to get in-game currency and uberEats vouchers, support local businesses and COVID-safe | ['Faye Xie'] | [] | ['gradle', 'java'] | 45 |
10,420 | https://devpost.com/software/project-navi-v982gj | homepage, chat page, page for study tips
Inspiration
Between hard lockdowns, travel bans, and turbulent political discourse, this year has been difficult for everyone, especially uni students.
Having experienced our own struggles and seeing our friends' struggle in isolation and suffer from growing cynicism, we wanted to make something that could stitch together an intimate community and make up for the relationships we couldn't have on campus.
What it does
Project NaVi is a website built to help domestic and international uni students form the on-campus relationships this pandemic has deprived them of and improve their wellness while in and beyond isolation. There are 3 primary features:
The Library - a digital space where you can study collaboratively in a small group of uni students from a choice of faculties
The Cafe (not in MVP) - a space for users to take a break and bond with the people they’ve met to build relations that go beyond the platform
Student Help - a student help blog where we as the developers refer our users to credible articles and utilities that can help maximise productivity and promote a healthy work/life balance
How we built it
The site is written in HTML5 and CSS. The UI/UX design was created in Figma to give a visual representation of what we were building.
First, our team members worked independently to build each of the three webpages - the home page, library page, and student help page - on the first day of the hackathon.
We then proceeded to link the webpages together and spent the rest of the hack polishing our UI/UX design to best represent our vision of what this project could look like.
We also decided on other features we could implement later on into our MVP to demonstrate the potential for this product.
Challenges we ran into
TIME. This being our first hackathon, we were challenged by the time constraints to ideate, iterate and prepare a pitch for our project so we couldn't polish it as much as we'd like.
Since we were not well versed in HTML, this hackathon was a massive crash course in improving our understanding and seeing what's possible with this language. We wanted to implement a working microphone and video conferencing feature but this proved to be something we could not overcome in this short time period.
Built With
css
html
html5
Try it out
github.com | Project NaVi | Struggle Less Stay Connected | ['Selina Jiang', 'Stanley Tanudjaja', 'Wenyao Chen', 'unswit Zhao'] | [] | ['css', 'html', 'html5'] | 46 |