Dataset columns:
- hackathon_id: int64 (values 1.57k to 23.4k)
- project_link: string (lengths 30 to 96)
- full_desc: string (lengths 1 to 547k)
- title: string (lengths 1 to 60)
- brief_desc: string (lengths 1 to 200)
- team_members: string (lengths 2 to 870)
- prize: string (lengths 2 to 792)
- tags: string (lengths 2 to 4.47k)
- __index_level_0__: int64 (values 0 to 695)
10,371
https://devpost.com/software/heimdallr
Awards We'd Like To Be Considered For We would like to be considered for the overall award, the Google Cloud award, the Anthem award, and the Domain.com award. Domain.com Submission Our registered domain name is: ramblinwreckfromgeorgia.tech Inspiration Our goal is to help children with special needs better understand and interact with the world around them. Research has shown that children with autism are unable to learn language at the same rate as their peers. Instructional sessions with doctors can help, but they are often expensive and not widely available, especially due to the current global situation. Behind the Name Just like Heimdall from Norse Mythology, our product allows users to see the world in a new light. What it does Our solution leverages the increasingly widespread use of mixed reality to provide a cost-effective communication interface. Our software uses computer vision to determine what object the user is looking at. It then displays the name of that object on the Google Glass (or other Android device). How we built it Our solution is a mobile application for Android devices that uses Google's TensorFlow Object Detection API. With our solution, smartphones and Google Glass can help reinforce the visual cues for everyday objects through speech and text across multiple languages. Challenges we ran into We ran into issues with the speech-to-text API since our program uses a dynamic input from the object detection output of the computer vision program. Our solution was to ensure that we only detect the most prominent, centred object in the frame and that the program only speaks once for every new detection. We also ran into the issue of privacy with Head Worn Displays (HWDs) since people will be using our products not only in public spaces but also on private property. This was resolved by using Google Glass, which has multiple built-in features for ensuring user privacy through encryption, providing notification through LEDs, and more.
Accomplishments that we're proud of We're proud to be hacking for social good. We're proud to have made a finished, deployable, and testable product. What we learned We learned a lot about working with various APIs, and how to work as a team. What's next for Heimdallr We'll be reaching out to the Emory Autism Center and Children's Healthcare of Atlanta to test our product in the real world. Built With android-studio google-cloud google-glass google-speech-to-text google-translate java Try it out github.com
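The fix described in the challenges section — pick only the most prominent, centred detection and speak each object just once — can be sketched roughly as below. This is a conceptual Python sketch with made-up detection tuples, not the project's actual Java/Android code.

```python
def pick_centered(detections, frame_w, frame_h):
    """detections: list of (label, score, (x, y, w, h)) boxes from the detector."""
    cx, cy = frame_w / 2, frame_h / 2

    def prominence(det):
        label, score, (x, y, w, h) = det
        # Manhattan distance from the box centre to the frame centre.
        dist = abs(x + w / 2 - cx) + abs(y + h / 2 - cy)
        # Favor boxes that are confident, large, and close to the centre.
        return score * w * h / (1 + dist)

    return max(detections, key=prominence)[0] if detections else None


class Announcer:
    """Speak a label only when it differs from the last one announced."""

    def __init__(self):
        self.last = None

    def update(self, label):
        if label is None or label == self.last:
            return None  # nothing new to say this frame
        self.last = label
        return label  # caller would pass this to text-to-speech
```

Run per frame: `Announcer.update(pick_centered(...))` returns a label only on a new detection, which is what keeps the spoken output from repeating every frame.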
Heimdallr
Reimagining Reality for Children with Special Needs
['Tanmoy Panigrahi', 'William Cooper', 'Jon Womack']
['Anthem: Personalized, digital medicine in the age of COVID and beyond']
['android-studio', 'google-cloud', 'google-glass', 'google-speech-to-text', 'google-translate', 'java']
25
10,371
https://devpost.com/software/picselar
PicselAR Hype Video (separate from above video) Warning If using the GitHub files instead of the AR links, beware that we have found a bug with SparkAR that auto resets some values to 0. In order to fix this, click on the backgroundPixel object in the scenes, and then click on properties. Set height and width to 100% Relative. We do not know why this bug occurs and have spoken with Facebook representatives but the filters will not work otherwise. Links Please do not click on these unless you are staff. These links have limited uses as the filters have not been approved yet Instagram : Instagram Picsel AR Facebook : Facebook Picsel AR Instructions We made sure that when we developed the filter, it would be very simple to use. Because Facebook and Instagram have different capabilities, the filters for each platform have different options. When the filter opens, the user has options to choose which method of pixelation they want to use. The instructions for each specific method are below: Instagram Audio Analysis - This method analyzes the data from a spectrum equalizer and automatically pixelates the background if a certain average frequency threshold is exceeded. Slider - This method is also very intuitive. It just adds a slider on the right side of the screen that the user can adjust to the desired level. World AR Block - This is the most interesting method of pixelation. This method allows you to embed an emblem of our logo anywhere in the world. Our filter will remember the location of this object no matter where you look, or if you switch between front and back camera. The only thing you need to keep in mind for this option is that the farther away the object, the more pixelated your background will become. Facebook Hand Control - This method is both intuitive and fun to allow for creativity! If you put your hand towards the camera, it will further pixelate the background. If you put your hand away from the camera, it will unpixelate the background.
World AR Block - This works the same as the Instagram version described above. Inspiration COVID-19 wrought an onslaught of problems concerning privacy due to the increase in video calls where people are required to share their video. Through creating an initially simple pixelation AR filter, we realized that by adding additional customization we enable users to utilize their own creativity in making skits specializing in reveals or hiding things. What it Does PicselAR gives users a way to pixelate their background through various means (e.g., hand gesture/slider, distance to a virtual object, and audio level). It also gives rise to skit creation with Instagram Reels as demonstrated above. Hand Gesture/Slider This option grants users the most convenient degree of customization over how their background is perceived. It is most useful if one has no need to pixelate their background or it needs to be pixelated the entire time regardless of the context it is used in. Virtual Object We believe this option to be the least useful in a professional setting but the one with the most potential for creativity. Users can place an object somewhere in a virtual 3D space and the camera's distance to it determines the background's pixelation. This feature could be used in creating short horror skits when paired with a dark environment or for creating skits that involve movement. Audio Level While it cannot be easily conveyed through a visual format, if the user's microphone picks up audio above a certain threshold, the background will be pixelated as much as possible.
We envision this being used to prevent a person's home life being exposed in case someone begins yelling or barges into their room. On the more light-hearted side of things, it could be used in a skit in which someone attempts to hide something from someone. How We Built it We built PicselAR using Facebook's Spark AR software. Within Spark, we were able to use all of the different tools to isolate our background and then learned how to use Spark's specific patches to manipulate all of our data. Challenges We Ran Into We realized that Spark AR has no way of saving values, so we had to find a way to implement a finite-state machine into our patch editor to get around this issue. We also ran into various issues with regards to a new bug we discovered that reset some of our values every time we decided to run either of our projects. Accomplishments We're Proud of We are extremely proud of being able to produce two separate builds of our application. We were able to provide exclusive features to both platforms but still made them similar and were able to incorporate World AR into both of them. What We Learned We learned all about how to use various aspects of Spark AR in order to isolate different objects and layers. Furthermore, making a filter required a lot of skills outside of actually programming, which is all we really expected to be doing. For example, we had to use Blender and Aseprite to make the assets for the project. We also learned how to work efficiently together on a project outside of our comfort level (which is all we could ask for)! What's Next For 【PicselAR】 We wish to continue to build out this idea so it's perfect for our friends to use. Furthermore, we would love to show off our filters to our followers and see what they can think up with it; after all, that's the most important part of making a project like this. Thank you for checking out our project! Built With asperite blender sparkar Try it out github.com www.instagram.com www.facebook.com
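The finite-state-machine workaround mentioned in the challenges section can be illustrated with a small conceptual sketch. Spark AR's patch editor has no mutable variables, so persistent "memory" has to be encoded as which state the machine is currently in. The state and event names below are hypothetical, and this is plain Python, not Spark AR patch logic.

```python
# (current_state, event) -> next_state; any unlisted pair leaves the state
# unchanged, which is how the machine "remembers" a value across frames.
TRANSITIONS = {
    ("idle", "hand_near"): "pixelated",
    ("pixelated", "hand_far"): "idle",
    ("idle", "slider_moved"): "manual",
    ("manual", "reset"): "idle",
}


class PixelationFSM:
    def __init__(self):
        self.state = "idle"

    def feed(self, event: str) -> str:
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

Feeding events frame by frame (`fsm.feed("hand_near")`, then `fsm.feed("hand_far")`, ...) moves the machine between states without ever storing a raw value, mirroring the patch-editor trick.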
PicselAR
Privacy Made Enjoyable
['Armaan Lala', 'Robin Rehman', 'Kevin Sadi']
['Facebook: Spark AR']
['asperite', 'blender', 'sparkar']
26
10,371
https://devpost.com/software/soapbox-world-effect-0lephu
Podium Model 1 Podium Model 2 Spark AR Code Snippet (Judge for Emerging Track) Inspiration This was our team's first time using Facebook's AR software, and we wanted to create an effect that would stand out. In the current social climate, the voices of the people are more important than ever. Our team came together to create an open platform available to anyone to raise their voice and spread their message to the world. What it does The filter places a podium in front of the user in the camera's frame. It then projects an image/video from the user's gallery onto a wall or other flat surface behind the user to help support their message. How we built it Our team utilized Facebook's Spark AR software to create the Instagram world effect. The effect utilizes background segmentation to layer the user between the podium and the green screen object, which is selected from the user's own photo/video library via Spark AR's gallery texture. Plane tracking is used to project the image onto a wall or other flat surface. Further interactions were designed to allow the user to manipulate the podium's position and size. Challenges we ran into Background segmentation ensures that the background of the recorded image, including the images that we added into the background, appears behind the user. However, trying to place an object in front of the user while utilizing background segmentation to ensure other objects are behind the user at the same time was difficult. The solution we developed to overcome this problem introduced further difficulty with scaling and positioning the foreground object, necessitating additional logic to accommodate. Furthermore, it was an ongoing challenge to scale and position the foreground object without affecting the background image at the same time.
Plane tracking is only possible with the backwards-facing camera, which limits the possible shot composition. Accomplishments that we're proud of We implemented both segmentation and plane tracking in a complementary fashion and overcame challenges associated with both toolsets. In addition, we created our own 3D models from scratch, using Rhino 6. What we learned As first-time hackers, we learned how to use Spark AR Studio and Rhino 6 to create our first-ever AR filter for Instagram Reels. What's next for "SOAPBOX" World Effect If we had more time, we would have liked to create a proper Instagram Reel recording. Since our effect was not approved for publishing at the time, we had difficulty saving the reel to include in our demo video. In addition, we would have liked to improve the plane tracking features to allow the user to better manipulate the green screen object. In the future, we would like to have the ability to add multiple images from our respective photo/video libraries as well as more objects, and the ability to manipulate these objects and images separately. Built With facebook instagram javascript rhino sparkar Try it out github.com
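The layering challenge described above boils down to a draw order: segmentation splits the camera frame into person and background layers, the plane-tracked gallery image sits behind the person, and the podium is drawn in front. A tiny conceptual sketch (plain Python with made-up layer names, not Spark AR code):

```python
# Layers listed back to front, as the effect composites them.
LAYERS = [
    "camera_background",    # segmented-out camera background
    "gallery_greenscreen",  # user-selected image/video, plane-tracked to a wall
    "person",               # segmented user layer
    "podium",               # 3D model rendered in front of the user
]


def render_order(layers=LAYERS):
    """Return (layer_name, z_index) pairs; higher z is drawn later (in front)."""
    return [(name, z) for z, name in enumerate(layers)]
```

The difficulty the team hit is exactly that segmentation alone forces everything into the two bottom layers; the podium needs its own layer above the person, with its own scale/position logic.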
"SOAPBOX" World Effect
Not a soap, but a foam of expression. Craft a bespoke platform from your own photos/videos, then take the stand to defend it (free podium included).
['Krithik Acharya', 'Rebecca Lin', 'bfaught3 Faught', 'Jordan Halim']
['Facebook: Spark AR']
['facebook', 'instagram', 'javascript', 'rhino', 'sparkar']
27
10,371
https://devpost.com/software/spark-ar-filter-for-concert-experiences
Inspiration I was inspired by one of my most missed experiences during recent times: concerts. I see a lot of potential in the intersection of music and design as with the rise of Spotify, Apple Music, Tidal, and other streaming services, almost everyone has unlimited access to the music of their choice. Everyone loves to share attendance of concerts on social media and I thought with Facebook's Spark AR platform musicians could create individualized filters to augment their concert experience, driving traffic to Facebook and Instagram instead of competitors. What it does My Spark AR filter uses background segmentation in the front-facing camera mode to enable a user to identify where they are and who they are seeing quickly. In the back-facing camera, I used plane-tracking to augment reality and have album-relevant objects float above the stage so that videos of the artist are more visually engaging. How I built it I built supplemental images using the Adobe Suite and implemented the rest of the project inside of Facebook's Spark AR Studio. Challenges I ran into I had trouble testing the filter on my phone, but I was able to use their Hub to test it through Instagram instead of using the phone preview feature built into Spark AR Studio. Accomplishments that I'm proud of I am proud that I was able to learn a new program by myself in such a short period of time. What I learned I learned a lot about the workflow of Spark AR Studio and was able to follow Facebook's tutorials to better understand the tools inside of the program. What's next for Spark AR Filter for Concert Experiences My next steps for the Spark AR Filter for Concert Experiences are to create custom 3-D objects for an artist and ideally work with one leading up to their concert once concerts can start to happen again. It would also be important to establish a way to distribute knowledge of the filter before the concert, perhaps through posters featuring a barcode that links to the filter.
Additionally, the front-facing camera aspect could use polish. I would like to approach that in a way that allows the user to share their surroundings (say the crowd was huge or they had their friend next to them), while still establishing who they are seeing in an easy way. Further, I would like to look into how to better incorporate links to an artist's music services so they could benefit as well, incentivizing artists' teams to create these filters. Built With adobe-illustrator particle photoshop Try it out www.instagram.com
Spark Ar Filter for Concert Experiences
Using the Spark AR platform to promote usage of Instagram Reels through an interactive concert sharing experience
['Matthew Askari']
['Facebook: Spark AR']
['adobe-illustrator', 'particle', 'photoshop']
28
10,371
https://devpost.com/software/that-s-so-local-cw98bl
Small-Business Focus Have you been lonely during these last several months of social distancing? Do you miss the social interaction and organic spirit of your local farmer's market? Have you seen family and friends who own small businesses struggling to stay afloat? We know we have, and that's what inspired us to create That's so Local. That's so Local is a virtual farmer's market. Our interface allows buyers and sellers to connect for face-to-face transactions in a virtual environment. We designed our interface in Figma and then deployed a lo-fi prototype in React JS. What we Learned We had a high learning curve using Figma considering none of us had prior experience. We also ran into CORS issues; the front-end browser is unable to call NCR's API gateway since they were not on the same domain. As a result, a server-side layer sitting between That's so Local's client and NCR's server had to be developed. Additionally, Node and React take up more memory than IBM Cloud Foundry allows for free Docker instances. As a result, the build needed to be optimized by starting from an Alpine container and copying over only the necessary static files generated by React. Nginx was used to route traffic and serve them appropriately. Lastly, due to a lack of experienced developers, the team focused on deployment and API integrations over front-end functionality. The resulting lightweight front end made keeping track of client state rockier. Technical Skills Gained Our team was proud to have created a mid-fi rendering in Figma because we learned the program from scratch. We all learned new skills like creating designs in Figma, and also using React and connecting with the NCR payment software. Our fluency in both design terms and technical terms increased as well because we were able to collaborate on both sides of the process. We learned about deploying web applications to the IBM cloud, as well as the huge suite of computing power that's offered there.
Conclusion Our team’s ultimate wish is to help solve the problems That's so Local seeks to address. We’ve all loved attending our home community’s local farmer’s markets and have been devastated to see them go. Our hearts especially go out to the vendors and the people whose income depended on the market. As lockdowns in America are hopefully lifted in the coming several months, we think the space is prime for some digital assistance to help cities warm up to farmer’s markets again, and we hope to evaluate whether That's so Local could be a crucial part of that. Built With adobe-illustrator davinci-resolve docker express.js figma git ibm indesign javascript materialui ncr nginx nodes.js photoshop react reacthooks Try it out github.com
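The CORS workaround described in "What we Learned" is the classic server-side proxy pattern: the browser only ever talks to the app's own origin, and the server forwards requests to NCR's gateway, so no cross-origin request leaves the browser. A minimal sketch of the routing logic, with a hypothetical gateway URL and path (the real service used Express.js and Nginx, not Python):

```python
NCR_GATEWAY = "https://gateway.ncrcloud.com"  # hypothetical base URL


def proxy(path: str, origin: str) -> dict:
    """Map a client request path to the upstream URL, and build the CORS
    headers our own server attaches to the relayed response."""
    return {
        "upstream_url": f"{NCR_GATEWAY}{path}",
        "headers": {
            # Because the response now comes from our server, we control the
            # CORS policy instead of NCR's gateway.
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
        },
    }
```

The server would fetch `upstream_url`, copy the body through, and reply with these headers; from the browser's point of view the request never left the site's domain.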
That's so Local
That's so Local is a virtual farmer's market that brings together the buyer and seller through a unique online experience. It fosters social interaction without unwanted germs.
['Katie Carlson', 'Brandon Moore', 'Lily Sullivan']
['IBM: The Community Response to COVID-19']
['adobe-illustrator', 'davinci-resolve', 'docker', 'express.js', 'figma', 'git', 'ibm', 'indesign', 'javascript', 'materialui', 'ncr', 'nginx', 'nodes.js', 'photoshop', 'react', 'reacthooks']
29
10,371
https://devpost.com/software/newsreel-xeqrj6
Inspiration Given the extreme importance of being knowledgeable about what is going on in the world and the prevalence of fake and unreliable news, we identified that there is a need for a better way to distinguish factual and accurate information presented in the news. Inspired by the NewsQ challenge, we chose to investigate the Coronavirus Pandemic in the United Kingdom because there is a lot of hysteria surrounding the virus, which makes it a target for misinformation. Design Document Link: https://drive.google.com/file/d/1pTJ9m88R7vYW9w-dqvYAWBwB5jZeFq-i/view What it does We mainly aimed to create a strong backend service that is capable of rating any relevant news article. Based on a trained data set that we created, we could input a list of articles and assign each a legitimacy score. Several factors go into determining the legitimacy score of a given article, such as the number of quotes, the tone, the number of typos, the number of swear words, the number of links the article has, and the word count. To provide a basic visualization of our results, we have organized it on a webpage which neatly lists the ultimate rankings. How we built it We began by building a robust website parser which had the capability of analyzing a variety of websites. Meanwhile, we also developed a machine learning model that took an initial evaluation of websites and compared it to a new list of websites. Once we had our models built, we transferred the project from a local device to a server.
Challenges we ran into Finding a standard technique for scraping a variety of pages; ensuring our local changes were reflected on our server; determining why there were false positives that led to legitimate articles being ranked too low; managing multiple versions of models and output files. Accomplishments that we're proud of Hosting on a unique domain name; scraping from a variety of websites; implementing a neural network; the ability to maintain strong collaboration even virtually; putting together a wide combination of web technologies; using IBM Cloud's Watson Tone Analyzer to determine the tone; setting up the Ubuntu virtual machine service using Azure Cloud Services. What we learned The most valuable lesson we learned from this project was looking at all the requirements. There were several times during the development process where we found ourselves referring back to the directions to ensure we addressed all aspects of the project. Another important lesson we learned was how there are two sides to a measured outcome. For example, an article may have very few quotes, so at the surface level it may seem unreliable. However, we learned that it is also important to consider that an article may be a primary source that does not need to have quotes in order to provide legitimate information. What's next for NewsReel The Fact Check API by Google; a larger dataset and more training for better accuracy; more criteria for analyzing a web page. Link to CSV with rankings Link: https://drive.google.com/file/d/1Z4g7Jyfp0fRAGutTgdcjmECx9_YbtaYJ/view?usp=sharing Built With azure cloudflare flask git html/css ibm-watson javascript keras python tensorflow Try it out github.com newsreel.nuclide.tech drive.google.com drive.google.com
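A toy version of the legitimacy scoring described above, to make the feature idea concrete: count quotes, links, typos, and swear words, then combine them. The weights here are invented for illustration; the real NewsReel model was a trained neural network, not a hand-tuned linear rule.

```python
def legitimacy_score(article: dict) -> float:
    """Score an article 0-100 from simple surface features (toy weights)."""
    text = article["text"]
    quotes = text.count('"') // 2          # pairs of quotation marks
    links = article.get("links", 0)
    typos = article.get("typos", 0)
    swears = article.get("swears", 0)
    word_count = len(text.split())

    score = 50.0
    score += 5 * quotes                    # quoted sources add credibility
    score += 2 * links                     # outbound links/citations help
    score -= 10 * typos                    # sloppy editing lowers the score
    score -= 15 * swears
    score += min(word_count, 500) / 50     # length helps, up to a cap
    return max(0.0, min(100.0, score))
```

Note the "two sides" caveat from the write-up applies here too: a primary source with few quotes would be unfairly penalized by a naive rule like this, which is one reason a learned model beats fixed weights.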
NewsReel
How often do you question the reliability of your news? We target this essential question with our backend service. Trained on countless samples, it can identify key attributes to rank any article.
['Megan Dass', 'Ignacio Di Leva', 'Sneha Roy', 'Udit Subramanya']
['IBM: The Community Response to COVID-19', 'NewsQ for Social Good']
['azure', 'cloudflare', 'flask', 'git', 'html/css', 'ibm-watson', 'javascript', 'keras', 'python', 'tensorflow']
30
10,371
https://devpost.com/software/wonderlab
Object interaction development workflow demonstrating the interactions available to WonderLab users. Testing out the rotation and scaling features Building out an asset listing Networking with a remote team member (a first for our team)! Inspiration In 2020, there's never been a better time to ReImagine Reality. The pandemic has touched us all, everyone in the entire world, and almost everyone has lost touch with something important to them. Often it's the places that we miss most: the restaurants, parks, friends' houses, amusement parks, and even the office buildings that serve as the backdrop of our lives. Without them, life just doesn't feel the same. But what if there was a way to bring those places to us, even if we can't go to them? What if there was a way to reconnect with the reality that we lost, and ReImagine it along the way? And what if there was a way to make the places that we know and love even better than they were before, using the power of our creativity and imagination? And best of all, what if it was accessible to everyone, regardless of how skilled they are at technology and computer science? WonderLab is the solution. What it does WonderLab places the user in a digital realm accompanied by the assistant robot Rollanda ("Roll", get it?). Rollanda can make anything you desire appear in the space, and has access to a large library of assets in many colors. She can make rooms, trees, garden fences, chairs, paintings, computers, kitchen items and anything else you can imagine come out of thin air, just by telling her "I want a [object]". For example, if you wanted a room, you simply tell Rollanda "I want a room", and she will make one appear for you. She'll also give a verbal affirmation that she did what you asked, including her favorite word "voila!" (pronounced voy-lah). This means that anybody, at any skill level, can jump into the WonderLab and reimagine their reality.
Now, let's take a look at a few examples of where this technology can be useful, and how it improves upon previous VR applications: Let's imagine that you've been in quarantine for the last few months, dutifully avoiding public spaces and forgoing family visits during the coronavirus pandemic. As Thanksgiving rolls around, you really may not be able to head home to eat at the family table like always. With WonderLab, you have the power to recreate that environment in an immersive and intuitive way, without needing any coding or modelling experience at all. There is no application that will ever be in the Oculus store which can simulate the most cherished locations in our lives; the ones which hold meaning only for ourselves. WonderLab can do that. Or, perhaps you have some great ideas about how beautiful your neighborhood could be, if only you could add your own touches. Maybe you think it would look better if your house was a massive castle, or your neighbor's house (whose lawn never gets mowed) was in the dungeon. WonderLab empowers users to exercise their creativity regardless of skill level, and allows them to create imaginative environments that are still based on and reflect the real world. In this way you can still connect with the places you love while exercising your imagination, all the while breaking the monotony of quarantined life. You are empowered to reimagine your reality; to make it better, more ideal than the one left behind before the pandemic. Lastly, let's imagine you're a youth coordinator who runs a number of programs in your neighborhood, but you find that you can no longer meet up in person due to the coronavirus. While you could meet up in a Zoom call or even in services like RecRoom, nothing will ever replicate the places that you know and love. With WonderLab, you can create virtual environments that reflect the places you're already attached to, so you don't have to lose touch with them even though you can't be there in person.
How we built it Hackathon projects require a great deal of organization and time management to pull off, especially when the members are not located in the same space. We exercised project management skills to break our team up into a modelling team and a development team. The development of this project (especially IBM integration) proved to be quite difficult, so almost all of our members worked primarily to code the WonderLab environment. Thus, our team divided into these groups: Development The development team integrated Unity with 3 IBM services: Watson Assistant, Text to Speech, and Speech to Text. They built two primary functions: spawning items on voice command, and changing out spawned items for other prefabbed objects. They utilized dictionary and list data structures to manage the number of items that Rollanda could spawn, and they set up the Oculus Quest for use with the WonderLab. Another aspect of the development team's work was to code the interactions that users can have with objects in the scene, including rotation and scaling. This was accomplished thanks to Patrick's knowledge of UI/UX design in Oculus, which was one of the most fun aspects of this project. This chart demonstrates the user interface development workflow for object interaction that we developed during this Hackathon. Much of what we accomplished during this hackathon none of us had ever tried before. We aimed to provide a novel use for IBM's speech-to-text software. While most use cases concern data collection or process automation, we felt that it would be interesting to see what could be done with IBM cloud services beyond those fields. There were few tutorials or resources for anything like this on the internet, and we truly learned a great deal in the development of WonderLab. Modelling The modelling team was mainly represented by Austin, who used storyboarding and reference imagery to develop a repository of items for use in this application.
Due to the number of items necessary for use in this scene, we had to rely on a small number of free asset packs to fill in the items that were impossible to model in the short time frame. We used assets from Unity's Essential Japan Office scene, as well as a forest pack that supplied trees. There were nearly 50 items in total that Rollanda could instantiate, which was far too many for one person to model and texture in 36 hours. The modelling team relied on the Maya - Substance Painter - Unity pipeline to produce these assets. Almost every asset has at least one variant, which Rollanda can instantiate at random. This was accomplished through the use of prefabs, where textures exported from Substance Painter were used to create a variety of different themed materials for each mesh in Unity. These were saved as prefabs and used to provide a sense of variety and immersion in the simulation. Challenges we ran into Our team ran into a number of challenges during the hackathon, from communicating with a member who lived in another state and wrangling with IBM's many services for the first time, to learning how speech-to-text events can impact game objects and producing the sheer number of assets that were required for this project. At the moment, sleep deprivation is probably the greatest challenge this author faces in completing our submission for HackGT 2020. One particularly difficult challenge was developing the scaling feature. We calculated the scaling in real time by measuring the distance between the controllers. This was quite difficult and required a huge time investment, which occupied one of our members throughout HackGT 2020. We are proud to say that we were able to successfully build out this feature. We created a guide on how to use this product, which is posted in our GitHub, so that future users who are interested in making this service work on their own computers might have an easier time with it and learn from some of our mistakes.
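The scaling feature described above — sizing an object in real time from the distance between the two controllers — can be sketched as a ratio of the current controller spread to the spread when the grab started. A conceptual Python sketch (the real implementation is Unity/C# with Oculus controller transforms):

```python
import math


def controller_distance(a, b):
    """Euclidean distance between two controller positions (x, y, z)."""
    return math.dist(a, b)


class GrabScaler:
    """Scale an object by the ratio of current to initial controller spread."""

    def __init__(self, base_scale: float, initial_distance: float):
        self.base = base_scale  # object's scale when both grips were pressed
        self.d0 = initial_distance

    def scale_for(self, current_distance: float) -> float:
        # Spreading the controllers to twice the starting distance doubles
        # the object's scale; bringing them together shrinks it.
        return self.base * (current_distance / self.d0)
```

Each frame while both grips are held, the object's scale is set to `scale_for(controller_distance(left, right))`.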
Accomplishments that we're proud of Hackathons are just plain difficult, but they are ultimately rewarding. We built what we think is an amazing product, capable of producing a variety of entertaining, useful, and beautiful environments that are accessible to people of all skill levels. All of the team members remarked that none of us would've been able to make a project like this only two months ago. This hackathon season was difficult, but each of us learned a lot in every hackathon and improved by strides every weekend. This project represented the culmination of our growth thus far, and we feel that we worked together extremely well. We used Premiere Pro for the first time. Austin spent 12 HOURS making the 2-minute demo video. PLEASE ENJOY! What we learned The modelling team learned a great deal about Unity and Premiere Pro this hackathon, having used both of these tools infrequently in the past. The development team learned how to make Watson Assistant instantiate objects in game, which was a major accomplishment. The development team is proud to say that they learned how to scale objects using Oculus controllers in real time using a new and innovative method. This is both visually appealing and has interesting coding supporting it, which we count as a major accomplishment for our team. What's next for WonderLab The major functionality that we want to add is a multiplayer feature. We would also like to create many more assets than we had time to create this hackathon. Built With c# cloud ibm ibm-cloud ibm-watson maya oculus oculus-gear-vr substance-painter unity visual-studio Try it out github.com wonderlabmidas.weebly.com
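The "I want a [object]" flow described above — a speech-to-text transcript matched against a dictionary of assets, a random prefab variant chosen, and a spoken confirmation returned — can be sketched as below. The asset names are illustrative placeholders; the real project does this in Unity/C# with Watson Assistant and IBM's Text to Speech.

```python
import random
import re

# Hypothetical asset library: spoken name -> prefab variants to pick from.
ASSET_LIBRARY = {
    "room": ["room_basic", "room_japanese"],
    "tree": ["tree_oak", "tree_pine", "tree_maple"],
    "chair": ["chair_red", "chair_blue"],
}


def handle_transcript(transcript: str, rng=random):
    """Return (prefab_variant, spoken_reply); variant is None if unrecognized."""
    match = re.search(r"i want an? (\w+)", transcript.lower())
    if not match or match.group(1) not in ASSET_LIBRARY:
        return None, "Sorry, I don't have that yet."
    variants = ASSET_LIBRARY[match.group(1)]
    # Almost every asset has at least one variant, instantiated at random.
    return rng.choice(variants), "Voila!"
```

The returned variant name stands in for the prefab Unity would instantiate, and the reply string for what the text-to-speech service would say back.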
WonderLab
We invite you to ReImagine Reality, and make the world of your dreams
['Patrick Molen', 'Roshaan Siddiqui', 'Ines Said', 'Austin Stanbury']
['IBM: The Community Response to COVID-19']
['c#', 'cloud', 'ibm', 'ibm-cloud', 'ibm-watson', 'maya', 'oculus', 'oculus-gear-vr', 'substance-painter', 'unity', 'visual-studio']
31
10,371
https://devpost.com/software/canvas-board
Creating a Venn Diagram with Canvas Board Creating a Free Body Diagram Inspiration Reimagining virtual education by providing educators a virtual whiteboard that makes teaching easier. This idea is an alternative for educators who screen-share paint applications when drawing out concepts in classes such as physics, linear algebra, database modeling, and more. With this convenient whiteboard, educators can optimize class time for teaching concepts instead of dealing with the inconvenience of drawing with their mouse while streaming to their class. What it does Canvas Board uses Microsoft speech-to-text to accept verbal commands for drawing. The user, the educator, can say basic commands such as "Canvas write [text]" to render a textbox with the input, and "Canvas [shape]" to render a specific shape on the screen. NLP through Microsoft LUIS converts inputs into actions for Canvas Board to run. Using handtrack.js, Canvas Board allows the user to drag and resize these visual elements across their screen and build diagrams, using the "Place" voice command to anchor an element. There are additional commands for color input, angle orientation, and opacity for personalization. Challenges we ran into Configuring handtrack.js, handling speech-to-text queries, and rendering canvas elements that overlay an active video element proved to be quite difficult. What we learned Throughout this project, we learned how to integrate various technologies such as Azure and handtrack.js. Furthermore, we learned to work as a team, using version control and managing tasks. What's next for Canvas Board Optimizations to improve framerate Frontend redesigns Integration with online streaming services such as Zoom Finger tracking for more accurate interactions An expanded feature set to incorporate more math and physics concepts Built With azure cognitive-services github handtrack.js html5 javascript luis-ai speech-to-text Try it out github.com
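The command grammar described above ("Canvas write …", "Canvas <shape>", "Place") could be resolved with simple rules before, or as a fallback to, the LUIS round-trip. A rough sketch, with the shape list and return shape purely illustrative:

```python
import re

SHAPES = {"circle", "square", "triangle", "line", "arrow"}

def parse_command(transcript):
    """Map a speech-to-text transcript to a drawing action tuple."""
    text = transcript.strip().lower()
    write = re.match(r"canvas\s+write\s+(.+)", text)
    if write:
        return ("textbox", write.group(1))
    shape = re.match(r"canvas\s+(\w+)$", text)
    if shape and shape.group(1) in SHAPES:
        return ("shape", shape.group(1))
    if text == "place":
        return ("anchor", None)  # anchors the currently dragged element
    return None

print(parse_command("Canvas write Free Body Diagram"))  # → ('textbox', 'free body diagram')
```

In the real app, LUIS handles paraphrases ("draw me a circle") that a regex grammar like this would miss, which is the main reason to keep the NLP service in the loop.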
Canvas Board
Canvas Board is an Augmented Reality approach to the virtual classroom. It enables educators to draw out concepts and diagrams with vocal commands and manipulate them on-screen with their hands.
['Nitin Ramadoss', 'Sohil Kollipara', 'Raghu Radhakrishnan', 'Aditya Nair']
['Microsoft Azure Champ Prize - Hack for Good']
['azure', 'cognitive-services', 'github', 'handtrack.js', 'html5', 'javascript', 'luis-ai', 'speech-to-text']
32
10,371
https://devpost.com/software/hacknews
Search Least credible article Top search result Back-end Article overlay Most credible article Inspiration Our team was inspired by the increasing tension between the United States and Yemen and felt the urge to collect and spread an unbiased and transparent picture of the reality in Yemen to Americans. Nevertheless, finding politically neutral news reports has become increasingly difficult among mainstream media because of the currently polarized political climate. What it does In the hope of providing the most transparent and unbiased news to the public, our team developed "HackNews", a web application that allows users to vote on articles as "legitimate" or "fabricated", as well as to share comments to exercise their freedom of speech. HackNews ranks the credibility of the news according to the voting results from users. How I built it The design consists of several components, including web scraping, storing upvotes, downvotes, and comments in Google Cloud, and computing the rankings of news articles in real time. To begin with, we used web scraping to extract the one hundred most up-to-date Yemen news articles from multiple news sites such as Yahoo, NBC, CNN, and Fox News. The ability to upvote or downvote, as well as the functionality to comment on news, is one of the most unique and crucial characteristics of HackNews. More specifically, readers are able to upvote news that they consider credible and unbiased while downvoting non-credible news articles. The comment section sits right next to the news item in order to accomplish a user-friendly design. Generally speaking, the algorithm uses the upvote and downvote results from user voting to predict the credibility of each article. The credibility ranking interprets the voting results with equal weight and computes a credibility index for each article. Articles are then ranked by this credibility index in descending order.
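The description leaves the exact formula open; one plausible equal-weight reading of the credibility index, sketched in Python (field names are illustrative):

```python
def credibility_index(upvotes, downvotes):
    """Equal-weight interpretation of the votes: the net vote share.
    Ranges from -1 (all downvotes) to +1 (all upvotes); 0 when unvoted."""
    total = upvotes + downvotes
    return (upvotes - downvotes) / total if total else 0.0

def rank_articles(articles):
    """Sort articles (dicts with 'up' and 'down' counts) by credibility,
    most credible first."""
    return sorted(articles,
                  key=lambda a: credibility_index(a["up"], a["down"]),
                  reverse=True)

ranked = rank_articles([
    {"title": "A", "up": 2, "down": 8},
    {"title": "B", "up": 9, "down": 1},
])
print([a["title"] for a in ranked])  # → ['B', 'A']
```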
To present the credibility in a more user-friendly and visually appealing way, we implemented a gradient bar with dynamic color-changing that shows the degree of credibility. In detail, the bar represents the most trustworthy news as bright green, the most unreliable articles as bright red, and shades of red and green in between as different levels of credibility. On the back-end, data on votes and comments is uploaded and stored in Google Cloud due to the developer-friendly nature and fast response times of the Google Cloud Platform. Challenges I ran into Our experience with web hosting faced several obstacles. We learned the limitations of different web hosting platforms after researching and experimenting with InfinityFree and Heroku: InfinityFree has minimal support for React, the major frontend framework for our application, while Heroku refused uploads from git. In the end, GitHub turned out to be the most user-friendly and lightweight platform for deploying our website. Accomplishments that I'm proud of We are proud of making our internal search engine work. We used a Semantic UI React component to help us synchronize several async requests as well as filter search results by credibility and relevance. We are also proud of writing our own web scraper in Python and decoding the results using DOM manipulation. What I learned One of the most practical skills our team learned this time is the ability to replace traditional databases with serverless Google Cloud Platform data storage. More specifically, Firebase is used to store our vote counts, comments, and news articles. As we read through more news about Yemen, we progressively developed a multi-dimensional understanding of events there. Unlike most news presented to the public on mainstream media, the reality is often different from the news. In order to explore the facts, analyzing with an argumentative perspective is indispensable.
Unfortunately, news related to Yemen is not gaining enough attention due to domestic media manipulation in the U.S. With the purpose of spreading Yemen news to a bigger population, technologies such as HackNews become essential methods of spreading information. Modern technologies can provide efficient solutions for many social problems. What's next for HackNews We want to give readers the power to judge the credibility of articles, thereby creating a democratic environment in the news feed community. In the future, we will strive to improve the ranking algorithm so that the website can surface the best news faster. We also want to add features where users can upload videos and images and send emojis in the comment section.
Built With google-cloud-apis javascript python react webscrappe Try it out hacknews.tech
HackNews
Web application that ranks Yemen related news by credibility, which is determined through user voting.
['PeggyZhao2 Zhao', 'Yukt Mitash', 'michaelohyang', 'Shuge Fan']
['NewsQ for Social Good']
['google-cloud-apis', 'javascript', 'python', 'react', 'webscrappe']
33
10,371
https://devpost.com/software/got-bias
The sentiment analysis model code The Excel sheet with article rankings The front page of the gotbias? website The article rankings on the gotbias? website The non-neutral websites are marked with their bias on the gotbias? website Inspiration To create a platform that could filter out biased and unreliable news sources. What it does "got bias?" filters news articles and ranks them according to a base algorithm that favors unbiased writers and reliable sources. On the "got bias?" website, a list of the top-ranked news sources is displayed. The algorithm also takes publication date into account, so the most recent news is always near the top of the rankings. How we built it The sentiment analysis model was trained on data web-scraped from news articles using Python scripts. The model uses a Naive Bayes classifier to analyze articles for bias. The ranked articles were then entered into an Excel spreadsheet. The "got bias?" website was made with HTML/CSS and hosted on GitHub. Challenges we ran into Cleaning up the data was difficult, as the web-scraper script didn't extract the text of the articles perfectly. We also had difficulty training the sentiment analysis model to detect whether articles showed bias. Accomplishments that we're proud of We are proud of being able to successfully implement the sentiment analysis model. What we learned We learned about sentiment analysis modelling, as well as web scraping with Python scripts. What's next for got bias? We hope to deploy the bias checker remotely across all platforms so that anyone can use it. Built With css excel html javascript nltk python Try it out github.gatech.edu github.com docs.google.com
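The project trains its classifier with NLTK, but the core Naive Bayes idea can be sketched from scratch in a few lines. The training texts below are toy examples standing in for the scraped articles, and the two labels are illustrative:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (text, label) pairs. Returns the model's counts."""
    word_counts = defaultdict(Counter)  # label -> word frequencies
    label_counts = Counter()            # label -> number of documents
    for text, label in docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    """Pick the label maximizing log P(label) + sum log P(word | label)."""
    total_docs = sum(label_counts.values())
    best_label, best_logp = None, -math.inf
    for label, n_docs in label_counts.items():
        logp = math.log(n_docs / total_docs)
        n_words = sum(word_counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing keeps unseen words from zeroing the score.
            logp += math.log((word_counts[label][w] + 1) / (n_words + len(vocab)))
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

model = train_nb([
    ("shocking disgraceful scandal slammed", "biased"),
    ("outrageous slammed officials disaster", "biased"),
    ("the committee report was published", "neutral"),
    ("official figures show steady growth", "neutral"),
])
print(classify("disgraceful scandal slammed officials", *model))  # → biased
```

NLTK's `NaiveBayesClassifier` wraps the same computation behind a feature-dict interface and adds conveniences like `show_most_informative_features()`.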
got bias?
got bias? filters news from the UK with the scope of political news and ranks them based on an algorithm that favors non-partisanship and reliability.
['Melam Master', 'Parth Shah', 'Faye Holt']
['NewsQ for Social Good']
['css', 'excel', 'html', 'javascript', 'nltk', 'python']
34
10,371
https://devpost.com/software/newsly-2wi63e
Challenge Submission: Ranked list: https://docs.google.com/spreadsheets/d/1-269wk5V4aiMAd-PW7nPEDw8BHSZyZKPcP4f7I5YHZ0/edit?usp=sharing Design Doc: https://docs.google.com/document/d/1klgP0XX64j28QEZ96e5a8LLZaVbUJCH4GN4YAkjoWRU/edit?usp=sharing Algorithm Results: https://docs.google.com/spreadsheets/d/135bwgYAJ-fpWsVOnPznz4LJn9bsZZ9KglBqj_b4MZus/edit?usp=sharing Frontend: https://jackcook.github.io/newsly-frontend/?fbclid=IwAR3vaDPEf0CdbHesxcHumK8W-eYnc9MM87hfrbxPVi1fiZGUwLzBRVuly0E Inspiration We are fortunate to have a vast array of news sources and aggregators that cater to our particular demographic, geographic, and sociopolitical identities. Even so, we should recognize that our favorite news websites can easily become echo chambers for beliefs that we already hold. In the United States and around the world, knowing the latest and most accurate information in a way that promotes democratic and user-controlled consumption is ever so important. Therefore, we built Newsly, a personalized news aggregator that lets you control what you want to read and make the most informed and uncoerced decisions. Our current product revolves around news on Thailand, which we believe is a particularly apt use case given the social and political unrest and movements of recent months. What it does What many users fail to realize in the course of their daily news consumption is that the articles we are presented on a day-to-day basis are spoon-fed to us. We don't choose the factors that determine the order our articles appear in, the sentiment of the pieces, or even the topics themselves (even though it may seem like we have that control). Newsly allows users to customize their feeds based on several factors we strongly believe are important to democratic consumption, such as sentiment analysis, peer reputation, news citations, political bias, Twitter article engagement, social bot promotion ratios, and even advertising aggressiveness.
How we built it Newsly was built with a Postgres database, a Flask server, and an HTML/CSS frontend. We draw on several open-source resources as well, including but not limited to Media Cloud, MediaRank, Twitter, and PyTorch. Challenges we're submitting to NewsQ for Social Good What's next for Newsly Newsly currently supports news feeds revolving around Thailand -- in the future, Newsly hopes to democratize the world of news globally and allow for greater personalization of your news. Built With css flask html javascript python scss Try it out github.com github.com jackcook.github.io
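A minimal sketch of the serving path described above, with Flask in front of the article store. The route name, fields, titles, and scores are all hypothetical, and an in-memory list stands in for the Postgres database:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical stand-in for the Postgres-backed article table.
ARTICLES = [
    {"title": "Protests continue in Bangkok", "score": 0.92},
    {"title": "Baht hits three-month low", "score": 0.71},
    {"title": "Viral rumor debunked", "score": 0.33},
]

@app.route("/feed")
def feed():
    """Return the personalized feed, highest-ranked article first."""
    ranked = sorted(ARTICLES, key=lambda a: a["score"], reverse=True)
    return jsonify(ranked)
```

In the real system the per-article score would be recomputed from the user's chosen factor weights (sentiment, bias, bot ratio, etc.) rather than stored as a constant.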
Newsly
Democratizing our consumption of news, one article at a time
['Jamie Fu', 'Justin Yu', 'Jack Cook', 'Natalie Huang']
['NewsQ for Social Good']
['css', 'flask', 'html', 'javascript', 'python', 'scss']
35
10,371
https://devpost.com/software/duv-duv-gap
First page of Duv-Duv Gap website ranking 100 Uzbek news articles Demonstrating the page that you are redirected to after clicking on any link. Article view on the left side and data points on the right "Duv-Duv Gap" - a phrase in Uzbek meaning "gossip in the streets" Inspiration The NewsQ Challenge on creating a news recommendation engine for a country other than the US. We decided to build one for Uzbekistan! What it does Duv-Duv Gap is a framework for low-resource languages by which we can use historical data on news articles and related metadata to predict the user engagement of new articles. How we built it We started by determining several Uzbek news sources that represented the majority of the viewership in the country. Then we scraped around 24,000 news articles along with their metadata. For each article we extracted several data points, such as title, content, date/time posted, number of views, source, number of images, number of hyperlinks, and number of quotes. Our next task was to decide on the metrics to quantify the quality of the news. Obviously, there are a lot of ways to define the quality of news/information, among which relevance, facticity, style, and potential impact play a big role. However, measuring these features is anything but trivial, and this is especially true for languages with poor data resources and few relevant technologies. As a team, we spent a considerable amount of time defining a metric that is meaningful (does it actually capture the quality of news?), simple (is it straightforward and easily comprehensible?), and universal (is it language-agnostic?). After a lot of failed attempts, we reached our final solution. We define the target (label) as the scalar value NumberOfViews / ActivePeriod * SubscriberCount, normalized with respect to its own source. This way we avoid the problem of domain mismatch and bias towards small news outlets.
Formally, the formula would be defined as: MinMaxScalerOfSource(NumberOfViews / ActivePeriod * SubscriberCount) Challenges we ran into One of the main challenges was the lack of extensive language technologies and resources for Uzbek. In fact, there are no automated fact-checkers, factual databases, grammar or dependency parsers, or lemmatization tools. Accomplishments that we're proud of We are proud to have built the first fully functional news aggregation and ranking engine for the Uzbek language. What we learned For some team members, it was the first time building a recommendation and ranking system, so it was a valuable experience. For some others, it was a great coding challenge where they got to learn and write HTML, CSS, and JS. What's next for Duv-Duv Gap We want to make it an open-source project that the news agencies in Uzbekistan can utilize. Built With css firebase html javascript machine-learning natural-language-processing python Try it out github.com drive.google.com newsquz.web.app
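Reading the target formula left to right, the per-source min-max normalization can be sketched as follows. Field names, source names, and the degenerate-case fallback (a source with a single article maps to 0.0) are our assumptions:

```python
def engagement_targets(articles):
    """Per-source min-max scaled engagement target:
    raw = views / active_period * subscribers, then scaled within each
    source so that outlets of very different sizes stay comparable."""
    raw = {a["id"]: a["views"] / a["active_days"] * a["subscribers"]
           for a in articles}
    targets = {}
    for source in {a["source"] for a in articles}:
        ids = [a["id"] for a in articles if a["source"] == source]
        lo, hi = min(raw[i] for i in ids), max(raw[i] for i in ids)
        for i in ids:
            targets[i] = (raw[i] - lo) / (hi - lo) if hi > lo else 0.0
    return targets

targets = engagement_targets([
    {"id": 1, "source": "sourceA", "views": 100, "active_days": 10, "subscribers": 2},
    {"id": 2, "source": "sourceA", "views": 300, "active_days": 10, "subscribers": 2},
    {"id": 3, "source": "sourceB", "views": 50, "active_days": 5, "subscribers": 1},
])
print(sorted(targets.items()))  # → [(1, 0.0), (2, 1.0), (3, 0.0)]
```

This is the property the write-up is after: article 2 is the best of its own source and gets 1.0, even though a larger outlet's raw view counts would dwarf it.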
Duv-Duv Gap
A language agnostic framework for predicting news engagement rate in an unbiased manner
['Ahsan Wahab', 'Jam Mirzakhalov', 'Sherzod Kariev', 'Justina Le']
['NewsQ for Social Good']
['css', 'firebase', 'html', 'javascript', 'machine-learning', 'natural-language-processing', 'python']
36
10,371
https://devpost.com/software/jave-2-0
Home Page of Newsworthy Inspiration In the past year, Facebook removed nearly 7.1 million fake news posts, many of which included links to articles that were designed to confuse and mislead the masses. On top of that, just this past February, a study was conducted to gauge distrust of media sources worldwide, and it found that 29% and 28% of adults trust media sources in the United States and the United Kingdom respectively. With the growing concern over fake news, the upcoming election, and general distrust of mass media, team JAVE has developed a way to determine whether a news article is truthful or full of deceit. What it does The application is a website where users can upload articles from various news sources. Using Natural Language Processing (NLP), the website discerns whether or not an article is untruthful, based on a model pre-trained on words that suggest an article may be fake. Our team took into consideration that our model may not be the most accurate and decided to add user input as an additional heuristic. The application takes the URL of an article picked by a user and applies the initial NLP heuristic to the article. Next, users can upvote or downvote articles depending on their sentiment about the article. Using an algorithm that combines both heuristics, the website outputs a score, thus ranking the article. Articles with a higher score bubble to the top, while those with a lower score fall to the bottom. How we built it We have two components: a front end and a back end. The front end is a simple HTML, CSS, and Javascript implementation. The back end is a Python Flask server, using Gunicorn. We designed a REST API to interface between the front and back end. We also store a database on the Python server to hold all of the articles. We use PyTorch and BERT, a common NLP model, to train and predict the authenticity of an article, and we utilize Flask in tandem with PyTorch to do real-time predictions.
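The exact way the two heuristics are combined isn't spelled out above; one plausible weighted blend, with the 0.7/0.3 split and the smoothing purely illustrative:

```python
def article_score(model_prob_real, upvotes, downvotes, model_weight=0.7):
    """Blend the NLP model's authenticity probability with user votes.
    The Laplace-smoothed vote ratio keeps unvoted articles at a
    neutral 0.5 instead of swinging on a single early vote."""
    vote_score = (upvotes + 1) / (upvotes + downvotes + 2)
    return model_weight * model_prob_real + (1 - model_weight) * vote_score

# Model says likely real, voters mostly agree: score lands near the top.
print(article_score(0.9, 30, 10))
```

Keeping the model weight above the vote weight matches the site's intent: votes adjust, but cannot fully override, the BERT-based prediction.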
Challenges we ran into We ran into the issue of technical jargon being difficult to parse with NLP. We performed very well on politics, since our dataset mostly consisted of political news pieces. We struggled a lot with scientific articles, since they often include very specific jargon. We also ran into the issue of training speed, since the models were fairly heavyweight. In addition, we ran into some networking issues with visibility on the network for our Flask server, but we fixed that quickly. Accomplishments that we're proud of We are very proud of having an NLP algorithm that was able to predict fake news with a test accuracy of 98.7% on a test data set from Kaggle. We are also very proud of being able to link a very aesthetically pleasing frontend with a production-ready server in the back. The connection of multiple heuristics, like voting and NLP, really makes this project special. What we learned We learned several ideas in NLP, especially about how to handle different jargon. We also learned about networking through servers and different channels of information. We got a lot better at several aspects of the front end, especially modular design. What's next for Newsworthy Our ML-based fake news detection system shows that, with careful training and targeting of specific fields, we can do a sufficient job of determining which articles are more or less biased. This system could certainly be improved into a more generalized model that could detect bias in a number of fields that are currently challenging, like scientific articles. With the election year coming up, we believe being able to tell Americans what is possibly fake news is critical to preventing misinformation. As a result, we'd like to make this a more widespread idea for news sources to adopt. Built With bert css flask html javascript python Try it out github.com
Newsworthy
News that's worthy to you!
['Vineeth Harish', 'EtashGuha', 'Ali Mirzazadeh']
['NewsQ for Social Good']
['bert', 'css', 'flask', 'html', 'javascript', 'python']
37
10,371
https://devpost.com/software/canadarank
CanadaRank CanadaRank is a breaking news ranking model built on an objective methodology. Table of Contents Introduction Purpose Brainstorming Methodology Steps Parameters Algorithm Design Bringing it Together with Front End Society and Democracy Areas for Improvement Acknowledgements Introduction After our initial look at the challenge description and the accompanying video, we were interested because of the polarization of politics driven, in large part, by news. We also feel that computer science and politics have never been so intertwined. To explore our curiosity, we decided to pursue this open-ended challenge. Our goal was to explore a much more objective way to rank what news users residing in a non-US country should see, and in what order. The benefits of an objective ranking system would be reduced polarization and manipulation of users through news. Further, it could help reduce biases that voters hold, which could help them make more informed decisions about their representatives. We wanted to answer the question: "Given the needs of democracies and of machines at scale, what should the rules and considerations be for choosing, from the numerous articles being generated constantly, the most informative articles to provide to a user?" Having chosen the backend portion of this challenge (although we made a frontend to go along with our backend), we understood we would need to produce a list of ranks, a design document to explain our process, and a working product. Purpose We wanted to introduce a bot that can effectively rank articles in Canada's breaking news sector using Natural Language Processing, along with other methods. With this bot, we would compile a spreadsheet of 100 article sites in ordinal ranking to demonstrate the effectiveness of the model.
We also wanted to determine technical and social factors for Canada's breaking news sector that can be propagated to other countries and sectors and will help users receive information objectively. Brainstorming There were a few questions we posed when beginning this challenge: What is breaking news? Breaking news is recently received information about an issue that is currently developing. What parameters are important in determining which breaking news should be viewed near the top? Time, location, bias, misinformation (news that has likely been generated by a machine or is inaccurate), and whether the site is mobile friendly (we would like our model to favor sites that are mobile friendly). Where do we retrieve a dataset from? Where do we get labels for our dataset, such as a ranking of articles? We wanted to apply machine learning after retrieving numerical values for our parameters to determine the ideal weights, but this would add heavy bias and is difficult under the time constraints due to unlabeled data. We decided we did not want to hand-label data, in order to maintain objectivity. How do we reduce subjectivity? Where do we retrieve news articles from, as this will be our "ground truth"? This was a major point of focus, as we knew that wherever we retrieve our news articles from (unless we go out with our own notebooks) is itself ranking the breaking news to determine what news to hand to our model. We decided this would be a noted assumption, as we simply have to assume that whatever API hands us our news articles is objectively picking the breaking news. Obviously, in reality, this is not true, but this bias will simply be present during our hackathon project. How do we maximize our efficiency to tackle the most important parameters over the course of the hackathon? After discussing the questions and purpose mentioned above amongst the group, we decided to brainstorm our methods for tackling the challenge.
We heavily researched NLP models and how news is currently ranked by Bing, Google, etc., to determine the best way to reduce bias and maximize the information provided to users. We immediately knew we would need to retrieve news from a non-US country. Then, we needed to pick whether we wanted to explore sports news, breaking news, or political news. We chose to explore breaking news, as it is quite important due to its urgency. As for the data source, we decided to go with the Google News API because of its documentation. From there, we would go to each news article and retrieve the text, process it for certain parameters to be used in our model, and put it all in a CSV. Then, we would assign weights to these parameters (perhaps using machine learning). Methodology We started by designating Canada as our country of focus to prevent a language barrier when retrieving news. Secondly, we chose to focus on breaking news due to its high importance stemming from its urgency. We wanted to be able to provide the most important information in the most objective manner, allowing users to remain informed and as unbiased as possible. We wanted to explore the realm of Google Cloud Engine, Natural Language Processing, Pandas, and ReLU functions. Next, we made a strong goal of having a finished product by the hackathon deadline. Then, we created some steps to reach this goal. Steps Scrape a list of breaking news articles from a non-US country Process each article for certain information (discussed later, under Parameters) Rank each article with our model Output results through a developed frontend The main question that has yet to be answered is what specific parameters we will focus on, and why. Parameters Misinformation It is important to determine if a news source contains disingenuous information that could have been written by either a human or another machine.
To combat this, we researched natural language processing techniques that score how likely an article is to contain misinformation or automatically generated text. The model we used is based on the Harvard NLP Lab's GLTR model, which builds on OpenAI's state-of-the-art GPT-2 model. We fed the final score produced by this model into our ranking algorithm. Credibility of the source (bias) We must evaluate how biased a source is. We would prefer that our breaking news come from sources that provide the most central viewpoint. This helps prevent polarization, which is one of our focal points in preserving democracy. Time of publication Breaking news is from an event that has occurred within the last couple of days at most, preferably much more recently. Location We would need to weigh this factor carefully, as we want to provide news from a nearby location, but some events are so important that the location does not matter. We also wanted to preserve privacy, though, and did not have a great way of retrieving user location data. This would be an important improvement to our news ranking model. Impact Factor We wanted to provide news that has a high impact on the user. Ideally, this would be measured in clicks, but we did not have a way to retrieve that. Instead, we went with the number of upvotes the article has. We wanted to take this factor into account, but we wanted to keep its weight low to keep the news as unbiased as possible. If we relied entirely on crowdsourcing to find important news, our model's bias would increase due to polarized audiences. Site is mobile friendly We wanted our model to prefer articles that are mobile friendly, as they would reach larger audiences, since most people view news on their mobile devices.
Through lots of brainstorming, we decided against taking this parameter into consideration, as it does not determine how well an article informs the reader about an important, urgent issue. Typos With an emphasis on objective ways to determine the quality of a breaking news article, we decided to include a typo ratio (the number of typos in a document divided by the total number of words), as it can provide insight into the quality of the written text. With our parameters determined, we delegated tasks with a focus on objectively determining the quality of a news article. The first step was to produce the URLs for breaking news articles in Canada. After researching APIs, we stumbled upon the well-known Google News API. After spending a decent amount of time on it, we decided this API would not fit our needs: when retrieving breaking news through the API, we were not able to filter properly. Specifically, the regular news endpoint did not let us specify a country, and the breaking news endpoint only returned 20 articles; we wanted about 100 articles to be able to properly train our NLP model for fake news detection and provide enough data to determine the effectiveness of our model. After making this large pivot, we had to look for another place to get country-specific breaking news in a large quantity. After much deliberation, we decided to go with the Reddit API, filtering by the subreddit r/Canada, sorting by new, and making sure each article was published within the appropriate time frame. We immediately understood the implication that r/Canada may not provide the most objective breaking news; however, in order to get the quantity of appropriate articles, we would need to utilize Reddit's excellent API. Specifically, we went with PRAW, the Python Reddit API Wrapper.
Through PRAW, we wrote a script to scrape the top 100 URLs that meet our breaking news criterion: news within a recent timeframe. From there, we wanted our model to visit each URL and retrieve all of the relevant text from the news article. For this, we used the "Newspaper3k: Article scraping and curation" library to retrieve key information such as the text, the time of publication, and the source. This text would later be fed to our GLTR model to detect misinformation, a parameter in our model. Then, we used PRAW to determine the value of our Impact Factor parameter. PRAW allowed us to retrieve the number of upvotes and therefore calculate the upvote ratio, which provides a crowdsourced answer for the quality of an article. We wanted to keep the weight of this parameter low, as we didn't want people's news feeds to be biased by other people's interests. We wanted to reduce the domino effect that occurs in news today, while still providing news that is important to the members of the country or region. It was important to allow non-crowd-favored, quality articles to reach the top of the news feed. After having collected the parameters thus far, the team researched media bias: the perceived bias of a source towards one political side of the spectrum. In order to calculate the bias of a source, we referenced www.allsides.com and scaled each article's source between [-2, 2]. -2: LEFT, -1: LEFT-CENTER, 0: CENTER, 1: RIGHT-CENTER, 2: RIGHT. Our model would treat LEFT = RIGHT and LEFT-CENTER = RIGHT-CENTER to prevent political bias and therefore maintain objectivity, a need in today's society. Our model would assign the highest value to sources with a bias rating of 0 (centrist sources), as determined by allsides.com. We compiled all of our parameters in a CSV that would be used by the model to calculate the weights and provide a final score for each news article. We then created a UI to express our final results.
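Under the symmetric treatment described above, the source-bias contribution reduces to a distance from center; a sketch (the normalization to [0, 1] is our assumption, but the symmetry and the maximum at CENTER follow the description):

```python
def bias_credit(bias_rating):
    """Credit for a source on the AllSides-style scale
    (-2 LEFT, -1 LEFT-CENTER, 0 CENTER, 1 RIGHT-CENTER, 2 RIGHT).
    CENTER scores highest; LEFT and RIGHT score equally low."""
    return 1.0 - abs(bias_rating) / 2.0

print([bias_credit(b) for b in (-2, -1, 0, 1, 2)])  # → [0.0, 0.5, 1.0, 0.5, 0.0]
```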
Algorithm Design The process of compiling all this data left us with a strong group of datapoints off which we were able to build our ranking insights. To do this, we proceeded through a number of steps, namely: identifying the most important features of our news ranking dataset; conceptualizing the ways that these factors interacted; normalizing our features to arrive at more comparable, scalable metrics; and creating a regression model to classify our data features. First of all, we began brainstorming the most crucial data points we collected based on the information we had. The three areas that we had involved the Bringing it Together with Front End The front end was not the central focus of this project, so we brought our results together in a simple interface that users can view from the web. The news articles are ranked from "best" to "worst" as determined by our algorithm, top to bottom, and the user can optionally sort the articles by a specific parameter of their choosing (as shown in the image below). We think that how ranked news should be presented is an area developers and designers could pursue further, creating solutions that attract users. Society and Democracy There is no doubt that news ranking is fundamentally important to preserving democracy: if citizens do not have access to a neutral, non-biased source of news, then it is very possible that their beliefs and actions will be indirectly influenced by the news they see. One approach our algorithm takes to combat this issue is to downplay the popularity of a news article. While the most popular results are often shown to users first, CanadaRank gives little weight to this category, instead prioritizing parameters we think matter more in determining a good news article.
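The normalization and weighted-scoring steps listed above can be sketched as follows: min-max normalize each feature so they are comparable, then combine them into one score per article with a weight vector. The feature names and weights below are placeholders (note the deliberately low weight on popularity, as described), not the team's tuned values.

```python
def minmax(values):
    """Min-max normalize a list of raw feature values into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def rank_articles(features, weights):
    """features: dict of feature name -> list of raw values (one per article).
    weights: dict of feature name -> weight.
    Returns article indices ordered from "best" to "worst"."""
    norm = {name: minmax(vals) for name, vals in features.items()}
    n = len(next(iter(features.values())))
    scores = [sum(weights[name] * norm[name][i] for name in weights)
              for i in range(n)]
    return sorted(range(n), key=lambda i: scores[i], reverse=True)
```

With a low weight on the popularity feature, a less-upvoted but higher-quality article can still reach the top of the feed.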
Areas for Improvement Although we are very proud of the work we were able to accomplish, the limitations of the hackathon left us with a couple of areas for improvement. For one thing, we wanted to explore a wider breadth of news sources to amalgamate our data, rather than strictly confining our searches to Reddit's provision of news. This would have allowed us to make our model more sensitive and robust to different alterations in metadata, and would have changed the overall outcomes of our project. Another key area of improvement derives from the lack of time we had to fully train our model and explore all possible organizations and normalization methods for our features. We were forced to make do with a simple regression model, and an extra buffer would have let us further explore the relationships between variables and their effects on our final score. Ultimately, the "perfect" solution to this problem is an open question, but having these deeper insights would have allowed us to make further progress towards answering it. In terms of the other aspects of our project, we wanted more time to provide a more robust dashboard and visualization of the data we generated. Moving beyond a simple CSV interface, we wanted to show deeper insights with charts and sensitivity plots of our different parameters. Another key aspect we wanted to add is dynamic updates: while we initially built the project to function locally on a test dataset, a full-stack application would let users survey different news sources over time and continuously receive updates on what is best for them. All in all, these areas would have greatly improved the overall vitality of our project, and they give us promising directions for future exploration. Acknowledgements Casheer Built With css html javascript python scss typescript Try it out github.com
CanadaRank
CanadaRank is a breaking news ranking model through objective methodology.
['Manoj Niverthi', 'David Kwon', 'PranalMadria']
['NewsQ for Social Good']
['css', 'html', 'javascript', 'python', 'scss', 'typescript']
38
10,371
https://devpost.com/software/cleannews
Homepage (Zoom for Detail) Logo Single News Entry Inspiration Recently, there have been concerns about the truthfulness of news companies and what their best interests are. Misinformation, biased news, and hate speech are more prevalent now than ever. In the quest for social good, we have built a news ranking and recommendation system focused on politics in a foreign country: India. (We picked politics in India for the purpose of our demo, but the models and algorithms work on any type of news from any country.) Rising political unrest and biased Indian news companies leave doubt in people's minds, so we wanted to solve this problem by thoroughly analyzing news and presenting users with the highest-quality news possible. What it does Cleannews is a web application focused on providing unbiased, verified, and analyzed news. We do this by filtering our news articles to native Indian publications about politics; determining fake news, bias, and clickbait probabilities based on the content of the articles; utilizing Tensorflow to present articles with diverse viewpoints and eliminate articles with hate speech and offensive content; and using this data and corresponding weights to rank our news articles in terms of quality.
Finally, we perform sentiment analysis to contextualize articles for users, keyword analysis to scrape the key terms, and present all of the information in a clean, concise, and readable way. How we built it First, we use the Bing News API to aggregate news articles. This API allows us to focus on polishing and innovating on existing search engines and news ranking algorithms, which lets us provide a better experience to users who are likely already using the same search engines we are. We focus on getting articles directly from Indian publishers rather than American publishers writing about Indian politics in order to reduce bias. After retrieving the articles, we analyze each one for signs of fake news, which severely undermines the quality journalism we strive to uphold. To do this, we use newspaper.js to harvest article data and pull the content and important keywords into our system for further analysis. After determining that the news articles we've aggregated are not fake, we verify that the articles in question are not clickbait. This is done in conjunction with our fake news analysis, with the help of multiple pre-trained models repurposed for our tasks. Afterwards, we use a sentiment analysis model to identify the sentiment of an article. We use this sentiment to detect the author's biases towards specific subjects and the author's political preference, which is incorporated in our final analysis of all the individual algorithms we developed to polish the existing ranking algorithms. The goal of this sentiment analysis is to contextualize the article that readers may click on. Finally, we use a custom Tensorflow backend to detect hate speech and other language which we consider to render an article nonconstructive and written only to incite violence.
The aggregate algorithms combine our results from the previous tests using a series of weights to create a custom threshold that indicates which news can be considered legitimate and which news is illegitimate. Challenges we ran into We spent a lot of time creating a workflow that enabled us to learn new technologies and create a web service that people would be able to use. Some challenges were connecting to Microsoft Azure to use the Bing API and Azure App Services, and working to deploy the website such that load time was minimal. We also faced challenges with adding custom machine learning models to get better results. Another challenge was dealing with the computation limits of running our models on the CPU and using the free tier of Azure. Overall, we were able to solve these challenges and build a platform we love. Accomplishments that we're proud of We're proud of being able to build an end-to-end service that helps people who are interested in learning about positive journalism. We strive to create a safe place to learn about the clickbait, fake news, and hate speech that can be found within common news articles. We're proud of the website we've built, and it is amazing to see our algorithms work on news articles and produce tangible and verifiable results. What we learned We've learned that creating quality journalism is tough and takes a lot of time and effort, and news verification is a much-needed service in this day and age. We also learned a lot about all types of ML-based analysis as well as about technologies such as Azure, Tensorflow, Node, and Pug. What's next for cleannews We hope to continue Cleannews and build a better service for everyone. We restricted our current application to one area of news in one country for the purposes of our demo, but after the hackathon we hope to broaden the scope and make it a usable tool for everyone.
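The aggregate weighting-and-threshold step described above might look like the sketch below: each per-article probability (fake news, clickbait, hate speech) is weighted and summed, and the article is kept only if the combined risk stays under a threshold. The weights and threshold here are illustrative assumptions, not the team's tuned values.

```python
# Hypothetical weights and threshold; the real values would be tuned.
WEIGHTS = {"fake": 0.5, "clickbait": 0.2, "hate": 0.3}
THRESHOLD = 0.4

def is_legitimate(probs: dict, weights=WEIGHTS, threshold=THRESHOLD) -> bool:
    """probs: dict mapping each test name to a probability in [0, 1].
    Returns True if the weighted combined risk is under the threshold."""
    risk = sum(weights[k] * probs[k] for k in weights)
    return risk < threshold
```

Articles flagged as illegitimate by this check would be dropped before ranking and display.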
Resources Presentation: here Github: here Design Document: here Data Rankings Spreadsheet: here Note: The website is currently not hosted because of server costs. Built With azure node.js pug.js tensorflow.js Try it out cleannews.azurewebsites.net github.com
cleannews
Unbiased, Verified, Analyzed News.
['Sravan Jayanthi', 'Sidhesh Desai', 'Sidhartha Chaganti', 'Dheeraj Eidnani']
['NewsQ for Social Good']
['azure', 'node.js', 'pug.js', 'tensorflow.js']
39
10,371
https://devpost.com/software/military-graph-database
Internal Firebase database Inspiration We were inspired based on the challenge by the NSIN to build a relationship visualizer in the military. What it does Our project uses a Firebase database to allow users to be entered into the system. They can then find related users based on commonalities, such as events that both people attended. How I built it We built this as a website application using HTML, CSS, Firebase, and Django. Challenges I ran into We were originally using MongoDB but had issues in getting it to work with Django. Accomplishments that I'm proud of The application accurately searches for users based on shared events and features. What I learned We learned about Firebase and Django. What's next for Military Graph Database Add a visual graph with nodes and edges. Built With css django firebase html
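The "related users" lookup described above can be sketched as a pure function: given each user's set of attended events, related users are those who share at least one event, ranked by how many they share. The field names are illustrative; in the real project this data lives in Firebase.

```python
def related_users(user, attendance):
    """attendance: dict mapping user name -> set of event names.
    Returns other users sharing at least one event, most shared first."""
    mine = attendance[user]
    matches = {other: len(mine & events)
               for other, events in attendance.items()
               if other != user and mine & events}
    return sorted(matches, key=matches.get, reverse=True)
```

The planned visual graph would draw an edge between two users whenever this shared-event count is nonzero.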
Military Graph Database
Visualize relationships in the military
['Anon Anon', 'Vedaant Shah', 'Sathyanarayan Sudarshan']
['NSIN: Build a Tool to Visualize Relationships and Networks']
['css', 'django', 'firebase', 'html']
40
10,371
https://devpost.com/software/health-port
Logo Dashboard 1 Dashboard 2 Trend Charts Inspiration The US Army and Special Forces continue to risk their lives for the security and peace of the United States of America each and every day. As a result, to make their lives and training a little easier, we were inspired to help with this challenge and give a little back to the men and women who keep us safe every day. What it does Health Port is the one-stop shop for tracking fitness, made specifically for the US Army's special operations forces. Many Rangers follow a strict schedule supplemented with various tracking devices from a range of different companies. With timeliness and convenience, Health Port garners all of this data into one place for an easily visualized and seamless training experience. How we built it We built our project using libraries and frameworks such as React Native and TypeScript, along with tools such as Ignite and Expo to test our builds. This software allowed us to take advantage of hot reload while maintaining cross-platform development the entire way. Challenges we ran into One major challenge we faced was retrieving data from the different fitness trackers that the Rangers wanted us to integrate. Some APIs required an application to be approved, while others simply weren't offered by the companies. Another challenge we came across was figuring out how to best aggregate and present the data from multiple sources to make it simple and useful. What we learned One of the biggest things we learned was how to work efficiently as a team on a software-based product such as this one. Specifically, we learned the power of team collaboration software such as Trello and of version control with GitHub. Together, we were able to utilize everyone's strengths to overcome many of the challenges we had throughout the entire hackathon. We also learned the importance of perseverance in the field of computer science. Most things will never work the first time.
As a result, it is important to stay motivated and persevere towards the end goal. What's next for Health Port? We will take the next few days to work towards providing rangers with suggestions based on all the data that we are gathering. We also want to utilize the power of single-sign on to make security convenient and consistent. Built With bridge-athletics-api expo.io figma garmin-connect-health-api ignite javascript oura-ring-cloud-api react-native typescript whoop-api Try it out github.com expo.io
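The aggregation challenge described above (combining readings from Garmin, Whoop, Oura, and other trackers into one view) can be sketched as merging per-source records into a single timeline per metric. The `(timestamp, metric, value)` record shape is our assumption for illustration, not the real API output.

```python
def merge_sources(*sources):
    """Each source is a list of (timestamp, metric, value) tuples from
    one tracker. Returns {metric: [(timestamp, value), ...]} with each
    metric's readings merged across sources and sorted chronologically."""
    timeline = {}
    for source in sources:
        for ts, metric, value in source:
            timeline.setdefault(metric, []).append((ts, value))
    for readings in timeline.values():
        readings.sort()
    return timeline
```

A dashboard can then chart each metric's merged list directly, regardless of which device produced each reading.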
Health Port
With timeliness and convenience, Health Port garners data from many popular fitness training applications into one place for an easily visualized and seamless training experience.
['Alexander Liu', 'Anand Krishnan', 'Aashay Amin', 'pranayagra Agrawal']
['NSIN: Create a Squad Fitness Tracker Interface']
['bridge-athletics-api', 'expo.io', 'figma', 'garmin-connect-health-api', 'ignite', 'javascript', 'oura-ring-cloud-api', 'react-native', 'typescript', 'whoop-api']
41
10,371
https://devpost.com/software/testme-dva254
GIF What it does TestMe allows users to view their COVID testing data all in one place and share their test history as a PDF. Significance With the onset of COVID-19, Georgia Tech has done a great job building the infrastructure for asymptomatic testing, and according to our user research, students find GT's testing program easy to use! However, we felt that the current system could use some additional features that would greatly improve the user experience. Our hack, TestMe, has two main upgrades to the current system: putting test results all in one place and making your testing history easily shareable. With these upgrades, we hope to make getting tested easier and more accessible for GT students, as well as keep students accountable for getting tested regularly. Intention Test Results: On the current GT testing site, GT will only let you know the status of your sample, not whether you have coronavirus. TestMe shows you all in one place whether or not you have coronavirus, the status of your sample, and whether you completed the test for the week. With this feature, users can see their full testing history without digging through their email inboxes, and make sure that they're getting tested weekly. Export tests: We also included a feature to export your test results to one easily sharable file. Although it would be safest if everyone socially distanced, many students are still spending time with their friends and family in person. If a student has been getting tested weekly and consistently tests negative, they can share that proof with their friends and family to put them at ease. This feature could even be used to generate a "ticket in" to campus events to boost accountability and safety, and reduce new COVID cases on campus. Accomplishments that I'm proud of Our time management and teamwork were stellar!
<3 What's next for TestMe In the future, we hope to include a more streamlined version of the Georgia tech overall campus testing data, weekly reminders to get tested, and an integrated rewards program in the web app to encourage frequent testing! To Explore TestMe Please use the following, username: 'sghosal' and password: 'georgiaTech' Built With bootstrap css html javascript next.js react Try it out github.com testme-eight.vercel.app
TestMe
A redesign of GT’s current COVID testing platform with some added features!
['Paulina Schuler', 'Kendra Washington', 'Rylie Geohegan', 'Saurav Ghosal']
['MLH: Best Domain Registered with Domain.com']
['bootstrap', 'css', 'html', 'javascript', 'next.js', 'react']
42
10,371
https://devpost.com/software/classroomocr
This project uses Google Vision OCR to enable educators to make their classes more interactive. Where older students can benefit from services like TurningPoint for real-time questions and answers in class, younger students may prefer a more tactile experience. Just a notebook, tablet, or whiteboard would enable students to write and hold up their answers to the camera as though they were showing the teacher themselves, and OCR would recognize the answer text and submit it once the teacher ended the question. Unfortunately, we were not able to get access in time to any video chat APIs that sufficiently matched our needs (Zoom, Bluejeans, etc. that are already used in schools), so we decided to leave out the video chat integration. Right now, we have the student-side code as a proof of concept. To use this example, you press 's' to start a question, which emulates the teacher starting a question session in class. Once the answer is detected, it is output to the console; in the final product, we envision it being sent directly to the teacher, along with the image capture from the frame used for OCR, to cross-reference against the answer submitted in case the submitted answer is incorrect. Requires (Python 3): opencv-python (cv2), numpy, google-cloud-vision Step 1: Create a credential for Google Vision using these steps: https://cloud.google.com/vision/docs/before-you-begin Step 2: Create a virtual environment virtualenv env source env/bin/activate Step 3: Use pip to install the three necessary libraries pip install opencv-python numpy google-cloud-vision Step 4: Clone this repository Step 5: Run the program using: python main.py Built With python Try it out github.com
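One step the write-up implies, choosing which detected text is the student's held-up answer, can be sketched as picking the OCR annotation whose bounding box is closest to the frame centre. The annotation shape below is a simplified stand-in for the Google Vision response, purely for illustration.

```python
def pick_answer(annotations, frame_w, frame_h):
    """annotations: list of (text, (x0, y0, x1, y1)) boxes in pixels.
    Returns the text whose box centre is nearest the frame centre."""
    cx, cy = frame_w / 2, frame_h / 2

    def dist_sq(box):
        x0, y0, x1, y1 = box
        bx, by = (x0 + x1) / 2, (y0 + y1) / 2
        return (bx - cx) ** 2 + (by - cy) ** 2

    text, _ = min(annotations, key=lambda a: dist_sq(a[1]))
    return text
```

This heuristic assumes the student holds the answer roughly in front of the camera, so stray text near the frame edges (posters, book spines) is ignored.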
ClassroomOCR
Real-time interactivity in class enabled by Google Vision OCR. Teachers can ask questions to their class and students can write and present an answer to the camera to get automatically submitted.
['Aditya Jituri', 'Manognya Sripathi', 'Suma Cherkadi', 'Advay Mahajan']
['MLH: Best Use of Google Cloud']
['python']
43
10,371
https://devpost.com/software/piplane
PiPlane Channel Mapping Guide RC Calibration Modified Parameters 1 Modified Parameters 2 Modified Parameters 3 Modified Parameters 4 Modified Parameters 5 Modified Parameters 6 PID Values Flight Modes Speaker Construction Wing Construction Body Construction PiPlane Bottom Flight Test Electronics Takeoff! Flight! Touchdown! Inspiration Everyone on our team loves working with RC planes, and we have all felt the effects of not being able to interact with our friends in the same way we could before the COVID-19 pandemic. As such, we thought that it would be a fun project to develop an autonomous RC plane that could allow us to listen to music with others while maintaining a safe distance. What it does PiPlane is an integrated autonomous aerial boom box and disco ball. After taking off manually, the plane can circle around a pre-designated area, playing music and shining lights so that people below can enjoy entertainment while social distancing. Additionally, the PiPlane tracking goggles use computer vision to assist a pilot in locating the aircraft while taking off and landing to improve safety in an otherwise hazardous environment. How we built it PiPlane was built over the course of about 13 hours during the beginning of the event. The plane itself is built using primarily laser cut foam board pieces held together with hot glue, with the exception of certain areas such as the landing gear which required more robust materials like steel rods. The laser cut pieces come from the Flite Test Guinea Pig RC Plane, however many areas of it were modified for our needs, such as the landing gear which were made sturdier and the payload area. The payload of PiPlane includes a disco ball and custom built speaker as well as the autopilot and electronics. The disco ball is simply a cheap OMERIL Disco Light bought on amazon. The speaker uses a sealed Tupperware box and a passive radiator to improve the bass frequency performance. 
The amplifier and Bluetooth combo board are soldered and mounted separately in the fuselage, also to improve reception. For the autopilot, we used a RadioLink Pixhawk with an SE100 GPS. Mission Planner was used to upload the ArduPilot Plane firmware as well as the flight plan, which were used to control the autopilot. Most of the wires for the Pixhawk were included with it; however, we did have to use a separate PPM encoder for the Spektrum receiver we used, since it did not send direct PWM signals to the Pixhawk. We used a single 2200 mAh 3S lithium polymer battery for the controls of the plane, in addition to a single 1300 mAh 5S lithium polymer battery for the speaker and disco light. The algorithm behind the PiPlane tracking goggles is implemented in Python and relies primarily on the OpenCV and NumPy libraries. First, feature points are extracted from a video frame and stored in memory. This is done using ORB feature extraction through OpenCV. These points are then compared to the previous frame's feature points using OpenCV's Brute Force Matcher algorithm to determine possible matches. The best matches are then analyzed using a variety of methods, and a bounding box for the plane is generated. The input image with the bounding box shown is then projected to the screen. Regarding the hardware of the PiPlane tracking goggles, we attached a USB streaming webcam to a pair of FatShark video goggles, and plugged both of these into a laptop to run the PiPlane location algorithm. Challenges we ran into The first major challenge we faced had to do with getting the Pixhawk to interface with the receiver and transmitter, since each used a different method for channel mapping. While this is usually standard, we discovered that the receiver used a different channel mapping than the Pixhawk and transmitter.
While the Pixhawk and transmitter did use the same channel mapping during calibration, we further discovered that the channel mapping parameters in the Pixhawk were incorrect, so these were also changed. Another major challenge was that the Bluetooth receiver we used had significant trouble picking up signals despite being a Bluetooth 5.0 module. Through testing, we discovered that pointing the phone's antenna towards the module, as well as using a piece of aluminium foil to create a baffle around the receiver, helped significantly increase its range. Thirdly, we ran into the challenge of building the plane in a timely manner, since we realized that we had significantly overestimated our own building abilities, which meant that we had less time than anticipated to test the autopilot. Lastly, it was difficult to get the autopilot to actually function, as there were a lot of parameters that needed to be changed for our specific plane. Furthermore, the arming system for the plane posed significant challenges to circumvent, as the software was designed for drones; a large portion of our time in ArduPilot was spent trying to get this to work. The result was that we had basically no time to tune the PID settings to allow for an ideal flight by the autopilot. Accomplishments that we're proud of We had four main accomplishments that we are proud to have achieved during this hackathon. The first was that we managed to solve the range issue we were having on the Bluetooth module using the aforementioned aluminium foil baffle. The second major accomplishment was that we were able to have the plane actually take off and fly at a weight of over 4.5 pounds. The third accomplishment was that we were able to verify that the autopilot worked with the aircraft we had created, as there were significant challenges in that domain.
The last major accomplishment was that we were able to make a vision tracking supplement for the plane which functioned to a reasonable degree, even in low-light conditions. What we learned The single most important thing we learned was the importance of budgeting extra time for seemingly simple tasks. When we thought up the project, we didn’t think that the plane would take anywhere near as much time to build as it actually did. We believed that we would be able to start testing in the early morning on Saturday, and would be done before rush hour. However, the build took much longer than we thought and we didn’t even start testing until after 4 in the afternoon. Nevertheless, this ended up being a blessing in disguise: we never would’ve thought to create the PiPlane tracking goggles had it not started to get dark while we still needed to test. What's next for PiPlane We would like to improve on the autopilot which we didn't have time to completely tune and test the plane in more of an urban setting. We would also like to automate the take-off and landing of the plane, as well as increase its flight time. Ardupilot: https://ardupilot.org/plane/ Built With ardupilot opencv python Try it out github.com
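The tracking goggles' matching step described earlier (ORB features paired by OpenCV's Brute-Force matcher) boils down to nearest-neighbour search under Hamming distance, since ORB descriptors are binary strings. A pure-Python stand-in for that step (the real code would use `cv2.BFMatcher` with `NORM_HAMMING`):

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(query, train):
    """For each query descriptor, return (distance, index) of the
    closest train descriptor, i.e. a brute-force nearest neighbour."""
    return [min((hamming(q, t), i) for i, t in enumerate(train))
            for q in query]
```

Applying this frame-to-frame gives the candidate matches from which the bounding box for the plane is derived.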
PiPlane
PiPlane is an integrated, autonomous, aerial boom box and disco ball. Accompanying visual tracking goggles assist with landing and takeoff, allowing for use as the sun is setting.
['Haris Miller', 'Nick Cich', 'Ethan Das']
['MLH: Best Hardware Hack Sponsored by Digi-Key']
['ardupilot', 'opencv', 'python']
44
10,371
https://devpost.com/software/unbias-ly
GIF Cooler Icon What it does This program has two parts: an aggregator/ranker and a web page. The back-end aggregator web-scrapes news outlets (we chose NDTV, The Indian Express, and Deccan Chronicles) and pulls out political articles. Each is then run through VADER to determine its overall sentiment, but only the magnitude of that sentiment is used (since strongly negative can be just as bad as strongly positive). Higher magnitudes are treated as worse sources of news, since strong word choice implies strong biases on the author's part. This, along with the article link, article title, and article summary, is compiled into a list. This list is taken in by a Data Access Object (DAO) that writes the scraped data to our instance of DataStax Astra for long-term storage, removing the need to rescan unless a new source with more links needs to be examined. The data is stored in a tabular fashion (due to Apache Cassandra's structure), and operations for queries and inserts are done solely in CQL (Cassandra Query Language), which looks an awful lot like SQL. However, the database is NoSQL in nature (even though it still returns results in ResultSets). The front-end displays the database data following an API call for queries. The data is sorted by score in ascending order, with the weakest sentiment towards the top and the stronger wordings towards the bottom. Data can be retrieved in two ways: Reload and Direct. Additionally, we implemented our own summarization tool, which allows the reader to skip the hassle of navigating and reading the news source. Aditya coded a term frequency-inverse document frequency algorithm and a grammar reconstruction function with the aid of nltk, so the important words and phrases read not as a random string of vernacular but as a smooth flow of thought akin to human writing.
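The scoring rule above can be sketched in a couple of lines: VADER's compound score lies in [-1, 1], only its magnitude matters, and lower magnitude ranks higher (weakest sentiment at the top of the feed).

```python
def rank_by_neutrality(articles):
    """articles: list of (title, compound_score) pairs, where the
    compound score is VADER-style in [-1, 1]. Returns the list sorted
    ascending by |score|, i.e. most neutral wording first."""
    return sorted(articles, key=lambda a: abs(a[1]))
```

Under this rule, a strongly positive article and a strongly negative one both sink towards the bottom, which is exactly the symmetry the write-up describes.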
Reload This calls on the server to re-run the search, as something may have changed, such as additional data sources being added. The server executes the web scraping function, then loads the new data into the database. This results in a simple output declaring that the process has finished, which prompts the front-end to redirect to the news list. This is unique because it allows our system to process dynamic changes in the news and subsequently parse and update the feed (assuming you consistently reload; the database will also update daily). Direct This allows the user to instantly access the news sites we currently have processed and see their scores on the UI we created. Each site has its title, score, and summary visible. Clicking on each of the cards will open the article link in a new tab. Queries to Astra are notoriously quick, with this site loading faster than the same coming out of MongoDB or DynamoDB. The new tab was intentional, so as not to cause reload errors if the user was redirected from the news list. How we built it We used Flask to design a back-end server to respond to our React-based front-end/UI. The React app is proxied such that we do not need to use CORS to send requests; it effectively comes from a single port in the application. The back end is connected to an instance of DataStax Astra, one of the fastest cloud versions of Apache Cassandra. A table was initialized in the DataStax Astra Studio. We created a database model representing a news site consisting of the following: link: text, score: float, summary: text, title: text. This would be stored in a corresponding table to be retrieved later. On the front end, we included functionality for a reload, but also a direct query which queries Astra, sorts the data accordingly, and displays it on the UI as "The News!" Challenges we ran into Astra was notoriously hard to set up.
You not only need a driver (which is only available inside the dashboard), but also a singleton session, CQL, and proper formatting instructions. Tony spent almost 3 hours just attaching the connections and services in order to get the proper data flow. Initially, I tried to run a CQL-to-String function on schema.cql, but gave up somewhere in the middle and initialized the table in a CQL notebook in the DataStax Astra Studio. This was also our first time web scraping for articles, and Aditya had to create a filter for the junk (i.e., non-relevant links, articles, and other things). Accomplishments that we're proud of We finished! And we didn't just do the back-end challenge; we did BOTH. What we learned Prepare to learn about new topics earlier on. Tony learned Astra querying and setup at 11 PM Eastern the day before this was due. It may even become a critical element that makes your project seem more unique. TF*IDF: Aditya implemented this for our bonus summarization feature and keyword identification with nltk. VADER (Valence Aware Dictionary and sEntiment Reasoner) is an amazing package. It judges how expressive a piece of text is and whether it sounds positive, negative, or neutral; the higher the magnitude, the stronger the sentiment. This was used as the primary categorizer for the different links we scraped from the news outlets. What's next for Unbias.ly We want to allow a customized way for users to simply check the bias of an article. Built With astra beautiful-soup datastax flask javascript nltk react vader Try it out docs.google.com github.com
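The TF*IDF idea behind the summarizer can be sketched with the standard library alone: term frequency within a document times the log-inverse of how many documents contain the term. (The real project layers nltk's tokenization and grammar reconstruction on top of this.)

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists. Returns one {term: score} dict per doc."""
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    n = len(docs)
    scores = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        scores.append({t: (c / total) * math.log(n / df[t])
                       for t, c in tf.items()})
    return scores
```

Terms appearing in every document score zero, so the highest-scoring terms are the ones that distinguish an article, which is what the summarizer keeps.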
Unbias.ly
The News... Without the Bias!
['Aditya Singhal', 'Talia Tian', 'skim73']
['MLH: Best use of DataStax Astra']
['astra', 'beautiful-soup', 'datastax', 'flask', 'javascript', 'nltk', 'react', 'vader']
45
10,371
https://devpost.com/software/vslam-point-cloud-mapping
Inspiration We want to create a 3D point cloud mapping tool for robotics and autonomous driving applications using the novel computer vision technique visual SLAM. What it does We create 3D point cloud maps of physical environments using feature-based visual SLAM. Feature-based visual Simultaneous Localization and Mapping (SLAM) is a computer vision technique that performs localization and mapping in an unknown environment by tracking and matching feature points between video frames. Users can upload a video of an observer exploring a physical environment to create a 3D point cloud map of the environment, which is visualized in Three.js. How I built it We implemented visual SLAM in C++ with the help of OpenCV. The web app is built with Flask, Python, Three.js, SQLite, and GCP. Challenges I ran into Implementing a visual SLAM algorithm from scratch is complicated, and debugging C++ can be difficult. I had to implement a huge amount of functionality, like a generalized RANSAC discriminator and circular reference counting. The hardest part was creating the pose graph between frames. Accomplishments that I'm proud of Despite all the challenges, we were able to create a web app that allows everyone to try out visual SLAM technology. We are also proud of implementing the visual SLAM algorithm from scratch. What I learned We learned about visual SLAM and how to build a web app around it. What's next for 3D point cloud mapping vSLAM There is so much to do: we could use keyframes for wider baselines, thread our RANSAC, perform radius searches better, etc. We would also optimize our visual SLAM algorithm to make it run faster. Built With c++ flask gcp opencv python sqlite three.js Try it out vslam.tyfeng.com github.com
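The RANSAC idea mentioned above, reduced to its simplest form (2D line fitting rather than pose estimation), works by repeatedly sampling a minimal point set, fitting a model, and keeping the model with the most inliers. This is a hedged illustration of the general loop, not the project's C++ implementation:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Fit y = m*x + b to noisy 2D points, ignoring outliers.
    Returns ((m, b), inlier_count) for the best model found."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)  # minimal sample: 2 points
        if x1 == x2:
            continue  # vertical pair; skip this hypothesis
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = sum(1 for x, y in points if abs(y - (m * x + b)) <= tol)
        if inliers > best_inliers:
            best, best_inliers = (m, b), inliers
    return best, best_inliers
```

In the SLAM setting, the same loop runs over candidate feature matches, and the "model" is the relative camera pose between two frames instead of a line.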
3D point cloud mapping vSLAM
Create 3D point cloud map for robotics and autonomous driving applications using Visual SLAM
['Ty Feng', 'Rahul Aggarwal', 'Justin Hinckley']
[]
['c++', 'flask', 'gcp', 'opencv', 'python', 'sqlite', 'three.js']
46
10,371
https://devpost.com/software/recycler-4stnjh
Inspiration One of the problems further worsening land pollution is that people are unable to decide whether a product is recyclable or not, and as a result, most products end up in the wrong places for processing. Seeing a lot of products being wasted in our neighborhood and causing environmental harm inspired us. What it does A lot of people are unable to classify their waste products into recyclable or non-recyclable items. Hence, when they dump them, these products don’t go to the appropriate place for further processing. So, we have created an app that allows the user to upload an image and check whether the product displayed in the image is recyclable or not. How I built it We implemented this solution using a deep learning AI that was trained on data to help it learn to classify products. We created an app that allows the user to upload an image, which is then analyzed by our deep learning AI. At the end of this process, a popup message is displayed informing the user whether the product is recyclable or not. Challenges I ran into Trying to come up with a viable solution that could be easily available to restaurants Accomplishments that I'm proud of Met wonderful teammates who helped me build this in 48 hours What I learned Deep learning has so much potential What's next for Recycler Launching this on other platforms. Built With css3 deeplearning flask html5 python
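The last step of a pipeline like this — turning the classifier's raw output into the popup message — can be sketched in plain Python. The two-class label list, the softmax head, and the message wording below are illustrative assumptions, not the project's actual code:

```python
import math

LABELS = ["non-recyclable", "recyclable"]  # assumed two-class model head

def softmax(logits):
    """Convert raw model logits into probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def popup_message(logits):
    """Build the user-facing popup text from the model's output."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return f"This product is {LABELS[idx]} ({probs[idx]:.0%} confidence)"
```

In the real app the logits would come from the trained deep learning model after it analyzes the uploaded image.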
Recycler
Sorting waste has never been easier.
['Puneet Bajaj', 'Kartikey Sankhdher', 'Shreshth Kaushik']
[]
['css3', 'deeplearning', 'flask', 'html5', 'python']
47
10,371
https://devpost.com/software/covid-19-sentiment-analysis-dashboard
NOTE: We were not able to publish a YouTube video in time. If we are able to edit in the future, we will provide a link, but for the time being get Rick Rolled. UPDATE: WE HAVE A LINK! Inspiration We wanted to observe the overall outlook in regard to COVID-19. What it does Our project determines whether tweets about COVID-19 share positive or negative sentiment. How we built it Data is live-streamed from Twitter to a Google Cloud database. The data is run through a machine learning model which predicts whether the tweet has positive or negative sentiment. These results are displayed on a Streamlit dashboard. Challenges we ran into It took us a long time to figure out how to chain all the data together. Additionally, we had a hard time connecting to the Google Cloud database using Python. Accomplishments that we're proud of Overcoming our previously mentioned challenges had to have been the most fulfilling moment of HackGT 7. What we learned We learned better and more efficient ways to scrape data, create machine learning models, and connect to Google Cloud databases using Python. What's next for COVID-19 Sentiment Analysis Dashboard The next step for the COVID-19 Sentiment Analysis Dashboard would be to improve the modeling. Built With google-cloud machine-learning python sql twitter Try it out github.com
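The labeling stage of such a pipeline — take a tweet, score it, emit "positive" or "negative" — can be sketched with a stand-in keyword scorer. The team's actual classifier is a trained machine learning model; the word lists below are made up purely for illustration:

```python
# Stand-in scorer; the real project runs tweets through a trained ML model.
POSITIVE = {"recovery", "hope", "vaccine", "better", "reopening"}
NEGATIVE = {"deaths", "worse", "fear", "lockdown", "sick"}

def label_tweet(text):
    """Return 'positive' or 'negative' for a tweet (binary, like the project)."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score >= 0 else "negative"  # ties default to positive
```

In the real dashboard, each labeled tweet would then be written back for the Streamlit front end to aggregate and display.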
COVID-19 Sentiment Analysis Dashboard
We wanted to get a better understanding of the general outlook on COVID-19. By chaining together a machine learning model, we were able to predict the sentiment Twitter users had.
['Bharath Vemuri', 'Steven Hoang', 'Seth Santos']
[]
['google-cloud', 'machine-learning', 'python', 'sql', 'twitter']
48
10,371
https://devpost.com/software/nitelite-z6yd0s
Inspiration Keeping the people we love and care for safe. What it does Finds the safest route home. How we built it Used a React Native application to integrate the Microsoft Azure Computer Vision and Google Maps APIs. Challenges we ran into Connecting the React Native application to the REST API where the data was stored. Accomplishments that we're proud of Using Azure's machine learning software to detect street lamps in Google Maps images. What we learned How to connect applications and software coherently. What's next for NiteLite Save places in the database to increase efficiency. Built With python react-native Try it out github.com
NiteLite
Scared to walk in the dark? Use NiteLite to get home safe!
['Nevin Gilbert', 'kirtanamogili Mogili']
[]
['python', 'react-native']
49
10,371
https://devpost.com/software/fairycomm
Logo Inspiration & What it does When we learned about Amazon’s undercutting of small businesses, we were devastated. We built our product to help small businesses being pushed out of industries by bigger companies with greater economies of scale: specifically, the ability to adapt to COVID-19 circumstances. How we built it We created a responsive web app using Bootstrap and vanilla JavaScript/CSS to serve a landing page, registration page, login page, and dashboard. Our web app also connects to a custom API backend that we wrote to assist with the MongoDB database transactions. Our backend is written with Node.js and Express.js to provide the customization we needed for our API. The web app serves mainly as the dashboard for local businesses to manage, update, or remove the products they wish to sell. Our website also serves as the landing page for our Chrome extension available to users. Users would then download our Chrome extension, which reads an Amazon product you are viewing when activated. Our extension would then theoretically connect with our backend and retrieve the most relevant products sold by local businesses within your zip code. We were able to deploy our website to Azure Web Apps and customize it with one of our own custom domains. We also were able to use Chrome Extensions to start the creation of our Chrome extension. Challenges we ran into We faced different challenges both individually and as a team due to our differing roles in the project and the shift from an in-person to an online hackathon. Both Brenda and Miles got out of their comfort zones and learned how to code HTML and JavaScript respectively for the first time. Rafael faced the challenge of creating a Chrome extension and connecting all the backend so that we could make a navigable website. As a team, we struggled to coordinate with each other and check in consistently due to the switch to virtual.
Accomplishments that we're proud of and what we learned In this project we all developed new skills as emerging and improving hackers. Brenda learned HTML for the first time. Miles learned JavaScript for the first time. Rafael learned how to make a Chrome extension. Manasa developed her skills in CSS. Together we learned how to collaborate in an entirely digital environment, an ever more prevalent skill in today’s world, and play to each other’s strengths. We learned the value in taking a project all the way from conception to submission and the need for time management in the planning stage. What's next for fairycomm The future of fairycomm is undecided. Once fully functional, it will be more effective the more small businesses register and the more people we have using our Chrome extension. It may require a bit of time investment to prepare for an official launch, but our community will be better for it. We plan to complete our backend design and implementation as well as implement and improve our Chrome extension for better user accessibility and use. Built With azure bootstrap express.js mongodb node.js Try it out www.fairycomm.tech
fairycomm
An innovative solution to help local businesses thrive in online shopping.
['Rafael Piloto', 'Brenda Cano', 'Miles Robertson', 'Manasa Akella']
[]
['azure', 'bootstrap', 'express.js', 'mongodb', 'node.js']
50
10,371
https://devpost.com/software/aggregationstation
This project was motivated by the crisis of fake news and how it has impacted all of the world, including, to a great degree, Australia. Despite all of the members of this group being freshmen, we managed to have a great time building something meaningful together! Built With beautiful-soup mediacloud python python-dotenv requests shell textblob Try it out github.com
AggregationStation
It is a news aggregator
['Aiden Melone', 'Ethan Kang']
[]
['beautiful-soup', 'mediacloud', 'python', 'python-dotenv', 'requests', 'shell', 'textblob']
51
10,371
https://devpost.com/software/taidl-everyday-payment-with-xdai-blockchain
App Introduction Main Pages Make a Payment Taidl We present Taidl, an app that makes payment with cryptocurrency easy. You don't have to be an expert in blockchain. You don't need to worry about high transaction fees or fluctuations in the conversion rate. Taidl is powered by xDai, a stablecoin that lives on a sidechain of Ethereum. The transaction fee is as low as $0.01 for 500 transactions, making it a perfect solution for peer-to-peer payments, local business payments, online payments and international transfers. The Taidl app has even more exciting features that assist you in your daily spending. There is an address book for storing your contacts, so you don't need to search for and type the recipient account every time you pay. You can scan a QR code to make a payment. You can check your transaction history to keep track of your finances. The registration process is especially easy. Just sign up with your favorite user name, which does not have to be your real name. Type in your password, just like in any traditional finance app, without the burden of understanding how cryptocurrency works. For experts and nerds who want full control of the crypto wallet, we can hand over custody of the private keys to you. Otherwise your private keys will be securely stored on our encrypted server, as safe as a bank. Inspirations 🙊 Traditional crypto wallet apps are designed for expert users, traders and geeks. Unfriendly UI and complex technical functions can easily intimidate normal users. We kept usability in mind when designing the interaction. We greatly reduce the complexity of using a mobile crypto wallet without sacrificing core functions or safety. 🎰 The prices of crypto assets like BTC and ETH fluctuate heavily, so they are not suitable for everyday use. We adopt a stablecoin solution - xDai, an asset that is pegged to the US Dollar. 1 xDai = 1 USD almost all the time.
You don't need to worry about the conversion rate when paying and receiving money with xDai ⚡️ Transactions on the xDai chain happen incredibly fast - 10~15 seconds to anywhere in the world! The transaction fee does not depend on the amount that you transfer. It's the same price for $1 and $10,000. Fees are as low as $0.0002 per transaction. 🍩 In the near future cryptocurrency will become more mainstream with a larger user base. Small businesses that do not want to pay for credit card channels will benefit the most. Just print a QR code sticker at the checkout. Let customers scan the code and payment is done! 🌞 For international students and international travellers, you will not need to worry about expensive fiat conversions. With blockchain, you can go anywhere in the world and use the same currency, or you can exchange xDai for local currency with small fees. Main features ✈️ Fast (within 10 seconds) transfer to worldwide users 🎈 Free in-network transfers and very low transfer fees via the xDai blockchain (a high-performance sidechain of Ethereum) 📅 Scan QR code to make a payment ☕️ Request customer payments at local businesses 📝 Address book to save recent contacts 🥇 One unique user name, no need to type long addresses 🔑 We let you manage your private key if you like 💱 Support conversion to and from fiat (USD, GBP, EUR, AUD) (In Development) 💲 Borrow BTC, ETH with xDai as collateral (In Development) 📈 Invest in DeFi mining protocols to earn passive income with your xDai savings (In Development) How we built it React Native for iOS and Android devices Vercel for serverless backend functions MongoDB as database NCR Consumer Data Management API for user management Figma as collaborative design tool Git Repos Mobile App https://github.com/wjw12/taidl-hackgt-2020/tree/main Blockchain APIs https://github.com/wjw12/mock-xdai-chain Backend https://github.com/wjw12/taidl-backend Built With react-native vercel Try it out github.com
Taidl - Everyday Payment with xDai Blockchain
Taidl is a mobile wallet that makes everyday payment with cryptocurrency incredibly easy
['Jiewen Wang', 'Xueyu Wang']
[]
['react-native', 'vercel']
52
10,371
https://devpost.com/software/i-sight
Inspiration Looking at how people treat each other, let alone those who really need help, humanity can feel like it is in globally short supply. With technology progressing, there should be applications which help the people who really are in need; it would be better for people to at least have technology they can trust rather than being at nature's mercy. So we thought of building an application which would act as an in-person navigator and, along with navigation, also give out every detail a visually impaired person needs to know about their surroundings, keeping them safe. What it does With our project I-Sight, we intend to give people with impaired vision a trustworthy and reliable source to depend on when they walk anywhere that makes them feel endangered due to their disabilities. This application is in itself a personal guide who would hold one’s hand and walk them wherever they want to go. It would act as one’s vision as well as a navigator on the streets, calculating and predicting far better than human eyes could. All this application requires is an Android device with a camera. Features: COMPLETE NAVIGATION: The user can use voice commands to speak out the desired location they want to reach; the application will then set a course from the user's current location to the destination and guide the user along the path, making use of the other features of the application. OBJECT DETECTION/RECOGNITION: While moving, the user keeps the phone in hand with the camera facing forward; the application runs a DL model which detects and recognizes all the objects, moving and at rest, in the camera's line of sight and informs the user of all the objects present on their path.
DISTANCE ESTIMATION: This application will not only detect the objects in the vicinity but also estimate the distance between them and the user and convert it into the number of steps the user would require to reach any object around them. INTERMODAL ROUTING: Along with guiding the user through the pedestrian route, our app also offers public transport guidance. If the user's path is long, the application automatically guides the user to the nearest public transport waypoint to help them reach their destination, including multiple transfers in the journey if required. Some of these public transport waypoints include city rail stations, metro stations, bus stations, etc. TIME TO COLLISION ALERT: This application also alerts the user in order to prevent accidents or mishaps on the user's path. This feature calculates the time and the distance between the user and any obstruction or vehicle in their path and alerts the user accordingly. How we built it I-Sight is built using the power of deep learning models for object detection. It relies heavily on the TensorFlow Lite MobileNet v1 model for a fast, low-latency and performant model, and the HERE SDK for real-time navigation, geolocation and intermodal routing. For voice interaction it uses Android's Text To Speech API. Challenges we ran into The implementation of the TensorFlow Lite model on Android as well as the implementation of the HERE SDK required a bit of research. The merged output of both technologies was specifically challenging to achieve. Accomplishments that we're proud of We are proud of being able to put together two robust technologies in the HERE SDK and TensorFlow and making them work together effortlessly. Along with that, we are also proud of contributing to society by helping visually disabled people find their own way, making them independent of others.
What I learned We learned to implement lightweight MobileNets and the usage of TensorFlow Lite models. We also learned about the amazing HERE SDK, which provides various robust features for geocoding, routing and Live Sense. What's next for I-Sight Another enhancement would be the addition of an AI danger heuristic for safer travel. Built With android here mobilenet tensorflow Try it out github.com
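The distance-to-steps conversion described in the feature list can be approximated with a pinhole-camera model: given the camera's focal length in pixels and an assumed real-world object height, distance ≈ f·H/h, where h is the detection's bounding-box height in pixels; dividing by an average stride length gives a step count. The constants below (focal length, object height, stride) are illustrative assumptions, not values from the app:

```python
import math

AVERAGE_STEP_M = 0.7  # assumed average stride length in metres

def distance_from_bbox(focal_px, real_height_m, bbox_height_px):
    """Pinhole-camera estimate: distance = f * H / h."""
    return focal_px * real_height_m / bbox_height_px

def steps_to_object(distance_m, step_m=AVERAGE_STEP_M):
    """Round the walking distance up to whole steps (at least one)."""
    return max(1, math.ceil(distance_m / step_m))
```

For example, a person (~1.7 m tall) filling 400 px of a frame on a camera with an 800 px focal length would be estimated at 3.4 m away, or about 5 steps.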
I-Sight
Making visually disabled people independent
['Chinmoy Chakraborty', 'Vishwaas Saxena']
['RUNNER-UP', 'Best Seeing Eye Project']
['android', 'here', 'mobilenet', 'tensorflow']
53
10,371
https://devpost.com/software/pokt
Food View Grocery View Store Front Page Product Info Retail View Profile View Inspiration With the modern political and social climate, distance has become a part of our everyday lives. We came into this hackathon striving for a way to better not only the lives of those who are stuck indoors all day but also those who run businesses and are suffering due to the dry spell of serving customers. We ultimately ended up focusing our efforts on a way to limit interaction between store employees and customers to help in the fight against COVID-19 while also empowering customers to support local businesses. We identified that the biggest interaction between employees and customers is the checkout at the end of shopping. We decided to reduce that interaction by allowing the customer to have the POS system normally integrated within a cash register integrated into their phone. Likewise, this is applied to three major domains: food/eateries, grocery stores and retail. Besides grocery stores, there are a plethora of different business types affected by the pandemic; therefore, we made it our goal to reach as wide an audience as possible and help as many people as we possibly could. Besides checkout, our app can be utilized for everything from grocery store pre-orders and contactless delivery to restaurant curbside delivery and retail shopping. We realized that combining the ever-present convenience of your phone with a POS system is nothing short of extremely powerful and can help both consumers and business owners alike. For consumers, it provides hassle-free shopping trips, ease of use, faster checkout times, and, most importantly, a safer experience. On the merchant end, it allows for business promotion and new ways for customers to support them through such difficult times, such as reducing labor costs and allowing for a completely "contactless" shopping experience.
What it does With Pokt, we try to integrate online and brick and mortar retail by bringing the power of POS to your pocket. Pokt is an all-in-one payment portal that allows users to make checkouts, reservations, preorder and any other form of sale you can think of with nothing but your phone. For example, our primary use case was targeted for the grocery store. With our app, we allow the user to simply scan the items they add to their cart with their phone and keep a "virtual cart" on hand at all times. At the end of their trip, they simply verify with an employee that their cart is indeed non-fraudulent and simply complete the payment through their phone. This avoids all forms of human interaction and only involves the employee verifying the customer's cart to ensure that they did not steal anything (this is of course optional and up to the merchant). With our app, there is very little infrastructure the merchant has to take care of on their end. While shopping in a store, users can scan barcodes and the product will automatically be added to their cart which can then be checked out. Likewise, users can use the product to preorder groceries, order takeout or curbside delivery from eateries or even do distanced dine in where waiter and consumer interact through a shared portal/cart. How we built it The core app is built on Flutter and Dart with a backend of NCR and Firebase. We knew none of this coming in, so it was not very hassle free :) Challenges we ran into Like mentioned before, we had never worked with these frameworks before and had ZERO experience in app development. Likewise, the scope of the project was extremely large and ended up pushing us to our limits. One of the other biggest things was the lack of a true "hackathon climate." Although we still had a great time and were extremely productive, I do believe that we would have been much more motivated if we were in Klaus. Accomplishments that we're proud of It works! 
Besides that, we are extremely happy that we came out of the hackathon as better developers than when we went in. It was extremely satisfying to dive into the deep end of such a powerful framework with literally zero knowledge. To go from cramming YouTube tutorials to making a near-market worthy product in 36 hours is nothing short of exhilarating. The biggest thing we're proud of is that we built a product that betters the lives of others. In previous hackathons, we strived to build really cool applications but never really had a target audience in mind. In this one, we came in striving to help as many people as we can and we are extremely satisfied that we were able to build a platform that has the possibility of doing so! What we learned Literally everything. Nothing was familiar in this hackathon, but that's what hackathons are about! We did not know flutter. We did not know Dart. We didn't know how NCR's API worked. We barely used firebase before. It was all new, but that's what made it fun! Very frustrating... but fun :) What's next for Pokt The possibilities of Pokt and its future are almost endless. A shortlist of key elements we wanted to add before the deadline include: yelp integration for restaurant and store recommendations deals and promotion search across multiple stores and eateries so that the user can find the best deal for. what they're looking for geo fencing technology so that the app automatically detects if the user has entered a Pokt supported merchant the aforementioned geo fencing technology would have also been able to provide routes of customers to the merchant and allowed them to analyze density hot-zones in their stores/shops. This would allow for better organization of their product with little to no investment! Sharing carts between users was a big one but we unfortunately did not get to it. For example, we wanted users like roommates and families to create a shared cart and be able to buy products. 
Recommendation systems are also a viable avenue. Based on your "Pokt history," we could recommend similar eateries and stores that match your behavior. This would be similar to a "people who bought this also bought __" model. Built With dart firebase flutter ncr Try it out github.com
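The core of the shopping flow above — scan a barcode, add the product to a "virtual cart," total it at checkout — can be sketched like this. The catalog entries and prices are hypothetical, and since the real app is written in Dart/Flutter against NCR's APIs, this Python sketch only mirrors the logic:

```python
# Hypothetical in-memory catalog keyed by barcode; the real product data
# would come from the merchant's backend (NCR APIs in the actual app).
CATALOG = {
    "0123456789012": ("Milk 1L", 2.49),
    "0987654321098": ("Bread", 1.99),
}

class VirtualCart:
    """Per-customer cart kept on the phone while shopping in-store."""

    def __init__(self):
        self.items = {}  # barcode -> quantity

    def scan(self, barcode):
        """Add one unit of the scanned product to the cart."""
        if barcode not in CATALOG:
            raise KeyError(f"unknown barcode: {barcode}")
        self.items[barcode] = self.items.get(barcode, 0) + 1

    def total(self):
        """Amount due at the (contactless) checkout."""
        return round(sum(CATALOG[b][1] * q for b, q in self.items.items()), 2)
```

At checkout, an employee would only need to glance at the cart contents to verify them, as the description notes, before the customer pays on their phone.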
Pokt
Bringing the power of POS to your pocket
['Rishov Sarkar', 'Athreya Anand', 'Sai Gogineni']
[]
['dart', 'firebase', 'flutter', 'ncr']
54
10,371
https://devpost.com/software/educe
Inspiration COVID-19 has pushed education online for communities globally, which has had many negative impacts on students. The most drastic of these is that students without access to their own devices can't attend school with their peers. In addition, there is a big problem currently with recycling old computers. We wanted to make a project that could help with both of these problems. What it does Our project allows students in need of certain tech for their distance learning to register for an account and make posts requesting old devices. The active posts are then shown to people looking at the site. If somebody browsing the platform has an old device that is being requested, they can contact the student in need via email to set up shipping logistics. How we built it We wrote the backend with Google Firebase and JavaScript. Our database is a Google Firestore, where we store post information for the different active posts and profile information for users requesting tech for educational purposes. The front end was written mostly in React. Challenges we ran into During the project, we ran into a few problems with properly connecting our front and back end. Accomplishments that we're proud of We are proud of the fact that we were able to get our project running at the end! What we learned We learned how to use Firebase, and we also learned how to make React web apps. What's next for Educe The next step for Educe is probably to expand the types of technology that can be sent to students in need. For example, it might be helpful to be able to send software and other types of things to students besides just hardware devices. Built With css firebase google-firestore javascript material-ui react Try it out github.com
Educe
Reducing electronic waste while helping underprivileged students
['Zane Nasrallah', 'Nghi (Hailey) Ho', 'Brandon Wang']
[]
['css', 'firebase', 'google-firestore', 'javascript', 'material-ui', 'react']
55
10,371
https://devpost.com/software/youber-bluq43
Youber HackGTProject About Our Project A collaborative initiative between small business owners and your local community to increase revenue from unsold products such as food or clothes by selling them at retail price and donating to the homeless. Technologies Used We used Flutter to create our application and to connect to the APIs used, such as the NCR API, Google Vision API, and the Firebase API. The language used to connect to the APIs is Python, along with the Requests library. We also used Postman to test our requests against the APIs. What the App does The app scans the license plate of customers; it checks the database for any open orders for that specific customer and notifies the store/business owner that the customer has arrived. We also implemented what we like to call the Community Basket, which is a way to help small businesses avoid loss and to help the community by donating to shelters and to those in need. Built With android-studio dart flutter github google-vision-api ncr-api postman python Try it out github.com github.com
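The plate-to-order lookup the app performs after recognizing a license plate can be sketched as a simple keyed lookup plus a notification callback. The orders table and message wording here are hypothetical; in the real app the data lives in Firebase and the plate text comes from the Google Vision API:

```python
# Hypothetical open-orders table; the real app reads this from Firebase.
OPEN_ORDERS = {"ABC1234": {"customer": "J. Doe", "order_id": 17}}

def on_plate_scanned(plate, notify):
    """Look up an open order for a scanned plate and alert the store owner.

    `plate` is the raw OCR text; `notify` is any callable that delivers
    the alert (push notification, SMS, etc.). Returns the order, or None.
    """
    key = plate.upper().replace(" ", "")  # normalize the OCR output
    order = OPEN_ORDERS.get(key)
    if order is not None:
        notify(f"Customer {order['customer']} has arrived (order #{order['order_id']})")
    return order
```

Normalizing the OCR text before the lookup matters because plate recognition often returns inconsistent spacing and casing.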
Youber
A collaborative initiative between small business owners and your local community to increase revenue.
['Kingsley Young', 'Isabella Mattua', 'Rafael Castillo-Vindel', 'Shelly Penichet']
[]
['android-studio', 'dart', 'flutter', 'github', 'google-vision-api', 'ncr-api', 'postman', 'python']
56
10,371
https://devpost.com/software/faceblindhelper-xw9sov
This application can learn to recognize human faces and tell the user the name of the people in front of the camera. The resolution of the video, as we set it, is 480p; it refreshes 10 times per second and performs recognition 3 times every second. It has a popup: when the application recognizes an unknown person, the rectangle frame on the screen is clickable, and when you click the box, the popup appears. In the popup, you can input the name of that person and the application remembers it. Thus, when you scan the same person again, the application can recognize him or her and print this person's name below the rectangle frame. In this way, we allow the application to learn to recognize familiar and unfamiliar people. The mechanics of this application are quite like a camera: when you click the screen to store a stranger's name, the application automatically takes a photo of him or her. It stores this photo for future use in recognition. Built With python Try it out github.com
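The remember-then-recognize behaviour described above amounts to nearest-neighbour matching over stored face encodings, with a distance threshold separating "known" from "unknown." A minimal sketch follows; the threshold value and the toy three-number encodings are illustrative — real encodings would come from a face-recognition model applied to the stored photos:

```python
import math

THRESHOLD = 0.6  # illustrative max distance for a match

class FaceStore:
    """Maps stored face encodings to the names the user typed into the popup."""

    def __init__(self):
        self.people = []  # list of (name, encoding) pairs

    def remember(self, name, encoding):
        """Store a new person, as when the user names a stranger in the popup."""
        self.people.append((name, list(encoding)))

    def identify(self, encoding):
        """Return the nearest stored name, or 'unknown' if nothing is close."""
        best_name, best_dist = "unknown", float("inf")
        for name, stored in self.people:
            d = math.dist(stored, encoding)
            if d < best_dist:
                best_name, best_dist = name, d
        return best_name if best_dist < THRESHOLD else "unknown"
```

Running `identify` three times a second against the current frame's encoding reproduces the app's label-below-the-rectangle behaviour.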
FaceBlindHelper
An AI program that can learn to recognize and classify people.
['Chunlin Li', 'Shunzhi Wen', 'Hanrui Wang']
[]
['python']
57
10,371
https://devpost.com/software/famly
Inspiration We realized that it was difficult to volunteer in today's world due to the pandemic. We used to travel and do volunteer work at orphanages, which we realized would no longer be possible and would majorly impact both the morale and education of children in developing countries. After doing more research we realized this was a big issue and we wanted to think of a solution. What it does Famly connects families, orphanages, and orphans. Orphanages create accounts, and from there add children to their list. Families can create an account, and then choose to support a child from any organization. Once a child has a family, the two can chat, schedule events, post images, and chat in their native languages. Parents can also donate money to their children to support both them and their orphanage financially. The main purpose of the app is to give children in developing countries mentorship and the opportunity to build a meaningful relationship with families that could help, even with all the barriers in between. How we built it There are two parts to this project: the frontend & our custom backend. The former is built with the Quasar framework, which itself is built on Vue.js. This handles everything the clients see: the pages, buttons, their login forms. All the content is obtained through API requests using Axios. Many of the pages make heavy use of conditional rendering, enabling us to re-use the same components and pages across all possible types of users (children, organizations, families). We use the Stripe API to enable families to donate money from their credit cards to their children. The backend is entirely custom-made. We built a Flask server (Python) that stores data in a SQL database. This database is hosted on Microsoft Azure, and also makes use of Azure Storage for images. The backend also makes use of IBM Watson. The first use case is for the chat system.
We used IBM's Language Translator API to automatically determine which language each user spoke and translate their messages on-the-fly. A French child can talk to their Swedish family in French, and the Swedish family will receive the translated message. The other use case for Watson is the Personality Insights API. Each child and family submits a bio when creating an account, which is then processed and sorted by IBM. The end result is a system where children and families are paired intelligently. Challenges we ran into We had a few challenges, namely app routing, session storage, and server-side issues. App routing in Quasar is a tad special, and a few issues came up where layouts were getting mixed around unintentionally. Rewriting the entire routing file proved to be the solution. The second issue we had was managing the user state; as children log in from the organization account and portal, we had an issue where children could navigate as organizations, and where the UI displayed was for the wrong client. We decided to use localStorage to solve this issue. The server required lots of maintenance. Every single part of it depended on our ability to imagine every possible request, which meant that small niche bugs came out. However, the more prominent problems arose when Azure would slow down, where we would wait up to ten seconds to handle a simple GET request. Another difficulty was handling images. We couldn't get them to load for a long time, and after many hours of debugging the web app can finally handle uploading and receiving images perfectly. Accomplishments that we're proud of Many of the features in our app are based on technologies that have been developed and proven over and over, so we were proud when we realized that we could take advantage of such tools and develop something innovative. We were able to leverage different APIs as well as the custom backend we are proud of.
It proved itself to be robust, and although it did require a lot of effort, we believe it is for the better. The routing and layout system is also very nice in our opinion, with state logic implemented everywhere. We also believe that the idea was something that required proper execution to reach its full potential, which is what we were able to do by working through the 36 hours with a plan from the start. What we learned We learned that you should always plan your web app structuring and routing beforehand. More seriously, using Microsoft Azure was new for both of us, and we had to learn its UI, features, and the drawbacks/advantages of using SQL instead of SQLite, which we were more familiar with. What's next for Famly Hopefully, much more! We really like this idea and hope to expand on it in the future! We want to increase the use of remote technologies like video conferencing. The use of more advanced deep learning tools will also create a better relationship between families and children. Built With axios azure ibm-watson javascript python quasar rest vuejs Try it out github.com github.com
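The on-the-fly chat translation can be sketched as a small delivery function that only calls the translator when sender and recipient languages differ. The phrasebook below is a stand-in for the IBM Watson Language Translator call the real backend makes, so the function accepts any translator callable:

```python
# Toy phrasebook standing in for the Watson Language Translator service.
PHRASEBOOK = {("fr", "sv"): {"bonjour": "hej"}}

def phrasebook_translate(text, src, dst):
    """Stand-in translator: look up the phrase, fall back to the original."""
    return PHRASEBOOK.get((src, dst), {}).get(text, text)

def deliver(message, sender_lang, recipient_lang, translate=phrasebook_translate):
    """Translate a chat message only when the two users speak different languages."""
    if sender_lang == recipient_lang:
        return message  # no API call needed
    return translate(message, sender_lang, recipient_lang)
```

Skipping the translate call for same-language pairs also keeps API usage (and cost) down, which matters when every chat message flows through this path.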
Famly
Connect families, children and orphanages. We made a way for families to build a meaningful relationship with children in tough conditions around the world, remotely.
['Victor Guyard', 'Maanit Madan']
[]
['axios', 'azure', 'ibm-watson', 'javascript', 'python', 'quasar', 'rest', 'vuejs']
58
10,371
https://devpost.com/software/wable-rcu317
Restaurant specific table view and statistics (WABLE-RX) Registration Portal for Restaurants (WABLE-RX) Menu Selection for Customers (after they scan barcode through WABLE-CX) wable Recently, we visited a dine-in restaurant which gave us a receipt with a QR code on it to pay for our meals digitally. They then gave us another receipt afterwards for our actual payment processing. We found this to be a little wasteful and laughed it off at the time, but remembered this experience when we read the problem statements presented by NCR. Furthermore, one of our close friends' families owns a small restaurant which is struggling slightly due to the COVID-19 pandemic. We thought that implementing safer ways to dine and pay, as well as providing analytics and insights to restaurant owners so they can cater better to their customers, would help small businesses. This project was a great learning experience for us as the NCR APIs proved to be more difficult than we had anticipated, but in the end we got that full stack experience and were able to make a decent prototype. wable is a web platform for restaurants to utilize QR codes at tables as a way to create a seamless, contactless experience for customers to order, reduce physical waste through online receipts, as well as provide data analytics as insights for restaurant owners. wable utilizes two web apps, one for customers when they go to a restaurant, and one for restaurant employees and owners. Customers will be able to walk into a restaurant and scan a unique QR code at their table seat, taking them to an online menu where they can order food from the web app. They will be able to pay through Apple and Google Pay, as well as by inputting their credit card information. The order requests will be sent directly to the restaurant kitchen, where the kitchen will be able to view orders and cook them through the restaurant-side app.
The restaurant-side app has an overview of tables and seats for the restaurant, which receives the data of the order from each seat's unique QR code to keep track of each seat at each table. Waiters and servers will use this to be able to serve food more easily and keep track of where orders should go. At the end of the meal, customers will be emailed an online receipt at the address they were prompted to input when they placed the order. If they request, they may get a physical receipt. This reduces waste and the cost of printing physical receipts for the restaurant. The data received by each order will be stored and formatted into analytical insights for the restaurant owner, with information on various things such as popular orders, popular times of day, profit gained from specific orders and cost analysis of producing/buying ingredients for specific orders, and more. Created during HackGT 2020, with NCR APIs. Built With css html javascript Try it out github.com
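The per-seat QR codes described above could be generated as one unique URL per table/seat pair, which then becomes the payload printed as each seat's QR code. This is a sketch only — the base URL, query-parameter names, and token scheme are all hypothetical, not taken from the wable codebase:

```python
import uuid

BASE_URL = "https://wable.example.com/menu"  # hypothetical menu endpoint

def seat_qr_payloads(restaurant_id, tables, seats_per_table):
    """Generate one unique, hard-to-guess URL per seat; each URL becomes
    the payload of that seat's printed QR code, so the kitchen can tie
    incoming orders back to a specific table and seat."""
    payloads = {}
    for table in range(1, tables + 1):
        for seat in range(1, seats_per_table + 1):
            token = uuid.uuid4().hex  # unguessable per-seat token
            payloads[(table, seat)] = (
                f"{BASE_URL}?r={restaurant_id}&t={table}&s={seat}&k={token}"
            )
    return payloads
```

The random token keeps one diner from ordering on behalf of another seat by guessing a URL.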
wable
wable is a web platform for restaurants to utilize QR codes at tables to create a contactless experience for customers to order, as well as provide data analytics as insights for restaurant owners.
['Mihir Bafna', 'Vikranth Keerthipati', 'Apuroop Mutyala']
[]
['css', 'html', 'javascript']
59
10,371
https://devpost.com/software/treeguard-f4qy6i
Try the website: http://pranavputta.me/treeguard Inspiration Over the last few years, we have become almost desensitized to forest fires and their impact on the environment. Over the last few months, our own country has experienced catastrophes, with California's forests burning so fast and hard that they turned the sky a hazy red for days. Forest fires are an incredibly difficult problem to solve because the fire grows uncontrollably fast. In just one hour of a fire burning, it can become almost impossible to stop, reaching almost 10 mph at its peak. The solution is to find the fire as quickly as possible or find high-risk areas, and put out the fires before they get too big. Satellite imaging and thermal detection cameras have been employed, but the problem is that satellites don't have high periodicity and cameras don't have great accuracy. So, we decided to use a few IOT Arduino boards to solve the problem by deploying a fleet of nodes that can map the heat signature of a forest. What it does Tree Guard is a solution to improve early detection of forest fires. It tracks the temperature, humidity, and carbon dioxide of various locations in a forest by deploying hundreds and thousands of nodes. This large network can work together to display areas of high temperatures and risks of fires, so that first responders know where to start immediately when a fire breaks out. Putting out the problem :) How we built it We built two nodes, a transmitter and a receiver, to show a proof of concept. The transmitter is an ESP32 (arduino-like) board which collects data from two sensors, a temperature and humidity sensor (DHT11) and a carbon dioxide sensor (Arduino). Then, a special transmitter called LoRa transmits the data to the receiver. The receiver (also the same type of board) takes this data and, connected to a WiFi endpoint, connects to Google Firebase and deploys the information to the server.
Our website then gets a push notification through the Firebase Cloud Messaging system and updates the information in real-time. The ESP32 boards were coded in C++ and our website was coded using ReactJS and TypeScript. The backend Firebase functions were also coded in JavaScript. LoRa Communication LoRa is a special type of radio signal which can travel extremely long distances without losing information. We tested our devices successfully at a range of 250 feet, but the estimated range for LoRa devices can go up to 5 km. This gives us huge potential in using this system in a forest. Challenges we ran into Literally every single piece of this puzzle was a frustrating moment. We learned how primitive radio transmitters send data to create a "synchronized handshake" between the transmitter and the receiver so that the two nodes would acknowledge each other's presence and send data back and forth. This synchronized system was incredibly difficult to create. Another big issue was converting the information from LoRa as byte packets to JSON-formatted data in the cloud. We had to resolve this problem by really digging deep into how C and C++ convert byte streams into formatted strings and then constructing JSON data that can be sent up to the cloud using an HTTP protocol. The biggest issue, however, was creating a network that could propagate data from node to node and reach a receiver which could send the data to the cloud. We ended up choosing a breadth first search algorithm to flood fill the data to a receiver. This is definitely not the most efficient way to solve this problem, but we will hopefully make this better in the future. Accomplishments that we're proud of Our transmitter and receiver connect at long distances!! We walked around the complex of North Avenue, switching floors and still getting the data that was being transmitted from the transmitter to the receiver.
This was a milestone accomplishment considering that we were able to set up a transmission and conversion of primitive sensor data up to the cloud. Once the information was up in the cloud, it was like we were home free. We also built a simulation system to help emulate what the final product, deployed in a forest with hundreds of these nodes, would look like. What we learned We learned a tremendous amount about low level radio communication and how to propagate data through complex networks. What's next for TreeGuard After having so much success with sending information through long distances, we're super excited to continue working on this project as we see the potential it has in resolving a real-world issue. We hope to improve on the current system we have, and then contact local fire departments to get feedback on how useful such a system would be in the field, and build on our project from there! Built With arduino c++ firebase google-cloud javascript lora react typescript Try it out github.com pranavputta.me
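The breadth-first flood-fill the team describes — propagating a reading node-to-node until it reaches a receiver — can be sketched as a standard BFS over the in-range neighbour graph. The graph representation and function names here are illustrative, not the firmware's:

```python
from collections import deque

def flood_to_receiver(links, source, receivers):
    """Breadth-first flood of a reading from `source` through the mesh
    described by `links` (node -> set of in-range neighbours).

    Returns the hop path to the first receiver reached, or None if no
    receiver is reachable; BFS guarantees a fewest-hops route."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in receivers:
            return path
        for neighbour in links.get(node, ()):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None
```

As the writeup notes, naive flooding is not the most efficient scheme; a routed mesh protocol would cut redundant retransmissions.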
TreeGuard
Tree Guard is an IOT network based early forest fire detection system
['Erik Scarlatescu', 'Pranav Putta']
[]
['arduino', 'c++', 'firebase', 'google-cloud', 'javascript', 'lora', 'react', 'typescript']
60
10,371
https://devpost.com/software/facecovmonitor
Inspiration COVID-19 has disrupted society and changed the way we live, learn, work, and play. I have experienced this first hand as I've been laid off from my job at the university, switched to fully virtual courses, and changed my interactions with friends and family to a digital format. My university, North Carolina State University, briefly opened for in-person class at the start of August but was quickly forced back online due to a sharp increase in COVID-19 cases and clusters on campus. During that short period, I was baffled at how many people (a small but noticeable minority) did not wear masks and even refused to wear masks. I wondered what life would be like if we had more actionable data and tools to monitor and remedy this. What it does This tool turns any device with a camera and web browser into a face covering monitoring agent. Agents constantly record a specific area and report to the backend whenever they detect n number of faces. The backend takes the picture of faces from the agents and collects data on how many are wearing face coverings properly. The aggregate data can be used by policy makers and the public to guide decisions on enforcement and awareness efforts. How I built it In this project, agents are devices with browsers and cameras that will send images to the backend when a person is detected. Work is done on the client side to ensure that only images with people in them are sent (the backend can get expensive as each image has a small charge). Once one is detected, the client base64 encodes the image and sends that string to the backend. The backend listens for images and processes an image for data about how many people are in it, how many are wearing face coverings improperly, and how many are not wearing any face coverings. Only aggregate data is collected to remove the possibility of bias. The backend is hosted on AWS with the help of a serverless microservice that I developed using golang, Lambda, API Gateway, and DynamoDB.
The backend processes the image data and sends it through AWS Rekognition with the brand new feature that detects PPE usage (literally announced hours before HackGT began). Unfortunately I ran out of time before I could fully use Rekognition, but I hope to develop this further after the event ends. The entire project (with the exception of the front end, which is hosted on GitHub Pages) is easily deployable using AWS CDK, which is a free infrastructure-as-code tool that allows for infrastructure to be defined using a programming language rather than a template. The front end is hosted on a GitHub Pages static website and uses face-api.js to detect when a face comes into view. Costs associated with streaming video (especially with Rekognition image processing) are extremely high, so the decision was made to have clients detect when a person comes into view so as to not waste money and resources on images with no data in them. I also learned that the browser will only allow webcam usage when the site is served over HTTPS. I had to switch from my original plan of hosting it in an S3 bucket to GitHub Pages. What's next for FaceCOVmonitor -Fully implement the back end and provide support for multiple agents and tenants. -Possibly find a cheaper way to process images by learning how to do that myself. -Clean up the front end Try out the front end client here (It's unfortunately not hooked up to the backend due to time constraints) https://github.com/adchungcsc/FaceCOVmon Built With amazon-web-services api-gateway cdk dynamodb githubpages html javascript lambda typescript Try it out adchungcsc.github.io
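The client-to-backend handoff described above — base64-encoding a captured frame and shipping it as a string — can be sketched in a few lines. The field names (`agent`, `image`) are illustrative, not the project's actual schema:

```python
import base64
import json

def frame_to_payload(jpeg_bytes, agent_id):
    """Base64-encode a captured frame and wrap it in the JSON body an
    agent would POST to the backend (field names are illustrative)."""
    return json.dumps({
        "agent": agent_id,
        "image": base64.b64encode(jpeg_bytes).decode("ascii"),
    })

def payload_to_frame(body):
    """Reverse step on the backend, recovering the raw image bytes
    before handing them to the PPE-detection service."""
    data = json.loads(body)
    return data["agent"], base64.b64decode(data["image"])
```

Base64 inflates payloads by roughly a third, which is one reason filtering empty frames client-side matters for cost.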
FaceCOVmonitor
Collect actionable aggregate data about mask usage with any device that has a web browser & camera.
['Alex Chung']
[]
['amazon-web-services', 'api-gateway', 'cdk', 'dynamodb', 'githubpages', 'html', 'javascript', 'lambda', 'typescript']
61
10,371
https://devpost.com/software/broadcraft-49grvx
Inspiration Some of our team members expressed enjoying products and services from smaller businesses that can offer a more custom or traditional experience. During the current quarantine, many business owners find themselves struggling to thrive and maintain the clientele they once used to enjoy. In the digital market dominated by behemoths like Amazon and eBay, how can these businesses reach their audiences and deliver a similar fashion of care? Our team seeks to close this gap with BroadCraft. BroadCraft is a play on words between "broadcast" and "craft", hinting at its purpose. Businesses that appeal to a similar type of audience popularly use storefronts like Etsy. In the midst of the isolation generated by the quarantine, we could use technology to enhance this sector and bring pleasant experiences to both consumers and business owners. What it does Connects businesses with clients by allowing immediate updates about the production of a product through every step of its pipeline. This allows business owners to develop a more distinguishable brand, as they can include as much detail about the process as desired, from elaborate walkthroughs to photographic records. The progress of the order is shared with clients via a unique public URL (similar to the tracking pages of delivery services). Furthermore, the web app seeks to optimize a business's performance through the use of a series of QR codes that can be attributed to the different steps of the pipelines, as well as a merchant dashboard that lists information on orders and offers important insights about the logistics currently in place. How we built it We have a Node.js + Koa API as our backend, utilizing Firebase for our storage and identity management. Our front-end is a web app built using React with Bootstrap and Material-UI components. We are using NCR's Banking API to provide mock-up transactional data.
Challenges we ran into About half of our team was new to React which resulted in a decently steep learning curve. Additionally, we ran into issues trying to get a response from the Banking API we were using. Accomplishments that we're proud of Achieving a functional MVP despite all the great events and fun distractions that HackGT made available to us. What's next for BroadCraft We want to put some more work into the dashboard, including an actual onboarding for new users (business owners). This would include a detailed guide on the best logistical practices we've determined (where to place QR codes, when to record updates, etc). We also want to add functionality using geolocation, so we can also help service-providers (such as maintenance, construction, cleaning, babysitting, etc. ) connect with their customers in a similar way. Built With adobe-illustrator bootstrap css firebase html5 javascript material-ui ncr ncr-banking-api node.js react scss Try it out github.com
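The step-by-step pipeline tracking described above — one QR code per production step, progress shared through a unique public URL — could be modeled along these lines. Everything here (step names, the tracking-URL shape, the class API) is an illustrative sketch, not BroadCraft's actual data model:

```python
import uuid

class Order:
    """Minimal model of an order moving through a maker's pipeline."""

    def __init__(self, steps):
        self.steps = list(steps)
        self.completed = []
        self.token = uuid.uuid4().hex  # shared with the client

    def scan_step(self, step, note=""):
        """Called when the maker scans the QR code attached to a step;
        steps must be completed in pipeline order."""
        expected = self.steps[len(self.completed)]
        if step != expected:
            raise ValueError(f"expected step {expected!r}, got {step!r}")
        self.completed.append((step, note))

    @property
    def tracking_url(self):
        # Hypothetical public tracking page, like a courier's.
        return f"https://broadcraft.example.com/track/{self.token}"

    @property
    def status(self):
        done = len(self.completed)
        if done == len(self.steps):
            return "complete"
        return f"{done}/{len(self.steps)}: {self.steps[done]}"
```

Scanning a step's QR code simply advances the order, so the maker records progress without touching the dashboard.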
BroadCraft
Connect with clients, develop a more distinguishable brand and optimize your business's performance. BroadCraft allows businesses to inform consumers about their orders through every step of the way.
['Ivan A. Reyes', 'Jehf Denezaire', 'DaJuan Harris', 'raahimid Idrees']
[]
['adobe-illustrator', 'bootstrap', 'css', 'firebase', 'html5', 'javascript', 'material-ui', 'ncr', 'ncr-banking-api', 'node.js', 'react', 'scss']
62
10,371
https://devpost.com/software/kovidtrafik
Inspiration Because of the COVID-19 pandemic, so many areas of our lives have drastically changed, notably our willingness and ability to travel. Since our team believed that previously existing traffic models may no longer be as accurate, we sought to create a projection of traffic more relevant to our current situation. What it does KovidTrafik takes a user input of a date and time. The machine learning algorithm behind KovidTrafik then predicts the traffic level at the inputted time and prints out whether traffic is high, moderate, or low, as well as how the traffic level compares to the average traffic levels. How I built it The machine learning algorithm behind KovidTrafik was built with an ARIMA time series model in Python. The website was made on IBM Cloud and formatted with HTML and CSS. Challenges I ran into Because this was our first experience with machine learning and hackathons in general, some difficult concepts bewildered us. Visualizing our data and creating a usable website were both new and challenging to us. Accomplishments that I'm proud of We are proud of the accuracy of KovidTrafik. We believe the model predicts values within a range implementable in the real world. The website we created was also more complex than our previous creations. What I learned We learned the concepts of machine learning, specifically ARIMA time series. In particular, lag order and degree of differencing were ideas that we had to spend time on to comprehend. What's next for KovidTrafik We aspire to compile even more data for KovidTrafik, making the model more accurate. By increasing its accuracy, we hope to make KovidTrafik usable and beneficial around the world. Built With css html ibm-cloud python Try it out github.com github.com
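The two ARIMA ideas the team mentions wrestling with — degree of differencing and lag order — can be illustrated with a tiny, dependency-free ARIMA(1,1,0)-style forecast. This is a conceptual sketch only; the real model would be fit with a library such as statsmodels:

```python
def difference(series, d=1):
    """Apply d rounds of first differencing (the 'I' in ARIMA)."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

def fit_ar1(series):
    """Least-squares AR(1) coefficient phi, so that x[t] ~ phi * x[t-1]
    (lag order 1, the 'AR' part)."""
    num = sum(a * b for a, b in zip(series, series[1:]))
    den = sum(a * a for a in series[:-1])
    return num / den if den else 0.0

def forecast_next(series):
    """One-step forecast: difference once, fit AR(1) on the changes,
    predict the next change, and add it back to the last observation."""
    diffed = difference(series, d=1)
    phi = fit_ar1(diffed)
    return series[-1] + phi * diffed[-1]
```

On a series with a steady trend, differencing turns the trend into a constant, which the AR(1) term then extrapolates.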
KovidTrafik
KovidTrafik provides users with traffic predictions at a particular time for a particular day of the year based on traffic data collected since the first traces of COVID-19 in the United States.
['Sun Mee Choi', 'akshar2020', 'Felix Wang']
[]
['css', 'html', 'ibm-cloud', 'python']
63
10,371
https://devpost.com/software/dino-rctyaw
Inspiration Nonprofits and volunteer-based organizations struggle from limited financial and personnel resources. Furthermore, inability to evangelize their mission and promote awareness of opportunities hinders their progress and impact. When we also consider the perspective of community members, the apparent lack of information and unfortunate poor communication provide a high barrier for entry to potential volunteers. What it does "dino" (dee-no) would provide non-profit organizations a central platform to share volunteering opportunities, and community members can find opportunities specific to causes they’re passionate about that also fit to their schedules and location. Furthermore, for-profit companies can receive tax-breaks and positive PR through Volunteer Grant Matching for employee’s service. How we built it "dino" consists of two primary components: a webapp and mobile platform (Android AND iOS!). For our initial prototype, we wanted to focus on the "Main Success Scenario" in which: A nonprofit posts new opportunity using the webapp (labeled as "Events" in app) Volunteers find said opportunities using our search/browsing page on the mobile app Volunteers register for events and Nonprofits can view attendees. Our MVP consisted of subsequent infographic views and activities surrounding this main success scenario. We used Python and Flask to create a webapp and Flutter for cross-platform mobile development. Google Cloud and Firebase (specifically Cloud Firestore) provided our backend infrastructure and data management. We developed UI mockups using Figma to help focus our design process and cater towards typical user expectations. Challenges we ran into Our largest hurdle - beyond the time crunch - was web app development and firebase integration. Our team had a visionary idea and audacious passion to pursue it, but we also had a lack of prior experience in web-app development using flask. 
This challenge did not dissuade our commitment to learning and problem-solving. Despite a total of 3 hours of sleep throughout HackGT, we persisted in self-study and countless tutorials. In the end, we produced beautiful designs and solid UI mock ups. While the full end-to-end integration is perhaps a future goal, the knowledge gained from this weekend is certainly a win. Accomplishments that we're proud of We're incredibly proud of our exciting and promising idea which we believe can have significant benefits to local communities. We're also proud of our mobile app which is compatible with both iOS and Android and is fully integrated with Firebase data! What we learned We learned much about Python and Flask development, creating web apps, HTML/CSS, Flutter, and much more. We also learned the importance of "keeping the blood pumping." The sedentary nature of our passion in coding can lead to decreased productivity. However, occasional breaks to kick around a hacky sack gave us the energy and camaraderie needed to persevere and hope. What's next for dino dino has incredible potential and we are brimming with ideas. Beyond perfecting our initial prototype with the webapp, we plan to add accounts and usability for "Non-profit coordinators" and "Volunteers" in both the mobile and web apps. We also hope to include a third stakeholder in corporate entities who would be interested in monitoring and approving corporate volunteer grant matching through a simple, efficient, and reliable platform. We're especially excited by the prospect of reaching out to local and global volunteering organizations to gain those initial customers for our service. Built With css dart firebase flask flutter html python Try it out drive.google.com
dino
'dino' facilitates community service coordination by connecting non-profits and potential volunteers - while also providing a platform for corporate grant matching.
['Josiah Criswell', 'Ozi', 'Will Hunnicutt', 'Joel Bartlett']
[]
['css', 'dart', 'firebase', 'flask', 'flutter', 'html', 'python']
64
10,371
https://devpost.com/software/hackgt-dsm8y6
Smart Inventory A revolutionary approach to smart inventory using a decentralised blockchain and a centralised architecture. We have assimilated these prospects, and our product uses Ethereum gas to fuel the blockchain ledger. Alongside this is our machine learning model that helps us compare the items. The items can be anything, including perishable goods, but we use medicines as an example. The chain can be tracked and blocks are publicly available. Our branch is not merged yet, please refer to the Final Branch. The system can be used to check whether your medicines are being shipped in the right manner or not, and all the data is submitted to a system comprising an Ethereum blockchain and MongoDB. We help find discrepancies in the chain, and all data is centrally available and immutable [thus the use case for blockchain]. Finally, this also helps the customer, as they can see where the medicine came from, i.e. the route, how it was stored, and how old the product is. They also see how other brands shipping the same type of products are doing. This data is also available for the brands to see so they can analyze the competition and provide a better product to the user, thus increasing sales. Unique IDs can be used to track and retrieve all these details in real-time from the centralised system Built With css html javascript jupyter-notebook python Try it out github.com
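The immutability property the project relies on — any tampering with a shipment record breaks the chain — can be illustrated with a minimal hash-linked ledger. This is a conceptual sketch of the idea, not the project's Ethereum implementation:

```python
import hashlib
import json

def add_block(chain, record):
    """Append a shipment record, linking it to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({"record": record, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)
    return chain

def verify(chain):
    """Recompute every link; any edit to an earlier record breaks both
    that block's hash and the links of every block after it."""
    prev_hash = "0" * 64
    for block in chain:
        expected = hashlib.sha256(
            json.dumps({"record": block["record"], "prev": block["prev"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["prev"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True
```

On a real Ethereum deployment the same guarantee comes from the network's consensus rather than a locally stored list.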
MOTION: SMART INVENTORY
A smart Inventory system that changes how the perishable products especially medicines are shipped
['saksham gupta', 'Siddhant Tiwary', 'vortex-17 Mehta', 'Shreyas Vaderiyattil']
[]
['css', 'html', 'javascript', 'jupyter-notebook', 'python']
65
10,371
https://devpost.com/software/nomnom-ztvpo4
NomNom Logo Inspiration The other day while making a recipe, we realized we would have to scale up from 4 servings to around 10. This is a recipe we make all the time, although often for different numbers of servings (depending on how many of our roommates are eating, and how much leftovers we want). However, the website where we got the recipe from doesn't have a feature to scale up! This gave us the inspiration to create NomNom. What it does NomNom is the simple, no-math solution for recipe and ingredient calculation. It provides an easy-to-use interface to input your recipes, and allows convenient storage and scaling of your recipes to any number of servings! How we built it We used React to create our core web app, then we used Python Flask to write the APIs. For the option to import a recipe from a website, we used BeautifulSoup for a scraper. Challenges we ran into One member had to learn Python's BeautifulSoup and Requests, neither of which he was familiar with, in order to write web scraping logic to allow us to import recipes from other websites. Most of our group also didn't have prior experience with React. Accomplishments that we're proud of We are proud of how we were able to come up with something that everyone can use in their day-to-day life. We added a feature where our app recommends the closest quantity if the converted quantity is a weird decimal number, and our app is very accessible to everyone across devices. What we learned One of us learned how to do web scraping with BeautifulSoup from scratch, and we as a group learned how to create a web app with React. What's next for NomNom We would like to add a feature where our web app will scrape from a photo of a recipe in physical books or notes and then convert the recipe from that. Built With beautiful-soup flask javascript react requests
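The "closest quantity" recommendation mentioned above — snapping a weird decimal to something a cook can actually measure — could be sketched like this. The set of kitchen-friendly fractions and the snapping rule are assumptions for illustration, not NomNom's exact logic:

```python
from fractions import Fraction

# Kitchen-friendly fractions between 0 and 1 (assumed set of measures).
NICE = sorted({Fraction(n, d) for d in (2, 3, 4, 8) for n in range(d + 1)})

def scale_quantity(amount, servings_from, servings_to):
    """Scale an ingredient amount to a new serving count, then snap the
    fractional part to the closest kitchen-friendly fraction."""
    exact = Fraction(str(amount)) * servings_to / servings_from
    whole = int(exact)
    frac = exact - whole
    best = min(NICE, key=lambda f: abs(f - frac))
    return float(whole + best)
```

For example, scaling 0.7 cup stays at 2/3 cup rather than an unmeasurable 0.7, while quantities that already land on a clean fraction pass through unchanged.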
NomNom
The simple, no-math solution for recipe and ingredient calculation.
['Patrick Li', 'Jerry Huang', 'Neil Thistlethwaite', 'Eunseo Cho']
[]
['beautiful-soup', 'flask', 'javascript', 'react', 'requests']
66
10,371
https://devpost.com/software/connectr-5hps6c
Intro Page Login Page Messaging Page Chat Page Layout Inspiration Because of the coronavirus, people from all over the world have been stuck in their homes without the ability to meet and interact with new people. Isolation has been proven to cause a multitude of mental health issues, so we wanted to create a system that can help us connect with each other during these lonely times. We were inspired by Georgia Tech’s smile campaign and quarantine letters to make a positivity social media webpage. We wanted to use technology to give more people the ability to meet someone new. What it does CONNECTR allows you to send anonymous messages to other random CONNECTR users. If they like the message, they can send a message back. After both users have sent each other messages and liked them, their real names and social media information are shared with each other so they can continue bonding outside the website and in the real world. We wanted to make the initial point of contact anonymous, as now more than ever people are trying to get out of their social circles and form new bonds. Unlike other apps or websites where the user has to make the first move, CONNECTR does the hard work for them. How we built it We used Node JS and Express JS for the backend, focusing on efficient routing of URLs for easy access to each feature. We used MongoDB and Mongoose to create models for each letter and user that joins our platform. We used HTML and CSS to create beautiful UI in the hopes of promoting a calm, delightful atmosphere. Challenges we ran into The genesis of the project was founded upon our own inability to meet in person; the lack of in-person support and collaboration made us rely on our own spirits to press on. Going into this project, we were complete beginners at server-side JS, meaning that documentation was a waterfall of confusion. Luckily, we managed to figure most of it out in the end.
The time was not nearly enough to finish a complete development project with programmers at entry-level experience. Accomplishments that we're proud of We are extremely proud of the server-side code and building efficient routing from ground zero. We are also proud of the fantastic designs that we had in mind, even though they could not all be implemented into the webpage. We have to show them off, though. We are proud of building our first full-stack web app, going from zero experience to intermediate! What we learned We learned how to use server-side technologies to connect to databases in order to serve people anywhere, anytime. We learned how to read documentation for hours on end. We learned important design principles and the importance of stunning visuals. We learned how to activate our stubbornness! What's next for CONNECTR CONNECTR still has a lot of potential as a web messaging app - perhaps it would be able to match people anonymously based on their interests. Expanding to many people would allow CONNECTR to flourish in its goals: reimagining social situations and new interactions. CONNECTR wants to ultimately connect many people throughout the world who are experiencing the same hardships and situations. CONNECTR wants to highlight the shared experiences of humans all across the globe, anonymously. Until you want to connect, of course. Built With css ejs express.js javascript mongodb mongoose node.js Try it out github.com
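The reveal rule described above — identities are shared only after both users have sent a message and liked the one they received — could be modeled with a small state object. The class and method names are illustrative, not CONNECTR's actual models:

```python
class Connection:
    """Tracks an anonymous exchange between two users; identities are
    revealed only after both sides have both sent and liked a message."""

    def __init__(self, user_a, user_b):
        self.users = {user_a, user_b}
        self.sent = set()   # users who have sent a message
        self.liked = set()  # users who liked the message they received

    def send(self, user):
        self.sent.add(user)

    def like(self, user):
        self.liked.add(user)

    @property
    def revealed(self):
        # Names and socials unlock only once the exchange is fully mutual.
        return self.sent == self.users and self.liked == self.users
```

Keeping the reveal as a derived property (rather than a flag set by the client) makes it hard to leak identities early.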
CONNECTR
“Re-imagining our social reality”
['Daniel Yuan', 'joydang37', 'Christian Kim', 'Angela Dai']
[]
['css', 'ejs', 'express.js', 'javascript', 'mongodb', 'mongoose', 'node.js']
67
10,371
https://devpost.com/software/tradot
Inspiration While in the middle of a pandemic, it is vital that customers can have germ-free transactions and easy delivery and curbside pickup, and that small businesses can continue to thrive. We have built an innovative application that accomplishes all of this. What it does Our application, called Automatic Store Deployer, is an all-inclusive webstore and a chatbot for stores that takes in and interprets customer messages and responds and acts according to customer requests, allowing customers in a hurry or on the go to order food or groceries in a matter of seconds through online casual conversations with the chatbot. This allows contact-free transactions through delivery or curbside pickup. And this chatbot is so easy to set up and customize, through fast and simple steps to add inventory, prices, location, and operating hours to the bot's database, that any company, large or small, can use it to create faster, safer, and simpler customer transactions! How we built it We used the MERN stack to develop our web application, using React for the frontend, Express and Node.js for the backend, and MongoDB for our database. We built our chatbot using Dialogflow. We also implemented NCR's Selling Engine and Catalog APIs to post and retrieve merchant and customer data. Challenges we ran into One challenge we ran into was learning how to implement NCR's APIs and integrating three separate backends hosted on three different servers. Accomplishments that we're proud of We are proud of having completed this project within 36 hours! One specific part we are especially proud of is setting up our chatbot. What we learned We learned how to integrate heavy frontend and backend code. We also strengthened our communication skills and teamwork in a virtual environment. What's next for Automatic Store Deployer We plan to expand our chatbot to have more capabilities in assisting customers and expanding this project to a larger marketplace.
Built With dialogflow express.js mongodb node.js python react
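The core chatbot behavior — turning a casual sentence into a structured order against a store's inventory — can be illustrated with a tiny keyword matcher. The real app uses Dialogflow for this; the menu, regex, and naive singularization below are purely a stand-in sketch:

```python
import re

MENU = {"bagel": 2.50, "coffee": 3.00, "salad": 6.75}  # sample inventory

def parse_order(message):
    """Tiny stand-in for the chatbot's intent matching: pull quantities
    and known menu items out of a casual sentence and total the order."""
    order = {}
    for qty, word in re.findall(r"(\d+)?\s*([a-z]+)", message.lower()):
        item = word.rstrip("s")  # naive singularization
        if item in MENU:
            order[item] = order.get(item, 0) + int(qty or 1)
    return order, sum(MENU[i] * n for i, n in order.items())
```

A full NLU service also handles synonyms, modifiers, and follow-up turns, which is why Dialogflow is the better fit in production.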
Automatic Store Deployer
An easy to set up client for shops with a website and a chatbot! Make transactions faster, easier, and safer!
['Leonard Thong', 'Rochan Madhusudhana', 'Allison Lee']
[]
['dialogflow', 'express.js', 'mongodb', 'node.js', 'python', 'react']
68
10,371
https://devpost.com/software/vehicle-iot-security-system
Vehicle Device Insert Vehicle Device Insert Closed Vehicle Device Front View Vehicle Device Open Box Home Device System Diagram And the prints don't stop Vehicle IoT Security System Inspiration This project was created at HackGT 2020 and was inspired by a rise in vehicle theft in my local community. I hope to make this a noninvasive product that is easy to reproduce and deploy. Designed to Solve Many vehicles are equipped with alarms. Unfortunately, the current alarm systems are not enough. Malicious events involving vehicles occur when the victim is farthest away from the vehicle. Vehicles are often stolen at night and right from the victim's home parking. The problem is that it is difficult to hear alarms from inside a home, and thieves and other malicious entities are aware of this. The solution I propose is to reimagine current alarms within the realm of IoT, or the Internet of Things. By creating an IoT-enabled device, a user is able to hear alarms sound from inside the home as opposed to the outside. Now, with the Vehicle IoT Security System, users all over the globe will be prepared. What it does and How I built it As of October 18, 2020 at 7:00am EST, a functional demo and prototype of the vehicle device and the home device has been created. The vehicle device consists of an ESP8266 NodeMCU v1.0, a 9V battery, and an Adafruit LIS3DH Triple-Axis Accelerometer Breakout Board. This device connects to a nearby network of the user's choosing and is meant to be placed inside a vehicle. If tampering occurs (theft, tow truck, vandalism, angry significant other, etc.), the accelerometer will sense the event and the ESP8266 will send the data over a client and server network connection with the home device via a URL. The home device consists of another ESP8266 NodeMCU v1.0, a push button, and a buzzer (I was expecting company so I went for the quieter route, but, in reality, I would use a speaker instead).
If the data received by the home device from the vehicle device confirms tampering, the alarm sounds via a speaker (or, in this case, a buzzer). The alarm is disabled via the push button until another tampering event occurs.

Challenges I ran into
All of them!!!! Hardware, software, debugging, print failures, burnt boards: you name it, I experienced it! Need a 9V connector but don't have one? Pry it out of an old dead 9V! Not sure if the problem is hardware, software, sleep deprivation, or hunger? Well, I had to figure it out. The video submission was the hardest part for me.

Accomplishments that I'm proud of
Finding a balance between different roles: mechanical, electrical, software, and project management. I'm proud that I was able to display my breadth, my resourcefulness, and my rapid prototyping skills. It was a one-man show. I do prefer teams, but if need be I'll do it all! (Though I should really find a consistent team; this wasn't as fun. I miss in-person hackathons.)

What I learned
A lot :) but I'm too short on time. Refer to GitHub for diagrams and code.

Built With 3d accelerameter arduino buttons buzzers c++ determination esp8266 hardware inventor modeling nodemcu printing software Try it out github.com
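At its core, the tamper check comes down to comparing successive accelerometer readings against a threshold. A minimal Python sketch of that logic (the threshold value, units, and function names are illustrative, not taken from the project's ESP8266 firmware):

```python
def magnitude_delta(prev, curr):
    """Euclidean distance between two (x, y, z) accelerometer readings."""
    return sum((c - p) ** 2 for p, c in zip(prev, curr)) ** 0.5

def is_tampering(readings, threshold=0.5):
    """Flag a tamper event if any jump between consecutive readings
    exceeds the threshold (units depend on the accelerometer's range)."""
    return any(
        magnitude_delta(a, b) > threshold
        for a, b in zip(readings, readings[1:])
    )

# A parked car barely moves; a towed or broken-into car jolts.
parked = [(0.0, 0.0, 1.0), (0.01, 0.0, 1.0), (0.0, 0.01, 1.0)]
jolted = [(0.0, 0.0, 1.0), (0.9, 0.4, 1.2), (0.1, 0.0, 1.0)]
```

On the real device, a positive `is_tampering` result would trigger the HTTP request to the home device's URL rather than a local flag.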
Vehicle IoT Security System
Stop Vehicle Theft and Vandalism Now with an IoT Security System!
['Wally Proenza']
[]
['3d', 'accelerameter', 'arduino', 'buttons', 'buzzers', 'c++', 'determination', 'esp8266', 'hardware', 'inventor', 'modeling', 'nodemcu', 'printing', 'software']
69
10,371
https://devpost.com/software/prioritynews
[Screenshots: Python sheet, design doc]

Inspiration
As a team, we enjoy tackling challenges that delve into social good. Regardless of whether we produce something that will be widely used, we think it's important to work on projects that can serve to improve the lives of people or the world in some way. As for why we chose NewsQ's challenge over others, we all have a budding interest in applying machine learning in our algorithms to produce really useful results. This project gave us a great opportunity to combine both for a well-defined problem.

What it does
This project ranks news for people to read based on an algorithm we designed. The project is specific to general and health news from Australia: it takes in the top headlines from Australia and orders them based on a combination of three main factors: credibility, or how trustworthy an article is based on its content; readability, based on a reading-level algorithm called the Flesch Reading Ease; and time elapsed since publication. Each factor is weighted and processed in an equation, whose output leads to the overall news ranking. Output is ranked high to low; the highest-ranking articles have the highest numerical combination of the three factors.

How we built it
We built PriorityNews with the help of an API (NewsAPI) and Python's BeautifulSoup and urllib packages, among others, to web-scrape headline news from Australia. With this initial data, we made calls to a machine learning model built on bag-of-words that predicted the probability of the article content being trustworthy. We also analyzed the readability level of each article's content using a Python package that abstracts Flesch's reading ease algorithm. We combined these factors, along with others such as time since publication, into a testing-tuned equation to achieve a relevant ranking of general and health news articles. Our design document for our specific challenge was made with LaTeX.
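The description gives the three factors but not the exact equation, so the weights below are purely hypothetical; this sketch only shows the shape of a ranking that combines a credibility probability, the Flesch Reading Ease score, and recency:

```python
def flesch_reading_ease(total_words, total_sentences, total_syllables):
    # Flesch Reading Ease: higher scores mean easier reading (roughly 0-100)
    return (206.835
            - 1.015 * (total_words / total_sentences)
            - 84.6 * (total_syllables / total_words))

def rank_score(credibility, readability, hours_since_published,
               w_cred=0.6, w_read=0.3, w_time=0.1):
    # credibility: model probability in [0, 1]; readability normalized to [0, 1];
    # recency decays toward 0 as the article ages. Weights are illustrative.
    recency = 1.0 / (1.0 + hours_since_published)
    return w_cred * credibility + w_read * (readability / 100.0) + w_time * recency

articles = [
    ("credible, fresh", rank_score(0.9, 70.0, 1)),
    ("dubious, fresh", rank_score(0.2, 70.0, 1)),
    ("credible, stale", rank_score(0.9, 70.0, 48)),
]
ranked = sorted(articles, key=lambda a: a[1], reverse=True)
```

A trustworthy recent article outranks an equally readable but dubious one, and staleness only nudges the score rather than dominating it.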
Challenges we ran into
While the machine learning model was trained on a cleaned dataset of news articles, our web scraper was pulling much messier strings, with headings and markdown punctuation. This made our model seem to make predictions almost randomly. However, taking some time to clean the strings made the model consistent again. Overall, though, the accuracy of the model may have dropped a bit when predicting on our web-scraped data compared to the clean data it was trained on.

Accomplishments that we're proud of
We are proud of the progress made through this hackathon. The idea phase is always one of the more difficult portions of a hackathon, and after looking through the challenges, the NewsQ challenge allowed us to express our skillset the best. Most of the members of the team are new to machine learning, and with our constant desire to improve and persevere through difficult times, we were able to train several datasets and develop an algorithm that ranked news sources with high efficiency. After constant trial and error, we arrived at an algorithm that worked with the datasets and were satisfied with the results.

What we learned
Planning is a necessity. Training the machine learning models took a really long time, and one of the ones we ran throughout Friday night actually ended up having poor accuracy. The model did not like the formatting of some of the web-scraped data, so we had to reformat the data. We also learned about HTML parsing to extract article data with web scraping, particularly because of challenges such as decoding character sets and trimming returned content.

What's next for PriorityNews
We hope to improve the accuracy of the machine learning model when it predicts real and fake news. Hopefully, we can train more powerful models like convolutional neural networks or ensemble models that combine neural networks and bag-of-words.
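The cleaning step the team describes, stripping markdown punctuation from scraped strings before the bag-of-words model sees them, can be sketched in a few lines of pure Python (the exact cleaning rules and token pattern here are assumptions, not the team's code):

```python
import re
from collections import Counter

def clean(text):
    """Strip markdown headings, brackets, and similar punctuation that made
    the credibility model's predictions on raw scraped pages inconsistent."""
    text = re.sub(r"[#*_`>\[\]()]+", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def bag_of_words(text):
    """Lowercased token counts: the feature representation a bag-of-words
    credibility model consumes."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

scraped = "## Breaking news\n[Officials] say..."
```

Running the cleaned text through `bag_of_words` yields the same kind of count vector the model was trained on, which is why cleaning restored consistent predictions.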
Built With beautiful-soup jupyter-notebook latex machine-learning natural-language-processing newsapi python spacy webscrape Try it out github.com
PriorityNews
A news recommendation system algorithm conducive to democracy and social good
['Vinnie Khanna', 'Maanas Purushothapu', 'Sahil Sudhir', 'Raj Srivastava']
[]
['beautiful-soup', 'jupyter-notebook', 'latex', 'machine-learning', 'natural-language-processing', 'newsapi', 'python', 'spacy', 'webscrape']
70
10,371
https://devpost.com/software/alexandria-ufba14
[Screenshots: landing page, skill tree, course page]

Welcome to the Library
This year, we were met with a reimagining of just about every reality we knew. Habits and routines we'd known for years were ripped away from us, and all of us have had our own struggles in adapting to the new way of the world. As college students, we immediately saw a change in our own ability to focus and in how we worked with the transition to remote learning. While the impact on us as students was profound, we couldn't stop thinking about how that impact may be worse for younger students. Children, as maturing individuals, lack the habits and self-control that are helpful, and perhaps even necessary, to succeed in an online learning setting. We knew that if we were struggling, our younger peers must be even worse off, and we wanted to try and fix that. How do you keep children engaged in schooling when their only way to be engaged is through a computer screen? Our project, Alexandria, named for the ancient library of the same name, is our proposed solution: a platform to "gamify" schooling and engage a wide age range of learners through their computer screens.

How do we engage students, learners, and scholars-to-be?
Everyone loves to be rewarded, children especially. We wanted to make online schooling more interactive, more fun, and more enjoyable by providing rewarding "games" and satisfying results for children who are struggling to engage with an online education. We accomplished this in a number of ways.

Levels
As students progress through their educational career, they are rewarded for completing lessons and growing their knowledge base. This is tracked through a level system that manages their advancement.

Skills
Levels provide a very "at a glance" view of a student's progress, but skills allow a more detailed view. Tied to specific subjects, such as algebra or world history, these skills would be unlocked as students completed the relevant lessons.
Peers
In an online setting, interacting with fellow students is more difficult. Our platform addresses part of this disconnect by providing a way for students to collaborate with their peers. This encourages cooperation with their friends and a motivation to advance and stay engaged in their coursework.

Technologies
Alexandria was built using several layered technologies. React Native was employed for the frontend, while Node and Express were used to build a basic API that interacts with our SQL database hosted on Microsoft Azure.

Challenges we ran into
An immediate concern was scope. We knew that Alexandria was a big project to take on, and one that was impossible to polish and complete in a weekend. We also found difficulty in adapting to and learning a handful of new technologies in the timeframe, such as NodeJS and React Native. Despite this, we all felt passionately about exploring and developing a proof of concept for the idea; as a result, we spent a lot of our time creating well-designed mockups, even if we couldn't fully replicate them in code over the weekend. We believe the idea of what Alexandria can become is much more important than the prototype that can be built over a weekend.

What's next for Alexandria
This is one library that won't burn down. We hope that over the semester we can continue to refine Alexandria and make a true platform out of it. Every member of our team has been personally affected, or had family directly affected, by the difficulties of online learning. This is our new reality, and for many it's an uncomfortable one. We hope to continue working on Alexandria and help it realize its full potential, with trial runs and limited deployment before scaling up. With Alexandria, we hope to reimagine reality for children struggling to engage in their studies and make this new normal a little easier to accept.
Built With azure express.js javascript json node.js react react-native sql
Alexandria
Engaging young learners.
['Ryan Elliott', 'Trevor Crow', 'Sterling Cole']
[]
['azure', 'express.js', 'javascript', 'json', 'node.js', 'react', 'react-native', 'sql']
71
10,371
https://devpost.com/software/local-lenz-p3v0ql
[Screenshots: start of pathway (location lookup), door leading to the next scene, objects representing art choices, 360-degree image template, art gallery, artist selection page, artist gallery page]

Inspiration
During this global pandemic, it has been difficult to enjoy some of the experiences we were once used to. The ability to visit an art gallery with art from a local community is an experience we all took for granted, so we wanted to create a mechanism by which we could enjoy art in a reimagined way.

What it does
The project has two parts: first, a user is guided through a series of preference selections that narrow their search. Once these fields have been narrowed, they move to the web app, where they are free to explore a personalized art gallery based on their choices.

How we built it
We used the Unity engine for the first part, creating a 3D space in which a character can move around and select their preferences for the questions. The web app is implemented using the Flask microframework in Python and styled using HTML/CSS.

Challenges we ran into
There were quite a few challenges with this project. Our vision for the product was extremely clear, but figuring out the implementation details was tricky. We had many ideas about how to implement our design, but all of them had to compromise on features we envisioned in our final product and couldn't include due to the time constraints of a hackathon.

Accomplishments that we're proud of
We were able to use Unity to animate the search gateway. We used Flask to implement a web app that illustrates the main concepts of the application. We also looked into a SQL database to implement basic web app functionality such as artist signup and user login.

What we learned
As emerging hackers, we were not comfortable with all the technologies we utilized for our project and thus learned a lot.
In addition to learning the technologies we directly used, like Unity and Flask, we also had to learn Git in order to collaborate effectively and efficiently.

What's next for Local Lenz
To fully realize our vision of Local Lenz, we will continue to use Unity to implement both the VR search gateway and the VR gallery portals for our web app. We will also enhance the users' VR viewing experience by supporting headsets such as the Oculus, Google Cardboard, and Microsoft HoloLens.

Built With css flask html5 python sqlalchemy unity Try it out vrartsitelanding.herokuapp.com github.com
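The preference-to-gallery matching described above boils down to filtering an artist catalog by the fields the user picked in the Unity gateway. A minimal Python sketch of that core logic (the field names and the in-memory catalog are hypothetical stand-ins for the team's SQL database):

```python
# Hypothetical in-memory catalog standing in for the SQL database
ARTISTS = [
    {"name": "Ava", "medium": "painting", "style": "abstract", "city": "Atlanta"},
    {"name": "Ben", "medium": "sculpture", "style": "modern", "city": "Atlanta"},
    {"name": "Cam", "medium": "painting", "style": "modern", "city": "Decatur"},
]

def match_artists(preferences, artists=ARTISTS):
    """Narrow the gallery to artists matching every preference the user
    selected; fields the user left unspecified match anything."""
    return [
        a for a in artists
        if all(a.get(field) == value for field, value in preferences.items())
    ]
```

A Flask route would then render the gallery page from `match_artists` applied to the query parameters collected in the VR gateway.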
Local Lenz
Local Lenz is a VR Art Gallery space. It connects users to artists they may be interested in based on their preferences and provides a reimagined gallery space to enjoy their works.
['Emma Dang', 'Sohum Gala', 'Arthur Buskes']
[]
['css', 'flask', 'html5', 'python', 'sqlalchemy', 'unity']
72
10,371
https://devpost.com/software/ml-waywt-rec-q9nusk
What Are You Wearing Today? (WAYWT) HackGT Project

Table of Contents
- Introduction
- Setup Instructions
  - Log in to the Microsoft Azure console and create a notebook instance
  - Use git to clone the repository into the notebook instance
- Machine Learning Pipeline
  - Step 1 - Importing the datasets
  - Step 2 - Pre-processing data
  - Step 3 - Training the CNN (using transfer learning)
    - Part A - Specify Loss Function and Optimizer
    - Part B - Train and Validate the Model
    - Part C - Test the Model
  - Step 4 - Creation of user style vectors
  - Step 5 - Recommendation testing
- Important - Deleting the notebook

Introduction
The goal of this project is to develop a recommender system that accepts a few different user-supplied images of clothing as input, scores them against the user's 'style vector' (generated from user preferences during initialization of the app), and then ranks the different outfits to help the user decide what to wear.

All image files used to train the model for this project are from the DeepFashion dataset.

Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaoou Tang. DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Setup Instructions
This project requires the following tools:
- Python - the programming language used by Flask.
- PostgreSQL - a relational database system.
- Virtualenv - a tool for creating isolated Python environments.

To get started, install Python and Postgres on your local computer if you don't have them already. A simple way for Mac OS X users to install Postgres is using Postgres.app. You can optionally use another database system instead of Postgres, like SQLite.

The notebook in this repository is intended to be executed on a managed notebook instance, and the following is a brief set of instructions on setting one up.
Log in to the Microsoft Azure console and create a notebook instance
Log in to the Azure console and go to the Azure dashboard. Click on 'Machine Learning'. It is recommended to enable GPUs for this particular project.

Use git to clone the repository into the notebook instance
Once the instance has been started and is accessible, click on 'Open Jupyter' to get to the Jupyter notebook main page. To start, clone this repository into the notebook instance. Click on the 'new' dropdown menu and select 'terminal'. By default, the working directory of the terminal instance is the home directory. Enter the appropriate directory and clone the repository as follows.

```shell
cd SageMaker
git clone https://github.com/Supearnesh/ml-waywt-rec.git
exit
```

Machine Learning Pipeline
This was the general outline followed for this project:

1. Importing the datasets
2. Pre-processing data
3. Training the CNN (using transfer learning)
4. Creation of user style vectors
5. Recommendation testing

Step 1 - Importing the datasets
The DeepFashion dataset used in this project is open-source and freely available:

1. Download the DeepFashion dataset.
2. Unzip the folder and place it in this project's home directory, at the location /img.

In the code cell below, we write the file paths for the DeepFashion dataset into the numpy array img_files and check the size of the dataset.

```python
import numpy as np
from glob import glob

# !unzip img

# load filenames for clothing images
img_files = np.array(glob("img/*/*"))

# print number of images in each dataset
print('There are %d total clothing images.' % len(img_files))
```

Step 2 - Pre-processing data
The data has already been randomly partitioned into training, testing, and validation datasets, so all we need to do is load it into a dataframe and validate that the data is split in the correct proportions. The images are then resized to 150 x 150 and center-cropped to create an image tensor of size 150 x 150 x 3.
They are initially 300 pixels in height and the aspect ratio is not altered. In the interest of time, this dataset will not be augmented by adding flipped/rotated images to the training set, although that is an effective method to increase the size of the training set.

```python
import pandas as pd

df_full = pd.read_csv("data_attributes.csv")

df_train = df_full.loc[df_full['evaluation_status'] == 'train'][['img_path', 'category_values', 'attribute_values']]
df_test = df_full.loc[df_full['evaluation_status'] == 'test'][['img_path', 'category_values', 'attribute_values']]
df_val = df_full.loc[df_full['evaluation_status'] == 'val'][['img_path', 'category_values', 'attribute_values']]

print('The training set has %d records.' % len(df_train))
print('The testing set has %d records.' % len(df_test))
print('The validation set has %d records.' % len(df_val))
```

```python
import os
from PIL import Image
from torchvision import datasets
from torchvision import transforms as T
from torch.utils.data import DataLoader

# Set PIL to be tolerant of image files that are truncated.
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

### DONE: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes
transform = T.Compose([T.Resize(150), T.CenterCrop(150), T.ToTensor()])

dataset_train = datasets.ImageFolder('img/train', transform=transform)
dataset_valid = datasets.ImageFolder('img/valid', transform=transform)
dataset_test = datasets.ImageFolder('img/test', transform=transform)

loader_train = DataLoader(dataset_train, batch_size=1, shuffle=False)
loader_valid = DataLoader(dataset_valid, batch_size=1, shuffle=False)
loader_test = DataLoader(dataset_test, batch_size=1, shuffle=False)

loaders_transfer = {'train': loader_train, 'valid': loader_valid, 'test': loader_test}
```

Step 3 - Training the CNN (using transfer learning)
The FashionNet model is nearly identical to the VGG-16 architecture, with the exception of the last convolutional layer. However, instead of introducing the additional complexities of the FashionNet model, this model can be simplified by simply retaining the attribute embedding from the dataset. The data will be filtered into 1,000 potentially relevant buckets across 5 attributes of clothing, namely its pattern, material, fit, cut, and style. All layers use Rectified Linear Units (ReLUs) for the reduction in training times documented by Nair and Hinton. It will be interesting to test the trained model to see how the training and validation loss functions perform.

Vinod Nair and Geoffrey Hinton. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of ICML, 2010.

An alternative could have been to use a pretrained VGG-19 model, which would yield an architecture similar to that described by Simonyan and Zisserman.
The results attained by their model showed great promise for a similar image classification problem, and it could have made sense to reuse the same architecture, modifying only the final fully connected layer as done for the VGG-16 model in the cells below.

Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of ICLR, 2015.

```python
import torchvision.models as models
import torch.nn as nn
import torch

# The underlying network structure of FashionNet is identical to VGG-16
model_transfer = models.vgg16(pretrained=True)

# freeze the pretrained feature weights
for param in model_transfer.parameters():
    param.requires_grad = False

# The sixth, final classifier layer is replaced to output the 1,000 attribute buckets
model_transfer.classifier[6] = nn.Linear(model_transfer.classifier[6].in_features, 1000)

# check if CUDA is available and move the model to GPU if it is
use_cuda = torch.cuda.is_available()
if use_cuda:
    model_transfer = model_transfer.cuda()

print(model_transfer)
```

Part A - Specify Loss Function and Optimizer
Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_transfer, and the optimizer as optimizer_transfer below.

```python
import torch.optim as optim

## select loss function
criterion_transfer = nn.CrossEntropyLoss()

# move loss function to GPU if CUDA is available
use_cuda = torch.cuda.is_available()
if use_cuda:
    criterion_transfer = criterion_transfer.cuda()

## select optimizer
optimizer_transfer = optim.SGD(model_transfer.parameters(), lr=0.001)
```

Part B - Train and Validate the Model
The model is trained and validated below, with the final model parameters saved at the filepath 'model_transfer.pt'.
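The `train` helper called in the next cell is referenced but never defined in the README. A sketch consistent with its call signature, assuming a standard PyTorch train/validate loop that checkpoints on the best validation loss (an assumption on my part, not the repository's actual implementation):

```python
import torch


def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
    """Train for n_epochs, validating each epoch and saving the weights
    whenever validation loss improves."""
    valid_loss_min = float('inf')

    for epoch in range(1, n_epochs + 1):
        # training pass
        model.train()
        for data, target in loaders['train']:
            if use_cuda:
                data, target = data.cuda(), target.cuda()
            optimizer.zero_grad()
            loss = criterion(model(data), target)
            loss.backward()
            optimizer.step()

        # validation pass
        model.eval()
        valid_loss = 0.0
        with torch.no_grad():
            for data, target in loaders['valid']:
                if use_cuda:
                    data, target = data.cuda(), target.cuda()
                valid_loss += criterion(model(data), target).item() * data.size(0)

        # checkpoint whenever validation loss improves
        if valid_loss < valid_loss_min:
            torch.save(model.state_dict(), save_path)
            valid_loss_min = valid_loss

    return model
```

The checkpoint-on-best-validation pattern is what makes the later `load_state_dict(torch.load('model_transfer.pt'))` call recover the best-performing epoch rather than the last one.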
```python
n_epochs = 25

# train the model
model_transfer = train(n_epochs, loaders_transfer, model_transfer,
                       optimizer_transfer, criterion_transfer, use_cuda,
                       'model_transfer.pt')

# load the model that got the best validation accuracy
model_transfer.load_state_dict(torch.load('model_transfer.pt'))
```

Part C - Test the Model
The model can be validated against test data to calculate and print the test loss and accuracy. We should ensure that the test accuracy is greater than 80%, as the implementation in the FashionNet paper yielded an accuracy of 85%.

```python
def test(loaders, model, criterion, use_cuda):
    # monitor test loss and accuracy
    test_loss = 0.
    correct = 0.
    total = 0.

    model.eval()
    for batch_idx, (data, target) in enumerate(loaders['test']):
        # move to GPU
        if use_cuda:
            data, target = data.cuda(), target.cuda()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # update average test loss
        test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
        # convert output probabilities to predicted class
        pred = output.data.max(1, keepdim=True)[1]
        # compare predictions to true label
        correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
        total += data.size(0)

    print('Test Loss: {:.6f}\n'.format(test_loss))
    print('\nTest Accuracy: %2d%% (%2d/%2d)' % (100. * correct / total, correct, total))

test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)
```

Step 4 - Creation of user style vectors
This capability is the crux of a recommendation engine; it generates a feature vector for a particular user, based on images they have previously selected or liked, and subsequently compares future images to ascertain the similarity, or distance, from previous selections to recommend items that would be a good fit.
```python
## load attribute labels and their mappings
df_attributes = pd.read_csv('labels_attributes.csv')

# lists of attribute indices for each of the five attribute types
attr_pattern = []
attr_material = []
attr_fit = []
attr_cut = []
attr_style = []

for i in range(len(df_attributes)):
    attr_type = df_attributes.loc[i, 'attribute_type_id']
    attr_id = df_attributes.loc[i, 'attribute_id']
    if attr_type == 1:
        attr_pattern.append(attr_id)
    elif attr_type == 2:
        attr_material.append(attr_id)
    elif attr_type == 3:
        attr_fit.append(attr_id)
    elif attr_type == 4:
        attr_cut.append(attr_id)
    elif attr_type == 5:
        attr_style.append(attr_id)
```

Step 5 - Recommendation testing
Test the recommender system on sample images. It would be good to understand the output and gauge its performance; regardless, it can tangibly be improved by:

- data augmentation of the training dataset: adding flipped/rotated images would yield a much larger training set and ultimately give better results
- further experimentation with CNN architectures, which could potentially lead to a more effective architecture with less overfitting
- an increase in training epochs, which, given more time, would both grant the training algorithm more time to converge at the local minimum and help discover patterns in training that could aid in identifying points of improvement

```python
import urllib.request
import matplotlib.pyplot as plt

img = Image.open(urllib.request.urlopen('https://images.footballfanatics.com/FFImage/thumb.aspx?i=/productimages/_2510000/altimages/ff_2510691alt1_full.jpg'))
plt.imshow(img)
plt.show()

transform = T.Compose([T.Resize(150), T.CenterCrop(150), T.ToTensor()])
transformed_img = transform(img)

# the images have to be loaded in to a range of [0, 1]
# then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
normalized_img = normalize(transformed_img)

# add a batch dimension before passing the tensor to the model
tensor_img = normalized_img.unsqueeze(0)

# move image tensor to GPU if CUDA is available
use_cuda = torch.cuda.is_available()
if use_cuda:
    tensor_img = tensor_img.cuda()

# make prediction by passing image tensor to model
prediction = model_transfer(tensor_img)

# convert predicted probabilities to class index
tensor_prediction = torch.argmax(prediction)

# move prediction tensor to CPU if CUDA is available
if use_cuda:
    tensor_prediction = tensor_prediction.cpu()

predicted_class_index = int(np.squeeze(tensor_prediction.numpy()))
class_out = class_names[predicted_class_index]  # class_names: list of attribute labels

# The output would then be compared against the user's style vector to rank against other potential outfits
```

Important - Deleting the notebook
Always remember to shut down the notebook if it is no longer being used. Azure charges for the duration that a notebook is left running, so if it is left on there could be an unexpectedly large Azure bill (especially if using a GPU-enabled instance). If allocating considerable space for the notebook (15-25GB), there might be some monthly charges associated with storage as well.

Built With flask postgresql python Try it out github.com
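The README describes comparing each outfit against the user's style vector to produce a ranking but never shows the comparison itself. A cosine-similarity sketch of that final step (the five-dimensional attribute vectors and outfit names here are toy assumptions, not the project's data):

```python
import numpy as np

def cosine_similarity(u, v):
    # similarity in [-1, 1]; 1 means the outfit matches the user's style exactly
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_outfits(style_vector, outfit_vectors):
    """Rank candidate outfits by similarity to the user's style vector."""
    scored = [(name, cosine_similarity(style_vector, vec))
              for name, vec in outfit_vectors.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)

# toy 5-dim vectors over the (pattern, material, fit, cut, style) buckets
user_style = np.array([0.9, 0.1, 0.8, 0.2, 0.7])
candidates = {
    "plaid shirt": np.array([0.8, 0.2, 0.9, 0.1, 0.6]),
    "rain jacket": np.array([0.1, 0.9, 0.1, 0.8, 0.2]),
}
```

In the real pipeline, each candidate vector would come from the CNN's attribute predictions and the style vector from the user's previously liked images.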
What Are You Wearing Today? (WAYWT)
A recommender system targeted at giving outfit recommendations, drawing inspiration from the 'What Are You Wearing Today' (WAYWT) threads in the popular subreddit r/malefashionadvice.
['Calen Robinette', 'Arnesh Sahay', 'Juan Sebastian Arevalo', 'Chris J']
[]
['flask', 'postgresql', 'python']
73
10,371
https://devpost.com/software/hackgt-2020
[Screenshots: logo, menu page, cart page, checkout/payment screen, instructions page and table QR code, QR code demo on a restaurant table, Google Firebase, restaurant view of a customer table, restaurant menu editor, restaurant dashboard with metrics, Octobuzz NCR API access via Postman]

Inspiration:
With the challenges of COVID today, public health and safety have never been a bigger priority, which inspired us to rethink the modern dining experience. Many restaurants interact with numerous people every day, risking the health of both workers and customers. With our web app, we have created a contactless dining interaction to ensure safety and efficiency. This is important to us because many businesses, especially small businesses, have suffered due to the reduced number of people dining in, which motivated us to bridge the gap between keeping people connected digitally and ensuring safety is a priority. Furthermore, on the restaurant side, we help merchants reduce the cost of shipments and the number of employees working at a time: because the analytics feature on our dashboard tracks the busiest times, restaurants can use this data to adjust accordingly. This monitoring optimizes performance, saves money for restaurants, and prioritizes the safety of workers by having only the necessary number of people working at a time.

What it does:
Clean Eats aims to reimagine the dining experience by eliminating as much contact as possible to ensure safety, as well as to optimize restaurant shipments and scheduling by analyzing trends in the data, such as busy times, days of the week, and the most popular food items. With Clean Eats, restaurants can keep track of multiple tables and orders in real time. Likewise, customers can simply scan the QR code on a table and begin their dining experience completely paperless and contactless. Users can place orders after perusing the menu and add notes for dietary restrictions and allergies.
On the restaurant's end, staff confirm the order, and the user sees the stages of their food's preparation: Order Pending, Order Received, Food Ready, Meal Complete, and Table Sanitized. Once the customers have their food, they can request refills, call a waiter, order more food, or finish their dining experience, all at their fingertips. Want to dine in while ensuring your and others' safety? Want to continue supporting local restaurants without risking your health? No problem with Clean Eats!

How we built it:
For the frontend, we used ReactJS combined with Bootstrap for styling. We created two separate React apps, one to be used by a customer and the other by a participating restaurant. We did this because the two types of users have completely different functionality, but we had to connect them somehow, which is where the backend comes in. For the backend database, we used Google Firebase (the same database across both React apps) to hold various restaurants and their menus, which update in real time as restaurants change their items, and to store requests from different tables of customers. We also used Express.js for the backend server in conjunction with NodeJS and Firebase. Lastly, we used NCR's APIs with Postman to keep track of our catalogs and transactions.

Challenges we ran into:
The primary challenge was getting familiar with NCR's APIs and determining how we wanted to make POST requests through the frontend rather than through a backend server. However, we eventually realized that this was not the way the API was built to function, and therefore, although we already had the code for it, we had trouble accessing data from the API. We had to devise a plan to thoroughly understand how the APIs worked so that we could properly utilize them without previous experience.
Two of our four members did not have previous experience with React and Bootstrap, so the next challenge was balancing learning and developing the project. Before we could make much headway on the first night, we had to learn a lot about React, as well as the documentation for the APIs, so that we could create the most efficient solution.

Accomplishments that we're proud of:
We are extremely proud of implementing NCR's APIs effectively and seamlessly to create a solution to the problems of dining today. We are very proud of the execution of our idea, since we worked tirelessly to build the most complete project we could from our initial concept. Overall, the interface for both the restaurant view and the customer view looks clean and consistent, and the user experience is intuitive and efficient.

What we learned:
We learned a ton about the NCR APIs and how to incorporate them effectively into frameworks and tools we already knew. We learned a lot about React and Bootstrap so that we could make the user experience sleek, and more about Firebase for storing the state of a restaurant. Moreover, we learned how to collaborate more effectively as a team so that we could all apply the skills we already had to maximize time and efficiency.

What's next for Clean Eats:
We want to use Clean Eats on a large scale and enable any restaurant to use the platform. We are currently using one restaurant, Olive Garden, as an example, but ideally every restaurant will be able to sign up and upload their menu. Furthermore, we hope to continue developing the app to expand its use and make it applicable to more restaurants.

Built With bootstrap css express.js firebase html javascript ncr-catalog-api ncr-selling-api react Try it out github.com cleaneats.herokuapp.com
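The order lifecycle described above is a simple linear state machine: each table's order only moves forward, one stage at a time, as the restaurant confirms progress. A Python sketch of that logic (the stage names come from the description; the class itself is illustrative, since the real state lives in Firebase):

```python
STAGES = ["Order Pending", "Order Received", "Food Ready",
          "Meal Complete", "Table Sanitized"]

class TableOrder:
    """Tracks one table's order through the stages a customer sees in the app."""

    def __init__(self, table_id):
        self.table_id = table_id
        self.stage_index = 0

    @property
    def stage(self):
        return STAGES[self.stage_index]

    def advance(self):
        # only the restaurant advances an order, one stage at a time;
        # the final stage, Table Sanitized, is terminal
        if self.stage_index < len(STAGES) - 1:
            self.stage_index += 1
        return self.stage

order = TableOrder(table_id=12)
```

In the deployed app, each `advance` would be a Firebase write that both React apps observe in real time.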
Clean Eats
Contactless Dining Experience: Reimagining Dining in a COVID World
['Michael Ryan', 'Rachel Voirin', 'Shweta Murali', 'Alex Bulanov']
[]
['bootstrap', 'css', 'express.js', 'firebase', 'html', 'javascript', 'ncr-catalog-api', 'ncr-selling-api', 'react']
74
10,371
https://devpost.com/software/room-counter-k26deo
Screenshots: Example Detection; Empty Room; Dialog Choice Box. Inspiration Fighting against COVID-19 has resulted in many changes, both large and small, in the way we live our lives. In order to slow the spread of the virus, it has become important for businesses and public establishments to limit and keep track of the number of people in certain enclosed areas. This is often not an easy task, requiring an attendant stationed at each entrance at all times. Furthermore, such an approach puts the attendants at greater risk of contracting the virus themselves. There is a clear need for automating the tracking and counting of the exact number of people that enter or leave an area. What it does Room Counter is an automated system for detecting, tracking, and counting the number of people that enter or leave a room. Users place a laptop with its webcam observing an entryway, and our machine learning and tracking algorithm runs behind the scenes to detect occupants coming and going. A live video feed displays the webcam footage with the detections overlaid as they appear. How we built it First, we trained and deployed a compact object detection neural network in PyTorch to detect people in the frame. We used the well-known MS COCO and Pascal VOC datasets to train our model on an NVIDIA GPU. Then, we implemented a real-time tracking algorithm in Python to process incoming video data and object detections to count when people enter or leave the room. Lastly, we use Tkinter and OpenCV to display a simple, intuitive GUI. Challenges we ran into We faced many challenges during the implementation of this app. First, we had to train a model that was fast enough to run on a laptop CPU at a usable frame rate, while also being accurate and robust enough to support the tracking algorithm. We also struggled with the GUI. It was hard to find a way to stream video in real time while processing the video in the background.
Accomplishments that we're proud of We are very pleased with the back-end of the app, given the limited time we had. The neural network we trained is lightweight, but still accurate enough to produce reliable results. We are also glad that we found a solution for effectively displaying the video feed. Though our GUI is simple, it gets the job done. What we learned We need to become more familiar with GUI frameworks in Python so that we can build front ends with more features faster. Also, it is one thing to have functioning code in the back-end, and another thing entirely to have that code effectively deployed in the front-end. We spent a lot of time on our back end, and we ended up cutting it really close with the functionalities of our front-end. In the future, we need to give adequate consideration and planning to all aspects of our project instead of just the most obvious ones. What's next for Room Counter Given more time, we will be able to train a better, more robust neural network; this will make our tracking and counting even more reliable. We would also like to add additional functionality to our app, such as displaying occupancy statistics over time, or even communicating with other devices to cover several entrances and exits together. We see great potential and wide applicability in this project. Built With machine-learning opencv python pytorch tkinter Try it out github.com
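The tracking-and-counting step described above can be sketched independently of the detector. This is an illustrative snippet, not the project's code: detections are assumed to arrive as (x, y, w, h) boxes per frame, and the counting-line position and matching distance are made-up values.

```python
# Sketch of enter/exit counting: match detected box centroids to tracks
# between frames (nearest neighbor), count crossings of a horizontal line.
LINE_Y = 240       # y-coordinate of the counting line (pixels), assumed
MAX_DIST = 150.0   # max centroid movement between frames to match a track

class RoomCounter:
    def __init__(self):
        self.tracks = {}   # track id -> last centroid (x, y)
        self.next_id = 0
        self.entered = 0
        self.exited = 0

    def update(self, boxes):
        centroids = [(x + w / 2, y + h / 2) for x, y, w, h in boxes]
        new_tracks = {}
        for cx, cy in centroids:
            # Match this centroid to the nearest unclaimed track, if close enough.
            best, best_d = None, MAX_DIST
            for tid, (px, py) in self.tracks.items():
                d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
                if d < best_d and tid not in new_tracks:
                    best, best_d = tid, d
            if best is None:
                best = self.next_id   # new person appeared; no crossing yet
                self.next_id += 1
            else:
                px, py = self.tracks[best]
                # A centroid crossing the line changes the occupancy count.
                if py < LINE_Y <= cy:
                    self.entered += 1
                elif cy < LINE_Y <= py:
                    self.exited += 1
            new_tracks[best] = (cx, cy)
        self.tracks = new_tracks

    @property
    def occupancy(self):
        return self.entered - self.exited
```

Feeding it one list of boxes per webcam frame keeps a running occupancy total without any per-frame identity labels from the model.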
Room Counter
An automated approach for maintaining building occupancy guidelines during the global pandemic.
['Eric Gu', 'Arvind Srinivasan', 'bbgrg Gorti']
[]
['machine-learning', 'opencv', 'python', 'pytorch', 'tkinter']
75
10,371
https://devpost.com/software/schedadle
Screenshots: Adding a new task to your schedule; Organization algorithm; Displaying the newly created schedule. Inspiration Georgia Tech inspired us to streamline tedious tasks in our daily lives, more specifically, creating a schedule. With COVID-19, we found that creating a daily routine became difficult when our day was completely in our hands. We wanted to develop a web application that would tailor our schedules to suit our preferences, needs, and dynamic timelines. What it does Schedadle creates a schedule for the user, adjusting it throughout the day as plans change. The user inputs their daily tasks into the system, stating what needs to get done, how long each task takes to finish, its deadline, and whether the task is fixed (i.e. meetings, classes) or dynamic (i.e. homework assignments). After filling out all of the data, the rest is up to the system. Schedadle develops a personalized schedule that organizes necessary tasks based on urgency and alters it throughout the day as time commitments shift. How I built it We combined HTML, CSS, and JavaScript on the front-end with a Python-based web server on the back-end to develop our web application. Challenges I ran into Originally, we wanted to use React and Flask to develop our web application. However, throughout the first hacking day, we had issues setting up a database and integrating React and Flask together. We tried running some sample code once we had installed both React and Flask, but we were unable to display data from the server on the screen. Another issue was the integration of two team members' front-end work. After individually completing their parts of the front-end design, we combined their designs to form our complete web application. However, their different coding styles caused trouble during the integration. Accomplishments that I'm proud of We are very proud of the progress we made on our project.
We didn't anticipate learning as much as we did in the last 24 hours, but we surpassed our expectations upon entering the hackathon. All of us learned different coding languages and a new aspect of web development, and we are very proud of our idea. What I learned We come from different backgrounds with varying coding experience, some more than others. While it was difficult to create a project with minimal coding knowledge, we learned multiple different languages, from Python to JavaScript to HTML, during this project. We also learned how to collaborate on a single coding project, especially with ours having so many different components that required us to split up the work and combine it at the end. What's next for Schedadle Next up, we want Schedadle to do more than just schedule for individuals: we want it to be able to coordinate schedules among groups of people. By syncing schedules with friends, users will be able to generate schedules in a way that lets groups plan events together. We want to personalize the website more as well, generating multiple possible schedules so the user can pick the best one for them. Lastly, we want to tailor it to improve the user's productivity over time, adding an Overview function that tells the user how much they deviated from their planned task times. There are multiple features we want to incorporate into our website, and we will continue to work on the project outside of HackGT and refine it as we become more experienced. Built With css flask html javascript Try it out github.com
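One way to realize the urgency-based organization described above is an earliest-deadline-first fill of the gaps between fixed events. This is an illustrative sketch, not the project's actual algorithm; times are minutes from midnight and the function names are hypothetical:

```python
# Greedy day planner: fixed events (meetings, classes) anchor the day;
# dynamic tasks are slotted into the gaps in earliest-deadline-first order.

def build_schedule(fixed, dynamic, day_start=540, day_end=1320):
    """fixed: list of (name, start, end); dynamic: list of (name, duration, deadline).
    Returns a list of (name, start, end) blocks for the day (9:00 to 22:00 by default)."""
    events = sorted(fixed, key=lambda t: t[1])
    pending = sorted(dynamic, key=lambda t: t[2])   # earliest deadline first
    result = []
    cursor = day_start
    for name, start, end in events + [("end of day", day_end, day_end)]:
        # Fill the free gap before this fixed event with pending tasks.
        while pending and cursor + pending[0][1] <= start:
            tname, dur, _deadline = pending.pop(0)
            result.append((tname, cursor, cursor + dur))
            cursor += dur
        result.append((name, start, end))
        cursor = max(cursor, end)
    result.pop()   # drop the "end of day" sentinel
    return result
```

Re-running the function whenever a commitment shifts gives the "adjusting it throughout the day" behaviour described above.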
Schedaddle
Tired of making your own schedule? Schedaddle offers a quick and easy way to automatically generate a plan for the day, allowing you to get up and get a move on!
['Jeffrey Shao', 'Eddy Wang', 'Anna Zhu', 'Owen']
[]
['css', 'flask', 'html', 'javascript']
76
10,371
https://devpost.com/software/medentifier
Screenshots: Main View; App's prediction; Confusion Matrix for our model (trained with AutoML). Inspiration Both members of the team have connections and experience in the health care field, and wanted to work with machine learning technology. Creating a tool to identify household medicine seemed like an achievable foundation for a project which could become much more. What it does Compares photographed medicine with the medicines included in the model. How I built it We used Google Cloud's AutoML to generate the model and React Native to create an Android/iOS interface, which compares images against the model using TensorFlow.js. Challenges I ran into Understanding the data structures associated with machine learning proved challenging, as both team members were novices in this area of expertise. Likewise, we needed to retrain the algorithm after early testing, which took up an immense amount of time. Accomplishments that I'm proud of The labeled datasets were partially collected with a data scraper for the initial training of the algorithm. Thousands of additional photos were later added, taken in various environments around one of the team members' houses. This resulted in significantly higher accuracy for the medicines which were photographed (although, unfortunately, this only includes 5 of the 20 medicines in our data set). What I learned Effective machine learning takes a lot of time; ideally this data should be collected as early as possible in a hackathon or time-limited event. The majority of the coding should be done while the model trains, rather than before. What's next for Medentifier We'd like to add user-photographed images to our ML database, as well as provide information to the user when it is not clear which medicine the user has photographed. In these edge cases, we'd like to prompt for additional information to aid in identifying the medicine (such as prompting the user to place a quarter for scale, or requesting that the user enter the numbers on the pill).
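The planned fallback for unclear predictions could be implemented as a simple confidence threshold. This Python snippet is purely illustrative (the app itself runs TensorFlow.js in React Native, and the 0.8 threshold and function name are assumptions):

```python
# Sketch of the "unclear prediction" fallback: accept the top class only
# when the model is confident enough, otherwise ask for more information.

def identify(predictions, threshold=0.8):
    """predictions: dict of medicine name -> confidence. Returns a match
    or a prompt asking the user for a disambiguating photo."""
    best = max(predictions, key=predictions.get)
    if predictions[best] >= threshold:
        return best
    return "unclear: please photograph the pill next to a quarter for scale"
```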
Built With automl expo.io react-native tensorflow.js Try it out github.com
Medentifier
A neural-network-powered solution for identifying medicine. Primarily intended for healthcare professionals and the elderly.
['Tucker Adkison', 'Ezra Hill']
[]
['automl', 'expo.io', 'react-native', 'tensorflow.js']
77
10,371
https://devpost.com/software/interactive-chess-in-vr
Inspiration Our project is inspired by the recent boom in small-group social networking sites like Gather.town and Zoom helping individuals connect over distances in a more intimate manner. Knowing that our goal was to work with WebVR to some degree over the weekend, we made the natural move to combine the two ideas, and here we are! What it does Our web app allows a game of chess to be played in a VR environment, accessible through the browser on desktop and mobile devices. While chess is naturally a two-player game, our innovation comes in having a shared, real-time playspace, providing an experience much more akin to the natural playing/viewing rhythm of a standard chess game. How I built it The platform heavily utilizes A-Frame, a WebVR JS library for visual rendering, along with multiple open-source components to extend its base functionality, such as networked-aframe for our "room" functionality and Socket.io as our real-time data transfer framework. Chess logic and move validation are provided by chess.js in headless form, realized through our VR space. The whole project runs on a Node server instance, able to be containerized and deployed on most web server platforms available (as long as HTTPS is enabled, for WebVR security reasons). Challenges I ran into We struggled with time constraints, scope creep, and learning hurdles throughout this entire project. Having reached our deadline, we feel we could have better organized our expectations and time while building our project, knowing that this work was exploratory in nature for all members of our group (having never worked with the technically complex libraries associated with WebVR before this!). Regardless, this hackathon has been an invaluable learning experience for our team members! Accomplishments that I'm proud of The first time we saw multiple members moving in a room in real time through a VR space hosted on the web felt like technical wizardry, and a real breakthrough success in the project.
It gave us a proper direction of what we wanted to build, and that vision held us through to the end. What I learned A strong guiding light in a project's vision is invaluable for development. Technology may change, processes may change, but the design goals (once set) provide an incredibly valuable safety net for when things break during crunch time! - Wade K. The possibilities for connective experiences on the web are infinite! We can take a simple idea and create an experience that can be shared between multiple users. - Lawrence W. What's next for Multiplayer VR Chess Next, we want to build the community further. We want to create interaction circles so that users only send their audio to other users close to them. We would also like to create a more interactive scene where users can explore other rooms, and possibly work towards creating a VR campus environment. Built With a-frame html javascript vr Try it out github.com
Multiplayer VR Chess
During this pandemic, it has been harder to interact with the community. In this project, we created an environment where you and your friends can come hang out virtually and play chess.
['Lawrence Williams', 'Leo Sorkin', 'Rachel Mittal', 'Wade Kaiser']
[]
['a-frame', 'html', 'javascript', 'vr']
78
10,371
https://devpost.com/software/pizza-hot
Screenshots: Logo; Workflow. Inspiration How can we better help chain restaurants maximize their resources and sales? By maximizing the experience of their customers! And by that we mean lowering their waiting time, giving them recommendations on locations/routes based on their preferences and current world status, and eliminating the chance of any unpleasant experience like extended waiting times or a canceled order due to a particular restaurant running out of material. What it does Pizza-Hot is a centralized store management and order recommendation system for both stores and customers. On the store side, it closely monitors which stores are open, the status of current orders, their inventory levels, and the current waiting lines in real time. On the other hand, for the customers, it recommends a ranked list of restaurants where they could pick up their food, based on their choice of food, their current location and possible future destination, their personal preferences, and the status of the stores. The system is carefully designed in a way that is fast and scalable. It is ready to be scaled up as is to serve restaurants and their customers. How we built it We built it with a Django-powered API and a centralized store and order management server written in Python. Challenges we ran into It was our first time using Django, and understanding and designing the structure of the API and the backend server took us a really long time. Accomplishments that we're proud of The amount of new concepts we learned about web development and the new tools we got familiar with is by itself something we're really proud of, and the fact that we could ship a complete product in the end makes the experience even better. What we learned The full-stack development workflow, the Django framework, Docker, Redis, and much, much more. What's next for Pizza Hot The backbone is there, but we simplified/idealized a lot of routines in the development process.
For example, for the travel time estimation we would like to query the Google Maps API, but as a proof of concept we just did a quick estimation based on distance. There are also lots of features we had in mind yet didn't have enough time to implement. We would also like to rebuild the UI itself. Built With django html javascript python socketserver Try it out github.com
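The distance-based travel-time estimate mentioned above, combined with each store's queue, is enough to rank pickup choices. A minimal sketch, assuming straight-line distance at an average driving speed; the speed constant and function names are illustrative, not from the project:

```python
# Rank stores so the customer's total wait (drive time vs. kitchen queue,
# whichever is longer) is minimized. Positions are (lat, lon) in degrees.
import math

AVG_SPEED_KMH = 30.0   # assumed average driving speed

def travel_minutes(customer, store):
    """Rough estimate: one degree of lat/lon is about 111 km."""
    dx = (customer[0] - store[0]) * 111.0
    dy = (customer[1] - store[1]) * 111.0
    return math.hypot(dx, dy) / AVG_SPEED_KMH * 60.0

def rank_stores(customer, stores):
    """stores: list of (name, (lat, lon), queue_minutes). Lowest total wait first."""
    def total_wait(s):
        _name, loc, queue = s
        # The customer waits for whichever finishes later: the drive or the kitchen.
        return max(travel_minutes(customer, loc), queue)
    return sorted(stores, key=total_wait)
```

Swapping `travel_minutes` for a real Google Maps query would upgrade the proof-of-concept estimate without touching the ranking logic.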
Pizza Hot - Get your Pizza while it's still hot!
An Integrated Comprehensive Order Management and Recommendation System for Chained Restaurants
['Ray Lei', 'Mona Yao']
[]
['django', 'html', 'javascript', 'python', 'socketserver']
79
10,371
https://devpost.com/software/myfoodprint
Screenshots: Shop Catalog; Shopping Cart Page; Homepage. Inspiration When our group was brainstorming ideas for our hack, we decided we wanted to incorporate some sort of large-scale issue into our project. We realized we were all passionate about saving the environment, so we quickly settled on this area. We talked about how most people overlook the role that food production and transportation play in the production of greenhouse gases (over 38%). Because of this realization, we decided we wanted to raise awareness for this cause, and we tied in the nutrition/competitive aspects along the way as incentives. What it does MyFoodprint allows a user to shop for various food items in an online store environment. At checkout, the user is able to see the carbon emissions created by their order alone, in addition to the nutritional value of their grocery list. With these statistics, users will be able to compete against friends who also use the web app. How I built it MyFoodprint was built using HTML, CSS, and JavaScript for the frontend. The backend was made with jQuery and Node.js using Firebase's Cloud Firestore, Authentication, and Cloud Functions APIs. Challenges I ran into One of the main challenges we ran into was dealing with the front end and figuring out how to work with HTML and CSS to achieve our desired result. For the backend, we had to make sure to format data properly so that it would not cause any errors. Accomplishments that I'm proud of Since this was our first collective hackathon, we were super proud to have a polished final product to turn in. Also, we gained several valuable skills along the way, including the basics of web development and various languages associated with web dev (HTML, JavaScript, etc.). What I learned By participating in HackGT 7, many of our group members learned the fundamentals of front-end as well as back-end web development.
It was each group member's first hackathon, so we were very proud to have finished a polished-looking project. While all of us had coding backgrounds to some degree, we all learned something new in the vast field of computer science. What's next for myFoodprint We want to add a fully fledged social media aspect to myFoodprint where users will be able to have friendly competitions with each other based on their myFoodprint scores. In addition, we want to incorporate the NCR BSP API for creating a store and making orders for customers. Built With cloud-firestore css firebase html javascript jquery node.js npm Try it out myfoodprint.online github.com
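The checkout emissions figure described above boils down to multiplying each cart item by a per-food carbon factor. The app itself is JavaScript; this Python sketch is illustrative only, and the factors shown are assumed values, not the app's real data:

```python
# Illustrative per-kilogram carbon factors (kg CO2e per kg of food).
# These numbers are placeholders for whatever the app stores in Firestore.
EMISSION_FACTORS = {"beef": 27.0, "chicken": 6.9, "rice": 2.7, "lentils": 0.9}

def cart_footprint(cart):
    """cart: list of (item, kg). Returns the order's total kg CO2e;
    items without a known factor contribute zero."""
    return sum(EMISSION_FACTORS.get(item, 0.0) * kg for item, kg in cart)
```

A friend-vs-friend competition then reduces to comparing `cart_footprint` totals across users.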
myFoodprint
Helping you buy groceries that are best for you while helping the planet with your friends!
['Navaneet Kadaba', 'Nat Wertz', 'Kavya Jade', 'Anuhya Kasam']
[]
['cloud-firestore', 'css', 'firebase', 'html', 'javascript', 'jquery', 'node.js', 'npm']
80
10,371
https://devpost.com/software/produce-pal-ctrl-alt-elite
Inspiration We focused on two main goals: helping small businesses thrive and helping restaurants and grocery stores handle surges in online orders. We see local businesses thriving with better foods as a result of their connections with local farmers, because they will have more access to healthier and fresher ingredients. When restaurants or grocery stores experience large increases in orders, they can turn to Produce Pal to find local farmers to provide additional produce. What it does Produce Pal allows restaurants to interact with local farmers by searching the farms' catalogs to find produce. The website also allows restaurants to make encrypted, secure transactions via a third-party system to pay local farmers for produce. How we built it We designed the prototype for this website in Figma and developed the app using React and NCR's APIs, specifically BSP. Challenges we ran into: NPM configuration errors, POST request issues on BSP, connection problems, and time management. Accomplishments that we're proud of: collaborating to resolve issues related to GET and POST requests on BSP, and working through issues related to yarn and npm on the React site. What we learned: how to work with APIs, how to use the "prototype" feature to link elements in Figma, and the art of collaboration. What's next for Produce Pal (Ctrl Alt Elite) In the future, we would like to create our own encrypted and secure payment system within the platform. Additionally, we want to implement our own delivery system between customers that is guaranteed to be safe. Built With bsp figma javascript json ncr-api python react Try it out github.com
Produce Pal (Ctrl Alt Elite)
Produce Pal is the perfect one-stop platform for small restaurants to buy fresh, high-quality local foods.
['YeshaT Thakkar', 'Eshani Chauk', 'Rahul S. Deshpande']
[]
['bsp', 'figma', 'javascript', 'json', 'ncr-api', 'python', 'react']
81
10,371
https://devpost.com/software/newsable
Inspiration We thought that the constraints from NewsQ looked very stimulating and intriguing, and we have a passion for showing the neutral side of an issue through news, so we decided to find a way to do so. What it does Our iOS app, created in Xcode, allows users to input links to news articles and then web-scrapes the article itself as well as information about it, such as the news source's title. With that information, our app uses sentiment analysis to measure the bias of the article. From the information we collect in the analysis, the app has another screen that uses data analysis and ML to display a list of news sources and articles ranked by the value calculated from the sentiment analysis algorithm. In essence, users then have an unbiased, computer-created analysis with which to more accurately judge the validity of the articles they view. How we built it We built the app with Xcode and Swift, implementing machine learning and web scraping. We did extensive research into how to implement certain processes that we needed. We used GitHub to share the code amongst teammates. Challenges we ran into We were unfamiliar with several elements of our app at first. We had minimal experience with CoreML, so it took time to understand how to use it properly with our JSON file. Additionally, working at the intersection of HTML files and an iOS app was particularly challenging as well, especially converting those HTML files into a format for our model to interpret. Accomplishments that we're proud of We are really proud that we could tie two very different aspects of computer science together. Combining classification models and machine learning with the connections between HTML and website data and our iOS app was our proudest accomplishment. What we learned We learned a ton of information about ML, and even exploring HTML files in iOS was quite informative.
What's next for Newsable For the future of Newsable, we want to be able to implement a search algorithm for articles so that the user only needs to know the title of the article, and the app will find the hyperlink of the article. Additionally, we want to widen the range of our sentiment analysis, so as to more accurately predict the bias of an article. Or we could even go so far as to generate a modified version of the article, rewritten in an unbiased fashion. Built With coreml html ml scraping swift xcode Try it out github.com docs.google.com
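The app's sentiment analysis runs in Swift with CoreML, but the underlying idea of scoring loaded language can be sketched in a few lines. The word list and weights below are toy assumptions for illustration only:

```python
# Toy lexicon-based bias scorer: emotionally loaded words push the score up,
# neutral attribution words pull it down. Weights are invented for the sketch.
BIASED_WORDS = {"outrageous": 1.0, "disaster": 0.8, "terrible": 0.7,
                "amazing": 0.7, "reported": -0.3, "according": -0.3}

def bias_score(text):
    """Average loaded-language weight per word; values near 0 suggest neutral wording."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(BIASED_WORDS.get(w, 0.0) for w in words) / len(words)
```

Ranking articles by this kind of score, learned rather than hand-written, is what the CoreML model does in the real app.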
Newsable
Our iOS app allows users to input links of news articles, uses sentiment analysis, and measures the bias of the article to provide an unbiased perspective of the article.
['Ved Sampath', 'Shubhankar Baliyan', 'Raj Janardhan']
[]
['coreml', 'html', 'ml', 'scraping', 'swift', 'xcode']
82
10,371
https://devpost.com/software/ceres-hgjqnx
Inspiration Ceres was inspired by combining 5+ NCR problem statements to create a disruptive and innovative solution within food delivery services. Some of these include the problems noted above, as well as how Ceres' waste minimization aids small businesses especially, mitigating monetary losses in the difficult time of COVID-19. The name Ceres comes from the Roman goddess of agriculture and grain crops. What it does Our platform enables businesses, underserved individuals, and consumers to alleviate waste together. Businesses from grocery stores to restaurants to gas stations and more all have food stocked with impending expiration dates. Ceres enables this broad range of business stakeholders to sell food that is nearing expiration at lower prices to entice consumer purchases. As a delivery order application, underserved individuals have the opportunity to be the middlemen who deliver orders to consumers, receiving Ceres funds in return to bring food to the table. As a result, Ceres alleviates the environmental, economic, and societal impact that food waste has. How We Built It Using React Native and NCR's technologies, we divided the project into components for each team member to conquer. Challenges We Ran Into From the start, we were all new to React Native. Our first hurdle was to learn and then implement the framework. After adjusting to React, we began our individual tasks, each with its own challenges. On the backend, we encountered difficulty setting up our authentication system with MongoDB. On the front end, we briefly struggled to have the pages update accordingly with user input and to be able to navigate between them. With perseverance and teamwork, we were able to overcome these issues and many more. Accomplishments We are proud of investigating and applying React Native to our project. Learning a new technology is always a favorable outcome from a hackathon, especially when working remotely as a team.
Additionally, we are pleased with being able to introduce a new concept to help multiple communities and our environment in a joint effort. What We Learned We learned new technologies for full-stack mobile/web development, teamwork, communication in a virtual environment, and persistence. What's Next for Ceres? For Ceres to create an impact, we believe that iterating over its UI for a seamless experience is necessary. The core concept and functionality exist. We also believe expanding to other sectors related to food waste is possible, for example, creating a system where nearby and available composting businesses can be contacted about expired food. Ceres is a modern solution with an abundance of beneficial future developments and add-ons. Built With java javascript mongodb ncr-design-system ncr-digital-banking objective-c react-native ruby shell starlark Try it out github.com
Ceres
REIMAGINE the food delivery industry to minimize merchant waste, aid those needing assistance in acquiring meals, and provide cheaper consumer eats all in one new mobile application, Ceres!
['Varun Kulkarni', 'Sumedh Garimella', 'Josh Landsman', 'Gabby Germanson']
[]
['java', 'javascript', 'mongodb', 'ncr-design-system', 'ncr-digital-banking', 'objective-c', 'react-native', 'ruby', 'shell', 'starlark']
83
10,371
https://devpost.com/software/among-buzz
Inspiration Months of staying inside due to COVID-19 have led to a drastic rise in virtual party games, and with restrictions relaxing and six-foot social distancing outdoors becoming the norm, we felt that there was a desperate need for technology to promote fun and safe physical activity away from the desktop. Our hope is that Among Buzz will help groups of friends stay connected while also rediscovering the benefits of fresh air. What it does Among Buzz completely reinvents the game of tag to include "tasks" at pre-defined locations as well as a covert and distanced mechanism of assassination. Instead of simply chasing friends around in the park, the game gains an added level of strategic interplay as the taggers, each known as "Buzz" since the app was developed at a Georgia Tech hackathon, remain anonymous. Our app uses Google Maps to display each player's current location, task locations, and all nearby players that can be interacted with. Additionally, a unique game code can be generated and then used to add privacy to the lobby. How we built it The majority of our project was built using React for the frontend and Flask for the backend. We also called the Google Maps API and used SQLAlchemy to build our database. Challenges we ran into Our main issue came with learning how to actually get our game and player data to our Flask backend using React fetch requests, as well as getting our endpoints up and running in the first place. Furthermore, constantly pulling data and pushing continuous updates of player locations to the map UI was surprisingly difficult. Finally, we ran into several roadblocks with styling, particularly in regards to adding markers to the map. Accomplishments that we're proud of We're proud of the fact that we created a database that users can directly interact with through our webapp. What we learned We all came in with a variety of skills, so each member gained exposure to new areas such as React, Flask, and calling Google APIs.
What's next for Among Buzz Currently, tasks are not unique to each player, nor are they more actionable in any way than simply going to a physical location. Our goal is to integrate AR and other solutions to build a variety of tasks that make sense based on the surrounding environment of the player so that the app truly feels connected with the physical world. Furthermore, we would like to create iOS and Android versions of the webapp in order to reach more users. Built With css flask google-maps html javascript react sqlalchemy Try it out github.com github.com
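The "nearby players that can be interacted with" check described above reduces to a distance test on player coordinates. A hedged sketch of that backend logic (the 15 m interaction radius and function names are assumptions; the real app spreads this across a React frontend and Flask backend):

```python
# Find players within interaction range of the current player.
# Positions are (lat, lon) in degrees.
import math

INTERACT_RADIUS_M = 15.0   # assumed interaction radius

def meters_between(a, b):
    """Equirectangular approximation; plenty accurate at tag-game distances."""
    lat = math.radians((a[0] + b[0]) / 2)
    dx = math.radians(b[1] - a[1]) * math.cos(lat) * 6371000.0
    dy = math.radians(b[0] - a[0]) * 6371000.0
    return math.hypot(dx, dy)

def nearby_players(me, others):
    """others: dict of name -> (lat, lon). Returns names within interaction range."""
    return [n for n, pos in others.items()
            if meters_between(me, pos) <= INTERACT_RADIUS_M]
```

Running this against the latest stored locations on each position update is what makes tags and task interactions possible over the map.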
Among Buzz
Although one of the most popular games of all time, tag needs a reinvention. We're proud to introduce Among Buzz, the geolocating web app that combines tag with the popular game Among Us.
['Athena Wu', 'Charlie Luo', 'Alex Hobby', 'Sana Verma']
[]
['css', 'flask', 'google-maps', 'html', 'javascript', 'react', 'sqlalchemy']
84
10,371
https://devpost.com/software/virtual-appointment-and-testing-suite-vats
Inspiration Our main inspiration for VATS was a mix of things. Both Paul and I have always found it silly that people need to spend time and money to get an appointment for something as simple as a vision test, and with the current circumstances of coronavirus and a rise in the number of people unwilling to go out unless absolutely necessary, we thought a VR solution to this issue would be just what the doctor ordered. Additionally, we were drawn towards this idea by the mini challenge hosted by Anthem. What it does The goal of VATS was to implement four main things: a virtual environment that is visually appealing, sensory and physical tests such as ones for vision and hearing, the ability to accomplish clerical tasks like renewing prescriptions, and even a built-in chat if you need to speak with a medical professional. How we built it The core of building VATS lay in integrating the Oculus Quest with the Unity game engine via a Link cable. From there we developed every line of code ourselves from scratch by writing scripts in C#. Additionally, we used modeling software such as Blender and Qubicle to create all of our own assets. Challenges we ran into The main challenge we ran into was that of time: we took on a large load of work with a relatively lofty goal, especially since neither Paul nor I had experience developing VR software in the past, which led to us not being able to implement every feature we wanted. Accomplishments that we're proud of The main accomplishments we are proud of are making a VR program work with an interactable environment, and the integration of the sensory tests. The sensory tests were always the most important part to us because they were the main inspiration for us wanting to make a virtual office, as we believe they were the part of the process of going to the doctor's office that most needed to be made more efficient. What we learned We learned quite a bit from this hackathon.
From a technical perspective, both of us learned how to create a fully working virtual reality program. And from a design, project management, and timing perspective, we both learned that planning ahead even when in a time crunch is key, and that focusing on what you can do when you can do it is another essential skill that leads to the maximum amount of work getting accomplished. What's next for Virtual Appointment and Testing Suite - VATS In our eyes, what's next for VATS is two main things. First, a completion of all of its intended features and perhaps the addition of a few more improvements. Second, we believe that it is a program that could become widely adopted in coming years; with new technologies able to track things about your body becoming cheaper and more advanced, we believe VATS could be a highly effective free physical testing program as well as an efficient method for non-serious doctor's visits. Built With audacity c# dropbox qubicle unity Try it out tinyurl.com
Virtual Appointment and Testing Suite - VATS
VATS is a VR program that simulates a doctor's office, allowing people to safely, efficiently, and freely conduct physical/sensory tests, prescription renewals, and chat with a doctor if need be.
['Derrick Adams', 'Paul Barsa']
[]
['audacity', 'c#', 'dropbox', 'qubicle', 'unity']
85
10,371
https://devpost.com/software/retail_insight
general overview Inspiration Helping businesses to grow and thrive. What it does This project helps grocery stores predict their sales. It also generates reports for predicting the number of items on their shelves. How I built it After browsing several datasets, I decided to focus on a dataset from Kaggle that allows me to build a generalizable, data-driven model for predicting sales. I performed feature engineering to create more features and utilized a linear regression model for prediction. I also extended the idea by creating plots that help businesses predict the number of items on their shelves and estimate the lost demand. Challenges I ran into Not having enough time to explore other ideas that are extensions of this project. Accomplishments that I'm proud of This is my first Hackathon experience. What I learned Thinking about new ways to apply data analytics skills. What's next for Retail_Insight Utilizing the correlation of similar items across different stores to improve the prediction. Built With git github numpy pandas python scikit-learn seaborn Try it out github.com
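The prediction step described above — engineered features fed into a linear regression — can be sketched as follows. This is a minimal illustration that uses NumPy's least squares in place of scikit-learn, and the sales figures and the trend/lag features are made up, not taken from the Kaggle dataset.

```python
import numpy as np

# Illustrative weekly unit sales for one item; not the actual Kaggle data.
sales = np.array([12.0, 14.0, 13.0, 17.0, 16.0, 19.0, 21.0, 20.0])

# Simple engineered features: intercept, week index, and previous week's sales (lag-1).
weeks = np.arange(1, len(sales))
X = np.column_stack([np.ones(len(weeks)), weeks, sales[:-1]])
y = sales[1:]

# Ordinary least squares fit (what a LinearRegression model solves under the hood).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict the next week's sales from the latest observation.
next_week = np.array([1.0, float(len(sales)), sales[-1]])
prediction = float(next_week @ coef)
print(round(prediction, 1))
```

In the real project a scikit-learn `LinearRegression` plays the role of `lstsq` here, but the fitted coefficients are the same ordinary-least-squares solution.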
Retail Insight
This project helps grocery stores predict their sales. It also generates reports for predicting the number of items on their shelves.
['Rojin Aliehyaei']
[]
['git', 'github', 'numpy', 'pandas', 'python', 'scikit-learn', 'seaborn']
86
10,371
https://devpost.com/software/firepit
yoooo Built With cocoapods firebase google swift
GBF
GBf
['Abrar Hoque', 'Faiz Syed']
[]
['cocoapods', 'firebase', 'google', 'swift']
87
10,371
https://devpost.com/software/day-flow
REST framework endpoint Django admin panel Our API specifications Add a new task Login screen Daily flow Inspiration When considering the many challenges of virtual education, we realized that mental health and time management issues are not often discussed together despite their impact on each other. Many students have a hard time structuring their days during quarantine and struggle with completing schoolwork. Students may also be isolated from family, friends, peers, and teachers, which can negatively impact education and mental health. This line of thinking led us to our app which helps students complete tasks by blocking out time to work and suggesting wellness activities, such as exercise, healthy snack breaks, and socializing. What it does When a user first opens the Day Flow app on their phone, they will come across a login screen. We added this functionality to protect our user data and ensure a high level of security when using the app. After logging in, they will be in Day View. In this view, they are able to scroll through their daily tasks and recommended wellness activities. Each box represents a task or activity that a student can click on to enter a work session. The user can also add a new task to be broken down and recommended in the following days. The backend database stores information associated with users, such as their tasks. How I built it After generating the idea, we created user stories for the prototypical use case for our target audience. Next, we developed a series of API specifications for how the front-end application would communicate with the backend. From there, we split the team into frontend and backend groups. On the frontend, we used pencil and paper, Sketchbook, and JustInMind to brainstorm and prototype screens. After defining key design elements, we began creating our Vue Native app with support from Node.js, NPM, and Expo.io. 
On the backend, we investigated technologies such as Azure Functions, Django, Azure Web Apps, and SQL. We decided to use Django, since most of us were familiar with Python. After implementing some of our endpoints in Python, we worked on deploying the API-serving application to Azure Web Apps, and then we connected it to a PostgreSQL database on Azure. Challenges I ran into For the frontend group, our main challenge was learning a new development framework. We had never used Vue Native before and were unfamiliar with the structure of these types of apps as well as the React Native code it is based on. We also struggled to find documentation for different UI elements and to debug elements and functionalities that did not work as expected. For the backend group, challenges included learning how to write a Django application, create views for the endpoints, and then deploy it on Azure. After making some modifications and learning about production versus local development settings, we were able to successfully deploy on Azure, as well as migrate from the local SQLite database to the Azure PostgreSQL database. Unfortunately, this took some time, and as such we were not able to fully develop all of the planned endpoints. However, Azure continuous integration/continuous deployment through GitHub works like a charm, so we don't anticipate many issues with deployment with updated code. Accomplishments that I'm proud of The frontend group is proud of the planning we put into the app, from detailing user stories to drawing out different UI ideas. We are happy with how much we learned about an unfamiliar development framework in such a short period of time. The backend group is particularly proud of being able to deploy a working backend to Azure. In addition, we're proud of the clear API specification that is hosted on SwaggerHub. Since the backend is live, we could even test the backend through the API specification on SwaggerHub, adding and removing users, tasks, etc.
What I learned We learned about the Azure ecosystem, as well as all of the tools that Azure provides to make deployment easier. While we've individually programmed in Python and seen some work done in Django, we've never deployed into production before. Learning about how easily we can continuously deploy builds on Azure, as well as the flexible services that are provided, is something that will be of definite use in the future. We also learned about developing cross-platform UIs and how these apps are structured. What's next for Day Flow For the front end group, we need to complete screens and functionality for all user stories, as well as fix formatting issues among UI elements. For the backend group, we will need to finish implementing views for the rest of the endpoints, and then conduct unit testing and integration testing of the endpoints. Then, once the frontend is integrated with the backend, we can implement end-to-end testing as well. Built With azurepostgresql azurewebapps django expo.io node.js npm pythonwebframework swaggerhub vuenative Try it out github.com
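The core scheduling idea — breaking a new task into manageable work blocks spread over the days before its deadline — might look roughly like the sketch below. The function name, the 25-minute block size, and the even-spread policy are illustrative assumptions, not the team's actual Django logic.

```python
from datetime import date, timedelta

def break_down_task(total_minutes, deadline, today, block_minutes=25):
    """Split a task into fixed-size work blocks and spread them round-robin
    over the days remaining before the deadline (illustrative logic only)."""
    days_left = max((deadline - today).days, 1)
    n_blocks = -(-total_minutes // block_minutes)  # ceiling division
    schedule = {}
    for i in range(n_blocks):
        day = today + timedelta(days=i % days_left)
        schedule[day] = schedule.get(day, 0) + 1
    return schedule

# A 2-hour assignment due in 4 days becomes five 25-minute blocks.
plan = break_down_task(total_minutes=120, deadline=date(2020, 10, 23),
                       today=date(2020, 10, 19), block_minutes=25)
print(plan)
```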
Day Flow
Day Flow provides structure for students during virtual education by breaking down tasks into manageable time blocks. The app aids mental health by suggesting wellness breaks throughout the day.
['Sabrina Wilson', 'Michael Liang', 'Aishwarya Palekar']
[]
['azurepostgresql', 'azurewebapps', 'django', 'expo.io', 'node.js', 'npm', 'pythonwebframework', 'swaggerhub', 'vuenative']
88
10,371
https://devpost.com/software/smart-customer-arrival-based-order-fulfillment
GTHack Logo Inspiration During peak hours, a large number of orders come in at once at restaurants. Waiting in the car for an order made 30 minutes ago is frustrating. Our solution provides restaurants with a priority queue that takes into account the estimated arrival time and estimated food preparation time to give suggestions to the kitchen on how to prepare orders efficiently. What it does The app has two routes. The customer route allows the customer to place an order on a mobile device and check the status of their order, such as the estimated wait time. The kitchen route allows effective sorting of the orders based on a priority calculated from the estimated arrival time and estimated food preparation time. How we built it We built the front-end with React Native, and the back-end with Python querying the NCR API with Requests. Challenges we ran into One of the biggest challenges was using React Native to build our front-end, because none of us had experience working with React Native. Accomplishments that we're proud of We are very proud of what we have accomplished given the 36-hour constraint. We are proud of our idea of improving customer experience by optimizing the kitchen's priority on orders. What we learned Other than React Native, we learned about Postman and API queries, and finally integrating the front-end with the back-end. What's next for Smart Customer Arrival-Based order Fulfillment There are many potential features we would like to implement given enough time. For better user experience, and specifically error prevention, we could implement an undo button for the kitchen order menu. Furthermore, with more access to the API, we could use GPS to determine the real-time location of the customer and determine if they have arrived at the restaurant. This application could also be extended to work with other pickup order fulfillment locations, such as grocery stores. Built With javascript ncr-api postman python react-native
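The priority calculation described above can be sketched with the standard library: an order's kitchen priority is its latest safe start time, i.e. estimated arrival minus estimated prep time, so the kitchen pops orders in ascending order of that deadline. The order names and times below are made up.

```python
import heapq

def kitchen_queue(orders):
    """Return order names in the sequence the kitchen should start them.

    Each order carries an estimated customer arrival time and an estimated
    prep time (both in minutes from now). The latest moment the kitchen can
    start and still be ready on arrival is arrival - prep, so orders are
    popped in ascending order of that start deadline. Illustrative sketch.
    """
    heap = [(arrival - prep, name) for name, arrival, prep in orders]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# (name, est. arrival in min, est. prep in min)
orders = [("burger", 30, 10), ("salad", 12, 5), ("pizza", 25, 20)]
print(kitchen_queue(orders))  # pizza must start first despite arriving later
```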
Customer-Arrival-Time Order Fulfillment
A mobile-app system for kitchen staff to prepare food more efficiently and for customers to place their order. We aim to improve customer experience by effectively increasing kitchen productivity.
['Jiwoo (Julie) Park', 'Jimmy Pham', 'Jason liu', 'Noah Gardner']
[]
['javascript', 'ncr-api', 'postman', 'python', 'react-native']
89
10,371
https://devpost.com/software/mental-health-20yki9
Website Chatbot Mockup Introduction Introduction 2 v1.1 Signup page About Fred is an Android application and an android (noun) companion built for the ups and downs of your mental health. Sign up, log in, and we'll introduce you to our amphibian mascot, Fred the Friendly Frog, whose passion is boosting your morale. Check out your profile page when you get a chance—you can select your current mood and Fred will offer insight on your mental health, along with tips on how to improve it. Inspiration Mental health is important, especially now. With the pandemic and quarantine, things have piled up on us and can be stressful at times. A lot of tragedies can be avoided with better care for mental health. Everybody talks about mental health and the issues one can face. Anxiety, stress, and depression are heavy words to use; alas, you never know you are suffering from them unless you talk to someone. Not everybody is ready to listen to you, or perhaps you feel like you might disturb the ones you trust, which ultimately builds further mental issues. To break this cycle, we decided to come up with a solution that can help steer your thoughts towards a brighter and more optimistic side of the ecosystem, so to speak! Features Knowing the challenges, we decided to come up with an optimal solution: a mental health chatbot accessible through the web, on mobile, and via phone. The chatbot, named Fred, is trained to understand the user and provide basic self-assessments. From offering articles and resources to recommending exercises and relaxation techniques, Fred will do everything possible to better your thoughts, directly affecting your mental health. There’s an emergency feature in the app that provides the user various mental health hotlines if the need arises. Challenges we faced: Integrating the chatbot into the website and application was a hassle, but we ultimately solved it by providing embedded WebView access within the application.
Deciding on the architecture of the app was an initial hassle, but ultimately we decided to go with Firebase and Dialogflow, which afforded us a multitude of technical benefits. Where are we headed next with Fred? Our basic version aims to solve many underlying issues revolving around mental health. Digging deeper, we have tried to train the chatbot to learn the CBT (Cognitive Behavioral Therapy) patterns that psychologists around the world primarily use. Furthermore, we plan to integrate it with the Google Fit API, thereby providing statistics on user physical activity. The statistics will be helpful for determining early psychological diagnoses for our users. Basic movements are necessary for humans. If we try to gauge them, we may see a pattern revolving around a person showing signs of mental issues, for which we'd then alert the bot, notifying the user to move or elicit change in their environment. Feeling excited? Go visit Fred or simply call him 📞 +16788133709. HackGT squad? Try sliding into @Fred's direct messages. He doesn't bite. This pandemic has been tough on all of us. We hope this app can make it a little easier to manage our mental health during this difficult time. Built With android-studio css3 dialogflow figma firebase git github google-cloud gradle html5 kotlin slack Try it out github.com a.fredthefriendlyfrog.tech drive.google.com github.com
Fred
A friendly frog trying to solve your mental problems.
['Edward Lee', 'Rutvik J', 'Pujith Kachana', 'Won Yang']
[]
['android-studio', 'css3', 'dialogflow', 'figma', 'firebase', 'git', 'github', 'google-cloud', 'gradle', 'html5', 'kotlin', 'slack']
90
10,371
https://devpost.com/software/mopay
Inspiration We are currently living through a global pandemic, with a virus that is rapidly spreading. We looked towards preventing spread in one of the most congregated places in the community—the marketplace. Self-checkout stations are getting more use due to the lack of social interaction needed to check out. However, in turn, they become hotspots for microdroplets that carry viruses. This is where our problem lies. What it does Instead of using the traditional touchscreen, we implement a machine learning model that uses gestures to navigate the UI of the self-checkout machine. How I built it We implemented the backend in Python with YOLOv5. For the front end, we used JavaScript and HTML for our website. Challenges I ran into We originally had many troubles with integrating TensorFlow into a mobile app, but after talks with our mentors, we decided to use a different model instead. What I learned Prototyping with Figma is one of our biggest takeaways this hackathon. What's next for MoPay Improving the detection of gestures as well as widening the variety of usable gestures to navigate the UI. Built With flask javascript opencv python Try it out www.figma.com
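A rough sketch of how recognized gesture labels might drive the checkout UI once the model has classified a frame. The gesture names, screen list, and `navigate` function are hypothetical illustrations, not MoPay's actual label set or UI.

```python
# Hypothetical mapping from detected gesture labels (e.g. from a YOLOv5
# classifier) to self-checkout screen navigation; names are illustrative.
SCREENS = ["scan", "cart", "payment", "receipt"]

def navigate(screen, gesture):
    """Move between checkout screens based on a recognized gesture."""
    i = SCREENS.index(screen)
    if gesture == "swipe_right" and i + 1 < len(SCREENS):
        return SCREENS[i + 1]
    if gesture == "swipe_left" and i > 0:
        return SCREENS[i - 1]
    return screen  # unrecognized gesture: stay on the current screen

screen = "scan"
for g in ["swipe_right", "swipe_right", "swipe_left"]:
    screen = navigate(screen, g)
print(screen)  # scan -> cart -> payment -> back to cart
```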
MoPay
#Finance #ContactlessPayment
['Souliya Chittarath', 'Olasubomi Olawepo', 'Tee Win', 'Anita Kuang']
[]
['flask', 'javascript', 'opencv', 'python']
91
10,371
https://devpost.com/software/mom-pop-aevj51
1. Retail Spaces Nearby 2. Voting for Online Merchants 3. Online Merchant Application 4. Retail Space Application Inspiration Mom&Pop was conceived during the times of COVID, when many small businesses are going through hard times. Our business model is here to empower small businesses and new ideas without making drastic changes in their lives. What it does Our basic concept is similar to Airbnb's, except the hosts are retail space owners and the guests are online merchants. We post weekly themes for each retail space based on the season of the year, their location, and retail customers’ needs; and online merchants matching the theme submit an application to showcase in the retail space. Then, retail customers, or virtually anyone, can vote for the merchants they’d like to see in the retail space. Mom&Pop benefits all of the retail space owners, online merchants, and retail customers. Retail space owners can make more profit by renting out their retail space to us than to long-term tenants. Online merchants can then rent the retail space at a reasonable price for just a week. With support from us, they will be able to easily set up a pop-up store and gain lots of business insights. Lastly, retail customers will be happy to see and buy the products of their favorite online merchants in a real retail space, by voting on our platform. We pay the retail space owners for renting their spaces, and the online merchants pay us for showcasing their products at one of our retail spaces for a week, which is where our margins come in. How we built it Our website is built with React and deployed using GitHub Actions. We used Three.js to represent the retail spaces in 3D on our website. We also utilized the Google Maps API to represent the stores near a location. Challenges we ran into The biggest challenge we ran into was our partner Maksim getting very sick all of a sudden. Being at GT Lorraine put us under undue pressure after we took on his tasks.
We were able to implement every feature that we wanted without issue, but it was stressful during the implementation. Accomplishments that we're proud of The 3D modeling system is fast and smooth. It allows for quick 3D mockups that run natively in any web browser, be it on a phone or a desktop computer. Our website is also very lightweight, a monument to our skills as software engineers working in web dev. What we learned Learned a lot of React! Designing mockups on the fly while editing videos with Premiere Pro. A lot of technical skills that helped us build the ideal hackathon project. What's next for Mom&Pop Utilize the NCR POS system to handle sales and offer business insights to online merchants. Use social media to remind users about upcoming pop-up stores and advertise to the locals. Conduct consumer research to decide themes. Retain popular businesses to keep with us for the future. Allow partnerships to form to help these businesses grow. Built With google-maps react three.js Try it out jasonpark.me
Mom&Pop
A platform to help merchants bring their online experience into the real world.
['Zarif Rahman', 'Jinseo Park', 'Maksim Tochilkin']
[]
['google-maps', 'react', 'three.js']
92
10,371
https://devpost.com/software/remote-partner
Logo Firebase google login Inspiration The project was inspired by the current pandemic that has led to many online classes, making it difficult for students to connect and reach out to one another. What it does The app groups students taking the same classes, allows students to form study sessions, chat with each other, set reminders for upcoming assignment deadlines, and share notes online. How it was built The frontend is built with Flutter (Dart) and the backend with Firebase. Challenges we ran into Remote working with teammates led to communication challenges. Accomplishments that we're proud of We were able to finish the front end and get part of the backend done. What's next for Remote Partner We plan to set up a well-rounded backend and add more features for remote interactions such as online calls. Built With dart firebase flutter Try it out github.com
Remote Partner
Remote Partner. Your study buddy in quarantine :)
['radiantace', 'RosenYu Yu']
[]
['dart', 'firebase', 'flutter']
93
10,371
https://devpost.com/software/energy-consumption-analysis
Inspiration Commercial buildings waste an estimated 15% to 30% of the energy they use due to poorly maintained, degraded, and improperly controlled equipment. People need to be able to see when their energy is being over-consumed, allowing them to live a more energy-efficient and economical life. What it does We built an energy consumption visualizer and predictor tool for users to see anomalous, over-consumptive energy points and have them brought to their attention. How we built it We analyzed cumulative energy consumption data from a certain building by month. We used a decision tree as an anomaly-detection algorithm to find anomalous readings, flagging any value that looked anomalous in an initial test data set. We ran it on this test data set multiple times, choosing parameters that gave the best accuracy over both the training set and the test set. We then ran the model on the sensor data used on our Dash. We then used Dash and Plotly to display the data using various histograms and built interfacing tools like a slider for users to see the energy consumption at 100-minute intervals on chosen, specific days. Challenges we ran into An early major issue was displaying the differing energy consumption points during the same day without having them merge into one another, which made them difficult to see. We chose to use a histogram to illustrate the cumulative energy consumption of each day and then also build another graph for users to see 100-minute interval breakdowns of consumption during a day of their choice. We had issues trying to figure out which model would be best to flag anomalous data points. We tried quite a few before settling on our current one. We also ran into issues with hosting. We used Anaconda and other data science tools that weren't supported by our original choice for hosting, forcing us to pivot and use PythonAnywhere for hosting.
Accomplishments that we're proud of We're proud of our model's ability to predict the anomalous, over-consumptive points. Additionally, we're very proud of our visualisations and the thorough breakdown to 100-minute intervals, allowing for users to get a thorough understanding of when and how much energy they are consuming. What we learned For this project, we had to learn how to use Dash, Plotly, and Flask. We learned how to select models to work with our specific data needs and also how to read datasets and manipulate/sort them to best output a strong visualisation on our Dash. We also learned how to build in some interactivity, allowing output to change for the user's benefit. What's next for Energy Consumption Analysis Going forward, Energy Consumption Analysis hopes to allow users to upload their own energy consumption data and use our model to see where their over-consumptive points occur and see a thorough breakdown/analysis. Built With dash flask numpy pandas plotly python scikit-learn Try it out energyconsumptionanalysis.pythonanywhere.com
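As a much simpler stand-in for the team's decision-tree model, an interquartile-range rule can flag over-consumptive readings in the same spirit: learn what "normal" looks like, then surface points that fall far outside it. The readings below are invented, not the building's real sensor data.

```python
import numpy as np

# Illustrative 100-minute-interval consumption readings (kWh); not real data.
readings = np.array([5.1, 4.9, 5.3, 5.0, 5.2, 11.8, 5.1, 4.8, 5.4, 12.3, 5.0])

# Flag over-consumptive points with a one-sided IQR rule: anything more than
# 1.5 IQRs above the third quartile is treated as anomalous. This is a
# deliberately simple substitute for the decision-tree model described above.
q1, q3 = np.percentile(readings, [25, 75])
threshold = q3 + 1.5 * (q3 - q1)
anomalies = np.flatnonzero(readings > threshold)
print(anomalies.tolist())  # indices of the over-consumptive readings
```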
Energy Consumption Analysis
Visualization and prediction tool for the overconsumption of energy in a building.
['Oankar Patil', 'Adithya Vasudev', 'Katherine Shen']
[]
['dash', 'flask', 'numpy', 'pandas', 'plotly', 'python', 'scikit-learn']
94
10,371
https://devpost.com/software/bydesign
Create an ideate session or join Specify name and team Form team Put ideas to solve customer challenges Chose your top three ideas Describe your top ideas in more detail Specify details about customer From the entire team's ideas, chose your favorite, Generated customer visualization and challenge specification Generated top idea from the entire team's input. Inspiration Last year, I read a book called “The Design Thinking Playbook”, and it became clear that Design Thinking is the future of business. Design Thinking is about approaching things differently, with a strong user orientation and fast iterations with multidisciplinary teams. Design Thinking inspires radical innovation, and it's clear that Design Thinking is the driving force behind those who will lead industries through transformations and evolutions of our technologies. With the shift to a virtual working environment, it's important that we provide teams the tools they need to be innovative and solve the complex challenges at hand. What it does Somehow, the project and meeting tools we have available aren’t really catered towards this radical redesign in the way we approach innovation. I’ve created ByDesign, a portal that will take a team through a series of steps designed to make them ideate with a user-centric focus. Upon completion, the team will come out with an idea that directly targets their user's challenges. Many meetings lack structure, with teams having circular conversations while trying to come up with an innovative idea or approach to a problem. Teams will often use sticky notes and whiteboards and come out with no clear task accomplished. With the ByDesign platform, each team member will join a generated session using a uniquely generated code. Upon entering, the team will be taken through a series of steps by the platform, starting with defining the user and her challenges.
They will soon go through an ideation challenge to solve those challenges, and the team members' top-voted ideas will be combined. By the end of the session, the team should come out with a clear idea, user knowledge, and an action plan. How I built it I used React to develop a fully functional web app. Next, I deployed my web app to the cloud directly from GitHub with Azure Static Web Apps. I then created a simple backend API with the help of Azure Functions. I also wanted to incorporate some machine learning into the final product, which would look at the title of the final idea and attempt to generate similar ideas or existing products for the team to compare. Challenges I ran into My main challenge was creating custom 3D avatars based on the user specification. I think this would be an incredible feature that would allow teams to really visualize their customer. However, I ended up having to plan to develop this in more detail in the future, because the complexity of creating a fully custom animated human model in React turned out to be vast, with few existing APIs or established ways of doing this. Accomplishments that I'm proud of I am very proud of the fact that my web app is fully functional, so any user or team can go in and start a session to start ideating. Creating an app with so many pieces and moving parts was a difficult task to accomplish in such a short period of time, but definitely very rewarding. What I learned I've learned a ton about using React and deploying apps to the cloud. This project was very technically heavy, with so many interacting parts in the app. Things like navigation were very non-typical and forced me to create unique technical solutions. What's next for ByDesign My long-term vision for this project is to develop a new way for teams to interact online. They can utilize a portal experience for more than ideation, such as for prototyping and testing.
By continuing to develop each feature and ensure incredible user experience, I hope to eventually create many portals, for things like ideating, prototyping, testing, and more. Teams would no longer need to rely on clunky meeting agendas or conversations that don't lead to a clear outcome. Instead, their very platform will drive them through a series of tasks that create innovation and clear user-centered output. Built With azure react reactstrap Try it out github.com
ByDesign
A new way for teams to interact online with design thinking and the user at the center. The platform would work as the Kahoot of online meetings, taking teams through a session of built-in innovation.
['Margarita Groisman']
[]
['azure', 'react', 'reactstrap']
95
10,371
https://devpost.com/software/iron-2-radiant
Inspiration Quarantine has had an impact on all of us. However, through these trying and distant times, video games have been one of the few things to bring us together. One of these games that we've been playing with friends is Valorant, a tactical FPS based on deep team play, including predicting enemy movements. Our team wanted to leverage Computer Vision to make these games even more collaborative and engaging! What it does As Valorant does not support official integrations, we were forced to get creative. This project pulls frames directly from the game in order to determine where players are and what they are doing. In this way, we can begin to predict movements of characters that we cannot see in order to plan our strategies and work together as a team. After analyzing gameplay and making our predictions, we generate a heatmap to allow players to visualize unseen enemies. How we built it We began with a combination of template matching using OpenCV alongside custom pixel matching techniques in order to determine the location, abilities, and movement direction of all players and to construct the best possible model of the game. This information was pulled directly from our own Twitch stream, allowing easy access and analysis of images from the game. From here, we used Dynamic Bayes Nets techniques including particle filtering in order to generate probability distributions for likely player locations. Finally, we visualized these predictions in real-time using Seaborn and Flask, providing a web app for players, coaches, and spectators to gain a deeper understanding of the game. Challenges we ran into and what we learned The primary challenges we faced were in extracting and processing the game data. Valorant is deliberately somewhat locked down, making traditional data extraction approaches quite challenging. Instead, we were forced to learn more unconventional Computer Vision techniques in order to extract meaningful results. 
We also spent significant time optimizing our code, particularly our analysis algorithms, in order to be able to generate insights in real-time based on a livestream. The combination of these factors drove us to think on our feet and learn a lot over the course of the weekend. Accomplishments that we're proud of This was an ambitious project, and one that we were not at all certain we would be able to complete. We're really excited that we were able to put all the pieces together into a full product within the limited timeframe, and we're really thrilled about the final product. Getting everything working on live gameplay was a super exciting moment. What's next for Road to Radiant We have now laid the foundations for something really exciting here. Unlike most similar games, Valorant does not have a public API for players to be able to access stats about their gameplay and stats about professional matches. This proof of concept reveals the potential for something much larger - a CV-based API that allows easy access to game data for all players. Now that we know that we can extract basic game data in this way, we're really excited at the potential to build something with the ability to empower a far broader developer community going forward. Built With dynamic-bayes-nets flask opencv particle-filtering scipy seaborn Try it out github.com
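The particle-filtering step can be illustrated with a toy 1-D filter: predict each particle forward with a motion model, re-weight by an observation likelihood when a hint arrives, and resample to avoid degeneracy. The corridor, motion model, and observations here are all invented for illustration, not Valorant map data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: an unseen player moves ~1 unit per tick along a 1-D corridor;
# we occasionally get a noisy position hint (e.g. a spotted footstep).
N = 1000
particles = rng.uniform(0, 10, size=N)   # initial belief over positions
weights = np.full(N, 1.0 / N)

def step(particles, weights, observation=None, obs_sigma=1.0):
    # Predict: advance each particle by the motion model plus process noise.
    particles = particles + 1.0 + rng.normal(0.0, 0.3, size=particles.size)
    if observation is not None:
        # Update: re-weight particles by a Gaussian observation likelihood.
        weights = weights * np.exp(-0.5 * ((particles - observation) / obs_sigma) ** 2)
        weights = weights / weights.sum()
        # Resample to concentrate particles where the posterior mass is.
        idx = rng.choice(particles.size, size=particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

# Two noisy sightings amid blind ticks; the belief tightens after each one.
for obs in [None, None, 7.0, None, 11.0]:
    particles, weights = step(particles, weights, observation=obs)

est = float(np.average(particles, weights=weights))
print(round(est, 1))  # weighted-mean position estimate
```

A 2-D version over the minimap grid, plus a kernel-density pass over the particles, is essentially what the heatmap visualization renders.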
Road to Radiant
Automatic coaching for Valorant using Computer Vision and Dynamic Bayes Nets
['Hunter Hancock', 'Jordan Rodrigues', 'Simar Kareer', 'Naman Bhargava']
[]
['dynamic-bayes-nets', 'flask', 'opencv', 'particle-filtering', 'scipy', 'seaborn']
96
10,371
https://devpost.com/software/self-checkout
Inspiration Whenever I go into stores like Target and buy stuff, there's always a line for checking out. At first, self checkout was an easy way past this, but now self checkout has become more popular than even regular checkout. I wondered why we needed those expensive checkout machines when most people pay by card anyways. What it does Instead of having to wait in a line to check out with the machine, you can scan the items as you shop and check out without ever having to interact with anything but your phone. This way, you can track your total as you shop, avoid lines, and avoid having to touch machines which may potentially spread germs. How I built it I used Android Studio and the NCR Silver API to get the inventory of a store and import the data for the inventory. The app then scans barcodes, adds items into a cart, and allows the user to check out without ever having to use anything but their phone. Challenges I ran into I had never done many of the things required to make this app, so I had to learn to call APIs, scan barcodes, and start new Activities with Intents. Accomplishments that I'm proud of I am proud that I was able to learn how to make this app by myself despite not knowing how to do just about anything involved in making it before starting the project. What I learned I learned a lot about Android app development, calling APIs, and working with Activities in Java. What's next for Self Checkout Self Checkout has potential to be a cost-efficient, convenient, and safe way for stores to handle checkouts. Built With android-studio java ncr silver
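The core cart flow — look up each scanned barcode in the store inventory and keep a running total — is sketched below in Python for brevity, though the app itself is Java. The barcodes, names, and prices are made up, not from the NCR Silver catalog.

```python
# Illustrative inventory keyed by barcode; not real NCR Silver data.
inventory = {
    "0123456789012": ("Milk", 3.49),
    "0987654321098": ("Bread", 2.19),
    "0111111111111": ("Eggs", 4.29),
}

cart = []

def scan(barcode):
    """Add a scanned item to the cart if the store's inventory knows it."""
    if barcode in inventory:
        cart.append(inventory[barcode])
        return True
    return False  # unknown barcode: ignore the scan

# Two known items and one unknown barcode.
for code in ["0123456789012", "0987654321098", "0000000000000"]:
    scan(code)

total = round(sum(price for _, price in cart), 2)
print(total)  # 3.49 + 2.19 = 5.68
```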
Self Checkout
Scan and checkout your items with your phone
['Mike Zheng']
[]
['android-studio', 'java', 'ncr', 'silver']
97
10,371
https://devpost.com/software/small-business-insights-dashboard
New actionable insights (the main page of the dashboard), showing a list of categories and insights Each insight can be accepted or dismissed, which will hide it from this page and move it to the historical insights tab Once all insights are dismissed/accepted, the table shows an empty placeholder display All previously encountered insights are available on the "Historical insights" tab Loading intermediate state Initial login screen When first onboarding, small business tenants have to opt in to share their data with the network in order to gain access to the insights Original Figma mockup, showing the utilization of NCR's design system JSON from our "New Insights" API that is used to display insights in the dashboard Inspiration We were inspired when brainstorming ideas for NCR's challenge at HackGT 2020, where we were searching for ways we could use our technical expertise to tackle some of the issues that NCR faces on a daily basis. Our idea grew from realizing the need many (small) businesses face when determining market trends and the best products and practices in their industry. While many large companies have access to vast sums of data with which they can inform their business intelligence, small businesses often don't have that luxury. The ability to crowdsource data could help small businesses make product, staffing, and promotional decisions based on business growth models, allowing them to better compete with larger companies. Our project addressed several of the top problems for NCR and its customers: namely, How NCR can help small businesses thrive How NCR can help reimagine commerce and disrupt aspects like product discovery, e-commerce, payments, and supply chain How NCR can help merchants monitor and optimize their performance How NCR can help merchants reduce their labor costs What it does The project aggregates data across all participating NCR businesses and groups the businesses and their products into categories.
The app then calculates monthly growth for the business and recommends products to add to a business's catalog, products to remove, when to increase staffing, and possible promotions, based on data from all other businesses in the same category. Through the dashboard, businesses can track recommendations for the current month as well as all past recommendations and their choices on those recommendations (accepted or rejected). How we built it We built Insight with a React frontend and Flask backend. The frontend was designed in Figma and implements NCR's design system using hand-written CSS with React. The backend utilizes Python and Flask to create routes that the frontend interacts with. The backend API is containerized and orchestrated onto a local server. The backend also has NCR's Business Services Platform's Catalog Service implemented, storing product data. Challenges we ran into We struggled to find a use case for the NCR Catalog Service API because it required uploading our own data. While we did successfully upload our data into the NCR Business Services Platform and connect to it, it was easier to perform aggregation and analysis locally. That being said, in a scaled system where the catalog service provides product data for thousands of businesses, it would make more sense to use it. Accomplishments that we're proud of We are proud that we were able to ideate, mock up, and create a product from our mockup within around 15 hours. We are proud of our ability to parallelize tasks and split them up between our team, allowing an otherwise infeasible task to be completed. We are also proud of our research, both in finding data that suited our use case and in researching models that represent business growth when designing our algorithm.
What we learned Working on this project allowed us to learn a variety of new skills that we wouldn't have been able to learn otherwise, such as: Creating high-fidelity UI mockups from an existing design system and then translating those mockups into actual frontend code using CSS-in-JS Applying business growth models to create actionable insights from data Utilizing NCR's Business Services Platform APIs to build our application What's next for Small Business Insights Dashboard In general, we envision taking the next step and making the Insights dashboard a fully-fledged product that small businesses can benefit from. Some of the ideas we have are: Scale our project to operate at the scale of NCR, allowing small businesses to benefit from the wealth of data and the insights it produces Employ unsupervised machine learning to generate additional insights based on more complex patterns Create more insights based on BSP or other NCR services Expand the use cases of the tool to include other businesses that would benefit from sales or product-based insights, such as chain restaurants Built With csv docker domain.com figma flask json ncr netlify numpy pandas python react typescript Try it out insights-dashboard.netlify.app www.figma.com
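The growth-and-recommendation logic described above might look something like this sketch. The thresholds, category data, and product names are invented for illustration; the real dashboard derives them from the crowdsourced network data.

```python
# Illustrative sketch of the category-level insight logic; all numbers and
# product names are made up.

def monthly_growth(sales):
    """Percent change between the last two months of a sales series."""
    prev, curr = sales[-2], sales[-1]
    return (curr - prev) / prev * 100

def product_insights(category_product_growth, my_catalog, threshold=10.0):
    """Suggest products to add (fast growers not carried) or drop (decliners carried)."""
    add, drop = [], []
    for product, growth in category_product_growth.items():
        if growth >= threshold and product not in my_catalog:
            add.append(product)
        elif growth <= -threshold and product in my_catalog:
            drop.append(product)
    return {"add": sorted(add), "drop": sorted(drop)}

growth = monthly_growth([10000, 11500])  # this business grew 15% month over month
insights = product_insights(
    {"oat milk": 22.0, "cold brew": 14.5, "diet soda": -18.0, "bagels": 3.0},
    my_catalog={"cold brew", "diet soda", "bagels"},
)
```

Each suggested item would then be rendered as an accept/dismiss row on the dashboard's main page.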
Small Business Insights Dashboard
Crowdsources sales, staffing, and catalog data from a network of non-competing small businesses (in the same sector) to provide actionable insights that benefit all member businesses
['Varnika Budati', 'Michael Chen', 'Bhanu Garg', 'Joseph Azevedo']
[]
['csv', 'docker', 'domain.com', 'figma', 'flask', 'json', 'ncr', 'netlify', 'numpy', 'pandas', 'python', 'react', 'typescript']
98
10,371
https://devpost.com/software/ncr-order-picking
Mobile Order Picking - A preview of the application Efficient sorting of the incoming orders Communication between pickers and customers makes the whole process easier! It's easy to see what items you need to get and where. 🌟 Inspiration The novel coronavirus has fundamentally shifted the way we shop, with 85% of Americans significantly increasing curbside pickup since the pandemic began. NCR is the world's largest producer of POS systems for stores and small businesses. It's time to ideate a solution that meets the changing needs of shoppers. ❓ Addressing The Problems "Improve delivery and curbside pickup for NCR's merchants and consumers" - An easy-to-use interface that facilitates communication and optimizes order picking is a win-win for everyone. "Help restaurants and grocery stores handle a surge in online orders" - The user interface automatically sorts the orders by the number of items and the pickup time. "Help consumers have germ free transactions?" - Curbside pickup inherently reduces germ transmission. Some could argue this will be our new normal, making efficient pickup solutions even more important. 💻 What it does Take a quick look at the presentation pictures, or try it out yourself! 📌 Accomplishments that I'm proud of Figuring out Figma: As someone who did some graphic design in high school, I never realized the industry had changed like this until college. I was still using Adobe Illustrator for things like this! Playing with Postman: Introduced to a whole new world of APIs, testing APIs with Postman, and an idea of RESTful APIs and how they compare to SOAP APIs (thanks to a quick explanation by someone at NCR)!
🔨 How I built it Figma and the NCR Design System to design and prototype the system Adobe Premiere Pro for putting together the introduction video 🚸 Challenges I ran into When living by yourself, you have to remember to eat during a remote hackathon :) 📆 What's next for NCR Order Picking There's a surprising amount of similarity between order picking in grocery stores and order picking in a warehouse. Last I checked, NCR is not in this multi-million-dollar industry, yet its software has clear cross-applicability simply based on inventory management. Could this prototype be applied in a new scenario? In the future, I'd like to work a bit more on the coding/technical side to put things into practice. Leveraging the APIs in a way that can track the items would bring the functionality to completion. Built With figma Try it out www.figma.com
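The order-sorting rule mocked up in the Figma prototype ("by the number of items and the pickup time") could be implemented in a few lines. Since the prototype itself has no code, this Python sketch assumes one plausible priority — earliest pickup time first, with fewer items breaking ties — and uses made-up order data.

```python
# Hypothetical incoming orders; in a real system these would come from the
# store's online-ordering backend.
orders = [
    {"id": "A12", "pickup": "12:30", "items": 8},
    {"id": "B07", "pickup": "12:15", "items": 3},
    {"id": "C31", "pickup": "12:15", "items": 1},
    {"id": "D02", "pickup": "12:45", "items": 5},
]

# Sort by zero-padded HH:MM pickup time, then by item count as a tiebreak.
queue = sorted(orders, key=lambda o: (o["pickup"], o["items"]))
picking_order = [o["id"] for o in queue]
```

Zero-padded HH:MM strings compare correctly as plain strings, so no time parsing is needed for this toy case.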
Integrated Mobile Order Picking
Improving curbside delivery with an intuitive picking interface
[]
[]
['figma']
99
10,371
https://devpost.com/software/squiggle-igad5p
The front page of the website Inspiration You're feeling hungry, so you decide to grab some food from your favorite restaurant. However, there are two locations within walking distance of your house, and you don't know which one to go to. It can be quite difficult to decide which location of a restaurant to patronize. It would be best to go to the location with the shortest wait time, but it's difficult to guess which location will have you waiting the least! The time of customers is valuable not only to the customers themselves but also to the merchants, who want their customer experience to be as effortless and straightforward as possible. With access to wait times, both merchants and customers benefit from better load management at each location. What it does Based on your current location or a selected location, Squiggle allows you to pick the branch of a restaurant with the shortest wait time. Squiggle displays the restaurants within a desired radius of the user's current location and the respective expected wait times at these restaurants. How I built it The website was built with SCSS, TypeScript, and React. In addition, we used the Google Maps API to assist with processing location data, hosted a database with individual store information on Azure, harnessed an NCR API (namely the Business Services Platform) to obtain information from restaurant locations and update the database, and utilized Flask and Python to call the API and update the database. Challenges I ran into None of us had experience with React, Flask, or Azure, which were all integral components of our end product. After lots of trial and error, we were able to integrate these technologies into our project. We also ran into issues running the virtual environment and running Flask. Furthermore, understanding the APIs (both Google Maps and NCR) took quite some time.
What's next for Squiggle Further steps could include integrating ordering capabilities into Squiggle and bringing more companies onto the platform. In addition, we could more seamlessly integrate all the capabilities of the NCR and Maps APIs into our site. Built With azure css flask javascript python react scss Try it out github.com
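The branch-selection logic Squiggle performs can be sketched as follows. The coordinates, wait times, and 5 km radius here are illustrative; in the real app the locations come from the Google Maps API and the wait times from the Azure-hosted database.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def best_branch(user, branches, radius_km=5.0):
    """Among branches within radius_km of the user, pick the shortest wait."""
    nearby = [b for b in branches
              if haversine_km(user[0], user[1], b["lat"], b["lon"]) <= radius_km]
    return min(nearby, key=lambda b: b["wait_min"], default=None)

# Made-up branch data for one restaurant chain around midtown Atlanta.
branches = [
    {"name": "Midtown", "lat": 33.781, "lon": -84.388, "wait_min": 25},
    {"name": "Tech Square", "lat": 33.777, "lon": -84.389, "wait_min": 10},
    {"name": "Airport", "lat": 33.640, "lon": -84.427, "wait_min": 2},
]
choice = best_branch((33.776, -84.399), branches)
```

Note the Airport branch has the shortest wait overall but falls outside the radius, so it is never suggested.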
Squiggle
Lines are for losers
['akaashp15 Para', 'Harrison Zhu', 'Ayush Nene', 'Varun Vangala']
[]
['azure', 'css', 'flask', 'javascript', 'python', 'react', 'scss']
100
10,371
https://devpost.com/software/medelivery
MeDelivery Project for HackGT7 Inspiration Telehealth, the delivery of health-related services and information via the internet, is a fast-growing technology that's changing the world of healthcare - especially in a post-Covid era. Despite all its growth, however, one significant limitation hindering this field is that doctors cannot obtain verifiable metrics or evaluations of patients' medical data beyond subjective symptoms and complaints. MeDelivery aims to improve telehealth by providing patients a direct way to create appointments with doctors at nearby hospitals and request medical test kits so that they can report the results back to the healthcare professionals. This way, doctors will be better able to diagnose and treat their patients. What It Does MeDelivery is a platform that allows patients to schedule appointments with nearby hospitals and clinics that are offering telehealth sessions -- but with a twist. Before the appointment, patients can request to have a variety of important but easy-to-use biometric monitors and test kits sent to them - such as blood pressure monitors, blood oxygen monitors, COVID-19 test kits, urine sample kits, and more. Next, patients can use the equipment to gather and share medical information with their doctor/physician in their upcoming telehealth appointment. The monitors will be sent back to the hospitals after use for cleaning and redistribution, and the samples and test kits can also be sent back for lab analysis. How we built it To create the most user-friendly experience for patients, we created an Android app with Android Studio. The core of the back-end is a combination of Google Cloud services - specifically, the Google Maps API (Nearby Search and Geocoding) and Google Firebase for storing the user's medical information and managing user authentication. We accessed the Maps API with HTTP requests and parsed the JSON result into Java objects with the help of the FasterXML/jackson library.
Challenges we ran into This was the first time several of our team's members had used technologies such as Android Studio, Google Firebase, and the Google Maps API. Much of our team's growth happened during these 36 hours, and we benefited from collaborating and learning from each other. Accomplishments We're Proud Of We're proud of our app, which adds value to the existing standard for telehealth. We're also proud of the fact that ours is a truly interdisciplinary team, with each of us studying a different major at Georgia Tech. In the end, our diverse skill sets contributed to brainstorming a creative idea and implementing it. What's next for MeDelivery In the post-Covid era, telehealth will only grow in importance and popularity within the healthcare field. We are looking into increasing the functionality of our platform and providing a more robust and comprehensive experience for users. We also plan on expanding our use of the database to track more medical information and integrating video-call functionality directly within the app. Built With android-studio firebase google-geocoding google-maps java Try it out github.com
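The Maps API access pattern described above (build an HTTP request, then parse the JSON result) looks roughly like this. The sketch is in Python rather than the app's Java, makes no network call, and the sample response body is a heavily trimmed stand-in for the real Nearby Search payload; the API key is a placeholder.

```python
import json
import urllib.parse

NEARBY_SEARCH = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

def build_nearby_hospitals_url(lat, lng, radius_m, api_key):
    """Build a Places Nearby Search request URL for hospitals around a point."""
    params = {
        "location": f"{lat},{lng}",
        "radius": str(radius_m),
        "type": "hospital",
        "key": api_key,
    }
    return NEARBY_SEARCH + "?" + urllib.parse.urlencode(params)

def parse_hospital_names(response_text):
    """Pull the place names out of a Nearby Search JSON response body."""
    return [place["name"] for place in json.loads(response_text).get("results", [])]

url = build_nearby_hospitals_url(33.749, -84.388, 5000, "YOUR_API_KEY")
# Trimmed-down example of the JSON shape the API returns:
sample = '{"results": [{"name": "Grady Memorial Hospital"}, {"name": "Emory Midtown"}]}'
names = parse_hospital_names(sample)
```

In the app, the equivalent Java code issues the request and maps the JSON into objects with FasterXML/jackson.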
MeDelivery
HackGT project
['Athrey Gonella', 'C Wilson', 'Carson Quan', 'lawrence6he He']
[]
['android-studio', 'firebase', 'google-geocoding', 'google-maps', 'java']
101
10,371
https://devpost.com/software/swiftserve
The homescreen for our app Page for viewing the menu and finalizing your order. From the manager's point of view, you can manipulate the layout. From the user's, you can select tables to reserve and proceed to order. Inspiration We were inspired by the clearly intimidating climate around us. We wanted to find a way to make that climate less of an obstacle for restaurant businesses, large or small, by developing a clean, malleable tool for maintaining safe and swift service. What it does Managers - Design your workspace. Divide your restaurant into COVID-regulation-spaced dining spots. Manage your schedule, and know who's coming before they're there. Customers - Have your food when you want it, with no contact necessary. Plan to perfection: log in and decide where and what you'll be eating. Maybe our chatbot can interest you in an extra dessert or help you place an order! How I built it We developed a dashboard for two separate user roles, each with a different level of access to the restaurant layout. We focused on a minimalistic design to accentuate the possibilities our app offers and streamline service. Challenges I ran into We lost a team member a few hours in, and we were each working with frameworks that took us out of our element. Honestly, though, just a couple of bumps in the road on the way to developing what we feel is a substantial project. Accomplishments that I'm proud of We're proud of our customizable restaurant layout, clean UI following NCR's design standards, and chatbot functionality. What I learned We've all delved into some new frameworks together, and we will all be adding tools like Dialogflow, React, and Google Cloud SQL to our personal repertoires. What's next for SwiftServe Next, we hope to introduce more ML aspects to our project. How can we provide more human-like service with our chatbot? In what ways could we lessen the burden on restaurants further with our collected data?
In what way can we use customer feedback to bring about real change in a restaurant? Built With css dialogflow google google-app-engine google-cloud google-cloud-sql html javascript mysql natural-language-processing ncr python react sql Try it out swift-serve-292910.uc.r.appspot.com
SwiftServe
Safe. Swift. Service. An app that focuses on stimulating the everchanging restaurant climate and the popular desire for quality food.
['Ryan Faulkner', 'Min Htat Kyaw', 'Rachit Bhargava']
[]
['css', 'dialogflow', 'google', 'google-app-engine', 'google-cloud', 'google-cloud-sql', 'html', 'javascript', 'mysql', 'natural-language-processing', 'ncr', 'python', 'react', 'sql']
102
10,371
https://devpost.com/software/foliox
Inspiration Chatbots could also play a major role in the discovery of investment ideas and curating financial information. A Siri in chatbot form. A personalized chatbot for financial investments. These are the advantages of a chatbot: A natural language interface for the user: this creates the opportunity to serve users without requiring them to "learn" your UX. It also gives users the possibility of making a very wide range of requests. Unfortunately, living up to the full promise of a chatbot in this regard requires very advanced language parsing logic, which in turn requires pretty advanced AI, large sets of training data, and semantic knowledge as well (i.e., something to tell the system what words mean and how those concepts are related). A natural language interface for the service: a chat also happens to be a fantastic format for information that needs to be delivered in narrative form, like explanations. The service can deliver information, and then the user can ask for clarification on specific elements. (Just like a real conversation!) Low threshold, near-native application access: no website to register for, no app to install, so users can access it immediately. Contextual knowledge about the user: the chatbot can know about the user's transaction history, so the user can say things like "I want to dispute the credit card charge from yesterday." The tricky thing here is all about getting the permissions for user data right. I'd be unpleasantly surprised if, say, a trading chatbot started scolding me for spending too much money on my credit card! What it does It is your friendly neighborhood chatbot! It gives you quick financial information: just type what you want, and it searches for it using different APIs, such as the BlackRock API. It also suggests the risk factor for investing in a certain stock, all through the familiar Facebook Messenger. It is also capable of showing different stock visualizations, as per the user's need!
How we built it The backend is built using Flask and Python, with Facebook Messenger as the interface for the chatbot. We use several APIs, all of which are running on my local server. We perform NLP using Dialogflow from Google Cloud, which helps us extract utterances and intents. Challenges we ran into Honestly, a ton of them. It was the first time any of us had worked with chatbots or NLP, so manually training the NLP model felt more difficult than it should have. Apart from that, I had a really hard time figuring out how to send a rich text message including an image and text from my server to the app using webhooks. Accomplishments that we're proud of Working remotely from different time zones was especially challenging, with some team members leaving us in the middle, but we managed to hack it out. What we learned What's next for Foliox Built With blackrock dialogflow facebook-messenger gcp python Try it out github.com
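The fulfillment flow described above reduces to mapping the intent in a Dialogflow webhook request to a reply. This framework-free Python sketch shows that mapping on a plain dict; the intent name "stock.risk", the ticker parameter, and the replies are hypothetical, and in the real bot Flask receives this JSON via a webhook POST and would call a market-data API (e.g. BlackRock) for the answer.

```python
def handle_webhook(request_json):
    """Map a Dialogflow-style fulfillment request to a fulfillmentText response."""
    query = request_json.get("queryResult", {})
    intent = query.get("intent", {}).get("displayName", "")
    params = query.get("parameters", {})

    if intent == "stock.risk":  # hypothetical intent name
        ticker = params.get("ticker", "?")
        # A real handler would fetch the risk profile from a market-data API here.
        return {"fulfillmentText": f"Fetching the risk profile for {ticker}..."}
    return {"fulfillmentText": "Sorry, I didn't catch that."}

reply = handle_webhook({
    "queryResult": {
        "intent": {"displayName": "stock.risk"},
        "parameters": {"ticker": "AAPL"},
    }
})
```

Keeping the handler a pure function of the request dict also makes it easy to unit-test without Messenger or Dialogflow in the loop.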
Foliox
Your friendly neighborhood finance bot
['Sai Deepthi Yeddula', 'Keval Doshi', 'Michael Zhou']
[]
['blackrock', 'dialogflow', 'facebook-messenger', 'gcp', 'python']
103
10,371
https://devpost.com/software/canal
Homepage Learn More Create an Account Profile 1 Profile 2 Chrome Extension Mockup 1 Chrome Extension Mockup 2 Inspiration Marketing and outreach are some of the greatest challenges small businesses face. We hope for our application to give them a platform and to encourage everyday shoppers to consider alternatives to the mainstream. What it does Small businesses can register and create a profile on Canal. This profile consists primarily of their business details (a website, phone number, picture, and short, descriptive blurb) and key words that describe their products and services. From here, users can download and use the Chrome extension which will suggest small business alternatives to the products and/or services that they are currently viewing in their browser. How we built it Our web app is built in HTML, CSS, and Javascript and our database is hosted in Firebase. Challenges we ran into The Firebase documentation is fairly sparse yet simultaneously quite complex. It took a significant amount of time to integrate it into our project, but we managed to succeed in connecting it to the web application portion of our project. Unfortunately, we ran into some more Firebase trouble with regards to the Chrome extension and were thus unable to complete it. Accomplishments that we're proud of As a small team of hackathon rookies we're quite proud of the amount we completed! Also despite the virtual environment and timezone differences (and a wedding!?) we managed a consistent and effective workflow. We've all agreed that this is probably the most technically advanced project we have attempted to date! What we learned In short: a lot. To expand, learning to use and integrate Firebase was challenging and new. We also learned a lot about debugging and the magic of having fresh eyes on worn code. What's next for Canal Given the time constraints, we were unable to complete the Chrome extension portion of our project. We hope to finalize it soon! 
Built With chrome firebase html/css javascript Try it out github.com www.figma.com
Canal
Connecting small businesses to the world.
['Michelle Hou', 'Preethi Narayanan', 'Yaritza Garcia']
[]
['chrome', 'firebase', 'html/css', 'javascript']
104
10,371
https://devpost.com/software/only-s
Inspiration We wanted to build a sweet little note-taking app that allows professors and students to take notes in class. What it does It allows the professor to jot questions down for the class to answer. Once the class is finished, it will compile the questions and answers into a PDF. How we built it We used React and JavaScript. Challenges we ran into Setting up was the biggest challenge we faced. Accomplishments that we're proud of We're proud of getting the authentication system to work. What we learned We learned about new React libraries such as react-pdf. What's next for Only飯s HackTX Built With css firebase html javascript npm react Try it out github.com
Only飯s
A sweet little note taking app for professors and students in class.
['Kincent Lan', 'Jinay Jain', 'Qijia "Joy" Liu']
[]
['css', 'firebase', 'html', 'javascript', 'npm', 'react']
105
10,371
https://devpost.com/software/memopet
Inspiration Keeping healthy habits is a universal challenge, one that has been exacerbated by the pandemic. Productivity apps are a way for people to keep track of tasks and maintain habits. Now more than ever, it is important to ensure that you're mentally and physically healthy. We were inspired by productivity apps that use fun ways for users to interact with the interface - for instance, an app where a user grows a tree by remaining on the app. What it does Our website allows the user to adopt pets and take care of them by maintaining habits. The user can set what habits they would like to maintain, and the website's calendar feature provides a way for the user to sustain habits across longer periods of time. By keeping on top of these habits, the user keeps their pets happy and healthy! How we built it We built the website using the Atom text editor. From there, we coded it in HTML, CSS, and JavaScript. We also used other tools such as jQuery and localStorage for our pet customization avatar. Challenges we ran into We had trouble dealing with localStorage and its properties. Accomplishments that we're proud of We're proud of persisting through challenges with localStorage as well as creating cute and customizable pets. What we learned Throughout the development process, we learned different formatting aspects of HTML and CSS. We also learned how to implement localStorage to maintain the website state across page reloads. What's next for Memopet In the future, we hope to add more features to Memopet that would increase user interaction and allow for more engagement on the website, like adding accessories to the pets. Citation https://codepen.io/bphoebew/pen/NWrNBWN https://codepen.io/daliannyvieira/pen/MeWyjQ Built With css3 html5 javascript jquery
Memopet
Keep Your Habits Tracked in a Fun and Interactive Way with an Avatar Pet!
['izblx Zheng', 'Jackie Roche']
[]
['css3', 'html5', 'javascript', 'jquery']
106
10,371
https://devpost.com/software/don-t-spoil-on-me
Don’t Spoil on Me Your new favorite site to thwart spoilers! Inspiration When brainstorming a project idea for HackGT on Friday evening, the one thing we knew was that we wanted to build an application that utilized some form of machine learning and/or artificial intelligence. This is because a few of us are currently taking courses in this field and wanted to apply our skills in a practical environment. HackGT was a great opportunity to explore! We ended up choosing to create a spoiler-alerter app because it was a quality-of-life problem we had all faced previously and was one of our more practical ideas (considering the hackathon is at most 36 hours). What it does Our program takes user input in a textbox, runs it through our artificial intelligence model, and determines whether the text contains a spoiler or not. Ideally, the user will be pasting medium- to long-length text so that at first glance they do not read a spoiler. This is important because the user is in contact with the text for at least the amount of time it takes to Ctrl+a → Ctrl+c → Ctrl+v, so the longer the text (e.g. movie reviews), the smaller the chance they will read a spoiler. How we built it First we needed a dataset marking which reviews contained spoilers and which did not in order to train our AI model. Initially we were going to create our own dataset by manually parsing through movie reviews and marking each accordingly, but we found a wonderful website called Kaggle that contains an IMDb spoiler dataset. The URL to this dataset is here. For the actual machine learning library, we utilized Scikit-Learn (originally developed by David Cournapeau) not only because it provided the algorithms we needed but also because it conforms with the free and open-source spirit that is the essence of hackathons. We chose the Random Forest Classifier as the specific algorithm for our model.
Lastly, for the web hosting part of this project, we registered a domain with Domain.com, utilized Google Cloud's App Engine to host the application, and used the Flask (Python) micro web framework to handle the HTTP requests and run input through our model accordingly. In addition, we programmed HTML pages with a simple website design and a text box to take input. Challenges we ran into A big challenge we ran into was choosing the correct machine learning model. Initially, we tried a Support Vector Machine, but our accuracy rates were not very high: we were getting high-40% to mid-50% accuracy each trial. Because of this low accuracy, we decided to switch to the Random Forest Classifier. In turn, we consistently achieved rates of 74% and up after making the change. We also faced a problem where preprocessing the data took longer than expected due to the sheer size of the dataset and the limited resources we had: we could only provision a computing instance with so many cores and so much RAM on Google Cloud without burning through our credits in an hour. To combat this setback, we reduced the size of our dataset. This definitely hurt our accuracy rating, but there was not much we could do. If this were an app we developed outside of a hackathon, we would have much more time to let the computer preprocess the data. The third challenge we faced was integrating the model into our web application. Initially it seemed very easy, but Flask turned out not to agree with the methods created to run the text through the model. To fix this issue, we simply created another function in the ML Python file that simplifies the process by calling the necessary functions, processing the data, and returning the appropriate output. We could not get it fully hosted on the cloud, however. We utilized Google Cloud's App Engine, and the service kept saying it could not find the Flask and TensorFlow modules, even though pip said they were installed.
After literally hours of debugging with a mentor, we still could not get it to work. Maybe we just have bad luck with web hosting, but given more time we could probably resolve the issue. Accomplishments that we're proud of The biggest accomplishment we are proud of is the accuracy. Given that we only had 36 hours for this hackathon, and quite frankly closer to 24 considering Friday night was brainstorming/planning and Sunday early morning was tying loose ends, creating the demo, and submitting the project, we know we did an amazing job creating an ML application with 74% accuracy. In addition, this was most of the team members' first hackathon, so venturing into the unknown was a new but exciting adventure! Another accomplishment we're proud of is how much we learned in the last 36 hours! What we learned Everyone learned something new this weekend! Google Cloud was a new technology because we had all previously only used AWS. The team members working on the AI and ML side got hands-on experience with Python machine learning libraries and utilizing datasets to produce a model. The frontend developer learned more about coding static web pages in HTML and injecting CSS for styling purposes. They also explored the Bootstrap framework for easier implementation of CSS and JS with templates. The backend developer had never used Flask before, so this project taught him an incredible amount about a very useful web framework. In addition, the backend developer also got to experience the joys and sorrows of frontend web page development. What's next for Don't Spoil on Me The main expansion idea for Don't Spoil on Me we thought of was to transform it from a webpage application into a browser extension. As a browser extension, it can scan the text on pages as soon as one visits them and provide a little alert window that a spoiler may be present.
This would be beneficial as the user could forgo the small chance of accidentally reading a spoiler while copy-pasting the text into the web page's text box. The only problem with a browser extension is that we would have to create and maintain multiple forks of this project, because there are multiple popular browsers on the market that all function differently. At a minimum, we would have to create extensions for Chrome, Safari, and Firefox. Another improvement that could be made is the AI model. With a bigger dataset and more time for preprocessing, we are confident that our accuracy can go up into the high 80s and even mid 90s. Built With app-engine css flask google-cloud html javascript keras nltk python sklearn Try it out github.com dontspoilonme.tech
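The modeling pipeline described above (vectorize review text, fit a Random Forest, classify new input) can be sketched with scikit-learn in a few lines. The four reviews and labels below are a made-up stand-in for the Kaggle IMDb dataset, and this is not the team's actual preprocessing; a real model needs thousands of labeled reviews to approach the ~74% accuracy they report.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Tiny made-up stand-in for the IMDb spoiler dataset (1 = contains a spoiler).
reviews = [
    "Great pacing and acting, well worth a watch.",
    "The cinematography was gorgeous throughout.",
    "I can't believe the detective was the killer all along!",
    "It ruined it for me that the hero dies in the final scene.",
]
labels = [0, 0, 1, 1]

# Vectorize the text, then fit the same classifier family the team used.
model = make_pipeline(
    TfidfVectorizer(),
    RandomForestClassifier(n_estimators=50, random_state=0),
)
model.fit(reviews, labels)

# Classify unseen text the way the Flask route would on form submission.
prediction = model.predict(["I heard the detective turns out to be the killer."])[0]
```

Wrapping the whole thing in a single pipeline is also what makes the Flask integration fix described above easy: one object, one `predict` call.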
Don't Spoil on Me
Tired of accidentally reading spoilers for a good TV series you're watching? Don't want to see a movie review that ruins the big plot twist? Try out our spoiler alerter to prevent any such problems!
['Dhruv Patel', 'Arvind Anand', 'William Sheppard', 'Keshav Ailaney', 'kailaney']
[]
['app-engine', 'css', 'flask', 'google-cloud', 'html', 'javascript', 'keras', 'nltk', 'python', 'sklearn']
107
10,371
https://devpost.com/software/spotichat-frontend
login page login logo - SpotiChat Chat Room Chat Room - new user joined To run locally, run npm install and then npm start. To deploy, run npm build and then gcloud app deploy. Built With css html javascript Try it out github.com spotichat.ue.r.appspot.com
SpotiChat
Interact with music and new friends, all on SpotiChat!
['Xiangyi Li', 'Rukai Zhao', 'Yangyi XU', 'javis-song']
[]
['css', 'html', 'javascript']
108
10,371
https://devpost.com/software/foodlocker-mcynbz
Inspiration In Chicago, we visited a restaurant called Wow Bao. This restaurant used robots to prepare each order and then place it into one of the cubbies. This system seemed perfect for today's circumstances, with the demand for contactless/curbside pickup increasing. Issues such as limited parking spots and increased wait times also hinder the flow of business for restaurants and stores. With FoodLocker, we plan to solve these problems and help customers by: enabling germ-free transactions/interactions, reducing the likelihood of abandoned carts and the risk of fraud for e-commerce, improving delivery/curbside pickup for NCR's merchants and consumers, and helping handle surges in online orders. What it does Restaurants and stores can manage food/item lockers into which they place finished orders. With the application, the store/restaurant can view various types of info, including: the order information, the customer name, the locker the items are currently placed in, the list of items, the quantity of each item, the status of the locker, and the time elapsed since the order was placed in the locker. When the restaurant or store places the items in a locker, the locker sends a notification to the customer-side application to say that the order is done and ready to be picked up. The customer then uses the mobile application to check in and open the locker. Once the order is taken, the restaurant/store-side application clears that locker's information and resets its status. How we built it We used React.js for the frontend UI and NCR's Catalog and Order APIs to retrieve the information we needed. We worked on using the Catalog API to add items with prices in the stores, so that the app can calculate the total price of the cart before the customer checks out, and on using the Order API to get the order from the customer and send it to the food lockers.
Challenges we ran into For this project, it was some members' first time using React.js. It was a slow start getting them acquainted with the framework, but they were happy and excited about learning a new technology. The API was a challenging part of the project because we are fairly new to React JS, and we spent a good amount of time learning and debugging the application. We were able to get the API connected to the application, but we ran out of time to complete our project. Accomplishments that we're proud of We were able to develop a demo despite being relatively new to React.js and to connecting APIs to it. What we learned We learned a lot as a team. This was the first time some of us had worked with React. We learned more about APIs and Postman as well. What's next for FoodLocker We are going to further iterate on this idea and refine it. In addition, we are going to start implementing hardware as well. Built With css javascript ncr react Try it out github.com
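The locker state the description lists (order info, customer, items, status, elapsed time, and the clear-and-reset step on pickup) can be sketched as follows. This is an illustrative sketch, not the team's React/NCR code; the `Locker` class and its method names are invented for the example.

```python
# Illustrative sketch of the locker lifecycle described above: place a
# finished order, track status and elapsed time, then clear on pickup.
import time

class Locker:
    def __init__(self, locker_id):
        self.locker_id = locker_id
        self.status = "empty"
        self.order = None
        self.placed_at = None

    def place_order(self, order):
        """Store a finished order; in the real app this step would also
        notify the customer-side application that the order is ready."""
        self.order = order
        self.status = "occupied"
        self.placed_at = time.time()

    def elapsed_minutes(self):
        """Time since the order was placed in the locker."""
        if self.placed_at is None:
            return 0.0
        return (time.time() - self.placed_at) / 60

    def pick_up(self):
        """Customer checks in: hand over the order, then clear the
        locker's information and reset its status."""
        order, self.order = self.order, None
        self.status = "empty"
        self.placed_at = None
        return order

locker = Locker(7)
locker.place_order({"customer": "Khoi", "items": {"bao": 2}})
picked = locker.pick_up()
```

The store-side dashboard would render one such record per locker.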
FoodLocker
Contact-less curbside pickup option for local stores
['Khoi Le', 'Noah Le', 'Nitharjan Kanthasamy', 'Hang Qiu']
[]
['css', 'javascript', 'ncr', 'react']
109
10,371
https://devpost.com/software/med-viewer
Inspiration The coronavirus pandemic shows that the dissemination of useful, reliable, and pertinent articles and papers is difficult but extremely crucial to medical research. The medical field is very broad, and we would like users, whether they are other medical researchers, doctors, or even the press, to get a good sense of what the current state-of-the-art research is. Currently, medical preprint servers such as medRxiv have simple but barebones interfaces for finding the latest and most interesting research papers. We would like to improve upon this system to give researchers and the public a richer interface for finding articles. What it does In this project, we introduce a personalized recommendation system for medical research articles, especially for COVID-19 research. Users can explore our site and mark interesting articles on the homepage using the favorites button. Then, after marking certain articles of interest, the user may visit the recommendations page to simulate the arrival of new articles. These recommendations suggest similar and relevant papers, selected from a large pool of articles. They are dynamic and are based on two machine learning algorithms: TF-IDF and SVM. Periodically, the server retrains each individual's recommendation model to reflect the individual's new favorites and preferences. How we built it We started out by working with the medRxiv site, a medical preprint server, to query article metadata and PDFs. We built a system to automate the downloads of these files and convert the research articles (PDF) to text files for downstream processing. Next, we worked on a system to convert the research articles, in text form, into vectors. This algorithm, TF-IDF, creates a content vector for each document, which provides a compressed summary of the article based on word frequencies. By converting text into a numerical vector, we can run standard machine learning algorithms on these new features.
Then, for each user, we trained a Support Vector Machine (SVM) classifier based on the user's favorite articles to predict and suggest new potential “favorites” to the user. The SVM classifier uses a TF-IDF representation of each article to make a prediction and outputs the probability of an article being a potential “favorite”. We created a simple frontend to expose our content recommendation system to the end user. This frontend includes a Flask server, which initiates the training and the evaluation of the SVMs. For our database, we used Airtable as a simple table database. When the user selects their favorites, these articles are added to the user's corresponding row in the Airtable. When the user requests recommendations, we retrain the user's model and generate new results using the SVM. Challenges we ran into We ran into many challenges that we had not anticipated. In the beginning, we had difficulty working with the medRxiv API and the extraction of PDFs from the site. We put together a temporary solution, which involved scraping the medRxiv site for a preliminary corpus of PDFs. We learned a lot about using BeautifulSoup and made good use of the Python requests library. We obtained a total of 400 PDFs for the final prototype. We also ran into many issues with setting up the TF-IDF and SVM system. At first, we did not clean the PDF-to-text files to remove numerical tables and other unicode symbols. This caused our vectorizer to include tokens that were not English words but rather numbers and floats. Since these characters don't carry any semantic meaning, we removed them to prevent them from skewing our content vectors. We solved this by integrating a regular expression to filter out non-alphabetical characters. For the SVM, we had trouble integrating our database with the vector dimensions it requires. To solve this problem, we had to create a pipeline to effectively translate to and from the model's inputs and outputs.
We also had to tweak the SVM several times. One parameter we set was class weighting. Since there are far fewer favorites than non-favorites in our dataset, we had to make sure that the favorites are given more weight in the model. Before the adjustment, our model would never mark any item as a favorite in our testing dataset. Accomplishments that we're proud of We are proud of integrating our machine learning models with a frontend. Zach and Rajen, more familiar with data science, learned a lot about Flask and front-end development. Simon, who developed the Flask backend, learned about the various machine learning models used in content recommendation. Together as a team, we were able to deliver a working prototype that can be presented to an end user. This project required a lot of effective communication and collaboration: we extensively used GitHub branches to share code and datasets. Overall, we had a great experience working together and learned a lot as a team. What's next for Med Viewer We would like to add the following features: Comments Comments on different research papers can liven up the database with discussions, but can also be used to identify trending articles. Articles with a high engagement factor can be given preference over other articles. Collaborative Filtering So far, we only made recommendations based on each user's own preferences, which is a form of content-based filtering. With collaborative filtering, we look not only at each user's preferences but also at those of similar users. This allows us to “cross-recommend” between different users, which allows for more novel and meaningful recommendations. Built With airtable flask medrxiv python scikit-learn Try it out github.com
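The pipeline described above (regex-clean the article text, build TF-IDF vectors, then train a per-user SVM that weights the rare "favorite" class and outputs probabilities) can be sketched like this. The article snippets, the `clean` helper, and all names are illustrative stand-ins for the project's real data and code.

```python
# Rough sketch of the recommendation pipeline: regex cleaning, TF-IDF
# vectorization, and an SVM with class weighting and probability output.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

def clean(text):
    # Drop non-alphabetical characters (numbers, unicode table debris),
    # as described in the challenges section.
    return re.sub(r"[^A-Za-z\s]", " ", text).lower()

articles = [
    "covid vaccine trial results in elderly patients",
    "sars cov two spike protein structure analysis",
    "covid antibody levels after mild infection",
    "new imaging technique for cardiac surgery",
    "orthopedic implant wear over ten years",
    "dietary fiber and gut microbiome diversity",
]
favorites = [1, 1, 1, 0, 0, 0]  # this user's past favorites

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(clean(a) for a in articles)

# class_weight="balanced" upweights the minority "favorite" class, echoing
# the class-weighting fix the team describes.
svm = SVC(probability=True, class_weight="balanced", random_state=0)
svm.fit(X, favorites)

new_article = vectorizer.transform([clean("covid antibody response study 2020")])
p_favorite = svm.predict_proba(new_article)[0][1]
```

New articles with high `p_favorite` would be surfaced on the recommendations page.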
Med Viewer
A recommendation engine to suggest personalized medical research articles
['Simon Chervenak', 'Zach Zhao', 'Rajen Dey']
[]
['airtable', 'flask', 'medrxiv', 'python', 'scikit-learn']
110
10,371
https://devpost.com/software/news-ranking
Similarity table HackGT 7 Project We were inspired by problematic articles and ads prefacing the November election. This project uses Beautiful Soup to scrape political headlines from popular UK news sites. It then uses TensorFlow's Universal Sentence Encoder to compare them, and a clustering algorithm to cluster the news sites based on similarity. Our biggest challenge was that we had difficulty deciding what an optimal system of news distribution would be. We didn't know the ideal outcome of our algorithm until late in the hackathon. The question of how to optimally distribute news in a democracy is an incredibly difficult one, and our answer still isn't nuanced enough. Built With beautiful-soup numpy python tensorflow Try it out github.com
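The compare-and-cluster step above can be sketched as follows. In the real project the vectors come from TensorFlow's Universal Sentence Encoder; here hand-written stand-in embeddings are used so the similarity and clustering logic is visible, and the 0.9 threshold and greedy scheme are assumptions, not the team's actual algorithm.

```python
# Minimal sketch of headline similarity + clustering. The embeddings are
# placeholders for Universal Sentence Encoder output.
import numpy as np

headlines = ["PM defends budget", "Budget defended by PM", "Storm hits coast"]
embeddings = np.array([
    [0.9, 0.1, 0.0],
    [0.85, 0.15, 0.05],
    [0.0, 0.2, 0.95],
])

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Greedy threshold clustering: a headline joins the first cluster whose
# representative vector it is sufficiently similar to.
clusters = []
for vec, text in zip(embeddings, headlines):
    for cluster in clusters:
        if cosine(vec, cluster["rep"]) > 0.9:
            cluster["items"].append(text)
            break
    else:
        clusters.append({"rep": vec, "items": [text]})
```

With real sentence embeddings, paraphrased headlines from different sites land in the same cluster, which is what lets the project compare coverage across outlets.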
Norenue News
An algorithm to promote diversity and accuracy within news
['Albert Lu', 'Avni Tripathi', 'Anna Carow']
[]
['beautiful-soup', 'numpy', 'python', 'tensorflow']
111
10,371
https://devpost.com/software/ufcdata
UFCData UFC Web scraper, database, and API. Scrapes statistics from http://ufcstats.com into an MSSQL database. About As most mainstream sports fully embrace analytics, the UFC seems to lag behind. One reason for this could be the lack of public tools that let you easily pull data. The UFC API provides a clean UFC dataset in a friendly JSON format. API Methods Fight Cards GET /fightcards Returns all fight cards and IDs GET /fightcards/{fightCardID} Returns fight card(s) for the supplied ID(s); multiple IDs can be separated by commas GET /fightcards/fights/{fightCardID} Returns the fights on a card for the supplied ID Fight GET /fights Returns all fights and IDs GET /fights/{fightID} Returns fight info for the supplied ID(s); multiple IDs can be separated by commas FightStats GET /fightstats/{fightID} Returns fight stats for the supplied ID Built With .net c# mssql Try it out github.com
UFCData
UFC Web scraper, database, and API to make a clean and accessible dataset for the UFC
['Chris Turner']
[]
['.net', 'c#', 'mssql']
112
10,371
https://devpost.com/software/searchlight-ofilrp
Design Mockup of Account on Figma Design Mockup of the Alert System on Figma Map Screenshot Map Information Card Screenshot Transactions Screenshot Report Fraud Screenshot Accounts Screenshot Login Screenshot Sign Up Screenshot Inspiration When we went to the “Getting to know NCR APIs” workshop, we were inspired to create an app that addressed some of the problems faced by NCR and its customers using the tools NCR offered. We wanted to reduce the risk of fraud and the number of losses for banks, which were the two NCR problem statements we were targeting. What it does SearchLight crowdsources user reports of stolen credit cards and the last place cardholders went before they began to receive fraudulent transactions. This way, others will be able to see exactly which places put them at the highest risk of getting their information stolen. How we built it We used NCR's APIs to pull relevant information from the Digital Banking API, such as the user's account information and recent transactions. We then used the DataStax Cassandra database to store information about the fraudulent transactions and built the whole application in React Native. Mockups were made using the NCR design system in Figma. Challenges we ran into We had a lot of issues at first getting the NCR APIs to work just right for our use case, and some of us hadn't used React Native as much as others, so there was a bit of a learning curve in getting certain features working. Accomplishments that we're proud of We made a fully functional app! What we learned A ton about the different technologies used. What's next for SearchLight We hope to see this project taken further and eventually deployed in an urban setting, similarly to the Citizen app, to see if it could really be impactful. Built With cassandra datastax figma javascript nosql react react-native sql Try it out github.com www.figma.com
SearchLight
Conquer Scammers By Crowdsourcing Credit Card Skimmer Locations
['Fulton Garcia', 'Stephanie Luo', 'Dylan Skelly', 'Robert Boyd III']
[]
['cassandra', 'datastax', 'figma', 'javascript', 'nosql', 'react', 'react-native', 'sql']
113
10,371
https://devpost.com/software/try-buy
Inspiration Efficient, contactless, and personal. We've entered the new normal, where social distancing is a must. But how do we keep in-person experiences such as trying on and buying new clothes? Try & Buy makes trying on clothes easier than ever and allows customers to reserve a dressing room that already has all their pre-chosen outfits/accessories locked up, accessible only via their specially generated QR code! We wanted to make transactions as germ-free as possible, give stores a better handle on their business, and even reduce theft/shoplifting, all while maintaining a strong sales experience with seamless interactions. What it does Try & Buy is powered by NCR's Selling Engine, which allows seamless transactions between customers and stores. The app stores the user's payment methods and lets the user choose whichever clothes/outfits they please. The UI is based on the template that NCR provides in their elegant and user-friendly Design System. Sales representatives are available through live chat to assist customers if necessary and are in charge of getting the reserved dressing room cleaned and ready with all the clothes the customer decided to try on. This limits the amount of time a customer spends walking around the store, touching many different pieces of clothing and risking increased COVID transmission, while maintaining the personal experience of assistance and utilizing the sales representative's knowledge. Customers can simply walk in at their reservation time to their reserved dressing room, which is unlocked by a specially generated QR code. They can try on their clothes/outfits as normal and simply leave with what they want, without having to interact with any sales representative: a completely contactless transaction!
Afterwards, sales representatives clean and check the room to see what was left, compare it to the initial list of items the customer wanted to try, and charge accordingly, directly through the app thanks to NCR technology. No longer do you have to wait in line to pay or to try on new clothes, or be bothered by a sales representative when you just want to be alone. And no more arriving at a store and realizing that they're out of stock, even though their website said they weren't! As for the store, sales representatives no longer have to worry about customers contaminating all the clothes in the store, or about theft in the dressing room area. It's all contained within a secure dressing room area. A win-win with Try & Buy. How we built it SwiftUI, NCR API, NCR Design System. Challenges we ran into Xcode required us all to update, which took around 5 hours, since we had to update our OS. We were also getting used to Swift (which none of us had used) and ran into several difficult bugs. The API was also very new to us, but we visited the help desk plenty of times :) Accomplishments that we're proud of We learned a lot in a short period of time. We also took advantage of the time zone difference and were very efficient at handing off work to one another. We also love how our final project looks and are excited to do even more with it. What we learned A new language, new platforms, new APIs. What's next for Try & Buy We really want to explore NCR's APIs further. There were so many to choose from, and they are very comprehensive. We hope to continue implementing different features and integrating the API further. Built With figma gsuite ncrdesignsystem ncrsellingengine postman swift swiftui xcode Try it out github.com docs.google.com
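The "specially generated QR code" that unlocks a reserved dressing room can be sketched as a signed token, shown below in Python for brevity (the project itself is SwiftUI). Everything here is an assumption for illustration: the signing scheme, the demo secret, and the function names are invented, and rendering the token as an actual QR image is omitted.

```python
# Hypothetical sketch: the store issues a signed reservation token (which
# would be encoded into the QR code), and the dressing-room lock verifies
# it before opening.
import hashlib
import hmac

STORE_SECRET = b"demo-secret"  # illustrative; real key management omitted

def issue_code(reservation_id, room):
    """Create the string that would be encoded into the customer's QR."""
    payload = f"{reservation_id}:{room}"
    sig = hmac.new(STORE_SECRET, payload.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{payload}:{sig}"

def unlock(code, room):
    """Lock-side check: valid signature and matching room."""
    reservation_id, code_room, sig = code.rsplit(":", 2)
    expected = hmac.new(STORE_SECRET, f"{reservation_id}:{code_room}".encode(),
                        hashlib.sha256).hexdigest()[:16]
    return code_room == room and hmac.compare_digest(sig, expected)

code = issue_code("R-1042", "room-3")
ok = unlock(code, "room-3")    # correct room: opens
bad = unlock(code, "room-4")   # wrong room: stays locked
```

Signing the payload means a customer cannot forge a code for a different room, which matches the theft-reduction goal above.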
Try & Buy
Efficient, contactless, and personal. Let's elevate your shopping experience!
['Yulai Tsui', 'Katherine Choi', 'Christopher Ballenger', 'Ali Kazmi']
[]
['figma', 'gsuite', 'ncrdesignsystem', 'ncrsellingengine', 'postman', 'swift', 'swiftui', 'xcode']
114
10,371
https://devpost.com/software/v-assist
The app is under development. Built With dart firebase flutter google-cloud tensorflow-lite
V-Assist
E
['Akanksha Singh']
[]
['dart', 'firebase', 'flutter', 'google-cloud', 'tensorflow-lite']
115
10,371
https://devpost.com/software/vitalizer-kinoxf
Landing Page Vitals Input Form Analysis and Result Know More about the Disease Nearby Doctors Inspiration One of the biggest inspirations for this project was the entire scenario revolving around getting health check-ups and reports. Nowadays, it has become increasingly difficult and expensive to get regular, accurate checkups. We wanted a product that would generate medical reports with minimal requirements and, most of all, would be free of cost for the users. What it does With our product, users can check the probability of having a rather life-threatening disease by just checking their vitals at home. Our AI-curated model carefully analyzes user-fed data and gives a probable prediction of the severity of the ailment. We generate a tailored report for users based on vitals that can easily be measured in the comfort of their homes. The simple parameters that we consider are age, gender, height, weight, body temperature, heart rate, and blood pressure. This way we let users know whether they really need to rush to the hospital or are just under the weather. Our model gives the user a probable score of having diseases, which, combined with their symptoms, can help them decide whether to go to the hospital in these tough times. To help with the decision making, we also provide thorough information about the disease for user satisfaction, plus an exhaustive list of symptoms to cross-check your situation. We believe in providing the smoothest user experience, as our target audience varies from the curious kid to the most elderly. Hence, our web app follows a simple UI and easy-to-navigate UX. How we built it We used the clinical dataset from MIMIC-II. On this dataset, we predict the probability of having serious, life-threatening diseases on the basis of vitals such as height, weight, blood pressure, heart rate, and body temperature.
Since the dataset was tricky to handle, with a lot of missing data as well as different formats, we used advanced feature engineering before training a support vector machine classification algorithm. We then use the confidence score to predict the probability of having a certain disease. For feature engineering we used NumPy and pandas, and for model training we used scikit-learn. We made our own API using Flask to serve these predictions. For disease information and extensive symptom recall, we used APIMEDIC's API. For the nearby doctor search, we used BetterDoctor's API alongside Google Maps. Coming to the front end, we linked the above-mentioned APIs into our web interface. We made a simple, easy-to-understand UI to take input from the user about his or her vitals and then output the predictions. We made sure the UI/UX is very neat and understandable for a vast range of users. We have also added a light/dark toggle :) Accomplishments that we're proud of Support-vector-based prediction of the probability of disease. Average accuracy: 88.40% on the MIMIC-II dataset. Making accurate predictions from minimal vitals using advanced feature engineering. Creating our own API with this custom model. An attractive dashboard display. An easy-to-navigate user experience. Data fed anonymously to maintain the privacy of medical-related information. Locating nearby health consultants/doctors. What's next for Vitalizer We want to make predictions for more diseases in the future and expand to getting nearby doctor info for more regions. Built With api css3 flask html5 javascript machine-learning python react scikit-learn Try it out github.com vitalizer.tech
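The vitals-to-probability step described above (feature engineering, an SVM, and a confidence score for the positive class) can be sketched like this. The numbers are synthetic stand-ins, not MIMIC-II data, and the scaler-plus-SVC pipeline is an assumed implementation of what the team describes.

```python
# Illustrative sketch (synthetic vitals, not the MIMIC-II dataset): scale
# the features, fit an SVM, and read off a disease-risk confidence score.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Features: [age, height_cm, weight_kg, temp_c, heart_rate, systolic_bp]
vitals = [
    [70, 170, 80, 39.2, 120, 160],
    [65, 160, 90, 38.8, 115, 155],
    [72, 175, 85, 39.5, 125, 165],
    [25, 180, 75, 36.6, 70, 118],
    [30, 165, 60, 36.8, 72, 120],
    [22, 172, 68, 36.5, 65, 115],
]
severe = [1, 1, 1, 0, 0, 0]  # toy labels: severe vs. not severe

# Scaling matters because the vitals live on very different ranges.
model = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
model.fit(vitals, severe)

# Probability of the "severe" class for a new set of home-measured vitals.
risk = model.predict_proba([[68, 168, 82, 39.0, 118, 158]])[0][1]
```

A Flask endpoint would accept the form inputs, run this prediction, and return `risk` in the generated report.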
Vitalizer
An easy way to visualize your vitals
['Chirag Dugar', 'Sambhav Jain', 'Iishi Patel']
[]
['api', 'css3', 'flask', 'html5', 'javascript', 'machine-learning', 'python', 'react', 'scikit-learn']
116
10,371
https://devpost.com/software/scheduler-65hcfb
Inspiration We were inspired by the NCR subproblem of helping restaurants manage a surge in demand. We wanted to do our part by building a web application that helps restaurants manage the increasing burden as well as minimize their costs. What it does The app allows the user to choose from a variety of options on the menu. It then allows the user to set a time to come to the restaurant. Depending on how many people the restaurant can handle, the app books an order if space is available; otherwise it prompts the user to select another time. How we built it We built the frontend using React and Redux, and used Reactstrap to style the elements. The backend was built using Node.js, Express, and Firebase. We integrated the backend with the NCR Business Selling Point API to handle orders and changes to the menu. Challenges we ran into Integrating the application with the NCR API proved to be difficult. In the end, we managed to access the endpoints and integrate them into our application. What we learned We collectively learned a lot about full-stack web development, React, and REST APIs. Built With firebase node.js react Try it out github.com
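The booking rule described above (accept an order for a time slot only while the restaurant still has capacity in that slot) can be sketched as follows. The project's backend is Node.js; this is a language-neutral Python sketch with an invented capacity constant.

```python
# Minimal sketch of capacity-aware booking: accept while the slot has
# room, otherwise ask the customer to pick another time.
from collections import defaultdict

CAPACITY = 3  # hypothetical: parties the restaurant can handle per slot
bookings = defaultdict(list)

def book(slot, customer):
    """Book if space is available; otherwise prompt for another time."""
    if len(bookings[slot]) < CAPACITY:
        bookings[slot].append(customer)
        return "booked"
    return "please select another time"

results = [book("18:00", name) for name in ["Ana", "Ben", "Cam", "Dee"]]
```

The fourth customer is turned away and would be shown other open slots.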
Scheduler
To help restaurants manage the surge in demand during the pandemic.
['Sriram Sathish', 'XanderGardner Gardner', 'Edmund Xin', 'Kaelen Saythongkham']
[]
['firebase', 'node.js', 'react']
117
10,371
https://devpost.com/software/shape-our-space
Shape Our Space - Home Using the app Continuous Integration Notification Workflow Inspiration We have all used video calling software these past few months, but it is better suited to presentations than to social functions. Our project emulates the flow of real-life social situations, allowing you to move from conversation to conversation as needed. What it does Shape Our Space is a peer-to-peer video call web application for all sorts of social events, from birthday parties to career fairs. You control a token that can move around a 2D virtual space, and you can video chat with the other people near you. If you want to move to a different conversation, you can easily move your token away from one group and join another group's conversation in a different area. And if you want to come back, you can easily walk back to a previous group! You can also set up “Speaker” sections where you can broadcast your video/audio during a presentation without being distracted by the audience's video/audio. These spaces can be thought of as hills: the higher you are, the more people you can speak to. Just like speaking on a hill, while you can broadcast yourself outwards, it is difficult to hear those below you. Since these spaces can overlap, it is possible to create speaker sections and private rooms inside a single lobby. How I built it HTML, CSS, and JavaScript make up the front end. We used Flask for the back end, JavaScript and WebRTC (PeerJS) for the peer-to-peer communication, and EaselJS for the graphics. We set up Python workflows on GitHub, along with continuous integration and notifications via Slack. Finally, we deployed our project with Heroku. Challenges I ran into Properly engaging and disengaging the video calls was challenging due to browser incompatibilities. Being able to dynamically create video calls slowed us down, since the public server for PeerJS broke while we were using it.
It was extremely hard to debug since, in the end, it wasn't our fault. We fixed this by hosting PeerJS ourselves on a separate server. We had difficulty deploying on Google Cloud via a GitHub Actions workflow. Design was difficult, since we had to create a responsive application that works well and looks good on multiple screen sizes. Accomplishments that I'm proud of Setting up a video call platform with a dynamic number of people in a given call at a time is a very cool thing to interact with, and we are proud that we were able to build it! What I learned WebRTC, PeerJS, Flask, Heroku deployment, EaselJS, Google Cloud Platform What's next for Shape Our Space As far as the general feel of Shape Our Space goes, we would like to add more visual customization, allow some users to be moderators, create dynamic circle sizes, and create a more visually pleasing look and feel. As far as the mechanics are concerned, we would have really liked to add widgets to the platform. Our original vision for the app had each circle space with configurable features such as presentation mode, whiteboard mode, queue mode, and more, which would allow for custom types of video calls and interactions. We also wanted to be able to embed external apps into the platform to help users communicate. We believe these changes would make Shape Our Space a platform that we ourselves would use for video communication. Being able to integrate video calls with a real virtual space and custom interactions between users would make Shape Our Space a real competitor in the video streaming market. Built With css3 easeljs flask gcp heroku html5 javascript jquery peerjs python socket.io webrtc Try it out shape-our-space.herokuapp.com github.com
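The spatial rule behind the app (your token video-chats with whoever is within range in the 2D space) can be sketched as follows. The project's logic lives in JavaScript/PeerJS; this Python sketch, the positions, and the `HEARING_RANGE` constant are all illustrative.

```python
# Sketch of proximity-based conversation grouping: two tokens share a
# video call when they are within hearing range in the 2D space.
import math

HEARING_RANGE = 100.0  # pixels; hypothetical constant

positions = {"Biraj": (10, 10), "Harrison": (60, 40), "Jacky": (500, 500)}

def conversation_partners(user):
    """Everyone close enough to `user` to share a call with them."""
    ux, uy = positions[user]
    return sorted(
        other for other, (x, y) in positions.items()
        if other != user and math.dist((ux, uy), (x, y)) <= HEARING_RANGE
    )

partners = conversation_partners("Biraj")
```

Moving a token simply updates its entry in `positions`; re-evaluating the partners set is what lets users drift between conversations.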
Shape Our Space
Virtual online space for video calls where you can seamlessly move from conversation to conversation!
['Biraj Dahal', 'Harrison Hall', 'Anshul Choudhary', 'Jacky Wong']
[]
['css3', 'easeljs', 'flask', 'gcp', 'heroku', 'html5', 'javascript', 'jquery', 'peerjs', 'python', 'socket.io', 'webrtc']
118
10,371
https://devpost.com/software/sparky-bvfc91
Login Screen Home Screen Patient Screen Family Screen Upload Screen Upload Screen (taking pictures) Upload Screen (cropping and uploading) Inspiration As a few of our members have worked with Alzheimer’s patients, we know how hard each day can be for both the patient and their family. Seeing this grief first hand has made us really passionate about creating a way to make life easier for those losing their memories. Alzheimer’s is detrimental to people’s memories, so we wanted to focus on helping patients and their families stay connected and fill in the memory gaps. What it does We decided to create an app that allows family members to upload photos identifying and reminding patients of their loved ones, even when they can’t constantly be around. This app allows any family member or friend to upload a picture that can be seen by the patient and other members with access. Think of it as a family-based social media app that allows Alzheimer’s patients to digitally store their memories, be regularly reminded of each memory, and organize short- and long-term memories so they can always reminisce about those they love. How we built it We built the app using Google's Flutter framework and Google's Firebase database service. Challenges we ran into This was the first time any of us had used Flutter, so there was a steep learning curve involved. Integrating Firebase with the application also presented some challenges. What we learned We learned how to build a multi-platform, full-stack mobile application. What's next for Sparky Polish the application with alert dialogs, more image interactivity, and improved design. Built With android-studio dart firebase flutter nosql Try it out github.com
Sparky
Sparky is an app that helps bridge the gap between Alzheimer’s patients and their memories through visual sparks. The app helps patients stay close to loved ones.
['Abhijat Chauhan', 'Ethan Mendes', 'Jose Castejon', 'EmilyAL001']
[]
['android-studio', 'dart', 'firebase', 'flutter', 'nosql']
119
10,371
https://devpost.com/software/listify
Edit Receipt (edit scanned elements before they are added to your pantry) Your generated shopping list (items are labeled if expired or about to expire) Edit item in the Pantry Display your Pantry Search for Recipes to add to your Collection View all of Your Saved Recipes Main Screen Inspiration We were inspired because whenever we go shopping for groceries, we always forget a couple of things to buy. We thought this project would be a fun, simple tool that would be useful to a lot of people. What it does The app generates a shopping list based on items in your pantry as well as recipes you have saved. The shopping list is generated from expiration dates inputted by the user. The pantry is built by the user uploading pictures of receipts, which are then parsed to get the items the user bought. The user can also find recipes in the app and save them to their collection. How we built it We used Swift for the frontend and Python for the backend. We stored the pantry and recipes in different databases to be accessed accordingly in the frontend. We also used different APIs to read the receipts and display recipes. Challenges we ran into The main challenge was utilizing the APIs. The requests were constrained by developer plans, so we had to test using mock data. MySQL was also caching, or a commit we were making wasn't going through all of the time, so there was a disconnect between the current running instance and the cache we had. Accomplishments that we're proud of We are proud that we were able to get the receipt API working, allowing users to simply take a picture of their receipt to upload to the pantry. We were also able to display the contents of the parsed receipt so that the user could easily read what they bought. What we learned We got to learn more about Swift, API calls, and making endpoints. Overall, we got to practice our development skills more.
What's next for Listify We want to be able to suggest other items to buy based on past receipts and common recipes. We want to add the capability to scan more recipes (we were throttled by the API we were using). Finally, we want to figure out a way to get more accurate UPC readings from receipts, as well as more accurate readings of abbreviations. We want to be able to handle a more exhaustive search, and to add tags so that similar items (milk, whole milk, etc.) are grouped together. Built With amazon-web-services edamam flask heroku mysql ocr.space python s3 spoonacular swift Try it out github.gatech.edu
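The list-generation rule described above (pantry items that are expired or about to expire go on the shopping list, labeled accordingly) can be sketched as follows. The pantry contents, the fixed "today", and the three-day window are illustrative assumptions.

```python
# Sketch of expiry-driven shopping-list generation, mirroring the labels
# shown in the app's screenshots ("expired" / "expiring soon").
from datetime import date, timedelta

SOON = timedelta(days=3)           # hypothetical "about to expire" window
today = date(2020, 10, 18)         # fixed for the example

pantry = {
    "milk": date(2020, 10, 17),
    "eggs": date(2020, 10, 20),
    "rice": date(2021, 4, 1),
}

def shopping_list(pantry, today):
    items = {}
    for name, expires in pantry.items():
        if expires < today:
            items[name] = "expired"
        elif expires - today <= SOON:
            items[name] = "expiring soon"
    return items

needed = shopping_list(pantry, today)
```

Items parsed from a new receipt would simply be merged into `pantry` with user-entered expiration dates.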
Listify
Listify generates your shopping lists for you! Using current items in your pantry, pictures of receipts, and current recipes, we update your shopping list based on items that are about to expire.
['Nitya Tarakad', 'Jessica Bishop']
[]
['amazon-web-services', 'edamam', 'flask', 'heroku', 'mysql', 'ocr.space', 'python', 's3', 'spoonacular', 'swift']
120
10,371
https://devpost.com/software/smart-calendar-ymo2b3
Emerging Track

Inspiration
Our inspiration came from wanting to help students, and even adults, who are struggling to continue their education while having to stay home as a result of the pandemic. The idea came from struggles that we face as students here at Georgia Tech.

What it does
The calendar takes inputs for the type of assignment and its due date and creates a flexible schedule for the user to follow, so studying and assignments are completed on time.

How we built it
We created the frontend with ReactJS and Bootstrap components. The backend consists of two Flask APIs: one controls the logic, and one controls a MySQL database hosted on Google Cloud Platform.

Challenges we ran into
We were not able to completely link the two sides of our project (frontend and backend) due to time constraints; we had trouble calling our APIs from the React frontend.

Accomplishments that we're proud of
We have one fully functional API, working Python methods, and a good-looking frontend structure.

What we learned
As a team, we increased our knowledge of MySQL, Flask, Python, React, JavaScript, and GitHub.

What's next for Smart Calendar
We would first like to spend more time completing the project. After that, we would like to add more features, based on research, to create individually tailored study plans for students informed by past experience, making the app more efficient for the end user.

Built With: flask, github, javascript, mysql, python, react
Try it out: github.com
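The scheduling idea above — take an assignment type and due date, then produce a flexible plan — can be sketched by spreading each assignment's estimated effort evenly over the days before its deadline. The per-type effort estimates and data shapes here are hypothetical, not taken from the team's backend:

```python
from datetime import date, timedelta

# Rough hours of work assumed per assignment type (hypothetical values).
EFFORT_HOURS = {"homework": 3, "quiz": 2, "exam": 8, "project": 12}

def build_schedule(assignments, today):
    """Spread each assignment's estimated hours evenly across the
    days remaining before its due date."""
    schedule = {}  # maps a date to a list of (assignment name, hours)
    for a in assignments:
        days_left = max((a["due"] - today).days, 1)
        hours_per_day = EFFORT_HOURS[a["type"]] / days_left
        for offset in range(days_left):
            day = today + timedelta(days=offset)
            schedule.setdefault(day, []).append((a["name"], round(hours_per_day, 2)))
    return schedule
```

Because the work is recomputed from whatever assignments remain, the schedule stays flexible: adding a new assignment or slipping a day just redistributes the remaining hours.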
Smart Calendar
We have created a smart calendar that helps users optimize their time while studying, completing assignments, or doing anything else. This project is built using React, Python, Flask, and MySQL.
['Nandha Sundaravadivel', 'sranganath02 Ranganath', 'Gaurav Garre', 'Harysh Magesh']
[]
['flask', 'github', 'javascript', 'mysql', 'python', 'react']
121
10,371
https://devpost.com/software/covicare
Screenshot: CoviCare technology stack.

Inspiration
As the pandemic grows, patients worry about whether they can get a bed space, PPE kits, food, and so on. One major concern about the impact of COVID-19 on hospitals is ICU capacity: it is no secret that at the center of the pandemic there simply aren't enough ICU beds to support the number of cases. At present, real-time data on hospital bed availability is not accessible to the public, so finding a hospital bed is difficult, and tending to service requests is even more challenging. Every second counts! Hassle-free treatment and timely self-assessment can give users confidence and build trust in the community, which will make the rest of 2020 easier for us. The main aim of our app is to act fast during these times.

What it does
The self-assessment is a questionnaire that helps identify whether the user has COVID. It includes uploading an X-ray scan of the lungs to check for pneumonia-like symptoms; the Azure customvision.ai service was used to train a machine learning model on a Kaggle dataset of X-ray images. After the assessment, users can visit a nearby testing center to confirm. Once a user tests COVID positive, they can act fast by registering a bed space for themselves at a nearby hospital: we created an API that finds nearby hospitals with available bed space and their charges, based on the user's location. After a bed is allocated, the user can request additional resources such as food, PPE kits, and water from the app itself. Hospital management consolidates the requests and tends to them in the order they arrive. This keeps hospital management organized and helps staff and patients have their needs met while practicing social distancing, since every request goes through the app. Additionally, to help both patients and hospital staff maintain social distancing, we made use of Google SODAR, which renders an augmented-reality two-meter-radius ring around the user.
With this feature, staff and patients can move safely inside the hospital. By following social distancing, acting fast, and being compassionate to others, we can fight against COVID.

How we built it
Android: main app
Express.js, Node.js: microservices deployed on Azure as an app service
Kaggle datasets: X-ray images and hospital-bed information
Microsoft SQL Server Management Studio: to deploy the datasets onto the Azure backend
Google SODAR: AR functionality for following social distancing
Microsoft Azure: backend cloud for the application

Challenges we ran into
Integrating Android with the machine learning model developed in customvision.ai, and achieving ~98% accuracy with the X-ray classification model.

Accomplishments that we're proud of
Real-time classification of X-ray images as COVID positive or negative for better judgement; integration with Google SODAR to enable safe navigation within the hospital; making the app intuitive.

What we learned
Azure Cognitive Services, as part of customvision.ai, which helped us understand how image classification works and how to deploy an endpoint to it so it can be used as a microservice.

What's next for CoviCare
Map integration for intuitive hospital selection; developing the social distancing AR with haptic feedback; performing an analysis of healthcare system capacity against disease-spread forecasts to plan resources efficiently.

Built With: android, azure, express.js, google, kaggle, microsoft, mssm, node.js, sodar
Try it out: github.com
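The bed-space lookup described above — find nearby hospitals with available beds based on the user's location — reduces to filtering on free beds and sorting by great-circle distance. A minimal sketch (field names and the 25 km cutoff are assumptions; the team's actual service is an Express.js microservice on Azure):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def nearby_hospitals(user_lat, user_lon, hospitals, max_km=25):
    """Hospitals with at least one free bed within max_km, nearest first."""
    candidates = [
        {**h, "distance_km": haversine_km(user_lat, user_lon, h["lat"], h["lon"])}
        for h in hospitals
        if h["beds_free"] > 0
    ]
    return sorted(
        (h for h in candidates if h["distance_km"] <= max_km),
        key=lambda h: h["distance_km"],
    )
```

The same filter-then-sort shape applies regardless of the backing store, whether the hospital rows come from the Kaggle dataset in SQL Server or a live feed.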
CoviCare
Now more than ever, your safety matters
['Aditya Bhamidipati', 'sameera Turupu', 'Avinash Bondalapati']
[]
['android', 'azure', 'express.js', 'google', 'kaggle', 'microsoft', 'mssm', 'node.js', 'sodar']
122
10,371
https://devpost.com/software/suggestify-j2cplv
Inspiration
As aspiring computer scientists, we spend no small amount of time sitting in front of a computer. Digital media is our bread and butter, so we decided to develop something we would personally find useful in our lives as university students. We landed on Suggestify after much deliberation, agreeing that all of us were running out of music to listen to and shows to watch during the quarantine. We are a team of individuals with mixed experience: two members with experience in web development and two members who are very new to it. As a result, the project was a teaching experience for everyone involved.

What it does
Suggestify is a web-based personalized dashboard that uses machine learning to recommend a song or show to the user based on a simple quiz. Ideally, our site collects user input in the form of quiz answers and, from that dataset, applies ML algorithms to predict what kind of content the user wants to see or hear.

How we built it
Using a mixture of JavaScript, HTML, CSS, and an amalgam of APIs, we each got to work on our tasks. We divided the work into frontend and backend and worked on each individual aspect of the project. The site went from a purely speculative endeavor to a real, tangible product thanks to the tireless efforts of our team. Through a mix of brainstorming and sheer willpower, we managed to create something that resembled what we originally wanted.

Challenges I ran into
Making sure that all the moving pieces fit together was a considerable challenge that we certainly were not expecting when starting the project. We were met with unforeseeable roadblocks at every step of the way, but were able to rely on each other for guidance as well as a helping hand. On top of that, the UI proved rather finicky along the way, since certain elements did not work well with others (I'm looking at you, immovable stars!).

Accomplishments that we're proud of
At the same time, the UI design is something that we take great pride in. You would be proud too if you had painstakingly selected every single color for each god-forsaken gradient, leaving the finished product lag-free! Despite not having any prior experience with various APIs and Firebase, we managed to figure out how to use the provided features and add functionality.

What I learned
Speaking as someone who had very little experience with web development before this project, I feel like I am horizons beyond where I was just a few days ago. Actually applying the knowledge from tutorials was far more effective for learning than just watching the videos themselves.

What's next for Suggestify
There wasn't quite enough time to implement the show/movie suggestion quiz and prediction algorithm, but we have high hopes for the future, when we plan to implement this feature, test it, and iron out the wrinkles.

Built With: css, html, javascript, react
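The quiz-to-recommendation step could be as simple as nearest-neighbor matching between the user's quiz answers and precomputed feature vectors for each song or show. This sketch is purely illustrative, since the project's actual ML approach isn't specified; the catalog shape and feature values are invented:

```python
def recommend(user_answers, catalog):
    """Return the catalog entry whose feature vector is closest to the
    user's quiz answers, by squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(catalog, key=lambda item: dist(user_answers, item["features"]))
```

With quiz answers mapped onto the same scale as the catalog features (e.g. energy, mood, tempo, each in 0–1), this gives a working baseline that a trained model could later replace.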
Suggestify
Suggestify could help you find the next song that gets stuck in your head!
['John Pham', 'Jonathan Dieu', 'Victoria Nicole Williamson', 'Ellie Farnsworth']
[]
['css', 'html', 'javascript', 'react']
123
10,371
https://devpost.com/software/sanitizer-lyxqdc
Screenshots: map view, user data submission view, autocomplete feature.

Inspiration
We wanted to make an impact on society during these times of crisis, so we decided to learn something new and apply it to some of the problems the world is facing today. In an effort to reduce the impact of COVID-19, we built an app that informs users about restaurants that offer hand sanitizer.

What it does
Users can easily browse a map of restaurants that provide hand sanitizer on the home screen. They can also submit their own data to improve the community effort.

How I built it
We learned how to use Flutter, the framework developed by Google, during this hackathon, and managed to finish a functional app for the Android platform.

Challenges I ran into
Due to unfamiliarity with the Flutter framework, we encountered some problems with its syntax early on, but eventually we overcame them and incorporated a few popular API services offered by Google.

Accomplishments that I'm proud of
We are proud of how the app turned out in the end, being total newbies in app development. Accomplishments include displaying a full map with markers, live-updating location data, and the search function for locations.

What I learned
We learned that it is crucial to keep an open mind in a hackathon, try our best to come up with an idea, and face the challenges head on.

What's next for Sanitizer
Since Flutter is a cross-platform framework, the next logical step is to deploy it on iOS and the web. We also want to incorporate other features, such as a rating system based on multiple safety factors. The app could also be used in scenarios beyond restaurants.

Built With: android, firebase, flutter, google-maps, google-places
Try it out: drive.google.com
Sanitizer
Sanitizer is an app that helps combat COVID-19 by allowing users to see which locations near them are offering hand sanitizer, and lets them update this map with new information in real time.
['Saksham Goel', 'jcoward00 Coward', 'XcrossD Lin']
[]
['android', 'firebase', 'flutter', 'google-maps', 'google-places']
124