## Inspiration

Exercise is extremely important, and it is most effective and healthy when spread evenly across all of the important muscle groups in your body. As students, however, many of us don't have enough time to finish a complete workout routine that trains every important muscle, so we split our exercise routines over several days. It can then become a hassle to track which muscles you haven't exercised as often, creating friction between a busy student and quality exercise. SmartFit seeks to solve this issue.

## What it does

SmartFit is a workout assistant that helps users become healthier. It is built upon two main features: logging and recommendation. It tracks your workouts through the Google Assistant and tailors a workout that would benefit you the most, targeting an undertrained muscle group determined by analyzing your previous workouts. SmartFit is also beginner friendly, since it offers a tutorial for every workout. Through SmartFit, anyone can become fit and healthy in a smart way. SmartFit is like a personal trainer that can be accessed anytime, anywhere.

## How we built it

We built SmartFit using Voiceflow with calls to the Google Sheets API and deployed it on Google Actions to create a fully functional Google Assistant app. SmartFit's workout logs are stored in Google Sheets, and workout recommendations are generated automatically from past workout data as well as exercise information curated from various professional fitness websites. The recommendation algorithm was designed using formulas and calculation techniques within Google Sheets. JavaScript was used within Voiceflow to parse the user's logs and compute the recommendations. We also made a static website that explains what SmartFit does.

## Challenges we ran into

Since Voiceflow is a relatively new application, we had a hard time finding assistance on certain features we were unsure about. For example, Voiceflow can display images using the Card Block or the Display Block, but it was very hard to find help on those features. Due to the lack of documentation, it was difficult to solve problems promptly without halting all progress. Another challenge we faced was the integration of Google's Firebase. We ended up using Google Sheets instead of Firebase because Google Sheets is already integrated with Voiceflow and is faster and more effective than Firebase for this application.

## Accomplishments that we're proud of

We're proud of creating a robust voice app using Voiceflow and of creating a Google Action usable on Google Assistant.

## What we learned

We learned how to use a new application called Voiceflow to develop Google Assistant and Amazon Alexa applications without any pre-existing knowledge. We learned how to use Google's Dialogflow to style the front end of SmartFit in the Google Assistant environment. We taught ourselves the intricacies of Google's API client in order to deploy SmartFit to any Gmail address that the user wishes.

## What's next for SmartFit

Since SmartFit is on Google's Actions platform, sending the app to worldwide alpha testers can be accomplished easily. Based on the alpha testers' feedback, more features and user interface improvements will be implemented. The next step for SmartFit is to migrate to a more robust database and increase user customizability via Firebase and MongoDB.
As the user base grows, the workout-recommendation logic would be reimplemented as a machine-learning model in TensorFlow instead of hardcoded formulas. Once the technology side of things is taken care of, SmartFit would have the opportunity to enter the marketplace: users could pay for the SmartFit service through different plans to get more data analysis and access to tutorials.
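
The recommendation logic described in "How we built it" lives in Google Sheets formulas and Voiceflow's JavaScript blocks; as a rough, language-agnostic sketch of the underlying idea (shown here in Python, with made-up log fields and muscle groups), the app could simply suggest whichever muscle group appears least often in the recent log:

```python
from collections import Counter

# Hypothetical log rows as they might come back from the Sheets API:
# [date_string, exercise_name, muscle_group]
WORKOUT_LOG = [
    ["2023-01-02", "bench press", "chest"],
    ["2023-01-03", "squat", "legs"],
    ["2023-01-05", "bicep curl", "arms"],
]

ALL_GROUPS = ["chest", "back", "legs", "arms", "shoulders", "core"]

def recommend_muscle_group(log, groups=ALL_GROUPS):
    """Return the muscle group that appears least often in the log."""
    counts = Counter(row[2] for row in log)
    # Groups that were never trained default to a count of zero and win immediately.
    return min(groups, key=lambda g: counts.get(g, 0))

print(recommend_muscle_group(WORKOUT_LOG))  # e.g. "back"
```

A weighted version of the same idea (favouring groups not trained recently) is easy to express as Sheets formulas, which is roughly what the deployed app does.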
## Inspiration

Lockdown is hard for everybody, and the uncertainty makes it extremely difficult to plan things in advance and stay motivated. The pandemic has also put a strain on our ability to communicate with one another and make friends. This inspired us to create SwoleMate, a mobile application intended to connect users with a possible "swolemate" based on common interests. The app is intended to restore the body, restore workout ethics, restore our ability to make friends, and restore a piece of our lives from before the pandemic.

## What it does

It matches you with other people who share the same workout activities and want workout buddies. It uses an algorithm that matches you based on your location, activities, age, and preferred gender. It generates a list of potential matches, and you can pick out the ones you want. After that, their contact information is shared with you. You can pick it up from there!

## How we built it

We did the front-end design in Figma. We programmed the back end using Dart.

## Challenges we ran into

None of us had used any of the technologies in this project before, since we wanted to learn something new. Although designing the front end in Figma was a success, we couldn't wholly integrate the front end with the back end.

## Accomplishments that we're proud of

We learned entirely new tools, and this was also the first time we worked together.

## What we learned

We learned how to use Flutter, Dart, and Figma. We also learned about the creativity and design process behind a mobile app.

## What's next for SwoleMate

We plan to continue implementing the app so that it is complete.
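
The matching algorithm is only described at a high level, and the actual back end is written in Dart; the Python sketch below is purely illustrative of how location, shared activities, age, and preferred gender could be combined into a single match score (all weights and field names are assumptions, not SwoleMate's code):

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def match_score(user, candidate, max_km=25.0):
    """Toy scoring: shared activities matter most, then proximity and age gap."""
    # Hard filter on preferred gender.
    if candidate["gender"] not in user["preferred_genders"]:
        return 0.0
    shared = len(set(user["activities"]) & set(candidate["activities"]))
    distance_km = haversine_km(user["location"], candidate["location"])
    proximity = max(0.0, 1.0 - distance_km / max_km)
    age_closeness = 1.0 / (1.0 + abs(user["age"] - candidate["age"]))
    return 3.0 * shared + 2.0 * proximity + 1.0 * age_closeness

alice = {"location": (44.230, -76.481), "activities": ["lifting", "yoga"],
         "age": 21, "gender": "female", "preferred_genders": {"male", "female"}}
bob = {"location": (44.225, -76.495), "activities": ["lifting", "running"],
       "age": 23, "gender": "male", "preferred_genders": {"female"}}
print(match_score(alice, bob))
```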
## Inspiration

In the present state of 2022, as climate change crashes upon us at an alarming rate, the environmental issues our world faces can no longer be ignored. Through a conversation our team had about our conscious eco-footprints as members of Generation Z, we realized that we needed to change the current state of the globe, even with just the hands of the four of us. With our current abilities, we wanted to tackle a corner of a global issue while still being able to thoroughly present a complete solution in the process. Minimization was the way to go, and oil spills were our target. Ladies and gentlemen, meet **oleumer**: oil spill minimization for ecosystem and wildlife protection.

## What it does

Oil spills are among the most harmful disasters for the environment: they suffocate and kill wildlife, damage ecosystems, bankrupt oil companies, and cost taxpayer dollars. oleumer is a frontline warning system that quickly detects oil spills by scanning and analyzing satellite images of bodies of water containing oil platforms to see whether a spillage has occurred. Our project serves as an extension product that helps governments and private organizations with satellites form partnerships with oil companies, cutting down the time spent on communication when a spillage occurs. Additionally, because our extension is a real-time analysis system, we've also provided a website that allows users to upload their own photos and have them analyzed for signs of an oil spill. This website serves as a check-up tool for those who would like to use our system for analysis and research on their own time.

## How we built it

We used Python to create our own CNN (convolutional neural network) and trained it to recognize the colours of oil spills in satellite imagery, so that it can analyze live satellite imagery and announce immediately when it detects signs of an oil spill. We split the work among the group members to play to everyone's strengths. GitHub was used for version control and for communication about code components. We created an additional site using the Flask framework.

## Challenges we ran into

At first, we thought about using Google Cloud's image processing, since none of us had extensive experience creating and training our own CNN models. However, we wanted to demonstrate a well-rounded project that could be applied to bigger systems, such as the satellite feeds of national government science agencies and large private companies like SpaceX. And so, we created and trained our own model.

## Accomplishments that we're proud of

* Coding and understanding a convolutional neural network for the first time.
* Designing the look of the application and site.
* Using the design process to recognize the Problem, Client, Solution, and Impacts, along with recognizing and addressing assumptions.
* Researching the effects of oil spills and coming up with a practical, applicable solution that could prevent such extreme loss and suffering of wildlife.

## What we learned

We learned about sorting a dataset before using it to train a CNN, implementing a CNN model in Python, graphing the accuracy of our model's predictions, and creating a project site using the Flask framework.
More importantly, through our research on the problem and impact parts of the design process, we learned a lot about the impacts oil spills have on the environment, their sources, and what isn't being done on political and industrial levels to prevent these catastrophes that not only harm aquatic wildlife but put humans in danger, all for the bottom line.

## What's next for oleumer?

oleumer could be implemented in live satellite imagery systems from scientific initiatives such as Google Earth Engine or NASA's public satellite imagery systems. We also aim to serve as an extension for the satellite systems of private spaceflight companies, like SpaceX and Virgin Galactic. The oleumer algorithm could also be updated to recognize faults in oil drilling sites, warning of possible site failure before a natural disaster or harmful situation occurs, by training it on satellite images of safe and unsafe drilling sites. We also aim to help scan and document the areas of past oil spills to aid in their cleanup and restoration. Partnering with organizations like Greenpeace for funding and support for continued development, and marketing to scientific organizations, could be beneficial.
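
The writeup doesn't give the network architecture or dataset layout, but a minimal sketch of the kind of Python/TensorFlow CNN classifier described in "How we built it" could look like the following (layer sizes, image size, and the `data/` directory layout are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Minimal binary classifier for 128x128 RGB satellite tiles
# labeled "spill" vs "no spill"; the architecture is illustrative.
def build_model(input_shape=(128, 128, 3)):
    return tf.keras.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # probability of "spill"
    ])

model = build_model()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Assumes a directory tree like data/spill/*.png and data/no_spill/*.png.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(128, 128), batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=10)
```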
## Inspiration

Shashank Ojha, Andreas Joannou, Abdellah Ghassel, Cameron Smith

# ![](https://drive.google.com/uc?export=view&id=1griTlDOUhpmhqq7CLNtwrQnRGaBXGn72)

Clarity is an interactive smart glass that uses a convolutional neural network to notify the user of the emotions of those in front of them. This wearable gadget also has other smart-glass abilities, such as showing the weather and time and viewing daily reminders and weekly schedules, to ensure that users get a well-rounded experience.

## Problem:

As mental health challenges raise barriers that inhibit people's social skills, innovative technologies must accommodate everyone. Studies have found that individuals with developmental disorders such as Autism and Asperger's Syndrome have trouble recognizing emotions, which hinders their social experiences. For these reasons, we would like to introduce Clarity. Clarity creates a sleek augmented reality experience that allows the user to detect the emotions of individuals in proximity. In addition, Clarity is integrated with unique and powerful smart-glasses features, including the weather and viewing daily routines and schedules. With further funding and development, the glasses can incorporate more inclusive features, straight from your fingertips to your eyes.

![](https://drive.google.com/uc?export=view&id=1eVZFYgQIm7vu5UOjp5tvgFOxvf3kv4Oj)
![](https://drive.google.com/uc?export=view&id=1L-5w9jzwKG0dLdwe-OCMUa6S2HnZeaFo)
![](https://drive.google.com/uc?export=view&id=1LP7bI9jAupQDQcfbQIszs9igVEFSuqDb)

## Mission Statement:

At Clarity, we are determined to make everyone's lives easier, and specifically to help facilitate social interactions for individuals with developmental disorders. Everyone knows someone impacted by mental health challenges or cognitive disabilities, and knows how meaningful those precious interactions are. Clarity wants to leap forward to make those interactions more memorable, so they can be cherished for a lifetime.

![](https://drive.google.com/uc?export=view&id=1qJgJIAwDI0jxhs1Q59WyaGAvFg5fysTt)
![](https://drive.google.com/uc?export=view&id=1AY5zbgfUB4c_4feWVVrQcuOGtn_yGc99)

We are first-time Makeathon participants who are determined to learn what it takes to make this project come to life and to impact as many lives as possible. Throughout this Makeathon, we have challenged ourselves to deliver a well-polished product with the purpose of doing social good. We are second-year students from Queen's University who are very passionate about designing innovative solutions to better the lives of everyone. We share a mindset of giving any task our all and obtaining the best results. We have a diverse skill set, and throughout the hackathon we utilized everyone's strengths to work efficiently. This has been a great learning experience for our first Makeathon, and even though we each brought some prior experience, this was a new journey that proved to be intellectually stimulating for all of us.

## About:

### Market Scope:

![](https://drive.google.com/uc?export=view&id=10LWCDhgfDPp1scpVI1GSAGIWrjprQtOY)

Although the main purpose of this device is to help individuals with mental disorders, the applications of Clarity are limitless. Other integral market audiences for our device include:

• Educational institutions can use Clarity to help train children to learn about emotions and feelings at a young age. Through exposure to such a powerful technology, students can be taught fundamental skills such as sharing and truly caring by putting themselves in someone else's shoes, or lenses in this case.
• The interview process for social workers can benefit from our device by creating a dynamic and thorough experience to determine the ideal person for a task. It can also be used by social workers and emotional-intelligence researchers to produce better studies and results.

• With further development, this device can be used as a quick tool for psychiatrists to analyze and understand their patients at a deeper level. By assessing individuals in need of help more quickly, more lives can be saved and improved.

### What's In It For You:

![](https://drive.google.com/uc?export=view&id=1XbrcnIEc3eAYDmkopmwGbSew11GQv91v)

The first stakeholders to benefit from Clarity are our users. This product provides accessibility right to the eye for almost 75 million people (the number of individuals in the world with developmental disorders). The emotion detection system is accessible at the user's disposal and makes it easy to recognize anyone's emotions: whether you are watching a Netflix show or having a live casual conversation, Clarity has you covered. Next, Qualcomm could be a significant partner in the future of Clarity, as it would be an excellent distributor and partner. With professional machining and Qualcomm's Snapdragon processor, the model is guaranteed to have high performance in a small package. Because of the various applications mentioned above, this product has exponential growth potential in the educational, research, and counselling industries, and can therefore offer significant potential profit and possibilities for investors and researchers.

## Technological Specifications

## Hardware:

At first, the body of the device was a simple prism with an angled triangle to reflect the light at 90° from the user. The initial intention was to glue the glass reflector to the outer edge of the triangle to complete the 180° reflection. This plan was then scrapped in favour of a more robust mounting system, including a frontal clip for the reflector and a modular cage for the LCD screen. After feeling confident in the primary design, a CAD prototype was printed on a 3D printer. During the construction of the initial prototype, a number of challenges surfaced, including printer errors, component measurement, and manufacturing mistakes. One problem with the prototype was a lack of adhesion to the printing bed. This resulted in raised corners, which negatively affected how the components fit together. This issue was overcome by introducing a ring of material around the main body. Component measurement and manufacturing mistakes further led to improper fitting between pieces. This was ultimately solved by simplifying the initial design, which left fewer points of failure. The evolution of the CAD files can be seen below.

![](https://drive.google.com/uc?export=view&id=1vDT1gGyfM7FgioSRr71yBSysGntOfiFC)

The material chosen for the prototypes was PLA plastic, for its strength-to-weight ratio and its low price. This material is very lightweight and strong, allowing for a more comfortable experience for the user. Furthermore, inexpensive plastic allows for inexpensive manufacturing.

Clarity runs on a Raspberry Pi 4 Model B. The RPi communicates with the OLED screen using the I2C protocol. It additionally powers and communicates with the camera module and reads a button used to control the glasses. The RPi handles all the image processing, preparing images for emotion recognition and creating the images that are output to the OLED screen.
### Optics:

Clarity uses two reflections to project the image from the screen to the eye of the wearer. The process can be seen in the figure below. First, the light from the LCD screen bounces off a mirror whose normal line is oriented at 45° relative to the viewer. Due to the law of reflection, which states that the angle of incidence is equal to the angle of reflection relative to the normal line, the light rays first make a 90° turn. This results in a horizontal flip of the projected image. Then, similarly, this ray is reflected another 90° off a transparent piece of polycarbonate plexiglass with an anti-reflective coating. This flips the image horizontally once again, resulting in a correctly oriented image. The total length that the light waves travel should be equivalent to the straight-line distance required for an image to be discernible, which is roughly 25 cm for the average person. This led to shifting the screen back within the shell to create a clearer image in the final product.

![](https://drive.google.com/uc?export=view&id=1dOHIXN2L045LHh7rCoD0iTrW_IVKf7dz)

## Software:

![](https://drive.google.com/uc?export=view&id=1DzqhM4p5y729deKQQkTw5isccUeZRCP8)

The emotion detection capabilities of the Clarity smart glasses are powered by the Google Cloud Vision API. The glasses capture a photo of the people in front of the user, run the photo through the Cloud Vision model using an API key, and output a discrete probability distribution over emotions. This distribution is analyzed by Clarity's code to determine the emotion of the people in the image. The output of the model is sent to the user through the OLED screen using the Pillow library. The additional features of the smart glasses include displaying the current time, the weather, and the user's daily schedule. These features are implemented using various Python libraries and a text-file-based storage system.

Clarity allows all the features of the smart glasses to run concurrently through asynchronous programming. Using the asyncio library, the user can iterate through the various functionalities seamlessly. The glasses are interfaced through a button and the use of Siri: using an iPhone, Siri can remotely power on the glasses and start the software. From there, users can switch between the various features of Clarity by pressing the button on the side of the glasses. The software is implemented as a multi-file program that calls functions based on the current state of the glasses, acting as a finite state machine. The program looks for the rising edge of a button impulse to receive input from the user, resulting in a change of state and a call to the respective function.

## Next Steps:

The next steps include integrating a processor/computer inside the glasses, rather than using a Raspberry Pi. This would allow the device to take the next step from the prototype stage to a mock model. The device would also need Bluetooth and Wi-Fi integrated, so that the glasses are modular and easily customizable. We may also use magnifying lenses to make the images on the display bigger, with the potential of creating a more dynamic UI.
## Timelines:

As we believe that our device can make a drastic impact on people's lives, the following diagram shows how we will pursue Clarity after this Makeathon:

![](https://drive.google.com/uc?export=view&id=1m85rTMVAqIIK5VRbjqESn1Df-H0Pilx8)

## References:

• <https://cloud.google.com/vision>
• Python libraries

### Hardware:

All CADs were created entirely from scratch; however, inspiration was taken from conventional DIY smart glasses.

### Software:

### Research:

• <https://www.vectorstock.com/royalty-free-vector/smart-glasses-vector-3794640>
• <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2781897/>
• <https://www.google.com/search?q=how+many+people+have+autism&rlz=1C1CHZN_enCA993CA993&oq=how+many+people+have+autism+&aqs=chrome..69i57j0i512l2j0i390l5.8901j0j9&sourceid=chrome&ie=UTF-8>
• <http://labman.phys.utk.edu/phys222core/modules/m8/human_eye.html>
• <https://mammothmemory.net/physics/mirrors/flat-mirrors/normal-line-and-two-flat-mirrors-at-right-angles.html>
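
To make the Software section above concrete, here is a minimal sketch of the emotion-scoring step, assuming the Google Cloud Vision face-detection endpoint (which reports categorical likelihoods per emotion); the label set and the "neutral" threshold are an illustration, not the team's exact code:

```python
from google.cloud import vision

def detect_emotion(image_bytes):
    """Return the most likely emotion for the first detected face, or None."""
    client = vision.ImageAnnotatorClient()
    response = client.face_detection(image=vision.Image(content=image_bytes))
    faces = response.face_annotations
    if not faces:
        return None
    face = faces[0]
    # Likelihood enum values range from UNKNOWN (0) to VERY_LIKELY (5).
    scores = {
        "joy": face.joy_likelihood,
        "sorrow": face.sorrow_likelihood,
        "anger": face.anger_likelihood,
        "surprise": face.surprise_likelihood,
    }
    best = max(scores, key=scores.get)
    # Treat anything below POSSIBLE (3) as "neutral".
    return best if scores[best] >= 3 else "neutral"

if __name__ == "__main__":
    with open("capture.jpg", "rb") as f:
        print(detect_emotion(f.read()))
```

The returned label would then be rendered to the OLED screen with Pillow, as described above.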
## Inspiration

*Do you have a habit that you want to fix?* We sure do. As high school students studying for exams, we noticed we were often distracted by our phones, which greatly reduced our productivity. A study from Duke University found that up to 45% of all our daily actions are performed habitually, which is a huge problem, especially during a time when many of us are confined to our homes, negatively impacting our productivity as well as our mental and physical health. To fix this issue, we created HabiFix. We took the advice of a Harvard research paper to create a program that would not only help break unhealthy habits, but form healthy ones in their place as well.

## What it does

Unlike many other products, which have to be installed by professionals, are highly specialized for one single habit, or are just expensive, HabiFix only requires a computer with a webcam and can help you fix a multitude of different habits. Usage is very simple too: just launch HabiFix on your computer, and that's it! HabiFix will run in the background, and as soon as you perform an undesirable habit, it will remind you. According to Harvard Health Publishing, the most important thing in fixing a habit is a reminder, since people often perform habits without realizing it. So when you're studying for tomorrow's test and pick up your phone, your computer will gently remind you to get off your phone so you can ace that test. Every action you perform is uploaded to our website, where users can see their statistics by logging in. Another important aspect of habit fixing that Harvard identified is reward, which we provide by showing users their growth over time. On the website, users can view how many times they had to be reminded, and by seeing that they have required fewer reminders throughout the week, they know they have been fixing their habits.

## How we built it

The ML aspect of our project uses TensorFlow and OpenCV, specifically an object detection library, to capture the user's actions. We wrote a program that uses OpenCV to provide webcam data to TensorFlow, which outputs the user's position relative to other objects; this is then analyzed by our Python code to determine whether the user is performing a specific action. We then created a Flask server which converts the analyzed data into JSON and stores it in our database, allowing our website to fetch the data. The HabiFix web app is built with React, and Chart.js was used to display the collected data.

## Challenges we ran into

The biggest challenge we ran into was incorporating the machine learning aspect, as it was our first time using TensorFlow. While setting up the object detection algorithm with TensorFlow, we had difficulties installing all the dependencies and modules, and we spent quite some time properly understanding the TensorFlow documentation needed to get outputs for analysis. However, after sleepless nights and a newfound love for coffee, we were able to finish setting up TensorFlow and write a program to extract and analyze the data, which worked better than we expected, catching our developers on their phones even during development.

## Accomplishments that we're proud of

We're quite proud of the accuracy our program has in detecting habits and believe it is the key reason why this program will be so effective.
So far, unless you make a conscious effort to hide from the camera, which wouldn't be the case for those wanting to break a habit, the program detects the habit almost instantly. The fact that our program caught us off guard on our phones during development is a clear indicator that it does what it's supposed to, and we hope to use this tool ourselves to continue development and break our own bad habits.

## What we learned

Our team pretty much learned everything we had to use for this project. The only tools our team was familiar with were basic HTML/CSS and Python, and not all of the members knew how to use those. Throughout development, we learned a lot about frontend, backend, and database development, and TensorFlow is definitely a tool we're happy to have learned.

## What's next for HabiFix

In the future, we hope to add to the list of habits that we can detect, and possibly create a mobile application to track habits even when users are away from their computers. We believe this idea has serious potential for preventing not only simple habits like nail biting, but also other habits such as drug and substance abuse and addiction.
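
Here is a minimal sketch of the webcam-to-server loop described in "How we built it"; the `detect_objects` helper stands in for the TensorFlow object-detection call, and the label names and Flask endpoint URL are placeholders rather than HabiFix's actual code:

```python
import cv2
import requests

SERVER_URL = "http://localhost:5000/events"  # placeholder Flask endpoint

def detect_objects(frame):
    """Placeholder for a TensorFlow object-detection call.

    Expected to return a list of (label, bounding_box) tuples, e.g.
    [("person", (x, y, w, h)), ("cell phone", (x, y, w, h))].
    """
    raise NotImplementedError

def boxes_overlap(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def watch(habit_label="cell phone"):
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        detections = detect_objects(frame)
        people = [box for label, box in detections if label == "person"]
        habits = [box for label, box in detections if label == habit_label]
        # Flag the habit when the object appears near the user, then log it.
        if any(boxes_overlap(p, h) for p in people for h in habits):
            requests.post(SERVER_URL, json={"habit": habit_label})
```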
## ❓The problem at hand

How might we help lower the barrier of entry for **aspiring actors** to join and thrive in the entertainment industry?

## 💭 Our inspiration?

Just as others are inspired to pursue careers as engineers, doctors, or lawyers, my aspirations were shaped by watching actors perform. From a young age, witnessing their presence on Disney Channel and Netflix ignited my passion and desire to become an actor. Due to societal pressures and the slim chances of "making it big," I was encouraged to pursue career paths deemed more sustainable and stable. Ultimately, I steered away from my Disney Channel dream. However, there are still some out there who **have not given up**. There are others like me, sitting in front of their TVs, admiring the art of performing.

## 📌 Diving deeper into our problem scope

Through our research, we found an overwhelming statistic: unemployment rates among actors hover around 90%, and as few as 2% of actors are able to **make a living** out of acting. Using that, we narrowed down our niche and targeted our platform at beginners who aspire to perform center stage one day. We looked to established resources like Khan Academy, LeetCode, and Coursera as inspiration to provide purposeful, industry-standard lesson plans. We believe that, just like any school subject such as math or science, **acting is a skill that can be learned and honed**.

## 🎬 What does Essence do?

Essence is a platform that **trains aspiring actors from practicing to performing**. Users are presented with real-life scenes to watch and practice by mimicking. They are able to record themselves performing the scene and get an **instant score** and **personalized feedback** to improve their skills. Users can compare side by side how the original actor performed versus their take on the same scene. They can also review all their takes and view their progress throughout their acting journey!

## 🛠️ How we built it

Our team started by brainstorming the core features we wanted Essence to have, focusing on user engagement and practical learning tools. We then designed the user interface and user experience in Figma, ensuring a sleek and intuitive design. On the backend, we set up the server, database, and APIs needed to support our platform's functionality. For the frontend, we developed the client-side interface, focusing on creating a responsive and user-friendly environment. Finally, we integrated the frontend and backend, ensuring seamless interaction between components.

## 🚦Challenges we ran into

It was challenging to agree on which features were essential for our initial launch. We encountered several technical issues, especially related to real-time feedback and performance comparisons. Once many components were implemented, it became difficult to make changes to the overall architecture without disrupting existing features.

## 🖍️ Accomplishments that we're proud of

We are proud of the intuitive and visually appealing user interface we created, and of achieving a fully functional platform that effectively helps users improve their acting skills.

## 🎤 What we learned

Our team gained experience with **new technologies and design principles**. We enhanced our version control skills, making collaboration more efficient. We learned how to work better as a team, leveraging each other's strengths and improving our communication.
## → What's next for Essence

As Essence provides easy-to-use resources for actors who are just starting out, we want to expand its features to help actors of all levels: think generative prompts for practicing the expression of niche emotions, and mentorship from admired actors. Overall, we strive to provide an ever-growing and supportive community where actors of all levels can come together!
## Inspiration

It's insane how the majority of the U.S. still looks like **endless freeways and suburban sprawl.** The majority of Americans can't get a cup of coffee or go to the grocery store without a car. What would America look like with cleaner air, walkable cities, green spaces, and effective routing from place to place that builds on infrastructure for active transport and micromobility? This is the question we answer, and show to urban planners, at an extremely granular street level.

Compared to most of Europe and Asia (where there are public transportation options, human-scale streets, and dense neighborhoods), the United States is light-years away... but urban planners don't have the tools right now to easily assess a specific area's walkability.

**Here's why this is an urgent problem:** current tools for urban planners don't provide *location-specific information*; they only provide arbitrary, high-level overviews of population density over a 50-mile or so radius. Even though consumers see new tools, like Google Maps' area busyness bars, these are only bits and pieces of all the data an urban planner needs, such as data on bike paths, bus stops, and the relationships between traffic and pedestrians. As a result, there are very few actionable improvements that urban planners can make. Moreover, because cities are physical spaces, planners cannot easily visualize what an improvement (e.g., adding a bike/bus lane, parks, or open spaces) would look like in the existing context of a specific road. Many urban planners don't have the resources or capability to fully immerse themselves in a new city, live like a resident, and understand the problems residents face daily that prevent them from accessing public transport, active commutes, or even safe outdoor spaces.

There's also been a significant rise in micromobility: usage of e-bikes (e.g., CityBike rental services) and scooters (especially on college campuses) is growing. Research has shown that access to public transport, safe walking areas, and micromobility all contribute to greater access to opportunity and to successful mixed-income neighborhoods, which can raise entire generations out of poverty. To continue this movement and translate it into economic mobility, we have to ensure urban developers are **making space for car alternatives** in their city planning. This means bike lanes, bus stops, plazas, well-lit sidewalks, and green space in the city.

These reasons are why our team created CityGO: a tool that helps urban planners understand their region's walkability scores down to the **granular street intersection level** and **instantly visualize what a street would look like if it were actually walkable**, using OpenAI's CLIP and DALL-E image generation tools (e.g., "What would the street in front of the Painted Ladies look like if there were two bike lanes installed?").

We are extremely intentional about the unseen effects of walkability on social structures, the environment, and public health, and we are ecstatic to see the results:

1. Car alternatives provide economic mobility, as they give Americans alternatives to purchasing and maintaining cars that are cumbersome, unreliable, and extremely costly to maintain in dense urban areas. Lower upfront costs also enable handicapped people, people who can't drive, and the very young and old to have the same access to opportunity and continue living high-quality lives.
This disproportionately benefits people in poverty, as children with access to public transport or longer walking routes also gain access to better education and food sources, and can meet friends and share the resources of other neighborhoods, which can have the **huge** impact of pulling communities out of poverty.

Placing bicycle lanes and barriers that protect cyclists from side traffic will encourage people to use micromobility and active transport options. This is not possible if urban planners don't know where existing transport is, or don't recognize the outsized impact of additional bike lanes.

Finally, it's no surprise that transportation as a sector alone accounts for 27% of carbon emissions (US EPA) and is a massive safety issue that all citizens face every day. Our country's dependence on cars has been leading to deeper issues that affect basic safety, climate change, and economic mobility. The faster we take steps to mitigate this dependence, the more sustainable our lifestyles and Earth can be.

## What it does

TL;DR:

1) A map that pulls together data on car traffic and congestion, pedestrian foot traffic, and bike parking opportunities, with heat maps representing the density of foot traffic and location-specific interactive markers.
2) The Google Maps Street View API lets urban planners see and move through live imagery of their site.
3) OpenAI CLIP and DALL-E combine an uploaded image (taken from Street View) with descriptor text embeddings to produce a **hyper location-specific augmented image**.

The exact street venues that are unwalkable in a city are extremely difficult to pinpoint. There's an insane amount of data you have to consolidate to get a cohesive picture of a city's walkability at every point: car traffic congestion, pedestrian foot traffic, bike parking, and more. Because cohesive data collection is extremely important to produce a well-nuanced understanding of a place's walkability, our team incorporated a mix of GeoJSON data formats and vector tiles (specific to the Mapbox API). There was a significant amount of unexpected "data wrangling" in this project, since multiple formats from various sources had to be integrated with existing mapping software; however, it was great exposure to the real issues data analysts and urban planners have when trying to work with data.

There are three primary layers to our mapping software: traffic congestion, pedestrian traffic, and bicycle parking. In order to get the exact traffic congestion per street, avenue, and boulevard in San Francisco, we utilized a data layer in the Mapbox API. We specified all possible locations within SF and made requests for GeoJSON data that is represented through each marker. Green stands for low congestion, yellow for average congestion, and red for high congestion. This data layer is classified as a vector tile in the Mapbox API.

Consolidating pedestrian foot traffic data was an interesting task, since this data is heavily locked inside enterprise software tools. There are existing open-source datasets posted by regional governments, but none of them are specific enough to produce 20+ heat maps of high foot traffic areas within a 15-mile radius. Thus, we utilized the Best Time API to index a diverse range of locations (e.g., restaurants, bars, activities, tourist spots, etc.) so our heat maps would not be biased towards a certain style of venue, capturing information relevant to all audiences.
We then cross-validated that data with Walk Score (the most trusted site for walkability scores of specific addresses), ranked these areas, and rendered heat maps on Mapbox to showcase density.

San Francisco's government open-sources extremely useful data on all of the bike parking locations installed in the past few years. We made sure that the data has been well maintained and has preserved its quality, so we don't over- or under-represent certain areas more than others; this was supported by updates within the past two months confirming the data is accurate, so we added the geographic data as a new layer in our app. Each bike parking spot installed by the SF government is represented by a little bike icon on the map!

**The most valuable feature** is that the user can navigate to any location and prompt CityGO to produce a hyper-realistic augmented image resembling that location with added infrastructure improvements that make the area more walkable. Seeing the Street View of that location, which you can move around in and see real-time information, and being able to envision the end product, is the final bridge in an urban developer's planning process, ensuring that walkability is within our near future.

## How we built it

We utilized the React framework to organize our project's state variables, components, and state transfer. We also used it to build custom components, like the one that conditionally renders a live panoramic street view of a given location or renders information retrieved from various data entry points.

To create the map on the left, our team used Mapbox's API to style the map and integrate the heat map visualizations with existing data sources. In order to create the markers that correspond to specific geographic coordinates, we utilized Mapbox GL JS (their JavaScript library) and third-party React libraries.

To create the Google Maps panoramic street view, we fed our backend's geographic coordinates to the Google Maps API so each location could be rendered individually. We supplemented this with third-party React libraries for better error handling, feasibility, and visual appeal. The panoramic street view was extremely important to include because urban planners need context on spatial configurations to develop designs that integrate well into existing communities.

We created a custom function and used the required HTTP route (in PHP) to grab data from the Walk Score API with our JSON server, so it could provide a specific Walk Score for every marker on our map.

Text generation from OpenAI's text completion API was used to produce location-specific suggestions on walkability. Whichever marker a user clicks, its address is plugged in as a variable to a prompt that lists five suggestions specific to that place within a 500-foot radius. This process exposed us to the difficulties and rewarding aspects of prompt engineering, enabling us to get more actionable and location-specific output than the generic alternative.

Additionally, we give the user the option to generate a potential view of the area under optimal walkability conditions using a variety of OpenAI models. We created our own API using the Flask framework for Google Street View analysis and optimal scene generation.
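
The exact shape of that Flask service isn't shown; a minimal sketch under those assumptions (the route name, request fields, and the three helper stubs are placeholders, not CityGO's real code) could look like this:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def fetch_street_view(lat, lon):
    """Placeholder for a Google Street View Static API call; returns image bytes."""
    raise NotImplementedError

def score_walkability(image_bytes):
    """Placeholder for the CLIP-based scene scoring step (sketched further below)."""
    raise NotImplementedError

def generate_walkable_scene(image_bytes, score):
    """Placeholder for the DALL-E generation step; returns an image URL."""
    raise NotImplementedError

@app.route("/analyze", methods=["POST"])
def analyze():
    # Expects JSON like {"lat": 37.776, "lon": -122.433}.
    body = request.get_json()
    image = fetch_street_view(body["lat"], body["lon"])
    score = score_walkability(image)
    return jsonify({
        "walkability": score,
        "generated_image_url": generate_walkable_scene(image, score),
    })

if __name__ == "__main__":
    app.run(port=5000)
```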
**Here's how we were able to get true image generation to work:** when the user asks the site for a more walkable version of the current location, we grab an image of the Google Street View and run our own architecture using OpenAI's contrastive language-image pre-training (CLIP) image and text encoders to encode both the image and a variety of candidate descriptions of the traffic, use of public transport, and pedestrian walkways present within the image. The output embeddings for the image and the bodies of text are then compared using scaled cosine similarity to produce similarity scores. We then tag the image with the necessary descriptors, like classifiers; this is our way of making the system understand the semantic meaning behind the image and prompt potential changes based on a very specific street view (e.g., the Painted Ladies in San Francisco might have a high walkability score via the Walk Score API, but could potentially need larger sidewalks to further improve transport and the ability to travel in that region of SF). This is significantly more accurate than simply using DALL-E's default image-generation parameters with an unspecific prompt based purely on the walkability score, because we incorporate both the uploaded image for context and the descriptor text embeddings to produce a hyper location-specific augmented image. A descriptive prompt is constructed from this semantic image analysis and fed into DALL-E, a diffusion-based image generation model conditioned on textual descriptors. The resulting images are higher quality, as they preserve structural integrity to resemble the real world, and they effectively implement the changes necessary to make specific locations optimal for travel.

We used Tailwind CSS to style our components.

## Challenges we ran into

There were existing data bottlenecks, especially with getting accurate, granular pedestrian foot traffic data. The main challenge we ran into was integrating the necessary OpenAI models and API routes. Creating a fast, seamless pipeline that provided the user with as much mobility and autonomy as possible required that we make use of not just the Walk Score API, but also map and geographical information from the mapping APIs and Google Street View. Processing both image and textual information pushed us to explore the CLIP pre-trained text and image encoders to create semantically rich embeddings that can relate ideas and objects present within the image to textual descriptions.

## Accomplishments that we're proud of

We could have done plain image generation, but instead we detect the concentration of cars, people, and public transit in an image, assign it a numerical score, and match it with a hyper-specific prompt that generates an image based on that information. This enabled us to make our own metrics for a given scene; we wonder how this model could be used in the real world to speed up, or completely automate, the data collection pipeline for local governments.

## What we learned and what's next for CityGO

Utilizing multiple data formats and sources that cohesively show up on the map and provide accurate suggestions for walkability improvement was important to us, because data is the backbone of this idea. Properly processing the right pieces of data at the right step in the system and presenting the proper results to the user was of utmost importance.
We definitely learned a lot about keeping data lightweight, transferring it easily between third-party software, and finding relationships between different types of data to synthesize a proper output. We also learned quite a bit by implementing OpenAI's CLIP image and text encoders for semantic tagging of images with textual descriptions of car, public transit, and pedestrian/crosswalk concentrations. It was important for us to plan a system architecture that effectively utilized advanced technologies in a seamless end-to-end pipeline. We learned how information abstraction (i.e., converting between images and text and finding relationships between them via embeddings) can play to our advantage, and how to utilize different artificially intelligent models for intermediate processing.

In the future, we plan on integrating a better visualization tool to produce more realistic renders and introducing an inpainting feature, so that users have the freedom to select a specific view in Street View, receive recommendations, and implement very specific changes incrementally. We hope that this will allow urban planners to more effectively implement design changes to urban spaces by receiving an immediate visual and seeing how a specific change integrates seamlessly with the rest of the environment. Additionally, we hope to do a neural radiance field (NeRF) integration with the produced "optimal" scenes, giving the user the freedom to navigate through the environment within the NeRF to visualize the change (e.g., adding a bike lane, expanding a sidewalk, or shifting the build site for a building). A potential virtual reality platform would provide an immersive experience for urban planners to receive AI-powered layout recommendations and instantly visualize them. Our ultimate goal is to integrate an asset library and use NeRF-based 3D asset generation to allow planners to generate realistic and interactive 3D renders of locations with AI-assisted changes to improve walkability: one end-to-end pipeline for visualizing an area, identifying potential changes, visualizing those changes using image generation and 3D scene editing/construction, and quickly iterating through different design cycles to create an optimal solution for a specific location's walkability as efficiently as possible!
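
For reference, a minimal sketch of the CLIP scoring step described in "How we built it", using OpenAI's open-source CLIP package; the descriptor strings are illustrative, not the team's actual prompt set:

```python
import clip  # OpenAI's open-source CLIP package (github.com/openai/CLIP)
import torch
from PIL import Image

# Illustrative descriptor strings; the team's exact descriptors are not listed.
DESCRIPTORS = [
    "a street congested with car traffic",
    "a street with dedicated bike lanes",
    "a street with wide pedestrian sidewalks",
    "a street served by public transit",
]

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def score_scene(image_path):
    """Return cosine similarity between the street image and each descriptor."""
    image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
    text = clip.tokenize(DESCRIPTORS).to(device)
    with torch.no_grad():
        image_emb = model.encode_image(image)
        text_emb = model.encode_text(text)
    image_emb /= image_emb.norm(dim=-1, keepdim=True)
    text_emb /= text_emb.norm(dim=-1, keepdim=True)
    sims = (image_emb @ text_emb.T).squeeze(0)
    return dict(zip(DESCRIPTORS, sims.tolist()))

# The highest-scoring descriptors could then seed a DALL-E prompt, e.g.
# "the same street, but with dedicated bike lanes and wider sidewalks".
print(score_scene("street_view.jpg"))
```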
## Inspiration

The only thing worse than no WiFi is slow WiFi. Many of us have experienced the frustration of terrible internet connections. We have too, so we set out to create a tool to help users find the best place around to connect.

## What it does

Our app runs in the background (completely quietly) and maps out the WiFi landscape of the world. That information is sent to a central server and combined with location and WiFi data from all users of the app. The server then processes the data and generates heatmaps of WiFi signal strength to send back to the end user. Because of our architecture, these heatmaps are real-time, updating dynamically as the WiFi strength changes.

## How we built it

We split the work into three parts (mobile, cloud, and visualization) and had each member of our team work on one part. For the mobile component, we quickly built an MVP iOS app that could collect and push data to the server, and we iteratively improved our locationing methodology. For the cloud, we set up a Firebase Realtime Database (NoSQL) to allow for a large amount of data throughput. For the visualization, we took the points we received and used Gaussian kernel density estimation to generate interpretable heatmaps.

## Challenges we ran into

Engineering an algorithm to determine the location of the client was significantly more difficult than expected. Initially, we wanted to use accelerometer data and use GPS to calibrate it, but excessive noise in the resulting data prevented us from using it effectively and from proceeding with that approach. We ran into even more issues when we used a device with less accurate sensors, like an Android phone.

## Accomplishments that we're proud of

We are particularly proud of getting accurate travelled paths from the phones. We initially tried to use double-integrator dynamics on top of oriented accelerometer readings, correcting for errors with GPS. However, we quickly realized that without prohibitively expensive filtering, the data from the accelerometer was useless, and that GPS did not function well indoors because walls affect the time-of-flight measurements. Instead, we used a built-in pedometer framework to estimate distance travelled (this uses a lot of advanced on-device signal processing) and combined it with the average heading (calculated using the magnetometer) to get meter-level accurate distances.

## What we learned

* Locationing is hard! Especially indoors or over short distances.
* Firebase's Realtime Database was extremely easy to use and very performant.
* Distributing the data processing between the server and client is a balance worth playing with.

## What's next for Hotspot

Next, we'd like to expand our work on the iOS side and create a sister application for Android (currently in the works). We'd also like to overlay our heatmap on Google Maps. There are also many interesting things you can do with a WiFi heatmap. Given some user settings, we could automatically switch from WiFi to data when the WiFi signal strength is about to get too poor. We could also use this app to find optimal placements for routers. Finally, we could use the application in disaster scenarios to compute, on the fly, areas where internet access is still up, or to produce approximate population heatmaps.
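
A minimal sketch of the visualization step described above, using SciPy's Gaussian kernel density estimation over (latitude, longitude) readings weighted by signal strength; the sample readings and grid resolution are illustrative:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Illustrative readings: (latitude, longitude, signal_strength_0_to_1).
readings = np.array([
    [37.4275, -122.1697, 0.9],
    [37.4278, -122.1701, 0.7],
    [37.4280, -122.1690, 0.2],
    [37.4270, -122.1685, 0.5],
])

points = readings[:, :2].T  # shape (2, n), as gaussian_kde expects
kde = gaussian_kde(points, weights=readings[:, 2])

# Evaluate the weighted density on a small grid around the readings.
lat = np.linspace(points[0].min(), points[0].max(), 50)
lon = np.linspace(points[1].min(), points[1].max(), 50)
grid_lat, grid_lon = np.meshgrid(lat, lon)
heat = kde(np.vstack([grid_lat.ravel(), grid_lon.ravel()])).reshape(grid_lat.shape)

# 'heat' can now be rendered as a colour overlay on the map.
print(heat.max())
```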
## Inspiration

* This July was the hottest month ever.
* Trees are effective at cooling cities ([10˚ cooler than no-tree areas, 2-4x faster at cooling](https://www.nature.com/articles/s41467-021-26768-w)).
* But the bottleneck is knowing **specifically where** to plant trees (how do we know? [a city mayor said it in an interview](https://tinyurl.com/quotebymayor)).

## What it does

After the user selects a location on the map, our platform does three things:

1. Identifies and outlines urban heat islands in the neighborhood.
2. Locates specific streets where a) the temperature is high, b) trees are underutilized, and c) the sidewalks are wide enough to plant trees.
3. Proposes a solution with a) a visualization of what each street would look like with a tree canopy, b) recommendations of native tree species that could be planted, and c) an estimate of the benefits of this course of action.

## How we built it

### Part I. Overlaying heat, canopy, and air pollution data on the map

* Heat data: we used meteoblue's surface temperature API.
* Canopy (trees): there's no direct canopy dataset, so we bootstrapped with Google Maps' Satellite API. We scanned a 5-mile radius from a point of interest. To calculate canopy density, we built a lightweight computer vision algorithm from scratch to detect trees in aerial images.
* Air pollution: we used Google's pollution API.
* Map: @Jenny @Balaji carried us.
* Part I isolates specific small regions in a large radius that have the *potential* to be improved via tree planting. Next:

### Part II. Identifying specific streets suitable for planting trees

* Getting the street view: Google Street View API.
* Assessing whether a street 1) already has a canopy presence, and 2) if not, whether it has space for trees to grow: we used OpenAI's CLIP API. @Andy

### Part III: Recommendation and details for planting trees on this street:

* Generating a visualization of what the street could look like if it had a canopy: we integrated OpenAI's DALL-E API.
* Finding suitable trees to plant in this region (soil, climate, maintenance, and we want to promote native species): we applied OpenAI's GPT-3 API.
* Benefit estimation: we drew conclusions from research papers (here's a [meta study](https://www.sciencedirect.com/science/article/abs/pii/S1618866712000829)).

### Front and Back End:

Balaji and Jenny took care of arguably the nastiest part of this project, the front end, which consists of building a map using the Leaflet library, overlaying data on it (drawing polygons and colouring them in shades), working with KML and GeoJSON files, and marking user-input coordinates. Andy and Ryan took care of the algorithms and the backend, creating the computer vision algorithms, applying LLMs, and ensuring the Python code used for the LLMs works in the TypeScript environment of our overall project.

## Challenges we ran into

Bottlenecks, from big to small:

1. Adding trees to a street is surprisingly hard. Generative models aren't good at adding objects without altering the rest of the picture. We tried Stable Diffusion, ControlNet, Dreambooth to fine-tune Stable Diffusion, DALL-E, pipelines of two foundation models, and countless variations of prompts and parameters. We gave it our best shot and eventually found a set of prompts that generated good results.
2. Assessing canopy coverage (which streets lack trees) took us a long time.
Because 1) we needed fine-grained, street-level data; 2) the closest match, [Google's canopy dataset](https://insights.sustainability.google/places/ChIJE9on3F3HwoAR9AhGJW_fL-I/trees), is inaccessible as it has a waitlist; and 3) tree classification is difficult because we needed to iterate through hundreds of satellite images. So, we created our own dataset from raw satellite images and an in-house computer vision algorithm (>95% accuracy, processing 50 images in <5 seconds).
3. Map. Perseverance. Patience.
4. Google's Street View image defaults to a northern direction, and their photographer-POV API has been deprecated, so we created our own computer vision model to ensure that our street view faces down the street.
5. Finding the right datasets, reformatting them, and parsing GBs of data to fit our application.

## Accomplishments that we're proud of

We LEARNED so much! From fine-tuning Stable Diffusion to working with the Leaflet map library, each one of us walked away having done something we had no idea how to do a day ago.

## Memories we made

* Getting kicked off the MIT lounge couches at 4 a.m. Saturday
* Going for a run around the track in the dark at 1 a.m. Sunday
* Jeopardizing each other's API keys
* Ramen, Red Bull, sunrise, sunset
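
The in-house canopy-detection algorithm isn't described in detail; as one simple illustration of how canopy density could be estimated from an aerial tile (HSV thresholding with OpenCV; the threshold values are guesses, not the team's), consider:

```python
import cv2
import numpy as np

def canopy_density(image_path):
    """Estimate the fraction of an aerial tile covered by green vegetation."""
    bgr = cv2.imread(image_path)  # assumes the tile exists on disk
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Rough hue/saturation/value range for foliage; tune per imagery source.
    lower = np.array([30, 40, 20])
    upper = np.array([90, 255, 200])
    mask = cv2.inRange(hsv, lower, upper)
    # Remove speckle so isolated green pixels (grass, noise) count for less.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return float(np.count_nonzero(mask)) / mask.size

print(canopy_density("tile.png"))  # e.g. 0.34 -> roughly 34% canopy cover
```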
## Inspiration

We are part of a generation that has lost the art of handwriting. Because of the clarity and ease of typing, many people struggle with clear handwriting. Whether you're a second-language learner or public education failed you, we wanted to come up with an intelligent system for efficiently improving your writing.

## What it does

We use an LLM to generate sample phrases, sentences, or character strings that target the letters you're struggling with. You can input your writing as a photo or directly on the webpage. We then use OCR to parse and score it and give you feedback toward the ultimate goal of character mastery!

## How we built it

We built a simple front end using flexbox layouts, the p5.js library for canvas writing, and plain JavaScript for logic and UI updates. On the backend, we hosted and developed an API with Flask, allowing us to receive responses from API calls to state-of-the-art OCR and the newest ChatGPT model. We also manage user scores with Python sequence-alignment algorithms.

## Challenges we ran into

We really struggled with our concept, tweaking and revising it until the last minute! However, we believe this hard work paid off in the elegance and clarity of our web app, UI, and overall concept. ..also sleep 🥲

## Accomplishments that we're proud of

We're really proud of our text recognition and matching. These intelligent systems were not easy to use or customize! We also think we found a creative use for the latest ChatGPT model, flexing its utility in phrase generation for targeted learning. Most importantly, though, we are immensely proud of our teamwork and of how everyone contributed pieces to the idea and to the final project.

## What we learned

3 of us have never been to a hackathon before! 3 of us had never used Flask before! All of us had never worked together before! From working with an entirely new team to utilizing specific frameworks, we learned A TON... and also just how much caffeine is too much (hint: NEVER).

## What's Next for Handwriting Teacher

Handwriting Teacher was originally meant to teach Russian cursive, a much more difficult writing system (if you don't believe us, look at some pictures online). Taking a smart and simple pipeline like this and updating the backend intelligence allows our app to incorporate cursive, other languages, and stylistic aesthetics. Further, we would be thrilled to implement a user authentication system and a database, allowing people to save their work, gamify their learning a little more, and feel a more extended sense of progress.
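
The scoring relies on Python sequence-alignment logic; a minimal sketch of that idea is a standard edit-distance alignment between the target phrase and the OCR output (the normalization into a 0-1 score is illustrative), and a backtrace over the same table could identify which specific characters were missed:

```python
def align(target, ocr_output):
    """Classic dynamic-programming edit distance between target text and OCR output."""
    n, m = len(target), len(ocr_output)
    # dp[i][j] = edit distance between target[:i] and ocr_output[:j].
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if target[i - 1] == ocr_output[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # match / substitution
    distance = dp[n][m]
    score = 1.0 - distance / max(n, 1)  # 1.0 means a perfect match
    return distance, score

print(align("handwriting", "handwritinq"))  # (1, ~0.909)
```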
## Inspiration

With elections right around the corner, many young adults are voting for the first time and may not be equipped with knowledge of the law and current domestic events. We believe that this is a major problem for our nation, and we seek to use open-source government data to provide everyday citizens with access to knowledge about legislative activities and current affairs.

## What it does

OpenLegislation aims to bridge the knowledge gap by providing easy access to legislative information. By leveraging open-source government data, we empower citizens to make informed decisions about the issues that matter most to them. This approach not only enhances civic engagement but also promotes a more educated and participatory democracy. Our platform allows users to input an issue they are interested in, and then uses cosine similarity to fetch the most relevant bills currently in Congress related to that issue.

## How we built it

We built this application with a tech stack of MongoDB, ExpressJS, ReactJS, and OpenAI. Databricks' Llama Index was used to get embeddings for each bill's title. We used vector search, with MongoDB Atlas Vector Search and Mongoose, for accurate semantic results when searching for a bill. Additionally, Cloudflare's AI Gateway was used to track calls to GPT-4o for insightful analysis of each bill.

## Challenges we ran into

At first, we tried to use OpenAI's embeddings for each bill's title. However, this created a lot of issues for our scraper: while the embeddings were really good, they took up a lot of storage and the API was heavily rate limited. This was not feasible at all. To solve this challenge, we pivoted to a smaller model, a pre-trained transformer that provides embeddings processed locally instead of through an API call. Although the semantic search was slightly worse, we were able to get satisfactory results for our MVP, and we can expand to different, higher-quality models in the future.

## Accomplishments that we're proud of

We are proud that we have used open-source software and data to empower people with transparency and knowledge of what is going on in our government and our nation. We have used the most advanced technology that Cloudflare and Databricks provide and leveraged it for the good of the people. On top of that, we are proud of the technical achievement of our semantic search, giving people the bills they want to see.

## What we learned

During the development of this project, we learned more about how vector embeddings work and how they are used to provide the best search results. We learned more about Cloudflare's and OpenAI's tools, and we will definitely be using them in future projects. Most importantly, we learned the value of open-source data and technology and the impact they can have on our society.

## What's next for OpenLegislation

For the future of OpenLegislation, we plan to expand to individual states! With this addition, constituents can know directly what is going on in their state on top of their country, and can actually receive updates on what the officials they elected are proposing. In addition, we would expand our technology by using more advanced embeddings for more tailored searches. Finally, we would employ more data analysis methods, with help from Cloudflare's and Databricks' open-source technologies, to help make this important data more available and transparent for the good of society.
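
A minimal sketch of the semantic search described above: embed the user's issue and each bill title with a small, locally run sentence-transformer and rank by cosine similarity (the model name and bill titles are illustrative; the production system stores embeddings in MongoDB Atlas Vector Search rather than in memory):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Small, locally run embedding model; OpenLegislation's exact model isn't specified.
model = SentenceTransformer("all-MiniLM-L6-v2")

bill_titles = [
    "A bill to expand rural broadband access",
    "A bill to reform student loan repayment",
    "A bill to fund wildfire prevention programs",
]
bill_embeddings = model.encode(bill_titles, normalize_embeddings=True)

def search(issue, top_k=2):
    """Rank bill titles by cosine similarity to the user's issue."""
    query = model.encode([issue], normalize_embeddings=True)[0]
    scores = bill_embeddings @ query  # cosine similarity, since vectors are unit-length
    best = np.argsort(scores)[::-1][:top_k]
    return [(bill_titles[i], float(scores[i])) for i in best]

print(search("college debt relief"))
```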
## Inspiration Oftentimes we find ourselves not understanding the content taught in class and rarely remembering what exactly was conveyed. Some of us also have the habit of misplacing notes and forgetting where we put them. So, to help all the ailing students, we had the idea to make an app that gives students curated, automatically generated content from the notes they upload online. ## What it does A student uploads their notes to the application. The application creates a summary of the notes, additional information on the subject of the notes, flashcards for easy remembering, and quizzes to test their knowledge. There is also the option to view other students' notes (uploaded to the same platform) and do all of the above with them as well. We made an interactive website that can help students digitize and share notes! ## How we built it Google Cloud Vision was used to convert images into text files. We used the Google Cloud NLP API for the formation of questions from the plain text by identifying the entities and syntax of the notes. We also identified the most salient features of the notes and assumed them to be the topic of interest. By doing this, we are able to scrape more detailed information on the topic using the Google Custom Search Engine API. We also scrape information from Wikipedia. Then we make flashcards based on the questions and answers and also make quizzes to test the knowledge of the student. We used Django as the backend to create a web app. We also made a chatbot in Google Dialogflow to inherently enable the use of Google Assistant skills. ## Challenges we ran into Extending the platform to a collaborative domain was tough. Connecting the chatbot framework to the backend and sending back dynamic responses using webhooks was more complicated than we expected. Also, we had to go through multiple iterations to get our question formation framework right. We used the assumption that the main topic would be the noun at the beginning of the sentence. We also had to replace pronouns in order to keep track of the conversation. ## Accomplishments that we're proud of We have only 3 members on the team, and one of them has a background in electronics engineering with no prior experience in computer science. We started with only an idea of what we were planning to make and no idea of how we would make it. We are very proud to have achieved a fully functional application at the end of this 36-hour hackathon. We learned a lot of concepts regarding UI/UX design, backend logic formation, connecting the backend and frontend in Django, and general software engineering techniques. ## What we learned We learned a lot about the problems of integrating and deploying an application. We also had a lot of fun making this application because it has the potential to help a large number of people in day-to-day life. Also, we learned about NLP, UI/UX, and the importance of having a well-set plan. ## What's next for Noted In the best-case scenario, we would want to convert this into an open-source startup and help millions of students with their studies, so that they can score good marks in their upcoming examinations.
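A rough sketch of the first two steps (image to text, then naive question formation) might look like the following, assuming the Google Cloud Vision client library and a simplified "sentence-initial noun is the topic" heuristic rather than the full NLP API pipeline.

```python
from google.cloud import vision

def extract_text(image_path: str) -> str:
    """OCR a photo of handwritten/printed notes into plain text."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.document_text_detection(image=image)
    return response.full_text_annotation.text

notes_text = extract_text("lecture_notes.jpg")  # hypothetical input file

# Naive question formation: treat the sentence-initial noun as the topic.
for sentence in notes_text.split("."):
    words = sentence.strip().split()
    if len(words) > 3:
        print(f"What do you know about {words[0]}?")
```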
winning
## Inspiration We've all left a doctor's office feeling more confused than when we arrived. This common experience highlights a critical issue: over 80% of Americans say access to their complete health records is crucial, yet 63% lack their medical history and vaccination records since birth. Recognizing this gap, we developed our app to empower patients with real-time transcriptions of doctor visits, easy access to health records, and instant answers from our AI doctor avatar. Our goal is to ensure EVERYONE has the tools to manage their health confidently and effectively. ## What it does Our app provides real-time transcription of doctor visits, easy access to personal health records, and an AI doctor for instant follow-up questions, empowering patients to manage their health effectively. ## How we built it We used Node.js, Next.js, webRTC, React, Figma, Spline, Firebase, Gemini, Deepgram. ## Challenges we ran into One of the primary challenges we faced was navigating the extensive documentation associated with new technologies. Learning to implement these tools effectively required us to read closely and understand how to integrate them in unique ways to ensure seamless functionality within our website. Balancing these complexities while maintaining a cohesive user experience tested our problem-solving skills and adaptability. Along the way, we struggled with Git and debugging. ## Accomplishments that we're proud of Our proudest achievement is developing the AI avatar, as there was very little documentation available on how to build it. This project required us to navigate through various coding languages and integrate the demo effectively, which presented significant challenges. Overcoming these obstacles not only showcased our technical skills but also demonstrated our determination and creativity in bringing a unique feature to life within our application. ## What we learned We learned the importance of breaking problems down into smaller, manageable pieces to construct something big and impactful. This approach not only made complex challenges more approachable but also fostered collaboration and innovation within our team. By focusing on individual components, we were able to create a cohesive and effective solution that truly enhances patient care. Also, learned a valuable lesson on the importance of sleep! ## What's next for MedicAI With the AI medical industry projected to exceed $188 billion, we plan to scale our website to accommodate a growing number of users. Our next steps include partnering with hospitals to enhance patient access to our services, ensuring that individuals can seamlessly utilize our platform during their healthcare journey. By expanding our reach, we aim to empower more patients with the tools they need to manage their health effectively.
## Inspiration Some of us have family members in healthcare and see the overwhelming hardships they experience trying to provide healthcare to members of society. We also witness how hard it can be for the average person to receive basic healthcare without losing a lot of money. ## What it does This product uses AI to develop solutions to the personal health problems you are encountering and want solved. We also offer the opportunity to connect to a real doctor if the AI is not helping. ## How we built it We used Next.js, React, Firebase, and OpenAI to create this. ## Challenges we ran into A lot of the challenges centered around developing the AI chat experience as well as the doctor-to-patient experience. ## Accomplishments that we're proud of We managed to get the full product and the expected functionalities within 24 hours. There's also a full 24/7 backend database storing user and doctor credentials. ## What we learned Leveraging Firebase for user authentication and complex databases, as well as building a real-time chat experience between doctors and patients. ## What's next for Telemedicine Chatbot
## Inspiration An Annals of Internal Medicine study\* showed that of 430 physician office hours observed, only 27% was spent directly with patients whereas 49% of their time was spent on Electronic Health Records (EHR). This is not including the additional 1-2 hours spent outside of office hours each day, dedicated primarily to EHR tasks. We built a comprehensive app that uses ML to reduce the time spent on EHR tasks so that physicians may have more time for seeing patients. ## What it does & How we built it ### Task 1: Ordering Tests Physicians have to order tests for patients based on their medical records. Some are regular (e.g. annual) tests, while others are specific to certain symptoms. It takes a physician a long time to meticulously read through the entire patient medical record, and even then, some less common tests are occasionally forgotten. Our solution consists of a **Machine Learning (ML)** algorithm that uses both Natural Language Processing (via an ANN) and classification (kNN) to scan through patient medical records for doctor notes (text) and test results (numerical). NLP extracts the key words from the physician's notes and uses a pre-trained model to identify which conditions need to be tested for. Likewise, classification uses the medical profiles (previous test results) of other patients to rank the need for certain tests. ### Task 2: Adding notes Physicians often make jot notes on paper, then later sit down by a desktop to type up paragraphs to add to the digital patient record file. This meant that more time was spent re-familiarizing with the case, not to mention paper notes can be more easily displaced. We implemented an iOS feature that can automatically transcribe **speech to text**, so that physicians can take elaborate notes on the go, and only light revision is necessary later on. ## Ethical Considerations *How much should we rely on ML/AI*? The technology has advanced very quickly and predictions are now more accurate than ever, but where medicine is concerned, can we ever leave human lives in the hands of AI? Throughout the process of building this app, we've been very clear on the fact that this is meant to be a helper, not a replacement for physicians. While we hope to maximize the efficiency of the hospital workflow so more people can be helped, we believe that it would be unethical to leave the health of people in the hands of only AI. Thus we conclude that while AI has proven helpful in its ability to reduce repetitive work, it would be unwise to eliminate the human factor completely. These considerations are reflected in our app design, as physicians must approve suggested tests for them to be ordered. We have designed it such that the physician must click on each prompt to order the suggested test, thus reinforcing review of the AI's suggestions. ## Challenges I ran into * compatibility issues between Python scripts and the JavaScript backend * finding publicly available medical record datasets ## Accomplishments that I'm proud of * being able to evaluate using both paragraphs and numerical test results * learning React on the go! ## What I learned * how NLP-ANN works * how to connect between different languages & platforms ## What's next for health.ai * develop ML to be more robust, maintaining accuracy for a greater variety of medical conditions. ## Sources \*Published: Ann Intern Med. 2016;165(11):753-760, DOI: 10.7326/M16-0961, Published at [www.annals.org](http://www.annals.org) on 6 September 2016, © 2016 American College of Physicians
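The classification half of Task 1 could be prototyped along these lines, assuming a small scikit-learn kNN model over numeric test results (the features, labels, and data below are invented for illustration, not the project's dataset).

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# columns: age, BMI, systolic BP, fasting glucose (invented example data)
X = np.array([
    [54, 31.0, 148, 130],
    [23, 22.5, 118,  90],
    [61, 28.4, 139, 160],
    [35, 24.1, 121,  95],
])
y = np.array([1, 0, 1, 0])  # 1 = an HbA1c test was ordered for this profile

model = KNeighborsClassifier(n_neighbors=3).fit(X, y)

new_patient = np.array([[58, 30.2, 144, 150]])
need_score = model.predict_proba(new_patient)[0][1]
# The score only ranks the suggestion; the physician still approves the order.
print(f"Suggest HbA1c test: score={need_score:.2f}")
```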
losing
## Inspiration Hospitals have drastically cut nursing staff over the past decade with millions more patients flowing into the healthcare system. Around 300,000 patients die from medical mistakes, many of which are caused by chronic overwork & staffing shortages. Robots have the potential to minimize busywork for nurses and re-focus hospital attention on patient care. We made *MediBot* to provide a cost-effective automation solution for hospitals and clinics. ## What it does We provide a **fully automated intra-hospital delivery & patient care service** to patients in a seamless fashion. The first part of our system is the Amazon Echo Dot + Alexa. Patients (who may be physically disabled) can call for *MediBot* deliveries. These delivery requests (intents) include: \*Deliver Food \*Deliver Water \*Deliver a Tissue Sample \*Deliver the Blood Sample \*Call for Help \*Bring Nurse \*Call Nurse \*Get the Doctor \*What is my prescription schedule \*When do I take my pills The assigned tasks are instantly uploaded to our Express server and processed to be viewed on our mobile application. In our app, we simply list out the tasks that have been assigned to the bot. After a task has been assigned (via Bluetooth), the Bot begins to move along a solid colored path. In hospitals, *MediBot* would have a great impact: because colored lines are already used for navigation in hospitals, *MediBot* would have an easy time navigating the turns using its photo sensors to follow hospital lines. ## How we built it The **automated intra-hospital delivery & patient care service** has three parts: the Alexa commands, a mobile app, and a complementary delivery bot. The Alexa piece leverages the Echo SDK and Express to allow for communication with the mobile app based on voice commands and broadcasting. Once a command has been said, we run multiple update processes so that the commands are shown on the mobile app, created entirely in Ionic. Commands then trigger the Bot to begin moving along a solid colored path. This type of sensing and movement is possible using a BLE receiver and transceiver along with two 360° servos, an Arduino Uno, 4 light sensors, a battery pack, and a 3D laser-cut caster and wheels. When all 3 parts are combined with live command data, they form a fully **automated, voice-assisted intra-hospital delivery & patient care service**. ## Challenges we ran into We initially wanted to create a dashboard of multiple Bots in a hospital environment; however, this was troublesome due to React's limited animation capability, which made us move to a mobile Ionic app. We also had to look into light sensors because IR sensors were really finicky to work with: the IR sensors' readings did not change significantly when they were turned on, so we had to learn how to use light sensors instead. We attempted to account for these problems as much as possible, allowing us to bring all 3 parts together into a seamless, easy experience. ## Accomplishments that we're proud of We were excited when we knew we had a cohesive system, from requests to visualization of these requests, to **LINE SENSING** with our bot, instead of just one part. It was exciting to even see UPenn Medicine professors excited about our product! We were also proud of creating something that all of us will actually see in practice (we solved a common problem and we hope to change lives significantly with this product). ## What we learned We learned how to use light sensors.
We also learned how to efficiently sync data across Alexa, our mobile app, and the Bot. This project was filled with challenges that we had to Google extensively and debug constantly to get right. ## What's next for MediBot We want to implement a machine learning layer on the bot and application to allow for robust sharing of requests across all users. With NLP features, such as keywords and common requests, we can provide patients with even more commands. We can develop ML models to factor in faster routes within a hospital to reach patients. We also want to provide this product ready out of the box to other hospitals or our own in the near future. MediBot can improve in a number of ways -- we will tackle these tasks generally in order of importance as we see fit.
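To make the request pipeline concrete, a stripped-down stand-in for the intent-to-task flow might look like the Flask sketch below (the real project used an Express server; the route names, intent names, and task fields here are assumptions).

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
tasks = []  # the real project keeps this behind an Express server / datastore

INTENT_TO_TASK = {
    "DeliverWaterIntent": "Deliver water",
    "DeliverFoodIntent": "Deliver food",
    "BringNurseIntent": "Bring nurse",
}

@app.route("/alexa", methods=["POST"])
def handle_intent():
    body = request.get_json(force=True)
    intent = body["request"]["intent"]["name"]  # standard Alexa IntentRequest path
    tasks.append({"task": INTENT_TO_TASK.get(intent, intent), "status": "pending"})
    speech = {"type": "PlainText", "text": "MediBot is on the way."}
    return jsonify({"version": "1.0", "response": {"outputSpeech": speech}})

@app.route("/tasks")
def list_tasks():
    return jsonify(tasks)  # polled by the mobile app, which relays to the bot
```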
## Inspiration While we were thinking about the sustainability track, we realized that one of the biggest challenges faced by humanity is carbon emissions, global warming, and climate change. According to Dr. Fatih Birol, IEA Executive Director - *"Global carbon emissions are set to jump by 1.5 billion tonnes this year. This is a dire warning that the economic recovery from the Covid crisis is currently anything but sustainable for our climate."* With this concern in mind, we decided to work on a model that could serve as a small, compact carbon-capturing system to reduce the carbon footprint around the world. ## What it does The system is designed to capture CO2 directly from the atmosphere using microalgae as our biofilter. ## How we built it Our plan was to first develop a design that could house the microalgae. We designed a chamber in Fusion 360 which we later 3D printed to house the microalgae. The air from the surroundings is directed into the algal chamber using an aquarium aerator. The pumped-in air moves into the algal chamber through an air stone bubble diffuser which allows the air to break into smaller bubbles. These smaller air bubbles make the CO2 sequestration easier by giving the microalgae more time to act upon them. We have made a spiral design inside the chamber so that the bubbles travel upward through the chamber in a spiral fashion, giving the microalgae even more time to act. In due course, this continuous process would lead to the capture of CO2 and the production of oxygen. ## Challenges we ran into 3D printing the parts of the chamber within the specified time. Getting our hands on enough microalgae to fill up the entire system in its optimal growth period (log phase) for the best results. Making the chamber leak-proof. ## Accomplishments that we're proud of The hardware that we were able to design and build over the stipulated time. Developing a system that could actually bring down CO2 levels by utilizing the unique abilities of microalgae. ## What we learned We came across a lot of research papers describing the best use of microalgae in its role of capturing CO2. Time management: we learnt to design and develop a system from scratch in a short period. ## What's next for Aria We plan to conduct more research using microalgae and enhance the design of the existing system we built so that we can increase the carbon capture efficiency of the system. Keeping in mind deteriorating indoor air quality, we also plan to integrate it with inorganic air filters so that it can help improve overall indoor air quality. We also plan to conduct research to find out how much area one unit of Aria covers.
## Inspiration Reading about issues with patient safety was... not exactly inspiring, but eye-opening. Issues that were only a matter of human error (understaffing, forgetfulness, etc.), like bed sores, seemed like things that could easily be kept track of to at least make sure patients get a heightened quality of life. So we decided to make an app that tracks patient wellness and needs, not necessarily just concrete items, but all the necessary follow-up items from them as well. We understand that schedulers for more concrete events like appointments already exist, but something that can remind providers to check up on patients in 3 days to see if they have had any side effects from their new prescription, or any other task, would be helpful. ## What it does The Med-O-Matic keeps track of patient needs and when they're needed, and sets those needs up in a calendar and matching to-do list for a team of healthcare providers in a hospital to take care of. Providers can claim tasks that they will get to, and can mark them down as they go throughout their day. This essentially serves as a scheduled task list for providers. ## How we built it To build the frontend, we used Vue.js. We have a database holding all the tasks on AWS DynamoDB. ## Challenges we ran into Getting started was a bit difficult and we weren't really sure which direction we should take for Med-O-Matic. There were a lot of uncertainties about what exactly would be best for our application, so we had to delve in a bit deeper by thinking about what the current process is like at hospitals and clinics, and finding areas for improvement. This led us to address a process issue in task assignment to reduce the number of errors associated with inattentiveness. ## Accomplishments that we're proud of What makes our application different than others is that you can sequence tasks and use these sequences as a template. For example, a procedure like heart surgery always has required follow-up steps. You can create a heart-surgery template that will be used to set all the required follow-up steps. After the template is created, we can easily reapply that template however many times we want! ## What we learned We learned how to deploy using DeFang, and also how to connect our frontend with DynamoDB. And we learned more about the domain of our project, which is patient safety. ## What's next for Med-O-Matic More automation would be next. We've already got some of the pieces for making sequences of tasks, but features like a send-a-text feature, for example, to make the following-up process easier would be next; in other words, we'd add features that help do the tasks as well, instead of simply reminding providers of what they need to do. We would also connect it to a medical scheduler API such as Epic's. This would allow us to really get the task sequencing working seamlessly with a real workflow, as something like a surgery can be scheduled in Epic, happen, and then trigger the Med-O-Matic to create all the necessary follow-up tasks from that.
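The template idea could be sketched against DynamoDB roughly as below; the table name, attribute names, and template contents are our own illustrative assumptions, not the team's schema.

```python
import uuid
import datetime
import boto3

table = boto3.resource("dynamodb").Table("med_o_matic_tasks")  # assumed table name

HEART_SURGERY_TEMPLATE = [
    {"task": "Check incision site", "offset_days": 1},
    {"task": "Review pain medication side effects", "offset_days": 3},
    {"task": "Schedule follow-up echocardiogram", "offset_days": 14},
]

def apply_template(patient_id: str, template, start=None):
    """Instantiate every step of a template as an unclaimed task."""
    start = start or datetime.date.today()
    for step in template:
        due = start + datetime.timedelta(days=step["offset_days"])
        table.put_item(Item={
            "task_id": str(uuid.uuid4()),
            "patient_id": patient_id,
            "description": step["task"],
            "due_date": due.isoformat(),
            "claimed_by": None,  # a provider claims the task later
        })

apply_template("patient-123", HEART_SURGERY_TEMPLATE)
```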
partial
## Inspiration My sister and I are both very detail-oriented, organized, and heavily rely on our calendars. We wanted to build an app that could parse course outlines and automatically add important dates to our calendar. ## What it does * You can upload a course outline to [www.coursecalendar.online](http://www.coursecalendar.online) * We extract important dates from your file - assignment due dates, exams, etc. * Add these dates to your calendar with a single click! * Save time by avoiding manually inputting all those dates ## How we built it We used Flask for the API to easily handle the data flowing through our app in Python. Bootstrap and HTML for the frontend. Heroku for deployment and domain.com for our domain name. ### Salesforce We found that Heroku made it simple and easy to deploy our application. It gave us a range of buildpack options for different application languages, and setting up and updating the deployment was painless. Setting up our domain name was also straightforward. ## Challenges I ran into We were limited by the APIs we could choose from, since there were only a few free options. These APIs had other roadblocks like rate limits, poor documentation, and occasional unreliability. For example, load testing found that the PDF-to-text API fails with large file uploads and the NLP API sometimes had issues parsing a large amount of text for dates. ## Accomplishments that I'm proud of Transforming all this data through APIs successfully :) ## What I learned How to transform and extract data through APIs. ## What's next for Course Calendar Google Calendar and Apple Calendar integration!
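Once the outline has been converted to plain text, the date-extraction step could look something like this sketch (the regex and event labelling are simplified assumptions; the real app leaned on third-party NLP and PDF-to-text APIs).

```python
import re
from dateutil import parser

MONTH_DATE = re.compile(
    r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+\d{1,2}"
    r"(?:,?\s+\d{4})?\b",
    re.IGNORECASE,
)

def extract_events(outline_text: str):
    """Return (date, description) pairs for every line that mentions a date."""
    events = []
    for line in outline_text.splitlines():
        for match in MONTH_DATE.findall(line):
            try:
                when = parser.parse(match, fuzzy=True).date()
            except (ValueError, OverflowError):
                continue
            events.append({"date": when.isoformat(), "description": line.strip()})
    return events

print(extract_events("Assignment 2 due Oct 14, 2022\nMidterm exam: November 3"))
```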
## Inspiration 💡 Back at university, we had a friend who couldn't see the professor's notes during lectures, so he needed to take pictures and often fell behind in class. But what if there was a way to take the pictures and convert them straight to notes? Introducing NoteSense, the fast and easy way to digitize captured photos and audio into ready-to-use typed-up notes. ## What it does 🤔 NoteSense is a notes accessibility app that allows users to create notes based on images or audio snippets. By harnessing technologies such as speech recognition and optical character recognition (OCR), users who have hearing deficits or vision impairment can create notes in a format they can access quickly and conveniently! Our platform quickly converts the image or audio they captured on their mobile device to a PDF that is sent to their email! This way users can quickly stay on track during their lectures and not feel behind or at a disadvantage compared to their colleagues. Users also have the ability to view their generated PDFs on their device for quick viewing as well! ## How we built it 🖥️ When building out NoteSense, we chose 3 key design principles to help ensure our product meets the design challenge of accessibility: simplicity, elegance, and scalability. We wanted NoteSense to be simple to design, upgrade, and debug. This led us to harness the lightweight framework of Flask and the magic of Python to design our backend infrastructure. To ensure our platform is scalable and efficient, we harnessed the Google Cloud Platform to perform both our speech and image conversions, using its Vision and Speech APIs respectively. Using GCP as our backbone allowed our product to be efficient and responsive! We then used various Python libraries to create our email and file conversion services, enabling us to harness the output from GCP to rapidly send PDFs of their notes to our users' emails! To create an elegant and user-friendly experience we leveraged React Native and various design libraries to present our users with a new, accessible platform to create notes for individuals who may have hearing and/or vision difficulties. React Native also worked seamlessly with our Flask backend and our third-party APIs. This integration also allowed for concurrent development streams for both our front-end and back-end teams. ## Challenges we ran into 🔧 Throughout the course of this hackathon, we faced a variety of challenges before producing our final product. Issues with PDF reading and writing, audio conversion, and cross-platform compatibility were the most notable of the bunch. Since this was our first time manipulating a phone's filesystem using React Native, we had a few hiccups during the development of the PDF code to write to and read from the phone's document directory. More specifically, we were confused as to how to create and populate a file with a stream of data of a PDF file type in the local filesystem. After some thorough research, we discovered that we could encode our data in a Base64 format and asynchronously write the string to a file in the local filesystem. Consequently, we could read this same file asynchronously and decode the Base64 to display the PDF in the app. Audio conversion was initially a big issue as neither the frontend nor the backend had in-built or 3rd-party library functionality to convert between two specific file types that we believed we could not avoid.
However, we later found that the client-side recording can be saved as a file type that was compatible with the Google Cloud Platform’s speech to text API. Cross platform compatibility was an issue that arose in multiple places throughout the course of development. Some UI elements would appear and behave differently on different operating systems. Fortunately, we had the ability to test on both Android and IOS devices. Therefore, we were able to pinpoint the cross platform issues and fix them by adding conditionals to change UI based on what platform the app is running on. Although we had to face various obstacles during the development of our app, we were able to overcome every single one of them and created a functional application with our desired outcome. ## What we learned 🤓 Hack the 6ix really helped develop our hard and soft skills. For starters, for many of us it was our first time using Google Cloud platform and other various google services! Learning GCP in a high pressure and fast paced environment was definitely a great and unique experience. This was also the first hackathon where we targeted a specific challenge (accessibility and GCP) and designed a product accordingly. As a result, this event enabled us to hone both our technical and design skills to create a product to help solve a specific problem. Furthermore, we also learned how to deal with file conversions in both Python and in React Native! Participating in this hackathon in a virtual setting definitely tested our team work and communication skills. We collaborated through Discord to coordinate our issues and track progress, as well as play music on our server to keep our morale at a high :). ## What's next for NoteSense 🏃‍♂️ For the future we have many ideas to improve the accessibility and scalability of NoteSense. A feature we weren’t able to develop currently but are planning is to improve our image recognition to handle detailed diagrams and drawings. Diagrams often paint a better picture and improve one's understanding which is something we would want to take advantage of in Note Sense. Due to the limitations of Google Cloud Platform currently our speech to text functionality is limited to only 60 seconds. This is fine for shorter recordings, however in the future we would want to look into options that allow for longer audio files for the purpose of recording live lectures, meetings or calls. Another feature that we would like to explore is using video to not only convert the audio into notes, but also capture any visual aid provided in the video to enhance the PDF notes we create.
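As a sketch of the text-to-PDF-to-Base64 step described above, the server side might do something like the following, using the fpdf package as a stand-in for whichever PDF library the team actually used (file names and fonts are assumptions).

```python
import base64
from fpdf import FPDF

def notes_to_pdf_base64(note_text: str, out_path: str = "notes.pdf") -> str:
    """Render plain text to a PDF, then return it Base64-encoded for the client."""
    pdf = FPDF()
    pdf.add_page()
    pdf.set_font("Helvetica", size=12)
    pdf.multi_cell(0, 8, note_text)  # wraps long lines automatically
    pdf.output(out_path)
    with open(out_path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

encoded = notes_to_pdf_base64("Lecture 4: optical character recognition basics...")
print(encoded[:60], "...")  # the React Native app decodes this and saves the file
```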
## Inspiration To create an autonomous robot to help carry items. ## What it does Tracks a wearable beacon on a person and follows it. ## How we built it Built an ultrasound-transmitting beacon with a transducer and an Arduino Nano. Used 2 ultrasound range sensors to detect the differential range to the beacon and determine its direction. Maneuvered the robot accordingly to track the beacon. ## Challenges we ran into Limited range of operation due to the high dynamic range of the ultrasonic sonar. Precise clock synchronization between transmitter and receiver. The DC motor was provided without a motor controller (our improvised H-bridge controller was not really effective). ## Accomplishments that we're proud of Our hard work and determination to yield a working prototype. ## What we learned Cost-effective indoor tracking is a challenging task. ## What's next for CompanionBot Improve the motor to bear a greater load.
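The steering idea reduces to comparing the two range readings; a tiny Python illustration of that control logic is below (the real robot runs this on an Arduino Nano, and the dead-band value is an assumption).

```python
def steer(left_range_cm: float, right_range_cm: float, dead_band_cm: float = 5.0) -> str:
    """Turn toward whichever sensor reports the shorter range to the beacon."""
    diff = left_range_cm - right_range_cm
    if abs(diff) <= dead_band_cm:
        return "forward"  # beacon roughly centred
    return "turn_left" if diff < 0 else "turn_right"

print(steer(82.0, 95.0))  # beacon is closer on the left -> turn_left
```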
partial
## Inspiration False news. False news. False news everywhere. Before reading your news article in depth, let us help you by giving you a brief overview of what you'll be ingesting. ## What it does Our Google Chrome extension will analyze the news article you're about to read and give you a heads-up on the article's sentiment (what emotion the article is trying to convey), the top three keywords in the article, and the categories the article's topic belongs to. Our extension also allows you to fact-check any statement by simply highlighting the statement, right-clicking, and selecting Fact check this with TruthBeTold. ## How we built it Our Chrome extension pulls the URL of the webpage you're browsing and sends it to our Google App Engine Python server hosted on Google Cloud Platform. Our server is then able to parse the content of the page and determine the content of the news article through processing with the Newspaper3k library. The scraped article is then sent to Google's Natural Language API client, which assesses the article for sentiment, categories, and keywords. This data is then returned to your extension and displayed in a friendly manner. Fact checking follows a similar path in that our extension sends the highlighted text to our server, which checks it against Google's Fact Check Explorer API. The consensus is then returned and shown as an alert. ## Challenges we ran into * Understanding how to interact with Google's APIs. * Working with Python Flask and creating new endpoints in Flask. * Understanding how Google Chrome extensions are built. ## Accomplishments that I'm proud of * It works!
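Condensed to its core, the analysis endpoint might look like this sketch combining newspaper3k and the Cloud Natural Language client (route name and returned fields are assumptions; keyword and category extraction are omitted for brevity).

```python
from flask import Flask, request, jsonify
from newspaper import Article
from google.cloud import language_v1

app = Flask(__name__)
nlp = language_v1.LanguageServiceClient()

@app.route("/analyze")
def analyze():
    article = Article(request.args["url"])
    article.download()
    article.parse()
    doc = language_v1.Document(content=article.text,
                               type_=language_v1.Document.Type.PLAIN_TEXT)
    sentiment = nlp.analyze_sentiment(request={"document": doc}).document_sentiment
    return jsonify({
        "title": article.title,
        "sentiment_score": sentiment.score,          # -1.0 (negative) .. 1.0 (positive)
        "sentiment_magnitude": sentiment.magnitude,  # overall emotional strength
    })
```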
## Inspiration Our mission is rooted in the **fight against fake news, misinformation, and disinformation,** which are increasingly pervasive threats in today's digital world. As the saying goes, "the pen is mightier than the sword," which underscores the power of words and information. We aim to ensure that no one falls victim to digital deception. While technology has contributed to the spread of misinformation, we believe it can also be a powerful ally in promoting the truth. By leveraging AI for good, we aim to combat falsehoods and uphold the integrity of information. *Fun fact: Moodeng is a pygmy hippopotamus born on July 10, 2024, living in Khao Kheow Open Zoo, Thailand. She became a viral internet sensation during a busy political season in the US. Amid the flood of true and half-true information, Moodeng, symbolizing purity and honesty, stood as a beacon of clarity. Like Moodeng, our tool is here to cut through the noise and keep things transparent. So, Vote for Moodeng!* ## What it does Social media platforms are now major sources of rapidly shared information. Our Chrome extension, MD FactFarm, simplifies fact-checking through AI-driven content analysis and verification. Initially focused on YouTube, our tool offers **real-time fact-checking** by scanning video content to **identify and flag misinformation** while providing reliable sources for users to verify accuracy. ## How we built it * At the core of our system is a Large Language Model (LLM) that we trained and optimized to accurately understand and interpret various forms of misinformation, powering our fact-checking capabilities. * We integrated an AI agent using Fetch.ai and built services and APIs to enable seamless communication with the agent. * Our front-end, built with HTML, CSS, and JavaScript, was designed and deployed as a Chrome extension. ## Challenges we ran into * One of the major challenges we encountered was ensuring that the AI could accurately differentiate between fact, opinion, and misleading content. Early on, the outputs were inconsistent, making it difficult to trust the results. To achieve consistency, we had to rethink our approach to prompt engineering. We provided the AI with more detailed context and built a structured framework to clearly separate different types of content. Additionally, we implemented a formula for the AI to use to determine a confidence score for each output. These changes helped us generate more consistent and reliable results, enabling the AI to better recognize the subtle distinctions between fact, opinion, and misleading content. * Another challenge was integrating multiple agent frameworks into a unified system that could operate seamlessly. Managing the intricacies of coordinating tasks and data flow between these diverse components contributed to a complex integration process. ## Accomplishments that we're proud of * We successfully developed a Chrome extension that provides real-time fact-checking for YouTube, empowering users to make informed decisions. * We crafted prompts that effectively leverage the LLM's ability to detect misinformation. * We successfully integrated Fetch.ai, utilizing agents that lay the foundation for scalability. ## What we learned We learned the importance of defining the problem clearly and deciding on a minimum viable product (MVP) within a limited timeframe. Additionally, we focused on framing our work to align with the AI agent framework, which has been crucial in improving our approach to misinformation detection.
## What's next for MD FactFarm Moving forward, we plan to expand our platform to include other social networks, such as Twitter and Facebook, where misinformation spreads rapidly. We aim to gather a wider range of information sources to ensure more comprehensive fact-checking and cover more diverse content. Moreover, we are working on enhancing our AI's fact-checking mechanics, utilizing more advanced techniques to improve accuracy.
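To make the structured-prompt idea concrete, a hedged sketch of what such a fact-checking call could look like is below; the prompt wording, model name, and output fields are illustrative assumptions rather than MD FactFarm's actual agent pipeline or confidence formula.

```python
import json
from openai import OpenAI

client = OpenAI()

PROMPT = """You are a fact-checking assistant. For the claim below, return JSON with:
- "label": one of "fact", "opinion", "misleading"
- "confidence": a number from 0 to 1
- "sources": up to 3 reputable sources a reader could check
Claim: {claim}"""

def check_claim(claim: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT.format(claim=claim)}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

print(check_claim("The moon landing was filmed in a studio."))
```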
Live Demo Link: <https://www.youtube.com/live/I5dP9mbnx4M?si=ESRjp7SjMIVj9ACF&t=5959> ## Inspiration We all fall victim to impulse buying and online shopping sprees... especially in the first few weeks of university. A simple budgeting tool or promising ourselves to spend less just doesn't work anymore. Sometimes we need someone, or someones, to physically stop us from clicking the BUY NOW button and talk us through our purchase based on our budget and previous spending. By drawing on the courtroom drama of legal battles, we infuse an element of fun and accountability into doing just this. ## What it does Dime Defender is a Chrome extension built to help you control your online spending according to your needs. Whenever the extension detects that you are on a Shopify or Amazon checkout page, it will lock the BUY NOW button and take you to court! You'll be interrupted by two lawyers: a defence attorney explaining why you should steer away from the purchase 😒 and a prosecutor explaining why there still are some benefits 😏. By giving you a detailed analysis of whether you should actually buy based on your budget and previous spending in the month, Dime Defender allows you to make informed decisions by making you consider both sides before a purchase. The lawyers are powered by VoiceFlow using its Dialog Manager API as well as ChatGPT. They have live information regarding the descriptions and prices of the items in your cart, as well as your monthly budget, which can be easily set in the extension. Instead of just saying no, we believe the detailed discussion will allow users to reflect and make genuine changes to their spending patterns while reducing impulse buys. ## How we built it We created the Dime Defender Chrome extension and frontend using Svelte, Plasma, and Node.js for an interactive and attractive user interface. The Chrome extension then makes calls through AWS API gateways, connecting the extension to AWS Lambda serverless functions that process queries, create outputs, and make secure, protected API calls to both VoiceFlow (to source the conversational data) and ElevenLabs (to get our custom text-to-speech voice recordings). By using a low-latency pipeline, with AWS RDS/EC2 for storage, all our data is quickly returned to our frontend and displayed to the user through a wonderful interface whenever they attempt to check out on any Shopify or Amazon page. ## Challenges we ran into Chrome extensions pose the challenge of making calls to serverless functions effectively and making secure API calls using secret API keys. We had to plan a system of Lambda functions, API gateways, and code built into VoiceFlow to create a smooth, low-latency system that allows the Chrome extension to make the correct API calls without compromising our API keys. Additionally, making our VoiceFlow AIs argue with each other with the proper tone was very difficult. Through extensive prompt engineering and thinking, we finally reached a point with an effective and enjoyable user experience. We also faced lots of issues with debugging animation sprites and text-to-speech voiceovers, with audio overlapping and high-latency API calls. However, we were able to fix all these problems and present a well-polished final product. ## Accomplishments that we're proud of Something that we are very proud of is our natural conversation flow within the extension, as well as the different lawyers having unique personalities which are quite evident after using our extension.
Having your cart cross-examined by 2 AI lawyers is something we believe to be truly unique, and we hope that users will appreciate it. ## What we learned We had to create an architecture for our distributed system and learned about connecting various technologies to reap the benefits of each one while using them to cover the weaknesses of the others. Also..... Don't eat the 6.8 million Scoville hot sauce if you want to code. ## What's next for Dime Defender The next thing we want to add to Dime Defender is the ability to work on even more e-commerce and retail sites and go beyond just Shopify and Amazon. We believe that Dime Defender can make a genuine impact helping people curb excessive online shopping tendencies and help people budget better overall.
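A simplified Lambda-handler sketch of the serverless hop described above is shown below, forwarding the cart summary to Voiceflow's Dialog Manager runtime endpoint; the environment variable name and payload fields are our assumptions, and the real functions also call ElevenLabs and handle key storage more carefully.

```python
import json
import os
import requests

# Voiceflow Dialog Manager runtime endpoint (per Voiceflow's public docs).
VOICEFLOW_URL = "https://general-runtime.voiceflow.com/state/user/{user_id}/interact"

def handler(event, context):
    body = json.loads(event["body"])
    resp = requests.post(
        VOICEFLOW_URL.format(user_id=body["userId"]),
        headers={"Authorization": os.environ["VOICEFLOW_API_KEY"]},
        json={"request": {"type": "text", "payload": body["cartSummary"]}},
        timeout=10,
    )
    resp.raise_for_status()
    # The returned traces carry the lawyers' lines; the extension voices them next.
    return {"statusCode": 200, "body": json.dumps(resp.json())}
```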
partial
## Inspiration My friend and I needed to find an apartment in New York City during the summer. We found it very difficult to look through multiple listing pages at once, so we thought a bot that suggests apartments would be helpful. However, we did not stop there. We realized that we could also use machine learning so the bot would learn what we like and suggest better apartments. That is why we decided to build RealtyAI. ## What it does It is a Facebook Messenger bot that allows people to search through Airbnb listings while learning what each user wants. By giving feedback to the bot, we learn your **general style** and thus we are able to recommend the apartments that you are going to like, under your budget, in any city of the world :) We can also book the apartment for you. ## How I built it Our app used a Flask app as a backend and Facebook Messenger to communicate with the user. The Facebook bot was powered by api.ai, and the ML was done on the backend with sklearn's Naive Bayes classifier. ## Challenges I ran into Our biggest challenge was using Python's SQL ORM to store our data. In general, integrating the many libraries we used was quite challenging. The next challenge we faced was time: our application was slow and timing out on multiple requests. So we implemented an in-memory cache of all the requests, but most importantly we modified the design of the code to make it multi-threaded. ## Accomplishments that I'm proud of Our workflow was very effective. Using Heroku, every commit to master immediately deployed to the server, saving us a lot of time. In addition, we all managed the repo well and had few merge conflicts. We all used a shared database on AWS RDS, which saved us a lot of database schema migration nightmares. ## What I learned We learned how to use Python in depth, integrating it with MySQL and sklearn. We also discovered how to spawn a database with AWS. We also learned how to save classifiers to the database and reload them. ## What's next for Virtual Real Estate Agent If we win, hopefully someone will invest! It can be used by companies to automatically arrange accommodations for people coming in for interviews, or simply by individuals who just want to find the best apartment for their own style!
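The "learn your general style" step could be prototyped roughly as below with scikit-learn's Multinomial Naive Bayes over listing descriptions (the bag-of-words features and sample data are our simplification, not the production feature set).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

liked = ["sunny loft exposed brick near park",
         "modern studio floor-to-ceiling windows"]
disliked = ["basement unit no natural light",
            "shared bathroom far from subway"]

# 1 = user liked the listing, 0 = user rejected it
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(liked + disliked, [1, 1, 0, 0])

new_listing = "bright one-bedroom with large windows near the park"
print(model.predict_proba([new_listing])[0][1])  # probability the user will like it
```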
## Inspiration University gets students really busy and really stressed, especially during midterms and exams. We would normally want to talk to someone about how we feel and how our mood is, but due to the pandemic, therapists' offices have often been closed or fully online. Since people will be seeking therapy online anyway, swapping a real therapist with a chatbot trained in giving advice and guidance isn't a very big leap for the person receiving therapy, and it could even save them money. Further, since all the conversations can be recorded if the user chooses, they can track their thoughts and goals, and have the bot respond to them. This is the idea that drove us to build Companion! ## What it does Companion is a full-stack web application that allows users to record their mood and describe their day and how they feel to promote mindfulness and track their goals, like a diary. There is also a companion, an open-ended chatbot, which the user can talk to about their feelings, problems, goals, etc. With real-time speech-to-text functionality, the user can speak out loud to the bot if they feel it is more natural to do so. If the user finds a companion conversation helpful, enlightening or otherwise valuable, they can choose to attach it to their last diary entry. ## How we built it We leveraged many technologies such as React.js, Python, Flask, Node.js, Express.js, MongoDB, OpenAI, and AssemblyAI. The chatbot was built using Python and Flask. The backend, which coordinates both the chatbot and a MongoDB database, was built using Node and Express. Speech-to-text functionality was added using the AssemblyAI live transcription API, and the chatbot's machine learning models and training data were built using OpenAI. ## Challenges we ran into Some of the challenges we ran into were connecting the front end, back end, and database. We would accidentally mix up what data we were sending or supposed to send in each HTTP call, resulting in a few invalid database queries and confusing errors. Developing the backend API was a bit of a challenge, as we didn't have a lot of experience with user authentication. Developing the API while working on the frontend also slowed things down, as the frontend person would have to wait for the endpoints to be devised. Also, since some APIs were relatively new, working with incomplete docs was sometimes difficult, but fortunately there was assistance on Discord if we needed it. ## Accomplishments that we're proud of We're proud of the ideas we've brought to the table, as well as the features we managed to add to our prototype. The chatbot AI, able to help people reflect mindfully, is really the novel idea of our app. ## What we learned We learned how to work with different APIs and create various API endpoints. We also learned how to work and communicate as a team. Another thing we learned is how important the planning stage is, as it can really speed up our coding time when everything is set up nicely and everyone understands the plan. ## What's next for Companion The next steps for Companion are: * Ability to book appointments with a live therapist if the user needs it. Perhaps the chatbot can be swapped out for a real therapist for an upfront or pay-as-you-go fee. * A machine learning model that adapts to what the user has written in their diary that day, that works better to give people sound advice, and that is trained on individual users rather than on one dataset for all users.
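A minimal sketch of the companion's reply generation is below; the project used an earlier version of OpenAI's API and its own prompt design, so the system message, model name, and current-SDK call here are assumptions for illustration only.

```python
from openai import OpenAI

client = OpenAI()

def companion_reply(history: list[dict], user_message: str) -> str:
    """Generate the companion's next reply from the running conversation."""
    messages = [{"role": "system",
                 "content": "You are a gentle, supportive companion. Encourage "
                            "mindful reflection; never give medical diagnoses."}]
    messages += history
    messages.append({"role": "user", "content": user_message})
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

print(companion_reply([], "Midterms are stressing me out and I can't sleep."))
```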
## Sample account If you can't register your own account for some reason, here is a sample one to log into: Email: [demo@example.com](mailto:demo@example.com) Password: password
## What inspired us: The pandemic has changed the university norm to primarily online courses, increasing our usage of and dependency on textbooks and course notes. Since we are all computer science students, we have many math courses with several definitions and theorems to memorize. When listening to a professor's lecture, we often forget certain theorems that are being referred to. With discussAI, we are easily able to query the PostgreSQL database with a command and receive an image from the textbook explaining what the definition/theorem is. Thus, we decided to use our knowledge of machine learning libraries to extract these pieces of information. We believe that our program's concept can be applied to other fields outside of education. For instance, business meetings or training sessions can utilize these tools to effectively summarize long manuals and to search for keywords. ## What we learned: We had a lot of fun building this application since we were new to Microsoft Azure services. We learned how to integrate machine learning tools such as OCR and sklearn for processing our information, and we deepened our knowledge of frontend (Angular.js) and backend (Django and Postgres) development. ## How we built it: We built our web application's frontend using Angular.js for our components and Agora.io to allow video conferencing. On our backend, we used Django and PostgreSQL for handling API requests from our frontend. We also used several Python libraries to convert the PDF file to PNG images, utilize Azure OCR to analyze these text images, apply the sklearn library to analyze the individual text, and finally crop the images to return specific snippets of definitions/theorems. ## Challenges we faced: The most challenging part was deciding on the ML algorithm to derive specific image snippets from lengthy textbooks. Other challenges we faced ranged from importing images from Azure Storage to positioning CSS components. Nevertheless, the learning experience was amazing with the help of mentors, and we hope to participate again in the future!
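The final cropping step could be sketched roughly as below, assuming the OCR results have already been reduced to (text, bounding box) pairs per line, which is a simplification of Azure's actual response format.

```python
import re
from PIL import Image

KEYWORD = re.compile(r"\b(definition|theorem|lemma|corollary)\b", re.IGNORECASE)

# Assumed shape: one entry per OCR line -> (text, (left, top, right, bottom))
ocr_lines = [
    ("Theorem 3.2 (Bolzano-Weierstrass)", (40, 410, 560, 440)),
    ("Every bounded sequence has a convergent subsequence.", (40, 445, 580, 475)),
]

def crop_snippets(page_png: str, lines, pad: int = 10):
    """Save a cropped image for every OCR line that starts a definition/theorem."""
    page = Image.open(page_png)
    for text, (left, top, right, bottom) in lines:
        if KEYWORD.search(text):
            snippet = page.crop((left - pad, top - pad, right + pad, bottom + pad + 40))
            snippet.save(f"snippet_{top}.png")  # later matched to lecture keywords

crop_snippets("textbook_page_087.png", ocr_lines)
```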
partial
## Problem In these times of isolation, many of us developers are stuck inside, which makes it hard for us to work with our peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult. ## Solution To solve this issue we have created an easy-to-connect, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm together. ## About Our platform provides a simple yet efficient user experience with a straightforward and easy-to-use one-page interface. We made it one page to have access to all the tools on one screen and make transitions between them easier. We identify this page as a study room where users can collaborate and join with a simple URL. Everything is synced between users in real time. ## Features Our platform allows multiple users to enter one room and access tools to watch YouTube tutorials, brainstorm on a drawable whiteboard, and code in our built-in browser IDE, all in real time. This platform makes collaboration between users seamless and also pushes them to become better developers. ## Technologies you used for both the front and back end We use Node.js and Express on the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes to automatically scale and balance loads. ## Challenges we ran into A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under a time constraint. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions. ## What's next for Study Buddy While we were working on this project, we came across several ideas that this could be a part of. Our next step is to have each page categorized as an individual room that users can visit. Adding more relevant tools and widgets, and expanding to other fields of work to increase our user demographic. Including interface customization options to allow users to personalize their rooms. Try it live here: <http://35.203.169.42/> Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down> Thanks for checking us out!
## Inspiration Our inspiration for this project was our experience as students. We believe students need a more digestible feed when it comes to due dates. Having to manually plan for homework, projects, and exams can be annoying and time-consuming. StudyHedge is here to lift the scheduling burden off your shoulders! ## What it does StudyHedge uses your Canvas API token to compile a list of upcoming assignments and exams. You can configure a profile detailing personal events, preferred study hours, number of assignments to complete in a day, and more. StudyHedge combines this information to create a manageable study schedule for you. ## How we built it We built the project using React (Front-End), Flask (Back-End), Firebase (Database), and Google Cloud Run. ## Challenges we ran into Our biggest challenge resulted from difficulty connecting Firebase and FullCalendar.io. Due to inexperience, we were unable to resolve this issue in the given time. We also struggled with using the Eisenhower Matrix to come up with the right formula for weighting assignments. We discovered that there are many ways to do this. After exploring various branches of mathematics, we settled on a simple formula (Rank = weight / time^2). ## Accomplishments that we're proud of We are incredibly proud that we have a functional back end and that our UI is visually similar to our wireframes. We are also excited that we performed so well together as a newly formed group. ## What we learned Keith used React for the first time. He learned a lot about responsive front-end development and managed to create a remarkable website despite encountering some issues with third-party software along the way. Gabriella designed the UI and helped code the front end. She learned about input validation and designing features to meet functionality requirements. Eli coded the back end using Flask and Python. He struggled with using Docker to deploy his script but managed to conquer the steep learning curve. He also learned how to use the Twilio API. ## What's next for StudyHedge We are extremely excited to continue developing StudyHedge. As college students, we hope this idea can be as useful to others as it is to us. We want to scale this project and eventually expand its reach to other universities. We'd also like to add more personal customization and calendar integration features. We are also considering implementing AI suggestions.
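The weighting formula is simple enough to show directly; a small sketch of the ranking step is below (field names and sample dates are illustrative, not StudyHedge's actual schema).

```python
from datetime import date

assignments = [
    {"name": "CS assignment 3", "weight": 10, "due": date(2021, 11, 20)},
    {"name": "Stats midterm",   "weight": 30, "due": date(2021, 11, 25)},
    {"name": "Essay draft",     "weight": 5,  "due": date(2021, 11, 18)},
]

def rank(item, today=date(2021, 11, 15)):
    days_left = max((item["due"] - today).days, 1)  # avoid division by zero
    return item["weight"] / days_left ** 2           # Rank = weight / time^2

for a in sorted(assignments, key=rank, reverse=True):
    print(f'{a["name"]}: rank={rank(a):.2f}')
```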
## Inspiration Positive cases are spreading really quickly on campus, and many students decide against going to dormitories because of ineffective mask enforcement. We hope to make these places safer so college students can come back to university sooner and enjoy their precious student-life experience more! ## What it does Cozy Koalas allows for the identification of people as well as whether or not they're currently wearing a mask, using YOLOv5. It allows dormitories to monitor statistics such as the number of people with/without masks throughout the weeks. Another feature of our application is our infrared sensor, which detects the temperature of an individual (<https://ieeexplore.ieee.org/document/9530864>). If that person's temperature is abnormally high and they may have a fever, a notification is sent using Twilio so that they are aware of this and will take action to reduce risk. ## How we built it *Machine Learning*: We use images from a camera feed to detect a) whenever a person comes into the frame, and b) whether that person is: 1. Not wearing a mask 2. Wearing a mask incorrectly 3. Wearing a mask correctly We use the YOLOv5 (You Only Look Once) model, a real-time object detection model based on convolutional neural networks (<https://arxiv.org/pdf/2102.05402.pdf>), and incorporated a Python script to help label our data. We ran multiple iterations through YOLO to improve our model and labelling. Initially, we only had 2 labels: mask or no\_mask. However, this was ultimately problematic as our model was unable to detect when someone wore their mask incorrectly (e.g. it doesn't cover the nose). That's why, after multiple iterations, we added another label using a Python script. While that was our main change, our multiple iterations helped us balance our data and tune our hyperparameters, leading to greater accuracy. This backend is connected to a server and a database via Google Cloud's Firebase for the moment. *Front End:* The front end fetches its information from Google Cloud's Firebase and displays it in an interactive dashboard. The dashboard and its following pages were done using Material UI, a front-end library in React. Furthermore, a number of other libraries or tools were used to help sort the data and beautify the application, such as lodash, iconify, ant-design, faker, etc. ## Challenges we ran into We struggled to really understand the YOLO model and convolutional networks before implementing them. We initially tried to implement parts of it without understanding this or torch, as this was our first time working with it. However, to actually improve our model, we really needed to understand the parts to change. We also struggled immensely with connecting the two parts of our project together. In the front end, we ran into multiple TypeErrors that had to do with state and the usage of props. ## Accomplishments that we're proud of We're proud of how we worked as a team, leveraged our different specialties, and managed to create a working product once we stitched the different parts of our project together. The computer vision and machine learning modules and libraries are state-of-the-art and very much used in current technologies. Our model is even able to differentiate very corner-case situations, such as when someone covers their face with their hand instead of a real mask or when the mask is worn incorrectly. The dashboard also turned out to be simple, clean, and elegant, and reflected what we initially went for.
## What we learned ML models are easy to start using yet hard to get right. Documentation and APIs are useful, but a large part of the work is understanding your data and what changes need to be made to improve your model. Data visualisation and metrics were very helpful for this part! Another great thing was the potential and opportunity that comes from pre-existing, labeled datasets. In our project, Roboflow and Kaggle proved to be hugely useful and saved us a lot of time. ## What's next for Cozy Koalas A mask recognition system can be used in multiple other fields. For instance, airports, hospitals, quarantine centers, malls, schools, and offices could reinforce their mask mandates without having to buy extra hardware. Software that can be used in conjunction with existing camera feeds would simplify this task, and the analytics provided could also help those organizations better plan their resources. On the technical side, a clear next step would be to incorporate face recognition into our model using the FaceNet library (<https://ieeexplore.ieee.org/document/9451684>). This would essentially map the face images it gets from the cameras' feed to identified individuals inside our deep convolutional network. Whether or not this feature will be used is up to the user's discretion; however, it would be necessary for our system to automatically send a text message to the right person. Other next steps include increasing accuracy through a more balanced dataset (SMOTE balancing can only do so much...), adding distance measurements, and providing more analytics in the dashboard.
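For reference, running a fine-tuned YOLOv5 checkpoint looks roughly like the sketch below via torch.hub; the weights path, class names, and confidence threshold are assumptions, and the real deployment pushes the counts to Firebase for the dashboard.

```python
import torch

# Load the fine-tuned checkpoint through YOLOv5's torch.hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
model.conf = 0.4  # confidence threshold (assumed value)

results = model("dorm_entrance_frame.jpg")
detections = results.pandas().xyxy[0]  # one row per detected face/person

counts = detections["name"].value_counts().to_dict()
print(counts)  # e.g. {'mask': 3, 'incorrect_mask': 1} -> pushed to Firebase
```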
winning
## Inspiration One of our team members was in the evacuation warning zone for the raging California fires in the Bay Area just a few weeks ago. Part of their family's preparation for this disaster included the tiresome, tedious, time-sensitive process of listing every item in their house for insurance claims in the event that their house burned down. This process took upwards of 15 hours between 3 people working on it, and even then many items were missed and unaccounted for. Claim Cart is here to help! ## What it does Problems Solved (1) Families often have many belongings they don't account for. It's time-intensive and inconvenient to coordinate, maintain, and update extensive lists of household items. Listing mundane, forgotten items can potentially add thousands of dollars to their insurance claim. (2) Insurance companies have private master lists of the most commonly used items and what the cheapest viable replacements are. Families are losing out on thousands of dollars because their claims don't state the actual brand or price of their items. For example, if a family listed "toaster", they would get $5 (the cheapest alternative), but if they listed "stainless steel - high end toaster: $35" they might get $30 instead. Claim Cart has two main value propositions: time and money. It is significantly faster to take a picture of your items than to enter every object manually. It's also more efficient for members to collaborate on making a family master list. ## Challenges I ran into Our team was split between 3 different time zones, so communication and coordination were a challenge! ## Accomplishments that I'm proud of For three of our members, PennApps was their first hackathon. It was a great experience building our first hack! ## What's next for Claim Cart In the future, we will make Claim Cart available to people on all platforms.
## Inspiration It's pretty common that you will come back from a grocery trip, put away all the food you bought in your fridge and pantry, and forget about it. Even if you read the expiration date while buying a carton of milk, chances are that a decent portion of your food will expire. After that you'll throw away food that used to be perfectly good. But that's only how much food you and I are wasting. What about everything that Walmart or Costco trashes on a day to day basis? Each year, 119 billion pounds of food is wasted in the United States alone. That equates to 130 billion meals and more than $408 billion in food thrown away each year. About 30 percent of food in American grocery stores is thrown away. US retail stores generate about 16 billion pounds of food waste every year. But if there were a solution that could ensure that no food would be needlessly wasted, that would change the world. ## What it does PantryPuzzle will scan in images of food items as well as extract their expiration dates, and add them to an inventory of items that users can manage. When food nears expiration, it will notify users to incentivize action to be taken. The app will suggest actions to take with any particular food item, like recipes that use the items in a user's pantry according to their preference. Additionally, users can choose to donate food items, after which they can share their location with food pantries and delivery drivers. ## How we built it We built it with a React frontend and a Python Flask backend. We stored food entries in a database using Firebase. For the food image recognition and expiration date extraction, we used a tuned version of Google Vision API's object detection and optical character recognition (OCR) respectively. For the recipe recommendation feature, we used OpenAI's GPT-3 DaVinci large language model. For tracking user location for the donation feature, we used the Nominatim open street map. ## Challenges we ran into * Getting React to properly display our data * Storing multiple values in the database at once (food item, expiration date) * Displaying all Firebase elements (doing a proof of concept with console.log) * Donated food being displayed before even clicking the button (fixed by using a function for onClick) * Getting the user's location to be accessed and stored, not just longitude/latitude * Needing to log the day a food item was obtained * Deleting an item when it expires * Syncing my stash with donations (we don't want an item listed if the user no longer wants to donate it) * Deleting food from Firebase (tricky because of the document IDs) * Predicting when non-labeled foods expire (using OpenAI) ## Accomplishments that we're proud of * We were able to build a good computer vision pipeline that can detect the type of food and a very accurate expiry date. * Integrating the API that helps us figure out our location from latitude and longitude. * Used a scalable database like Firebase, and completed all features that we originally wanted to achieve regarding generative AI, computer vision and efficient CRUD operations. ## What we learned We learned how big a problem food waste disposal is, and were surprised that so much food is being thrown away. ## What's next for PantryPuzzle We want to add user authentication, so every user in every home and grocery has access to their personal pantry, and also maintains their access to the global donations list to search for food items others don't want.
We also want to integrate this app with the Internet of Things (IoT) so refrigerators can come with this product built in to detect food and its expiry date. We also want to add a feature where, if the expiry date is not visible, the app can predict the likely expiration date using computer vision (texture and color of the food) and generative AI.
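As a rough illustration of the expiration-date extraction described in the build section, the sketch below uses Google Cloud Vision OCR plus a loose regex; the date patterns and helper name are assumptions, not the exact tuning used in the app.

```python
# Sketch: OCR a food label and pull out a likely expiration date.
import re
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def extract_expiration_date(image_bytes):
    """Return the first date-like string found on the label, or None."""
    response = client.text_detection(image=vision.Image(content=image_bytes))
    annotations = response.text_annotations
    if not annotations:
        return None
    full_text = annotations[0].description  # first annotation is the whole text block
    # Very loose pattern: matches e.g. 2024-05-31, 05/31/2024, 31/05/24
    match = re.search(r"\b\d{1,4}[-/]\d{1,2}[-/]\d{1,4}\b", full_text)
    return match.group(0) if match else None
```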
## Inspiration One of our original teammates was unable to attend PennApps due to evacuations in Florida. This got us down the track of building something related to the resources we could give others in emergency situations. ## What it does It provides real-time data relating to natural disaster zones, along with our services. Our services include: 1. Locating emergency essential supplies 2. Locating local shelters 3. Helping you find transportation out of an affected zone. ## How we built it We built this complex design primarily in React JS, and used MongoDB and Google Cloud for storage. We also integrated Mapbox and several databases for information (where the real-time data comes from). The domain is from domain.com. ## Challenges we ran into Everything was a challenge; we used tutorials and mentors to overcome our challenges. ## Accomplishments that we're proud of We built what we built in only about 30 hours. ## What we learned Coding on a professional level is challenging. ## What's next for Emsource Production launch?
winning
## Inspiration Feeling major self-doubt when you first start hitting the gym, or accidentally injuring yourself while working out, are not uncommon experiences for most people. This inspired us to create Core, a platform to empower our users to take control of their well-being by removing the financial barriers around fitness. ## What it does Core analyses the movements performed by the user and provides live auditory feedback on their form, allowing them to stay fully present and engaged during their workout. Our users can also take advantage of the visual indications on the screen, where they can view a graph of the keypoints, which can be used to reduce the risk of potential injury. ## How we built it Prior to development, a prototype was created in Figma, which was used as a reference point when the app was developed in ReactJS. In order to recognize the joints of the user and perform analysis, TensorFlow's MoveNet model was integrated into Core. ## Challenges we ran into Initially, it was planned that Core would serve as a mobile application built using React Native, but as we developed a better understanding of the structure, we saw more potential in a cross-platform website. Our team was relatively inexperienced with the technologies that were used, which meant learning had to be done in parallel with the development. ## Accomplishments that we're proud of This hackathon allowed us to develop code in ReactJS, and we hope that our learnings can be applied to our future endeavours. Most of us were also new to hackathons, and it was really rewarding to see how much we accomplished throughout the weekend. ## What we learned We gained a better understanding of the technologies used and learned how to develop for the fast-paced nature of hackathons. ## What's next for Core Currently, Core uses TensorFlow to track several keypoints and analyzes the information with mathematical models to determine the statistical probability of the correctness of the user's form. However, there's scope for improvement by implementing a machine learning model that is trained on Big Data to yield higher performance and accuracy. We'd also love to expand our collection of exercises to include a wider variety of possible workouts.
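For a sense of the kind of mathematical check described above, here is a hedged sketch (not the exact model Core uses) of how a joint angle could be computed from three MoveNet keypoints and compared against a target range:

```python
# Sketch: score one joint's form from pose keypoints (e.g. hip-knee-ankle for a squat).
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by keypoints a-b-c, each an (x, y) pair."""
    a, b, c = np.asarray(a), np.asarray(b), np.asarray(c)
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def form_ok(hip, knee, ankle, target=(70.0, 100.0)):
    """True if the knee angle at the bottom of a squat falls inside an acceptable range."""
    lo, hi = target  # assumed acceptable range, not a clinically validated one
    return lo <= joint_angle(hip, knee, ankle) <= hi
```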
## Inspiration Kevin, one of our team members, is an enthusiastic basketball player, and frequently went to physiotherapy for a knee injury. He realized that a large part of the physiotherapy was actually away from the doctors' office: he needed to complete certain exercises with perfect form at home, in order to consistently improve his strength and balance. Through his story, we realized that so many people across North America require physiotherapy for far more severe conditions, be it from sports injuries, spinal cord injuries, or recovery from surgeries. Likewise, they will need to do at-home exercises individually, without supervision. For the patients, any repeated error can actually cause a deterioration in health. Therefore, we decided to leverage computer vision technology to provide real-time feedback to patients to help them improve their rehab exercise form. At the same time, reports will be generated for the doctors, so that they may monitor the progress of patients and prioritize their urgency accordingly. We hope that phys.io will strengthen the feedback loop between patient and doctor, and accelerate the physical rehabilitation process for many North Americans. ## What it does Through a mobile app, the patients will be able to film and upload a video of themselves completing a certain rehab exercise. The video then gets analyzed using a machine vision neural network, such that the movements of each body segment are measured. This raw data is then further processed to yield measurements and benchmarks for the relative success of the movement. In the app, the patients will receive a general score for their physical health as measured against their individual milestones, tips to improve the form, and a timeline of progress over the past weeks. At the same time, the same video analysis will be sent to the corresponding doctor's dashboard, where the doctor will receive a more thorough medical analysis of how the patient's body is working together and a timeline of progress. The algorithm will also provide suggestions for the doctors' treatment of the patient, such as prioritizing the next appointment or increasing the difficulty of the exercise. ## How we built it At the heart of the application is a Google Cloud Compute instance running together with a blobstore instance. The cloud compute cluster ingests raw video posted to blobstore, and performs the machine vision analysis to yield the timescale body data. We used Google App Engine and Firebase to create the rest of the web application and APIs for the 2 types of clients we support: an iOS app, and a doctor's dashboard site. This manages day-to-day operations such as data lookup and account management, but also provides the interface for the mobile application to send video data to the compute cluster. Furthermore, the App Engine syncs processed results and feedback from blobstore and populates them into Firebase, which is used as the database and data-sync layer. Finally, in order to generate reports for the doctors on the platform, we used stdlib's tasks and scalable one-off functions to process results from Firebase over time and aggregate the data into complete chunks, which are then posted back into Firebase. ## Challenges we ran into One of the major challenges we ran into was interfacing each technology with the others. Overall, the data pipeline involves many steps that, while each in itself is critical, also involve too many diverse platforms and technologies for the time we had to build it.
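As a rough sketch of the ingestion step described in the build section (the bucket name, database URL, and helper layout are assumptions, not the production pipeline), uploading a patient's exercise video and recording it for the compute cluster to pick up might look like this:

```python
# Sketch: push a raw exercise video to Cloud Storage and note it in Firebase.
import firebase_admin
from firebase_admin import credentials, db
from google.cloud import storage

# Placeholder service account, database URL, and bucket name.
firebase_admin.initialize_app(
    credentials.Certificate("service-account.json"),
    {"databaseURL": "https://physio-demo.firebaseio.com"},
)
bucket = storage.Client().bucket("physio-raw-videos")

def upload_exercise_video(patient_id, local_path):
    """Store the video blob and record its path for later analysis."""
    blob_name = f"{patient_id}/{local_path.rsplit('/', 1)[-1]}"
    bucket.blob(blob_name).upload_from_filename(local_path)
    db.reference(f"pending_videos/{patient_id}").push({"blob": blob_name})
```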
## What's next for phys.io <https://docs.google.com/presentation/d/1Aq5esOgTQTXBWUPiorwaZxqXRFCekPFsfWqFSQvO3_c/edit?fbclid=IwAR0vqVDMYcX-e0-2MhiFKF400YdL8yelyKrLznvsMJVq_8HoEgjc-ePy8Hs#slide=id.g4838b09a0c_0_0>
# Inspiration 🌟 **What is the problem?** Physical activity early on can drastically increase longevity and productivity in later stages of life. Without finding a dependable routine during your younger years, you may experience physical impairment in the future: 50% of the functional decline that occurs in those 30 to 70 years old is due to lack of exercise. During the peak of the COVID-19 pandemic in Canada, nationwide isolation brought everyone indoors. There were still a vast number of people who managed to work out in their homes, which motivated us to create an application that further encourages engaging in fitness, using their own devices, from the convenience of home. # Webapp Summary 📜 Inspired, our team decided to tackle this idea by creating a web app that helps its users maintain a consistent and disciplined routine. # What does it do? 💻 *my trAIner* aims to aid you on your journey to healthy fitness by displaying the number of calories you have burned while also counting your reps. It additionally helps to motivate you through words of encouragement. For example, whenever you near a rep goal, *my trAIner* will use phrases like "almost there!" or "keep going!" to push you to the last rep. Once you complete your set goal, *my trAIner* will congratulate you. We hope that people may utilize this to make the best of their workouts. We believe that utilizing AI technology to help people reach their rep goals and track calories could help students and adults both now and in the future. # How we built it:🛠 To build this application, we used **JavaScript, CSS,** and **HTML.** To make the body mapping technology, we used a **TensorFlow** library. We mapped out different joints on the body and compared them as they moved, in order to determine when an exercise was completed. We also included features like parallax scrolling and sound effects from DeltaHacks staff. # Challenges that we ran into 🚫 Learning how to use **TensorFlow**'s pose detection proved to be a challenge, as did integrating our own artwork into the parallax scrolling. We also had to refine our backend as the library's detection was shaky at times. Additional challenges included cleanly linking **HTML, JS, and CSS** as well as managing the short amount of time we were given. # Accomplishments that we’re proud of 🎊 We are proud that we put out a product with great visual aesthetics as well as a refined detection method. We're also proud that we were able to take a difficult idea and prove to ourselves that we were capable of creating this project in a short amount of time. More than that though, we are most proud that we could make a web app that could help people trying to be healthier. # What we learned 🍎 Not only did we develop our technical skills like web development and AI, but we also learned crucial things about planning, dividing work, and time management. We learned the importance of keeping organized with things like to-do lists and constantly communicating to see what each other's limitations and abilities were. When challenges arose, we weren't afraid to delve into unknown territories. # Future plans 📅 Due to time constraints, we were not able to completely actualize our ideas; however, we will continue growing and raising efficiency by giving ourselves more time to work on *my trAIner*. Potential future ideas to incorporate include constructive form correction, a calorie intake calculator, meal prep, goal setting, recommended workouts based on BMI, and much more.
We hope to keep on learning and applying newly obtained concepts to *my trAIner*.
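As a hedged sketch of the rep-counting idea described in the build section (the actual detection runs on TensorFlow keypoints in JavaScript; this shows only the counting logic, with assumed angle thresholds):

```python
# Sketch: count reps by watching a joint angle cross "down" and "up" thresholds.
class RepCounter:
    def __init__(self, down_angle=90.0, up_angle=160.0):
        self.down_angle = down_angle  # assumed threshold for the bottom of a rep
        self.up_angle = up_angle      # assumed threshold for full extension
        self.reps = 0
        self.at_bottom = False

    def update(self, elbow_angle):
        """Feed one angle per frame; returns the running rep count."""
        if elbow_angle <= self.down_angle:
            self.at_bottom = True
        elif elbow_angle >= self.up_angle and self.at_bottom:
            self.reps += 1          # completed a full down-then-up cycle
            self.at_bottom = False
        return self.reps
```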
winning
## Introducing Nuisance ### Inspiration When prompted with the concept of **Useless Inventions**, and after a slight delay from procrastinating the brainstorming process, we suddenly felt very motivated to make a little friend to help us. Introducing **Nuisance**: a (not so friendly) bot that senses when you have given him your phone, promptly runs away, and screams if you get too close. An interesting take on the game of manhunt. ### What it does **Nuisance** detects when a *phone* is placed in its possession. It then embarks on a random journey, in an effort to play everyone's favourite game, keep away. If a daring human approaches before *Nuisance* is ready to end the game, he screams and runs away; only a genuine scream of horror stands a chance of reclaiming the device, adding a perfect touch of embarrassment and a loss of dignity. ### How we built it 1. Arduino Due 2. 2 wheels 3. Caster wheel 4. H-bridge / motor driver 5. Motors 6. 2 ultrasonic sensors 7. Noise/sound audio sensor 8. PIR motion sensor 9. 1 Grove Buzzer v1.2 10. Large breadboard 11. 2 small breadboards 12. OLED display 13. Force sensor 14. 9 V battery 15. 3 × 1.5 V = 4.5 V battery pack 16. A bunch of wires and a **lot** of cardboard, *and some software* ### Challenges we ran into * Different motor powers / motors that stopped working. We had an issue during the debugging phase of our code regarding the *ultrasonic sensors*: no matter what we did, they just seemed to constantly be timing out. After looking extensively into the issue, we figured out that it was neither our wiring nor our code: the breadboard had sporadic faulty pins that we had to work around, which led us to test the rest of the breadboard for integrity. Furthermore, we had a lot of coding issues regarding the swap between our Arduino Uno and Due. The Arduino Due did not support the same built-in libraries, such as `tone()` (for the buzzer). We also had issues with the collision detection algorithm at first. However, with a lil tenacity, *and the power of friendship*, you too can solve this problem. We originally had the wrong values being processed, causing our algorithm to disregard the numbers we required to gauge distance accurately. ### Accomplishments that we're proud of * Completed project..? ### What we learned * Yell at a nuisance if you want your stuff back? * Never doubt the power of friendship. ## What's next for Nuisance Probably more crying
## Inspiration The cute factor of dogs and cats, and a desire to improve the health of many pets, such as larger dogs that can easily become overweight. ## What it does Reads accelerometer data from the collar and converts it into steps. ## How I built it * Arduino Nano * ADXL345 module * SPP-C Bluetooth module * Android Studio for the app ## Challenges I ran into Android Studio uses a large amount of RAM. Interfacing with the accelerometer was challenging, particularly finding an algorithm with the least delay and lag. ## Accomplishments that I'm proud of As a prototype, it is a great first development. ## What I learned Some Android Studio and Java shortcuts and basics. ## What's next for DoogyWalky Data analysis to convert steps into calories, and a second UI for graphing data weekly and hourly with an SQLite database.
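The step counting itself runs on the Arduino; as a rough, hedged sketch of the peak-detection idea (thresholds here are assumptions, shown in Python purely for readability):

```python
# Sketch: count steps as threshold-crossing peaks in accelerometer magnitude.
import math

def count_steps(samples, threshold=1.2, min_gap=8):
    """samples: list of (ax, ay, az) readings in g.
    Counts peaks above the threshold, ignoring peaks closer than
    min_gap samples apart (a simple debounce against lag and jitter)."""
    steps, last_peak = 0, -min_gap
    for i, (ax, ay, az) in enumerate(samples):
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > threshold and i - last_peak >= min_gap:
            steps += 1
            last_peak = i
    return steps
```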
## Inspiration We wanted to make something that linked the virtual and real worlds, but in a quirky way. On our team we had people who wanted to build robots and people who wanted to make games, so we decided to combine the two. ## What it does Our game portrays a robot (Todd) finding its way through obstacles that are only visible in one dimension of the game. It is a multiplayer endeavor where the first player is given the task to guide Todd remotely to his target. However, only the second player is aware of the various dangerous lava pits and moving hindrances that block Todd's path to his goal. ## How we built it Todd was built with style, grace, but most of all an Arduino on top of a breadboard. On the underside of the breadboard, two continuous rotation servo motors and a USB battery allow Todd to travel in all directions. Todd receives communications from a custom-built Todd-Controller^TM that provides 4-way directional control via a pair of Bluetooth HC-05 modules. Our Todd-Controller^TM (built with another Arduino, four pull-down buttons, and a Bluetooth module) then interfaces with Unity3D to move the virtual Todd around the game world. ## Challenges we ran into The first challenge of the many that we ran into on this "arduinous" journey was having two Arduinos send messages to each other over the Bluetooth wireless network. We had to manually configure the settings of the HC-05 modules by putting each into AT mode, setting one as the master and one as the slave, making sure the passwords and the default baud rate were the same, and then syncing the two with different code to echo messages back and forth. The second challenge was to build Todd, the clean wiring of which proved to be rather difficult when trying to prevent the loose wires from hindering Todd's motion. The third challenge was building the Unity app itself. Collision detection was an issue at times because, if movements were imprecise or we collided at a weird corner, our object would fly up in the air and cause very weird behavior. So, we resorted to restraining the movement of the player to certain axes. Additionally, we had to make sure the scene looked nice by having good lighting and a pleasant camera view. We had to try out many different combinations until we decided that a top-down view of the scene was the optimal choice. Because of the limited time, and because we wanted the game to look good, we used free assets (models and textures only) to our advantage. The fourth challenge was establishing clear communication between Unity and the Arduino. We resorted to an interface that used the serial port of the computer to connect the controller Arduino with the Unity engine. The challenge was the fact that Unity and the controller had to communicate strings by putting them through the same serial port. It was as if two people were using the same phone line for different calls. We had to make sure that when one was talking, the other one was listening, and vice versa. ## Accomplishments that we're proud of The biggest accomplishment of this project, in our eyes, was the fact that, when virtual Todd encounters an object (such as a wall) in the virtual game world, real Todd stops. Additionally, the fact that the margin of error between the real and virtual Todd's movements was lower than 3% significantly surpassed our original expectations of this project's accuracy, and goes to show that our vision of a real game with virtual obstacles is achievable.
## What we learned We learned how complex integration is. It's easy to build self-sufficient parts, but their interactions introduce exponentially more problems. Communicating via Bluetooth between Arduinos and having Unity talk to a microcontroller via serial was a very educational experience. ## What's next for Todd: The Inter-dimensional Bot When Todd escapes from this limiting world, he will enter a hackathon and program his own Unity/Arduino-based masterpiece.
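As a hedged illustration of the shared serial link described above (the game side is Unity/C#; this Python sketch only shows the take-turns write-then-listen pattern, with an assumed device path and baud rate):

```python
# Sketch: one serial port shared for both directions: write a command, then listen.
import serial

port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # assumed device path / baud rate

def send_direction(direction):
    """Send a one-character direction (U/D/L/R) and wait for the controller's echo."""
    port.write(direction.encode())           # "talk": push the command out
    echo = port.readline().decode().strip()  # "listen": wait for the reply before talking again
    return echo == direction
```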
partial
# Echo Chef ### TreeHacks 2016 @ Stanford #### <http://echochef.net/> Ever wanted a hands-free way to follow recipes while you cook? Echo Chef can guide you through recipes interactively: think of it as your personal assistant in the kitchen. Just add your favorite recipes to our web interface and they'll be available on your Amazon Echo! Ask for step-by-step instructions, preheat temperatures, and more! In addition to Echo Chef's use in the kitchen, we track your data and deliver it to you in an easily digestible way, from your completion time for each recipe to your most often used ingredients. #### Features: * Data analytics and visualization * Amazon Alexa Skills Kit using the Amazon Echo * AWS and DynamoDB * Qualtrics API * Responsive site #### Team * Brandon Cen * Cherrie Wang * Elizabeth Chu * Izzy Benavente
## Inspiration Travelling can be expensive, but data plans abroad can really push the expenses to a whole new level. With NavText, we were inspired to create an app fuelled by SMS messaging to provide all the same services that might be useful while travelling. With this app, travelling can be made easy without the stress of finding WiFi or a data plan. ## What it does NavText has multiple functionalities and is built as an all-around app useful for travelling locally or abroad. NavText will guide you in navigating with multiple modes of transportation, including requesting a nearby Uber driver, taking local transit, driving, or walking. Information such as the estimated travel time and step-by-step directions will be texted to your phone after making a request. You can also explore local attractions and food places, which will text you the address, directions, opening hours, and price level. ## How we built it * Swift * Uber API * Google Maps API * MessageBird ## Challenges we ran into Integrating MessageBird with the Google API. Working around iOS SMS limitations, such as reading and composing text messages. ## Accomplishments that we're proud of A polished iOS app which makes the formatting of the text messages easy to use. ## What we learned ## What's next for NavText
## Inspiration As university students, we usually do not have a full fridge of groceries to cook with. Therefore we must scavenge and try to make something out of what we have. We created a cool Alexa skill that helps with exactly that. ## What it does It prompts the user for what they have in the kitchen and provides them with a delicious and easy food to make. ## How we built it We used Node.js, Lambda and AWS to create the Amazon Echo skill necessary for this function. ## Challenges we ran into None of us had ever used the Amazon Echo or AWS before, so it was quite the challenge configuring it and making it work. ## What's next for Recipe Builder We would love to add more recipes based on the available ingredients, and allow the user to store food into the skill so the Echo can remember what food is available.
winning
## Inspiration Often times we've seen people post live streams on social media, use video calling for a more personal touch to a conversation, and engage in activities which require extensive expression of emotion through language. However, with over 430 million people around the world having hearing disabilities and less than a fourth of people fluent in sign language, our team sought to bridge this significant, yet ironically unattended, problem. ## What it does ConnexionAI.tech is a web application that converts video input of sign language to three popular languages: English, Spanish, and French. The application also has built-in capability to convert speech to text and to translate various languages into English text. In the future, ConnexionAI can be deployed in video chats and live streams to caption sign language for people with speaking disabilities, and it could be used as a live ASL-to-text converter. ## How I built it The machine learning model was trained on Google Cloud's AutoML platform, the web framework we're using is Flask, the website was built in HTML5, CSS, and JS, and the project integration was in Python. ## Challenges I ran into Our biggest challenge was finding a reliable dataset and deciding the hyperparameters for training the model under such a time constraint. The other challenge we ran into was integrating the video format in the web browser, taking inputs to the backend, and pushing outputs back to the browser. ## Accomplishments that I'm proud of We are proud of having the opportunity to solve a problem that affects the lives of millions of people in a fun learning environment! ## What I learned All of us worked with something that we had never worked with before. Using Google Cloud for the first time was fascinating, and the idea of connecting millions of dots really inspired us to pull through a working model. ## What's next for Connexion Future possibilities include implementing Connexion in livestreams on Facebook and videos on YouTube to caption sign language for all audiences.
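As a rough sketch of the browser-to-backend hop described in the challenges above, a Flask endpoint that accepts one video frame and returns a prediction might look like the following; the `classify_sign` helper standing in for the trained model call is hypothetical.

```python
# Sketch: Flask endpoint that accepts one video frame and returns the predicted sign.
from flask import Flask, request, jsonify

app = Flask(__name__)

def classify_sign(image_bytes):
    """Hypothetical wrapper around the trained sign-language model; returns a label string."""
    raise NotImplementedError

@app.route("/predict", methods=["POST"])
def predict():
    frame = request.files["frame"].read()  # frame posted from the browser
    label = classify_sign(frame)
    return jsonify({"sign": label})
```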
**check out the project demo during the closing ceremony!** <https://youtu.be/TnKxk-GelXg> ## Inspiration On average, half of patients with chronic illnesses like heart disease or asthma don't take their medication. Reports estimate that poor medication adherence could be costing the country $300 billion in increased medical costs. So why is taking medication so tough? People get confused and people forget. When the pharmacy hands over your medication, it usually comes with a stack of papers and stickers on the pill bottles, and then in addition the pharmacist tells you a bunch of mumbo jumbo that you won't remember. <http://www.nbcnews.com/id/20039597/ns/health-health_care/t/millions-skip-meds-dont-take-pills-correctly/#.XE3r2M9KjOQ> ## What it does The solution: how are we going to solve this? With a small scrap of paper. NekoTap helps patients access important drug instructions quickly and when they need them. On the pharmacist's end, he only needs to go through 4 simple steps to relay the most important information to the patients. 1. Scan the product label to get the drug information. 2. Tap the cap to register the NFC tag. Now the product and pill bottle are connected. 3. Speak into the app to make an audio recording of the important dosage and usage instructions, as well as any other important notes. 4. Set a refill reminder for the patients. This will automatically alert the patient once they need refills, a service that most pharmacies don't currently provide as it's usually the patient's responsibility. On the patient's end, after they open the app, they will come across 3 simple screens. 1. First, they can listen to the audio recording containing important information from the pharmacist. 2. If they swipe, they can see a copy of the text transcription. Notice how there are easy-to-access zoom buttons to enlarge the text size. 3. Next, there's a YouTube instructional video on how to use the drug in case the patient needs visuals. Lastly, the menu options here allow the patient to call the pharmacy if they have any questions, and also set a reminder for themselves to take medication. ## How I built it * Android * Microsoft Azure mobile services * Lottie ## Challenges I ran into * Getting the backend to communicate with the clinician and the patient mobile apps. ## Accomplishments that I'm proud of Translations to make it accessible for everyone! Developing a great UI/UX. ## What I learned * UI/UX design * Android development
## Inspiration Currently, many hospitals still burn CDs to give patients their medical images and records, including MRIs and CT scans. A recent study conducted by Life Image showed that nearly 40% of patients are still required to physically travel to pick up CDs if they want access to their medical records, including MRIs and CT scans. According to the survey, 66% of respondents have access to at least one portal connected to their provider's Electronic Health Records, yet only 18% of respondents have ever been able to receive records digitally, which shows that, while patients have access to portals, records and information are still not being effectively shared. This system of sharing medical data is very fragile because CDs aren't secure and don't allow for immediate access. Since CD burner manufacturers are all different, and equipment at the two facilities may not be the same, getting the images is sometimes not possible due to compatibility issues. Although this system seems incredibly archaic given the technology we have now, this is the reality. Our team's family members have personal experience with the broken medical record storage and retrieval system. ## What is it? **MedBlock is a blockchain IPFS-based medical image portability system that's fast, secure, and permanent.** MedBlock allows providers to upload and fulfill medical record requests, and patients to request, upload, and retrieve records. It also uses machine learning to make all MedBlock records verifiable, so receivers can make sure all images are authentic. ## What we learned We learned a lot about how medical data is stored and sent to patients! ## What's next for MedBlock In the future, we will launch MedBlock by partnering with local medical providers.
winning
# BlockOJ > > Boundless creativity. > > > ## What is BlockOJ? BlockOJ is an online judge built around Google's Blockly library that teaches children how to code. The library allows us to implement a code editor which lets the user program with various blocks (function blocks, variable blocks, etc.). ![Figure 1. Image of BlockOJ Editor](https://i.imgur.com/UOmBhL4.png) On BlockOJ, users can sign up and use our LEGO-like code editor to solve instructive programming challenges! Solutions can be verified by pitting them against numerous test cases hidden in our servers :) -- simply click the "submit" button and we'll take care of the rest. Our lightning fast judge, painstakingly written in C, will provide instantaneous feedback on the correctness of your solution (i.e. how many of the test cases did your program evaluate correctly?). ![Figure 2. Image of entire judge submission page](https://i.imgur.com/N898UAw.jpg) ## Inspiration and Design Motivation Back in late June, our team came across the article announcing the "[new Ontario elementary math curriculum to include coding starting in Grade 1](https://www.thestar.com/politics/provincial/2020/06/23/new-ontario-elementary-math-curriculum-to-include-coding-starting-in-grade-1.html)." During Hack The 6ix, we wanted to build a practical application that can aid our hard-working elementary school teachers in delivering the coding aspect of this new curriculum. We wanted a tool that was 1. Intuitive to use, 2. Instructive, and most important of all 3. Engaging. Using the Blockly library, we were able to build a code editor which resembles building with LEGO: the block-by-block assembly process is **procedural**, and children can easily see the **big picture** of programming by looking at how the blocks interlock with each other. Our programming challenges aim to gamify learning, making it less intimidating and more appealing to younger audiences. Not only will children using BlockOJ **learn by doing**, but they will also slowly accumulate basic programming know-how through our carefully designed sequence of problems. Finally, not all our problems are easy. Some are hard (in fact, the problem in our demo is extremely difficult for elementary students). In our opinion, it is beneficial to mix in one or two difficult challenges in problem sets, for they give children the opportunity to gain valuable problem-solving experience. Difficult problems also pave room for students to engage with teachers. Solutions are saved so children can easily come back to a difficult problem after they gain more experience. ## How we built it Here's the tl;dr version. * AWS EC2 * PostgreSQL * NodeJS * Express * C * Pug * SASS * JavaScript *We used a link shortener for our "Try it out" link because DevPost doesn't like URLs with ports.*
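The judge itself is written in C; as a hedged, much-simplified Python analogue of what it does (run a submission on each hidden input and diff the output; sandboxing and memory limits are omitted):

```python
# Sketch: grade a submission against hidden test cases (simplified; no sandboxing shown).
import subprocess

def judge(executable, test_cases, time_limit=2.0):
    """test_cases: list of (input_text, expected_output). Returns the number passed."""
    passed = 0
    for input_text, expected in test_cases:
        try:
            result = subprocess.run(
                [executable], input=input_text, capture_output=True,
                text=True, timeout=time_limit,
            )
        except subprocess.TimeoutExpired:
            continue  # Time Limit Exceeded
        if result.stdout.strip() == expected.strip():
            passed += 1
    return passed
```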
## Inspiration "My Little Web Dev" was inspired by the simplicity and beginner-friendly nature of Scratch. We wanted to make a product that had the same approachability, but tackled a more complex topic like web development. Furthermore, we wanted to build a project that allows children in marginalized groups to shrink the gap in early coding confidence so that all children can learn to code on a level playing field. ## What it does This educational tool allows users to drag and drop code blocks into a canvas, and link them together to create a web application. Furthermore, the app also allows users to save their work into a file and load it into the canvas when they decide to continue working on their project. ## How we built it We created the "My Little Web Dev" webpage using Next.js and React. We then used a library called Blockly to build the code blocks and parsed the blocks into HTML using JavaScript. ## Challenges we ran into We ran into some issues with building our own HTML-specific code blocks and parsing them properly into HTML. ## Accomplishments that we're proud of We're very proud of how we integrated Blockly with Next.js, and how we parsed the block code into HTML. ## What we learned We learned how to a lot about building websites with Next.js, rendering HTML based on block code, and working with Blockly. ## What's next for My Little Web Dev Now that we've created a project encapsulates frontend development into block code, we hope to extend the project to also cover backend development, evolving the app into "My Little Fullstack Dev".
## Inspiration The whiteboard or chalkboard is an essential tool in instructional settings. To learn better, students need a way to directly transport code from a non-text medium to a more workable environment. ## What it does Enables someone to take a picture of handwritten or printed text and convert it directly to code or text in your favorite text editor on your computer. ## How we built it On the front end, we built an app using Ionic/Cordova so the user could take a picture of their code. Behind the scenes, using JavaScript, our software harnesses the power of the Google Cloud Vision API to perform intelligent character recognition (ICR) of handwritten words. Following that, we applied our own formatting algorithms to prettify the code. Finally, our server sends the formatted code to the desired computer, which opens it with the appropriate file extension in your favorite IDE. In addition, the client handles all scripting of minimization and fileOS. ## Challenges we ran into The Vision API is trained on text with correct grammar and punctuation. This makes recognition of code quite difficult, especially indentation and camel case. We were able to overcome this issue with some clever algorithms. Also, despite a general lack of JavaScript knowledge, we were able to make good use of documentation to solve our issues. ## Accomplishments that we're proud of A beautiful spacing algorithm that recursively categorizes lines into indentation levels. Getting the app to talk to the main server to talk to the target computer. Scripting the client to display the final result in a matter of seconds. ## What we learned How to integrate and use the Google Cloud Vision API. How to build and communicate across servers in JavaScript. How to interact with native functions of a phone. ## What's next for Codify It's feasible to increase accuracy by using the Levenshtein distance between words. In addition, we can improve our algorithms to work better with code. Finally, we can add image preprocessing (heighten image contrast, rotate accordingly) to make input more readable to the Vision API.
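As a hedged, simplified sketch of the spacing idea mentioned above (grouping lines into indentation levels from the left x-coordinate of each OCR'd line; the tolerance value is an assumption, and the real algorithm works recursively):

```python
# Sketch: assign indentation levels from the left x-coordinate of each OCR'd line.
def indentation_levels(lines, tolerance=15):
    """lines: list of (left_x, text) in top-to-bottom order.
    Returns (level, text) pairs, where level 0 is the leftmost column."""
    # Cluster the distinct left edges into columns.
    columns = []
    for left_x, _ in sorted(lines):
        if not columns or left_x - columns[-1] > tolerance:
            columns.append(left_x)
    # Map each line to the nearest column's index.
    levels = []
    for left_x, text in lines:
        level = min(range(len(columns)), key=lambda i: abs(columns[i] - left_x))
        levels.append((level, text))
    return levels
```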
partial
### Inspiration Our inspiration for "I Wear It Better" stems from the addictive algorithms employed by popular platforms like Tinder and TikTok. Additionally, we were motivated by the increasing appeal of fast fashion trends, while also recognizing the need to reduce clothing waste. Our aim is to adapt these concepts to engage a broader audience by infusing elements of excitement and mental stimulation into the process. ### What it does "I Wear It Better" gamifies the experience of searching for fashion items, fostering interactions and trades among users that might not occur otherwise. By incentivizing users to explore new clothing options rather than discarding old ones, the platform promotes sustainable fashion practices while also providing entertainment value. ### How we built it Utilizing React Native and employing a component-based approach, we developed "I Wear It Better" despite being newcomers to the framework. Despite facing challenges with emulation and environment setup due to our limited experience in app development, we successfully created a functional app. ### Challenges we ran into Our main challenge revolved around emulation and environment setup, as our team had minimal prior knowledge of app development. However, through perseverance and problem-solving, we overcame these obstacles to deliver a working solution. ### Accomplishments that we're proud of We take pride in achieving our goal of creating a fully functional app and successfully emulating it. As first-time React Native users, this accomplishment marks a significant milestone in our journey as developers. ### What we learned Throughout the development process, we gained a basic understanding of React Native and honed our skills in state management, component creation, and overall project structure. These learnings have equipped us with valuable knowledge for future projects in app development. ### What's next for I Wear It Better Looking ahead, we plan to enhance the platform by implementing stronger algorithms such as the stable matching algorithm to improve the success rate of matches between users. Additionally, we aim to integrate AI models that analyze user preferences and interactions to provide personalized clothing recommendations, further enriching the user experience.
## IoSECURITY ## What it does IoSECURITY provides a complete network setup for home users. An IoSECURITY server is connected to the home router, allowing increased security, privacy, management and customization. The IoSECURITY server features are accessed through a webapp. The IoSECURITY server allows home administrators to: 1. Accept or deny guest requests to access WiFi 2. Set time limits for guest accounts 3. Block users 4. Limit visibility of guest users to home IoT devices ## How we built it Node.js to set up the server. ## Challenges we ran into Initially users were to be allowed into the network using FreeBSD's Packet Filter firewall on a Raspberry Pi. Due to complications in running Node.js and our database on the Pi itself (we want a one-for-all solution!), an Odroid C2 arm64 board running Arch Linux was used. The rules were written using iptables, and Node.js ran fine afterwards! ## Accomplishments that we're proud of ## What we learned Setting up a server using Node.js, and MongoDB for the database. ## What's next for IoSecurity Bugs would need to be fixed for a smooth application. In addition, features might be edited to optimize the web app after receiving feedback from users.
## Inspiration As developers, we were on a mission to create something truly extraordinary. Something that would change the way people approach to fashion and make getting dressed in the morning an easier and more enjoyable experience. Introducing Rate My Fit, our revolutionary AI software program that rates people's outfits based on color coordination, mood/aesthetic, appropriateness for the current weather, and the combination of complementary textures. We wanted to create a tool that not only enhances people's fashion sense but also helps them make the best outfit choices for any occasion and weather. ## What it does We used cutting-edge image recognition technology and machine learning algorithms to train our program to understand the nuances of fashion and personal style. It can analyze an individual's outfit and give instant feedback on how to make it even better. We are passionate about our technology and the impact it has on people's lives. We believe that our AI outfit rating program will empower individuals to make confident and stylish fashion choices, regardless of their body type, skin tone, or personal style. ## How we built it Building our AI outfit rating software was a challenging and exciting journey. Our goal was to create a program that was not only accurate and efficient but also user-friendly and visually appealing. We began by selecting the appropriate technology stack for our project. We chose to use Python and Flask for the back-end, JavaScript, CSS, and HTML for the front-end, and a state-of-the-art computer vision architecture in Python for the image recognition component. To train our computer vision model, we collected a dataset of over 200,000 images of various outfits. We carefully curated the dataset to ensure a diverse representation of styles, body types, and occasions. Using this dataset, we were able to train our model to accurately recognize and analyze different aspects of an outfit such as color coordination, mood/aesthetic, appropriateness for the current weather, and the combination of complementary textures. Once the model was trained, we integrated it into our web application using Flask. The front-end team used JavaScript, CSS, and HTML to create a visually appealing and user-friendly interface. We also added a weather API to the software to provide real-time information on the current weather and make the rating even more accurate. The final product is a powerful yet easy-to-use software that can analyze an individual's outfit and provide instant feedback on how to make it even better. We are proud of the technology we used and the impact it has on people's lives. ## Challenges we ran into * **Cleaning and organizing the dataset**: With over 200,000 images to sift through, it was a daunting task to ensure that the images were high quality, diverse and appropriately labeled. It took a lot of time and effort to make sure the dataset was ready for training. * **Building the complex JavaScript UI**: We wanted to create a visually appealing and user-friendly interface that would make it easy for users to interact with the software. However, this required a lot of attention to detail and testing to ensure that everything worked smoothly and looked good on various devices. * **Creating the back-end processing for the analytics**: We needed to create an efficient pipeline to process the outfit ratings in real-time and provide instant feedback to the users. 
This required a lot of experimentation and testing to get the right balance between speed and accuracy. * **Training the model** in PyTorch on a GPU: We had to optimize the training process and make sure the model was ready in time for the project submission. It was a race against time to get everything done before the deadline, but with a lot of hard work, we were able to meet it. ## Accomplishments that we're proud of We're most proud of being able to train and deploy our own custom computer vision model. This is something we've all had the ambition to take on for quite a while but were always intimidated by the daunting task that training a neural network entails. Additionally, we're proud of building a full-stack web app which is compatible with both mobile and desktop use. Overall, building this software was a challenging but rewarding experience. We learned a lot and pushed ourselves to new limits in order to deliver a product that we are truly proud of. ## What we learned **Using CUDA to train PyTorch models can be very frustrating!** (Documentation is lacking). Also, building tests for the back-end to validate the quality of the ratings and the overall user experience was fun but more intensive than we envisioned. ## What's next for Rate My Fit * Adding a live fit detection feature. * Configuring a database to allow users to save their outfits for future reference. * Adding more analytical functionality. * Recognizing a wider range of clothing styles and garments.
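As a hedged sketch of the kind of GPU fine-tuning loop described above (a generic torchvision backbone with a replaced head; the dataset path, class layout, and hyperparameters are placeholders, not the exact configuration used for Rate My Fit):

```python
# Sketch: fine-tune a pretrained backbone to classify outfit attributes.
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Placeholder dataset layout: outfits/train/<class_name>/<image>.jpg
train_set = datasets.ImageFolder("outfits/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # replace the head
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):  # placeholder epoch count
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```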
partial
## Inspiration 🍪 We’re fed up with our roommates stealing food from our designated kitchen cupboards. Few things are as soul-crushing as coming home after a long day and finding that someone has eaten the last Oreo cookie you had been saving. Suffice it to say, the university student population is in desperate need of an inexpensive, lightweight security solution to keep intruders out of our snacks... Introducing **Craven**, an innovative end-to-end pipeline to put your roommates in check and keep your snacks in stock. ## What it does 📸 Craven is centered around a small Nest security camera placed at the back of your snack cupboard. Whenever the cupboard is opened by someone, the camera snaps a photo of them and sends it to our server, where a facial recognition algorithm determines if the cupboard has been opened by its rightful owner or by an intruder. In the latter case, the owner will instantly receive an SMS informing them of the situation, and then our 'security guard' LLM will decide on the appropriate punishment for the perpetrator, based on their snack-theft history. First-time burglars may receive a simple SMS warning, but repeat offenders will have a photo of their heist, embellished with an AI-generated caption, posted on [our X account](https://x.com/craven_htn) for all to see. ## How we built it 🛠️ * **Backend:** Node.js * **Facial Recognition:** OpenCV, TensorFlow, DLib * **Pipeline:** Twilio, X, Cohere ## Challenges we ran into 🚩 In order to have unfettered access to the Nest camera's feed, we had to find a way to bypass Google's security protocol. We achieved this by running an HTTP proxy to imitate the credentials of an iOS device, allowing us to fetch snapshots from the camera at any time. Fine-tuning our facial recognition model also turned out to be a bit of a challenge. In order to ensure accuracy, it was important that we had a comprehensive set of training images for each roommate, and that the model was tested thoroughly. After many iterations, we settled on a K-nearest neighbours algorithm for classifying faces, which performed well both during the day and with night vision. Additionally, integrating the X API to automate the public shaming process required specific prompt engineering to create captions that were both humorous and effective in discouraging repeat offenders. ## Accomplishments that we're proud of 💪 * Successfully bypassing Nest’s security measures to access the camera feed. * Achieving high accuracy in facial recognition using a well-tuned K-nearest neighbours algorithm. * Fine-tuning Cohere to generate funny and engaging social media captions. * Creating a seamless, rapid security pipeline that requires no legwork from the cupboard owner. ## What we learned 🧠 Over the course of this hackathon, we gained valuable insights into how to circumvent API protocols to access hardware data streams (for a good cause, of course). We also deepened our understanding of facial recognition technology and learned how to tune computer vision models for improved accuracy. For our X integration, we learned how to engineer prompts for Cohere's API to ensure that the AI-generated captions were both humorous and contextual. Finally, we gained experience integrating multiple APIs (Nest, Twilio, X) into a cohesive, real-time application. ## What's next for Craven 🔮 * **Multi-owner support:** Extend Craven to work with multiple cupboards or fridges in shared spaces, creating a mutual accountability structure between roommates. 
* **Machine learning improvement:** Experiment with more advanced facial recognition models like deep learning for even better accuracy. * **Social features:** Create an online leaderboard for the most frequent offenders, and allow users to vote on the best captions generated for snack thieves. * **Voice activation:** Add voice commands to interact with Craven, allowing roommates to issue verbal warnings when the cupboard is opened.
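As a hedged sketch of the K-nearest-neighbours step described above, using the common `face_recognition` wrapper around dlib encodings; the file layout, k value, and distance cutoff are assumptions rather than the tuned values.

```python
# Sketch: classify a cupboard snapshot as a known roommate or an intruder.
import face_recognition
from sklearn.neighbors import KNeighborsClassifier

def train_classifier(labeled_images):
    """labeled_images: list of (image_path, roommate_name) training pairs."""
    encodings, names = [], []
    for path, name in labeled_images:
        image = face_recognition.load_image_file(path)
        for enc in face_recognition.face_encodings(image):
            encodings.append(enc)
            names.append(name)
    knn = KNeighborsClassifier(n_neighbors=3)  # assumed k
    knn.fit(encodings, names)
    return knn

def identify(knn, snapshot_path, max_distance=0.5):
    """Return the predicted name, or 'intruder' if the nearest match is too far away."""
    image = face_recognition.load_image_file(snapshot_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return "no face detected"
    distances, _ = knn.kneighbors([encodings[0]], n_neighbors=1)
    name = knn.predict([encodings[0]])[0]
    return name if distances[0][0] <= max_distance else "intruder"
```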
## Inspiration We were heavily focused on the machine learning aspect and realized that we lacked any datasets which could be used to train a model. So we tried to figure out what kind of activity might impact insurance rates and could also be captured with the equipment we had on hand. ## What it does Insurity takes a video feed from a person driving and evaluates it for risky behavior. ## How we built it We used Node.js, Express, and Amazon's Rekognition API to evaluate facial expressions and personal behaviors. ## Challenges we ran into This was our third idea. We had to abandon two other major ideas because the data did not seem to exist for machine learning purposes.
## Inspiration Car thefts have been at an all-time high in Canada and the US, with over 50% more carjackings in Ontario alone since 2023: almost 10,000 cars stolen in the year. This problem also hit close to home, as two of our members dealt with carjackings. We understand firsthand the distress and inconvenience caused by such incidents, which propelled us to take action. We envisioned a cutting-edge security system that could provide a formidable defense against car thefts. ## What it does Intruder Alert is a two-factor authentication mobile application that uses AI-based facial recognition to verify the owner or the driver of the vehicle. This allows the system to give power from the battery to start up the car. ## How we built it Intruder Alert uses React Native in the front end to interact with a Node.js backend API server to make update/read/write/delete calls. The calls interact with our AWS cloud database, where we store raw data inside S3 and use DynamoDB as a relational database. The data is then used in our facial recognition ML model, Amazon Rekognition. That will send a binary signal to our Arduino board (proof of concept) to either allow power from the battery to the light, or to shut off the circuit. ## Challenges we ran into Our team ran into many problems in our journey of creating this application. We first ran into problems when connecting the front-end React Native to the back-end Node.js API servers. The problem was that the server was hosted on localhost rather than on the local network, which meant that no devices other than the one hosting the server could access it. Our second problem came from the Arduino board, where we did not have a Wi-Fi receiver to toggle the light on and off. Our team also ran into problems with loading data into AWS, because the file that we wanted to upload initially could not be put into AWS. To solve the problem we had to pivot to learning more about data storage and data warehousing, and find better tools for the jobs we wanted to do. ## Accomplishments that we're proud of For many of us this was the first time that we tackled the different technologies needed for this project, and we were able to connect them all in a working application. We are extremely proud of each of our members for taking on the challenge of learning new frameworks, languages, and programming concepts to complete this important goal. ## What we learned The team learned a lot about cloud computing and its different uses, such as data storage, data warehousing, data pipelining and data processing for working with our machine learning model. The team learned to use Node.js to set up API servers so that the front end and back end could communicate effectively with each other. We also learned the ins and outs of embedded programming for the Arduino board. ## What's next for IntruderAlert Even though we only had 36 hours to complete a project, we were able to complete all features we wanted for the initial MVP. It is able to recognize a user's face and store it for two-factor authentication, the user is able to select different vehicles they own, etc. As for the future of Intruder Alert, the immediate next step is to connect our application to an actual car rather than a proof-of-concept Arduino board. Furthermore, we want to add more than one user per car for two-factor authentication, and track the car with our hardware (GPS).
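As a rough sketch of the face-verification call made through Amazon Rekognition (the similarity threshold here is a placeholder, not the tuned value):

```python
# Sketch: compare the driver's snapshot against the registered owner's photo.
import boto3

rekognition = boto3.client("rekognition")

def is_owner(owner_photo_bytes, driver_photo_bytes, threshold=90):
    """Return True if Rekognition finds a face match above the similarity threshold."""
    response = rekognition.compare_faces(
        SourceImage={"Bytes": owner_photo_bytes},
        TargetImage={"Bytes": driver_photo_bytes},
        SimilarityThreshold=threshold,
    )
    return len(response["FaceMatches"]) > 0
```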
winning
## About the Project NazAR is an educational tool that automatically creates interactive visualizations of math word problems in AR, requiring nothing more than an iPhone. ## Behind the Name *Nazar* means “vision” in Arabic, which symbolizes the driving goal behind our app – not only do we visualize math problems for students, but we also strive to represent a vision for a more inclusive, accessible and tech-friendly future for education. And, it ends with AR, hence *NazAR* :) ## Inspiration The inspiration for this project came from each of our own unique experiences with interactive learning. As an example, we want to showcase two of the team members’ experiences, Mohamed and Rayan’s. Mohamed Musa moved to the US when he was 12, coming from a village in Sudan where he grew up and received his primary education. He did not speak English and struggled until he had an experience with a teacher that transformed his entire learning experience through experiential and interactive learning. From then on, applying those principles, Mohamed was able to pick up English fluently within a few months and reached the top of his class in both science and mathematics. Rayan Ansari had worked with many Syrian refugee students on a catch-up curriculum. One of his students, a 15 year-old named Jamal, had not received schooling since Kindergarten and did not understand arithmetic and the abstractions used to represent it. Intuitively, the only means Rayan felt he could effectively teach Jamal and bridge the connection would be through physical examples that Jamal could envision or interact with. From the diverse experiences of the team members, it was glaringly clear that creating an accessible and flexible interactive learning software would be invaluable in bringing this sort of transformative experience to any student’s work. We were determined to develop a platform that could achieve this goal without having its questions pre-curated or requiring the aid of a teacher, tutor, or parent to help provide this sort of time-intensive education experience to them. ## What it does Upon opening the app, the student is presented with a camera view, and can press the snapshot button on the screen to scan a homework problem. Our computer vision model then uses neural network-based text detection to process the scanned question, and passes the extracted text to our NLP model. Our NLP text processing model runs fully integrated into Swift as a Python script, and extracts from the question a set of characters to create in AR, along with objects and their quantities, that represent the initial problem setup. For example, for the question “Sally has twelve apples and John has three. If Sally gives five of her apples to John, how many apples does John have now?”, our model identifies that two characters should be drawn: Sally and John, and the setup should show them with twelve and three apples, respectively. The app then draws this setup using the Apple RealityKit development space, with the characters and objects described in the problem overlayed. The setup is interactive, and the user is able to move the objects around the screen, reassigning them between characters. When the position of the environment reflects the correct answer, the app verifies it, congratulates the student, and moves onto the next question. Additionally, the characters are dynamic and expressive, displaying idle movement and reactions rather than appearing frozen in the AR environment. 
## How we built it Our app relies on three main components, each of which we built from the ground up to best tackle the task at hand: a computer vision (CV) component that processes the camera feed into text; an NLP model that extracts and organizes information about the initial problem setup; and an augmented-reality (AR) component that creates an interactive, immersive environment for the student to solve the problem. We implemented the computer vision component to perform image-to-text conversion using Apple’s Vision framework model, trained on a convolutional neural network with hundreds of thousands of data points. We customize the user experience with a snapshot button that allows the student to position their device in front of a question and press it to capture an image, which is then converted to a string and passed off to the NLP model. Our NLP model, which we developed completely from scratch for this app, runs as a Python script, and is integrated into Swift using a version of PythonKit we custom-modified to configure for iOS. It works by first tokenizing and lemmatizing the text using spaCy, and then using numeric terms as pivot points for a prioritized search relying on English grammatical rules to match each numeric term to a character, an object and a verb (action). The model is able to successfully match objects to characters even when they aren’t explicitly specified (e.g. for Sally in “Ralph has four melons, and Sally has six”) and, by using the proximate preceding verb of each numeric term as the basis for an inclusion-exclusion criterion, is also able to successfully account for extraneous information such as statements about characters receiving or giving objects, which shouldn’t be included in the initial setup. Our model also accounts for characters that do not possess any objects to begin with, but who should be drawn in the display environment as they may receive objects as part of the solution to the question. It directly returns filenames that should be executed by the AR code. Our AR model functions from the moment a homework problem is read. Using Apple’s RealityKit environment, the software determines the plane of the paper on which we will anchor our interactive learning space. The NLP model passes objects of interest which correspond to particular USDZ assets in our library, as well as a vibrant background terrain. In our testing, we used multiple models for hand tracking and gesture classification, including a CoreML model, a custom SDK for gesture classification, a Tensorflow model, and our own gesture processing class paired with Apple’s hand pose detection library. For the purposes of TreeHacks, we figured it would be most reasonable to stick with touchscreen manipulation, especially for our demo, which uses the iPhone itself without a separate wearable accessory. We found this to also provide better ease of use when interacting with the environment and to be most accessible, given hardware constraints (we did not have a HoloKit Apple accessory nor the upcoming Apple AR glasses).
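The following is a heavily simplified sketch of the numeric-pivot idea described above, using spaCy. It only handles the very simplest case (nearest preceding person, nearest following noun) and the names and output are illustrative, not the team's actual model.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def initial_setup(question: str) -> dict:
    """Map each numeric token to the nearest preceding PERSON entity
    and the nearest following noun (a simplified version of the pivot search)."""
    doc = nlp(question)
    setup = {}
    for i, tok in enumerate(doc):
        if tok.like_num:
            character = next((t.text for t in reversed(doc[:i]) if t.ent_type_ == "PERSON"), None)
            obj = next((t.lemma_ for t in doc[i + 1:] if t.pos_ == "NOUN"), None)
            if character and obj:
                setup[character] = {"object": obj, "count": tok.text}
    return setup

print(initial_setup("Sally has twelve apples and John has three apples."))
# e.g. {'Sally': {'object': 'apple', 'count': 'twelve'}, 'John': {...}} if NER tags both names as PERSON
```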
Based on this, as well as a desire to make our app as accessible and scalable as possible without requiring users to purchase expensive equipment, we decided to build the experience around the iPhone alone so that we could reach as many people who need it as possible. Another issue we ran into was with hand gesture classification. Very little work has been done on this in Swift environments, and there was little to no documentation on hand tracking available to us. As a result, we wrote and experimented with several different models, including training our own deep learning model that can identify gestures, but it took a toll on our laptop’s resources. In the end we got it working, but are not using it for our demo as it currently experiences some lag. In the future, we aim to run our own gesture tracking model on the cloud, which we will train on over 24,000 images, in order to provide lag-free hand tracking. The final major issue we encountered was the lack of interoperability between Apple’s iOS development environment and other systems, for example with running our NLP code, which requires input from the computer vision model, and has to pass the extracted data on to the AR algorithm. We have been continually working to overcome this challenge, including by modifying the PythonKit package to bundle a Python interpreter alongside the other application assets, so that Python scripts can be successfully run on the end machine. We also used input and output to text files to allow our Python NLP script to more easily interact with the Swift code. ## Accomplishments we're proud of We built our computer vision and NLP models completely from the ground up during the hackathon, and also developed multiple hand-tracking models on our own, overcoming the lack of documentation for hand detection in Swift. Additionally, we’re proud of the novelty of our design. Existing models that provide interactive problem visualization all rely on custom QR codes embedded with the questions that load pre-written environments, or rely on a set of pre-curated models; and Photomath, the only major app that takes a real-time image-to-text approach, lacks support for word problems. In contrast, our app integrates directly with existing math problems, and doesn’t require any additional work on the part of students, teachers or textbook writers in order to function. Additionally, by relying only on an iPhone and an optional HoloKit accessory for hand-tracking which is not vital to the application (and which, at a retail price of $129, is far more scalable than VR sets that typically cost thousands of dollars), we maximize accessibility to our platform not only in the US, but around the world, where it has the potential to complement instructional efforts in developing countries whose educational systems lack sufficient resources to provide enough one-on-one support to students. We’re eager to have NazAR make a global impact on improving students’ comfort and experience with math in coming years. ## What we learned * We learned a lot from building the tracking models, which haven’t really been done for iOS and for which there is practically no Swift documentation available. * We are truly operating on a new frontier, as there is little to no work done in the field we are looking at * We will have to manually build a lot of different architectures, as a lot of technologies related to our project are not open source yet. We’ve already been making progress on this front, and plan to do far more in the coming weeks as we work towards a stable release of our app.
## What's next for NazAR * Having the app animate the correct answer (e.g. Bob handing apples one at a time to Sally) * Animating algorithmic approaches and code solutions for data structures and algorithms classes * Being able to automatically produce additional practice problems similar to those provided by the user * Using cosine similarity to automatically make terrains mirror the problem description (e.g. show an orchard if the question is about apple picking, or a savannah if giraffes are involved) * And more!
## Inspiration We wanted to take advantage of AR and object detection technologies to help people gain a safer walking experience and to communicate distance information that helps people with vision loss navigate. ## What it does It augments the world with beeping sounds that change depending on your proximity to obstacles, and it identifies surrounding objects and converts the results to speech to alert the user. ## How we built it ARKit and RealityKit, using the LiDAR sensor to detect distance; AVFoundation for text-to-speech; Core ML with a YOLOv3 real-time object detection machine learning model; SwiftUI. ## Challenges we ran into Computational efficiency. Going through all pixels from the LiDAR sensor in real time wasn’t feasible, so we had to optimize by cropping the sensor data to the center of the screen. ## Accomplishments that we're proud of It works as intended. ## What we learned We learned how to combine AR, AI, LiDAR, ARKit and SwiftUI to make an iOS app in 15 hours. ## What's next for SeerAR Expand to Apple Watch and Android devices; improve the accuracy of object detection and recognition; connect with Firebase and Google Cloud APIs.
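The app itself is Swift/ARKit, but the center-crop optimization mentioned above is language-agnostic. Here is a rough NumPy sketch of the idea, assuming the depth frame is a 2D array of distances in meters; the crop fraction and beep scaling are illustrative.

```python
import numpy as np

def nearest_obstacle_distance(depth_map: np.ndarray, crop_fraction: float = 0.3) -> float:
    """Look only at the central crop of the depth map (instead of every pixel)
    and return the distance to the closest point in that region, in meters."""
    h, w = depth_map.shape
    ch, cw = int(h * crop_fraction), int(w * crop_fraction)
    top, left = (h - ch) // 2, (w - cw) // 2
    center = depth_map[top:top + ch, left:left + cw]
    return float(center.min())

def beep_interval(distance_m: float) -> float:
    """Closer obstacles beep faster: clamp to a 0.1s to 1.0s interval."""
    return float(np.clip(distance_m / 3.0, 0.1, 1.0))

depth = np.random.uniform(0.5, 5.0, size=(192, 256))  # stand-in for a LiDAR depth frame
print(beep_interval(nearest_obstacle_distance(depth)))
```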
## Inspiration The other day, I heard my mom, a math tutor, tell her students "I wish you were here so I could give you some chocolate prizes!" We wanted to bring this incentive program back, even amid COVID, so that students can have a more engaging learning experience. ## What it does The student completes a math worksheet and uses the Raspberry Pi to take a picture of their completed work. The program then sends it to the Google Cloud Vision API to extract equations. Our algorithms then automatically mark the worksheet, annotate the JPG with Pure Image, and upload it to our website. The student then earns money based on the score that they received. For example, if they received an 80% on the worksheet, they will get 80 cents. Once the student has earned enough money, they can choose to buy a chocolate; the program checks that they have enough funds and, if so, dispenses it for them. ## How we built it We used a Raspberry Pi to take pictures of worksheets, the Google Cloud Vision API to extract text, and Pure Image to annotate the worksheet. The dispenser uses the Raspberry Pi and Lego to dispense the Mars Bars. ## Challenges we ran into We ran into the problem that if the writing in the image was crooked, the program would not detect the numbers on the same line. To fix this, we opted for lined paper instead of blank paper, which helped us write straight. ## Accomplishments that we're proud of We are proud of getting the Raspberry Pi and motor working, as this was our first time using one. We are also proud of the gear ratio, where we connected small gears to big gears to ensure enough torque to move the candy. We also had a lot of fun building the Lego. ## What we learned We learned how to use the Raspberry Pi, the Pi camera, and the stepper motor. We also learned how to integrate backend functions with the Google Cloud Vision API. ## What's next for Sugar Marker We are hoping to build an app to allow students to take pictures, view their work, and purchase candy all from their phone.
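A minimal sketch of the extract-and-grade step, assuming worksheet lines of the form "3 + 4 = 7". The actual marking and annotation logic (Pure Image, uploading, the account balance) is more involved; the file name here is illustrative.

```python
import operator
import re
from google.cloud import vision

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def extract_text(image_path: str) -> str:
    """Send the worksheet photo to the Cloud Vision API and return the raw text."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    return response.full_text_annotation.text

def grade(text: str) -> float:
    """Mark simple equations like 'a + b = c' and return a score in [0, 1]."""
    marks = []
    for a, op, b, answer in re.findall(r"(\d+)\s*([+\-*])\s*(\d+)\s*=\s*(\d+)", text):
        marks.append(int(OPS[op](int(a), int(b)) == int(answer)))
    return sum(marks) / len(marks) if marks else 0.0

# score = grade(extract_text("worksheet.jpg"))  # e.g. 0.8 -> the student earns 80 cents
```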
winning
## Inspiration For a manager of a small business -- be it a store, restaurant, gym, or even a movie theater -- improving the customer experience and understanding what's going on are tremendously important. Having access to analytics of when people are entering the building, what areas they're spending time at, and what crowds and lines are forming can provide managers with incredibly useful insights -- from identifying parts of the building layout that are poorly designed and causing congestion, to figuring out that certain table setups or shop items are particularly engaging, to having a better idea of what's going on in their business and being able to make data-driven decisions about how to improve. ## What it does Given a live video feed from an overhead camera, Crowd Insights’ AI algorithms detect human heads within the video and use this positional data to identify lines and clusters of people and create heatmaps. The small business owner can then examine this data to learn about human traffic flow within their store over a specified period of time. There are a variety of use cases for this data: congestion tracking, popular hotspots in store, long lines, etc. By analyzing these trends over time, small business owners can make informed decisions on how to improve their business and optimize the physical interaction of customers with the store. For example, if they notice that lots of people tend to group up around a certain product, they can place that product near the back of the store to prevent crowding around the store entrance. Other use cases for this technology could include event management. Event organizers such as the TreeHacks team can use this technology to monitor the congestion within each room and help disperse people from highly crowded rooms to open spaces for work. They can monitor lines, e.g. for food or networking, and figure out novel ways to deal with long lines and heavy foot traffic. ## How we built it We built the theory and data science toolkits, machine learning model, frontend, and backend separately. For the machine learning, we used the PyTorch FCHD fully convolutional head detector, running on a Google Cloud VM. Afterwards, we passed the list of heads to the graph theory library that we built, which constructed the minimum spanning tree over the graph, removed edges that were too long, and performed elliptical fits to determine whether a group of points was a line or a cluster. We also aggregated human location data over time to create a heatmap of the environment to see which places are interacted with the most. Firebase is used to communicate between the head detector and the computer (like a Raspberry Pi), which sends webcam feed data. Finally, we have a web server using ReactJS that displays the results. ## Challenges we ran into One main issue was finding a vision model that could provide dense data for human position in a camera frame. Most models tend to do decently at closer distances, but as we try to monitor areas that are more than 15 feet away from a camera, precision becomes an issue. Because we needed this density of data, we had to test many model architectures and fusion techniques to yield the best results. We also had a lot of trouble rendering the line/cluster data from Firebase in a real-time graph on the website. This was tough because no member had extensive experience with real-time updating and with push/pull requests between Firebase and the web app.
To solve this, we worked together to break the problem down into two parts: collecting and parsing data from Firebase, and displaying the data in a dynamic graph. Lastly, this was our first time incorporating a big chunk of frontend programming into our application. Our experience in JavaScript, HTML, and Firebase was limited, so it took us a long time to learn the syntax of these languages from scratch. However, this also made the project really impactful, as it provided us with an exceptional learning opportunity. ## Accomplishments that we’re proud of We implemented simple but effective algorithms for recognizing clusters of crowds and lines. We used minimum spanning trees and fitted ellipses to identify clusters, then took clusters with particularly elongated ellipses and fit them with best-fit lines. We developed a pipeline that applied knowledge from all branches of computer science - from theory to machine learning and software engineering - together in a product that became more than the sum of its parts. The final web product took tens of hours to complete, and we’re confident that we were able to get it right. ## What we learned We learned a lot about frontend development and algorithm design: ReactJS, ChartJS, CanvasJS, Plotly, and Firebase; ML head and body detection algorithms; Kruskal’s minimum spanning tree, automatic k-means clustering, and depth-first search; and Firebase real-time graphs, including how to upload data from the Jetson to Firebase to the web app. Even though the project was divided into a frontend and backend portion, all members were able to understand the implementation on both sides. Throughout the implementation, we worked as a unified team, especially when we ran into roadblocks. The core takeaway from this project is our improved understanding of real-time databases, machine learning models, and frontend program structure. ## What's next for Crowd Insights AI One big next step would be applying mapping techniques to create a 3D map of the shop, then localizing detected crowds in that 3D map. It would allow the business owner to analyze exactly which shelves or tables are becoming crowded. Furthermore, performing spatial transforms on the angled camera footage would allow us to track 3D positions from a 2D space. We'd also want to apply optical flow and motion tracking to see how people are moving through the space and what slows them down.
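A rough sketch of the MST-and-cut-long-edges clustering over detected head positions, assuming (x, y) pixel coordinates. The real pipeline additionally fits ellipses to distinguish lines from clusters; the edge threshold and points here are illustrative.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def cluster_heads(points: np.ndarray, max_edge: float = 60.0):
    """Build an MST over head positions, drop edges longer than max_edge pixels,
    and return the number of clusters plus a cluster label for each point."""
    dists = squareform(pdist(points))             # dense pairwise distance matrix
    mst = minimum_spanning_tree(dists).toarray()  # keep only the MST edges
    mst[mst > max_edge] = 0                       # cut edges that are too long
    n_clusters, labels = connected_components(mst, directed=False)
    return n_clusters, labels

heads = np.array([[10, 12], [14, 15], [18, 11], [200, 210], [204, 215]])
print(cluster_heads(heads))  # -> 2 clusters: one near the origin, one near (200, 210)
```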
## Inspiration Inspired by a team member's desire to get through his courses by listening to his textbook readings recited by his favorite anime characters, functionality that does not exist in any app on the market, we realized that there was an opportunity to build a similar app that would bring about even deeper social impact. Dyslexics, the visually impaired, and those who simply enjoy learning by having their favorite characters read to them (e.g. children, fans of TV series, etc.) would benefit from a highly personalized app. ## What it does Our web app, EduVoicer, allows a user to upload a segment of their favorite template voice audio (it only needs to be a few seconds long) and a PDF of a textbook, and uses existing deepfake technology to synthesize the dictation from the textbook in the user's favorite voice. The deepfake tech relies on a multi-network model trained using transfer learning on hours of voice data. The encoder first generates a fixed embedding of a given voice sample of only a few seconds, which characterizes the unique features of the voice. Then, this embedding is used in conjunction with a seq2seq synthesis network that generates a mel spectrogram based on the text (obtained via optical character recognition from the PDF). Finally, this mel spectrogram is converted into the time domain via the Wave-RNN vocoder (see [this](https://arxiv.org/pdf/1806.04558.pdf) paper for more technical details). Then, the user automatically downloads the .WAV file of his/her favorite voice reading the PDF contents! ## How we built it We combined a number of different APIs and technologies to build this app. For scalable machine learning and intelligence compute, we heavily relied on the Google Cloud APIs -- including the Google Cloud PDF-to-text API, Google Cloud Compute Engine VMs, and Google Cloud Storage; for the deep learning techniques, we mainly relied on existing deepfake code written for Python and Tensorflow (see the Github repo [here](https://github.com/rodrigo-castellon/Real-Time-Voice-Cloning), which is a fork). For web server functionality, we relied on Python's Flask module, the Python standard library, HTML, and CSS. In the end, we pieced together the web server with Google Cloud Platform (GCP) via the GCP API, utilizing Google Cloud Storage buckets to store and manage the data the app would be manipulating. ## Challenges we ran into Some of the greatest difficulties were encountered in the superficially simplest implementations. For example, the front end initially seemed trivial (what's more to it than a page with two upload buttons?), but many of the intricacies associated with communicating with Google Cloud meant that we had to spend multiple hours creating even a landing page with just drag-and-drop and upload functionality. On the backend, 10 excruciating hours were spent attempting (successfully) to integrate existing deepfake/voice-cloning code with the Google Cloud Platform. Many mistakes were made, and in the process, there was much learning. ## Accomplishments that we're proud of We're immensely proud of piecing all of these disparate components together quickly and managing to arrive at a functioning build. What started out as merely an idea manifested itself into a usable app within hours. ## What we learned We learned today that sometimes the seemingly simplest things (dealing with Python/CUDA versions for hours) can be the greatest barriers to building something that could be socially impactful.
We also realized the value of well-developed, well-documented APIs (e.g. Google Cloud Platform) for programmers who want to create great products. ## What's next for EduVoicer EduVoicer still has a long way to go before it can gain users. Our first next step is to implement functionality, possibly with some image segmentation techniques, to decide which parts of the PDF should be scanned; this way, tables and charts could be intelligently discarded (or, even better, referenced throughout the audio dictation). The app is also not robust enough to handle large multi-page PDF files; the preliminary app was designed as a minimum viable product, only including enough to process a single-page PDF. Thus, we plan to both increase efficiency (time-wise) and scale the app by splitting up PDFs into fragments, processing them in parallel, and returning the output to the user after collating the individual text-to-speech outputs. In the same vein, the voice cloning algorithm was restricted by the length of the input text, so this is an area we seek to scale and parallelize in the future. Finally, we are thinking of using some caching mechanisms server-side to reduce waiting time for the output audio file.
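A minimal sketch of the upload path described above: a Flask endpoint that stashes the two uploaded files in a Cloud Storage bucket for the synthesis worker to pick up. The bucket name and form field names are assumptions; the voice-cloning job itself runs separately on a GPU VM.

```python
from flask import Flask, request
from google.cloud import storage

app = Flask(__name__)
BUCKET = "eduvoicer-uploads"  # hypothetical bucket name

@app.route("/upload", methods=["POST"])
def upload():
    """Accept a voice sample and a textbook PDF, stash both in Cloud Storage,
    and report that the synthesis job has been queued."""
    bucket = storage.Client().bucket(BUCKET)
    for field in ("voice_sample", "textbook_pdf"):
        file = request.files[field]
        bucket.blob(file.filename).upload_from_file(file)
    return {"status": "queued"}, 202

if __name__ == "__main__":
    app.run(debug=True)
```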
## Inspiration Two of our teammates, Archna and Anirudh, had witnessed serious accidents at previous concerts and noticed how delayed the medical response was. On discussing this with our teammates, we were appalled to find out the number of deaths and injuries at concerts each year due to crowd surges and stampedes, which can be easily prevented. ## What it does Surge Protector uses drone technology to monitor crowded locations like music concerts and public protests. Using a dilated convolutional neural network (CNN) model, we find the areas with the highest densities in the crowd and alert organizers or authorities in a timely manner to prevent events such as stampedes. On Surge Protector's dashboard you can view these problematic regions with bounding rectangles and a density heat map, and with that information quickly send the necessary personnel to where they're needed most. With statistics such as the population in certain areas and a time series of population groups in the crowd, Surge Protector can be the key to a future of safer and more accessible concerts for all. ## How we built it Drone infrastructure (provided by TreeHacks sponsors): we use drone camera footage, with plans to add object detection and autonomous flying to navigate the venue, and we stream video footage and location data live using a live-streaming API. Backend: PyTorch and CNNs for marking heads in a crowd; heatmaps generated using Matplotlib and OpenCV with custom thresholds; an open-source algorithm for training the crowd counting model [link](https://github.com/leeyeehoo/CSRNet-pytorch); and custom heatmap algorithms written in Python 3 with Matplotlib and OpenCV to mark the crowd, plus temporal averaging to stabilize the bounding boxes generated by the model. The frontend was designed in ReactJS with Bootstrap. ## Challenges we ran into Our biggest issue was on the image processing side, trying to efficiently process video frames at a reasonable speed. Additionally, conversion between different intermediate formats such as NumPy arrays, Python images, and Matplotlib plots was surprisingly painful. In one instance, converting a pyplot to a cv2 image without a margin became a technical issue which caused an immense amount of frustration. Other problems included incorporating the torch model, which was originally written in Python 2.7, porting it to Python 3.10, and running it in our Conda environment. We also faced hardware issues while getting the drone feed, and even had to learn to fly a drone, as none of our team members had used one before. ## Accomplishments that we're proud of We are incredibly grateful to TreeHacks for giving us the opportunity to meet amazing people and collaborate on ideas. As a team, we are proud that we met as diverse individuals from each of the four corners of the US to hack on a project we felt passionate about. We were also really happy that after almost 24 hours of continuous hacking, we were able to get a real demo ready for our app, despite facing problems with conversions and cloud computing. All the pain and bugs we had to work through to get to the demo were worth it the moment we saw the demo come to light. ## What we learned We learned a lot about the potential of drone technology after speaking with some of the sponsors of the hackathon. We got to know about some incredible innovations happening in the autonomous aviation space and got to implement a tiny use case of the same for our project!
Additionally, we had to research and use some interesting techniques for detecting the most dangerous areas of crowds, specifically keeping the bounding rectangles in consistent positions that made sense based on the density heat maps that the CNN produced. ## What's next for Surge Protector Surge Protector is just a proof of concept and has huge potential to grow as a project. Fully autonomous drones from TreeHacks sponsors can be used to automate concert scans, and technologies such as infrared imagery and LiDAR can be used by drones to better ensure the safety of citizens at night. Additionally, there is existing software that can fly towards a selected target, which we believe can help guide emergency personnel to those who need it most by shining a light on dense spots in a crowd. In the future, the Surge Protector dashboard can be expanded to include real-time graphs and analytics in addition to a drone's live feed, allowing organizers to keep track of constantly changing, complex situations.
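A rough sketch of the heatmap-overlay step described above, assuming a CSRNet-style model outputs a 2D density map for each frame. The frame, density map, and blending factor here are stand-ins.

```python
import cv2
import numpy as np

def overlay_heatmap(frame: np.ndarray, density: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Blend a predicted crowd-density map over the drone frame as a colored heatmap."""
    density = cv2.resize(density, (frame.shape[1], frame.shape[0]))
    norm = cv2.normalize(density, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    heat = cv2.applyColorMap(norm, cv2.COLORMAP_JET)
    return cv2.addWeighted(heat, alpha, frame, 1 - alpha, 0)

frame = np.zeros((480, 640, 3), dtype=np.uint8)      # stand-in for a drone video frame
density = np.random.rand(60, 80).astype(np.float32)  # stand-in for the model's density map
cv2.imwrite("heatmap.jpg", overlay_heatmap(frame, density))
```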
winning
We developed gloves that serve as remote controls for a small car. Simple, intuitive gestures are all one needs to control the car. The hack consists of a car and two outfitted gloves. The car is powered by an Arduino Uno and is able to programmatically power motors attached to the wheels. Additionally, the car can communicate with each of the gloves over radio. The right-hand glove is outfitted with an Arduino Uno as well. It controls the forward movement of the car: tilting the hand backwards (much like one does on a dirtbike) accelerates the car forward, while keeping the hand level with the ground stops the car's motion. The left hand is used to turn the car; the user can simply turn their hand to steer the car as it is in motion. Future versions of this car could be outfitted with rough-terrain traversing capabilities, enabling it to deliver much-needed supplies to areas in need. Additionally, this car could be developed into an extremely immersive exploratory toy, potentially outfitted with VR streaming capabilities. As a team, we've learned how to communicate between Arduino boards, use transistors as switches, and get data from IMU sensors. Both of us are computer science majors and have had limited experience with these technologies. The process of learning more about lower-level engineering has been amazing!
I want to create a tool to help people understand the Turing machine. The Game of Life is a great way to do so, since it is Turing complete, while still being interesting to play with.
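As a conceptual aside, the rule the tool would animate is very small. A minimal sketch of one Game of Life generation (not the project's own code) looks like this:

```python
import numpy as np
from scipy.signal import convolve2d

KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def step(grid: np.ndarray) -> np.ndarray:
    """One Game of Life generation: a live cell survives with 2-3 neighbours,
    and a dead cell becomes alive with exactly 3."""
    neighbours = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

glider = np.zeros((8, 8), dtype=int)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
print(step(glider))
```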
## Inspiration As a team, we found that there was no portable mouse on the market that could fit us all comfortably. So we figured, why not make a portable mouse that perfectly conforms to our hand? This was the inspiration for the glove mouse: a mouse that seamlessly integrates into our daily life while also providing functionality and comfort. ## What it does Our project integrates a mouse into a glove, using planar movement of the hand to control the position of the cursor. The project features two push buttons on the fingertips which control left and right click. ## How we built it At the core of our project, we utilized an Arduino Uno to transmit data from our push buttons and 6-axis accelerometer module to the computer. Each module sends analog signals to the Arduino, which we then collect with a C program running on the computer. This raw acceleration data is then processed in Python, using integration to get the velocity of the cursor, which is then used to output a corresponding cursor movement on the host computer. ## Challenges we ran into One major challenge the team faced was that our board, the Arduino Uno, didn't have native support for Arduino's mouse libraries, meaning we needed to find a different way to interface our sensors with a computer input. Our solution, based on forums and recommendations online, was to output our data to Python using C, where we could then manipulate the data and control the mouse using a Python script. However, since Python is higher level than C, we found that the C program collected data faster than the Python code could receive it. To solve this, we implemented a software enable from Python to C to synchronize the collection of the data. ## Accomplishments that we're proud of Despite using a board that was incompatible with Arduino's built-in mouse library, we were able to figure out a workaround to implement mouse capabilities on our Arduino board. ## What we learned Through this project, the team learned a lot about interfacing between different programming languages and Arduinos. Additionally, the team gained experience with scripts for data collection and with controlling timings so programs can interact at regular intervals. ## What's next for Glove Mouse In the future, we want to make our cursor movement smoother on the host PC by spending more time calibrating the polling rate, response time, and sensitivity. Additionally, we would look to reduce the size of the device by creating an IC to replace our Arduino, adding a Bluetooth transceiver, and adding a small battery.
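A rough sketch of the host-side loop described above, assuming the glove streams "ax,ay" acceleration readings over serial. The port name, message format, and scale factor are assumptions; the actual project relays the data through a C program first. This uses pyserial and pyautogui.

```python
import time

import pyautogui  # host-side cursor control
import serial     # pyserial

ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # hypothetical serial port
vx = vy = 0.0
last = time.time()

while True:
    line = ser.readline().decode(errors="ignore").strip()
    if not line:
        continue
    ax, ay = (float(v) for v in line.split(","))  # assumed "ax,ay" message format
    now = time.time()
    dt, last = now - last, now
    vx += ax * dt  # integrate acceleration into velocity
    vy += ay * dt
    pyautogui.moveRel(int(vx * dt * 1000), int(vy * dt * 1000))  # scale factor is arbitrary
```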
losing
## Inspiration All of us have participated in DECA case study competitions in the past, and we are aware of the impact that customer satisfaction has on the success of a business. Therefore, after learning of the theme of UofTHacks IX, we saw how we could use customer reviews to further grow revenue for businesses. Examples of customer reviews include YouTube reviews and tweets on Twitter. Rather than having to sift through all potential customer reviews, we created an application that can automatically determine the satisfaction level of a review. ## What it does The program is a map-based user interface that allows users to submit reviews through audio clips, text, and more. From this, the backend ties the database and frontend together and uses a machine learning model to determine whether a given review is positive or negative. This value is then displayed on the map at the location of the corresponding business. ## How we built it review.ai was built primarily using Visual Studio Code, in JavaScript (with some Python). Frameworks that were used include React, Node.js, and MongoDB. Additionally, we used the Google Maps API and Speech-to-Text API. ## Challenges we ran into A module was not installed properly, but after countless hours of troubleshooting (as well as going through Stack Overflow posts), we managed to debug the code to help our application run smoother. ## Accomplishments that we're proud of We are proud of the fact that we were able to include the Google Maps API and Speech-to-Text API, as these allowed our program to become more interactive and accessible. Furthermore, using these APIs allowed us to learn more about machine learning algorithms (e.g. decision trees, naive Bayes, and SVMs). ## What we learned We learned how to integrate the Google Maps API and Speech-to-Text API into an application. Additionally, we learned how to train and test models to perform the functions that we want. ## What's next for review.ai We hope to expand the machine learning capabilities of review.ai to allow the filtering of more nuanced reviews of a business, and to further separate the categories of customer satisfaction using keywords. Also, we want to investigate the capabilities of using deep learning in our program, which is a term we learned while researching this project.
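A minimal sketch of the review classifier idea, using a bag-of-words naive Bayes model (one of the algorithms mentioned above). The training data here is a tiny illustrative stand-in, not the project's dataset.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny stand-in for a real labelled review dataset.
reviews = ["great food and friendly staff", "terrible service, never again",
           "loved the atmosphere", "cold food and rude waiter"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(reviews, labels)

# A review transcribed from audio (via Speech-to-Text) can be scored the same way.
print(model.predict(["the staff was friendly and the food was great"]))  # likely [1]
```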
## Inspiration A common skill one needs in business management is the ability to know how the customer feels about and reacts to the system of services provided by the business in question. In this day and age, computers are an essential tool for analyzing these important sources of customer feedback. Having a machine automatically gather unsolicited customer feedback can quickly indicate how the last few interactions went for customers. Making this tool accessible was our inspiration behind this project. ## What it does This web application currently gathers data from Twitter, Reddit, Kayak, TripAdvisor and Influenster, with room to expand into many more social review websites. The data it gathers from these websites is represented as graphs, ratios and other symbolic representations that help the user easily conclude how the company is perceived by its customers, and even compare it to how customers perceive other airline companies. ## How we built it We built it using languages and packages we were familiar with, along with packages we did not know existed before yHacks 2019. An extremely careful design process was laid out well before we started working on the implementation of the web app, and we believe that is the reason behind its simplicity for the user. We prioritized making the implementation as simple as possible, such that any user can easily understand the observations drawn from the data. ## Challenges we ran into Importing and utilizing some packages did not play well with our implementation process, so we had to make sure we covered our design checklist by working around the issues we ran into. This included building data scrapers, data representers and other packages from scratch. This issue became increasingly prominent the more we pressed on making the web app user-friendly, as more functions and code had to be shoveled into the back end. ## Accomplishments that we're proud of The data scrapers and the representative models for the collected data are the accomplishments we're most proud of, as they are simple yet extremely effective when it comes to analyzing customer feedback. In particular, getting data from giant sources of customer reactions such as TripAdvisor, Reddit and Twitter makes the application highly relevant and effective. This practical idea, and the ease of access we implemented for the user, is what we are most proud of. ## What we learned We learned a lot more about several of the infinite number of packages available online. There is so much information out on the internet that these two continuous days of coding and research have not even scratched the surface in terms of all the implementable ideas out there. Our implementation is just a representation of what a final sentiment analyzer could look like. Given that there are many more areas to grow in, we learned about customer feedback analysis and entrepreneurial skills along the way. ## What's next for feelBlue Adding more sources of data such as Facebook, Instagram and other large social media websites will help increase the pool of data for sentiment analysis. This implementation could even help high-level managers at JetBlue decide which areas of service they can improve! Given enough traction and information, feelBlue could even be used as a universal sentiment analyzer for multiple subjects alongside JetBlue Airlines! The possibilities are endless!
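As a rough illustration of the aggregation step, here is a minimal sketch scoring already-scraped review text with NLTK's VADER lexicon and producing the kind of ratio a dashboard could display. The review list is a stand-in for the scrapers' output, and the original project may well use a different sentiment method.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

# Stand-in for text collected by the Twitter/Reddit/TripAdvisor scrapers.
reviews = ["Smooth flight and great crew!", "Delayed three hours, no updates.",
           "Seats were comfortable, boarding was quick."]

scores = [sia.polarity_scores(r)["compound"] for r in reviews]
positive_ratio = sum(s > 0 for s in scores) / len(scores)
print(f"{positive_ratio:.0%} of recent mentions are positive")  # feeds the dashboard ratios
```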
## Inspiration While tackling our university courses, we were faced with the task of implementing (and debugging) various data structures. We found that using traditional debuggers or print statements obscures the structure of the data, making testing and debugging code tedious and unintuitive. We were determined to find a way to take advantage of the inherent properties of data structures. With Visually Study Yo Code, you can see the data structure come together as you step through your code. ## What it does Visually Study Yo Code gives you the option of using a graphical depiction of your variables to debug your JavaScript code. By right-clicking a variable in the editor, you can open a tab which displays nodes to depict trees, linked lists and custom data structures. Nodes are added, deleted or modified as you step through your code in the debugger, making it easy to find and trace down errors in your algorithms. ## How we built it We created and deployed a Visual Studio Code extension using the Extension API to let us integrate our new functionality into the editor. The graphical representation is constructed using canvas in HTML and displayed to the user in a webview. To access the data, we interfaced with Visual Studio Code's Debug Adapter and parsed the information about the variables into a JSON object. ## Challenges we ran into The functionality of our extension was limited by what the Visual Studio Code Extension API provides. While we originally planned to add a command directly to the debugging menu, the ability to add new commands was constrained to the editor. Furthermore, accessing the editor's colour themes directly was difficult, so we decided to limit ourselves to supporting a dark theme. ## Accomplishments that we're proud of We are proud to have completed a project that we would like to use in the future. In contrast to most hackathon projects, we feel like Visually Study Yo Code benefits us directly by letting us create more robust code more quickly. ## What we learned Since we had not made an extension before, we learned how to use the Visual Studio Code Extension API. Since Visually Study Yo Code is focused on debugging, we learned about the debugging architecture used by Visual Studio Code. ## What's next for Visually Study Yo Code Visually Study Yo Code currently only supports debugging in JavaScript. In the future, we will increase the scope of our extension to include other programming languages. Furthermore, we plan to provide support for custom themes for our graphs. By using SVG, we can make our webviews interactive to make debugging even more effective.
losing
## Inspiration 37% of youth between 12 and 17 years of age have reported being bullied online at least once. Yet, only 1 in 10 of them have informed a trusted adult about it. Seeing as many children are going to be using the internet and social media from a young age anyway, we might as well make it as safe as possible. With this Chrome extension, we wanted to provide one-click access to mental health services for children as well as anti-cyberbullying software. ## What it does Just turn on the FroggyFriend feature and you will get a popup menu in the top right corner of your browser where you can choose one of three quick help options: the Kids Help Phone main page, phone contact, or a live-chat feature. The extension also protects children from profane language: it analyzes the text on a webpage and replaces hateful words with asterisks. ## How we built it We started by making a manifest.json file to host the Chrome extension, using the Chrome extension API. Then, we used HTML and CSS to design a popup menu; you can click on the extension icon in the top right and you have three quick-access buttons to mental health resources for kids. We used an online resource called Font Awesome for fun icons and our signature logo: the frog. With JavaScript, we wrote a content script to extract text from the HTML DOM, analyze the extracted text for profanity using Google's list of bad words, and then censor those words. ## Challenges we ran into Nobody on the team had a very good understanding of JavaScript, so we had to watch many tutorials on how to use it. One major issue was figuring out how to get all the words from a text file and use them in the JavaScript file, and we ended up solving the problem with a simple Python script. Another major challenge was extracting all the text from HTML documents and figuring out how to make changes to it. ## Accomplishments that we're proud of We would be proud to make a difference in the mental health of today's youth. We are proud that we learned to use new technology; most of us were new to HTML as well. We believe that we created a very reliable and easy-to-use web extension. ## What we learned We all learned to code in JavaScript, HTML, and CSS, how to extract text from webpages, and how to make aesthetically pleasing yet functional technology. ## What's next for FroggyFriend This can be applied to many social media platforms and messaging apps to prevent inappropriate messages or content from being shown, especially to younger children.
## Inspiration Our **inspiration** for this project was born out of a need for a more accessible and responsible way to engage with digital content in a world where inappropriate language and content can be pervasive. In today's age, a significant portion of online content contains explicit or offensive material that may not be suitable for various settings, such as educational environments or professional presentations. We believe that access to information and media should be more inclusive and adaptable, which led us to create an automatic censor that can transcribe and filter out profanity in videos, audio, and text files. Furthermore, we recognize the importance of child safety and the need for parents to monitor the content their children consume. Our tool not only promotes a cleaner and more respectful online environment but also empowers parents to ensure that their children are exposed to age-appropriate content. ## What it does Our program is a cutting-edge solution that caters to the diverse needs of our users. It functions as an automatic content censor, offering the capability to process and filter out inappropriate language and content from a wide range of media, including videos, audio files, and text documents. We understand that there are countless scenarios where one might need to use online content, be it for educational, professional, or family purposes, but the presence of offensive material can be a hindrance. Our program simplifies this by allowing users to upload their files, which are then transcribed and meticulously scanned for any objectionable words or phrases. The program also provides users with the flexibility to define their own custom censorship rules, giving them greater control over the content they consume or share. In a world where the internet is flooded with explicit content, our program aims to empower users to curate and create a more respectful and safe digital environment tailored to their unique requirements. ## How we built it We developed our program using a combination of HTML, Reflex (a Python library), and Python, creating a robust and user-friendly application. The front end, built with HTML, provides a user-friendly interface for interacting with the system. Reflex, on the other hand, played a pivotal role in constructing webpages that seamlessly connect with Python, facilitating the necessary backend operations. Reflex was an invaluable asset in our project, bridging the gap between our user-friendly front end and the robust backend. It not only facilitated a smooth transition but also empowered us to incorporate essential features like video playback into our web application. Thanks to Reflex, our project achieved a polished and complete look, enhancing the overall user experience. This dynamic library played a key role in ensuring that our application seamlessly integrated user interactions with the powerful Python backend, resulting in a more engaging and refined final product. Python served as the core of our backend, and we harnessed the power of libraries like MoviePy and AssemblyAI to enhance our program's capabilities. With MoviePy, we adeptly handled video and audio files, enabling dynamic editing based on the transcribed content. AssemblyAI was instrumental in real-time caption generation, complete with timestamps. To ensure content appropriateness, we leveraged Python to cross-reference the transcript against a database of profanity, stored in a text file, as well as any custom words provided by the user. 
This process allowed us to pinpoint and redact specific timestamps associated with objectionable content, ultimately producing a censored, refined version of the media. The result is a versatile and effective solution for filtering and enhancing digital content. ## Challenges we ran into As a team of three first-semester computer science freshmen at Arizona State University, we found that our first hackathon brought a series of challenges and valuable lessons. We entered the competition somewhat uncertain about what to expect, and finding the right balance between being adequately prepared and over-prepared proved to be a concern. Traveling to the event introduced an additional layer of complexity, given the stakes and the desire for a fruitful outcome. The need to quickly grasp Reflex, a framework that was new to us, added another dimension of learning while staying on track with our pre-established plan. The often slow and unreliable Wi-Fi presented a major hurdle, particularly since our primary backend code required a fast internet connection for efficient processing. ## Accomplishments that we're proud of During the two-day hackathon, we achieved remarkable progress and successfully built a working prototype well within the given time frame. Each member of our team contributed their expertise to different aspects of the project. Pratham, despite starting from scratch, quickly learned Reflex and skillfully developed the interface connecting the backend to the frontend, which included the video playback feature. Harshit took on the challenge of learning HTML and CSS during the hackathon and skillfully crafted the main front pages. Syna worked diligently on the backend, creating an efficient Python program. While our responsibilities were well-defined, we also supported each other when facing difficulties. Overcoming several obstacles along the way, we persevered and, upon testing our code with a more reliable network, were delighted to find that our model was impressively fast and potentially more efficient than many existing programs in the market. These challenges not only proved to be valuable learning experiences but also demonstrated our adaptability and success in a competitive environment. We take great pride in what we accomplished during this hackathon. ## What we learned Through this enriching experience, we, as a team of first-semester computer science freshmen, gained invaluable insights and lessons. Our first hackathon was an eye-opener, teaching us to find the right balance between preparation and readiness, and how to persevere in the face of challenges. We discovered that with dedication and teamwork, we could not only build a functional prototype but also do it within the tight time constraints of the hackathon. Our individual journeys were equally enlightening. Pratham, with determination, learned Reflex from scratch and managed to create a seamless interface between the backend and front end, even incorporating video playback. Harshit, in a commendable feat, picked up HTML and CSS during the competition, enabling us to design polished front pages. Syna, as the backend wizard, worked out the entire logic behind our model and, by utilizing the available open-source libraries and functions, crafted an efficient Python program. Perhaps the most significant lesson was our ability to support each other. Despite our defined roles, we helped out whenever challenges arose.
Slow and unreliable Wi-Fi posed a significant obstacle, particularly since our primary backend required a fast internet connection. However, we persevered, and this challenge led us to the discovery that our model was impressively fast and potentially more efficient than many existing programs on the market. In the end, these challenges shaped us, highlighting our adaptability and ability to thrive in a competitive environment. We take immense pride in our accomplishments during this hackathon and look forward to applying these lessons to future endeavors. ## What's next for "What The $@!#" The future of "What the $@!#" holds exciting possibilities as we explore avenues for expansion and enhancement. First and foremost, we are keen on developing a web extension that employs the same logic as our current model. This extension would serve as a versatile tool for testing online content for profanity, effectively creating a parental control system that not only identifies inappropriate material but also censors it, granting parents peace of mind as their children navigate the internet. In addition to this, we envision the inclusion of a feature allowing users to apply custom censor sounds to video and audio files, adding a personal touch to the censorship experience. Furthermore, we aspire to broaden our program's capabilities by not only redacting audio but also censoring the visual content during objectionable timestamps in videos. This expansion will make our product even more versatile and valuable in content filtering and enhancement. As a team, we eagerly anticipate further development and collaboration in future hackathons. We are also planning to present our idea to "Venture Devils," Arizona State University's entrepreneurial club, which provides funding for projects like ours. This step will help us transform our prototype into a fully realized implementation, ensuring that "What the $@!#" can make a substantial impact in the realm of online content filtering and enhancement.
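As a rough illustration of the timestamp-based censoring pipeline described above, here is a minimal sketch using the AssemblyAI Python SDK and a plain word list. The file names are placeholders, and the real app additionally handles user-supplied custom words and rewrites the media with MoviePy.

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder

def profane_spans(media_path: str, banned_words: set) -> list:
    """Transcribe the file and return (start, end) times in seconds for every banned word."""
    transcript = aai.Transcriber().transcribe(media_path)
    spans = []
    for word in transcript.words:
        if word.text.lower().strip(".,!?") in banned_words:
            spans.append((word.start / 1000.0, word.end / 1000.0))  # timestamps arrive in ms
    return spans

banned = {line.strip() for line in open("profanity.txt")}  # hypothetical word list file
# These spans can then be muted or bleeped out with MoviePy before re-exporting the file.
print(profane_spans("lecture_clip.mp4", banned))
```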
## Inspiration One of the most exciting parts of hackathons is the showcasing of the final product, well earned after hours upon hours of sleep-deprived hacking. Part of the presentation work lies in the Devpost entry. I wanted to build an application that can rate the quality of a given entry to help people write better Devpost posts, which can help them better represent their amazing work. ## What it does The Chrome extension can be used on a valid Devpost entry web page. Once the user clicks "RATE", the extension will automatically scrape the relevant text and send it to a Heroku Flask server for analysis. The final score given to a project entry is an aggregate of many factors, such as descriptiveness, the use of technical vocabulary, and the score given by an ML model trained against thousands of project entries. The user can use the score as a reference to improve their entry posts. ## How I built it I used UiPath as an automation tool to collect, clean, and label data across thousands of projects in major hackathons over the past few years. After getting the necessary data, I trained an ML model to predict the probability of a given Devpost entry being among the winning projects. I also used the data to calculate other useful metrics, such as the distribution of project entry lengths, the average amount of terminology used, etc. These models are then uploaded to a Heroku cloud server, where I can get aggregated ratings for texts using a web API. Lastly, I built a JavaScript Chrome extension that detects Devpost web pages, scrapes data from the page, and presents the ratings to the user in a small pop-up. ## Challenges I ran into Firstly, I am not familiar with website development. It took me a hell of a long time to figure out how to build a Chrome extension that collects data and uses external web APIs. The data collection part was also tricky. Even with great graphical automation tools at hand, it was still very difficult to do large-scale web scraping for someone relatively inexperienced with website dev like me. ## Accomplishments that I'm proud of I am very glad that I managed to finish the project on time. It was quite an overwhelming amount of work for a single person. I am also glad that I got to work with data from absolute scratch. ## What I learned Data collection, hosting an ML model on the cloud, and building Chrome extensions with various features. ## What's next for Rate The Hack! I want to refine the features and the rating scheme.
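A rough sketch of the aggregate-rating idea: blend simple heuristics (length, technical vocabulary) with the trained model's win probability. The word list, weights, and pickled model file are all illustrative assumptions, not the project's actual values.

```python
import pickle
import re

TECH_TERMS = {"api", "backend", "frontend", "model", "pipeline", "database"}  # illustrative list

def rate(entry_text: str) -> float:
    """Combine descriptiveness and vocabulary heuristics with the ML model's win probability."""
    words = re.findall(r"[a-zA-Z']+", entry_text.lower())
    length_score = min(len(words) / 400.0, 1.0)                     # reward reasonably detailed posts
    vocab_score = min(sum(w in TECH_TERMS for w in words) / 10.0, 1.0)
    with open("winner_model.pkl", "rb") as f:                       # hypothetical pickled sklearn text pipeline
        model = pickle.load(f)
    ml_score = model.predict_proba([entry_text])[0][1]
    return round(0.3 * length_score + 0.2 * vocab_score + 0.5 * ml_score, 2)
```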
losing
## Inspiration We found the need to search through lecture videos more efficiently in order to keep up with studying amid the disorganization of online school. Many students are having trouble keeping up with recorded lectures, and that's causing them to fall behind and struggle, so we created a way to bookmark the important parts of a recorded lecture and find them much more efficiently, courtesy of AssemblyAI. ## What it does Lecture Logs is a tool that takes MP4 or MP3 files and sends them to an API which creates a suitable environment for the user to post their file to the AssemblyAI API. The file is then transcribed, the data is broken into readable chapter summaries with timestamps, and the result is rendered to the client in the front end. ## How we built it The front end is made with React.js and a few other React-specific tools such as react-router-dom, which allows the user to navigate around the client. The backend is made using Express.js, to which the user can send their data; the backend then handles and receives data from AssemblyAI to display to the user. ## Challenges we ran into One challenge we ran into was trying to get Redux to work alongside React.js. We attempted this to implement application-wide state along with actions. Unfortunately, we weren't able to get our middleware to work due to an incompatibility with our dependencies. We also ran into some issues with rendering state in React, but we were able to work around that by looking at code we had written before. ## Accomplishments that we're proud of Working with an unfamiliar API, being able to split long videos down into smaller chapter summaries, and being able to find the content in the chapter summaries by navigating to the provided timestamps. ## What we learned We were able to strengthen our understanding of React.js, really working with UI design to give a user experience that we are proud of. We also learned how to send files in HTTP requests to APIs and other services. ## What's next for Lecture Logs The next step for Lecture Logs is to store chapter summaries in a database for a user to access over the course of a semester to help them stay organized. We'd also like to be able to clip the video so that, after the AssemblyAI process has taken place, the user can see clips of the most important parts of the video.
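The project's backend is Express.js; for brevity, here is a sketch of the chapter-summarization feature it relies on, shown with AssemblyAI's Python SDK and its auto-chapters option. The file name and key are placeholders.

```python
import assemblyai as aai

aai.settings.api_key = "YOUR_API_KEY"  # placeholder

# Request automatic chapter detection along with the transcript.
config = aai.TranscriptionConfig(auto_chapters=True)
transcript = aai.Transcriber().transcribe("lecture.mp4", config=config)

for chapter in transcript.chapters:
    start_min, start_sec = divmod(chapter.start // 1000, 60)  # chapter timestamps arrive in ms
    print(f"[{start_min:02d}:{start_sec:02d}] {chapter.headline}")
    print(f"    {chapter.summary}")
```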
## Inspiration As is well known, wildfires have become more and more serious in recent years, especially in California. It is very hard to find out where a wildfire has started and where it is heading, because satellite monitoring has a half-day delay and wildfires often spread too quickly for firefighters to take action. So we built path planning software for drones to detect and image wildfires along efficient and safe paths. We are eager to devote ourselves to building this kind of future. ## What it does It mainly deals with the collection of historical wildfire data in California. Then, taking advantage of machine learning algorithms, the drones are able to learn the most efficient and economical path to the place where a fire has started. People can then understand the wildfire situation better. ## How we built it We collected historical wildfire data with the help of datasets provided by APIs like Google Earth and NASA public open sources, visualized them, figured out the most vulnerable or most important sites, and assigned coordinates to them. We established a UAV path planning model using machine learning algorithms and the data we gathered, and simulated the hardware part, combining software with hardware, on an NVIDIA JetBot. We used the JetBot to simulate the UAV in terms of motion control, image capture and identification, since they share similar approaches, algorithms, sensors and detectors. ## Challenges we ran into When we were finding data, we didn't know which data was the most representative. We had fire data for the past 24 hours, the past 7 days, and the past few years. There are lots of factors that can influence the path of the drones, and it's hard to take wind, weather and fire growth all into account. It was also hard to do the demos or simulations, since we didn't have a drone that supports deep learning. ## Accomplishments that we're proud of Convincing visualizations of historical wildfire data; a strong machine learning model that could figure out the most efficient UAV detection path when a wildfire happens; successfully assembling and setting up a ground robot car and teaching it basic motions and image processing, which can be considered the same operational approach we would use on a drone; consumed a lot of food and drink; enjoyed two amazing days at TreeHacks; exchanged and were inspired by new ideas; made new friends... ## What we learned Too much to cover everything we have learned. To name a few: some useful and powerful API tools, knowledge of computer vision, machine learning, and AR/VR, how to assemble and initialize a ground robot, cooperation and communication skills... and fun things like how to find a good place to sleep safe and sound when there is no bed (and that Stanford is not much worse than Berkeley, as we had thought, haha). ## What's next for Fire Drones! Adding various factors like wind direction, wind speed, and vegetation area into the algorithm can make our path model more applicable. We can also build a nice user interface that shows burning areas and the path of the drones. Images sent by the drones can also be displayed and then used to train the deep learning model.
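The write-up does not specify the planning algorithm, so the following is only an illustrative greedy nearest-neighbour survey order over hotspot coordinates, not the team's actual model. The launch point and hotspot coordinates are stand-ins.

```python
import numpy as np

def greedy_survey_path(start: np.ndarray, hotspots: np.ndarray) -> list:
    """Visit fire hotspots in nearest-first order from the drone's launch point."""
    remaining = list(range(len(hotspots)))
    order, position = [], start
    while remaining:
        nearest = min(remaining, key=lambda i: np.linalg.norm(hotspots[i] - position))
        order.append(nearest)
        position = hotspots[nearest]
        remaining.remove(nearest)
    return order

launch = np.array([0.0, 0.0])
hotspots = np.array([[2.0, 1.0], [10.0, 8.0], [1.5, 0.5]])  # stand-in hotspot coordinates
print(greedy_survey_path(launch, hotspots))  # -> [2, 0, 1]
```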
## 💡 Inspiration You have another 3-hour online lecture, but you’re feeling sick and your teacher doesn’t post any notes. You don’t have any friends who can help you, and when class ends, you leave the meet with a blank document. The thought lingers in your mind: “Will I ever pass this course?” If you experienced a similar situation in the past year, you are not alone. Since COVID-19, there have been many struggles for students. We created AcadeME to help students who struggle with paying attention in class, miss class, have a rough home environment, or just want to get ahead in their studies. We decided to build a project that we would personally use in our daily lives, and the problem AcadeME tackles was the perfect fit. ## 🔍 What it does First, our AI-powered summarization engine creates a set of live notes based on the current lecture. Next, there are toggle features for simplification, definitions, and synonyms which help you gain a better understanding of the topic at hand. You can even select text over videos! Finally, our intuitive web app allows you to easily view and edit previously generated notes so you are never behind. ## ⭐ Feature List * Dashboard with all your notes * Summarizes your lectures automatically * Select/Highlight text from your online lectures * Organize your notes with an intuitive UI * Utilizing Google Firestore, you can go through your notes anywhere in the world, anytime * Text simplification, definitions, and synonyms anywhere on the web * DCP, or Distributed Computing, was a key aspect of our project, allowing us to speed up our computation, especially for the deep learning model (BART), which, through parallel and distributed computation, ran 5 to 10 times faster. ## ⚙️ Our Tech Stack * Chrome Extension: Chakra UI + React.js, Vanilla JS, Chrome API * Web Application: Chakra UI + React.js, Next.js, Vercel * Backend: AssemblyAI STT, DCP API, Google Cloud Vision API, DictionariAPI, NLP Cloud, and Node.js * Infrastructure: Firebase/Firestore ## 🚧 Challenges we ran into * Completing our project within the time constraint * There were many APIs to integrate, so we spent a lot of time debugging * Working with the Google Chrome Extension API, which we had never worked with before. ## ✔️ Accomplishments that we're proud of * Learning how to work with Google Chrome Extensions, which was an entirely new concept for us. * Leveraging Distributed Computation, a very handy and intuitive API, to make our application significantly faster and better to use. ## 📚 What we learned * The Chrome Extension API is incredibly difficult, budget 2x as much time for figuring it out! * Working on a project you can relate to helps a lot with motivation * Chakra UI is legendary and a lifesaver * The Chrome Extension API is very difficult, did we mention that already? ## 🔭 What's next for AcadeME? * Implementing a language translation toggle to help international students * Note Encryption * Note Sharing Links * A Distributive Quiz mode, for online users!
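For readers curious about the BART summarization step, here is a hedged local sketch using the Hugging Face `transformers` pipeline; AcadeME's actual deployment runs the model through DCP and NLP Cloud rather than locally, and the chunk size and length limits below are illustrative assumptions.

```python
from transformers import pipeline

# Local illustration of BART summarization; the project's version ran via DCP/NLP Cloud.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_lecture(transcript, chunk_words=400):
    # BART has a limited input window, so split the transcript into word chunks
    words = transcript.split()
    chunks = [" ".join(words[i:i + chunk_words]) for i in range(0, len(words), chunk_words)]
    notes = []
    for chunk in chunks:
        out = summarizer(chunk, max_length=120, min_length=30, do_sample=False)
        notes.append(out[0]["summary_text"])
    return "\n- ".join(["Lecture notes:"] + notes)
```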
losing
## Inspiration During the pandemic, we found ourselves sitting down all day long in a chair, staring into our screens and stagnating away. We wanted a way for people to get their blood rushing and have fun with a short but simple game. Since we were interested in getting into Augmented Reality (AR) apps, we thought it would be perfect to have a game where the player has to actively move a part of their body to dodge something they see on the screen, and thus Splatt was born! ## What it does All one needs is a browser and a webcam to start playing the game! The goal is to dodge falling barrels and incoming cannonballs with your head, but you can also use your hands to "cut" down the projectiles (you'll still lose partial lives, so don't overuse your hand!). ## How we built it We built the game using JavaScript, React, TensorFlow, and WebGL2. Horace worked on the 2D physics, getting the projectiles to fall and be thrown around, as well as on the hand tracking. Thomas worked on the head tracking using TensorFlow and outputting the values we needed to implement collision, as well as the basic game menu. Lawrence worked on connecting the projectile physics and the head/hand tracking together to ensure proper collision could be detected, as well as restructuring the app to be more optimized than before. ## Challenges we ran into It was difficult getting both the projectiles and the head/hand from the video on the same layer - we had initially used two separate canvases for this, but we quickly realized it would be difficult to communicate from one canvas to another without causing too many rerenders. We ended up using a single canvas, and after adjusting how we retrieved the coordinates of the projectiles and the head/hand, we were able to get collisions to work. ## Accomplishments that we're proud of We're proud of how we divvied up the work and were able to connect everything together to get a working game. During the process of making the game, we were excited to get collisions working, since that was the biggest part of making our game complete. ## What we learned We learned more about implementing 2D physics in JavaScript, how we could use TensorFlow to create AR apps, and a little bit of machine learning through that. ## What's next for Splatt * Improving the UI for the game * Difficulty progression (1 barrel, then 2 barrels, then 2 barrels and 1 cannonball, and so forth)
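To illustrate the collision step described above, here is a small sketch of a circle-versus-rectangle hit test (head hitbox against a falling barrel); Splatt itself is written in JavaScript, so the Python below and its example coordinates are purely illustrative assumptions.

```python
def circle_hits_rect(cx, cy, r, rx, ry, rw, rh):
    """True if a circular head hitbox overlaps an axis-aligned projectile box."""
    # Closest point on the rectangle to the circle centre
    nearest_x = max(rx, min(cx, rx + rw))
    nearest_y = max(ry, min(cy, ry + rh))
    dx, dy = cx - nearest_x, cy - nearest_y
    return dx * dx + dy * dy <= r * r

# e.g. barrel at (300, 80) sized 40x60, head centred at (320, 130) with radius 50
print(circle_hits_rect(320, 130, 50, 300, 80, 40, 60))  # True -> lose a life
```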
## Inspiration Snore or get poured on yo pores. Coming into grade 12, the decision to go to a hackathon at this time was super ambitious. We knew that coming to this hackathon we needed to be fully focused 24/7. The problem is, we both procrastinate and push things to the last minute, so we created a project to help us. ## What it does It's a project with 3 stages of escalation to get our attention. In the first stage, we use a voice command and a text message to get our own attention. If I'm still distracted, we move into stage two, where it sends a more serious voice command and then a phone call to my phone, as I'm probably on my phone. If I decide to ignore the phone call, the project gets serious and commences the final stage, where we bring out the big guns. When you ignore all 3 stages, we send a command that triggers the water gun and shoots the distracted victim, which is myself. If I try to resist and run away, the water gun automatically tracks me and shoots me wherever I go. ## How we built it We built it using fully recyclable materials; as the future innovators of tomorrow, our number one priority is the environment. We made our foundation entirely out of scrap cardboard, chopsticks, and hot glue. The turret was built using the hardware kit we brought from home, with 3 servos mounted on stilts to hold the water gun in the air. On the software side, we hacked a MindFlex to read brainwaves and activate the water gun trigger. We used a string mechanism to pull the trigger and OpenCV to track the user's face. ## Challenges we ran into One challenge we ran into was trying to multi-thread the Arduino and Python together. Connecting the MindFlex data with the Arduino was a pain in the ass; we came up with many different solutions, but none of them were efficient. The data was delayed from reading and writing back and forth, and the camera display speed slowed down because of that, making the tracking worse. We eventually pushed through and figured out a solution. ## Accomplishments that we're proud of We are proud of our engineering ability to create a turret out of spare scraps. Combining the Arduino and the MindFlex was something we'd never done before, and making it work was such a great feeling. Using Twilio to send messages and calls was also new to us, but getting familiar with its capabilities opened a new door of opportunities for future projects. ## What we learned We've learned many things from using Twilio and hacking into the MindFlex, and we've learned a lot more about electronics, circuitry, and procrastination through this. After creating this project, we've learned discipline, as we never missed a deadline ever again. ## What's next for You snooze you lose. We dont lose Coming into this hackathon, we had a lot of ambitious ideas that we had to scrap due to the lack of materials, including a life-size human robot, though we settled on an automatic water gun turret controlled through brain signals. We want to expand on this project using brain signals, as this was our first hackathon trying them out.
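A minimal sketch of the face-tracking-to-turret loop described above: OpenCV finds a face and a pan angle is sent to the Arduino over serial. The serial port name and the "P<angle>" message format are assumptions for illustration, not the team's actual protocol.

```python
import cv2
import serial

# Port name and the "P<angle>\n" message format are assumptions for illustration
arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break  # camera disconnected; stop tracking
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = faces[0]
        # Map the face centre (0..frame width) to a 0..180 degree pan angle
        pan = int(180 * (x + w / 2) / frame.shape[1])
        arduino.write(f"P{pan}\n".encode())
```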
## Inspiration Today we live in a world that is all online, with the pandemic forcing us to stay home. Because of this, our team and the people around us were forced to rely on video conference apps for school and work. Although these apps function well, there was always something missing, and we were faced with new problems we weren't used to facing. Personally, I kept forgetting to mute my mic when going to the door to yell at my dog, accidentally disturbing the entire video conference. For others, it was a lack of accessibility tools that made the experience more difficult. And some were simply scared of something embarrassing happening during class while it was being recorded, to be posted and seen on repeat! We knew something had to be done to fix these issues. ## What it does Our app essentially takes over your webcam to give you more control over what it does and when it does it. The goal of the project is to add all the missing features that we wished were available during all our past video conferences. Features: Webcam: 1 - Detect when the user is away This feature will automatically blur the webcam feed when a user walks away from the computer to ensure the user's privacy. 2- Detect when the user is sleeping We all fear falling asleep on a video call and being recorded by others; our app will detect if the user is sleeping and will automatically blur the webcam feed. 3- Only show registered users Our app allows the user to train a simple AI face recognition model in order to only show the webcam feed if they are present. This is ideal for preventing one's children from accidentally walking in front of the camera and putting on a show for all to see :) 4- Display Custom Unavailable Image Rather than blur the frame, we give the option to choose a custom image to pass to the webcam feed when we want to block the camera. Audio: 1- Mute Microphone when video is off This option allows users to additionally have the app mute their microphone whenever the app changes the video feed to block the camera. Accessibility: 1- ASL Subtitles Using another AI model, our app will translate your ASL into text, allowing mute people another channel of communication. 2- Audio Transcriber This option will automatically transcribe everything you say onto your webcam feed for anyone to read. Concentration Tracker: 1- Tracks the user's concentration level throughout their session, making them aware of the time they waste and giving them the chance to change their bad habits. ## How we built it The core of our app was built with Python, using OpenCV to manipulate the image feed. The AIs used to detect the different visual situations are a mix of haar\_cascades from OpenCV and deep learning models that we built on Google Colab using TensorFlow and Keras. The UI of our app was created using Electron with React.js and TypeScript, using a variety of different libraries to help support our app. The two parts of the application communicate using WebSockets from socket.io as well as a synchronized Python thread. ## Challenges we ran into Damn, where to start haha... Firstly, Python is not a language any of us are too familiar with, so from the start, we knew we had a challenge ahead. Our first main problem was figuring out how to hijack the webcam video feed and pass the feed on to be used by any video conference app, rather than making our app for a specific one. The next challenge we faced was mainly figuring out a method of communication between our front end and our Python code.
With none of us having much experience in either Electron or Python, we might have spent a bit too much time on Stack Overflow, but in the end, we figured out how to leverage socket.io to allow for continuous communication between the two apps. Another major challenge was making the core features of our application communicate with each other. Since the major parts (speech-to-text, camera feed, camera processing, socket.io, etc.) were mainly running on blocking threads, we had to figure out how to properly do multi-threading in an environment we weren't familiar with. This caused a lot of issues during development, but we ended up with a pretty good understanding near the end and got everything working together. ## Accomplishments that we're proud of Our team is really proud of the product we have made, and we have already begun proudly showing it to all of our friends! Considering we all have an intense passion for AI, we are super proud of our project from a technical standpoint, having finally gotten the chance to work with it. Overall, we are extremely proud of our product and genuinely plan to optimize it further in order to use it within our courses and work conferences, as it is really a tool we need in our everyday lives. ## What we learned From a technical point of view, our team has learnt an incredible amount over the past few days. Each of us tackled problems using technologies we had never used before that we can now proudly say we understand how to use. For me, Jonathan, it was mainly learning how to work with OpenCV, following a 4-hour-long tutorial to learn the inner workings of the library and how to apply it to our project. For Quan, it was mainly creating a structure that would allow our Electron app and Python program to communicate without killing the performance. Finally, Zhi worked for the first time with the Google API in order to get our speech-to-text working; he also learned a lot of Python and about multi-threading in Python to set everything up together. Together, we all had to learn the basics of AI in order to implement the various models used within our application and to finally attempt (not a perfect model by any means) to create one ourselves. ## What's next for Boom. The Meeting Enhancer This hackathon is only the start for Boom, as our team is exploding with ideas!!! We have a few ideas on where to bring the project next. Firstly, we want to finish polishing the existing features in the app. Then we would love to make a marketplace that allows people to choose from any kind of trained AI to determine when to block the webcam feed. This would allow for limitless creativity from us and anyone who would want to contribute!!!!
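As a hedged sketch of the "blur when nobody is there" feature, the snippet below uses an OpenCV Haar cascade to detect a face and blurs the frame when none is found; the real app pipes processed frames into a virtual webcam and uses trained models beyond the cascade, so treat this as the detection-and-blur step only.

```python
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        # Nobody in frame: blur heavily before the frame would reach the virtual camera
        frame = cv2.GaussianBlur(frame, (51, 51), 0)
    cv2.imshow("Boom preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```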
partial
## Inspiration After seeing the breakout success that was Pokemon Go, my partner and I were motivated to create our own game that was heavily tied to physical locations in the real world. ## What it does Our game is supported on every device that has a modern web browser, with absolutely no installation required. You walk around the real world, fighting your way through procedurally generated dungeons that are tied to physical locations. If you find that a dungeon is too hard, you can pair up with some friends and tackle it together. Unlike Niantic, who monetized Pokemon Go using micro-transactions, we plan to monetize the game by allowing local businesses to bid on enhancements to their location in the game world. For example, a local coffee shop could offer an in-game bonus to players who purchase a coffee at their location. By offloading the cost of the game onto businesses instead of players, we hope to create a less "stressful" game, meaning players will spend more time having fun and less time worrying about when they'll need to cough up more money to keep playing. ## How we built it The stack for our game is built entirely around the Node.js ecosystem: express, socket.io, gulp, webpack, and more. For easy horizontal scaling, we make use of Heroku to manage and run our servers. Computationally intensive one-off tasks (such as image resizing) are offloaded onto AWS Lambda to help keep server costs down. To improve the speed at which our website and game assets load, all static files are routed through MaxCDN, a content delivery network with over 19 datacenters around the world. For security, all requests to any of our servers are routed through CloudFlare, a service which helps keep websites safe using traffic filtering and other techniques. Finally, our public-facing website makes use of Mithril MVC, an incredibly fast and light one-page-app framework. Using Mithril allows us to keep our website incredibly responsive and performant.
## Inspiration Both chronic pain disorders and opioid misuse are on the rise, and the two are even more related than you might think -- over 60% of people who misused prescription opioids did so for the purpose of pain relief. Despite the adoption of PDMPs (Prescription Drug Monitoring Programs) in 49 states, the US still faces a growing public health crisis -- opioid misuse was responsible for more deaths than cars and guns combined in the last year -- and lacks the high-resolution data needed to implement new solutions. While we were initially motivated to build Medley as an effort to address this problem, we quickly encountered another (and more personal) motivation. As one of our members has a chronic pain condition (albeit not one that requires opioids), we quickly realized that there is also a need for a medication and symptom tracking device on the patient side -- oftentimes giving patients access to their own health data and medication frequency data can enable them to better guide their own care. ## What it does Medley interacts with users on the basis of a personal RFID card, just like your TreeHacks badge. To talk to Medley, the user presses its button and will then be prompted to scan their ID card. Medley is then able to answer a number of requests, such as to dispense the user’s medication or contact their care provider. If the user has exceeded their recommended dosage for the current period, Medley will suggest a number of other treatment options added by the care provider or the patient themselves (for instance, using a TENS unit to alleviate migraine pain) and ask the patient to record their pain symptoms and intensity. ## How we built it This project required a combination of mechanical design, manufacturing, electronics, on-board programming, and integration with cloud services/our user website. Medley is built on a Raspberry Pi, with the raspiaudio mic and speaker system, and integrates an RFID card reader and motor drive system which makes use of Hall sensors to accurately actuate the device. On the software side, Medley uses Python to make calls to the Houndify API for audio and text, then makes calls to our Microsoft Azure SQL server. Our website uses the data to generate patient and doctor dashboards. ## Challenges we ran into Medley was an extremely technically challenging project, and one of the biggest challenges our team faced was the lack of documentation associated with entering uncharted territory. Some of our integrations had to be twisted a bit out of shape to fit together, and many tragic hours spent just trying to figure out the correct audio stream encoding. Of course, it wouldn’t be a hackathon project without overscoping and then panic as the deadline draws nearer, but because our project uses mechanical design, electronics, on-board code, and a cloud database/website, narrowing our scope was a challenge in itself. ## Accomplishments that we're proud of Getting the whole thing into a workable state by the deadline was a major accomplishment -- the first moment we finally integrated everything together was a massive relief. ## What we learned Among many things: The complexity and difficulty of implementing mechanical systems How to adjust mechatronics design parameters Usage of Azure SQL and WordPress for dynamic user pages Use of the Houndify API and custom commands Raspberry Pi audio streams ## What's next for Medley One feature we would have liked more time to implement is better database reporting and analytics. 
We envision Medley’s database as a patient- and doctor-usable extension of the existing state PDMPs, and would be able to leverage patterns in the data to flag abnormal behavior. Currently, a care provider might be overwhelmed by the amount of data potentially available, but adding a model to detect trends and unusual events would assist with this problem.
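To make the abnormal-behavior flagging concrete, here is a toy sketch (not Medley's actual analytics) that counts dispense events per day from the log and flags days exceeding a prescribed limit; the timestamps and limit are made-up example values.

```python
from collections import Counter
from datetime import datetime

def flag_unusual_days(dispense_log, max_per_day):
    """dispense_log: ISO timestamps for one patient; returns days over the limit."""
    per_day = Counter(datetime.fromisoformat(t).date() for t in dispense_log)
    return sorted(day for day, n in per_day.items() if n > max_per_day)

log = ["2020-02-15T08:01:00", "2020-02-15T13:40:00", "2020-02-15T21:55:00",
       "2020-02-16T09:12:00"]
print(flag_unusual_days(log, max_per_day=2))  # [datetime.date(2020, 2, 15)]
```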
## Inspiration We got our inspiration from the various trends and buzzwords that currently dominate the tech space. From established companies to startups to hackathons, everyone seems to be irrationally chasing the newest, hottest thing, even in cases where the technologies do not contribute positively to the user experience. We noticed that many of these trends, such as AI and Blockchain/Cryptocurrency, involve the hoarding of large numbers of GPUs, so we made a game based on that phenomenon. Gameplay-wise, we wanted to take that idea to two extreme ends of a spectrum. On one end, you can slowly make money by solving the problems by hand with a pencil and paper; on the other end, you can harness the power of the sun to provide enough energy for an excessive number of computations, getting you a vast amount of wealth. ## What it does Our project is a functional idle game. In it, the player can make money over time by buying better computer hardware, or they can actively pitch startup ideas to gain a large boost of funds. ## How we built it We used the Unity game engine and the C# programming language. ## Challenges we ran into N/A ## Accomplishments that we're proud of We are quite proud of our final product and think it has plenty of charm and entertainment value. ## What we learned Throughout the process, we learned better ways to organize our code and assets, which vastly improved our ability to work on further improvements. ## What's next for Gladiator Go The game is very easily expandable now that we have laid the foundation. We wanted to add additional mini-games to play, but we were only able to make one. In the future, we may add additional things to buy and fill out the rest of the intended mini-games.
winning
## Inspiration GeoGuesser is a fun game which went viral in the middle of the pandemic, but after having played it for a long time, it started feeling tedious and boring. Our Discord bot tries to freshen up the stale air by providing a playlist of iconic locations in addition to exciting trivia like movies and monuments for that extra hit of dopamine when you get the right answers! ## What it does The bot provides you with playlist options, currently restricted to Capital Cities of the World, Horror Movie Locations, and Landmarks of the World. After selecting a playlist, five random locations are chosen from a list of curated locations. You are then provided a picture from which you have to guess the location and the bit of trivia associated with it, like the name of the movie from which we selected the location. You get points for how close you are to the location and for whether you got the bit of trivia correct. ## How we built it We used the *discord.py* library for coding the bot itself and interfacing it with Discord. We stored our playlist data in external *Excel* sheets which we parsed through as required. We utilized the *google-streetview* and *googlemaps* Python libraries for accessing the Google Maps Street View APIs. ## Challenges we ran into For initially storing the data, we thought to use a playlist class and store the playlist data as an array of playlist objects, but we instead used Excel for easier storage and updating. We also had some problems with the Google Maps Static Street View API in the beginning, but they were mostly syntax and understanding issues which were overcome soon. ## Accomplishments that we're proud of Getting the Discord bot working and sending images from the API for the first time gave us an incredible feeling of satisfaction, as did implementing the input/output flows. Our points calculation system based on the Haversine formula for distances on spheres was also an accomplishment we're proud of. ## What we learned We learned better syntax and practices for writing Python code. We learned how to use the Google Cloud Platform and the Street View API. Some of the libraries we delved deeper into were Pandas and pandasql. We also learned a thing or two about human-computer interaction, as designing an interface for gameplay on Discord was rather interesting. ## What's next for Geodude? Possibly adding more topics, and refining the loading of Street View images to better reflect the actual location.
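Since the write-up credits the Haversine formula for scoring, here is a small sketch of that calculation plus a possible points curve; the exponential falloff constant and the 5000-point maximum are assumptions for illustration, not Geodude's actual tuning.

```python
from math import radians, sin, cos, asin, sqrt, exp

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on a sphere of radius 6371 km
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def round_score(guess, answer, max_points=5000):
    # Exponential falloff: full points for a perfect guess, ~37% at 2000 km off
    d = haversine_km(*guess, *answer)
    return int(max_points * exp(-d / 2000))

print(round_score((48.8566, 2.3522), (51.5074, -0.1278)))  # Paris guessed, London correct
```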
## Inspiration The memory palace, also known as the method of loci, is a technique used to memorize large amounts of information, such as long grocery lists or vocabulary words. First, think of a familiar place in your life. Second, imagine the sequence of objects from the list along a path leading around your chosen location. Lastly, take a walk along your path and recall the information that you associated with your surroundings. It's quite simple, but extraordinarily effective. We've seen tons of requests on Internet forums for a program that can generate a simulator to make it easier to "build" the palace, so we decided to develop an app that satisfies this demand — and for our own practicality, too. ## What it does Our webapp begins with a list provided by the user. We extract the individual words from the list and generate random images of these words from Flickr, a photo-sharing website. Then, we insert these images into a Google Streetview map that the user can walk through. The page displays the Google Streetview with the images. When walking near a new item from his/her list, a short melody (another mnemonic trick) is played based on the word. As an optional feature of the program, the user can take the experience to a whole new level through Google Cardboard by accessing the website on a smart device. ## How we built it We started by searching for two APIs: one that allows for 3D interaction with an environment, and one that can find image URLs off the web based on Strings. For the first, we used Google Streetview, and for the second, we used a Flickr API. We used the Team Maps Street Overlay Demo as a jumping off point for inserting images into street view. Used JavaScript, HTML, CSS ## Challenges we ran into All of us are very new to JavaScript. It was a struggle to get different parts of the app to interact with each other asynchronously. ## Accomplishments that we're proud of Building a functional web app with no prior experience Creating melodies based on Strings Virtual reality rendering using Google Cardboard Website design ## What we learned JavaScript, HTML, CSS ## What's next for Souvenir Mobile app More accurate image search Integrating jingles
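A hedged sketch of the image-lookup step described above, using Flickr's `flickr.photos.search` method; Souvenir's implementation is in JavaScript, and the API key, size suffix, and static-image URL pattern below should be checked against Flickr's docs before use.

```python
import requests

FLICKR_KEY = "YOUR_FLICKR_KEY"  # placeholder

def image_urls_for(word, count=3):
    # flickr.photos.search returns photo records we turn into static image URLs
    params = {"method": "flickr.photos.search", "api_key": FLICKR_KEY,
              "text": word, "per_page": count, "format": "json",
              "nojsoncallback": 1, "sort": "relevance"}
    photos = requests.get("https://api.flickr.com/services/rest/",
                          params=params).json()["photos"]["photo"]
    # URL pattern is an assumption based on Flickr's documented static image scheme
    return [f"https://live.staticflickr.com/{p['server']}/{p['id']}_{p['secret']}_w.jpg"
            for p in photos]

print(image_urls_for("avocado"))
```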
## Inspiration In large cities and other busy, crowded places, people often struggle to find parking. There is little information that guides people to available parking spaces. We hope to assemble the resources of parking lots, build databases, and guide people to the best-choice parking lot. The idea could become part of Google Maps. ## What it does It mainly has two functionalities. Firstly, it collects information on parking lots that are open to the public, which includes the parking-lot map and a system that keeps track of changes in the available spots in each lot. The database for each parking place lives on a local network. Whenever a user is near one of these networks (i.e., can detect the network signal), they can get information about where to find a spot in the parking lot (usually only the number of empty spots is shown, and people sometimes still have a hard time finding one). People can search for their destination, and recommended parking places will then be shown. Secondly, we assemble all parkable resources and allow people who hope to rent out their own parking space to do so. Many people have their own parking spots, but they don't use them until they get back home. During the daytime, they can choose to make their spots available to the public, and the public can park there after paying some money. This really helps in busy, crowded cities with residential communities. ## How I built it It is built in Python, mainly with the tkinter package. All the materials, including the database system, live in the Python program. It is really efficient to have a local network share the information, since there is usually little signal underneath most parking lots. ## Challenges I ran into There needs to be an admin end for database management, a database in the middle that stores all the values, and a user end that gets info from the database through some interface. This connection was a little hard, since we needed to visualize all the data clearly and manage it correctly. ## Accomplishments that I'm proud of I learned the relevant Python package in one night. I spent most of my time in workshops for ideas and opportunities, so I started really late, but I still almost finished the structure and the interface. There are many little interesting things in the software, such as some secret codes that unlock special interfaces. I love it! ## What I learned Python packages, and using Python to build interactive database management. I really experienced the excitement of developing from back end to front end. ## What's next for BlockShelter We should try to move everything beyond the limits of the local network. We will build more APIs for the software and try to improve the contract for the private parking-lot sharing process. We will also need more real-world situations rather than mostly demos.
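As a toy illustration of the availability database and recommendation idea (BlockShelter's real version sits behind a tkinter interface on a local network), the sketch below keeps lots in a dictionary, updates free spots, and recommends the nearest lot with space; all names, coordinates, and counts are made up.

```python
# Toy in-memory stand-in for the local-network parking database
lots = {
    "Downtown Garage": {"location": (37.3355, -121.8939), "free": 12, "total": 80},
    "5th St Community": {"location": (37.3402, -121.8863), "free": 0,  "total": 6},
}

def update_spot(lot, delta):
    # Clamp the free-spot count between 0 and the lot's capacity
    lots[lot]["free"] = max(0, min(lots[lot]["total"], lots[lot]["free"] + delta))

def recommend(destination):
    """Pick the closest lot that still has a free spot (squared-degree distance)."""
    open_lots = [(name, info) for name, info in lots.items() if info["free"] > 0]
    def dist2(info):
        dx = info["location"][0] - destination[0]
        dy = info["location"][1] - destination[1]
        return dx * dx + dy * dy
    return min(open_lots, key=lambda kv: dist2(kv[1]))[0] if open_lots else None

update_spot("Downtown Garage", -1)   # a car just parked
print(recommend((37.336, -121.89)))  # -> "Downtown Garage"
```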
winning
## Inspiration Online shopping has been the norm for a while, but the COVID-19 pandemic has impelled even more businesses and customers alike to shift to the digital space. Unfortunately, it also accentuated the frustrations associated with online shopping. People often purchase items online, only to find out what they receive is not exactly what they want. AR is the perfect technology to bridge the gap between the digital and physical spaces; we wanted to apply that to tackle issues within online shopping to strengthen interconnection within e-commerce! ## What it does scannAR allows its users to scan QR codes of e-commerce URLs, placing those items in front of where the user is standing. Users may also scan physical items to have them show up in AR. In either case, users may drag and drop anything that can be scanned or uploaded to the real world. Now imagine the possibilities. Have technicians test if a component can fit into a small space, or small businesses send their products to people across the planet virtually, or teachers show off a cool concept with a QR code and the students' phones. Lastly, yes, you can finally play Minecraft or build yourself a fake house cause I know house searching in Kingston is hard. ## How we built it We built the scannAR app using Unity with the Lean Touch and Big Furniture Pack assets. We used HTML, CSS, Javascript to create the business website for marketing purposes. ## Challenges we ran into This project was our first time working with Unity and AR technology, so we spent many hours figuring their processes out. A particular challenge we encountered was manipulating objects on our screens the way we wanted to. With the time constraint, the scope of UI/UX design was limited, making some digital objects look less clean. ## Accomplishments that we're proud of Through several hours of video tutorials and some difficulty, we managed to build a functioning AR application through Unity, while designing a clean website to market it. We felt even prouder about making a wide stride towards tackling e-commerce issues that many of our friends often rant about. ## What we learned In terms of technical skills, we learned how to utilize AR technology, specifically Unity. Initially, we had trouble moving objects on Unity; after completing this project, we have new Unity skills we can apply throughout our next hackathons and projects. We learned to use our existing front-end web development skills to augment our function with form. ## What's next for scannAR In the near future, we aim to flesh out the premium subscription features to better cater to specific professions. We also plan on cleaning up the interface to launch scannAR on the app stores. After its release, it will be a constant cycle of marketing, partnerships with local businesses, and re-evaluating processes.
## What is 'Titans'? VR gaming shouldn't just be a lonely, single-player experience. We believe that we can elevate the VR experience by integrating multiplayer interactions. We imagined a mixed VR/AR experience where a single VR player's playing field can be manipulated by 'Titans' -- AR players who can plan out the VR world by placing specially designed tiles -- blocking the VR player from reaching the goal tile. ## How we built it We had three streams of development/design to complete our project: the design, the VR experience, and the AR experience. For design, we used Adobe Illustrator and Blender to create the assets used in this project. We had to be careful that our tile designs were recognizable by both human and AR standards, as the tiles would be used by the AR players to lay out the environment the VR players would be placed in. Additionally, we pursued a low-poly art style with our 3D models in order to reduce the design time for building intricate models and to complement the retro/pixel style of our eventual AR environment tiles. For the VR side of the project, we chose to build a Unity VR application targeting Windows and Mac with the Oculus Rift. One of our most notable achievements here is a custom terrain tessellation and generation engine that mimics several environmental biomes represented in our game, as well as integrating a multiplayer service powered by Google Cloud Platform. The AR side of the project uses Google's ARCore and Google Cloud Anchors API to seamlessly stream anchors (the tiles used in our game) to other devices playing in the same area. ## Challenges we ran into Hardware issues were one of the biggest time-drains in this project. Setting up all the programs -- Unity and its libraries, Blender, etc. -- took up the initial hours following the brainstorming session. The biggest challenge was our Alienware MLH laptop resetting overnight. This was a frustrating moment for our team, as we were in the middle of testing our AR features, such as the compatibility of our environment tiles. ## Accomplishments that we're proud of We're proud of the consistent effort and style that went into the game design; from the physical environment tiles to the 3D models, we tried our best to create a pleasant-to-look-at game style. Our game world generation is something we're also quite proud of. The fact that we were able to develop an immersive world that we can explore via VR is quite surreal. Additionally, we were able to accomplish some form of AR experience where the phone recognizes the environment tiles. ## What we learned All of our teammates learned something new: multiplayer in Unity, ARCore, Blender, etc... Most importantly, we learned about the various technical and planning challenges involved in AR/VR game development. ## What's next for Titans AR/VR We hope to eventually connect the AR portion and the VR portion of the project together the way we envisioned: where AR players can manipulate the virtual world of the VR player.
We drew our first kindling of inspiration from the global success of Pokemon Go as an AR gaming platform that managed to have a significant social impact. Brainstorming possible ways to retool its AR/geolocation base, we immediately realized its immense potential to be used for social good due to its ability to meaningfully engage entire populations, its novelty and distinctiveness from other social media platforms, and ability to be integrated into peoples’ everyday lives. Instead of Pokemon, obviously, we opted for simple yet likable balloons as AR markers, that would be available to people to drop, find, and engage with. We were especially interested in two specific applications of this platform: (1) a feature for people to drop balloons on a map that others could find to receive a message of positive affirmation, and (2) a feature allowing local businesses to drop coupons in areas around their businesses to both improve customer value, and to attract customers in their region to their business. It was a bit difficult to get started because we needed to find a way to combine AR with maps in a similar manner to Pokemon Go, and most of the tools we found were outdated and incompatible with more recent versions of Unity. Eventually, we decided on using Vuforia for AR due to its built-in integration with Unity, Mapbox for mapping because it seemed to be the only viable option, and Firebase for databases. We then used C# to bring these all together. We faced many challenges throughout this project, with the most frustrating challenge being the amount of time it took to download the necessary software. Additionally, it was rather difficult near the end to consolidate the AR and mapping and database code (all of which were done separately and were going to be merged together at the end), especially because the team members did it on different versions of Unity. In the end, however, we were able to overcome these software challenges and use our shared vision of bubbly positivity to develop an ambitious and socially-conscious game. While we didn't have time to implement many of the features we intended, we would still like to see this features in the future. These include special bubble coupons, browsing through past bubbles, a tutorial, and potentially a spin on the map by introducing a heart that instead gets hotter or colder based on whether the player walks closer or farther to a bubble. We also would reach out to local businesses (much like Snackpass) and gauge interest in having special business-specific bubble marketing opportunities.
partial
## Inspiration As lane-keep assist and adaptive cruise control features are becoming more available in commercial vehicles, we wanted to explore the potential of a dedicated collision avoidance system ## What it does We've created an adaptive, small-scale collision avoidance system that leverages Apple's AR technology to detect an oncoming vehicle in the system's field of view and respond appropriately, by braking, slowing down, and/or turning ## How we built it Using Swift and ARKit, we built an image-detecting app which was uploaded to an iOS device. The app was used to recognize a principal other vehicle (POV), get its position and velocity, and send data (corresponding to a certain driving mode) to an HTTP endpoint on Autocode. This data was then parsed and sent to an Arduino control board for actuating the motors of the automated vehicle ## Challenges we ran into One of the main challenges was transferring data from an iOS app/device to Arduino. We were able to solve this by hosting a web server on Autocode and transferring data via HTTP requests. Although this allowed us to fetch the data and transmit it via Bluetooth to the Arduino, latency was still an issue and led us to adjust the danger zones in the automated vehicle's field of view accordingly ## Accomplishments that we're proud of Our team was all-around unfamiliar with Swift and iOS development. Learning the Swift syntax and how to use ARKit's image detection feature in a day was definitely a proud moment. We used a variety of technologies in the project and finding a way to interface with all of them and have real-time data transfer between the mobile app and the car was another highlight! ## What we learned We learned about Swift and more generally about what goes into developing an iOS app. Working with ARKit has inspired us to build more AR apps in the future ## What's next for Anti-Bumper Car - A Collision Avoidance System Specifically for this project, solving an issue related to file IO and reducing latency would be the next step in providing a more reliable collision avoiding system. Hopefully one day this project can be expanded to a real-life system and help drivers stay safe on the road
## Inspiration Peer-review is critical to modern science, engineering, and healthcare endeavors. However, the system for implementing this process has lagged behind and results in expensive costs for publishing and accessing material, long turn around times reminiscent of snail-mail, and shockingly opaque editorial practices. Astronomy, Physics, Mathematics, and Engineering use a "pre-print server" ([arXiv](https://arxiv.org)) which was the early internet's improvement upon snail-mailing articles to researchers around the world. This pre-print server is maintained by a single university, and is constantly requesting donations to keep up the servers and maintenance. While researchers widely acknowledge the importance of the pre-print server, there is no peer-review incorporated, and none planned due to technical reasons. Thus, researchers are stuck with spending >$1000 per paper to be published in journals, all the while individual article access can cost as high as $32 per paper! ([source](https://www.nature.com/subscriptions/purchasing.html)). For reference, a single PhD thesis can contain >150 references, or essentially cost $4800 if purchased individually. The recent advance of blockchain and smart contract technology ([Ethereum](https://www.ethereum.org/)) coupled with decentralized file sharing networks ([InterPlanetaryFileSystem](https://ipfs.io)) naturally lead us to believe that archaic journals and editors could be bypassed. We created our manuscript distribution and reviewing platform based on the arXiv, but in a completely decentralized manner. Users utilize, maintain, and grow the network of scholarship by simply running a simple program and web interface. ## What it does arXain is a Dapp that deals with all the aspects of a peer-reviewed journal service. An author (wallet address) will come with a bomb-ass paper they wrote. In order to "upload" their paper to the blockchain, they will first need to add their file/directory to the IPFS distributed file system. This will produce a unique reference number (DOI is currently used in journals) and hash corresponding to the current paper file/directory. The author can then use their address on the Ethereum network to create a new contract to submit the paper using this reference number and paperID. In this way, there will be one paper per contract. The only other action the author can make to that paper is submitting another draft. Others can review and comment on papers, but an address can not comment/review its own paper. The reviews are rated on a "work needed", "acceptable" basis and the reviewer can also upload an IPFS hash of their comments file/directory. Protection is also built in such that others can not submit revisions of the original author's paper. The blockchain will have a record of the initial paper submitted, revisions made by the author, and comments/reviews made by peers. The beauty of all of this is one can see the full transaction histories and reconstruct the full evolution of the document. One can see the initial draft, all suggestions from reviewers, how many reviewers, and how many of them think the final draft is reasonable. ## How we built it There are 2 main back-end components, the IPFS file hosting service and the Ethereum blockchain smart contracts. They are bridged together with ([MetaMask](https://metamask.io/)), a tool for connecting the distributed blockchain world, and by extension the distributed papers, to a web browser. We designed smart contracts in Solidity. 
The IPFS interface was built using a combination of Bash, HTML, and a lot of regex! Then we connected the IPFS distributed network with the Ethereum blockchain using MetaMask and JavaScript. ## Challenges we ran into On the Ethereum side, setting up the Truffle Ethereum framework and test networks was challenging. Learning the limits of Solidity and constantly reminding ourselves that we had to remain decentralized was hard! The IPFS side required a lot of clever regex-ing. Ensuring public access to researchers' manuscripts and review histories required proper identification and distribution on the network. The hardest part was using MetaMask and JavaScript to call our contracts and connect the blockchain to the browser. We struggled for hours trying to get JavaScript to deploy a contract on the blockchain. We were all new to functional programming. ## Accomplishments that we're proud of Closing all the curly bois and parentheticals in JavaScript. Learning a whole lot about the blockchain and IPFS. We went into this weekend wanting to learn how the blockchain worked, and came out having learned about Solidity, IPFS, JavaScript, and a whole lot more. You can see our "genesis-paper" on an IPFS gateway (a bridge between HTTP and IPFS) [here](https://gateway.ipfs.io/ipfs/QmdN2Hqp5z1kmG1gVd78DR7vZmHsXAiSbugCpXRKxen6kD/0x627306090abaB3A6e1400e9345bC60c78a8BEf57_1.pdf) ## What we learned We went into this knowing that there was a way to write smart contracts, that IPFS existed, and minimal JavaScript. We gained intimate knowledge of setting up the Truffle Ethereum framework, Ganache, and test networks, along with the development side of Ethereum Dapps: the Solidity language and JavaScript tests with the Mocha framework. We learned how to navigate the filespace of IPFS, hash and organize directories, and how file distribution works on a P2P swarm. ## What's next for arXain With some more extensive testing, arXain is ready for the Ropsten test network *at the least*. If we had a little more ETH to spare, we would consider launching our Dapp on the Main Network. arXain PDFs are already on the IPFS swarm and can be accessed by any IPFS node.
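For readers who want to reproduce the "add a manuscript directory to IPFS and get its hash" step, here is a hedged sketch that shells out to the IPFS CLI; arXain's actual flow goes through MetaMask and JavaScript, the directory name is made up, and the `-Q` (quieter) flag should be confirmed against your IPFS version.

```python
import subprocess

def add_paper_to_ipfs(directory):
    """Pin a manuscript directory with the local IPFS daemon and return its root hash."""
    # -r adds the whole directory, -Q prints only the final (root) hash
    result = subprocess.run(["ipfs", "add", "-r", "-Q", directory],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

paper_hash = add_paper_to_ipfs("./my-manuscript")  # hypothetical directory
print(f"Submit this reference to the contract: {paper_hash}")
print(f"Readable at: https://gateway.ipfs.io/ipfs/{paper_hash}")
```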
## Inspiration Like many drivers, when a major incident occurs on the road, we are often left afraid, anxious, and overwhelmed. Like many of our peers, we had little experience behind the wheel and barely understood how insurance claims work or what steps we should take if an accident occurs. We decided to innovate the process of filing insurance claims for people of all ages and diverse backgrounds, allowing for a quicker, more accessible, and user-friendly experience through SWIFT DETECT. ## What it does SWIFT DETECT is an app that utilizes machine learning to extract information from user-fed pictures and environmental context to auto-fill an insurance claims form. The automated process gives the user an informed, step-by-step guide on what to do after a collision. The machine learning software can also make informed decisions on whether to contact emergency services or towing services, and on whether the user will need a temporary vehicle, based on the picture evidence the user submits. This automated process allows the user to gain control over the situation and get back on track with their day-to-day activities faster than with traditional methods. ## How we built it SWIFT DETECT was made using Node.js and the CARSXE ML API. ## Challenges we ran into Initially, we tried creating our own ML model; however, we faced issues gathering datasets to train it with. We therefore utilized the pre-existing CARSXE ML API. However, this API proved to be very challenging to use. ## Accomplishments that we're proud of We are proud to have utilized our knowledge of tech to engineer a meaningful product that impacts our society in a positive way. We are proud to have engineered a product that caters to a diverse group of end-users and ultimately puts the user first. ## What we learned Through the process of planning and executing our hack, we have learned a lot about the insurance industry and ML models. ## What's next for SWIFT DETECT SWIFT DETECT hopes to take a preventative approach to vehicle collisions. We will do so by becoming the primary source of information about your vehicle's health and longevity. We aim to reduce the number of collisions by analyzing a car’s mechanical parts and alerting the user when it is time for a replacement or repair. Through the use of smart car features, we want to deliver rapid and accurate reports on the current status of your vehicle.
winning
View a demo of our project here! <https://youtu.be/9WfTZi9KiVw> ## Inspiration With the 2024 presidential election – a historical and critical event – coming up in less than a month, we noticed that there was a surplus of election information on the internet, but a lack of organization. At a more targeted level, even less people are informed about their state senators and representatives, despite their direct impact on voting issues. To address this issue we created Vota, which aims to centralize representative information for voters – helping to streamline the process of making informed election decisions. Existing websites and resources for centralizing information for voters tend to be convoluted and difficult to navigate due to the quantity of information that they store. Vota’s focus on congressional positions allows for a more targeted user friendly experience. ## What it does Vota provides general election information to a user, and prompts users to enter their hometown state; from the input, a list of state representatives and senators is retrieved from the API and displayed in a user-friendly format. A chatbot is also provided to ask additional questions pertaining to representatives, voting, and election information, which aims to answer any questions that may arise. Through these functionalities, we hope to provide an easily accessible and centralized method of learning about state representatives and elections! ## How we built it & Challenges For this project, we used the Congress.gov API which can be found at: <https://gpo.congress.gov/#/member/member_list_by_state> The front-end of the website was primarily developed in React.js along with CSS / HTML, and the chatbot on the website was developed using a React package react-chatbot-kit. In terms of challenges, we faced issues obtaining and parsing JSON data from the Congress.gov API; we also faced difficulties having to prototype the website layout in Figma, and deciding certain functionalities of our final product. Developing and integrating the chatbot was also particularly challenging, since neither of us had worked with the react-chatbot-kit package before. Our backend was built using Python and Flask, which we started by looking into using APIs for our data source. We faced challenges while working with APIs - the first limited the number of calls we could make and the second wasn’t as up-to-date as we’d like, so we learned the value of well-maintained and open data sets. Experimenting with both APIs also strengthened our skills in working with JSONs and reading documentation. ## Accomplishments that we're proud of Coming out of this hackathon, we’re proud of developing a functional website with both a front and back-end! We also learned many of the frameworks we used for the project during the 36-hour duration of the hackathon. ## What's next for Vota: One vote, one voice We hope to find a more comprehensive API to provide the most up-to-date data. With enough data, Vota could be expanded to local elections. Vota’s chatbot also has potential for growth through partnership with a more established AI model in the future.
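As a sketch of the Flask-plus-Congress.gov flow described above, here is a route that fetches members for a state and returns a simplified list. The exact endpoint path, query parameters, and response field names below are assumptions; confirm them against the linked Congress.gov documentation before use.

```python
import os
import requests
from flask import Flask, jsonify

app = Flask(__name__)
API_KEY = os.environ.get("CONGRESS_API_KEY", "")  # placeholder

@app.route("/members/<state_code>")
def members(state_code):
    # Endpoint path and field names are assumptions -- check the Congress.gov docs
    url = f"https://api.congress.gov/v3/member/{state_code.upper()}"
    data = requests.get(url, params={"api_key": API_KEY, "format": "json"}).json()
    simplified = [{"name": m.get("name"),
                   "party": m.get("partyName"),
                   "state": m.get("state")}
                  for m in data.get("members", [])]
    return jsonify(simplified)

if __name__ == "__main__":
    app.run(debug=True)
```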
Gif DEMO! <https://imgur.com/dWZ8hqt> Inspiration In 2018, California voter turnout soared to 65%, the highest for any gubernatorial election in California since 2006. This was the first midterm election to exceed 100 million votes. With the rise of social media advocacy, the voter demographic has also shifted toward younger generations. However, we realized that although political activity has increased, political awareness has not increased at the same rate. The political jargon in existing resources and the lack of easily understandable political education might be the culprits here. To solve this problem, we designed Polli. What it does Polli chats with you on Messenger and updates your political preferences live on your Polli website. Our bot starts by asking you some basic background information, then loops through a series of political questions to understand what your preferences are. How I built it The bot is built with Dialogflow and Wix Code. Challenges I ran into We had trouble figuring out how we could retrieve the data from the users interacting with Dialogflow and display it live on the Wix website as we had planned. Furthermore, designing the conversation for political preferences proved challenging, as we were aiming for a more personal bot that emulated human sentiments. We also had trouble figuring out how to train the bot to recognize the different ways that users might respond in order to provide the proper responses. What I learned After much struggle, we learned how to link Dialogflow to Facebook Messenger and train its responses. What’s Next Polli allows us to collect voter data in terms of political preferences and other views. We could use this data to analyze and make projections for future elections and policies to be passed. With the website we will build being auto-generated from the responses, the user can eventually migrate to a personal profile that holds a collection of their preferences. From this, we would like to provide “actions that they can take” and integrate it with other political advocacy platforms.
## Inspiration While discussing potential ideas, Robin had to leave the call because of a fire alarm in his dorm — due to burning eggs in the dorm's kitchen. We saw potential for an easier and safer way to cook eggs. ## What it does Eggsy makes cooking eggs easy. Simply place your egg in the machine, customize the settings on your phone, and get a fully-cooked egg in minutes. Eggsy is a great, healthy, quick food option that you can cook from anywhere! ## How we built it The egg cracking and cooking are handled by a EZCracker egg cracker and hot plate, respectively. Servo motors control these devices and manage the movement of the egg within the machine. The servos are controlled by a Sparkfun Redboard, which is connected to a Raspberry Pi 3 running the back-end server. This server connects to the iOS app and web interface. ## Challenges we ran into One of the most difficult challenges was managing all the resources that we needed in order to build the project. This included gaining access to a 3D printer, finding a reliable way to apply a force to crack an egg, and the tools to put it all together. Despite these issues, we are happy with what we were able to hack together in such a short period time with limited resources! ## Accomplishments that we're proud of Creating a robust interface between hardware and software. We wanted the user to have multiple ways to interact with the device (the app, voice (Siri), quick actions, the web app) and the hardware to work reliably no matter how the user prefers to interact. We are proud of our ability to take a challenging project head-on, and get as much done as we possibly could. ## What we learned Hardware is hard. Solid architecture is important, especially when connecting many pieces together in order to create a cohesive experience. ## What's next for Eggsy We think Eggsy has a lot of potential, and many people we have demo'd to have really liked the idea. We would like to add additional egg-cooking options, including scrambled and hardboiled eggs. While Eggsy still a prototype, it's definitely possible to build a smaller, more reliable model in the future to market to consumers.
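A minimal sketch of how a Raspberry Pi could expose Eggsy's cook settings over HTTP and drive a servo with RPi.GPIO; the GPIO pin, servo angles, and duty-cycle mapping are assumptions for illustration, not Eggsy's actual wiring or calibration.

```python
import time
from flask import Flask, request, jsonify
import RPi.GPIO as GPIO

SERVO_PIN = 18  # assumed wiring
GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
servo = GPIO.PWM(SERVO_PIN, 50)  # standard 50 Hz hobby-servo signal
servo.start(0)

app = Flask(__name__)

def move_servo(angle):
    # Rough 0-180 degree to 2-12 % duty-cycle mapping; calibrate per servo
    servo.ChangeDutyCycle(2 + angle / 18)
    time.sleep(0.5)
    servo.ChangeDutyCycle(0)

@app.route("/cook", methods=["POST"])
def cook():
    minutes = float(request.json.get("minutes", 6))
    move_servo(90)              # tip the cracked egg onto the hot plate
    time.sleep(minutes * 60)    # blocking cook timer, fine for a single-user sketch
    move_servo(0)               # slide the finished egg out
    return jsonify({"status": "done", "minutes": minutes})
```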
losing
## Inspiration One plain Saturday morning, we were sitting in our living room bored out of our minds, as midterms were over! We noticed there was a lot of cardboard left over from study-night pizza boxes and a huge propeller that had fallen off our Google internship cap. So naturally, we could not help but think of making a cardboard helicopter! Andy and I have very similar skillsets, as both of us have lots of experience with drones, so we decided to work on all aspects of the project together: first planning, then CAD, printing, coding, and finally integration. ## What it does The purpose of cardboard-copter is to avoid getting caught at all costs, so basically to be as much of a nuisance to the owner as possible, kind of like the SNITCH from Harry Potter! ## How we built it We used a LiDAR sensor attached to the bottom of the helicopter so we could detect the closest object; the data was then relayed to our flight controller to move the helicopter away from that object. To control the propeller on top, we custom-designed and 3D-printed a gimbal that housed 2 servos, one to swivel the propeller on each axis. We also included a second, smaller propeller on the tail of the helicopter to counteract the moment acting on the main body from the top propeller. Finally, we wired and soldered our motors, servos, and battery to our flight controller. All of the electronics were connected to a receiver, which allowed them to be controlled using a drone controller. ## Challenges we ran into We originally were going to design an I-beam frame to hold the electronics, but the design was so good it broke our 3D printer! No, seriously, our 3D printer broke while printing this design. We had no choice but to improvise and construct this beautiful cardboard-and-popsicle-stick chassis. When integrating the motors with the flight controller, we ran into a lot of problems with the motors not moving and the firmware being finicky. This was probably because of a defect in the flight controller, as everything else worked individually. ## Accomplishments that we're proud of In the end, did our LiDAR scanner work? No. Did our drone fly? No. Were we really close, though, and did we have a lot of fun and drink a ton of Red Bull? Absolutely. We even got the LiDAR sensor online with a Raspberry Pi! Although our project could not fly, it was truly a nuisance for us, the owners, and therefore a truly useless invention. ## What we learned Integration is difficult. Especially at 4am. ## What's next for Cardboard Copter While the current version looks questionable, it is fairly stable, and we think we can eventually get it to fly! With a new flight controller and a lot of tuning, it may one day sail the skies. As a backup, we can always find a tall roof to toss it off of. After all, flying is just falling with style.
## Inspiration We, as all students do, frequently and unwillingly fall to the powers of procrastination. This invention is for when the little cute Pomodoro and Screen Time reminders are a tad too easy to ignore. ## What it does The device sits in a predetermined area that you would want to stay away from in order to focus - for example, beside your bed, on the couch, or in front of your gaming PC/console. If it detects a person there, it will aim at you and fire projectiles. ## How we built it We built it by integrating a variety of technologies. Firstly, in terms of the frontend, it works with an Android app developed using the Qualcomm HDK 8450, which has autonomous controls such as connecting to the projectile gun and turning it on and off. The app also takes care of the ML computer vision needed to detect both people and their positions via Google's ML Kit. It then sends this information wirelessly via Bluetooth to an Arduino which is hooked up to two motors that control the aiming and firing of the projectile. The angle at which the projectile launcher turns is approximated with the user sitting 50-100cm away. ## Challenges we ran into We ran into multiple challenges during the project. Firstly, none of us had any experience developing an Android app and using an HDK8450, so we had a lot of ground to make up in order to start developing the app. Secondly, we found the Bluetooth module connection to be quite difficult to get working, as the official documentation seemed to be quite limited especially for beginners to Android development. ## Accomplishments that we're proud of One thing we are extremely proud of is the number of different systems and devices we got working together smoothly. From computer vision, to Bluetooth protocols, to Arduino programming and mechanical design, this project brought together a whole variety of fields, and we are proud to have been able to cover all of those bases as smoothly as we did. ## What we learned As beginners to Android development, we gained a plethora of knowledge on how to build, develop and deploy a working Android application. We also gained experience working with Arduinos, especially involving the communication aspects including sending and receiving information via Bluetooth. Finally, we learned about deploying a working ML model in a solution of our own. ## What's next for Failure Management 101 We would like to add movement by putting the whole mechanism on wheels to allow it a greater degree of freedom. We also had plans for voice control, as well as plans for the robot to have access to your laptop in order to determine whether the user is on non-productive websites. Finally, from a more realistic and practical standpoint, we could envision robots like these helping with patrolling/guard duty as an aid to police officers, although perhaps not firing paper projectiles anymore.
## Inspiration Building domain-specific automated systems in the real world is painstaking, requiring massive codebases for exception handling and robust testing of behavior for all kinds of contingencies — automated packaging, drone delivery, home surveillance, and search and rescue are all enormously complex and result in highly specialized industries and products that take thousands of engineering hours to prototype. But it doesn’t have to be this way! Large language models have made groundbreaking strides towards helping out with the similarly tedious task of writing, giving novelists, marketing agents, and researchers alike a tool to iterate quickly and produce high-quality writing exhibiting both semantic precision and masterful high-level planning. Let’s bring this into the real world. What if asking “find the child in the blue shirt and lead them to the dinner table” was all it took to create that domain-specific application? Taking the first steps towards generally intelligent embodied AI, DroneFormer turns high-level natural language commands into long scripts of low-level drone control code leveraging advances in language and visual modeling. The interface is the simplest imaginable, yet the applications and end result can adapt to the most complex real-world tasks. ## What it does DroneFormer offers a no-code way to program a drone via generative AI. You can easily control your drone with simple written high-level instructions. Simply type up the command you want and the drone will execute it — flying in spirals, exploring caves to locate lost people with depth-first search, or even capturing stunning aerial footage to map out terrain. The drone receives a natural language instruction from the user (e.g. "find my keys") and explores the room until it finds the object. ## How we built it Our prototype compiles natural language instructions down into atomic actions for DJI Tello via in-context learning using the OpenAI GPT-3 API. These actions include primitive actions from the DJI SDK (e.g. forward, back, clockwise turn) as well as custom object detection and visual language model query actions we built leveraging zero-shot image and multimodal models such as YOLOv5, and image processing frameworks such as OpenCV. We include a demo for searching for and locating objects using the onboard Tello camera and object detection. ## Challenges we ran into One significant challenge was deciding on an ML model that best fit our needs of performant real-time object detection. We experimented with state-of-the-art models such as BLIP and GLIP, which were either too slow at inference time or did not perform as expected in terms of accuracy. Ultimately, we settled on YOLOv5 as having a good balance between latency and ability to collect knowledge about an image. We were also limited by the lack of powerful onboard compute, which meant the drone needed to connect to an external laptop (which had to sit on both the drone's network and the internet at once, something we resolved by using Ethernet and wireless simultaneously), which in turn connects to the internet for OpenAI API inference. ## Accomplishments that we're proud of We were able to create an MVP! DroneFormer successfully generates complex 20+ line instructions to detect and navigate to arbitrary objects given a simple natural language instruction to do so (e.g. “explore, find the bottle, and land next to it”). ## What we learned Hardware is a game changer! 
Embodied ML is a completely different beast than even a simulated reinforcement learning environment, and working with noisy control systems adds many sources of error on top of long-term language planning. To deal with this, we iterated much more frequently and added functionality to deal with new corner cases and ambiguity as necessary over the course of the project, rewriting as necessary. Additionally, connectivity issues arose often due to the three-tiered nature of the system between the drone, laptop, and cloud backends. ## What's next for DroneFormer We were constrained by the physical confines of the TreeHacks drone room and obstacles available in the vicinity, as well as the short battery life of the Tello drone. Expanding to larger and more complex hardware, environments, and tasks, we expect the DroneFormer framework to handily adapt, given a bit of prompt engineering, to emergent sophisticated behaviors such as: * Watching over a child wandering around the house and reporting any unexpected behavior according to a fine-tuned classifier * Finding that red jacket that you could swear was on the hanger but which has suddenly disappeared * Checking in “person” if the small coffee shop down the street is still open despite the out-of-date Google Maps schedule * Sending you a picture of the grocery list you forgot at home DroneFormer will be a new type of personal assistant — one that always has your back and can bring the magic of complex language model planning to the embodied real world. We’re excited! <https://medium.com/@sidhantbendre22/hacking-the-moonshot-stanford-treehacks-2023-9166865d4899>
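To make the compile-and-execute loop above concrete, here is a rough Python sketch assuming the legacy OpenAI completion interface and the djitellopy SDK; the prompt, the whitelisted command set, and the parsing are simplified stand-ins, not DroneFormer's actual prompt or codebase.

```python
# Sketch: turn a natural-language instruction into a short list of Tello
# primitives via GPT-3 in-context learning, then execute them.
# Prompt text and the allowed command set are simplified assumptions.
import openai                      # legacy (<1.0) openai-python interface
from djitellopy import Tello

PROMPT = """Translate the instruction into one drone command per line.
Allowed commands: takeoff, land, forward <cm>, cw <deg>.
Instruction: {instruction}
Commands:"""

def compile_instruction(instruction: str) -> list[str]:
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=PROMPT.format(instruction=instruction),
        max_tokens=200,
        temperature=0,
    )
    return [line.strip() for line in resp.choices[0].text.splitlines() if line.strip()]

def execute(commands: list[str]) -> None:
    drone = Tello()
    drone.connect()
    for cmd in commands:
        parts = cmd.split()
        if parts[0] == "takeoff":
            drone.takeoff()
        elif parts[0] == "land":
            drone.land()
        elif parts[0] == "forward":
            drone.move_forward(int(parts[1]))
        elif parts[0] == "cw":
            drone.rotate_clockwise(int(parts[1]))

execute(compile_instruction("fly a small square and land"))
```

Keeping the executable vocabulary to a small whitelist is one simple way to guard against the model emitting commands the drone cannot safely run.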
losing
## Inspiration Living in the big city, we're often conflicted between the desire to get more involved in our communities and the effort to minimize the bombardment of information we encounter on a daily basis. NoteThisBoard aims to bring the user closer to a happy medium by allowing them to maximize their insights at a glance. This application enables the user to take a photo of a noticeboard filled with posters, and, after specifying their preferences, select the events that are predicted to be of highest relevance to them. ## What it does Our application uses computer vision and natural language processing to filter notice board information and deliver pertinent, relevant information to our users based on selected preferences. This mobile application lets users first choose different categories that they are interested in knowing about; they can then either take or upload photos which are processed using Google Cloud APIs. The labels generated from the APIs are compared with chosen user preferences to display only applicable postings. ## How we built it The mobile application is made in a React Native environment with a Firebase backend. The first screen collects the categories specified by the user, which are written to Firebase once the user advances. Then, they are prompted to either upload or capture a photo of a notice board. The photo is processed using Google Cloud Vision text detection to obtain blocks of text, which are then labelled appropriately with the Google Natural Language API. The categories this returns are compared to user preferences, and matches are returned to the user. ## Challenges we ran into One of the earlier challenges encountered was properly parsing the fullTextAnnotation retrieved from Google Vision. We found that two posters whose text was aligned, despite being contrasting colours, were mistaken as being part of the same paragraph. The JSON object had many subfields, which took a while to make sense of from the terminal in order to parse it properly. We further encountered trouble retrieving data back from Firebase as we switched from the first to the second screen in React Native, and in finding the proper method of comparing categories to labels before the final component is rendered. Finally, some discrepancies in loading these Google APIs in a React Native environment, as opposed to Python, limited access to certain technologies, such as ImageAnnotation. ## Accomplishments that we're proud of We feel accomplished in having been able to use RESTful APIs with React Native for the first time. We kept energy high and incorporated two levels of intelligent processing of data, in addition to smoothly integrating the various environments, yielding a smooth experience for the user. ## What we learned We were at most familiar with ReactJS; all other technologies were new experiences for us. Most notable were the opportunities to learn how to use Google Cloud APIs and what it entails to develop a RESTful API. Integrating Firebase with React Native exposed the nuances of each as we passed user data between them. Non-relational database design was also a shift in perspective, and finally, deploying the app with a custom domain name taught us more about DNS protocols. ## What's next for notethisboard Included in the fullTextAnnotation object returned by the Google Vision API were bounding boxes at various levels of granularity. 
The natural next step for us would be to enhance the performance and user experience of our application by annotating the images for the user manually, utilizing other Google Cloud API services to obtain background colour, enabling us to further distinguish posters on the notice board to return more reliable results. The app can also be extended to identifying logos and timings within a poster, again catering to the filters selected by the user. On another front, this app could be extended to preference-based information detection from a broader source of visual input.
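Their pipeline runs inside React Native, but the same two-step flow (OCR, then labelling, then preference matching) can be sketched with the Google Cloud Python client libraries; the matching rule and category handling below are simplified assumptions rather than the app's actual logic.

```python
# Sketch: OCR a noticeboard photo with Cloud Vision, label each text block
# with the Natural Language API, and keep only blocks that match the user's
# chosen categories. The matching rule is a simplified assumption.
from google.cloud import vision, language_v1

def extract_text(image_bytes: bytes) -> str:
    client = vision.ImageAnnotatorClient()
    response = client.text_detection(image=vision.Image(content=image_bytes))
    return response.full_text_annotation.text

def categorize(text: str) -> list[str]:
    client = language_v1.LanguageServiceClient()
    doc = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    result = client.classify_text(request={"document": doc})
    return [c.name for c in result.categories]

def relevant(text: str, preferences: set[str]) -> bool:
    # Keep a posting if any returned category mentions a chosen preference.
    labels = categorize(text)
    return any(pref.lower() in label.lower() for label in labels for pref in preferences)
```

In the app itself the per-poster split would come from the bounding boxes mentioned above, with each poster's text run through `categorize` separately.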
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Although we formed the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
## Inspiration Bookshelves are worse than fjords to navigate. There is too much choice, and indecision hits when trying to pick out a cool book at a library or bookstore. Why isn’t there an easy way to compare the ratings of different books from just the spine? That’s where BookBud comes in. Paper books are a staple part of our lives - everyone has a bookshelf, yet specific books are hard to find on it and organising them is a very manual process. ## What it does BookBud is Shazam but for books. BookBud allows users to click on relevant text relating to their book in a live video stream while they scan the shelves. Without needing to go through the awkward process of googling long book titles or finding the right resource, readers can quickly find useful information on their books. ## How we built it We built it from the ground up using Swift. The first component involves taking in camera input. We then implement Apple’s Vision ML framework to retrieve the text recognised within the scene. This text is passed into the second component that deals with calling the Google Books API to retrieve the data to be displayed. ## Challenges we ran into We ran into an unusual bug in the process of combining the two halves of our project. The first half was the OCR piece that takes in a photo of a bookshelf and recognises text such as title, author and publisher, and the second half was the piece that speaks directly to the Google client to retrieve details such as average rating, maturity\_level and reviews from text. More generally, we ran into compatibility issues as Apple recently shifted from the pseudo-deprecated UIKit to SwiftUI and this required many hours of tweaking to finally ensure the different components played well together. We also initially tried to separate each book’s spine from the bookshelf, which can be tackled easily through OpenCV, but we had not initially written our code in Objective-C++ so it was not compatible with the rest of our code. ## Accomplishments that we're proud of We were able to successfully learn how to use and implement Apple's Vision ML framework to run OCR on camera input to extract a book title. We also successfully interacted with the Google API to retrieve the average rating and title for a book, integrating the two into an interface. ## What we learned For 3 of the 4 on the team, it was the first time working with Swift or mobile app development. This proved to be a steep learning curve, but one that was extremely rewarding. Not only was simulation a tool we drew on extensively in our process, but we also learned about different objects and syntax that Swift uses compared to C. ## What's next for BookBud There are many technical details BookBud could improve on. Improved UI: basic improvements and features include immediately prompting the camera; booklovers need an endearing UI - simple and intuitive, but also stylish and chic. A recommendation system: suggest books for the reader depending on the books they have looked at or wanted more information on in the past, or their past reading history. AR mode: instead of having it be a photo, overlay each book with a colour density that corresponds to the rating or even the “recommendation score” of each book. Image segmentation through bounding boxes: automatically detect all books in the live stream and suggest which book has the highest recommendation score, and create a ‘find your book’ feature that allows you to find a specific book amidst the sea of books on a bookshelf. 
More ambitious applications… Transfer AR overlay of the bookshelf into a metaversal library of people and their books. Avid readers can join international rooms to give book recommendations and talk about their interpretations of material in a friendly, communal fashion. I can imagine individuals wanting NFTs of the bookshelves of celebrities, their families, and friends. There is a distinct intellectual flavor of showing what is on your bookshelf. NFT book? Goodreads is far superior to Google Books, so hopefully they start issuing developer keys again!
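As a small illustration of the second component (the app itself is written in Swift), the Google Books lookup can be sketched in Python against the public volumes endpoint; the fields pulled out below follow the standard Books API response and are the ones BookBud surfaces.

```python
# Sketch: look up a recognised book title on the Google Books API and pull
# out the fields BookBud surfaces (title, authors, average rating, maturity).
import requests

def lookup_book(ocr_text: str):
    resp = requests.get(
        "https://www.googleapis.com/books/v1/volumes",
        params={"q": ocr_text, "maxResults": 1},
        timeout=10,
    )
    items = resp.json().get("items", [])
    if not items:
        return None
    volume = items[0]["volumeInfo"]
    return {
        "title": volume.get("title"),
        "authors": volume.get("authors", []),
        "averageRating": volume.get("averageRating"),
        "maturityRating": volume.get("maturityRating"),
    }

print(lookup_book("The Name of the Wind Rothfuss"))
```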
winning
## Inspiration Swap was inspired by COVID-19 having an impact on many individuals’ daily routines. Sleep schedules were shifted, more distractions were present due to working from home, and being away from friends and family members was difficult. Our team wanted to create a solution that would help others add excitement to their quarantine routines and also connect them with their friends and family members again. ## What it does Swap is a mobile application that allows users to swap routines with their friends, family members, or even strangers to try something new! You can input daily activities and photos, add an optional mood tracker, add friends, initiate swaps instantly, pre-schedule swaps, and even randomize swaps. ## How we built it For this project, we created a working prototype and wrote the backend code on how the swaps would be made. The prototype was created using Figma. For writing the backend code, we used python and applications such as Xcode, MySQL, and PyCharm. ## Challenges we ran into A challenge we ran into was determining how we would write the backend code for the app and what applications to use. Additionally, we had to use up some time to select all the features we wanted Swap to have. ## Accomplishments that we're proud of Accomplishments we’re proud of include the overall idea of making an app that swapped routines, our Figma prototype, and the backend coding. ## What we learned We learned how to use Figma’s wireframing feature to create a working prototype and learned about applications (ex. MySQL) that allowed us to write backend code for our project. ## What's next for Swap We want to finalize the development of the app and launch it in the app stores!
## We are DonSafe, a Blockchain and AI based organ donation interface, aimed at patient-centric care ### DonSafe provides a platform for organisations, donors and recipients to ethically source and donate organs. ## Inspiration The transplantation of healthy organs into persons whose own organs have failed, improves and saves thousands of lives every year. But demand for organs has outstripped supply, creating an underground market for illicitly obtained organs. Desperate situations of both recipients and donors create an avenue ready for exploitation by international organ trafficking syndicates. Traffickers exploit the desperation of donors to improve the economic situation of themselves and their families, and they exploit the desperation of recipients who may have few other options to improve or prolong their lives. Like other victims of trafficking in persons, those who fall prey to traffickers for the purpose of organ removal may be vulnerable by virtue of poverty, for instance. Organ trafficking is more than a $5B market annually. DonSafe sets out to try to solve the primary problems with organ donation and transplantation as described. ## What it does DonSafe has a three way clientele system, and utilises machine learning to first match donors and recipients at scale, anywhere in the world, followed by Stacks blockchain, which is used to securely authenticate organ transfers, including securing the identities and use-case of the donor, recipient and transplant organisation: 1) **Donor**: Simply set up a user account, and list organ to donate, with personal details about the donor and organ, all authenticated on the Stacks blockchain. The app uses machine learning to determine the authenticity of the listing, and make sure it is *ethically sourced*. 2) **Middle/Transplant organisation**: Takes the listing and is in charge of the transplant, usually a hospital. 3) **Recipient**: Simply set up a user account, and list organ to receive, and a timeline, with personal details about the recipient and organ, all authenticated on the Stacks blockchain. Further verification is done on the part of the health institution. The app uses machine learning to determine the authenticity of the listing, and make sure it is *ethically received* i.e. the organ will actually be transplanted, rather than trafficked, for instance. ## How we built it 1) Clarity for Stacks Blockchain for organ transplant authentication 2) Java and Kotlin for the Android app 3) Firebase for the database backend/and user authentication into the app 4) SciKit, Firebase and own Bayesian models for machine learning input ## Ethics We believe that access to healthcare is a basic human right, and that it is ethically wrong for us as a society to not act against the problem of the lack of medical care. ## Accomplishments that we're proud of The all round social impact of the scale of the app, its features as well as who we can help in real time. Also, that it was made in less than 36 hours. ## What's next for DonSafe The idea is to improve functionality, removing bugs, while ensuring an improvement of the use of our machine learning algorithm. DonSafe would ideally also be able to incorporate payments/financial transactions on DeFi, so that promises are fulfilled, without moral hazards.
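The write-up mentions SciKit and Bayesian models for screening listings; purely as an illustration of that kind of component (with invented toy data, not the team's model or training set), a scikit-learn Naive Bayes text classifier could look like this:

```python
# Illustrative only: a tiny scikit-learn pipeline of the kind that could
# screen listing text for suspicious patterns. The training examples and
# labels are invented; DonSafe's actual Bayesian models are not shown here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

listings = [
    "kidney, verified hospital referral, consent forms attached",
    "organ available fast cash payment no questions",
    "liver segment, registered donor, transplant centre listed",
    "urgent sale best price contact privately",
]
labels = ["legit", "suspicious", "legit", "suspicious"]

screen = make_pipeline(TfidfVectorizer(), MultinomialNB())
screen.fit(listings, labels)
print(screen.predict(["registered donor with hospital referral"]))
```

In the real system a flag from a model like this would only gate a listing for human and institutional review, never approve or reject it on its own.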
## 🤯 Inspiration As busy and broke college students, we’re usually missing semi-essential items. Most of us just suffer a little and go without, but what if there was an alternative? Say you need a vacuum. More often than not, someone living in your hall has one they aren’t opposed to sharing! Building upon this principle, our app aims to **connect** “haves” with “have-nots” and create a closer community along the way. ## 🧐 What it does Our app provides an easy-to-use platform for students to share favors between each other; two clear use-cases are borrowing items and running convenience store errands. In addition, this application encourages tighter communities and helps reduce consumerist waste (not everyone in a dorm hall needs their own of everything!). ## 🥸 How we built it * **Frontend**: built in React Native with Expo, run on Xcode simulator * **Backend** : authentication with Firebase; TypeScript, TypeORM, and GraphQL used to power a Node server with the Apollo editor to communicate with CockroachDB. * **Design and UI**: Figma and Google Slides * **Pitching** : Loom and Adobe Premiere ## 😅 Challenges we ran into * We were unable to find a UI/UX designer for our team and initially struggled with getting the project off the ground. Heather dedicated most of her time filling that role by learning how to operate Figma and tried her very best to make an aesthetically pretty mock-up and final pitch. * It was also difficult to work through many time zones and keep track of all members; we lost a backend person at the last minute so Hung stepped up to the challenge to learn GraphQL, CockroachDB, and TypeORM in a really short time. * And, of course, managing scope ## 😊 Accomplishments that we're proud of * Heather is super proud of surviving her first hackathon and having her idea finally somewhat come to life! She also now realizes how much there is left to learn and is excited to explore more into UI/UX design and what goes into developing a mobile app. * Hung somehow managed to implement the React Native app with Expo and the GraphQL & Node server in less than 24 hours ## 🤔 What we learned * We learned that having a reliable designer is super important, and how time moves super fast when you are having fun! * Having a high bar is good but also terrifying :^( ## 😤 What's next for Favor App We built a relatively functional minimum-feature project over the past two days; however, we would like to implement GPS reliability and optimization algorithms in order to increase the number of favors completed and make fulfilling favors easier. The ultimate goal is to tailor favor requests so fulfilling them doesn’t deviate from the helpers’ normal daily routines. We would also like to include more game-like features and other incentives. We could see ourselves using and relying on something like this a lot, so this hackathon will hopefully not be the end!
partial
# Travel Itinerary Generator ## Inspiration Traveling is an experience that many cherish, but planning for it can often be overwhelming. With countless events, places to visit, and activities, it's easy to miss out on experiences that could have made the trip even more memorable. This realization inspired us to create the **Travel Itinerary Generator**. We wanted to simplify the travel planning process by providing users with curated suggestions based on their preferences. ## What It Does The **Travel Itinerary Generator** is a web application that assists users in generating travel itineraries. Users receive tailored suggestions on events or places to visit by simply entering a desired location and activity type. The application fetches this data using the Metaphor API, ensuring the recommendations are relevant and up-to-date. ## How We Built It We began with a React-based frontend, leveraging components to create a user-friendly interface. Material-UI was our go-to library for the design, ensuring a consistent and modern look throughout the application. To fetch relevant data, we integrated the Metaphor API. Initially, we faced CORS issues when bringing data directly from the front end. To overcome this, we set up a Flask backend to act as a proxy, making requests to the Metaphor API on behalf of the front end. We utilized the `framer-motion` library for animations and transitions, enhancing the user experience with smooth and aesthetically pleasing effects. ## Challenges We Faced 1. **CORS Issues**: One of the significant challenges was dealing with CORS when trying to fetch data from the Metaphor API. This required us to rethink our approach and implement a Flask backend to bypass these restrictions. 2. **Routing with GitHub Pages**: After adding routing to our React application, we encountered issues deploying to GitHub Pages. It took some tweaking and adjustments to the base URL to get it working seamlessly. 3. **Design Consistency**: Ensuring a consistent design across various components while integrating multiple libraries was challenging. We had to make sure that the design elements from Material-UI blended well with our custom styles and animations. ## What We Learned This project was a journey of discovery. We learned the importance of backend proxies in handling CORS issues, the intricacies of deploying single-page applications with client-side routing, and the power of libraries like `framer-motion` in enhancing user experience. Moreover, integrating various tools and technologies taught us the value of adaptability and problem-solving in software development. ## Conclusion This journey was like a rollercoaster - thrilling highs and challenging lows. We discovered the art of bypassing CORS, the nuances of SPAs, and the sheer joy of animating everything! It reinforced our belief that we can create solutions that make a difference with the right tools and a problem-solving mindset. We're excited to see how travelers worldwide will benefit from our application, making their travel planning a breeze! ## Acknowledgements * [Metaphor API](https://metaphor.systems/) for the search engine. * [Material-UI](https://mui.com/) for styling. * [Framer Motion](https://www.framer.com/api/motion/) for animations. * [Express API](https://expressjs.com/) hosted on [Google Cloud](https://cloud.google.com/). * [React.js](https://react.dev/) for web framework.
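A minimal sketch of the Flask proxy pattern described above follows; the Metaphor endpoint path, header, and request fields are assumptions based on its public docs, and the route name is invented for the example.

```python
# Sketch: Flask backend acting as a CORS-friendly proxy in front of the
# Metaphor search API. Endpoint URL, header name, and request fields are
# assumptions; set METAPHOR_API_KEY in the environment.
import os
import requests
from flask import Flask, request, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # allow the React frontend to call this server directly

@app.route("/api/search", methods=["POST"])
def search():
    body = request.get_json()
    query = f"Things to do in {body['location']} related to {body['activity']}"
    resp = requests.post(
        "https://api.metaphor.systems/search",          # assumed endpoint
        headers={"x-api-key": os.environ["METAPHOR_API_KEY"]},
        json={"query": query, "numResults": 10},
        timeout=10,
    )
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run(port=5001)
```

Because the browser only ever talks to this server, the API key stays out of the frontend bundle and the CORS restriction disappears.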
# Things2Do Minimize time spent planning and maximize having fun with Things2Do! ## Inspiration The idea for Things2Do came from the difficulties that we experienced when planning events with friends. Planning events often involve venue selection which can be a time-consuming, tedious process. Our search for solutions online yielded websites like Google Maps, Yelp, and TripAdvisor, but each fell short of our needs and often had complicated filters or cluttered interfaces. More importantly, we were unable to find event planning that accounts for the total duration of an outing event and much less when it came to scheduling multiple visits to venues accounting for travel time. This inspired us to create Things2Do which minimizes time spent planning and maximizes time spent at meaningful locations for a variety of preferences on a tight schedule. Now, there's always something to do with Things2Do! ## What it does Share quality experiences with people that you enjoy spending time with. Things2Do provides the top 3 suggested venues to visit given constraints of the time spent at each venue, distance, and select category of place to go. Furthermore, the requirements surrounding the duration of a complete event plan across multiple venues can become increasingly complex when trying to account for the tight schedules of attendees, a wide variety of preferences, and travel time between multiple venues throughout the duration of an event. ## How we built it The functionality of Things2Do is powered by various APIs to retrieve the details of venues and spatiotemporal analysis with React for the front end, and express.js/node.js for the backend functionality. APIs: * openrouteservice to calculate travel time * Geoapify for location search autocomplete and geocoding * Yelp to retrieve names, addresses, distances, and ratings of venues Languages, tools, and frameworks: * JavaScript for compatibility with React, express.js/node.js, Verbwire, and other APIs * Express.js/node.js backend server * TailwindCSS for styling React components Other services: * Verbwire to mint NFTs (for memories!) from event pictures ## Challenges we ran into Initially, we wanted to use Google Maps API to find locations of venues but these features were not part of the free tier and even if we were to implement these ourselves it would still put us at risk of spending more than the free tier would allow. This resulted in us switching to node.js for the backend to work with JavaScript for better support for the open-source APIs that we used. We also struggled to find a free geocoding service so we settled for Geoapify which is open-source. JavaScript was also used so that Verbwire could be used to mint NFTs based on images from the event. Researching all of these new APIs and scouring documentation to determine if they fulfilled the desired functionality that we wanted to achieve with Things2Do was an enormous task since we never had experience with them before and were forced to do so for compatibility with the other services that we were using. Finally, we underestimated the time it would take to integrate the front-end to the back-end and add the NFT minting functionality on top of debugging. A challenge we also faced was coming up with an optimal method of computing an optimal event plan in consideration of all required parameters. This involved looking into algorithms like the Travelling Salesman, Dijkstra's and A\*. 
## Accomplishments that we're proud of Our team is most proud of meeting all of the goals that we set for ourselves coming into this hackathon and tackling this project. Our goals consisted of learning how to integrate front-end and back-end services, creating an MVP, and having fun! The perseverance that was shown while we were debugging into the night and parsing messy documentation was nothing short of impressive and no matter what comes next for Things2Do, we will be sure to walk away proud of our achievements. ## What we learned We can definitively say that we learned everything that we set out to learn during this project at DeltaHacks IX. * Integrate front-end and back-end * Learn new languages, libraries, frameworks, or services * Include a sponsor challenge and design for a challenge theme * Time management and teamwork * Web3 concepts and application of technology ## Things to Do The working prototype that we created is a small segment of everything that we would want in an app like this, but there are many more features that could be implemented. * Multi-user voting feature using WebSockets * Extending categories of hangouts * Custom restaurant recommendations from attendees * Ability to have a vote of "no confidence" * Send out invites through a variety of social media platforms and calendars * Scheduling features for days and times of day * Incorporate hours of operation of venues
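Picking up the route-planning challenge mentioned in the Challenges section above, one simple baseline is a greedy nearest-neighbour pass under a time budget; this is an illustrative sketch, not the algorithm the team shipped.

```python
# Sketch: greedy itinerary builder. Repeatedly pick the closest (by travel
# time) unvisited venue that still fits in the remaining time budget,
# counting the time spent at the venue itself. Purely illustrative.
def build_itinerary(start, venues, travel_minutes, budget_minutes):
    """
    start:           starting location id
    venues:          dict venue_id -> minutes to spend there
    travel_minutes:  dict (from_id, to_id) -> travel time in minutes
    budget_minutes:  total time available for the outing
    """
    plan, current, remaining = [], start, budget_minutes
    unvisited = set(venues)
    while unvisited:
        feasible = [
            v for v in unvisited
            if travel_minutes[(current, v)] + venues[v] <= remaining
        ]
        if not feasible:
            break
        nxt = min(feasible, key=lambda v: travel_minutes[(current, v)])
        remaining -= travel_minutes[(current, nxt)] + venues[nxt]
        plan.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return plan

# Example: three candidate venues and a 3-hour budget (travel times invented)
venues = {"cafe": 45, "museum": 90, "park": 60}
travel = {(a, b): 15 for a in ["home", *venues] for b in venues if a != b}
print(build_itinerary("home", venues, travel, budget_minutes=180))
```

A greedy pass like this is fast but not optimal, which is exactly why the team was weighing heavier tools such as Travelling Salesman formulations against it.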
## Inspiration We were inspired to reduce the amount of time it takes to seek medical attention. By directing patients immediately to a doctor specific to their needs, one may reduce the wait time commonly associated with seeking medical aid. ## What it does Destination Doc asks a user how they are feeling, at which point it determines what type of doctor the patient needs (by screening for flagged words). It then proceeds to search a 10 km radius for establishments such as dentist offices, walk-in clinics, physiotherapy centers or other need-specific locations. Using Microsoft's Bing API, Destination Doc determines which destination is the shortest time away using real-time traffic. A map is then displayed directing the user from their home location to the optimal medical center. ## How we built it We built the application front end using Angular and the backend with Flask. We incorporated the Cisco Meraki and Twilio APIs as well as Azure. ## Challenges we ran into Our biggest challenge was putting all the different components together as well as doing a lot within a short time constraint. ## Accomplishments that we're proud of We're proud to take steps in creating a more efficient wait-time service and also aiding the cause of better health and safety. ## What we learned We learned how to leverage the functionality of AngularJS to create a responsive front-end page. We also learned how to use REST API HTTP GET and POST requests to communicate between the front end and the backend network. ## What's next for Destination Doc We plan to build Destination Doc out to the point where anyone can enter their needs and find the best place to get help.
partial
## Inspiration We were driven by the need for real-time logging systems to ensure buildings could implement an efficient and secure access management system. ## What it does OmniWatched is an advanced access management system that provides real-time logging and instant access status updates for every entry into a building, ensuring a high level of security and efficient flow management. It leverages cutting-edge technology to offer insights into access patterns, enhance safety protocols, and optimize space utilization, making it an essential tool for any facility. ## How we built it We used React for the front end and Solace to communicate with our back end. We used an Event Driven Architecture to implement our real-time updates and to surface them to the front end. The front end is effectively a subscriber, and the events are pushed by another application that publishes events to our Solace PubSub+ broker. ## Challenges we ran into The first challenge we faced was setting up Solace: originally we used REST APIs and wrote the backend in Node.js. We had to completely rewrite our backend in Python to properly take advantage of the Event Driven Architecture. ## Accomplishments that we're proud of Setting up Solace PubSub+ and finally achieving real-time data in our front end was challenging, but really rewarding. We are also really proud of how we delegated tasks and finished our application, even though we still wish to add more features. ## What we learned We learned the advantages of Event Driven Architecture, how it compares to REST APIs, and why Event Driven Architectures can be effective when it comes to real-time data. ## What's next for OmniWatched We think that our application has a lot of potential, and we're excited to continue working on it even outside of this hackathon. We plan to implement support for more users, organizations, and other features.
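As an illustration of the event-driven flow (not the team's actual Solace code), here is a tiny publisher written against MQTT, one of the open protocols Solace PubSub+ brokers support, using paho-mqtt; the topic name, payload shape, and broker address are assumptions.

```python
# Illustrative only: publish an access event to a broker over MQTT
# (a protocol Solace PubSub+ brokers also speak) instead of making a REST
# call. Topic, payload shape, and broker address are assumptions.
# Uses the paho-mqtt 1.x constructor style.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883)          # hypothetical broker address
client.loop_start()

event = {
    "door": "main-entrance",
    "badge_id": "A1234",
    "granted": True,
    "timestamp": time.time(),
}
client.publish("building/access/main-entrance", json.dumps(event), qos=1)

client.loop_stop()
client.disconnect()
```

The front end simply subscribes to `building/access/#` and re-renders as events arrive, which is what makes the dashboard feel instantaneous compared with polling a REST endpoint.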
## Inspiration In the “new normal” that COVID-19 has caused us to adapt to, our group found that a common challenge we faced was deciding where it was safe to go to complete trivial daily tasks, such as grocery shopping or eating out on occasion. We were inspired to create a mobile app using a database created completely by its users - a program where anyone could rate how well these “hubs” were following COVID-19 safety protocol. ## What it does Our app allows users to search a “hub” using the Google Maps API, and write a review by rating prompted questions on a scale of 1 to 5 regarding how well the location enforces public health guidelines. Future visitors can then see these reviews and add their own, contributing to a collective safety rating. ## How I built it We collaborated using GitHub and Android Studio, and incorporated both the Google Maps API and an integrated Firebase API. ## Challenges I ran into Our group unfortunately faced a number of unprecedented challenges, including losing a team member mid-hack due to an emergency situation, working with 3 different time zones, and additional technical difficulties. However, we pushed through and came up with a final product we are all very passionate about and proud of! ## Accomplishments that I'm proud of We are proud of how well we collaborated through adversity, despite having never met each other in person before. We were able to tackle a prevalent social issue and come up with a plausible solution that could help bring our communities together, worldwide, similar to how our diverse team was brought together through this opportunity. ## What I learned Our team brought a wide range of different skill sets to the table, and we were able to learn lots from each other because of our various experiences. From a technical perspective, we improved on our Java and Android development fluency. From a team perspective, we improved on our ability to compromise and adapt to unforeseen situations. For 3 of us, this was our first ever hackathon and we feel very lucky to have found such kind and patient teammates that we could learn a lot from. The amount of knowledge they shared in the past 24 hours is insane. ## What's next for SafeHubs Our next steps for SafeHubs include personalizing the user experience through creating profiles, and integrating it into our community. We want SafeHubs to be a new way of staying connected (virtually) to look out for each other and keep our neighbours safe during the pandemic.
## Inspiration: We're trying to get involved in the AI chat-bot craze and pull together cool pieces of technology -> including Google Cloud for our backend, Microsoft Cognitive Services and Facebook Messenger API ## What it does: Have a look - message Black Box on Facebook and find out! ## How we built it: SO MUCH PYTHON ## Challenges we ran into: State machines (i.e. mapping out the whole user flow and making it as seamless as possible) and NLP training ## Accomplishments that we're proud of: Working NLP, Many API integrations including Eventful and Zapato ## What we learned ## What's next for BlackBox: Integration with Google Calendar - and movement towards a more general interactive calendar application. It's an assistant that will actively engage with you to try and get your tasks/events/other parts of your life managed. This has a lot of potential - but for the sake of the hackathon, we thought we'd try to do it on a topic that's more fun (and of course, I'm sure quite a few of us can benefit from its advice :) )
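The state-machine challenge above boils down to tracking where the user is in the conversation; a toy Python version might look like the following, with states and intents invented purely for illustration.

```python
# Toy conversation state machine: each state maps a recognised intent to the
# next state. States and intents here are invented for illustration only.
TRANSITIONS = {
    "greeting":       {"ask_events": "collect_city", "ask_help": "explain"},
    "collect_city":   {"gave_city": "suggest_events"},
    "suggest_events": {"accept": "done", "reject": "suggest_events"},
}

class Conversation:
    def __init__(self):
        self.state = "greeting"

    def step(self, intent: str) -> str:
        # Unknown intents leave the state unchanged, so the bot can re-prompt.
        self.state = TRANSITIONS.get(self.state, {}).get(intent, self.state)
        return self.state

convo = Conversation()
for intent in ["ask_events", "gave_city", "reject", "accept"]:
    print(intent, "->", convo.step(intent))
```

Keeping the flow in a single transition table is what makes it feasible to map out the whole user journey and spot dead ends before wiring in the NLP layer.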
partial
## Inspiration We explored IBM Watson and realized the potential of its features, which enable people to build almost anything using its cloud services. We all read, and we always want to read books/articles which suit our taste. We made this easier using our web app. Just upload a PDF file and get detailed entities, keywords, concepts, and emotions visually in our dashboard. ## What it does Our web app analyzes the content of articles using IBM NLU and displays entities, keywords, concepts and emotions graphically. ## How I built it Our backend is developed using Spring Boot and Java while the front end is designed using Bootstrap and HTML. We used d3.js for displaying a graphical representation of the data. The content of the article is read using the Apache Tika framework. ## Challenges I ran into Completing a project within 24 hours was a big challenge. We also struggled with connecting the front end and backend. Fortunately, we found a template and we leveraged it to develop our project. ## Accomplishments that I'm proud of We are proud to say that we worked as a team aiming for a specific prize and we were able to finish the project with pretty much all the features we wanted. ## What I learned We learned the potential of IBM Watson NLU and other IBM Cloud technologies. We also learned different technologies like d3.js and Spring Boot, which we were not familiar with. ## What's next for Know before you read We want this app to be accessible to more people and we are planning to deploy it after finishing up the UI.
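The project's backend is Spring Boot and Java, but the same NLU call can be sketched in Python with the ibm-watson SDK; the version date, feature limits, and credentials below are placeholders and assumptions.

```python
# Sketch (Python rather than the project's Java backend): send article text
# to IBM Watson NLU and get back entities, keywords, concepts and emotion.
# API key, service URL, version date, and limits are placeholders.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    Features, EntitiesOptions, KeywordsOptions, ConceptsOptions, EmotionOptions,
)
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

nlu = NaturalLanguageUnderstandingV1(
    version="2021-08-01",
    authenticator=IAMAuthenticator("YOUR_API_KEY"),
)
nlu.set_service_url("YOUR_SERVICE_URL")

def analyze_article(text: str) -> dict:
    return nlu.analyze(
        text=text,
        features=Features(
            entities=EntitiesOptions(limit=10),
            keywords=KeywordsOptions(limit=10),
            concepts=ConceptsOptions(limit=5),
            emotion=EmotionOptions(),
        ),
    ).get_result()
```

The returned JSON maps naturally onto the dashboard's d3.js charts, one array per feature type.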
## Inspiration As avid readers ourselves, we love the work that authors put out, and we are deeply saddened by the relative decline of the medium. We believe that democratizing the writing process and giving power back to the writers is the way to revitalize the art form of literature, and we believe that utilizing blockchain technology can help us get closer to that ideal. ## What it does LitHub connects authors with readers through eluv.io's NFT trading platform, allowing authors to sell their literature as exclusive NFTs and readers to have exclusive access to their purchases on our platform. ## How we built it We utilized the eluv.io API to enable upload, download, and NFT trading functionality for our backend. We leveraged CockroachDB to store user information, we used HTML/CSS to create our user-facing frontend, and to deploy our application we used Microsoft Azure. ## Challenges we ran into One of the main challenges we ran into was understanding the various APIs that we were working with over a short period of time. As this was our first time working with NFTs/blockchain, eluv.io was a particularly new experience to us, and it took some time, but we were able to overcome many of the challenges we faced thanks to the help from mentors from eluv.io. Another challenge we ran into was actually connecting the pieces of our project together as we used many different pieces of technology, but careful coordination and well-planned functional abstraction made the ease of integration a pleasant surprise. ## Accomplishments that we're proud of We're proud of coming up with an innovative solution that can help level the playing field for writers and for creating a platform that accomplishes this using many of the platforms that event sponsors provided. We are also proud of gaining familiarity with a variety of different platforms in a short period of time and showing resilience in the face of such a large task. ## What we learned We learned quite a few things while working on this project. Firstly, we learned a lot about the blockchain space, how to utilize this technology during development, and what problems it can solve. Before this event, nobody in our group had much exposure to this field, so it was a welcome experience. In addition, some of us who were less familiar with full-stack development got exposure to Node and Express, and we all got to reapply concepts we learned when working with other databases to CockroachDB's user-friendly interface. ## What's next for LitHub The main next step for LitHub would be to scale our application to handle a larger user base. From there we hope to share LitHub amongst authors and readers around the world so that they too can partake in the universe of NFTs to safely share their passion.
## Inspiration Let's face it: Museums, parks, and exhibits need some work in this digital era. Why lean over to read a small plaque when you can get a summary and details by tagging exhibits with a portable device? There is a solution for this of course: NFC tags are a fun modern technology, and they could be used to help people appreciate both modern and historic masterpieces. Also there's one on your chest right now! ## The Plan Whenever a tour group, such as a student body, visits a museum, they can streamline their activities with our technology. When a member visits an exhibit, they can scan an NFC tag to get detailed information and receive a virtual collectible based on the artifact. The goal is to facilitate interaction amongst the museum patrons for collective appreciation of the culture. At any time, the members (or, as an option, group leaders only) will have access to a live slack feed of the interactions, keeping track of each other's whereabouts and learning. ## How it Works When a user tags an exhibit with their device, the Android mobile app (built in Java) will send a request to the StdLib service (built in Node.js) that registers the action in our MongoDB database, and adds a public notification to the real-time feed on slack. ## The Hurdles and the Outcome Our entire team was green to every technology we used, but our extensive experience and relentless dedication let us persevere. Along the way, we gained experience with deployment oriented web service development, and will put it towards our numerous future projects. Due to our work, we believe this technology could be a substantial improvement to the museum industry. ## Extensions Our product can be easily tailored for ecotourism, business conferences, and even larger scale explorations (such as cities and campus). In addition, we are building extensions for geotags, collectibles, and information trading.
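The actual service was built on StdLib in Node.js; a rough Python equivalent of the tag-scan handler (Flask plus pymongo plus a Slack incoming webhook) is sketched below, with the webhook URL, database, and collection names as placeholders.

```python
# Sketch (Python stand-in for the Node.js StdLib service): record an NFC tag
# scan in MongoDB and post it to the tour group's live Slack feed.
# Webhook URL, database and collection names are placeholders.
import datetime
import requests
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
scans = MongoClient("mongodb://localhost:27017")["museum"]["scans"]
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

@app.route("/scan", methods=["POST"])
def scan():
    data = request.get_json()          # e.g. {"user": "sam", "exhibit": "T. rex"}
    data["time"] = datetime.datetime.utcnow().isoformat()
    scans.insert_one(dict(data))       # store a copy for later analysis
    requests.post(SLACK_WEBHOOK, json={
        "text": f"{data['user']} just visited the {data['exhibit']} exhibit!"
    })
    return jsonify({"ok": True})
```

When a phone reads a tag, the mobile app only has to POST the tag's exhibit id and the user's name to this endpoint; everything else (logging, collectibles, the group feed) hangs off that one event.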
partial
## Inspiration In the midst of your medical emergency, you rush into the ambulance. The nurse immediately asks about your current symptoms and medical history. You quickly relay the information, hoping for swift help. In the ER waiting room, another nurse approaches and you find yourself repeating the same details. Finally, a doctor comes to you, yet again inquiring about your symptoms and history. Frustrated, you realize you've stated the same facts three times. In a world where every second in medicine counts, this repetition feels like precious time wasted. You can't help but think, in emergencies, saving time means saving lives. ## What it does MediFetch AI revolutionizes patient care by streamlining the way medical professionals access and understand a patient's medical history. By utilizing advanced NLP technology, this platform allows users to submit their medical documents, which are then efficiently processed and organized. When a healthcare professional needs specific information, MediFetch AI queries these documents and displays the most relevant sections, tailored to the query. This innovative approach not only consolidates a patient's health history into one accessible location but also significantly reduces the time healthcare providers spend sifting through extensive medical records. With MediFetch AI, medical professionals can quickly grasp a patient's medical background, ensuring faster and more effective care. Example: A doctor treating a patient with shortness of breath and a history of heart issues might enter the following query into MediFetch AI: "Patient's history of cardiac conditions and recent treatments." Returns PDF chunks consisting of: * Diagnosed with Atrial Fibrillation in 2021 * Underwent angioplasty in 2022 * Prescribed beta-blockers in March 2023 * Last Echocardiogram showed mild left ventricular hypertrophy in December 2023 ## How we built it ### Frontend: React, Tailwind ### Backend/ML: Firebase, Flask, Pinecone, BERT ### How it works: When a document is uploaded, the text is segmented and each section is converted into an embedding using BERT. These embeddings capture the contextual meaning of the text. The system then stores these embeddings in a Pinecone index, allowing for efficient retrieval. When a query is made, it's also converted into an embedding and matched against the index to find the most relevant document sections, streamlining access to pertinent medical information. ## Challenges we ran into There were a lot of challenges all-around, but the hardest was having an effective embedding system. Currently, our model is not as effective in retrieving the most relevant chunks in pdfs, and due to the similarity threshold, may output no pdfs if the submitted files are not extensive. ## What's next for MediFetch AI Fine tune the model's effectiveness and launch for use.
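A condensed sketch of the embed-and-query flow MediFetch describes above, using Hugging Face BERT with mean pooling and the classic pinecone-client interface; the index name, dimension, and chunking scheme are assumptions rather than the production setup.

```python
# Sketch: embed PDF chunks with BERT (mean pooling) and store/query them in
# Pinecone. Index name, dimension, and chunking are assumptions; the Pinecone
# calls follow the classic pinecone-client (2.x) interface.
import pinecone
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> list[float]:
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state            # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)
    return ((hidden * mask).sum(1) / mask.sum(1))[0].tolist()  # mean pooling

pinecone.init(api_key="YOUR_KEY", environment="YOUR_ENV")      # placeholders
index = pinecone.Index("medifetch")                            # 768-dim index

def ingest(chunks: list[str], patient_id: str) -> None:
    index.upsert(vectors=[
        (f"{patient_id}-{i}", embed(chunk), {"text": chunk, "patient": patient_id})
        for i, chunk in enumerate(chunks)
    ])

def query(question: str, top_k: int = 4) -> list[str]:
    res = index.query(vector=embed(question), top_k=top_k, include_metadata=True)
    return [m["metadata"]["text"] for m in res["matches"]]
```

A similarity threshold on the returned scores is what produces the behaviour noted above: if no chunk is close enough to the query, nothing is shown rather than a misleading match.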
## Inspiration Healthcare providers spend nearly half of their time with their patients entering data into outdated, user-unfriendly software. This is why we have built a medical assistant that helps doctors efficiently filter through patient information and extract critical information for diagnosis, allowing more of their attention to focus on meeting each patient's individual needs. The current Electronic Health Record (EHR) system is unnecessarily difficult to navigate. For instance, doctors must go through multiple steps (clicking, scrolling, typing) to access the necessary diagnostic information. This is **time-consuming and mentally taxing**. It negatively impacts the doctor-patient relationship because the doctor's attention is perpetually on the monitor rather than on the patient. In a nation with a somewhat disturbing track record in medical outcomes, we felt that something had to change. Tackling this giant of a problem one step at a time, we built a medical assistant that helps doctors efficiently filter through patient information and extract critical information for diagnoses, allowing them to focus on personalizing each patient's care. This new interface hopes to **alleviate a physician's information overload** and **maximize a patient's mental and physical well-being**. ## What it does Our web application consists of two main functionalities: a **conversation visualizer** and a **ranked prompt list**. The conversation visualizer takes in a transcript of a recording of the interaction between the patient and physician. The speech bubbles indicate which speaker each message corresponds to. Behind this interface, the words are processed to determine what topics are currently being discussed. The ranked prompt list pulls the most relevant past information for the patient to the forefront of the list, making it easy for the physician to ask better clarifying questions or make adjustments to their mental model, all without having to click and scroll through tens or hundreds of records. ## How we built it Our end goal is to help doctors efficiently filter and prioritize patient data, so each aspect of our process (ML, backend, frontend) attempts to address that in some way. We designed a **deep-learning-based recommendation system** for features within the patient’s Electronic Health Records (EHR). It decides what information should be displayed based on the patient’s description of their medical needs and symptoms. We leveraged the **OpenAI Embedding API** to embed string token representations of these key features into a high dimensional vector space and extract semantic similarity between each. Then, we employed the *k*-nearest neighbor algorithm to compute and display the top *k* relevant features. This allowed us to cluster related keywords together, such as "COVID" with "shortness of breath". The appearance of one word/phrase in the cluster will bring EHR data containing other related words/phrases to the top of the list. We implemented the ML backend using **Flask in Python**. The main structure and logic were done in **Node.js** within the **Svelte** framework. We designed the UI and front-end layout in Svelte to create something easy to navigate and use in time-sensitive situations. We designed the left panel to be the conversation visualizer, along with an option to record sound (see "What's next for MedSpeak"). The right panel holds the prompt list, which updates in real time as more information is fed in. 
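A small sketch of the embedding-plus-kNN recommendation step described above, using the legacy OpenAI Python interface and cosine similarity; the model name and the toy EHR records are assumptions, not MedSpeak's actual data.

```python
# Sketch: rank EHR features against the live conversation topic by cosine
# similarity of OpenAI embeddings, then surface the top-k. Uses the legacy
# openai-python (<1.0) interface; model name and records are assumptions.
import numpy as np
import openai

def embed(texts: list[str]) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([item["embedding"] for item in resp["data"]])

def top_k_features(conversation_topic: str, ehr_features: list[str], k: int = 5):
    vectors = embed([conversation_topic] + ehr_features)
    query, features = vectors[0], vectors[1:]
    sims = features @ query / (np.linalg.norm(features, axis=1) * np.linalg.norm(query))
    ranked = np.argsort(-sims)[:k]
    return [(ehr_features[i], float(sims[i])) for i in ranked]

print(top_k_features(
    "shortness of breath, history of heart issues",
    ["Atrial fibrillation diagnosed 2021", "Appendectomy 2015",
     "Beta-blockers prescribed 2023", "Seasonal allergies"],
))
```

Because related phrases (for example "COVID" and "shortness of breath") land near each other in the embedding space, records that never mention the spoken words can still rise to the top of the prompt list.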
## Challenges we ran into One challenge we encountered was understanding the medical workflow and procuring simulated medical data to work with. As none of us had much background in the medical field, it took us some time to find the right data. This also led to difficulties in settling on a final project idea, since we were not sure what kind of information we had access to. However, speaking with Professor Tang and other non-hackers to flesh out our idea was incredibly insightful and helped lead us onto the right track. ## Accomplishments that we're proud of We are proud of generating a novel application of existing technology in a way that benefits the sector most in need of an upgrade. Our solution has great potential in the daily medical workplace. It is able to **integrate past and ongoing patient information** to enhance and expedite the interaction for both parties. The implementation of our solution would result in a considerable reduction in the number of physical steps and the level of attention required to record patient data. Our product's effects are twofold. It decreases the **mental and physical attention** needed for doctors to retrieve medical information. It allows doctors to spend **quality time communicating** with patients, fostering relationships built on trust and mutual understanding. ## What we learned Over the course of this hackathon, each of us on the team became more familiar with technologies like machine learning and general full-stack development. Coming in individually with our separate skill sets, we needed to share our respective knowledge with the others in order to stay on the same page throughout. Thus we each picked up some important tidbits of the others’ expertise, enabling us to become better developers and engineers. To build on that, we learned the importance of keeping everyone up to speed about the general direction of the project. Since we did not confirm our group until very late on the first day, we were delayed in settling on an idea and executing our tasks. More communication throughout the early stages could potentially have saved us time and confusion, allowing us to achieve more of our reach goals. ## Product Risk Accuracy and ethics should always be a cornerstone of consideration when it comes to human health and well-being. Our product is no exception. The medical metric recommendations may not function effectively when dealing with the latest medical metrics or conditions, as the pre-trained model is employed. This can potentially be mitigated by connecting our platform with the most up-to-date medical websites or journals. Even so, the model would require retraining every so often. There is a possibility that doctors may become overly reliant on AI-generated prompts. While designing our solution, we purposefully stayed clear of having the prompt list return information that could be misinterpreted as diagnoses. It is incredibly dangerous to have a machine make official diagnoses, so there would have to be regulations in place to prevent abuse of the technology. The voice transcription may not be (and likely is not) 100% accurate, which may lead to some inaccuracies in the vital signs or vital result recommendations. However, with enough training, we can hopefully make those occurrences a rarity. Even when they happen, the recording can ensure that we have a reference when verifying data. It is imperative that physicians who use this product obtain the proper consent from their patients. 
Since our current product involves the transcription of a patient's words and our end goal involves an audio recording feature, sensitive information could be at risk. We should consult with legal professionals before making the product available. ## What's next for MedSpeak Algorithmically, we aim to fine-tune the current embedding model on **clinical and biological datasets**, allowing the model to extract even more well-informed correlations based on a broader context pool. We also hope to extend this project to incorporate **real-time speech-to-text processing** into the visualizer. The recording would also act as a safety net in case the patient or physician wishes to revisit that conversation. A further extension would be the option to **autofill patient information** as the conversation goes on, as well as a **chatbot function** to quickly make changes to the record. The NLP aspect allows physicians to use abbreviations or more casual language, which saves mental resources in the long run. Another feature could be an integration of hardware, by having **sensors that detect vital signs** transmit the data directly to the app. This would save time and energy for the nurse or doctor, enabling them to spend more time with their patient.
## Problem Statement As the elderly population continues to grow, there is an increasing demand for home care. In fact, the market for safety and security solutions in the healthcare sector is estimated to reach $40.1 billion by 2025. Elderly, disabled, and vulnerable people face a constant risk of falls and other accidents, especially in hospitals, nursing homes, and home care settings, where they require constant supervision. However, traditional monitoring methods, such as human caregivers or surveillance cameras, are often not enough to provide prompt and effective responses in emergency situations. This potentially has serious consequences, including injury, prolonged recovery, and increased healthcare costs. ## Solution The proposed app aims to address this problem by providing a real-time monitoring and alert system, using a camera and cloud-based machine learning algorithms to detect any signs of injury or danger and immediately notify designated emergency contacts, such as healthcare professionals, with information about the user's condition and collected personal data. We believe that the app has the potential to revolutionize the way vulnerable individuals are monitored and protected, by providing a safer and more secure environment in designated institutions. ## Developing Process Prior to development, our designer used Figma to create a prototype which was used as a reference point when the developers were building the platform in HTML, CSS, and ReactJS. For the cloud-based machine learning algorithms, we used computer vision with OpenCV, NumPy, and Flask to train the model on a dataset of various poses and movements and to detect any signs of injury or danger in real time. Because of limited resources, we decided to use our phones in place of dedicated cameras to provide the live streams for real-time monitoring. ## Impact * **Improved safety:** The real-time monitoring and alert system provided by the app helps to reduce the risk of falls and other accidents, keeping vulnerable individuals safer and reducing the likelihood of serious injury. * **Faster response time:** The app triggers an alert and sends notifications to designated emergency contacts in case of any danger or injury, which allows for a faster and more effective response. * **Increased efficiency:** Using cloud-based machine learning algorithms and computer vision techniques allows the app to analyze the user's movements and detect any signs of danger without constant human supervision. * **Better patient care:** In a hospital setting, the app could be used to monitor patients and alert nurses if they are in danger of falling or if their vital signs indicate that they need medical attention. This could lead to improved patient care, reduced medical costs, and faster recovery times. * **Peace of mind for families and caregivers:** The app provides families and caregivers with peace of mind, knowing that their loved ones are being monitored and protected and that they will be immediately notified in case of any danger or emergency. ## Challenges One of the biggest challenges has been integrating all the different technologies, such as live streaming and machine learning algorithms, and making sure they worked together seamlessly. ## Successes The project was a collaborative effort between a designer and developers, which highlights the importance of cross-functional teams in delivering complex technical solutions. 
Overall, the project was a success and resulted in a cutting-edge solution that can help protect vulnerable individuals. ## Things Learnt * **Importance of cross-functional teams:** As there were different specialists working on the project, it helped us understand the value of cross-functional teams in addressing complex challenges and delivering successful results. * **Integrating different technologies:** Our team learned the challenges and importance of integrating different technologies to deliver a seamless and effective solution. * **Machine learning for health applications:** After doing the research and completing the project, our team learned about the potential and challenges of using machine learning in the healthcare industry, and the steps required to build and deploy a successful machine learning model. ## Future Plans for SafeSpot * First of all, the usage of the app could be extended to other settings, such as elderly care facilities, schools, kindergartens, or emergency rooms to provide a safer and more secure environment for vulnerable individuals. * Apart from the web, the platform could also be implemented as a mobile app. In this case scenario, the alert would pop up privately on the user’s phone and notify only people who are given access to it. * The app could also be integrated with wearable devices, such as fitness trackers, which could provide additional data and context to help determine if the user is in danger or has been injured.
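For illustration only, here is a minimal sketch of the kind of camera-based danger check described in the developing process above. SafeSpot trains its own pose model; this stand-in uses plain OpenCV (4.x) background subtraction and flags frames where the largest moving silhouette is wider than it is tall, a crude lying-down heuristic. The stream URL, thresholds, and alert hook are assumptions.

```python
# Illustrative sketch only: a very simple fall heuristic on a video stream.
# SafeSpot trained a pose model; this stand-in just flags frames where the
# largest moving silhouette is wider than it is tall.
import cv2

def monitor(stream_url, alert):
    cap = cv2.VideoCapture(stream_url)          # e.g. the phone's stream URL
    subtractor = cv2.createBackgroundSubtractorMOG2()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        if w > 1.3 * h and w * h > 5000:        # wide, large silhouette
            alert(frame)                         # e.g. notify emergency contacts

monitor("http://192.168.0.10:8080/video", alert=lambda f: print("possible fall"))
```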
losing
<https://www.townsquares.tech> Discord Usernames: `jkarbi#1190`, `Leland#1463`, `Dalton#6802` Discord Team Name: `Team 13`, channel `team-13-text` ## Inspiration Traditionally, citizens write to city councillors or stage protests when they are unhappy with how their government is acting. Nowadays, citizens can use social media to express their opinions, but the sheer number of voices makes platforms crowded and messages can get lost. Ever wondered if there was a better way? That's why we built TownSquares. ## What it does TownSquares lets anyone ask their community for its opinion by creating GPS-based polls. **Polls are locked to GPS coordinates** and can only be **answered by community members within a set radius**. Polls can be used to **inspire change in a community** by making the voice of the people heard loud and clear. Not happy with how a city service is being delivered in your community? Post a poll on TownSquares and see if your neighbours agree. Then use the results to get the attention of your representatives in government! ## How we built it Tech stack: **MEAN (MongoDB, Express.js, Angular, Node.js)**. **Mapbox API** used to display a map and the poll locations. Backend deployed on **Google Cloud** using **App Engine**. **MongoDB** running as a shared cluster on MongoDB Atlas. ## Challenges we ran into Deploying the app on GCP and mapping to a custom domain name. Working with Angular, since we had limited frontend development experience. ## Accomplishments that we're proud of We came into this hackathon with a plan for what we were going to build and which components of the project we would all be responsible for. That really set us up for success, and is something we are really proud of! ## What we learned Deployment using GCP App Engine and mapping to custom domain names, integrating with Mapbox, and frontend development with Angular! ## What's next for TownSquares We hope to continue working on this following the hackathon because we think it could really be popular!! We know there's more for us to build and we're excited to do that :).
## Inspiration GeoGuesser is a fun game which went viral in the middle of the pandemic, but after having played for a long time, it started feeling tedious and boring. Our Discord bot tries to freshen up the stale air by providing a playlist of iconic locations alongside exciting trivia like movies and monuments for that extra hit of dopamine when you get the right answers! ## What it does The bot provides you with playlist options, currently restricted to Capital Cities of the World, Horror Movie Locations, and Landmarks of the World. After selecting a playlist, five random locations are chosen from a list of curated locations. You are then provided a picture from which you have to guess the location and the bit of trivia associated with the location, like the name of the movie from which we selected the location. You get points for how close you are to the location and for whether you got the bit of trivia correct. ## How we built it We used the *discord.py* library for actually coding the bot and interfacing it with Discord. We stored our playlist data in external *Excel* sheets which we parsed through as required. We utilized the *google-streetview* and *googlemaps* Python libraries for accessing the Google Maps Street View APIs. ## Challenges we ran into For initially storing the data, we considered using a playlist class and storing the playlist data as an array of playlist objects, but instead used Excel for easier storage and updating. We also had some problems with the Google Maps Static Street View API in the beginning, but they were mostly syntax and understanding issues which were overcome soon. ## Accomplishments that we're proud of Getting the Discord bot working and sending images from the API for the first time gave us an incredible feeling of satisfaction, as did implementing the input/output flows. Our points calculation system based on the Haversine Formula for Distances on Spheres was also an accomplishment we're proud of. ## What we learned We learned better syntax and practices for writing Python code. We learnt how to use the Google Cloud Platform and Street View API. Some of the libraries we delved deeper into were Pandas and pandasql. We also learned a thing or two about Human-Computer Interaction, as designing a gameplay interface on Discord was rather interesting. ## What's next for Geodude? Possibly adding more topics, and refining the loading of Street View images to better reflect the actual location.
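As a rough sketch of the Haversine-based points calculation mentioned above: compute the great-circle distance between the guess and the true location, then map it to a score. The maximum points, decay constant, and trivia bonus here are illustrative assumptions, not the bot's actual tuning.

```python
# Sketch of distance-based scoring: Haversine great-circle distance between
# the guess and the true location, mapped to points.
from math import radians, sin, cos, asin, sqrt, exp

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def score(guess, answer, trivia_correct, max_points=5000):
    distance = haversine_km(*guess, *answer)
    points = max_points * exp(-distance / 2000)   # closer guesses lose fewer points
    return round(points) + (500 if trivia_correct else 0)

# Guessing Paris when the answer is London, trivia correct:
print(score((48.8566, 2.3522), (51.5074, -0.1278), trivia_correct=True))
```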
## Inspiration You're at a restaurant and you want to quickly split the bill. It's frustrating to have everyone pull out their cards and cash to pay, or even to simply work out the amounts each person owes. We aim to simplify that with a mobile app. ## What it does Facture Fracture leverages the power of OCR using the Microsoft Computer Vision API to process an image of your bill, allows people to join in on your bill, lets them decide how to pay, and additionally uses Interac to send payment requests. ## How we built it The app itself is built with React Native. It communicates with our Flask backend built with Python on a Microsoft Azure Web App, which itself communicates with the Microsoft Computer Vision API. ## Challenges we ran into Handling communications between the app and the backend was hard, as we had to understand how HTTP requests are sent and received, and how to make sure the file sent by the app (in this case the image) was properly handled by the server. We also ran into issues developing the app, as we were new to mobile app development. ## Accomplishments that we're proud of * Being able to upload a picture from our phones to the cloud (Microsoft Azure) * Being able to analyze a picture of a bill * Being able to communicate between our phones, the backend, and Microsoft services ## What we learned We learned to first look at multiple tutorials to find a solution, since the first answer isn't always applicable to our problem. We also learned to seek help when stuck, because although another person might not have the answer to our problem, they can provide us with insight on how to solve the issue. We also learned more about interacting with different services using requests. ## What's next for Facture Fracture We truly believe that this app is useful to people, as it came to us from our frustrating experiences eating out in groups. It also makes reimbursements and cashflow much easier, since sending an Interac request tells the participants exactly how much they owe, and enables them to quickly repay the host!
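A minimal sketch of what the Flask side of this flow might look like: the React Native app posts the bill photo, and the backend forwards the bytes to the Computer Vision OCR service. The route name, environment variables, and exact Azure endpoint path are assumptions for illustration, not the project's actual code.

```python
# Minimal sketch (endpoint path, env var names, and response handling are
# assumptions) of a Flask route that accepts the bill photo and forwards it
# to Azure Computer Vision for OCR.
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
OCR_URL = os.environ["AZURE_VISION_ENDPOINT"] + "/vision/v3.2/ocr"  # assumed path
OCR_KEY = os.environ["AZURE_VISION_KEY"]

@app.route("/bills", methods=["POST"])
def upload_bill():
    image = request.files["image"].read()        # jpeg bytes sent by the app
    resp = requests.post(
        OCR_URL,
        headers={"Ocp-Apim-Subscription-Key": OCR_KEY,
                 "Content-Type": "application/octet-stream"},
        data=image,
    )
    resp.raise_for_status()
    # Return the raw OCR result; line items would be parsed downstream.
    return jsonify(resp.json())

if __name__ == "__main__":
    app.run(debug=True)
```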
winning
## Inspiration We were inspired by our own struggle of often having difficulty finding new recipes based on the ingredients we have in our own fridge. We found that while many apps do this, none are able to scan the receipt right from your device and get the results instantly. ## What it does When you open the app you are prompted to take a photo of some text you wish to scan. Once you have taken your photo, you can crop it to filter out any unnecessary details. ## How we built it This iOS app was developed in Swift using the Xcode environment. We use Apple's MLVision and MLKit to take the photo and translate it into text. From there we use the Spoonacular API to fetch recipes based on the data received. ## Challenges we ran into Apple's MLVision and MLKit were tough to learn, often crashed, and were inaccurate. On top of several Xcode issues we had trouble debugging, but in the end we finally got it working. ## Accomplishments that we're proud of * Building a fully fledged iOS app using machine learning from scratch * Debugging and working as a team to produce a final product. ## What we learned * Lots about APIs, Swift coding, Xcode, and machine learning * Don't be afraid to ask questions * Sometimes you spend more time debugging than writing actual code. ## What's next for RecipEasy We would love to add: * Stronger UI * More accurate text recognition * Ability to access and read from photos * Provide links to the respective recipe online * Expand past Spoonacular to use a more expansive API
## Inspiration The motivation stemmed from one of our team members' problem of having to cook the same dish for themselves every day without an easy way to discover new recipes. By simply snapping a picture of the ingredients, the app retrieves a list of potential recipes that you could draw inspiration from or use to learn a new dish. Another use case is reducing food waste, where you can make the most out of any leftover ingredients from your last meal. As more and more people go out to eat, the app lets them see what they are able to cook with the ingredients they have at home instead of resorting to ordering food at a restaurant. This solution also removes the extremely time-consuming process of searching up each ingredient online for a recipe while having to identify the ingredient yourself. ## What it does A user takes a picture of each ingredient they have. The app encodes the image and sends it to our server, which calls the Azure Computer Vision AI to analyze the image. Once the image is analyzed, our database is searched for matching or similar ingredients. All the matching ingredients, the confidence, and a caption of the image are returned to the front-end (your phone) and displayed in the AR environment. Once all the ingredients are “scanned”, the user can send the list of ingredients back to our API, which finds all recipes that use any of these ingredients. Each recipe in this list contains a name, an image, and a list of instructions for how to create it. The list is displayed in the AR environment, where the user can interact with it and make a selection. ## How we built it We created an API back-end using Django and GraphQL. We have a database which stores the ingredients and recipes, queried using GraphQL. In addition, we use Microsoft Azure Computer Vision for analyzing the images and returning a JSON response describing what the image is. We deployed this API on Microsoft Azure App Service to host our back-end server. On the front-end, we created an iOS application using Swift on macOS. It calls our API when it detects a touch action to capture a snapshot, which we send to the Computer Vision service for image analysis. If it recognizes an ingredient, it adds it to the set of recognized ingredients and searches for a recipe that contains those ingredients. The name, ingredient name, and confidence are rendered in the AR environment. ## Challenges we ran into One of the biggest roadblocks we ran into was setting up the back-end API on the Microsoft Azure server, but it was quickly resolved thanks to on-site Microsoft mentors. In addition, it was difficult to come up with an algorithm and design structure to retrieve the recipes based on the recognized ingredients. We also ran into trouble finding an existing viable data set of recipes and ingredients. ## Accomplishments that we're proud of We were able to integrate the Azure environment without any prior experience. Also, we were able to solve a common problem and encourage people to save more by creating an opportunity to cook at home. ## What we learned Drawing up a plan in the beginning decreased development downtime. Azure has a variety of services that we could employ in future projects. ## What's next for ARuHungry Introduce preferences for individual users, so that the recipes returned from recognized ingredients are filtered by each user's set preferences. 
A future expansion could be integrating with grocery stores that want to advertise their products, suggesting great deals to users based on the ingredients they already have.
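To illustrate the recipe-retrieval idea described in "How we built it", here is a small sketch that ranks recipes by how well they overlap with the recognised ingredients. The data shapes and scoring are assumptions for demonstration, not the project's actual GraphQL schema or algorithm.

```python
# Illustrative ranking of recipes by how many recognised ingredients they use
# (dict shapes are assumptions, not the project's actual schema).
def rank_recipes(recognised, recipes, top_n=5):
    recognised = {i.lower() for i in recognised}
    scored = []
    for recipe in recipes:
        needed = {i.lower() for i in recipe["ingredients"]}
        overlap = recognised & needed
        if overlap:
            # favour recipes that use many of our ingredients and need few extras
            score = len(overlap) / len(needed)
            scored.append((score, sorted(needed - recognised), recipe["name"]))
    scored.sort(reverse=True)
    return [{"name": name, "missing": missing, "score": round(score, 2)}
            for score, missing, name in scored[:top_n]]

recipes = [
    {"name": "Tomato omelette", "ingredients": ["egg", "tomato", "salt"]},
    {"name": "Fried rice", "ingredients": ["rice", "egg", "soy sauce", "onion"]},
]
print(rank_recipes(["Egg", "Tomato"], recipes))
```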
## Inspiration Conventional language learning apps like Duolingo don’t offer the ability to have freeform and dynamic conversations. Additionally, finding a language partner can be difficult and costly. Lingua Franca tackles this head-on by offering intermediate to advanced language learners an immersive, interactive experience. Although other apps exist that try to do the same thing, their interaction topics are hard-coded, meaning that you find yourself in the same dialogue over and over again. By leveraging LLMs, we’re able to ensure that no two experiences are the same! ## What it does You stumble into a foreign land and must communicate with the townsfolk in order to get by. As you talk with them, you must reply by recording yourself speaking in their language. Aided by LLMs, their responses dynamically change depending on what you say. Additionally, at some points in the conversation, they will give you checkpoints that you must accomplish, which encourages you to talk to other villagers. After each of your responses, you can also see alternative phrases you could’ve said in response to the villager. Seeing these alternative responses can aid in learning vocabulary and grammar, and can help the user branch out from their usual go-to phrases in the language they are learning. Not only can you guide the conversation to whatever topic you’d like to practice, but to keep the user engaged, we’ve also added backstory to the characters in the village. Each time you talk with them, you can learn something more about their relationship with others in the village! ## How we built it Development was done in Unity3D. We used Wit.ai to capture and transcribe the user’s recorded responses. Those transcribed responses were then fed into an LLM from Together.ai, along with extra information to give context and guide the LLM to prompt the user to complete checkpoints. The response from the LLM becomes the villager’s response to the player. We created the world using assets from the Unity Asset Store, and the character models are from Mixamo. ## What we learned Developing in VR was new to all team members, so developing for the Oculus Quest and using Unity3D was a great learning experience. LLMs aren’t perfect, and working to mitigate poor, harmful, or unproductive responses is difficult. However, we took this challenge seriously while working on this app and carefully tuned our prompts to give the model the context it needed to avoid these situations. ## What's next for Lingua Franca The next steps for this app include: * Adding more languages * Adding audio feedback from the villagers in addition to text responses * Adding new locations, characters, and worlds for more variation in the experience
losing
Lots of Kitties: when you click them, they meow, and it's really cute.
## Inspiration Pat-a-cat was inspired when Aidon Lebar said "all i want to do is pet cats" and from there started a story of heroism as David Lougheed made it happen. In the corners of McHacks you could witness the cats being drawn, meows being recorded, and code being coded. ## What it does It does exactly what you think it does. You pat cats. In this game you can compete against your friends to pat as many cats as you can using keyboard controls. This is designed to help you relax to some lofi hip hop beats and pat some cats to take a break from the stress of everyday life. Once a cat is patted, you get to listen to the happy meows of hackers, sponsors, and coordinators of McHacks. With every pat, you poof a cat <3 :3 ## How we built it David Lougheed and Allan Wang made the gameplay mechanics happen using JavaScript, TypeScript, and HTML. Alvin Tan was in charge of the music and boom box mechanics. Elizabeth Poggie designed the fonts, did the graphic design through OneNote, and partook in meow solicitation. As well, we have some honorable mentions for those who helped record the meows: thank you to Will Guthrie, Aidon Lebar, and Jonathan Ng! Finally, thank you to everyone who gave their meows for the cause and made some art for the picture frame above the couch! ## Controls WASD to move Player 1, E to pat with Player 1. IJKL to move Player 2, O to pat with Player 2. ## Challenges we ran into As we wanted to maximize aesthetics, we had some high-resolution assets; this posed a problem when they were loaded on some devices. As we were on a time constraint, we had a backlog of features in mind that we later added on top of a working prototype. This resulted in some tightly coupled code, as we didn't take the time to make full design docs and specs. ## Accomplishments that we're proud of We are proud that CTF (McGill Computer Taskforce) united together as one to create a project we are proud of. As well, we are also thrilled to have had the chance to talk to 100 different people, make a working final product, and put our skills to the test to create something fantastic. ## What we learned * Friendship <3 * Game mechanics * The HTML canvas * How to make custom fonts ## What's next for Pat-a-Cat We want to introduce rare cats, power-ups, web hosting to allow people to play against each other on different computers, and different levels with new lofi hip hop beats.
## Inspiration The remote control of the car is intended specifically to help those with disability, or other restrictions in movement, to play with their pet from a fixed place with a toy more dynamic than the classic laser pointer. ## What it does JERRY is an RC car and toy for cats. Attached to its tail is a small snack for the cat (Tom, if you will). Users control JERRY from the comfort of their chair, bed, and practically anywhere that is comfortable and suitable for their needs, and monitors from their computer. ## How we built it This cat-entertaining vehicle is made with a foam-board chassis and hot glue. It is powered by a LiPo battery and runs on two DC motors controlled with a RC circuit board and an H-bridge motor driver. The video feed is sent from the camera via a 5.8ghz video transmitter to the computer where it is displayed on the monitor. ## Challenges we ran into We initially struggled with getting the cat to follow the car, perhaps because of the unfamiliar look. We mitigated this by attaching a small treat to prompt the cat to follow it. ## What's next for JERRY Mitigating the problem with JERRY’s speed was difficult, given our current time and resources. This would be something we can improve on by using a larger base for stability and a stronger, more balanced exterior created through CAD. Along with the speed, we would work on designing a more friendly exterior, if given time. *\*We have no git repository as our project required no programming. All components of JERRY were assembled during the hacking period. Our working base with edit history is our team google drive folder and documents, which can be provided if needed.*
partial
## Inspiration Are you out in public but scared about people standing too close? Do you want to catch up on the social interactions at your cozy place but do not want to endanger your guests? Or you just want to be notified as soon as you have come in close contact to an infected individual? With this app, we hope to provide the tools to users to navigate social distancing more easily amidst this worldwide pandemic. ## What it does The Covid Resource App aims to bring a one-size-fits-all solution to the multifaceted issues that COVID-19 has spread in our everyday lives. Our app has 4 features, namely: - A social distancing feature which allows you to track where the infamous "6ft" distance lies - A visual planner feature which allows you to verify how many people you can safely fit in an enclosed area - A contact tracing feature that allows the app to keep a log of your close contacts for the past 14 days - A self-reporting feature which enables you to notify your close contacts by email in case of a positive test result ## How we built it We made use primarily of Android Studio, Java, Firebase technologies and XML. Each collaborator focused on a task and bounced ideas off of each other when needed. The social distancing feature functions based on a simple trigonometry concept and uses the height from ground and tilt angle of the device to calculate how far exactly is 6ft. The visual planner adopts a tactile and object-oriented approach, whereby a room can be created with desired dimensions and the touch input drops 6ft radii into the room. The contact tracing functions using Bluetooth connection and consists of phones broadcasting unique ids, in this case, email addresses, to each other. Each user has their own sign-in and stores their keys on a Firebase database. Finally, the self-reporting feature retrieves the close contacts from the past 14 days and launches a mass email to them consisting of quarantining and testing recommendations. ## Challenges we ran into Only two of us had experience in Java, and only one of us had used Android Studio previously. It was a steep learning curve but it was worth every frantic google search. ## What we learned * Android programming and front-end app development * Java programming * Firebase technologies ## Challenges we faced * No unlimited food
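For intuition, here is a back-of-the-envelope version of the social distancing trigonometry described above: if the phone is held at height h and tilted down by an angle θ below the horizontal, the spot on the ground it points at is roughly h / tan(θ) away, which can then be compared against 6 ft. The angle convention and the Python form are assumptions for illustration; the app itself reads these values from the device sensors in its Java code.

```python
# Back-of-the-envelope version of the 6 ft check: phone at height h, tilted
# down by theta below the horizontal, points at a spot roughly h / tan(theta)
# away. (Angle convention is an assumption, not the app's exact code.)
from math import radians, tan

SIX_FEET_M = 1.83

def pointed_distance_m(height_m, tilt_below_horizontal_deg):
    return height_m / tan(radians(tilt_below_horizontal_deg))

def is_far_enough(height_m, tilt_deg):
    return pointed_distance_m(height_m, tilt_deg) >= SIX_FEET_M

# Phone held at 1.4 m, tilted 30 degrees below horizontal -> ~2.4 m away.
print(round(pointed_distance_m(1.4, 30), 2), is_far_enough(1.4, 30))
```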
## Inspiration Our inspiration for this project was our experience as students. We believe students need a more digestible feed when it comes to due dates. Having to manually plan for homework, projects, and exams can be annoying and time-consuming. StudyHedge is here to lift the scheduling burden off your shoulders! ## What it does StudyHedge uses your Canvas API token to compile a list of upcoming assignments and exams. You can configure a profile detailing personal events, preferred study hours, number of assignments to complete in a day, and more. StudyHedge combines this information to create a manageable study schedule for you. ## How we built it We built the project using React (Front-End), Flask (Back-End), Firebase (Database), and Google Cloud Run. ## Challenges we ran into Our biggest challenge resulted from difficulty connecting Firebase and FullCalendar.io. Due to inexperience, we were unable to resolve this issue in the given time. We also struggled with using the Eisenhower Matrix to come up with the right formula for weighting assignments. We discovered that there are many ways to do this. After exploring various branches of mathematics, we settled on a simple formula (Rank = weight / time^2). ## Accomplishments that we're proud of We are incredibly proud that we have a functional Back-End and that our UI is visually similar to our wireframes. We are also excited that we performed so well together as a newly formed group. ## What we learned Keith used React for the first time. He learned a lot about responsive front-end development and managed to create a remarkable website despite encountering some issues with third-party software along the way. Gabriella designed the UI and helped code the front-end. She learned about input validation and designing features to meet functionality requirements. Eli coded the back-end using Flask and Python. He struggled with using Docker to deploy his script but managed to conquer the steep learning curve. He also learned how to use the Twilio API. ## What's next for StudyHedge We are extremely excited to continue developing StudyHedge. As college students, we hope this idea can be as useful to others as it is to us. We want to scale this project and eventually expand its reach to other universities. We'd also like to add more personal customization and calendar integration features. We are also considering implementing AI suggestions.
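As a tiny illustration of the ranking formula mentioned above (Rank = weight / time^2), the sketch below scores a few assignments and sorts them. The field names, and the choice of "hours until due" as the time term, are assumptions for demonstration rather than StudyHedge's actual schema.

```python
# The rank formula described above, applied to a list of assignments.
# Field names and the "hours until due" interpretation of "time" are assumed.
def rank(assignment):
    return assignment["weight"] / assignment["hours_until_due"] ** 2

assignments = [
    {"name": "Calc problem set", "weight": 5,  "hours_until_due": 24},
    {"name": "CS project",       "weight": 20, "hours_until_due": 96},
    {"name": "History essay",    "weight": 15, "hours_until_due": 48},
]

# Higher rank = schedule sooner.
for a in sorted(assignments, key=rank, reverse=True):
    print(f'{a["name"]}: rank {rank(a):.4f}')
```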
## Submission Links: YouTube: * <https://www.youtube.com/watch?v=9eHJ7draeAY&feature=youtu.be> GitHub: * Front-End: <https://github.com/manfredxu99/nwhacks-fe> * Back-End: <https://github.com/manfredxu99/nwhacks-be> ## Inspiration Imagine living in a world where you can feel safe when going to a specific restaurant or public space. COVIDMAP helps you see which areas and restaurants have the most people who have been in close contact with COVID patients. Other apps only tell you after someone is confirmed or suspected to have COVID. However, with COVIDMAP you can tell in advance whether or not you should go to a specific area. ## What it does In COVIDMAP you can look up the location you are thinking of visiting and get an up-to-date report on how many confirmed cases have visited the location in the past 3 days. With the colour codes indicating the severity of the COVID cases in the area, COVIDMAP is an easy and intuitive way to find out whether or not a grocery store or public area is safe to visit. ## How I built it We started by building the framework. We built it using React Native for the front-end, an ExpressJS backend server, and a Google Cloud SQL server. ## Challenges I ran into Maintaining proper communication between the front-end and back-end, and writing stored procedures in the database for advanced SQL queries. ## Accomplishments that I'm proud of We are honoured to have the opportunity to contribute to one of the main health and safety concerns, by creating an app that helps our fellow citizens reduce their worries about being exposed to COVID patients. Moreover, on the technical side, we successfully maintained the front-end to back-end communication, as our app fetches and stores data properly within this short time span of 24 hours. ## What I learned We have learnt that creating a complete app within 24 hours is fairly challenging, as we needed to distribute time well to brainstorm app ideas, design and implement the UI, manage data, etc. This hackathon also further enhanced our ability to practice teamwork. ## What's next for COVIDMAP We hope to implement this app locally in Vancouver to test out the usability of this project. Eventually we wish to help hotspot cities reduce their cases.
winning
# PunMe ## Overview The PunMe app allows a user to take a photo or upload a photo and receive back a pun generated based on the subject of the photo. ## Mobile Application The mobile application, built for iOS iPhone, iTouch and iPad in Swift 3, calls our API using the Alamofire networking framework and sends a photo in the form of a jpeg image, which the user can either take directly or select from his/her camera roll. From there, the app receives a JSON with a pun and a key word which appears in the pun, which is then displayed to the user along with the image. ## Backend Our backend server is a RESTful API built in Java Spring and deployed to Amazon Web Services using Boxfuse. When an image is received, it calls [Microsoft's Computer Vision API](https://www.microsoft.com/cognitive-services/en-us/computer-vision-api) which returns a caption describing the contents of the image, among other information about the image. We then use [Google's Cloud Natural Language Processing API](https://cloud.google.com/natural-language/) for syntax analysis of the caption. After this, we perform a variety of custom checks to determine key subjects of the image input by the user, and consequently the direct subject. We then use this word to search <punoftheday.com> for puns relating to the subject of the image. ## Website PunMe is available at <punme.net>.
## Inspiration While trying to decide on a project, we were joking about building an app to roast your friends. Saturday morning after one idea failed, we decided to work on the bot as something fun until we found a 'real' idea. Eventually it became the real project. ## What it does @roastmebot will reply to any user that says "roast me" or the name of the bot. It replies with a roast that has been scraped from r/roastme on Reddit, as well as a roast that has been generated through Markov chains. Additionally, a sentiment analysis is performed on each statement and their respective scores are printed. ## How we built it The bot is built using the 'slackbots' API in Node.js. A custom web scraper scrapes the top posts in the roastme subreddit for comments, finds all the comments with a score greater than 4, then sorts them by upvotes. The 'markov-chains' npm module was used to generate the custom roasts. One of our team members wrote a custom wrapper that uses Slack's API for mentions, because the slackbots API does not support mentioning. ## Challenges we ran into Many of our functions are asynchronous, so handling their successful (and expected) operations was a huge challenge. We definitely learned a lot about how callbacks work. Finding a library for the Markov chains that worked was difficult, and assembling the necessary data in the proper format (an array of single-dimension arrays containing roasts) was challenging. Implementing a mention system was difficult as well, due to the lack of support from our chosen bot-building API. Small changes to bot properties, such as the name, led to hard-to-debug errors as well. ## Accomplishments that we're proud of We turned a silly project that could have been called finished at the first message sent into something meaningful that took a lot of effort and has hilarious results. The Markov chain roasts are often harsher than the Reddit roasts, leading to shocking, yet amusing, results. ## What we learned Nested callbacks are hell. Not every npm module is created equal. Reddit's denizens are incredibly witty and acerbic, making for very interesting roasts and generated roasts. According to our sentiment analysis, (very) sarcastic posts are generally about as positive as the (very) mean posts are negative. While the algorithm has trouble differentiating sincerity from sarcasm, this interesting correlation helps us notice it in the data. ## What's next for Roastbot Assuming that the Slack integration app store is OK with non-PC topics, we would love to submit it and allow other teams to roast each other. Another goal is setting up more user-friendly configuration, and also creating a custom command (/roast @John for example) that will directly roast a user, without involving the bot. Our scraper, Markov chain algorithm, and sentiment analysis are also not tied to Slack, so we could extend the project to Twitter, Facebook, etc. as well.
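The bot itself uses the 'markov-chains' npm module in Node.js; purely as an illustration of the underlying idea, here is a minimal word-level Markov chain over a toy roast corpus. The corpus, start-word handling, and generation limit are made-up assumptions, not the bot's implementation.

```python
# Illustration only: a toy word-level Markov chain, not the bot's actual
# Node.js 'markov-chains' implementation.
import random
from collections import defaultdict

def build_chain(roasts):
    chain, starts = defaultdict(list), []
    for roast in roasts:
        words = roast.split()
        starts.append(words[0])
        # map each word to the words that can follow it; None ends the roast
        for current, nxt in zip(words, words[1:] + [None]):
            chain[current].append(nxt)
    return chain, starts

def generate(chain, starts, max_words=20):
    word, out = random.choice(starts), []
    while word is not None and len(out) < max_words:
        out.append(word)
        word = random.choice(chain[word])
    return " ".join(out)

corpus = [
    "your code reviews itself out of pity",
    "your commit history reads like a cry for help",
    "even the linter gave up on your style",
]
chain, starts = build_chain(corpus)
print(generate(chain, starts))
```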
## Inspiration Every project aims to solve a problem and address people's concerns. When we walk into a restaurant, we are often disappointed that not many photos are printed on the menu, because we are always eager to find out what a dish looks like. Surprisingly, including a nice-looking picture alongside a food item increases sales by 30%, according to Rapp. So it's a big inconvenience for customers if they don't understand the name of a dish. This is the problem we are aiming at! This is where we come in! We want to create a better impression on every customer and build a more customer-friendly restaurant culture. We want every person to immediately know what they would like to eat and to get a first impression of a specific dish in a restaurant. ## How we built it We mainly used ARKit, MVC, and various APIs to build this iOS app. We first start by entering an AR session, and then we crop the image programmatically to feed it to the OCR service from Microsoft Azure Cognitive Services. It recognizes the text from the image, though not perfectly. We then feed the recognized text to Azure's Spell Check to further improve the quality of the text. Next, we use the Azure Image Search service to look up the dish image from Bing, using Alamofire and SwiftyJSON to fetch the image. We create a virtual card using SceneKit and place it above the menu in the AR view. We use Firebase as the backend database and for authentication. We built some interactions between the virtual card and users so that users can see more information about the ordered dishes. ## Challenges we ran into We ran into various unexpected challenges when developing augmented reality and using APIs. First, there is very little documentation about how to use Microsoft APIs in iOS apps. We learned how to use third-party libraries for building HTTP requests and parsing JSON files. Second, we had a really hard time understanding how augmented reality works in general, and how to place a virtual card within SceneKit. Last, we were challenged to develop the same project as a team! It was the first time each of us was pushed to use Git and GitHub, and we learned so much about branches and version control. ## Accomplishments that we're proud of Having learned Swift and iOS development for only one month, we created our very first AR app. This was a big challenge for us, and we still chose a difficult and high-tech field, which is what we are most proud of. In addition, we implemented lots of APIs and created a lot of "objects" in AR, and they both work perfectly. We encountered a few bugs during development, but we worked to fix them all. We're proud of combining some of the most advanced technologies in software, such as AR, cognitive services, and computer vision. ## What we learned During development, we learned how to create our own AR model, how the AR scene is structured, and how to combine different APIs to achieve our main goal. First of all, we improved our ability to code in Swift, especially for AR. Creating objects in the AR world taught us the tree structure of the scene and the relationships between parent nodes and their child nodes. What's more, we got to learn Swift more deeply, specifically its MVC model. Last but not least, the bugs taught us how to solve problems as a team and how to reduce the chance of buggy code next time. Most importantly, this hackathon showed us the strength of teamwork. 
## What's next for DishPlay We want to build more interactions with ARKit, including displaying a collection of dishes on a 3D shelf, or cool animations that let people see how their favorite dishes were made. We also want to build a large-scale database for comments, ratings, and any other related information about dishes! We are happy that Yelp and OpenTable bring us closer to restaurants. We are excited about our project because it will bring us closer to our favorite food!
losing
## Inspiration Always wanted to build a hack for a social good, which is why I wanted to come to DeltaHacks! I think this project in particular is really nice because it encourages people to get out of their comfort zone and meet new people, all while living a healthier lifestyle. More than just rewarding the participants, they get to bolster community events and bring happiness in a time where it may be hard to find. Hence, we chose to tackle mainly the Triangle Challenge of 'Exercise for the Community'. ## What it does The idea is really simple. Build a scalable web application that offers community-based portfolios of events. So that's what we did. People can sign up either as members or community administrators, whereby they can all make events (physical activities such as sports, yoga, aerobics, etc.). The more events someone goes to, the more points they aggregate, which they can then redeem for vouchers --- or rather, we thought it would be better to have a community award of some sort, such as recognition for the person with the most points, for being the one out there trying to get the community and the youth engaged. ## How we built it DeltaFit (repo name - FitCommunity) is built with Python 3.7.1 on a Django 2.7.1 MVC architecture, with the backend database built with PostgreSQL 11.1! The front end is all custom HTML/CSS/JavaScript. ## Challenges we ran into A critical challenge was incorporating the FitBit SDK into our idea. Our original plan was to use the SDK to track each individual's and the overall community's health trends, and provide reasonable analytics for better event planning and personal encouragement. However, the FitBit we had was not compatible with web applications, so we had to move forward from there. We then figured out a way to use the CSV files that FitBit allows its users to download, and combine them with the power of Plotly to generate very beautiful and accurate analytics on pillars of health such as sleep, steps, and calories burnt. All it required was a bit more user responsibility, and clever use of HTML to get the graphs into the personal accounts of our users. However, due to the late start, we were not able to merge completely before the submission of this devpost, but please feel free to check out the only other branch on our repo, where you can see what our prospects were :) ## Accomplishments that we're proud of As mentioned above, we are really proud of the Plotly integration despite a faulty FitBit, and we are also proud of developing a relatively robust Django platform in just about 21 hours. We're also proud that we managed to put aside our tired eyes and keep on hacking for social change. It's been a really fun time, and we are really glad to have participated. ## What we learned No matter how long 24 hours seems, it really is not that long | Communication with your team is very important, and so is proper delegation of tasks | No matter how bleak a bug may seem, there's always a way out of it or around it! ## What's next for FitCommunity Hopefully, we can eventually find a way for seamless and proper FitBit integration. We do want to take off as much of the user-side responsibility as possible to ensure that hassles are kept at a minimum. We also want to open source this software and potentially exhibit this idea to other communities around the world who too can benefit from a healthy dose of socializing... and sweating ;) ! Thank you for your time, and for your consideration! Best, Gaurav Karna Abijith Mani Rushil Malik Maariz Almamun :)
## Inspiration Our inspiration stems from the desire to craft something sustainable and innovative. Recognizing the significance of self-care, we set out to create a platform that embraces the uniqueness of each individual's journey to wellness. Our belief is grounded in the understanding that the path to well-being is deeply personal and should be tailored to the distinct needs of every individual. ## What it does **Empowering Personal Growth**: - Build an app that empowers individuals on their journey toward personal growth and self-improvement. - Provide tools and features that facilitate users in setting and achieving their wellness goals. **Community Connection**: - Foster a sense of community by incorporating features that allow users to connect, share achievements, and support each other. - Create a platform where users can join groups with similar interests, fostering a supportive and motivating environment. **Positive Impact on Local Businesses**: - Integrate a system that not only benefits users but also positively impacts local businesses. - Consider partnerships with local establishments to offer exclusive discounts or coupons to users achieving certain milestones. **Gamification for Motivation**: - Utilize gamification elements to make the wellness journey more enjoyable and motivating. - Reward users with points, badges, or virtual incentives for completing tasks, achieving goals, and actively participating in the community. ## How we built it Frontend: Angular.js, React, HTML, JavaScript. Backend: Python, MySQL, ML, HTML. ## Challenges we ran into We faced challenges while implementing the Google Fit API, as acquiring the OAuth client ID was a task none of us had previously encountered. This aspect proved to be both challenging and time-consuming for our team. ## Accomplishments that we're proud of We take pride in successfully incorporating AI to enrich and support individuals on their journey toward well-being. Our achievement is reflected in offering a service that not only benefits individuals but also has a positive impact on the surrounding community. ## What we learned **API Implementation**: - Overcoming the hurdle of implementing the Google Fit API proved challenging. Obtaining the OAuth client ID was a novel task for our team, leading to a significant learning curve and consuming valuable time. **Machine Learning**: - We significantly improved our capabilities in creating machine learning models. This encompassed the adept selection and utilization of data for training purposes. **Model Training**: - Our learning journey included acquiring the skills to train models, ranging from image-based to audio-based models. We recognized the importance of strategic decisions, such as the choice of overlapping coefficients, in achieving optimal model performance. ## What's next for FITnFLEX * Implement the exercise tracker for all sorts of exercises, not only push-ups but also sit-ups, squats, jumping jacks, etc. * Create a group system where people can share their achievements and completed tasks with friends and congratulate one another. * Expand it to a mobile app and make it more social: people could sign up for special events with lots of consecutive tasks to complete, plus leaderboards. * Offer the possibility to organize groups based on specific objectives and/or shared interests. * Offer a personalised chatbot which helps guide people to sign up for challenges and/or groups depending on their interests.
## Inspiration We saw the sad reality that people attending hackathons often don't exercise regularly or do so while coding. We decided to come up with a solution. ## What it does Lets a user log in and watch short fitness videos of exercises they can do while attending a hackathon. ## How we built it We used HTML & CSS for the frontend, Python & SQLite3 for the backend, and Django to merge all three. We also deployed a DCL worker. ## Challenges we ran into Learning Django and authenticating users in a short span of time. ## Accomplishments that we're proud of Getting a functioning web app up in a short time. ## What we learned How to design a website, how to deploy a website, simple HTML, Python objects, Django header tags. ## What's next for ActiveHawk We want to make ActiveHawk the TikTok of hackathon fitness. We plan on adding more functionality for apps as well as a live chat room for instructors using the Twello API.
losing
## Inspiration The idea for devDucky came from the classic rubber duck debugging technique, where programmers explain their code to a rubber duck to find solutions. We thought: What if the duck could talk back? Imagine a duck that not only listens but also warns you about errors, suggests optimizations, and acts as a knowledgeable pair programmer. This led us to envision a smart duck powered by a fine-tuned LLM that combines hardware and software, constantly monitoring your code - it's even proven better than Copilot! ## What it does devDucky is your intelligent duck friend & IDE that sits on your desk observing your codebase. It gives you suggestions, diagnostics, fixes and anything else you might need! ## How we built it We built devDucky from the ground up - literally! We fine-tuned the brains behind devDucky using Unsloth, used a MERN stack to create the frontend, along with Python and Flask for the backend. Here's a breakdown of what everything does. * Unsloth: We used Unsloth to fine-tune and quantize our model. We fine-tuned three different models (llama3.1 @ 375 steps, tinyllama @ 1 epoch, phi3 @ 375 steps) before settling on phi3 due to hardware constraints. We utilized the alpaca-cleaned 52k dataset by Yahma, and quantized all of our models to Q4\_K\_M. * Express: Our Express implementation forms the backbone of our API architecture, handling backend logic for audio recording and data analysis/cleaning processes. * Node: We used Node to power our server-side operations, providing a fast and scalable foundation for our backend. * Vite: This is the main piece of our frontend. Vite is a key factor in enabling rapid UI implementation and efficient routing across our interface. * Flask: We ran Flask as a dedicated microservice, managing our Python-based backend components, mainly the Ollama integration. * Ollama: We used the Ollama python library along with the Ollama cli and desktop instance to run inference on our model. * Mongoose: This streamlines our database operations, storing critical backend information, LLM responses, and user transcripts with efficiency and reliability. ## Challenges we ran into We had originally planned to use an RP2040 to give devDucky some ears, but it turns out there was no compatible microphone hardware. Luckily, we had an Arduino nano with a built-in mic, and 8 hours of troubleshooting later - it was working! Except, there was tons of static interference... So, our final option was to use a USB microphone, which worked perfectly! We also ran into issues with our laptops not being powerful enough to run our heavily fine-tuned models, forcing us to adapt and fine-tune phi3, a lightweight model, at the last minute. ## Accomplishments that we're proud of We're very proud of devDucky's efficiency compared to other code assistants like Copilot. When compared to Copilot, devDucky's base model is approximately 15% more efficient (without RAG or fine-tuning). With fine-tuning and RAG in the picture, expect that figure to be closer to 35%! We're also very proud of the fact that NONE of your data gets sent to a third party - devDucky's model is fully local! It's free, private, and fast - what more could you ask for? ## What we learned Throughout the development of devDucky, everyone on the team learned a lot. Everyone stuck to parts of the project that fit their own expertise - the biggest thing that came from this was proper file handling of file structure. 
Surprisingly, we barely had any merge conflicts, mainly due to us having a few different branches open at once. We also learned the importance of testing hardware before implementing it, because you never know if a component will be faulty, and we could have saved at least 8 hours of troubleshooting a faulty microphone. Lastly, find an idea you all agree with - we mulled over ideas for a while, but in the end it was worth it because we ended up finding something that we were all passionate about. ## What's next for devDucky The next steps for devDucky are moving model hosting to servers which would enable us to use even bigger models like codegeex4-all-9b as well as run inference faster, leading to an all around smoother experience. Aside from that, integrating the app with something like Datadog to enable a higher level of observability is another high priority. We have many small tweaks to make as well, but those are the main things!
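A rough sketch of what the local inference step described in devDucky's "How we built it" might look like through the Ollama Python library: send the current code plus the transcribed question to the locally served, fine-tuned model and return its reply. The model tag and prompt wording are assumptions, not devDucky's actual prompt or configuration.

```python
# Rough sketch (model tag and prompt wording are assumptions): send the current
# file plus the transcribed question to a locally served fine-tuned phi3 model
# via the Ollama Python library. Requires a running Ollama server.
import ollama

def review(code, question, model="devducky-phi3"):
    prompt = (
        "You are a rubber-duck pair programmer. Review the code and answer "
        f"the developer's question.\n\nCode:\n{code}\n\nQuestion: {question}"
    )
    response = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]

print(review("def add(a, b): return a - b", "why do my addition tests fail?"))
```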
## Inspiration We are firm believers in the saying "time is money," and one major source of time spent by developers is in the pull request review process. As aspiring software engineers, we aimed to find a way for developers to spend their time more efficiently. From this, we developed a suite of developer tools we call **DevBot**. ## Function For context, a developer submits a pull request when pushing their changes to production, on which dozens of checks are run to ensure it is up to standard. Often, these checks fail due to simple things such as linting errors or bad tests. Upon failure, they would have to go through the console output, fix what failed locally, and then push those changes remotely. Our first step is fixing these build failures, upon which **BuildBot** automatically parses the console output for what errors occurred and where. **BuildBot** then generates fixes for these errors and suggests the changes on the pull request for a developer to accept or reject. Our next tool is **PR Buddy**, which assists developers in the review of pull requests. Upon triggering **PR Buddy** with the command "/prbuddy", it automatically fetches the files changed in the pull request and reviews the code, looking for areas of improvement such as refactoring, efficiency, and error handling. **PR Buddy** then suggests these changes and discusses why it made these decisions in a conversational manner. Our last tool is **TestBot**, which automatically writes tests for code. When triggered with the "/testbot" command, it looks through all the files changed in the pull request and generates test cases for them if necessary. It then commits these new tests to the pull request branch in files labeled "{file\_name}.test.{file\_extension}". ## Development **DevBot** is written in JavaScript and bash script. It runs as a Node.js server hosted on Heroku and listens for its event triggers to automatically run its workflows. ## Challenges The biggest challenge we faced was model hallucinations. We tested models such as GPT-4o, Claude 3.5 Sonnet, and LLama3-70b. Even with highly engineered prompts, these models produced inconsistent outputs when given a prompt and a code snippet. ## Accomplishments This is our first project purely working on the backend. There were a lot of issues we had never encountered before, but we managed to push through. Consequently, we have achieved a working demo very close to what we expected coming into this hackathon. ## What We Learned We learned a lot about the pull request review process, as we had to understand every part of it to build our product. In addition, we got very familiar with CI/CD and DevOps workflows, as it is a core part of **DevBot**. ## What's Next for DevBot We want more compute to fine-tune and deploy our own models that are much better at programming. We also want to expand our product to include other CI platforms to enable more developers to use our product.
## Inspiration: Millions of active software developers use GitHub to manage their software development; however, our team believes the platform lacks an incentive and engagement factor. As GitHub users, we are aware of this problem and want to apply a twist that can open doors to a new, innovative experience. We also considered ways to make GitHub more accessible, such as addressing language barriers, but ultimately decided that direction wasn't useful or creative enough. Our official final project was instead inspired by applying something to GitHub while looking at youth entering CS. Combining these ideas, our team came up with DevDuels. ## What It Does: Introducing DevDuels! A web-based video game whose goal is making GitHub more entertaining to users. One of our target audiences is the rising younger generation that may struggle to learn to code or enter the coding world due to complications such as a lack of motivation or resources. Keeping in mind the high rate of video gaming among youth, we created a game that hopes to introduce users to the world of coding (how to improve, open source, troubleshooting) with a competitive aspect that leaves users wanting to use our website beyond the first time. We've applied this to our 3 major features: immediate AI feedback on code, Duo Battles, and a leaderboard. In more depth, our AI feedback looks at the given code and analyses it. Shortly after, it provides a rating out of 10 and comments on what is good and bad about the code, such as the syntax and conventions. ## How We Built It: The web app is built using Next.js as the framework. HTML, Tailwind CSS, and JS were the main languages used in the project’s production. We used MongoDB to store information such as user account info, scores, and commits, which were pulled from the GitHub API using octokit. The Langchain API was utilised to help rate the commit code that users sent to the website while also providing its rationale for said rankings. ## Challenges We Ran Into The first roadblock our team experienced occurred during ideation. Though we generated multiple problem-solution ideas, our main issue was that the ideas either were too common in hackathons, had little ‘wow’ factor that could captivate judges, or were simply too difficult to implement given the time allotted and team member skillset. While working on the project specifically, we struggled a lot with getting MongoDB to work alongside the other technologies we wished to utilise (Langchain, GitHub API). The frequent problems with getting the backend to work quickly diminished team morale as well. Despite these shortcomings, we consistently worked on our project down to the last minute to produce this final result. ## Accomplishments that We're Proud of: Our proudest accomplishment is being able to produce a functional game following the ideas we’ve brainstormed. When we were building this project, the more we coded, the less optimistic we became about completing it. This was largely due to the sheer number of error messages and the lack of progress we observed for an extended period of time. Our team was incredibly lucky to have members with such high perseverance, which allowed us to continue working, resolving issues, and rewriting code until features worked as intended. 
## What We Learned: DevDuels was the first step into the world of hackathons for many of our team members. As such, there was a lot of learning throughout this project's production. During the design and ideation process, our members learned a lot about website UI design and Figma. Additionally, we learned HTML, CSS, and JS (Next.js) in building the web app itself. Some members learned APIs such as Langchain while others explored the GitHub API (Octokit). All the new hackers navigated the GitHub website and Git itself (push, pull, merge, branch, etc.). ## What's Next for DevDuels: DevDuels has a lot of potential to grow and become more engaging for users. This could come through additional features added to the game, such as weekly/daily goals that users can complete, the ability to create accounts, and advancing beyond just a pair connecting with one another in Duo Battles. Within these 36 hours, we worked hard on the main features we believed were the best. These features can be improved with more time and thought put into them, with changes ranging from reconfiguring our backend to fixing our user interface.
losing
## Inspiration We decided to tackle this project because we wanted to compete in the JetBlue Challenge. The JetBlue Challenge entails navigating through any & all public domain and social media data to get customer feedback of JetBlue Airlines. ## What it does The main objective of our project is to compile the customer feedback for a given business in a specific location found on Google Reviews and searching through the individual comments to analyze the overall feedback. This works by having the user input the address of their choosing. Said address is then used to extract the data containing the feedback from the Google Places API and subsequently dumps the data onto a ".json" file. ## How I built it We used Google Places API in our program to collect reviews for businesses. Since this API has its limitations (e.g. we can only retrieve the top 5 reviews per location) we collected the address thanks to Google Places and relayed that information to an Actor for it to scan through all the information and create a .JSON file with everything in it. Afterward, we took that information and, with the help of Google's Natural Language API, ranked the reviews by their sentiment (i.e. how negative or positive each one is) and displayed the data we collected in graphs on our page. ## Challenges we ran into The biggest challenge for building ReviewBot was retrieving the data for the feedback, which was mainly due to the limitations of the Google Places API. ## Accomplishments that we are proud of We are pleased to have built an asynchronous web application using libraries that we have never used before. Despite encountering a couple of bumps along the way, our web application is a testament to the hard work we put into it. ## What we learned While working on ReviewBot, we had the opportunity to interact with both sponsors and other participants. Both allowed us to gain greater knowledge about the topics we wanted to implement in ReviewBot. It also gave us a greater appreciation for both parties, as we were glad to see others who displayed as much passion as we did. ## What's next for ReviewBot There are numerous ways that ReviewBot can be updated. Firstly, an upgrade in speed would vastly improve ReviewBot's functionality and performance. Secondly, although ReviewBot uses Google Review to gather the data for the feedback, the implementation of other social media platforms, such as Facebook or Twitter, would push ReviewBot to garner more feedback. Finally, allowing users to create a personal account for the web application would be worth working upon, as users would be able to save their searches onto their accounts, which would boost ReviewBot's reusability.
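For reference, a minimal sketch of the review-plus-sentiment pipeline described above might look like the following. This is illustrative only, not the project's actual code: the place ID, API key, function names, and output file are assumptions, and the same Google Places Details and Natural Language sentiment endpoints are used as described.

```python
# Sketch: fetch a place's reviews from the Google Places API, score each with the
# Natural Language API's sentiment endpoint, and dump the ranked results to JSON.
import json
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"  # placeholder, not a real key

def fetch_reviews(place_id):
    """Return the (up to 5) reviews that Google Places exposes for a place."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/details/json",
        params={"place_id": place_id, "fields": "name,reviews", "key": API_KEY},
    )
    return resp.json().get("result", {}).get("reviews", [])

def sentiment_score(text):
    """Score text from -1 (negative) to +1 (positive) with the NL API."""
    resp = requests.post(
        "https://language.googleapis.com/v1/documents:analyzeSentiment",
        params={"key": API_KEY},
        json={"document": {"type": "PLAIN_TEXT", "content": text}},
    )
    return resp.json()["documentSentiment"]["score"]

def rank_reviews(place_id, out_path="reviews.json"):
    reviews = fetch_reviews(place_id)
    for r in reviews:
        r["sentiment"] = sentiment_score(r["text"])
    reviews.sort(key=lambda r: r["sentiment"])  # most negative feedback first
    with open(out_path, "w") as f:
        json.dump(reviews, f, indent=2)
    return reviews
```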
## Inspiration We help businesses use machine learning to take control of their brand by giving them instant access to sentiment analysis for reviews. ## What it does Reviews are better when they are heard. We scrape data from Yelp, run the reviews through our ML model, and allow users to find and access these processed reviews in a user-friendly way. ## How we built it For the back end, we used Flask, Celery workers, Docker, and TensorFlow for our machine learning model. For the front end, we used React, Bootstrap, and CSS. We scraped the Yelp data and populated a local MongoDB server with it. We run periodic Celery tasks to process the scraped data in the background and save the sentiment analysis in the database. Our TensorFlow model is deployed on the GCP AI Platform, and our backend uses the specified version. ## Challenges we ran into * Learning new technologies on the fly during the day of the hackathon; also, communication barriers and deployment for the machine learning model * Training, building, and deploying a machine learning model in a short time * Scraping reviews in mass amounts and loading them into the database * The frontend took a while to make ## Accomplishments that we're proud of * Getting a working prototype of our product and learning a few things along the way * Deploying a machine learning model to GCP and using it * Setting up async workers in the backend * Performing sentiment analysis on over 8.6 million reviews for almost 160,000 businesses ## What we learned * Deploying ML models * Performing async tasks on the backend side ## What's next for Sentimentality * Provide helpful feedback and insights for businesses (actionable recommendations!). * Perform more in-depth and complex sentiment analysis, with the ability to recognize competitors. * Allow users to mark wrong sentiments (and correct them). Our models aren't perfect; we have room to grow too! * Scrape more platforms (Twitter, Instagram, and other sources). * Allow users to write a review and receive sentiment analysis from our machine learning model as feedback. * Allow filtering businesses by location and/or city.
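A minimal sketch of the background-processing idea described above (a Celery worker that periodically scores unprocessed reviews from MongoDB and writes the results back) could look like this. The broker URL, collection layout, and the `predict_sentiment` helper are assumptions standing in for the deployed TensorFlow model, not the project's actual code.

```python
# Sketch: a periodic Celery task pulls unscored Yelp reviews from MongoDB,
# runs them through a sentiment model, and saves the scores back.
from celery import Celery
from pymongo import MongoClient

app = Celery("sentiment", broker="redis://localhost:6379/0")
db = MongoClient("mongodb://localhost:27017")["yelp"]

def predict_sentiment(texts):
    """Placeholder for the deployed model (e.g. a call to GCP AI Platform)."""
    return [0.5 for _ in texts]  # dummy scores in [0, 1]

@app.task
def score_pending_reviews(batch_size=500):
    pending = list(db.reviews.find({"sentiment": {"$exists": False}}).limit(batch_size))
    scores = predict_sentiment([r["text"] for r in pending])
    for review, score in zip(pending, scores):
        db.reviews.update_one({"_id": review["_id"]}, {"$set": {"sentiment": score}})
    return len(pending)
```

Scheduling `score_pending_reviews` with Celery beat would reproduce the "periodic Celery tasks" behaviour mentioned above.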
## Inspiration Living in the big city, we're often conflicted between the desire to get more involved in our communities and the effort to minimize the bombardment of information we encounter on a daily basis. NoteThisBoard aims to bring the user closer to a happy medium by allowing them to maximize their insights at a glance. This application enables the user to take a photo of a noticeboard filled with posters and, after specifying their preferences, select the events that are predicted to be of highest relevance to them. ## What it does Our application uses computer vision and natural language processing to filter noticeboard information, delivering pertinent and relevant information to our users based on selected preferences. This mobile application lets users first choose the categories they are interested in knowing about; they can then either take or upload photos, which are processed using Google Cloud APIs. The labels generated from the APIs are compared with the chosen user preferences to display only applicable postings. ## How we built it The mobile application is made in a React Native environment with a Firebase backend. The first screen collects the categories specified by the user, which are written to Firebase once the user advances. Then, they are prompted to either upload or capture a photo of a notice board. The photo is processed using Google Cloud Vision text detection to obtain blocks of text, which are further labelled with the Google Natural Language API. The categories this returns are compared to user preferences, and matches are returned to the user. ## Challenges we ran into One of the earlier challenges encountered was properly parsing the fullTextAnnotation retrieved from Google Vision. We found that two posters whose text was aligned, despite being contrasting colours, were mistaken as being part of the same paragraph. The JSON object had many subfields, which took a while to make sense of from the terminal in order to parse it properly. We further encountered trouble retrieving data back from Firebase as we switched from the first to the second screen in React Native, and in finding the proper method of comparing categories to labels before the final component is rendered. Finally, some discrepancies in loading these Google APIs in a React Native environment, as opposed to Python, limited access to certain technologies, such as ImageAnnotation. ## Accomplishments that we're proud of We feel accomplished in having been able to use RESTful APIs with React Native for the first time. We kept energy high and incorporated two levels of intelligent processing of data, in addition to smoothly integrating the various environments, yielding a smooth experience for the user. ## What we learned We were at most familiar with ReactJS; all other technologies were new experiences for us. Most notable were the opportunities to learn how to use Google Cloud APIs and what it entails to develop a RESTful API. Integrating Firebase with React Native exposed the nuances of each as we passed user data between them. Non-relational database design was also a shift in perspective, and finally, deploying the app with a custom domain name taught us more about DNS protocols. ## What's next for notethisboard Included in the fullTextAnnotation object returned by the Google Vision API were bounding boxes at various levels of granularity.
The natural next step for us would be to enhance the performance and user experience of our application by annotating the images for the user manually, utilizing other Google Cloud API services to obtain background colour, enabling us to further distinguish posters on the notice board to return more reliable results. The app can also be extended to identifying logos and timings within a poster, again catering to the filters selected by the user. On another front, this app could be extended to preference-based information detection from a broader source of visual input.
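As a toy illustration of the matching step described above (comparing the categories the Natural Language API returns for each detected text block against the user's saved preferences), a sketch might look like the following; the data shapes, threshold, and function name are assumptions, not the app's actual code.

```python
# Sketch: keep only postings whose NL API categories overlap the user's preferences.
def match_postings(postings, user_preferences, threshold=0.5):
    """postings: list of {"text": str, "categories": {name: confidence}} dicts."""
    prefs = {p.lower() for p in user_preferences}
    matches = []
    for post in postings:
        for category, confidence in post["categories"].items():
            # NL API categories look like "/Arts & Entertainment/Music"; match any level
            parts = {part.lower() for part in category.strip("/").split("/")}
            if confidence >= threshold and parts & prefs:
                matches.append(post)
                break
    return matches

print(match_postings(
    [{"text": "Jazz night, Friday 8pm", "categories": {"/Arts & Entertainment/Music & Audio": 0.9}}],
    ["music", "sports"],
))
```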
losing
# About the Project: U-Plan ## Inspiration We're from Arizona, and yes—it really is incredibly hot. Having lived here for 2.5 years, each year seems to get hotter than the last. During a casual conversation with an Uber driver in Boston, we chatted about the weather. She mentioned that even the snowfall has been decreasing there. This got us thinking deeply about what's really happening to our climate. It's clear that climate change isn't some far-off concern; it's unfolding right now with far-reaching consequences around the world. Take Hurricane Milton in Florida, for example—it was so severe that even scientists and predictive models couldn't foresee its full impact. This realization made us wonder how we could contribute to a solution. One significant way is by tackling the issue of **Urban Heat Islands (UHIs)**. These UHIs not only make cities hotter but also contribute to the larger problem of global warming. But what exactly are Urban Heat Islands? ## What We Learned Diving into research, we learned that **Urban Heat Islands** are areas within cities that experience higher temperatures than their surrounding rural regions due to human activities and urban infrastructure. Materials like concrete and asphalt absorb and store heat during the day, releasing it slowly at night, leading to significant temperature differences. Understanding the impact of UHIs on energy consumption, air quality, and public health highlighted the urgency of addressing this issue. We realized that mitigating UHIs could play a crucial role in combating climate change and improving urban livability. ## How We Built U-Plan With this knowledge, we set out to create **U-Plan**—an innovative platform that empowers urban planners, architects, and developers to design more sustainable cities. Here's how we built it: * **Leveraging Satellite Imagery**: We integrated high-resolution satellite data to analyze temperatures, vegetation health (NDVI), and water content (NDWI) across urban areas. * **Data Analysis and Visualization**: Utilizing GIS technologies, we developed interactive heat maps that users can explore by simply entering a zip code. * **AI-Powered Chatbot**: We incorporated an AI assistant to provide insights into UHI effects, causes, and mitigation strategies specific to any selected location. * **Tailored Recommendations**: The platform offers architectural and urban planning suggestions, such as using reflective materials, green roofs, and increasing green spaces to naturally reduce surface temperatures. * **User-Friendly Interface**: Focused on accessibility, we designed an intuitive interface that caters to both technical and non-technical users. ## Challenges We Faced Building U-Plan wasn't without its hurdles: * **Data Complexity**: Integrating various datasets (temperature, NDVI, NDWI, NDBI) required sophisticated data processing and normalization techniques to ensure accuracy. * **Scalability**: Handling large volumes of data for real-time analysis challenged us to optimize our backend infrastructure. * **Algorithm Development**: Crafting algorithms that provide actionable insights and accurate sustainability scores involved extensive research and testing. * **User Experience**: Striking the right balance between detailed data presentation and user-friendly design required multiple iterations and user feedback sessions. 
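The satellite-imagery bullets above mention NDVI and NDWI; for readers unfamiliar with them, a minimal sketch of that index math is shown below. The band arrays are assumed inputs, and the actual U-Plan pipeline and data sources may differ.

```python
# Sketch: vegetation and water indices computed from satellite bands with numpy.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): (Green - NIR) / (Green + NIR)."""
    green, nir = green.astype(float), nir.astype(float)
    return (green - nir) / np.clip(green + nir, 1e-6, None)

# Example with tiny synthetic "bands"
nir = np.array([[0.6, 0.4], [0.8, 0.2]])
red = np.array([[0.2, 0.3], [0.1, 0.2]])
print(ndvi(nir, red))  # higher values = healthier vegetation, which cools the surface
```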
## What's Next for U-Plan We started with Urban Heat Islands because they are a pressing issue that directly affects the livability of cities and contributes significantly to global warming. By focusing on UHIs, we could provide immediate solutions to reduce urban temperatures and energy consumption. Moving forward, we plan to expand U-Plan into a comprehensive platform offering a wide range of data-driven insights, making it the go-to resource for urban planners to design sustainable, efficient, and resilient cities. Our roadmap includes: * **Adding More Environmental Factors**: Incorporating air quality indices, pollution levels, and noise pollution data. * **Predictive Analytics**: Developing models to forecast urban growth patterns and potential environmental impacts. * **Collaboration Tools**: Enabling teams to work together within the platform, sharing insights and coordinating projects. * **Global Expansion**: Adapting U-Plan for international use with localized data and multilingual support. --- # What's in it for our Market Audience? * **Data-Driven Insights**: U-Plan empowers urban planners, architects, developers, and property owners with precise, actionable data to make informed decisions. * **Sustainable Solutions**: Helps users design buildings and urban spaces that reduce heat retention, combating Urban Heat Islands and contributing to climate change mitigation. * **Cost and Energy Efficiency**: Offers strategies to lower energy consumption and reduce reliance on air conditioning, leading to significant cost savings. * **Regulatory Compliance**: Assists in meeting environmental regulations and sustainability standards, simplifying the approval process. * **Competitive Advantage**: Enhances reputation by showcasing a commitment to sustainable, forward-thinking design practices. ## Why Would They Use It? * **Comprehensive Analysis Tools**: Access to advanced features like real-time satellite imagery, detailed heat maps, and predictive modeling. * **Personalized Recommendations**: Tailored advice for both new constructions and retrofitting existing buildings to improve energy efficiency and reduce heat retention. * **User-Friendly Interface**: An intuitive platform that's easy to navigate, even for those without technical expertise. * **Expert Support**: Premium users gain access to expert consultants and an AI-powered chatbot for personalized guidance. * **Collaboration Features**: Ability to share maps and data with team members and stakeholders, facilitating better project coordination.
## Inspiration Our inspiration comes from the favorable push towards food delivery for almost any occasion, such as lethargic evenings, lunch with someone special, or even catering for a birthday party. Rapid technological advances have enabled a new generation of the food industry, and although beneficial, we heedlessly brush over some important aspects. Everything arrives in an instant, and we often exchange nutritional value and sustainable practices for time. Our application addresses these issues and helps engage users in adopting practices that would better serve our communities and environment. This is why we created **SusFood**. ## What it does **SusFood** is an online ordering and delivery application focused on encouraging users to eat more sustainably without giving up the convenience of ready-made food. Users can browse eateries by type of cuisine, name, or location. Each eatery is assigned a score out of 100: its Sustainability Score. The Sustainability Score is computed as a function of the distance that a delivery needs to travel, the use of sustainable and/or organic products in food preparation, the materials used to package and prepare food, additional cutlery, bags, etc. Users are incentivized to pick more sustainable food options and adopt more environmentally friendly practices through rewards earned by collecting points in proportion to the sustainability of their food choices. In addition, eateries are also incentivized to adopt more environmentally friendly practices, given that the rewards system would boost business activities. ## How we built it Information about the various restaurants, fast food eateries, and cafes (location, hours, contact information, etc.) was extracted using a combination of the OpenStreetMap API, the Overpass API, and standard Python modules such as os, webbrowser, pandas, random, and json. The map was built using a combination of the Folium library and the OpenStreetMap API. The Twilio Python module and API were used to provide notifications and communication support for the user. The area ID assisted in placing markers on the map to show the location of restaurants and cafes. The user's location is live and based on the network IP address, allowing the query to sync accordingly to find nearby food options. Distance metrics were calculated using average time traveled and route conditions. The color system goes as follows: green is most sustainable, followed by yellow, with red being the least sustainable. While connecting the front end to the back end, we settled on a local environment due to unforeseen issues and time constraints that prevented us from fully implementing a back-end database. We took a CSV file containing the restaurants' data and converted it to a JSON file that was parsed with JavaScript and embedded within the application. On the front end, a combination of Python, JavaScript, HTML, and CSS was used to implement **SusFood**. We developed a user-friendly website featuring a navigation bar with a search bar, cart, and profile page. The home screen offers users a choice of preferred food categories, each with an associated sustainability rating, within their vicinity. There are two custom-made carousels: one for the food categories and a second for displaying the various restaurants. These restaurants are ranked from most sustainable to least sustainable. Each restaurant card contains the name of the restaurant and its Sustainability Score along with a tier badge.
There are 4 badges: sprout, potted plant, bonsai, and golden stickers; as the user engages in healthier behaviors and uses the application more, they build up points which they can redeem at their favorite eating spots. The adorable pile of leaves indicates the accumulation of such rewards! There is also a map system where users can visually explore their eating options around where they currently are. When you select a restaurant, the standard menu and business information is displayed along with the Sustainability Score. The metrics that determine this score are distance, packaging, sourcing, and leftovers that restaurants may offer to customers the following business day. Each of these factors is weighted, with distance weighted the highest and the other categories weighted equally. The restaurant page also features standard ratings, reviews, and additional information on how the Sustainability Score is calculated. ## Challenges we ran into The lack of extensive data about the sustainability and contact information of eateries made it challenging to tabulate appropriate search results for all queries. We encountered challenges with the quality of open-sourced data. On the front end, we had to navigate formatting and alignment issues with CSS and integrate different modules, some of which we ended up not utilizing. Some applications that we would have liked to use include Open Route Service and Django, a Python web framework with built-in database support. ## Accomplishments that we're proud of One of our main accomplishments is extracting geospatial information, processing it efficiently, and displaying it on the map. We are also very proud of our UI design. ## What we learned Geospatial programming, REST APIs, non-REST APIs, and web development with Python, HTML, and CSS. Although we had a few major setbacks, overall we are extremely proud of what we were able to create with an idea that we were all excited to pursue. ## What's next for SusFood * Build a more extensive database containing information concerning various facets of sustainability for each eatery * Add additional features to order and deliver food * Improve and optimize the algorithm that computes the sustainability score * Build a robust customer support system using Twilio * Expand the mapping feature to include road traffic conditions and additional variables * A search algorithm that caters to the user * Authentication of new users (Login / Sign Up) * Build a database that is fully implemented and used in conjunction with the front end * Partner with restaurants to give users a functioning rewards system that encourages sustainable buying habits.
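For illustration, the weighted Sustainability Score described above (distance weighted highest, the other factors equally) could be sketched as follows; the exact weights, the 20 km normalization, and the function name are assumptions, not the project's actual formula.

```python
# Sketch: a weighted 0-100 sustainability score from distance, packaging,
# sourcing, and leftovers, with distance weighted the highest.
def sustainability_score(distance_km, packaging, sourcing, leftovers, max_km=20):
    """Each factor except distance is a 0-1 rating; returns a score out of 100."""
    distance_factor = max(0.0, 1.0 - distance_km / max_km)  # closer = more sustainable
    weights = {"distance": 0.4, "packaging": 0.2, "sourcing": 0.2, "leftovers": 0.2}
    score = (
        weights["distance"] * distance_factor
        + weights["packaging"] * packaging
        + weights["sourcing"] * sourcing
        + weights["leftovers"] * leftovers
    )
    return round(100 * score)

print(sustainability_score(distance_km=3, packaging=0.8, sourcing=0.6, leftovers=1.0))
```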
As a response to the ongoing wildfires devastating vast areas of Australia, our team developed a web tool that provides wildfire data visualization, prediction, and logistics handling. We had two target audiences in mind: the general public and firefighters. The homepage of Phoenix is publicly accessible, and anyone can learn about the wildfires occurring globally, along with statistics regarding weather conditions, smoke levels, and safety warnings. We have a paid membership tier for firefighting organizations, where they have access to more in-depth information, such as wildfire spread prediction. We deployed our web app using Microsoft Azure and used Standard Library to incorporate Airtable, which enabled us to centralize the data we pulled from various sources. We also used it to create a notification system, where we send users a text whenever the air quality warrants action such as staying indoors or wearing a P2 mask. We have taken many approaches to improving our platform's scalability, as we anticipate spikes of traffic during wildfire events. Our code's scalability features include reusing connections to external resources whenever possible, using asynchronous programming, and processing API calls in batches. We used Azure Functions in order to achieve this. Azure Notebooks and Cognitive Services were used to build various machine learning models using the information we collected from the NASA, EarthData, and VIIRS APIs. The neural network had a reasonable accuracy of 0.74, but did not generalize well to niche climates such as Siberia. Our web app was designed using React, Python, and d3.js. We kept accessibility in mind by using a high-contrast navy-blue and white colour scheme paired with clearly legible sans-serif fonts. Future work includes incorporating a text-to-speech feature to increase accessibility, and a colour-blind mode. As this was a 24-hour hackathon, we ran into a time challenge and were unable to include these features; however, we hope to implement them in further stages of Phoenix.
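A minimal sketch of the scalability pattern mentioned above (reusing one HTTP connection pool and issuing external API calls asynchronously in batches) is shown below; the URLs and batch size are placeholders, and the real deployment runs inside Azure Functions rather than a standalone script.

```python
# Sketch: batched, asynchronous API calls over a single reusable HTTP session.
import asyncio
import aiohttp

async def fetch(session, url):
    async with session.get(url) as resp:
        return await resp.json()

async def fetch_in_batches(urls, batch_size=10):
    results = []
    async with aiohttp.ClientSession() as session:  # one reusable connection pool
        for i in range(0, len(urls), batch_size):
            batch = urls[i:i + batch_size]
            results.extend(await asyncio.gather(*(fetch(session, u) for u in batch)))
    return results

# e.g. asyncio.run(fetch_in_batches(["https://example.com/fires?page=1"]))
```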
partial
## Inspiration Tax Simulator 2019 takes inspiration from a variety of different games, such as the multitude of simulator games that have gained popularity in recent years, the board game Life, and the video game Pokemon. ## What it does Tax Simulator is a video game designed to introduce students to taxes and benefits. ## How we built it Tax Simulator was built in Unity using C#. ## Challenges we ran into The biggest challenge that we had to overcome was time. Creating the tax calculation system, designing and building the game's levels, implementing the narrative text elements, and debugging every single area of the program were all tedious and demanding tasks, and as a result, there are several features of the game that have not yet been fully implemented, such as the home-purchasing system. Learning how to use the Unity game engine also proved to be a challenge, as not all of us had past experience with the software, so picking up the skills to implement our ideas and develop a fleshed-out product was an essential yet difficult task. ## Accomplishments that we're proud of Although simple, Tax Simulator incorporates concepts such as common tax deductions and two savings vehicles in a fun and interactive game. The game makes use of a charming visual aesthetic, simple mechanics, and an engaging narrative that makes it fun to play through, and we're very proud of our ability to portray learning and education in an appealing way. ## What we learned We learned that although it is tempting to try to incorporate as many features as possible in our project, a simple game that is easy to understand and fun to play will keep players engaged better than a game with many complex features and options that ultimately contribute to confusion and clutter. ## What's next for Tax Simulator 2019 Although it is a great start for learning about taxes, Tax Simulator could benefit from incorporating more life events, such as purchasing a house with the First-Time Home Buyer Incentive, having kids, or saving for college with RESPs. The game could also suggest ways for players to improve their gameplay based on the decisions they made regarding their taxes.
## Inspiration The spending behavior of users, especially those in the 15-29 age group, tends towards spending unreasonable amounts on unnecessary things. We want them to have a better financial life, help them understand their expenses better, and guide them towards investing that money in stocks instead. ## What it does It points out the unnecessary expenses of the user and suggests how much income they could have gathered over time had they invested that money in stocks. The app shows you two kinds of investment scenarios: 1. What you could have earned by now if you had invested around 6 months back. 2. The most favorable companies to invest in at the moment, based on the Warren Buffett model. ## How we built it We built a Python script that scrapes the web, analyzes the stock market, and suggests to the user the companies with the most potential to invest in, based on the Warren Buffett model. ## Challenges we ran into Initially the web scraping was hard; we tried multiple approaches and different automation software to get the details, but somehow we were not able to incorporate them fully. So we had to write the web scraper code entirely by ourselves and set various parameters to shortlist the companies for investment. ## Accomplishments that we're proud of We were able to come up with a good idea for helping people have a financially better life. We learned many things on the spot and somehow made them work for satisfactory results, but we think there are many more ways to make this effective. ## What we learned We learned Firebase, and we also learned how to scrape data from sites with complex structures. Since we are a team of three new members who formed at the hackathon, we had to learn and cooperate with each other. ## What's next for Revenue Now We can study our users and their spending behavior, and offer customized profiles that suit them and guide them towards the best use of their income, suggesting various saving and investment patterns to keep the user comfortable.
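For illustration, the "what if you had invested 6 months ago" comparison described above reduces to a small calculation like the one below; the prices and amounts are hard-coded placeholders, whereas the real app scrapes them from the web.

```python
# Sketch: compare an unnecessary expense against a stock's price change
# over the past six months.
def hypothetical_return(amount_spent, price_6_months_ago, price_today):
    shares = amount_spent / price_6_months_ago
    value_now = shares * price_today
    return value_now - amount_spent

gain = hypothetical_return(amount_spent=200.0, price_6_months_ago=150.0, price_today=180.0)
print(f"Investing that $200 six months ago would have earned ${gain:.2f} by now.")
```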
## Inspiration Modern meetup apps are too focused on "dating"; we wanted to create an app that helps students in the city make new friends on campus. The app gets rid of stalling when it comes to picking a time and place to meet: we implemented location tracking for each person so that when two people match, they can try to meet up immediately. ## What it does Users can log in to the app and fill out the criteria for the person they want to meet within a 1 km radius, and the app then tries to find a student that matches the criteria. Once found, the students are able to see each other's locations and can easily meet up by looking at the pins on the map. ## How we built it Built using a Node.js backend with MongoDB on an AWS server; our front end uses JavaScript to send requests and retrieve location data. ## Challenges we ran into Working with AWS and the Google Maps JavaScript API was very hard. The Google Maps API was surprisingly difficult to work with, as the functionality it provides is very limited. AWS had a lot of bugs that we had to work through during setup; once we got past them, working with it was easier. ## Accomplishments that we're proud of We were able to get the core functionality of the program to work: sharing location data and seeing other users on the map.
partial
## Inspiration When visiting a clinic, two big complaints that we have are the long wait times and the necessity of using a kiosk that thousands of other people have already touched. We also know that certain methods of filling in information are not accessible to everyone (for example, someone with Parkinson's disease writing with a pen). In response to these problems, we created Touchless. ## What it does * Touchless is an accessible and contact-free solution for gathering form information. * It allows users to interact with forms using voice and touchless gestures. * Users use different gestures to answer different questions. * E.g., raise 1-5 fingers for 1-5 inputs, or thumbs up and down for yes and no. * Additionally, users are able to use voice for two-way interaction with the form. Either way, surface contact is eliminated. * Applicable to doctor's offices and clinics, where germs are easily transferable and dangerous when people touch the same electronic devices. ## How we built it * The gesture and voice components are written in Python. * The gesture component uses OpenCV and MediaPipe to map out hand joint positions, from which calculations can be done to determine hand symbols. * SpeechRecognition recognizes user speech. * The form outputs audio back to the user using pyttsx3 for text-to-speech and beepy for alert noises. * We use AWS API Gateway to open a connection to a custom Lambda function that has been assigned IAM roles to restrict access. The Lambda generates a secure key, which it sends along with the data from our form (routed using Flask) to our NoSQL DynamoDB database. ## Challenges we ran into * We tried to set up a Cerner API for FHIR data, but had difficulty setting it up. * As a result, we had to pivot towards using a NoSQL database in AWS as our secure backend database for storing our patient data. ## Accomplishments we're proud of This was our whole team's first time using gesture recognition and voice recognition, so it was an amazing learning experience for us. We're proud that we managed to implement these features within our project at a level we consider effective. ## What we learned We learned that FHIR is complicated. We ended up building a custom data workflow based on FHIR models we found online, but due to time constraints we did not implement certain headers and keys that make up industrial FHIR data objects. ## What's next for Touchless In the future, we would like to integrate the voice and gesture components more seamlessly into one rather than two separate components.
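As a hedged sketch of the gesture component described above: MediaPipe Hands returns 21 landmarks per detected hand, and a finger can be counted as "raised" when its tip sits above its middle joint in the image. The thresholds, camera index, and mapping to form inputs are assumptions; the thumb is left out here for brevity.

```python
# Sketch: count raised fingers from MediaPipe hand landmarks on a webcam feed.
import cv2
import mediapipe as mp

FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tip landmark ids
FINGER_PIPS = [6, 10, 14, 18]   # corresponding middle joints

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        # image y grows downward, so a raised fingertip has a smaller y than its joint
        count = sum(lm[tip].y < lm[pip].y for tip, pip in zip(FINGER_TIPS, FINGER_PIPS))
        print(f"Answer: {count}")  # e.g. map 1-4 raised fingers to form inputs
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```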
## Inspiration The intricate nature of diagnosing and treating diseases, combined with the burdensome process of managing patient data, drove us to develop a solution that harnesses the power of AI. Our goal was to simplify and expedite healthcare decision-making while maintaining the highest standards of patient privacy. ## What it does Percival automates data entry by seamlessly accepting inputs from various sources, including text, speech-to-text transcripts, and PDFs. It anonymizes patient information, organizes it into medical forms, and compares it against a secure vector database of similar cases. This allows us to provide doctors with potential diagnoses and tailored treatment recommendations for various diseases. ## How we use K-means clustering? To enhance the effectiveness of our recommendation system, we implemented a K-means clustering model using Databricks Open Source within our vector database. This model analyzes the symptoms and medical histories of patients to identify clusters of similar cases. By grouping patients with similar profiles, we can quickly retrieve relevant data that reflects shared symptoms and outcomes. When a new patient record is entered, our system evaluates their symptoms and matches them against existing clusters in the database. This process allows us to provide doctors with recommendations that are not only data-driven but also highly relevant to the patient's unique situation. By leveraging the power of K-means clustering, we ensure that our recommendations are grounded in real-world patient data, improving the accuracy of diagnoses and treatment plans. ## How we built it We employed a combination of technologies to bring Percival to life: Flask for server endpoint management, Cloudflare D1 for secure backend storage of user data and authentication, OpenAI Whisper for converting speech to text, the OpenAI API for populating PDF forms, Next.js for crafting a dynamic frontend experience, and finally Databricks Open-source for the K-means clustering to identify similar patients. ## Challenges we ran into While integrating speech-to-text capabilities, we faced numerous hurdles, particularly in ensuring the accurate conversion of doctors' verbal notes into structured data for medical forms. The task required overcoming technical challenges in merging Next.js with speech input and effectively parsing the output from the Whisper model. ## Accomplishments that we're proud of We successfully integrated diverse technologies to create a cohesive and user-friendly platform. We take pride in Percival's ability to transform doctors' verbal notes into structured medical forms while ensuring complete data anonymization. Our achievement in combining Whisper’s speech-to-text capabilities with OpenAI's language models to automate diagnosis recommendations represents a significant advancement. Additionally, establishing a secure vector database for comparing anonymized patient data to provide treatment suggestions marks a crucial milestone in enhancing the efficiency and accuracy of healthcare tools. ## What we learned The development journey taught us invaluable lessons about securely and efficiently handling sensitive healthcare data. We gained insights into the challenges of working with speech-to-text models in a medical context, especially when managing diverse and large inputs. Furthermore, we recognized the importance of balancing automation with human oversight, particularly in making critical healthcare diagnoses and treatment decisions. 
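To make the clustering idea above concrete, here is a hedged sketch in which patient records are encoded as feature vectors, a K-means model is fit, and records from the same cluster as a new patient are retrieved. scikit-learn stands in for the Databricks pipeline, and the toy vectors are fabricated for illustration only.

```python
# Sketch: group patient feature vectors with K-means and fetch similar cases.
import numpy as np
from sklearn.cluster import KMeans

# toy feature vectors (e.g. symptom / lab-value encodings) for existing patients
records = np.array([
    [1, 0, 0.2], [1, 0, 0.3], [0, 1, 0.9], [0, 1, 0.8], [1, 1, 0.5],
])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(records)

def similar_cases(new_patient):
    """Return indices of stored records in the new patient's cluster."""
    cluster = kmeans.predict(np.array([new_patient]))[0]
    return np.where(kmeans.labels_ == cluster)[0]

print(similar_cases([1, 0, 0.25]))  # -> indices of the most comparable past cases
```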
## What's next for Percival Looking ahead, we plan to broaden Percival's capabilities to diagnose a wider range of diseases beyond AIDS. Our focus will be on enhancing AI models to address more complex cases, incorporating multiple languages into our speech-to-text feature for global accessibility, and introducing real-time data processing from wearable devices and medical equipment. We also aim to refine our vector database to improve the speed and accuracy of patient-to-case comparisons, empowering doctors to make more informed and timely decisions.
## Inspiration Many of us spend hours waiting in walk-in clinics unnecessarily due to poor planning and communication. We aim to fix this with our program. ## What it does Our program handles the interaction between the doctor's office and the client. The doctor's office uses the graphical user interface to set up walk-ins and appointments, which are then grouped to create a schedule for the doctor. The user goes to [link](www.impatient-patient.me), which displays the doctor's offices in their area and the wait times for each; it also has a map to aid the user in determining which location is better for them. ## How we built it **Doctor's Office Application** The graphical user interface uses Python's built-in GUI library, Tkinter. On the left-hand side of the application there is a calendar built using a Canvas object with a for-looped grid in the background; the events are buttons that allow each event to be modified. In the center there is an Add button to register walk-ins or patients into the schedule, thus updating the Firebase database. Under the Add button there is a note-taking section to aid the doctor in examining their patient, with the ability to save notes to a text file. **Online Flask** The online portion of the project was built primarily using Python and Flask to access our Firebase database. To make the web app, we used HTML, Jinja, JavaScript, and CSS. It then uses this information to generate a table with the waiting times. ## Challenges we ran into * The domain name was quite hard to obtain, and deploying our web app was another challenge. * Syncing our Python versions and installing Pyrebase successfully in order to communicate with Firebase was also difficult. * Retrieving data from Firebase based on the machine's local clinic name (i.e., only pulling data that corresponds to the machine's local clinic name). ## Accomplishments that we're proud of * Creating and manipulating a Firebase database * Creating a web app using Flask * Developing a beautiful graphical user interface (GUI) using Tkinter in Python ## What we learned * Learning how to use Firebase * Understanding the complex ways of Flask and how to integrate it with Firebase * Figuring out how to make different machines interact with one database * Learning that it is possible to have "for loops" in HTML via an epic Jinja hack ## What's next for Wait No More * Integrate with appointment booking systems - more autonomous adding of sessions * A more intuitive/simple user interface * Integrating more APIs or public databases of medical clinics
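A rough sketch of how the patient-facing page could estimate wait times from the shared Firebase data described above is shown below. The database layout, clinic names, placeholder credentials, and the 15-minute average visit length are all assumptions for illustration, not the project's actual schema.

```python
# Sketch: estimate a clinic's wait time from its queue stored in Firebase (Pyrebase).
import pyrebase

config = {
    "apiKey": "PLACEHOLDER",
    "authDomain": "wait-no-more.firebaseapp.com",
    "databaseURL": "https://wait-no-more.firebaseio.com",
    "storageBucket": "wait-no-more.appspot.com",
}
db = pyrebase.initialize_app(config).database()

def estimated_wait_minutes(clinic_name, avg_visit_minutes=15):
    """Wait time = number of patients currently queued * average visit length."""
    queue = db.child("clinics").child(clinic_name).child("queue").get().val() or {}
    return len(queue) * avg_visit_minutes

print(estimated_wait_minutes("Main Street Walk-In"))
```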
winning
## Inspiration We wanted to build a sustainable project, which gave us the idea of planting crops on farmland in a way that gives the farmer the maximum profit. The program also accounts for crop rotation, which means that the land gets time to replenish its nutrients and increase the quality of the soil. ## What it does It does many things. It first checks which crops can be grown on a given piece of land depending on the area's weather, the soil, the nutrients in the soil, the amount of precipitation, and much more information obtained from the APIs we used in the project. It then forms a plan that accounts for the crop rotation process. This helps the land regain its lost nutrients while increasing the profits that the farmer gets from their land, meaning that we regain the lost nutrients without stopping the harvesting process. It also gives the farmer daily updates on the weather in the area so that they can be prepared for severe weather. ## How we built it For most of the backend of the program, we used Python. For the front end of the website, we used HTML, and to format the website we used CSS. We also used JavaScript for formatting and to connect Python to HTML. We used the Twilio API to send daily messages to the user to help them be ready for severe weather conditions. ## Challenges we ran into The biggest challenge that we faced during the making of this project was connecting the Python code with the HTML code so that the website could display crop rotation patterns after executing the Python back-end script. ## Accomplishments that we're proud of While making this, each of us in the group accomplished a lot. This project as a whole was a great learning experience for all of us. We got to know a lot about the different APIs that we used throughout the project. We also accomplished making predictions on which crops can be grown in an area depending on the area's weather in past years, and on what the best crop rotation patterns would be. On the whole, it was cool to see how the project went from data collection to processing to, finally, presentation. ## What we learned We learned a lot of things over the course of this hackathon. We learned team management and time management, and moreover, we got hands-on experience in machine learning. We got to implement linear regression, random decision trees, and SVM models. Finally, using APIs became second nature to us because of the number of them we had to use to pull data. ## What's next for ECO-HARVEST For now, the data we have is limited to the United States; in the future we plan to expand it to the whole world and also increase our accuracy in predicting which crops can be grown in an area. Using the crops that can be grown in the area, we want to produce better crop rotation models so that the soil regains its lost nutrients faster. We also plan to give better and more informative daily messages to the user in the future.
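As a hedged sketch of the daily Twilio update mentioned above, a message like the one below could be sent to the farmer when severe weather is expected; the credentials, phone numbers, and the forecast dictionary are placeholders, not the project's actual code.

```python
# Sketch: send the farmer a daily weather text, flagging severe conditions.
from twilio.rest import Client

client = Client("TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN")  # placeholder credentials

def send_weather_alert(farmer_number, forecast):
    body = f"ECO-HARVEST daily update: {forecast['summary']}, high of {forecast['high_f']}F."
    if forecast.get("severe"):
        body += " Severe weather expected - protect vulnerable crops."
    return client.messages.create(body=body, from_="+15551234567", to=farmer_number)

send_weather_alert("+15559876543", {"summary": "Thunderstorms", "high_f": 88, "severe": True})
```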
## Inspiration How long does class attendance take? 3 minutes? With 60 classes, 4 periods a day, and 180 school days in a year, this program will save a cumulative 72 days every year! Our team recognized that the advent of neural networks yields momentous potential, and one such opportunity is face recognition. We utilized this cutting-edge technology to save time on attendance. ## What it does The program uses facial recognition to determine who enters and exits the room. With this knowledge, we can keep track of everyone who is inside, everyone who is outside, and the unrecognized people that are inside the room. Furthermore, we can display all of this in a front-end HTML application. ## How I built it A camera mounted by the door sends a live image feed to a Raspberry Pi, which then transfers that information to Flask. Flask utilizes neural networks and machine learning to study previous images of faces, and when someone enters the room, the program matches the face to a person in the database. Then, the program stores the attendees in the room, the people that are absent, and the unrecognized people. Finally, the front-end program uses HTML, CSS, and JavaScript to display the live video feed, the people that are attending or absent, and the faces of all unrecognized people. ## Challenges I ran into When we were using AWS, we uploaded to the bucket, and that triggered a Lambda. In short, we had too many problematic middlemen, and this was fixed by removing them and communicating directly. Another issue was trying to read from cameras that are not designed for the Raspberry Pi. Finally, we accidentally pushed the wrong html2 file, causing a huge merge-conflict problem. ## Accomplishments that I'm proud of We were successfully able to integrate neural networks with Flask to recognize faces. We were also able to make everything much more efficient than before. ## What I learned We learned that it is often better to communicate directly with the needed software. There is no point in having middlemen unless they have a specific use. Furthermore, we also improved our server-building skills and gained many valuable insights. We also taught a team member how to use Git and how to program in HTML. ## What's next for Big Brother We would like to match inputs from external social media sites so that unrecognized attendees could be checked into an event. We also would like to export CSV files that display the attendees, their status, and unrecognized people.
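As a hedged sketch of the recognition step described above, the open-source face_recognition library is used here as a stand-in for the project's Flask/neural-network pipeline; the file names, student names, and tolerance value are assumptions.

```python
# Sketch: match faces in a camera frame against a small database of known students.
import face_recognition

# build the "database" of known students from one reference photo each
known_names = ["alice", "bob"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(f"{name}.jpg"))[0]
    for name in known_names
]

def identify(frame_path, tolerance=0.6):
    """Return names of recognized people (and 'unknown' for the rest) in a frame."""
    frame = face_recognition.load_image_file(frame_path)
    present = []
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(known_encodings, encoding, tolerance=tolerance)
        present.append(known_names[matches.index(True)] if True in matches else "unknown")
    return present

print(identify("door_camera_frame.jpg"))
```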
## Inspiration Coming from South Texas, two of the team members saw ESL (English as a Second Language) students being denied a proper education. Our team created a tool to break down language barriers that traditionally perpetuate socioeconomic cycles of poverty by providing detailed explanations of word problems using ChatGPT. Traditionally, people from this group would not have access to tutoring or 1-on-1 support, and this website is meant to rectify this glaring issue. ## What it does The website takes in a photo as input, and it uses optical character recognition to get the text from the problem. Then, it uses ChatGPT to generate a step-by-step explanation for each problem, and this output is tailored to the grade level and language of the student, enabling students from various backgrounds to get assistance they are often denied. ## How we built it We coded the backend in Python with two parts: OCR and the ChatGPT API implementation. Moreover, we considered the multiple parameters, such as grade and language, that we could implement in our code and eventually query ChatGPT with to make the result as helpful as possible. On the other side of the stack, we coded the frontend in React with TypeScript to be as simple and intuitive as possible. It has two sections that clearly show what it is outputting and what ChatGPT has generated to assist the student. ## Challenges we ran into During the development of our product, we often struggled with deciding the optimal way to apply different APIs and learning how to implement them; many of these we ended up not using or changing our application around, such as the IBM API. Through this process, we had to change our high-level plan for the backend functions and consequently reimplement our frontend user interface to fit the new operations. This also created a compounding challenge of having to re-establish and discuss new ideas while communicating as a team. ## Accomplishments that we're proud of We are proud of the website layout. Personally, the team is very fond of the colors and the arrangement of the site's elements. Another thing that we are proud of is simply that we have something working, albeit jankily. This was our first hackathon, so we were proud to be able to contribute to the hackathon in some form. ## What we learned One invaluable skill we developed through this project was learning more about the plethora of APIs available and how we can integrate and combine them to create revolutionary new products that can help people in everyday life. We not only developed our technical skills, including Git familiarity and web development, but also our ability to communicate our ideas as a team and gain the confidence and creativity to carry an idea from thought to production. ## What's next for Homework Helper As part of our mission to increase education accessibility and combat common socioeconomic barriers, we hope to use Homework Helper not only to translate and minimize the language barrier, but also to help those with visual and auditory disabilities. Some functions we hope to implement include text-to-speech and speech-to-text features, and producing video solutions along with text answers.
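A hedged sketch of the pipeline described above might look like the following: OCR the photographed word problem, then ask a chat model for a step-by-step explanation tailored to the student's grade level and language. The model name, prompt wording, and file name are assumptions, not the project's actual implementation.

```python
# Sketch: OCR a word problem, then request a grade- and language-tailored explanation.
import pytesseract
from PIL import Image
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_problem(image_path, grade="5th grade", language="Spanish"):
    problem_text = pytesseract.image_to_string(Image.open(image_path))
    prompt = (
        f"Explain this word problem step by step for a {grade} student, "
        f"writing entirely in {language}:\n\n{problem_text}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(explain_problem("homework_photo.jpg"))
```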
winning
## Inspiration As students, we understand the stress that builds up in our lives. Furthermore, we know how important it is to reflect on the day and plan how to improve for tomorrow. It might be daunting to look for help from others, but wouldn't an adorable dino be the perfect creature to talk to your day about? Cuteness has been scientifically proven to increase happiness, and our cute dino will always be there to cheer you up! We want students to have a way to talk about their day, get cheered up by a cute little dino buddy, and have suggestions on what to focus on tomorrow to increase their productivity! DinoMind is your mental health dino companion to improve your mental wellbeing! ## What it does DinoMind uses the power of ML models, LLMs, and of course, cute dinos (courtesy of DeltaHacks of course <3)!! Begin your evening by opening DinoMind and clicking the **Record** button, and tell your little dino friend all about your day! A speech-to-text model will transcribe your words, and save the journal entry in the "History" tab. We then use an LLM to summarize your day for you in easy to digest bullet points, allowing you to reflect on what you accomplished. The LLM then creates action items for tomorrow, allowing you to plan ahead and have some sweet AI-aided productivity! Finally, your dino friend gives you an encouraging message if they notice you're feeling a bit down thanks to our sentiment analysis model! ## How we built it Cloudflare was our go-to for AI/ML models. These model types used were: 1. Text generation 2. Speech-to-text 3. Text classification (in our case, it was effectively used for sentiment analysis) We used their AI Rest API, and luckily the free plan allowed for lots of requests! Expo was the framework we used for our front-end, since we wanted some extra oomph to our react native application. ## Challenges we ran into A small challenge was that we really really wanted to use the Deltahacks dino mascots for this year in our application (they're just so cute!!). But there wasn't anything with each one individually online, so we realized we could take photos of the shirts and extra images of the dinos from that!! As for the biggest challenges, that was integrating our Cloudflare requests with the front-end. We had our Cloudflare models working fully with test cases too! But once we used the recording capabilities of react native and tried sending that to our speech-to-text model, everything broke. We spent far too long adding `console.log` statements everywhere, checking the types of the variables, the data inside, hoping somewhere we'd see what the difference was in the input from our test cases and the recorded data. That was easily our biggest bottleneck, because once we moved past it, we had the string data from what the user said and were able to send it to all of our Cloudflare models. ## Accomplishments that we're proud of We are extremely proud of our brainstorming process, as this was easily one of the most enjoyable parts of the hackathon. We were able to bring our ideas from 10+ to 3, and then developed these 3 ideas until we decided that the mental health focused journaling app seemed the most impactful, especially when mental health is so important. We are also proud of our ability to integrate multiple AI/ML models into our application, giving each one a unique and impactful purpose that leads to the betterment of the user's productivity and mental wellbeing. 
Furthermore, majority of the team had never used AI/ML models in an application before, so seeing their capabilities and integrating them into a final product was extremely exciting! Finally, our perseverance and dedication to the project carried us through all the hard times, debugging, and sleepless night (singular, because luckily for our sleep deprived bodies, this wasn't a 2 night hackathon). We are so proud to present the fruits of our labour and dedication to improving the mental health of students just like us. ## What we learned We learned that even though every experience we've had shows us how hard integrating the back-end with the front-end can be, nothing ever makes it easier. However, your attitude towards the challenge can make dealing with it infinitely easier, and enables you to create the best product possible. We also learned a lot about integrating different frameworks and the conflicts than can arise. For example, did you know that using expo (and by extension, react native), you make it impossible to use certain packages?? We wanted to use the `fs` package for our file systems, but it was impossible! Instead, we needed to use the `FileSystem` from `expo-file-system` :sob: Finally, we learned about Cloudflare and Expo since we'd never used those technologies before! ## What's next for DinoMind One of the biggest user-friendly additions to any LLM response is streaming, and DinoMind is no different. Even ChatGPT isn't always that fast at responding, but it looks a lot faster when you see each word as it's produced! Integrating streaming into our responses would make it a more seamless experience for users as they are able to immediately see a response and read along as it is generated. DinoMind also needs a lot of work in finding mental health resources from professionals in the field that we didn't have access to during the hackathon weekend. With mental wellbeing at the forefront of our design, we need to ensure we have professional advice to deliver the best product possible!
## Inspiration MISSION: Our mission is to create an intuitive and precisely controlled arm for situations that are tough or dangerous for humans to be in. VISION: This robotic arm application can be used in the medical industry, disaster relief, and toxic environments. ## What it does The arm imitates the user from a remote location. The 6-DOF range of motion allows the hardware to behave much like a human arm. This would be ideal in environments where human life would be in danger if physically present. The HelpingHand can be used in a variety of applications; with our simple design, the arm can be easily mounted on a wall or a rover. With the simple controls, any user will find using the HelpingHand easy and intuitive. Our high-speed video camera allows the user to see the arm and its environment so users can control our hand remotely. ## How I built it The arm is controlled using a PWM servo Arduino library. The Arduino code receives control instructions via serial from the Python script. The Python script uses OpenCV to track the user's actions. An additional feature uses an Intel RealSense camera and TensorFlow to detect and track the user's hand. It uses the depth camera to locate the hand and a CNN to identify the gesture of the hand out of the 10 types trained. This gave the robotic arm an additional dimension and a more realistic feel. ## Challenges I ran into The main challenge was working with all 6 degrees of freedom on the arm without tangling it. This being a POC, we simplified the problem to 3 DOF, allowing for yaw, pitch, and gripper control only. Also, learning the RealSense SDK and processing depth images was a unique experience, thanks to the hardware provided by Dr. Putz at nwHacks. ## Accomplishments that I'm proud of This POC project has scope in a majority of applications. Finishing a working project that involves software and hardware debugging within the given time frame is a major accomplishment. ## What I learned We learned about doing hardware hacks at a hackathon. We learned how to control servo motors and use serial communication. We learned how to use camera vision efficiently. We learned how to write modular functions for easy integration. ## What's next for The Helping Hand Improve control of the arm to imitate smooth human arm movements, incorporate the remaining 3 DOF, and custom-build for specific applications; for example, high-torque motors would be necessary for heavy-lifting applications.
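A hedged sketch of the control link described above: the Python vision script sends yaw, pitch, and gripper angles over serial, and the Arduino maps them to PWM servo positions. The port name, baud rate, and message format are assumptions rather than the project's actual protocol.

```python
# Sketch: stream servo angles from the Python tracking script to the Arduino over serial.
import serial
import time

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
time.sleep(2)  # give the Arduino time to reset after the port opens

def send_pose(yaw, pitch, gripper):
    """Send three 0-180 degree servo angles as a simple comma-separated line."""
    yaw, pitch, gripper = (max(0, min(180, int(a))) for a in (yaw, pitch, gripper))
    arduino.write(f"{yaw},{pitch},{gripper}\n".encode())

send_pose(90, 45, 10)   # e.g. derived each frame from the OpenCV hand tracking
```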
![ClassX logo](https://raw.githubusercontent.com/aheze/ClassX/main/pi_transparent.png) # ClassX = Vision Pro + Classroom RAG + Generative 3b1b GIFs * Fuse online + offline classes to ‘ultralearn’ (aka cram) class quickly! * Best-in-class multi-query RAG (retrieval augmented generation) shows relevant TA notes, LaTeX equations, YouTube videos, and 3b1b videos. * Whisper audio transcript can ***rewind*** to older transcript entires * **[Generative Video]** Generate 3blue1brown AI videos with GPT-4 and Manim rendering engine!! **Exploring the intersection of Mixed Reality, Spatial Computing, and Large Language Models in education.** The integration of Vision Pro and AI technologies offers a unique opportunity to enhance learning experiences through immersive, interactive content. Our project, ClassX, is designed to leverage these advancements, focusing on the practical application within an educational context. **Our mission** with ClassX was to develop a VisionOS app that enriches learning by offering a multi-dimensional platform for students to engage with educational materials. Recognizing the potential for broad application across various fields, we chose to concentrate on education, aiming to provide a solution that addresses the needs of modern learners. ClassX is a virtual learning environment where students can interact with an array of educational resources, including videos, PDFs, and LaTeX-rendered documents, all presented within a virtual lecture hall setting. ## Features of ClassX * **Dynamic Educational Material Display**: ClassX allows users to navigate through and interact with a variety of media types, all integrated seamlessly into the user’s visual field. * **AI-Driven Content Personalization**: We use TogetherAI and Mistral Mixture of Experts to customize the learning experience. We adapt the lecturer's style into a form that fits individual learning styles, preferences, and progress. * **Engaging Learning Methods**: Through interactive elements, such as quizzes and exercises, ClassX transforms traditional learning materials into engaging, interactive experiences. * **Comprehensive Academic Integration**: ClassX offers direct access to a wide range of academic resources, employing sophisticated search technologies to provide relevant, up-to-date materials. ## Development Process The development of ClassX was characterized by a collaborative and methodical approach, emphasizing efficiency and technical innovation. The team utilized a range of development tools and platforms, including Xcode, OpenAI API, Mistral, and LaTeX libraries, to create a robust educational platform. ## Stack ### Server * **Together AI:** Mistral 7x8b Mixture of Experts chat model * **OpenAI:** text-embedding-ada-3 embedding * **Chroma** multi query vector search. Each document and transcript maps to many keys, and Chromadb reranks n->n SQL mapping by similarity. * **3b1b Manim**: Grant Sanderson’s Python math rendering engine. GPT-4 generates 2d animation scenes as executable code, creating 10-sec crystal-clear AI animated video (without OpenAI Sora 😉). * **FastAPI**: serves generated video and APIs ### visionOS App * **100% Swift and SwiftUI**: fully native app! 
+ handles animations, images, webviews, and more + native visionOS dynamic layout grids and resizing support without breakpoints * **Whisper (Local)**: Transcribe audio offline with timestamps + Live streaming via AVFoundation * **LaTeX renderer** (with regex to extract LaTeX sections and handle inlining) ## The ClassX Experience * **Virtual Lecture Halls**: ClassX *adds* a virtual environment on top of an existing one. It *enhances/supplements* the boring lecture hall experience with educational content to facilitate a comprehensive and engaging learning experience. * **Customized Learning Journeys**: AI technology assesses each learner's unique profile to deliver personalized content, optimizing the educational experience. * **Interactive Learning Tools**: ClassX enhances learning retention through interactive quizzes and exercises, providing instant feedback to reinforce understanding. ## Challenges and Achievements * **Developing for Vision Pro**: Tailoring ClassX to the innovative capabilities of Vision Pro required creative problem-solving and technical acumen. * **Complex Content Integration**: The integration of LaTeX into mixed reality posed significant challenges, but the team successfully achieved smooth rendering of intricate academic content. ## Future Directions Moving forward, ClassX aims to expand its content offerings, integrate live tutoring capabilities, and explore the potential of augmented reality (AR) for practical learning applications. ## Considerations While ClassX represents a significant step forward in educational technology, we are conscious of the challenges ahead, including device accessibility and data privacy concerns. **Thank you for exploring ClassX.** Our work represents a commitment to advancing educational technology for a brighter, more informed future.
partial
## Inspiration Examining our own internet-related tendencies revealed to us that although the internet presents itself as a connective hub, it can often be a very solitary place. As the internet continues to become a virtual extension of our physical world, we decided that working on an app that keeps people connected on the internet in a location-based way would be an interesting project. Building Surf led us to the realization that the most meaningful experiences are as unique as they are because of the people we experience them with. Social media, although built for connection, often serves only to widen gaps. In a nutshell, Surf serves to connect people across the internet in a more genuine way. ## What it does Surf is a Chrome extension that allows you to converse with others on the same web page as you. For example, say you're chilling on Netflix, you could open up Surf and talk to anyone else watching the same episode of Breaking Bad as you. Surf also has a "Topics" feature which allows users to create their own personal discussion pages on sites. Similarly to how you may strike up a friendship discussing a painting with a stranger at the art museum, we designed Surf to encourage conversation between individuals from different backgrounds with similar interests. ## How we built it We used Firebase Realtime Database for our chatrooms and Firebase Authentication for login. Paired with a Chrome extension leveraging some neat Bootstrap on the front end, we ended up with a pretty good looking build. ## Challenges we ran into The hardest challenge for us was to come up with an idea that we were happy with. At first, all the ideas we came up with were either too complex or too basic. We longed for an idea that made us feel lucky to have thought of it. It wasn't until after many hours of brainstorming that we came up with Surf. Some earlier ideas included a CLI game in which you attempt to make guacamole and a timer that only starts from three minutes and thirty-four seconds. ## Accomplishments that we're proud of We released a finished build on the Chrome Web Store in under twelve hours. Letting go of some perfectionism and going balls to the wall on our long-term goal really served us well in the end. However, despite having finished our hack early, we built in a huge V2 feature, which added up to twenty solid hours of hacking. ## What we learned On top of figuring out how to authenticate users in a Chrome extension, we discovered the effects that five Red Bulls can have on the human bladder. ## What's next for Surf Surf's site-specific chat data makes for a very nice business model – site owners crave user data, and the consensual, natural data generated by Surf is worth its weight in gold to web admins. On top of being economically capable, Surf has a means of providing users with analytics and recommendations, including letting them know how much time they spend on particular sites and which other pages they might enjoy based on their conversations and habits. We also envision a full-fledged browser for Surf, with a built in chat functionality.
## Inspiration During last year's World Wide Developers Conference, Apple introduced a host of new innovative frameworks (including but not limited to CoreML and ARKit) which placed traditionally expensive and complex operations such as machine learning and augmented reality in the hands of developers such as myself. This incredible opportunity was one that I wanted to take advantage of at PennApps this year, and Lyft's powerful yet approachable API (and SDK!) struck me as the perfect match for ARKit. ## What it does Utilizing these powerful technologies, Wher integrates with Lyft to further enhance the process of finding and requesting a ride by improving on ease of use, safety, and even entertainment. One issue that presents itself when using overhead navigation methods is, quite simply, a lack of the 3rd dimension. A traditional overhead view tends to complicate on foot navigation more than it may help, and even more importantly, requires the user to bury their face in their phone. This pulls attention from the users surroundings, and poses a threat to their safety- especially in busy cities. Wher resolves all of these concerns by bringing the experience of Lyft into Augmented Reality, which allows users to truly see the location of their driver and destination, pay more attention to where they are going, and have a more enjoyable and modern experience in the process. ## How I built it I built Wher using several of Apple's Frameworks including ARKit, MapKit, CoreLocation, and UIKit, which allowed me to build the foundation for the app and the "scene" necessary to create and display an Augmented Reality plane. Using the Lyft API I was able to gather information regarding available drivers in the area, including their exact position (real time), cost, ETA, and the service they offered. This information was used to populate the scene and deep link into the Lyft app itself to request a ride and complete the transaction. ## Challenges I ran into While both Apple's well documented frameworks and Lyft's API simplified the learning required to take on the project, there were still several technical hurdles that had to be overcome to complete the project. The first issue that I faced was Lyft's API itself; While it was great in many respects, Lyft has yet to create a branch fit for use with Swift 4 and iOS 11 (required to use ARKit), which meant I had to rewrite certain portions of their LyftURLEncodingScheme and LyftButton classes in order to continue with the project. Another challenge was finding a way to represent a variance in coordinates and 'simulate distance', so to make the AR experience authentic. This, similar to the first challenge, became manageable with enough thought and math. One of the last significant challenges I encountered and overcame was with drawing driver "bubbles" in the AR Plane without encountering graphics glitches. ## Accomplishments that I'm proud of Despite the many challenges that this project presented, I am very happy that I persisted and worked to complete it. Most importantly, I'm proud of just how cool it is to see something so simple represented in AR, and how different it is from a traditional 2D View. I am also very proud to say that this is something I can see myself using any time I need to catch a Lyft. ## What I learned With PennApps being my first Hackathon, I was unsure what to expect and what exactly I wanted to accomplish. As a result, I greatly overestimated how many features I could fit into Wher and was forced to cut back on what I could add. 
As a result, I learned a lesson in managing expectations. ## What's next for Wher (with Lyft) In the short term, adding a social aspect and allowing for "friends" to organize and mark designated meet up spots for a Lyft, to greater simply the process of a night out on the town. In the long term, I hope to be speaking with Lyft!
## Inspiration The current situation with Gamestop inspired us to create this project ## What it does It allows the user to find subreddit posts about the memestock he is interested in and also to check which one are popular on reddit. ## How we built it We build the app using python and the Reddit API. ## Challenges we ran into This was the first time that we used this kind of API, a challenging part of our project was to understand and correctly use Praw and the data that it gives us ## Accomplishments that we're proud of This was our first Hackathon for the most of us and we're proud to have managed to do a project on a constraint amount of time, and accomplishing all the goals that we were setting out to do. ## What we learned -Use Praw (a Reddit Scraper for Python) to get the data that we wanted -Build a simple UI in Python using tkinter ## What's next for MemeStocks Finder Implementing other social networks such as Twitter on it Improving the UI and the performance of the program
partial
## Inspiration Our project is driven by a clear purpose: to make a real, positive difference in society using technology, especially by fixing how the government works. We're excited about using statistical and reinforcement learning to tackle big issues like the tax gap and to build tools that agencies like the IRS and FDA can use. We're at a key moment for AI and learning technologies. We believe these technologies can hugely improve government efficiency, helping it better serve the community in today's fast-moving world. ## What it does Our project brings to life a unique system for automating and improving policy-making through AI. It starts by gathering preferences from people or AI on what matters most for societal well-being. Then, it designs a game-like scenario where these preferences guide the creation of policies, aiming to achieve the best outcomes for society. This continuous loop of feedback and improvement allows for experimenting with policies in a safe, simulated environment, making it easier to see what works and what doesn't before implementing these policies in the real world. ## How we built it We built our system by experimenting with various AI models and hosting solutions. Initially, we tried GPT-3.5 Turbo, Groq, and Together.AI, but decided on self-hosting for optimal performance. We started with Ollama, moved to Mystic, and finally settled on VLLM with RunPod, utilizing tensor parallelism and automatic weight quantization for efficiency. ## Challenges we ran into Scaling our backend was challenging due to the need for batching inputs and managing resources efficiently. We faced difficulties in finding the right balance between speed and quality, and in deploying models that met our requirements. ## Accomplishments that we're proud of We're proud of deploying a system capable of running thousands of agents with efficient resource management, particularly our use of VLLM on RunPod with advanced computational strategies, which allowed us to achieve our goals. ## What we learned We learned a lot about model optimization, the importance of the right hosting environment, and the balance between model size and performance. The experience has been invaluable in understanding how to scale AI systems effectively. ## What's next for Gov.AI Next, we aim to scale up to 100,000 to 1M agents by refining our token-level encoding scheme, further speeding up processing by an estimated 10x. This expansion will allow for broader experimentation with policies and more nuanced governance decisions, leveraging the full potential of AI to modernize and improve governmental efficiency and responsiveness. Our journey continues as we explore new technologies and methodologies to enhance our system's capabilities, driving forward the mission of Gov.Ai for societal betterment.
## Inspiration The inspiration of this project came from one of the sponsors in **HTN (Co:here)**. Their goal is to make AI/ML accessible to devs, which gave me the idea, that I can build a platform, where people who do not even know how to code can build their own Machine Learning models. Coding is a great skill to have, but we need to ensure that it doesn't become a necessity to survive. There are a lot of people who prefer to work with the UI and cannot understand code. As developers, it is our duty to cater to this audience as well. This is my inspiration and goal through this project. ## What it does The project works by taking in the necessary details of the Machine Learning model that are required by the function. Then it works in the backend to dynamically generate code and build the model. It is even able to decide whether to convert data in the dataset to vectors or not, based on race conditions, and ensure that the model doesn't fail. It then returns the required metric to the user for them to check it out. ## How I built it I first built a Flask backend that took in information regarding the model using JSON. Then I built a service to parse and evaluate the necessary conditions for the Scikit Learn models, and then train and predict with it. After ensuring that my backend was working properly, I moved to the front-end where I spent a lot of my time, building a clean UI/UX design so that the users can have the best and the most comfortable experience while using my application. ## Challenges I ran into One of the key challenges of this project is to generate code dynamically at run-time upon user input. This requirement is a very hefty one as I had to ensure that the inputs won't break the code. I read through the documentation of Scikit Learn and worked with it, while building the web app. ## Accomplishments that I'm proud of I was able to full-fledged working application on my own, building the entire frontend and backend from scratch. The application is able to take in the features of the model and the dataset, and display the results of the trainings. This allows the user to tweak their model and check results everytime to see what works best for them. I'm especially proud of being able to generate and run code dynamically based on user input. ## What I learned This project, more than anything, was a challenge to myself, to see how far I had come from my last HackTheNorth experience. I wanted to get the full experience of building every part of the project, and that's why I worked solo. This gave me experience of all the aspects of building a software from scratch in limited amount of time, allowing me to grasp the bigger picture. ## What's next for AutoML My very first step will be to work on integrating TensorFlow to this project. My initial goal was to have a visual representation of Neural Network layers for users to drag and drop. Due to time and technical constraints, I couldn't fully idealize my goal. So this is the first thing I am going to work with. After this, I'll probably work with authentication so that people can work on their projects and store their progresses.
## Inspiration As the world grapples with challenges like climate change, resource depletion, and social inequality, it has become imperative for organizations to not only understand their environmental, social, and governance (ESG) impacts but also to benchmark and improve upon them. However, one of the most significant hurdles in this endeavor is the complexity and inaccessibility of sustainability data, which is often buried in lengthy official reports and varied formats, making it challenging for stakeholders to extract actionable insights. Recognizing the potential of AI to transform this landscape, we envision Oasis as a solution to democratize access to sustainability data, enabling more informed decision-making and fostering a culture of continuous improvement toward global sustainability goals. By conversing with AI agents, companies are able to collaborate in real-time to gain deeper insights and work towards solutions. ## What it does Oasis is a groundbreaking platform that leverages AI agents to streamline the parsing, indexing, and analysis of sustainability data from official government and corporate ESG reports. It provides an interface for companies to assess their records and converse with an AI agent that has access to their sustainability data. The agent helps them benchmark their practices against practices of similar companies and narrow down ways that they can improve through conversation. Companies can effortlessly benchmark their current sustainability practices, assess their current standings, and receive tailored suggestions for enhancing their sustainability efforts. Whether it's identifying areas for improvement, tracking progress over time, or comparing practices against industry standards, Oasis offers a comprehensive suite of features to empower organizations in their sustainability journey. ## How we built it Oasis uses a sophisticated blend of the following: 1. LLM (LLaMA 2) parsing to parse data from complex reports. We fine-tuned an instance of `meta-llama/Llama-2-7b-chat-hf` on the HuggingFace dataset [Government Report Summarization](https://huggingface.co/datasets/ccdv/govreport-summarization) using MonsterAPI. We use this model to parse data points from ESG PDF text, since these documents are in a non-standard format, into a JSON format. LLMs are incredibly powerful at extracting key information and summarization, which is why we see such a strong use case here. 2. Open-source text embedding model (SentenceTransformers) to index data including metrics and data points within a vector database. LLM-parsed data points contain key descriptors. We use an embedding model to index these descriptors in semantic space, allowing us to compare similar metrics across companies. Two key points may not have the same descriptions, but are semantically similar, which is why indexing with embeddings is beneficial. We use the SentenceTransformer model `msmarco-bert-base-dot-v5` for text embeddings. We also use the InterSystems IRIS Data Platform to store embedding vectors, on top of the LangChain framework. This is useful for finding similar metrics across different companies and also for RAG, as discussed next. 3. Retrieval augmented generation (RAG) to incorporate relevant metrics and data points into conversation To enable users to converse with the agent and inspect and make decisions based on real data, we use RAG integrated with our IRIS vector database, running on the LangChain framework. We have a frontend UI for interacting with our agent in real time. 4. 
Embedding similarity to semantically align data points for benchmarking across companies Our frontend UI also presents key metrics for benchmarking a user’s company. It uses embedding similarity to find company metrics and relevant metrics from other companies. ## Challenges we ran into One of the most challenging parts of the project was prompting the LLM and running numerous experiments until the LLM output matched what was expected. Since LLMs are non deterministic in nature and we required outputs in a consistent JSON form (for parsed results), we needed to prompt the LLM and reinforce the constraints multiple times. This was a valuable lesson that helped us learn how to leverage LLMs in intricate ways for niche applications. ## Accomplishments that we're proud of We are incredibly proud of developing a platform that not only addresses a critical global challenge but does so with a level of sophistication and accessibility that sets a new standard in the field. Successfully training AI models to navigate the complexities of ESG reports marks a significant technical achievement. The ability to turn dense reports into clear, actionable insights represents a leap forward in sustainability practice. ## What we learned Throughout the process of building Oasis, we learned the importance of interdisciplinary collaboration in tackling complex problems. Combining AI and sustainability expertise was crucial in understanding both the technical and domain-specific challenges. We also gained insights into the practical applications of AI in real-world scenarios, particularly in how NLP and machine learning can be leveraged to extract and analyze data from unstructured sources. The iterative process of testing and feedback was invaluable, teaching us that user experience is as important as the underlying technology in creating impactful solutions. ## What's next for Oasis The journey for Oasis is just beginning. Our next steps involve expanding the corpus of sustainability reports to cover a broader range of industries and geographies, enhancing the platform's global applicability. We are also exploring the integration of predictive analytics to offer forward-looking insights, enabling users to not just assess their current practices but also to anticipate future trends and challenges. Collaborating with sustainability experts and organizations will remain a priority, as their insights will help refine our models and ensure that Oasis continues to meet the evolving needs of its users. Ultimately, we aim to make Oasis a cornerstone in the global effort towards more sustainable practices, driving change through data-driven insights and recommendations.
losing
## Overview AOFS is an automatic sanitization robot that navigates around spaces, detecting doorknobs using a custom trained machine-learning algorithm and sanitizing them using antibacterial agent. ## Inspiration It is known that in hospitals and other public areas, infections spread via our hands. Door handles, in particular, are one such place where germs accumulate. Cleaning such areas is extremely important, but hospitals are often at a short of staff and the sanitization may not be done as often as should be. We therefore wanted to create a robot that would automate this, which both frees up healthcare staff to do more important tasks and ensures that public spaces remain clean. ## What it does AOFS travels along walls in public spaces, monitoring the walls. When a door handle is detected, the robot stops automatically sprays it with antibacterial agent to sanitize it. ## How we built it The body of the robot came from a broken roomba. Using two ultrasonic sensors for movement and a mounted web-cam for detection, it navigates along walls and scans for doors. Our doorknob-detecting computer vision algorithm is trained via transfer learning on the [YOLO network](https://pjreddie.com/darknet/yolo/) (one of the state of the art real-time object detection algorithms) using custom collected and labelled data: using the pre-trained weights for the network, we froze all 256 layers except the last three, which we re-trained on our data using a Google Cloud server. The trained algorithm runs on a Qualcomm Dragonboard 410c which then relays information to the arduino. ## Challenges we ran into Gathering and especially labelling our data was definitely the most painstaking part of the project, as all doorknobs in our dataset of over 3000 pictures had to be boxed by hand. Training the network then also took a significant amount of time. Some issues also occured as the serial interface is not native to the qualcomm dragonboard. ## Accomplishments that we're proud of We managed to implement all hardware elements such as pump, nozzle and electrical components, as well as an algorithm that navigated using wall-following. Also, we managed to train an artificial neural network with our own custom made dataset, in less than 24h! ## What we learned Hacking existing hardware for a new purpose, creating a custom dataset and training a machine learning algorithm. ## What's next for AOFS Increasing our training dataset to incorporate more varied images of doorknobs and training the network on more data for a longer period of time. Using computer vision to incorporate mapping of spaces as well as simple detection, in order to navigate more intelligently.
## Inspiration **Handwriting is such a beautiful form of art that is unique to every person, yet unfortunately, it is not accessible to everyone.** [Parkinson’s](www.parkinson.org/Understanding-Parkinsons/Statistics) affects nearly 1 million people in the United States and more than 6 million people worldwide. For people who struggle with fine motor skills, picking up a pencil and writing is easier said than done. *We want to change that.* We were inspired to help people who find difficulty in writing, whether it be those with Parkinson's or anyone else who has lost the ability to write with ease. We believe anyone, whether it be those suffering terminal illnesses, amputated limbs, or simply anyone who cannot write easily, should all be able to experience the joy of writing! ## What it does Hand Spoken is an innovative solution that combines the ease of writing with the beauty of an individual's unique handwriting. All you need to use our desktop application is an old handwritten letter saved by you! Simply pick up your paper of handwriting (or handwriting of choice) and take a picture. After submitting the picture into our website database, you are all set. Then, simply speak into the computer either using a microphone or a voice technology device. The user of the desktop application will automatically see their text appear on the screen in their own personal handwriting font! They can then save their message for later use. ## How we built it We created a desktop application using C# with Visual Studio's WinForm framework. Handwriting images uploaded to the application is sent via HTTP request to the backend, where a python server identifies each letter using pytesseract. The recognized letters are used to generate a custom font, which is saved to the server. Future audio files recorded by the frontend are also sent into the backend, at which point AWS Transcribe services are contacted, giving us the transcribed text. This text is then processed using the custom handwriting font, being eventually returned to the frontend, ready to be downloaded by the user. ## Challenges we ran into One main challenge our team ran into was working with pytesseract. To overcome this obstacle, we made sure we worked collaboratively as a team to divide roles and learn how to use these exciting softwares. ## Accomplishments that we're proud of We are proud of creating a usable and functional database that incorporates UX/UI design! ## What we learned Not only did we learn lots about OCR (Optical Character Recognition) and AWS Transcribe services, but we learned how to collaborate effectively as a team and maximize each other's strengths. ## What's next for Hand Spoken Building upon on our idea and creating accessibility **for all** through the use of technology!
## Inspiration The failure of a certain project using Leap Motion API motivated us to learn it and use it successfully this time. ## What it does Our hack records a motion password desired by the user. Then, when the user wishes to open the safe, they repeat the hand motion that is then analyzed and compared to the set password. If it passes the analysis check, the safe unlocks. ## How we built it We built a cardboard model of our safe and motion input devices using Leap Motion and Arduino technology. To create the lock, we utilized Arduino stepper motors. ## Challenges we ran into Learning the Leap Motion API and debugging was the toughest challenge to our group. Hot glue dangers and complications also impeded our progress. ## Accomplishments that we're proud of All of our hardware and software worked to some degree of success. However, we recognize that there is room for improvement and if given the chance to develop this further, we would take it. ## What we learned Leap Motion API is more difficult than expected and communicating with python programs and Arduino programs is simpler than expected. ## What's next for Toaster Secure -Wireless Connections -Sturdier Building Materials -User-friendly interface
partial
## Inspiration As college students learning to be socially responsible global citizens, we realized that it's important for all community members to feel a sense of ownership, responsibility, and equal access toward shared public spaces. Often, our interactions with public spaces inspire us to take action to help others in the community by initiating improvements and bringing up issues that need fixing. However, these issues don't always get addressed efficiently, in a way that empowers citizens to continue feeling that sense of ownership, or sometimes even at all! So, we devised a way to help FixIt for them! ## What it does Our app provides a way for users to report Issues in their communities with the click of a button. They can also vote on existing Issues that they want Fixed! This crowdsourcing platform leverages the power of collective individuals to raise awareness and improve public spaces by demonstrating a collective effort for change to the individuals responsible for enacting it. For example, city officials who hear in passing that a broken faucet in a public park restroom needs fixing might not perceive a significant sense of urgency to initiate repairs, but they would get a different picture when 50+ individuals want them to FixIt now! ## How we built it We started out by brainstorming use cases for our app and and discussing the populations we want to target with our app. Next, we discussed the main features of the app that we needed to ensure full functionality to serve these populations. We collectively decided to use Android Studio to build an Android app and use the Google Maps API to have an interactive map display. ## Challenges we ran into Our team had little to no exposure to Android SDK before so we experienced a steep learning curve while developing a functional prototype in 36 hours. The Google Maps API took a lot of patience for us to get working and figuring out certain UI elements. We are very happy with our end result and all the skills we learned in 36 hours! ## Accomplishments that we're proud of We are most proud of what we learned, how we grew as designers and programmers, and what we built with limited experience! As we were designing this app, we not only learned more about app design and technical expertise with the Google Maps API, but we also explored our roles as engineers that are also citizens. Empathizing with our user group showed us a clear way to lay out the key features of the app that we wanted to build and helped us create an efficient design and clear display. ## What we learned As we mentioned above, this project helped us learn more about the design process, Android Studio, the Google Maps API, and also what it means to be a global citizen who wants to actively participate in the community! The technical skills we gained put us in an excellent position to continue growing! ## What's next for FixIt An Issue’s Perspective \* Progress bar, fancier rating system \* Crowdfunding A Finder’s Perspective \* Filter Issues, badges/incentive system A Fixer’s Perspective \* Filter Issues off scores, Trending Issues
## 💫 Inspiration Inspired by our grandparents, who may not always be able to accomplish certain tasks, we wanted to create a platform that would allow them to find help locally . We also recognize that many younger members of the community might be more knowledgeable or capable of helping out. These younger members may be looking to make some extra money, or just want to help out their fellow neighbours. We present to you.... **Locall!** ## 🏘 What it does Locall helps members of a neighbourhood get in contact and share any tasks that they may need help with. Users can browse through these tasks, and offer to help their neighbours. Those who post the tasks can also choose to offer payment for these services. It's hard to trust just anyone to help you out with daily tasks, but you can always count on your neighbours! For example, let's say an elderly woman can't shovel her driveway today. Instead of calling a big snow plowing company, she can post a service request on Locall, and someone in her local community can reach out and help out! By using Locall, she's saving money on fees that the big companies charge, while also helping someone else in the community make a bit of extra money. Plenty of teenagers are looking to make some money whenever they can, and we provide a platform for them to get in touch with their neighbours. ## 🛠 How we built it We first prototyped our app design using Figma, and then moved on to using Flutter for actual implementation. Learning Flutter from scratch was a challenge, as we had to read through lots of documentation. We also stored and retrieved data from Firebase ## 🦒 What we learned Learning a new language can be very tiring, but also very rewarding! This weekend, we learned how to use Flutter to build an iOS app. We're proud that we managed to implement some special features into our app! ## 📱 What's next for Locall * We would want to train a Tensorflow model to better recommend services to users, as well as improve the user experience * Implementing chat and payment directly in the app would be helpful to improve requests and offers of services
## Inspiration Crime rates are on the rise across America, and many people, especially women, fear walking alone at night, even in their own neighborhoods. When we first came to MIT, we all experienced some level of harassment on the streets. Current navigation apps do not include the necessary safety precautions that pedestrians need to identify and avoid dimly-lit, high-crime areas. ## What it does Using a combination of police crime reports and the Mapbox API, VIA offers users multiple secure paths to their destination and a user-friendly display of crime reports within the past few months. Ultimately, VIA is an app that provides up-to-date data about the safety of pedestrian routes. ## How we built it We built our interactive map with the Mapbox API, programming functions with HTML and Javascript which overlays Boston police department crime data on the map and generates multiple routes given start and end destinations. ## Challenges we ran into We had some difficulty with instruction banners at the end of the hackathon that we will definitely work on in the future. ## Accomplishments that we're proud of None of us had much experience with frontend programming or working with APIs, and a lot of the process was trial and error. Creating the visuals for the maps in such a short period of time pushed us to step out of our comfort zones. We'd been ideating this project for quite some time, so actually creating an MVP is something we are very proud of. ## What we learned This project was the first time that any of us actually built tangible applications outside of school, so coding this in 24-hours was a great learning experience. We learned about working with APIs and how to read the documentation involved in using them as well as breaking down data files into workable data structures. With all of us having busy schedules this weekend, it was also important to communicate properly so that we each new what our tasks were for the day as we weren't all together for a majority of the hackathon. However, we were all able to collaborate well, and we learned how to communicate effectively and work together to overcome our project challenges. ## What's next for VIA We plan on working outside of school on this project to hone some of the designs and make the navigation features with the data available beyond Boston. There are many areas that we can improve the design, such as making the application a mobile app instead of a web app, which we will consider working on in the future.
winning
## Inspiration The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators. Utilizing advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient. ## What it does Wise Up is a website that takes many different types of file format, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages in a text and figure out which ones are most likely to contain the answer to a question that the user sends. It can also recursively summarize the file at different levels of compression. ## How we built it With blood, sweat and tears! We used many tools offered to us throughout the challenge to simplify our life. We used Javascript, HTML and CSS for the website, and used it to communicate to a Flask backend that can run our python scripts involving API calls and such. We have API calls to openAI text embeddings, to cohere's xlarge model, to GPT-3's API, OpenAI's Whisper Speech-to-Text model, and several modules for getting an mp4 from a youtube link, a text from a pdf, and so on. ## Challenges we ran into We had problems getting the backend on Flask to run on a Ubuntu server, and later had to instead run it on a Windows machine. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links ended up taking more time than expected, and finally, since the latency of sending information back and forth from the front end to the backend would lead to a worse user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in transferring code from Python to Javascript. ## Accomplishments that we're proud of Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 based on extracted information to formulate the answer, we believe we likely equal the state of the art in the task of quickly analyzing text and answer complex questions on it, and the ease of use for many different file formats makes us proud that this project and website can be useful for so many people so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and costs pennies to run. Moreover, we have added an identification system (users signing up with a username and password) to ensure that a specific account is capped at a certain usage of the API, which is at our own costs (pennies, but we wish to avoid it becoming many dollars without our awareness of it. ## What we learned As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and for greater results. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that has saved us a few different times is using json for data transfer, and aws services to store Mbs of data very cheaply. Another thing we learned is that, unfortunately, as time goes on, LLMs get bigger and so sometimes much, much slower; api calls to GPT3 and to Whisper are often slow, taking minutes for 1000+ page textbooks. 
## What's next for Wise Up What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is to summarize text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or attempt for the website to go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, aws and CPU running Whisper.
## Inspiration Our inspiration came from a common story that we have been seeing on the news lately - the wildfires that are impacting people on a nationwide scale. These natural disasters strike at uncertain times, and we don't know if we are necessarily going to be in the danger zone or not. So, we decided to ease the tensions that occur during these high-stress situations by acting as the middle persons. ## What it does At RescueNet, we have two types of people with using our service - either subscribers or homeowners. The subscriber pays RescueNet monthly or annually at a rate which is cheaper than insurance! Our infrastructure mainly targets people who live in natural disaster-prone areas. In the event such a disaster happens, the homeowners will provide temporary housing and will receive a stipend after the temporary guests move away. We also provide driving services for people to escape their emergency situations. ## How we built it We divided our work into the clientside and the backend. Diving into the clientside, we bootstrapped our project using Vite.js for faster loadtimes. Apart from that, React.js was used along with React Router to link the pages and organize the file structure accordingly. Tailwind CSS was employed to simplify styling along with Material Tailwind, where its pre-built UI components were used in the about page. Our backend server is made using Node.js and Express.js, and it connects to a MongoDB Atlas database making use of a JavaScript ORM - Mongoose. We make use of city data from WikiData, geographic locations from GeoDB API, text messaging functionality of Twilio, and crypto payment handling of Circle. ## Challenges we ran into Some challenges we ran into initially is to make the entire web app responsive across devices while still keeping our styles to be rendered. At the end, we figured out a great way of displaying it in a mobile setting while including a proper navbar as well. In addition, we ran into trouble working with the Circle API for the first time. Since we've never worked with cryptocurrency before, we didn't understand some of the implications of the code we wrote, and that made it difficult to continue with Circle. ## Accomplishments that we're proud of An accomplishment we are proud of is rendering the user dashboard along with the form component, which allowed the user to either enlist as a subscriber or homeowner. The info received from this component would later be parsed into the dashboard would be available for show. We are also proud of how we integrated Twilio's SMS messaging services into the backend algorithm for matching subscribers with homeowners. This algorithm used information queried from our database, accessed from WikiData, and returned from various API calls to make an "optimal" matching based on distance and convenience, and it was nice to see this concept work in real life by texting those who were matched. ## What we learned We learned many things such as how to use React Router in linking to pages in an easy way. Also, leaving breadcrumbs in our Main.jsx allowed us to manually navigate to such pages when we didn't necessarily had anything set up in our web app. We also learned how to use many backend tools like Twilio and Circle. ## What's next for RescueNet What's next for RescueNet includes many things. We are planning on completing the payment model using Circle API, including implementing automatic monthly charges and the ability to unsubscribe. 
Additionally, we plan on marketing to a few customers nationwide, this will allow us to conceptualize and iterate on our ideas till they are well polished. It will also help in scaling things to include countries such as the U.S.A and Mexico.
## Inspiration We know the struggles of students. Trying to get to that one class across campus in time. Deciding what to make for dinner. But there was one that stuck out to all of us: finding a study spot on campus. There have been countless times when we wander around Mills or Thode looking for a free space to study, wasting our precious study time before the exam. So, taking inspiration from parking lots, we designed a website that presents a live map of the free study areas of Thode Library. ## What it does A network of small mountable microcontrollers that uses ultrasonic sensors to check if a desk/study spot is occupied. In addition, it uses machine learning to determine peak hours and suggested availability from the aggregated data it collects from the sensors. A webpage that presents a live map, as well as peak hours and suggested availability . ## How we built it We used a Raspberry Pi 3B+ to receive distance data from an ultrasonic sensor and used a Python script to push the data to our database running MongoDB. The data is then pushed to our webpage running Node.js and Express.js as the backend, where it is updated in real time to a map. Using the data stored on our database, a machine learning algorithm was trained to determine peak hours and determine the best time to go to the library. ## Challenges we ran into We had an **life changing** experience learning back-end development, delving into new frameworks such as Node.js and Express.js. Although we were comfortable with front end design, linking the front end and the back end together to ensure the web app functioned as intended was challenging. For most of the team, this was the first time dabbling in ML. While we were able to find a Python library to assist us with training the model, connecting the model to our web app with Flask was a surprising challenge. In the end, we persevered through these challenges to arrive at our final hack. ## Accomplishments that we are proud of We think that our greatest accomplishment is the sheer amount of learning and knowledge we gained from doing this hack! Our hack seems simple in theory but putting it together was one of the toughest experiences at any hackathon we've attended. Pulling through and not giving up until the end was also noteworthy. Most importantly, we are all proud of our hack and cannot wait to show it off! ## What we learned Through rigorous debugging and non-stop testing, we earned more experience with Javascript and its various frameworks such as Node.js and Express.js. We also got hands-on involvement with programming concepts and databases such as mongoDB, machine learning, HTML, and scripting where we learned the applications of these tools. ## What's next for desk.lib If we had more time to work on this hack, we would have been able to increase cost effectiveness by branching four sensors off one chip. Also, we would implement more features to make an impact in other areas such as the ability to create social group beacons where others can join in for study, activities, or general socialization. We were also debating whether to integrate a solar panel so that the installation process can be easier.
partial
## Inspiration We were interested in developing a solution to automate the analysis of microscopic material images. ## What it does Our program utilizes image recognition and image processing tools such as edge detection, gradient analysis, gaussian/median/average filters, morphologies, image blending etc. to determine specific shapes of a microscopic image and apply binary thresholds for analysis. In addition, the program has the ability to differentiate between light and dark materials under poor lighting conditions, as well as calculate the average surface areas of grains and the percentage of dark grains. ## How we built it We used Python algorithms incorporated with OpenCV tools in the PyCharm developing environment. ## Challenges we ran into Contouring for images was extremely difficult considering there were many limitations and cleaning/calibrating data and threshold values. The time constraints also impacted us, as we would have liked to be able to develop a more accurate algorithm for our image analysis software. ## Accomplishments that we're proud of Making a breakthrough in iterative masking of the images to achieve an error percentage consistently below 0.5%. We're also incredibly proud of the fact that we were able to complete the majority of the challenge tasks as well as develop a user-friendly interface. ## What we learned We became better equipped with Python and the opensource materials available to all of us. We also learned valuable computer vision skills through practical applications as well as a developed a better understanding of data processing algorithms. ## What's next for Material Arts 2000 We're looking to further refine our algorithms so that it will be of more practical use in the future. Potentially looking to expand from the specific field of microscopic materials to develop a more widely applicable algorithm.
## Inspiration Being a student of the University of Waterloo, every other semester I have to attend interviews for Co-op positions. Although it gets easier to talk to people, the more often you do it, I still feel slightly nervous during such face-to-face interactions. During this nervousness, the fluency of my conversion isn't always the best. I tend to use unnecessary filler words ("um, umm" etc.) and repeat the same adjectives over and over again. In order to improve my speech through practice against a program, I decided to create this application. ## What it does InterPrep uses the IBM Watson "Speech-To-Text" API to convert spoken word into text. After doing this, it analyzes the words that are used by the user and highlights certain words that can be avoided, and maybe even improved to create a stronger presentation of ideas. By practicing speaking with InterPrep, one can keep track of their mistakes and improve themselves in time for "speaking events" such as interviews, speeches and/or presentations. ## How I built it In order to build InterPrep, I used the Stdlib platform to host the site and create the backend service. The IBM Watson API was used to convert spoken word into text. The mediaRecorder API was used to receive and parse spoken text into an audio file which later gets transcribed by the Watson API. The languages and tools used to build InterPrep are HTML5, CSS3, JavaScript and Node.JS. ## Challenges I ran into "Speech-To-Text" API's, like the one offered by IBM tend to remove words of profanity, and words that don't exist in the English language. Therefore the word "um" wasn't sensed by the API at first. However, for my application, I needed to sense frequently used filler words such as "um", so that the user can be notified and can improve their overall speech delivery. Therefore, in order to implement this word, I had to create a custom language library within the Watson API platform and then connect it via Node.js on top of the Stdlib platform. This proved to be a very challenging task as I faced many errors and had to seek help from mentors before I could figure it out. However, once fixed, the project went by smoothly. ## Accomplishments that I'm proud of I am very proud of the entire application itself. Before coming to Qhacks, I only knew how to do Front-End Web Development. I didn't have any knowledge of back-end development or with using API's. Therefore, by creating an application that contains all of the things stated above, I am really proud of the project as a whole. In terms of smaller individual accomplishments, I am very proud of creating my own custom language library and also for using multiple API's in one application successfully. ## What I learned I learned a lot of things during this hackathon. I learned back-end programming, how to use API's and also how to develop a coherent web application from scratch. ## What's next for InterPrep I would like to add more features for InterPrep as well as improve the UI/UX in the coming weeks after returning back home. There is a lot that can be done with additional technologies such as Machine Learning and Artificial Intelligence that I wish to further incorporate into my project!
## Inspiration It has been increasingly evident that paper is becoming more and more obsolete in today's society. PaperVision is an innovation that will launch paper into the 21st century. ## What it does PaperVision will inspire the people to draw their ideas on paper and see them on the computer. It creates a freely interactive experience with ideas that are put down on the paper. Two examples that we tackled are mazes and graphing tools. Anyone can draw a maze on a piece of paper so long as it has a start and a finish and end up playing through it on a computer with PaperVision. In addition, PaperVision can take a hand drawn graph and output to the user the appropriate equation. Another work in progress of PaperVision, is the gaming aspect, such as a version of Space Invaders. Enemies are hand drawn and fought in real time. ## How we built it Using OpenCV embedded in Python, we were able to extract information from the hand drawn images. Specifically, we used the techniques of erosion and dilation to remove noise such as faint shadows and glares that could disrupt the clarity and the usability of an image. This allowed us to distinguish between ink and paper: black and white. OpenCV has a function that deals with contours that enabled us to recognize shapes, namely lines, such as the edges of a maze. We also other integrated libraries such as turtle graphics to complete the experience. In the graphing tool, we sampled several points from the hand drawn image and used linear algebra techniques including matrix row reduction and regression to determine an equation that matches the graph. ## Challenges we ran into Some of the challenges we faced included black ink detection and distinguishing between colors. Also, Computer Vision did not cooperate at first, but ended up working with fine tuning and patience on our part. When dealing with the maze proportion of the project, it was necessary for us to identify black ink as boundaries. However, due to the fact that the black ink was taken from something drawn on paper, the pixels did not correspond. To solve this, we needed create a linear mapping between the set of pixels from the drawn image and the game image. In order to distinguish between colors, namely black and white, we discovered that erosion and dilation did not remove all the noise created from various factors including light intensity. We accounted for this by having a threshold to categorize different pixels. ## Accomplishments that we're proud of Although we faced many challenges along the way, we also had accomplishments that we were proud of. Most importantly, we were able to a finish working project in the time alloted with our first idea. To add, we applied the math such as linear algebra that we learned in class to an education graphing tool that could be used. ## What we learned We learned that Computer Vision is a powerful tool that can connect both the digital and physical worlds in a cyber-physical future. Another lesson that we learned from Computer Vision is that not everything, especially in the real world is as precise and perfect as we expect it to be. Quite literally, not everything is black and white. It is important to adjust and fine tune the data we are given to work better with us. ## What's next for PaperVision PaperVision has the potential to revolutionize the usage of Computer Vision. It can be used for entertainment, education tools, and games such as space invaders, if not more.
winning
## Inspiration I like looking at things. I do not enjoy bad quality videos . I do not enjoy waiting. My CPU is a lazy fool. He just lays there like a drunkard on new years eve. My poor router has a heart attack every other day so I can stream the latest Kylie Jenner video blog post, or has the kids these days call it, a 'vlog' post. CPU isn't being effectively leveraged to improve video quality. Deep learning methods are in their own world, concerned more with accuracy than applications. We decided to develop a machine learning application to enhance resolution while developing our models in such a way that they can effective run without 10,000 GPUs. ## What it does We reduce your streaming bill. We let you stream Kylie's vlog in high definition. We connect first world medical resources to developing nations. We make convert an unrecognizeable figure in a cop's body cam to a human being. We improve video resolution. ## How I built it Wow. So lots of stuff. Web scraping youtube videos for datasets of 144, 240, 360, 480 pixels. Error catching, thread timeouts, yada, yada. Data is the most import part of machine learning, and no one cares in the slightest. So I'll move on. ## ML stuff now. Where the challenges begin We tried research papers. Super Resolution Generative Adversarial Model [link](https://arxiv.org/abs/1609.04802). SRGAN with an attention layer [link](https://arxiv.org/pdf/1812.04821.pdf). These were so bad. The models were to large to hold in our laptop, much less in real time. The model's weights alone consisted of over 16GB. And yeah, they get pretty good accuracy. That's the result of training a million residual layers (actually *only* 80 layers) for months on GPU clusters. We did not have the time or resources to build anything similar to these papers. We did not follow onward with this path. We instead looked to our own experience. Our team had previously analyzed the connection between image recognition and natural language processing and their shared relationship to high dimensional spaces [see here](https://arxiv.org/abs/1809.05286). We took these learnings and built a model that minimized the root mean squared error as it upscaled from 240 to 480 px. However, we quickly hit a wall, as this pixel based loss consistently left the upscaled output with blurry edges. In order to address these edges, we used our model as the Generator in a Generative Adversarial Network. However, our generator was too powerful, and the discriminator was lost. We decided then to leverage the work of the researchers before us in order to build this application for the people. We loaded a pretrained VGG network and leveraged its image embeddings as preprocessing for our discriminator. Leveraging this pretrained model, we were able to effectively iron out the blurry edges while still minimizing mean squared error. Now model built. We then worked at 4 AM to build an application that can convert videos into high resolution. ## Accomplishments that I'm proud of Building it good. ## What I learned Balanced approaches and leveraging past learning ## What's next for Crystallize Real time stream-enhance app.
## What it does
Paste in any text and it will identify the key scenes before turning it into a narrated movie. Favourite book, historical battle, or rant about work. Anything and everything, if you can read it, Lucid.ai can dream it.
## How we built it
Once you hit generate on the home UI, our frontend sends your text and video preferences to the backend, which uses our custom algorithm to cut up the text into key scenes. The backend then uses multithreading to make three simultaneous API calls. First, a call to GPT-3 to condense the chunks into image prompts to be fed into a Stable Diffusion/Deforum AI image generation model. Second, a sentiment keyword analysis using GPT-3, which is then fed to the YouTube API for a fitting background song. Finally, a call to TortoiseTTS generates a convincing narration of your text. Collected back at the front-end, you end up with a movie, all from a simple text.
## Challenges we ran into
Our main challenge was computing power. With no access to industry-grade GPU power, we were limited to running our models on personal laptop GPUs. External computing power also limited payload sizes, forcing us to find roundabout ways to communicate our data to the front-end.
## Accomplishments that we're proud of
* Extremely resilient commitment to the project, despite repeated technical setbacks
* Fast on-our-feet thinking when things don't go to plan
* A well-laid-out front-end development plan
## What we learned
* AWS S3 Cloud Storage
* TortoiseTTS
* How to dockerize a large open-source codebase
## What's next for Lucid.ai
* More complex camera motions beyond simple panning
* More frequent frame generation
* Real-time frame generation alongside video watching
* Parallel cloud computing to handle rendering at faster speeds
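The "three simultaneous API calls" pattern from the How-we-built-it section can be sketched with a thread pool. The worker functions below are placeholders standing in for the GPT-3, YouTube, and TortoiseTTS calls, not Lucid.ai's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder workers -- in the real app these would call GPT-3, the YouTube
# API, and TortoiseTTS respectively.
def make_image_prompts(scenes):
    return [f"illustration of: {s}" for s in scenes]

def pick_background_song(scenes):
    return "https://youtube.com/watch?v=..."  # sentiment-matched song (stub)

def narrate(scenes):
    return b""  # audio bytes from a TTS engine (stub)

def build_movie_assets(scenes):
    # The three calls are independent, so they run concurrently and the slowest
    # one sets the total latency instead of the sum of all three.
    with ThreadPoolExecutor(max_workers=3) as pool:
        prompts = pool.submit(make_image_prompts, scenes)
        song = pool.submit(pick_background_song, scenes)
        audio = pool.submit(narrate, scenes)
        return prompts.result(), song.result(), audio.result()

assets = build_movie_assets(["The ship sets sail at dawn.", "A storm gathers."])
```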
## Introduction
Hey! I'm William. I care about doing efficient Computer Vision and I think that this project highlights some great potential ideas for ensuring we can stay efficient.
## Inspiration
I've recently been very interested in making ML (and specifically Computer Vision) more efficient. There exist [many](https://ai.googleblog.com/2022/02/good-news-about-carbon-footprint-of.html) [recent](https://arxiv.org/abs/1906.02243) [analyses](https://arxiv.org/abs/2104.10350) of making ML more efficient. These papers showcase a variety of different approaches to improve model efficiency and reduce global impact. If you're not sold on why model efficiency matters, I encourage you to skim over some of those papers. Some key takeaways are as follows:
* A majority of energy is spent serving models rather than training them (roughly 90% vs. 10%) (Patterson et al.)
* There are three major focuses we can use to improve model implementation efficiency:
  * More Efficient Architectures -- Model inference cost (in FLOPs), as well as model size (number of parameters), is growing exponentially. It is necessary to design and utilize algorithms that learn and infer more efficiently. The authors highlight two promising approaches: Mixture of Experts models and approaches that perform more efficient attention (BigBird, Linformer, Nystromformer).
  * More Efficient Hardware -- The only reason we are able to train these larger models is more complex hardware. Specialized hardware, such as TPUs or Cerebras' WSE, shows promising results in performance per watt. The authors find that specialized hardware can improve efficiency 2-5x.
  * More Efficient Energy (Datacenter/Energy Generation) -- The location where these models are run also has a significant impact on inference efficiency. By computing locally, as well as in a place where we can efficiently generate energy, we can reduce the impact of our models.

These goals, however, conflict with many current approaches in machine learning implementation. In the NLP space, we are quickly moving towards models that are trained for longer on larger, multi-lingual datasets. Recent SOTA works (Megatron-LM) are nearly 10 trillion parameters. Training GPT-3 (once!) takes the CO2 equivalent of three round-trip flights from SF to NY (1,287 MWh). In Computer Vision, a push towards attention-based architectures, as well as a focus on higher-resolution images and videos, has led to a sharp increase in the cost to train and perform inference with a model.
### We propose a new workflow to assist with the implementation of efficient Computer Vision.
Rather than training a single model from scratch, we separate the encoding and fine-tuning aspects of a task. Specifically, we train a single, large, self-supervised model on an unlabelled, high-resolution dataset. After building a strong encoding, we can fine-tune small, efficient prediction heads on those encodings to solve a specific task. We utilize Scale.ai's platform to automate labeling.
## What it does
![A diagram of the flow of data across the system. After raw data is collected, it is sent to the server and to scale.ai. The server generates embeddings while scale.ai can generate ground truth labels.](https://res.cloudinary.com/devpost/image/fetch/s--Nj38FMn_--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/d1.png)
We use [UIUC's Almacam](https://www.youtube.com/watch?v=h6hzVOwaN_4) as a proof of concept for this topic.
We want to show that you can take some arbitrary data source and build out an extremely efficient, data-drift-resistant, (mostly) task-agnostic inference pipeline. Our contribution has three major components:
1. Automatic data collection pipeline -- We design a small server that automatically collects data from the UIUC AlmaCam and uploads it to the Scale.ai server as well as to the data center.
2. Self Supervision for Image Encoding -- Instead of training a model end-to-end on local/inefficient hardware, we train in a (theoretical) data center. This allows us to still leverage extremely large models on extremely large data while maintaining long-term stability and efficiency. We train a self-supervised ViT/Nystromformer hybrid model on high resolution\* images. We require the use of efficient (Nystromformer) attention and use large (48x48) image patches to accommodate the high-resolution images.
3. Fine Tuned Heads -- We showcase that this approach maintains accuracy by fine-tuning very small prediction heads on the image encoding alone (NONE of the original images is passed to the heads). We show that these prediction heads can be quickly and easily trained on a new task. This means we do not have to retrain the larger encoding model when we want to apply it to a new task!

\*We train on 576x960; normally people train on 224x224, so this is ~11x larger. We can train on the raw resolution (1920x1080, almost 4x larger) without adverse effects, but not over a weekend!
## How we built it
Everything is written in Python. I use PyTorch for ML things and Flask for server things.
#### Automatic data collection pipeline
This was relatively straightforward. We have Flask automatically execute some bash scripts to first grab the .m3u8 playlist URL from YouTube. Then, we have ffmpeg scrape frames from that m3u8 file into a separate directory. After a certain number of images are downloaded, we hit the Scale.ai API to upload these images and associated metadata (time of day) to their database. Originally, I had configured this server to automatically create a batch (a new set of images to be labeled), but I found it easier to do it manually (since creating batches can be expensive and I only got $250 of credit).
#### Self Supervision for Image Encoding
So, the data collection pipeline is feeding new data into some files, and we want to train on that in a self-supervised way (since this allows the encoder model to remain task-agnostic). If you're interested, here are the [wandb logs](https://wandb.ai/weustis/treehacks/overview?workspace=user-weustis).
##### Model Architecture
We train a [Vision Transformer](https://arxiv.org/pdf/2010.11929.pdf)-based architecture on 576x960-resolution images. We replace normal attention (which scales poorly with many tokens) with [Nystrom-based attention](https://arxiv.org/abs/2102.03902). This allows us to approximate self-attention with O(n) memory complexity (rather than O(n^2) with normal attention). This is necessary since we do not know our downstream task and so we **must** maintain high-resolution images. Our best model has a token dimension of 1024, a depth of 10, 10 heads, and 256 landmarks. If you'd like to compare it to existing models, it's a bit in between ViT-S and ViT-B.
##### Training Scheme
We use the [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/pdf/2111.06377.pdf) paper to establish our approach for self-supervision. As seen in the graphic below: ![Masked Autoencoder training scheme for self-supervision.
Image patches are masked out and a model is made to predict the masked regions](https://res.cloudinary.com/devpost/image/fetch/s--Nj38FMn_--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/d1.png)
This approach masks regions of the image and provides the image encoder with the non-masked regions. The image encoder builds a strong encoding, then passes those tokens to a decoder, which predicts the masked regions. We mask 70% of our patches. The best model was around 80M parameters, plus an extra 30M for the decoder part (which is only needed when training the self-supervised model).
##### Logistics
We train on 4xA100 for 1.5 hours. Our model was still improving, but you gotta move fast during a hackathon, so I spent less time on hyperparameter tuning.
#### Fine Tuned Heads
The fine-tuned head can be adjusted depending on the task. We only train on predicting the number of people in an image as well as predicting the time of day. These heads were about 2M parameters. We did not have time to ablate the head size, but I strongly suspect it could be reduced drastically (5% of the current size or less) for simple tasks like those above.
## Challenges we ran into
In no particular order:
* I did not know bash or ffmpeg very well, so it was a bit of a struggle to get them to download files from a YouTube live stream.
* My original uploads to Scale.ai did not include the original image path in the metadata, so when I downloaded the labeled results, I was unable to link them back to my local images :(. I created a new dataset and included the image path in the metadata.
* There is an extremely fine balance between the size of patches and the number of patches. Generally, more (and thus smaller) patches are better (since you can represent more complexity). However, this makes our memory and training footprint prohibitively large. On the other hand, if the patches are too big, we also run the risk of masking out an entire person, which would be impossible for the model to reconstruct and lead to a poor encoding representation. It took a lot of fine-tuning.
* My reluctance to tune hyperparameters early meant I wasted a very long time with a very bad learning rate.
* I planned to feed these encodings into a [DETR](https://arxiv.org/pdf/2005.12872.pdf) architecture to perform object/bounding box detection but I didn't have time. It isn't hard, but the DETR paper is more time-consuming to implement than I originally expected.
## Accomplishments that we're proud of
I truly believe this idea holds significant merit for a future of performing efficient inference. It has its faults: relying on some data center to serve model encodings introduces latency that may not be acceptable in some applications (see: self-driving vehicles). However, the benefit of executing upwards of 99% of your FLOPs in a location where it can be 40x more efficient should not be understated. This model shows extremely high generalization capacity. The two tasks we do fine-tune on are time-of-day prediction and number of people (a stand-in for bounding box prediction). For time of day, we train on 300 samples and are able to generally predict the time of day within 15 minutes (though all of the data was collected over the weekend, so there's probably some overfitting occurring since the test and train sets are from the same day). For the number of people, we train on only 30 samples and are able to get an average error of about 0.7 people!
This is absolutely amazing considering how well an embedding of only 512 numbers was able to describe the 1.6M pixel values in an original image. The automated data pipeline is extremely smooth. It felt very nice to be working on other tasks while collecting data, creating batches, and getting those batches labeled at the same time.
## What we learned
A ton about Bash and Flask. I learned what Scale.ai actually does, which was really fun. I learned a significant amount about the efficiency of models and, maybe more importantly, the inefficiencies. It was quite shocking for me to read that usually only 10% of the energy spent on a model goes to training.
## What's next for Fine Tuned Heads
I really want to see how well the object detection works with only encodings. Theoretically, it should be fine, since these embeddings are strong enough to (almost perfectly) reconstruct the original image, so the information on where people are in the image is clearly present. Automating more of the process would be the next major step. Currently, the 'datacenter' is a DGX A100, which honestly is a pretty killer data center, but it could be on a TPU, further increasing efficiency. Looking extremely long-term, a major question becomes how to keep the self-supervised model up to date. Consistent 'fine-tuning' with new data should prevent too much data drift, but every time you fine-tune the self-supervised model you need to retrain all of the prediction heads (which, granted, is quite easy and fast). I believe this can be solved with some codebook solutions like those seen in VQ-VAE-2, but I'd have to think about it more.
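A minimal sketch of the "fine-tuned head" idea: a tiny MLP trained on frozen encodings, with the large encoder never touched. The head architecture and training loop here are illustrative assumptions, not the project's exact code; only the 512-dimensional embedding size comes from the write-up.

```python
import torch
import torch.nn as nn

# The large self-supervised encoder is frozen and lives "in the data center";
# only its output embedding (512-dim, per the write-up) reaches the task head.
class PeopleCountHead(nn.Module):
    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),   # regression target: number of people
        )

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        return self.mlp(embedding)

head = PeopleCountHead()
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# embeddings: precomputed by the frozen encoder; labels: from Scale.ai.
embeddings = torch.randn(30, 512)            # stand-in for real encodings
labels = torch.randint(0, 6, (30, 1)).float()

for _ in range(200):                          # a head this small trains in seconds
    optimizer.zero_grad()
    loss = loss_fn(head(embeddings), labels)
    loss.backward()
    optimizer.step()
```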
winning
## Inspiration
We wanted to create an app that accessed and interconnected multiple APIs. With mental health being a very important issue, we wanted to create an app that can identify someone's mood at a glance. With the popularity of social media, an easy way to tell someone's mood at a glance is what they post. Studies have shown that suicide victims often post about their depression and go unnoticed. This led us to create an app that can show someone's mood breakdown by analyzing their social media activity. An added functionality of this application is quick and easy background checks for potential job applicants.
## What it does
Twinfo operates as both a website and a Chrome plugin for analyzing Twitter accounts and acquiring a mood breakdown.
Website functionality: On the Twinfo website you can input a Twitter handle (@) and scan it. This scan returns a visual depiction of the user's mood breakdown by analyzing all their tweets, retweets, and replies. The user's mood "score" is displayed both in text form on the website and as a visual pie chart, giving a clear and concise breakdown. If the user's score crosses a specific threshold, a warning message is displayed urging the user to reach out and seek help. Although it only uses test data for now, Twinfo would also be able to compare two users to each other as well as provide site-wide mood breakdowns. The site-wide mood breakdowns would be able to show how current social/political issues affect the site's population as a whole.
Plugin functionality: When visiting a user's Twitter page, the Twinfo plugin can be used to quickly scan the user's page and acquire a basic breakdown. The plugin shows the user's mood "score" in text form and will also urge the user to reach out if their score crosses the same threshold used on the website.
## How we built it
In total, we used three different APIs to complete this project. We used the Twitter Developer API to gather a user's tweets, retweets, and replies. The username is used to get the userID needed to access the tweets. These tweets are then saved and passed on to the Twinword API, which analyzes them and calculates the user's mood "score". This mood score is then passed on to the Google Visualization API on our webpage and displayed to the user. Our process for building this website was modular, in the sense that each group member was given a specific task and we brought it all together in the end. This was beneficial for time management. It was also very fun and satisfying to bring everything together and complete the puzzle at the end.
## Challenges we ran into
The biggest challenge we encountered was the interaction between the Twinword API and the Google Visualization API. We built our website using test data at first, which was displayed using the Google Visualization API. Our site worked fine with the test data, but when it came to taking our mood "score" from the Twinword API and displaying it using the Google Visualization API, difficulties arose. It was a challenge to convert our backend server code to use the new live data and display that data properly. In the end we managed to get it to work, and the two APIs interact smoothly.
## Accomplishments that we're proud of
Something that we are especially proud of is successfully linking both the website portion of our project and the Chrome plugin portion together, and having them retrieve data from the same API.
Considering this is the first time any of us had used this API, things came together rather smoothly because we planned ahead on this implementation and designed our code structure around it.
## What we learned
We gained a lot of new knowledge of API usage, which is what we set out to accomplish this hackathon. We learned a lot about both the challenges and advantages of using APIs, especially when it comes to making them interact with each other. We also learned a lot about building a website and the challenges and difficulties that come with that, especially how our APIs interacted and integrated with our website on the HTML and JS sides of things.
## What's next for Twinfo
Our API account has a limited number of calls, but with an unlimited number of calls and the ability to pull an unlimited number of tweets, we would want to implement a site-wide breakdown of all of Twitter's mood and be able to graph how the overall mood changes. This would be great for seeing how the general population is relating to current events going on in the world. We would also want to add functionality to search not only by users, but by hashtags or location, and identify so-called "hotspots" where users in certain areas or using certain hashtags are more depressed than average. We would also want to add more specific search parameters, such as specifying the timeframe of tweets, or limiting the query to a certain number of tweets. We also would like to be able to use Twinfo to analyze a Twitter account's portion of sponsored tweets, to be able to tell if an account is expressing authentic opinions or just delivering advertising content.
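The core pipeline (tweets in, per-tweet mood scores out, aggregated into a breakdown for the chart) can be sketched as below. The `fetch_tweets` and `score_tweet` functions are stand-ins for the Twitter Developer API and Twinword calls, not Twinfo's actual code, and the warning threshold is an assumed value.

```python
from collections import Counter

def fetch_tweets(handle: str) -> list[str]:
    """Stand-in for the Twitter Developer API lookup (handle -> userID -> tweets)."""
    return ["great day at the beach", "I feel so alone lately", "exams are brutal"]

def score_tweet(text: str) -> str:
    """Stand-in for the Twinword emotion analysis; returns a dominant mood label."""
    return "sadness" if "alone" in text else "joy"

def mood_breakdown(handle: str, warning_threshold: float = 0.5) -> dict:
    tweets = fetch_tweets(handle)
    counts = Counter(score_tweet(t) for t in tweets)
    total = sum(counts.values())
    breakdown = {mood: n / total for mood, n in counts.items()}  # pie-chart data
    # Flag the account if negative moods dominate (threshold is an assumption).
    negative = breakdown.get("sadness", 0) + breakdown.get("anger", 0)
    return {"breakdown": breakdown, "show_warning": negative >= warning_threshold}

print(mood_breakdown("@example_user"))
```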
## Inspiration
Mental health has become an increasingly vital issue on our campus. The facade of the perfect Stanford student (Stanford Duck Syndrome) means that real emotions and struggles are often suppressed. It is heartwarming to be able to connect with people on campus and see how they feel, in a familiar yet anonymous way. Having the moment to connect with another person's experience of struggling with a midterm, or the happiness after Stanford beats Cal, can be amazingly uplifting.
## What it does
Our cross-platform app allows users to share how they feel in words. Their feelings are anonymously geolocated onto a map as a timestamped circle. Our NLP sentiment analyzer assigns each entry a color based on the sentiment of the feeling expressed. This provides a cool visualization of how people feel across different geographic levels. For example, you can zoom into a building to observe that people are generally happy in the Huang Engineering Center because of TreeHacks, and zoom out to see that people at Stanford are generally stressed during midterm season. The ability to zoom in and even tap on a specific circle to see how a person feels in words allows you to go local, while zooming out allows you to go global and gauge the general sentiment of an area or building as transparent colors overlap into generalized shades. It is a fascinating way to connect with people's deepest feelings and find the humanity in our everyday life.
## How we built it
Our front end was built in React Native with Google Maps and uses Node.js. Our backend consists of a Flask server written in Python, on which our NLP sentiment analysis is done, determining colors for the circles based on the feeling estimated by the language model. Our database of feelings entries is stored in Firebase in the cloud, with data being written and read to overlay feelings entries on the map. We also have a script running on Firebase to remove entries from the map after a certain time period (for example, 6 hours), so only the most recent entries are displayed to the user. Our Flask server is deployed on Heroku in the cloud.
## Challenges we ran into
Getting Flask to communicate with our React Native app to produce the NLP sentiment analysis. Setting up our backend through Firebase to create markers on the map and persist users' responses in the long run.
## Accomplishments that we're proud of
Integrating all the different components, from Firebase to React Native to the NLP sentiment analysis in Flask, was fascinating.
## What we learned
We had no prior experience with React Native and Node.js, so we learned them from scratch. Integrating all the different aspects of the solution, from the frontend to the backend to cloud storage, was a thrilling experience.
## What's next for CampusFeels
We hope to add features to track the emotional wellbeing of areas over time, as well as encourage users to develop the skills to track their own emotional well-being. We hope to apply data analytics to do this and track people's emotions related to different events/criteria, e.g., housing choices, weather, Big Game, etc.
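A minimal sketch of the Flask piece described above: an endpoint that receives a feeling, scores its sentiment, and returns a color for the map circle. The sentiment scorer here is NLTK's VADER as a stand-in for whichever language model CampusFeels actually used, and the route name and color mapping are assumptions.

```python
from flask import Flask, jsonify, request
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download('vader_lexicon')

app = Flask(__name__)
analyzer = SentimentIntensityAnalyzer()

def sentiment_to_color(score: float) -> str:
    """Map a compound sentiment score in [-1, 1] to a circle color (assumed palette)."""
    if score > 0.3:
        return "#2ecc71"   # green: positive
    if score < -0.3:
        return "#e74c3c"   # red: negative
    return "#f1c40f"       # yellow: neutral/mixed

@app.route("/feeling", methods=["POST"])
def add_feeling():
    data = request.get_json()
    score = analyzer.polarity_scores(data["text"])["compound"]
    # In the real app the full entry (text, lat/lng, timestamp, color) goes to Firebase.
    return jsonify({"color": sentiment_to_color(score), "score": score})

if __name__ == "__main__":
    app.run()
```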
## 💡 Inspiration 💡
Mental health is a growing concern in today's population, especially in 2023 as we're all adjusting back to civilization again now that COVID-19 measures are largely lifted. With Cohere as one of our UofT Hacks X sponsors this weekend, we want to explore the growing application of natural language processing and artificial intelligence to help make mental health services more accessible. One of the main barriers for potential patients seeking mental health services is the negative stigma around therapy -- in particular, admitting our weaknesses, overcoming learned helplessness, and fearing judgement from others. Patients may also find it inconvenient to seek out therapy -- either because appointment waitlists can be several months long, therapy clinics can be quite far, or appointment times may not fit the patient's schedule. By providing an online AI consultant, we can allow users to briefly experience the process of therapy to overcome their aversion, in the comfort of their own homes and under complete privacy. We are hoping that after becoming comfortable with the experience, users in need will be encouraged to actively seek mental health services!
## ❓ What it does ❓
This app is a therapy AI that generates reactive responses to the user and remembers previous information not just from the current conversation, but also from past conversations with the user. Our AI allows for real-time conversation by using speech-to-text processing technology and then uses text-to-speech technology for a fluent, human-like response. At the end of each conversation, the AI therapist generates an appropriate image summarizing the sentiment of the conversation to give users a way to better remember their discussion.
## 🏗️ How we built it 🏗️
We used Flask to make the API endpoints in the back-end to connect with the front-end and also save information for the current user's session, such as username and past conversations, which were stored in a SQL database. We first convert the user's speech to text and then send it to the back-end, where it is processed using Cohere's API, which has been trained on our custom data and the user's past conversations, and the response is then sent back. We then use our text-to-speech algorithm for the AI to 'speak' to the user. Once the conversation is done, we use Cohere's API to summarize it into a suitable prompt for the DALL-E text-to-image API, which generates an image summarizing the user's conversation for them to look back on whenever they want.
## 🚧 Challenges we ran into 🚧
We faced an issue implementing the connection from the front-end to the back-end, since we hit a CORS error while transmitting the data and had to validate it properly. Additionally, incorporating the speech-to-text technology was challenging since we had little prior experience, so we had to spend development time learning how to implement it and also how to format the responses properly. Lastly, it was a challenge to train the Cohere response AI properly, since we wanted to verify our training data was free of bias or negativity and that we were using the results of the Cohere AI model responsibly, so that our users would feel safe using our AI therapist application.
## ✅ Accomplishments that we're proud of ✅
We were able to create an AI therapist by building a self-teaching AI with the Cohere API, training a model that integrates seamlessly into our application.
It delivers more personalized responses by adapting to each user's conversation history and by making conversations accessible only to that user. We were able to effectively delegate team roles and seamlessly integrate the Cohere model into our application. It was lots of fun combining our existing web development experience with venturing out into a new domain like machine learning to approach a mental health issue using the latest advances in AI technology.
## 🙋‍♂️ What we learned 🙋‍♂️
We learned how to be more resourceful when we encountered debugging issues, while balancing the need to make progress on our hackathon project. By exploring every possible solution and documenting our findings clearly and exhaustively, we either increased the chances of solving the issue ourselves, or obtained more targeted help from one of the UofT Hacks X mentors via Discord. Our goal is to learn how to become more independent problem solvers. Initially, our team had trouble deciding on an appropriately scoped, sufficiently original project idea. We learned that our project should be challenging enough but also buildable within 36 hours, and we did not force ourselves to make our project fit into a particular prize category -- instead, we let our project idea guide which prize category to aim for. Delegating our tasks based on teammates' strengths and choosing teammates with complementary skills was essential for working efficiently.
## 💭 What's next? 💭
To improve our project, we could allow users to customize their AI therapist, such as its accent and pitch or the chat website's color theme, to make the AI therapist feel more like a personalized consultant. Adding a login page, registration page, password reset page, and user authentication would also enhance the chatbot's security. Next, we could improve our website's user interface and user experience by switching to Material UI to make our website look more modern and professional.
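The end-of-conversation step (compressing the chat into a prompt for the text-to-image model) could look roughly like the sketch below. It uses Cohere's classic `generate` endpoint purely as an illustration; the model name, prompt wording, and placeholder transcript are assumptions rather than the team's actual code, and the resulting prompt would then be passed to the DALL-E API.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def conversation_to_image_prompt(transcript: str) -> str:
    """Ask Cohere to compress a therapy conversation into a short, visual prompt."""
    response = co.generate(
        model="command",          # assumed model name
        prompt=(
            "Summarize the following conversation as a single vivid scene "
            "description suitable for an image generator:\n\n" + transcript
        ),
        max_tokens=60,
        temperature=0.7,
    )
    return response.generations[0].text.strip()

transcript = "User: I've been anxious about exams...\nAI: That sounds stressful..."
image_prompt = conversation_to_image_prompt(transcript)
# image_prompt would then be sent to the DALL-E text-to-image API.
```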
losing
## Inspiration
As vaccines are being distributed, more and more organizations are considering the idea of a Covid-19 vaccination passport. A vaccination passport would allow individuals who have been vaccinated to access events and other activities that may present high risks to unvaccinated individuals, such as flights. VaxsPass is a blockchain-based Covid-19 vaccination passport that aims to provide an open-source solution for vaccine-required activities.
## What it does
VaxsPass uses blockchain to provide proof that a user received a vaccine. When creating an account on our iOS app, the user scans the vaccine card that they received from their vaccination site, which is verified by the Google Cloud Vision API. The data is parsed, anonymized, and stored in our Ethereum smart contract. Then, whenever a user needs to provide proof of vaccination, the app provides a QR code redirecting to a site where anyone can see the user's vaccination status. The app also provides users with a map of nearby locations that can administer the Covid-19 vaccine and a chatbot that can answer users' questions about coronavirus.
## How we built it
We created a smart contract on the Ethereum blockchain for logging vaccines. This ensures that once a vaccine is logged, it cannot be used by another person who may not have gotten the vaccine. The smart contract maps the user's unique key to a hash of the date of the vaccine and the vaccine type. It's currently hosted on the Ropsten testnet with a MetaMask account. By logging vaccine data on the blockchain, we are able to make vaccination data visible to the public without compromising user security. This is especially important for a data-sensitive area such as healthcare. OAuth mappings to the blockchain are one-way: it is easy to get vaccine info with an OAuth code (which requires account access) but impossible to retrieve account data from vaccine info. This method also allows easy integration with other organizations and businesses. You can view live transactions on the Ropsten testnet here: <https://ropsten.etherscan.io/address/0xe72e3da51c4c55dabb9ee97e760aa2c0c7e73022>. This also prevents misrepresentation of vaccinations, since it is nearly impossible to modify the data stored on the blockchain. The distributed nature of the Ethereum blockchain allows this data storage to be extremely resilient and secure. The Alchemy API was used to quickly count transactions for the home page counter and for app analytics when interacting with the blockchain. User registration was built with Firebase; verification of the Covid vaccine certificate was built using Google's Cloud Vision API; the map of vaccination sites was built using Apple's MapKit and the Google Places API; and the chatbot was built using OpenAI's GPT-3 API.
## Challenges we ran into
One major challenge was integrating both the iOS app and the web app with the blockchain. We created a custom API that allows both to easily interact with the blockchain and our database. Another challenge was building a custom API for the chatbot. We built a wrapper around OpenAI's GPT-3 API that is used by the iOS chatbot.
## Accomplishments that we're proud of
This is the first hackathon we have done together. The app has a lot of components, so we are especially proud of being able to put the components together into a functional app. We are also proud of how the team responded to the need to work with new technologies and a large number of APIs.
## What's next for VaxsPass
In the future, we plan on expanding to support all vaccines, not just Covid-19. We also plan on creating a more comprehensive workflow, including a companion doctors' app to replace the vaccine card, Apple Wallet support, and a physical NFC-based card. Migration to a Proof of Stake concept would help with supporting more vaccines and expanding functionality. Blockchain's distributed ledger and Alchemy's API also provide opportunities to expand integrations with other apps, such as confirmation during online flight booking. A goal that we didn't have time for was blockchain-based vaccine supply chain management, where the Alchemy API would be essential for tracking and managing vaccines.
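The contract's core mapping (user key to a hash of vaccination date and vaccine type) can be illustrated from the client side with web3.py. The contract address, ABI, and function name below are hypothetical placeholders, not the deployed VaxsPass contract.

```python
from web3 import Web3

def anonymized_record_hash(vaccine_type: str, date: str) -> bytes:
    """Only this hash goes on-chain, so the public sees proof without the raw data."""
    return Web3.keccak(text=f"{vaccine_type}|{date}")

record = anonymized_record_hash("Pfizer-BioNTech", "2021-02-13")
print(record.hex())

# Submitting it would then look roughly like this (the address, ABI, and the
# `logVaccine` function name are hypothetical placeholders):
#
# w3 = Web3(Web3.HTTPProvider("https://ropsten.example-node.io/v3/<project-id>"))
# contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)
# tx = contract.functions.logVaccine(user_key, record).build_transaction({...})
```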
## Inspiration
Every year, thousands of companies are compromised and the authentication information of many users is stolen. The consequence of such breaches is immense and damages the trust between individuals and organizations. There is significant overhead for an organization to secure its authentication methods, and usability is often sacrificed. Users must trust organizations with their info, and organizations must trust that their methods of storage are secure. We believe this presents a significant trust and usability problem. What if we could leverage the blockchain to do this authentication trustlessly between parties? Using challenge and response, we'd be able to avoid passwords completely. Furthermore, this system of permissions could be extended from the digital world to physical assets, i.e. giving somebody the privilege to unlock your door.
## What it does
Entities can assign and manage privileges for resources they possess by publishing on the Ethereum blockchain that a certain user (with an associated public key) has access to a resource (this can be temporary or perpetual). During authentication, entities validate that users hold the private keys to their associated public keys using challenge and response. A user needs only to keep their private key and remember their username.
## How we built it
We designed and deployed a smart contract on the Ropsten Ethereum testnet to trustlessly manage permissions. Users submit transactions to and read from this contract as a final authority for access control. An Android app is used to showcase real-life challenge and response and how it can be used to validate privileges between devices. A web app was also developed to show the ease of setup for an individual user. AWS Lambda is used to query the blockchain through trusted APIs; this may be adjusted by any user to their desired confidence level. A physical lock with an NFC reader was to be used to showcase privilege transfer, but the NFC reader was broken.
## Challenges we ran into
The NFC reader we used was broken, so we were unable to demonstrate one potential application. Since Solidity (the Ethereum EVM language) is relatively new, there was not an abundance of documentation available when we ran into issues sending and validating transactions, although we eventually fixed these issues.
## Accomplishments that we're proud of
Trustless authentication on the blockchain, IoT integration, Ethereum transactions greatly simplified for users (they need not know how it works), and login with just a username.
## What we learned
We learned a lot about the quirks of Ethereum and developing around it. Solidity still has a long way to go regarding developer documentation. The latency of Ethereum transactions, the scalability of Ethereum, and transaction fees on the network are limiting factors for future adoption, though we have demonstrated that such a trustless authentication scheme using the blockchain is indeed secure and easy to use.
## What's next for Keychain
Using a different chain with faster transaction times and lower fees, or even rolling our own chain optimized for Keychain. More digital and IoT demos demonstrating ease of use.
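The challenge-and-response step (proving possession of a private key without ever sending a password) can be sketched with standard signature primitives. This uses Ed25519 from the `cryptography` library purely as an illustration; Keychain's on-chain identities use Ethereum keys, so the actual curve and verification flow may differ.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- User side: the only secret the user ever keeps is this private key. ---
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()   # this is what gets published on-chain

# --- Verifier side: issue a fresh random challenge for this login attempt. ---
challenge = os.urandom(32)

# --- User side: sign the challenge to prove possession of the private key. ---
signature = private_key.sign(challenge)

# --- Verifier side: check the signature against the public key on record. ---
try:
    public_key.verify(signature, challenge)
    print("Access granted: user controls the registered key")
except InvalidSignature:
    print("Access denied")
```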
## Inspiration
Our game stems from the current global pandemic we are grappling with and the importance of getting vaccinated. As many of our loved ones are getting sick, we believe it is important to stress the effectiveness of vaccines and staying protected from Covid in a fun and engaging game.
## What it does
An avatar runs through a school terrain while trying to avoid obstacles and falling Covid viruses. The player wins the game by collecting vaccines and accumulating points, successfully dodging Covid, and delivering the vaccines to the hospital. Try out our game by following the link to GitHub!
## How we built it
After brainstorming our game, we split the game components into four parts, one for each team member to work on. Emily created the educational terrain using various assets, Matt created the character and its movements, Veronica created the falling Covid virus spikes, and Ivy created the vaccines and point counter. After all of the components were made, we brought everything together, added music, and our game was completed.
## Challenges we ran into
As none of our team members had used Unity before, there was a big learning curve and we faced some difficulties while navigating the new platform. As every team member worked on a different scene in our Unity project, we faced some tricky merge conflicts at the end when we were bringing our project together.
## Accomplishments that we're proud of
We're proud of creating a fun and educational game that teaches the importance of getting vaccinated and avoiding Covid.
## What we learned
For this project, it was the first time any of us had used the Unity platform to create a game. We learned a lot about programming in C# and the game development process. Additionally, we learned a lot about Git management through debugging and resolving merge conflicts.
## What's next for CovidRun
We especially want to educate the youth on the importance of vaccination, so we plan on introducing the game into K-12 schools and releasing the game on Steam. We would like to add more levels and potentially have an infinite level that is procedurally generated.
losing
## Inspiration
The current landscape of data aggregation for ML models relies heavily on centralized platforms, such as Roboflow and Kaggle. This causes an overreliance on unvalidated, human-volunteered data. Billions of dollars worth of information goes unused, resulting in unnecessary inefficiencies and challenges in the data engineering process. With this in mind, we wanted to create a solution.
## What it does
**1. Data Contribution and Governance**
DAG operates as a decentralized autonomous organization (DAO) governed by smart contracts and consensus mechanisms within a blockchain network. DAG also supports data annotation and enrichment activities, as users can participate in annotating and adding value to the shared datasets. Annotation involves labeling, tagging, or categorizing data, which is increasingly valuable for machine learning, AI, and research purposes.
**2. Micropayments in Cryptocurrency**
In return for adding datasets to DAG, users receive micropayments in the form of cryptocurrency. These micropayments act as incentives for users to share their data with the community and ensure that contributors are compensated based on factors such as the quality and usefulness of their data.
**3. Data Quality Control**
The community of users actively participates in data validation and quality assessment. This can involve data curation, data cleaning, and verification processes. By identifying and reporting data quality issues or errors, our platform encourages everyone to actively participate in maintaining data integrity.
## How we built it
DAG was built using Next.js, MongoDB, Cohere, Tailwind CSS, Flow, React, Syro, and Soroban.
## Inclusiv.ai: Empowering Accessibility
## Inspiration
In a world where technology is a cornerstone of daily life, it's crucial that digital access is equitable. However, not everyone experiences technology in the same way. We took this challenge to heart and crafted Inclusiv.ai to revolutionize accessibility, ensuring that individuals with disabilities can navigate the web with ease and confidence. Our main priority was reducing the hassle of different extensions and toggles. Inclusiv.ai has one button and one assistant—Inki.
## What it does
Inclusiv provides a simple way for those with disabilities to navigate the web. We focused on a hassle-free, intuitive approach with only one button. Simply begin a conversation with your assistant, "Inki," by clicking on the microphone button and explain whatever issues have been limiting your experience on the web. With modes such as colorblind, screen enhancer, screen explainer and summarizer, and ADHD/Dyslexia, Inki is designed to be a dynamic tool, adapting to a variety of needs:
Colorblind Mode: Tailors the web page colors, ensuring that colorblind users can differentiate between colors that are typically hard to distinguish.
Screen Enhancer: Amplifies and clarifies website content for those with visual impairments, allowing for easier reading and interaction.
Screen Explainer and Summarizer: A mode that not only explains the elements on the screen but also provides concise summaries for quick comprehension, beneficial for users with cognitive disabilities.
ADHD/Dyslexia Mode: Alters the web page layout and typography to minimize distractions and optimize for readability, assisting users with attention deficits or dyslexia.
Behind these user-facing features, Inclusiv utilizes a combination of large language models with a specific focus on the Monster API. This approach has enabled us to create features that are both technically sophisticated and varied, ensuring a broad range of user needs are met.
## How we built it
Inclusiv was meticulously crafted by a synergistic team of two backend developers and one frontend designer, all united by the vision of making web navigation universally accessible. The project was born from a user-centric approach, focusing on the unique challenges faced by individuals with disabilities. Our team's design philosophy hinged on simplicity, leading to the creation of a one-button interface that calls upon Inki, an AI assistant, to activate different accessibility modes, including colorblind, screen enhancer, screen explainer and summarizer, and ADHD/Dyslexia. By leveraging large language models for natural language processing and integrating the Monster API's robust algorithms, Inclusiv transcends conventional assistive technologies. This blend of intuitive design and advanced AI capabilities was continuously refined through iterative user testing and feedback, ensuring that the product genuinely resonates with the needs of its users. Inclusiv's launch is not an end, but the beginning of an ongoing journey of innovation and improvement, with a commitment to evolving and expanding its features to foster an inclusive web experience for all.
## Challenges we ran into
Where to start? Front-end might be harder than the back end! We spent hours fiddling with the optimal UI setup, trying to figure out how best to simplify the experience for a user of any skill level. This is not as simple as it looks, and we spent two hours trying to add a power button.
Additionally, we ended up having to pivot our AI model numerous times and switched our approach throughout the project.
## Accomplishments that we're proud of
We ended up changing our AI agent multiple times throughout the project to deliver the best possible product. We're really satisfied with our ability to adapt and push ourselves out of our comfort zones throughout the 36 hours. We developed numerous technically challenging and varied features that we believe will truly aid those with any form of disability.
## What we learned
We learned tons about building a project from scratch, implementing LLMs, and creating an intuitive and appealing front-end experience.
## What's next for Inclusiv.ai
We want to launch on the Chrome Web Store!
## Inspiration
The inspiration for the project was to design a model that could detect fake loan entries hidden amongst a set of real loan entries. Also, our group was eager to design a dashboard to help visualize these statistics; many similar services are good at identifying outliers in data but are unfriendly to the user. We wanted businesses to see and understand fake data immediately, because it's important to recognize it quickly.
## What it does
Our project handles back-end and front-end tasks. Specifically, on the back-end, the project uses libraries like Pandas in Python to parse input data from CSV files. Then, after creating histograms and linear regression models that detect outliers in the given input, the data is passed to the front-end to display the histogram and present the outliers to the user for an easy experience.
## How we built it
We built this application using Python on the back-end. We utilized Pandas for efficiently storing data in DataFrames. Then, we used NumPy and scikit-learn for statistical analysis. On the server side, we built the website in HTML/CSS and used Flask and Django to handle events on the website and interaction with other parts of the code. This involved retrieving a CSV file from the user, parsing it into a string, running our back-end model, and displaying the results to the user.
## Challenges we ran into
There were many front-end and back-end issues, but they ultimately helped us learn. On the front-end, the biggest problem was using Django with the browser to bring this experience to the user. Also, on the back-end, we found using Keras to be an issue at the start of the process, so we had to switch our frameworks mid-way.
## Accomplishments that we're proud of
An accomplishment was being able to bring both sides of the development process together. Specifically, creating a UI with a back-end was a painful but rewarding experience. Also, implementing cool machine learning models that could actually find fake data was really exciting.
## What we learned
One of our biggest lessons was to use libraries more effectively to tackle the problem at hand. We started creating a machine learning model using Keras in Python, which turned out to be ineffective for what we needed. After much help from the mentors, we played with other libraries that made it easier to implement linear regression, for example.
## What's next for Financial Outlier Detection System (FODS)
Eventually, we aim to use more sophisticated statistical tools to analyze the data. For example, a random forest could be used to identify key characteristics of the data, helping us decide on our linear regression models before building them. Also, one cool idea is to search for linearly dependent columns in the data. They would help find outliers and quickly eliminate trivial or useless variables in new data.
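A minimal sketch of the regression-based outlier idea: fit a linear model on the numeric loan columns and flag rows whose residuals are unusually large. The column names, sample values, and the two-standard-deviation cutoff are assumptions for illustration, not the project's exact model.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical loan data; in the real app this comes from the uploaded CSV.
df = pd.DataFrame({
    "income":      [40000, 44000, 48000, 50000, 52000, 55000, 58000, 61000, 65000, 52000],
    "loan_amount": [10200, 11000, 11900, 12400, 13100, 13900, 14300, 15200, 16400, 500000],
})

# Fit loan_amount as a function of income.
X = df[["income"]]
y = df["loan_amount"]
model = LinearRegression().fit(X, y)

# Rows whose residual is more than two standard deviations out are flagged
# (the cutoff is an assumed choice and would be tuned on real data).
residuals = y - model.predict(X)
df["suspected_fake"] = residuals.abs() > 2 * residuals.std()

print(df)
```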
partial
## Inspiration
Remember the thrill of watching mom haggle like a pro at the market? Those nostalgic days might seem long gone, but here's the twist: we can help you carry on the generational legacy. Introducing our game-changing app – it's not just a translator, it's your haggling sidekick. This app does more than break down language barriers; it helps you secure deals. You'll learn the tricks to avoid the tourist trap and get the local price, every time. We're not just reminiscing about the good old days; we're rebooting them for the modern shopper. Get ready to haggle, bargain, and save like never before!
## What it does
Back to the Market is a mobile app specifically crafted to enhance communication and negotiation for users in foreign markets. The app shines in its ability to analyze quoted prices using local market data, cultural norms, and user-set preferences to suggest effective counteroffers. This empowers users to engage in informed and culturally appropriate negotiations without being overcharged. Additionally, Back to the Market offers a customization feature, allowing users to tailor their spending limits. The user interface is simple and cute, making it accessible to a broad range of users regardless of their technical experience. Its integration of these diverse features positions Back to the Market not just as a tool for financial negotiation, but as a comprehensive companion for a more equitable, enjoyable, and efficient international shopping experience.
## How we built it
Back to the Market was built by separating the front-end from the back-end. The front-end consists of React Native, Expo Go, and JavaScript to develop the mobile app. The back-end consists of Python, which was used to connect the front-end to the back-end. The Cohere API was used to generate the responses and determine appropriate steps to take during the negotiation process.
## Challenges we ran into
During the development of Back to the Market, we faced two primary challenges. First was our lack of experience with React Native, a key technology for our app's development. While our team was composed of great coders, none of us had ever used React prior to the competition. This meant we had to quickly learn and master it from the ground up, a task that was both challenging and educational. Second, we grappled with front-end design. Ensuring the app was not only functional but also visually appealing and user-friendly required us to delve into UI/UX design principles, an area we had little experience with. Luckily, through the help of the organizers, we were able to adapt quickly with few problems. These challenges, while demanding, were crucial in enhancing our skills and shaping the app into the efficient and engaging version it is today.
## Accomplishments that we're proud of
We centered the button on our first try 😎 In our 36-hour journey with Back to the Market, there are several accomplishments that stand out. Firstly, successfully integrating Cohere for both the translation and bargaining aspects of the app was a significant achievement. This integration not only provided robust functionality but also ensured a seamless user experience, which was central to our vision. Secondly, it was amazing to see how quickly we went from zero React Native experience to making an entire app with it in less than 24 hours. We were able to create an app that is both aesthetically pleasing and highly functional.
This rapid skill acquisition and application in a short time frame was a testament to our team's dedication and learning agility. Finally, we take great pride in our presentation and slides. We managed to craft an engaging and dynamic presentation that effectively communicated the essence of Back to the Market. Our ability to convey complex technical details in an accessible and entertaining manner was crucial in capturing the interest and understanding of our audience.
## What we learned
Our journey with this project was immensely educational. We learned the value of adaptability through mastering React Native, a technology new to us all, emphasizing the importance of embracing and quickly learning new tools. Furthermore, by delving into the complexities of cross-cultural communication for our translation and bargaining features, we gained insights into the subtleties of language and cultural nuances in commerce. Our foray into front-end design taught us about the critical role of user experience and interface, highlighting that an app's success lies not just in its functionality but also in its usability and appeal. Finally, creating a product is the easy part; making people want it is where a lot of people fall short. Thus, crafting an engaging presentation refined our storytelling and communication skills.
## What's next for Back to the Market
Looking ahead, Back to the Market is poised for many exciting developments. Our immediate focus is on enhancing the app's functionality and user experience. This includes integrating translation features to allow users to stay within the app throughout their transaction. In parallel, we're exploring the incorporation of AI-driven personalization features. This would allow Back to the Market to learn from individual user preferences and negotiation styles, offering more tailored suggestions and improving the overall user experience. The idea can be expanded by creating a feature for users to rate suggested responses. These ratings would refine the response generation system by integrating the top-rated answers into the Cohere model with a RAG approach, helping the system learn from the most effective responses and improving the quality of future answers. Another key area of development is utilising computer vision so that users can simply take a picture of the item they are interested in purchasing instead of having to input an item name, which is especially handy in areas where you don't know exactly what you're buying (e.g., a cool souvenir). Furthermore, we know that everyone loves a bit of competition, especially in the world of bargaining, where you want the best deal possible. That's why we plan on incorporating a leaderboard for those who save the most money via our negotiation tactics.
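A rough sketch of the counteroffer step described in How we built it, using Cohere's older `generate` endpoint for illustration. The prompt wording, model name, and budget logic are assumptions, not the team's actual back-end.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def suggest_counteroffer(item: str, quoted_price: float, budget: float, locale: str) -> str:
    """Ask the model for a culturally appropriate counteroffer under the user's budget."""
    prompt = (
        f"You are a polite but firm market negotiator in {locale}.\n"
        f"The vendor quoted {quoted_price:.2f} for a {item}. "
        f"The buyer's maximum budget is {budget:.2f}.\n"
        "Suggest a counteroffer price and a one-sentence phrase the buyer could say."
    )
    response = co.generate(model="command", prompt=prompt, max_tokens=80, temperature=0.6)
    return response.generations[0].text.strip()

print(suggest_counteroffer("hand-woven scarf", 45.0, 30.0, "Istanbul"))
```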
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noise and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud of how we navigated around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast-food chains.
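The "figure out the food items and add them to the order" step can be sketched as a small Flask endpoint that matches an already-transcribed utterance against the menu. The menu entries and the naive keyword matcher are simplified stand-ins for the AI extraction the team describes, not the actual Harvard Burger backend.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical menu; the real app's items and prices would differ.
MENU = {
    "harvard burger": 8.99,
    "veggie burger": 7.49,
    "fries": 2.99,
    "milkshake": 4.49,
}

def extract_items(transcript: str) -> list[dict]:
    """Naive keyword matcher standing in for the LLM-based order extraction."""
    text = transcript.lower()
    return [
        {"item": name, "price": price}
        for name, price in MENU.items()
        if name in text
    ]

@app.route("/order", methods=["POST"])
def add_to_order():
    transcript = request.get_json()["transcript"]   # speech already transcribed upstream
    items = extract_items(transcript)
    return jsonify({"items": items, "total": round(sum(i["price"] for i in items), 2)})

if __name__ == "__main__":
    app.run()
```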
## Inspiration
* Just under 50% of insurance clients have a home inventory prepared. Our goal was to simplify and speed up the process to encourage more people to be prepared
* Seeing that insurance policies may vary from company to company, people may not be covered as a result
* Especially with home insurance, this can be a huge issue for those who may have a lot of precious belongings
## What it does
Using React-Native as the front-end, the user inputs household items by taking a picture of them. Then, using Google's reverse image search and Amazon, the app returns a title for the item as well as a price. Users are also able to add items to their list, which is stored in a database.
## How we built it
* Used React-Native on the front end for UI and to send images to the backend
* Used Flask as the backend and CockroachDB as our database
## Challenges we ran into
* Preparing the database schema with CockroachDB
* Hosting a live backend server and connecting it with the database
* Learning new languages and frameworks on the go
## Accomplishments that we're proud of
First time ever:
* Creating a mobile app
* Using React-Native
* Preparing SQL queries
* Using Flask in connection with databases
* Using the fetch() API and React hooks in React-Native
* Creating a functioning web scraper to search for product names and prices
## What we learned
Learned how to use JSX syntax, hooks, and async/await functions more effectively. Refined skills in JavaScript. Learned how to use React Native and Expo.
## What's next for INSURatory
* Improving the functionality and UI/UX
* Implementing machine learning to categorize items
winning
# Course Connection
## Inspiration
College is often heralded as a defining time period to explore interests, define beliefs, and establish lifelong friendships. However, vibrant campus life has recently become endangered, as it is becoming easier than ever for students to become disconnected. The previously guaranteed notion of discovering friends while exploring interests in courses is also becoming a rarity as classes adopt hybrid and online formats. The loss became abundantly clear when two of our members, who became roommates this year, discovered that they had taken the majority of the same courses despite never meeting before this year. We built our project to combat this problem and preserve the zeitgeist of campus life.
## What it does
Our project provides a seamless tool for a student to enter their courses by uploading their transcript. We then automatically convert the transcript into structured data stored in Firebase. With all uploaded transcript data, we create a graph of the people they took classes with, the classes they have taken, and when they took each class. Using a Graph Attention Network and domain-specific heuristics, we calculate the student's similarity to other students. The user is instantly presented with a stunning graph visualization of their previous courses and the course connections to their most similar students. From a commercial perspective, our app provides businesses the ability to use Checkbook to purchase access to course enrollment data.
## High-Level Tech Stack
Our project is built on top of a couple of key technologies, including React (front end), Express.js/Next.js (backend), Firestore (real-time graph cache), Estuary.tech (transcript and graph storage), and Checkbook.io (payment processing).
## How we built it
### Initial Setup
Our first task was to provide a method for students to upload their courses. We elected to utilize the ubiquitous nature of transcripts. Using Python, we parse a transcript and send the data to a Node.js server, which serves as a REST API endpoint for our front end. We chose Vercel to deploy our website. It was necessary to generate a large number of sample users in order to test our project. To generate the users, we needed to scrape the Stanford course library to build a wide variety of classes to assign to our generated users. In order to provide more robust tests, we built our generator to pick a certain major or category of classes, while randomly assigning classes from other categories for a probabilistic percentage of classes. Using this Python library, we are able to generate robust and dense networks to test our graph connection score and visualization.
### Backend Infrastructure
We needed a robust database infrastructure in order to handle the thousands of nodes. We elected to explore two options for storing our graphs and files: Firebase and Estuary. We utilized the Estuary API to store transcripts and the graph "fingerprints" that represented a student's course identity. We wanted to take advantage of web3 storage, as this would allow students to permanently store their course identity to be easily accessed. We also made use of Firebase to store the dynamic nodes and connections between courses and classes. We distributed our workload across several servers. We utilized Nginx to deploy a production-level Python server that would perform the graph operations described below, as well as a development-level Python server.
We also had a Node.js server to serve as a proxy and REST API endpoint, and Vercel hosted our front-end. ### Graph Construction Treating the Firebase database as the source of truth, we query it to get all user data, namely their usernames and which classes they took in which quarters. Taking this data, we constructed a graph in Python using networkX, in which each person and course is a node with a type label “user” or “course”, respectively. In this graph, we then added edges between every person and every course they took, with the edge weight corresponding to the recency of their having taken it. Since we have thousands of nodes, building this graph is an expensive operation. Hence, we leverage Firebase’s key-value storage format to cache this base graph in a JSON representation, for quick and easy I/O. When we add a user, we read in the cached graph, add the user, and update the graph. For all graph operations, the cache reduces latency from ~15 seconds to less than 1 second. We compute similarity scores between all users based on their course history. We do so as the sum of two components: node embeddings and domain-specific heuristics. To get robust, informative, and inductive node embeddings, we periodically train a Graph Attention Network (GAT) using PyG (PyTorch Geometric). This training is unsupervised as the GAT aims to classify positive and negative edges. While we experimented with more classical approaches such as Node2Vec, we ultimately use a GAT as it is inductive, i.e. it can generalize to and embed new nodes without retraining. Additionally, with its attention mechanism, we better account for structural differences in nodes by learning more dynamic importance weighting in neighborhood aggregation. We augment the cosine similarity between two users’ node embeddings with some more interpretable heuristics, namely a recency-weighted sum of classes in common over a recency-weighted sum over the union of classes taken. With this rich graph representation, when a user queries, we return the induced subgraph of the user, their neighbors, and the top k people most similar to them, who they likely have a lot in common with, and whom they may want to meet! ## Challenges we ran into We chose a somewhat complicated stack with multiple servers. We therefore had some challenges with iterating quickly during development as we had to manage all the necessary servers. In terms of graph management, the biggest challenges were in integrating the GAT and in maintaining synchronization between the Firebase graph and the cached graph. ## Accomplishments that we're proud of We’re very proud of the graph component, both in its data structure and in its visual representation. ## What we learned It was very exciting to work with new tools and libraries. It was impressive to work with Estuary and see the surprisingly low latency. None of us had worked with Next.js. We were able to quickly ramp up to using it as we had React experience, and we were very happy with how easily it integrated with Vercel. ## What's next for Course Connections There are several different storyboards we would be interested in implementing for Course Connections. One would be a course recommendation feature. We discovered that ChatGPT gave excellent course recommendations given previous courses. We developed some functionality but ran out of time for a full implementation.
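As a rough illustration of the graph construction and the recency-weighted heuristic described in the Course Connection write-up above, here is a small sketch using networkx. The record fields and the recency weighting function are assumptions made for illustration, and the GAT embedding component is omitted entirely.

```python
# Illustrative sketch of the user-course graph and the heuristic similarity.
# Field names and the recency weighting are assumptions; GAT embeddings omitted.
import networkx as nx

def recency_weight(quarters_ago: int) -> float:
    # Assumed decay: more recently taken courses count more.
    return 1.0 / (1 + quarters_ago)

def build_graph(records):
    # records: [{"user": "alice", "course": "CS106A", "quarters_ago": 2}, ...]
    G = nx.Graph()
    for r in records:
        G.add_node(r["user"], type="user")
        G.add_node(r["course"], type="course")
        G.add_edge(r["user"], r["course"], weight=recency_weight(r["quarters_ago"]))
    return G

def heuristic_similarity(G, u, v):
    # Recency-weighted overlap over the recency-weighted union of courses taken.
    cu = {c: G[u][c]["weight"] for c in G.neighbors(u)}
    cv = {c: G[v][c]["weight"] for c in G.neighbors(v)}
    shared = sum(min(cu[c], cv[c]) for c in cu.keys() & cv.keys())
    union = sum(max(cu.get(c, 0), cv.get(c, 0)) for c in cu.keys() | cv.keys())
    return shared / union if union else 0.0
```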
## Inspiration In large lectures, students often have difficulty making friends and forming study groups due to the social anxieties attached to reaching out for help. Collaboration reinforces and heightens learning, so we sought to encourage students to work together and learn from each other. ## What it does StudyDate is a personalized learning platform that assesses a user's current knowledge on a certain subject and personalizes the lessons to cover their weaknesses. StudyDate also utilizes Facebook's Graph API to connect users with Facebook friends whose knowledge complements each other's, promoting mentorship and enhanced learning. Moreover, StudyDate recommends and connects individuals based on academic interests and past experience. Users can either study courses of interest online, share notes, chat with others online, or opt to meet in person with others nearby. ## How we built it We built our front-end in React.js and used Node.js for RESTful requests to the database. Then, we integrated our web application with Facebook's API for authentication and its Graph API. ## Challenges we ran into We ran into challenges in persisting the state of Facebook authentication, and in utilizing Facebook's Graph API to extract and recommend Facebook friends by matching saved user data to discover friends with complementary knowledge. We also ran into challenges setting up the back-end infrastructure on Google Cloud. ## Accomplishments that we're proud of We are proud of having built a functional, dynamic website that incorporates various aspects of profile and course information. ## What we learned We learned a lot about implementing various functionalities of React.js such as page navigation and chat messages. Completing this project also taught us about certain limitations, especially those dealing with using graphics. We also learned how to implement a login flow with the Facebook API to store/pull user information from a database. ## What's next for StudyDate We'd like to build a graph representation of every user's knowledge base within a certain course subject and use a machine learning algorithm to better personalize lessons, as well as to better recommend Facebook friends or new friends in order to help users find friends/mentors who are experienced in the same course. We also see StudyDate as a mobile application in the future with a dating app-like interface that allows users to select other students they are interested in working with.
## Inspiration Every college student has to take Gen-Ed classes to fulfill graduation requirements. Often, this process is complicated because there are way too many options and very little information on which Gen-Eds are useful and fun. Navigating Reddit and other sites is a waste of time, and there isn’t a single platform for discovering your next favorite class. ## What it does BeMyGenEd is a website that matches you with classes based on your preferences. We take your graduation requirements, current major, and desired GPA, and present you with a list of classes. This list, which is presented as a set of interactive, swipeable cards (think of Quizlet or Tinder), includes a lot of information that can help you decide what to do, like past student GPAs, credit hours satisfied, and student reviews. This is what every student needs for finding the perfect Gen-Ed and having the happiest semester of their lives! ## How we built it Front-end: * Created a single page application using React.js and JavaScript. * Used Material-UI for the design. * Utilized data visualizations with Plotly.js. * Sent student preferences to the back-end. * Presented matched courses in an interactive card format created with custom CSS. Back-end: * Downloaded UIUC's public class GPA dataset. * Manipulated the dataset and uploaded it to our Google Cloud Firestore database. * Made our back-end with Node.js, where it analyzes the user’s input (GPA and course interests) to match the courses from the database with the user’s tastes. * Set up our Node.js app in Google Cloud App Engine, so that our back-end code runs on the server and the front-end code can access the server. ## Challenges we ran into * Setting up the Google Cloud server. * Fetching data from the Firestore database via Node.js (especially dealing with asynchronous functions). * Visualizing course data using React.js libraries. ## Accomplishments that we're proud of * First time using a Google Cloud service. * First time using Node.js and Express. * Hacking virtually from multiple locations on campus. ## What we learned * How to use Google Cloud services (such as how to instantiate a Cloud Firestore database). * How to set up a Google App Engine server. * Learning more about data visualization with JavaScript and React.js. * Learning more aspects of Node.js and the Express library. ## What's next for BeMyGenEd * Include other possible ways to earn course credit (AP/IB Credit, Dual Credit, Proficiency Exams). * Expand towards other universities. * Connect to curriculum graduation requirements. * Add a recommender system on what to take based on your schedule and past courses.
## Inspiration As busy university students with multiple commitments on top of job hunting, we are all too familiar with the tedium and frustration associated with having to compose cover letters for the few job openings that do require them. Given that much of cover letter writing is simply summarizing one's professional qualifications to tie in with company-specific information, we decided to exploit the formulaic nature of such writing and create a web application to generate cover letters with minimal user input. ## What it does hire-me-pls is a web application that obtains details of the user’s qualifications from their LinkedIn profile, performs research on the provided target company, and leverages these pieces of information to generate a customized cover letter. ## How we built it For our front end, we utilized JavaScript with React, leveraging the Tailwind CSS framework for the styling of the site. We designed the web application such that once we have obtained the user’s inputs (LinkedIn profile URL, name of target company), we send these inputs to the backend. In the backend, built in Python with FastAPI, we extract relevant details from the provided LinkedIn profile using the Prospeo API, gather relevant company information by querying the Metaphor API, and finally feed these findings into OpenAI to generate a customized cover letter for our user. ## Challenges we ran into In addition to the general bugs and unexpected delays that come with any project of this scale, our team was challenged with finding a suitable API for extracting relevant data from a given LinkedIn profile. Since most of the tools available on the market are targeted towards recruiters, their functionalities and pricing are often incompatible with our requirements for this web application. After spending a surprising amount of time on research, we settled on Prospeo, which returns data in the convenient JSON format, provides fast, consistent responses, and offers a generous free tier option that we could leverage. Another challenge we encountered was the CORS issues that arose when we first tried making requests to the Prospeo API from the front end. After much trial and error, we finally resolved these issues by moving all of our API calls to the backend of our application. ## Accomplishments that we're proud of A major hurdle that we are proud to have overcome throughout the development process is the fact that half of our team of hackers are beginners (with PennApps being their very first hackathon). Through thoughtful delegation of tasks and the patient mentorship of the more seasoned programmers on the team, we were able to achieve the high productivity necessary for completing this web application within the tight deadline. ## What we learned Through building hire-me-pls, we have gained a greater appreciation for what is achievable when we strategically combine different APIs and AI tools to build off each other. In addition, the beginners on the team gained not only valuable experience contributing to a complex project in a fast-paced environment, but also exposure to useful web development tools that they can use in future personal projects. ## What's next for hire-me-pls While hire-me-pls already achieves much of our original vision, we recognize that there are always ways to make a good thing better.
In refining hire-me-pls, we aim to improve the prompt that we provide to OpenAI to achieve cover letters that are even more concise and specific to the users’ qualifications and their companies of interest. Further down the road, we would like to explore the possibility of tailoring cover letters to specific roles/job postings at a given company, providing a functionality to generate cold outreach emails to recruiters, and finally, ways of estimating how likely anti-AI software would be to flag a hire-me-pls output as AI generated.
## Inspiration Like many students, we face uncertainty and difficulty when searching for positions in the field of computer science. Our aim is to empower individuals by creating a tool that seamlessly tailors resumes to specific job postings, significantly enhancing the chances of securing interviews and positions. ## What it does ResumeTuner revolutionizes the job application process by employing cutting-edge technology. The platform takes a user's resume and a job description and feeds them into a Large Language Model through the Replicate API. The result is a meticulously tailored resume optimized for the specific requirements of the given job, providing users with a distinct advantage in the competitive job market. ## How we built it Frontend: React / Next.js + Tailwind + Material UI Backend: Python / FastAPI + Replicate + TinyMCE ## Challenges Creating a full-stack web application and integrating it with other APIs and components was challenging at times, but we pushed through to complete our project. ## Accomplishments that we're proud of We're proud that we were able to efficiently manage our time and build a tool that we wanted to use.
Applying to multiple roles with different requirements? Not enough space on your resume for all your content? Tailor.cv uses your LinkedIn profile to generate **resumes tailored to specific job descriptions**. # [Join the waitlist here!](https://forms.gle/SahQuvpZJysanK9F9) ## Inspiration Previously, we've all built our share of buzzword soup hackathon ideas without really determining whether there's a market for our product. This time, we decided to solve a problem for the customers we know best, us. Once you have more than a few job experiences, you need to start choosing which ones to put on your resume. This is a hassle especially for hackathon-goers who are often jack-of-all-trades or students who haven't found their specialization yet, because we apply to all kinds of roles with different requirements. If I'm applying for a frontend role, I want to include my frontend experiences; if I'm applying for an ML role, I want to include those skills. Previously, we've had to maintain multiple versions of our resume or customize our resume to each (of the hundreds) of jobs and internships we apply for. ## What it does In Tailor.cv, simply sign in with LinkedIn, input a job posting, and watch it generate a 1-page version of your resume optimized for the specific posting. It only includes the most relevant experiences, skills, and projects based on a classical NLP comparison of your profile and the posting. ## How we built it Frontend: * React * LaTeX2PDF Backend: * Node.js * Express * Cheerio * ScrapedIn * ejs NLP: * Topic Modeling * Keyword Analysis * Cosine Distance ## Challenges we ran into We pivoted multiple times early in this hackathon so we only finalized this idea by Saturday afternoon. We ended up in a time crunch but still completed all the features we set out to develop. ## Accomplishments that we're proud of Building an MVP for a product with real market potential in under 24 hours. ## What we learned LaTeX is really powerful and even classical NLP algorithms can be pretty great at their job. ## What's next for Tailor.cv After cleaning up the code for production, we aim to do some customer testing and deploy by the end of February.
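To make the Tailor.cv idea of "a classical NLP comparison of your profile and the posting" concrete, here is a minimal sketch that ranks experiences against a job posting with TF-IDF and cosine similarity. This is an illustrative stand-in, not the project's exact pipeline (which also includes topic modeling and keyword analysis); the example strings are invented.

```python
# Illustrative ranking of experiences against a job posting with TF-IDF + cosine
# similarity; a stand-in for the project's NLP pipeline, not its actual code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_experiences(experiences, posting, top_k=3):
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(experiences + [posting])
    # Compare every experience vector against the posting vector (last row).
    scores = cosine_similarity(matrix[:-1], matrix[-1]).ravel()
    ranked = sorted(zip(scores, experiences), reverse=True)
    return [exp for _, exp in ranked[:top_k]]

experiences = [
    "Built a React frontend with TypeScript for an e-commerce dashboard",
    "Trained convolutional neural networks for image classification",
    "Maintained a Node.js REST API and PostgreSQL database",
]
posting = "Frontend engineer: React, TypeScript, component design"
print(rank_experiences(experiences, posting, top_k=2))
```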
## Summary OrganSafe is a revolutionary web application that tackles the growing health & security problem of the black-market trading of donated organs. The verification of organ recipients leverages the Ethereum Blockchain to provide critical security and prevent improper allocation of such a pivotal resource. ## Inspiration The [World Health Organization (WHO)](https://slate.com/business/2010/12/can-economists-make-the-system-for-organ-transplants-more-humane-and-efficient.html) estimates that one in every five kidneys transplanted per year comes from the black market. There is a significant demand for solving this problem, which impacts thousands of people every year who are struggling to find a donor for a much-needed transplant. Modern [research](https://ieeexplore.ieee.org/document/8974526) has shown that blockchain validation of organ donation transactions can help reduce this problem and authenticate transactions to ensure that donated organs go to the right place! ## What it does OrganSafe facilitates organ donations with authentication via the Ethereum Blockchain. Users can start by registering on OrganSafe with their health information and desired donation, and then the application's algorithms will automatically match users based on qualifying priority for available donations. Hospitals can easily track organ donations and record when recipients receive their donation. ## How we built it This application was built using React.js for the frontend of the platform, Python Flask for the backend and API endpoints, and Solidity+Web3.js for the Ethereum Blockchain. ## Challenges we ran into Some of the biggest challenges we ran into were connecting the different components of our project. We had three major components (frontend, backend, and the blockchain) that were developed separately and needed to be integrated together. This turned out to be the biggest hurdle in our project that we needed to figure out. Dealing with the API endpoints and Solidity integration was one of the problems we had to leave for future development. One challenge we did solve was the difficulty of backend development and setting up API endpoints. Without persistent data storage in the backend, we attempted to implement basic storage using localStorage in the browser to facilitate a user experience. This allowed us to implement a majority of our features as a temporary fix for our demonstration. Some other challenges we faced included figuring out certain syntactical elements of the new technologies we dealt with (such as using Hooks and state in React.js). It was a great learning opportunity for our group, as immersing ourselves in the project allowed us to become more familiar with each technology! ## Accomplishments that we're proud of One notable accomplishment is that every member of our group interfaced with new technology that we had little to no experience with! Whether it was learning how to use React.js (such as learning about React fragments) or working with Web3.0 technology such as the Ethereum Blockchain (using MetaMask and Solidity), each member worked on something completely new! Although there were many components we simply did not have the time to complete due to the scope of TreeHacks, we were still proud of being able to put together a minimum viable product in the end!
## What we learned * Fullstack Web Development (with React.js frontend development and Python Flask backend development) * Web3.0 & Security (with Solidity & the Ethereum Blockchain) ## What's next for OrganSafe After TreeHacks, OrganSafe will first look to tackle some of the potential areas that we did not get to finish during the Hackathon. Our first step would be to finish development of the full-stack web application that we intended by fleshing out our backend and moving forward from there. Persistent user data in a database would also allow users and donors to continue to use the site even after an individual session. Furthermore, scaling both the site and the blockchain for the application would allow greater usage by a larger audience, allowing more recipients to be matched with donors.
## Off The Grid Super awesome offline, peer-to-peer, real-time canvas collaboration iOS app # Inspiration Most people around the world will experience limited or no Internet access at times during their daily lives. We could be underground (on the subway), flying on an airplane, or simply living in areas where Internet access is scarce and expensive. However, so much of our work and regular lives depends on being connected to the Internet. I believe that working with others should not be affected by Internet access, especially knowing that most of our smart devices are peer-to-peer Wi-Fi and Bluetooth capable. This inspired me to come up with Off The Grid, which allows up to 7 people to collaborate on a single canvas in real-time to share work and discuss ideas without needing to connect to the Internet. I believe that it would inspire more future innovations to help the vast offline population and make their lives better. # Technology Used Off The Grid is a Swift-based iOS application that uses Apple's Multipeer Connectivity framework to allow nearby iOS devices to communicate securely with each other without requiring Internet access. # Challenges Integrating the Multipeer Connectivity framework into our application was definitely challenging, along with managing the memory of the bitmaps and designing an easy-to-use and beautiful canvas. # Team Members Thanks to Sharon Lee, ShuangShuang Zhao and David Rusu for helping out with the project!
## Inspiration While the use cases for web3 expand every day, from healthcare to polling systems, we wanted to explore the implementation of web3 in the entertainment sector. As the world of cryptocurrency expands, people would want to play games using crypto and win crypto. ## What it does The user gets to draw a picture and set the answer for the picture. The other players can then try to guess the answer. If they get it right, they are rewarded with crypto. In order to guess, the player needs to put in some crypto. As a result, the prize pool for that particular picture increases. The artist will get a portion of the prize pool as an incentive for drawing. ## How we built it To start off, we used Solana's Twitter example and other social-media-on-the-blockchain implementations we found online. Through that, we were able to set up a wallet on our local machines that could be used to test functions. Our next issue was uploading an image to the blockchain so that the data itself was decentralized. We used IPFS for this task but ran into issues while connecting the uploading API to the function for creating a post. For our front end, we had to flip-flop between React and Vue: Vue was already connected to our backend and could be used to fetch data; however, our team felt more comfortable using React for front-end development. ## Challenges we ran into We ran into some challenges in building the blockchain and saving the drawn image. Moreover, the time crunch was also a big challenge for us. While we were able to learn many individual technologies, like creating a wallet on our local machine, uploading images with IPFS, and sending posts through the blockchain, combining all those elements together with our front end is what posed an issue within the constrained timing. Another problem was picking technologies. For our front end, React was a framework most of us were accustomed to; however, Vue was better integrated with our backend calls and with capturing the drawing from our user. ## Accomplishments that we're proud of We are proud that we were able to learn and overcome so many challenges in a short period of time. Despite it having been only 24 hours, it feels like we have gained decent experience in Web3 and Solana specifically. ## What we learned None of us had ever worked on web3 before. This was our first time developing a decentralized application (dapp). We also learned about the various use cases of Web3 and its advantages. Furthermore, we explored building smart contracts. ## What's next for Cryptionary In the future, we hope that Cryptionary will become an end-to-end game that anyone on the blockchain can enjoy in a safe way.
## Inspiration Brought to you by kids who were literally sick days before the hackathon (and Chatime). We all know tea is a beverage that makes you feel warm and fuzzy when you drink it and can leave you feeling positive! Knowing that, we wanted to be positive and help others feel the same and spread some positiviTea. ## What it does Asks for some information from the user, mainly their name and their mood, such as how happy they are, how sad they are, etc. The idea is to use the information we gather to recommend a tea to drink based on your emotional state of mind, along with an inspirational quote as the cherry on top. :)
# We'd love it if you read through this in its entirety, but we suggest reading "What it does" if you're limited on time ## The Boring Stuff (Intro) * Christina Zhao - 1st-time hacker - aka "Is cucumber a fruit" * Peng Lu - 2nd-time hacker - aka "Why is this not working!!" x 30 * Matthew Yang - ML specialist - aka "What is an API" ## What it does It's a cross-platform app that can promote mental health and healthier eating habits! * Log when you eat healthy food. * Feed your "munch buddies" and level them up! * Learn about the different types of nutrients, what they do, and which foods contain them. Since we are not very experienced at full-stack development, we just wanted to have fun and learn some new things. However, we feel that our project idea really ended up being a perfect fit for a few challenges, including the Otsuka Valuenex challenge! Specifically, > Many of us underestimate how important eating and mental health are to our overall wellness. That's why we made this app! After doing some research on the compounding relationship between eating, mental health, and wellness, we were quite shocked by the overwhelming amount of evidence and studies detailing the negative consequences. > We will be judging for the best **mental wellness solution** that incorporates **food in a digital manner.** Projects will be judged on their ability to make **proactive stress management solutions to users.** Our app has a two-pronged approach—it addresses mental wellness through both healthy eating, and through having fun and stress relief! Additionally, not only is eating healthy a great method of proactive stress management, but another key aspect of being proactive is making your de-stressing activities part of your daily routine. I think this app would really do a great job of that! Additionally, we also focused really hard on accessibility and ease-of-use. Whether you're on Android, iPhone, or a computer, it only takes a few seconds to track your healthy eating and play with some cute animals ;) ## How we built it The front-end is React Native, and the back-end is FastAPI (Python). Aside from our individual talents, I think we did a really great job of working together. We employed pair-programming strategies to great success, since each of us has our own individual strengths and weaknesses. ## Challenges we ran into Most of us have minimal experience with full-stack development. If you look at my LinkedIn (this is Matt), all of my CS knowledge is concentrated in machine learning! There were so many random errors with just setting up the back-end server and learning how to make API endpoints, as well as writing boilerplate JS from scratch. But that's what made this project so fun. We all tried to learn something we're not that great at, and luckily we were able to get past the initial bumps. ## Accomplishments that we're proud of As I'm typing this in the final hour, in retrospect, it really is an awesome experience getting to pull an all-nighter hacking. It makes us wish that we had attended more hackathons during college. Above all, it was awesome that we got to create something meaningful (at least, to us). ## What we learned We all learned a lot about full-stack development (React Native + FastAPI). Getting to finish the project for once has also taught us that we shouldn't give up so easily at hackathons :) I also learned that the power of midnight DoorDash credits is akin to magic. ## What's next for Munch Buddies!
We have so many cool ideas that we just didn't have the technical chops to implement in time * customizing your munch buddies! * advanced data analysis on your food history (data science is my specialty) * exporting your munch buddies and stats! However, I'd also like to emphasize that any further work on the app should be done WITHOUT losing sight of the original goal. Munch buddies is supposed to be a fun way to promote healthy eating and wellbeing. Some other apps have gone down the path of too much gamification / social features, which can lead to negativity and toxic competitiveness. ## Final Remark One of our favorite parts about making this project, is that we all feel that it is something that we would (and will) actually use in our day-to-day!
## Inspiration We were inspired by Katie's 3-month hospital stay as a child when she had a difficult-to-diagnose condition. During that time, she remembers being bored and scared -- there was nothing fun to do and no one to talk to. We also looked into the larger problem and realized that 10-15% of kids in hospitals develop PTSD from their experience (not their injury) and 20-25% in ICUs develop PTSD. ## What it does The AR iOS app we created presents educational, gamification aspects to make the hospital experience more bearable for elementary-aged children. These features include: * An **augmented reality game system** with **educational medical questions** that pop up based on image recognition of given hospital objects. For example, if the child points the phone at an MRI machine, a basic quiz question about MRIs will pop-up. * If the child chooses the correct answer in these quizzes, they see a sparkly animation indicating that they earned **gems**. These gems go towards their total gem count. * Each time they earn enough gems, kids **level-up**. On their profile, they can see a progress bar of how many total levels they've conquered. * Upon leveling up, children are presented with an **emotional check-in**. We do sentiment analysis on their response and **parents receive a text message** of their child's input and an analysis of the strongest emotion portrayed in the text. * Kids can also view a **leaderboard of gem rankings** within their hospital. This social aspect helps connect kids in the hospital in a fun way as they compete to see who can earn the most gems. ## How we built it We used **Xcode** to make the UI-heavy screens of the app. We used **Unity** with **Augmented Reality** for the gamification and learning aspect. The **iOS app (with Unity embedded)** calls a **Firebase Realtime Database** to get the user’s progress and score as well as push new data. We also use **IBM Watson** to analyze the child input for sentiment and the **Twilio API** to send updates to the parents. The backend, which communicates with the **Swift** and **C# code** is written in **Python** using the **Flask** microframework. We deployed this Flask app using **Heroku**. ## Accomplishments that we're proud of We are proud of getting all the components to work together in our app, given our use of multiple APIs and development platforms. In particular, we are proud of getting our flask backend to work with each component. ## What's next for HealthHunt AR In the future, we would like to add more game features like more questions, detecting real-life objects in addition to images, and adding safety measures like GPS tracking. We would also like to add an ALERT button for if the child needs assistance. Other cool extensions include a chatbot for learning medical facts, a QR code scavenger hunt, and better social interactions. To enhance the quality of our quizzes, we would interview doctors and teachers to create the best educational content.
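To sketch HealthHunt AR's emotional check-in flow, here is a hedged example of a Flask route that takes the child's text, runs an emotion analysis (shown as a placeholder function standing in for the IBM Watson call), and texts the parent with Twilio. The credentials, phone numbers, and the helper function are illustrative assumptions.

```python
# Hedged sketch of the emotional check-in flow: the emotion-analysis helper is a
# placeholder standing in for the IBM Watson call; credentials are assumptions.
from flask import Flask, jsonify, request
from twilio.rest import Client

app = Flask(__name__)
twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder Twilio credentials

def strongest_emotion(text: str) -> str:
    # Placeholder for the sentiment/emotion analysis used by the project.
    return "joy"

@app.route("/checkin", methods=["POST"])
def checkin():
    data = request.get_json()  # e.g. {"child": "Katie", "text": "...", "parent_phone": "+1..."}
    emotion = strongest_emotion(data["text"])
    twilio.messages.create(
        body=f'{data["child"]} checked in: "{data["text"]}" (strongest emotion: {emotion})',
        from_="+15550001234",          # assumed Twilio number
        to=data["parent_phone"],
    )
    return jsonify({"emotion": emotion})
```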
## Inspiration In the work-from-home era, many are missing the social aspect of in-person work. And what time of the workday most provided that social interaction? The lunch break. culina aims to bring back the social aspect of work-from-home lunches. Furthermore, it helps users reduce their food waste by encouraging the use of food that could otherwise be discarded, and diversifies their palate by exposing them to international cuisine (that uses food they already have on hand)! ## What it does First, users input the groceries they have on hand. When another user is found with a similar pantry, the two are matched up and shown a list of healthy, quick recipes that make use of their mutual ingredients. Then, they can use our built-in chat feature to choose a recipe and coordinate the means by which they want to remotely enjoy their meal together. ## How we built it The frontend was built using React.js, with all CSS styling, icons, and animation made entirely by us. The backend is a Flask server. Both a RESTful API (for user creation) and WebSockets (for matching and chatting) are used to communicate between the client and server. Users are stored in MongoDB. The full app is hosted on a Google App Engine flex instance and our database is hosted on MongoDB Atlas, also through Google Cloud. We created our own recipe dataset by filtering and cleaning an existing one using Pandas, as well as scraping the image URLs that correspond to each recipe. ## Challenges we ran into We found it challenging to implement the matching system, especially coordinating client state using WebSockets. It was also difficult to scrape a set of images for the dataset. Some of our team members also had to overcome technical roadblocks on their machines, so they had to think outside the box for solutions. ## Accomplishments that we're proud of We are proud to have a working demo of such a complex application with many moving parts – and one that has impacts across many areas. We are also particularly proud of the design and branding of our project (the landing page is gorgeous 😍 props to David!) Furthermore, we are proud of the novel dataset that we created for our application. ## What we learned Each member of the team was exposed to new things throughout the development of culina. Yu Lu was very unfamiliar with anything web-dev related, so this hack allowed her to learn some frontend basics, as well as explore image crawling techniques. For Camilla and David, React was a new skill to learn, and this hackathon improved their CSS styling techniques. David also learned more about how to make beautiful animations. Josh had never implemented a chat feature before, and gained experience teaching web development and managing full-stack application development with multiple collaborators. ## What's next for culina Future plans for the website include adding a video chat component so users don't need to leave our platform. To revolutionize the dating world, we would also like to allow users to decide if they are interested in using culina as a virtual dating app to find love while cooking. We would also be interested in implementing organization-level management to make it easier for companies to provide this as a service to their employees only. Lastly, the ability to decline a match would be a nice quality-of-life addition.
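As a rough illustration of the kind of pandas filtering behind culina's recipe dataset and pantry matching, here is a small sketch that keeps only quick recipes two matched users could both cook. The column names, the separator, the time threshold, and the CSV path are assumptions, not the project's actual schema.

```python
# Illustrative pandas filtering of a recipe dataset; column names, the time
# threshold, and the CSV path are assumptions rather than culina's actual schema.
import pandas as pd

recipes = pd.read_csv("recipes.csv")  # assumed columns: name, ingredients, minutes

def quick_recipes_for_pair(pantry_a, pantry_b, max_minutes=30):
    shared = set(pantry_a) & set(pantry_b)
    usable = recipes[
        (recipes["minutes"] <= max_minutes)
        & recipes["ingredients"].apply(
            lambda ing: set(ing.split(";")).issubset(shared)
        )
    ]
    return usable.sort_values("minutes")

matches = quick_recipes_for_pair(
    ["egg", "rice", "scallion", "soy sauce"],
    ["rice", "egg", "soy sauce", "tofu"],
)
print(matches[["name", "minutes"]].head())
```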
## Inspiration Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order. ## What it does You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision. ## How we built it The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu was also done in Canva with a dash of Harvard colors. ## Challenges we ran into One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle. ## Accomplishments that we're proud of We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with. ## What we learned We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment. ## What's next for Harvard Burger Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
## Inspiration Old-school text adventure games. ## What it does You play as Jack and get to make choices to advance the adventure. There are several possible paths to the story and you will come across obstacles and games along the way. ## How I built it Using Python. ## Challenges I ran into Trying to use Tkinter to create a user interface. We ended up just doing in-text graphics. ## Accomplishments that I'm proud of This is two of the team members' first Python project. We are proud that we made working code. ## What I learned Catherine: learned how to code in Python (mainly the syntax) Jennifer: how to organize code so that it produces a functioning game Aria: finally learned how to use GitHub! ## What's next for The Text Adventure of Jack Adding user interface and sound to make the game more visually and aurally immersive.
## Inspiration Memory athletes retain large amounts of information using mnemonics, story-telling, and visualizations. Picture Pathway aims to emulate this studying methodology and bring it into the classroom! ## What it does Picture Pathway is a student-teacher platform. Teachers submit the problem they would like their class to visualize and/or convert into a story. From there, the student describes a scene to DALL-E and then receives a generated image to add to their story on solving their assigned problems. In our example, a teacher is looking to solidify the process of integration for her students; thus, they have assigned a series of steps to 'storify'. The text contained in yellow represents what a student user's responses might look like (and our last slide demonstrates what the corresponding image output may be). ## How we built it * Front-End: Repl.it - HTML, JavaScript * Back-End: Python (Django), SQLite ## Challenges we ran into * Most of our members are just beginning their coding journey, so there was certainly a learning curve! * The integration of the DALL-E API was especially uncharted territory for our team and required much research to implement * Debugging (πーπ) ## Accomplishments that we're proud of Our team is most proud of our ability to riff off each other--- most of us met for the first time just Friday, yet we trusted one another to perform our assigned roles and successfully worked our way from 0 to a working prototype ## What we learned * 3/4 members learned Django + SQL for the first time! * APIs can interact on the backend (which is what enabled us to pull images from DALL-E to embed into our project!) ## What's next for Picture Pathway * All our members are passionate about accessibility in STEM education
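For reference, a minimal sketch of requesting a generated image for a student's scene description, written against the pre-1.0 `openai` Python client that was current around the time of this project; the image size, prompt-assembly step, and API key handling are assumptions, and this is not Picture Pathway's actual code.

```python
# Minimal sketch of the image-generation step, assuming the pre-1.0 openai
# Python client; prompt assembly and key handling are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def generate_scene(student_description: str) -> str:
    response = openai.Image.create(
        prompt=student_description,
        n=1,
        size="512x512",
    )
    return response["data"][0]["url"]

url = generate_scene("A river of numbers flowing under a bridge made of integral signs")
print(url)
```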
## Inspiration When visiting a clinic, two big complaints that we have are the long wait times and the necessity to use a kiosk that thousands of other people have already touched. We also know that certain methods of filling in information are not accessible to everyone (for example, someone with Parkinson's disease writing with a pen). In response to these problems, we created Touchless. ## What it does * Touchless is an accessible and contact-free solution for gathering form information. * Allows users to interact with forms using voice and touchless gestures. * Users use different gestures to answer different questions. * Ex. Raise 1-5 fingers for 1-5 inputs, or thumbs up and down for yes and no. * Additionally, users are able to use voice for two-way interaction with the form. Either way, surface contact is eliminated. * Applicable to doctors' offices and clinics where germs are easily transferable and dangerous when people touch the same electronic devices. ## How we built it * Gesture and voice components are written in Python. * The gesture component uses OpenCV and MediaPipe to map out hand joint positions, from which calculations can be done to determine hand symbols. * SpeechRecognition recognizes user speech. * The form outputs audio back to the user by using pyttsx3 for text-to-speech, and beepy for alert noises. * We use AWS API Gateway to open a connection to a custom Lambda function, which has been assigned roles using AWS IAM to restrict access. The Lambda generates a secure key, which it sends, along with the data from our form that has been routed using Flask, to our NoSQL DynamoDB database. ## Challenges we ran into * Tried to set up a Cerner API for FHIR data, but had difficulty setting it up. * As a result, we had to pivot towards using a NoSQL database in AWS as our secure backend database for storing our patient data. ## Accomplishments we're proud of This was our whole team's first time using gesture recognition and voice recognition, so it was an amazing learning experience for us. We're proud that we managed to implement these features within our project at a level we consider effective. ## What we learned We learned that FHIR is complicated. We ended up building a custom data workflow that was based on FHIR models we found online, but due to time constraints we did not implement certain headers and keys that make up industrial FHIR data objects. ## What's next for Touchless In the future, we would like to integrate the voice and gesture components more seamlessly into one rather than two separate components.
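Below is a rough sketch of counting raised fingers from MediaPipe hand landmarks, the approach Touchless describes. The landmark indices are MediaPipe's standard ones, but the thresholding logic is a simplified assumption (it ignores the thumb and assumes an upright hand), so it is an illustration rather than the project's code.

```python
# Simplified finger-counting sketch with MediaPipe; ignores the thumb and
# assumes an upright hand, so it is illustrative rather than production logic.
import cv2
import mediapipe as mp

FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tip landmarks
FINGER_PIPS = [6, 10, 14, 18]   # corresponding PIP joints

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        # A finger counts as raised if its tip sits above its PIP joint
        # (smaller y means higher up in image coordinates).
        count = sum(lm[tip].y < lm[pip].y for tip, pip in zip(FINGER_TIPS, FINGER_PIPS))
        print("Raised fingers:", count)

cap.release()
```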
## Inspiration In high school, a teacher of ours used to sit in the middle of the discussion and draw lines from one person to another on paper to identify the trends of the discussion. It's a very meaningful activity, as we could see how balanced the discussion was, allowing us to highlight people who had less chance to express their ideas. It could also be used by teachers in early education to identify social challenges such as anxiety or speech disorders in children. ## What it does The app is initially trained on a short audio clip from each member of the discussion. Using transfer learning, it is able to recognize the person talking. And, during the discussion, colorful and aesthetic lines are drawn in REAL-TIME from one person to another! ## How we built it On the front-end, we used React and JavaScript to create a responsive and aesthetic website, with vanilla CSS (and a little bit of math, namely Bézier curves) to create beautiful animated lines connecting different profiles. On the back-end, Python and TensorFlow were used to train the AI model. First, the audio is pre-processed into smaller 1-second chunks, which are then turned into spectrogram pictures. With these, we performed transfer learning with VGG16 to extract features from the spectrograms. Then, the features are used to fit an SVM model using scikit-learn. Subsequently, the back-end opens a WebSocket with the front-end to receive a stream of data and return the label of the person talking. This is also done with multi-threading to ensure all the data is processed quickly. ## Challenges we ran into As it was our first time with deep learning, or training an AI for that matter, it was very difficult to get started. Despite the copious amount of resources and existing projects, it was hard to identify a suitable source. We also needed to learn different processing techniques before the model could be trained. In addition, finding a platform (such as Google Colab) was necessary to train the model in a reasonable amount of time. Finally, it was fairly hard to incorporate the model with the rest of the project. It needs to be able to process the data in real-time while keeping the latency low. Another major challenge that we ran into was connecting the back-end with the front-end. As we wanted it to be real-time, we had to be able to stream the raw data to the back-end. But there were problems reconstructing the binary files into an appropriate format, because we were unsure what format RecordRTC uses to record audio. There was also the problem of how much data, and how frequently, it should be sent over, given our high prediction latency (~500 ms). It's a problem that we couldn't figure out in time. ## Accomplishments that we're proud of The process of training the model was really cool!!! We would never have thought of training a voice recognition model similarly to how you would train an image/face recognition model. It was a very out-of-the-box method that we stumbled upon online. It really motivated us to get out there and see what else is possible. We were also fairly surprised to get proof-of-concept real-time processing with local audio input from the microphone. We had to utilize threading to avoid overflowing the audio input buffer. And if you get to use threading, you know it's a cool project :D. ## What we learned Looking back, the project was quite ambitious. BUT!! That's how we learned. We learned so much about training machine learning models as well as different connection protocols over the internet.
Threading was also something we had long wanted to play with, so it was really fun experimenting with the concept in Python. ## What's next for Hello world The app would be much better on mobile. So, there are plans to port the entire project to mobile (maybe learning React Native?). We're also planning to retrain the voice recognition model with different methods and improve the accuracy as well as the confidence level. Lastly, we're planning on deploying the app and sending it back to our high school teacher, who was the inspiration for this project, as well as to teachers around the world for their classrooms. ## Sources These two sources helped us tremendously in building the model: <https://medium.com/@omkarade9578/speaker-recognition-using-transfer-learning-82e4f248ef09> <https://towardsdatascience.com/automatic-speaker-recognition-using-transfer-learning-6fab63e34e74>
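To sketch the training pipeline described above, audio chunks become spectrogram images, VGG16 (without its top layers) extracts features, and an SVM is fit on those features. Library choices beyond those named in the write-up (librosa for spectrograms, OpenCV for resizing), the channel duplication, and all shapes and file paths are assumptions made for illustration.

```python
# Sketch of the speaker-recognition training pipeline: spectrograms -> VGG16
# features -> SVM. librosa usage, shapes, and the file layout are assumptions.
import numpy as np
import librosa
import cv2
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.svm import SVC

extractor = VGG16(weights="imagenet", include_top=False, pooling="avg",
                  input_shape=(224, 224, 3))

def spectrogram_features(path):
    y, sr = librosa.load(path, sr=16000, duration=1.0)       # 1-second chunk
    spec = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))
    img = cv2.resize(spec, (224, 224))
    img = np.stack([img, img, img], axis=-1)                  # fake 3 channels
    batch = preprocess_input(np.expand_dims(img, axis=0))     # scaling glossed over
    return extractor.predict(batch)[0]                        # 512-d feature vector

def train_speaker_svm(chunks):
    # chunks: list of (wav_path, speaker_label) pairs prepared beforehand
    X = np.array([spectrogram_features(p) for p, _ in chunks])
    y = [label for _, label in chunks]
    clf = SVC(kernel="linear", probability=True)
    clf.fit(X, y)
    return clf
```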
## 💡 Inspiration * The healthcare industry has an extreme lack of assistive devices for people who are blind or visually impaired. * Only 2% to 8% of people who are visually impaired use white canes. From our research we learned that many people prefer not to use a cane because of the stigma associated with carrying one or because they do not find a personal need for that specific device. We wanted to create a sleek device that can be used with or without a cane by 100% of the low vision community. * Many assistive technological devices are expensive, costing thousands of dollars, so we attempted to create an affordable device, accessible to everyone. ## ⚙️ What it does The Magic Glove is a multi-feature, affordable, and innovative assistive device for people who are blind or visually impaired. The sleek design is compact, with three useful features: 1) Spatial Orientation: Notifies the user of nearby obstructions to avoid accidents or tripping. 2) Colour Recognition: Helps users with daily life tasks including grocery shopping, clothing organizing, and more. 3) Light Detection: Notifies people who are totally blind (18% of the blind community) and cannot perceive light whether the lights in a room are on or off. ## 🔨 How we built it * The Magic Glove uses a Raspberry Pi 3 microprocessor. * To create our colour detection feature, we used OpenCV to develop an algorithm for recognizing colours, and we used a Raspberry Pi Camera to get image input. * To develop the spatial recognition feature, we used an ultrasonic range sensor to detect nearby obstructions by sending pulses, and a buzzer to notify the user. * The light detection feature uses photoresistors to detect if lights are off or on in a room. * Additionally, Google Cloud Platform Text-to-Speech is used to communicate with the users. ## ⛰️ Challenges we ran into The primary challenge we ran into while creating The Magic Glove was interfacing with the Raspberry Pi 3 microprocessor. Connecting a speaker to the Raspberry Pi using its command line interface consumed the last 5-8 hours of our hacking, as we learned the Raspberry Pi 3 does not work very well with Bluetooth. This was all our team members' first time working with the Raspberry Pi board, which caused quite a learning curve. ## 🏆 Accomplishments that we're proud of We are proud that we created an affordable and innovative device for the visually impaired community. ## 💭 What we learned Through research, we gained a lot of knowledge about the low-vision community. It was an incredible learning experience to uncover numerous false assumptions we had all made about people who are blind or visually impaired. ## 👟 What's next for The Magic Glove We want to take our device from a prototype to a final model. In the model that will hit the market, we will include braille on all our push buttons as well as vibration features for Spatial Orientation. We also aspire to include a feature for Optical Character Recognition.
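A rough sketch of the spatial-orientation feature on a Raspberry Pi: timing an HC-SR04-style ultrasonic pulse and buzzing when an obstacle is close. The GPIO pin numbers, the exact sensor model, and the 50 cm threshold are assumptions (the write-up does not state the wiring), so this is illustrative only.

```python
# Sketch of the spatial-orientation loop on a Raspberry Pi; GPIO pin numbers,
# the sensor model, and the 50 cm threshold are illustrative assumptions.
import time
import RPi.GPIO as GPIO

TRIG, ECHO, BUZZER = 23, 24, 18   # assumed BCM pin assignments

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup(BUZZER, GPIO.OUT)

def distance_cm():
    GPIO.output(TRIG, True)            # 10 microsecond trigger pulse
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()
    return (end - start) * 34300 / 2   # speed of sound, round trip halved

try:
    while True:
        GPIO.output(BUZZER, distance_cm() < 50)   # buzz when an obstacle is near
        time.sleep(0.2)
finally:
    GPIO.cleanup()
```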
## Inspiration While shopping for school supplies, one of our team members observed that only one book/textbook brand had a version in braille, while 99% of other books were in English text only. We realized that the amount of information available for blind people to learn from was significantly restricted. That was when we decided we needed to change the way visually impaired people take in English text. ## What is it? Braille Vision makes the lives of blind people easier by converting lines of text to braille. First, the blind person uses a webcam to scan a line of text, and the information is sent to a Raspberry Pi 3. After the Raspberry Pi 3 creates a braille pattern, the pattern is sent to a group of 6 servos to form the pattern matching the scanned word/letter for the blind person to touch/understand. ## How did we do it? We built Braille Vision by first 3D modeling the outer casing for the cell, the flaps that connect the braille head to the servo, and the braille head itself. After a few iterations of each part, we used a 3D printer to print all the parts using PLA plastic. Within the casing, we have 6 servos connected to a Raspberry Pi 3, alongside a Logitech webcam to scan the lines of text. Each servo is fitted with a 3D printed flap that is connected to a cut paperclip. At the end of the paperclip, we have a 3D printed cap that is used for the texture of the braille. The top and body of the case have cable management holes so that the wires from the servos can connect to the Raspberry Pi with ease. ## What are we proud of? We are all proud of creating a simple, compact and cheap braille cell using very basic parts. Traditionally, refreshable braille displays are bulky, exorbitantly expensive, and use complex electrical ideas such as the piezoelectric expansion of crystals. We simply used a cell of six small servos, paperclips, PLA plastic, a Logitech G615 webcam and a Raspberry Pi 3. ## What challenges did we face? Of course, we had challenges in the design process. Our biggest challenge was that we were using an Arduino connected to a Raspberry Pi 3 for running the whole project. We used the Arduino to separate the motors from the computer itself, but it could not communicate with the Pi 3 at all, so we were forced to remove it from the project. Our second biggest challenge was that we were unable to get the right three servos to interact with the Raspberry Pi 3, because the single Raspberry Pi did not have enough ports for all 6 servos. Third, we designed our 3D printed parts on an online modeler, which was tedious because of Wi-Fi connection and consistency issues. Also, this was our first time 3D printing parts, so we had some misprints and had to redesign the flaps connected to the servos and the braille heads four to five times each, resulting in time being wasted. Finally, during the construction of the unit, we had to bend our own picks and tools using paper clips because of limited access to the internals. We feared that the servos would be too weak, so we created a display to automatically print out the braille patterns as they're supposed to appear. ## What did we learn? Along the design process, we learned some important things. One of them is that 3D printers require parts to be sufficiently thick; otherwise they either won't print fully or will break after printing. Also, we learned that planning ahead is not always able to account for every single outcome, and sometimes improvisation is necessary to complete our goal.
## What will be improved in the future? One thing we will improve for Braille Vision is getting the right three servo motors to function so we can create a full 6-dot braille pattern for blind people to interpret. Once we complete the second half of the servos, we will shrink the whole braille system to fit on the fingertip of a visually impaired person. This would make it easier for the user to read braille, as the reader is designed to be carried around at all times. One other thing we are hoping to add is a potentiometer to control the speed of the text so that the braille interpreter is reading the text at the right speed. To aid the potentiometer, we would also add tactile aid to notify the reader if they are going above or below the line of text, as it would increase the success rate of the scanning. The ultimate goal for this product is for it to be housed in a single unit with a camera, LED, and maximum portability, predicted to be about the size of your average thumb. The whole unit would slide across the page, closely simulating reading a real braille book.
## Inspiration Small-scale braille printers cost between $1800 and $5000. We think that this is too much money to spend for simple communication, and it has acted as a barrier for many blind people for a long time. We plan to change this by offering a quick, affordable, precise solution to this problem. ## What it does This machine will allow you to type a string (word) on a keyboard. The Raspberry Pi then identifies what was entered and controls the solenoids and servo to pierce the paper. The solenoids do the "printing" while the servo moves the paper. A close-up video of the solenoids running: <https://www.youtube.com/watch?v=-jSG96Br3b4> ## How we built it Using a Raspberry Pi B+, we created a script in Python that would recognize all keyboard characters (inputted as a string) and output the corresponding braille code. The Raspberry Pi is connected to 4 circuits with transistors, diodes, and the solenoids/servo motor. These circuits control how the paper is punctured (printed) and moved. The hardware we used was: 4x 1N4004 diodes, 3 ROB-11015 solenoids, 4 TIP102 transistors, a Raspberry Pi B+, Solarbotics' GM4 servo motor, its wheel attachment, a cork board, and a bunch of Lego. ## Challenges we ran into The project initially had many hardware/physical problems which caused errors while trying to print braille. The solenoids had to be in a specific position in order to pierce the paper. If the angle was incorrect, the pins would break off or the paper would stick to them. We also found that the paper would jam if there were no paper guards to hold the paper down. ## Accomplishments that we are proud of We are proud of being able to integrate hardware and software into our project. Despite being unfamiliar with any of the technologies, we were able to learn quickly and create a fun project that will make a difference in the world. ## What we learned None of us had any knowledge of Python, the Raspberry Pi, or how solenoids functioned. Now that we have done this project, we are much more comfortable working with these things. ## What's next for Braille Printer We were only able to get one servo motor, which meant we could only move paper in one direction. We would like to use another servo in the future to be able to print across a whole page.
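A small sketch of the character-to-braille step and the solenoid firing it drives: each letter maps to a 6-dot cell, and the corresponding GPIO pins are pulsed through the transistor drivers. Only part of the alphabet is shown, and the pin numbers, pulse timing, and driving a full 6-dot cell (the prototype drives half of one) are assumptions for illustration.

```python
# Sketch of mapping characters to 6-dot braille cells and pulsing solenoids on
# a Raspberry Pi; pin numbers, timing, and the partial alphabet are assumptions.
import time
import RPi.GPIO as GPIO

SOLENOID_PINS = [5, 6, 13, 19, 26, 21]   # assumed BCM pins for dots 1-6

# Dots are numbered 1-3 down the left column, 4-6 down the right column.
BRAILLE = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4}, "j": {2, 4, 5},
}

GPIO.setmode(GPIO.BCM)
for pin in SOLENOID_PINS:
    GPIO.setup(pin, GPIO.OUT)

def emboss(word: str, dwell: float = 0.3):
    for ch in word.lower():
        dots = BRAILLE.get(ch, set())
        for i, pin in enumerate(SOLENOID_PINS, start=1):
            GPIO.output(pin, i in dots)   # fire only the solenoids for this cell
        time.sleep(dwell)                  # hold while the pins pierce the paper
        for pin in SOLENOID_PINS:
            GPIO.output(pin, False)
        time.sleep(dwell)                  # time for the servo to advance the paper

emboss("bad")
GPIO.cleanup()
```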
## Inspiration The inspiration for merchflow was the Google form that PennApps sent out regarding shipping swag. We found the question regarding distribution on campuses particularly odd, but it made perfect sense after giving it a bit more thought. After all, shipping a few large packages is cheaper than many small shipments. But then we started considering the logistics of such an arrangement, particularly how the event organizers would have to manually figure out these shipments. Thus the concept of merchflow was born. ## What it does Merchflow is a web app that allows event organizers (like for a hackathon) to easily determine the optimal shipping arrangement for swag (or, more generically, for any package) to event participants. Below is our design for merchflow. First, the event organizer provides merchflow with the contact info (email) of the event participants. Merchflow will then send out emails on behalf of the organizer with a link to a form and an event-specific code. The form will ask for information such as shipping address, as well as whether they would be willing to distribute swag to other participants nearby. This information will be sent back to merchflow's underlying database, Firestore, and will update the organizer's dashboard in real time. Once the organizer is ready to ship, merchflow will compute the best shipping arrangement based on the participants' locations and willingness to distribute. This will be done according to a shipping algorithm that we define to minimize the number of individual shipments required (which will in turn lower the overall shipping costs for the organizer). ## How we built it Given the scope of PennApps and the limited time we had, we decided to focus on designing the concept of Merchflow and building out its front-end experience. While there is much work to be done on the backend, we believe what we have so far provides a good visualization of its potential. Merchflow is built using React.js and Firebase (and related services such as Firestore and Cloud Functions). We ran into many issues with Firebase and ultimately were not able to fully utilize it; however, we were able to successfully deploy the web app to the provided host. With React.js, we used Bootstrap and started off with Airframe React templates, then built our own dashboard, tabs, forms, tables, etc. custom to our design and expectations for merchflow. The dashboard and tabs are designed and built with responsiveness in mind as well as an intention to pursue a minimalistic, clean style. For functionality that our backend isn't operational in yet, we used faker.js to populate it with data to simulate the real experience an event planner would have. ## Challenges I ran into During the development of merchflow, we ran into many issues. The main one was that we were unable to get Firebase authentication working with our React app. We tried following several tutorials and pieces of documentation; however, it was just something that we were unable to resolve in the time span of PennApps. Therefore, we focused our energy on polishing up the front end and the design of the project so that we can relay our project concept well even without the backend being fully operational. Another issue that we encountered was regarding Firebase deployment (while we weren't able to connect to any Firebase SDKs, we were still able to connect the web app as a Firebase app and could deploy to the provided hosted site).
During deployment, we noticed that the color theme was not displaying properly compared to what we had locally. Since we specify the colors in node\_modules (a folder that we do not commit to Git), we thought that by moving the specific color variable .scss file out of node\_modules and changing the import paths, we would be able to fix it. And it did work, but it took quite some time to confirm this because the browser had cached the site prior to the change, so the fix didn't propagate over immediately. ## Accomplishments that I'm proud of We are very proud of the level of polish in our design and React front end. As a concept, we fleshed out merchflow quite extensively and considered many different aspects and features that would be required of an actual service that event organizers would actually use. This includes dealing with authentication, data storage, and data security. Our diagram describes the infrastructure of merchflow quite well and clearly lays out the work ahead of us. Likewise, we spent hours reading through how the Airframe template was built in the first place before being able to customize and add on top of it, and in the process gained a lot of insight into how React projects should be structured and how each file and component connects with the others. Ultimately, we were able to turn what we dreamed of in our designs into a reality that we can present to someone else. ## What I learned As a team, we learned a lot about web development (which neither of us is particularly strong in), specifically regarding React.js and Firebase. For React.js, we didn't know the full extent of what modularizing components could bring in terms of scale and clarity. We interacted with and learned the workings of SCSS and JavaScript, including the faker.js package, on the fly as we tried to build out merchflow's front end. ## What's next for merchflow While we are super excited about our front end, unfortunately, there are still a few more gaps to close to turn merchflow into an operational tool for event organizers to utilize, primarily dealing with the backend and Firebase. We need to resolve the Firebase connection issues that we were experiencing so we can actually get a backend working for merchflow. After we are able to integrate Firebase into the React app, we can start connecting the fields and participant list to Firestore, which will maintain these documents based on the event organizer's user id (preventing unauthorized access and modification). Once that is complete, we can focus on the two main features of merchflow: sending out emails and calculating the best shipping arrangement. Both of these features would be implemented via a Cloud Function and would work with the underlying data stored in Firestore. Sending out emails could be achieved using a library such as Twilio SendGrid with the emails the organizer has provided. Computing the best arrangement would require a bit more work to figure out an algorithm. Regardless of the algorithm, it will likely utilize the Google Maps API (or some other map API) in order to calculate the distance between addresses (and thus determine viability for proxy distribution). We would also need to utilize some service to programmatically generate (and pay for) shipping labels.
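To make the shipping-arrangement idea concrete, here is one possible sketch of the algorithm: participants willing to distribute become "hubs", and everyone else is attached to the nearest hub within a cutoff radius, otherwise they get a direct shipment. The field names, the haversine distance stand-in for a maps API, and the 25 km cutoff are assumptions for illustration, not the algorithm merchflow has committed to.

```python
# Greedy grouping sketch: minimize individual shipments by routing
# nearby participants through willing distributors ("hubs").
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    lat1, lon1, lat2, lon2 = map(radians, (a["lat"], a["lon"], b["lat"], b["lon"]))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def plan_shipments(participants, max_km=25):
    hubs = [p for p in participants if p["willing_to_distribute"]]
    shipments = {p["email"]: [p["email"]] for p in hubs}    # each hub gets one box
    for p in participants:
        if p["willing_to_distribute"]:
            continue
        nearest = min(hubs, key=lambda h: haversine_km(p, h), default=None)
        if nearest and haversine_km(p, nearest) <= max_km:
            shipments[nearest["email"]].append(p["email"])   # hub hands it off locally
        else:
            shipments[p["email"]] = [p["email"]]              # ship directly
    return shipments

participants = [
    {"email": "a@x.com", "lat": 39.95, "lon": -75.19, "willing_to_distribute": True},
    {"email": "b@x.com", "lat": 39.96, "lon": -75.20, "willing_to_distribute": False},
    {"email": "c@x.com", "lat": 43.66, "lon": -79.40, "willing_to_distribute": False},
]
print(plan_shipments(participants))
```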
## Inspiration Imagine a world where the number of mass shootings in the U.S. per year doesn't align with the number of days in the year. With the recent Thousand Oaks shooting, we wanted to make something that would accurately predict the probability that a place has a mass shooting given a zipcode and a future date. ## What it does When you type in a zipcode, the corresponding city is queried in the prediction results of our neural network in order to get a probability. This probability is scaled accordingly and represented as a red circle of varying size on our U.S. map. We also made a donation link that takes in credit card information and processes it. ## How we built it We trained our neural network with datasets on gun violence in various cities. We did a ton of dataset cleaning in order to find just what we needed, and trained our network using scikit-learn. We also used the Stdlib API in order to pass data around so that the input zipcode could be sent to the right place, and we used the Stripe API to handle credit card donation transactions. We used d3.js and other external topological JavaScript libraries in order to create a map of the U.S. that could be decorated. We then put it all together with some JavaScript, HTML and CSS. ## Challenges we ran into We had lots of challenges with this project. d3.js was hard to jump right into, as it is such a huge library that correlates data with visualization. Cleaning the data was challenging as well, because people tend not to think twice before throwing data into a big CSV. Sending data around files without the use of a server was challenging, and we managed to bypass that with the Stdlib API. ## Accomplishments that we're proud of A trained neural network that predicts the probability of a mass shooting given a zipcode. A beautiful topological map of the United States in d3. Integration of microservices through APIs we had never used before. ## What we learned Doing new things is hard, but ultimately worthwhile! ## What's next for Ceasefire We will be working on a better, real-time mapping of mass shooting data. We will also need to improve our neural network by tidying up our data more.
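For readers unfamiliar with the scikit-learn side of a pipeline like this, a simplified sketch follows. The CSV name, feature columns, and network size are hypothetical stand-ins; the team's actual cleaned dataset and architecture may differ.

```python
# Simplified sketch of training a neural network on city-level gun
# violence data and producing a probability for the d3 map.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("gun_violence_by_city.csv")          # assumed cleaned dataset
features = ["population", "past_incidents", "median_income", "month"]
X, y = df[features], df["had_mass_shooting"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Probability of a mass shooting for one city/date row, as drawn on the map.
sample = X_test.iloc[[0]]
print("predicted probability:", model.predict_proba(sample)[0][1])
```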
## Inspiration We were inspired by the resilience of freelancers, particularly creative designers, during the pandemic. As students, it's easy to feel overwhelmed and not value our own work. We wanted to empower emerging designers and remind them of what we can do with a little bit of courage. And support. ## What it does Bossify is a mobile app that cleverly helps students adjust their design fees. It focuses on equitable upfront pay, which in turn increases the amount of money saved. This can be put towards an emergency fund. On the other side, clients receive high-quality, reliable work. The platform has a transparent rating system, making it easy to find quality freelancers. It's a win-win situation. ## How we built it We got together as a team the first night to hammer out ideas. This was our second idea, and everyone on the team loved it. We all pitched in ideas for product strategy. Afterwards, we divided the work into two parts - 1) Userflows, UI Design, & Prototype; 2) Writing and Testing the Algorithm. For the design, Figma was the main software used. The designers (Lori and Janice) used a mix of iOS components and icons for speed. Stock images were taken from Unsplash and Pexels. After quickly drafting the storyboards, we created a rapid prototype. Finally, the pitch deck was made to synthesize our ideas. For the code, Android Studio was the main software used. The developers (Eunice and Zoe) together implemented the back end and front end of the MVP (minimum viable product), where Zoe developed the intelligent price prediction model in TensorFlow and deployed the trained model on the mobile application. ## Challenges we ran into One challenge was not having the appropriate data immediately available, which was needed to create the algorithm. On the first night, it was a challenge to quickly research and determine the types of information/factors that contribute to design fees. We had to cap our research time to figure out the design and algorithm. There were also technical limitations, where our team had to determine the best way to integrate the prototype with the front end and back end. As there was limited time, and after consulting with the hackathon mentor, the developers decided to aim for the MVP instead of spending too much time and energy on turning the prototype into a real front end. It was also difficult to integrate the machine learning algorithm into our mini app's backend, mainly because we didn't have any experience implementing machine learning algorithms in Java, especially as part of the back end of a mobile app. ## Accomplishments that we're proud of We're proud of how cohesive the project reads. As the first COVID hackathon for all the team members, we were still able to communicate well and put our synergies together. ## What we learned Although it is a simple platform with minimal pages, we learned that it was still possible to create an impactful app. We also learned the importance of making a plan and timeline before we start, which helped us keep track of our progress and allowed us to use our time more strategically. ## What's next for Bossify Making partnerships to incentivize clients to use Bossify! #fairpayforfreelancers
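The price-prediction-plus-mobile-deployment flow above can be sketched in a few lines of TensorFlow. The four input features and the training values are made-up assumptions for illustration; the actual Bossify model and feature set may differ. The TensorFlow Lite conversion step at the end is the standard way to bundle a trained model into an Android app.

```python
# Minimal sketch of a design-fee regression model and its TFLite export.
import numpy as np
import tensorflow as tf

# Hypothetical features: years of experience, project hours, revisions, usage-rights tier.
X = np.array([[1, 10, 2, 0], [3, 25, 3, 1], [5, 40, 4, 2]], dtype=np.float32)
y = np.array([[150.0], [600.0], [1500.0]], dtype=np.float32)   # fair fee in dollars

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=200, verbose=0)

# Convert to a .tflite file the Android app can bundle and run on-device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("fee_model.tflite", "wb") as f:
    f.write(converter.convert())

print(model.predict(np.array([[2, 20, 3, 1]], dtype=np.float32)))
```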
## Inspiration * The COVID-19 pandemic has fueled an epidemic of anxiety among students * Frequent **panic attacks** are a symptom of anxiety * In the moment, panic attacks are frightening and crippling * In a time of isolation, Breeve is designed to improve users' mental health by identifying and helping them when they are experiencing panic attacks ## What it does * A heart rate monitor detects a significant increase in heart rate (indicative of a panic attack) * The Arduino sends a signal to initiate the "breathing routine" * A Google Chrome extension (after getting a message from the Arduino) opens up a new tab with our webpage on it * Our webpage has a serene background, comforting words, and a moving cloud to help people focus on breathing and relaxing ## How we built it * The Chrome extension and website are built in HTML, CSS, and JavaScript * The heart rate monitor comprises an Arduino UNO microcontroller, a heart rate sensor (we substituted a potentiometer since we don't own a heart rate sensor) and a breadboard circuit ## Challenges we ran into * As this is our first hardware hack, we struggled with connecting the hardware and software. We were unable to use the "Keyboard()" Arduino library to let the Arduino trigger the Chrome extension, and we struggled with using other technologies like Firebase to connect the Arduino sensor input to the Chrome extension's output. This is something we plan to learn about for future improvements to Breeve and future hackathons. ## Accomplishments that we're proud of * This is our first hardware hack! ## What we learned * Kirsten learned a lot about Arduino and breadboarding (e.g., how to hook up a potentiometer) * Lavan learned about CSS animations and how a database could be used in the future to connect various input and output sources ## What's next for Breeve * More personalization → add a prompt to phone a friend or take anxiety medication (if applicable) * Better sensor data (e.g. webcam, temperature sensor) to make a more informed diagnosis * An improved webpage (adding calming music in the background to create a safe, happy atmosphere)
## Inspiration In the “new normal” that COVID-19 has caused us to adapt to, our group found that a common challenge we faced was deciding where it was safe to go to complete trivial daily tasks, such as grocery shopping or eating out on occasion. We were inspired to create a mobile app using a database created completely by its users - a program where anyone could rate how well these “hubs” were following COVID-19 safety protocol. ## What it does Our app allows users to search for a “hub” using a Google Maps API, and write a review by rating prompted questions on a scale of 1 to 5 regarding how well the location enforces public health guidelines. Future visitors can then see these reviews and add their own, contributing to a collective safety rating. ## How I built it We collaborated using GitHub and Android Studio, and incorporated both the Google Maps API and an integrated Firebase API. ## Challenges I ran into Our group unfortunately faced a number of unprecedented challenges, including losing a team member mid-hack due to an emergency situation, working across 3 different time zones, and additional technical difficulties. However, we pushed through and came up with a final product we are all very passionate about and proud of! ## Accomplishments that I'm proud of We are proud of how well we collaborated through adversity, having never met each other in person before. We were able to tackle a prevalent social issue and come up with a plausible solution that could help bring our communities together, worldwide, similar to how our diverse team was brought together through this opportunity. ## What I learned Our team brought a wide range of different skill sets to the table, and we were able to learn lots from each other because of our various experiences. From a technical perspective, we improved our Java and Android development fluency. From a team perspective, we improved our ability to compromise and adapt to unforeseen situations. For 3 of us, this was our first ever hackathon, and we feel very lucky to have found such kind and patient teammates that we could learn a lot from. The amount of knowledge they shared in the past 24 hours is insane. ## What's next for SafeHubs Our next steps for SafeHubs include personalizing the user experience through creating profiles, and integrating it into our community. We want SafeHubs to be a new way of staying connected (virtually) to look out for each other and keep our neighbours safe during the pandemic.
## Inspiration Over the summer, one of us was reading about climate change, but he realized that most of the news articles he came across were very negative and affected his mental health to the point that it was hard to think about the world as a happy place. However, one day he watched a YouTube video that talked about the hope that exists in that sphere, and he realized the impact of this "goodNews" on his mental health. Our idea is fully inspired by the consumption of negative media and tries to combat it. ## What it does We want to bring more positive news into people's lives, given that we've seen the tendency of people to only read negative news. Psychological studies have also shown that bringing positive news into our lives makes us happier and significantly increases dopamine levels. The idea is to maintain a score of how much negative content a user reads (detected using Cohere), and once it passes a certain threshold (we store the scores using CockroachDB) we show them a positive news article in the same topic area that they were reading. We do this by doing text analysis using a Chrome extension front end and a Flask + CockroachDB backend that uses Cohere for natural language processing. Since a lot of people also listen to news via video, we also created a part of our Chrome extension to transcribe audio to text - so we included that at the start of our pipeline as well! At the end, if the "negativity threshold" is passed, the Chrome extension tells the user that it's time for some good news and suggests a relevant article. ## How we built it **Frontend** We used a Chrome extension for the front end, which included dealing with the user experience and making sure that our application actually gets the attention of the user while being useful. We used React.js, HTML and CSS to handle this. There were also a lot of API calls because we needed to transcribe the audio from the Chrome tabs and provide that information to the backend. **Backend** ## Challenges we ran into It was really hard to make the Chrome extension work because of a lot of security constraints that websites have. We thought that making the basic Chrome extension would be the easiest part, but it turned out to be the hardest. Also, figuring out the overall structure and the flow of the program was a challenging task, but we were able to achieve it. ## Accomplishments that we're proud of 1) (co:here) Finetuned a co:here model to semantically classify news articles based on emotional sentiment 2) (co:here) Developed a high-performing classification model to classify news articles by topic 3) Spun up a CockroachDB node and client and used it to store all of our classification data 4) Added support for multiple users of the extension that can leverage the use of CockroachDB's relational schema. 5) Frontend: Implemented support for multimedia streaming and transcription from the browser, and used script injection into websites to scrape content. 6) Infrastructure: Deploying server code to the cloud and serving it using Nginx and port-forwarding. ## What we learned 1) We learned a lot about how to use CockroachDB in order to create a database of news articles and topics that also supports multiple users 2) Script injection, cross-origin and cross-frame calls to handle multiple frontend elements. This was especially challenging for us as none of us had any frontend engineering experience.
3) Creating a data ingestion and machine learning inference pipeline that runs on the cloud, and finetuning the model using ensembles to get optimal results for our use case. ## What's next for goodNews 1) Currently, we push a notification to the user about negative pages viewed/a link to a positive article every time the user visits a negative page after the threshold has been crossed. The intended way to fix this would be to add a column to one of our existing CockroachDB tables as a 'dirty bit' of sorts, which tracks whether a notification has been pushed to a user or not, since we don't want to notify them multiple times a day. After doing this, we can query the table to determine if we should push a notification to the user or not. 2) We also would like to fine-tune our machine learning more. For example, right now we classify articles by topic broadly (such as war, COVID, sports, etc.) and show a related positive article in the same category. Given more time, we would want to provide positive article suggestions that are more semantically similar to the articles the user is reading. We could use Cohere or other large language models to potentially explore that.
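One possible shape for the per-user score plus the "dirty bit" discussed above is sketched below. CockroachDB speaks the Postgres wire protocol, so psycopg2 works as a client; the connection string, table, and column names here are hypothetical, not the actual goodNews schema.

```python
# Sketch: track a user's negativity score in CockroachDB and flip a
# "notified_today" bit so the extension only nudges them once per day.
import psycopg2

conn = psycopg2.connect("postgresql://root@localhost:26257/goodnews?sslmode=disable")
conn.autocommit = True
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS user_scores (
        user_id STRING PRIMARY KEY,
        negativity_score INT DEFAULT 0,
        notified_today BOOL DEFAULT FALSE   -- the 'dirty bit' described above
    )
""")

def record_article(user_id, is_negative, threshold=5):
    """Update the score after a classified page view; return True if we should notify."""
    cur.execute(
        """INSERT INTO user_scores (user_id, negativity_score)
           VALUES (%s, %s)
           ON CONFLICT (user_id) DO UPDATE
           SET negativity_score = user_scores.negativity_score + excluded.negativity_score""",
        (user_id, 1 if is_negative else 0),
    )
    cur.execute("SELECT negativity_score, notified_today FROM user_scores WHERE user_id = %s",
                (user_id,))
    score, notified = cur.fetchone()
    if score >= threshold and not notified:
        cur.execute("UPDATE user_scores SET notified_today = TRUE WHERE user_id = %s",
                    (user_id,))
        return True      # the extension should now suggest a positive article
    return False
```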
## Inspiration Legal research is tedious and time-consuming, with lawyers spending hours finding relevant cases. For instance, if a client was punched, lawyers must manually try different keywords like “kick” or “slap” to find matches. Analyzing cases is also challenging. Even with case summaries, lawyers must still verify their accuracy by cross-referencing with the original text, sometimes hundreds of pages long. This is inefficient, given the sheer length of cases and the need to constantly toggle between tabs to find the relevant paragraphs. ## What it does Our tool transforms legal research by offering AI-powered legal case search. Lawyers can input meeting notes or queries in natural language, and our AI scans open-source databases to identify the most relevant cases, ranked by similarity score. Once the best matches are identified, users can quickly review our AI-generated case summaries and full case texts side by side in a split-screen view, minimizing context-switching and enhancing research efficiency. ## How we built it Our tool was developed to create a seamless user experience for lawyers. The backend process began with transforming legal text into embeddings using the all-MiniLM-L6-v2 model for efficient memory usage. We sourced data from the CourtListener, which is backed by the Free Law Project non-profit, and stored the embeddings in LanceDB, allowing us to retrieve relevant cases quickly. To facilitate the search process, we integrated the CourtListener API, which enables keyword searches of court cases. A FastAPI backend server was established to connect LanceDB and CourtListener for effective data retrieval. ## Challenges we ran into The primary challenge was bridging the gap between legal expertise and software development. Analyzing legal texts proved difficult due to their complex and nuanced language. Legal terminology can vary significantly across cases, necessitating a deep understanding of context for accurate interpretation. This complexity made it challenging to develop an AI system that could generate meaningful similarity scores while grasping the subtleties inherent in legal documents. ## Accomplishments that we're proud of Even though we started out as strangers and were busy building our product, we took the time to get to know each other personally and have fun during the hackathon too! This was the first hackathon for two of our teammates, but they quickly adapted and contributed meaningfully. Most importantly, we supported each other throughout the process, making the experience both rewarding and memorable. ## What we learned Throughout the process, we emphasized constant communication within the team. The law student offered insights into complex research workflows, while the developers shared their expertise on technical feasibility and implementation. Together, we balanced usability with scope within the limited time available, and all team members worked hard to train the AI to generate meaningful similarity scores, which was particularly demanding. One teammate delved deep into embeddings, learning about their applications in similarity search, chunking, prompt engineering, and adjacent concepts like named entity recognition, hybrid search, and retrieval-augmented generation (RAG) — all within the span of the hackathon. Additionally, two of our members had no front-end development experience and minimal familiarity with design tools like Figma. By leveraging resources like assistant-ui, we quickly learned the necessary skills. 
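A condensed sketch of the embedding-and-search flow described above is shown below, using sentence-transformers with the all-MiniLM-L6-v2 model and LanceDB. The case texts, table name, and directory are placeholders, and the exact LanceDB calls may vary by library version; this is an illustration of the approach rather than the production code.

```python
# Sketch: embed case texts, store them in LanceDB, and run a
# natural-language similarity search over them.
import lancedb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

cases = [
    {"id": "1", "text": "Defendant struck the plaintiff with a closed fist outside a bar."},
    {"id": "2", "text": "Plaintiff slipped on an unmarked wet floor in the defendant's store."},
]
for case in cases:
    case["vector"] = model.encode(case["text"]).tolist()

db = lancedb.connect("./lancedb")
table = db.create_table("cases", data=cases, mode="overwrite")

# A lawyer's natural-language meeting note becomes the search query.
query = model.encode("my client was punched by another patron").tolist()
for hit in table.search(query).limit(2).to_list():
    print(hit["id"], hit["_distance"], hit["text"][:60])
```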
## What's next for citewise We aim to provide support for the complete workflow of legal research. This includes enabling lawyers to download relevant cases easily and facilitating collaboration by allowing the sharing of client files with colleagues. Additionally, we plan to integrate with paid legal databases, making our product platform agnostic. This will enable lawyers to search across multiple databases simultaneously, streamlining their research process and eliminating the need to access each database individually.
## Inspiration When reading news articles, we're aware that the writer has biases that affect the way they build their narrative. Throughout the article, we're constantly left wondering—"What did that article not tell me? What convenient facts were left out?" ## What it does *News Report* collects news articles by topic from over 70 news sources and uses natural language processing (NLP) to determine the common truth among them. The user is first presented with an AI-generated summary of approximately 15 articles on the same event or subject. The references and original articles are at your fingertips as well! ## How we built it First, we find the top 10 trending topics in the news. Then our spider crawls over 70 news sites to get their reporting on each topic specifically. Once we have our articles collected, our AI algorithms compare what is said in each article using a KL Sum, aggregating what is reported across all outlets to form a summary of these sources. The summary is about 5 sentences long—digested by the user with ease, with quick access to the sources that were used to create it! ## Challenges we ran into We were really nervous about taking on an NLP problem and the complexity of creating an app that made complex articles simple to understand. We had to work with technologies that we hadn't worked with before, and ran into some challenges with technologies we were already familiar with. Trying to define what makes a perspective "reasonable" versus "biased" versus "false/fake news" proved to be an extremely difficult task. We also had to learn to better adapt our mobile interface for an application whose content varied so drastically in size and availability. ## Accomplishments that we're proud of We're so proud we were able to stretch ourselves by building a fully functional MVP with both a backend and an iOS mobile client. On top of that, we were able to submit our app to the App Store, get several well-deserved hours of sleep, and ultimately build a project with a large impact. ## What we learned We learned a lot! On the backend, one of us got to look into NLP for the first time and learned about several summarization algorithms. While building the front end, we focused on iteration and got to learn more about how UIScrollViews work and interact with other UI components. We also got to work with several new libraries and APIs that we hadn't even heard of before. It was definitely an amazing learning experience! ## What's next for News Report We'd love to start working on sentiment analysis on the headlines of articles to predict how distributed the perspectives are. After that, we also want to be able to analyze and remove fake news sources from our spider's crawl.
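For readers curious what a KL-Sum summary looks like in practice, here is one way to produce a 5-sentence summary in Python with the sumy library. This is an illustrative stand-in rather than the team's exact implementation, and the sample article texts are invented.

```python
# Sketch: combine several articles on one topic and keep the 5 sentences
# whose word distribution best matches the whole (KL-Sum).
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.kl import KLSummarizer

def summarize_articles(article_texts, sentence_count=5):
    combined = "\n".join(article_texts)
    parser = PlaintextParser.from_string(combined, Tokenizer("english"))
    summary = KLSummarizer()(parser.document, sentence_count)
    return " ".join(str(sentence) for sentence in summary)

articles = [
    "City council approved the new transit budget on Monday after months of debate.",
    "Officials say the transit plan will add 40 buses by next year and expand night service.",
]
print(summarize_articles(articles))
```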
## Problem Attendance at office hours has been shown to be positively correlated with academic standing. However, 66% of students never attend office hours. Two significant contributing factors to this lack of attendance are the time and location of the office hours. See studies [here](http://www.tandfonline.com/doi/abs/10.1080/15512169.2013.835554?src=recsys&journalCode=upse20) and [here](https://www.facultyfocus.com/articles/teaching-professor-blog/students-dont-attend-office-hours/) ## Solution Our solution is an easy-to-use website that makes office hours accessible online. Students submit questions to the teacher, and teachers respond to these questions by video. Students can view previous questions and answers, which are recorded in association with individual questions. ## Our Mission * Improve office hours attendance * Reduce the friction of attending office hours * Improve student academic performance ## Process Front-end Design: * Used Sketch to draw up an outline of the website * Coded the design using HTML, JavaScript, and CSS. Back-end Design: * Used Django (Python) to build a complex database structure Video Streaming API: * Determined that the best API for the project is the YouTube Live Streaming API, but did not have time to implement it ## Challenges * Implementing and understanding the YouTube Live Streaming API * Issues with date ranges and time zones, so we kept all times in UTC * Publishing the website from the front end ## Accomplishments * Domain hosting set up [here](www.ruminate.exampleschool.net) * Functional local website with admin editing and website updating * Business plan with initial, beginning, and future strategies for Ruminate ## Our Future In the near future, we plan to connect with specific teachers at Cornell University to test and provide feedback on the software. We will survey some of their students to measure the efficacy of the software on the students' office hour attendance and academic standing. Some functionality we want to add is attendance statistics for the teachers. Later, we plan on expanding into other Upstate New York colleges and generating revenue by creating a biannual subscription service. We will attempt to integrate our web service into Blackboard or Canvas to reduce the friction of signing up.
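Since the back end keeps answers recorded in association with individual questions, a minimal Django model sketch of that structure might look like the following. The model and field names are illustrative assumptions, not the project's actual schema.

```python
# models.py - sketch of the question/video-answer relationship in Django.
from django.db import models

class Course(models.Model):
    name = models.CharField(max_length=100)
    teacher = models.ForeignKey("auth.User", on_delete=models.CASCADE)

class Question(models.Model):
    course = models.ForeignKey(Course, on_delete=models.CASCADE, related_name="questions")
    student = models.ForeignKey("auth.User", on_delete=models.CASCADE)
    text = models.TextField()
    asked_at = models.DateTimeField(auto_now_add=True)

class VideoAnswer(models.Model):
    question = models.OneToOneField(Question, on_delete=models.CASCADE, related_name="answer")
    video_url = models.URLField()          # recorded response, viewable later by any student
    answered_at = models.DateTimeField(auto_now_add=True)
```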
## Inspiration GeoGuesser is a fun game which went viral in the middle of the pandemic, but after having played for a long time, it started feeling tedious and boring. Our Discord bot tries to freshen up the stale air by providing a playlist of iconic locations in addition to exciting trivia like movies and monuments for that extra hit of dopamine when you get the right answers! ## What it does The bot provides you with playlist options, currently restricted to Capital Cities of the World, Horror Movie Locations, and Landmarks of the World. After selecting a playlist, five random locations are chosen from a list of curated locations. You are then provided a picture from which you have to guess the location and the bit of trivia associated with the location, like the name of the movie from which we selected the location. You get points based on how close you are to the location and on whether you got the bit of trivia correct. ## How we built it We used the *discord.py* library for actually coding the bot and interfacing it with Discord. We stored our playlist data in external *Excel* sheets which we parsed through as required. We utilized the *google-streetview* and *googlemaps* Python libraries for accessing the Google Maps Street View APIs. ## Challenges we ran into For initially storing the data, we thought to use a playlist class and store the playlist data as an array of playlist objects, but instead used Excel for easier storage and updating. We also had some problems with the Google Maps Static Street View API in the beginning, but they were mostly syntax and understanding issues which were overcome soon. ## Accomplishments that we're proud of Getting the Discord bot working and sending images from the API for the first time gave us an incredible feeling of satisfaction, as did implementing the input/output flows. Our points calculation system based on the Haversine Formula for Distances on Spheres was also an accomplishment we're proud of. ## What we learned We learned better syntax and practices for writing Python code. We learned how to use the Google Cloud Platform and the Street View API. Some of the libraries we delved deeper into were Pandas and pandasql. We also learned a thing or two about Human-Computer Interaction, as designing an interface for gameplay was rather interesting on Discord. ## What's next for Geodude? Possibly adding more topics, and refining the loading of Street View images to better reflect the actual location.
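The haversine-based scoring mentioned above can be written in a handful of lines. The maximum-points value and the 2000 km fall-off used here are assumptions for illustration; the bot's actual constants may differ.

```python
# Sketch: points decay with the great-circle distance between guess and answer.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(h))

def score_guess(guess, answer, max_points=5000, scale_km=2000):
    """Full points for a perfect guess, decaying linearly to zero at scale_km away."""
    distance = haversine_km(guess[0], guess[1], answer[0], answer[1])
    return max(0, round(max_points * (1 - distance / scale_km)))

# Guessing Ottawa when the answer was Toronto:
print(score_guess((45.42, -75.70), (43.65, -79.38)))
```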
## Inspiration Falls are the leading cause of injury and death among seniors in the US and cost over $60 billion in medical expenses every year. With one in four seniors in the US experiencing a fall each year, attempts at prevention are badly needed and are currently implemented through careful monitoring and caregiving. However, in the age of COVID-19 (and even before), remote caregiving has been a difficult and time-consuming process: caregivers must either rely on updates given by the senior themselves or monitor a video camera or other device 24/7. Tracking day-to-day health and progress is nearly impossible, and maintaining and improving strength and mobility presents unique challenges. Having personally experienced this exhausting process in the past, our team decided to create an all-in-one tool that helps prevent such devastating falls from happening and makes remote caregivers' lives easier. ## What it does NoFall enables smart ambient activity monitoring, proactive risk assessments, a mobile alert system, and a web interface to tie everything together. ### **Ambient activity monitoring** NoFall continuously watches the patient and updates caregivers on their condition through an online dashboard. The activity section of the dashboard provides the following information: * Current action: sitting, standing, not in area, fallen, etc. * How many times the patient drank water and took their medicine * Graph of activity throughout the day, annotated with key events * Histogram of stand-ups per hour * Daily activity goals and progress score * Alerts for key events ### **Proactive risk assessment** Using the powerful tools offered by Google Cloud, a proactive risk assessment can be activated with a simple voice query to a smart speaker like Google Home. When starting an assessment, our algorithms begin analyzing the user's movements against a standardized medical testing protocol for screening a patient's risk of falling. The screening consists of two tasks: 1. Timed Up-and-Go (TUG) test: The user is asked to stand up from a chair and walk 10 feet. The user is timed, and the timer stops when 10 feet has been walked. If the user completes this task in over 12 seconds, the user is considered to be at a high risk of falling. 2. 30-second Chair Stand test: The user is asked to stand up and sit down on a chair repeatedly, as fast as they can, for 30 seconds. If the user is not able to sit down more than 12 times (for females) or 14 times (for males), they are considered to be at a high risk of falling. The videos of the tests are recorded and can be rewatched on the dashboard. The caregiver can also view the results of past tests in the dashboard as a graph over time. ### **Mobile alert system** When the user is in a fallen state, a warning message is displayed in the dashboard and texted via SMS to the assigned caregiver's phone. ## How we built it ### **Frontend** The frontend was built using React and styled using TailwindCSS. All data is updated from Firestore in real time using listeners, and new activity and assessment goals are also instantly saved to the cloud. Alerts are also instantly delivered to the web dashboard and caretakers' phones using IFTTT's SMS Action. We created voice assistant functionality through Amazon Alexa skills and Google Home routines. A voice command triggers an IFTTT webhook, which posts to our Flask backend API and starts risk assessments.
### **Backend** **Model determination and validation** To determine the pose of the user, we utilized Google's MediaPipe library in Python. We decided to use the BlazePose model, which is lightweight and can run on real-time security camera footage. The BlazePose model is able to determine the pixel locations of 33 landmarks of the body, corresponding to the hips, shoulders, arms, face, etc., given a 2D picture of interest. We connected the real-time stream from the security camera footage to continuously feed frames into the BlazePose model. Our testing confirmed the ability of the model to determine landmarks despite occlusion and different angles, which would be commonplace when used on real security camera footage. **Ambient sitting, standing, and falling detection** To determine if the user is sitting or standing, we calculated the angle that the knees make with the hips and set a threshold, where angles (measured from the horizontal) less than that number are considered sitting. To account for the case where the user is directly facing the camera, we also determined the ratio of the hip-to-knee length to the hip-to-shoulder length, reasoning that the 2D landmarks of the knees would be closer to the body when the user is sitting. To determine the fallen status, we checked whether the center of the shoulders and the center of the knees made an angle of less than 45 degrees for over 20 frames at once. If the legs made an angle greater than a certain threshold (close to 90 degrees), we considered the user to be standing. Lastly, if there was no detection of landmarks, we considered the status to be unknown (the user may have left the room/area). Because of the different possible angles of the camera, we also determined the perspective of the camera based on the convergence of straight lines (the straight lines are determined by a Hough transform algorithm). The convergence can indicate how angled the camera is, and the thresholds for the ratio of lengths can be mathematically transformed accordingly. **Proactive risk assessment analysis** To analyze Timed Up-and-Go tests, we first determined if the user is able to change their status from sitting to standing, and then determined the distance the user has traveled by estimating the speed with a finite-difference calculation of velocity between consecutive frames. The pixel distance was then transformed based on the distance between the user's eyes and the height of the user (which is pre-entered on our website) to determine the real-world distance the user has traveled. Once the user reaches 10 meters of cumulative distance traveled, the timer stops and the result is reported to the server. To analyze 30-second Chair Stand tests, the number of transitions between sitting and standing was counted. Once 30 seconds has been reached, the number of times the user sat down is half the number of transitions, and the data is sent to the server. ## Challenges we ran into * Figuring out port forwarding with a barebones IP camera, then streaming the video to the world wide web for consumption by our model. * Calibrating the tests (time limits, excessive movements) to follow the standards outlined by research. We had to come up with a way to mitigate random errors that could trigger fast changes in sitting and standing. * Converting recorded videos to a web-compatible format. The videos saved by Python's video recording package were .avi files only, which was not compatible with the web.
We had to use scripted ffmpeg to dynamically convert the videos into .mp4 * Live streaming the processed Python video to the front end required processing frames with ffmpeg and a custom streaming endpoint. * Determination of a model that works on realtime security camera data: we tried Openpose, Posenet, tf-pose-estimation, and other models, but finally we found that MediaPipe was the only model that could fit our needs ## Accomplishments that we're proud of * Making the model ignore the noisy background, bad quality video stream, dim lighting * Fluid communication from backend to frontend with live updating data * Great team communication and separation of tasks ## What we learned * How to use IoT to simplify and streamline end-user processes. * How to use computer vision models to analyze pose and velocity from a reference length * How to display data in accessible, engaging, and intuitive formats ## What's next for NoFall We're proud of all the features we have implemented with NoFall and are eager to implement more. In the future, we hope to generalize to more camera angles (such as a bird's-eye view), support lower-light and infrared ambient activity tracking, enable obstacle detection, monitor for signs of other conditions (heart attack, stroke, etc.) and detect more therapeutic tasks, such as daily cognitive puzzles for fighting dementia.
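A pared-down sketch of the sitting/standing heuristic described in the backend section is shown below: the angle of the hip-to-knee segment relative to the horizontal decides the pose. The 50-degree threshold and the use of the left side only are simplifying assumptions for illustration; the real system also uses the length-ratio check, fall detection, and perspective correction described above.

```python
# Sketch: classify one frame as sitting / standing / unknown with MediaPipe pose.
import math
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def classify_frame(frame_bgr, pose, sit_threshold_deg=50):
    results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return "unknown"                      # user not in the camera's view
    lm = results.pose_landmarks.landmark
    hip = lm[mp_pose.PoseLandmark.LEFT_HIP]
    knee = lm[mp_pose.PoseLandmark.LEFT_KNEE]
    # Angle of the thigh from horizontal (image y grows downward).
    angle = abs(math.degrees(math.atan2(knee.y - hip.y, knee.x - hip.x)))
    angle = min(angle, 180 - angle)
    return "sitting" if angle < sit_threshold_deg else "standing"

cap = cv2.VideoCapture(0)                     # or the IP camera's stream URL
with mp_pose.Pose() as pose:
    ok, frame = cap.read()
    if ok:
        print(classify_frame(frame, pose))
cap.release()
```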
## Inspiration The inspiration that we drew when creating this Discord bot was to personalize a bot that could do a plethora of different things. We wanted the main purpose of our bot to be entertainment, sort of like an airplane's in-flight entertainment screen. To navigate PikaBot, only a single person is needed. ## What it does When using the bot, one may use $help to uncover PikaBot's commands and abilities. This reveals 5 interesting commands: 1. $rps — a rock-paper-scissors game where the user plays against the computer 2. $weather — a command which reports the weather of any major city in real time 3. $quotes — provides an inspirational quote to uplift users 4. $mimic — a command which mimics the user 5. $quiz — provides fun questions in various forms: T/F, MC or one-word answers ## How we built it The overall design falls into two parts, the main aspect being the Discord bot itself and the website that links the Discord bot to the web. # Website It utilizes JS, HTML, CSS and a Bootstrap framework # Discord Bot Uses Flask and UptimeRobot to "live forever", and we leveraged the discord.py documentation to code this bot. In regards to the weather, quotes and quiz commands, they were developed using RESTful endpoints with OpenWeatherAPI, ZenQuotesAPI and TheTriviaAPI to display data to the user. Furthermore, we attempted to mimic users using a trie ("TrieTree") data structure to partially autocomplete responses based on previous messages. ## Challenges we ran into Some of the main challenges were trying to come up with logical responses when mimicking the user and configuring the quiz.
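The trie idea behind $mimic can be sketched as follows: previous messages are inserted word by word, and a reply is partially autocompleted from a prefix by always following the most frequent next word. This is an illustrative sketch of the data structure, not PikaBot's actual implementation.

```python
# Sketch: word-level trie used to autocomplete a "mimicked" reply.
class TrieNode:
    def __init__(self):
        self.children = {}
        self.count = 0            # how often this word followed the prefix

class MimicTrie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, message):
        node = self.root
        for word in message.lower().split():
            node = node.children.setdefault(word, TrieNode())
            node.count += 1

    def complete(self, prefix_words, max_extra=4):
        node = self.root
        for word in prefix_words:
            node = node.children.get(word)
            if node is None:
                return prefix_words           # unseen prefix: nothing to add
        completion = list(prefix_words)
        for _ in range(max_extra):
            if not node.children:
                break
            word, node = max(node.children.items(), key=lambda kv: kv[1].count)
            completion.append(word)
        return completion

trie = MimicTrie()
trie.insert("pika pika is the best pokemon")
trie.insert("pika pika is so cute")
print(" ".join(trie.complete(["pika", "pika"])))
```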
## Inspiration Being a student of the University of Waterloo, every other semester I have to attend interviews for co-op positions. Although talking to people gets easier the more often you do it, I still feel slightly nervous during such face-to-face interactions. During this nervousness, the fluency of my conversation isn't always the best. I tend to use unnecessary filler words ("um, umm" etc.) and repeat the same adjectives over and over again. In order to improve my speech through practice against a program, I decided to create this application. ## What it does InterPrep uses the IBM Watson "Speech-To-Text" API to convert spoken word into text. After doing this, it analyzes the words that are used by the user and highlights certain words that can be avoided, and maybe even improved, to create a stronger presentation of ideas. By practicing speaking with InterPrep, one can keep track of their mistakes and improve themselves in time for "speaking events" such as interviews, speeches and/or presentations. ## How I built it In order to build InterPrep, I used the Stdlib platform to host the site and create the backend service. The IBM Watson API was used to convert spoken word into text. The MediaRecorder API was used to capture the spoken audio into a file which later gets transcribed by the Watson API. The languages and tools used to build InterPrep are HTML5, CSS3, JavaScript and Node.js. ## Challenges I ran into "Speech-To-Text" APIs, like the one offered by IBM, tend to remove words of profanity and words that don't exist in the English language. Therefore the word "um" wasn't detected by the API at first. However, for my application, I needed to detect frequently used filler words such as "um", so that the user can be notified and can improve their overall speech delivery. Therefore, in order to support this word, I had to create a custom language library within the Watson API platform and then connect it via Node.js on top of the Stdlib platform. This proved to be a very challenging task, as I faced many errors and had to seek help from mentors before I could figure it out. However, once fixed, the project went by smoothly. ## Accomplishments that I'm proud of I am very proud of the entire application itself. Before coming to QHacks, I only knew how to do front-end web development. I didn't have any knowledge of back-end development or of using APIs. Therefore, by creating an application that contains all of the things stated above, I am really proud of the project as a whole. In terms of smaller individual accomplishments, I am very proud of creating my own custom language library and also of using multiple APIs in one application successfully. ## What I learned I learned a lot of things during this hackathon. I learned back-end programming, how to use APIs and also how to develop a coherent web application from scratch. ## What's next for InterPrep I would like to add more features to InterPrep as well as improve the UI/UX in the coming weeks after returning home. There is a lot that can be done with additional technologies such as Machine Learning and Artificial Intelligence that I wish to further incorporate into my project!
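Once Watson returns a transcript, the filler-word analysis itself is plain text processing. The sketch below shows one simple way to count fillers and flag overused words; the word lists and thresholds are illustrative assumptions rather than InterPrep's actual rules.

```python
# Sketch: count filler words and repeated words in a speech transcript.
import re
from collections import Counter

FILLERS = {"um", "umm", "uh", "like", "basically", "actually"}

def analyze_transcript(transcript, top_n=3):
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(words)
    filler_report = {w: counts[w] for w in FILLERS if counts[w] > 0}
    repeated = [w for w, c in counts.most_common() if c >= 3 and w not in FILLERS][:top_n]
    return {"fillers": filler_report, "repeated_words": repeated}

sample = "Um I am like a really really really hardworking and um dedicated person"
print(analyze_transcript(sample))
```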
## Inspiration Nowadays, paying for knowledge has become more widely accepted by the public, and people are more willing to pay for truly insightful, cutting-edge, and well-structured knowledge or curricula. However, current centralized video content production platforms (like YouTube, Udemy, etc.) take too much of the profit from content producers (research has shown that content creators usually receive only 15% of the value their content creates), and the value generated from a video is not distributed in a timely manner. In order to tackle this unfair value distribution, we built the decentralized platform EDU.IO, where video content is backed by its digital asset as an NFT (copyright protection!) and fractionalized into tokens, and which creates direct connections between content creators and viewers/fans (no middlemen anymore!), maximizing the value of the content made by creators. ## What it does EDU.IO is a decentralized educational video streaming media platform & fractionalized NFT exchange that empowers the creator economy and redefines knowledge value distribution via smart contracts. * As an educational hub, EDU.IO is a decentralized platform of high-quality educational videos on disruptive innovations and hot topics like the metaverse, 5G, IoT, etc. * As a booster of the creator economy, once a creator uploads a video (or course series), it will be minted as an NFT (with copyright protection) and fractionalized into multiple tokens. Our platform will conduct a mini-IPO for each piece of content they produce - a bid for fractionalized NFTs. The value of each video token is determined by the number of views over a certain time interval, and token owners (who can be creators, viewers, fans, or investors) can promote the content they own to increase its value, and trade these tokens to earn money or make other investments (more liquidity!!). * By the end of the week, the value generated by each video NFT will be distributed via smart contracts to the copyright / fractionalized NFT owners of each video. Overall we're hoping to build an ecosystem with more engagement between viewers and content creators, and our three main target users are: * 1. Instructors or content creators: their video content gets copyright protection via NFTs, and they get fairer value distribution and more liquidity compared to using large centralized platforms. * 2. Fans or content viewers: they can directly interact with and support content creators, and the fee will be sent directly to the copyright owners via smart contract. * 3. Investors: a lower barrier to investment, where anyone can own just a fragment of a piece of content. People can also bid or trade on a secondary market. ## How we built it * Frontend in HTML, CSS, SCSS, Less, React.JS * Backend in Express.JS, Node.JS * ELUV.IO for minting video NFTs (eth-based) and for playing streaming videos quickly with high quality & low latency * CockroachDB (a distributed SQL DB) for storing structured user information (name, email, account, password, transactions, balance, etc.) * IPFS & Filecoin (distributed protocol & data storage) for storing video/course previews (decentralization & anti-censorship) ## Challenges we ran into * Transition from design to code * CockroachDB has an extensive & complicated setup, which requires other extensions and stacks (like Docker); this setup phase caused a lot of problems locally on different computers.
* IPFS initially had setup errors as we had no access to the given ports → we modified the original access files to use different ports and get access. * An error in Eluv.io's documentation, but the Eluv.io mentor was very supportive :) * The merging process was difficult when we attempted to put all the features (Frontend, IPFS+Filecoin, CockroachDB, Eluv.io) into one ultimate full-stack project, as we had worked separately and locally * Sometimes we found the documentation hard to read and understand - for a lot of problems we encountered, the doc/forum says DO this rather than RUN this, where the guidance is not specific enough, and we had to spend a lot of extra time researching & debugging. Also, since not a lot of people are familiar with the API, it was hard to find the exact issues we faced. Of course, the staff were very helpful and solved a lot of problems for us :) ## Accomplishments that we're proud of * Our idea! Creative, unique, revolutionary. DeFi + Education + Creator Economy * Learned new technologies like IPFS, Filecoin, Eluv.io, CockroachDB in one day * Successful integration of each member's work into one big full-stack project ## What we learned * More in-depth knowledge of cryptocurrency, IPFS, NFTs * Different APIs and their functionalities (strengths and weaknesses) * How to combine different subparts with different functionalities into a single application in a project * Learned how to communicate efficiently with team members whenever there is a misunderstanding or difference in opinion * Make sure we know what is going on within the project through active communication, so that when we detect a potential problem, we solve it right away instead of waiting until it produces more problems * Different hashing methods that are currently popular in the “crypto world”, such as multihash with CIDs, IPFS's own hashing system, etc., all of which are beyond our prior knowledge, which was limited to SHA-256 * The awesomeness of NFT fragmentation - we believe it has great potential in the future * Learned the concept of a decentralized database, which is the direct opposite of the centralized data-bank structure that most of the world is using ## What's next for EDU.IO * Implement NFT fragmentation (fractionalized tokens) * Improve the trading and secondary market by adding more features like additional graphs * Smart contract development in Solidity for value distribution based on the fractionalized tokens people own * Formulation of more complete rules and regulations - the current trading prices of fractionalized tokens are based on auction transactions, and eventually we hope it can become a free secondary market (just like the stock market)
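To make the weekly value-distribution rule concrete, here is a toy worked example in Python: a video's weekly earnings are split pro-rata across fractionalized token holders. The numbers and the flat per-view rate are made-up assumptions, and on-chain this logic would live in the planned Solidity smart contract rather than Python.

```python
# Toy illustration of pro-rata weekly payouts to fractional token holders.
def distribute_weekly_value(views, value_per_view, holdings):
    """holdings maps wallet -> number of fractional tokens held for one video NFT."""
    pool = views * value_per_view
    total_tokens = sum(holdings.values())
    return {wallet: round(pool * tokens / total_tokens, 2)
            for wallet, tokens in holdings.items()}

holdings = {"creator_wallet": 60, "fan_wallet": 25, "investor_wallet": 15}
print(distribute_weekly_value(views=120_000, value_per_view=0.01, holdings=holdings))
# -> creator gets 60% of $1200 ($720); the fan $300; the investor $180
```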
**Track: Education** ## Inspiration Looking back at how we learnt to code, we found that the most difficult aspect of the learning process was to "trace" through the code. By "tracing", we mean mentally single-stepping through lines of code and keeping track of various variables. With this application, we hope to simplify the process of teaching kids the logic behind simple programmatic structures. ## What it does Once loaded into the game, the player (represented by a single stick figure) is presented with a series of options for where they should go next. Using letter keys on their keyboard (a, b, c), the player selects which part of the code they think will be executed next. The player is given 3 lives, which are decremented for every incorrect choice they make. ## How I built it The primary languages we used were HTML5 + CSS (to make it look decent) with JavaScript for the interactive portion. We used PIXI as the game engine, which simplified a lot of the animation and rendering issues. ## Challenges I ran into Not a technical challenge, but 3/4 of our team got sick within the first 3 hours of the hackathon. Luckily, we were able to pull through and build Tracer! On a technical note, animating our little stick man was pretty challenging, given that none of us had in-depth experience with graphics animation using JS beforehand. ## Accomplishments that I'm proud of Our stick man animations look really nice. The concept and potential impact of this game are also something we're proud of, since it emphasizes aspects of coding that are often overlooked, such as reading and comprehending code mentally. ## What I learned A majority of our team walked into this project with little to no knowledge of JavaScript and the corresponding animation libraries. Going through the project, we had to cross-apply our knowledge of other languages and frameworks to build this app. ## What's next for Tracer We originally had a stretch goal of linking our app with multiple mobile devices using WebSocket, allowing for a multiplayer experience. Additionally, we would also clean up the UI and add fancier animations. Finally (if we haven't implemented this yet), we'd add simple sound effects (free open-source sounds are available from [zapsplat](www.zapsplat.com)). Some other things we would've implemented given more time: * restart button * interpreter for custom code input * local/remote scoreboard * multiplayer on a single local server (multiple sets of input on the same computer)
## Inspiration During this COVID-19 pandemic, our team has noticed the high demand for essentials such as face masks and the importance of lending a hand to those most in need. When people come together as a whole, great things happen. What better way than to create an application to connect eager volunteers with organizations that are providing aid to those being impacted by COVID? ## What it does Our application is simple and easy to use. Once a user signs up as a volunteer, they will fill out a survey, get matched to organizations, then have the option to schedule volunteer sessions. The sign-up form has some checkboxes for users to select from. This will allow us to have a better idea of what type of skills they have in order to match them with an organization. Opportunities users can select from include joining an organization to make face masks, volunteering to pick up groceries for the elderly, donating food to a local food bank, and more! ## How I built it We built this application using the React web framework, the Bootstrap CSS framework, HTML, and CSS; for the back end we plan to integrate SQL for maintaining the database. To make our application scalable, we plan to push it to AWS in the future. This will help us develop a scalable cloud-based application with built-in AWS features that can help deploy things efficiently with a great user experience. ## Challenges I ran into It is not easy for some to work virtually or to work across different time zones. Our team has been impacted by this as well. Additionally, finding a team was also a challenge, as was having members drop out for personal or educational reasons. Nonetheless, we are still very passionate about our idea and plan to invest more time into this project. With more time, our team would be able to finalize this idea and connect people for good causes. ## Accomplishments that I'm proud of We are proud of working through the barriers we have faced over the course of this hackathon. As well as creating a prototype of our idea, we visualized an application that can help save lives! ## What I learned Our team learned about React and how to use Git commands to push code to GitHub. It is a very important skill when developing web applications. Together, we coded live so that we could view the React environment, the app's components, and how to create files. We learned that it is not easy to work virtually; however, it is feasible with a team that shares a passion for helping the community. ## What's next for COVAID AWESOMENESS and IMPACT. COVAID is not over yet; with more time we plan to complete this application in React, move it to AWS, and scale the application to other nations including Canada, Europe, Central America, South America, Asia, and many more parts of the world. Connecting people to help those in need during this pandemic is our mission, and with time, dedication, effort, focus, and determination we will get there. Thank you!
## Inspiration

We had originally planned on creating a healthcare management app, but we attended a brainstorming session on the first day of the hackathon and were immediately inspired by the idea of "Tinder for board games." We liked the idea because it was light-hearted and a fun entertainment project to work on, and all of us would actively use the app if it existed. It is an idea that allows us to connect in real life, in this digitalized world.

## What it does

The web app matches users with others in their area who have similar interests in board games, card games, RPGs - you name it - and allows them to connect. Users are shown others nearby who are interested in similar games, and the app facilitates exchanging contact information so they can meet up.

## How I built it

On the front end, we used React and a framework for React called MaterialUI. On the back end, we hosted an AWS server with a MySQL database, where we stored user information such as username, password, name, interests, location and so on. We communicate from the front end to the back end using JavaScript that calls PHP files, which in turn talk to our server and database. The idea was to match people through our queries: users with similar interests, locations, and so on are matched together and returned to the website.

## Challenges I ran into

We had never worked with React before, so it was challenging to understand the intricacies of not only React but also MaterialUI, which we understood to be similar to Bootstrap for React. We decided to use MaterialUI, a relatively small framework, in order to better style and present our app. Unfortunately, it did not have helpful documentation on issues such as managing and serving form data. On the back end, we ran into several server errors that we were unable to resolve, such as *Error 500: Internal Server Error*. As a result, we are presenting and submitting our design and front end here to display our ideas.

## Accomplishments that I'm proud of

We are proud that we learned some new frameworks and became familiar with the syntax and styling of React applications. We all came across new languages and frameworks and are proud that we were able to make something with the new information. We're also proud of our teamwork and collaboration efforts, and how well we were able to work with strangers from around the world!

## What I learned

We each learned new languages and frameworks, both from the internet and documentation and from one another. One of the main lessons we learned was a developer classic: we should prioritize making sure our separate roles on the front and back end integrate well.

## What's next for PlayMate

There are many improvements we would like to make, including, on a basic level, working out the server errors and posting data from React. Once this is worked out, we could make a much more sophisticated app, including location with GPS, social media integration, ID security checks, matching based on categories of games, multi-page support, a built-in chat using socket.io technology, etc.
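As a rough illustration of the front-end-to-PHP hand-off described above, the matching call could look something like this. The endpoint name, field names, and response shape are assumptions for the sketch, not our actual API.

```javascript
// Hypothetical call from the React front end to a PHP matcher script.
async function fetchMatches(profile) {
  const form = new FormData();
  form.append("username", profile.username);
  form.append("location", profile.location);
  form.append("interests", profile.interests.join(","));

  const res = await fetch("/api/match.php", { method: "POST", body: form });
  if (!res.ok) throw new Error(`Server error: ${res.status}`); // e.g. the 500s we hit
  return res.json(); // nearby users with overlapping game interests
}

// Example usage inside a handler:
// fetchMatches({ username: "alex", location: "Toronto", interests: ["Catan", "D&D"] })
//   .then(showMatches)
//   .catch(console.error);
```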
# Catch! (Around the World)

## Our Inspiration

Catch has to be one of our favourite childhood games. Something about just throwing and receiving a ball does wonders for your serotonin. Since all of our team members have relatives throughout the entire world, we thought it'd be nice to play catch with those relatives that we haven't seen due to distance. Furthermore, we're all learning to social distance (physically!) during this pandemic, so who says we can't play a little game while social distancing?

## What it does

Our application uses AR and Unity to allow you to play catch with another person from somewhere else on the globe! You can tap a button to throw a ball (or a random object) off into space, and the person you send the ball/object to will be able to catch it and throw it back. We also allow users to chat with one another using our web-based chat application, so they can keep some commentary going while they play catch.

## How we built it

For the AR functionality of the application, we used **Unity** with **ARFoundations** and **ARKit/ARCore**. To record the user sending the ball/object to another user, we used a **Firebase Real-time Database** back end that allowed users to create and join games/sessions and communicated when a ball was "thrown". We also utilized **EchoAR** to create/instantiate the different 3D objects that users can choose to throw. Furthermore, for the chat application, we developed it using **Python Flask**, **HTML** and **Socket.io** in order to create bi-directional communication between the web user and the server.

## Challenges we ran into

Initially we had a separate idea for what we wanted to do in this hackathon. After a couple of hours of planning and developing, we realized that our goal was far too complex and too difficult to complete in the given time frame. As such, our biggest challenge was figuring out a project that was doable within the time of this hackathon. This ties into another challenge we ran into: initially creating the application and getting through the learning portion of the hackathon. We did not have experience with some of the technologies we were using, so we had to overcome the inevitable learning curve. There was also some difficulty learning how to use the EchoAR API with Unity, since it had a specific method of generating the AR objects. However, we were able to use the tool without investigating too far into the code.

## Accomplishments

* Working Unity application with AR
* Use of EchoAR and integrating it with our application
* Learning how to use Firebase
* Creating a working chat application between multiple users
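Here is a minimal sketch of how the "throw" hand-off through the Firebase Real-time Database might look on the web side, written against the v8-style JavaScript SDK. The session path, payload shape, and helper names are assumptions; in our project the catching side is handled in Unity.

```javascript
// Hypothetical hand-off of a thrown object through Firebase Realtime Database.
const sessionRef = firebase.database().ref(`sessions/${sessionId}/throws`);

// Sender: record that an object was thrown toward the other player.
function throwObject(objectId) {
  sessionRef.push({ from: playerId, objectId, thrownAt: Date.now() });
}

// Receiver: whenever a new throw appears, spawn the incoming object locally.
sessionRef.on("child_added", (snapshot) => {
  const t = snapshot.val();
  if (t.from !== playerId) {
    spawnIncomingObject(t.objectId); // placeholder for the AR-side spawn logic
  }
});
```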
losing
## Inspiration I ran out of time RIP ## What it does ## How we built it ## Challenges we ran into ## Accomplishments that we're proud of ## What we learned ## What's next for Didn't Finish On Time
## Inspiration ## What it does ## How we built it ## Challenges we ran into ## Accomplishments that we're proud of ## What we learned ## What's next for EduBate World domination
## Inspiration

This is a project that was given to me by an organization, and my colleagues inspired me to build it.

## What it does

It can remind you of what you have to do in the future and also lets you set the time by which each task is to be done.

## How we built it

I built it as a command-line utility using Python.

## Challenges we ran into

There were many challenges, such as storing data in a file, and many bugs came up in the middle of the program.

## Accomplishments that we're proud of

I am proud that I made this real-time project that reminds a person to do their tasks.

## What we learned

I learned more about building command-line utilities in Python.

## What's next for Todo list

Next, I am working on various projects such as a virtual assistant and game development.
losing
## Inspiration

Life is short and fast-paced, filled with fleeting moments of joy, achievements, and connection. During Hack the North, our team met so many amazing and inspiring people in such a short amount of time. It struck us how easy it is to forget these meaningful interactions and experiences as life rushes on. This inspired us to create Flashback: a VR experience designed to memorialize these cherished moments and allow users to revisit them in a deeply immersive and personalized museum. Unlike traditional social media, which encourages constant sharing with others, Flashback offers a personal and introspective journey through your own memories. It's designed for the individual, allowing users to relive their most cherished moments in an immersive, meaningful way.

#### A quote that resonated with us this weekend:

Life is not measured by time. It is measured by moments. There is a limit to how much you can embrace a moment. But there is no limit to how much you can appreciate it.

Bonus: Great for ~~us~~ forgetful folks!

## What it does

Flashback is a VR experience that transforms your personal memories and achievements into interactive, immersive museum exhibits. Users can upload photos, videos, personal audio clips, and music to create unique 3D galleries, where each memory comes to life. As you walk up to an exhibit, specific music, audio clips, and captions are triggered, bringing the memory to life in a dynamic way. Memories can also be grouped into collections. Instead of just scrolling through pictures, Flashback lets you step into your memories: hear familiar voices, see cherished moments, and relive experiences in a fully immersive environment. Additionally, there is a web app where users can upload, update, and maintain their growing museum of memories. Flashback evolves with you over time, offering a place to revisit positive memories on difficult days and preserve fleeting moments like our time at Hack the North.

## How we built it

Flashback is built with React, Node.js, JavaScript, Express.js, Convex, HTML, CSS, the Spotify API, and Material UI for the web app's front and back end. The VR experience is developed using Unity, .NET, and C#, with testing done on a Meta Quest VR headset. Our mascot Framey was drawn up by our teammate Jenn.

## Challenges we ran into

* Picking up Convex
* Designing in Figma for the first time
* Unity and C# are HARD (our team's first time making a VR project)
* Learning new tech is hard
* Merge requests on the front end are hard
* Sleep deprivation
* Figuring out how to connect and integrate the client, server, and VR

## Accomplishments that we're proud of

* Everything we made! We worked very hard!
* Adapting and persevering through a lot of roadblocks this weekend
* Creating a super cool VR experience (shoutout to Alan!!!)
* A first-time designer making a hi-fidelity mock-up of the web app and VR user flows in Figma
* Spending time together and having fun at Hack the North 2024

## What we learned

Everyone on our team had experience in different technologies, but this weekend we each tried doing something new: using Convex DB integration for the first time, designing for the first time, learning C# and Unity, trying out front-end development, and creating our first VR project! Additionally, we learned that Unity is hard, and that doing research and communicating regularly about feasibility is extremely important.

## What's next for Flashback

* Integrate authentication and allow users to visit other museums!
* More customizability: users can choose their own VR assets to personalize their space and memories
* More fields for memories: multimedia, video, and personal audio upload
* More interactivity in the VR environment
## Inspiration The memory palace, also known as the method of loci, is a technique used to memorize large amounts of information, such as long grocery lists or vocabulary words. First, think of a familiar place in your life. Second, imagine the sequence of objects from the list along a path leading around your chosen location. Lastly, take a walk along your path and recall the information that you associated with your surroundings. It's quite simple, but extraordinarily effective. We've seen tons of requests on Internet forums for a program that can generate a simulator to make it easier to "build" the palace, so we decided to develop an app that satisfies this demand — and for our own practicality, too. ## What it does Our webapp begins with a list provided by the user. We extract the individual words from the list and generate random images of these words from Flickr, a photo-sharing website. Then, we insert these images into a Google Streetview map that the user can walk through. The page displays the Google Streetview with the images. When walking near a new item from his/her list, a short melody (another mnemonic trick) is played based on the word. As an optional feature of the program, the user can take the experience to a whole new level through Google Cardboard by accessing the website on a smart device. ## How we built it We started by searching for two APIs: one that allows for 3D interaction with an environment, and one that can find image URLs off the web based on Strings. For the first, we used Google Streetview, and for the second, we used a Flickr API. We used the Team Maps Street Overlay Demo as a jumping off point for inserting images into street view. Used JavaScript, HTML, CSS ## Challenges we ran into All of us are very new to JavaScript. It was a struggle to get different parts of the app to interact with each other asynchronously. ## Accomplishments that we're proud of Building a functional web app with no prior experience Creating melodies based on Strings Virtual reality rendering using Google Cardboard Website design ## What we learned JavaScript, HTML, CSS ## What's next for Souvenir Mobile app More accurate image search Integrating jingles
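As a rough sketch of the image step described above, here is how a Flickr search for one list word might look in JavaScript. The API key is a placeholder and the photo-URL pattern is the commonly documented live.staticflickr.com form; treat the details as assumptions rather than our exact code.

```javascript
// Hypothetical lookup: search Flickr for a word and build a displayable photo URL.
const FLICKR_KEY = "YOUR_FLICKR_API_KEY"; // placeholder

async function imageForWord(word) {
  const params = new URLSearchParams({
    method: "flickr.photos.search",
    api_key: FLICKR_KEY,
    text: word,
    per_page: "1",
    format: "json",
    nojsoncallback: "1",
  });
  const res = await fetch(`https://api.flickr.com/services/rest/?${params}`);
  const data = await res.json();
  const p = data.photos.photo[0];
  // Image URL assembled from the photo record
  return `https://live.staticflickr.com/${p.server}/${p.id}_${p.secret}.jpg`;
}

// imageForWord("apples").then((url) => overlayOnStreetView(url)); // overlay helper is ours, hypothetical
```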
## What it is 🕵️ MemoryLane is a unique and innovative mobile application designed to help users capture and relive their precious moments. Instead of being the one to curate an image for others, with a nostalgic touch, the app provides a personalized space for friends to document and remember shared memories ~ creating a digital journey through their life experiences. Whether it's a cherished photo, an audio clip, a video or a special note, MemoryLane allows users to curate their memories in a visually appealing and organized manner. With its user-friendly interface and customizable features, MemoryLane aims to be a go-to platform for individuals seeking to celebrate, reflect upon, and share the meaningful moments that shape their lives. ## Inspiration ✨ The inspiration behind MemoryLane was born from a recognition of the impact that modern social media can have on our lives. While social platforms offer a convenient way to connect with others, they often come with the side effect of overwhelming timelines, constant notifications, and FOMO. In an age where online interactions can sometimes feel fleeting and disconnected, MemoryLane seeks to offer a refuge—a space where users can curate and cherish their memories without the distractions of mainstream social media. The platform encourages users to engage in a more mindful reflection of their life experiences, fostering a sense of nostalgia and a deeper connection to the moments that matter. ## What it does 💻 **Home:** * The Home section serves as the main dashboard where users can scroll through a personalized feed of their memories. This is displayed in chronological order of memories that haven't been viewed yet. * It fetches and displays user-specific content, including photos, notes, and significant events, organized chronologically. **Archive:** * The Archive section provides users with a comprehensive repository of all their previously viewed memories * It implements data retrieval mechanisms to fetch and display archived content in a structured and easily accessible format * [stretch goal] include features such as search functionality and filtering options to enhance the user's ability to navigate through their extensive archive **Create Memory:** * The core feature of MemoryLane enables users to add new memories to share with other users * Includes multi-media support **Friends:** * The Friends section focuses on social interactions, allowing users to connect with friends * Unlike other social media, we do not support likes, comments or sharing in hopes of being motivation to reach out to the friend who shared a memory on other platforms **Settings:** * Incorporates user preferences, allowing adjustments to account settings including a filter for memories to be shared, incorporating Cohere's LLM to ensure topics marked as sensitive or toxic are not shown on Home feed ## How we built it 🔨 **Frontend:** React Native (We learned it during the hackathon!) **Backend:** Node.js, AWS, Postgres In this project, we utilized React Native for the frontend, embracing the opportunity to learn and apply it during the hackathon. On the backend, we employed Node.js, leveraged the power of AWS services (S3, AWS-RDS (postgres), AWS-SNS, EventBridge Scheduler). Our AWS solution comprised the following key use-cases: * AWS S3 (Simple Storage Service): Used to store and manage static assets, providing a reliable and scalable solution for handling images, videos, and other media assets in our application. 
* AWS-RDS (Relational Database Service): Used to maintain a scalable and highly available PostgreSQL database backend.
* AWS-SNS (Amazon Simple Notification Service): Played a crucial role in enabling push notifications, allowing us to keep users informed and engaged with timely updates.
* AWS EventBridge Scheduler: Used to automate scheduled tasks and events within our application. This included managing background processes, triggering notifications, and ensuring seamless execution of time-sensitive operations, such as sending memories.

## Challenges we ran into ⚠️

* Finding and cleaning a data set, and using the Cohere API
* AWS connectivity
  + One significant challenge stemmed from configuring the AWS PostgreSQL database for optimal compatibility with Sequelize. Navigating the AWS environment and configuring the necessary settings, such as security groups, database credentials, and endpoint configurations, required careful attention to detail. Ensuring that the AWS infrastructure was set up to allow secure and efficient communication with our Node.js application became a pivotal aspect of the connectivity puzzle.
  + Furthermore, Sequelize, being a powerful Object-Relational Mapping (ORM) tool, introduced its own set of challenges. Mapping the database schema to Sequelize models, handling associations, and ensuring that Sequelize was configured correctly to interpret PostgreSQL-specific data types were crucial aspects. Dealing with intricacies in Sequelize's configuration, such as connection pooling and dialect-specific settings, added an additional layer of complexity.
* React Native issues
  + There were many deprecated and altered libraries, so as first-time learners it was very hard to adjust.
  + Expo Go's default error is "Keep text between the tags", but this message is non-descriptive and usually just comes down to whitespace; VS Code would not flag it, which led to extensive debugging.
* Deploying to Google Play (original plan)
  + :( What happened to free deployment to the Google Play store?
  + After prepping our app for deployment, we ran into a wall in the form of a $25 registration fee; in the spirit of hackathons, we decided this would not be a step we would take.

## Accomplishments that we're proud of 🏆

Our proudest achievement lies in translating a visionary concept into reality. We embarked on a journey that started with a hand-drawn prototype and culminated in the development of a fully functional application ready for deployment on the Play Store. This transformative process showcases our dedication, creativity, and ability to bring ideas to life with precision and excellence.

## What we learned 🏫

* Nostalgia does not have to be sad
* Brain chemistry is unique! How do we form memories, and why might we forget some? :)

## What's next for MemoryLane 💭

What's next for MemoryLane is an exciting journey of refinement and expansion. From discussions with our fellow hackers, we have already seen interest in a social media platform that isn't centred on self-curation. As this was the team's first time developing with React Native, we plan to gather user feedback to enhance the user experience and implement additional features that resonate with our users. This includes refining the media scrolling functionality, optimizing performance, and incorporating more interactive and nostalgic elements.
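To make the S3 piece above concrete, here is a minimal sketch of a media upload from the Node backend using the AWS SDK (v2-style API). The bucket name, key layout, and the multer-style `file` object are assumptions for illustration.

```javascript
// Hypothetical upload of a memory's photo/video/audio asset to S3.
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ region: "us-east-1" });

async function uploadMemoryAsset(userId, memoryId, file) {
  const result = await s3
    .upload({
      Bucket: "memorylane-assets", // placeholder bucket name
      Key: `users/${userId}/memories/${memoryId}/${file.originalname}`,
      Body: file.buffer,           // e.g. from multer's memory storage
      ContentType: file.mimetype,
    })
    .promise();
  return result.Location;          // URL stored alongside the memory row in Postgres
}
```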
winning
``` var bae = require('love.js') ``` ## Inspiration It's a hackathon. It's Valentine's Day. Why not. ## What it does Find the compatibility of two users based on their Github handles and code composition. Simply type the two handles into the given text boxes and see the compatibility of your stacks. ## How I built it The backend is built on Node.js and Javascript while the front-end consists of html, css, and javascript. ## Challenges I ran into Being able to integrate the Github API in our code and representing the data visually. ## What's next for lovedotjs Adding more data from Github like frameworks and starred repositories, creating accounts that are saved to databases and recommending other users at the same hackathon, using Devpost's hackathon data for future hackathons, matching frontend users to backend users, and integrating other forms of social media and slack to get more data about users and making access easier.
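A minimal sketch of the compatibility idea: pull each user's public repos from the GitHub REST API, tally the primary language of each repo, and score the overlap. The scoring formula here is just an illustration, not the one lovedotjs actually uses.

```javascript
// Count primary languages across a user's public repos (GitHub REST API).
async function languageCounts(handle) {
  const res = await fetch(`https://api.github.com/users/${handle}/repos?per_page=100`);
  const repos = await res.json();
  const counts = {};
  for (const repo of repos) {
    if (repo.language) counts[repo.language] = (counts[repo.language] || 0) + 1;
  }
  return counts;
}

// Toy compatibility score: shared languages over total distinct languages.
async function compatibility(handleA, handleB) {
  const [a, b] = await Promise.all([languageCounts(handleA), languageCounts(handleB)]);
  const shared = Object.keys(a).filter((lang) => lang in b).length;
  const total = new Set([...Object.keys(a), ...Object.keys(b)]).size || 1;
  return Math.round((shared / total) * 100); // 0-100 match percentage
}
```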
## Inspiration

One of the most exciting parts of Hackathons is the showcasing of the final product, well-earned after hours upon hours of sleep-deprived hacking. Part of the presentation work lies in the Devpost entry. I wanted to build an application that can rate the quality of a given entry to help people write better Devpost posts, which can help them better represent their amazing work.

## What it does

The Chrome extension can be used on a valid Devpost entry web page. Once the user clicks "RATE", the extension will automatically scrape the relevant text and send it to a Heroku Flask server for analysis. The final score given to a project entry is an aggregate of many factors, such as descriptiveness, the use of technical vocabulary, and the score given by an ML model trained against thousands of project entries. The user can use the score as a reference to improve their entry posts.

## How I built it

I used UiPath as an automation tool to collect, clean, and label data across thousands of projects in major Hackathons over the past few years. After getting the necessary data, I trained an ML model to predict the probability of a given Devpost entry being amongst the winning projects. I also used the data to calculate other useful metrics, such as the distribution of project entry lengths, the average amount of terminology used, etc. These models are then uploaded to a Heroku cloud server, where I can get aggregated ratings for texts using a web API. Lastly, I built a JavaScript Chrome extension that detects Devpost web pages, scrapes data from the page, and presents the ratings to the user in a small pop-up.

## Challenges I ran into

Firstly, I am not familiar with website development. It took me a hell of a long time to figure out how to build a Chrome extension that collects data and uses external web APIs. The data collection part was also tricky. Even with great graphical automation tools at hand, it was still very difficult to do large-scale web scraping for someone relatively inexperienced with website dev like me.

## Accomplishments that I'm proud of

I am very glad that I managed to finish the project on time. It was quite an overwhelming amount of work for a single person. I am also glad that I got to work with data from absolute scratch.

## What I learned

Data collection, hosting an ML model in the cloud, and building Chrome extensions with various features.

## What's next for Rate The Hack!

I want to refine the features and rating scheme.
## Inspiration When we got together for the first time, we instantly gravitated towards project ideas that would appeal to a broad audience, so music as a theme for our project was a very natural choice. Originally, our ideas around a music-based project were much more abstract, incorporating some notes of music perception. We eventually realized that there were too many logistical hurdles surrounding each that they would not be feasible to do during the course of the hackathon, and we realized this as we were starting to brainstorm ideas for social media apps. We started thinking of ideas for music-based social media, and that's when we came up with the idea of making an app where people would judge other's music tastes in a lighthearted fashion. ## What it does The concept of Rate-ify is simple; users post their Spotify playlists and write a little bit about them for context. Users can also view playlists that other people have posted, have a listen to them, and then either upvote or downvote the playlist based on their enjoyment. Finally, users can stay up to date on the most popular playlists through the website's leaderboard, which ranks all playlists that have been posted to the site. ## How we built it and what we learned Our team learned more about tooling surrounding web dev. We had a great opportunity to practice frontend development using React and Figma, learning practices that we will likely be using in future projects. Some members were additionally introduced to tools that they had never used before this hackathon, such as databases. ## Challenges we ran into Probably the biggest challenge of the hackathon was debugging the frontend. Our team came from a limited background, so being able to figure out how to successfully send data from the backend to the frontend could sometimes be a hassle. The quintessential example of this was when we were working on the leaderboard feature. Though the server was correctly returning ranking data, we had lots of trouble getting the frontend to successfully receive the data so that we could display it, and part of this was because of the server returning ranking data as a promise. After figuring out how to correctly return the ranking data without promises, we then had trouble storing that data as part of a React component, which was fixed by using effect hooks. ## Accomplishments that we're proud of For having done limited work on frontend for past projects, we ended up very happy with how the UI came out. It's a very simple and charming looking UI. ## What's next for Rate-ify There were certainly some features that we wanted to include that we didn't end up working on, such as a mode of the app where you would see two playlists and say which one you would prefer and a way of allowing users to identify their preferred genres so that we could categorize the number of upvotes and downvotes of playlists based on the favorite genres of the users who rated them. If we do continue working on Rate-ify, then there are definitely more ways than one that we could refine and expand upon the basic premise that we've developed over the course of the last two days, so that would be something that we should consider.
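The promise issue described above is a common React pitfall: render the resolved data, not the promise. A minimal sketch of the fix with an effect hook follows; the endpoint path and response fields are assumptions rather than Rate-ify's real API.

```javascript
// Hypothetical leaderboard component: fetch rankings once, store the resolved
// data in state, and render from state rather than from the pending promise.
import { useEffect, useState } from "react";

export default function Leaderboard() {
  const [rankings, setRankings] = useState([]);

  useEffect(() => {
    fetch("/api/leaderboard")           // placeholder endpoint
      .then((res) => res.json())        // resolve before touching state
      .then(setRankings)
      .catch(console.error);
  }, []);

  return (
    <ol>
      {rankings.map((playlist) => (
        <li key={playlist.id}>
          {playlist.name}: {playlist.upvotes - playlist.downvotes} points
        </li>
      ))}
    </ol>
  );
}
```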
partial
## Inspiration COVID revolutionized the way we communicate and work by normalizing remoteness. **It also created a massive emotional drought -** we weren't made to empathize with each other through screens. As video conferencing turns into the new normal, marketers and managers continue to struggle with remote communication's lack of nuance and engagement, and those with sensory impairments likely have it even worse. Given our team's experience with AI/ML, we wanted to leverage data to bridge this gap. Beginning with the idea to use computer vision to help sensory impaired users detect emotion, we generalized our use-case to emotional analytics and real-time emotion identification for video conferencing in general. ## What it does In MVP form, empath.ly is a video conferencing web application with integrated emotional analytics and real-time emotion identification. During a video call, we analyze the emotions of each user sixty times a second through their camera feed. After the call, a dashboard is available which displays emotional data and data-derived insights such as the most common emotions throughout the call, changes in emotions over time, and notable contrasts or sudden changes in emotion **Update as of 7.15am: We've also implemented an accessibility feature which colors the screen based on the emotions detected, to aid people with learning disabilities/emotion recognition difficulties.** **Update as of 7.36am: We've implemented another accessibility feature which uses text-to-speech to report emotions to users with visual impairments.** ## How we built it Our backend is powered by Tensorflow, Keras and OpenCV. We use an ML model to detect the emotions of each user, sixty times a second. Each detection is stored in an array, and these are collectively stored in an array of arrays to be displayed later on the analytics dashboard. On the frontend, we built the video conferencing app using React and Agora SDK, and imported the ML model using Tensorflow.JS. ## Challenges we ran into Initially, we attempted to train our own facial recognition model on 10-13k datapoints from Kaggle - the maximum load our laptops could handle. However, the results weren't too accurate, and we ran into issues integrating this with the frontend later on. The model's still available on our repository, and with access to a PC, we're confident we would have been able to use it. ## Accomplishments that we're proud of We overcame our ML roadblock and managed to produce a fully-functioning, well-designed web app with a solid business case for B2B data analytics and B2C accessibility. And our two last minute accessibility add-ons! ## What we learned It's okay - and, in fact, in the spirit of hacking - to innovate and leverage on pre-existing builds. After our initial ML model failed, we found and utilized a pretrained model which proved to be more accurate and effective. Inspiring users' trust and choosing a target market receptive to AI-based analytics is also important - that's why our go-to-market will focus on tech companies that rely on remote work and are staffed by younger employees. ## What's next for empath.ly From short-term to long-term stretch goals: * We want to add on AssemblyAI's NLP model to deliver better insights. For example, the dashboard could highlight times when a certain sentiment in speech triggered a visual emotional reaction from the audience. 
* We want to port this to mobile and enable camera feed functionality, so those with visual impairments or other difficulties recognizing emotions can rely on our live detection for in-person interactions. * We want to apply our project to the metaverse by allowing users' avatars to emote based on emotions detected from the user.
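For a sense of what the per-frame pass looks like with TensorFlow.js, here is a minimal sketch. The model path, 48x48 grayscale input shape, label order, and sampling loop are assumptions standing in for the pretrained model we ended up using.

```javascript
// Hypothetical browser-side emotion sampling loop with TensorFlow.js.
import * as tf from "@tensorflow/tfjs";

const EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]; // illustrative label order
const history = []; // fed to the post-call analytics dashboard

export async function startDetection(videoEl) {
  const model = await tf.loadLayersModel("/models/emotion/model.json"); // placeholder path

  setInterval(() => {
    const scores = tf.tidy(() => {
      const frame = tf.browser.fromPixels(videoEl);
      const input = tf.image
        .resizeBilinear(frame, [48, 48]) // assumed input size
        .mean(2)                          // grayscale
        .expandDims(0)
        .expandDims(-1)
        .div(255);
      return Array.from(model.predict(input).dataSync());
    });
    history.push(EMOTIONS[scores.indexOf(Math.max(...scores))]);
  }, 1000 / 60); // roughly sixty samples per second, as in the write-up
}
```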
## Inspiration While we were doing preliminary research, we had found overwhelming amounts of evidence of mental health deterioration as a consequence of life-altering lockdown restrictions. Academic research has shown that adolescents depend on friendship to maintain a sense of self-worth and to manage anxiety and depression. Intimate exchanges and self-esteem support significantly increased long-term self worth and decreased depression. While people do have virtual classes and social media, some still had trouble feeling close with anyone. This is because conventional forums and social media did not provide a safe space for conversation beyond the superficial. User research also revealed that friendships formed by physical proximity don't necessarily make people feel understood and resulted in feelings of loneliness anyway. Proximity friendships formed in virtual classes also felt shallow in the sense that it only lasted for the duration of the online class. With this in mind, we wanted to create a platform that encouraged users to talk about their true feelings, and maximize the chance that the user would get heartfelt and intimate replies. ## What it does Reach is an anonymous forum that is focused on providing a safe space for people to talk about their personal struggles. The anonymity encourages people to speak from the heart. Users can talk about their struggles and categorize them, making it easy for others in similar positions to find these posts and build a sense of closeness with the poster. People with similar struggles have a higher chance of truly understanding each other. Since ill-mannered users can exploit anonymity, there is a tone analyzer that will block posts and replies that contain mean-spirited content from being published while still letting posts of a venting nature through. There is also ReCAPTCHA to block bot spamming. ## How we built it * Wireframing and Prototyping: Figma * Backend: Java 11 with Spring Boot * Database: PostgresSQL * Frontend: Bootstrap * External Integration: Recaptcha v3 and IBM Watson - Tone Analyzer * Cloud: Heroku ## Challenges we ran into We initially found it a bit difficult to come up with ideas for a solution to the problem of helping people communicate. A plan for a VR space for 'physical' chatting was also scrapped due to time constraints, as we didn't have enough time left to do it by the time we came up with the idea. We knew that forums were already common enough on the internet, so it took time to come up with a product strategy that differentiated us. (Also, time zone issues. The UXer is Australian. They took caffeine pills and still fell asleep.) ## Accomplishments that we're proud of Finishing it on time, for starters. It felt like we had a bit of a scope problem at the start when deciding to make a functional forum with all these extra features, but I think we pulled it off. The UXer also iterated about 30 screens in total. The Figma file is *messy.* ## What we learned As our first virtual hackathon, this has been a learning experience for remote collaborative work. UXer: I feel like i've gotten better at speedrunning the UX process even quicker than before. It usually takes a while for me to get started on things. I'm also not quite familiar with code (I only know python), so watching the dev work and finding out what kind of things people can code was exciting to see. # What's next for Reach If this was a real project, we'd work on implementing VR features for those who missed certain physical spaces. 
We'd also try to work out improvements to moderation, and perhaps a voice chat for users who want to call.
## Inspiration ⛹️‍♂️ Regularly courts are getting many cases and currently, it is becoming challenging to prioritize those cases. There are about 73,000 cases pending before the Supreme Court and about 44 million in all the courts of India. Cases that have been in the courts for more than 30 years, as of January 2021. A software/algorithm should be developed for prioritizing and allocations of dates to the cases based on the following parameters: * Time of filing of chargesheet * Severity of the crime and sections involved * Last hearing date * Degree of responsibility of the alleged perpetrators. To provide a solution to this problem, we thought of a system to prioritize the cases, considering various real-life factors using an efficient machine learning algorithm. That's how we came up with **"e-Adalat"** ("digital law court" in English), e-Court Management System. ## What it does? e-Adalat is a platform (website) that prioritizes court cases and suggests the priority order to the judges in which these cases should be heard so that no pending cases will be there and no case is left pending for long periods of time. Judges and Lawyers can create their profiles and manage complete information of all the cases, a lawyer can file a case along with all the info in the portal whereas a judge can view the suggested priority of the cases to be held in court using the ML model, the cases would be automatically assigned to the judge based on their location. The judge and the lawyer can view the status of all the cases and edit them. ## How we built it? While some of the team members were working on the front-end, the other members started creating a dummy dataset and analyzing the best machine learning algorithm, after testing a lot of algorithms we reached to the conclusion that random forest regression is the best for the current scenario, after developing the frontend and creating the Machine Learning model, we started working on the backend functionality of the portal using Node.js as the runtime environment with express.js for the backend logic and routes, this mainly involved authorization of judges and lawyers, linking the Machine Learning model with backend and storing info in database and fetching the information while the model is running. Once, The backend was linked with the Machine Learning model, we started integrating the backend and ML model with the frontend, and that's how we created e-Adalat. ## Challenges we ran into We searched online for various datasets but were not able to find a dataset that matched our requirements and as we were not much familiar with creating a dataset, we learned how to do that and then created a dataset. Later on, we also ran this data through various Machine Learning algorithms to get the best result. We also faced some problems while linking the ML model with the backend and building the Web Packs but we were able to overcome that problem by surfing through the web and running various tests. ## Accomplishments that we're proud of The fact that our offline exams were going on and still we managed to create a full-fledged portal in such a tight schedule that can handle multiple judges and lawyers, prioritize cases and handle all of the information securely from scratch in a short time is a feat that we are proud of, while also at the same time diversifying our Tech-Stacks and learning how to use Machine Learning Algorithms in real-time integrated neatly into our platform! 
## What we learned While building this project, we learned many new things, and to name a few, we learned how to create datasets, and test different machine learning algorithms. Apart from technical aspects we also learned a lot about Law And Legislation and how courts work in a professional environment as our project was primarily focused on law and order, we as a team needed to have an idea about how cases are prioritized in courts currently and what are the existing gaps in this system. Being in a team and working under such a strict deadline along with having exams at the same time, we learned time management while also being under pressure. ## What's next for e-Adalat ### We have a series of steps planned next for our platform : * Improve UI/UX and make the website more intuitive and easy to use for judges and lawyers. * Increase the scope of profile management to different judicial advisors. * Case tracking for judges and lawyers. * Filtering of cases and assignment on the basis of different types of judges. * Increasing the accuracy of our existing Machine Learning model. ## Discord Usernames: * Shivam Dargan - Shivam#4488 * Aman Kumar - 1Man#9977 * Nikhil Bakshi - nikhilbakshi#8994 * Bakul Gupta - Bakul Gupta#5727
winning
## Internet of Things 4 Diabetic Patient Care

## The Story Behind Our Device

One team member heard from his foot doctor the story of a diabetic patient who almost lost his foot due to an untreated foot infection after stepping on a foreign object. Another team member came across a competitive shooter who had his lower leg amputated after an untreated foot ulcer resulted in gangrene. A common symptom in diabetic patients is diabetic neuropathy, which results in loss of sensation in the extremities. This means a cut or a blister on a foot often goes unnoticed and untreated. Occasionally, these small cuts or blisters don't heal properly due to poor blood circulation, which exacerbates the problem and leads to further complications. These complications can result in serious infection and possibly amputation. We decided to make a device to help combat this problem. We invented IoT4DPC, a device that detects abnormal muscle activity caused either by stepping on potentially dangerous objects or by inflammation due to swelling.

## The technology behind it

A muscle sensor attaches to the Nucleo-L496ZG board, which feeds data to an Azure IoT Hub. The IoT Hub, through Trillo, can notify the patient (or a physician, depending on the situation) via SMS that a problem has occurred and the patient needs to get their feet checked or come in to see the doctor.

## Challenges

While the team was successful in prototyping data acquisition with an Arduino, we were unable to build a working prototype with the Nucleo board. We also came across serious hurdles with uploading any sensible data to the Azure IoT Hub.

## What we did accomplish

We were able to set up an Azure IoT Hub and connect the Nucleo board to send JSON packages. We were also able to acquire test data in an Excel file via the Arduino.
## Inspiration

Our inspiration comes from people who require immediate medical assistance when they are located in remote areas. The project aims to reinvent the way people in rural or remote settings, especially seniors who are unable to travel frequently, obtain medical assistance, by remotely connecting them to medical resources available in their nearby cities.

## What it does

Tango is a tool to help people in remote areas (e.g. villagers, people on camping/hiking trips, etc.) get access to direct medical assistance in case of an emergency. The user carries the device, along with a smartwatch, while hiking. If the device senses a sudden fall, the user's vital signs provided by the watch are sent to the nearest doctor/hospital in the area. The doctor can then assist the user in the most appropriate way, since the user's vital signs are relayed directly to them. If there is no response from the user, medical assistance can be sent using their location.

## How we built it

The sensor is built around the Particle Electron kit, which, based on input from an accelerometer and a sound sensor, assesses whether the user has fallen or not. If the user has fallen, signals from this sensor are sent to the doctor along with health data from the smartwatch.

## Challenges we ran into

One of the biggest challenges we ran into was taking the data from the cloud and loading it onto the web page to display it.

## Accomplishments that we are proud of

This was our first experience with the Particle Electron and, for some of us, our first experience with a hardware project.

## What we learned

We learned how to use the Particle Electron.

## What's next for Tango

Integration of the Pebble watch to send the vital signs to the doctors.
## Inspiration

According to the National Council on Aging, every 11 seconds an older adult is treated in the emergency room for a fall, and every 19 minutes an older adult dies from a fall. A third of Americans over the age of 65 accidentally fall each year, and this was estimated to cost more than $67.7 billion by 2020. We wanted to make an IoT solution for the elderly, taking advantage of the Google Cloud Platform to detect these falls more easily and to bring an emergency response to an elderly person who has suffered a fall more quickly.

## What it does

Fallen is a security system that continuously analyzes its environment through a delayed video stream for people who may have fallen. Every 2 seconds, a frame is sent to our Node.js server, which uploads it to Google Cloud Storage; it is then sent through Google Cloud Vision, which returns a set of features that we filter. These features are passed to our own machine learning classifier to determine whether the frame depicts a fall or not. If there is a fall, the system alerts all emergency contacts and sends an audio clip requesting help.

## How we built it

We have a Node.js server that monitors the incoming image feed, connecting to Google Cloud Storage and Google Cloud Vision, and a Flask server that provides the machine learning classifier. We used the Android Things development kit to build a cheap monitoring system that takes a continuous stream of images, which is sent to the Node.js server, uploaded to Google Cloud Storage, and passed to Google Cloud Vision to retrieve a set of features. We filter these features, namely LABEL\_DETECTION and SAFE\_SEARCH\_DETECTION, based on how relevant they are to distinguishing between a fall and not a fall. These features are normalized and passed to the Flask server to classify whether it is a fall or not, and this response is sent back to the Node server. We used the Twilio API so that if a fall is detected, Twilio gives the emergency contact a call with an audio clip requesting help.

## Challenges we ran into

The Android Things camera cable was unreliable and unstable, and was not able to provide a stable stream of images.
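A minimal sketch of the alert path on the Node server follows: get labels from Cloud Vision, ask the Flask classifier for a verdict, and place a Twilio call if a fall is detected. The classifier URL, TwiML URL, and environment variable names are placeholders, and the real pipeline also uses SAFE_SEARCH features and normalization that are omitted here.

```javascript
// Hypothetical fall-check for one uploaded frame.
const vision = require("@google-cloud/vision");
const twilio = require("twilio");

const visionClient = new vision.ImageAnnotatorClient();
const twilioClient = twilio(process.env.TWILIO_SID, process.env.TWILIO_TOKEN);

async function checkFrame(gcsUri) {
  // Label features for the frame stored in Cloud Storage
  const [result] = await visionClient.labelDetection(gcsUri);
  const labels = result.labelAnnotations.map((l) => ({ name: l.description, score: l.score }));

  // Ask the Flask classifier whether these labels look like a fall
  const res = await fetch("http://localhost:5000/predict", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ labels }),
  });
  const { fall } = await res.json();

  if (fall) {
    // Call the emergency contact with a prerecorded help message (TwiML URL is a placeholder)
    await twilioClient.calls.create({
      url: "https://example.com/help-request.xml",
      to: process.env.EMERGENCY_CONTACT,
      from: process.env.TWILIO_NUMBER,
    });
  }
}
```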
winning
## Inspiration

Have you ever seen something really tasty on Instagram but didn't know where to get it? Have you ever craved something but couldn't get it? Maybe it's too far from home, or too expensive. We would like to make good food accessible to everyone. With Coquere, you will be able to make something without even needing to know its name.

## What it does

Coquere is an innovative Android application designed to show users the wonder behind delicious food. With a quick photo of a dish, users will find out how to make their favorite food at home.

## How I built it

Coquere utilizes Google's machine learning and Vision API and Food2Fork's recipe API to power its magic.

## Challenges I ran into

It was our first time using Android Studio to design an app and also our first time using a Google API, but in the process we gained a better understanding of good software design and of how to analyze each step to satisfy users' needs.

## Accomplishments that I'm proud of

We got to appreciate the Google Vision API and how it can identify every single element in a picture. It was our first hackathon, and we are super proud that we got to enjoy the process and come up with a tangible project.

## What's next for Coquere

We have in mind some features that will help us improve Coquere. Allergy filter: allow users to enter their allergy information to filter results accordingly. Dinner Planner: when the user inputs a photo of the interior of their fridge, our app will give suggestions of what to cook with the ingredients that the user already has.
## Inspiration

We wanted a way to promote a healthy diet and lifestyle during these trying times, and we thought a great way to start was to cook your own food instead of ordering takeout.

## What it does

It's Tinder, but for recipes: you either like a recipe and we save it for you, or you pass on it and we remember to exclude it next time.

## How we built it

We used Google's Firebase to hold user data such as the recipes they've liked and the ones they've passed on, a recipe API to obtain recipes from, and Java in Android Studio for the front end.

## Challenges we ran into

Most of our group members were learning Android development in Java for the first time, and implementing best practices for Android, such as MVVM, while choosing the most suitable method for each problem was difficult to digest at the start.

## Accomplishments that we're proud of

Having a working project, learning and implementing the MVVM architectural pattern, and trying out best practices.

## What we learned

We learned how to take information from an API and present it in an application for consumers. It was a good learning experience working with Firebase and experimenting with what Android Studio has to offer.

## What's next for Chef Swipe

Since the API we use limits the number of calls we may perform, we hope to invest in a plan that would allow for more usage. We were not able to create tags that filter which recipes appear in the application for each specific profile, and we hope to implement that in the future.
## Inspiration

While caught up in the excitement of coming up with project ideas, we found ourselves forgetting to follow up on action items brought up in the discussion. We felt that it would come in handy to have our own virtual meeting assistant to keep track of our ideas. We moved on to integrate features like automating the process of creating JIRA issues and providing a full transcript for participants to view in retrospect.

## What it does

*Minutes Made* acts as your own personal team assistant during meetings. It takes meeting minutes, creates transcripts, finds key tags and features, and automates the process of creating Jira tickets for you. It works in multiple spoken languages and uses voice biometrics to identify key speakers. For security, the data is encrypted locally - and since it is serverless, no sensitive data is exposed.

## How we built it

Minutes Made leverages Azure Cognitive Services to translate between languages, identify speakers from voice patterns, and convert speech to text. It then uses custom natural language processing to parse out key issues. Interactions with Slack and Jira are done through STDLIB.

## Challenges we ran into

We originally used Python libraries to manually perform the natural language processing, but found they didn't quite meet our demands for accuracy and latency. We found that Azure Cognitive Services worked better. However, we did end up developing our own natural language processing algorithms to handle some of the functionality as well (e.g. creating Jira issues), since Azure didn't have everything we wanted. As the speech conversion is done in real time, it was necessary for our solution to be extremely performant. We needed an efficient way to store and fetch the chat transcripts. This was a difficult demand to meet, but we managed to rectify our issue with a Redis caching layer that fetches the chat transcripts quickly and persists them to disk between sessions.

## Accomplishments that we're proud of

This was the first time that we all worked together, and we're glad that we were able to get a solution that actually worked and that we would actually use in real life. We became proficient with technology that we'd never seen before and used it to build a nice product and an experience we're all grateful for.

## What we learned

This was a great learning experience for understanding cloud biometrics and speech recognition technologies. We familiarised ourselves with STDLIB and with working with the Jira and Slack APIs. Basically, we learned a lot about the technology we used and a lot about each other ❤️!

## What's next for Minutes Made

Next, we plan to add more integrations to translate more languages and to create GitHub issues, Salesforce tickets, etc. We could also improve the natural language processing to handle more functions and edge cases. As we're using fairly new tech, there's a lot of room for improvement in the future.
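The Redis caching layer mentioned above can be sketched in a few lines of Node with the node-redis v4 client. The key naming and payload shape are assumptions; the idea is simply to append transcript lines cheaply while the meeting is live and read the whole list back when the summary is built.

```javascript
// Hypothetical transcript cache backed by Redis lists (node-redis v4 API).
const { createClient } = require("redis");
const redis = createClient();

async function init() {
  await redis.connect();
}

// Append one recognized line as the meeting progresses (O(1) per insert).
async function appendLine(meetingId, speaker, text) {
  await redis.rPush(
    `transcript:${meetingId}`,
    JSON.stringify({ speaker, text, at: Date.now() })
  );
}

// Read the full transcript back for the post-meeting summary and Jira parsing.
async function fullTranscript(meetingId) {
  const lines = await redis.lRange(`transcript:${meetingId}`, 0, -1);
  return lines.map((line) => JSON.parse(line));
}
```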
losing
## Inspiration

Let's start by taking a look at some statistics on waste from Ontario and Canada. In Canada, only nine percent of plastics are recycled, while the rest is sent to landfills. More locally, in Ontario, over 3.6 million metric tonnes of plastic ended up as garbage due to tainted recycling bins. Tainted recycling bins occur when someone disposes of their waste into the wrong bin, causing the entire bin to be sent to the landfill. Mark Badger, executive vice-president of Canada Fibers, which runs 12 plants that sort about 60 percent of the curbside recycling collected in Ontario, has said that one in three pounds of what people put into blue bins should not be there. This is a major problem, as it is causing our greenhouse gas emissions to grow exponentially. However, if we can reverse this, not only will emissions fall, but according to Deloitte, around 42,000 new jobs will be created. Now let's turn our focus locally. The City of Kingston is seeking input on the implementation of new waste strategies to reach its goal of diverting 65 percent of household waste from landfill by 2025. This project is now in its public engagement phase. That's where we come in.

## What it does

Cycle AI is an app that uses machine learning to classify articles of trash/recyclables in order to raise awareness of what a user throws away. You simply pull out your phone, snap a shot of whatever you want to dispose of, and Cycle AI will tell you where to throw it out as well as what it is that you are throwing out. On top of that, there are achievements for things such as using the app to sort your recycling every day for a certain number of days. You keep track of your achievements and daily usage through a personal account.

## How we built it

In a team of four, we separated into three groups. For the most part, two of us focused on the front end with Kivy, one on UI design, and one on the backend with TensorFlow. From these groups, we divided into subsections that held certain responsibilities, like gathering data to train the neural network. This was done using photos taken of waste picked out of relatively unsorted waste bins around Goodwin Hall at Queen's University. 200 photos were taken for each subcategory, amounting to quite a bit of data by the end of it. The data was used to train the neural network backend. The front end was all programmed in Python using Kivy. After the frontend and backend were completed, a connection was created between them to seamlessly feed data from end to end. This allows a user of the application to take a photo of whatever they want sorted, have the photo fed to the neural network, and then get a message displayed back on the front end. The user can also create an account with a username and password, which they can use to store their number of scans as well as achievements.

## Challenges we ran into

The two hardest challenges we had to overcome as a group were the need to build an adequate dataset and learning the Kivy framework. In our first attempt at gathering a dataset, the images we got online turned out to be too noisy when grouped together. This caused the neural network to become overfit, relying on patterns too heavily. We decided to fix this by gathering our own data. I went around Goodwin Hall and went into the bins to gather "data". After washing my hands thoroughly, I took ~175 photos of each category to train the neural network with real data. This seemed to work well, overcoming that challenge.

The second challenge I, as well as my team, ran into was our limited familiarity with Kivy. For the most part, we had all just begun learning Kivy the day of QHacks. This proved to be quite a time-consuming problem, but we simply pushed through it to get the hang of it.

## 24 Hour Time Lapse

**Below is a 24-hour time-lapse of my team and me at work. The naps on the tables weren't the most comfortable.**

<https://www.youtube.com/watch?v=oyCeM9XfFmY&t=49s>
## Inspiration Every year roughly 25% of recyclable material is not able to be recycled due to contamination. We set out to reduce the amount of things that are needlessly sent to the landfill by reducing how much people put the wrong things into recycling bins (i.e. no coffee cups). ## What it does This project is a lid for a recycling bin that uses sensors, microcontrollers, servos, and ML/AI to determine if something should be recycled or not and physically does it. To do this it follows the following process: 1. Waits for object to be placed on lid 2. Take picture of object using webcam 3. Does image processing to normalize image 4. Sends image to Tensorflow model 5. Model predicts material type and confidence ratings 6. If material isn't recyclable, it sends a *YEET* signal and if it is it sends a *drop* signal to the Arduino 7. Arduino performs the motion sent to it it (aka. slaps it *Happy Gilmore* style or drops it) 8. System resets and waits to run again ## How we built it We used an Arduino Uno with an Ultrasonic sensor to detect the proximity of an object, and once it meets the threshold, the Arduino sends information to the pre-trained TensorFlow ML Model to detect whether the object is recyclable or not. Once the processing is complete, information is sent from the Python script to the Arduino to determine whether to yeet or drop the object in the recycling bin. ## Challenges we ran into A main challenge we ran into was integrating both the individual hardware and software components together, as it was difficult to send information from the Arduino to the Python scripts we wanted to run. Additionally, we debugged a lot in terms of the servo not working and many issues when working with the ML model. ## Accomplishments that we're proud of We are proud of successfully integrating both software and hardware components together to create a whole project. Additionally, it was all of our first times experimenting with new technology such as TensorFlow/Machine Learning, and working with an Arduino. ## What we learned * TensorFlow * Arduino Development * Jupyter * Debugging ## What's next for Happy RecycleMore Currently the model tries to predict everything in the picture which leads to inaccuracies since it detects things in the backgrounds like people's clothes which aren't recyclable causing it to yeet the object when it should drop it. To fix this we'd like to only use the object in the centre of the image in the prediction model or reorient the camera to not be able to see anything else.
## Inspiration 💥 Our inspiration is to alter the way humans have acquired knowledge and skills over the last hundred years. Instead of reading or writing, we devised of a method fo individuals to teach others through communication and mentoring. A way that not only benefits those who learn but also helps them achieve their goals. ## What it does 🌟 Intellex is a diverse skill swapping platform for those eager to learn more. In this era, information is gold. Your knowledge is valuable, and people want it. For the price of a tutoring session, you can receive back a complete and in depth tutorial on whatever you want. Join one on one video calls with safe and rated teachers, and be rewarded for learning more. We constantly move away from agencies and the government and thus Intellex strives to decentralize education. Slowly, the age old classroom is changing. Intellex presents a potential step towards education decentralization by incentivizing education with NFT rewards which include special badges and a leaderboard. ## How we built it 🛠️ We began with planning out our core features and determining what technologies we would use. Later, we created a Figma design to understand what pages we would need for our project, planning our backend integration to store and fetch data from a database. We used Next.js to structure the project which uses React internally. We used TypeScript for type safety across my project which was major help when it came to debugging. Tailwind CSS was leveraged for its easy to use classes. We also utilized Framer Motion for the landing page animations ## Challenges we ran into 🌀 The obstacles we faced were coming up with a captivating idea, which caused us to lose productivity. We've also faced difficult obstacles in languages we're unfamiliar with, and some of us are also beginners which created much confusion during the event. Time management was really difficult to cope with because of the many changes in plans, but  overall we have improved our knowledge and experience. ## Accomplishments that we're proud of 🎊 We are proud of building a very clean, functional, and modern-looking user interface for Intellex, allowing users to experience an intuitive and interactive educational environment. This aligns seamlessly with our future use of Whisper AI to enhance user interactions. To ensure optimized site performance, we're implementing Next.js with Server-Side Rendering (SSR), providing an extremely fast and responsive feel when using the app. This approach not only boosts efficiency but also improves the overall user experience, crucial for educational applications. In line with the best practices of React, we're focusing on using client-side rendering at the most intricate points of the application, integrating it with mock data initially. This setup is in preparation for later fetching real-time data from the backend, including interactive whiteboard sessions and peer ratings. Our aim is to create a dynamic, adaptive learning platform that is both powerful and easy to use, reflecting our commitment to pioneering in the educational technology space. ## What we learned 🧠 Besides the technologies that were listed above, we as a group learned an exceptional amount of information in regards to full stack web applications. This experience marked the beginning of our full stack journey and we took it approached it with a cautious approach, making sure we understood all aspects of a website, which is something that a lot of people tend to overlook. 
We learned about the planning process, backend integration, REST APIs, etc. Most importantly, we learned about the importance of having a cooperative and helpful team that will have your back in building out these complex apps on time. ## What's next for Intellex ➡️ We fully plan to build out the backend of Intellex to allow for proper functionality using Whisper AI. This innovative technology will enhance user interactions and streamline the learning process. Regarding the product itself, there are countless educational features that we want to implement, such as an interactive whiteboard for real-time collaboration and a comprehensive rating system to allow peers to see and evaluate each other's contributions. These features aim to foster a more engaging and interactive learning environment. Additionally, we're exploring the integration of adaptive learning algorithms to personalize the educational experience for each user. This is a product we've always wanted to pursue in some form, and we look forward to bringing it to life and seeing its positive impact on the educational community.
winning
## Inspiration Being a Cal Bear, it was a dream come true for us! Through personal experiences, ## What it does Bind is a screening tool that evaluates one's state of mind in real time. It helps the user to figure out how he/she is feeling and what triggers these feelings. Furthermore, it generates a report that can be shared with therapists. ## How I built it We used the Google Cloud API (Vision) to detect the facial expression of the user as a metric to evaluate their state of mind. In addition, we use NLP to extract the emotional component of what you say. ## Challenges I ran into Time was a constraint for us. If we had more time, we would like to add more features such as mood analytics, chatbot features to let the user express feelings, and a more efficient algorithm to let the code run faster. ## Accomplishments that I'm proud of We learned how to use API calls and design an interactive website. ## What I learned I learned to use Android Studio, Repl.it, API calls, and a set of cool vim commands! ## What's next for Bear Care We plan on implementing the features discussed above that haven't been completed due to time constraints!
## Inspiration As a team of high school students, we understand how challenging it can be to manage stress, anxiety, and other emotional challenges while balancing school, extracurricular activities, and personal life. Music has always been a powerful tool for emotional regulation, but we wanted to take it a step further by integrating technology. This personalized, adaptive music therapy experience was inspired by the potential of combining emotion recognition and music therapy. ## What it does Rest is an emotion-driven music therapy website that provides personalized music therapy sessions based on the user’s current emotional state. By analyzing facial expressions and text using advanced algorithms, Rest identifies the user’s emotional state and recommends a tailored music therapy session. The app continuously monitors the user’s response and adjusts the music in real time to ensure maximum effectiveness. ## How we built it We built Rest using a combination of Python, Flask, CSS, HTML, and JS. The AI used for image analysis was sourced from the web and uses a TorchScript model. The text analysis was done with the OpenAI API. ## Challenges we ran into We encountered several challenges that tested our problem-solving abilities and teamwork. First of all, as a team with no prior experience using Flask, we faced a steep learning curve. We had to quickly get up to speed with Flask’s framework and figure out how to integrate it effectively into our project. Working with Spotify’s API was another significant challenge because the documentation is lackluster and it is highly unreliable at times. We spent considerable time reading documentation, experimenting, and troubleshooting issues. Lastly, working as a team led to multiple merge conflicts when trying to combine code. ## Accomplishments that we're proud of We are proud of several accomplishments achieved during this project. First of all, the successful development of Rest: the website manages to integrate emotion recognition, music recommendation, and real-time adaptation. Another element of this project we are proud of is the innovative use of technology. Machine learning was used to analyze both facial expressions and text, which allowed for a personalized music therapy experience. In addition, our user-friendly web interface allows users to interact with the app and receive their personalized therapy sessions. Our design and clean UI were a significant accomplishment for Rest. ## What we learned As it was our first time using Flask, we learned a lot about the framework. This presented a significant challenge, as Flask was the primary framework we chose to build our project on. However, through persistence and collaboration, we were able to rapidly learn and adapt to using this framework. Additionally, we learned about Spotify’s API during this hackathon project. Tasks like creating personalized recommendations, analyzing track information, and adding songs to the queue were all things we achieved with Spotipy. We also learned how to use the Auth0 API for the first time. ## What's next for Rest One improvement that could be made to Rest in the future is increased interactivity: introducing interactive elements such as guided visualizations and interactive music-making exercises to enhance user engagement. We would also like to add features such as meditation between songs. Another potential future idea would be to create a mobile application.
A mobile version of Rest would be able to provide users with a more accessible and convenient platform for emotion-driven music therapy. Finally, collaborating with mental health professionals to integrate Rest into therapeutic practices would allow us to provide more comprehensive support to users. Rest aims to revolutionize the music therapy experience by providing personalized, adaptive, and effective emotional support through the power of music and technology.
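To make the Spotify step concrete, here is a hedged sketch of how the Spotipy portion might look; the emotion-to-audio-feature mapping and seed genres are illustrative assumptions, not Rest's actual values:

```python
import spotipy
from spotipy.oauth2 import SpotifyOAuth

# Queue control requires the user-modify-playback-state scope and an active device.
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="user-modify-playback-state"))

# Map a detected emotion to target audio features (values here are illustrative).
EMOTION_TARGETS = {
    "sad":     {"target_valence": 0.6, "target_energy": 0.4},   # gently lift mood
    "anxious": {"target_valence": 0.5, "target_energy": 0.2},   # calmer tracks
    "happy":   {"target_valence": 0.9, "target_energy": 0.7},
}

def queue_session(emotion, seed_genres=("ambient", "acoustic")):
    targets = EMOTION_TARGETS.get(emotion, {})
    recs = sp.recommendations(seed_genres=list(seed_genres), limit=5, **targets)
    for track in recs["tracks"]:
        sp.add_to_queue(track["uri"])    # songs play after the user's current track
```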
## Inspiration Care.ai was inspired by our self-conducted study involving 60 families and 23 smart devices, focusing on elderly healthcare. Over three months, despite various technologies, families preferred the simplicity of voice-activated assistants like Alexa. This preference led us to develop an intuitive, user-friendly AI healthcare chatbot tailored to everyday needs. ## What it does Care.ai, an AI healthcare chatbot, leverages custom-trained Large Language Models (LLMs) and visual recognition technology hosted on the Intel Cloud for robust processing power. These models, refined and accessible via Hugging Face, underwent further fine-tuning through MonsterAPI, enhancing their accuracy and responsiveness to medical queries. The web application, powered by the Reflex library, provides a seamless and intuitive front-end experience, making it easy for users to interact with and benefit from the chatbot's capabilities. Care.ai also supports real-time data analytics for critical care needs. ## How we built it We built our AI healthcare chatbot by training LLMs and visual recognition systems on the Intel Cloud, then hosting and fine-tuning these models on Hugging Face with MonsterAPI. The chatbot's user-friendly web interface was developed using the Reflex library, creating a seamless user interaction platform. For data collection, * We researched datasets and performed a literature review * We used the pre-training data for developing and fine-tuning our LLM and visual models * We collected live data readings using sensors to test against our trained models We categorized our project into three parts: * Interactive Language Models: We developed deep learning models on Intel Developer Cloud and fine-tuned our Hugging Face hosted models using MonsterAPI. We further used the Reflex library to be the face of Care.ai and create a seamless platform. * Embedded Sensor Networks: We developed our IoT sensors to track real-time data and test our language and vision models on the captured readings. * Compliance and Security Components: We used Intel Developer Cloud to extract emotions and de-identify patients' voices to remain HIPAA-compliant. ## Challenges we ran into Integrating new technologies posed significant challenges, including optimizing model performance on the Intel Cloud, ensuring seamless model fine-tuning via MonsterAPI, and achieving intuitive user interaction through the Reflex library. Balancing technical complexity with user-friendliness and maintaining data privacy and security were among the key hurdles we navigated. ## Accomplishments that we're proud of We're proud of creating a user-centric AI healthcare chatbot that combines advanced LLMs and visual recognition hosted on the cutting-edge Intel Cloud. Successfully fine-tuning these models on Hugging Face and integrating them with a Reflex-powered interface showcases our technical achievement. Our commitment to privacy, security, and intuitive design has set a new standard in accessible home healthcare solutions. ## What we learned We learned the importance of integrating advanced AI with user-friendly interfaces for healthcare. Balancing technical innovation with accessibility, the intricacies of cloud hosting, model fine-tuning, and ensuring data privacy were key lessons in developing an effective, secure, and intuitive AI healthcare chatbot.
## What's next for care.ai Next, Care.ai is expanding its disease recognition capabilities, enhancing user interaction with natural language processing improvements, and exploring partnerships for broader deployment in healthcare systems to revolutionize home healthcare access and efficiency.
losing
## Inspiration We were inspired to create this project after seeing the workshop put on by Microsoft about new technology that we had not seen before. Azure was new to us, and we wanted to explore what we could create with it. Taking inspiration from the challenges given to us and the large number of hockey games around our university, we came up with IceBreakers. ## What it does IceBreakers has a trained AI that can recognize NHL team logos from jerseys, and we added a way to get game data so you can see how your team is doing, updated as data comes in. We put this all in a streamlined website so it's easy to navigate. ## How we built it We spent a lot of time training our AI so it could differentiate the sports team logos by feeding it tons of images, and then we worked with Wix code to create a website so the AI could pull the uploaded images, run them against its knowledge base, and also give you real-time game data from the most recent game your team has played in. ## Challenges we ran into We ran into a lot of challenges since we decided to tackle AI. It was an interesting but challenging weekend since we had never worked with AI and half our team had never worked with Wix before. It was difficult to get the AI to cooperate since it was like training a toddler, and even AI can throw temper tantrums. We also had trouble getting Wix and our AI to get along, since there was a bit of a technological screaming match between them, so we had to make them cooperate, and we came very close to getting them to listen. ## Accomplishments that we're proud of We're proud of the fact that we were able to successfully train an AI and make a web app, and that all of this was accomplished by a team of two. ## What we learned We learned that there is a lot that goes into AI and it requires a lot of effort to get a decently trained model. There is also a lot of work to be split up in a smaller team, so it really tested our time management and our ambition. ## What's next for IceBreakers We hope to improve the functionality of our web application and to streamline usability and user experience. We also want to add the ability to find Facebook groups pertaining to your respective hockey team.
## Inspiration Coming from across the country, we were struggling to find a team for the UC Berkeley AI Hackathon. Although the in-person team formation process is helpful, it takes away from our precious hacking time and delays the ideation phase -- we don't want to miss a single minute that could be spent coding! This is how and why we were inspired to create TeamUP, an application to help streamline the team formation process online by matching hackers with the power of AI tools via an interface similar to Tinder. ## What it does TeamUP is designed to help match hackers with potential teammates based on a combination of technical and soft skills. When creating your hacker profile, you'll be asked to list some of the libraries, frameworks, and tools you're experienced in so that our recommendation system can try to suggest other hackers with skills that complement yours based on the theme of the specific hackathon. However, as is often the case with teamwork, we also want to make sure that you'll get along with your potential teammates, which is why TeamUP also analyzes your "about" section to match you with people who will likely create the optimal collaborative space. This makes sure teams are well-rounded and equipped with a variety of different skillsets to create the best projects possible. ## How we built it For the web app, we used the React (Node.js) framework for the frontend and the Express framework for the backend. We hosted the database in MongoDB Atlas and used AWS to host our backend APIs. For the matching system, we utilized Hugging Face's Inference module, specifically the deepset/roberta-base-squad2 model. ## Challenges we ran into Finding a team! We spent quite a lot of time finalizing our team due to some hiccups in communication... but we eventually got there. On the technical side, one of the biggest challenges was integrating the frontend and backend services because there were so many components. It also took us some time to get the AI model working with our React app -- it was a time crunch for sure! ## Accomplishments that we're proud of We are proud of our web application and integrating AI for the matching system. It was a time crunch, but each and every one of us learned something new, which is the greatest reward. ## What we learned We learned that full-stack development is fun but challenging at times. We also gained an appreciation for the ideation and prototyping phases, because even if you put in more time there, it will make the rest of the hacking process smoother since everyone has a clear idea of what they need to be working on. Additionally, we got hands-on experience and enhanced our skillset in a variety of technologies, including MongoDB, Express, React, AWS, and more. All in all, this was another great opportunity to see how we can harness the power of AI to enhance our daily lives through accessible applications. ## What's next for TeamUP Although TeamUP is initially designed for AI-themed hackathons like this one, we would like to broaden the scope to all annual collegiate hackathons (themed or not!) so that we can find better skill matches based on the specific hackathon(s) the user selects. We would also like to improve our hacker matching system to make it more sophisticated, such as by noting post-hackathon feedback so that it can get a better sense of your teamwork preferences.
This would require more research into just how recommendation-systems work from the ground up, in addition to making our app more powerful to personalize the experience for every hacker.
## 🚀**Inspiration** Stakes, friendly fights, and fiery debates have followed the rising excitement and enthusiasm around the 2022 FIFA World Cup. Our team decided to apply machine learning principles to real data in order to assess and forecast which team would win these matches. ## 🤔**What it does** Our front-end web app allows users to match up two countries that are participating in the 2022 FIFA World Cup. It then displays the win percentage from our back-end API, which is in constant communication with an AI. This AI was taught through machine learning with a large dataset of past soccer games and their results. Using services like Google Cloud to implement Vertex AI, we were able to train on this large dataset in order to obtain the probabilities of a win, a loss, or a draw given the two team names. Using Google Cloud's Vertex AI, we deployed endpoints so we could work with the predicted data in our Flask app. This Flask app communicated with our React app in order to display the information according to user input and team matchup combinations. ## 👨‍💻**How we built it** We built the front end using React and Bootstrap. The front end communicates with an endpoint running Flask and Python. This is because we need to have an authenticated computer talk directly to Google Cloud. This endpoint calls the Google Cloud API, which predicts the odds of a team winning, given some information. ## 🚧**Challenges we ran into** From comprehending how it operates to building our own machine learning environments and processing information through them, machine learning presented a new challenge for all of us. One challenge was figuring out how to apply Vertex AI's AutoML to our dataset and modify the data to make it more compatible with the AI. Another challenge was our React app. Learning React and building our first web application was challenging, and in order to make our React website work best with backend computation, we had to cram a lot of tutorials and documentation. ## 🎯**Accomplishments that we're proud of** We made an AI that responds to the right request and outputs the right data. We entered our dataset into Google Cloud's Vertex AI, where we trained a model and successfully predicted soccer match results given the names of two teams, utilising the effective and user-friendly AutoML technology. ## 💡**What we learned** Working with Google Cloud Platform, our team really extended our capabilities across the wide array of features it includes, all the way from creating VM instances to host Python Flask apps, to utilizing Vertex AI's AutoML to train our large dataset and effectively target specific columns to produce the data that we needed. In order to understand and effectively communicate the data in a visual manner, our team had to learn React and understand HTTP requests; this process involved long hours of research and team effort. ## 📈**What's next for Mirai9** Applying our machine learning model to a wider variety of sports, esports, and even hackathon statistics (using data on ideas) to predict which projects have higher chances of winning based on prior hackathon data. Mirai9 is working to achieve more accurate predictions with greater success in the near future.
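For context, a minimal sketch of the Flask-to-Vertex-AI hop described above might look like the following; the project, region, endpoint ID, and column names are placeholders, not Mirai9's actual values:

```python
from flask import Flask, jsonify, request
from google.cloud import aiplatform

app = Flask(__name__)
aiplatform.init(project="your-gcp-project", location="us-central1")                   # assumed
endpoint = aiplatform.Endpoint("projects/123/locations/us-central1/endpoints/456")    # assumed

@app.route("/predict")
def predict():
    home, away = request.args["home"], request.args["away"]
    # AutoML tabular endpoints take a list of instances keyed by training column names
    response = endpoint.predict(instances=[{"home_team": home, "away_team": away}])
    return jsonify(response.predictions[0])   # e.g. class names and their probabilities
```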
losing
## Inspiration The precise and confusing way in which network admin data must currently be found. The whole point of the project is to decrease the unnecessary level of burden for a given person to simply access and make sense of data. ## What it does We made a Spark/Slack/Cortana/Facebook Messenger/Google Assistant chat/voice bot that allows people to get data insights from Meraki networking gear by just typing/talking in a natural manner via Natural Language Processing. We also use Kafka/Confluent to upload chat messages to AWS S3 and analyze them and improve our NLP system. Additionally, we use advanced yet intuitive 2D and 3D modeling software to make it easy for users to understand the data they receive. ## How we built it Node.js chat bot on Heroku + custom analytic servers on Heroku along with a local Java data processing server and online S3 bucket. ## Challenges we ran into Getting Kafka set up and working as well as understanding different networking features. ## Accomplishments that we're proud of Overcoming our challenges. ## What we learned We learned a lot about networking terminology as well as how data processing/streaming works. ## What's next for EZNet More features to be available via the cross-platform bot!
## Inspiration Have you ever wished you had…another you? This thought has crossed all of our heads countless times as we find ourselves swamped in too many tasks, unable to keep up in meetings as information flies over our heads, or wishing we had the feedback of a third perspective. Our goal was to build an **autonomous agent** that could be that person for you — an AI that learns from your interactions and proactively takes **actions**, provides **answers**, offers advice, and more, to give back your time to you. ## What it does Ephemeral is an **autonomous AI agent** that interacts with the world primarily through the modality of **voice**. It can sit in on meetings, calls, anywhere you have your computer out. Its power is the ability to take what it hears and proactively carry out repetitive actions for you, such as being a real-time AI assistant in meetings, drafting emails directly in your Google inbox, scheduling calendar events and inviting attendees, searching knowledge corpuses or the web for answers to questions, generating images, and more. Multiple users (in multiple languages!) can use the technology simultaneously through the server/client architecture that efficiently handles multiprocessing. ## How we built it ![link](https://i.imgur.com/PatcdIi.png) **Languages**: Python ∙ JavaScript ∙ HTML ∙ CSS **Frameworks and Tools**: React.js ∙ PyTorch ∙ Flask ∙ LangChain ∙ OpenAI ∙ TogetherAI ∙ Many More ### 1. Audio to Text We utilized OpenAI’s Whisper model and the Python speech\_recognition library to convert audio in real time to text that can be used by downstream functions. ### 2. Client → Server via Socket Connection We use socket connections between the client and server to pass the textual query to the server for it to determine a particular action and action parameters. The socket connections enable us to support multiprocessing, as multiple clients can connect to the server simultaneously while performing concurrent logic (such as real-time, personalized agentic actions during a meeting). ### 3. Neural Network Action Classifier We trained a neural network from scratch to handle the multi-class classification problem of going from text to action (or none at all). Because the agent is constantly listening, we need a way to efficiently and accurately determine whether each transcribed chunk necessitates a particular action (if so, which?) or none at all (most commonly). We generated data for this task utilizing data augmentation sources such as ChatGPT (web). ### 4. LLM Logic: Query → Function Parameters We use in-context learning via few-shot prompting and RAG to query the LLM for various agentic tasks. We built a RAG pipeline over the conversation history and past related, relevant meetings for context. The agentic tasks take in function parameters, which are generated by the LLM. ### 5. Server → Client Parameters via Socket Connection We pass back the function parameters as a JSON object from the server socket to the client. ### 6. Client Side Handler: API Call A client-side handler receives a JSON object that includes which action (if any) was chosen by the Action Planner in step 3, then passes control to the appropriate handler function, which handles authorizations and makes API calls to various services such as Google’s Gmail client, Calendar API, text-to-speech, and more. ### 7.
Client Action Notifications → File (monitored by Flask REST API) After the completion of each action, the client server writes the results of the action down to a file which is then read by the React Web App to display ephemeral updates on a UI, in addition to suggestions/answers/discussion questions/advice on a polling basis. ### 8. React Web App and Ephemeral UI To communicate updates to the user (specifically notifications and suggestions from Ephemeral), we poll the Flask API for any updates and serve it to the user via a React web app. Our app is called Ephemeral because we show information minimally yet expressively to the user, in order to promote focus in meetings. ## Challenges we ran into We spent a significant amount of our time optimizing for lower latency, which is important for a real-time consumer-facing application. In order to do this, we created sockets to enable 2-way communication between the client(s) and the server. Then, in order to support concurrent and parallel execution, we added support for multithreading on the server-side. Choosing action spaces that can be precisely articulated enough in text such that a language model can carry out actions was a troublesome task. We went through a lot of experimentation on different tasks to figure out which would have the highest value to humans and also the highest correctness guarantee. ## Accomplishments that we're proud of Successful integration of numerous OSS and closed source models into a working product, including Llama-70B-Chat, Mistral-7B, Stable Diffusion 2.1, OpenAI TTS, OpenAI Whisper, and more. Integration of real actions that we can see ourselves directly using was very cool to see go from a hypothetical to a reality. The potential for impact of this general workflow in various domains is not lost on us, as while the general productivity purpose stands, there are many more specific gains to be seen in fields such as digital education, telemedicine, and more! ## What we learned The possibility of powerful autonomous agents to supplement human workflows signals the shift of a new paradigm where more and more our imprecise language can be taken by these programs and turned into real actions on behalf of us. ## What's next for Ephemeral An agent is only constrained by the size of the action space you give it. We think that Ephemeral has the potential to grow boundlessly as more powerful actions are integrated into its planning capabilities and it returns more of a user’s time to them.
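As a simplified illustration of steps 2–5, the server-side dispatch over a socket might look roughly like this; `classify_action` and `plan_with_llm` are stand-ins for the neural classifier and LLM planner described above, and the JSON shape is an assumption rather than Ephemeral's real schema:

```python
import json
import socket
import threading

def classify_action(text):
    """Stand-in for the step-3 neural classifier; returns an action name or None."""
    return "draft_email" if "email" in text.lower() else None

def plan_with_llm(text, action):
    """Stand-in for the step-4 LLM call that fills in function parameters."""
    return {"summary": text}

def handle_client(conn):
    with conn:
        while chunk := conn.recv(4096):
            query = chunk.decode()                  # transcribed text from the client
            action = classify_action(query)
            if action is None:
                continue                            # most chunks need no action
            params = plan_with_llm(query, action)
            conn.sendall(json.dumps({"action": action, "params": params}).encode())

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 9000))
server.listen()
while True:
    conn, _ = server.accept()                       # one thread per connected client
    threading.Thread(target=handle_client, args=(conn,), daemon=True).start()
```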
# Aqueduct ![](https://camo.githubusercontent.com/38eee254b913ee3775e6739f068ce7428d54c941/68747470733a2f2f696d672e736869656c64732e696f2f636972636c6563692f70726f6a6563742f6769746875622f6261646765732f736869656c64732f6d6173746572) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](http://makeapullrequest.com) ## Inspiration The convenience of the internet has become essential to us in recent years. Despite this, billions of people still do not have access to the internet on the go. Our SMS client allows these users to access current news, weather, stocks, and encyclopedia knowledge, and even perform Google searches, through only text messages. ## What it does Allows users without internet access to retrieve concise information on current news, weather, stocks, encyclopedia knowledge, and more through text messages. ## How we built it The SMS client is fundamentally built upon Twilio, Node.js, Express, RiveScript, and MongoDB. This allowed us to set up a webhook that Twilio would interact with while having a dynamic chat using user sessions, allowing us to expand beyond a simple command interface and enabling conversations with the bot. ## Challenges we ran into Our biggest hurdles were slow host server speed when inputting and outputting text messages, as well as some team members' unfamiliarity with the language and environment. Despite the learning curve, we worked hard to adapt to new and unknown challenges under a time constraint. In the end, we managed to learn how to deploy the client onto Google Cloud to speed up the server, and also gained more in-depth knowledge of JavaScript listeners. ## Accomplishments that we're proud of All team members were very fast to adapt to any problems and hurdles, and learnt and applied new material very quickly. Additionally, communication was very efficient and concise, resulting in no conflicts during teamwork. ## What we learned * Gained a more in-depth understanding of JavaScript listeners * Deploying Google Cloud servers * Knowledge of the streamlining process when working in a group with Git
partial
## Inspiration We read online that quizzes help people retain the information they learn, and we figured if we could make it fun then more people would want to study. Oftentimes it's difficult to get study groups together, but everyone is constantly on their phones so a quick mobile quiz wouldn't be hard to do. ## What it does Cram is a live social quiz app for students to study for courses with their classmates. ## How we built it We created an iOS app in Swift and built the backend in Python. ## Challenges we ran into It's very difficult to generate questions and answers from a given piece of text, so that part of the app is still something we hope to improve on. ## What's next for Cram Next, we plan on improving our automatic question generation algorithm to incorporate machine learning to check for question quality.
## Inspiration The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators, utilizing advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient. ## What it does Wise Up is a website that takes many different file formats, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages in a text and figure out which ones are most likely to contain the answer to a question that the user sends. It can also recursively summarize the file at different levels of compression. ## How we built it With blood, sweat and tears! We used many tools offered to us throughout the challenge to simplify our lives. We used JavaScript, HTML and CSS for the website, which communicates with a Flask backend that runs our Python scripts involving API calls and such. We have API calls to OpenAI text embeddings, to Cohere's xlarge model, to GPT-3's API, and to OpenAI's Whisper speech-to-text model, plus several modules for getting an MP4 from a YouTube link, text from a PDF, and so on. ## Challenges we ran into We had problems getting the Flask backend to run on an Ubuntu server, and later had to run it on a Windows machine instead. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links ended up taking more time than expected, and finally, since the latency of sending information back and forth between the front end and the backend would lead to a worse user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in transferring code from Python to JavaScript. ## Accomplishments that we're proud of Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 based on extracted information to formulate the answer, we believe we likely equal the state of the art in the task of quickly analyzing text and answering complex questions about it, and the ease of use across many different file formats makes us proud that this project and website can be useful to so many people, so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and it costs pennies to run. Moreover, we have added an identification system (users signing up with a username and password) to ensure that a specific account is capped at a certain usage of the API, which comes at our own cost (pennies, but we wish to avoid it becoming many dollars without our awareness). ## What we learned As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and for greater results. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that has saved us a few different times is using JSON for data transfer, and AWS services to store MBs of data very cheaply. Another thing we learned is that, unfortunately, as time goes on, LLMs get bigger and so sometimes much, much slower; API calls to GPT-3 and to Whisper are often slow, taking minutes for 1000+ page textbooks.
## What's next for Wise Up What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is summarizing text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or to attempt to have the website go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, AWS, and the CPU running Whisper.
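As a rough sketch of the page-ranking core described under "What it does" (embed every page once, embed the question, rank by cosine similarity); the client interface and model name here reflect the current OpenAI Python SDK rather than the exact calls used during the hackathon:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def top_pages(question, pages, k=3):
    page_vecs = embed(pages)                    # in practice, cache these per document
    q_vec = embed([question])[0]
    sims = page_vecs @ q_vec / (
        np.linalg.norm(page_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    best = np.argsort(sims)[::-1][:k]
    return [pages[i] for i in best]             # candidate pages for the answer prompt
```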
Fujifusion is our group's submission for Hack MIT 2018. It is a data-driven application for predicting corporate credit ratings. ## Problem Scholars and regulators generally agree that credit rating agency failures were at the center of the 2007-08 global financial crisis. ## Solution * Train a machine learning model to automate the prediction of corporate credit ratings. * Compare vendor ratings with predicted ratings to identify discrepancies. * Present this information in a cross-platform application for RBC’s traders and clients. ## Data Data obtained from RBC Capital Markets consists of 20 features recorded for 27 companies at multiple points in time for a total of 524 samples. Available at <https://github.com/em3057/RBC_CM> ## Analysis We took two approaches to analyzing the data: a supervised approach to predict corporate credit ratings and an unsupervised approach to try to cluster companies into scoring groups. ## Product We present a cross-platform application built using Ionic that works with Android, iOS, and PCs. Our platform allows users to view their investments, our predicted credit rating for each company, a vendor rating for each company, and visual cues to outline discrepancies. They can buy and sell stock through our app, while also exploring other companies they would potentially be interested in investing in.
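As a purely illustrative sketch of the supervised approach (the exact model and column names in the RBC data are not stated above, so these are assumptions), one could fit a standard classifier over the 20 features and flag disagreements with vendor ratings:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("rbc_cm_samples.csv")              # assumed export of the 524 samples
X, y = df.drop(columns=["rating"]), df["rating"]    # "rating" column name is assumed

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Flag companies where the model disagrees with the vendor-supplied rating
df["predicted"] = model.predict(X)
discrepancies = df[df["predicted"] != df["rating"]]
```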
partial
## Inspiration Public speaking is a critical skill in our lives. The ability to communicate effectively and efficiently is a crucial, yet difficult, skill to hone. For a few of us on the team, having grown up competing in public speaking competitions, we understand all too well the challenges that individuals looking to improve their public speaking and presentation skills face. Building off our experience with effective techniques and best practices, and through analyzing the speech patterns of well-known public speakers, we have designed a web app that will target weaker points in your speech and identify your strengths to make us all better and more effective communicators. ## What it does By analyzing speaking data from many successful public speakers from a variety of industries and backgrounds, we have established relatively robust standards for optimal speed, energy levels and pausing frequency during a speech. Taking into consideration the overall tone of the speech, as selected by the user, we are able to tailor our analyses to the user's needs. This simple and easy-to-use web application offers users insight into their overall accuracy, enunciation, WPM, pause frequency, energy levels throughout the speech, and error frequency per interval, and summarizes some helpful tips to improve their performance the next time around. ## How we built it For the backend, we built a centralized RESTful Flask API to fetch all backend data from one endpoint. We used Google Cloud Storage to store files longer than 30 seconds, as we found that locally saved audio files could only retain about 20-30 seconds of audio. We also used Google Cloud App Engine to deploy our Flask API, as well as Google Cloud Speech-to-Text to transcribe the audio. Various Python libraries were used for the analysis of voice data, and the resulting response returns within 5-10 seconds. The web application user interface was built using React, HTML and CSS and focused on displaying analyses in a clear and concise manner. We had two members of the team in charge of designing and developing the front end and two working on the back end functionality. ## Challenges we ran into This hackathon, our team wanted to focus on creating a really good user interface to accompany the functionality. In our planning stages, we started looking into way more features than the time frame could accommodate, so a big challenge we faced was, firstly, dealing with the time pressure and, secondly, having to revisit our ideas many times and change or remove functionality. ## Accomplishments that we're proud of Our team is really proud of how well we worked together this hackathon, both in terms of team-wide discussions as well as efficient delegation of tasks for individual work. We leveraged many new technologies and learned so much in the process! Finally, we were able to create a good user interface to use as a platform to deliver our intended functionality. ## What we learned Following the challenges that we faced during this hackathon, we learned the importance of iteration within the design process and how helpful it is to revisit ideas and questions to see if they are still realistic and/or relevant. We also learned a lot about the great functionality that Google Cloud provides and how to leverage that in order to make our application better. ## What's next for Talko In the future, we plan on continuing to develop the UI as well as adding more functionality, such as support for different languages.
We are also considering creating a mobile app to make it more accessible to users on their phones.
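To give a flavour of the analysis step, here is a simplified sketch of how pacing metrics could be computed from per-word timestamps (Speech-to-Text can return these when word time offsets are requested); the pause threshold and the exact response handling are assumptions:

```python
def pacing_metrics(words, pause_threshold=1.5):
    """words: list of objects with .word, .start_time and .end_time in seconds."""
    total_minutes = (words[-1].end_time - words[0].start_time) / 60
    wpm = len(words) / total_minutes
    pauses = [
        nxt.start_time - cur.end_time
        for cur, nxt in zip(words, words[1:])
        if nxt.start_time - cur.end_time >= pause_threshold
    ]
    return {
        "wpm": round(wpm, 1),
        "pause_count": len(pauses),
        "longest_pause_s": round(max(pauses, default=0.0), 2),
    }
```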
## Inspiration Lectures all around the world last on average 100.68 minutes. That number goes all the way up to 216.86 minutes for art students. As students in engineering, we spend roughly 480 minutes a day listening to lectures. Add an additional 480 minutes for homework (we're told to study an hour for every hour in a lecture), 120 minutes for personal breaks, 45 minutes for hygiene, not to mention tutorials, office hours, et cetera. Thinking about this reminded us of the triangle of sleep, grades and a social life-- and how you can only pick two. We felt that this was unfair and that there had to be a way around it. Most people approach this by attending lectures at home. But often, they just put lectures at 2x speed, or skip sections altogether. This isn't an efficient approach to studying in the slightest. ## What it does Our web-based application takes audio files- whether it be from lectures, interviews or your favourite podcast, and takes out all the silent bits-- the parts you don't care about. That is, the intermediate walking, writing, thinking, pausing or any waiting that happens. By analyzing the waveforms, we can algorithmically select and remove parts of the audio that are quieter than the rest. This is done by our Python script running behind our UI. ## How I built it We used PHP/HTML/CSS with Bootstrap to generate the frontend, hosted on a DigitalOcean LAMP droplet with a Namecheap domain. On the droplet, we have an Ubuntu web server, which hosts our Python file that gets run on the shell. ## Challenges I ran into For every member of the team, each of our tasks was a first-time experience. Going head on into something we don't know about, in a timed and stressful situation such as a hackathon, was really challenging, and something we were very glad that we persevered through. ## Accomplishments that I'm proud of Creating a final product from scratch, without the use of templates or too much guidance from tutorials, is pretty rewarding. Often in the web development process, templates and guides are used to help someone learn. However, we developed all of the scripting and the UI ourselves as a team. We even went so far as to design the icons and artwork ourselves. ## What I learned We learnt a lot about the importance of working collaboratively to create a full-stack project. Each individual in the team was assigned a different compartment of the project-- from web deployment, to scripting, to graphic design and user interface. Each role was vastly different from the next and it took a whole team to pull this together. We all gained a greater understanding of the work that goes on in large tech companies. ## What's next for lectr.me Ideally, we'd like to develop the idea to have many more features-- perhaps introducing video and other options. This idea was really a starting point and there's so much potential for it. ## Examples <https://drive.google.com/drive/folders/1eUm0j95Im7Uh5GG4HwLQXreF0Lzu1TNi?usp=sharing>
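One plausible way to implement the silence-stripping step is with pydub's silence detection, as in the sketch below; the thresholds are assumptions meant to be tuned per recording, not the project's actual values:

```python
from pydub import AudioSegment
from pydub.silence import detect_nonsilent

def strip_silence(in_path, out_path, min_silence_len=700, margin_db=16):
    audio = AudioSegment.from_file(in_path)
    # Treat anything quieter than (average loudness - margin) for min_silence_len ms as silence
    keep_ranges = detect_nonsilent(
        audio,
        min_silence_len=min_silence_len,
        silence_thresh=audio.dBFS - margin_db,
    )
    trimmed = sum((audio[start:end] for start, end in keep_ranges), AudioSegment.empty())
    trimmed.export(out_path, format="mp3")

strip_silence("lecture.mp3", "lecture_trimmed.mp3")
```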
## Inspiration It's easy to zone off in online meetings/lectures, and it's difficult to rewind without losing focus at the moment. It could also be disrespectful to others if you expose the fact that you weren't paying attention. Wouldn't it be nice if we could just quickly skim through a list of keywords to immediately see what happened? ## What it does Rewind is an intelligent, collaborative and interactive web canvas with built-in voice chat that maintains a list of live-updated keywords that summarize the voice chat history. You can see timestamps of the keywords and click on them to reveal the actual transcribed text. ## How we built it Communications: WebRTC, WebSockets, HTTPS. We used WebRTC, a peer-to-peer protocol, to connect the users through a voice channel, and we used WebSockets to update the web pages dynamically, so the users would get instant feedback for others' actions. Additionally, a web server is used to maintain stateful information. For summarization and live transcript generation, we used Google Cloud APIs, including natural language processing as well as voice recognition. Audio transcription and summary: Google Cloud Speech (live transcription) and Natural Language APIs (for summarization). ## Challenges we ran into There were many challenges that we ran into when we tried to bring this project to reality. For the backend development, one of the most challenging problems was getting WebRTC to work on both the backend and the frontend. We spent more than 18 hours on it to come to a working prototype. In addition, the frontend development was also full of challenges. The design and implementation of the canvas involved much trial and error, and the history rewinding page was also time-consuming. Overall, most components of the project took the combined effort of everyone on the team and we have learned a lot from this experience. ## Accomplishments that we're proud of Despite all the challenges we ran into, we were able to deliver a working product with many different features. Although the final product is by no means perfect, we had fun working on it, utilizing every bit of intelligence we had. We were proud to have learned many new tools and gotten through all the bugs! ## What we learned For the backend, the main thing we learned was how to use WebRTC, which includes client negotiations and management. We also learned how to use Google Cloud Platform in a Python backend and integrate it with the WebSockets. As for the frontend, we learned to use various JavaScript elements to help develop an interactive client web app. We also learned event delegation in JavaScript to help with an essential component of the history page of the frontend. ## What's next for Rewind We imagine a mini dashboard that also shows other live-updated information, such as the sentiment and a summary of the entire meeting, as well as the ability to examine information on a particular user.
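As an indicative sketch of the keyword step (not Rewind's exact code), salient entities could be pulled from each new transcript chunk with the Cloud Natural Language API and tagged with the chunk's timestamp so the UI can rewind to it:

```python
from google.cloud import language_v1

nlp = language_v1.LanguageServiceClient()

def extract_keywords(transcript_chunk, timestamp, min_salience=0.05):
    doc = language_v1.Document(
        content=transcript_chunk,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    entities = nlp.analyze_entities(document=doc).entities
    return [
        {"keyword": e.name, "timestamp": timestamp, "text": transcript_chunk}
        for e in entities
        if e.salience >= min_salience     # keep only reasonably prominent entities
    ]
```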
partial
## Inspiration For physical therapy patients, doing your home exercise program is a crucial part of therapy and recovery. These exercises improve the body and allow patients to remain pain-free without having to pay for costly repeat visits. However, doing these exercises incorrectly can hinder progress and put you back in the doctor’s office. ## What it does PocketPT uses deep learning technologies to detect and correct patient's form in a broad range of Physical Therapy exercises. ## How we built it We used the NVIDIA Jetson-Nano computer and a Logitech webcam to build a deep learning model. We trained the model on over 100 images in order to detect the accuracy of Physical Therapy postures. ## Challenges we ran into Since our group was using new technology, we struggled at first with setting up the hardware and figuring out how to train the deep learning model. ## Accomplishments that we're proud of We are proud that we created a working deep learning model despite no prior experience with hardware hacking or machine learning. ## What we learned We learned the principles of deep learning, hardware, and IoT. We learned how to use the NVIDIA Jetson Nano computer for use in various disciplines. ## What's next for PocketPT In the future, we want to expand to include more Physical Therapy postures. We also want to implement our product for use on Apple Watch and FitBit, which would allow a more seamless workout experience for users.
## 💡 Inspiration 💡 Have you ever wished you could play the piano perfectly? Well, instead of playing yourself, why not get Ludwig to play it for you? Regardless of your ability to read sheet music, just upload it to Ludwig and he'll scan, analyze, and play the entire piece within the span of a few seconds! Sometimes, you just want someone to play the piano for you, so we aimed to make a robot that could be your own personal piano player! This project allows us to bring music to places like elderly homes, where live performances can uplift residents who may not have frequent access to musicians. We were excited to combine computer vision, MIDI parsing, and robotics to create something tangible that shows how technology can open new doors. Ultimately, our project makes music more inclusive and brings people together through shared experiences. ## ❓What it does ❓ Ludwig is your music prodigy. Ludwig can read any sheet music that you upload to him, convert it to a MIDI file, convert that to playable notes on the piano scale, then play each of those notes on the piano with its fingers! You can upload any kind of sheet music and see the music come to life! ## ⚙️ How we built it ⚙️ For this project, we leveraged OpenCV for computer vision to read the sheet music. The sheet reading goes through a process of image filtering, converting it to binary, classifying the characters, identifying the notes, then exporting them as a MIDI file. We then have a server running for transferring the file over to Ludwig's brain via SSH. Using the Raspberry Pi, we leveraged multiple servo motors with a servo module to simultaneously move multiple fingers for Ludwig. In the Raspberry Pi, we developed functions, key mappers, and note mapping systems that allow Ludwig to play the piano effectively. ## Challenges we ran into ⚔️ We had a few bumps in the road along the way. Some major ones included file transferring over SSH, as well as making fingers strong enough to withstand the torque of pressing the piano keys. It was also fairly difficult to figure out the OpenCV pipeline for reading the sheet music. We had a model that was fairly slow in reading and converting the music notes. However, we were able to learn from the mentors at Hack The North how to speed it up and make it more efficient. We also wanted to ## Accomplishments that we're proud of 🏆 * Got a working robot to read and play piano music! * File transfer working via SSH * Conversion from MIDI to key presses mapped to fingers * Piano melody-playing abilities! ## What we learned 📚 * Working with the Raspberry Pi 3 and its libraries for servo motors and additional components * Working with OpenCV and fine-tuning models for reading sheet music * SSH protocols and general networking concepts for transferring files * Parsing MIDI files into useful data through some really cool Python libraries ## What's next for Ludwig 🤔 * MORE OCTAVES! We might add some sort of DC motor with a gearbox, essentially a conveyor belt, which can enable the motors to move up the keyboard to allow for more octaves. * Improved photo recognition for reading accents and BPM * Realistic fingers via 3D printing
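A simplified sketch of the playback loop on the Pi, assuming a one-octave, eight-finger rig driven through an Adafruit ServoKit board; the note-to-channel map and press angles are assumptions, not Ludwig's actual calibration:

```python
import mido
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)                       # PCA9685-style servo HAT assumed
NOTE_TO_CHANNEL = {60: 0, 62: 1, 64: 2, 65: 3, 67: 4, 69: 5, 71: 6, 72: 7}  # C4..C5
REST_ANGLE, PRESS_ANGLE = 90, 60                  # placeholder finger positions

for msg in mido.MidiFile("sheet_music.mid").play():   # .play() sleeps between messages
    if msg.type not in ("note_on", "note_off"):
        continue
    channel = NOTE_TO_CHANNEL.get(msg.note)
    if channel is None:
        continue                                   # note falls outside the rig's octave
    pressing = msg.type == "note_on" and msg.velocity > 0
    kit.servo[channel].angle = PRESS_ANGLE if pressing else REST_ANGLE
```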
## Inspiration Post-operation recovery is a vital aspect of healthcare. Keeping careful track of rehabilitation can ensure that patients get better as quickly as possible. We were inspired to create this project when thinking of elderly and single people who are in need of assistance when recovering from surgery. We imagined Dr. Debbie as a continuation of the initial care that patients received, which could carry on into their homes. ## What it does The program currently provides services like physical therapy routines aided by computer vision, and a medicine routine which the AI model reinforces. Along the way, patients can interact with a live AI assistant which speaks to them. This makes the experience feel authentic while remaining professional and informational. ## How we built it Our application consists of two primary machine learning models interacting with our Bootstrap frontend and Flask backend. * We chose to use Bootstrap in the frontend to seamlessly interact with the Rive animation avatar of Dr. Debbie and mock up a cohesive theme for our web application. * Utilizing Flask, we were able to incorporate our model logic for the person pose detection model via Roboflow directly into Python. * In order to support real-time physical therapy correction, we use a deep neural network trained to perform keypoint recognition on your local live camera feed. It makes keypoint detections and calculates the angles of figures in the frame, allowing us to determine how correctly the patient is performing a certain physical therapy exercise. * Additionally, we utilized Cerebras Llama 3.1-8B to interact directly with the application user, serving as the direct AI companion for the user to communicate with either through text or a text-to-speech option! ## Challenges we ran into We ran into challenges with the computer vision models. Tracking the patient's body was a balancing act between accuracy and speed. Some models were tailored towards different applications, which meant experimenting with different training sets and parameters. Integrating the live avatar of Dr. Debbie was a challenge because the animation service, Rive, was new to the team. Interfacing with the animation in a way that felt interactive and realistic was very challenging. Because of the time constraints of the hackathon, we were forced to prioritize certain aspects of the project while scrapping others. For example, we found some features like medicine scanning and predictive modeling were feasible, but not worth the effort considering our resources. ## What's next for Dr. Debbie Our next two initiatives for Dr. Debbie include converting the application to a mobile app or a Windows desktop application. To reach more users on a more diverse variety of technologies, being able to load Dr. Debbie on your phone or your home computer can be much more convenient for our users. In addition to this, we hope to expand the statistical analysis of Dr. Debbie. To improve the connection between healthcare professionals and patients when they cannot be together in person, Dr. Debbie can provide data to professionals asynchronously, helping them stay current with patient status.
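To illustrate the angle-based form check described above, a minimal sketch of the geometry might look like this; the exercise names and acceptable ranges are assumptions for illustration, not Dr. Debbie's tuned values:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b in degrees, given (x, y) keypoints a-b-c from the pose model."""
    ba, bc = np.array(a) - np.array(b), np.array(c) - np.array(b)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Assumed per-exercise target ranges for the relevant joint angle, in degrees
TARGETS = {"bicep_curl_top": (40, 70), "squat_bottom": (70, 100)}

def check_form(exercise, a, b, c):
    angle = joint_angle(a, b, c)
    low, high = TARGETS[exercise]
    return angle, low <= angle <= high   # (measured angle, whether form is acceptable)
```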
winning
## Inspiration We are both AI and self-driving car researchers at Brown University. We are especially passionate about making autonomous cars and robots more accessible by applying intelligent software to "dumb" and inexpensive sensors. ## What it does DeepDepth is a deep learning algorithm that can turn any 2D image into a 3D scene by predicting depth information from standard camera data. ## How we built it We used the [NYU Depth V2](http://cs.nyu.edu/%7Esilberman/datasets/nyu_depth_v2.html) dataset which consists of video frames of various rooms captured using the Kinect. It includes RGB data, depth data, and accelerometer data. We didn't use the accelerometer data for this project. To learn the depth-prediction task, we constructed our own architecture for a fully-convolutional network using Baidu's PaddlePaddle deep learning framework. ## Challenges we ran into Neither of us had worked with PaddlePaddle before, so there was a pretty steep learning curve to get acquainted with a new deep learning framework. We also ran into a lot of issues with image formats and pixel values when working with the NYU Depth V2 dataset. Lastly, as with all things deep learning, we spent a significant amount of time tuning our architecture and its various parameters. ## Accomplishments that we're proud of We are proud to have been able to get something up and running in a completely new framework in under 24 hours. It's always amazing when you finally see the training error start to drop. ## What we learned We learned how to use PaddlePaddle to construct a deep learning architecture, as well as how to deploy it to an AWS cluster. Also far too much about the numerous image formats in existence. ## What's next for DeepDepth Given more time we would love to have a live demo that works with the web camera in your computer. We believe that the ability to derive depth information from 2D image data would have numerous industrial applications, from self-driving cars to 3D movies.
## Inspiration: A platform where people can share and read others' stories ## What it does: We can post and share stories with everyone ## How I built it: I built it with the help of my team using Python, Django, HTML, and CSS ## Challenges I ran into: I didn't have much experience with teamwork, so I had to work through many things ## Accomplishments that I'm proud of: I completed my task, which was challenging initially ## What I learned: I learned teamwork and how to deal with challenges ## What's next for Stories: We can do endless things with this
## Inspiration With the recent Corona Virus outbreak, we noticed a major issue: charitable donations of equipment/supplies end up in the wrong hands or are lost in transit. How can donors know their support is really reaching those in need? At the same time, those in need would benefit from a way of customizing what they require, almost like a purchasing experience. With these two needs in mind, we created Promise, a charity donation platform to ensure the right aid is provided to the right place. ## What it does Promise has two components. First, a donation request view for submitting aid requests and confirming aid was received. Second, a donor world map view of where donation requests are coming from. The request view allows aid workers, doctors, and responders to specify the quantity/type of aid required (for our demo we've chosen quantities of syringes, medicine, and face masks as examples) after verifying their identity by taking a picture with their government IDs. We verify identities through Microsoft Azure's face verification service. Once a request is submitted, it and all previous requests will be visible in the donor world view. The donor world view provides a Google Maps overlay for potential donors to look at all current donation requests as pins. Donors can select these pins, see their details and make the choice to make a donation. Upon donating, donors are presented with a QR code that would be applied to the aid's packaging. Once the aid has been received by the requesting individual(s), they would scan the QR code and either confirm it has been received or notify the donor there's been an issue with the item/loss of items. The comments of the recipient are visible to the donor on the same pin. ## How we built it Frontend: React   Backend: Flask, Node   DB: MySQL, Firebase Realtime DB   Hosting: Firebase, Oracle Cloud   Storage: Firebase   API: Google Maps, Azure Face Detection, Azure Face Verification   Design: Figma, Sketch ## Challenges we ran into Some of the APIs we used had outdated documentation. Finding a good way of ensuring information flow (the correct request was referred to each time) for both the donor and recipient. ## Accomplishments that we're proud of We utilized a good number of new technologies and created a solid project in the end, which we believe has great potential for good. We've built a platform that is design-led, and that we believe works well in practice, for both end-users and the overall experience. ## What we learned Utilizing React states in a way that benefits a multi-page web app. Building facial recognition authentication with MS Azure. ## What's next for Promise Improve the detail of information provided by a recipient on QR scan. Give donors a statistical view of how much aid is being received so both donors and recipients can take better action going forward. Add location-based package tracking similar to Amazon/Arrival by Shopify for transparency.
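As a hedged sketch of the QR step, one could encode a confirmation URL tied to the donation request so the recipient's scan routes back to the right record; the domain, fields, and endpoint behaviour below are assumptions for illustration:

```python
import qrcode

def make_donation_label(request_id, donor_id):
    # Encode a confirmation link that identifies both the request and the donor
    confirm_url = (
        f"https://promise.example.com/confirm?request={request_id}&donor={donor_id}"
    )
    img = qrcode.make(confirm_url)                 # QR image to print on the package
    img.save(f"label_{request_id}.png")

# On scan, the confirm endpoint would mark the request as received (or flag an issue)
# and surface the recipient's comment on the corresponding map pin for the donor.
make_donation_label("req-1024", "donor-77")
```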
losing
## Inspiration The inspiration behind EcoNomNom stemmed from a collective desire to address the environmental impact of our daily choices, particularly in the realm of food consumption. We recognized the need for a tool that effortlessly integrates with users' existing habits, guiding them towards eco-friendly cooking alternatives without compromising on taste or convenience. ## What it does EcoNomNom is a Chrome extension that enhances the user's cooking experience by providing eco-friendly recipe suggestions from existing recipe pages. The extension analyzes the ingredients of a given recipe and offers sustainable alternatives, promoting conscious food choices. It seamlessly integrates into the browsing experience, making it easy for users to adopt a more environmentally friendly approach to cooking. ## How we built it The development of EcoNomNom involved a multi-faceted approach. We utilized React for the frontend, creating a user-friendly interface that seamlessly integrates with the Chrome browser. Leveraging the Chrome Extension API was crucial for achieving the extension's smooth integration into existing recipe pages. The core functionality of analyzing and suggesting eco-friendly alternatives was implemented using a combination of web scraping techniques, Flask APIs, and custom OpenAI assistants for ingredient analysis. This allowed us to generate meaningful and context-aware suggestions based on the ingredients present in a given recipe. ## Challenges we ran into Building EcoNomNom presented several challenges. One significant obstacle was the diversity of recipe websites and formats, requiring us to develop robust mechanisms to ensure compatibility. Additionally, accurately assessing the environmental impact of different ingredients and suggesting viable alternatives posed a complex problem that required careful consideration. ## Accomplishments that we're proud of Despite the challenges, our team successfully created a functional and user-friendly Chrome extension that promotes sustainable cooking practices. The seamless integration, the accuracy of eco-friendly suggestions, and the overall usability of EcoNomNom are accomplishments that we take pride in. ## What we learned The development of EcoNomNom provided valuable insights into web dev, prompt engineering, and the intricacies of creating browser extensions. We gained a deeper understanding of the challenges associated with promoting sustainability in everyday activities and the importance of user-friendly design in driving adoption. ## What's next for EcoNomNom Looking ahead, our vision for EcoNomNom includes several exciting enhancements. We plan to allow users to customize their sustainability preferences and dietary restrictions, providing a more tailored experience. Integration with online grocery platforms and the development of collaborative features to share eco-friendly recipes within a community are also on our roadmap. Our commitment to continuous improvement ensures that EcoNomNom will evolve into an even more powerful tool for promoting eco-friendly cooking practices.
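As a rough sketch of the ingredient-analysis step EcoNomNom describes, here is a hedged Python example using the OpenAI client; the model name and prompt wording are assumptions, not the extension's actual prompts:

```python
# Hypothetical sketch of asking an OpenAI model for eco-friendlier ingredient
# swaps; model name and prompt are assumptions, not EcoNomNom's real prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_swaps(ingredients: list[str]) -> str:
    prompt = (
        "For each ingredient below, suggest a lower-impact alternative and "
        "briefly say why it is more sustainable:\n" + "\n".join(ingredients)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(suggest_swaps(["ground beef", "heavy cream", "white rice"]))
```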
## Inspiration Everyone loves to eat. But whether you’re a college student, a fitness enthusiast trying to supplement your gains, or have dietary restrictions, it can be hard to come up with meal ideas. LetMeCook is an innovative computer vision-powered web application that combines a scan of a user’s fridge or cupboard with dietary needs to generate personalized recipes based on the ingredients they have. ## What it does When opening LetMeCook, users are first prompted to take an image of their fridge or cupboard. After this, the taken image is sent to a backend server where it is entered into an object segmentation and image classification machine-learning algorithm to classify the food items being seen. Next, the app sends this data to the Edamam API, which then returns comprehensive nutritional facts for each ingredient. After this, users are presented with an option to add custom dietary needs or go directly to the recipe page. When adding dietary needs, users fill out a questionnaire regarding allergies, dietary preferences (such as vegetarian or vegan), or specific nutritional goals (like high-protein or low-carb). They are also prompted to select a meal type (like breakfast or dinner), time-to-prepare limit, and tools available for preparation (like microwave or stove). Next, the dietary criteria, classified ingredients, and corresponding nutritional facts are sent to the OpenAI API, and a personalized recipe is generated to match the user's needs. Finally, LetMeCook displays the recipe and step-by-step instructions for preparation onscreen. If users are unsatisfied with the recipe, they can add a comment and generate a new recipe. ## How we built it The frontend was designed using React with Tailwind for styling. This was done to allow the UI to be dynamic and adjust seamlessly regardless of varying devices. A component library called Radix-UI was used for prefabricating components and Lucide was used for icon components. To use the device's local camera in the app, a library called react-dashcam was utilized. To edit the photos, a library called react-image-crop was used. After the initial image and dietary restrictions are entered, the image is encoded to base64 and entered as a parameter in an HTTP request to the backend server. The backend server is hosted using ngrok and passes the received image to the Google Cloud Vision API. A response containing the classified ingredients is then passed to the Edamam API where nutritional facts are stored about each respective ingredient. All of the information gathered until this point (ingredients, nutritional facts, dietary needs) is then passed to the OpenAI API where a custom recipe is generated and returned. Finally, a response containing the meal name, ingredients, step-by-step instructions for preparation, and nutritional information is returned to the interface and displayed onscreen. ## Challenges we ran into One of the biggest challenges we ran into was creating the model to accurately and rapidly classify the objects in the taken picture. Because we were trying to classify multiple objects from the same image, we sought to create an object segmentation and classification model, but this required hardware capabilities incompatible with our laptops. As a result, we had to switch to using Google Cloud's Vision API, which would allow us to perform the same data extraction necessary. Additionally, we ran into many issues when working on the frontend and allowing it to be responsive regardless of device type, size, or orientation. 
Finally, we had to troubleshoot the sequence of HTTP communication between the interface and the backend server for specific data types and formatting. ## Accomplishments that we're proud of We are proud to have recognized a very prevalent problem around us and engineered a seamless and powerful tool to solve it. We all enjoyed the bittersweet experience of discovering bugs, editing troublesome code, and staying up overnight working to overcome the various challenges we faced. Additionally, we are proud to have learned many new tools and technologies to create a successful mobile application. Ultimately, our efforts and determination culminated in an innovative, functional product we are all very proud of and excited to present. Lastly, we are proud to have created a product that could reduce food waste and revolutionize the home cooking space around the world. ## What we learned First and foremost, we've learned the profound impact that technology can have on simplifying everyday challenges. In researching the problem, we learned how pervasive the problem of "What to make?" is in home cooking around the world. It can be painstakingly difficult to make home-cooked meals with limited ingredients and numerous dietary criteria. However, we also discovered how effective intelligent-recipe generation can be when paired with computer vision and user-entered dietary needs. Finally, the hackathon motivated us to learn a lot about the technologies we worked with - whether it be new errors or desired functions, new ideas and strategies had to be employed to make the solution work. ## What's next for LetMeCook There is much potential for LetMeCook's functionality and interfacing. First, the ability to take photos of multiple food storages will be implemented. Additionally, we will add the ability to manually edit ingredients after scanning, such as removing detected ingredients or adding new ingredients. A feature allowing users to generate more detailed recipes with currently unavailable ingredients would also be useful for users willing to go to a grocery store. Overall, there are many improvements that could be made to elevate LetMeCook's overall functionality.
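The ingredient-detection step LetMeCook describes with Google Cloud Vision can be sketched roughly as follows in Python; the confidence threshold and the use of plain label detection are assumptions, since the write-up does not specify the exact request type:

```python
# Hypothetical sketch of the ingredient-detection step using Google Cloud
# Vision label detection; treat the threshold and filtering as approximations.
from google.cloud import vision

def detect_ingredients(image_path: str, min_score: float = 0.6) -> list[str]:
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Keep reasonably confident labels; downstream code would filter for foods.
    return [l.description for l in response.label_annotations if l.score >= min_score]

print(detect_ingredients("fridge.jpg"))
```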
## Inspiration: Every year, the world wastes about 2.5 billion tons of food, with the United States alone discarding nearly 60 million tons. This staggering waste inspired us to create **eco**, an app focused on food sustainability and community generosity. ## What it does: **eco** leverages advanced Computer Vision technology, powered by YOLOv8 and OpenCV, to detect fruits, vegetables, and groceries while accurately predicting their expiry dates. The app includes a Discord bot that notifies users of impending expirations and alerts them about unused groceries. Users can easily generate delicious recipes using OpenAI's API, utilizing ingredients from their fridge. Additionally, **eco** features a Shameboard to track and highlight instances of food waste, encouraging community members to take responsibility for their consumption habits. ## How we built it: For the frontend, we chose React, Typescript, and TailwindCSS to create a sleek and responsive interface. The database is powered by Supabase Serverless, ensuring reliable and scalable data management. The heart of **eco** is its advanced Computer Vision model, developed with Python, OpenCV, and YOLOv8, allowing us to accurately predict expiry dates for fruits, vegetables, and groceries. We leveraged OpenAI's API to generate recipes based on expiring foods, providing users with practical and creative meal ideas. Additionally, we integrated a Discord bot using JavaScript for seamless communication and alerts within our Discord server. ## Challenges we ran into: During development, we encountered significant challenges with WebSockets and training the Computer Vision model. These hurdles ignited our passion for problem-solving, driving us to think creatively and push the boundaries of innovation. Through perseverance and ingenuity, we not only overcame these obstacles but also emerged stronger, armed with newfound skills and a deepened resolve to tackle future challenges head-on. ## Accomplishments that we're proud of: We take pride in our adaptive approach, tackling challenges head-on to deliver a fully functional app. Our successful integration of Computer Vision, Discord Bot functionality, and recipe generation showcases our dedication and skill in developing **eco**. ## What we learned: Building **eco** was a transformative journey that taught us invaluable lessons in teamwork, problem-solving, and the seamless integration of technology. We immersed ourselves in the intricacies of Computer Vision, Discord bot development, and frontend/backend development, elevating our skills to new heights. These experiences have not only enriched our project but have also empowered us with a passion for innovation and a drive to excel in future endeavors. **Eco** is not just an app; it's a movement towards a more sustainable and generous community. Join us in reducing food waste and fostering a sense of responsibility towards our environment with eco.
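The grocery-detection step eco describes can be sketched with the `ultralytics` package in Python; the weights file and confidence threshold below are placeholders, since eco's own expiry-aware model is not public:

```python
# Hypothetical sketch of grocery detection with a YOLOv8 model via the
# ultralytics package; weights file and threshold are placeholder assumptions.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")              # stand-in for eco's custom weights
results = model("fridge_photo.jpg", conf=0.5)

for result in results:
    for box in result.boxes:
        label = model.names[int(box.cls)]
        print(f"detected {label} with confidence {float(box.conf):.2f}")
```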
partial
## Inspiration Do you wish something could really help you get up in the morning? ## What it does Sleepful uses machine vision to assist with your sleep. It tracks whether you have actually gotten out of bed. ## How we built it OpenCV, MEAN, Microsoft Azure ## Accomplishments that we're proud of An application that can track your sleep live ## What's next for Sleepful Sleepful has huge potential to be an assistant for your sleep and morning. We hope to implement features to track the quality of your sleep and guide you through your morning routine.
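A minimal sketch of how "has the sleeper actually gotten out of bed?" could be detected with OpenCV frame differencing is shown below; the thresholds are illustrative assumptions, not Sleepful's actual tuning:

```python
# Minimal frame-differencing sketch; camera index and thresholds are assumptions.
import cv2

cap = cv2.VideoCapture(0)
_, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # A large number of changed pixels suggests the sleeper is moving or getting up.
    if cv2.countNonZero(mask) > 50_000:
        print("significant movement detected")
    prev_gray = gray
```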
## Inspiration On the bus ride to another hackathon, one of our teammates was trying to get some sleep, but was having trouble because of how complex and loud the sound of people in the bus was. This led to the idea that in a sufficiently noisy environment, hearing could be just as descriptive and rich as seeing. Therefore, to better enable people with visual impairments to navigate and understand their environment, we created a piece of software that is able to describe and create an auditory map of one's environment. ## What it does In a sentence, it uses machine vision to give individuals a kind of echolocation. More specifically, one simply needs to hold their cell phone up, and the software will work to guide them using a 3D auditory map. The video feed is streamed over to a server where our modified version of the YOLO9000 classification convolutional neural network identifies and localizes the objects of interest within the image. It will then return the position and name of each object back to one's phone. It also uses the IBM Watson API to further augment its readings by validating what objects are actually in the scene, and whether or not they have been misclassified. From here, we make it seem as though each object essentially says its own name, so that the individual can essentially create a spatial map of their environment just through audio cues. The sounds get quieter the further away the objects are, and the balance between the left and right channels is also varied as the object moves around the user. The phone also records its orientation, and remembers where past objects were for a few seconds afterwards, even if it is no longer seeing them. However, we also thought about where in everyday life you would want extra detail, and one aspect that stood out to us was faces. Generally, people use specific details on an individual's face to recognize them, so using Microsoft's face recognition API, we added a feature that will allow our system to identify and follow friends and family by name. All one has to do is set up their face as a recognizable face, and they are now their own identifiable feature in one's personal system. ## What's next for SoundSight This system could easily be further augmented with voice recognition and processing software that would allow for feedback and a much more natural experience. It could also be paired with a simple infrared imaging camera to be used to navigate during the night time, making it universally usable. A final idea for future improvement could be to further enhance the machine vision of the system, thereby maximizing its overall effectiveness.
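The audio-cue behaviour SoundSight describes (quieter with distance, panned left/right with position) can be sketched with a few lines of Python; the specific falloff curve and pan law below are assumptions:

```python
# Minimal sketch of the audio-cue math: volume falls off with distance and the
# left/right balance follows the object's horizontal offset. The falloff curve
# and constant-power pan law are illustrative assumptions.
import math

def audio_gains(x_offset: float, distance: float) -> tuple[float, float]:
    """x_offset in [-1, 1] (left..right), distance in metres."""
    loudness = 1.0 / (1.0 + distance)            # quieter when farther away
    pan = (x_offset + 1.0) / 2.0                 # 0 = hard left, 1 = hard right
    left = loudness * math.cos(pan * math.pi / 2)
    right = loudness * math.sin(pan * math.pi / 2)
    return left, right

print(audio_gains(x_offset=-0.5, distance=2.0))  # object to the left, 2 m away
```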
## Inspiration The original idea was to create an alarm clock that could aim at the ~~victim's~~ sleeping person's face and shoot water instead of playing a sound to wake them up. Obviously, nobody carries around peristaltic pumps at hackathons, so the water squirting part had to be removed, but the idea of getting a platform that could aim at a person's face remained. ## What it does It simply tries to always keep a webcam pointed directly at the largest face in its field of view. ## How I built it The brain is a Raspberry Pi model 3 with a webcam attachment that streams raw pictures to Microsoft Cognitive Services. The cloud API then identifies the faces (if any) in the picture and gives a coordinate in pixels of the position of the face. These coordinates are then converted to an offset (in pixels) from the current position. This offset (in X and Y but only X is used) is then transmitted to the Arduino that's in control of the stepper motor. This is done by encoding the data as a JSON string, sending it over the serial connection between the Pi and the Arduino, and parsing the string on the Arduino. A translation is done to get an actual number of steps. The translation isn't necessarily precise, as the algorithm will naturally converge towards the center of the face. ## Challenges I ran into Building the enclosure was a lot harder than I initially believed. It was impossible to build it with two axes of freedom. A compromise was reached by having only the assembly rotate on the X axis (it can pan but not tilt). Acrylic panels were used. This was sub-optimal as we had no proper equipment to drill into acrylic to secure screws correctly. Furthermore, the shape of the stepper motors made it very hard to secure anything to their rotating axis. This is the reason the tilt feature had to be abandoned. Proper tooling *and expertise* could have fixed these issues. ## Accomplishments that I'm proud of Stepping out of my comfort zone by making a project that depends on areas of expertise I am not familiar with (physical fabrication). ## What I learned It's easier to write software than to build *real* stuff. There are no "fast iterations" in hardware. It was also my first time using epoxy resin as well as laser-cut acrylic. These two materials are interesting to work with and are a good alternative to using thin wood as I was used to before. It's much faster to glue than wood, and the laser cutting of the acrylic allows for a precision that's hard to match with wood. Working with the electronics was a lot easier than I imagined, as driver and library support already existed and the pieces of equipment as well as the libraries were well documented. ## What's next for FaceTracker Redo the enclosure with appropriate materials and proper engineering. Switch to OpenCV for image recognition as using a cloud service incurs too much latency. Refine the algorithm to take advantage of the reduced latency. Add tilt capabilities to the project.
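The Pi-to-Arduino hand-off FaceTracker describes (pixel offset encoded as JSON and sent over serial) might look roughly like this in Python; the serial port, baud rate, and frame size are assumptions:

```python
# Hypothetical sketch of the Pi-side step: compute the face's pixel offset from
# the frame centre and ship it to the Arduino as JSON over serial. The face
# detection itself is elided; port, baud rate, and frame size are assumptions.
import json
import serial

FRAME_W, FRAME_H = 640, 480
ser = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def send_offset(face_left: int, face_top: int, face_w: int, face_h: int) -> None:
    face_cx = face_left + face_w // 2
    face_cy = face_top + face_h // 2
    offset = {"x": face_cx - FRAME_W // 2, "y": face_cy - FRAME_H // 2}
    ser.write((json.dumps(offset) + "\n").encode("utf-8"))

send_offset(400, 200, 120, 120)   # face right of centre -> positive x offset
```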
partial
## Inspiration As students who listen to music to help with our productivity, we wanted to not only create a music sharing application but also a website to allow others to discover new music, all based on where they are located. We were inspired by Pokémon Go but wanted to create a similar implementation with music for any user to listen to. Anywhere. Anytime. ## What it does Meet Your Beat implements a live map where users are able to drop "beats" (a.k.a. Spotify beacons). These beacons store a song on the map, allowing other users to click on the beacon and listen to the song. Using location data, users will be able to see other beacons posted around them that were created by others and have the ability to "tune into" the beacon by listening to the song stationed there. Multiple users can listen to the same beacon to simulate a "silent disco" as well. ## How I built it We first customized the Google Maps API to be hosted on our website, as well as fetch the Spotify data for a beacon when a user places their beat. We then designed the website and began implementing the SQL database to hold the user data. ## Challenges I ran into * Having limited experience with JavaScript and API usage * Hosting our domain through Google Cloud, which we were unaccustomed to ## Accomplishments that I'm proud of Our team is very proud of our ability to merge various elements for our website, such as the SQL database hosting the Spotify data for other users to access on the website. We are also proud of the fact that we learned so many new skills and languages to implement the APIs and database. ## What I learned We learned a variety of new skills and languages to help us gather the data to implement the website. Despite numerous challenges, all of us took away something new, such as web development, database querying, and API implementation. ## What's next for Meet Your Beat * static beacons to have permanent stations at more notable landmarks. These static beacons could have songs with the highest ratings. * share beacons with friends * AR implementation * mobile app implementation
## Inspiration We were inspired by JetBlue's challenge to utilize their data in a new way and we realized that, while there are plenty of websites and phone applications that allow you to find the best flight deal, there are none that provide a way to easily plan the trip and items you will need with your friends and family. ## What it does GrouPlane allows users to create "Rooms" tied to their user account with each room representing a unique event, such as a flight from Toronto to Boston for a week. Within the room, users can select flight times, see the best flight deal, and plan out what they'll need to bring with them. Users can also share the room's unique ID with their friends who can then utilize this ID to join the created room, being able to see the flight plan and modify the needed items. ## How we built it GrouPlane was built utilizing Android Studio with Firebase, the Google Cloud Platform Authentication API, and JetBlue flight information. Within Android Studio, Java code and XML were utilized. ## Challenges we ran into The challenges we ran into were learning how to use Android Studio/GCP/Firebase, and having to overcome the slow Internet speed present at the event. In terms of Android Studio/GCP/Firebase, we were all either entirely new or very new to the environment and so had to learn how to access and utilize all the features available. The slow Internet speed was a challenge not only because it made it difficult to learn the aforementioned tools, but also because, due to the online nature of the database, there were long periods of time where we could not test our code since we had no way to connect to the database. ## Accomplishments that we're proud of We are proud of being able to finish the application despite the challenges. Not only were we able to overcome these challenges but we were able to build an application that functions to the full extent we intended while having an easy-to-use interface. ## What we learned We learned a lot about how to program Android applications and how to utilize the Google Cloud Platform, specifically Firebase and Google Authentication. ## What's next for GrouPlane GrouPlane has many possible avenues for expansion, in particular, we would like to integrate GrouPlane with Airbnb, hotel chains, and Amazon Alexa. In terms of Airbnb and hotel chains, we would utilize their APIs in order to pull information about hotel deals for the flight locations picked so users can plan out their entire trip within GrouPlane. With this integration, we would also expand GrouPlane to be able to inform everyone within the "event room" about how much the event will cost each person. We would also integrate Amazon Alexa with GrouPlane in order to provide users the ability to plan out their vacation entirely through the speech interface provided by Alexa rather than having to type on their phone.
## Inspiration * While at the CalHacks venue, a team member, Jeffrey, took a picture of the city buildings and wondered, "How can we capture the vibe of this picture?". The team had previously wanted to work on an AI and music-related project, so we got to work in trying to successfully capture the VibeS. ## What it does Provided a picture and a Spotify account, the application will provide a curated playlist that captures the VibeS of the picture based on the user's preferences. ## How we built it We trained a visual transformer BEIT from Hugging Face using Intel's Cloud Computing services that categorizes pictures. Using some of these categories, we then fetch from a Convex database populated with the user's songs from his "niche" playlists and provide the top results. ## Challenges we ran into Making a dataset for the visual transformer was challenging as well as learning new technologies like Convex database mutation and querying. ## Accomplishments that we're proud of We are proud of building accurate models for the time of the day and environment classifications. We are also proud of being able to build an appealing front end for our project. ## What we learned We learned a lot of skills such as training and testing models using Hugging Face and Intel's Cloud Computing Services, using Convex for database mutations and queries, fetching user information using Spotify's API, and getting more comfortable building in TypeScript and React. ## What's next for VibeS More data, more classifications for images, and more features such as instantly adding a playlist to Spotify, linking results to Spotify song links, etc.
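Classifying a photo with a BEiT checkpoint, as VibeS describes, can be sketched with the Hugging Face pipeline API; the public checkpoint below stands in for the team's fine-tuned model:

```python
# Hypothetical sketch of image classification with a BEiT checkpoint via the
# Hugging Face pipeline; the public checkpoint is a stand-in for the team's
# fine-tuned time-of-day / environment classifiers.
from transformers import pipeline

classifier = pipeline("image-classification", model="microsoft/beit-base-patch16-224")
predictions = classifier("city_at_dusk.jpg", top_k=3)

for p in predictions:
    print(f"{p['label']}: {p['score']:.2f}")
```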
partial
## Inspiration The inspiration for this project is the Covid crisis and the lack of medical assistance for the elderly. ## What it does On the Blynk IoT app and website, Life's Assistant gives real-time temperature, blood pressure, and humidity statistics. A notification/email is sent to the user and the user's loved one, who will also be registered on the Blynk IoT platform, if sensor levels are below/above normal human values. The user's location will also be shared to the user's loved one via Google Maps' Geolocation API. ## How we built it We built this app using the sensor values from BME680 and transferred these values to the Blynk IoT device using the Arduino UNO Wifi Rev2. To get the Geolocation data, we used the Google Cloud Platform and created HTTP requests to get the GPS data(longitude and latitude). ## Challenges we ran into Some challenges we ran into are: * Buying components lacking documentation * Compatibility issues with libraries/shields/boards * Implementing and using GPS location from Google Maps API ## What we learned This was a great learning experience, for first-time Arduino users! We learned: * The importance of buying components with proper documentation, as it's very difficult to understand how the products work otherwise * How the Blynk platform works and the many cool features it has! * How the Geolocation API from Google Maps works * The basics of Arduino programming ## What's next for Life’s Assistant Some next steps for Life's Assistant are: * Adding more sensors(gas detection, pulse rate and oximeter) * Completing the integration of all components(Google Maps) * Adding a calling feature in addition to messages and notifications * Doing some machine learning analysis to detect anomalies in stored data
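The Geolocation lookup described above can be sketched in Python with the `requests` library (the project issued its requests from the Arduino/GCP setup; this is only an illustration, and the API key is a placeholder):

```python
# Hypothetical sketch of a Google Maps Geolocation API call; the API key is a
# placeholder and the Arduino/Blynk side of the project is omitted.
import requests

API_KEY = "YOUR_GOOGLE_API_KEY"
url = f"https://www.googleapis.com/geolocation/v1/geolocate?key={API_KEY}"

# With no Wi-Fi or cell-tower data supplied, the API falls back to the caller's IP.
resp = requests.post(url, json={"considerIp": True})
resp.raise_for_status()
location = resp.json()["location"]
print(f"lat={location['lat']}, lng={location['lng']}")
```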
## Inspiration One of our close friends is at risk of Alzheimer's. He learns different languages and engages his brain by learning various skills which will significantly decrease his chances of suffering from Alzheimer's later. Our game is convenient for people like him to keep the risks of being diagnosed with dementia at bay. ## What it does In this game, a random LED pattern is displayed which the user is supposed to memorize. The user is supposed to use hand gestures to repeat the memorized pattern. If the user fails to memorize the correct pattern, the buzzer beeps. ## How we built it We had two major components to our project; hardware and software. The hardware component of our project used an Arduino UNO, LED lights, a base shield, a Grove switch and a Grove gesture sensor. Our software side of the project used the Arduino IDE and GitHub. We have linked them in our project overview for your convenience. ## Challenges we ran into Some of the major challenges we faced were storing data and making sure that the buzzer doesn't beep at the wrong time. ## Accomplishments that we're proud of We were exploring new terrain in this hackathon with regard to developing a hardware project in combination with the Arduino IDE. We found that it was quite different in relation to the software/application programming we were used to, so we're very happy with the overall learning experience. ## What we learned We learned how to apply our skillset in software and application development in a hardware setting. Primarily, this was our first experience working with Arduino, and we were able to use this opportunity at UofT to catch up to the learning curve. ## What's next for Evocalit Future steps for our project look like revisiting the iteration cycles to clean up any repetitive inputs and incorporating more sensitive machine learning algorithms alongside the Grove sensors so as to maximize the accuracy and precision of the user inputs through computer vision.
# BlockOJ > > Boundless creativity. > > > ## What is BlockOJ? BlockOJ is an online judge built around Google's Blockly library that teaches children how to code. The library allows us to implement a code editor which lets the user program with various blocks (function blocks, variable blocks, etc.). ![Figure 1. Image of BlockOJ Editor](https://i.imgur.com/UOmBhL4.png) On BlockOJ, users can sign up and use our lego-like code editor to solve instructive programming challenges! Solutions can be verified by pitting them against numerous test cases hidden in our servers :) -- simply click the "submit" button and we'll take care of the rest. Our lightning fast judge, painstakingly written in C, will provide instantaneous feedback on the correctness of your solution (ie. how many of the test cases did your program evaluate correctly?). ![Figure 2. Image of entire judge submission page](https://i.imgur.com/N898UAw.jpg) ## Inspiration and Design Motivation Back in late June, our team came across the article announcing the "[new Ontario elementary math curriculum to include coding starting in Grade 1](https://www.thestar.com/politics/provincial/2020/06/23/new-ontario-elementary-math-curriculum-to-include-coding-starting-in-grade-1.html)." During Hack The 6ix, we wanted to build a practical application that can aid our hard working elementary school teachers deliver the coding aspect of this new curriculum. We wanted a tool that was 1. Intuitive to use, 2. Instructive, and most important of all 3. Engaging Using the Blockly library, we were able to use a code editor which resembles building with LEGO: the block-by-block assembly process is **procedural** and children can easily look at the **big picture** of programming by looking at how the blocks interlock with each other. Our programming challenges aim to gameify learning, making it less intimidating and more appealing to younger audiences. Not only will children using BlockOJ **learn by doing**, but they will also slowly accumulate basic programming know-how through our carefully designed sequence of problems. Finally, not all our problems are easy. Some are hard (in fact, the problem in our demo is extremely difficult for elementary students). In our opinion, it is beneficial to mix in one or two difficult challenges in problemsets, for they give children the opportunity to gain valuable problem solving experience. Difficult problems also pave room for students to engage with teachers. Solutions are saved so children can easily come back to a difficult problem after they gain more experience. ## How we built it Here's the tl;dr version. * AWS EC2 * PostgreSQL * NodeJS * Express * C * Pug * SASS * JavaScript *We used a link shortener for our "Try it out" link because DevPost doesn't like URLs with ports.*
losing
## About Learning a foreign language can pose challenges, particularly without opportunities for conversational practice. Enter SpyLingo! Enhance your language proficiency by engaging in missions designed to extract specific information from targets. You select a conversation topic, and the spy agency devises a set of objectives for you to query the target about, thereby completing the mission. Users can choose their native language and the language they aim to learn. The website and all interfaces seamlessly translate into their native tongue, while missions are presented in the foreign language. ## Features * Choose a conversation topic provided by the spy agency and it will generate a designated target and a set of objectives to discuss. * Engage the target in dialogue in the foreign language on any subject! As you achieve objectives, they'll be automatically marked off your mission list. * Witness dynamically generated images of the target, reflecting the topics they discuss, after each response. * Enhance listening skills with automatically generated audio of the target's response. * Translate the entire message into your native language for comprehension checks. * Instantly translate any selected word within the conversation context, providing additional examples of its usage in the foreign language, which can be bookmarked for future review. * Access hints for formulating questions about the objectives list to guide interactions with the target. * Your messages are automatically checked for grammar and spelling, with explanations in your native language for correcting foreign language errors. ## How we built it With the time constraint of the hackathon, this project was built entirely on the frontend of a web application. The TogetherAI API was used for all text and image generation and the ElevenLabs API was used for audio generation. The OpenAI API was used for detecting spelling and grammar mistakes. ## Challenges we ran into The largest challenge of this project was building something that can work seamlessly in **812 different native-foreign language combinations.** There was a lot of time spent on polishing the user experience to work with different sized text, word parsing, different punctuation characters, etc. Even more challenging was the prompt engineering required to ensure the AI would speak in the language it is supposed to. The chat models frequently would revert to English if the prompt was in English, even if the prompt specified the model should respond in a different language. As a result, there are **over 800** prompts used, as each one has to be translated into every language supported during build time. There was also a lot of challenges in reducing the latency of the API responses to make for a pleasant user experience. After many rounds of performance optimizations, the app now effectively generates the text, audio, and images in perceived real time. ## Accomplishments that I'm proud of The biggest challenges also yielded the biggest accomplishments in my eyes. Building a chatbot that can be interacted with in any language and operates in real time by myself in the time limit was certainly no small task. I'm also exceptionally proud of the fact that I honestly think it's fun to play. I've had many projects that get dumped on a dusty shelf once completed, but the fact that I fully intend to keep using this after the hackathon to improve my language skills makes me very happy. 
## What we learned I had never used these APIs before beginning this hackathon, so there was quite a bit of documentation that I had to read to understand how to correctly stream the text & audio generation. ## What's next for SpyLingo There are still more features that I'd like to add, like different types of missions for the user. I also think the image prompting can use some more work since I'm not as familiar with image generation. I would like to productionize this project and set up a proper backend & database for it. Maybe I'll set up a Stripe integration and make it available for the public too!
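The grammar-and-spelling check SpyLingo describes could be sketched like this with the OpenAI Python client; the model name, prompt, and language pair are illustrative assumptions:

```python
# Hypothetical sketch of the grammar/spelling check step; prompt wording,
# model name, and language pair are assumptions, not SpyLingo's real prompts.
from openai import OpenAI

client = OpenAI()

def check_grammar(message: str, target_lang: str, native_lang: str) -> str:
    prompt = (
        f"The learner wrote the following in {target_lang}:\n\n{message}\n\n"
        f"List any spelling or grammar mistakes and explain each one in {native_lang}."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(check_grammar("Je veux allez au marché demain.", "French", "English"))
```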
## Inspiration We were inspired not only by the difficulty of acquiring a new language, but also the doors, opportunities and personal relationships that we have established thanks to our efforts acquiring a foreign language. ## What it does It makes learning a language a familiar experience by tying it to the world around you. Once one starts the LinguaLens app (or webpage), the only view is a camera. Pointing the camera at objects in the nearby areas will fetch an identification and classification of that object from the IBM-Watson Image Recognition service. The label is then handed off to the Google Translate API, which handles converting it to the foreign language of interest. The person can now enter the name of the object who is the focus of the picture, but in the language he/she is currently learning. The correct label will always be given to the user, including the label in the vernacular tongue in order to ensure there were no misclassifications. When misclassifications do occur, these are stored locally on the server and then submitted to IBM-Watson as training data to improve the model. ## How we built it We have a Heroku server with node.js that serves a basic webpage with a single webcam view. Snapping a picture on this page will submit the picture to a python flask server hosted on an AWS server. This python module receives the image, makes the call to the IBM-Watson API to identify the object of focus, and then makes the calls to the Google Translate API. Finally, the python module sends a response back containing only the potential labels identified by IBM-Watson in the language of interest. ## Challenges we ran into Our lack of experience in web development was the single greatest mission blocker of this hackathon. We were not able to reach our final desired product because of limitations encountered when interfacing between Heroku, Flask and AWS. ## Accomplishments that we're proud of We are proud, nonetheless, of what we were able to accomplish with nodejs and flask. We were able to setup 2 servers independently and begin making API calls with them. We're proud that the only thing we missed was the integration; something that can be fulfilled with minimal issues if we team up with someone experienced in web development. ## What we learned -A crash course in web development -How to use the IBM-Watson API -How to use the Google-Translate API -Source control -Project management ## What's next for LinguaLens Once LinguaLens overcomes the integration issue, it will be ready to put on the web and on the iPhone for testing. We have a basic repurposed iOS app from IBM-Watson that also implemented a single camera-view app to perform image recognition on waste. We have edited the source code for it to build and work with our general object classification purposes. This would mean LinguaLens is not far away from bringing users a polished language learning experience on the App Store!
## Inspiration Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order. ## What it does You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision. ## How we built it The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors. ## Challenges we ran into One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle. ## Accomplishments that we're proud of We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Trying to work with Python data type was difficult to manage, and we were proud to navigate around that. We are also extremely proud to meet a bunch of new people and tackle new challenges that we were not previously comfortable with. ## What we learned We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment. ## What's next for Harvard Burger Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
losing
# InstaQuote InstaQuote is an SMS based service that allows users to get a new car insurance quote without the hassle of calling their insurance provider and waiting in a long queue. # What Inspired You We wanted a more convenient way to get a quote on auto-insurance in the event of a change within your driver profile (i.e. demerit point change, license class increase, new car make, etc...) Since insurance rates are not something that change often we found it appropriate to create an SMS based service, thus saving the hassle of installing an app that would rarely be used as well as the time of calling your insurance provider to get a simple quote. As a company, this service would be useful for clients because it gives them peace of mind that there is an overarching service which can be texted anytime for an instant quote. # What We Learned We learned how to connect API's using Standard Library and we also learned JavaScript. Additionally, we learned how to use backend databases to store information and manipulate that data within the database. # Challenges We Faced We had some trouble with understanding and getting used to JavaScript syntax
## Motivation Coding skills are in high demand and will soon become a necessary skill for nearly all industries. Jobs in STEM have grown by 79 percent since 1990, and are expected to grow an additional 13 percent by 2027, according to a 2018 Pew Research Center survey. This provides strong motivation for educators to find a way to engage students early in building their coding knowledge. Mixed Reality may very well be the answer. A study conducted at Georgia Tech found that students who used mobile augmented reality platforms to learn coding performed better on assessments than their counterparts. Furthermore, research at Tufts University shows that tangible programming encourages high-level computational thinking. Two of our team members are instructors for an introductory programming class at the Colorado School of Mines. One team member is an interaction designer at the California College of the Art and is new to programming. Our fourth team member is a first-year computer science at the University of Maryland. Learning from each other's experiences, we aim to create the first mixed reality platform for tangible programming, which is also grounded in the reality-based interaction framework. This framework has two main principles: 1) First, interaction **takes place in the real world**, so students no longer program behind large computer monitors where they have easy access to distractions such as games, IM, and the Web. 2) Second, interaction behaves more like the real world. That is, tangible languages take advantage of **students’ knowledge of the everyday, non-computer world** to express and enforce language syntax. Using these two concepts, we bring you MusicBlox! ## What is is MusicBlox combines mixed reality with introductory programming lessons to create a **tangible programming experience**. In comparison to other products on the market, like the LEGO Mindstorm, our tangible programming education platform **cuts cost in the classroom** (no need to buy expensive hardware!), **increases reliability** (virtual objects will never get tear and wear), and **allows greater freedom in the design** of the tangible programming blocks (teachers can print out new card/tiles and map them to new programming concepts). This platform is currently usable on the **Magic Leap** AR headset, but will soon be expanded to more readily available platforms like phones and tablets. Our platform is built using the research performed by Google’s Project Bloks and operates under a similar principle of gamifying programming and using tangible programming lessons. The platform consists of a baseboard where students must place tiles. Each of these tiles is associated with a concrete world item. For our first version, we focused on music. Thus, the tiles include a song note, a guitar, a piano, and a record. These tiles can be combined in various ways to teach programming concepts. Students must order the tiles correctly on the baseboard in order to win the various levels on the platform. For example, on level 1, a student must correctly place a music note, a piano, and a sound in order to reinforce the concept of a method. That is, an input (song note) is fed into a method (the piano) to produce an output (sound). Thus, this platform not only provides a tangible way of thinking (students are able to interact with the tiles while visualizing augmented objects), but also makes use of everyday, non-computer world objects to express and enforce computational thinking. 
## How we built it Our initial version is deployed on the Magic Leap AR headset. There are four components to the project, which we split equally among our team members. The first is image recognition, which Natalie worked predominantly on. This required using the Magic Leap API to locate and track various image targets (the baseboard, the tiles) and rendering augmented objects on those tracked targets. The second component, which Nhan worked on, involved extended reality interaction. This involved both Magic Leap and Unity to determine how to interact with buttons and user interfaces in the Magic leap headset. The third component, which Casey spearheaded, focused on integration and scene development within Unity. As the user flows through the program, there are different game scenes they encounter, which Casey designed and implemented. Furthermore, Casey ensured the seamless integration of all these scenes for a flawless user experience. The fourth component, led by Ryan, involved project design, research, and user experience. Ryan tackled user interaction layouts to determine the best workflow for children to learn programming, concept development, and packaging of the platform. ## Challenges we ran into We faced many challenges with the nuances of the Magic Leap platform, but we are extremely grateful to the Magic Leap mentors for providing their time and expertise over the duration of the hackathon! ## Accomplishments that We're Proud of We are very proud of the user experience within our product. This feels like a platform that we could already begin testing with children and getting user feedback. With our design expert Ryan, we were able to package the platform to be clean, fresh, and easy to interact with. ## What We learned Two of our team members were very unfamiliar with the Magic Leap platform, so we were able to learn a lot about mixed reality platforms that we previously did not. By implementing MusicBlox, we learned about image recognition and object manipulation within Magic Leap. Moreover, with our scene integration, we all learned more about the Unity platform and game development. ## What’s next for MusicBlox: Tangible Programming Education in Mixed Reality This platform is currently only usable on the Magic Leap AR device. Our next big step would be to expand to more readily available platforms like phones and tablets. This would allow for more product integration within classrooms. Furthermore, we only have one version which depends on music concepts and teaches methods and loops. We would like to expand our versions to include other everyday objects as a basis for learning abstract programming concepts.
# 🚗 InsuclaimAI: Simplifying Insurance Claims 📝 ## 🌟 Inspiration 💡 After a frustrating experience with a minor fender-bender, I was faced with the overwhelming process of filing an insurance claim. Filling out endless forms, speaking to multiple customer service representatives, and waiting for assessments felt like a second job. That's when I knew that there needed to be a more streamlined process. Thus, InsuclaimAI was conceived as a solution to simplify the insurance claim maze. ## 🎓 What I Learned ### 🛠 Technologies #### 📖 OCR (Optical Character Recognition) * OCR technologies like OpenCV helped in scanning and reading textual information from physical insurance documents, automating the data extraction phase. #### 🧠 Machine Learning Algorithms (CNN) * Utilized Convolutional Neural Networks to analyze and assess damage in photographs, providing an immediate preliminary estimate for claims. #### 🌐 API Integrations * Integrated APIs from various insurance providers to automate the claims process. This helped in creating a centralized database for multiple types of insurance. ### 🌈 Other Skills #### 🎨 Importance of User Experience * Focused on intuitive design and simple navigation to make the application user-friendly. #### 🛡️ Data Privacy Laws * Learned about GDPR, CCPA, and other regional data privacy laws to make sure the application is compliant. #### 📑 How Insurance Claims Work * Acquired a deep understanding of the insurance sector, including how claims are filed, and processed, and what factors influence the approval or denial of claims. ## 🏗️ How It Was Built ### Step 1️⃣: Research & Planning * Conducted market research and user interviews to identify pain points. * Designed a comprehensive flowchart to map out user journeys and backend processes. ### Step 2️⃣: Tech Stack Selection * After evaluating various programming languages and frameworks, Python, TensorFlow, and Flet (From Python) were selected as they provided the most robust and scalable solutions. ### Step 3️⃣: Development #### 📖 OCR * Integrated Tesseract for OCR capabilities, enabling the app to automatically fill out forms using details from uploaded insurance documents. #### 📸 Image Analysis * Exploited an NLP model trained on thousands of car accident photos to detect the damages on automobiles. #### 🏗️ Backend ##### 📞 Twilio * Integrated Twilio to facilitate voice calling with insurance agencies. This allows users to directly reach out to the Insurance Agency, making the process even more seamless. ##### ⛓️ Aleo * Used Aleo to tokenize PDFs containing sensitive insurance information on the blockchain. This ensures the highest levels of data integrity and security. Every PDF is turned into a unique token that can be securely and transparently tracked. ##### 👁️ Verbwire * Integrated Verbwire for advanced user authentication using FaceID. This adds an extra layer of security by authenticating users through facial recognition before they can access or modify sensitive insurance information. #### 🖼️ Frontend * Used Flet to create a simple yet effective user interface. Incorporated feedback mechanisms for real-time user experience improvements. ## ⛔ Challenges Faced #### 🔒 Data Privacy * Researching and implementing data encryption and secure authentication took longer than anticipated, given the sensitive nature of the data. #### 🌐 API Integration * Where available, we integrated with their REST APIs, providing a standard way to exchange data between our application and the insurance providers. 
This enhanced our application's ability to offer a seamless and centralized service for multiple types of insurance. #### 🎯 Quality Assurance * Iteratively improved OCR and image analysis components to reach a satisfactory level of accuracy. Constantly validated results with actual data. #### 📜 Legal Concerns * Spent time consulting with legal advisors to ensure compliance with various insurance regulations and data protection laws. ## 🚀 The Future 👁️ InsuclaimAI aims to be a comprehensive insurance claim solution. Beyond just automating the claims process, we plan on collaborating with auto repair shops, towing services, and even medical facilities in the case of personal injuries, to provide a one-stop solution for all post-accident needs.
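The Tesseract OCR step InsuclaimAI describes can be sketched in Python with `pytesseract`; the downstream field extraction is omitted, and the file name is a placeholder:

```python
# Hypothetical sketch of the OCR step with Tesseract via pytesseract; the
# field-extraction logic used to auto-fill claim forms is not shown here.
from PIL import Image
import pytesseract

def extract_policy_text(image_path: str) -> str:
    # Tesseract returns the recognised text of the scanned insurance document.
    return pytesseract.image_to_string(Image.open(image_path))

text = extract_policy_text("insurance_card.png")
print(text[:200])
```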
winning
## Inspiration 🌈 Our team has all experienced the struggle of jumping into a pre-existing codebase and having to process how everything works before starting to add our own changes. This can be a daunting task, especially when commit messages lack detail or context. We also know that when it comes time to push our changes, we often gloss over the commit message to get the change out as soon as possible, not helping any future collaborators or even our future selves. We wanted to create a web app that allows users to better understand the journey of the product, allowing users to comprehend previous design decisions and see how a codebase has evolved over time. GitInsights aims to bridge the gap between hastily written commit messages and clear, comprehensive documentation, making collaboration and onboarding smoother and more efficient. ## What it does 💻 * Summarizes commits and tracks individual files in each commit, and suggests more accurate commit messages. * The app automatically suggests tags for commits, with the option for users to add their own custom tags for further sorting of data. * Provides a visual timeline of user activity through commits, across all branches of a repository. * Allows filtering commit data by user, highlighting the contributions of individuals. ## How we built it ⚒️ The frontend is developed with Next.js, using TypeScript and various libraries for UI/UX enhancement. The backend uses Express.js, which handles our API calls to GitHub and OpenAI. We used Prisma as our ORM to connect to a PostgreSQL database for CRUD operations. For authentication, we utilized GitHub OAuth to generate JWT access tokens, securely accessing and managing users' GitHub information. The JWT is stored in cookie storage and sent to the backend API for authentication. We created a GitHub application that users must all add onto their accounts when signing up. This allowed us to not only authenticate as our application on the backend, but also as the end user who provides access to this app. ## Challenges we ran into ☣️☢️⚠️ Originally, we wanted to use an open-source LLM, like LLaMA, since we were parsing through a lot of data, but we quickly realized it was too inefficient, taking over 10 seconds to analyze each commit message. We also had to learn new technologies like d3.js, the GitHub API, and Prisma; honestly, almost everything was new to us. ## Accomplishments that we're proud of 😁 The user interface is so slay, especially the timeline page. The features work! ## What we learned 🧠 Running LLMs locally saves you money, but LLMs require lots of computation (wow) and are thus very slow when running locally. ## What's next for GitInsights * Filter by tags, more advanced filtering and visualizations * Adding webhooks to the GitHub repository to enable automatic analysis and real-time changes * Implementing CRON background jobs, especially with the analysis the application needs to do when it first signs on a user, possibly done with RabbitMQ * Creating native .gitignore files to refine the summarization process by ignoring files unrelated to development (e.g., package.json, package-lock.json, `__pycache__`).
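Pulling a repository's commit history, the raw material GitInsights summarizes, looks roughly like this against the GitHub REST API (GitInsights' backend is Express/TypeScript; this Python version is only illustrative, and the owner, repo, and token are placeholders):

```python
# Hypothetical sketch of listing commits via the GitHub REST API; token and
# repository are placeholders, and pagination beyond one page is omitted.
import requests

def list_commits(owner: str, repo: str, token: str, per_page: int = 30) -> list[dict]:
    url = f"https://api.github.com/repos/{owner}/{repo}/commits"
    headers = {"Authorization": f"Bearer {token}", "Accept": "application/vnd.github+json"}
    resp = requests.get(url, headers=headers, params={"per_page": per_page})
    resp.raise_for_status()
    return [
        {"sha": c["sha"][:7], "message": c["commit"]["message"], "author": c["commit"]["author"]["name"]}
        for c in resp.json()
    ]

for commit in list_commits("octocat", "Hello-World", "ghp_yourtoken"):
    print(commit["sha"], commit["message"].splitlines()[0])
```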
## Inspiration **The Tales of Detective Toasty** draws deep inspiration from visual novels like **Persona** and **Ghost Trick** and we wanted to play homage to our childhood games through the fusion of art, music, narrative, and technology. Our goal was to explore the possibilities of AI within game development. We used AI to create detailed character sprites, immersive backgrounds, and engaging slide art. This approach allows players to engage deeply with the game's characters, navigating through dialogues and piecing together clues in a captivating murder mystery that feels personal and expansive. By enriching the narrative in this way, we invite players into Detective Toasty’s charming yet suspense-filled world. ## What It Does In **The Tales of Detective Toasty**, players step into the shoes of the famous detective Toasty, trapped on a boat with four suspects in a gripping AI-powered murder mystery. The game challenges you to investigate suspects, explore various rooms, and piece together the story through your interactions. Your AI-powered assistant enhances these interactions by providing dynamic dialogue, ensuring that each playthrough is unique. We aim to expand the game with more chapters and further develop inventory systems and crime scene investigations. ## How We Built It Our project was crafted using **Ren'py**, a Python-based visual novel engine, and Python. We wrote our scripts from scratch, given Ren'py’s niche adoption. Integration of the ChatGPT API allowed us to develop a custom AI assistant that adapts dialogues based on player's question, enhancing the storytelling as it is trained on the world of Detective Toasty. Visual elements were created using Dall-E and refined with Procreate, while Superimpose helped in adding transparent backgrounds to sprites. The auditory landscape was enriched with music and effects sourced from YouTube, and the UI was designed with Canva. ## Challenges We Ran Into Our main challenge was adjusting the ChatGPT prompts to ensure the AI-generated dialogues fit seamlessly within our narrative, maintaining consistency and advancing the plot effectively. Being our first hackathon, we also faced a steep learning curve with tools like ChatGPT and other OpenAI utilities and learning about the functionalities of Ren'Py and debugging. We struggled with learning character design transitions and refining our artwork, teaching us valuable lessons through trial and error. Furthermore, we had difficulties with character placement, sizing, and overall UI so we had to learn all the components on how to solve this issue and learn an entirely new framework from scratch. ## Accomplishments That We’re Proud Of Participating in our first hackathon and pushing the boundaries of interactive storytelling has been rewarding. We are proud of our teamwork and the gameplay experience we've managed to create, and we're excited about the future of our game development journey. ## What We Learned This project sharpened our skills in game development under tight deadlines and understanding of the balance required between storytelling and coding in game design. It also improved our collaborative abilities within a team setting. ## What’s Next for The Tales of Detective Toasty Looking ahead, we plan to make the gameplay experience better by introducing more complex story arcs, deeper AI interactions, and advanced game mechanics to enhance the unpredictability and engagement of the mystery. 
Planned features include: * **Dynamic Inventory System**: An inventory that updates with both scripted and AI-generated evidence. * **Interactive GPT for Character Dialogues**: Enhancing character interactions with AI support to foster a unique and dynamic player experience. * **Expanded Storyline**: Introducing additional layers and mysteries to the game to deepen the narrative and player involvement. * *and more...* :D
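To make the ChatGPT integration described in "How We Built It" above more concrete, here is a hedged sketch of a helper that a Ren'Py `init python:` block could define. The world prompt, function name, and model are assumptions for illustration only, not the team's actual code.

```python
# Illustrative helper for an AI assistant reply in a Ren'Py game.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

WORLD_PROMPT = (
    "You are Detective Toasty's AI assistant on a boat with four murder suspects. "
    "Answer the player's questions in character and never reveal the culprit outright."
)


def assistant_reply(player_question, history=None):
    """Return a dialogue line for the in-game assistant, grounded by the world prompt."""
    messages = [{"role": "system", "content": WORLD_PROMPT}]
    messages += history or []  # prior turns keep the conversation consistent
    messages.append({"role": "user", "content": player_question})
    chat = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return chat.choices[0].message.content


# In a Ren'Py script this might be used roughly as:
#     $ reply = assistant_reply(player_question)
#     toasty "[reply]"
```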
## Inspiration All of us have gone through the painstaking and difficult process of onboarding as interns and making sense of huge repositories with many layers of folders and files. We hoped to shorten this process, or remove it completely, through Code Flow. ## What it does Code Flow exists to speed up onboarding and make code easy to understand for non-technical people such as Project Managers and Business Analysts. Once the user has uploaded the repo, it offers two main features. First, it can visualize the entire repo by showing how different folders and files are connected and providing a brief summary of each file and folder. It can also visualize a single file by showing how its functions are connected, for a more technical user. The second feature is a specialized chatbot that allows you to ask questions about the project as a whole or about specific files. For example, "Which file do I need to change to implement this new feature?" ## How we built it We used React to build the front end. Any folders uploaded by the user through the UI are stored using MongoDB. The backend is built using Python-Flask. If the user chooses a visualization, we first summarize what every file and folder does and display that as a graph using the pyvis library. We decide whether files are connected in the graph using an algorithm that checks features such as the functions imported. For the file-level visualization, we analyze the file's code using an AST and figure out which functions interact with each other. Finally, for the chatbot, when the user asks a question, we first use Cohere's embeddings to check the similarity of the question against the descriptions we generated for the files. After narrowing down the correct file, we use its code to answer the question using Cohere Generate. ## Challenges we ran into We struggled a lot with narrowing down which file to use to answer the user's questions. We initially thought to simply use Cohere Generate to reply with the correct file, but knew it wasn't specialized for that purpose. We decided to use embeddings and then had to figure out how to turn those numbers into a valid result. We also struggled with getting all the parts of our tech stack to work together, as we used React, MongoDB, and Flask; making the API calls seamless proved to be very difficult. ## Accomplishments that we're proud of This was our first time using Cohere's embeddings feature and accurately analyzing the results to match the best file. We are also proud of being able to combine these different stacks into a working application. ## What we learned We learned a lot about NLP, how embeddings work, and what they can be used for. In addition, we learned how to problem-solve and step out of our comfort zones to test new technologies. ## What's next for Code Flow We plan on adding annotations for key sections of the code, possibly using a new UI, so that the user can quickly understand important parts without wasting time.
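Code Flow's file-matching step (embed the file descriptions, embed the question, pick the closest file, then answer from its code) can be sketched roughly as below. This is an assumed illustration: the file descriptions, model names, and prompt are placeholders, and the exact Cohere SDK parameters vary by SDK version.

```python
# Hedged sketch of embedding-based file matching followed by generation with Cohere.
# Assumes the `cohere` and `numpy` packages; the API key and data are placeholders.
import numpy as np
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")  # placeholder key

file_descriptions = {
    "auth.py": "Handles user login, sessions, and password hashing.",
    "routes.py": "Defines the Flask API endpoints for the app.",
}


def best_matching_file(question: str) -> str:
    """Rank files by cosine similarity between the question and each file description."""
    q = np.array(co.embed(texts=[question], model="embed-english-v3.0",
                          input_type="search_query").embeddings[0])
    docs = np.array(co.embed(texts=list(file_descriptions.values()),
                             model="embed-english-v3.0",
                             input_type="search_document").embeddings)
    sims = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q))
    return list(file_descriptions)[int(np.argmax(sims))]


def answer(question: str, file_code: str) -> str:
    """Answer the question using the chosen file's code as context."""
    out = co.generate(prompt=f"Code:\n{file_code}\n\nQuestion: {question}\nAnswer:",
                      max_tokens=200)
    return out.generations[0].text


print(best_matching_file("Where is the login endpoint implemented?"))
```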
winning
## Inspiration Phreesia's challenge about storing medical data with third-party software ## What it does Allows users to analyze an image and see a prediction for skin cancer (benign or malignant), and allows upload of the image and the prediction to cloud storage ## How we built it TensorFlow Lite and Android Studio, with Firebase for user authentication and cloud storage (plus GitHub and Proto.io) ## Challenges we ran into Implementing TensorFlow Lite in Android Studio ## Accomplishments that we're proud of Building an app with the functionality we intended, plus a model UI for what the app could be ## What we learned How to implement an ML model in Android Studio, and how to use Firebase for cloud storage ## What's next for Skin Cancer Detection App Using blockchain, we can enable safe transfer of patients' data to doctors. Adding access to more ML analysis tools to create an ecosystem of tools for physicians. We also worked on doctor's-note summarization with a Python API using co:here NLP generation (not finished in time).
## Inspiration Each year, more people are diagnosed with skin cancer than all other cancers combined. It is also estimated that one in five people will develop skin cancer by the time they reach 70 years of age. Skin cancer is also one of the easiest cancers to cure if detected early. With this in mind, we were inspired to develop a solution that empowers users to analyze and monitor the health of their skin. ## What it does Möle allows users to take pictures of freckles, blemishes and moles on their skin. The application then uses a machine learning image-processing model to compare them to more than ten thousand images of various skin conditions. The resulting analysis informs the user whether the blemish is benign or malignant, and whether it resembles melanoma, Bowen's disease, basal cell carcinoma, benign lesions, dermatofibroma, or melanocytic nevi. The application also allows users to track blemishes over time, providing updates on their health as they develop and change. ## How we built it Möle is built for Android devices, using Kotlin/Java and a Realm database. The image-processing agent was built using Azure's Custom Vision model, trained using HAM10000, a training dataset of over ten thousand images of common skin lesions published in the Harvard Dataverse. For more information, please visit: <https://bit.ly/2UlvD7P> ## Challenges we ran into All team members were unfamiliar with the project's technical stack at the beginning of this competition. The learning curve, coupled with the limited development time, proved to be our biggest challenge. Sorting, tagging and uploading over 10,000 images for our computer vision model was also a challenge on its own, which required a fair amount of scripting. ## Accomplishments that we're proud of All members of our team undertook a large part of this project, working with tech we were not familiar with, and managed to complete a project within 24 hours! On top of this, we feel that the product we created could be used to help educate and inform millions of people worldwide who may not have access to dermatologists or skin care professionals. ## What we learned We would struggle to fit everything we learned over the past 24 hours onto one page, but we feel the biggest lessons were that... * You should always focus on the minimum viable product when time is limited * It's more fun to work on projects you're excited about * Wear more sunscreen, skin care is no joke! ## What's next for Möle Training our model with larger datasets to improve accuracy, and fine-tuning the interface to create a more seamless user experience. We also intend to better integrate users' age, sex, and geographical location into the analysis in order to produce more accurate results.
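The writeup mentions that tagging and uploading 10,000+ images "required a fair amount of scripting." As a hedged illustration (not the team's script), this is roughly what a batch upload to Azure Custom Vision could look like; the endpoint, key, project ID, and CSV layout are placeholders, and the HAM10000 column names are assumed from the public metadata file.

```python
# Sketch of tagging and batch-uploading HAM10000 images to Azure Custom Vision.
# Assumes the azure-cognitiveservices-vision-customvision package.
import csv
from pathlib import Path
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry)

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
trainer = CustomVisionTrainingClient(
    ENDPOINT, ApiKeyCredentials(in_headers={"Training-key": "<training-key>"}))
PROJECT_ID = "<project-id>"  # placeholder

# HAM10000 metadata maps image_id -> diagnosis code (e.g. "mel", "nv", "bcc", ...)
labels = {row["image_id"]: row["dx"]
          for row in csv.DictReader(open("HAM10000_metadata.csv"))}
tags = {dx: trainer.create_tag(PROJECT_ID, dx) for dx in set(labels.values())}

entries = [
    ImageFileCreateEntry(name=path.name,
                         contents=path.read_bytes(),
                         tag_ids=[tags[labels[path.stem]].id])
    for path in Path("images").glob("*.jpg") if path.stem in labels
]

# Custom Vision accepts at most 64 images per upload call, hence the batching.
for i in range(0, len(entries), 64):
    result = trainer.create_images_from_files(
        PROJECT_ID, ImageFileCreateBatch(images=entries[i:i + 64]))
    if not result.is_batch_successful:
        print(f"Batch {i // 64} had failures")
```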
## Inspiration The inspiration for this app arose from two key insights about medical education. 1. Medicine is inherently interdisciplinary. For example, in fields like dermatology, pattern recognition plays a vital role in diagnosis. Previous studies have shown that incorporating techniques from other fields, such as art analysis, can enhance these skills, highlighting the benefits of cross-disciplinary approaches. Additionally, with the rapid advancement of AI, which has its roots in pattern recognition, there is a tremendous opportunity to revolutionize medical training. 2. Traditional methods like textbooks and static images often lack the interactivity and personalized feedback needed to develop diagnostic skills effectively. Current education emphasizes knowledge of various diagnostic features, but not the ability to recognize those features. This app was designed to address these gaps, creating a dynamic, tech-driven solution to better prepare medical students for the complexities of real-world practice. ## What it does This app provides an interactive learning platform for medical students, focusing on dermatological diagnosis. It presents users with real-world images of skin conditions and challenges them to make a diagnosis. After each attempt, the app delivers personalized feedback, explaining the reasoning behind the correct answer, whether the diagnosis was accurate or not. By emphasizing pattern recognition and critical thinking, together with a comprehensive dataset of over 400,000 images, the app helps students refine their diagnostic skills in a hands-on manner. With its ability to adapt to individual performance, the app ensures a tailored learning experience, making it an effective tool for bridging the gap between theoretical knowledge and clinical application. ## How we built it To build the app, we utilized a variety of tools and technologies across both the frontend and backend. On the frontend, we implemented React with TypeScript and styled the interface using TailwindCSS. To track user progress in real time, we integrated the Recharts library for React, allowing us to display interactive statistical visualizations. Axios was employed to handle requests and responses between the frontend and backend, ensuring smooth communication. On the backend, we used Python with Pandas, scikit-learn, and NumPy to create a machine learning model capable of identifying key factors for diagnosis. Additionally, we integrated OpenAI's API with Flask to generate large language model (LLM) responses from user input, making the app highly interactive and responsive. ## Challenges we ran into One of the primary challenges we encountered was integrating OpenAI's API to deliver real-time feedback to users, which was critical for the app's personalized learning experience. Navigating the complexities of API communication and ensuring seamless functionality required significant troubleshooting. Additionally, learning how to use Flask to connect the frontend and backend posed another challenge, as some team members were unfamiliar with this framework. This required us to invest time in researching and experimenting with different approaches to ensure proper integration and communication between the app's components. ## Accomplishments that we're proud of We are particularly proud of successfully completing our first hackathon, where we built this app from concept to execution.
Despite being new to many of the technologies involved, we developed a full-stack application, learning the theory and implementation of tools like Flask and OpenAI's API along the way. Another accomplishment was our ability to work as a cohesive team, bringing together members from diverse, interdisciplinary backgrounds, both in general interests and in past CS experience. This collaborative effort allowed us to combine different skill sets and perspectives to create a functional and innovative app that addresses key gaps in medical education. ## What we learned Throughout the development of this app, we learned the importance of interdisciplinary collaboration. By combining medical knowledge, AI, and software development, we were able to create a more effective and engaging tool than any one field could produce alone. We also gained a deeper understanding of the technical challenges that come with working on large datasets and implementing adaptive feedback systems. ## What's next for DermaDrill Looking ahead, there are many areas our app can expand into. With AI identifying the reasoning behind a given diagnosis, we can explore diagnostic assistance, where AI flags areas that may be abnormal in order to support clinical decision-making and give physicians another tool. Furthermore, we can apply a similar identification and feedback system to other image-based diagnostic fields, such as radiology or pathology. Future versions of such an app can enhance clinical diagnostic abilities while acknowledging the complexities of real-world practice.
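The Flask-plus-OpenAI feedback loop described in DermaDrill's "How we built it" could be sketched as a single route, shown below. The route name, JSON fields, model, and tutor prompt are assumptions for illustration, not the project's actual API.

```python
# Hedged sketch of a Flask route returning LLM feedback on a student's diagnosis attempt.
# Assumes the `flask` and `openai` packages and an OPENAI_API_KEY environment variable.
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()


@app.post("/api/feedback")  # hypothetical endpoint name
def feedback():
    data = request.get_json()
    prompt = (
        f"A medical student looked at an image of {data['true_diagnosis']} "
        f"and answered '{data['student_answer']}'. Explain which visual features "
        "support the correct diagnosis and where the student's reasoning may have gone wrong."
    )
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": "You are a dermatology tutor."},
                  {"role": "user", "content": prompt}],
    )
    return jsonify({"feedback": chat.choices[0].message.content})


if __name__ == "__main__":
    app.run(debug=True)
```

A frontend like the React client described above would then POST the student's answer and render the returned `feedback` string.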
partial
## Inspiration When we joined the hackathon, we began brainstorming about problems in our lives. After discussing constant struggles with many friends and family members, one answer kept coming up: health. Interestingly, one of the biggest health concerns that affects everyone comes from their *skin*. Even though the skin is the largest organ in the body and is the first thing everyone notices, it is also the most neglected part of the body. As a result, we decided to create a user-friendly multi-modal model that can identify skin discomfort from a simple picture. Then, through accessible communication with a dermatologist-like chatbot, users can receive recommendations, such as specific types of sunscreen or over-the-counter medications. Especially for families that struggle with insurance costs or with finding the time to go and wait for a doctor, it is an accessible way to immediately understand the blemishes that appear on one's skin. ## What it does The app is a skin-detection model that detects skin diseases from pictures. Through a multi-modal neural network trained on thousands of data entries from actual patients, we attempt to identify the disease. We then provide users with information on their condition, recommendations on how to treat it (such as using specific SPF sunscreen or over-the-counter medications), and finally their nearest pharmacies and hospitals. ## How we built it Our project, SkinSkan, was built through a systematic engineering process to create a user-friendly app for early detection of skin conditions. Initially, we researched publicly available datasets that included treatment recommendations for various skin diseases. We implemented a multi-modal neural network model after finding a diverse dataset covering roughly 2,000 patients with multiple diseases. Through a combination of convolutional neural networks (ResNet) and feed-forward neural networks, we created a comprehensive model incorporating clinical and image datasets to predict possible skin conditions. Furthermore, to make customer interaction seamless, we implemented a chatbot using GPT-4o from the OpenAI API to provide users with accurate and tailored medical recommendations. By developing a robust multi-modal model capable of diagnosing skin conditions from images and user-provided symptoms, we are making strides toward making personalized medicine a reality. ## Challenges we ran into The first challenge we faced was finding appropriate data. Most of the data we encountered was not comprehensive enough and did not include recommendations for skin diseases. The data we ultimately used was from Google Cloud, which included the dermatology labels and weighted dermatology labels. We also encountered overfitting on the training set, so we experimented with the number of epochs, cropped the input images, and used ResNet layers to improve accuracy. We chose the best number of epochs by plotting the loss vs. epoch and accuracy vs. epoch graphs. Another challenge was working with the free Google Colab TPU, which we resolved by switching between devices. Last but not least, we had problems with our chatbot outputting random text and hallucinating in response to specific prompts; we fixed this by grounding its output in the information that the user gave. ## Accomplishments that we're proud of We are all proud of the model we trained and put together, as this project had many moving parts.
This experience has had its fair share of learning moments and changes in direction. However, through many discussions about exactly how to adequately address the problem, and by supporting each other, we came up with a solution. Additionally, over the past 24 hours, we've learned a lot about thinking quickly on our feet and moving forward. Last but not least, we've all bonded so much with each other through these past 24 hours. We've seen each other struggle and grow; this experience has been truly gratifying. ## What we learned One of the things we learned from this experience was how to use prompt engineering effectively and ground an AI model in user-provided information. We also learned how to incorporate multi-modal data into a generalized convolutional and feed-forward neural network. In general, we gained more hands-on experience working with RESTful APIs. Overall, this experience was incredible. Not only did we elevate our knowledge and hands-on experience by building a comprehensive model like SkinSkan, we were also able to work on a real-world problem. From learning more about the intricate heterogeneities of various skin conditions to giving skincare recommendations, we were able to try the app on our own skin and on several of our friends' skin using a simple smartphone camera to validate the performance of the model. It's so gratifying to see something we built being put to use and benefiting people. ## What's next for SkinSkan We are incredibly excited about the future of SkinSkan. By expanding the model to incorporate more minute details of the skin and detect more subtle and milder conditions, SkinSkan will be able to help hundreds of people detect conditions that they may have ignored. Furthermore, by incorporating medical and family history alongside genetic background, our model could become a viable tool that hospitals around the world use to direct patients to the right treatment plan. Lastly, we hope to form partnerships with skincare and dermatology companies to expand the accessibility of our services for people of all backgrounds.
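The "How we built it" section above describes fusing a ResNet image branch with a feed-forward branch over clinical features. A minimal PyTorch sketch of that architecture pattern is below; the layer sizes, feature counts, and class count are assumptions, not the team's actual configuration.

```python
# Sketch of a multi-modal classifier: ResNet image embedding + clinical feature MLP.
# Assumes torch and torchvision are installed.
import torch
import torch.nn as nn
from torchvision import models


class MultiModalSkinNet(nn.Module):
    def __init__(self, n_clinical: int, n_classes: int):
        super().__init__()
        self.cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.cnn.fc = nn.Identity()  # keep the 512-dim image embedding
        self.clinical = nn.Sequential(  # feed-forward branch for symptoms/metadata
            nn.Linear(n_clinical, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU())
        self.head = nn.Sequential(  # fused classification head
            nn.Linear(512 + 32, 128), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(128, n_classes))

    def forward(self, image, clinical):
        fused = torch.cat([self.cnn(image), self.clinical(clinical)], dim=1)
        return self.head(fused)


model = MultiModalSkinNet(n_clinical=10, n_classes=5)  # illustrative sizes
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 10))
print(logits.shape)  # torch.Size([2, 5])
```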
## Inspiration In 2012, infants and newborns in the U.S. made up 73% of hospital stays and 57.9% of hospital costs, which adds up to $21,654.6 million. As a group of students eager to make a change in the healthcare industry using machine learning software, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years, which further solidified our mission to tackle this problem at its core. ## What it does Our software uses a website with user authentication to collect data about an infant. This data covers factors such as temperature, time of last meal, fluid intake, etc. The data is then pushed onto a MySQL server and fetched by a remote device using a Python script. After loading the data onto a local machine, it is passed into a linear regression machine learning model which outputs the probability of the infant requiring medical attention. Analysis results from the ML model are passed back to the website, where they are displayed through graphs and other means of data visualization. The resulting dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model's result; this iterative process trains the model over time. The process aims to ease the stress on parents and ensure that those who seriously need medical attention are the ones receiving it. Alongside streamlining the procedure, the product also decreases hospital costs, thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user: using a hyper-secure combination of the user's data, we gave each patient a way to receive the status of their infant's evaluation from our AI and doctor verification. ## Challenges we ran into At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it had already been done, so we were challenged to think of something new with the same amount of complexity. As first-year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered, and with the help of mentors and peers we were able to make a fully functional product. As a team, we were able to pick up ML concepts and databasing at an accelerated pace. We were challenged as students, upcoming engineers, and people, and our ability to push through and deliver results was shown over the course of this hackathon. ## Accomplishments that we're proud of We're proud of our functional database that can be accessed from a remote device. The ML algorithm, Python script, and website were all commendable achievements for us. These components on their own are fairly useless; our biggest accomplishment was interfacing all of them with one another and creating an overall user experience that delivers in performance and results. Using SHA-256, we securely gave each user a unique, practically irreversible hash that lets them check the status of their evaluation. ## What we learned We learnt about important concepts in neural networks using TensorFlow and the inner workings of the HTML behind a website. We also learnt how to set up a server and configure it for remote access, and we learned a lot about how cyber-security plays a crucial role in the information technology industry.
This opportunity allowed us to connect on a more personal level with the users around us, enabling us to create a more reliable and user-friendly interface. ## What's next for InfantXpert We're looking to develop a mobile application for iOS and Android. We'd like to provide this as a free service so everyone can access the application regardless of their financial status.
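The SHA-256 lookup token described above (a "hyper-secure combination of the user's data" that lets parents check their infant's evaluation status) could look roughly like the sketch below. The field names and the salt are placeholders; InfantXpert's actual scheme is not published in the writeup.

```python
# Hedged sketch of deriving an irreversible status-lookup token from user data.
import hashlib


def status_token(email: str, infant_id: str, salt: str = "app-secret-salt") -> str:
    """Combine user data into one string and hash it; the digest is practically irreversible."""
    raw = f"{salt}:{email.lower()}:{infant_id}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()


token = status_token("parent@example.com", "infant-42")
print(token)  # a parent would use this token to look up the infant's evaluation status
```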
## Inspiration Our inspiration came from the danger of skin-related diseases, along with the rising costs of medical care. DermaFix not only provides a free alternative for those who can't afford to visit a doctor, but also provides real-time diagnosis. ## What it does Scans and analyzes the user's skin, determining if the user has any sort of skin disease. If anything is detected, possible remedies are provided, along with a Google Map displaying nearby places to get treatment. ## How we built it We learned to create a Flask application, using HTML, CSS, and JavaScript to develop the front end. We used TensorFlow to train an image-classification machine learning model to differentiate between clear skin and 20 skin diseases. ## Challenges we ran into Fine-tuning the image-classification model to be accurate at least 85% of the time. ## Accomplishments that we're proud of Creating a model that is accurate 95% of the time. ## What we learned HTML, CSS, Flask, TensorFlow ## What's next for DermaFix Using a larger dataset for a much more accurate diagnosis, along with more APIs, in order to contact nearby doctors and automatically set appointments for those who need them
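The DermaFix writeup mentions fine-tuning a TensorFlow image classifier over clear skin plus 20 conditions. A hedged transfer-learning sketch of that kind of setup is below; the dataset directory layout, backbone choice, and hyperparameters are assumptions, not the team's actual training code.

```python
# Sketch of fine-tuning a pretrained backbone for a 21-class skin classifier.
# Assumes TensorFlow 2.x and images organised as data/train/<class_name>/*.jpg.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained backbone first

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(21, activation="softmax"),  # clear skin + 20 conditions
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```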
winning
## Inspiration: Global warming is a very big problem for the world. It has to be tackled correctly, or else it may lead to very bad outcomes. ## What it does: Predicts the effects global warming will have on the world for a given year in the future. ## How we built it: Using C++. ## Challenges we ran into: A large amount of data had to be analyzed. ## Accomplishments that we're proud of: Our analytical skills. ## What we learned: Global warming will cause huge losses for the planet; it has to be tackled carefully. ## What's next for Global Warming effects predictor: Innovative ways to control climate change.
## Inspiration The awesome campuses of Berkeley (Go Bears!) and Stanford + a love for AR and computer vision shared by the both of us. ## What it does An early-stage prototype of a Microsoft HoloLens app that allows wearers to access information "tagged" onto notable buildings, art installations, historical landmarks, etc., through Custom Vision AI and a holographic windowed display. ## How we built it After a full day's worth of brainstorming and research, we decided to build a project that would be applicable to the real world while applying AR and computer vision tech. Previous experience working with Unity, the Microsoft HoloLens 1, .NET frameworks, and machine learning, in conjunction with consultation from mentors from Magic Leap and Microsoft, allowed us to perfect our ideas and our plan of action. The first step to making this project possible was training a model on various buildings and sculptures from around the Stanford and Berkeley campuses, while also creating a basic mixed reality Unity project. Afterward, we started integrating the mixed reality platform with the Azure dataset/model, which took up the bulk of our time. ## Challenges we ran into Although we were able to create such a model and Unity project, documentation for many of the Unity libraries containing valuable classes and methods was outdated or missing on the Web. ## Accomplishments that we're proud of We had very high success rates when testing our Custom Vision AI model; even before training it with nighttime and rotated images, it was able to recognize the buildings and landmarks with very high (90% or higher) accuracy. ## What we learned We picked up many valuable skills in utilizing Microsoft Azure, .NET frameworks, and MRTK 2, and in handling multiple forms of multi-platform documentation. ## What's next for TourgEYEd We are planning to fully integrate the output of the Azure model, returned as a JSON string, so that it can be read by a C# script on a Unity GameObject and interact with the augmented reality scripts developed on the Unity side of the project. In addition, we plan to develop accessibility tools for VIPs (Visually-Impaired Persons) with voice recognition software and text-to-speech guidance of tours.
## Inspiration Our inspiration for TRACY came from the desire to enhance tennis training through advanced technology. One of our members is a former tennis enthusiast who has always strived to refine their skills, and they realized that the post-game analysis process took too much time in their busy schedule. We aimed to create a system that not only analyzes gameplay but also provides personalized insights for players to improve their skills. ## What it does and how we built it TRACY utilizes computer vision algorithms and pre-trained neural networks to analyze tennis footage, tracking player movements and ball trajectories. The system then employs ChatGPT for AI-driven insights, generating personalized natural-language summaries highlighting players' strengths and weaknesses. The output includes dynamic visuals and statistical data rendered with React.js, offering a comprehensive overview and further insights into the player's performance. ## Challenges we ran into Developing a seamless integration between computer vision, ChatGPT, and real-time video analysis posed several challenges. Ensuring accuracy in 2D ball tracking from a single camera angle, optimizing processing speed, and fine-tuning the algorithm for accurate tracking were key hurdles we overcame during the development process. Estimating the depth of the ball was difficult because we were limited to one camera angle, but we were able to tackle it using machine learning techniques. ## Accomplishments that we're proud of We are proud to have successfully created TRACY, a system that brings together state-of-the-art technologies to provide valuable insights to tennis players. Achieving a balance between accuracy, speed, and interpretability was a significant accomplishment for our team. ## What we learned Through the development of TRACY, we gained valuable insights into the complexities of integrating computer vision with natural language processing. We also deepened our understanding of the challenges involved in real-time analysis of sports footage and the importance of providing actionable insights to users. ## What's next for TRACY Looking ahead, we plan to further refine TRACY by incorporating user feedback and expanding the range of insights it can offer. Additionally, we aim to explore potential collaborations with tennis coaches and players to tailor the system to the diverse needs of the tennis community.
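TRACY's actual pipeline uses pre-trained neural networks, which the writeup doesn't detail; purely as an illustration of per-frame 2D ball tracking from a single camera, here is a much simpler OpenCV colour-thresholding sketch. The HSV range is a guess for a yellow tennis ball and the video path is a placeholder.

```python
# Minimal OpenCV sketch of 2D ball tracking by colour thresholding (not TRACY's method).
import cv2

LOWER, UPPER = (25, 80, 80), (45, 255, 255)  # assumed HSV range for "tennis ball yellow"

cap = cv2.VideoCapture("match.mp4")  # placeholder input video
trajectory = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)      # assume the largest blob is the ball
        (x, y), radius = cv2.minEnclosingCircle(c)
        if radius > 3:                              # ignore specks of noise
            trajectory.append((int(x), int(y)))
cap.release()
print(f"Tracked the ball in {len(trajectory)} frames")
```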
losing
## MoodBox ### Smart DJ'ing using Facial Recognition You're hosting a party with your friends. You want to play the hippest music, and you’re scared of your friends judging you for your taste in music. You ask your friends what songs they want to listen to… And only one person replies with that one Bruno Mars song that you’re all sick of listening to. Well fear not: with MoodBox you can now set a mood, and our app will intelligently select the best songs from your friends’ public playlists! ### What it looks like You set up your laptop on the side of the room so that it has a good view of the room. Create an empty playlist for your party; this playlist will contain all the songs for the night. Run our script with that playlist, then sit back and relax. Feel free to adjust the level of hypeness as your party progresses: increase the hype as the party hits the drop, then make your songs more chill as the night winds down into the morning. It’s as simple as adjusting a slider in our dank UI. ### Behind the scenes We used Python's `facial_recognition` package, built on the `opencv` library, to implement facial recognition on ourselves. We keep a map from our facial features to Spotify user IDs, which we use to find each person's saved songs, and we use the `spotipy` package to manipulate the playlist in real time. Once we find a new face in the frame, we first read the current mood from the slider, then find the songs in that user's public library that best match the mood set by the host. Once someone has been out of the frame for long enough, they get removed from our buffer, and their songs get removed from the playlist. This also ensures that the playlist is empty at the end of the party, and everyone goes home happy.
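The playlist-manipulation side described above can be sketched with `spotipy` as below. This is a hedged illustration: the mood slider is reduced to a 0-1 "hype" value matched against Spotify's energy audio feature, and the user and playlist IDs are placeholders, not MoodBox's actual logic.

```python
# Sketch of updating a party playlist from a guest's public playlists with spotipy.
import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    scope="playlist-modify-public playlist-modify-private"))


def guest_track_ids(user_id, limit=50):
    """Collect track IDs from a guest's public playlists."""
    ids = []
    for playlist in sp.user_playlists(user_id)["items"]:
        for item in sp.playlist_items(playlist["id"], limit=50)["items"]:
            if item.get("track") and item["track"].get("id"):
                ids.append(item["track"]["id"])
            if len(ids) >= limit:
                return ids
    return ids


def add_songs_for_guest(user_id, party_playlist_id, hype, n=5):
    """Pick the guest's tracks whose energy is closest to the hype slider value."""
    ids = guest_track_ids(user_id)
    feats = [f for f in sp.audio_features(ids[:100]) if f]  # 100-track limit per call
    best = sorted(feats, key=lambda f: abs(f["energy"] - hype))[:n]
    sp.playlist_add_items(party_playlist_id, [f["id"] for f in best])


add_songs_for_guest("spotify_user_123", "party_playlist_id", hype=0.8)  # placeholder IDs
```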
## Overview We created a Smart Glasses hack called SmartEQ! This unique hack leverages the power of machine learning and facial recognition to determine the emotion of the person you are talking to and conducts sentiment analysis on your conversation, all in real time! Since we couldn’t get a pair of Smart Glasses (*hint hint* MLH Hardware Lab?), we created our MVP using a microphone and webcam, acting just like the camera and mic on a set of Smart Glasses would. ## Inspiration Millions of children around the world have Autism Spectrum Disorder, and for these kids it can be difficult to understand the emotions of the people around them. This can make it very hard to make friends and get along with their peers, which in turn negatively impacts their well-being and can lead to issues including depression. We wanted to help these children understand others’ emotions and enable them to create long-lasting friendships. We learned about research studies that wanted to use technology to help kids on the autism spectrum understand the emotions of others and thought, hey, let’s see if we could use our weekend to build something that can help out! ## What it does SmartEQ determines the mood of the person in frame based on their facial expressions and sentiment analysis of their speech. SmartEQ then determines the most probable emotion from the image analysis and the sentiment analysis of the conversation, and provides a percentage of confidence in its answer. SmartEQ helps a child on the autism spectrum better understand the emotions of the person they are conversing with. ## How we built it The lovely front end you are seeing on screen is built with React.js, and the back end is a Python Flask server. For the machine learning predictions we used a whole bunch of Microsoft Azure Cognitive Services APIs, including speech-to-text from the microphone, sentiment analysis on that text, and the Face API to predict the emotion of the person in frame. ## Challenges we ran into Newton and Max came to QHacks as a confident duo with the initial challenge of snagging some more teammates to hack with! For Lee and Zarif, this was their first hackathon and they both came solo. Together, we ended up forming a pretty awesome team :D. But that’s not to say everything went as perfectly as our new-found friendships did. Newton and Lee built the front end while Max and Zarif built out the back end, and as you may have guessed, when we went to connect our code together, just about everything went wrong. We kept hitting the maximum number of Azure requests that our free accounts permitted, encountered very weird Socket.IO bugs that made our entire hack break, and had to make sure Max didn’t drink fewer than 5 Red Bulls per hour. ## Accomplishments that we're proud of We all worked with technologies that we were not familiar with, and so we were able to learn a lot while developing a functional prototype. We synthesised different forms of machine learning by integrating speech-to-text technology with sentiment analysis, which let us detect a person’s emotions just from their spoken words. We used both facial recognition and the aforementioned sentiment analysis to develop a holistic approach to interpreting a person’s emotions. We used Socket.IO to create real-time input and output data streams to maximize efficiency. ## What we learned We learnt about WebSockets, how to develop a web app using them, and how to debug WebSocket errors.
We also learnt how to harness the power of Microsoft Azure's machine learning and Cognitive Services libraries. We learnt that "cat" has a positive sentiment score and "dog" has a neutral score, which makes no sense whatsoever, because dogs are definitely way cuter than cats. (Zarif strongly disagrees) ## What's next for SmartEQ We would deploy our hack onto real Smart Glasses :D. This would let us take our tech into the real world, first with small sample groups to figure out what works and what doesn't, and after we smooth out the kinks, we could publish it as an added technology for Smart Glasses. This app would also be useful for people with social-emotional agnosia, a condition, sometimes caused by brain trauma, that can leave them unable to process facial expressions. In addition, this technology has many other cool applications! For example, we could make it into a widely used company app that employees can integrate into their online meeting tools. This is especially valuable for HR managers, who could monitor their employees' emotional well-being at work and implement initiatives to help if their employees are not happy.
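SmartEQ combines facial-emotion scores with text sentiment into one most-probable emotion plus a confidence percentage. The Azure calls themselves are omitted here; the sketch below only illustrates one possible fusion step, and the weighting scheme and emotion mapping are assumptions, not the team's exact formula.

```python
# Hedged sketch of fusing facial-emotion scores with a 0-1 sentiment score.
def fuse_emotion(face_scores: dict, sentiment: float, face_weight: float = 0.7):
    """face_scores: emotion -> probability (sums to ~1); sentiment: 0 = negative, 1 = positive."""
    # Map the single sentiment number onto a tiny emotion distribution.
    text_scores = {
        "happiness": sentiment,
        "neutral": 1 - abs(sentiment - 0.5) * 2,
        "sadness": 1 - sentiment,
    }
    emotions = set(face_scores) | set(text_scores)
    fused = {e: face_weight * face_scores.get(e, 0.0)
                + (1 - face_weight) * text_scores.get(e, 0.0)
             for e in emotions}
    total = sum(fused.values()) or 1.0
    best = max(fused, key=fused.get)
    return best, round(100 * fused[best] / total, 1)


face = {"happiness": 0.55, "neutral": 0.30, "sadness": 0.10, "anger": 0.05}
print(fuse_emotion(face, sentiment=0.8))  # ('happiness', 55.8)
```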
## Inspiration I wanted to make a fun experience with an unconventional search method, and what I came up with was making a moodboard, complete with GIFs and a Spotify playlist, all depending on what your current facial expression is. ## What it does The user clicks to take a picture of their current facial expression; the server then takes the photo and passes it to the Google Cloud Vision API, where the facial emotions are analyzed and returned to the server for evaluation. Depending on the detected mood, a GIF board and Spotify playlist are selected to match it. ## How I built it I used Node.js to build the backend, running an Express server and using the Giphy, Google Cloud Vision, and Spotify Web API wrappers. On the frontend, it's HTML, JS, and CSS, with a Pug template as well so I can pass variables from the backend to the frontend. ## Challenges I ran into A challenge I ran into was the limited set of emotions that Google Cloud Vision analyzes, so I had to get creative and use some feature detection to recognize additional emotions that I programmed myself. Also, the sheer number of callbacks I had to deal with made my life a little more miserable. ## Accomplishments that I'm proud of First and foremost, I am proud of being able to use the Google Cloud Vision API to add some machine learning/computer vision to my project. I am also proud of the integration of Giphy and Spotify for a true multimedia experience that is very fun and easy to use. ## What I learned I learned how to use Google Cloud Platform, and how machine learning is used to analyze facial expressions through computer vision and advanced mathematics. ## What's next for Big Mood Analyzer A switch to invert some of the moods, so if you are sad, it will show happy things, or if you're tired, it will show things to energize you.
winning
## Inspiration As a group of indecisive people, we and our friends have always found it difficult to plan an outing. What we want to do and where we want to go are both questions that are met with people saying "anywhere" or "I don't care". There are many factors we need to consider when picking a spot, and the endless choices often end in frustration and disagreements. We would scroll through Google, Google Maps, or Yelp trying to find options that match what we want to do and where we live, with no success. We wanted something that could weigh these factors for us, and thus Bonfire was created. ## What it does Bonfire is a web app designed to streamline planning your next outing with friends. We automatically gather a list of local destinations according to your desired search parameters, including activity type and maximum travel distance. With Bonfire, you can invite friends and figure out where to go easily and quickly through the cards we created for the voting process. We strive to provide our users with both quality and quantity when picking their next outing. ## How we built it The UI was first designed in Figma and then served through a React.js frontend. When a new room is made, the settings are sent to a Node.js server which creates a unique 4-character code to identify your room. The Node server then queries the Google Places API, using your location information from the Geolocation API, activity options, and maximum distance from the host, to generate a list of nearby locations that match your desired filters. Each location is run through the Yelp API to fetch ratings and reviews that each user can read while voting. The location data is stored in a secure CockroachDB database and is queried and sent to a user whenever they join a room using the 4-character code. The users' votes are recorded and stored through CockroachDB. Finally, whenever someone visits the results page, the voting results are queried from CockroachDB and organized on a podium. Since a database stores the results, this page does not expire. ## Challenges we ran into Picking an idea; getting the front end and back end to work together; running out of time. ## Accomplishments that we're proud of Jennifer's proud that she learned how to design the entire UI using Figma. Kevin's proud that he learned how to use "sequel" and that he finally wrote real code during a hackathon. Arthur's proud that he figured out how to interface with Google Places and Yelp and that he speedran editing a video. Lavan's proud that he took a power nap instead of pulling an all-nighter (oh, and he learned how to use hooks and make API calls in React). ## What we learned Even when you think you're almost done, you are not almost done. ## What's next for Bonfire Expanding the reviews and displaying more images of each location.
## Inspiration We were inspired by the conversation we had with one of the Kensho developers. He told us about how he studies the correlation between seemingly random pairs of events and the stock market, which led us to ask the same question in a more physical sense. We wondered: what are the similarly random patterns for a person in their day-to-day travels? ## What it does The app serves two purposes: one for the consumer and one for the business owner. Foremost, the user can see which places are being actively visited based on other users' activities; these are the "hot spots". Furthermore, the user can also look at each business's most common successor (where an average user will most likely go next) in order to get valuable suggestions for new points of entertainment or businesses. From a business owner's standpoint, they can see quantitative results about the net user population in order to track the flow of their customers. This allows a business owner to change practices based on the tendencies of their customers. ## How I built it The app was built for iOS in Swift. It regularly posts the user's location to a Node server that triggers MongoDB queries. Also, when first starting up, the app requests a general lay of the land by making a GET request to the Node server for 18 nearby points of interest. These 18 nearby points are found through the Google Maps API. All of the "hotness" data and closest-neighbor suggestions in the app stem from this API request. Finally, the website uses other API endpoints to get basic data in order to plot an undirected graph. ## Challenges I ran into The implementation of getting data from Google was somewhat sloppy. This, along with the fact that we needed to make a high volume of requests, led to occasionally maxing out the Google Maps API limit. Constructing the graph also proved to be quite challenging. Finally, creating mock data was horribly tedious due to the close relationships within the data; an SQL DB would have worked much better. (PS: This did not need async JavaScript.) ## Accomplishments that I'm proud of The interface of the app looks very professional and clean, and the concept seemed legitimate. ## What I learned We learned that the typical MEAN stack (or its other expressions) may not always function the best. For example, a PHP + MySQL combination would have fulfilled the role much better; however, we were limited by the fact that our best iOS dev is also our best PHP dev. On a more specific note, we learned that Google's Maps API can only handle 20 requests at one time. This highlights the importance of designing smart algorithms that minimize HTTP requests. ## What's next for HotSpot We would like to implement this on a large scale where the number of businesses is not limited. We believe this will do the best job of answering our initial question of whether patterns exist in a person's day-to-day travels, because most people move farther than a small, multiple-block radius. Also, some notion of machine learning could benefit the analysis of this data for the business owner substantially.
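The "most common successor" idea in HotSpot (which place most often follows each place in users' visit histories) can be illustrated with a small counting sketch. The visit data below is made up purely for the example; it is not from the project.

```python
# Sketch of computing each place's most common successor from ordered visit histories.
from collections import Counter, defaultdict

visits = {  # user -> ordered list of visited place names (illustrative data)
    "alice": ["Cafe A", "Bookstore", "Cafe A", "Park"],
    "bob":   ["Cafe A", "Park", "Cafe A", "Bookstore"],
}

successors = defaultdict(Counter)
for history in visits.values():
    for here, nxt in zip(history, history[1:]):
        successors[here][nxt] += 1  # count each observed transition

for place, counts in successors.items():
    best, n = counts.most_common(1)[0]
    print(f"After {place}, users most often go to {best} ({n} times)")
```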
<https://www.townsquares.tech> Discord Usernames: `jkarbi#1190`, `Leland#1463`, `Dalton#6802` Discord Team Name: `Team 13`, channel `team-13-text` ## Inspiration Traditionally, citizens write to city councillors or stage protests when they are unhappy with how their government is acting. Nowadays, citizens can use social media to express their opinions, but the many voices make platforms crowded and messages can get lost. Ever wondered if there was a better way? That's why we built TownSquares. ## What it does TownSquares lets anyone ask their community for its opinion by creating GPS-based polls. **Polls are locked to GPS coordinates** and can only be **answered by community members within a set radius**. Polls can be used to **inspire change in a community** by making the voice of the people heard loud and clear. Not happy with how a city service is being delivered in your community? Post a poll on TownSquares and see if your neighbours agree. Then use the results to get the attention of your representatives in government! ## How we built it Tech stack: **MEAN (MongoDB, Express.js, Angular, Node.js)**. The **Mapbox API** is used to display a map and the poll locations. The backend is deployed on **Google Cloud** using **App Engine**, with **MongoDB** running as a shared cluster on MongoDB Atlas. ## Challenges we ran into Deploying the app on GCP and mapping it to a custom domain name. Working with Angular, since we had limited frontend development experience. ## Accomplishments that we're proud of We came into this hackathon with a plan for what we were going to build and which components of the project we would each be responsible for. That really set us up for success, and is something we are really proud of! ## What we learned Deployment using GCP App Engine and mapping to custom domain names, integrating with Mapbox, and frontend development with Angular! ## What's next for TownSquares We hope to continue working on this following the hackathon because we think it could really be popular!! We know there's more for us to build and we're excited to do that :).
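The GPS lock at the heart of TownSquares, only letting users within a set radius answer a poll, boils down to a great-circle distance check. TownSquares' backend is Node.js (MEAN stack); the Python sketch below is purely illustrative, and the poll data shown is made up for the example.

```python
# Hedged sketch of a radius check for GPS-locked polls using the haversine formula.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in metres."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))


def can_answer(poll, user_lat, user_lon):
    """A user may answer only if they are inside the poll's radius."""
    return haversine_m(poll["lat"], poll["lon"], user_lat, user_lon) <= poll["radius_m"]


poll = {"lat": 43.6532, "lon": -79.3832, "radius_m": 1500}  # an example poll near downtown Toronto
print(can_answer(poll, 43.6452, -79.3806))  # True: the user is roughly 0.9 km away
```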
losing