# MaTricks - The Matrix Editing App for iOS

### TreeHacks Project 2017

---

### Built by Patrick Revilla, Renner Lucena, Tai Kao-Sowa, Michael Karr

---

The world of matrices and linear algebra can be fascinating, but it is also perplexing for students who are still developing their abstract reasoning. MaTricks (MAY-tricks) is an iOS app written in Swift designed to help students learn about matrices and their operations. In MaTricks, users can input custom matrices and perform operations on them. Through a popup and text-view interface, users follow prompts to create and save matrices. The app also checks matrix dimensions before operations such as the inverse or the determinant, catching invalid inputs early.
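As a rough illustration of the kind of dimension checking described above (the app itself is written in Swift; this is only a Python/NumPy sketch, and the function names are ours):

```python
import numpy as np

def safe_determinant(m: np.ndarray) -> float:
    """Determinant is only defined for square matrices."""
    rows, cols = m.shape
    if rows != cols:
        raise ValueError(f"Determinant requires a square matrix, got {rows}x{cols}")
    return float(np.linalg.det(m))

def safe_inverse(m: np.ndarray) -> np.ndarray:
    """Inverse requires a square matrix with a non-zero determinant."""
    rows, cols = m.shape
    if rows != cols:
        raise ValueError(f"Inverse requires a square matrix, got {rows}x{cols}")
    if abs(np.linalg.det(m)) < 1e-12:
        raise ValueError("Matrix is singular and cannot be inverted")
    return np.linalg.inv(m)

# Example: a 2x3 matrix fails the dimension check instead of crashing mid-operation.
try:
    safe_determinant(np.array([[1, 2, 3], [4, 5, 6]]))
except ValueError as e:
    print(e)
```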
## Inspiration

We got the inspiration while solving some math questions. We were getting some of them wrong, but couldn't tell which step we were making the mistake in. Online it was even worse: there were only videos, and you had to figure all of the rest out by yourself. The only way to see exactly where you made a mistake was to have a teacher with you. How crazy! So we figured technology could help us solve this, and could even enable us to build a platform that intelligently gives each person the most efficient route of learning, so no time is wasted solving the same things again and again!

## What it does

The app provides you with some questions (currently math) and a drawing area to solve them. While you are solving, the app compares your handwritten solution steps with the correct ones and tells you whether each step is correct or incorrect. Even more, since it also has educational content built in, it can track and show you more of the questions that you got wrong, and even questions that include steps you got wrong while solving other questions.

## How we built it

We built the recognition part using the MyScript math handwriting recognition API, and all the tracking, statistics, and supporting features using Swift, UIKit, and AVFoundation.

## Challenges we ran into

We ran into lots of challenges while building the data models, since each one is interconnected with the others, and all the steps, questions, tags, etc. make up quite a large variety of data. With that variety of data also came a torrent of user interface bugs, and it took *some* perseverance to solve them all as quickly as possible. Probably one of the biggest challenges was dealing with the IDE itself crashing :)

## Accomplishments that we're proud of

We are proud of the data collection and recommendation system that we built from the ground up (entirely in Swift!), and of the UI we built: even though the app doesn't have a large quantity of educational content inside yet, we built it to expand easily as content gets added.

## What we learned

The biggest thing we learned was how to build a data set large enough to give personalized recommendations, and how to divide and conquer it before it gets too complex. We also learned to go beyond what the documentation on the internet offers while debugging, and to work from examples when there is no documentation on how to implement something.

## What's next for Tat

We think that Tat has the potential to redefine education for years to come if we can build more upon it, with more content, more data, and even the possibility of integrating crowd-trained AI.
## Inspiration

Making learning fun for children is harder than ever. Mobile phones have desensitized them to videos and simple app games that intend to teach a concept. We wanted to use projection mapping and computer vision to create an extremely engaging game that utilizes both the physical world and the virtual one. This basic game intends to prep children for natural disasters in an engaging manner. We think a slightly more developed version would be effective in encouraging class participation in places like schools, or even museums and exhibitions, where projection-mapping tech is widely used.

## What it does

The camera scans for markers in the camera image, then uses each marker's position and rotation to create shapes on the canvas. This canvas then undergoes an affine transformation and gets outputted by the projector as if it were an overlay on top of any object situated next to the markers. This means that moving the markers results in these shapes following the markers' positions.

## How the game works

When the game starts, Melvin the Martian needs to prepare for an earthquake. In order to do so you need to build him a path to his first aid kit with your blocks (which you can physically move around, as they are attached to markers). After he gets his first aid kit, you need to build him a table to hide under before the earthquake arrives (again, using any physical objects attached to markers). After he hides, you win!

## How I built it

I began by trying to identify the markers, for which there was an already implemented library that required extensive tuning to get working right. I then made the calibration process, which takes three points from the initial, untransformed camera image and the actual locations of those three points on the projector screen. This automatically creates a transformation matrix that I then applied to every graphic I rendered (e.g. the physical blocks). After this, I made the game, and used the positions of the markers to determine if certain events were satisfied, which decided whether the game would progress or wait until it received the correct input.

## Challenges I ran into

It was very difficult to transform the camera's perspective (which was in a different frame of reference from the projector's) into the projector's perspective. Every camera image had undergone some varying scale, rotation, and translation, which required me to create a calibration program that runs at the start of the program's launch.

## Accomplishments that I'm proud of

Instead of relying wholly on any library, I tried my best to directly manipulate the NumPy matrices in order to achieve the transformation effects referred to previously. I'm also happy that I was able to greatly speed up camera-projector frame calibration, which initially took around 5 minutes and now takes about 15-20 seconds.

## What I learned

I learnt a great deal about affine transformations, including how to decompose a transformation matrix into its scale, rotation, and translation values. I also learnt the drawbacks of using more precise markers (e.g. AprilTags or ArUco tags) as opposed to something much simpler, like an HSV color and shape detector.

## What's next for Earthquake Education With Projection Mapping and CV

I want to automate the calibration process so it requires no user input (which is technically possible, but is prone to error and requires knowledge about the camera being used).
I also want to get rid of the ARUCO tags entirely, and instead use the edges of physical objects to somehow manipulate the virtual world.
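As a rough sketch of the calibration step described above (three camera points mapped to three projector points producing an affine matrix), here is how it could look in Python with OpenCV; the coordinates are made-up placeholders, not the actual calibration values:

```python
import cv2
import numpy as np

# Three reference points as seen in the raw camera image (pixel coordinates).
camera_pts = np.float32([[102, 58], [940, 71], [118, 655]])
# Where those same three points actually land on the projector canvas.
projector_pts = np.float32([[0, 0], [1280, 0], [0, 720]])

# A 2x3 affine matrix that maps camera space into projector space.
M = cv2.getAffineTransform(camera_pts, projector_pts)

def to_projector(point_xy):
    """Map a single (x, y) point detected in the camera frame onto the projector canvas."""
    x, y = point_xy
    px = M @ np.array([x, y, 1.0])
    return int(px[0]), int(px[1])

# Any graphic drawn at a marker's camera position can be rendered at to_projector(pos),
# and cv2.warpAffine(canvas, M, (1280, 720)) applies the same mapping to a whole image.
print(to_projector((500, 300)))
```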
losing
## Inspiration

Our inspiration was to provide a robust and user-friendly financial platform. After hours of brainstorming, we decided to create a decentralized mutual fund on mobile devices. We also wanted to explore new technologies while creating a product with several socially impactful use cases. Our team used React Native and smart contracts along with Celo's SDK to explore blockchain and the many use cases associated with these technologies, including group insurance, financial literacy, and personal investment.

## What it does

It allows users in shared communities to pool their funds and use our platform to easily invest in different stocks and companies they are passionate about, with decreased, shared risk.

## How we built it

* A smart contract for the transfer of funds on the blockchain, written in Solidity.
* A robust backend and authentication system made using Node.js, Express.js, and MongoDB.
* An elegant front end made with React Native and Celo's SDK.

## Challenges we ran into

We were unfamiliar with the tech stack used to create this project and with blockchain technology.

## What we learned

We learned many new languages and frameworks, including building cross-platform mobile apps with React Native and the underlying principles of blockchain technology such as smart contracts and decentralized apps.

## What's next for *PoolNVest*

Expanding our API to select low-risk stocks and allowing the community to vote on where to invest the funds. Refining and improving the proof of concept into a marketable MVP, and tailoring the UI toward the specific use cases mentioned above.
## Inspiration

We wanted to make financial literacy part of everyday life while also bringing in futuristic applications such as augmented reality to really motivate people to learn about finance and business every day. We were looking for a fintech solution that doesn't make financial information accessible only to bankers and the investment community, but also to the young and curious, who can learn in an interesting way through the products they use every day.

## What it does

Our mobile app looks at a company logo, identifies the company, grabs its financial information, recent news, and financial statements, and displays the data in an augmented reality dashboard. Furthermore, we include speech recognition to help those unfamiliar with financial jargon better save and invest.

## How we built it

Built using the Wikitude SDK, which handles augmented reality for mobile applications, together with a mix of financial data APIs and Highcharts and other charting/data-visualization libraries for the dashboard.

## Challenges we ran into

Augmented reality is very hard, especially when combined with image recognition. There were many workarounds and long delays spent debugging near-unknown issues. Moreover, we were building for Android, something none of us had prior experience with, which made it harder.

## Accomplishments that we're proud of

Circumventing challenges as a professional team and maintaining a strong team atmosphere no matter what, to build something that we believe is truly cool and fun to use.

## What we learned

Lots of things about augmented reality, graphics, and Android mobile app development.

## What's next for ARnance

There is potential to build more charts, financials, and better speech/chatbot abilities into our application. We would also like to make the dashboard more interactive, letting users play around with it using their hands once we figure that part out.
## Inspiration

We constantly have friends asking us for investing advice, or ask for investing advice ourselves. We realized how much easier a platform that let people make investments collaboratively would make sharing information between people. We formed this project out of inspiration to solve a problem in our own lives. The word 'Omada' means group in Greek, and we thought it sounded official and got our message across.

## What it does

Our platform allows you to form groups with other people, put your money in a pool, and decide which stocks the group should buy. We use a unanimous voting system to make sure that everyone who has money involved agrees to the investments being made. We also allow searching for stocks and their graphs, as well as individual portfolio analysis.

The way that buying and selling stocks actually works is as follows: say a group has two members, A and B. Person A has $75 on the app and person B has $25, and they agree to buy a stock costing $100. When they sell the stock, person A gets 75% of the revenue and person B gets 25%:

* Person A: $75, Person B: $25
* Buy the stock for $100
* The stock increases to $200, so they sell it
* Person A: $150, Person B: $50

We use a proposal system in order to buy stocks. One person finds a stock that they want to buy with the group and makes a proposal for the type of order, the amount, and the price they want to buy the stock at. The proposal then goes up for a vote. If everyone agrees to purchase the stock, the order is sent to the market. The same process occurs for selling a stock.

## How we built it

We built the webapp using Flask, specifically to handle routing and so that we could use Python for the backend. We used BlackRock for the charts and Nasdaq for live chart updates. Additionally, we used mLab with MongoDB and Azure for our databases, and Azure for cloud hosting. Our frontend is JavaScript, HTML, and CSS.

## Challenges we ran into

We had a hard time initially with routing the app using Flask, as this was our first time using it. Additionally, BlackRock has an insane amount of data, so organizing it, figuring out what we wanted to do with it, and processing it was challenging, but also really fun.

## Accomplishments that we're proud of

We're proud that we got the service working as well as we did! We decided to take on a huge project, which could realistically take months to build in a workplace setting, but we got a lot of features implemented and plan on continuing to work on the project as time moves forward. None of us had ever used Flask, MongoDB, Azure, BlackRock, or Nasdaq before this, so it was really cool getting everything together and working the way it does.

## What's next for Omada

We hope to polish everything off, add features we didn't have time to implement, and start using it ourselves! If we are able to make it work, maybe even publish it!
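A minimal sketch of the proportional payout in the example above (the function name and structure are ours for illustration, not the actual Flask implementation):

```python
def settle_sale(contributions: dict[str, float], sale_proceeds: float) -> dict[str, float]:
    """Split the proceeds of a group sale in proportion to each member's contribution."""
    total = sum(contributions.values())
    return {member: sale_proceeds * amount / total for member, amount in contributions.items()}

# The example from above: A puts in $75, B puts in $25, and the $100 stock doubles to $200.
payout = settle_sale({"A": 75.0, "B": 25.0}, 200.0)
print(payout)  # {'A': 150.0, 'B': 50.0}
```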
winning
## Inspiration

Because of COVID-19, we're experiencing not only a global health crisis but also extreme psychological stress. The isolation and loneliness of social distancing, the loss of personal and physical space, and not being able to enjoy the outdoors (where there is growing evidence that nature helps relieve stress) are all taking their toll on people. To help relieve some of this stress using virtual reality, Moment of Bliss was created.

## What it does

Moment of Bliss is a free VR therapy option for anyone who could use some respite from the stress of everyday life. While it's designed as a single-player game, you can interact with other people virtually by leaving notes for the next person. It also offers a lot of open virtual space and ways to enjoy nature, like birdwatching, from the comfort of your room. While the app was designed initially for veterans who may not have the means (e.g. transportation, a nearby healthcare facility that offers it, money to pay for VR therapy) to participate in VR therapy for PTSD, the idea lends itself to a wider audience experiencing psychological stress. This app can also help people who cannot travel much or leave their space, have limited or no access to safe green spaces, are looking for free ways to destress, or want to feel connected with others while having a space they can call their own.

## How we built it

Unity, C#, echoAR

## Challenges we ran into

I've learned that not all Unity projects translate nicely to WebGL. I can build a standalone application and run it without issues, but when I built the project in WebGL and uploaded it to simmer.io, the first scene worked, but the main part of the project (lots of open natural space!) perhaps takes a long time to load because of its sheer size, so all I have is a still shot.

## Accomplishments that I'm proud of

Made a landscape from scratch in 3D using Unity!

## What I learned

A lot about Unity: start early. It took maybe 3x as long to build the WebGL product as it did to create the standalone app, and the standalone app took about 30 minutes to build. (Crazy, right?!)

## What's next for Moment of Bliss

Build out features (a rainy area for those who enjoy listening to rain) and Easter eggs.
## Inspiration

Our inspiration for this project comes from our own experiences as university students. As students, we understand the importance of mental health and the role it plays in one's day-to-day life. With increasing workloads and the stress of landing a good co-op and maintaining good marks, people's mental health takes a big hit, and without a good balance this can lead to many problems. That is why we wanted to tackle this challenge while also putting a fun twist on it. Our goal is to provide users with mental health resources, but that can easily be done with any Google search, so we wanted to add an "experience" component where users can explore and learn more about themselves while also tending to their mental health.

## What it does

Our project is a simulation where users are placed in a very calm and peaceful world open to exploration. Along their journey, they are accompanied by an entity that they can talk to whenever they want. This entity is there to support them and to show that they are not alone. As the user walks through the world, there are various characters they can meet that can provide resources such as links, articles, etc. to help improve their mental health. When the user becomes more comfortable and interested, they have the choice to learn more and provide more information about themselves. This allows the user to receive more personalized resources, such as local support groups in their area.

## How we built it

We divided ourselves into two teams, where one team worked on the front end and the other on the back end. For our front end, we used Unity to develop our world and characters. For the back end, we used OpenAI's chat API to generate our helpful resources, all coded in Python. To connect our backend with our front end, we used Flask and a hosting service called PythonAnywhere.

## Challenges we ran into

We ran into multiple challenges while building this project. The first was really nailing down how we wanted to execute the project. We spent a lot of time discussing how to make our project unique while still achieving our goals. Another challenge came with the front end, as building the project in Unity was a challenge in itself. We had to figure out how to receive input from the user and make sure that our back end returned the correct information. Finally, to support more types of accessibility, we also decided to add a VR option to our simulation for more immersion, so that the user can really feel like they are in a safe space to talk and that there are people to help them. Getting the VR set up was very difficult but also a super fun challenge. For our back end, we encountered many challenges, especially getting the exact responses we wanted. We had to be creative about how to give the user the right responses and ensure they get the necessary resources.

## Accomplishments that we're proud of

We are very proud of the final product that we have come up with, especially our front end, as that was the most challenging part. This entire project definitely pushed all of us past our skill level, and we most definitely learned a lot.

## What we learned

We learned a lot during this project, especially about working as a team, as this was only our second official time working together.
In terms of technical skills, we all believe we learned something new and improved the way we think about certain aspects of coding.

## What's next for a conversation…

While we could not fully complete our project in time, as we encountered many issues combining the front end and back end, we are still proud of what we accomplished.
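A minimal sketch of how a Flask backend could bridge the Unity frontend to OpenAI's chat API, as described above; the route name, model name, and system prompt are illustrative assumptions, not the team's actual code (assumes the `openai` Python package, v1+, with `OPENAI_API_KEY` set):

```python
from flask import Flask, request, jsonify
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()

SYSTEM_PROMPT = (
    "You are a calm, supportive companion inside a mental-health exploration game. "
    "Reply briefly and, when asked, suggest reputable mental-health resources."
)

@app.route("/chat", methods=["POST"])  # the Unity client POSTs the player's message here
def chat():
    user_message = request.get_json().get("message", "")
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; any chat model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return jsonify({"reply": completion.choices[0].message.content})

if __name__ == "__main__":
    app.run()  # on PythonAnywhere this would instead be served through WSGI
```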
## Inspiration

Life is short and fast-paced, filled with fleeting moments of joy, achievements, and connection. During Hack the North, our team met so many amazing and inspiring people in such a short amount of time. It struck us how easy it is to forget these meaningful interactions and experiences as life rushes on. This inspired us to create Flashback, a VR experience designed to memorialize these cherished moments and allow users to revisit them in a deeply immersive and personalized museum. Unlike traditional social media, which encourages constant sharing with others, Flashback offers a personal and introspective journey through your own memories. It's designed for the individual, allowing users to relive their most cherished moments in an immersive, meaningful way.

#### A quote that resonated with us this weekend:

Life is not measured by time. It is measured by moments. There is a limit to how much you can embrace a moment. But there is no limit to how much you can appreciate it.

Bonus: Great for ~~us~~ forgetful folks!

## What it does

Flashback is a VR experience that transforms your personal memories and achievements into interactive, immersive museum exhibits. Users can upload photos, videos, personal audio clips, and music to create unique 3D galleries, where each memory comes to life. As you walk up to an exhibit, specific music, audio clips, and captions are triggered, bringing the memory to life in a dynamic way. Memories can also be grouped into collections. Instead of just scrolling through pictures, Flashback lets you step into your memories: hear familiar voices, see cherished moments, and relive experiences in a fully immersive environment. Additionally, there is a web app where users can upload, update, and maintain their growing museum of memories. Flashback evolves with you over time, offering a place to revisit positive memories on difficult days and preserve fleeting moments like our time at Hack the North.

## How we built it

Flashback is built with React, Node.js, JavaScript, Express.js, Convex, HTML, CSS, the Spotify API, and Material UI for the web app's front and back end. The VR experience is developed using Unity, .NET, and C#, with testing done on a Meta Quest VR headset. Our mascot Framey was drawn by our teammate Jenn.

## Challenges we ran into

* Picking up Convex
* Designing on Figma for the first time
* Unity and C# are HARD (our team's first time making a VR project)
* Learning new tech is hard
* Merge requests on the front end are hard
* Sleep deprivation
* Figuring out how to connect and integrate the client, server, and VR

## Accomplishments that we're proud of

* Everything we made! We worked very hard!
* Adapting and persevering through a lot of roadblocks this weekend
* Creating a super cool VR experience (shoutout to Alan!!!)
* A first-time designer making a hi-fidelity mock-up of the web app and VR user flows in Figma
* Spending time together and having fun at Hack the North 2024

## What we learned

Everyone on our team had experience in different technologies, but this weekend we each tried something new: using Convex DB integration for the first time, designing for the first time, learning C# and Unity, trying out front-end development, and creating our first VR project! We also learned that Unity is hard, and that doing research and communicating regularly about feasibility is extremely important.

## What's next for Flashback

* Integrate authentication and allow users to visit other museums!
* More customizability: users can choose their own VR assets to personalize their space and memories
* More fields for memories: multi-media, video, and personal audio upload
* More interactivity in the VR environment
losing
## Inspiration

During these trying times, the pandemic has impacted many people by isolating them in their homes. People are not able to socialize like they used to or find people they can relate to; for example, students transitioning to college or a new school often don't know anyone. Matcher aims to improve students' mental health by matching them with people who share similar interests and allowing them to communicate. Overall, its goal is to connect people across the world.

## What it does

The user first logs in and answers a series of comprehensive, research-backed, AI-determined questions to determine their personality type. Then, we use machine learning to match people and connect them. Users can email each other after they are matched! Our custom machine learning pipeline uses K-Means clustering and Random Forests to study people's personalities.

## How we built it

We used React on the front end, Firebase for authentication and storage, and Python for the server and machine learning.

## Challenges we ran into

We all faced unique challenges, but losing one member midway really dampened our spirits and limited our potential.

* Gordon: I was new to Firebase and didn't follow the right program flow in the first half of the hackathon.
* Lucia: The challenge I ran into was figuring out how to properly route the web pages together in React, and how to integrate the Firebase database on the front end, since I had never used it before.
* Anindya: Time management.

## Accomplishments that we're proud of

We are proud that we were able to persevere after losing a member and still achieve a lot. We are also proud that we showed resiliency when we realized we had messed up our program flow midway and had to start over from the beginning. We are happy that we learned and implemented new technologies that we have never used before. Our hard work and perseverance resulted in an app that is useful and will make an impact on people's lives!

## What we learned

We believe that what doesn't kill you makes you stronger.

* Gordon: After chatting with mentors, I learned about SWE practices, the Firebase flow, and Flask. I also learned to handle the setback and failure of wasting 10 hours.
* Lucia: I learned about Firebase and how to integrate it into a React front end. I also learned more about how to use React Hooks!
* Anindya: I learned how to study unique properties of data using unsupervised learning methods. I also learned how to integrate Firebase with Python.

## What's next for Matcher

We would like to finish our web app by completing our integration of the Firebase Realtime Database. We plan to add social networking features such as messaging and video chat, which will allow users to communicate with each other on the web app and discuss their interests with one another right on our site! We would also like to make this project accessible on multiple platforms, such as mobile.
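A rough sketch of how K-Means-based matching could work on numerically encoded questionnaire answers, using scikit-learn; the data and the nearest-neighbour matching rule are illustrative assumptions, not the team's actual pipeline (which also uses Random Forests):

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is one user's answers to the personality questions, encoded numerically.
answers = np.array([
    [5, 1, 4, 2, 3],
    [4, 2, 5, 1, 3],
    [1, 5, 2, 4, 4],
    [2, 4, 1, 5, 5],
])
users = ["alice", "bob", "carol", "dave"]

# Group users into personality clusters.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(answers)

def match(user_idx: int) -> str:
    """Return the closest other user in the same personality cluster."""
    same_cluster = [i for i in range(len(users))
                    if i != user_idx and kmeans.labels_[i] == kmeans.labels_[user_idx]]
    best = min(same_cluster, key=lambda i: np.linalg.norm(answers[i] - answers[user_idx]))
    return users[best]

print(match(0))  # alice is matched with bob
```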
## Inspiration

Everyone on this team has been to post-secondary and noticed that their large group of friends has slowly dwindled since graduation, especially after COVID. It's already well known that once you leave school it's a lot harder to make friends, so we got the idea to make FriendFinder to match you with people who have similar hobbies in the same neighbourhood as you.

## What it does

**Find friends!** When making an account on FriendFinder, you will be asked to input your hobbies, whether you prefer chatting or hanging out, whether you enjoy outdoor activities, and your neighbourhood. It then gives other users a relative score based on your profile, with more matching hobbies and preferences producing a higher score. Now whenever you log in, the front page will show you a list of people near you with the highest scores, allowing you to send them friend requests to start a chat.

## How we built it

**With friends!** We used HTML, CSS, and JavaScript for the frontend and Firebase and Firestore for the backend.

## Challenges we ran into

**Our friends...** Just kidding. One of the biggest challenges we faced was the short length (24 hours) of this hackathon. Being first-year students, we made a project of similar scale in school, but over 4 months! Another challenge was that none of us knew how to implement a real-time chat app in our project. At first we wanted to learn React and make the chat app beautiful, but due to time constraints we researched a simpler way to do it just to get base functionality.

## Accomplishments that we're proud of

**Our friendship survived!** After the initial scramble to figure out what we were doing, we managed to build a minimum viable product in 24 hours. We are really proud that we incorporated our knowledge from school, learned something new, and integrated it all together without any major issues.

## What we learned

**Make good friends.** The most important thing we learned is that teamwork is essential to a good development team. Being able to communicate with your team and divide work up by each team member's strengths is what made it possible to finish this project within the strict time limit. The hackathon was a really fun experience and we're really glad that we could form a team together.

## What's next for FriendFinder

**More features to find more friends better**

* Beautify the app
* Add a friend / pending friend requests feature
* Security/encryption of messages
* A report-user function
* A more detailed hobby selection list for better matching
* Update a user's profile / hobby selection list at any time
* Let users add photos
* A group chat function
* Rewrite sections of code to be more efficient
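The app's logic lives in JavaScript on Firebase; purely as an illustration of the relative-score idea described above, here is a small Python sketch with made-up weights and profile fields:

```python
def relative_score(me: dict, other: dict) -> int:
    """Score another user against my profile: shared hobbies and matching preferences count."""
    score = len(set(me["hobbies"]) & set(other["hobbies"])) * 2   # each shared hobby is worth 2
    score += 1 if me["prefers_chatting"] == other["prefers_chatting"] else 0
    score += 1 if me["likes_outdoors"] == other["likes_outdoors"] else 0
    score += 3 if me["neighbourhood"] == other["neighbourhood"] else 0  # nearby people rank higher
    return score

me = {"hobbies": ["hiking", "board games"], "prefers_chatting": False,
      "likes_outdoors": True, "neighbourhood": "Kitsilano"}
other = {"hobbies": ["hiking", "cooking"], "prefers_chatting": False,
         "likes_outdoors": True, "neighbourhood": "Kitsilano"}
print(relative_score(me, other))  # 7
```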
## 🌱 Inspiration

With the ongoing climate crisis, we recognized a major gap in the incentives for individuals to make greener choices in their day-to-day lives. People want to contribute to the solution, but without tangible rewards it can be hard to motivate long-term change. That's where we come in! We wanted to create a fun, engaging, and rewarding way for users to reduce their carbon footprint and make eco-friendly decisions.

## 🌍 What it does

Our web app is a point-based system that encourages users to make greener choices. Users can:

* 📸 Scan receipts using AI, which analyzes purchases and gives points for buying eco-friendly products from partner companies.
* 🚴‍♂️ Earn points for taking eco-friendly transportation (e.g., biking, public transit) by tapping their phone via NFC.
* 🌿 See real-time carbon emission savings and get rewarded for making sustainable choices.
* 🎯 Track daily streaks, unlock milestones, and compete with others on the leaderboard.
* 🎁 Browse a personalized rewards page with custom suggestions based on trends and their current point total.

## 🛠️ How we built it

We used a mix of technologies to bring this project to life:

* **Frontend**: Remix, React, shadcn, Tailwind CSS for a smooth, responsive UI.
* **Backend**: Express.js, Node.js for handling server-side logic.
* **Database**: PostgreSQL for storing user data and points.
* **AI**: GPT-4 for receipt scanning and product classification, helping to recognize eco-friendly products.
* **NFC**: We integrated NFC technology to detect when users make eco-friendly transportation choices.

## 🔧 Challenges we ran into

One of the biggest challenges was figuring out how to fork the RBC points API, adapt it, and then code our own additions to match our needs. This was particularly tricky when working with the database schemas and migration files. Sending image files across the web also gave us some headaches, especially when incorporating real-time processing with AI.

## 🏆 Accomplishments we're proud of

One of the biggest achievements of this project was stepping out of our comfort zones. Many of us worked with a tech stack we weren't very familiar with, especially **Remix**. Despite the steep learning curve, we managed to build a fully functional web app that exceeded our expectations.

## 🎓 What we learned

* We learned a lot about integrating various technologies, like AI for receipt scanning and NFC for tracking eco-friendly transportation.
* The biggest takeaway? Knowing that pushing the boundaries of what we thought was possible (like the receipt scanner) can lead to amazing outcomes!

## 🚀 What's next

We have exciting future plans for the app:

* **Health app integration**: Connect the app to health platforms to reward users for their daily steps and other healthy behaviors.
* **Mobile app development**: Move the app to a native mobile environment to leverage all the features of smartphones, making it even easier for users to engage with the platform and make green choices.
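As a small illustration of the points idea (not the actual implementation, which runs receipt text through GPT-4), here is how matched line items could translate into points, using a made-up partner catalogue:

```python
ECO_PARTNERS = {
    # hypothetical partner catalogue: product keyword -> points per item
    "reusable water bottle": 50,
    "bamboo toothbrush": 20,
    "oat milk": 10,
}

def score_receipt(line_items: list[str]) -> int:
    """Award points for every receipt line that matches a partner's eco-friendly product."""
    points = 0
    for item in line_items:
        for keyword, value in ECO_PARTNERS.items():
            if keyword in item.lower():
                points += value
    return points

# Line items as they might come back from the AI receipt-scanning step.
print(score_receipt(["OAT MILK 1L", "POTATO CHIPS", "Bamboo Toothbrush x2"]))  # 30
```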
losing
## Inspiration

Many people want to recycle more, make donations, find charities to support, visit local health clinics or nonprofits like Planned Parenthood, or support environmental causes, but don't know how or where to look. This often means searching for a place to donate clothes in your city, or a place that accepts certain recyclable materials such as metals. This app solves that problem by putting the locations of all such organizations in one place.

## What it does

The app includes a map where organizations (incentivized because they want to reach more people) and individuals can place a pin for established places (e.g. a junkyard or a building housing a health clinic), upload or take photos of the place, and add comments about it or other places. There are different maps based on interest (a Nonprofit map, Donations map, Volunteer map, and Health map) and a Profile view where pins from all maps can be seen. Social media is very important to this app: users can log in with Facebook and post a comment about a location to their wall. To foster further discussion about social good, there is a section of the app where users can chat about these issues. This was inspired by the app Waze, where users comment on traffic in real time; here, users can comment on different issues in real time.

## How I built it

An Android app written in Java. I used Parse for the backend, the Facebook APIs for login and sharing a post to the user's wall, and the Google Maps API to pin locations to maps.

## Accomplishments that I'm proud of

All of the special features on the map, such as filtering by date, shaking the device to switch map versions (e.g. hybrid), and creating a chat section so that users can communicate. Also sharing to Facebook, since social media sharing is an important part of the app.

## What I learned

Setting up the Parse database, and creating functions to both take a photo with the app and upload one from the phone's existing photo library.

## What's next for Contribute

Monitoring what is commented/posted. (Note: the GitHub link includes code from a completely different project; the most recent commit is my project for HackPrinceton.)
## Inspiration

One of our first ideas was an Instagram-esque social media site for recipe blogs. We were also interested in working with location data; somewhere along the line there was an idea to make an app that let you track down your friends. Somehow, we managed to combine both of these wildly different ideas into a real-world-applicable site. After researching shelters and food banks (aka googling and clicking on the first result), we realized that while these establishments do have a working relationship, oftentimes the shelters and food banks are required to buy key missing ingredients. Thus, our application was created to further personalize the relationship and interaction between these establishments, to aid in decreasing food waste and ensuring people are getting culturally significant, healthy, and delicious food.

## What it does

Markets and shelters/food banks log in to their respective homepages. From there, they can see the other establishments near them, as well as an interactive sidebar. Shelters can see nearby participating markets and look at their food supplies. Markets can see nearby shelters and their food requests, and can update their inventory of foods available to those shelters.

## How we built it

We used Next.js and a variety of styling options (CSS, Bootstrap, Tailwind CSS) to make a "dynamic" website.

## Challenges we ran into

We realized the crux of our application, which relies on a Google Maps API to get nearby markets and their distances, sits behind a "$0 paywall": the free tier still requires billing details, and we didn't want to give Google our credit card info. Sorry :/ As well, we were using React Native for a good four hours or so in the beginning, but it wasn't displaying on our local port (just a blank page), and we spent a long time trying to debug it. So that was fun. Our team members also used many different stylesheets. The majority was in a plain style.css, but we have one component that's entirely in Bootstrap (installing it for Next.js was a pain), and there was also an attempt to use Tailwind CSS for some components.

## Accomplishments that we're proud of

Our UI/UX design, including all our styling, was AMAZING. Shoutout to Lindsay for their major contributions. As well, this was the first time the majority of our team had touched React, so I think our progress was pretty good. Given that we actually chose to sleep on Friday night, I'd say we accomplished a lot.

## What we learned

Auth is a pain. Never again. It didn't even work :(

## What's next for crumbz

There's a lot left to implement. From changing our logo to making sure the authentication actually works, there is so much more room for crumbz to grow. With more time and commitment, this application could become so much more.
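The team didn't end up wiring in the Google Maps API; purely as one hedged alternative for the "nearby markets and their distances" piece, straight-line distances can be computed from stored coordinates with the haversine formula (the coordinates below are placeholders, not real establishments):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

markets = {"Midtown Market": (40.7549, -73.9840), "Harlem Grocery Co-op": (40.8116, -73.9465)}
shelter = (40.7306, -73.9866)  # hypothetical shelter coordinates

# Sort participating markets by straight-line distance from the shelter.
nearby = sorted(markets.items(), key=lambda kv: haversine_km(*shelter, *kv[1]))
for name, coords in nearby:
    print(name, round(haversine_km(*shelter, *coords), 1), "km")
```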
## Inspiration

Both of our families immigrated to the US roughly two decades ago. **Many** of the **elders** in our family **are not comfortable navigating the medical system alone**, and since a young age both Brandon and I have been translating important documents... We believe in empowering this user base with the ability to have **empathetic conversations with their medical report**. We want users to **navigate their health journey with clarity and confidence**.

## What it does

* ChickyAI securely studies your medical report and extracts the relevant information about what the patient is presenting with, the physician's opinions, and plans for the next steps.
* Users are then able to ask our GenAI chatbot about the report's details and how to confidently move forward.
* Users can take notes on what they discovered and be fed relevant public health insights to aid them in their health journey.

## How we built it

We used Convex.dev and an assortment of tools in between, with Lucia for authentication.

## Challenges we ran into

* Unreliable Wi-Fi meant we had to take "trust falls" and go without the help of ChatGPT at times in order to keep going.

## Accomplishments that we're proud of

* A responsive site with a seamless user journey.
* Gaining a working intuition for full-stack development.
* Escaping "tutorial hell".

## What we learned

* A lot. This is our first time making a website of any kind! Also, our first hackathon!
* "All rabbit holes are good rabbit holes" when it comes to becoming more confident software engineers.

## What's next for ChickyAI

* Blockchain-based encryption and ensuring HIPAA compliance.
* Connecting users with live, certified medical professionals to replace the GenAI feature.
* Connecting users with more public health resources based on location, demographics, and more.
* Allowing users to securely establish a profile and improving offline access.
losing
## Our Vision

Canada, without a doubt, is one of the greatest countries to live in. Maple syrup. Hockey. Tim Hortons. Exceptional niceness. But perhaps what makes us most Canadian is our true appreciation of multiculturalism. Many of us are, or come from, families that have traveled far and wide to be a part of this great country. Lately, this has become even more common, with many refugees choosing the safety of our borders. But as receptive as Canada has been, there still exist difficulties for these individuals, many of whom have little prior exposure to English.

Introducing bridgED. Fast, practical, and educational translations allow our new friends to understand the environment around them, right from one of the most universally used tools: our phones. Using IBM Watson's visual recognition and language translator, we're able to identify and then translate objects. Using photos not only makes it faster than typing, it also allows the identification of items that have no obvious translation in the user's native tongue. Convenient descriptions and wiki links then allow them to quickly understand the object on a deeper level, making bridgED a speedy learning alternative, especially for those who find it difficult to study the language formally, like many hard workers who lack the time.

But of course, this may still not be a totally practical alternative, given the time spent raising and lowering the phone. That's why we also have a soon-to-be-integrated feature utilizing AR to provide near-instant translations, optimal for travel. High-importance signs such as "Dead End", tourist areas with a high density of unique cultural goods, or quickly understanding the variety of local cuisine are all things bridgED can help with.

bridgED was designed with the goal of keeping us all together. Because what's better than enjoying Tim's, poutine, and hockey? It's doing it through the collective understanding of a nation united among differences. Eh?

## How we built it

For our project, we utilized the specialized skills of the entire team and split our workforce in two. One group used Node.js as the base platform and React Native to build the core educational functionality of our app, utilizing IBM Watson. The other group built a second app in Unity, implementing an AR framework to provide a more immersive, alternative experience focused on speed and quick practicality.

## Challenges we ran into

We initially ran into some difficulty dividing work evenly, as some of us were much more experienced with certain frameworks than others. While both apps provided unique challenges, we ended up sticking through the difficulties and ultimately decided to go with BOTH applications. Through the use of libraries, integrating our React app directly into our Unity app should be possible, later allowing us to provide a more complete single package.

## Accomplishments that we're proud of

We ran into a lot of trouble getting started, especially with Unity, as our members experienced with Unity were rather rusty and had little experience with the initial setup of projects. We also ran into a lot of small issues with versioning, our Android deployments, and the usefulness of our APIs, but ultimately we believe we overcame those challenges and came up with a pretty good product and a strong proof of concept.
## What we learned

While learning and trying out new frameworks is great, by optimizing our work distribution we were able to get much more done than we probably would have otherwise. We also learned that image recognition still has a little way to go before it is truly reliable.

## What's next for bridgED?

* Smarter general image interpretation would greatly improve usefulness. We could attempt to integrate Google image search for more consistent results.
* More features to emphasize and support learning. We can use user voting to rate the effectiveness of the algorithm's guesses, improving it for the future.
* Finalizing and streamlining everything into a single application package.

![Try it out](https://i.imgur.com/9iuYq1l.png)
## Inspiration

The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand.

## What it does

Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked.

## How we built it

To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process.

## Challenges we ran into

One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process.

## Accomplishments that we're proud of

Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives.

## What we learned

We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application.

## What's next for Winnur

Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
## Inspiration

Over **15% of American adults**, more than **37 million** people, are either **deaf** or have trouble hearing, according to the National Institutes of Health. One in eight people has hearing loss in both ears, and not being able to hear or freely express your thoughts to the rest of the world can put deaf people in isolation. However, only an estimated 250,000 - 500,000 people in America are said to know ASL. We strongly believe that no one's disability should hold them back from expressing themselves to the world, so we decided to build Sign Sync, **an end-to-end, real-time communication app**, to **bridge the language barrier** between a **deaf** and a **non-deaf** person. Using Natural Language Processing to analyze spoken text and Computer Vision models to translate sign language to English, and vice versa, our app brings us closer to a more inclusive and understanding world.

## What it does

Our app connects a deaf person, who signs American Sign Language into their device's camera, to a non-deaf person, who listens through a text-to-speech output. The non-deaf person can respond by recording their voice and having their sentences translated directly into sign language visuals for the deaf person to see and understand. After seeing the sign language visuals, the deaf person can respond to the camera to continue the conversation. We believe real-time communication is the key to a fluid conversation, so we use automatic speech-to-text and text-to-speech translation. Our web app is designed for desktop and mobile devices for instant communication, with a clean, easy-to-read interface that ensures a deaf person can follow along without missing any part of the conversation in the chat box.

## How we built it

For our project, precision and user-friendliness were at the forefront of our considerations. We were determined to achieve two critical objectives:

1. Precision in Real-Time Detection: Our foremost goal was to develop an exceptionally accurate model capable of real-time detection. We understood the urgency of efficient recognition and the pivotal role it played in our image detection model.
2. Seamless Website Navigation: Equally essential was ensuring that our website offered a seamless and intuitive user experience. We prioritized designing an interface that anyone could effortlessly navigate, eliminating potential obstacles for our users.

* Frontend Development with Vue.js: To rapidly prototype a user interface that seamlessly adapts to both desktop and mobile devices, we turned to Vue.js. Its flexibility and speed in UI development were instrumental in shaping our user experience.
* Backend Powered by Flask: For the robust foundation of our API and backend, Flask was our framework of choice. It provided the means to create endpoints that our frontend leverages to retrieve essential data.
* Speech-to-Text Transformation: To transform spoken language into text, we integrated the browser's webkitSpeechRecognition API. This technology forms the backbone of our speech recognition system, facilitating communication with our app.
* NLTK for Language Preprocessing: Recognizing that sign language possesses distinct grammar, punctuation, and syntax compared to spoken English, we turned to the NLTK library. It aided us in preprocessing spoken sentences, ensuring they were converted into a format comprehensible to sign language users.
* Translating Hand Motions from Sign Language: A pivotal aspect of our project involved translating the intricate hand and arm movements of sign language into text. To accomplish this, we employed a MobileNetV2 convolutional neural network. Trained meticulously to identify individual characters using the device's camera, our model achieves an impressive accuracy rate of 97%. It classifies video stream frames into one of the 26 letters of the sign language alphabet or one of the three punctuation marks used in sign language. The result is the coherent output of multiple characters, pieced together to form complete sentences.

## Challenges we ran into

Since we used multiple AI models, it was tough to integrate them seamlessly with our Vue frontend. Because we also use the webcam through the website, it was a massive challenge to capture video footage, run real-time detection and classification on it, and show the results on the webpage simultaneously. We also had to find as many open-source ASL datasets as possible, which was definitely a challenge: with a short budget and limited time we could not cover every word in ASL, and had to resort to spelling words out letter by letter. We also had trouble figuring out how to do real-time computer vision on a stream of ASL hand gestures.

## Accomplishments that we're proud of

We are really proud to be working on a project that can have a profound impact on the lives of deaf individuals and contribute to greater accessibility and inclusivity. Some accomplishments that we are proud of:

* Accessibility and Inclusivity: Our app is a significant step towards improving accessibility for the deaf community.
* Innovative Technology: Developing a system that seamlessly translates sign language involves cutting-edge technologies such as computer vision, natural language processing, and speech recognition. Mastering these technologies and making them work harmoniously in our app is a major achievement.
* User-Centered Design: Crafting an app that's user-friendly and intuitive for both deaf and hearing users has been a priority.
* Speech Recognition: Our success in implementing speech recognition technology is a source of pride.
* Multiple AI Models: We also loved merging natural language processing and computer vision in the same application.

## What we learned

We learned a lot about how accessibility works for individuals from the deaf community. Our research led us to a lot of new information, and we found ways to incorporate it into our project. We also learned a lot about Natural Language Processing, Computer Vision, and CNNs, and picked up new technologies this weekend. As a team of individuals with different skill sets, we were also able to collaborate and learn to focus on our individual strengths while working on a project.

## What's next?

We have a ton of ideas planned for Sign Sync next!

* Translate between languages other than English
* Translate between other sign languages, not just ASL
* A native mobile app with no internet access required, for more seamless usage
* Use more sophisticated datasets that can recognize words, not just letters
* Use video to demonstrate the sign language component, instead of static images
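A hedged sketch of what the MobileNetV2 classifier described above could look like in Keras; the exact head, preprocessing, and training setup are assumptions for illustration, not the team's actual model:

```python
import tensorflow as tf

NUM_CLASSES = 29   # 26 letters plus the three punctuation signs mentioned above
IMG_SIZE = (224, 224)

# MobileNetV2 backbone pre-trained on ImageNet, with a small classification head on top.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone; only the head is trained on the ASL dataset

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# At inference time, each webcam frame (resized to 224x224) is classified into one character,
# and consecutive predictions are concatenated into words and sentences.
```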
winning
## Inspiration

We wanted to pioneer the use of computationally intensive image processing and machine learning algorithms on low-resource robotic or embedded devices by leveraging cloud computing.

## What it does

CloudChaser (or "Chase" for short) allows the user to input custom objects for Chase to track. To do this, Chase first rotates counter-clockwise until the object comes into the field of view of the front-facing camera, then continues in the direction of the object, continually updating its orientation.

## How we built it

"Chase" was built with four continuous-rotation servo motors mounted onto our custom-modeled 3D-printed chassis. Chase's front-facing camera was built using a Raspberry Pi 3 camera mounted onto a custom 3D-printed camera mount. The four motors and the camera are controlled by the Raspberry Pi 3B, which streams video to, and receives driving instructions from, our cloud GPU server through TCP sockets. We interpret the video on the cloud using YOLO (our object recognition library), which is connected through another TCP socket to our cloud-based parser script; the parser interprets the detections and tells the robot which direction to move.

## Challenges we ran into

The first challenge was designing the layout and model for the robot chassis. Because the print for the chassis was going to take 12 hours, we had to get the dimensions perfect on the very first try, so we took calipers to the motors, dug through the data sheets, and made test mounts to ensure we nailed the print. The next challenge was setting up the TCP socket connections and developing our software so that it could accept data from multiple different sources in real time. We ended up solving the connection timing issue by using a service called cam2web to stream the webcam to a URL instead of through TCP, allowing us to avoid queuing up the data on our server. The biggest challenge by far, however, was dealing with camera latency. We wanted the camera feed to be as close to live as possible, so we moved all possible processing to the cloud and none onto the Pi, but since the Raspbian operating system would frequently context-switch away from our video stream, we still got frequent lag spikes. We ended up solving this problem by decreasing the priority of our driving script relative to the video stream on the Pi.

## Accomplishments that we're proud of

We're proud of the fact that we were able to model and design a relatively sturdy robot in such a short time. We're also really proud that we were able to interface the Amazon Alexa skill with the cloud server, as nobody on our team had built an Alexa skill before. However, by far the accomplishment we are most proud of is the fact that our video stream latency from the Raspberry Pi to the cloud is low enough that we can reliably navigate the robot with that data.

## What we learned

Through working on the project, our team learned how to write a skill for Amazon Alexa, how to design and model a robot to fit specific hardware, and how to program and optimize a socket application for multiple incoming connections in real time with minimal latency.

## What's next for CloudChaser

In the future we would ideally like Chase to be able to compress a higher-quality video stream and have separate PWM drivers for the servo motors to enable higher-precision turning. We also want to try to make Chase aware of his position in a 3D environment and track his distance from objects, allowing him to "tail" objects instead of just chasing them.

## CloudChaser in the news!

<https://medium.com/penn-engineering/object-seeking-robot-wins-pennapps-xvii-469adb756fad>
<https://penntechreview.com/read/cloudchaser>
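A minimal sketch of the Pi-side end of the TCP command link described in "How we built it"; the host, port, newline-delimited command format, and `drive` stub are assumptions for illustration, not the actual driving script:

```python
import socket

# Pi-side client: receive one-word driving commands from the cloud parser over TCP.
CLOUD_HOST = "cloud.example.com"  # placeholder address for the GPU server
CLOUD_PORT = 9000                 # placeholder port

def drive(command: str) -> None:
    """Stub: translate a command into PWM signals for the four continuous-rotation servos."""
    print("driving:", command)

with socket.create_connection((CLOUD_HOST, CLOUD_PORT)) as sock:
    buffer = b""
    while True:
        chunk = sock.recv(1024)
        if not chunk:
            break
        buffer += chunk
        # Commands are assumed newline-delimited, e.g. b"left\n", b"forward\n", b"stop\n".
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            drive(line.decode().strip())
```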
## Inspiration There were two primary sources of inspiration. The first one was a paper published by University of Oxford researchers, who proposed a state-of-the-art deep learning pipeline to extract spoken language from video. The paper can be found [here](http://www.robots.ox.ac.uk/%7Evgg/publications/2018/Afouras18b/afouras18b.pdf). The repo for the model used as a base template can be found [here](https://github.com/afourast/deep_lip_reading). The second source of inspiration is an existing product on the market, [Focals by North](https://www.bynorth.com/). Focals are smart glasses that aim to put the important parts of your life right in front of you through a projected heads-up display. We thought it would be a great idea to build onto a platform like this by adding a camera and using artificial intelligence to gain valuable insights about what you see, which in our case is deciphering speech from visual input. This has applications in aiding individuals who are deaf or hard-of-hearing, in noisy environments where automatic speech recognition is difficult, and in conjunction with speech recognition for ultra-accurate, real-time transcripts. ## What it does The user presses a button on the side of the glasses, which begins recording, and upon pressing the button again, recording ends. The camera is connected to a Raspberry Pi, which is a web-enabled device. The Raspberry Pi uploads the recording to Google Cloud and submits a POST request to a web server along with the name of the uploaded file. The web server downloads the video from Google Cloud, runs facial detection through a Haar cascade classifier, and feeds that into a transformer network which transcribes the video. When transcription finishes, a front-end web application is notified through socket communication, and the front-end then streams the video from Google Cloud and displays the transcription output from the back-end server. ## How we built it The hardware platform is a Raspberry Pi Zero interfaced with a Pi camera. A Python script is run on the Raspberry Pi to listen for GPIO input, record video, upload to Google Cloud, and post to the back-end server. The back-end server is implemented using Flask, a web framework in Python. The back-end server runs the processing pipeline, which utilizes TensorFlow and OpenCV. The front-end is implemented using React in JavaScript.
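A rough sketch of the server-side step described above (download the clip, run a Haar cascade over each frame to locate the face before handing crops to the transcription model) might look like the following. This is illustrative only: the route name and request fields are assumptions, and the real pipeline feeds the crops to a transformer lip-reading model rather than just counting them.

```python
import cv2
from flask import Flask, jsonify, request

app = Flask(__name__)

# OpenCV ships this frontal-face Haar cascade with the library.
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def face_crops(video_path: str):
    """Yield the largest detected face crop from each frame of the video."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces):
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
            yield frame[y:y + h, x:x + w]
    cap.release()

@app.route("/transcribe", methods=["POST"])  # hypothetical route name
def transcribe():
    # Assume the Pi's POST body names a clip already downloaded from cloud
    # storage to a local path on the server.
    video_path = request.json["local_path"]
    crops = list(face_crops(video_path))
    # ...feed `crops` to the lip-reading transformer here...
    return jsonify({"frames_with_faces": len(crops)})
```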
## Challenges we ran into * TensorFlow proved to be difficult to integrate with the back-end server due to dependency and driver compatibility issues, forcing us to run it on CPU only, which does not yield maximum performance * It was difficult to establish a network connection on the Raspberry Pi, which we worked around through USB tethering with a mobile device ## Accomplishments that we're proud of * Establishing a multi-step pipeline that features hardware, cloud storage, a back-end server, and a front-end web application * Design of the glasses prototype ## What we learned * How to set up a back-end web server using Flask * How to facilitate socket communication between Flask and React * How to set up a web server through localhost tunneling using ngrok * How to convert a video into a text prediction through 3D spatio-temporal convolutions and transformer networks * How to interface with Google Cloud for data storage between various components such as hardware, back-end, and front-end ## What's next for Synviz * With a stronger on-board battery, a 5G network connection, and a more powerful compute server, we believe it will be possible to achieve near real-time transcription from a video feed, which could be implemented on an existing platform like North's Focals to deliver a promising business appeal
## Inspiration With a vision to develop an innovative solution for portable videography, Team Scope worked over this past weekend to create a device that allows for low-cost, high-quality, and stable motion and panoramic photography for any user. Currently, such equipment exists only for high-end DSLR cameras, is expensive, and is extremely difficult to transport. As photographers ourselves, such equipment has always felt out of reach, and both amateurs and veterans would substantially benefit from a better solution, which provides us with a market ripe for innovation. ## What it does In contrast to current expensive, unwieldy designs, our solution is compact and modular, giving us the capability to quickly set over 20ft of track - while still fitting all the components into a single backpack. There are two main assemblies to SCOPE: first, our modular track whose length can be quickly extended, and second, our carriage which houses all electronics and controls the motion of the mounted camera. ## Design and performance The hardware was designed in Solidworks and OnShape (a cloud-based CAD program), and rapidly prototyped using both laser cutters and 3D printers. All materials we used are readily available, such as MDF fiberboard and acrylic plastic, which would drive down the cost of our product. On the software side, we used an Arduino Uno to drive three full-rotation continuous servos, which provide us with a wide range of possible movements. With simple keyboard inputs, the user can interact with the system and control the lateral and rotational motion of the mounted camera, all the while maintaining a consistent quality of footage. We are incredibly proud of the performance of this design, which is able to capture extended time-lapse footage easily and at a professional level. After extensive testing, we are pleased to say that SCOPE has beaten our expectations for ease of use, modularity, and quality of footage. ## Challenges and lessons Given that this was our first hackathon, and that all team members are freshmen with limited experience, we faced numerous challenges in implementing our vision. Foremost among these was learning to code in the Arduino language, which none of us had ever used previously - something that was made especially difficult by our inexperience with software in general. But with the support of the PennApps community, we are happy to have learned a great deal over the past 36 hours, and are now fully confident in our ability to develop similar Arduino-controlled products in the future. As we go forward, we are excited to apply our newly acquired skills to new passions, and to continue to hack. The people we've met at PennApps have helped us with everything from small tasks, such as operating a specific laser cutter, to intangible advice about navigating the college world and life in general. The four of us are better engineers as a result. ## What's next? We believe that there are many possibilities for the future of SCOPE, which we will continue to explore. Among these are the introduction of a curved track for the camera to follow, the addition of a gimbal for finer motion control, and the development of preset sequences of varying speeds and direction for the user to access. Additionally, we believe there is significant room for weight reduction to enhance the portability of our product.
If produced on a larger scale, our product will be cheap to develop, require very few components to assemble, and still be just as effective as more expensive solutions. ## Questions? Contact us at [teamscopecamera@gmail.com](mailto:teamscopecamera@gmail.com)
winning
## What it does XEN SPACE is an interactive web-based game that incorporates emotion recognition technology and the Leap Motion controller to create an immersive emotional experience that will pave the way for the future gaming industry. ## How we built it We built it using three.js, the Leap Motion Controller for controls, and the Indico Facial Emotion API. We also used Blender, Cinema4D, Adobe Photoshop, and Sketch for all graphical assets.
## Inspiration Having previously volunteered and worked with children with cerebral palsy, we were struck by the monotony and inaccessibility of traditional physiotherapy. We came up with a cheaper, more portable, and more engaging way to deliver treatment by creating virtual reality games geared towards 12-15 year olds. We targeted this age group because puberty is a crucial period for retention of plasticity in a child's limbs. We implemented interactive games in VR using the Oculus Rift and Leap Motion controllers. ## What it does We designed games that targeted specific hand/elbow/shoulder gestures and used a Leap Motion controller to track the gestures. Our system improves the motor skills, cognitive abilities, emotional growth, and social skills of children affected by cerebral palsy. ## How we built it Our games make use of Leap Motion's hand-tracking technology and the Oculus' immersive system to deliver engaging, exciting physiotherapy sessions that patients will look forward to playing. These games were created using Unity and C#, and could be played using an Oculus Rift with a Leap Motion controller mounted on top. We also used an Alienware computer with a dedicated graphics card to run the Oculus. ## Challenges we ran into The biggest challenge we ran into was getting the Oculus running. None of our computers had the ports and the capabilities needed to run the Oculus because it needed so much power. Thankfully we were able to acquire an appropriate laptop through MLH, but the Alienware computer we got was locked out of Windows. We then spent the first 6 hours re-installing Windows and repairing the laptop, which was a challenge. We also faced difficulties programming the interactions between the hands and the objects in the games because it was our first time creating a VR game using Unity, Leap Motion controls, and the Oculus Rift. ## Accomplishments that we're proud of We were proud of our end result because it was our first time creating a VR game with an Oculus Rift and we were amazed by the user experience we were able to provide. Our games were really fun to play! It was intensely gratifying to see our games working, and to know that they would be able to help others! ## What we learned This project gave us the opportunity to educate ourselves on the realities of not being able-bodied. We developed an appreciation for the struggles people living with cerebral palsy face, and also learned a lot of Unity. ## What's next for Alternative Physical Treatment We will develop more advanced games involving a greater combination of hand and elbow gestures, and hopefully get testing in local rehabilitation hospitals. We also hope to integrate data recording and playback functions for treatment analysis. ## Business Model Canvas <https://mcgill-my.sharepoint.com/:b:/g/personal/ion_banaru_mail_mcgill_ca/EYvNcH-mRI1Eo9bQFMoVu5sB7iIn1o7RXM_SoTUFdsPEdw?e=SWf6PO>
## Inspiration We wanted to create a creative HoloLens experience that truly transformed your space and motivated the user to interact in fun, innovative (and silly!) ways. Re-imagining simple classics seemed like a good place to start, and our redesign of Snake turned out to be more engaging than it had any right to be (: ## What it does Upon starting the game, the user is prompted to scan their space. Using the HoloLens's Spatial Mapping sensors and some scripts that we wrote, we were able to get a full understanding of the user's space and automatically create a custom play area specific to their surroundings by analyzing the normals of the spatial mesh with raycasts and calculating which areas of the room find themselves empty. After scanning and generating the playspace, the user can play the game. Users must use their head to collect CyberCubes™ while at the same time avoiding the ever-growing CyberTail™ that follows them. Other special pickups are also available, like the CyberMotivationalVortex™, which attracts all of the surrounding cubes into a single point in space if you say a motivational quote (and explodes, transforming the colors of the space completely), and the CyberGravityPull™, which can help you get out of sticky situations by dropping all of the CyberTail™ spheres on the ground for a few seconds. The game also has a number of easter eggs and voice commands that can be used to enhance your CyberExperience™. Try saying "Samuel Jackson", for instance. Bonus points for whoever discovers the others. ## How I built it Unity, HoloLens, C#, coding, caffeine, sheer will. ## Challenges I ran into Discovering meaningful interactions for the HoloLens is always a challenge given its limited input. Because the documentation on HoloLens development (especially with things like SpatialMapping) is so limited, we also had to develop a lot of our own technology to get the desired final result. ## Accomplishments that I'm proud of It looks polished and it's very fun to play - we also got to design a lot of our own sound effects, assets, easter eggs and interactions. The gameplay loop is simple but has depth. ## What I learned Mixed Reality is TheFuture™, a bunch of Unity and HoloLens development tricks, some sound design discoveries, and what makes a HoloLens interaction fun. ## What's next for Cyber Snake More polish, create a story mode, and post it in the Microsoft Store for others to enjoy.
winning
## Inspiration The world has now experienced a full year of virtual everything. A main component of this new digital experience is **voice**. We now use our computers to listen to lectures, business meetings, music, interviews, and our teammates in virtual hackathons. This presents a serious challenge to those with difficulty hearing, or simply those who need help keeping up with the constant stream of information coming out of our headphones. ## What it does To help tackle this issue, we created Stenotes - an overlay application designed to run on the user's desktop. It captures audio output from everything on the computer, transcribes the detected speech into text, and displays it in the overlay as captions. It automatically saves the transcript as a file that the user can review later. Stenotes also features a Summary window which detects keywords from the audio and presents the user with small summaries about the topics mentioned in the audio. The cards can also be clicked, leading the user to a Wikipedia page about the topic. ## How we built it The front-end of Stenotes is built using **Electron** (**Javascript**, **HTML**, **CSS**) and uses a **SocketIO Client** to receive data from the back-end. The back-end is built using a **Python Flask SocketIO** server that periodically sends data to the front-end. The desktop audio is collected by configuring the **SoundDevice** Python library. The audio is then transcribed to text using a **Vosk ML Model** that runs speech recognition and outputs detected text as partial or complete sentences. The complete sentences are also stored in a buffer and passed to a **BERT Keyword Identification ML Model** to detect important words and topics from the text. Keywords are then passed to a **Wikipedia API** to scrape a summary from a Wikipedia page about the topic. All the metadata (partial sentences, complete sentences, and keywords + summaries) is produced on separate **threads** running simultaneously and passed through the socket as it becomes available. ## Challenges we ran into It was difficult to establish communication between the Electron front-end and the Python back-end using SocketIO, and to maintain multi-threading functionality while sending socket messages from the back-end. Finding a suitable method of collecting desktop audio was also a challenge. We also had many setbacks where some technology could not be easily integrated (such as Javascript-based MediaRecorders and certain speech-to-text frameworks) and had to be removed, costing time and requiring us to plan again. ## Accomplishments that we're proud of We are proud of the mistakes the ML model makes when transcribing text. We are proud of the mistakes the Wikipedia API makes when it returns the wrong web page. We are proud of being able to collect desktop audio and configure it to our uses. We are proud of being able to maintain front-end back-end communication using SocketIO. We are proud of being able to generate keywords and pass them to the front-end. ## What we learned We learned how to combine Javascript and Python tools more effectively. We learned to laugh at AI. ## What's next for Stenotes Improved UI/UX and better speech-to-text recognition
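The transcription half of the back-end described above (sounddevice capture, Vosk partial/final results, Flask-SocketIO relay to the overlay) can be sketched as below. This is a minimal, hedged illustration: the model path, event names, and default input device are assumptions, and capturing desktop output rather than the microphone would additionally require selecting a loopback/monitor device.

```python
import json
import queue

import sounddevice as sd
from flask import Flask
from flask_socketio import SocketIO
from vosk import KaldiRecognizer, Model

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

audio_q: "queue.Queue[bytes]" = queue.Queue()

def audio_callback(indata, frames, time_info, status):
    # sounddevice hands us raw 16-bit PCM blocks; queue them for the recognizer.
    audio_q.put(bytes(indata))

def transcribe_loop():
    model = Model("model")                 # path to a downloaded Vosk model (assumed)
    rec = KaldiRecognizer(model, 16000)
    # NOTE: real desktop capture needs a loopback device passed via `device=`.
    with sd.RawInputStream(samplerate=16000, blocksize=8000, dtype="int16",
                           channels=1, callback=audio_callback):
        while True:
            data = audio_q.get()
            if rec.AcceptWaveform(data):
                text = json.loads(rec.Result()).get("text", "")
                socketio.emit("caption_final", {"text": text})      # event name assumed
            else:
                partial = json.loads(rec.PartialResult()).get("partial", "")
                socketio.emit("caption_partial", {"text": partial})  # event name assumed

if __name__ == "__main__":
    socketio.start_background_task(transcribe_loop)
    socketio.run(app, port=5000)
```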
## Inspiration The need to combat the isolation that can come from digital technology, combined with the need to help minority groups improve their English, resulted in PRO-nounce. ## What it does The web app allows users to record audio. Each file is passed to the server where **Azure Speech Recognition** is used to convert the speech to text and analyze it against the word of the day. If it matches, the user gains a point toward their score. The leaderboard keeps track of the scores of all the users. At the end of the day, the top 50 users get regular stickers to recognize their achievement, the user in 3rd place gets a bronze sticker, 2nd place gets a silver sticker, and 1st gets a gold sticker. ## How I built it Using Keystone, a model-driven content management system, we created a backend to manage the site as an administrator. After that, I worked on styling the frontend using HTML and CSS. I also integrated with **Azure Speech Recognition** to validate the pronunciation of the words, and I store all recordings in **Azure's Storage Container**. Finally, I hosted the web application on an **Azure Virtual Machine**. ## Challenges I ran into I didn't have enough time to implement all the features I wanted. For example, **Azure's Speaker Recognition/Identification** on the recordings to ensure each recording from a user is coming from a unique person, so that users don't record one person multiple times and get points for that. One of the main aims of the app is to combat social isolation, and this is difficult to achieve if the user isn't forced to go out and find multiple DIFFERENT people to record. In addition, one of the main features that must be implemented is the ability for the user to edit their information. I ran out of time, but will hopefully implement both of these features and more. ## Accomplishments that I'm proud of Being able to come up with a unique game idea and execute a very large chunk of it in a short period of time. I've also never been responsible for a website's backend, so I'm proud I was able to get it up and running. ## What I learned I explored the usage of **Azure Cloud Services** and learned how to work with audio files. I used **FFMPEG** to convert the audio files recorded by the browser to the format **Azure Speech-To-Text** recognizes (ogg).
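The core check described above (convert the browser recording with FFMPEG, run Azure speech-to-text, compare against the word of the day) could be sketched in Python as follows. The project's actual backend is Keystone/Node and it converts recordings to ogg; this hedged sketch converts to WAV instead (the Speech SDK's simplest file input), and the key, region, and paths are placeholders.

```python
import subprocess

import azure.cognitiveservices.speech as speechsdk

SPEECH_KEY = "YOUR-KEY"        # placeholder
SPEECH_REGION = "eastus"       # placeholder

def to_wav(src_path: str) -> str:
    """Convert whatever the browser recorded (e.g. webm/ogg) to 16 kHz mono WAV."""
    dst_path = src_path.rsplit(".", 1)[0] + ".wav"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src_path, "-ar", "16000", "-ac", "1", dst_path],
        check=True,
    )
    return dst_path

def matches_word_of_the_day(recording_path: str, word_of_the_day: str) -> bool:
    wav_path = to_wav(recording_path)
    config = speechsdk.SpeechConfig(subscription=SPEECH_KEY, region=SPEECH_REGION)
    audio = speechsdk.audio.AudioConfig(filename=wav_path)
    recognizer = speechsdk.SpeechRecognizer(speech_config=config, audio_config=audio)
    result = recognizer.recognize_once()
    heard = result.text.strip().strip(".?!").lower()
    return heard == word_of_the_day.lower()

# Example: award a point if the recording matches today's word (paths/words hypothetical).
# if matches_word_of_the_day("upload_123.webm", "serendipity"): score += 1
```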
## Inspiration As the Coronavirus pandemic continues to impact our lives, students are forced to stay at home and deal with the difficulties that come with online learning. Personally, we have struggled with connection issues, professors who speak unclearly, and noisy environments. We can only imagine what lectures are like for students who have a language barrier, struggle with hearing impairment, or do not have access to a quiet and comfortable learning environment. For our hack we wanted to tackle this problem and create a tool that helps improve learning experiences and make classes more accessible for struggling students, with hopes of making a positive social impact by helping people communicate during a time filled with challenges and uncertainty. ## What it does EasyCC is a chrome extension that provides real-time closed captioning for any audio source running from your computer. EasyCC supports all platforms including Zoom, Collaborate Ultra, Discord, Google Meet, and can even transcribe Youtube videos! ## How we built it We first prototyped the UI in Figma and developed the front-end for the chrome extension using HTML and CSS. Using Node.js, we then integrated tools that allowed us to capture audio from the desktop and process speech into text using Google Cloud’s Speech-to-text engine. Using socket.io, we relayed the transcripts to our front-end to be displayed in real-time for the user. ## Challenges we ran into Most of the issues that we ran into were related to the backend and its integration. In particular, setting up our software architecture was challenging because we need to continuously pass large amounts of data from the backend to the frontend, which requires us to have a good understanding of how the web works and how each component interacts with each other. Since calling the Google Speech To Text API must be done in the backend, we had to effectively integrate it to the frontend so that the transcribed messages are displayed approximately in real time. The main hurdle was the lag due to the constant calls between the frontend and the backend, which required us to integrate Socket.io into our codebase, another feat in and of itself. Initially, the audio stream would not record which we discovered to be a permission issue outside of our code, so we had to address that issue in order for the Google Speech To Text API to work. Oftentimes, the documentation for the APIs are hard to understand due to a lack of explanations and examples, so we had to engage in some trial and error and adapt the code to meet our needs in the application. ## Accomplishments that we're proud of We are proud to have created an application that improves the experience of online lectures using off the shelf technology. We wanted to keep the application straightforward so that we can have it running quickly. Despite having little familiarity with web development and chrome extensions, we managed to create a frontend and backend, and more importantly, link these two together to create a functional application. In the process, we gained exposure to relevant web technologies and picked up researching skills, which is critical to software development. Also, we collaborated effectively to polish our ideas, offer different approaches to solving complex problems, and complement each other’s skills. 
Finally, we learned how to seek help from mentors effectively, being able to identify issues beyond the scope of our knowledge and research, and using the pointers they provided to devise an effective solution. ## What we learned Since we all had little experience with web development, this was our first time using the relevant technologies in an integrated manner. In particular, connecting the frontend and the backend is a major challenge that we are proud to have completed, enabling us to better understand the architecture of web applications. We learned a lot about the services we used, namely Chrome Extensions, Google Speech-To-Text API, Socket.io. This was our first time using these resources, and we are very happy with how we used them in our application. Since our program is constantly communicating between the frontend and the backend, we decided to use Socket.io to facilitate these interactions as it is designed for instant messaging. This vastly improves the performance when displaying the transcribed message on the overlay compared to constantly making HTTP calls. Error diagnosis is a constant thing we dealt with when developing software, especially when incorporating unfamiliar APIs to our codebase. In particular, although the Google Speech to Text API seemed imposing upon first glance, we are able to read through the documentation, understand what the code is doing, and identify errors preventing the service from running correctly. This was a great experience to us and we have been exposed to several great services during this hackathon. ## What's next for EasyCC EasyCC has a lot of potential to become a viable captioning service. We hope to add features that will improve our extension and make it even more accessible and useful. For one, we would like to use a translation API, which will connect users all over the world, allowing them to communicate and understand different languages. We could also potentially publish EasyCC onto the Chrome Web Store, so that our service is readily available to anybody.
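EasyCC's backend is written in Node.js, but the streaming Speech-to-Text call it describes (continuous audio chunks in, interim and final transcripts out, relayed to the overlay via socket.io) can be illustrated with Google Cloud's Python client as a rough sketch. The chunk source and the caption callback are stubbed and assumed.

```python
from google.cloud import speech

def stream_captions(audio_chunks, on_caption):
    """audio_chunks: iterator of raw LINEAR16 bytes at 16 kHz (e.g. captured desktop audio).
    on_caption: callback receiving (text, is_final), e.g. a socket.io emit."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    streaming_config = speech.StreamingRecognitionConfig(config=config, interim_results=True)
    requests = (speech.StreamingRecognizeRequest(audio_content=chunk) for chunk in audio_chunks)
    for response in client.streaming_recognize(config=streaming_config, requests=requests):
        for result in response.results:
            if result.alternatives:
                on_caption(result.alternatives[0].transcript, result.is_final)

# Usage sketch (names hypothetical):
# stream_captions(capture_desktop_audio(),
#                 lambda text, final: sio.emit("caption", {"text": text, "final": final}))
```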
losing
## Inspiration The loneliness epidemic is a real thing and you don't get meaningful engagements with others, just by liking and commenting on Instagram posts, you get meaningful engagement by having real conversations with others, whether it's a text exchange, phone call, or zoom meeting. This project was inspired by the idea of reviving weak links in our network as described in *The Defining Decade* "Weak ties are the people we have met, or are connected to somehow, but do not currently know well. Maybe they are the coworkers we rarely talk with or the neighbor we only say hello to. We all have acquaintances we keep meaning to go out with but never do, and friends we lost touch with years ago. Weak ties are also our former employers or professors and any other associations who have not been promoted to close friends." ## What it does This web app helps bridge the divide between wanting to connect with others, to actually connecting with others. In our MVP, the Web App brings up a card with information on someone you are connected to. Users can swipe right to show interest in reconnecting or swipe left if they are not interested. In this way the process of finding people to reconnect with is gamified. If both people show interest in reconnecting, you are notified and can now connect! And if one person isn't interested, the other person will never know ... no harm done! ## How we built it The Web App was built using react and deployed with Google cloud's Firebase ## Challenges we ran into We originally planned to use Twitters API to aggregate data and recommend matches for our demo, but getting the developer account took longer than expected. After getting a developer account, we realized that we didn't use Twitter all that much, so we had no data to display. Another challenge we ran into was that we didn't have a lot of experience building Web Apps, so we had to learn on the fly. ## Accomplishments that we're proud of We came into this hackathon with little experience in Web development, so it's amazing to see how far we have been able to progress in just 36 hours! ## What we learned REACT! Also, we learned about how to publish a website, and how to access APIs! ## What's next for Rekindle Since our product is an extension or application within an existing social media, Our next steps would be to partner with Facebook, Twitter, LinkedIn, or other social media sites. Afterward, we would develop an algorithm to aggregate a user's connections on a given social media site and optimize the card swiping feature to recommend the people you will most likely connect with.
## Inspiration We’ve noticed that it’s often difficult to form intentional and lasting relationships when life moves so quickly. This issue has only been compounded by the pandemic, as students spend more time than ever isolated from others. As social media is increasingly making the world feel more “digital”, we wanted to provide a means for users to develop tangible and meaningful connections. Last week, I received an email from my residential college inviting students to sign up for a “buddy program” where they would be matched with other students with similar interests to go for walks, to the gym, or for a meal. The program garnered considerable interest, and we were inspired to expand upon the Google Forms setup to a more full-fledged social platform. ## What it does We built a social network that abstracts away the tediousness of scheduling and reduces the “activation energy” required to reach out to those you want to connect with. Scheduling a meeting with someone on your friend’s feed is only a few taps away. Our scheduling matching algorithm automatically determines the best times for the meeting based on the inputted availabilities of both parties. Furthermore, since forming meaningful connections is a process, we plan to provide data-driven reminders and activity suggestions to keep the ball rolling after an initial meeting. ## How we built it We built the app for mobile, using react-native to leverage cross-platform support. We used Redux for state management and Firebase for user authentication. ## Challenges we ran into Getting the environment (emulators, dependencies, Firebase) configured was tricky because of the many different setup methods. Also, getting the state management with Redux set up was challenging given all the boilerplate needed. ## Accomplishments that we're proud of We are proud of the cohesiveness and cleanliness of our design. Furthermore, the structure of state management with Redux drastically improved maintainability and scalability, allowing data to be passed around the app seamlessly. ## What we learned We learned how to create an end-to-end app in flutter, wireframe in Figma, and use APIs like Firebase authentication and dependencies like react-redux. ## What's next for tiMe Further flesh out the post-meeting followups for maintaining connections and relationships
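The write-up doesn't detail how the scheduling matcher works, so the following is only one plausible sketch of the idea: intersect both users' availability windows and return the longest overlaps as the suggested meeting times. The data shapes and example availabilities are assumptions.

```python
from datetime import datetime
from typing import List, Tuple

Interval = Tuple[datetime, datetime]

def top_meeting_times(a: List[Interval], b: List[Interval], k: int = 3) -> List[Interval]:
    """Return up to k overlapping windows between two availability lists, longest first."""
    overlaps = []
    for a_start, a_end in a:
        for b_start, b_end in b:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if start < end:
                overlaps.append((start, end))
    overlaps.sort(key=lambda iv: iv[1] - iv[0], reverse=True)
    return overlaps[:k]

# Usage sketch with hypothetical availabilities:
# alice = [(datetime(2021, 9, 20, 12), datetime(2021, 9, 20, 15))]
# bob   = [(datetime(2021, 9, 20, 14), datetime(2021, 9, 20, 18))]
# top_meeting_times(alice, bob)  # -> [(14:00, 15:00)]
```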
## Inspiration We saw that lots of people were looking for a team to work with for this hackathon, so we wanted to build a solution. ## What it does It helps developers find projects to work on and helps project leaders find group members. By using the data from GitHub commits, it can determine what kind of projects a person is suitable for. ## How we built it We decided on building an app for the web, then chose a GraphQL, React, and Redux tech stack. ## Challenges we ran into The limitations of the GitHub API gave us a lot of trouble. The limit on API calls made it so we couldn't get all the data we needed. The authentication was hard to implement since we had to try a number of ways to get it to work. The last challenge was determining how to make a relationship between the users and the projects they could be paired up with. ## Accomplishments that we're proud of We have all the parts for the foundation of a functional web app. The UI, the algorithms, the database and the authentication are all ready to show. ## What we learned We learned that every API comes with its own unique challenges. ## What's next for Hackr\_matchr Scaling up is next. Having it used for more kinds of projects, with more robust matching algorithms and higher user capacity.
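The project analyzes GitHub activity through a GraphQL/React stack; as a hedged illustration of the underlying idea, the sketch below builds a simple language profile for a user from their public repositories via the GitHub REST API. The token is a placeholder, and unauthenticated calls hit exactly the kind of rate limit mentioned above.

```python
from collections import Counter

import requests

def language_profile(username: str, token: str = "") -> Counter:
    """Count the primary language of each public repo to approximate a developer's interests."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        # Authenticated requests get a much higher rate limit.
        headers["Authorization"] = f"token {token}"
    resp = requests.get(
        f"https://api.github.com/users/{username}/repos",
        headers=headers,
        params={"per_page": 100},
        timeout=10,
    )
    resp.raise_for_status()
    return Counter(repo["language"] for repo in resp.json() if repo["language"])

# language_profile("octocat")  -> e.g. Counter({'Ruby': 3, 'C': 1}) (illustrative output)
```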
losing
## Inspiration MSN Messenger and AOL messenger ## What it does A messaging app where you can easily talk to friends. Just open the app, and you're instantly able to talk to them. ## How we built it We built this app using C#, ASP.NET, MAUI, and gRPC. There are two parts to the program: an ASP.NET server that receives messages and sends them to other users, and a client written using MAUI that allows users to send and receive messages easily, abstracting away the technical details. ## Challenges we ran into * We had issues getting the protocol buffers and services to generate definitions * We had a lot of trouble creating a MAUI project in .NET, because there's a lot of conflicting information online about it * We had trouble debugging exceptions because sometimes they wouldn't be printed to the console * We had problems with connecting gRPC to the ASP.NET server ## Accomplishments that we're proud of We are very proud that we finished this project before the deadline. We faced many challenges, and at some points we had no idea if it was going to work or not. We created a distributed architecture where a very large number of clients could connect to the server and start chatting. ## What we learned We learned how to use gRPC, ASP.NET, and MAUI. ## What's next for msnmsg We plan to create msnmsg mobile apps for both iOS and Android. We also plan to add a Twilio integration for authentication via SMS messages.
## Inspiration Do you ever have an absolutely amazing thought but end up forgetting it? Or do you go to jot it down, only to spend more time figuring out which folder of which note-taking app to put it in? We have so many thoughts going through our heads all the time, especially in today's fast-paced, distraction-filled world: what groceries to buy, and what homework we need to do, and what pick up line to use on that cute girl at work. Many of these great ideas go forgotten. Even when we do remember to jot down our ideas, they usually end up getting lost and forgotten in the mess of our other notes. Introducing Jotter: a way to quickly jot down your thoughts and organize them for you so you never have to worry about forgetting anything ever again. ## What it does Jotter is the new, revolutionary AI tool that will change the game in the art of recalling. People get so swamped with their daily tasks, from homework for students to meetings for working adults, that we never have time to slow down in this fast-paced society, preventing us from remembering our brilliant ideas or the problematic issues that we face in our day-to-day lives. These ideas and issues we face are all potential opportunities to innovate and create something that will make our lives easier. As coders, we love to innovate off of any small inefficiency and create a solution, but yet, we can’t even recall these issues we faced from the day. Thus, Jotter will categorize all your daily thoughts, from daily tasks to long-term goals to simple notes and ideas. We utilize Cohere's generative AI to sort anything that you quickly jot down on Jotter into the proper category, leading to an organized, streamlined database of your notes that can be easily accessed and looked back on. Furthermore, to sprinkle some happiness into our users’ lives and help them remember their notes later, Jotter uses generative AI technology to include humorous and/or wholesome suggestions related to your note. The power of AI can truly increase our creativity and efficiency, and Jotter takes this power, bringing it to its maximum capabilities. ## How we built it In this project, we utilized Svelte for the front-end with the Material UI component library and Tailwind CSS for styling, while using Python in the back-end with various APIs. We endured a painstaking engineering process. We challenged ourselves by going out of our comfort zones through discovering more about full-stack development, utilizing Svelte, Python, the Cohere API, FastAPI, and Tailwind. We experimented with animations and UI components to develop an aesthetic front end that we could be proud of to complement our expansive back-end development. Furthermore, through our full utilization of GitHub and intensive collaboration through branching and merging, we worked as a singular unit to develop this app. ## Challenges we ran into Some challenges that we ran into were learning to work with new technologies that some of us had never studied before. Although it can be difficult to adapt and sometimes efficiently distribute work, we were able to pick up speed and recognize how to truly divide and conquer as a team. We learned to make sure to help each other and reach out more when in need, providing the successful communication that helped us overcome the variety of technical challenges we encountered, from integrating the front-end to the back-end. Through this, we were able to learn so much from each other and truly learn how to effectively communicate as a team.
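The categorization step in the back-end they describe (a Cohere prompt that sorts a jotted note into a category) might look roughly like the sketch below. This is a hedged illustration only: the category set, prompt, and model defaults are invented, it uses the classic `co.generate` interface, and the exact SDK surface differs between Cohere client versions.

```python
import cohere

co = cohere.Client("YOUR-API-KEY")  # placeholder key

CATEGORIES = ["task", "long-term goal", "idea", "note"]  # illustrative category set

def categorize(note: str) -> str:
    prompt = (
        "Classify the following note into exactly one of these categories: "
        + ", ".join(CATEGORIES)
        + ".\nNote: " + note + "\nCategory:"
    )
    response = co.generate(prompt=prompt, max_tokens=5, temperature=0.0)
    answer = response.generations[0].text.strip(" .\n").lower()
    # Fall back to "note" if the model answers with something outside the list.
    return answer if answer in CATEGORIES else "note"

# categorize("buy milk and eggs before Friday")  -> "task" (illustrative)
```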
## Accomplishments that we're proud of We were very proud of our idea. Even though it took us over 12 hours to finalize our project idea, we made sure to consider all aspects before diving into the project and executing it. Our patience and discipline, although it was difficult to continue discussing, is something that we were able to recognize as extremely valuable. Furthermore, our execution of the project is something we all saw beauty in. We were amazed how 4 people that did not know each other beforehand were able to effectively and openly communicate their ideas and concerns in order to create a well-mannered discussion. We all found this to be a great learning experience both in soft-skills and technical skills and look forward to staying connected! Furthermore, we explored the domains of prompt engineering in generative AI prompts. Through continuously refining our generative AI prompts and categories, we were able to consider many edge cases—namely notes that may fall under none of the categories—requiring us to put ourselves in the user’s shoes. We all are very proud to say that Jotter does a stellar job of categorizing in a practical manner for any note you could think of. ## What we learned A well-thought out prompt can make an enormous difference when it comes to using generative AI. Through our experience of testing all the edge cases, we were able to see the extremity and vast differences that prompts can make in the response. Whether it was ensuring the humor for our suggestions weren’t too dark nor too bland, or simply making sure we got the right format in the prompt’s answer, spending hours creating the best prompt paid large dividends. Furthermore, something that all of us took away revolved around not just new found technical skills in AI and blockchain, but also collaborative skills in compensating and offering a helping hand to each other. We were able to fill in for each other’s weaknesses, working together like a succinct, well-oiled machine. Thus, we discovered a new side of ourselves within each other, providing us experiences that will last us our life-time. ## What's next for Jotter Jotter is already an extremely powerful tool that revolutionizes the way that we recall our tasks, goals, and ideas; however, we still have room to expand and flourish into more. Imagine the future where you can have your entire life transcribed, summarized, and semantically searched. This is exactly what we are thinking with Jotter. By accessing one’s microphone on their phone, we intend to utilize audio information and voice recognition in order to expand our horizons by creating a transcribing dictaphone running 24/7, letting you have notes on your entire day, month, even life that can be summarized and searched to recall anything you need. The potential this has for people with Alzheimer's to simply allow people to recall and cherish even the small moments in their lives would be such a meaningful and impactful tool that is not only practical, but also very sentimental. These are the aspirations for Jotter, and we are excited to continue revolutionizing recall!
## Inspiration In a world where finance is extremely important, everyone needs access to **banking services**. Citizens within **third world countries** are no exception, but they lack the banking technology infrastructure that many of us in first world countries take for granted. Mobile Applications and Web Portals don't work 100% for these people, so we decided to make software that requires nothing more than a **cellular connection to send SMS messages** in order to operate. This resulted in our hack, **UBank**. ## What it does **UBank** allows users to operate their bank accounts entirely through **text messaging**. Users can deposit money, transfer funds between accounts, transfer accounts to other users, and even purchase shares of stock via SMS. In addition to this text messaging capability, UBank also provides a web portal so that when our users gain access to a steady internet connection or PC, they can view their financial information on a more comprehensive level. ## How I built it We set up a backend HTTP server in **Node.js** to receive and fulfill requests. **Twilio** with ngrok was used to send and receive the text messages through a webhook on the backend Node.js server, and applicant data was stored in Firebase. The frontend was primarily built with **HTML, CSS, and Javascript**, and HTTP requests were sent to the Node.js backend to receive applicant information and display it on the browser. We utilized Mozilla's speech-to-text library to incorporate speech commands and chart.js to display client data with intuitive graphs. ## Challenges I ran into * Some team members were new to Node.js, and therefore working with some of the server coding was a little complicated. However, we were able to leverage the experience of other group members, which allowed all of us to learn and figure everything out in the end. * Using Twilio was a challenge because no team members had previous experience with the technology. We had difficulties making it communicate with our backend Node.js server, but after a few hours of hard work we eventually figured it out. ## Accomplishments that I'm proud of We are proud of making a **functioning**, **dynamic**, finished product. It feels great to design software that is adaptable and begging for the next steps of development. We're also super proud that we made an attempt at tackling a problem that is having severe negative effects on people all around the world, and we hope that someday our product can make it to those people. ## What I learned This was our first time using **Twilio**, so we learned a lot about utilizing that software. Front-end team members also got to learn and practice their **HTML/CSS/JS** skills, which was a great experience. ## What's next for UBank * The next step for UBank probably would be implementing an authentication/anti-fraud system. Being a banking service, it's imperative that our customers' transactions are secure at all times, and we would be unable to launch without such a feature. * We hope to continue the development of UBank and gain some beta users so that we can test our product and incorporate customer feedback in order to improve our software before making an attempt at launching the service.
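UBank's actual backend is Node.js; purely as an illustration of the SMS-command flow it describes (Twilio webhook in, banking command parsed, TwiML reply out), here is a hedged Python/Flask sketch. The command grammar, account store, and phone numbers are made up, and a real service would of course use a database and authentication rather than an in-memory dict.

```python
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

# Toy in-memory ledger keyed by phone number (illustrative only).
ACCOUNTS = {"+15551234567": {"balance": 120.0}}

@app.route("/sms", methods=["POST"])
def sms_webhook():
    sender = request.form["From"]
    body = request.form.get("Body", "").strip().upper()
    account = ACCOUNTS.setdefault(sender, {"balance": 0.0})

    if body == "BALANCE":
        reply = f"Your balance is ${account['balance']:.2f}"
    elif body.startswith("DEPOSIT "):
        amount = float(body.split()[1])
        account["balance"] += amount
        reply = f"Deposited ${amount:.2f}. New balance: ${account['balance']:.2f}"
    else:
        reply = "Commands: BALANCE, DEPOSIT <amount>"

    twiml = MessagingResponse()
    twiml.message(reply)
    return str(twiml)

# Point the Twilio number's messaging webhook (exposed via ngrok during development)
# at POST /sms; texting "BALANCE" then returns the toy account balance.
```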
losing
## Inspiration 💡 *An address is a person's identity.* In California, there are over 1.2 million vacant homes, yet more than 150,000 people (homeless population in California, 2019) don't have access to a stable address. Without an address, people lose access to government benefits (welfare, food stamps), healthcare, banks, jobs, and more. As the housing crisis continues to escalate and worsen throughout COVID-19, a lack of an address significantly reduces the support available to escape homelessness. ## This is Paper Homes: Connecting you with spaces so you can go places. 📃🏠 Paper Homes is a web application designed for individuals experiencing homelessness to get matched with an address donated by a property owner. **Part 1: Donating an address** Housing associations, real estate companies, and private donors will be our main sources of address donations. As a donor, you can sign up to donate addresses either manually or via CSV, and later view the addresses you donated and the individuals matched with them in a dashboard. **Part 2: Receiving an address** To mitigate security concerns and provide more accessible resources, Paper Homes will be partnering with California homeless shelters under the “Paper Homes” program. We will communicate with shelter staff to help facilitate the matching process and ensure operations run smoothly. When signing up, a homeless individual can provide ID, however if they don’t have any forms of ID we facilitate the entire process in getting them an ID with pre-filled forms for application. Afterwards, they immediately get matched with a donated address! They can then access a dashboard with any documents (i.e. applying for a birth certificate, SSN, California ID Card, registering address with the government - all of which are free in California). During onboarding they can also set up mail forwarding ($1/year, funded by NPO grants and donations) to the homeless shelter they are associated with. Note: We are solely providing addresses for people, not a place to live. Addresses will expire in 6 months to ensure our database is up to date with in-use addresses as well as mail forwarding, however people can choose to renew their addresses every 6 months as needed. ## How we built it 🧰 **Backend** We built the backend in Node.js and utilized express to connect to our Firestore database. The routes were written with the Express.js framework. We used selenium and pdf editing packages to allow users to download any filled out pdf forms. Selenium was used to apply for documents on behalf of the users. **Frontend** We built a Node.js webpage to demo our Paper Homes platform, using React.js, HTML and CSS. The platform is made up of 2 main parts, the donor’s side and the recipient’s side. The front end includes a login/signup flow that populates and updates our Firestore database. Each side has its own dashboard. The donor side allows the user to add properties to donate and manage their properties (ie, if it is no longer vacant, see if the address is in use, etc). The recipient’s side shows the address provided to the user, steps to get any missing ID’s etc. ## Challenges we ran into 😤 There were a lot of non-technical challenges we ran into. Getting all the correct information into the website was challenging as the information we needed was spread out across the internet. In addition, it was the group’s first time using firebase, so we had some struggles getting that all set up and running. 
Also, some of our group members were relatively new to React so it was a learning curve to understand the workflow, routing and front end design. ## Accomplishments & what we learned 🏆 In just one weekend, we got a functional prototype of what the platform would look like. We have functional user flows for both donors and recipients that are fleshed out with good UI. The team learned a great deal about building web applications along with using firebase and React! ## What's next for Paper Homes 💭 Since our prototype is geared towards residents of California, the next step is to expand to other states! As each state has their own laws with how they deal with handing out ID and government benefits, there is still a lot of work ahead for Paper Homes! ## Ethics ⚖ In California alone, there are over 150,000 people experiencing homelessness. These people will find it significantly harder to find employment, receive government benefits, even vote without proper identification. The biggest hurdle is that many of these services are linked to an address, and since they do not have a permanent address that they can send mail to, they are locked out of these essential services. We believe that it is ethically wrong for us as a society to not act against the problem of the hole that the US government systems have put in place to make it almost impossible to escape homelessness. And this is not a small problem. An address is no longer just a location - it's now a de facto means of identification. If a person becomes homeless they are cut off from the basic services they need to recover. People experiencing homelessness also encounter other difficulties. Getting your first piece of ID is notoriously hard because most ID’s require an existing form of ID. In California, there are new laws to help with this problem, but they are new and not widely known. While these laws do reduce the barriers to get an ID, without knowing the processes, having the right forms, and getting the right signatures from the right people, it can take over 2 years to get an ID. Paper Homes attempts to solve these problems by providing a method for people to obtain essential pieces of ID, along with allowing people to receive a proxy address to use. As of the 2018 census, there are 1.2 million vacant houses in California. Our platform allows for donors with vacant properties to allow people experiencing homelessness to put down their address to receive government benefits and other necessities that we take for granted. With the donated address, we set up mail forwarding with USPS to forward their mail from this donated address to a homeless shelter near them. With proper identification and a permanent address, people experiencing homelessness can now vote, apply for government benefits, and apply for jobs, greatly increasing their chance of finding stability and recovering from this period of instability Paper Homes unlocks access to the services needed to recover from homelessness. They will be able to open a bank account, receive mail, see a doctor, use libraries, get benefits, and apply for jobs. However, we recognize the need to protect a person’s data and acknowledge that the use of an online platform makes this difficult. Additionally, while over 80% of people experiencing homelessness have access to a smartphone, access to this platform is still somewhat limited. Nevertheless, we believe that a free and highly effective platform could bring a large amount of benefit. 
So long as we prioritize the needs of a person experiencing homelessness first, we will be able to greatly help them rather than harm them. There are some ethical considerations that still need to be explored: We must ensure that each user’s information security and confidentiality are of the highest importance. Given that we will be storing sensitive and confidential information about the user’s identity, this is top of mind. Without it, the benefit that our platform provides is offset by the damage to their security. Therefore, we will be keeping user data 100% confidential when receiving and storing it by using hashing techniques, encryption, etc. Secondly, as mentioned previously, while this will unlock access to services needed to recover from homelessness, there are some segments of the overall population that will not be able to access these services due to limited access to the internet. While we have currently focused the product on California, US, where access to the internet is relatively high (80% of people facing homelessness have access to a smartphone and free wifi is common), there are other states and countries where access is limited. In addition to the ideas mentioned above, some next steps would be to design a proper user and donor consent form and agreement that both supports users’ rights and removes any concern about the confidentiality of the data. Our goal is to provide a means for people facing homelessness to receive the resources they need to recover, and we should thus be as transparent as possible. ## Sources [1](https://www.cnet.com/news/homeless-not-phoneless-askizzy-app-saving-societys-forgotten-smartphone-tech-users/#:%7E:text=%22Ninety%2Dfive%20percent%20of%20people,have%20smartphones%2C%22%20said%20Spriggs) [2](https://calmatters.org/explainers/californias-homelessness-crisis-explained/) [3](https://calmatters.org/housing/2020/03/vacancy-fines-california-housing-crisis-homeless/)
## Inspiration Our inspiration for this project was to develop a new approach to how animal shelter networks function, and how the nationwide animal care and shelter systems can be improved to function more efficiently, and cost effectively. In particular, we sought out to develop a program that will help care for animals, find facilities capable of providing the care needed for a particular animal, and eradicate the use of euthanization to quell shelter overpopulation. ## What it does Our program retrieves input data from various shelters, estimates the capacity limit of these shelters, determines which shelters are currently at capacity, or operating above capacity, and optimizes the transfer or animals capable of being moved to new facilities in the cheapest way possible. In particular, the process of optimizing transfers to different facilities based on which facilities are overpopulated was the particular goal of our hack. Our algorithm moves animals from high-population shelters to low-population shelters, while using google maps data to find the optimal routes between any two facilities. Optimization of routes takes into account the cost of traveling to a different facility, and the cost of moving any given number of animals to that facility through cost estimations. Finally, upon determining optimal transfer routes between facilities in our network, our algorithm plots the locations of a map, giving visual representations of how using this optimization scheme will redistribute the animal population over multiple shelters. ## How we built it We built our program using a python infrastructure with json API calls and data manipulation. In particular, we used python to make json API calls to rescue groups and google maps, stored the returned json data, and used python to interpret and analyze this data. Since there are no publicly available datasets containing shelter data, we used rescue groups to generate our own test data sets to run through our program. Our program takes this data, and optimizes how to organize and distribute animals based on this data. ## Challenges we ran into The lack of publicly available data for use was particularly difficult since we needed to generate our own datasets in order to test our system. This problem made us particularly aware of the need to generate a program that can function as a nationwide data acquisition program for shelters to input and share their animal information with neighboring shelters. Since our team didn't have significant experience working on many parts of this project, the entire process was a learning experience. ## Accomplishments that we're proud of We're particularly proud of the time we managed to commit to building this program, given the level of experience we had going into this project as our first hackathon. Our algorithm operates efficiently, using as much information as we were able to incorporate from our limited dataset, and constraints on how we were able to access the data we had compiled. Since our algorithm can find the optimal position to send animals that are at risk due to their location in an overpopulated shelter, our program offers a solution to efficiently redistribute animals at the lowest cost, in order to prevent euthanization of animals, which was our primary goal behind this project. ## What we learned Aside from technical skills learned in the process of working on this project, we all learned how to work as a team on a large software project while under a strict time constraint. 
This was particularly important since we only began working on the project on the afternoon of the second day of the hackathon. In terms of technical skills, we all learned a lot about using APIs, json calls in python, and learning python much farther in depth than any of us previously had experience in. Additionally, this hackathon was the first time one of our team members had ever coded, and by the end of the project she had written the entire front end of the project and data visualization process. ## What's next for Everybody Lives We had a lot of other ideas that we came up with as a result of this project that we wanted to implement, but did not have the time nor resources available to work on. Specifically, there are numerous areas we would like to improve upon and we conceptualized numerous solutions to issues present in today's shelter management and systems. Overall, we envisioned a software program used by shelters across the country in order to streamline the data acquisition process, and share this data between shelters in order to coordinate animal transfers, and resource sharing to better serve animals at any shelter. The data acquisition process could be improved by developing an easy to use mobile or desktop app that allows to easily input information on new shelter arrivals which immediately is added to a nationally available dataset, which can be used to optimize transfers, resource sharing, and population distribution. Another potential contribution to our program would be to develop a type of transportation and ride-share system that would allow people traveling various distances to transport animals from shelter to shelter such that animals more suited to particular climates and regions would be likely to be adopted in these regions. This feature would be similar to an Uber pool system. Lastly, the most prominent method of improving our program would be to develop a more robust algorithm to run the optimization process, that incorporates more information on every animal, and makes more detailed optimization decisions based on larger input data sets. Additionally, a machine learning mechanism could be implemented in the algorithm in order to learn what situations warrant an animal transfer, from the perspective of the shelter, rather than only basing transfers on data alone. This would make the algorithm grow, learn and become more robust over time.
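The Everybody Lives write-up describes optimizing transfers from over-capacity shelters to under-capacity ones using cost estimates derived from Google Maps data, but it doesn't give the algorithm itself. The following is only a rough greedy sketch of that idea, with assumed data shapes (shelter populations, capacities, and a pairwise cost-per-animal matrix).

```python
from typing import Dict, List, Tuple

def plan_transfers(
    population: Dict[str, int],
    capacity: Dict[str, int],
    cost_per_animal: Dict[Tuple[str, str], float],
) -> List[Tuple[str, str, int, float]]:
    """Greedily move animals out of over-capacity shelters into the cheapest
    shelters that still have room. Returns (from, to, count, est_cost) tuples."""
    transfers = []
    # Handle the most overcrowded shelters first.
    for src in sorted(population, key=lambda s: population[s] - capacity[s], reverse=True):
        overflow = population[src] - capacity[src]
        if overflow <= 0:
            continue
        # Destinations with spare room, cheapest transfer first.
        destinations = sorted(
            (d for d in population if d != src and population[d] < capacity[d]),
            key=lambda d: cost_per_animal[(src, d)],
        )
        for dst in destinations:
            if overflow <= 0:
                break
            moved = min(overflow, capacity[dst] - population[dst])
            transfers.append((src, dst, moved, moved * cost_per_animal[(src, dst)]))
            population[src] -= moved
            population[dst] += moved
            overflow -= moved
    return transfers

# plan_transfers({"A": 120, "B": 40}, {"A": 80, "B": 100}, {("A", "B"): 3.5, ("B", "A"): 3.5})
# -> [("A", "B", 40, 140.0)]  (hypothetical shelters A and B)
```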
## Inspiration The system was Mostafa's idea. In a world where transparency has become more of a public concern in recent years, it was important to provide a medium to hold charities accountable for the money they receive in good faith, and to encourage people with the means to donate. ## What it does Project Glass is a system that uses blockchain technology to track donations given to charitable organizations that have opted in. Each donation is given a unique "tracking key", like the kind you get on parcels to track the status of deliveries. Donors can then look up their donation on the Project Glass website to see exactly where each dollar ended up. It also provides suggestions for where it is best for the organization to spend money. This is driven by a machine learning algorithm that detects events in data collected on topics relevant to the NGOs in the network. The ML algorithm detects relevant events, which are then dispatched using PubSub+ to the Project Glass partner organizations. The organizations would then be able to see a live feed of relevant data that they can use to better leverage their short-term investments. ## How we built it We use blockchain and a proprietary currency to keep track of every dollar spent. Each invested dollar is turned into a unit of currency and tied to a transaction id. The transactions of every dollar are then logged into the blockchain from the time it is deposited until the time it is sent to an external entity (such as another NGO, or if it was used for an expense). A person with a tracking id can use it to look up the final destination of every dollar that they have spent, which adds transparency as a result. Auditors can also use this information to verify the claims of NGO expenditure by matching the NGO's bank transactions to what was claimed in the system, which makes their job easier. We use data gathering, AI, and PubSub+ to generate and publish events. We have a data stream on which we run a time-series-based machine learning algorithm that detects events. The events are then sent over a PubSub+ topic, which is received by the Project Glass service and used to drive suggestions for where it is best for an organization to send money. ## Challenges we ran into The main challenge was adoption: how do we make sure that this system can easily be adopted given the use of a new currency? The solution is to use the new currency strictly for tracking investments dollar-to-dollar. This currency cannot be used or exchanged in any other context, as it is only meant to augment the existing financial system with traceability. In Project Glass, we limited the use of this currency to compiling transactions to the ledger and mapping individual investments to every contribution they eventually make.
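The write-up doesn't specify how Project Glass's chain is implemented, so the toy Python sketch below only illustrates the dollar-level traceability idea: an append-only, hash-chained log keyed by tracking key, which a donor (or auditor) can query for every hop a donation made. It is not the team's actual blockchain, and the PubSub+ event side is omitted.

```python
import hashlib
import json
import time

class DonationLedger:
    """Toy append-only, hash-chained log of transactions per tracking key."""

    def __init__(self):
        self.chain = []

    def append(self, tracking_key: str, source: str, destination: str, amount: float):
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {
            "tracking_key": tracking_key,
            "source": source,
            "destination": destination,
            "amount": amount,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        # Chain each record to the previous one so history can't be silently edited.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.chain.append(record)

    def trace(self, tracking_key: str):
        """Everything a donor sees when they look up their tracking key."""
        return [r for r in self.chain if r["tracking_key"] == tracking_key]

# ledger = DonationLedger()
# ledger.append("TRK-001", "donor", "ngo-general-fund", 50.0)
# ledger.append("TRK-001", "ngo-general-fund", "water-well-project", 50.0)
# ledger.trace("TRK-001")  # -> both hops of that $50 donation (names hypothetical)
```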
winning
## Inspiration In today's world, technology has made it possible for people from various backgrounds and cultures to interact and understand each other through cross-cultural and cross-linguistic platforms. Spoken language is a much smaller barrier than it was a few decades ago. But there still remains a community amongst us with whom most of us can't communicate face-to-face, due to our lack of knowledge of their mode of communication. What makes their language different from any other is that their speech isn't spoken, it is shown. This is particularly pronounced in education, where deaf and hard-of-hearing students and educators can feel isolated in mixed learning environments; this project hopes to help them better communicate and integrate with the world around them. ## What it does Our contribution is Talk To The Hand — a web application that helps hearing-impaired people share their message with the world, even if they don't have the physical or financial access to an interpreter. Sign language speakers open the application and sign in front of their computer's camera. Talk To The Hand uses computer vision and machine learning to interpret their message, transmit the content to their audience via voice assistant, and create a written transcript for visual confirmation of the translation. After the user is done speaking, Talk To The Hand gives them the opportunity to share the written transcript of their talk through email, text message, or link. We imagine that this tool will be especially helpful and powerful in public speaking settings — not unlike presenting at a Hackathon! Talk To The Hand dramatically increases the ability of deaf and hard-of-hearing people to speak to a broad audience. ## How we built it The application has two components: a machine learning model that recognizes hand gestures and predicts the corresponding meaning, and a web application that provides the user with an intuitive interface for interpreting the signs and speaking them out, with multi-language speech support. We built the model by training deep neural nets on a Kaggle dataset, Sign Language MNIST, for hand gesture recognition (<https://www.kaggle.com/datamunge/sign-language-mnist>). Once we set up the inference mechanism to get the model's prediction for a hand gesture given as an image, we converted the prediction to English speech using the Houndify text-to-speech API. We then set up the web application through which the user interacts using images of hand gestures; the interpretation of each gesture is both displayed as text and spoken aloud in the user's language of choice. ## Challenges we ran into One of the biggest hurdles we faced as a team was the development of our hosting platform. Despite our lack of experience in the domain, we wanted to make our application as accessible and intuitive as possible for our users. After exploring some of the more basic web development technologies such as HTML, CSS, and JavaScript, we shifted to more nuanced web/mobile app development to make our application easy to apply in various domains. We faced obstacles transferring data between the frontend and the backend for our images and the speech responses from API calls. In the process, we managed to set up a working web-based application.
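For reference, a minimal sketch of the kind of classifier described above, trained on the Sign Language MNIST CSV (a label column plus 784 pixel columns); the file path and hyperparameters are placeholders, not the exact network we used:

```python
# Sketch of a hand-gesture classifier trained on Sign Language MNIST.
# The CSV layout matches the Kaggle download; everything else is illustrative.
import pandas as pd
import tensorflow as tf

train = pd.read_csv("sign_mnist_train.csv")          # assumed local copy of the Kaggle CSV
y = train.pop("label").values
x = train.values.reshape(-1, 28, 28, 1).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(25, activation="softmax"),  # 25 label slots (J and Z need motion)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, validation_split=0.1)

# At inference time a cropped hand frame is preprocessed the same way and the
# argmax of model.predict(...) maps back to a letter for the text-to-speech step.
```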
## Accomplishments that we're proud of First and foremost, we are proud of having thought of and built the first iteration of an application that will allow people dependent on sign language to cross any communication barriers that may come their way. We are hopeful about the impact it will have on this community and are looking forward to carrying it to the next phase. We are thrilled about developing a model that can predict the letter corresponding to a sign, integrating it with a text-to-speech API, and deploying a functional web application even though our team is inexperienced with web development. Overall, we relish the experience of having pushed ourselves beyond what we thought was possible and working on something that we believe will change the world. ## What we learned One of the biggest takeaways for our team as a whole was going through the entire development life cycle of a product, from ideation to building the minimum viable product. We were also exposed to more applications of computer vision through this project. ## What's next for Talk to the Hand Today's version of Talk To The Hand is a very minimal representation created in 36 hours in order to show proof of concept. Next steps would include in-depth sign education and a refined experience based on user testing and feedback. We believe Talk To The Hand could make a powerful impact in public speaking and presentation settings for the deaf and hard of hearing, especially in countries and communities where physical and financial access to interpreters proves difficult. Imagine a neighborhood activist sharing an impassioned speech before a protest, a middle school class president giving his inaugural address, or a young hacker pitching to her panel of judges.
## Inspiration At Carb0, we're committed to empowering individuals to take control of their carbon footprint and contribute to a more sustainable future. Our inspiration comes from the fact that 72% of CO2 emissions could be reduced by changes in consumer behavior, yet many companies lack the motivation to conduct ESG reports if not required by investors or the government. We believe that establishing consumer-driven ESG can push companies to be accountable and take action to provide more sustainable products and services. ## What it does We created **a personal carbon tracker** that **incentivizes** customers to adopt low-carbon lifestyles and **democratizes carbon footprint data**, making it easier for everyone to contribute to a sustainable future. Our platform provides information to influence consumers' purchase decisions and offers alternatives to help them make sustainable choices. This way, we can encourage companies, investors, and the government to take responsibility and be more sustainable. ## How we built it We began by identifying the problem and then went through an intense ideation process to converge on our consumer-driven ESG idea. We defined the user journey and pain points to create a convenient, incentivizing, and user-centric platform. Our reward system links easily to digital payment details and helps track CO2 emissions with data visualization and cashback based on monthly summaries. We also make product carbon footprint data easily accessible and searchable. ## Challenges we ran into Our biggest challenges were integrating the front end and back end and defining scope. We also had to make technical assumptions, since an accurate database was not available within the time constraints. ## Accomplishments that we're proud of Despite these challenges, we are proud of our self-sustaining system for establishing consumer-driven ESG, the successful integration of front end and back end with a user-friendly interface, and the intense ideation process we went through. ## What we learned During this project, we learned how to rapidly prototype a digital app with limited time and resources, and gained a deeper understanding of ESG, its current challenges, and potential solutions. ## What's next for Carb0 - Empower your carbon journey Our next steps are to conduct user testing and iterate toward a higher-fidelity prototype, and to enrich the carbon footprint database's coverage and accuracy. We also plan to potentially add Carb0 as an add-on for digital wallets to reach a broader audience and engage more people in a more sustainable lifestyle. Our vision is that **consumer-driven ESG** will incentivize governments, investors, and companies to take more initiative in creating a more sustainable world. Join us on our journey to a sustainable future with Carb0!
## Inspiration Post-pandemic, the world has become increasingly reliant on video conferencing platforms like Zoom to stay connected and continue business, education, and social interactions. However, one glaring issue persisted – the lack of accessibility for individuals who rely on American Sign Language (ASL) to communicate. Our team of passionate individuals, each bringing unique skills and experiences to the table, came together with a shared motivation to bridge this accessibility gap. We were driven by a collective desire to empower the Deaf and hard-of-hearing community by making virtual meetings and conversations more *inclusive and accessible*. ## What it does **Gesture** is a cutting-edge application that integrates with Zoom to provide real-time translation of the American Sign Language (ASL) alphabet into text. ## How we built it **Parts of Gesture** 1. Video Capture and Processing 2. ASL Alphabet Dataset Training 3. Processing into Sentences 4. Zoom Integration ### Video Capture and Processing We employed pyvirtualcam to serve as a virtual overlay camera within Zoom. Utilising OpenCV, the application identifies hand movements and captures a static image at regular intervals whenever it detects a user's hand in view. For image processing, we used MediaPipe to perform precise hand landmarking, generating nodes at crucial joints and key points on the hands. ### ASL Alphabet Dataset Training We obtained our training data from the [ASL Alphabet dataset](https://www.kaggle.com/datasets/grassknoted/asl-alphabet?resource=download), and employed MediaPipe to identify hand landmarks within static images. Subsequently, we trained a machine learning model using TensorFlow that identifies the ASL signs and translates them into English letters. ### Processing into Sentences We assemble the individual letters into words, which are then processed through a spell checker. Following this, the refined words are fed into an NLP model to correct and improve grammatical structure. **Example** ``` hi Hack MIT! Tis is Gesture, an ASL trnzlation ap. We aree sO exsited to b here! ``` **Result** ``` Hi Hack MIT! This is Gesture, an ASL translation app. We are so excited to be here! ``` ### Zoom Integration For bidirectional communication, we utilise zoom\_cc for speech-to-text (closed captions). Furthermore, the virtual camera video feed is integrated into Zoom. ## Challenges we ran into Initially, our intention was to utilise a comprehensive dataset containing more than 2,000 ASL gestures. However, after refining the dataset repository, we realised that our hardware lacked the robust GPUs needed for the computational demands of training and testing on that data. By this time, it was already late in the evening. We opted to divide into two sub-teams to concurrently work on two different projects: Gesture, and an EEG system designed to monitor the progression of Alzheimer's disease. Ultimately, we chose to proceed with Gesture, believing it offered the greatest potential for rapid development and meaningful community impact. ## Accomplishments that we're proud of We're proud of powering through without sleep to finish two concurrent projects. ## What we learnt We learnt that 24 hours is not enough. ## What's next for Gesture Extending Gesture to full ASL gesture-to-text and speech translation.
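A small sketch of the landmarking step described above: MediaPipe returns 21 hand keypoints per frame, which can be flattened into a 63-value feature vector for the TensorFlow classifier. The model path, label list, and captured frame name are hypothetical:

```python
# Extract MediaPipe hand landmarks from a captured frame and flatten them
# into a feature vector ready for classification. Illustrative sketch only.
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def landmarks_from_frame(bgr_frame):
    rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).flatten()  # shape (63,)

frame = cv2.imread("captured_sign.jpg")       # hypothetical frame grabbed by OpenCV
features = landmarks_from_frame(frame)
if features is not None:
    # classifier = tf.keras.models.load_model("asl_landmark_model.h5")  # assumed trained model
    # letter = LABELS[int(classifier.predict(features[None, :]).argmax())]
    print("Got a 63-dim landmark vector ready for classification.")
```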
partial
## Inspiration We often want to read certain manga because we are interested in their stories, but are completely barred from doing so purely because there is no English translation for them. As a result, we decided to make our own solution, in which we take the pages of the manga, translate them, and rewrite them for you. ## What it does It takes the image of the manga page and sends it to a backend. The backend first uses a neural network trained on thousands of examples of actual manga to detect the areas of writing and text on the page. Then, the server clears that area out of the image. Using Google Cloud services, we then take the written Japanese and translate it into English. Lastly, we redraw that English in its corresponding positions on the original image to complete the manga page. ## How we built it We used Python and Flask, with a bit of HTML and CSS, for the front-end web server. We used Expo to create a mobile front end as well. We wrote the backend in Python. ## Challenges we ran into One of the challenges was properly using Expo, a service/platform new to us, to fit our many needs. There were some functionalities we wanted that Expo didn't have; however, we found manual work-arounds. ## Accomplishments that we're proud of We are proud of successfully creating this project, especially because it was a difficult task. The fact that we completed a working product that we can consider using ourselves makes this accomplishment even better. ## What we learned We learned a lot about how to use Expo, since it was our first time using it. We also learned how to modify images through Python along the way. ## What's next for Kero Kero's front end can be expanded to look nicer and have more functionality, like handling multiple images at once to translate a whole chapter of a manga.
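A minimal sketch of the final redraw step, blanking a detected speech bubble and drawing the translated English back in with Pillow; the bounding box, font file, and translated string are placeholders, since detection and translation happen earlier in the pipeline:

```python
# Clear a detected text region and redraw translated text inside it.
# Box coordinates, font, and strings are illustrative placeholders.
from PIL import Image, ImageDraw, ImageFont

def redraw_bubble(page_path, box, translated_text, out_path):
    # box = (left, top, right, bottom) from the text-detection model
    page = Image.open(page_path).convert("RGB")
    draw = ImageDraw.Draw(page)
    draw.rectangle(box, fill="white")                     # clear the original Japanese
    font = ImageFont.truetype("comic_font.ttf", size=18)  # assumed comic-style font file
    left, top, right, _ = box
    # naive word wrap to keep the text inside the bubble width
    words, lines, line = translated_text.split(), [], ""
    for w in words:
        trial = (line + " " + w).strip()
        if draw.textlength(trial, font=font) <= right - left - 10:
            line = trial
        else:
            lines.append(line)
            line = w
    lines.append(line)
    draw.multiline_text((left + 5, top + 5), "\n".join(lines), fill="black", font=font)
    page.save(out_path)

redraw_bubble("page_012.png", (140, 60, 320, 180), "I won't lose this time!", "page_012_en.png")
```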
## Inspiration We've always wanted to be able to point our phone at an object and know what that object is in another language. So we built that app. ## What it does Point your phone's camera toward an object, and the app will identify that object for you using the Inception neural network. We translate the object's name from a source language (English) to a target language, usually one the user wants to learn, using the Google Translation API. Using ARKit, we display the object's name, in both English and the foreign language, on top of the object. To help you retain the word, we also show different ways of using it in a sentence. All in all, the app is a great resource for learning how to pronounce and learn about different objects in different languages. ## How we built it We built the frontend mobile app in Swift, used ARKit to place words on top of an object, and used Google Cloud Functions to access APIs. ## Challenges we ran into Dealing with Swift frontend frames, and getting authentication keys to work properly for the APIs. ## Accomplishments that we're proud of We built an app that looks awesome with ARKit and has great functionality. We took an app idea and worked together to make it come to life. ## What we learned We learned in greater depth how Swift 4 works, how to use ARKit, and how easy it is to use Google Cloud Functions to offload server-like computation away from your app without having to set up a server. ## What's next for TranslateAR IPO in December
## Inspiration I was inspired to create this app by my aging Chinese grandparents, who are trying to learn English by reading books. I noticed them struggle with certain words and lean in squinting to read the next paragraph. I decided that I could create a better way. ## What it does Spectacle is an upgrade for your reading glasses. Through OCR, you can take a picture (or choose from your camera roll) of any text in any language, be it book, newspaper, or otherwise, and convert it into accessible text (everything is in Verdana, the most readable screen font). In addition, through English word frequency analysis, we link definitions to the harder words in the text so you can follow along. Everything is customizable: the font size, the difficulty of the text, and even the language the text translates to. Being able to switch back to native Traditional Chinese to solidify their understanding of the text is a blessing for English language learners like my grandparents. With this list of features: * Translate images to any language * From any language * English word difficulty detector and easy-to-access definitions * Replace hard words with easier ones inline. * Accessibility: font and font size Spectacle can help younger people and non-native English speakers learn English vocabulary as well as help older/near-sighted people read without straining their eyes. ## How I built it Spectacle is built on top of Expo.io, a convenient framework for coding in React-native yet supporting iOS and Android alike. I decided this would be best as a mobile app because people love to read everywhere, so Expo was definitely a good choice. I used various Google Cloud ML services, including vision and NLP, to render and process the text. Additionally, I used Google Cloud Translate to translate text into other languages. For word frequencies and definitions, I combined the Words.ai API with my own algorithm to determine which words were considered "difficult". ## Challenges I ran into Although I had used Expo.io a little in the past, this was my first big project with the framework, so it was challenging to go through the documentation and see what React-native features were supported and what weren't. The same can be said for the Google Cloud Platform. Before this, I had only deployed a Node.js app to the Google App Engine, so getting into it and using all these APIs was definitely tough. ## Accomplishments that I'm proud of I'm proud that I got through the challenges I listed above and made a beautiful app (along with my team of course) that I will be proud to show to my grandparents. I'm also proud that I set lofty, yet realistic goals for the app and managed to meet them. Most of the time, when my team goes to a hackathon, we end up trying to add too many features and have an unfinished product by the time it's over, so I'm very glad we didn't let it happen this time. ## What I learned I learned a lot about Google Cloud Platform, Expo.io, and React-native, as well as how to put them together in (maybe not the best) but a working way. ## What's next for Spectacle I want to add the ability to save images/text for later, so that you can essentially store some reading material for later, and pull it up whenever you want. I also want to further upgrade the hard word detection algorithm that I made.
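As a rough illustration of the "hard word" heuristic described above, here is a sketch that uses the open-source wordfreq library in place of the Words.ai-based scoring; the threshold and the stubbed definition lookup are assumptions, not the app's actual algorithm:

```python
# Flag rare words in a passage and attach a (stubbed) definition.
# wordfreq's Zipf scale runs roughly 1 (very rare) to 7 (very common).
from wordfreq import zipf_frequency

DIFFICULTY_THRESHOLD = 3.5  # tunable; lower Zipf score = rarer word

def lookup_definition(word):
    return f"(definition of '{word}' fetched from the dictionary API)"  # placeholder

def hard_words(text, lang="en"):
    flagged = {}
    for raw in text.split():
        word = raw.strip(".,;:!?\"'()").lower()
        if word.isalpha() and zipf_frequency(word, lang) < DIFFICULTY_THRESHOLD:
            flagged[word] = lookup_definition(word)
    return flagged

print(hard_words("The loquacious professor expounded on thermodynamics."))
```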
partial
## Inspiration After looking at the Hack the 6ix prizes, we were all drawn to the BLAHAJ. On a more serious note, we realized that one thing we all have in common is accidentally killing our house plants. This inspired a sense of environmental awareness, and we wanted to create a project that would encourage others to take better care of their plants. ## What it does Poképlants employs a combination of cameras, moisture sensors, and a photoresistor to provide real-time insight into the health of our household plants. Using this information, the web app creates an interactive gaming experience where users can gain insight into their plants while levelling up and battling other players' plants. Stronger plants have stronger abilities, so our game is meant to encourage environmental awareness while creating an incentive for players to take better care of their plants. ## How we built it ### Back-end: The back end was a LOT of Python. We took on a new challenge and decided to try out Socket.IO for a websocket connection so that we could support multiplayer; this tripped us up for hours and hours until we finally got it working. Aside from this, we have an Arduino reading the moisture of the soil and the brightness of the surroundings, as well as a camera capturing a picture of the plant, where we leveraged computer vision to recognize what the plant is. Finally, using LangChain, we developed an agent to relay all of the Arduino info to the front end and manage the game state, and for storage we used MongoDB to hold all of the data needed. ### Front-end: The front-end was developed with **React.js**, which we used to create a web-based game. We were inspired by the design of old Pokémon games, which we thought might evoke nostalgia for many players. ## Challenges we ran into We had a lot of difficulty setting up Socket.IO and connecting the API through it to the front end and the database. ## Accomplishments that we're proud of We are incredibly proud of integrating web sockets between the frontend and backend and using the Arduino data from the sensors. ## What's next for Poképlants * Since the game was designed with a multiplayer experience in mind, we want to add more social capabilities by creating a friends list and leaderboard * Another area to explore would be a connection to the community; for plants that are seriously injured, we could suggest and contact local botanists for help * Some users might prefer the feeling of a mobile app, so one next step would be to create a mobile solution for our project
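A minimal sketch of the sensor-to-websocket path described above, assuming the Arduino prints comma-separated readings over serial; the port name, baud rate, and message format are assumptions about the wiring, not the project's exact code:

```python
# Read "moisture,light,temperature" lines from the Arduino over serial and
# broadcast them to game clients with Flask-SocketIO. Illustrative sketch.
import serial
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

def pump_sensor_readings():
    arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)  # assumed port and baud rate
    while True:
        line = arduino.readline().decode(errors="ignore").strip()
        try:
            moisture, light, temp = (float(v) for v in line.split(","))
        except ValueError:
            socketio.sleep(0.1)
            continue  # skip partial or garbled lines
        stats = {"moisture": moisture, "light": light, "temperature": temp}
        socketio.emit("plant_update", stats)   # multiplayer clients update their Poképlant
        socketio.sleep(1)

if __name__ == "__main__":
    socketio.start_background_task(pump_sensor_readings)
    socketio.run(app, host="0.0.0.0", port=5000)
```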
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
## **Inspiration** 1 in 5 elderly Canadians feel isolated as a result of their environment. In an ever-changing world, society often forgets about the people who built the present world we live in. Elderly people constantly battle between the desire to connect with the changing society around them and the worry that they are being a bother to their children and grandchildren. Having something that always has time to understand and talk to you would be a great help for loneliness and mental health issues in elderly people. This is where BloomBuddy comes in. Like a human, the plant lives alongside the user, undergoing all the environmental changes and needing the same basic necessities it normally would. As such, it understands the environment that elderly people live in and is able to be a present and effective companion for them. BloomBuddy not only analyzes the words coming from the user but also the environment and climate surrounding the user that could impact their mental health. ## **What it does** Using machine learning, BloomBuddy analyzes the environment both the plant and the user are living in and generates a personality based on those metrics. Through the unique personality created from the shared environment between the user and BloomBuddy, the plant is able to provide realistic and relatable responses to the user. BloomBuddy lives alongside the human and generates outputs based on whether it needs resources (e.g., water, more light); it is self-caring and automatically notifies the user of its metrics using sensors. The user communicates with their unique BloomBuddy through a screen and typed input, effectively conversing with the personalized ML model as they would with a human. Each BloomBuddy is unique in both personality and digital presence, with an individualized NFT generated per plant, built on the **Flow** blockchain network. The plant's metrics are easily viewable through the 7-inch display or through **Kintone**, where the user can see further analysis of the measured metrics over time. ## **How we built it** Some of the sensors we used include a photoresistor, a humidity sensor, a temperature sensor, and a moisture sensor to collect a wide variety of data. The Arduino IDE was used to program the Arduino and ESP32, as well as the UART communication between the two, so the data stored on the ESP32 can be shown on the display. We built the webpage with HTML for the template, CSS for the styling, and JavaScript for the functionality, while also integrating the OpenAI API into the webpage. The NFT was created on the Flow blockchain network using Cadence and JavaScript to write the smart contracts and set up the environment to validate the NFT. ## **Challenges we ran into** It was our first time working with a lot of these technologies, and we had several issues with hardware in particular. Our initial approach with the Raspberry Pi failed due to a faulty SD card reader. After hours of work to debug and resolve the issue, we pivoted to an Arduino to fulfill the role of the central computing unit of our project. However, the Arduino still had some communication issues with the ESP32 that needed to be solved. When implementing Flow into our project, we were unfamiliar with the environment, and it was our first time setting up a Web3 stack; we encountered many issues getting the environment, smart contracts, and scripts set up. Nevertheless, our team learned a lot by working with new software and languages.
It took all of 24+ hours with no sleep and many Red Bulls, but we'd like to think it was worth it :). ## **Accomplishments that we're proud of** We were able to support our hardware system with a strong, multi-aspect software system consisting of tech ranging from Web3 to web and app development, data processing, and more. The entire tech stack that we learned was geared towards the one goal of providing the most realistic interactive machine to provide companionship to elderly people, and we'd like to say we've accomplished that. As a team, we used our time efficiently by delegating responsibilities well based on skills and experience. Despite issues with hardware and software, our team pushed through and learned a lot in the process. The issue we tackled hit home for our entire team, as we've seen first-hand how elderly people get neglected by accident in our fast-paced society. We're proud that within 24 hours we were able to construct an MVP of a promising solution that we all truly believe in. ## **What we learned** Although we had many setbacks, we overcame most of them and learned from our experiences. We quickly learned how to call the OpenAI API from JavaScript alongside a microcontroller such as the Arduino. Furthermore, as we had limited experience with NFTs, we learned how to use Flow for a Web3 implementation with NFTs. Through the hackathon, we were able to deploy a webpage on the ESP32 with the help of the mentor panel; without them, we wouldn't have been able to get past this step. Most of all, we've learned that any technology is learnable given a strong passion for the project and a team that's motivated to learn. As a collective team, we can confidently say we've learned way more than we expected coming into MakeUofT. ## **What's next for: BloomBuddy** The next step is scaling the project by adding more functionality to 1) increase accessibility for elderly people with disabilities, 2) generate more impact by personalizing the plant further to tailor it to individual mental illnesses, and 3) further develop the Web3 functionality to enhance NFT collections and reward users with FLOW tokens, and more. BloomBuddy is just the start of what personifying our hobbies could look like. Whether we realize it or not, every living being shows its behavior in some way. BloomBuddy is proof of concept that personifying the things we care about will enhance the growth of others as well as ourselves, and can feasibly make a larger impact in the world with the help of technologies like AI and Web3.
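The project itself calls the OpenAI API from the JavaScript served by the ESP32; the Python sketch below only illustrates the core idea from the "How we built it" section above, folding live sensor metrics into the prompt so the plant's replies reflect the shared environment. The metric values, prompt wording, and model name are placeholders:

```python
# Turn sensor metrics into part of the system prompt so BloomBuddy's
# replies reflect what the plant is actually "living through".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def bloombuddy_reply(user_message, metrics):
    system = (
        "You are BloomBuddy, a gentle houseplant companion for an elderly user. "
        f"Current soil moisture: {metrics['moisture']}%. "
        f"Light level: {metrics['light']} lux. Room temperature: {metrics['temp']}C. "
        "Mention your own needs only when a reading is unhealthy, and keep replies short and warm."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user_message}],
    )
    return resp.choices[0].message.content

print(bloombuddy_reply("It's been a quiet day today.",
                       {"moisture": 22, "light": 480, "temp": 21}))
```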
winning
## Inspiration☁️: Our inspiration for "Snippit"✂️ comes from the infamous YouTube meme titled "Barack Obama Sings Call Me Maybe!" The basis of our app is producing a "meme" according to the user's request, and it reflects our enthusiasm for learning new concepts like text-to-speech recognition and the overall journey of completing a hackathon. We agreed that in a competitive environment like Hack the North, it is equally valuable to bring fun and laughter to all hackers at such a memorable event! ## What it does💁‍♂️: "Snippit" is a real-time application that allows the user to translate any sentence of their liking into a randomized video file saying exactly what the user requested. It uses a combination of APIs as well as a tailored database that consists of more than 100,000 words with matching timestamps and YouTube URLs to ultimately stitch together a short but funny clip! ## How we built it🔨: * Frontend: React, CSS * Backend: Golang, SQL * Services: CockroachDB, AssemblyAI ## Challenges we ran into 😳: * Devising an unbiased and efficient algorithm to populate our SQL database with words from a transcribed text. * Understanding FFMPEG source code to adapt it with Golang to interact with multimedia files and modify them accordingly. ## Accomplishments that we're proud of💪: * Successfully coupled multiple APIs with the backend gateway, including AssemblyAI. * Worked and communicated effectively as a team, engaging in all aspects of engineering the application including UI/UX design, database management, full-stack development, and API integration. * Experimented with multiple APIs to reduce download time for YouTube videos. ## What we learned🧠: * Working with new languages and tools to build both the frontend and backend of the application. * Learned how data processing works and how to engineer data for feeding it to a language-processing service. ## What's next for Snippit💼: * Optimize our load time and media-creation algorithm. * Extend our UI to interact with the user in even more creative ways!
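The production backend is written in Go, but as a quick illustration of the stitching step, here is a Python sketch that looks words up in a tiny stand-in table and concatenates the clips with ffmpeg; paths, timestamps, and table contents are placeholders for the real 100,000-word database:

```python
# Look up each requested word in a word -> (source clip, start, end) table
# and concatenate the segments with ffmpeg's concat demuxer. Illustrative only.
import subprocess
import tempfile

WORD_DB = {
    "call":  ("speech_clip_03.mp4", 12.40, 12.71),
    "me":    ("speech_clip_07.mp4", 48.02, 48.20),
    "maybe": ("speech_clip_11.mp4", 95.55, 95.98),
}

def build_clip(sentence, out_path="snippit.mp4"):
    segments = []
    for i, word in enumerate(sentence.lower().split()):
        src, start, end = WORD_DB[word]
        seg = f"seg_{i}.mp4"
        # re-encode each segment so they share codecs and can be copy-concatenated
        subprocess.run(["ffmpeg", "-y", "-i", src, "-ss", str(start), "-to", str(end),
                        "-c:v", "libx264", "-c:a", "aac", seg], check=True)
        segments.append(seg)
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as listing:
        listing.write("".join(f"file '{s}'\n" for s in segments))
        list_path = listing.name
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", list_path, "-c", "copy", out_path], check=True)

build_clip("call me maybe")
```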
## Inspiration With COVID-19 pulling us away from loved ones, calling is more prevalent than ever. If you're going to listen to your significant other all day, you better enjoy the sound of their voice! ## What it does Listen to voice recordings in a voice assistant and decide whether to swipe left or right through voice commands. ## How we built it We built the frontend dialogue using Voiceflow while the backend uses Firebase (Storage to store audio files, Hosting for a single page for audio uploads from users and Cloud Functions for an API for random retrieval of an audio file). ## Challenges we ran into We were unfamiliar with some of the technologies and had to learn how to use them quickly. It was also challenging to work remotely at this virtual hackathon. ## Accomplishments that we're proud of Built a fun project at an awesome hackathon! ## What we learned How to communicate and set objectives even when working from home. ## What's next for Voice Tinder The project can be extended to develop a matching algorithm based on swipes.
## Inspiration 💡 With the rise of online learning, a lot of video tutorials are being created for students and learners to gain knowledge. As excellent as the idea is, there is also a constraint: the tutorials may contain long hours of content, and in some cases they are inaccessible to users with disabilities. Seeing this as a real problem in the world today, we built an innovative and creative solution, **Vid2Text**, to address it. It is a web application that provides users with easy access to audio and video text transcription for all types of users. Whether the file is in an audio or a video format, it can always be converted to readable text. ## 🍁About Vid2Text is a web app that allows users to upload audio and video files with ease, and then generates modified and customized audio and video transcriptions. Some of the features it provides are: ### Features * Automatically transcribe audio and video files with high accuracy. * Modified and customized audio and video transcriptions. * Easy keyword search and highlighting through the text. ## How we built it We built our project using Django, a Python web framework that follows an MVC-style architecture for developing full-stack web applications. When the user uploads the video they want to transcribe, a script saves the video via the Django model database and then uploads it to the AssemblyAI server; the response from that step is the *upload\_url*. Finally, we send a POST request, poll with the video transcript ID, and get the video transcript text as the response. We utilized AssemblyAI to match and search the transcript text for keywords. We also created an accessible and pleasant user experience on the client side. ## Challenges we ran into Over the course of the hackathon, we faced some issues with limited time and with integrating AssemblyAI to determine the duration of uploaded videos dynamically. Initially, we were confused about how to do that, but we finally figured it out. ## Accomplishments that we're proud of Finally, after long hours of work, we were able to build and deploy the full web application. The team put in extra effort to make it work. ## What we learned This hackathon gave us the opportunity to learn how to build a Django project while utilizing the AssemblyAI API, and we were able to work together as a team despite being in different timezones. ## What's next for Vid2Text ⏭ For our next steps: we plan to include more features like multi-language transcription and exporting text files as PDFs. We also want to improve the user experience and make the app more accessible.
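For reference, the upload, transcribe, and poll flow that the Django views wrap looks roughly like the sketch below; the file path and API key are placeholders:

```python
# Upload a media file to AssemblyAI, request a transcript, and poll until done.
import time
import requests

API_KEY = "YOUR_ASSEMBLYAI_KEY"
HEADERS = {"authorization": API_KEY}
BASE = "https://api.assemblyai.com/v2"

def transcribe(path):
    # 1) upload the raw media file; the response contains an upload_url
    with open(path, "rb") as f:
        upload_url = requests.post(f"{BASE}/upload", headers=HEADERS, data=f).json()["upload_url"]

    # 2) request a transcript for that upload_url
    job = requests.post(f"{BASE}/transcript", headers=HEADERS,
                        json={"audio_url": upload_url}).json()

    # 3) poll the transcript ID until processing finishes
    while True:
        status = requests.get(f"{BASE}/transcript/{job['id']}", headers=HEADERS).json()
        if status["status"] in ("completed", "error"):
            return status.get("text")
        time.sleep(3)

print(transcribe("lecture_video.mp4"))
```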
losing
## 💡 **Inspiration** The COVID-19 pandemic exposed the weaknesses of the fast fashion industry, revealing its unsustainable practices that left over 70 million garment workers without pay. In response, we developed our virtual try-on app to advocate sustainability and equality in fashion. Our platform enables users to explore their personal style while minimizing waste and production. By offering virtual try-ons, we shift the focus from fast fashion to thoughtful consumption. We advocate diversity by encouraging users to experiment with styles from various cultures, fostering an inclusive fashion community. ## 🔍 **What It Does** FashioNova intelligently recommends clothing based on your input in the web app. You can enter any prompt and virtually try on clothes using your camera. Our technology detects your body and overlays the selected garments onto your image, allowing you to see how they would look on you. FashioNova streamlines your shopping experience, saving you time by eliminating the hassle of trying on clothes and searching for outfits. This presents a fantastic opportunity for fashion companies to upload images of their own clothing, allowing customers to virtually "Try before they buy," ultimately enhancing the shopping experience and boosting sales. ## ⚙️ **How we built it** ***Frontend:*** We first made our logo with Adobe Express. Then, we created the client side of our web app using React.js and JSX based on a high-fidelity prototype we created using Figma. ***Backend:*** Our backend was programmed in Python with Flask, where it contains the WebRTC video component for the computer vision and VR/AR tech components of the virtual trying on clothes. As well, it has the Cloudflare AI API logic that allows specific clothing recommendations from the existing wardrobe based on the user's prompt. ## 🚧 **Challenges we ran into** We had some issues regarding the WebSocket logic (Socketio server) which caused a massive lag in the real-time video footage with the body node displays and virtual clothes. To avoid this error, we resorted to just displaying the current computer's webcam. We also had trouble navigating the CloudFlare AI API to choose clothing options based on the desired prompt and the machine learning classification dataset we created, but after some hard perseverance, we were able to implement the feature. ## ✔️ **Accomplishments that we're proud of** We are very proud of our ability to create a working video recognition with a body node detection system that allowed us to creatively add the virtual clothes fitted on the user. Creating and training a machine learning model for classification and labelling clothing data was a first for all of us. Also integrating CloudFlare Ai for the first time and integrating it for creative use felt very achievable! Plus, creating a unique idea and having functional backend features and a very artsy front-end design was very rewarding!!! ## 📚 **What we learned** As a team, we shared valuable knowledge and unique experiences, learning from one another throughout the process. We selected video recognition technology for our project, despite it being new to most of us. Although we encountered numerous bugs along the way, we worked together to overcome them, strengthening our collaboration and problem-solving skills. ## 🔭 **What's next for us!** 1. *Expanded Try-Ons:* We plan to introduce a wider variety of clothing styles and add accessories like shoes, hats, sunglasses, necklaces, and scarves for a more complete virtual experience. 2. 
*Enhanced Interactivity:* We aim to incorporate a location feature that displays the weather forecast and suggests appropriate clothing based on current conditions, and to add a 3D rendering/design option to give the clothes a more realistic look! 3. *Relevant Datasets:* We aim to enhance our model's ability to recognize and understand various clothing styles by utilizing datasets that feature a broader and more diverse range of apparel. 4. *Marketing and future:* We hope to partner with clothing companies to add a shopping feature within the app. Clothing stores that want their clothes to be available for virtual try-on will be added to our shopping catalogue, and users can try out clothes without even going to the store. If they like a piece, they will be directed to the company's website link for that product. 5. *Diversity Implementations:* Our web app aims to help all kinds of people. We are excited to continue working on this project to add helpful features for people with disabilities or injuries, people living busy lives, and people who want to upgrade their fashion through a cultural and futuristic approach. We hope to make shopping for clothes easier by reducing physical strain and time spent shopping, making it more convenient for our future clients. Join us to help improve the future of virtual fashion! Invest in **FASHIONOVA !!!**
## Inspiration At the moment, the fashion industry is a mess. In terms of sustainability, 20% of global wastewater comes from textile dyeing, according to the UN Environment Programme. Furthermore, 85% of textiles are thrown away or dumped into landfills, including completely unused clothing. One major concern is that 60% of clothing cannot even be recycled. Another major issue is the cost of remaining sustainable: for the most part, buying sustainable clothes means paying a premium, which puts it out of reach for most people. Sustainable clothing is not affordable. On top of that, fast fashion's extremely fast turnaround times drive up costs, further reinforcing unsustainable fashion. Getting cheap sustainable clothing is hard by itself; forget trendy clothing, given current fast fashion. Considering all of these issues, we created a sustainable solution: Fashion Flipped, or F^2. Our product is based on the concept of upcycling existing clothing to match up-and-coming trends, while keeping costs low with reused clothing. A network of vendors is invited, on an application basis, to present their creative portfolios; they then design new clothing for consumers. On the consumer end, a subscription fee is charged, and following a quick survey, our algorithm determines the user's preferences. Based on our inventory of vendor-supplied upcycled clothing, we present the user each month with a customized box of outfits depending on the selected tier level. At the end of the month, the clothing is returned and a new box of outfits is provided. ## What it does For our UI, we use the MUI library to help with UI and component creation. For authentication, you can press the log-in button to trigger the popup to sign in with Auth0. Following your login, you will see a new user navigation at the top. You can start by going to your profile and selecting some pieces of clothing that you like. After completing this quiz, the algorithm will have calculated and stored your preferences in Convex. The algorithm uses OpenAI's CLIP model on test data that stands in for real design-preference decisions. After each profile question is answered, the data is stored in a Convex profile and later rendered for customization. Using the average score, the customization algorithm indexes the database for the closest scores. For Stripe, using secure form fields, the UI passes a payment token along with the user to the Vercel server, which then talks directly to Stripe for the transaction. At the moment, we are facing some Stripe deployment issues on the production website, so please check out the live demo during judging if you are interested. ## How we built it FashionFlipped is built using Convex, a serverless backend, Auth0 for authentication, and a React.js frontend. The dataset of clothing items was prefilled using OpenAI's CLIP machine learning model on test data, through a process that mimics the way real designs would be input. From there, we use custom Convex functions to build a profile of the user's style by averaging the preferred styles selected during onboarding. Another set of Convex functions calculates the cosine similarity between the user's style profile and the other clothing items to pick out an assortment of clothes users are comfortable with while introducing new styles.
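The real profile and similarity functions run inside Convex, but the core of the recommendation step can be sketched in a few lines of Python; the item names and vectors below are tiny stand-ins for the stored CLIP embeddings:

```python
# Average the embeddings a user liked during onboarding into a style profile,
# then rank the catalogue by cosine similarity. Illustrative toy values.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

catalogue = {                      # item -> feature vector (placeholder values)
    "denim_jacket_upcycle": np.array([0.9, 0.1, 0.3]),
    "patchwork_dress":      np.array([0.2, 0.8, 0.5]),
    "cropped_flannel":      np.array([0.7, 0.3, 0.2]),
}

liked_during_onboarding = [np.array([0.8, 0.2, 0.3]), np.array([0.6, 0.4, 0.1])]
profile = np.mean(liked_during_onboarding, axis=0)   # the user's style profile

ranked = sorted(catalogue.items(), key=lambda kv: cosine(profile, kv[1]), reverse=True)
for item, vec in ranked:
    print(f"{item}: similarity {cosine(profile, vec):.3f}")
```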
**Primary Features** * Auth0 authentication * Convex (serverless backend), populated with users * Our clothing-customization ML model/algorithm, populating average rating results in Convex * Stripe API for payment transactions * Vendor registration, populating users in Convex * React Router for passing through screens and sending data between screens ## Challenges we ran into After experimenting with larger datasets, we discovered that Convex has a strict memory limit. The combination of intense runtime calculations and large dataset rows due to the CLIP features meant we easily hit the limit. This prevented us from demoing the larger dataset but pushed us to build more efficient operations by sharing the computation between the client and server. Another challenge we encountered was working with a new technology: trying out Convex meant we did not have nearly as much experience with it as with other products. The docs were recently created, so our unfamiliarity pushed us into a lot of learning on the platform. Convex was unfamiliar, and serverless backends in general were a new concept we had to learn rapidly in order to implement the technology. Another major issue was deploying the project to Vercel. Due to GitHub ownership issues, unfamiliarity with deploying a serverless backend to production, and server-management issues, the Vercel deployment had some challenges. Declaring environment variables and reassigning the deployment command in Vercel helped solve the issues we were having with Convex, and we were able to make it work. ## Accomplishments that we're proud of Our recommendation engine is exceptional. It learns your style preferences with very little input and allows us to categorize a large selection of changing styles, creating a seamless user experience. By utilizing cutting-edge technology, we are able to achieve high accuracy and adaptability, making our platform a great choice for anyone. * Convex (serverless backend), populated with users * Our clothing-customization ML model/algorithm, populating average rating results in Convex * Stripe API for payment transactions * Vendor registration * Vercel deployment for the live application * Passing data between routes with React Router * Responsive, dynamic UI ## What we learned We learned how to integrate new services such as a serverless backend like Convex. We also learned how to integrate a recommendation algorithm using OpenAI and Python into React, and what considerations we had to take into account to deploy to a live domain. All of us also learned a ton about UI. Developing an efficient UI with so many layers overwriting each other became troublesome, especially when importing different components, and we did a lot of troubleshooting to center certain items or apply the proper style. After so much practice with UI across the board, development will likely be much faster going forward. ## What's next for FashionFlipped We plan to add a feedback form to feed more user input into our algorithm for more accurate customization. On top of that, we plan to create a vendor-side platform to view orders and manage the styles requested by the admin. Creating an admin portion of the platform will also become important for managing the trends and inventory being created by vendors. We hope to develop this into a business, and to learn the logistical and technical aspects of running a business with a more sophisticated supply chain.
## Inspiration Globally, over 92 million tons of textile waste are generated annually, contributing to overflowing landfills and environmental degradation. What's more, the fashion industry is responsible for 10% of global carbon emissions, with fast fashion being a significant contributor due to its rapid production cycles and disposal of unsold items. The inspiration behind our project, ReStyle, is rooted in the urgent need to address the environmental impact of fast fashion. Witnessing the alarming levels of clothing waste and carbon emissions prompted our team to develop a solution that empowers individuals to make sustainable choices effortlessly. We believe in reshaping the future of fashion by promoting a circular economy and encouraging responsible consumer behaviour. ## What it does ReStyle is a revolutionary platform that leverages AI matching to transform how people buy and sell pre-loved clothing items. The platform simplifies the selling process for users, incentivizing them to resell rather than contribute to the environmental crisis of clothing ending up in landfills. Our advanced AI matching algorithm analyzes user preferences, creating tailored recommendations for buyers and ensuring a seamless connection between sellers and buyers. ## How we built it We used React Native and Expo to build the front end, creating different screens and components for the clothing matching, camera, and user profile functionality. The backend functionality was made possible using Firebase and the OpenAI API. Each user's style preferences are saved in a Firebase Realtime Database, as are the style descriptions for each piece of clothing, and when a user takes a picture of a piece of clothing, the OpenAI API is called to generate a description for that piece of clothing, and this description is saved to the DB. When the user is on the home page, they will see the top pieces of clothing that match with their style, retrieved from the DB and the matches generated using the OpenAI API. ## Challenges we ran into * Our entire team was new to the technologies we utilized. * This included React Native, Expo, Firebase, OpenAI. ## Accomplishments that we're proud of * Efficient and even work distribution between all team members * A visually aesthetic and accurate and working application! ## What we learned * React Native * Expo * Firebase * OpenAI ## What's next for ReStyle Continuously refine our AI matching algorithm, incorporating machine learning advancements to provide even more accurate and personalized recommendations for users, enabling users to save clothing that they are interested in.
losing
## Inspiration We got our inspiration from the countless calorie-tracking apps. First of all, there isn't a single website we could find that tracked calories; there are a ton of apps, but not one website. Secondly, none of them offered built-in recipes. On our website, the user can search for food items and look at their recipes directly. Lastly, our nutrition analysis handles any food item you've ever heard of. ## What it does Add the food you eat in a day, track your calories, fat percentage, and other nutrients, search recipes, and get DETAILED info about any food item or recipe. ## How we built it HTML, min.css, min.js, and JS; we were planning on using DeSo/Auth0 for login but couldn't due to time constraints. ## Challenges we ran into We initially used React, but couldn't build the full app with it since we used static HTML to interact with the food APIs. We also had a separate recipe-finder app, which we removed because it was React-only. Integrating the BotDoc API was a MAJOR challenge, since we had no prior experience and basically had no idea what we were doing. A suggestion to the BotDoc team would be to add demo apps to their documentation/tutorials, since currently there's literally nothing available except the documentation; the API is also quite unheard of as of now. ## Accomplishments that we're proud of Getting the website working, and getting it up and running with a GitHub Pages deployment. ## What we learned A LOT about BotDoc, and we refreshed our knowledge of HTML, CSS, and JS. ## What's next for Foodify Improving the CSS first, lol; right now it's REALLY REALLY BAD.
## 💡Inspiration * 2020 US Census survey showed that adults were 3x more likely to screen positive for depression or anxiety in 2020 vs 2019 * A 2019 review of 18 papers summarized that wearable data could help identify depression, and coupled with behavioral therapy can help improve mental health * 1 in 5 americans owns wearables now, and this adoption is projected to grow 18% every year * Pattrn aims to turn activity and mood data into actionable insights for better mental health. ## 🤔 What it does * Digests activity monitor data and produces bullet point actionable summary on health status * Allows users to set goals on health metrics, and provide daily, weekly, month review against goals * Based on user mood rating and memo entry, deduce activities that correlates with good and bad days [![Screen-Shot-2022-10-16-at-1-09-40-PM.jpg](https://i.postimg.cc/MZhjdqRw/Screen-Shot-2022-10-16-at-1-09-40-PM.jpg)](https://postimg.cc/bd9JvX3V) [![Fire-Shot-Capture-060-Pattrn-localhost.png](https://i.postimg.cc/zBQpx6wQ/Fire-Shot-Capture-060-Pattrn-localhost.png)](https://postimg.cc/bDQQJ6B0) ## 🦾 How we built it * Frontend: ReactJS * Backend: Flask, Google Cloud App Engine, Intersystems FHIR, Cockroach Labs DB, Cohere ## 👨🏻‍🤝‍👨🏽 Challenges / Accomplishments * Ideating and validating took up a big chunk of this 24 hour hack * Continuous integration and deployment, and Github collaboration for 4 developers in this short hack * Each team member pushing ourselves to try something we have never tried before ## 🛠 Hack for Health * Pattrn currently is able to summarize actionable steps for users to take towards a healthy lifestyle * Apart from health goal setting and reviewing, pattrn also analyses what activities have historically correlated with "good" and "bad" days ## 🛠 Intersystems Tech Prize * We paginated a GET and POST request * Generated synthetic data and pushed it in 2 different time resolution (Date, Minutes) * Endpoints used: Patient, Observation, Goals, Allergy Intolerance * Optimized API calls in pushing payloads through bundle request ## 🛠 Cockroach Labs Tech Prize * Spawned a serverless Cockroach Lab instance * Saved user credentials * Stored key mapping for FHIR user base * Stored sentiment data from user daily text input ## 🛠 Most Creative Use of GitHub * Implemented CICD, protected master branch, pull request checks ## 🛠 Cohere Prize * Used sentiment analysis toolkit to parse user text input, model human languages and classify sentiments with timestamp related to user text input * Framework designed to implement a continuous learning pipeline for the future ## 🛠 Google Cloud Prize * App Engine to host the React app and Flask observer and linked to Compute Engine * Hosted Cockroach Lab virtual machine ## What's next for Pattrn * Continue working on improving sentiment analysis on user’s health journal entry * Better understand pattern between user health metrics and daily activities and events * Provide personalized recommendations on steps to improve mental health * Provide real time feedback eg. haptic when stressful episode are predicted Temporary login credentials: Username: [norcal2@hacks.edu](mailto:norcal2@hacks.edu) Password: norcal
## Inspiration As we brainstormed areas we could work in for our project, we began to look for inconveniences in each of our lives that we could tackle. One of our teammates unfortunately has a lot of dietary restrictions due to allergies, and as we watched him digging through organizers to check ingredients and straining to read the microscopic text on processed foods' packaging, we realized that this was an everyday issue we could help resolve, and that it is not limited to just our teammate. Thus, we sought to make his and others' lives easier and simplify the way they check for allergens. ## What it does Our project scans food items' ingredients lists and identifies allergens within them to ensure that a given food item is safe for consumption, all wrapped in a user-friendly web app. ## How we built it We divided responsibilities and made sure each of us was on the same page when completing our individual parts. Some of us worked on the backend, initializing databases and creating the script to process camera input, and some of us worked on frontend development, striving to create an easy-to-navigate platform. ## Challenges we ran into One major challenge we ran into was time management. As programmers newer to hackathons, the pace of project development was a bit of a shock going into the work. Additionally, there were various incompatibilities between the software packages we used, causing a variety of setbacks that ultimately led to most of the issues with the final product. ## Accomplishments that we're proud of We are very proud that the tool is functional. Even though the product is certainly far from what we wanted to end up with, we are happy that we were able to at least approach a state of completion. ## What we learned In the end, our project was part of a grander learning experience for each of us. The stress of completing all the intended functionality and the difficulty of working under tiring conditions challenged us all, and from those challenges we learned strategies to mitigate such obstacles in the future. ## What's next for foodsense We hope to finally complete the web app the way we originally intended. A big regret was that we were not able to execute our plan as we originally meant to, so further development is definitely in the future of the website.
winning
## Inspiration With a prior interest in crypto and DeFi, we were attracted to Uniswap V3's simple yet brilliant automated market maker. The white papers were tantalizing and we had several eureka moments when poring over them. However, we realized that the concepts were beyond the reach of most casual users who would be interested in using Uniswap. Consequently, we decided to build an algorithm that allowed Uniswap users to take a more hands-on, less theoretical approach to understanding the nuances of the marketplace while mitigating risk, so they would be better suited to make decisions that aligned with their financial goals. ## What it does This project is intended to help new Uniswap users understand the novel processes that the financial protocol (Uniswap) operates upon, specifically with regards to its automated market maker. Taking as input a hypothetical liquidity mining position in a liquidity pool of the user's choice, our predictive model uses past transactions within that liquidity pool to project the performance of the specified liquidity mining position over time - thus allowing Uniswap users to make better-informed decisions regarding which liquidity pools, currencies, and quantities to invest in. ## How we built it We divided the complete task into four main subproblems: the simulation model and the rest of the backend, an intuitive UI with a frontend that emulated Uniswap's, the graphic design, and - most importantly - successfully integrating these three elements together. Each of these tasks took the entirety of the contest window to complete to a degree we were satisfied with given the time constraints. ## Challenges we ran into and accomplishments we're proud of Connecting all the different libraries, frameworks, and languages we used was by far the biggest and most frequent challenge we faced. This included running Python and NumPy through AWS, calling AWS with React and Node.js, making GraphQL queries to Uniswap V3's API, among many other tasks. Of course, re-implementing many of the key features Uniswap runs on to better our simulation was another major hurdle and took several hours of debugging. We had to return to the drawing board countless times to ensure we were correctly emulating the automated market maker as closely as possible. Another difficult task was making our UI as easy to use as possible for users. Notably, this meant correcting the inputs since there are many constraints for what position a user may actually take in a liquidity pool. Ultimately, in spite of the many technical hurdles, we are proud of what we have accomplished and believe our product is ready to be released pending a few final touches. ## What we learned Every aspect of this project introduced us to new concepts, or new implementations of concepts we had picked up previously. While we had dealt with similar subtasks in the past, this was our first time building something of this scope from the ground up.
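For context, pulling past pool transactions for a simulation like this would typically go through a GraphQL query against the Uniswap V3 subgraph. The sketch below is a hedged illustration: the subgraph URL and field names follow the public schema but may need adjusting, and the pool address is a placeholder to be replaced with a real pool's contract address.

```python
import requests

# The Graph's hosted Uniswap V3 subgraph -- URL and field names are assumptions
# based on the public schema, not taken from this project's code.
SUBGRAPH_URL = "https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v3"

QUERY = """
query RecentSwaps($pool: String!, $n: Int!) {
  swaps(first: $n, orderBy: timestamp, orderDirection: desc,
        where: { pool: $pool }) {
    timestamp
    amount0
    amount1
    amountUSD
  }
}
"""

def fetch_recent_swaps(pool_address: str, n: int = 100):
    resp = requests.post(SUBGRAPH_URL, json={
        "query": QUERY,
        "variables": {"pool": pool_address, "n": n},
    })
    resp.raise_for_status()
    return resp.json()["data"]["swaps"]

# Replace with the contract address of the pool you want to simulate.
swaps = fetch_recent_swaps("0xPOOL_ADDRESS_HERE")
print(f"Fetched {len(swaps)} swaps for the simulation")
```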
## What it does We made 2 front ends to demonstrate the capabilities of alexaMD, which can extrapolate input from users to determine the likelihood of various diseases with confidence scores. ## How we built it By scraping Mayo Clinic, a comprehensive medical database, we were able to compile information associated with illnesses and their characteristics. Using Watson's Natural Language Classifier suite, we integrated its natural language processing capabilities with Alexa's clear voice input to provide a seamless way to deliver medical diagnoses. ## Challenges we ran into Extracting data from Mayo Clinic's 400+ articles and integrating it with IBM Watson and AWS Lambda. ## What we learned Various techniques for efficiently processing large amounts of data, and learning all the APIs needed. ## What's next for alexaMD Scaling to extrapolate information from new research papers and modifying it to provide cures/remedies for possible illnesses.
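For illustration, an Alexa skill like this is usually backed by a Lambda handler that returns a spoken response. The sketch below shows only the handler shape; the `classify_symptoms()` helper is hypothetical and stands in for the Watson Natural Language Classifier call described above.

```python
# Minimal AWS Lambda handler for an Alexa custom skill (sketch, not the project's code).

def classify_symptoms(utterance: str):
    """Placeholder for the Watson NLC call; returns (condition, confidence)."""
    return "common cold", 0.72

def lambda_handler(event, context):
    # Pull the symptom text from a hypothetical "Symptoms" slot on the intent.
    slots = event["request"].get("intent", {}).get("slots", {})
    utterance = slots.get("Symptoms", {}).get("value", "")
    condition, confidence = classify_symptoms(utterance)
    speech = (f"Based on your symptoms, the most likely condition is {condition}, "
              f"with {confidence:.0%} confidence. This is not a medical diagnosis.")
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```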
## Inspiration We have two bitcoin and cryptocurrency enthusiasts on our team, and only one of us made money during the peak earlier this month. Cryptocurrencies are just too volatile, and their value depends too much on how the public feels about them. How people think and talk about a cryptocurrency affects its price to a large extent, unlike stocks, which also have the support of the market, shareholders and the company itself. ## What it does Our website scrapes thousands of social media posts and news articles to get information about the requested cryptocurrency. We then analyse it using NLP and ML and determine whether the price is likely to go up or down in the very near future. We also display the current price graphs, social media and news trends (whether they are positive, neutral or negative) and the popularity ranking of the selected currency on social platforms. ## How I built it The website is mostly built using Node.js and Bootstrap. We use Chart.js for a lot of our web illustrations, as well as Python for web scraping, sentiment analysis and text processing. NLTK and the Google Cloud Natural Language API were especially useful for this. We also stored our database on Firebase. **Google Cloud**: We used Firebase to efficiently store and manage our database, and the Google Cloud Natural Language API to perform sentiment analysis on hundreds of social media posts efficiently. ## Challenges I ran into It was especially hard to create, store and process the large datasets we made consisting of social media posts and news articles. Even though we only needed data from the past few weeks, it was a lot since so many people post online. Getting relevant data, free of spam and repeated posts, and actually getting useful information out of it was hard. ## Accomplishments that I'm proud of We are really proud that we were able to connect multiple streams of data, analyse them and display all relevant information. It was amazing to see when our results matched the past peaks and crashes in bitcoin price. ## What I learned We learned how to scrape relevant data from the web, clean it and perform sentiment analysis on it to make predictions about future prices. Most of this was new to our team members and we definitely learned a lot. ## What's next We hope to further increase the functionality of our website. We want users to have an option to give the website permission to automatically buy and sell cryptocurrencies when it determines it is the best time to do so. ## Domain name We bought the domain name get-crypto-insights.online for the best domain name challenge since it is relevant to our project. If I found a website of this name on the internet, I would definitely visit it to improve my cryptocurrency trading experience. ## About Us We are Discord team #1, with @uditk, @soulkks, @kilobigeye and @rakshaa
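As a small illustration of the sentiment step described above, here is a hedged sketch of scoring scraped posts with the Google Cloud Natural Language API. The post texts are made up, and authentication is assumed to come from the usual GOOGLE_APPLICATION_CREDENTIALS environment variable.

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def sentiment_score(text: str) -> float:
    """Score one post; roughly -1 (negative) to +1 (positive)."""
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    result = client.analyze_sentiment(request={"document": document})
    return result.document_sentiment.score

# Illustrative scraped posts, not real data from the project.
posts = [
    "Bitcoin is going to the moon, best week in months!",
    "Another exchange hack... I'm selling everything.",
]
scores = [sentiment_score(p) for p in posts]
print(f"Average sentiment: {sum(scores) / len(scores):+.2f}")
```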
winning
## Inspiration Recognizing the disastrous effects of the auto industry on the environment, our team wanted to find a way to help the average consumer mitigate the effects of automobiles on global climate change. We felt that there was an untapped potential to create a tool that helps people visualize cars' eco-friendliness, and also helps them pick a vehicle that is right for them. ## What it does CarChart is an eco-focused consumer tool which is designed to allow a consumer to make an informed decision when it comes to purchasing a car. However, this tool is also designed to measure the environmental impact that a consumer would incur as a result of purchasing a vehicle. With this tool, a customer can make an auto purchase that works for both them and the environment. This tool allows you to search by any combination of ranges, including year, price, seats, engine power, CO2 emissions, body type, and fuel type. In addition to this, it provides a nice visualization so that the consumer can compare the pros and cons of two different variables on a graph. ## How we built it We started out by web scraping to gather and sanitize all of the datapoints needed for our visualization. This scraping was done in Python and we stored our data in a Google Cloud-hosted MySQL database. Our web app is built on the Django web framework, with JavaScript and P5.js (along with CSS) powering the graphics. The Django site is also hosted in Google Cloud. ## Challenges we ran into Collectively, the team ran into many problems throughout the weekend. Finding and scraping data proved to be much more difficult than expected since we could not find an appropriate API for our needs, and it took an extremely long time to correctly sanitize and save all of the data in our database, which also led to problems along the way. Another large issue that we ran into was getting our App Engine to talk with our own database. Unfortunately, since our database requires a white-listed IP, and we were using Google's App Engine (which does not allow static IPs), we spent a lot of time with the Google Cloud engineers debugging our code. The last challenge that we ran into was getting our front-end to play nicely with our backend code. ## Accomplishments that we're proud of We're proud of the fact that we were able to host a comprehensive database on the Google Cloud platform, in spite of the fact that no one in our group had Google Cloud experience. We are also proud of the fact that we were able to accomplish 90+% of the goal we set out to achieve without the use of any APIs. ## What we learned Our collaboration on this project necessitated a comprehensive review of git and the shared pain of having to integrate many moving parts into the same project. We learned how to utilize Google's App Engine and Google's MySQL server. ## What's next for CarChart We would like to expand the front-end to have even more functionality. Some of the features that we would like to include would be: * Letting users pick lists of cars that they are interested in and compare them * Displaying each datapoint with an image of the car * Adding even more dimensions that the user is allowed to search by ## Check the Project out here!! <https://pennapps-xx-252216.appspot.com/>
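To show how the range search described above might map onto the Django ORM, here is a hedged sketch. The `Car` model and its field names are hypothetical, not CarChart's actual schema.

```python
# Sketch of a Django model and query inside an app's models.py / views.py.
from django.db import models

class Car(models.Model):
    year = models.IntegerField()
    price = models.IntegerField()
    seats = models.IntegerField()
    horsepower = models.IntegerField()
    co2_grams_per_km = models.FloatField()
    body_type = models.CharField(max_length=32)
    fuel_type = models.CharField(max_length=32)

def search_cars(min_year, max_year, max_price, max_co2, fuel_type=None):
    """Filter cars by the ranges chosen in the UI, greenest first."""
    qs = Car.objects.filter(
        year__range=(min_year, max_year),
        price__lte=max_price,
        co2_grams_per_km__lte=max_co2,
    )
    if fuel_type:
        qs = qs.filter(fuel_type__iexact=fuel_type)
    return qs.order_by("co2_grams_per_km")
```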
## Inspiration Over the course of the past year, one of the industries most heavily impacted by the COVID-19 pandemic has been the service sector. Specifically, COVID-19 has transformed the financial viability of restaurant models. Moving forward, it is projected that 36,000 small restaurants will not survive the winter, as successful restaurants have thus far relied on online dining services such as Grubhub or DoorDash. However, these methods come at the cost of flat premiums on every sale, driving up food prices and cutting at least 20% from a given restaurant's revenue. Within these platforms, the most popular, established restaurants are prioritized due to built-in search algorithms. As such, not all small restaurants can join these otherwise expensive options, and there is no meaningful way for small restaurants to survive during COVID. ## What it does Potluck provides a platform for chefs to conveniently advertise their services to customers, who will likewise be able to easily find nearby places to get their favorite foods. Chefs are able to upload information about their restaurant, such as their menus and locations, which is stored in Potluck's encrypted database. Customers are presented with a personalized dashboard containing a list of ten nearby restaurants, which is generated using an algorithm that factors in the customer's preferences and sentiment analysis of previous customers. There is also a search function which will allow customers to find additional restaurants that they may enjoy. ## How I built it We built a web app with Flask where users can feed in data for a specific location, cuisine of food, and restaurant-related tags. Based on this input, restaurants in our database are filtered and ranked based on the distance to the given user location, calculated using the Google Maps API, and a sentiment score based on any comments on the restaurant, calculated using the Natural Language Toolkit (NLTK) and Google Cloud NLP. Within the page, consumers can provide comments on their dining experience with a certain restaurant and chefs can add information for their restaurant, including cuisine, menu items, location, and contact information. Data is stored in a PostgreSQL-based database on Google Cloud. ## Challenges I ran into One of the challenges that we faced was coming up with a solution that matched the timeframe and bandwidth of our team. We did not want to be too ambitious with our ideas and technology, yet we still wanted to provide a product that we felt was novel and meaningful. We also found it difficult to integrate the backend with the frontend. For example, we needed the results from the Natural Language Toolkit (NLTK) in the backend to be used by the Google Maps JavaScript API in the frontend. By utilizing Jinja templates, we were able to serve the webpage and modify its script code based on the backend results from NLTK. ## Accomplishments that I'm proud of We were able to identify a problem that was not only very meaningful to us and our community, but also one that we had a reasonable chance of approaching with our experience and tools. Not only did we get our functions and app to work very smoothly, we ended up with time to create a very pleasant user experience and UI. We believe that how comfortable the user is when using the app is equally as important as how sophisticated the technology is. Additionally, we were happy that we were able to tie our product into many meaningful ideas on community and small businesses, which we believe are very important in the current times.
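As a simplified illustration of the ranking idea described under How I built it, the sketch below combines distance to the user with an average comment-sentiment score. The weights, field names, and sample restaurants are made up, not Potluck's actual algorithm or data.

```python
def rank_restaurants(restaurants, w_distance=0.6, w_sentiment=0.4, top_n=10):
    """Rank restaurants by a weighted blend of proximity and comment sentiment."""
    def score(r):
        proximity = 1.0 / (1.0 + r["distance_km"])     # closer -> higher
        sentiment = (r["avg_sentiment"] + 1.0) / 2.0    # map [-1, 1] -> [0, 1]
        return w_distance * proximity + w_sentiment * sentiment
    return sorted(restaurants, key=score, reverse=True)[:top_n]

sample = [
    {"name": "Lena's Tacos", "distance_km": 1.2, "avg_sentiment": 0.8},
    {"name": "Pho Corner", "distance_km": 0.4, "avg_sentiment": 0.1},
    {"name": "Curry House", "distance_km": 3.5, "avg_sentiment": 0.9},
]
for r in rank_restaurants(sample):
    print(r["name"])
```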
## What I learned Tools we tried for the first time: Flask (with the additional challenge of running HTTPS), Jinja templates for dynamic HTML code, Google Cloud products (including the Google Maps JS API), and PostgreSQL. For many of us, this was our first experience with a group technical project, and it was very instructive to find ways to best communicate and collaborate, especially in this virtual setting. We benefited from each other's experiences and were able to learn when to use certain ML algorithms or how to make a dynamic frontend. ## What's next for Potluck We want to incorporate an account system to make user-specific recommendations (Firebase). Additionally, regarding our Google Maps interface, we would like to have dynamic location identification. Furthermore, the capacity of our platform could help us expand the program to pair people with any type of service, not just food. We believe that the flexibility of our app could be used for other ideas as well.
## Inspiration Nerf gun meets 2016 - with the use of facial recognition, the Nerf gun is able to spot a face and shoot at it. ## What it does Sends a signal to the Particle Photon and triggers the rotation of the motor. The motor is then able to pull on a string which is attached to the trigger of the gun, at which point the Nerf ball is released and the shot is fired! ## Challenges we ran into Our lack of hardware knowledge was one of the challenges we needed to overcome throughout this project. We had specific issues with the DC motors & servos, as well as becoming familiar with the Particle Photon. One of the main challenges we encountered was that the servo was not strong enough to pull the trigger of the Nerf gun. We needed to open up the gun and loosen the trigger, so that it did not require as much strength to fire. ## How we built it ## Accomplishments that we are proud of ## What we learned ## What's next for Head Shot Automating the reload and adding the second motor to allow manual rotation of the gun.
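For illustration, the face-spotting side of a setup like this could look like the sketch below, using OpenCV's bundled Haar cascade. The `trigger_photon()` helper is a placeholder for whatever call actually signals the Particle Photon (for example a request to its cloud function API); it is not the project's real code.

```python
import cv2

def trigger_photon():
    """Placeholder for the signal sent to the Particle Photon."""
    print("Face detected -- firing!")

# The frontal-face Haar cascade ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cam = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            trigger_photon()
finally:
    cam.release()
```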
winning
## Inspiration Amazon is the world's largest e-commerce company, and one way they've achieved this is through their ability to deliver to customers much more efficiently, getting products onto doorsteps much faster. They're able to do this because of their ability to store commonly bought products in their massive warehouses. Smaller businesses don't have the same luxury of being able to store many products in massive warehouses. That's where **RegionSell** comes in! With RegionSell, all sorts of businesses can begin to predict where and when sales of certain items might increase, all through an easy-to-use Chrome extension! This can help e-commerce developers move certain items to the top of recommendations on a storefront based on a user's location and can assist e-commerce brands as a whole to get a better idea of what sells better where. Further, couriers can keep commonly bought items on their route, allowing for even faster delivery for certain lucky customers. ## What it does RegionSell provides a Google Chrome extension that works by displaying the top locations where any item on any storefront is especially popular at the moment. It does this by calling a custom Cohere classification model trained on a dataset of over 55,000 e-commerce transactions that returns the top regions above a certain confidence level. Further, RegionSell provides **live** updates and feedback by adding more examples to the data that the custom Cohere model is based on whenever a customer buys an item, logging the item, the date and time, as well as the location where the user bought the item. This means that whenever a developer or store owner checks to see which items are popular in a location, the information is current. ## How I built it I built the frontend Chrome extension using React with TypeScript, and Bootstrap 5 for the styling. The Chrome API was used to get the information from the page and URL. I built the backend web API using Flask with Python. This backend used the Cohere Python SDK's /classify endpoint, which used the custom model that I trained using a dataset from Kaggle. This API would communicate with the frontend and provide information on regional rankings. Further, when an item was added to the cart, it would be added to the new examples via an endpoint on this API. Data cleaning for the datasets was done with Python as well. The backend was hosted on Azure Web Apps, and for this I used GitHub Actions to set up a Continuous Deployment pipeline that would build and deploy the API whenever changes were made to it. Other GitHub Actions were also used, including Pylint to enforce coding standards and CodeQL to ensure code security, as e-commerce data security can be incredibly important and any compromise can lead to massive losses for brands. In addition to all this, a small test storefront was created for demo purposes to showcase how this tool works on any potential e-commerce storefront. ## Challenges I ran into Since it was my first time setting up CI/CD pipelines, especially using **GitHub Actions**, I had trouble with ensuring that I was following standards and avoiding security bugs on the CI side. I also had trouble with Azure Web Apps and connecting this to the GitHub Action to allow for Continuous Deployment of the API. Eventually I was able to figure out how the app could be configured on Azure's portal to connect to GitHub Actions and properly deploy the web app.
Further, I had never made a Google Chrome extension before, so it was quite a learning curve for me to figure out how the extension could read a page's contents and URL to use this to communicate with the Flask API. ## Accomplishments that I'm proud of I'm proud of the GitHub repository I have created, specifically with respect to not only the code that is on it, but also the various configurations I have made to allow for a more secure and standardized open-source repository. By adding the GitHub Actions for Continuous Integration, I can ensure that others contributing to the repository will have to follow the same standards and make it harder for bad actors to hurt the product. I'm also proud of my ability to create what I did within the time constraints, including all 3 applications and configuring various GitHub Actions, which I had never done before. ## What I learned I learned about the importance of a well constructed open-source repository, and how CI/CD platforms can make development so much easier by allowing bugs to be found, corrected, and put into production much easier. I also learned a great amount about how JS scripts work for Google Chrome extensions, including the differences with those for regular web apps. ## What's next for RegionSell A next step for RegionSell would be to further generalize the product so that it could not only work for countries, but possibly even certain cities or streets, depending on the size and capability of a business that uses it.
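As an illustration of the classification call RegionSell's backend makes, here is a hedged sketch using the older `co.classify()` interface of the Cohere Python SDK. The API key, custom model id, and minimum-confidence threshold are placeholders, and the exact response attributes may differ between SDK versions.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def predict_region(item_name: str):
    """Ask the custom classifier which region an item is popular in right now."""
    response = co.classify(
        model="custom-region-model-id",  # placeholder for the fine-tuned model id
        inputs=[item_name],
    )
    c = response.classifications[0]
    # The full response also carries per-label confidences, which is what a
    # "top regions above a threshold" view would read from.
    return c.prediction, c.confidence

region, confidence = predict_region("wireless earbuds")
print(f"Most likely region: {region} ({confidence:.0%})")
```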
## Inspiration Because of COVID-19 and the holiday season, we feel increasingly guilty about the carbon footprint caused by our online shopping. This is not a coincidence: Amazon alone contributed over 55.17 million tonnes of CO2 in 2019, the equivalent of 13 coal power plants. We have seen many carbon footprint calculators that aim to measure individual carbon pollution. However, a raw mass of carbon emissions is too abstract and has little meaning to average consumers. After calculating footprints, we would feel guilty about the carbon consumption caused by our lifestyles, and maybe, maybe donate once to offset the guilt inside us. The problem is, climate change cannot be eliminated by a single contribution because it's a continuous process, so we thought to gamify the carbon footprint to cultivate engagement, encourage donations, and raise awareness over the long term. ## What it does We built a Google Chrome extension to track the user's Amazon purchases and determine the carbon footprint of each product using all available variables scraped from the page, including product type, weight, distance, and shipping options, in real time. We set up Google Firebase to store users' account information and purchase history and created a gaming system to track user progression, achievements, and pet status in the backend. ## How we built it We created the front end using React.js, developed our web scraper using JavaScript to extract Amazon information, and used Netlify for deploying the website. We developed the back end in Python using Flask, storing our data on Firestore, calculating shipping distance using Google's distance-matrix API, and hosting on Google Cloud Platform. For the user authentication system, we used SHA-256 hashes and salts to store passwords securely on the cloud. ## Challenges we ran into For most of us, this was our first time developing a web application, because our backgrounds are in Mechatronics Engineering and Computer Engineering. ## Accomplishments that we're proud of We are very proud that we were able to build an app of this magnitude, as well as of its potential impact on social good by reducing carbon emissions. ## What we learned We learned about utilizing the Google Cloud Platform and integrating the front end and back end to make a complete web app. ## What's next for Purrtector Our mission is to build tools to gamify our fight against climate change, cultivate user engagement, and make it fun to save the world. We see ourselves as a non-profit and we would welcome collaboration from third parties to offer additional perks and discounts to our users for reducing carbon emissions by unlocking designated achievements with their pet. This would bring in additional incentives towards a carbon-neutral lifestyle on top of emotional attachment to their pet. ## Domain.com Link <https://purrtector.space> Note: We weren't able to register this via domain.com due to site errors but Sean said we could have this domain considered.
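As a minimal sketch of the salted-hash scheme mentioned above (SHA-256 plus a per-user salt), assuming nothing about the project's actual code; in production a dedicated password KDF such as bcrypt or scrypt would usually be preferable.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """Return (hex salt, hex digest) for storing alongside the user record."""
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt.hex(), digest

def verify_password(password: str, salt_hex: str, stored_digest: str) -> bool:
    _, digest = hash_password(password, bytes.fromhex(salt_hex))
    return hmac.compare_digest(digest, stored_digest)

salt, digest = hash_password("hunter2")
assert verify_password("hunter2", salt, digest)
```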
## Inspiration Google reviews are a very valuable source of insight for any business; however, we felt that, given the usual number of reviews per location, it might be too time-consuming and inconvenient for many business owners to take full advantage of and understand their reviews. We decided to solve this issue by displaying information in a simpler and more presentable way that will be more useful for business owners to preview. ## What it does We categorise reviews into "positive", "negative" and "unrelated" subcategories, and provide informative summaries for the categories based on common patterns in the reviews. The software requires an input from the user with the name of the business, as well as the postal code. This request is processed by our backend, which takes in the information from the Redis database (or adds it to the database if it is the first time the business is being searched); it is then processed by the Cohere natural language processing (NLP) model via their API. The Cohere insights are then displayed on the website for the user to see a detailed summary of their positive and negative reviews as well as the percentage of positive, negative and unrelated reviews that customers have left. ## How we built it For the frontend, we used React, JavaScript, CSS, HTML, Figma and RadixUI. For the backend we used Python. The Google Geocoding API and Google Places API were used to fetch reviews for the given business, which were then stored in the Redis database. The Cohere API (specifically Cohere Summarize and Cohere Classify) was used to categorise these reviews and provide informative summaries. Flask was used to create an API and relay the information to the frontend. ## Challenges we ran into Integrating Flask with React was a challenge, especially trying to have POST requests send information from the frontend to the backend. It took multiple approaches and some time for us to overcome the challenge. We all got together, took a break and got back into it for 3 hours, and eventually we were able to find a solution that made everything work. The Redis database was also very new to all of us and we had no prior experience with Redis, which made it difficult to understand the database and its intricacies. However, with persistence, we got the database running and structured it for our needs. ## Accomplishments that we're proud of We are very proud of figuring out the Redis database, as we realized how useful and adaptable it is. We are also very happy with the design we created for the frontend. The Cohere API was very easy to use and understand, and we loved working with it and are proud of how we tailored it to meet our needs. ## What we learned The main things we learned from this hackathon were Redis, Cohere and how to integrate the frontend and backend. We were all very impressed with how strong the Cohere API is and loved using it for our project. The API was very interesting to use and fit perfectly with the problem that we were trying to solve. Redis was very useful in managing the data we collected, and we noticed its exceptionally fast read and write operations compared to other databases we have used in the past. ## What's next for Reviewify We loved working together as a team for the first time; we went from strangers to friends in a matter of 24 hours. We are very happy with the amount and level of work that has been done throughout the hackathon.
We want to further improve Reviewify in the future by scraping more reviews and storing them in our databases, to make the positive/negative review ratio more accurate. We want to continue experimenting with the Cohere API, as well as add more visualisations, such as showing the user how their review ratings have changed over time (e.g. a span of 5 years) using graphs, helping business owners keep track of the progress of their reviews over time and use our product long-term.
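For illustration, fetching the reviews that feed Reviewify's pipeline could look like the hedged sketch below, using the Places API Place Details endpoint (which returns up to a handful of recent reviews). The API key is a placeholder and the place_id shown is the sample id from Google's documentation, not a business from the project.

```python
import requests

API_KEY = "YOUR_GOOGLE_MAPS_KEY"  # placeholder
DETAILS_URL = "https://maps.googleapis.com/maps/api/place/details/json"

def fetch_reviews(place_id: str):
    """Return the review texts for a place, ready to send to Cohere."""
    resp = requests.get(DETAILS_URL, params={
        "place_id": place_id,
        "fields": "name,rating,reviews",
        "key": API_KEY,
    })
    resp.raise_for_status()
    result = resp.json().get("result", {})
    return [r["text"] for r in result.get("reviews", [])]

reviews = fetch_reviews("ChIJN1t_tDeuEmsRUsoyG83frY4")  # sample place_id from Google's docs
print(f"Fetched {len(reviews)} reviews for classification and summarization")
```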
partial
## Introduction / Problem Statement: Before we get started with our pitch, who here has heard of Photoshop? Likely most people, given that it's the most popular image editor in the world. Who here knows how to use every single feature that Photoshop offers? Likely no one, and for a good reason. Any idea how long the manual for Photoshop is? It's not 50 or 100 or 500 pages. It's **1017** pages. Adobe Photoshop - the world's most popular image editor. <https://helpx.adobe.com/pdf/photoshop_reference.pdf> **And this isn't an isolated case!** Adobe Premiere Pro - the world's most popular video editor - has an **818**-page manual. <https://helpx.adobe.com/content/dam/help/en/pdf/premiere_pro_reference.pdf> DaVinci Resolve - the world's most popular free video editor - has a **1060**-page manual. <https://documents.blackmagicdesign.com/UserManuals/DaVinci_Resolve_12_Reference_Manual.pdf> As inexperienced video editors who needed to edit a demo video for this hackathon, we thought there had to be a better way to learn and use the features without going through thousand-page manuals and hour-long YouTube videos. Why can't some "buddy" just tell me exactly what to do? ## Solution: That's the problem we decided to solve at Hack Western! Imagine an AI companion that could not only understand your queries but also help you navigate the user interface in real time, providing step-by-step guidance through screen sharing while articulating instructions audibly. Essentially ChatGPT, but it can help you with whatever is on your screen in real time! Welcome to the new age of AI collaboration - **Share your vision with ScreenBuddy!** ## Tech Stack: The tech stack comprises: * A powerful combination of OpenCV and GPT-4-Vision for robust image recognition capabilities. * Vector embeddings are crafted using ChromaDB and LangChain, tailored specifically for training on DaVinci Resolve and Circle documentation, enhancing understanding and context (a small sketch of this index appears after this write-up). * GPT Whisper handles speech-to-text conversion. * GPT TTS seamlessly transforms text to speech. * The user interface is facilitated by the Tkinter Python Toolkit, offering a user-friendly screen-sharing experience for effective interaction with the AI system. This comprehensive stack creates a synergistic environment, enabling intuitive and efficient navigation through complex interfaces, whether in video editing with DaVinci Resolve or managing blockchain transactions on Circle. ## Challenges we ran into: * Integrating OpenCV and GPT-4-Vision for seamless image recognition posed technical hurdles when it came to streaming visual media files. * Fine-tuning vector embeddings using ChromaDB and LangChain required iterative experimentation to achieve good results with DaVinci Resolve. * Ensuring real-time responsiveness in the Tkinter Python Toolkit for effective screen sharing and speech recognition was a significant challenge. ## Accomplishments that we're proud of: * Successful integration of OpenCV and GPT-4-Vision for robust image recognition capabilities. * Precision in crafting vector embeddings via ChromaDB and LangChain for tailored training on DaVinci Resolve and Circle documentation. * Seamless implementation of GPT Whisper and GPT TTS for speech-to-text and text-to-speech transformations. * Development of a user-friendly interface using the Tkinter Python Toolkit for intuitive screen sharing. ## What we learned: * The synergy between computer vision and natural language processing is pivotal for effective AI-assisted navigation.
* The importance of iterative testing and fine-tuning in creating a reliable and user-friendly system. * Addressing real-time responsiveness challenges in UI interactions enhances overall user experience. ## What's next for our project: * Implementing user feedback for continuous improvement and refinement. * Exploring additional applications beyond DaVinci Resolve and Circle for a broader user base. * Enhancing the AI's contextual understanding for even more intuitive interactions. * Collaborating with the community to expand the range of supported interfaces and functionalities.
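As promised in the tech stack above, here is a minimal sketch of the documentation index idea: chunks of the DaVinci Resolve manual go into a ChromaDB collection, and the user's question retrieves the most relevant passage to ground the model's answer. The collection name and the manual chunks shown are made up, not ScreenBuddy's actual data.

```python
import chromadb

client = chromadb.Client()
docs = client.create_collection("davinci_manual")

# Illustrative manual chunks -- in the real system these would come from the PDF.
docs.add(
    ids=["cut-page-1", "color-page-1"],
    documents=[
        "The Cut page is designed for fast editing: use the dual timeline to trim clips quickly.",
        "The Color page exposes nodes for grading; add a serial node to apply a new correction.",
    ],
)

question = "How do I trim a clip quickly?"
hits = docs.query(query_texts=[question], n_results=1)
print(hits["documents"][0][0])  # the most relevant manual passage
```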
The inspiration for "I'm Not Dead" is our common enjoyment of hiking. Our hack team members come from different backgrounds on the east coast. One of the biggest fears and dangers of hiking is getting stranded and helpless in an unfamiliar area. This application gives hikers of all skill sets the peace of mind that, if they were to be harmed or stranded, an emergency contact will be reached for help. Our team built this application using map APIs provided by sponsors. Every 10 minutes a pin is dropped as the hiker goes further in their journey. Whenever the hiker becomes unresponsive and stops progressing in their journey, our backend system reaches out to the emergency contact to make them aware.
## Inspiration In online documentaries, we saw visually impaired individuals whose vision consisted of small apertures. We wanted to develop a product that would act as a remedy for this issue. ## What it does When a button is pressed, a picture is taken of the user's current view. This picture is then analyzed using OCR (Optical Character Recognition) and the text is extracted from the image. The text is then converted to speech for the user to listen to. ## How we built it We used a push button connected to the GPIO pins on the Qualcomm DragonBoard 410c. The input from the button initiates a Python script that connects to the Azure Computer Vision API. The resulting text is sent to the Azure Speech API. ## Challenges we ran into Coming up with an idea that we were all interested in, incorporated a good amount of hardware, and met the themes of the makeathon was extremely difficult. We attempted to use Speech Diarization initially but realized the technology is not refined enough for our idea. We then modified our idea and wanted to use a hotkey detection model but had a lot of difficulty configuring it. In the end, we decided to use a pushbutton instead for simplicity, in favour of both the user and us, the developers. ## Accomplishments that we're proud of This was our very first makeathon, and we are proud of accomplishing the challenge of developing a hardware project (using components we were completely unfamiliar with) within 24 hours. We also ended up with a fully functional project. ## What we learned We learned how to operate and program a DragonBoard, as well as connect various APIs together. ## What's next for Aperture We want to implement hot-key detection instead of the push button to eliminate the need for tactile input altogether.
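To make the OCR step concrete, here is a hedged sketch of a call to the Azure Computer Vision OCR REST endpoint after the button press. The endpoint region, key, image file name, and the v3.2 route are placeholders/assumptions and may differ from the exact API version the project used.

```python
import requests

ENDPOINT = "https://YOUR_REGION.api.cognitive.microsoft.com"  # placeholder
KEY = "YOUR_AZURE_KEY"                                         # placeholder

def ocr_image(image_path: str) -> str:
    """Send a captured image to Azure OCR and return the recognized text."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{ENDPOINT}/vision/v3.2/ocr",
            headers={
                "Ocp-Apim-Subscription-Key": KEY,
                "Content-Type": "application/octet-stream",
            },
            data=f.read(),
        )
    resp.raise_for_status()
    words = []
    for region in resp.json().get("regions", []):
        for line in region.get("lines", []):
            words.extend(w["text"] for w in line.get("words", []))
    return " ".join(words)

print(ocr_image("snapshot.jpg"))  # this text would then be sent on to the Speech API
```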
losing
## Inspiration We were inspired to make Anchor in hopes of promoting positive mental and physical health. Being in the middle of the pandemic, we were also inspired to add virtual collaborative features to still encourage active living, but in the safety of our homes. ## What it does Anchor is a personal workout app that aims to boost your mental and physical health through yoga, workouts, stretching, and dance! Users can do these on their own or with others. It makes everyday, mundane activities more fun and interactive! ## How we built it We primarily used AdobeXD and experimented with EchoAR. ## Challenges we ran into Artificial intelligence! It was our first time trying virtual/augmented reality with EchoAR, so we had difficulties trying to incorporate it into our final product. ## Accomplishments that we're proud of Learning new software and stepping out of our comfort zone! ## What we learned It was also our first time trying AdobeXD and EchoAR! We learned a lot about rendering and artificial intelligence. Definitely a great experience, and lots of room for improvement in the future. ## What's next for Anchor We hope to fine-tune our artificial intelligence to create a better user experience, hopefully with EchoAR. This will help teach correct form and prevent injuries by letting the user see the yoga poses from all different angles, as if watching a 360° video, using augmented reality. We also hope to expand our platform and start a web application, as well as add customizable features such as calorie trackers and fitness goal setting.
## **CoLab** makes exercise fun. In August 2020, **53%** of US adults reported that their mental health had been negatively impacted due to worry and stress over coronavirus. This is **significantly higher** than the 32% reported in March 2020. That being said, there is no doubt that coronavirus has heavily impacted our everyday lives. Quarantine has us stuck inside, unable to work out at our gyms, practice with our teams, and socialize in classes. Doctors have suggested we exercise throughout lockdown, to maintain our health and for the release of endorphins. But it can be **hard to stay motivated**, especially when we're stuck inside and don't know the next time we can see our friends. Our inspiration comes from this, and we plan to solve these problems with **CoLab.** ## What it does CoLab enables you to work out with others, following a synced YouTube video or creating a custom workout plan that can be fully dynamic and customizable. ## How we built it Our technologies include: Twilio Programmable Video API, Node.js and React. ## Challenges we ran into At first, we found it difficult to resize the video references for local and remote participants. Luckily, we were able to resize and set the correct ratios using Flexbox and Bootstrap's grid system. We also needed to find a way to mute audio and disable video, as these are core functionalities in any video-sharing application. We were lucky enough to find that someone else had the same issue on [Stack Overflow](https://stackoverflow.com/questions/41128817/twilio-video-mute-participant), which we were able to use to help build our solution. ## Accomplishments that we're proud of When the hackathon began, our team started brainstorming a ton of goals like real-time video, customizable workouts, etc. It was really inspiring and motivating to see us tackle these problems and accomplish most of our planned goals one by one. ## What we learned This sounds cliché, but we learned how important it was to have strong chemistry within our team. One of the many reasons why I believe our team was able to complete most of our goals was that we were all very communicative, helpful and efficient. We knew that we joined together to have a good time, but we also joined because we wanted to develop our skills as developers. It helped us grow as individuals, and we are now more competent in using new technologies like Twilio's Programmable Video API! ## What's next for CoLab Our team will continue developing the CoLab platform and polishing it until we deem it acceptable for publishing. We really believe in the idea of CoLab and want to pursue the idea further. We hope you share that vision, and our team would like to thank you for reading this verbose project story!
## Inspiration We got together a team passionate about social impact, and all the ideas we had kept going back to loneliness and isolation. We have all been in high-pressure environments where mental health was not prioritized, and we wanted to find a supportive and unobtrusive solution. After sharing some personal stories and observing our skillsets, the idea for Remy was born. **How can we create an AR buddy to be there for you?** ## What it does **Remy** is an app that contains an AR buddy who serves as a mental health companion. Through information accessed from "Apple Health" and "Google Calendar," Remy is able to help you stay on top of your schedule. He gives you suggestions on when to eat, when to sleep, and personally recommends articles on mental health hygiene. All this data is aggregated into a report that can then be sent to medical professionals. Personally, our favorite feature is his suggestions on when to go on walks and your ability to meet other Remy owners. ## How we built it We built an iOS application in Swift with ARKit and SceneKit with Apple Health data integration. Our 3D models were created from Mixamo. ## Challenges we ran into We did not want Remy to promote codependency in its users, so we specifically set time aside to think about how we could create a feature that focused on socialization. We'd never worked with AR before, so this was an entirely new set of skills to learn. The biggest challenge was learning how to position AR models in a given scene. ## Accomplishments that we're proud of We have a functioning app of an AR buddy that we have grown heavily attached to. We feel that we have created a virtual avatar that many people really can fall for. ## What we learned Aside from this being the first time many of the team had worked with AR, the main learning point was about all the data that we gathered on the suicide epidemic among adolescents. Suicide rates have increased by 56% in the last 10 years, and this will only continue to get worse. We need change. ## What's next for Remy While our team has set out for Remy to be used in a college setting, we envision many other relevant use cases where Remy will be able to better support one's mental health and wellness. Remy can be used as a tool by therapists to get better insights on sleep patterns and outdoor activity done by their clients, and this data can be used to further improve the client's recovery process. Clients who use Remy can send their activity logs to their therapists before sessions with a simple click of a button. To top it off, we envisage the Remy application being a resource hub for users to improve their overall wellness. Through providing valuable sleep hygiene tips and even lifestyle advice, Remy will be the one-stop, holistic companion for users experiencing mental health difficulties to turn to as they take their steps towards recovery.
losing
## Inspiration Alan Liang from Oakland faced a terrifying reality during an emergency. “I called 911, and the experience was horrible. I was placed on hold for 10 to 15 minutes. It took nearly 20 minutes to speak to a dispatcher.” The average wait time for 911 calls in Oakland is 54 seconds, but in emergencies, even a few seconds can mean the difference between life and death. No one should ever have to wait to be connected when every second could save a life. <https://www.nbcbayarea.com/news/local/oakland-911-crisis/3266349/> ## What it does The average 911 wait time in cities like Oakland is 54 seconds, but during an emergency, every second counts. Dial AI is designed to support 911 centers when they’re understaffed or overwhelmed with calls. Our system uses a multi-agent approach: one AI agent engages with the caller in real-time instantly with 0 wait time, just like a live 911 operator, while another extracts critical information. Simultaneously, additional agents categorize and prioritize the details for the appropriate departments, ensuring that responders receive the information they need quickly. We provide a comprehensive dashboard to all department responders, where incidents are stack-ranked by severity, complete with the caller's location and all necessary details for immediate action. In times of high call volume or staffing shortages, traditional 911 lines can become overloaded. Dial AI ensures real-time communication, providing callers with instant responses and swift action. The system scales seamlessly based on call volume, ensuring there are never any delays due to the robust and adaptive design of the technology. Imagine a scenario like Alan Liang’s—someone in desperate need of help. Instead of being placed on hold for minutes, Dial AI engages immediately, ensuring their issue is addressed without delay. ## How we built it We carefully evaluated different models to ensure the best fit for our use case, understanding that when someone calls 911, they are already in distress. Selecting an empathetic voice model was crucial, and Vapi was the obvious choice for its ability to respond with human-like sensitivity. For real-time conversations, we integrated Vapi with Twilio through webhooks, allowing callers to speak directly into their phones as they would with a live operator. We built the system using Fetch AI to manage the agents, with AgentVerse hosting them. In this setup, two agents communicate with each other in real time. When a call is placed, the first agent retrieves the chat ID and real-time transcripts using Twilio’s webhook. These transcripts are passed through the agent, which uses OpenAI’s API (GPT-4) to extract key information, organize it, and prioritize the emergency based on urgency. This agent communicates with another agent to pass the extracted information to the dispatcher console. These agents work together, with one handling the live interactions and event classifications. The data from the first agent is formatted in JSON by the second agent and sent to a dashboard, where dispatchers can view emergencies stack-ranked by severity and location. By using Fetch AI, AgentVerse, Vapi, and OpenAI, we’ve created a system that ensures quick, real-time responses during high call volumes without delays, making the process efficient, empathetic, and reliable in critical moments. ## Challenges we ran into One of the main challenges we encountered was connecting Vapi with Twilio using webhooks. 
While obtaining transcripts after the conversation ended was straightforward, extracting real-time transcripts during the call was much more complex. This step was critical for ensuring that information could reach dispatchers as quickly as possible, so overcoming this learning curve was essential for speeding up emergency response. Another significant challenge was getting agents to communicate effectively within AgentVerse. Since this was entirely new for us, we spent a lot of time learning and refining how agents interact with each other in real-time to ensure a seamless flow of information and decision-making. ## Accomplishments that we're proud of We’re incredibly proud to have built and validated a proof-of-concept AI emergency response agent that offers a truly lifelike experience, making conversational AI over the phone sound remarkably similar to speaking with a real 911 operator. Beyond that, we’ve developed a complete end-to-end solution that directly addresses a critical real-world issue—reducing high caller wait times while offering vital support to 911 operators. Our multi-agent model performed exceptionally well, and we managed to seamlessly integrate Twilio, Vapi, Streamlit and Fetch AI into a unified architecture. What excites us most is the real-time conversational AI on phones and how our agents work together to prioritize and process emergencies. This was our first experience working with voice technology, and we’re proud of how natural and effective the interactions feel, as well as how smoothly the agents communicate behind the scenes. ## What we learned We had never worked with voice synthesis or real-time conversational AI until this project, and we’re thrilled to have successfully implemented both. It was also a first for several team members to work with Streamlit and Webhooks, which added new technical challenges and learning opportunities. Through this project, we’ve gained a newfound respect for the vital work 911 dispatchers perform. It takes exceptional knowledge, skill, and empathy to handle constant emergency calls with such professionalism. While we’re proud of our accomplishments with Dial AI, this experience has deepened our appreciation for the challenging and compassionate work that dispatchers do every day. ## What's next for Dial AI This is just the first step toward creating an ideal world where every emergency call is answered instantly, with zero wait time. While we’re proud of what we’ve achieved, there’s still much to improve. Current models excel in clear-cut, black-and-white scenarios, but they struggle with more nuanced, gray-area situations. With advancements in frontier models, we expect Dial AI to become even better at making contextually aware decisions during calls. Additionally, we see room for improvement in both latency and empathy. As real-time conversational models evolve, Dial AI will continue to enhance its ability to respond faster and more compassionately. No seconds should be wasted—every second could save a life. That’s where the real impact of Dial AI lies. While we could have pursued a commercial project, like a customer service AI, our goal was to make a meaningful difference in people’s lives. We wanted to use advancements in technology to improve emergency response and potentially save lives.
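To make the transcript hand-off described in Dial AI's build section a bit more concrete, here is an illustrative sketch of a webhook endpoint that receives partial transcripts during a call and passes them on for triage. The payload shape and the `triage()` helper are hypothetical; the real Vapi/Twilio webhook format and the Fetch AI agent wiring differ.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def triage(call_id: str, transcript: str) -> dict:
    """Placeholder for the agent that extracts location/severity with GPT-4."""
    return {"call_id": call_id, "severity": "unknown", "summary": transcript[:80]}

@app.route("/webhook/transcript", methods=["POST"])
def transcript_webhook():
    payload = request.get_json(force=True)  # hypothetical payload shape
    incident = triage(payload.get("call_id", ""), payload.get("transcript", ""))
    # In the real system this would be forwarded to the dispatcher-dashboard agent.
    return jsonify(incident), 200

if __name__ == "__main__":
    app.run(port=5000)
```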
## Inspiration In 2022 alone, the United States experienced 150 school shootings, following a staggering 240 incidents in 2021 and over 300 from 2010 to 2020. These tragedies have claimed countless innocent lives among students, teachers, and staff. During these horrific events, hiding students and teachers often struggle to discreetly alert authorities, compounded by the challenge 911 operators face in distinguishing genuine emergencies from false or accidental calls, thereby delaying critical response times. Background research into 911 dispatch operations reveals a critical issue: vital audio cues like distant gunshots or cries for help often go unnoticed amidst the chaos of emergency responses. This lack of clear communication can spell the difference between life and death for those trapped in these harrowing situations. Having experienced frequent active shooter drills and lockdowns firsthand during school, we intimately understand the pervasive fear that grips schools nationwide. Motivated to make a tangible difference, we took a novel approach with our project. Thus, our team developed 911-rEsQ, an AI-driven application tailored specifically for 911 operators. This innovation aims to enhance 911 operators' ability to swiftly and accurately respond to emergencies, particularly during school shootings. By leveraging advanced audio processing capabilities, 911-rEsQ promises to clean and enhance caller audio, enabling dispatchers to better assess the urgency and severity of each situation. This technological advancement is poised to revolutionize emergency response protocols, ensuring that every distress call receives the attention it urgently requires. As students ourselves, we believe in the transformative power of technology to mitigate the devastating impact of school shootings. Our initiative not only fills a critical gap in emergency response infrastructure but also exemplifies the potential of collaborative efforts in the hackathon community. By supporting projects like 911-rEsQ, we advocate for innovative solutions that safeguard lives and strengthen our communities against unthinkable tragedies. ## What it does 911-rEsQ revolutionizes emergency response by leveraging advanced AI technology to optimize communication between callers and 911 operators. This groundbreaking application processes audio from 911 calls, effectively cleaning up background noise and enhancing the clarity of the caller's message. By improving language comprehension, 911 operators can swiftly understand critical details even in chaotic environments. Moreover, 911-rEsQ goes beyond mere audio enhancement. It scans for crucial background sounds such as faint gunshots or distressed screams, providing operators with vital situational context to assess the severity of emergencies accurately. This capability enables operators to make informed decisions on dispatching appropriate resources, whether police, medics, or other responders, swiftly and accurately. In addition to its audio processing capabilities, the application analyzes the caller's emotional state. By detecting tones of distress or urgency, 911-rEsQ helps operators manage and prioritize calls effectively. This feature not only assists in distinguishing genuine emergencies from accidental or prank calls but also supports operators in calming panicked callers and gathering crucial information efficiently. 
By equipping 911 operators with enhanced audio clarity, real-time danger signal detection, and emotional analysis tools, 911-rEsQ significantly enhances emergency response capabilities. This innovation ensures that every call receives the urgent attention it deserves, potentially saving lives in critical situations where every second counts. ## How we built it The project employs Python libraries such as sounddevice, AudioSegment from pydub, scipy, librosa, and noisereduce to enhance the clarity of audio from 911 calls. These libraries are crucial for cleaning up background noise and improving sound quality and the comprehensibility of caller voices, ensuring that 911 operators can effectively understand crucial details even in noisy environments where every second counts (a minimal sketch of this clean-up step appears at the end of this write-up). Moreover, the project utilizes Hume AI's Expression Measurement API and websockets for analyzing the emotional states of callers based on their voice tones and patterns. This emotional analysis component provides operators with insights into the urgency and severity of each call, enabling them to prioritize responses accordingly. It outputs the top 5 emotions detected by Hume AI's API to give operators more insight into the urgency and authenticity of the call. Simultaneously, a PyTorch-based model is developed to detect critical background noises, such as gunshots or screams, in the audio. This capability enhances situational awareness for operators, allowing them to swiftly dispatch appropriate emergency resources. To integrate all these functionalities into a unified application, the project utilizes Flask and Streamlit for backend development and HTML/CSS for frontend interface design. Flask and Streamlit manage the backend processes, including audio enhancement, emotion analysis, and noise detection. This integration ensures seamless communication and real-time information display, enabling operators to make informed decisions promptly during emergencies. As part of our project, we retrained Microsoft's Contrastive Language-Audio Pretraining (CLAP) model, which offers zero-shot classification, tailoring it towards detecting not only gunshots and screams but also fires, footsteps, breathing patterns, and other life-threatening sounds. However, this feature is not yet integrated into our Streamlit app. Before integration, we need to isolate background audio to ensure accurate classification, as the presence of talking can dominate the audio stream and affect the classification results. This additional step will enhance the effectiveness of our application in detecting and responding to critical emergencies. ## Challenges we ran into Integrating Hume AI for audio analysis initially presented challenges due to its specific application in our project. We had to explore and understand how to effectively utilize Hume AI's capabilities to analyze the intricate details of audio data from emergency calls. In the end, we decided to go with their Expression Measurement API as it is a perfect fit for the purposes of our project. This process involved tackling technical obstacles such as data preprocessing, ensuring seamless integration of the model into our system, and deciphering the output to derive meaningful insights into caller emotions and the context of each emergency situation. Similarly, we faced various challenges while developing the PyTorch CNN. As beginners in AI and machine learning, we faced a steep learning curve in understanding and utilizing the PyTorch framework.
The task of building and refining the classification model required deep dives into fundamental concepts such as neural network architecture, optimization techniques, and effective training methodologies. Some challenges we faced included setting up the development environment, debugging code to achieve desired functionalities, and rigorously validating the model's accuracy and ability to perform in realistic emergency scenarios. ## Accomplishments that we're proud of We are incredibly proud of our team as this Hackathon experience demonstrated our ability to collaborate effectively and use our technical skills to address pressing timely societal and safety issues. As students in the United States, where school shootings are unfortunately common, this project was particularly meaningful to us. We were proud to develop a solution that has the potential to save lives, aid victims, and improve emergency response. Throughout the process, we pushed ourselves to learn new technologies over the weekend and created working models that can positively impact the lives of many people. We hope to continue fueling the fight to address the unfortunate reality that school shootings remain a significant issue in our society. We are also very proud that our project is applicable in numerous situations not only school shootings, but abduction victim calls, domestic violence situations, and more. Moreover, our project highlights the transformative power of technological advancement in addressing critical societal challenges and social impact. This project was more than just a hackathon; it was a stepping stone towards our aspirations to drive positive change in our communities! ## What we learned We found it deeply rewarding to apply AI technologies and methods to assist 911 operators in responding to calls from victims. Given the ongoing issue of school shootings in society, we take pride in raising awareness through an innovative solution that aids 911 operators and improves the chances of protecting victims. We are thrilled to have utilized our software expertise to create something that will positively impact many lives. Our primary focus this weekend was on developing models to classify faint audio. We gained proficiency in processing and refining audio, analyzing faint sounds for specific characteristics. Additionally, we acquired skills in using ML frameworks like PyTorch and exploring AI applications such as HumeAI, all while advancing our capabilities in web development and data analysis. ## What's next for 911 rEsQ The next step for 911 rEsQ involves advancing the integration of real-time audio capabilities into our project. Currently, we have achieved basic functionalities such as audio recording and real-time enhancement using our application. Our immediate objective is to seamlessly integrate these capabilities into HumeAI, enhancing the overall efficiency and effectiveness of emergency response operations. Furthermore, our plan includes collaborating with local governments to implement our application within their call centers. By doing so, we aim to empower emergency responders with advanced tools that can significantly improve their response times and outcomes during critical situations, such as school shootings and other emergencies. This initiative reflects our commitment to leveraging technology to enhance public safety and support the vital work of emergency services nationwide.
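As referenced in the 911-rEsQ build section, here is a minimal sketch of the audio clean-up step: load the call recording, apply spectral-gating noise reduction with noisereduce, and write the enhanced clip that would then be passed on to Hume AI and the PyTorch classifier. File names are placeholders.

```python
import librosa
import noisereduce as nr
import soundfile as sf

# Load at the original sample rate, reduce stationary background noise, save the result.
y, sr = librosa.load("raw_911_call.wav", sr=None)
reduced = nr.reduce_noise(y=y, sr=sr)
sf.write("enhanced_911_call.wav", reduced, sr)
print(f"Wrote {len(reduced) / sr:.1f}s of enhanced audio at {sr} Hz")
```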
## Inspiration

The increasing frequency and severity of natural disasters such as wildfires, floods, and hurricanes have created a pressing need for reliable, real-time information. Families, NGOs, emergency first responders, and government agencies often struggle to access trustworthy updates quickly, leading to delays in response and aid. Inspired by the need to streamline and verify information during crises, we developed DisasterAid.ai to provide concise, accurate, and timely updates.

## What it does

DisasterAid.ai is an AI-powered platform that consolidates trustworthy live updates about ongoing crises and packages them into summarized info-bites. Users can ask specific questions about crises like the New Mexico Wildfires and Floods to gain detailed insights. The platform also features an interactive map with pin drops indicating the precise coordinates of events, enhancing situational awareness for families, NGOs, emergency first responders, and government agencies.

## How we built it

1. Data Collection: We queried You.com to gather URLs and data on the latest developments concerning specific crises.
2. Information Extraction: We extracted critical information from these sources and combined it with data gathered through Retrieval-Augmented Generation (RAG).
3. AI Processing: The compiled information was input into Anthropic AI's Claude 3.5 model (a minimal sketch of this step appears at the end of this write-up).
4. Output Generation: The AI model produced concise summaries and answers to user queries, alongside generating pin drops on the map to indicate event locations.

## Challenges we ran into

1. Data Verification: Ensuring the accuracy and trustworthiness of the data collected from multiple sources was a significant challenge.
2. Real-Time Processing: Developing a system capable of processing and summarizing information in real time requires sophisticated algorithms and infrastructure.
3. User Interface: Creating an intuitive and user-friendly interface that allows users to easily access and interpret the information presented by the platform.

## Accomplishments that we're proud of

1. Accurate Summarization: Successfully integrating AI to produce reliable and concise summaries of complex crisis situations.
2. Interactive Mapping: Developing a dynamic map feature that provides real-time location data, enhancing the usability and utility of the platform.
3. Broad Utility: Creating a versatile tool that serves diverse user groups, from families seeking safety information to emergency responders coordinating relief efforts.

## What we learned

1. Importance of Reliable Data: The critical need for accurate, real-time data in disaster management and the complexities involved in verifying information from various sources.
2. AI Capabilities: The potential and limitations of AI in processing and summarizing vast amounts of information quickly and accurately.
3. User Needs: Insights into the specific needs of different user groups during a crisis, allowing us to tailor our platform to better serve these needs.

## What's next for DisasterAid.ai

1. Enhanced Data Sources: Expanding our data sources to include more real-time feeds and integrating social media analytics for even faster updates.
2. Advanced AI Models: Continuously improving our AI models to enhance the accuracy and depth of our summaries and responses.
3. User Feedback Integration: Implementing feedback loops to gather user input and refine the platform's functionality and user interface.
4. Partnerships: Building partnerships with more emergency services and NGOs to broaden the reach and impact of DisasterAid.ai.
5. Scalability: Scaling our infrastructure to handle larger volumes of data and more simultaneous users during large-scale crises.
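As referenced in the "AI Processing" step above, here is a minimal sketch of how retrieved snippets can be passed to Claude; the model id, prompt wording, and snippet format are illustrative assumptions rather than our exact implementation.

```python
# Hedged sketch of the "AI Processing" step: model name, prompt wording,
# and the retrieved-snippet format are illustrative assumptions.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarize_crisis(question: str, snippets: list[str]) -> str:
    """Ask Claude to answer a crisis question using only retrieved snippets."""
    context = "\n\n".join(f"[source {i+1}] {s}" for i, s in enumerate(snippets))
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",   # assumed model id
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": (
                "Using only the sources below, give a concise, factual update.\n\n"
                f"{context}\n\nQuestion: {question}"
            ),
        }],
    )
    return message.content[0].text

# summarize_crisis("Where are evacuations ordered?", ["Snippet from a news source..."])
```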
## Inspiration

Our inspiration for the creation of PlanIt came from the different social circles we spend time in. It seemed like no matter the group of people, planning an event was always cumbersome and there were too many little factors that made it annoying.

## What it does

PlanIt allows for quick and easy construction of events. It uses Facebook and Google Plus's APIs in order to connect people. A host chooses a date and invites people; once invited, everyone can contribute ideas for places, which in turn creates a list of potential events that are put to a vote. The host then looks at the results and chooses an overall main event. This main event becomes the spotlight for many features introduced by PlanIt. Some of the main features are quite simple in concept: let people know when you are on the way, which causes the app to track your location, and instead of telling everyone your location, it lets everybody know how far away you are from the desired destination. Another feature is the carpool tab; people can volunteer to carpool and list their vehicle capacity, and riders can sort themselves under a driver. There are many more features in play.

## How we built it

We used Microsoft Azure for cloud-based server-side development and Xamarin for easy cross-platform compatibility, with C# as our main language.

## Challenges we ran into

Some of the challenges that we ran into were based around back-end and server-side issues. We spent 3 hours trying to fix one bug and even then it still seemed to have conflicts. All in all, the front end went by quite smoothly but the back end took some work.

## Accomplishments that we're proud of

We were very close to quitting late into the night, but we were able to take a break and rally around a new project model in order to finish as much of it as we could. Not quitting was probably the most notable accomplishment of the event.

## What we learned

We used two new pieces of software for this app: Xamarin and Microsoft Azure. We also learned that it's possible to have a semi-working product with only one day of work.

## What's next for PlanIt

We were hoping to fully complete PlanIt for use within our own group of friends. If it gets positive feedback, then we could see ourselves releasing this app on the market.
Energy is the future. More and more, that future relies on community efforts toward sustainability, and often, the best form of accountability occurs within peer networks. That's why we built SolarTrack, an energy-tracking app that allows Birksun users to connect and collaborate with like-minded members of their community. In our app, the user profile reflects lifetime energy generated using Birksun, as well as a point conversion system that allows for the future development of gamified rewards. We also have a community map, where you can find a heatmap showing where people generate the most energy using Birksun bags. In the future, this community map would also include nearby events and gatherings. Finally, there's the option to find family and friends and compete amongst them to accumulate the most points using Birksun bags. Here's to building a greener future for wearable tech, one bag at a time!
## Why We Created **Here** As college students, one question that we catch ourselves asking over and over again is – “Where are you studying today?” One of the most popular ways for students to coordinate is through texting. But messaging people individually can be time consuming and awkward for both the inviter and the invitee—reaching out can be scary, but turning down an invitation can be simply impolite. Similarly, group chats are designed to be a channel of communication, and as a result, a message about studying at a cafe two hours from now could easily be drowned out by other discussions or met with an awkward silence. Just as Instagram simplified casual photo sharing from tedious group-chatting through stories, we aim to simplify casual event coordination. Imagine being able to efficiently notify anyone from your closest friends to lecture buddies about what you’re doing—on your own schedule. Fundamentally, **Here** is an app that enables you to quickly notify either custom groups or general lists of friends of where you will be, what you will be doing, and how long you will be there for. These events can be anything from an open-invite work session at Bass Library to a casual dining hall lunch with your philosophy professor. It’s the perfect dynamic social calendar to fit your lifestyle. Groups are customizable, allowing you to organize your many distinct social groups. These may be your housemates, Friday board-game night group, fellow computer science majors, or even a mixture of them all. Rather than having exclusive group chat plans, **Here** allows for more flexibility to combine your various social spheres, casually and conveniently forming and strengthening connections. ## What it does **Here** facilitates low-stakes event invites between users who can send their location to specific groups of friends or a general list of everyone they know. Similar to how Instagram lowered the pressure involved in photo sharing, **Here** makes location and event sharing casual and convenient. ## How we built it UI/UX Design: Developed high fidelity mockups on Figma to follow a minimal and efficient design system. Thought through user flows and spoke with other students to better understand needed functionality. Frontend: Our app is built on React Native and Expo. Backend: We created a database schema and set up in Google Firebase. Our backend is built on Express.js. All team members contributed code! ## Challenges Our team consists of half first years and half sophomores. Additionally, the majority of us have never developed a mobile app or used these frameworks. As a result, the learning curve was steep, but eventually everyone became comfortable with their specialties and contributed significant work that led to the development of a functional app from scratch. Our idea also addresses a simple problem which can conversely be one of the most difficult to solve. We needed to spend a significant amount of time understanding why this problem has not been fully addressed with our current technology and how to uniquely position **Here** to have real change. ## Accomplishments that we're proud of We are extremely proud of how developed our app is currently, with a fully working database and custom frontend that we saw transformed from just Figma mockups to an interactive app. It was also eye opening to be able to speak with other students about our app and understand what direction this app can go into. 
## What we learned Creating a mobile app from scratch—from designing it to getting it pitch ready in 36 hours—forced all of us to accelerate our coding skills and learn to coordinate together on different parts of the app (whether that is dealing with merge conflicts or creating a system to most efficiently use each other’s strengths). ## What's next for **Here** One of **Here’s** greatest strengths is the universality of its usage. After helping connect students with students, **Here** can then be turned towards universities to form a direct channel with their students. **Here** can provide educational institutions with the tools to foster intimate relations that spring from small, casual events. In a poll of more than sixty university students across the country, most students rarely checked their campus events pages, instead planning their calendars in accordance with what their friends are up to. With **Here**, universities will be able to more directly plug into those smaller social calendars to generate greater visibility over their own events and curate notifications more effectively for the students they want to target. Looking at the wider timeline, **Here** is perfectly placed at the revival of small-scale interactions after two years of meticulously planned agendas, allowing friends who have not seen each other in a while casually, conveniently reconnect. The whole team plans to continue to build and develop this app. We have become dedicated to the idea over these last 36 hours and are determined to see just how far we can take **Here**!
## Inspiration We were inspired by recent advances in deep learning, which allow you to transfer style from one picture, say a famous painting, to your own pictures. We wanted to add tight integration with Facebook to allow people to experience the magic of this new technology in as accessible a manner as possible. ## What it does Semantic style transfer is a new deep-learning technique which takes images and applies styles to them -- we've drawn these styles from collections of paintings from history. Unfortunately Facebook requires app verification before allowing us to open it to non-developers (which takes a few days), so you can't run this yourself yet. We'd love to demo the integration for you with our own Facebook accounts, or you can see how it looks on an example photo at <http://tapioca.byron.io> ! ## How we built it AWS GPU instances running the deep-learning library torch, served by Flask. A mess of JS and hacked-together CSS on the front end in a desperate attempt to make things align, as is the way of such things. ## Challenges we ran into Centering things is hard. ## Accomplishments that we're proud of Got cutting-edge deep learning techniques working -- and integrated with Facebook! -- in only 36 hours. And it looks spiffy! ## What we learned Don't stay up so late. ## What's next for tapioca Taking over the world. But Instagram integration before that most likely.
## **Inspiration:**

Our inspiration stemmed from the realization that the pinnacle of innovation occurs at the intersection of deep curiosity and an expansive space to explore one's imagination. Recognizing the barriers faced by learners—particularly their inability to gain real-time, personalized, and contextualized support—we envisioned a solution that would empower anyone, anywhere to seamlessly pursue their inherent curiosity and desire to learn.

## **What it does:**

Our platform is a revolutionary step forward in the realm of AI-assisted learning. It integrates advanced AI technologies with intuitive human-computer interactions to enhance the context a generative AI model can work within. By analyzing screen content—be it text, graphics, or diagrams—and amalgamating it with the user's audio explanation, our platform grasps a nuanced understanding of the user's specific pain points. Imagine a learner pointing at a perplexing diagram while voicing out their doubts; our system swiftly responds by offering immediate clarifications, both verbally and with on-screen annotations.

## **How we built it**:

We architected a Flask-based backend, creating RESTful APIs to seamlessly interface with user input and machine learning models. Integration of Google's Speech-to-Text enabled the transcription of users' learning preferences, and the incorporation of the Mathpix API facilitated image content extraction. Harnessing the prowess of the GPT-4 model, we were able to produce contextually rich textual and audio feedback based on captured screen content and stored user data. For frontend fluidity, audio responses were encoded into base64 format, ensuring efficient playback without unnecessary re-renders.

## **Challenges we ran into**:

Scaling the model to accommodate diverse learning scenarios, especially in the broad fields of maths and chemistry, was a notable challenge. Ensuring the accuracy of content extraction and effectively translating that into meaningful AI feedback required meticulous fine-tuning.

## **Accomplishments that we're proud of**:

Successfully building a digital platform that not only deciphers image and audio content but also produces high-utility, real-time feedback stands out as a paramount achievement. This platform has the potential to revolutionize how learners interact with digital content, breaking down barriers of confusion in real time. One aspect of our implementation that separates us from other approaches is that we allow the user to perform ICL (In-Context Learning), a feature that few large language model interfaces let the user do seamlessly.

## **What we learned**:

We learned the immense value of integrating multiple AI technologies for a holistic user experience. The project also reinforced the importance of continuous feedback loops in learning and the transformative potential of merging generative AI models with real-time user input.
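To illustrate the base64 audio delivery mentioned under "How we built it", here is a minimal Flask sketch; the route name and audio file are placeholders, not our actual endpoints.

```python
# Minimal sketch of returning generated audio as base64 JSON from Flask,
# as described above; the route name and file source are illustrative.
import base64
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/feedback-audio")
def feedback_audio():
    # In the real app this would be the generated spoken feedback.
    with open("feedback.mp3", "rb") as f:
        audio_b64 = base64.b64encode(f.read()).decode("ascii")
    # The frontend can decode this once and replay it without re-fetching.
    return jsonify({"audio_base64": audio_b64, "mime_type": "audio/mpeg"})

if __name__ == "__main__":
    app.run(debug=True)
```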
## **Inspiration**

Every **1 in 5 Canadians** live with the pains of arthritis ([source](https://arthritis.ca/about-arthritis/what-is-arthritis/arthritis-facts-and-figures)), and approximately 1.7 billion people around the world live with muscular and skeletal pain ([source](https://www.who.int/news-room/fact-sheets/detail/musculoskeletal-conditions)). Maybe it's your family, your neighbors, or various stiffness felt in the mornings.

## **What it does**

To increase access to mobility and strength exercises for all ages, Pain2Go showcases a human muscular model with basic areas of focus. Clicking into these common areas provides a selection of exercises, scalable to different difficulties to improve accessibility for everyone. 💪

## **How we built it**

Using vanilla JavaScript, HTML and CSS, and a frequented Figma board, we spent the 24 hours with the goal of creating an intuitive resource for others, while learning by trial and error in the process.

## **Challenges we ran into**

There were several merge conflicts and major decisions to make. As beginners, we raced against time balancing tutorials, debugging, and problem-solving. Whether it was creating a navigation bar, overlaying images, or creating interactive elements, we made it through.

## **Accomplishments that we're proud of**

The main function of our project was achieved! We're also proud to see the style come together.

## **What we learned**

Learning online and from each other, we took steps (and leaps) forward with our collective programming, design, and collaboration knowledge.

## **What's next for Pain2Go**

* Create a mobile app to increase accessibility
* Add a survey asking about pains, equipment, and previous history of injury to create personalized workout plans (paid; consulting)
* Connect users to trusted, professional physiotherapists nearby
* Increase the selection of exercises & test the ones that work
* Increase the muscles targeted
* Add a Contact Us page
## Inspiration

An abundance of qualified applicants lose their chance to secure their dream job simply because they are unable to effectively present their knowledge and skills when it comes to the interview. The transformation of interviews into a virtual format due to the Covid-19 pandemic has created many challenges for applicants, especially students, as they have reduced access to in-person resources where they could develop their interview skills.

## What it does

Interviewy is an **Artificial Intelligence** based interface that allows users to practice their interview skills by providing them an analysis of their video-recorded interview based on their selected interview question. Users can reflect on their confidence levels and covered topics by selecting a specific time-stamp in their report.

## How we built it

This interface was built using the MERN stack. In the backend, we used the AssemblyAI APIs for monitoring confidence levels and covered topics. The frontend used React components.

## Challenges we ran into

* Learning to work with AssemblyAI
* Storing files and sending them over an API
* Managing large amounts of data given from an API
* Organizing the API code structure in a proper way

## Accomplishments that we're proud of

• Creating a streamlined Artificial Intelligence process
• Team perseverance

## What we learned

• Learning to work with AssemblyAI and Express.js
• The hardest solution is not always the best solution

## What's next for Interviewy

• Currently, the confidence levels are measured by analyzing the words used during the interview. The next milestone of this project would be to analyze the alterations in the tone of the interviewees in order to provide more accurate feedback.
• Creating an API for analyzing the video and the gestures of the interviewees
## Inspiration Many people feel unconfident, shy, and/or awkward doing interview speaking. It can be challenging for them to know how to improve and what aspects are key to better performance. With Talkology, they will be able to practice in a rather private setting while receiving relatively objective speaking feedback based on numerical analysis instead of individual opinions. We hope this helps more students and general job seekers become more confident and comfortable, crack their behavioral interviews, and land that dream offer! ## What it does * Gives users interview questions (behavioural, future expansion to questions specific to the job/industry) * Performs quantitative analysis of users’ responses using speech-to-text & linguistic software package praat to study acoustic features of their speech * Displays performance metrics with suggestions in a user-friendly, interactive dashboard ## How we built it * React/JavaScript for the frontend dashboard and Flask/Python for backend server and requests * My-voice-analysis package for voice analysis in Python * AssemblyAI APIs for speech-to-text and sentiment analysis * MediaStream Recording API to get user’s voice recordings * Figma for the interactive display and prototyping ## Challenges we ran into We went through many conversations to reach this idea and as a result, only started hacking around 8AM on Saturday. On top of this time constraint layer, we also lacked experience in frontend and full stack development. Many of us had to spend a lot of our time debugging with package setup, server errors, and for some of us even M1-chip specific problems. ## Accomplishments that we're proud of This was Aidan’s first full-stack application ever. Though we started developing kind of late in the event, we were able to pull most of the pieces together within a day of time on Saturday. We really believe that this product (and/or future versions of it) will help other people with not only their job search process but also daily communication as well. The friendships we made along the way is also definitely something we cherish and feel grateful about <3 ## What we learned * Aidan: Basics of React and Flask * Spark: Introduction to Git and full-stack development with sprinkles of life advice * Cathleen: Deeper dive into Flask and React and structural induction * Helen: Better understanding of API calls & language models and managing many different parts of a product at once ## What's next for Talkology We hope to integrate computer vision approaches by collecting video recordings (rather than just audio) to perform analysis on hand gestures, overall posture, and body language. We also want to extend our language analysis to explore novel models aimed at performing tone analysis on live speech. Apart from our analysis methods, we hope to improve our question bank to be more than just behavioural questions and better cater to each user's specific job demands. Lastly, there are general loose ends that could be easily tied up to make the project more cohesive, such as integrating the live voice recording functionality and optimizing some remaining components of the interactive dashboard.
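For reference, the AssemblyAI speech-to-text and sentiment flow described under "How we built it" can be sketched against the public v2 REST endpoints roughly as follows; the API key and file name are placeholders, and the actual backend wraps this in Flask.

```python
# Rough sketch of the AssemblyAI flow for speech-to-text plus sentiment,
# via the public v2 REST API; API key and file name are placeholders.
import time
import requests

API_KEY = "YOUR_ASSEMBLYAI_KEY"
HEADERS = {"authorization": API_KEY}

def transcribe_with_sentiment(wav_path: str) -> dict:
    # 1) Upload the recorded answer.
    with open(wav_path, "rb") as f:
        upload = requests.post("https://api.assemblyai.com/v2/upload",
                               headers=HEADERS, data=f).json()
    # 2) Request a transcript with sentiment analysis enabled.
    job = requests.post("https://api.assemblyai.com/v2/transcript",
                        headers=HEADERS,
                        json={"audio_url": upload["upload_url"],
                              "sentiment_analysis": True}).json()
    # 3) Poll until the job finishes, then return the full result.
    while True:
        result = requests.get(
            f"https://api.assemblyai.com/v2/transcript/{job['id']}",
            headers=HEADERS).json()
        if result["status"] in ("completed", "error"):
            return result
        time.sleep(3)
```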
## Inspiration Many of us have a hard time preparing for interviews, presentations, and any other social situation. We wanted to sit down and have a real talk... with ourselves. ## What it does The app will analyse your speech, hand gestures, and facial expressions and give you both real-time feedback as well as a complete rundown of your results after you're done. ## How We built it We used Flask for the backend and used OpenCV, TensorFlow, and Google Cloud speech to text API to perform all of the background analyses. In the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations. ## Challenges we ran into We had some difficulties on the backend integrating both video and voice together using multi-threading. We also ran into some issues with populating real-time data into our dashboard to display the results correctly in real-time. ## Accomplishments that we're proud of We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches. ## What we learned We learned that planning ahead is very effective because we had a very smooth experience for a majority of the hackathon since we knew exactly what we had to do from the start. ## What's next for RealTalk We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with websockets so we can push data directly to the client rather than spamming requests to the server. ![Image](https://i.imgur.com/aehDk3L.gif) Tracks movement of hands and face to provide real-time analysis on expressions and body-language. ![Image](https://i.imgur.com/tZAM0sI.gif)
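As a rough illustration of the per-frame face tracking that feeds this kind of analysis, here is a minimal OpenCV loop; it uses OpenCV's bundled Haar cascade rather than the TensorFlow models described above, so treat it as a sketch only.

```python
# Illustrative sketch of per-frame face tracking with OpenCV's bundled
# Haar cascade; not the trained models used in RealTalk itself.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                      # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                 # draw boxes for the preview feed
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```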
## Inspiration

Published results show that 'only 8% of cancer patients enroll in cancer trials' for reasons including cost, lack of accessibility, and lack of flexibility.

## What it does

Delve serves to lower the barrier of entry for finding and participating in clinical trials and studies whilst providing trusted open data to the world. While collaborating with both Researchers and Participants, Delve uses AI to simplify complicated writing with the goal of reducing the spread of fake news.

## How we built it

We went from low-fi to mid-fi to high-fi designs, building the backend and frontend asynchronously before a final merge, with a lot of challenging issues and learning moments along the way.

## Challenges we ran into

Deployment.

## Accomplishments that we're proud of

We managed to get it up and working.

## What we learned

Even when you've done something countless times, when it goes wrong it can still be very costly. Never be complacent.

## What's next for Delve

Polishing the web app, deploying it more securely, and adding & polishing the features.
## Inspiration

Drug discovery is one of the largest and most significant markets for improving human health. For today's most challenging and deadly diseases, like cancer and neurodegenerative disorders, the cost of developing a drug has ballooned to $2 billion and failure rates have reached 97%. An extensive breakdown of the most prominent issues plaguing early-stage oncology trials has revealed that patient recruitment is the most critical pain point in running a successful trial. Cancers are fundamentally heterogeneous and notoriously difficult to develop drugs for. What's especially challenging is that today's patients often have to rely on clinical trials as a last resort for a cure. Recruitment models for today's most pressing clinical trials are incredibly limited; local clinics, hospital networks, and patient recruitment laws are highly variable. This makes finding and helping patients get matched to the right trial an incredibly challenging task due to location variation, lack of EHR interoperability, and few resources to understand and connect with the trials that make the most sense to enroll in. Our solution aims to solve this bidirectional problem through Unitrial: a platform positioned for both patients and CROs that uses a large corpus of live clinical trials, updated daily from ClinicalTrials.gov, together with patient EHRs and medical records (genomic screens, lab test results, payer codes, and insurance claims) to match patients with trials and trials with patients. By streamlining this process, we aim to improve affordability, expedite the time to FDA drug approval, and more effectively bring patients closer to available trials that can cure their disorders faster.

## What it does

Our website takes queries in the form of a patient EHR profile (PDF) or medical condition descriptions (dictionary input) and returns the top clinical trial matches using an efficient RAG system powered by Mistral-7B. We generated a novel data schema that leverages FHIR and MeSH IDs to log medically relevant tags and prevent the model from hallucinating from unstructured data alone. We are able to obtain both the technical specifications of trials (enrollment criteria, measure of success, metadata on trial length, operations, sponsors) and the unstructured data that is critical to many of the logistical decisions around trial enrollment (inclusion description, title and goals, relative rank of sources, sponsor conflict of interest).

We have two visualization features. On the patient end, a patient can upload PDFs of their EHR and medical data, from which we extract the raw PDF text and intelligently format it to our internal EHR schema so it is interoperable with existing FHIR protocols. We also de-identify any patient information so we can effectively pass data back into our system for CROs to search and find patients as well. Then, the patient can either enter a custom prompt or choose from a set of well-established prompts to search the clinical trial space with their EHR and medical data. These EHR embeddings are uploaded into our system (in the future with real data, under HIPAA compliance) so that clinical trial CROs can conduct more refined, specific searches across patients by either dictionary key values or a RAG QAS.
We believe matching AI selection with real-world information from MeSH IDs and other biological tags can provide the most accurate information about clinical trials and best help match patients across different diseases.

## How we built it

A) We access existing de-identified patient data from EHRs and preprocess the unstructured data into a structured format (key biomarkers formatted + raw text as input).
B) We pass the formatted key biomarkers into MED-BERT for medical-specific data and BGE for unstructured data to create novel embeddings for our RAG pipeline.
C) We collect a list of clinical trial overviews with relevant trial parameters, information, timelines, and other necessary variables, and preprocess them using our modified REST API built on top of the existing ClinicalTrials.gov API. This API is designed to be incredibly flexible for ANY disease query: simply search which trials you'd like to learn more about and let the RAG system do the rest.
D) We set up a vector database (using ChromaDB) for the clinical trial corpus and the initial EHR patient cluster.
E) We use VDB filtering to reduce the search space and select the top N documents from the database (top N is determined by a similarity search across the prompt, the uploaded EHR (embeddings fetched by document ID in the corpus), and the clinical trial embeddings).
F) Using BERT embeddings with the query, we calculate document similarity scores between the data and its respective "chunks" (trials), then fetch the top N documents and the top I chunks in each document for the final response. (A simplified ChromaDB sketch of steps D-F appears at the end of this write-up.)

## Business Model

We believe the best part of our platform, and what distinguishes it from existing medical clinical trial recruitment products, is not only our use of a novel RAG system that has proved to have incredibly accurate decision-making power for trial recruitment, but also how our application matches multiple stakeholders to build an asymmetric moat across patient data, active patients, multiple clinical trials, dedicated CRO partners, and specialists who mediate the AI model results. Existing software for CRO clinical trial recruitment is limited to statistical models and existing trial parameters to recruit from a stratified set of patients. Unitrial is positioned uniquely at the intersection of cutting-edge AI technology and clinical trial recruitment, offering a solution that leverages the power of real-time data analytics and patient-specific information. Our revenue model will capitalize on this by implementing a subscription service for CROs and healthcare providers who seek access to our refined patient-matching system. Additionally, we will explore value-based pricing models where fees are aligned with the outcomes of successful patient matches, which not only ensures a higher ROI for our clients but also aligns our business interests with the health outcomes of patients. Moreover, our platform can offer an advertisement option for pharmaceutical companies to feature their trials more prominently, ensuring higher visibility among relevant candidates. This dual revenue stream from subscriptions and targeted advertising provides a sustainable business model while continuously improving the platform's capabilities.

## Challenges we ran into

Finding good data to train our model was quite challenging. Once we found some suitable EHR data, we still had to do a lot of data cleaning and modification with some Python scripts. Furthermore, when building our RAG pipeline, we had trouble integrating with LangChain and making the Mistral model rely exclusively on our database of clinical trials.
It would frequently hallucinate, which was a major problem given our use case.

## Accomplishments that we're proud of

We're proud of refining the model and creating a product capable of helping potential patients match with life-saving drugs. We've extensively tested our model and the hallucination rate is far lower than it initially was.

## What we learned

We learned a lot about the inequalities in finding clinical trials and how to integrate various LLM frameworks. We also learned the specifics of building a RAG pipeline, like working with different embedding techniques and vector space modeling and querying, as well as accounting for scalability in the development process.

## What's next for Clinical Trial Matchmaker

Improve the model and add EHR data to our vector database so we can build out the Clinical Trial -> Patient matching side as well.
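Here is the simplified ChromaDB sketch referenced in steps D-F above; the collection contents and the query are placeholders, and for brevity it uses Chroma's default embedder instead of the MED-BERT/BGE embeddings we describe.

```python
# Simplified sketch of steps D-F using ChromaDB; ids, trial text, and the
# query are placeholders, and Chroma's default embedder stands in for the
# MED-BERT/BGE embeddings described above.
import chromadb

client = chromadb.Client()
trials = client.create_collection("clinical_trials")

# D) Index trial summaries pulled from ClinicalTrials.gov.
trials.add(
    ids=["NCT00000001", "NCT00000002"],
    documents=["Phase II trial of drug X in HER2+ breast cancer ...",
               "Observational study of biomarker Y in NSCLC ..."],
    metadatas=[{"condition": "breast cancer"}, {"condition": "lung cancer"}],
)

# E/F) Retrieve the top matches for a de-identified patient description.
hits = trials.query(
    query_texts=["62-year-old with HER2-positive breast cancer, prior trastuzumab"],
    n_results=2,
)
print(hits["ids"][0])
```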
disclaimer: we didn't have enough time during the video demo to talk about everything we made for our project and explain our idea in detail so here's a thick 📕 for you to read if you're interested oop!

## Inspiration 💡

While discussing the recent cold weather in Vancouver, our friend Ryan remarked that he was keeping himself warm by mining cryptocurrency. After some light Google searching, we found out that it may be possible to leverage this idea in locations outside of his parents' basement.

![ryan-Heater.gif](https://i.postimg.cc/Wbrym1z4/ryan-Heater.gif)

## Initial Research 🔎

What we found out is that there are places where the heat generated by computers would not be considered a feature, but a useless byproduct. For example, cooling can be up to 40% of a data center's total energy usage. Large bitcoin farms are experiencing the same issues, employing fans and air conditioners to move heat away from their devices.

![Screen-Shot-2022-02-20-at-6-16-33-AM.png](https://i.imgur.com/gxwHBuv.png)

## Problem 💭

Is there a way to reduce cooling loads by redistributing this heat to people who need it, making these technologies more sustainable?

## What is the Box with Rounded Corners? 📦

The *Box with Rounded Corners* (*The Box* in short) is a platform where cryptocurrency miners can be used to provide free heating for small businesses. It consists of a custom-designed cryptocurrency mining rig, a smart thermostat system, and a data visualization platform powered by AI and ML that lets business owners heat their spaces more efficiently and effectively.

![flashing-Hardware.gif](https://i.postimg.cc/LXtqBPcT/flashing-Hardware.gif)

*The Box* is a 3d-printed housing that stores the computing hardware required to mine crypto. William custom-designed an ASIC chip that accelerates sha256 hashing. This design was made using Google's open-source SkyWater PDK and we hope to realize it into silicon if possible. In the future, this computing box will work in conjunction with energy storage hardware.

![chip.png](https://i.imgur.com/aDMgwiP.png)

*Therma*, another part of our product, is a smart thermostat. *Therma* communicates with the box to control heating. By setting *Therma* higher, *The Box* will provide computing power or mine crypto. It uses ML models trained with **Mage** to determine which heating source is most efficient depending on external factors like predicted crypto price, past weather trends, and the carbon footprint of the electricity source.

![therma.gif](https://i.postimg.cc/bYFG0qM4/therma.gif)

Christy created a dashboard that helps the user visualize what happens within their *Boxes* and how it impacts them. The main page shows stats for daily usage, an overview of monthly usage, and crypto mined, as well as a comparison between using crypto mining (renewable energy resources) vs fossil fuels to heat up their space. There are also many other tabs that allow the user to control which of their boxes should be actively mining, see more robust visualizations of environmental impact using **VMWare's Wavefront**, regulate temperature through **Mage's ML model predictions**, and view a breakdown of costs and earnings. There is also a donation option that encourages users to donate to offset their carbon footprint. We initially started our project by doing visualizations with graphs and diagrams from MUI. However, we pivoted to using **VMWare's Wavefront** after discovering how comprehensive and informative the graphs were for our users.
We wanted to really showcase the sustainability factor of the *Box with Rounded Corners* product to encourage greener mining, so it was useful for us to be able to highlight the data around environmental impact. While developing our product, we thought it would be amazing to have a way to share the dashboard without logging in. Our product uses **Mage** to train all our machine learning models, which predict the most efficient and effective times to mine crypto or raise room temperature, as well as when there will be an excess of cleaner, renewable energy (i.e., certain hours of the day have more energy generated from windmills) during which more crypto can be mined. We were extremely amazed at how easy it was to train a model on the platform, because one of us had no machine learning experience before. However, we did wish it could be more customized and allow for finer adjustment of parameters.

## Advantages for Small Businesses 👍

By using the *Box with Rounded Corners*, small businesses can be reimbursed for their energy usage while lowering their heating bills. 43% of a small business's energy usage is actually dedicated to space heating.

## Theoretical Savings 💾

In 2019, bitcoin mining consumed 111.7 terawatt-hours of power, roughly equivalent to the power consumption of the Netherlands. If we were to assume a conservative savings estimate of 20% due to the reduction of cooling loads, the *Box with Rounded Corners* could theoretically provide savings of 11.17 terawatt-hours per year if 50% of all bitcoin miners were to switch to our platform. The energy saved constitutes a reduction of 4.75 million tons of CO2 emissions per year and is enough to power 434,724 households for a year.

[How Bitcoin's vast energy use could burst its bubble - BBC News](https://www150.statcan.gc.ca/t1/tbl1/en/tv.action?pid=2510006001)

<https://www.eia.gov/tools/faqs/faq.php?id=74&t=11>

## How We Built It 🔨

The smart thermostat communicates with the box via an esp32 over wifi. *The Box* hosts a local copy of the backend server that talks with the **Mage** API for our ML needs. The box also hosts the dashboard, which provides visualization in conjunction with **VMWare**. We also designed 3d models of our *Box* and *Therma* products using SolidWorks and printed them out using a 3d printer.

![Screen-Shot-2022-02-20-at-6-49-36-AM.png](https://i.imgur.com/ibEt0El.png)

## About the Hardware 🔌

We built several hardware projects to actualize our project. The smart thermostat "Therma" was created using an esp32 for the brains, which connects over wifi to the box. The esp32 controls relays to interface with existing heating networks while also providing an interactive display showing the status of the home. It also sends temperature data to the box for further processing. Unfortunately, due to shipping delays we were not able to add occupancy sensors to the thermostat, which would give our ML models more data to create a better prediction. On a side note, the green cover plate is 3d printed in glow-in-the-dark filament, so changing the temperature in the dark of night will be a problem of the past.

We also designed and tested an ASIC meant to accelerate sha256 hashing. This design was heavily inspired by [Joachim Strömbergson's](https://github.com/secworks) design and was truly an interesting project. This chip is meant to be included in the box to increase the efficiency and speed of crypto mining. This is similar to how purpose-built crypto miners are made and we would love to learn more.
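For transparency, the headline figure in the Theoretical Savings section above follows from a simple back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the savings estimate quoted above.
btc_mining_twh_per_year = 111.7      # 2019 bitcoin network consumption (TWh)
cooling_savings_fraction = 0.20      # conservative reduction from reusing heat
adoption_fraction = 0.50             # share of miners switching to the Box

saved_twh = btc_mining_twh_per_year * cooling_savings_fraction * adoption_fraction
print(f"Estimated savings: {saved_twh:.2f} TWh per year")   # ~11.17 TWh
```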
## Challenges We Ran Into 🔥

Our biggest challenge was trying to build Web 3.0 using Web 2.0 technologies.

Some of the challenges we faced were hardware-related. Our 3d printer experienced clogging issues and bed adhesion problems, and the filament was too old, so we had to reprint our parts four times in total. Additionally, we did not have any experience working with sha256 hashing, so there was a steep learning curve to understand how to design the ASIC chip. On the software end, there was some difficulty cleaning up datasets so that the trained models had realistic predictions.

![3d-Printer.jpg](https://i.postimg.cc/mkFFzqbP/3d-Printer.jpg)

## Accomplishments We're Proud Of 🌟

We are extremely proud of what we were able to accomplish during this hackathon. We have been each other's hackathon buddies for a while now, but we've never been able to build and develop so much within a weekend before. We are particularly proud of being able to understand each others' strengths and weaknesses so we could help each other out and teach one another new things during the project. William learned a lot about sha256 hashing while Christy finally tried machine learning for the first time!

## What's Next ▶️

Our final goal is to realize a network of decentralized edge servers capable of running applications on demand. This would involve creating a network of boxes that can communicate and distribute applications, like IPFS but for applications. The final form of our network would be able to provide decentralized edge computing to any application that wants to be both decentralized and sustainable.

We would also love to build out the energy storage hardware. This would involve building and testing a battery pack that will safely charge and discharge to supplement the grid. It would also involve building the ML algorithms needed to accurately predict when to charge and discharge the batteries so the system is not only sustainable but also provides redundancy to the grid.

Additionally, we would love to see if our ASIC chip actually works. To achieve this, we would send our design to a foundry like TSMC (we need a formal verification of the chip prior to this). With the limited time of a hackathon, only a very basic test suite was performed on the chip.

With the creation of our platform, this would create a robust, decentralized, and sustainable computing network available everywhere for everyone to use.

## Ethics 🌈

Cryptocurrency has been at the forefront of criticism through its ethical and environmental impact. A single bitcoin transaction alone burns approximately 2,292.5 kilowatt-hours of energy - enough to power a house for 78 days. The environmental cost of cryptocurrency is what steers potential Web 3.0 users away, driving an ever-expanding digital divide. Because of these reasons, we believe that the excess waste created from cryptocurrency mining is ethically unsustainable for the future. Although efforts have been made using technologies such as renewable energy, this does not solve the issue of the core byproducts of cryptocurrencies, such as e-waste and waste heat. Our team has proposed to relocate this waste, specifically the thermal energy created by cryptocurrency miners, to heat small businesses at a reimbursed cost. *Box with Rounded Corners* strives to empower small businesses while offsetting the environmental impact of crypto mining through our dedicated platform.
We plan on creating opportunities for users to invest these earnings into environmental organizations to empower positive impact within their own communities. While we acknowledge and identify that by providing a more sustainable solution to mining, we are also in turn promoting the use of cryptocurrency. Intermediate environmental costs such as the amount of energy needed to produce, buy and add products to the blockchain are high and our solution does not directly tackle this preliminary step. By adding a ‘safety net’ solution with our heat relocation, we are still enabling users to mine more cryptocurrencies and continue to contribute to energy waste. While crypto mining is inevitable, it’s important to acknowledge additional ethical implications other than just the environmental cost connected to the purpose of our organization. Targeting small business owners as our user base can be intimidating especially in the midst of the digital gap and the new concept of cryptocurrency. We recognize that there may be many business owners who may not trust the viability of our product and assume ulterior motives. Our team attempts to alleviate user experience by supplying small business owners with all the tools and transparency needed to boost confidence in utilizing our products. With our dashboard and familiar functionalities, users of all skill levels have been considered when implementing the UI by creating comprehensive labels and transparent data visualization while following WCAG3 standards. It is the intention of *Box with Rounded Corners* to use smart contracts and blockchain technologies to ensure that no parties can alter any details within the agreement. While this will cause some environmental impacts due to the energy required to add to the blockchain, we will strive to minimize this impact while maintaining trust between our company and our users. In the future, we hope to tackle our ethical concerns by conducting user research. It’s important for us to pinpoint the exact needs, behaviors, and pain points of our users in order to find the best solution - instead of just prescribing one.
## Inspiration

Twitter has become an integral part of our lives. It has the power to influence political outcomes and organize landmark events. However, rather than being a community that allows users to share ideas and learn from each other, radical individuals—who openly spread hatred and silence discourse—have turned Twitter into an unsafe site. We wanted to change that. The issue is that Twitter users are not held accountable for their actions. As such, we created a web app that allows users to view the sentiments of a user's tweets.

## What it does

We use the Twitter API to fetch a user's tweets, then run them through the Google Cloud Natural Language API to get a sentiment analysis and classification. We then display the tweets using a graph visualization tool, sorted by sentiment. Users can click a specific tweet and view its sentiment score as well as the tweet itself.

## How we built it

We used React for the front end and Node.js + Express for the backend. The project is hosted on Heroku and is linked to our domain retrieved from Domain.com.

## Challenges we ran into

We initially had trouble understanding the output of the Natural Language API. In the end, we were able to understand the output and use the data accordingly.

## Accomplishments that we're proud of

Displaying the tweets in a very visually appealing manner with D3.js and building a smooth UI with React.

## What we learned

How to use React to build a responsive web app.

## What's next for TwitterSafe.Space

Using OAuth with the Twitter API to let users sign into their own accounts and perform an analysis on private accounts that they have access to. As well, we are hoping to make it into a Chrome extension so it can be accessed directly from Twitter.
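Our backend is Node, but for illustration here is the equivalent per-tweet sentiment call in Python against the same Natural Language API; the sample tweet is a placeholder.

```python
# Python sketch of the Natural Language sentiment call made per tweet;
# the actual backend is Node, and the sample tweet is a placeholder.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def tweet_sentiment(text: str) -> float:
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    result = client.analyze_sentiment(request={"document": document})
    # Score ranges from -1.0 (very negative) to 1.0 (very positive).
    return result.document_sentiment.score

# print(tweet_sentiment("This community has been so welcoming!"))
```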
## Inspiration

The inspiration came from the fact that modern democracy is flawed. Unfortunately, government lobbying, voter discrimination, misdirection, and undelivered promises are commonplace in democracies worldwide. This has led to declining voter trust and less participation by the public during key moments, which can seriously affect the power of the people over time.

## What it does

Our online web application allows the people's vote to matter once again. It is a platform used for voting on different subjects, where the choices are carefully analyzed by experts before being sent to the voters. When an organizer creates a poll, the options are sent to specific experts who are not only knowledgeable in the field of that poll's subject but also come from a diverse group, to minimize bias. The experts then have the opportunity to comment on the organizer's options, stating each choice's pros and cons. After the analysis, the poll is sent to a representative pool of people who rank the options using the experts' comments. This allows the average voter to better understand the different stakes at play and make a more informed decision. The organizer can then check the votes and see the winning side.

## How we built it

In the front end, we mostly used React-Bootstrap for the UI. We created multiple pages for the organizers, experts, and voters. In the back end, we used Firebase to store the vote data and the experts' advice. It is also connected to a web scraper that searches for the most relevant experts and voters. That web scraper uses the Twilio API to text people, making commenting and voting more user-friendly and intuitive.

## Challenges we ran into

We ran into a lot of UI issues using React-Bootstrap, especially when it came to connecting the front end with the database. We had to deal with multiple cases where the UI wouldn't load because of a single error between the UI and Firebase.

## Accomplishments that we're proud of

The web scraper connected to the Twilio API is one of the accomplishments we are proudest of. It runs fast and integrates smoothly with the Twilio API, allowing it to text dozens of participants within seconds.

## What we learned

We learned that being a perfectionist can be a hindrance during a hackathon. As we worked increasingly on the front end, some of us debated the look of our web application. Although the discussion refined our UI, it also wasted valuable time that could have been used to implement more functionality.

## What's next for Time2Vote

We will later implement different algorithms to count votes and add proper authentication to our system.
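As a hedged sketch of the Twilio step described above (assuming the scraper side is Python), the outreach boils down to a few calls; credentials, numbers, and the poll URL are placeholders.

```python
# Hedged sketch of texting selected participants via Twilio; the account
# credentials, phone numbers, and poll URL are placeholders.
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")

def invite_voters(poll_url: str, phone_numbers: list[str]) -> None:
    """Text each selected participant a link to comment on or rank the poll."""
    for number in phone_numbers:
        client.messages.create(
            to=number,
            from_="+15550001234",   # Twilio number (placeholder)
            body=f"You've been selected for a Time2Vote poll: {poll_url}",
        )

# invite_voters("https://time2vote.example/poll/42", ["+15551112222"])
```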
## Inspiration

Studies have shown that social media thrives on emotional and moral content, particularly content that is angry in nature. Similar studies have shown that these types of posts affect people's well-being, mental health, and view of the world. We wanted to let people take control of their feed and gain insight into the potentially toxic accounts on their social media feed, so they can ultimately decide what to keep and what to remove while putting their mental well-being first. We want to make social media a place for knowledge and positivity, without the anger and hate that it can fuel.

## What it does

The app performs an analysis of all the Twitter accounts the user follows and reads their tweets, checking for negative language and tone. Using machine learning algorithms, the app can detect negative and potentially toxic tweets and accounts to warn users of the potential impact, while giving them the option to act how they see fit with this new information. In creating this app and its purpose, the goal is to **put the user first** and empower them with data.

## How We Built It

We wanted to make this application as accessible as possible, and in doing so, we made it with React Native so both iOS and Android users can use it. We used Twitter OAuth to be able to access who they follow and their tweets while **never storing their token** for privacy and security reasons. The app sends the request to our web server, written in Kotlin and hosted on Google App Engine, where it uses Twitter's API and Google's Machine Learning APIs to perform the analysis and send the data back to the client. By using a multi-threaded approach for the tasks, we streamlined the workflow and improved response time by **700%**, now being able to manage an order of magnitude more data. On top of that, we integrated GitHub Actions into our project, and, for a hackathon mind you, we have a *full continuous deployment* setup from our IDE to Google Cloud Platform.

## Challenges we ran into

* While library and API integration was no problem in Kotlin, we had to find workarounds for issues regarding GCP deployment and local testing with Google's APIs.
* Since being cross-platform was our priority, we had issues integrating OAuth with its requirement for platform access (specifically for callbacks).
* If each tweet was sent individually to Google's ML API, each user could easily have required over 1,000 requests, overreaching our limit. Using our technique to package the tweets together, even though it is unsupported, we were able to reduce those requests to a maximum of 200, well below our limits.

## What's next for pHeed

pHeed has a long journey ahead: from integrating with more social media platforms to new features such as account toxicity tracking and account suggestions. The social media space is one that is rapidly growing and needs a user-first approach to keep it sustainable; ultimately, pHeed can play a strong role in user empowerment and social good.
## Inspiration

We thought, why entertain your pet when you can get a robot to do it for you? We have seen offerings from many companies and have decided that they don't quite hit the mark in terms of appearance: these offerings often feel unalive and uninspiring. We wanted to create an offering that resembled and imitated a cat!

## What it does

Robo Cat acts as a friendly cat companion for any pet left home alone. It provides an engaging experience for pets at home. The Robo Cat can use a laser pointer, drive around, and most importantly meow and shake its tail. It effectively acts as a distraction for any lonely pet.

## How we built it

Robo Cat consists of multiple integrated systems, including wheeling, cat-noise imitation, tail wagging, a rotating laser, and a monitor, which together create a functioning robot.

To begin with, the wheeling system utilizes 2 DC motors and a proximity sensor in order to operate. In detail, the DC motors are attached to wheels, forcing the wheels to turn when power is provided. Additionally, when the proximity sensor detects that an object is nearby, one of the wheels stops while the other continues to operate, forcing the mechanism to turn.

Secondly, a buzzer has been hardcoded at different frequencies in order to imitate the meowing of a cat, and this system runs continuously while the mechanism operates. Alongside the cat noise, the mechanism's tail wags continuously with the use of a servo motor that turns between set angles.

Thirdly, in order to create a laser that can rotate to any angle, a dual system utilizing servo motors has been connected to a laser, and both of these mechanisms turn on when a switch is clicked. In detail, the bottom servo randomly turns between 0 and 180 degrees along the x axis, while the top servo (which is connected to the bottom servo and hence rotates horizontally with it) turns between 0 and 180 degrees along the y axis. Moreover, a laser is connected to the top servo, so it sweeps randomly across the x-y plane as the servos move.

Lastly, an LCD monitor has been connected to the side of the mechanism, and it displays a welcome message, instructions, and other messages that aid its imitation of a cat.

## Challenges we ran into

During our iterative design process, we ran into multiple challenges, including difficulty creating a meow noise, using the force sensor, and getting the DC motors to work seamlessly with the proximity sensors. However, we persevered through the challenges and created a final functioning design thanks to our creative and critical thinking skills.

## Accomplishments that we're proud of

We are very proud of many mechanisms inside the robot. Most importantly, the meowing is done using a piezo buzzer, since using a speaker would require a proprietary driver. The driving mechanism required some more complex circuit analysis to really understand how the DC motors were driven. The laser movement mechanism was a quick fix to a lack of gearbox design time, and the tail is just adorable!

## What we learned

We learned how to utilize multiple components that we had never interacted with or utilized before, including an LCD monitor, a laser, a force sensor, a buzzer, and a speaker. Additionally, we learned how to integrate multiple systems together in order to create a robot.
## What's next for Robo Cat Robo Cat is still in the very early stages of development, and its appearance is not the best, but we would like to work towards a cohesive, unified Robo Cat. A wooden construction with 3D-printed parts is next in line in the development cycle of Robo Cat, and eventually, proprietary printed circuit boards with robust connections!
## Inspiration Roombas are a great way of cleaning your apartment while you're away from home. However, 80% of the time one sits in your closet all alone, without any companionship or purpose, so we decided to give it more than one task. This project aims to satisfy users' desire to learn basic dance moves while taking advantage of having a Roomba at home (and finally giving meaning to the Roomba's life). ## What it does Beauty and the Roomba (TM) aims to provide busy millennials with the ability to learn very basic Waltz steps by following the Roomba. The Roomba wants to provide more than just cleaning services by enabling you to get more use out of it. We hope that by teaching you basic steps, the Roomba will help you gain more confidence, perhaps join a dance class, learn something new quickly, and interact with the robot, as robots are known to often be anthropomorphized. ## How we built it We built it using PySerial, the iRobot Create 2 Open Interface, and Python, along with iRobot's opcode documentation. The interface was used to send the opcodes described in the Create 2 manual from Python. For inspiration we also used open-source code on GitHub and read about different hacks that had already been built with the Roomba. ## Challenges we ran into The greatest challenge encountered during these last 24 hours was coming up with an idea. However, once the idea was settled there were several other challenges that we, as a team, ran into. The most notable challenge was connecting to and sending signals to the hardware, because we were not familiar with the process and there was a steep learning curve. Additional challenges included figuring out the coordinate system and matching the drive speed to the song's tempo. ## Accomplishments that we're proud of The most notable accomplishment, apart from the constant joy every time it worked, was getting the software to move the robot and play songs simultaneously. This was an accomplishment because we had envisioned the robot doing both at once, and seeing it come alive was nothing short of amazing. ## What's next for Beauty and the Roomba (TM) We hope that Beauty and the Roomba (TM) will gain more precise dance moves with sharper turns. We also hope that it will be able to take on other dance styles such as salsa and more complicated Waltz moves. We would also like it to eventually learn, through machine learning, how the user moves, in order to build more precise data about the user and recognize who is dancing (if there is more than one user). We also would like to add features such as cameras and sensors for precision.
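For readers curious what driving the Create 2 from Python looks like, here is a minimal sketch of the pattern described above, assuming the documented Open Interface opcodes (128 Start, 132 Full mode, 137 Drive, 140 Define Song, 141 Play Song), PySerial, and a placeholder serial port; it is an illustration, not the team's actual code.

```python
import time
import serial  # pyserial

def drive(ser, velocity_mm_s: int, radius_mm: int):
    """Opcode 137: signed 16-bit velocity and radius, high byte first."""
    ser.write(bytes([137]) + velocity_mm_s.to_bytes(2, "big", signed=True)
              + radius_mm.to_bytes(2, "big", signed=True))

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as ser:  # placeholder port
    ser.write(bytes([128, 132]))           # Start, then Full mode
    # Define song 0: two notes as (MIDI pitch, duration in 1/64ths of a second)
    ser.write(bytes([140, 0, 2, 60, 32, 64, 32]))
    ser.write(bytes([141, 0]))             # Play song 0
    drive(ser, 100, 200)                   # arc to the left while the song plays
    time.sleep(2)
    drive(ser, 100, -200)                  # arc to the right
    time.sleep(2)
    drive(ser, 0, 0)                       # stop
```

Because Drive and Play Song are independent opcodes, the robot keeps moving while the song plays, which is the simultaneity the team describes.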
## Inspiration We realized how difficult it is for visually impaired people to perceive objects coming near them, whether they are out on the road or inside a building. They encounter potholes and stairs, and things get really hard for them. We decided to tackle the issue of accessibility to support the Government of Canada's initiative to make workplaces and public places completely accessible! ## What it does This is an IoT device designed to be wearable or attachable to any visual aid being used. It uses depth perception to perform obstacle detection, and integrates Google Assistant for outdoor navigation and all the other "smart activities" that the assistant can do. The assistant provides voice directions (which can easily be routed to Bluetooth devices) and the sensors help in avoiding obstacles, which increases self-awareness. Another beta feature was to identify moving obstacles and play sounds so the person can recognize those moving objects (e.g. barking sounds for a dog). ## How we built it It is a Raspberry Pi-based device, and we integrated the Google Cloud SDK to use the Vision API, the Assistant, and the other features offered by GCP. We have sensors for depth perception and buzzers to play alert sounds, as well as a camera and microphone. ## Challenges we ran into It was hard for us to set up the Raspberry Pi, having had no background with it. We had to learn how to integrate cloud platforms with embedded systems and understand how microcontrollers work, especially since we are not from an engineering background and two members are high school students. Multithreading in the embedded architecture was also a challenge for us. ## Accomplishments that we're proud of After hours of grinding, we were able to get the Raspberry Pi working, and implemented depth perception, location tracking using Google Assistant, and object recognition. ## What we learned Working with hardware is tough: even though you can see what is happening, it is hard to interface the software with the hardware. ## What's next for i4Noi We want to explore more ways in which i4Noi can make things more accessible for blind people. Since we already have Google Cloud integration, we could add another feature that plays sounds for living obstacles so the user can take special care; for example, when a dog comes in front of the user, we produce barking sounds to alert them. We would also like to implement multithreading for our two processes and make this device as wearable as possible, so it can make a difference in the lives of its users.
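The obstacle-alert loop can be illustrated with a short Python sketch using the gpiozero library on a Raspberry Pi. The pin numbers and the 0.5 m threshold are placeholder assumptions, not i4Noi's actual wiring.

```python
from time import sleep
from gpiozero import Buzzer, DistanceSensor

# Pin numbers are placeholders; the real device's wiring may differ.
sensor = DistanceSensor(echo=24, trigger=23, max_distance=2.0)
buzzer = Buzzer(18)

ALERT_DISTANCE_M = 0.5  # assumed threshold for "obstacle ahead"

while True:
    if sensor.distance < ALERT_DISTANCE_M:
        # Three short beeps warn the wearer that something is close.
        buzzer.beep(on_time=0.1, off_time=0.1, n=3, background=False)
    sleep(0.1)
```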
losing
## Why MusicShift? When you listen to music, it belongs to you & your friends. We want to make sure you feel that way about every song. Switching aux cords, settling for lackluster playlists, or attempting to plan a playlist in advance doesn't let that happen. Through MusicShift, we make sure that the best playlist is also the most spontaneous. ## What is it? MusicShift is a plug-and-play, ever-evolving collaborative playlist in a box. Just plug in an aux cord, share a QR code with your friends, and let the best music start playing. MusicShift lets you collaborate on your playlists. You can add songs to your playlist, and even upvote songs that others have added so the more popular songs are played sooner. There is no limit to the songs you can search, and no limit to the number of people who can collaborate on a single playlist through real time multi-user sync. Playlists can have different purposes too. MusicShift is fun enough to be the music player during a carpool, and sophisticated enough to supply the music in public parks and restaurants. There's no need to worry about how your party's playlist fares when everyone is working together to pick the music. ## How it works MusicShift is made up of three parts: a hardware device, a progressive web app, and a database backend. The hardware device is a Raspberry Pi 2 which polls the backend (MongoDB database of tracks & votes used to generate rankings / play order) for the next Spotify song to play. Using Spotify’s Python bindings & taking advantage of its predictable caching locations, we intercept the downloaded streams and live route them to the aux output. Meanwhile, our progressive web app built using Polymer offers a live view into the playlist - what’s playing, what’s next, the ability to upvote/downvote songs to have them play sooner or later, and of course skip functionality (optional, configurable by the playlist creator). It loads instantly on users’ devices and presents itself as a like-native app (addable to the user lockscreen). ## What's next? Here's a look at the future of MusicShift: * User authentication, so you have complete control over your playlists * Playlist uploads through Spotify integration * Establish private and public streams for different settings and venues * NFC or Bluetooth beacon with MusicShift for easier connection
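A rough sketch of the Raspberry Pi's polling loop, in Python with pymongo. The collection layout (one document per queued track with `uri`, `votes`, `played`, and `added_at` fields) is an assumed shape for illustration, not MusicShift's actual schema, and `play` stands in for routing the Spotify stream to the aux output.

```python
import time
from pymongo import DESCENDING, MongoClient

tracks = MongoClient("mongodb://localhost:27017")["musicshift"]["tracks"]

def next_track():
    """Return the highest-voted unplayed track, oldest first on ties."""
    return tracks.find_one({"played": False},
                           sort=[("votes", DESCENDING), ("added_at", 1)])

def play(uri: str):
    print("now playing", uri)  # stand-in for streaming the Spotify track to aux out

while True:
    track = next_track()
    if track:
        play(track["uri"])
        tracks.update_one({"_id": track["_id"]}, {"$set": {"played": True}})
    time.sleep(2)
```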
## Inspiration As students who listen to music to boost our productivity, we wanted to create not only a music-sharing application but also a website that lets others discover new music based on where they are located. We were inspired by Pokemon Go but wanted to create a similar experience with music that any user can enjoy. Anywhere. Anytime. ## What it does Meet Your Beat implements a live map where users are able to drop "beats" (a.k.a. Spotify beacons). These beacons store a song on the map, allowing other users to click on the beacon and listen to the song. Using location data, users can see beacons posted around them that were created by others and can "tune into" a beacon by listening to the song stationed there. Multiple users can listen to the same beacon to simulate a "silent disco" as well. ## How I built it We first customized the Google Maps API to be hosted on our website, as well as fetching the Spotify data for a beacon when a user places their beat. We then designed the website and began implementing the SQL database to hold the user data. ## Challenges I ran into * Having limited experience with JavaScript and API usage * Hosting our domain through Google Cloud, which we were unaccustomed to ## Accomplishments that I'm proud of Our team is very proud of our ability to merge the various elements of our website, such as the SQL database hosting the Spotify data for other users to access on the website. We are also proud of the fact that we learned so many new skills and languages to implement the APIs and database. ## What I learned We learned a variety of new skills and languages to help us gather the data to implement the website. Despite numerous challenges, all of us took away something new, such as web development, database querying, and API implementation. ## What's next for Meet Your Beat * Static beacons with permanent stations at notable landmarks; these static beacons could feature the highest-rated songs * Sharing beacons with friends * AR implementation * Mobile app implementation
## Inspiration Hosts of social events/parties create their own music playlists and ultimately control the overall mood of the event. By giving attendees a platform to express their song preferences, more people end up feeling satisfied and content with the event. ## What it does Shuffle allows hosts to share their event/party playlists with attendees using a web interface. Attendees have the ability to view and vote for their favorite tracks using a cross-platform mobile application. The tracks in the playlist are shuffled in real time based on user votes. ## How we built it We used React for the web application (host) and React Native for the mobile application (client). Both applications access a central database made using MongoDB Stitch. We also used socket.io, deployed on Heroku, to provide real-time updates. ## Challenges we ran into Integrating MongoDB Stitch and socket.io in order to show real-time updates across multiple platforms. ## Accomplishments that we're proud of We're proud of the fact that we were able to create a cross-platform web and mobile application. Only a valid internet connection is required to access our platform. ## What we learned All team members were able to learn and experiment with a new tool/technology. ## What's next for Shuffle Integration with various music streaming services such as Spotify or Apple Music. The ability to filter playlists by mood using machine learning.
partial
## Inspiration I’ve struggled with disordered eating for a while now, but it’s always been pretty manageable for the most part. Maybe it was bad luck, maybe it was the pandemic wearing on me, but in January of 2021 it started becoming less manageable. Seeing a specialist helped, and so did the cognitive behavioral therapy plan we worked with. In the early phases of recovery, accountability is the most important factor. My food logs kept me accountable, but they also kept me isolated. One of the most insidious effects of an eating disorder is the way it steals you away from people you care about, and who care about you. Food brings us together, and it’s complicated enough dealing with an eating disorder. Having to scurry off to log my meals or just eat alone in my room was only a burden on top of a long and difficult road to recovery. I knew there had to be a better way. That’s where Food Logger comes into the picture. With the help of my teammates, Karson and Dorien, we created an app to help ease that burden. Food is meant to be enjoyed, and being present is part of that. Food Logger makes logging meals easier than ever, and subtle too. ## What it does Food Logger is meant to be paired with CBT-E, a cognitive behavioral therapy program for eating disorder recovery. Loggers are able to log their meals, keeping track of when, what, and where they ate, along with notes and other recovery-specific information. You can share your logs either by exporting them to a CSV or via a QR code if your clinician also has the Food Logger app. Additionally, there is a help center in the app that puts access to a variety of crisis hotlines right in your hands, should you need it. ## How we built it The app is built using Flutter and Dart. The vast majority of the app is local. Logs are stored in memory, and when shared via QR code, the data is encrypted before being stored in our Firestore database. Logs are retrieved from Firestore using Cloud Functions as endpoints. ## Challenges we ran into There were a good number of challenges, among them: * Figuring out how to format data and store it locally on the device * Managing asynchronous requests to the filesystem * Securely uploading encrypted logs to the internet without authenticating users * Making those encrypted logs accessible to the intended user for viewing * Design! ## Accomplishments that we’re proud of We are proud of being able to make an app that helps others. We are also proud that we were all able to work together extremely well when it came to bouncing ideas off of each other and coming up with solutions to our problems. Finally, we are proud that we were able to develop some important features, such as the ability to share logs and to hide logs. ## What we learned We learned that as a group we work extremely well together and that group work is an amazing way to get a lot done in a short amount of time. We also used this experience to continue developing our app programming and user interface skills. Most importantly, we learned that we can use our skills to make an impact in people's lives and make a difference. ## What’s next for Food Logger The next step is to get published on the App Store and Google Play Store. Subsequently, we have to get the word out that this app exists! While it does make logging significantly easier than using a spreadsheet, the ease of sharing is crucial as well.
Many clinicians use CBT-E, and if they used Food Logger alongside the program, they could save themselves and their patients a lot of trouble.
## Inspiration Millions of young people, especially kids and teens, suffer from eating disorders, often feeling overwhelmed by the recovery process. Traditional recovery tools, such as calorie tracking, can further entrench unhealthy behaviours and are often not suited to younger age groups. Inspired by the need to promote healthy habits in a fun, supportive way, we designed an app that gamifies the recovery process, helping to destigmatize eating disorders and support positive change. ## What it does Our mobile app helps kids and teens recover from eating disorders by allowing them to collect badges for trying new foods and food groups. Our solution takes the habit of meticulously tracking food intake commonly associated with eating disorders and replaces it with a simplified alternative and rewards that encourage healthy behaviours. Users are able to log different foods they've tried without the ability to view histories or log quantities and caloric values. Trying unfamiliar foods, different food groups, or foods from the user's personalized list of "fear foods" earns progress towards badges, which unlock accessories for an animal-friend companion cheering them along in their journey. The app focuses on building positive habits and reframes food as an opportunity for growth, while destigmatizing recovery with fun rewards and a kid-friendly UI. ## How we built it Using MongoDB, React Native, Figma, JavaScript, and Node.js, we created a smooth and interactive user experience. Our design focused on accessibility and simplicity to ensure a friendly environment. The Figma prototypes helped shape our UX to be playful and encouraging, while MongoDB efficiently handles the user data, allowing users to track their progress without overwhelming them with unnecessary information. ## Challenges we ran into Our biggest challenges were technical issues: merge errors, lost code, and environment problems. It took us a long time to set up Xcode and simulators to view our app. We ran into quite a few hiccups along the way, between unfamiliarity with developing for mobile and pushing and pulling before making sure edits were aligned between team members. ## Accomplishments that we're proud of We successfully completed our project in the limited time frame without having to make any significant trade-offs from our MVP or between design and code. This was also some of our first times using tools like React Native, and we were able to overcome roadblocks by collaborating as a team. ## What we learned Building this app taught us the importance of empathy in design. We learned how to balance functionality and care, ensuring that our app supports its users without becoming another source of stress. Technically, we honed our skills in full-stack development, mobile development, and version control. ## What's next for Bites for Badges We would implement more avatars and unlockables, along with the use of AI to sort and tag food into categories, to further improve our badge system and cover a greater range of cultural foods and cuisines. We would then like to conduct user tests and work with dieticians and mental health professionals to eventually develop educational features that teach users about healthy habits and eating.
## Inspiration Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order. ## What it does You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero setup or management necessary, as the program will completely ignore background noise and conversation. Even then, it will still take your order with staggering precision. ## How we built it The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu was done in Canva with a dash of Harvard colors. ## Challenges we ran into One major challenge was getting the different parts of the app (frontend, backend, and AI) to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle. ## Accomplishments that we're proud of We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled challenges that we were not previously comfortable with. ## What we learned We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment. ## What's next for Harvard Burger Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There would also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we could sell the product by designing marketing strategies for fast-food chains.
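The voice-to-order flow can be sketched as a small Flask endpoint like the one below. This is a hedged illustration: `extract_items` is a placeholder for the AI call that turns a transcript into structured menu items, and the route name and payload fields are assumptions rather than Harvard Burger's real API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
orders = {}  # session_id -> list of items; in-memory for the sketch only

def extract_items(transcript: str):
    """Placeholder for the LLM call that pulls structured items out of speech.
    A real implementation would prompt the model to return JSON like
    [{"item": "burger", "size": "large", "modifications": ["no onions"]}]."""
    return []

@app.route("/order", methods=["POST"])
def add_to_order():
    payload = request.get_json()
    session_id = payload["session_id"]
    items = extract_items(payload["transcript"])
    orders.setdefault(session_id, []).extend(items)
    return jsonify({"order": orders[session_id]})

if __name__ == "__main__":
    app.run(port=5000)
```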
losing
## Inspiration Genomic data is unique in that it is both incredibly personal and nearly impossible to change. Companies that store genomic data for analysis are vulnerable to data breaches, both traditional direct breaches and breaches that indirectly reveal insights into their data via the AI tools they develop. ## What it does DPAncestry is a platform that uses state-of-the-art local differential privacy algorithms to securely process genomic data while maintaining individual privacy. By adding a layer of obfuscation to the data, DPAncestry ensures that sensitive information remains confidential, even when analyzed for ancestral insights. While many top companies and organizations such as Google, Microsoft, Apple, and the U.S. Census Bureau have already adopted differential privacy in their models, our platform is, to our knowledge, the first to pioneer this idea for the genetic testing sphere. To learn more about the research we referenced while developing our platform, check out: <https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9073402/> ## How we built it DPAncestry leverages local differential privacy (DP) algorithms, which work by adding controlled noise to individual data points before any analysis occurs. This approach ensures that the true values are obscured, but useful aggregate information can still be derived. We built our platform based on the methods detailed in the paper we cited, which provides a comprehensive framework for implementing differential privacy in genomic data analysis. ## Challenges we ran into One of the major challenges we faced was choosing a focus for the project that utilizes this advanced technology while still being impactful. Additionally, we had to carefully select the most suitable differential privacy algorithm, one that balances privacy with data utility and ensures meaningful insights without compromising individual privacy. Our project also required parsing through academic research papers on privacy algorithms, which presented a substantial challenge when converting theory into a concrete implementation. ## Accomplishments that we're proud of We are proud of successfully integrating local differential privacy into a user-friendly platform that can handle data as complex as genomic data. It provides a simple, powerful, and, most significantly, anonymous service for ancestry determination. We also linked an LLM, Anthropic’s Claude, which guides the user in interpreting their genomics results and helps them understand the privacy mechanisms behind the model. ## What we learned Throughout the development of DPAncestry, we gained a deeper understanding of the intricacies of differential privacy and how it can protect personally identifiable information. We also learned about the challenges of balancing privacy and data utility, and the importance of user trust in handling sensitive information. ## What's next for DPAncestry Once our project acquires additional investment, we aspire to accelerate our company into the first DP genetic testing company. We’ll develop our platform into a more cohesive product for seamless usage. Another proposition deliberated by the team was selling our software to genetic testing companies like 23andMe, to help recover share prices after their major 2023 data breach, which exposed the sensitive data of over 6 million clients.
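As one concrete example of a local DP mechanism of the kind described above, here is k-ary randomized response in Python, perturbing a genotype value on the user's side before it is ever sent anywhere. This is a textbook mechanism used for illustration; it is not necessarily the exact algorithm DPAncestry implements.

```python
import math
import random

def randomize_genotype(value: int, epsilon: float, k: int = 3) -> int:
    """k-ary randomized response: keep the true value in {0, ..., k-1} with
    probability e^eps / (e^eps + k - 1); otherwise report one of the other
    k - 1 values uniformly. This satisfies epsilon-local differential privacy."""
    keep_prob = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < keep_prob:
        return value
    other = random.randrange(k - 1)          # pick uniformly among the other values
    return other if other < value else other + 1

# Each participant perturbs locally (e.g. allele counts 0/1/2) before upload.
print([randomize_genotype(v, epsilon=1.0) for v in [0, 1, 2, 2, 0, 1]])
```

Because the flip probabilities are known, aggregate allele frequencies can still be estimated by inverting them over many users, which is how utility survives the noise.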
## Inspiration Learning never ends. It's the cornerstone of societal progress and personal growth. It helps us make better decisions, fosters further critical thinking, and facilitates our contribution to the collective wisdom of humanity. Learning transcends the purpose of solely acquiring knowledge. ## What it does Understanding the importance of learning, we wanted to build something that can make learning more convenient for anyone and everyone. Being students in college, we often find ourselves meticulously surfing the internet in hopes of relearning lectures/content that was difficult. Although we can do this, spending half an hour to sometimes multiple hours is simply not the most efficient use of time, and we often leave our computers more confused than how we were when we started. ## How we built it A typical scenario goes something like this: you begin a Google search for something you want to learn about or were confused by. As soon as you press search, you are confronted with hundreds of links to different websites, videos, articles, news, images, you name it! But having such a vast quantity of information thrown at you isn’t ideal for learning. What ends up happening is that you spend hours surfing through different articles and watching different videos, all while trying to piece together bits and pieces of what you understood from each source into one cohesive generalization of knowledge. What if learning could be made easier by optimizing search? What if you could get a guided learning experience to help you self-learn? That was the motivation behind Bloom. We wanted to leverage generative AI to optimize search specifically for learning purposes. We asked ourselves and others, what helps them learn? By using feedback and integrating it into our idea, we were able to create a platform that can teach you a new concept in a concise, understandable manner, with a test for knowledge as well as access to the most relevant articles and videos, thus enabling us to cover all types of learners. Bloom is helping make education more accessible to anyone who is looking to learn about anything. ## Challenges we ran into We faced many challenges when it came to merging our frontend and backend code successfully. At first, there were many merge conflicts in the editor but we were able to find a workaround/solution. This was also our first time experimenting with LangChain.js so we had problems with the initial setup and had to learn their wide array of use cases. ## Accomplishments that we're proud of/What's next for Bloom We are proud of Bloom as a service. We see just how valuable it can be in the real world. It is important that society understands that learning transcends the classroom. It is a continuous, evolving process that we must keep up with. With Bloom, our service to humanity is to make the process of learning more streamlined and convenient for our users. After all, learning is what allows humanity to progress. We hope to continue to optimize our search results, maximizing the convenience we bring to our users.
## Inspiration In a world where the voices of the minority are often not heard, technology must be adapted to fit the equitable needs of these groups. Picture the millions who live in a realm of silence, where for those who are deaf, you are constantly silenced and misinterpreted. Of the 50 million people in the United States with hearing loss, less than 500,000 — or about 1% — use sign language, according to Acessibility.com and a recent US Census. Over 466 million people across the globe struggle with deafness, a reality known to each in the deaf community. Imagine the pain where only 0.15% of people (in the United States) can understand you. As a mother, father, teacher, friend, or ally, there is a strong gap in communication that impacts deaf people every day. The need for a new technology is urgent from both an innovation perspective and a human rights perspective. Amidst this urgent disaster of an industry, a revolutionary vision emerges – Caption Glasses, a beacon of hope for the American Sign Language (ASL) community. Caption Glasses bring the magic of real-time translation to life, using artificial neural networks (machine learning) to detect ASL "fingerspeaking" (their one-to-one version of the alphabet), and creating instant subtitles displayed on glasses. This revolutionary piece effortlessly bridges the divide between English and sign language. Instant captions allow for the deaf child to request food from their parents. Instant captions allow TAs to answer questions in sign language. Instant captions allow for the nurse to understand the deaf community seeking urgent care at hospitals. Amplifying communication for the deaf community to the unprecedented level that Caption Glasses does increases the diversity of humankind through equitable accessibility means! With Caption Glasses, every sign becomes a verse, every gesture an eloquent expression. It's a revolution, a testament to humanity's potential to converse with one another. In a society where miscommunication causes wars, there is a huge profit associated with developing Caption Glasses. Join us in this journey as we redefine the meaning of connection, one word, one sign, and one profound moment at a time. ## What it does The Caption Glasses provide captions displayed on glasses after detecting American Sign Language (ASL). The captions are instant and in real-time, allowing for effective translations into the English Language for the glasses wearer. ## How we built it Recognizing the high learning curve of ASL, we began brainstorming for possible solutions to make sign language more approachable to everyone. We eventually settled on using AR-style glasses to display subtitles that can help an ASL learner quickly identify what sign they are looking at. We started our build with hardware and design, starting off by programming a SSD1306 OLED 0.96'' display with an Arduino Nano. We also began designing our main apparatus around the key hardware components, and created a quick prototype using foam. Next, we got to loading computer vision models onto a Raspberry Pi4. Although we were successful in loading a basic model that looks at generic object recognition, we were unable to find an ASL gesture recognition model that was compact enough to fit on the RPi. To circumvent this problem, we made an approach change that involved more use of the MediaPipe Hand Recognition models. The particular model we chose marked out 21 landmarks of the human hand (including wrist, fingertips, knuckles, etc.). 
We then created and trained a custom Artificial Neural Network that takes the position of these landmarks, and determines what letter we are trying to sign. At the same time, we 3D printed the main apparatus with a Prusa I3 3D printer, and put in all the key hardware components. This is when we became absolute best friends with hot glue! ## Challenges we ran into The main challenges we ran into during this project mainly had to do with programming on an RPi and 3D printing. Initially, we wanted to look for pre-trained models for recognizing ASL, but there were none that were compact enough to fit in the limited processing capability of the Raspberry Pi. We were able to circumvent the problem by creating a new model using MediaPipe and PyTorch, but we were unsuccessful in downloading the necessary libraries on the RPi to get the new model working. Thus, we were forced to use a laptop for the time being, but we will try to mitigate this problem by potentially looking into using ESP32i's in the future. As a team, we were new to 3D printing, and we had a great experience learning about the importance of calibrating the 3D printer, and had the opportunity to deal with a severe printer jam. While this greatly slowed down the progression of our project, we were lucky enough to be able to fix our printer's jam! ## Accomplishments that we're proud of Our biggest accomplishment is that we've brought our vision to life in the form of a physical working model. Employing the power of 3D printing through leveraging our expertise in SolidWorks design, we meticulously crafted the components, ensuring precision and functionality. Our prototype seamlessly integrates into a pair of glasses, a sleek and practical design. At its heart lies an Arduino Nano, wired to synchronize with a 40mm lens and a precisely positioned mirror. This connection facilitates real-time translation and instant captioning. Though having extensive hardware is challenging and extremely time-consuming, we greatly take the attention of the deaf community seriously and believe having a practical model adds great value. Another large accomplishment is creating our object detection model through a machine learning approach of detecting 21 points in a user's hand and creating the 'finger spelling' dataset. Training the machine learning model was fun but also an extensively difficult task. The process of developing the dataset through practicing ASL caused our team to pick up the useful language of ASL. ## What we learned Our journey in developing Caption Glasses revealed the profound need within the deaf community for inclusive, diverse, and accessible communication solutions. As we delved deeper into understanding the daily lives of over 466 million deaf individuals worldwide, including more than 500,000 users of American Sign Language (ASL) in the United States alone, we became acutely aware of the barriers they face in a predominantly spoken word. The hardware and machine learning development phases presented significant challenges. Integrating advanced technology into a compact, wearable form required a delicate balance of precision engineering and user-centric design. 3D printing, SolidWorks design, and intricate wiring demanded meticulous attention to detail. Overcoming these hurdles and achieving a seamless blend of hardware components within a pair of glasses was a monumental accomplishment. The machine learning aspect, essential for real-time translation and captioning, was equally demanding. 
Developing a model capable of accurately interpreting finger spelling and converting it into meaningful captions involved extensive training and fine-tuning. Balancing accuracy, speed, and efficiency pushed the boundaries of our understanding and capabilities in this rapidly evolving field. Through this journey, we've gained profound insights into the transformative potential of technology when harnessed for a noble cause. We've learned the true power of collaboration, dedication, and empathy. Our experiences have cemented our belief that innovation, coupled with a deep understanding of community needs, can drive positive change and improve the lives of many. With Caption Glasses, we're on a mission to redefine how the world communicates, striving for a future where every voice is heard, regardless of the language it speaks. ## What's next for Caption Glasses The market for Caption Glasses is insanely large, with infinite potential for advancements and innovations. In terms of user design and wearability, we can improve user comfort and style. The prototype given can easily scale to be less bulky and lighter. We can allow for customization and design patterns (aesthetic choices to integrate into the fashion community). In terms of our ML object detection model, we foresee its capability to decipher and translate various sign languages from across the globe pretty easily, not just ASL, promoting a universal mode of communication for the deaf community. Additionally, the potential to extend this technology to interpret and translate spoken languages, making Caption Glasses a tool for breaking down language barriers worldwide, is a vision that fuels our future endeavors. The possibilities are limitless, and we're dedicated to pushing boundaries, ensuring Caption Glasses evolve to embrace diverse forms of human expression, thus fostering an interconnected world.
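The landmark-to-letter pipeline described above (MediaPipe's 21 hand landmarks feeding a small neural network) can be sketched in Python as below. The layer sizes, the 26-letter output, and the input image are illustrative assumptions; the team's actual network and dataset will differ.

```python
import cv2
import mediapipe as mp
import torch
import torch.nn as nn

class FingerspellClassifier(nn.Module):
    """Small MLP over the 21 hand landmarks (x, y, z each); sizes are illustrative."""
    def __init__(self, n_letters: int = 26):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(21 * 3, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_letters),
        )

    def forward(self, x):
        return self.net(x)

model = FingerspellClassifier()
model.eval()
hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

frame = cv2.imread("sign.jpg")  # placeholder input frame from the glasses camera
results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
if results.multi_hand_landmarks:
    lm = results.multi_hand_landmarks[0].landmark
    features = torch.tensor([[c for p in lm for c in (p.x, p.y, p.z)]],
                            dtype=torch.float32)
    letter = chr(ord("A") + model(features).argmax(dim=1).item())
    print("predicted letter:", letter)
```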
partial
## Inspiration The inspiration for our Auto-Teach project stemmed from the growing need to empower both educators and learners with a **self-directed and adaptive** learning environment. We were inspired by the potential to merge technology with education to create a platform that fosters **personalized learning experiences**, allowing students to actively **engage with the material while offering educators tools to efficiently evaluate and guide individual progress**. ## What it does Auto-Teach is an innovative platform that facilitates **self-directed learning**. It allows instructors to **create problem sets and grading criteria** while enabling students to articulate their problem-solving methods and responses through text input or file uploads (future feature). The software leverages AI models to assess student responses, offering **constructive feedback**, **pinpointing inaccuracies**, and **identifying areas for improvement**. It features automated grading capabilities that can evaluate a wide range of responses, from simple numerical answers to comprehensive essays, with precision. ## How we built it Our deliverable for Auto-Teach is a full-stack web app. Our front-end uses **ReactJS** as its framework and manages data using **Convex**. Moreover, it leverages editor components from **TinyMCE** to provide students with a better experience when editing their inputs. We also created back-end APIs using **FastAPI** and the **Together.ai APIs** while building the AI evaluation feature. ## Challenges we ran into We had trouble incorporating Vectara's REST API and MindsDB into our project because we were not very familiar with their structure and implementation. We were eventually able to figure them out but struggled with the time constraint. We also faced the challenge of crafting the most effective prompt for the chatbot so that it generates the best response for student submissions. ## Accomplishments that we're proud of Despite the challenges, we're proud to have successfully developed a functional prototype of Auto-Teach. Achieving an effective system for automated assessment, providing personalized feedback, and ensuring a user-friendly interface were significant accomplishments. Another thing we are proud of is that we effectively incorporated many technologies, like Convex and TinyMCE, into our project by the end. ## What we learned We learned how to work with backend APIs and how to generate effective prompts for the chatbot. We also got introduced to AI-incorporated databases such as MindsDB and were fascinated by what they can accomplish (such as generating predictions from streaming data and getting regular updates on information passed into the database). ## What's next for Auto-Teach * Divide the program into **two modes**: **instructor** mode and **student** mode * **Convert handwritten** answers into text (OCR API) * **Incorporate OpenAI** tools along with Together.ai when generating feedback * **Build a database** storing all relevant information about each student (e.g. grade, weaknesses, strengths) and enabling an automated AI workflow powered by MindsDB * **Complete analysis** of each student's performance on different types of questions, allowing teachers to learn about the student's weaknesses. * **Fine-tune the grading model** using tools from Together.ai to calibrate the model to provide better feedback.
* **Notify** students instantly about their performance (we could set up notifications using MindsDB and get notified every day about any poor performance) * **Upgrade security** to protect against unauthorized access
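A minimal sketch of the back-end evaluation endpoint described in the build notes above, using FastAPI; `generate_feedback` is a placeholder for the Together.ai completion call, and the request fields are assumed, not Auto-Teach's actual schema.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Submission(BaseModel):
    question: str
    rubric: str
    student_answer: str

def generate_feedback(prompt: str) -> str:
    """Placeholder for the Together.ai / LLM completion call."""
    return "Feedback goes here."

@app.post("/evaluate")
def evaluate(sub: Submission):
    prompt = (
        "You are a grader. Question:\n" + sub.question +
        "\nGrading criteria:\n" + sub.rubric +
        "\nStudent answer:\n" + sub.student_answer +
        "\nPoint out inaccuracies and areas for improvement."
    )
    return {"feedback": generate_feedback(prompt)}
```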
## Inspiration We've noticed that many educators draw common structures on boards, just to erase them and redraw them in common ways to portray something. Imagine your CS teacher drawing an array to show you how bubble sort works, and erasing elements for every swap. This learning experience can be optimized with AI. ## What It Does Our software recognizes digits drawn and digitizes the information. If you draw a list of numbers, it'll recognize it as an array and let you visualize bubble sort automatically. If you draw a pair of axes, it'll recognize this and let you write an equation that it will automatically graph. The voice assisted list operator allows one to execute the most commonly used list operation, "append" through voice alone. A typical use case would be a professor free to roam around the classroom and incorporate a more intimate learning experience, since edits need no longer be made by hand. ## How We Built It The digits are recognized using a neural network trained on the MNIST hand written digits data set. Our code scans the canvas to find digits written in one continuous stroke, puts bounding boxes on them and cuts them out, shrinks them to run through the neural network, and outputs the digit and location info to the results canvas. For the voice driven list operator, the backend server's written in Node.js/Express.js. It accepts voice commands through Bixby and sends them to Almond, which stores and updates the list in a remote server, and also in the web user interface. ## Challenges We Ran Into * The canvas was difficult to work with using JavaScript * It is unbelievably hard to test voice-driven applications amidst a room full of noisy hackers haha ## Accomplishments that We're Proud Of * Our software can accurately recognize digits and digitize the info! ## What We Learned * Almond's, like, *really* cool * Speech recognition has a long way to go, but is also quite impressive in its current form. ## What's Next for Super Smart Board * Recognizing trees and visualizing search algorithms * Recognizing structures commonly found in humanities classes and implementing operations for them * Leveraging Almond's unique capabilities to facilitate operations like inserting at a specific index and expanding uses to data structures besides lists * More robust error handling, in case the voice command is misinterpreted (as it often is) * Generating code to represent the changes made alongside the visual data structure representation
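The scan-and-digitize step described above can be sketched in Python with OpenCV: find each stroke's bounding box, shrink the crop to the 28x28 MNIST input size, and classify it. `classify_digit` is a placeholder for the MNIST-trained network, and the thresholding details are assumptions rather than the project's exact code.

```python
import cv2
import numpy as np

def classify_digit(patch_28x28: np.ndarray) -> int:
    """Placeholder for the MNIST-trained neural network."""
    return 0

def digitize_canvas(canvas_gray: np.ndarray):
    """Find each written stroke, crop it, shrink it to 28x28, and classify it."""
    _, binary = cv2.threshold(canvas_gray, 127, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    digits = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        patch = cv2.resize(binary[y:y + h, x:x + w], (28, 28))
        digits.append({"value": classify_digit(patch), "x": x, "y": y})
    return sorted(digits, key=lambda d: d["x"])  # left-to-right reading order
```

Reading the recognized digits left to right is what lets a row of numbers be treated as an array for the bubble-sort visualization.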
## Inspiration There are millions of people around the world who have a physical or learning disability which makes creating visual presentations extremely difficult. They may be visually impaired, suffer from ADHD, or have disabilities like Parkinson’s. For these people, being unable to create presentations isn’t just a hassle. It’s a barrier to learning, a reason for feeling left out, or a career disadvantage in the workplace. That’s why we created **Pitch.ai.** ## What it does Pitch.ai is a web app which creates visual presentations for you as you present. Once you open the web app, just start talking! Pitch.ai will listen to what you say and, in real time, generate a slide deck based on the content of your speech, just as if you had a slideshow prepared in advance. ## How we built it We used a **React** client combined with a **Flask** server to make our API calls. To continuously listen for audio to convert to text, we used a React library called "react-speech-recognition". Then, we designed an algorithm to detect pauses in the speech in order to separate sentences, which are sent to the Flask server. The Flask server then uses multithreading to make several API calls simultaneously. First, the **MonkeyLearn** API is used to find the most relevant keyword in the sentence. Then, the keyword is sent to **SerpAPI** in order to find an image to add to the presentation. At the same time, an API call is sent to OpenAI's GPT-3 to generate a caption to put on the slide. The caption, keyword, and image of a single slide are all combined into an object to be sent back to the client. ## Challenges we ran into * Learning how to make dynamic websites * Optimizing audio processing time * Increasing the efficiency of the server ## Accomplishments that we're proud of * Made an aesthetic user interface * Distributed work efficiently * Good organization and integration of many APIs ## What we learned * Multithreading * How to use continuous audio input * How to use React hooks, animations, Figma ## What's next for Pitch.ai * Faster and more accurate picture, keyword and caption generation * A "presentation mode" * Integrate a database to save your generated presentations * Customizable templates for slide structure, color, etc. * Build our own web-scraping API to find images
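The per-sentence fan-out described above (keyword, image, and caption gathered concurrently) can be sketched with Python's ThreadPoolExecutor. The three helper functions are placeholders for the MonkeyLearn, SerpAPI, and GPT-3 calls; only the concurrency pattern is the point here.

```python
from concurrent.futures import ThreadPoolExecutor

def extract_keyword(sentence: str) -> str:   # stand-in for the MonkeyLearn call
    return sentence.split()[0]

def find_image(keyword: str) -> str:         # stand-in for the SerpAPI image search
    return f"https://example.com/{keyword}.jpg"

def write_caption(sentence: str) -> str:     # stand-in for the GPT-3 completion
    return sentence[:80]

def build_slide(sentence: str) -> dict:
    """Run the caption and keyword calls in parallel, then fetch the image."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        keyword_future = pool.submit(extract_keyword, sentence)
        caption_future = pool.submit(write_caption, sentence)
        keyword = keyword_future.result()        # image search depends on the keyword
        image_future = pool.submit(find_image, keyword)
        return {
            "keyword": keyword,
            "image": image_future.result(),
            "caption": caption_future.result(),
        }

print(build_slide("The mitochondria is the powerhouse of the cell"))
```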
partial
*A dictionary is at the end for reference to biology terms.* ## Inspiration We're at a hackathon, right? So I thought, why stop at coding with 1s and 0s when we can code with A, T, G, and C? 🧬 Genetic engineering is a cornerstone of the modern world, whether it's for agriculture, medicine, or more; but let's face it, it's got a bit of a learning curve. That's where GenomIQ comes in. I wanted to create a tool that lets anyone – yes, even you – play around with editing plasmids and dive into genetic engineering. But here's the kicker: we're not just making it easier, we're turbocharging it with the expressive ability of LLMs to potentially generate functional protein-coding DNA strings. ## What it does GenomIQ streamlines plasmid engineering by combining AI-powered gene generation with a curated gene database. It uses a custom fine-tuned Cohere model to create novel DNA sequences, which are validated for biological plausibility via AlphaFold 2 and iterated on. Alternatively, you can rapidly search for existing genes stored in our Chroma vector database. The platform automatically optimizes restriction sites and integrates essential genetic elements. Users can easily design, modify, and export plasmids ready for real-world synthesis, bridging the gap between computational design and practical genetic engineering. ## How I built it This is a Flask web app built with Python, with vanilla HTML/CSS/JS on the frontend. The vector database is powered by Chroma. The LLM is Cohere, fine-tuned on a short custom dataset included in the GitHub repo. Restriction sites are automatically scored and sorted based on their usefulness for clean insertion. Verification is performed by a local instance of AlphaFold 2, which, given the provided DNA sequence, produces a structure file. I found a website that implements ProSA, a scoring metric for proteins, and built a web scraper/bot that uploads your structure file and gathers the z-score from there. The plasmid viewer is a canvas that is updated whenever a route returns new features. The repo also includes a small fine-tuning dataset builder tool with a GUI that I put together to make it easier to fine-tune my model. I developed a benchmark set and performed an evaluation of the standard Cohere model versus the fine-tuned model, comparing their z-scores. As displayed in the image, the fine-tuned model is much more capable of producing biologically plausible strings of DNA. ![Benchmark results](https://cdn.discordapp.com/attachments/966810847823405117/1284738453929590906/z_score_comparison.png?ex=66e7b96c&is=66e667ec&hm=80bb7c82525c97852610688a32b67fe929584bd00a5c22a68f06af78dc91dc87&) ## Challenges I ran into Cohere API timeouts: some requests would randomly hang, so I had to use threading to track how long a request had been running and cut it off if it took too long. The frontend as a whole was a big challenge; I have hardly built web apps before, so this was a lot of back and forth, wondering why element X won't go to the center of the page no matter how hard I try. ## Accomplishments that I'm proud of Building a cool project in a day and a half :) ## What I learned Vector databases, AlphaFold, and genetic engineering. ## What's next for GenomIQ I want to evaluate what the place of a tool like GenomIQ in the world could be. I want to reach out to people who would be interested in such a tool and see what direction to take it in. There are a lot of improvements that can be made, as well as opportunities for some incredible new features. ## Dictionary **Plasmid**: A small circular ring of DNA.
These are typically cut up and have new genes inserted into them. Afterwards, the plasmids are inserted into organisms like yeast, bacteria, etc., which will then express the new gene. **Restriction site**: A zone on the plasmid where we do the cutting. Some sites are more desirable than others, typically judged by uniqueness (we only want to cut in one spot) and distance from other genes/features (we don't want to cut up something important). *Sorry if any of this seems jumbled... I'm really tired.*
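A toy version of the restriction-site scoring mentioned in the build notes might look like the sketch below: find every occurrence of a recognition sequence, reward uniqueness, and reward distance from annotated features. The weights, the linear-plasmid simplification (real plasmids are circular), and the function names are all illustrative, not GenomIQ's actual scoring.

```python
def score_restriction_sites(plasmid: str, recognition_seq: str, features):
    """Score each occurrence of a recognition sequence: a unique cut site that sits
    far from annotated (start, end) features scores higher. Weights are illustrative."""
    positions = []
    start = plasmid.find(recognition_seq)
    while start != -1:
        positions.append(start)
        start = plasmid.find(recognition_seq, start + 1)

    uniqueness_bonus = 100 if len(positions) == 1 else 0
    scored = []
    for pos in positions:
        distance = min(
            (min(abs(pos - s), abs(pos - e)) for s, e in features),
            default=len(plasmid),
        )
        scored.append((pos, uniqueness_bonus + distance))
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Example: EcoRI site (GAATTC) on a toy sequence with one feature at bases 40-60.
print(score_restriction_sites("ATGAATTC" + "A" * 60 + "GGTACC", "GAATTC", [(40, 60)]))
```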
## Inspiration One of my team members, Thor, was looking for citations for his psychology research paper while on the bus ride up from SoCal. He had a psychology paper due, and as he scrambled to finish the assignment, he struggled to find articles to cite due to the fragmented landscape of paywalled academic journals and lackluster indexing. We looked into his problem and realized that research is very unfriendly to those without the means. Access to academic journals is expensive for one person (>$500), and the biggest commercial indexers, such as Web of Science or Scopus, will charge you for using their search. Open-source alternatives such as IEEE Xplore and arXiv either do not have the breadth of research or lack vetting as preprints. A further exigence is the detrimental nature of science journalism, which fuels a cycle of misleading publications. When scientists are at the mercy of research journals to expose their work for grants, and journals need clicks to drive revenue, the scientific community as a whole pays the price. This felt like an issue that needed to be tackled. zKnowledgeBase is a decentralized research platform that eliminates paywalls, enables free sharing of verified research without third-party control, and mitigates censorship risks - empowering academics with open and unbiased access to knowledge. ## What it does We built a decentralized web platform allowing users to search and store research articles, immutably and forever. Users can upload research PDFs and search for articles using our vector-embedded search. We secure our articles with a Merkle tree, whose root is publicly available on the Avalanche blockchain. ## How we built it We allow users to upload research papers in PDF form. We store their submissions on IPFS, a distributed peer-to-peer file storage network, allowing for reliable uptime and free access. At the same time, we chunk and vector-embed the paper, using LangChain and together.ai to render the paper into a several-hundred-dimensional vector stored in ChromaDB's vector database. When the user searches with our platform, we vector-embed their search query and use cosine similarity to compare the search vector to the stored vectors in the vector database. We then present the most similar papers in a scrollable format. Finally, we used a Merkle tree built with Zig for submission security and to verify that papers retrieved from IPFS came from our uploads. ## Challenges we ran into We used a lot of new technologies for the first time, including Merkle trees, Zig, and vector databases, and we had to read a lot of documentation and learn quickly to finish on time. The integration of the front end and back end also took some time. ## Accomplishments that we're proud of We used Zig for the first time and managed to code a complicated Delta Merkle Proof quickly and correctly despite time constraints. We were able to follow a plan from ideation to submission. ## What we learned We learned that it is important to budget your time wisely and to spend time on system design. ## What's next for zKnowledge Base * Richer metadata: publication, number of citations, etc. * Incentivize uploads: spread the word and get more engagement * Tokens: distribute tokens for decentralized governance
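The embed-and-search flow can be sketched with Chroma's Python client. The IDs, metadata fields, and example CID are placeholders; zKnowledge Base uses together.ai embeddings via LangChain rather than Chroma's default embedding function, so treat this as a simplified illustration of the cosine-similarity lookup.

```python
import chromadb

client = chromadb.Client()
papers = client.get_or_create_collection("papers", metadata={"hnsw:space": "cosine"})

# Index a chunk of an uploaded paper; in the real system the document body lives
# on IPFS and is referenced by its CID (the CID below is a placeholder).
papers.add(
    ids=["QmExampleCID-chunk-0"],
    documents=["We study decentralized storage of peer-reviewed research ..."],
    metadatas=[{"title": "Example paper", "cid": "QmExampleCID"}],
)

# Embed the search query the same way and rank stored chunks by cosine similarity.
results = papers.query(query_texts=["open access decentralized research"], n_results=3)
for doc, meta in zip(results["documents"][0], results["metadatas"][0]):
    print(meta["title"], "->", doc[:60])
```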
## Inspiration Large Language Models (LLMs) are limited by a token cap, making it difficult for them to process large contexts, such as entire codebases. We wanted to overcome this limitation and provide a solution that enables LLMs to handle extensive projects more efficiently. ## What it does LLM Pro Max intelligently breaks a codebase into manageable chunks and feeds only the relevant information to the LLM, ensuring token efficiency and improved response accuracy. It also provides an interactive dependency graph that visualizes the relationships between different parts of the codebase, making it easier to understand complex dependencies. ## How we built it Our landing page and chatbot interface were developed using React. We used Python and Pyvis to create an interactive visualization graph, while FastAPI powered the backend for dependency graph content. We've added third-party authentication using the GitHub Social Identity Provider on Auth0. We set up our project's backend using Convex and also added a Convex database to store the chats. We implemented Chroma for vector embeddings of GitHub codebases, leveraging advanced Retrieval-Augmented Generation (RAG) techniques, including query expansion and re-ranking. This enhanced the Cohere-powered chatbot’s ability to respond with high accuracy by focusing on relevant sections of the codebase. ## Challenges we ran into We faced a learning curve with vector embedding codebases and applying new RAG techniques. Integrating all the components—especially since different team members worked on separate parts—posed a challenge when connecting everything at the end. ## Accomplishments that we're proud of We successfully created a fully functional repo agent capable of retrieving and presenting highly relevant and accurate information from GitHub repositories. This feat was made possible through RAG techniques, surpassing the limits of current chatbots restricted by character context. ## What we learned We deepened our understanding of vector embedding, enhanced our skills with RAG techniques, and gained valuable experience in team collaboration and merging diverse components into a cohesive product. ## What's next for LLM Pro Max We aim to improve the user interface and refine the chatbot’s interactions, making the experience even smoother and more visually appealing. (Please Fund Us)
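A rough sketch of the retrieve-and-rerank loop described above, in Python. The Cohere model name, the chat-based query expansion, and the Chroma collection are assumptions for illustration; LLM Pro Max's actual prompts and pipeline will differ.

```python
import chromadb
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key
code_chunks = chromadb.Client().get_or_create_collection("codebase")

def retrieve_context(question: str, n_candidates: int = 20, n_final: int = 5):
    """Query expansion + retrieval + re-ranking, so only the most relevant chunks
    of the codebase are placed in the LLM's limited context window."""
    # 1. Query expansion: ask the model for alternative phrasings of the question.
    expansion = co.chat(message=f"Rewrite this code question 3 different ways: {question}")
    queries = [question] + [q for q in expansion.text.split("\n") if q.strip()]

    # 2. Retrieve candidate chunks for every phrasing of the query.
    hits = code_chunks.query(query_texts=queries, n_results=n_candidates)
    candidates = list({doc for docs in hits["documents"] for doc in docs})

    # 3. Re-rank the pooled candidates against the original question.
    ranked = co.rerank(model="rerank-english-v3.0", query=question,
                       documents=candidates, top_n=n_final)
    return [candidates[r.index] for r in ranked.results]
```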
partial
## Inspiration Our project, "**Jarvis**," was born out of a deep-seated desire to empower individuals with visual impairments by providing them with a groundbreaking tool for comprehending and navigating their surroundings. Our aspiration was to bridge the accessibility gap and ensure that blind individuals can fully grasp their environment. By providing the visually impaired community with access to **auditory descriptions** of their surroundings, a **personal assistant**, and an understanding of **non-verbal cues**, we have built the world's most advanced tool for the visually impaired community. ## What it does "**Jarvis**" is a revolutionary technology that boasts a multifaceted array of functionalities. It not only perceives and identifies elements in the blind person's surroundings but also offers **auditory descriptions**, effectively narrating the environmental aspects they encounter. We utilize a **speech-to-text** and **text-to-speech model** similar to **Siri** / **Alexa**, enabling ease of access. Moreover, our model possesses the remarkable capability to recognize and interpret the **facial expressions** of individuals who stand in close proximity to the blind person, providing them with invaluable social cues. Furthermore, users can ask questions that may require critical reasoning, such as what to order from a menu or how to navigate complex public transport maps. Our system extends to the **Amazfit**, enabling users to get a description of their surroundings or identify the people around them with a single press. ## How we built it The development of "**Jarvis**" was a meticulous and collaborative endeavor that involved a comprehensive array of cutting-edge technologies and methodologies. Our team harnessed state-of-the-art **machine learning frameworks** and sophisticated **computer vision techniques** (including **Hume**, **LLaVA**, and **OpenCV**) to analyze the environment, and used **next.js** to create our frontend, which connects to **ZeppOS** on the **Amazfit smartwatch**. ## Challenges we ran into Throughout the development process, we encountered a host of formidable challenges. These obstacles included the intricacies of training a model to recognize and interpret a diverse range of environmental elements and human expressions. We also had to grapple with optimizing the model for real-time usage on the **Zepp smartwatch**, getting the **vibrations** to trigger according to the **Hume** emotional analysis model, and integrating **OCR (Optical Character Recognition)** capabilities with the **text-to-speech** model. However, our team's relentless commitment and problem-solving skills enabled us to surmount these challenges. ## Accomplishments that we're proud of Our proudest achievements in the course of this project encompass several remarkable milestones. These include the successful development of "**Jarvis**," a model that can audibly describe complex environments to blind individuals, thus enhancing their **situational awareness**. Furthermore, our model's ability to discern and interpret **human facial expressions** stands as a noteworthy accomplishment. ## What we learned # Hume **Hume** is instrumental for our project's **emotion analysis**. This information is then translated into **audio descriptions** and **vibrations** on the **Amazfit smartwatch**, providing users with valuable insights about their surroundings.
By capturing facial expressions and analyzing them, our system can provide feedback on the **emotions** displayed by individuals in the user's vicinity. This feature is particularly beneficial in social interactions, as it aids users in understanding **non-verbal cues**. # Zepp Our project involved a deep dive into the capabilities of **ZeppOS**, and we successfully integrated the **Amazfit smartwatch** into our web application. This integration is not just a technical achievement; it has far-reaching implications for the visually impaired. With this technology, we've created a user-friendly application that provides an in-depth understanding of the user's surroundings, significantly enhancing their daily experiences. By using the **vibrations**, the visually impaired are notified of their actions. Furthermore, the intensity of the vibration is proportional to the intensity of the emotion measured through **Hume**. # Zilliz We used **Zilliz** to host **Milvus** online, and stored a dataset of images and their vector embeddings. Each image was classified as a person; hence, we were able to build an **identity-classification** tool using **Zilliz's** reverse-image-search tool. We further set a minimum threshold below which people's identities were not recognized, i.e. their data was not in **Zilliz**. We estimate the accuracy of this model to be around **95%**. # GitHub We acquired a comprehensive understanding of the capabilities of version control using **Git** and established an organization. Within this organization, we allocated specific tasks labeled as "**TODO**" to each team member. **Git** was effectively employed to facilitate team discussions and workflows, and to identify issues within each other's contributions. The overall development of "**Jarvis**" has been a rich learning experience for our team. We have acquired a deep understanding of cutting-edge **machine learning**, **computer vision**, and **speech synthesis** techniques. Moreover, we have gained invaluable insights into the complexities of real-world application, particularly when adapting technology for wearable devices. This project has not only broadened our technical knowledge but has also instilled in us a profound sense of empathy and a commitment to enhancing the lives of visually impaired individuals. ## What's next for Jarvis The future holds exciting prospects for "**Jarvis.**" We envision continuous development and refinement of our model, with a focus on expanding its capabilities to provide even more comprehensive **environmental descriptions**. In the pipeline are plans to extend its compatibility to a wider range of **wearable devices**, ensuring its accessibility to a broader audience. Additionally, we are exploring opportunities for collaboration with organizations dedicated to the betterment of **accessibility technology**. The journey ahead involves further advancements in **assistive technology** and greater empowerment for individuals with visual impairments.
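Below is a minimal sketch of the threshold-based identity matching described in the Jarvis write-up above. The actual project used Zilliz-hosted Milvus for the reverse image search; the in-memory dictionary, the 512-dimension embeddings, and the 0.80 similarity threshold here are illustrative assumptions, not the real configuration.

```python
import numpy as np

# Hypothetical in-memory stand-in for the Zilliz/Milvus collection described above:
# each entry maps a person's name to the embedding of a reference photo.
KNOWN_FACES = {
    "Alice": np.random.rand(512),   # placeholder vectors; real ones would come
    "Bob": np.random.rand(512),     # from a face-embedding model
}

MIN_SIMILARITY = 0.80  # threshold below which the identity is treated as unknown

def identify(query_embedding: np.ndarray) -> str:
    """Return the best-matching identity, or 'unknown' if no match clears the threshold."""
    best_name, best_score = "unknown", -1.0
    for name, ref in KNOWN_FACES.items():
        score = float(np.dot(query_embedding, ref) /
                      (np.linalg.norm(query_embedding) * np.linalg.norm(ref)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= MIN_SIMILARITY else "unknown"
```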
## Inspiration Vision—our most dominant sense—plays a critical role in every facet and stage of our lives. Over 40 million people worldwide (and increasing) struggle with blindness and 20% of those over 85 experience permanent vision loss. In a world catered to the visually-abled, developing assistive technologies to help blind individuals regain autonomy over their living spaces is becoming increasingly important. ## What it does ReVision is a pair of smart glasses that seamlessly intertwines the features of AI and computer vision to help blind people navigate their surroundings. One of our main features is the integration of an environmental scan system to describe a person’s surroundings in great detail—voiced through Google text-to-speech. Not only this, but the user is able to have a conversation with ALICE (Artificial Lenses Integrated Computer Eyes), ReVision’s own AI assistant. “Alice, what am I looking at?”, “Alice, how much cash am I holding?”, “Alice, how’s the weather?” are all examples of questions ReVision can successfully answer. Our glasses also detect nearby objects and signal a buzz when the user approaches an obstacle or wall. Furthermore, ReVision is capable of scanning to find a specific object. For example, at an aisle of the grocery store, “Alice, where is the milk?” will have Alice scan the view for milk to let the user know of its position. With ReVision, we are helping blind people regain independence within society. ## How we built it To build ReVision, we used a combination of hardware components and modules along with CV. For hardware, we integrated an Arduino Uno to seamlessly communicate back and forth between some of the inputs and outputs, like the ultrasonic sensor and the vibrating buzzer for haptic feedback. The features that help the user navigate their world rely heavily on a dismantled webcam hooked up to a coco-ssd model and ChatGPT 4 to identify objects and describe the environment. We also used text-to-speech and speech-to-text to make interacting with ALICE friendly and natural. As for the prototype of the actual product, we used stock paper and glue, held together with the framework of an old pair of glasses. We attached the hardware components to the inside of the frame, poking out where needed to capture information. An additional feature of ReVision is the effortless attachment of the shade cover, covering the lens of our glasses. We did this using magnets, allowing for a sleek and cohesive design. ## Challenges we ran into One of the most prominent challenges we conquered was soldering for the first time ourselves, as well as DIYing our USB cord for this project. As well, our web camera somehow got ripped once we had finished our prototype and stopped working. To fix this, we had to solder the wires and dissect our goggles to fix their composition within the frames. ## Accomplishments that we're proud of Through human design thinking, we knew that we wanted to create technology that not only promotes accessibility and equity but also does not look too distinctive. We are incredibly proud of the fact that we created a wearable assistive device that is disguised as an everyday accessory. ## What we learned With half our team being completely new to hackathons and working with AI, taking on this project was a large jump into STEM for us. We learned how to program AI, wearable technologies, and even how to solder, since our wires were all so short for some reason.
Combining and exchanging our skills and strengths, our team also learned design skills, making the most compact, fashionable glasses to act as a container for all the technologies they hold. ## What's next for ReVision Our mission is to make the world a better place, step by step. For the future of ReVision, we want to expand our horizons to help those with other sensory disabilities, such as deafness or impaired touch.
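A small sketch of how detections from a coco-ssd-style model could be turned into the spoken environment description ReVision provides. The detection format, the helper name, and the example values are assumptions for illustration; the real system hands the resulting sentence to Google text-to-speech.

```python
# Assumed detection format: list of dicts like those returned by a coco-ssd-style
# detector; describe() and the example detections are illustrative only.
def describe(detections, min_score=0.5):
    names = [d["class"] for d in detections if d["score"] >= min_score]
    if not names:
        return "I don't see anything I recognize."
    counted = {n: names.count(n) for n in set(names)}
    parts = [f"{c} {n}" + ("s" if c > 1 else "") for n, c in counted.items()]
    return "I can see " + ", ".join(parts) + "."

detections = [
    {"class": "person", "score": 0.91},
    {"class": "cup", "score": 0.74},
    {"class": "person", "score": 0.62},
]
print(describe(detections))  # e.g. "I can see 2 persons, 1 cup." (order may vary)
```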
## Inspiration Integration of patients into society: why can't it be achieved? This is due to the lack of attempts to combine medical solutions with the perspectives of patients in daily use. More specifically, we notice that aids for visual disabilities lack efficiency, as the most common option for patients with blindness is to use a cane and tap as they move forward, which can be slow, dangerous, and limited. Canes are clunky and draw attention in a crowd, leading to more possible stigmas and inconveniences in use. We attempt to solve this, combining effective healthcare and fashion. ## What it does * At Signifeye, we have created a pair of shades with I/O sensors that provide audio feedback to the wearer on how far they are from the object they are looking at. * We help patients build a 3D map of their surroundings so they can move around much quicker, as opposed to slowly tapping a guide cane forward * Signifeye comes with a companion app that serves both the blind user and caretakers. The UI is easy for the blind user to navigate and allows for easier haptic feedback manipulation. Through the app, caretakers can also monitor and render assistance to the blind user, thereby being there for the latter 24/7 without having to be there physically, through tracking of data and movement. ## How we built it * The frame of the sunglasses is inspired by high-street fashion, and was modeled via Rhinoceros 3D to balance aesthetics and functionality. The frame is manufactured using acrylic sheets on a laser cutter for rapid prototyping * The sensor arrays consist of an ultrasonic sensor, a piezo speaker, a 5V regulator and a 9V battery, and are powered by the Arduino MKR WiFi 1010 * The app was created using React Native and Figma for more comprehensive user details, using Expo Go and VSCode for a development environment that could produce testable outputs. ## Challenges we ran into Difficulty of iterative hardware prototyping under time and resource constraints: * Limited design iterations, * Shortage of micro-USB cables that transfer power and data, and * For the frame design, coordinating the hardware with the design for dimensioning. Feeding hardware data into software: * Collecting Arduino data into a file and accommodating that with the function of the application, and * Altering user and haptic feedback on different mobile operating systems, where different programs had different dependencies that had to be followed. ## What we learned As most of us were beginner hackers, we learned about multiple aspects that went into creating a viable product. * Fully integrating hardware and software functionality, including Arduino programming and streamlining. * The ability to connect cross-platform software, where I had to incorporate features or data pulled from hardware or data platforms. * Dealing with the transfer of data and the use of computer languages to process different formats, such as audio files or sensor-induced wavelengths. * Becoming more proficient in running and debugging code. I was able to adjust to a more independent and local setting, where an emulator or external source was required aside from just an IDE terminal.
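A sketch of the kind of distance-to-feedback mapping Signifeye's audio feedback implies: the closer the obstacle reported by the ultrasonic sensor, the faster the beeping. The constants and function name are assumptions, not values from the actual firmware.

```python
# Illustrative mapping from an ultrasonic distance reading (cm) to the pause
# between beeps, mimicking "closer object, faster beeping" behaviour.
def beep_interval(distance_cm: float,
                  min_dist: float = 20.0,
                  max_dist: float = 300.0) -> float:
    """Return seconds to wait between beeps; closer objects beep faster."""
    clamped = max(min_dist, min(distance_cm, max_dist))
    # Linearly interpolate between 0.05 s (very close) and 1.0 s (far away).
    fraction = (clamped - min_dist) / (max_dist - min_dist)
    return 0.05 + fraction * (1.0 - 0.05)

for d in (25, 100, 280):
    print(d, "cm ->", round(beep_interval(d), 2), "s between beeps")
```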
winning
## Inspiration Depository Repository is an Airbnb-style app that allows users to rent space from each other. ## How we built it Depository Repository is made of 3 separate components: a phone app, its backend, and its security camera. The security camera ran on the DragonBoard 410c and Python. The app is made entirely with Expo and React Native. The backend was powered by Stdlib, Algolia, and Google Cloud Services. The security camera functioned by looking for faces. When it would see a face, it would upload an event to our database. Facial recognition was powered by Google Cloud Services. The React Native app allowed users to post, browse, and rent storage units. Each rented unit had a status page which listed events from the security camera. ## Challenges we ran into We were new to Stdlib and the DragonBoard 410c. ## Accomplishments that we're proud of Creating a fairly large application in 36 hours.
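A rough sketch of the security-camera loop described above: watch for faces and log an event when one appears. The real project used Google Cloud for facial recognition; the Haar-cascade detector and the placeholder endpoint below are stand-ins for illustration.

```python
import time

import cv2
import requests

# Placeholder backend endpoint; the real project stored events in its own database.
EVENTS_URL = "https://example.com/api/events"

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # Log a "someone is at the unit" event for the renter's status page.
        requests.post(EVENTS_URL, json={"timestamp": time.time(),
                                        "faces_detected": len(faces)})
        time.sleep(5)  # avoid flooding the backend with duplicate events
```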
## Inspiration Moving out of residence is a tedious part of student life. Why? Because we own a bunch of stuff we no longer need and we don't know where or how to donate it. That is why our team created an app where students can share their food, shampoo, chairs... anything, with others. ## What it does An individual makes a post about stuff they have to spare and want to share. They also choose a preferred meeting time and a location on campus where they can deliver. If someone wants to take it, they inform the poster by hitting the "Take it" button on the post. ## How we built it * Figma: we used the platform to brainstorm the idea and draft the app's layout and design * React.js: we use React, especially the React-Bootstrap library, to create a responsive and well-organized website. * Firebase: besides connecting the app with real-time storage, we use Firebase Authentication for the log-in page. ## Challenges we ran into We are all beginners in React.js. Hence, it took time for us to learn the basics of the library and implement it in the website. We also had some conflicts in the design and task distribution process. ## Accomplishments that we're proud of Picking up React.js and creating a responsive website ## What we learned * Authentication using Firebase * React.js * Web development * Teamwork ## What's next for ShareWithMe Initially, the app works locally in the university residence so that it's safe to share private information and for students to contact others. Later on, when it has stricter security and scam detection, we want to expand the region beyond the university. We also plan to finalize the profile page so users can check what they have lent and taken.
## Inspiration Our inspiration was to provide a robust and user-friendly financial platform. After hours of brainstorming, we decided to create a decentralized mutual fund on mobile devices. We also wanted to explore new technologies whilst creating a product that has several use cases of social impact. Our team used React Native and Smart Contracts along with Celo's SDK to discover BlockChain and the conglomerate use cases associated with these technologies. This includes group insurance, financial literacy, and personal investment. ## What it does Allows users in shared communities to pool their funds and use our platform to easily invest in different stocks and companies they are passionate about, with a decreased/shared risk. ## How we built it * Smart Contract for the transfer of funds on the blockchain made using Solidity * A robust backend and authentication system made using node.js, express.js, and MongoDB. * Elegant front end made with react-native and Celo's SDK. ## Challenges we ran into We were unfamiliar with the tech stack used to create this project and with BlockChain technology. ## What we learned We learned many new languages and frameworks. This includes building cross-platform mobile apps on react-native and the underlying principles of BlockChain technology, such as smart contracts and decentralized apps. ## What's next for *PoolNVest* Expanding our API to select low-risk stocks and allowing the community to vote upon where to invest the funds. Refining and improving the proof of concept into a marketable MVP and tailoring the UI towards the specific use cases mentioned above.
losing
## Inspiration We meet hundreds of people throughout our lifetimes, from chance encounters at cafes, to teammates at school, to lifelong friends. With so many people bumping into and out of our lives, it may be hard to keep track of them all. Individuals with social disabilities, especially those who have a difficult time with social cues, may find it a challenge to recall any information about the background, or even facial features, of those they met in the past. Mnemonic hopes to provide a seamless platform that keeps track of all the people a user meets, along with details of how they met and the topics discussed in previous encounters. ## What it does Mnemonic is a social memory companion that uses IBM Watson's AlchemyLanguage API and the Microsoft Cognitive Services API to keep track of all the people a user meets. Along with date and location information about each first encounter, Mnemonic uses natural language processing to find relevant keywords, which serve as mnemonics. This social platform can be useful not only as a networking tool, but also as an accessibility platform that allows individuals with social disabilities to interact with others in a more seamless way. ## How we built it There are three main components of Mnemonic: an iOS app, a Linode server, and a Raspberry Pi. When a user first meets someone, the user triggers the start of the process by pushing a button on a breadboard. This push triggers the camera of the Raspberry Pi to take three photos of the person being met. This also sends information to the server. The iOS app constantly sends POST requests to the Linode server to see whether an action is required. If one is, it either matches the photos to an existing profile using the Microsoft Cognitive Services Face API, in which case the app will pull up the existing profile, or it will create a new profile by recording the audio of the conversation. Using IBM Watson's AlchemyLanguage API, we analyze this data for any relevant keywords, using these as keyword topics to be stored in that person's profile. The user can use this information to more easily recall the person the next time that person is seen.
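A sketch of what the Raspberry Pi side of the Mnemonic flow above could look like: a button press captures three photos and sends them to the server. The GPIO pin, file paths, and server URL are placeholders, not the project's actual values.

```python
import time

import requests
from gpiozero import Button
from picamera import PiCamera

SERVER_URL = "https://example.com/api/encounter"  # placeholder Linode endpoint
button = Button(17)   # assumed GPIO pin for the breadboard button
camera = PiCamera()

while True:
    button.wait_for_press()
    paths = []
    for i in range(3):
        path = f"/tmp/photo_{i}.jpg"
        camera.capture(path)   # take three photos of the person being met
        paths.append(path)
        time.sleep(0.5)
    files = {f"photo_{i}": open(p, "rb") for i, p in enumerate(paths)}
    # Server decides whether to match an existing profile or start a new one.
    requests.post(SERVER_URL, files=files, data={"timestamp": time.time()})
```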
## Inspiration Bullying is an issue prevalent worldwide - regardless of race, age or gender. Having seen it up close in our daily school lives, yet having done nothing about it, we decided to take a stand and try to tackle this issue using the skills at our disposal. We don't believe that bullies always deserve punishment - instead, we should reach out to them and help them overcome whatever reasons may be causing them to bully. Because of this, we decided to implement both a short-term as well as a long-term solution. ## What it does No Duckling Is Ugly is an IoT system that listens to conversations by students in classrooms, performs real-time sentiment analysis on their interactions and displays the most recent and relevant bullying events, identifying the students involved in the interaction. In the short run, teachers are able to see in real time when bullying occurs and intervene if necessary - in the long run, data on such events is collected and displayed in a user-friendly manner, to help teachers decide on how to guide their class down the healthiest and most peaceful path. ## How we built it Hardware: We used Qualcomm Dragonboard 410c boards to serve as the listening IoT device, and soldered analog microphones onto them (the boards did not come with built-in microphones). Software: We used PyAudio and webrtcvad to read a constant stream of audio and break it into chunks to perform processing on. We then used Google Speech Recognition to convert this speech to text, and performed sentiment analysis using the Perspective API to determine the toxicity of the statement. If toxic, we use Microsoft's Cognitive Services API to determine who said the statement, and use Express to create a REST API, which finally interfaces with the MongoDB Stitch service to store the relevant data. ## Challenges we ran into 1. **Audio encoding PCM** - The speaker recognition service we use requires audio input in a specific PCM format, with a 16K sampling rate and 16-bit encoding. Figuring out how to convert real-time audio to this format was a challenge. 2. **No mic on Dragonboard** - The boards we were provided with didn't come with onboard microphones, and the GPIO pins seemed to be dysfunctional, so we ended up soldering the mics directly onto the board after analyzing the chip architecture. 3. **Integrating MongoDB Stitch with Python and Angular** - MongoDB Stitch does not have an SDK for either Python or Angular, so we had to create a middleman service (using Express) based on Node.js to act as a REST API, handling requests from Python and interfacing with MongoDB. 4. **Handling streaming audio** - None of the services we used supported continuously streaming audio, so we had to determine how exactly to split up the audio into chunks. We eventually used webrtcvad to detect if voices were present in the frames being recorded, creating temporary WAV files with the necessary encodings to send to the necessary APIs. ## Accomplishments that we're proud of Being able to work together and distribute work effectively. This project had too many components to seem feasible in 36 hours, especially taking into account the obstacles we faced, yet we are proud that we managed to implement a working project in this time.
Not only did we create something we were passionate about, we also managed to create something that will hopefully help people. ## What we learned We had never worked with any of the technologies used in this project before, except AngularJS and Python - we learned how to use a Dragonboard, how to set up a MongoDB Stitch service, as well as audio formatting and detection. Most of all, we learned how to work together well as a team. ## What's next for No Duckling Is Ugly The applications for this technology reach far beyond the classroom - in the future, this could even be applied to detecting crimes happening in real time, prompting faster police response times and potentially saving lives. The possibilities are endless.
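A minimal sketch of the voice-activity-detection step mentioned in the write-up above, using webrtcvad to keep only voiced 30 ms frames from a 16 kHz, 16-bit mono PCM stream before speech-to-text and toxicity analysis. The frame size and aggressiveness setting are assumptions.

```python
import webrtcvad

SAMPLE_RATE = 16000
FRAME_MS = 30
FRAME_BYTES = int(SAMPLE_RATE * FRAME_MS / 1000) * 2  # 2 bytes per 16-bit sample

vad = webrtcvad.Vad(2)  # aggressiveness 0-3; 2 is an assumed middle ground

def voiced_frames(pcm: bytes):
    """Yield the 30 ms frames of raw PCM audio that contain speech."""
    for start in range(0, len(pcm) - FRAME_BYTES + 1, FRAME_BYTES):
        frame = pcm[start:start + FRAME_BYTES]
        if vad.is_speech(frame, SAMPLE_RATE):
            yield frame
```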
## Inspiration 💡 We embarked on this project with the goal of creating a platform to ease and improve teaching from elementary to high school. The inspiration stemmed from Liam's relatives who work in education and shared their struggle to keep track of their students' performances. We saw an opportunity to ameliorate the teacher's experience, as well as the student's pace of learning :) ## What it does 🔍 Folio provides a complete overview of the classroom, where the instructor can click and view each student's information, attendance, the instructor's previous comments, perceived strengths and areas of improvement, special needs, etc. ## How we built it 🦾 **Brainstorming & Planning:** We started by outlining the project requirements, features, and data structure for the MVP. This helped in creating a roadmap for development, and we split the team into distinct roles: two people on the backend and two on the design and frontend side. **Development:** We implemented the project using JavaScript and Node.js with Express for Folio's data, and previewed and built the UI using Figma, HTML, CSS, and React. **Integration & Testing:** Rigorous testing was performed to identify and resolve bugs. ## Challenges we ran into 🧠 Connecting the backend to the frontend was not a familiar maneuver to any of us: thankfully our wonderful mentors were here to walk us through it! ## Accomplishments that we're proud of 🌟 **This is our first Hackathon!!** * Learning backend from scratch, and implementing it by following a single video tutorial * Learning to use Figma and VSCode the same day * Connecting the database to our frontend platform successfully ✨ ## What we learned ✍️ Building a functional MVP is better than coming up with a perfectly-designed, overly ambitious project that doesn't work! ## What's next for Folio ❓ Our MVP can be complemented in a number of ways. Our vision involves these 3 immediate steps: * Building the student interface * Adding a function to assign students to randomly generated groups, and implementing a visual representation of it * Generating graphs and charts for the teacher to visualize the cohort's & individual performances
partial
## 💡 Inspiration * Typically, AI applications assist users in a request/response fashion, where users ask a question and the AI provides an answer. However, I wanted to create an experience where the **user becomes the AI**. * The idea for Speaktree AI was born from a desire to help people improve their public speaking and communication skills. I recognized that many individuals struggle with articulating their thoughts clearly and confidently. * With Speaktree AI, as you speak, the app prompts questions and provides real-time suggestions to help build and improve your responses dynamically, while gathering AI-driven analytics. ## 🚀 What it does * As you speak, Speaktree provides real-time suggestions to help you build and improve your answers. * The app also offers detailed analytics on your speech, helping you refine your communication skills. * Powered by AWS Bedrock, Lambda, and API Gateway, Speaktree AI delivers an interactive and seamless experience on a native iOS Swift app. ## 🛠️ How I built it * **Backend**: Utilized AWS Bedrock for the model and exposed an API endpoint using AWS Lambda and API Gateway. Also integrated the OpenAI API. This allows the user to select from a host of models and pick the one they like best. * **Frontend**: Built the mobile app natively using Xcode, Swift, and SwiftUI, ensuring a seamless and responsive user interface. Integrated Face ID for authentication. ## 🏃 Challenges I ran into * **Low-Latency Speech-to-Text**: Ensuring real-time speech processing was a challenge. Instead of using a speech-to-text API, which was too slow, I utilized the native iOS SFSpeechRecognizer, which processes speech on-device quickly and efficiently. This significantly improved the app's ability to provide real-time suggestions without lag. ## 🎉 Accomplishments that I'm proud of * **Fast Model Querying to Support Real-Time Assistance**: One of the biggest challenges was ensuring the app could process speech and provide suggestions in real time without lag. I solved this by implementing dynamic model selection (**made efficient using Bedrock**), allowing users to choose the most suitable model based on the contents of their conversation, as different language models excel at different tasks. This ensures quick and relevant feedback tailored to the user's needs. ## 📚 What I learned * Throughout the development of Speaktree AI, I realized that different language models excel at different tasks. This understanding was crucial in designing the app to leverage the strengths of various models effectively. Additionally, I discovered the **power of AWS Bedrock** and its ecosystem in enabling efficient utilization of these different models within the app. ## 🔮 What's next for Speaktree AI: Real-time Speech Enhancement * **Audio Suggestions**: Provide real-time suggestions via audio so users don’t have to read while speaking, helping them build confidence during in-person meetings and presentations. * **Gamified Learning**: Introduce a more game-like mode with scores and levels to make learning engaging and fun, encouraging users to improve their speaking skills through challenges and rewards. * **Empathy Detection**: Integrate empathy detection to provide users with more specific insights on their tone and emotional delivery, helping them communicate more effectively and empathetically. * **Progress Tracking**: Persist users' previous conversations and measure their improvement over time, providing detailed progress reports and personalized recommendations for further development.
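A hedged sketch of what the Lambda-plus-Bedrock backend described above might look like. The model ID, the request/response schema (shown here in the Amazon Titan style), and the event format are assumptions; each Bedrock model family expects its own body shape.

```python
import json

import boto3

bedrock = boto3.client("bedrock-runtime")

def handler(event, context):
    # API Gateway proxy integration delivers the request body as a JSON string.
    transcript = json.loads(event["body"])["transcript"]
    prompt = f"Suggest how to continue this spoken answer: {transcript}"

    response = bedrock.invoke_model(
        modelId="amazon.titan-text-express-v1",   # assumed model choice
        contentType="application/json",
        accept="application/json",
        body=json.dumps({"inputText": prompt}),
    )
    payload = json.loads(response["body"].read())
    suggestion = payload["results"][0]["outputText"]  # Titan-style response shape

    return {"statusCode": 200, "body": json.dumps({"suggestion": suggestion})}
```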
## Inspiration The availability of very expensive, high-quality VR equipment and the realization that this could be used to help disabled or immobilized people experience a realistic simulation. ## Main Features The high-level purpose of our system is to allow disabled or immobilized users to take advantage of virtual reality in order to experience mobility in any chosen environment. The Oculus Rift, a head-mounted virtual reality device, allows our users to be fully immersed in the robot's virtual environment. The left thumb stick of an Xbox One controller is used to yaw the robot's camera, or move it forwards or backwards to explore more uncharted territories. ## How it was built The robot was built using a Qualcomm Dragon Board 410c, an Oculus Rift VR headset along with its software APIs, an NRF wireless transceiver, and an Arduino Uno. We used Wi-Fi to allow data to flow between the Qualcomm board and the Oculus VR headset, with the headset connected to the local Wi-Fi network. We used a basic chassis for the prototype of the robot, using DC motors for driving and servo motors for yawing the camera on the robot. ## Challenges We Ran Into We encountered several bugs and errors when trying to establish a Wi-Fi connection between the Arduino and the NRF transceiver. ## What We learned Interfacing the Qualcomm Dragon Board with VR systems and then using the Arduino to send data from cameras to the VR system. See our project on [Hackster](https://www.hackster.io/137869/experiencing-mobile-robotics-in-vr-78a52f)! ## What's next for VRobot More elaborate computer vision on the robot and a wider range of VR environments.
## Inspiration <https://www.youtube.com/watch?v=lxuOxQzDN3Y> Robbie's story stood out to me as an example of the limitations people still face with technology. He was diagnosed with muscular dystrophy, which prevented him from having full control of his arms and legs. He was gifted a Google Home that turned his home into a voice-controlled machine. We wanted to take this a step further and make computers more accessible for people such as Robbie. ## What it does We use a Google Cloud based API that helps us detect words and phrases captured from the microphone input. We then convert those phrases into commands for the computer to execute. Since the Python script is run in the terminal, it can be used across the computer and all its applications. ## How I built it The first (and hardest) step was figuring out how to leverage Google's API to our advantage. We knew it was able to detect words from an audio file, but there was more to this project than that. We started piecing together libraries to get access to the microphone, file system, keyboard and mouse events, cursor x,y coordinates, and so much more. We built a large (~30 function) library that could be used to control almost anything in the computer. ## Challenges I ran into Configuring the many libraries took a lot of time, especially with compatibility issues between Mac and Windows, Python 2 vs. 3, etc. Many of our challenges were solved by either thinking of a better solution or asking people on forums like StackOverflow. For example, we wanted to change the volume of the computer using the fn+arrow key shortcut, but Python is not allowed to access that key. ## Accomplishments that I'm proud of We are proud of the fact that we built an alpha version of an application we intend to keep developing, because we believe in its real-world applications. From a technical perspective, I was also proud of the fact that we were able to successfully use a Google Cloud API. ## What I learned We learned a lot about how the machine interacts with different events in the computer and the time dependencies involved. We also learned about the ease of use of a Google API, which encourages us to use more, and to encourage others to do so, too. Also, we learned about the different nuances of speech detection, like how to tell the API to pick the word "one" over "won" in certain contexts, how to change a "one" to a "1", or how to reduce ambient noise. ## What's next for Speech Computer Control At the moment we are manually running this script through the command line, but ideally we would want a more user-friendly experience (GUI). Additionally, we had developed a Chrome extension that numbers off each link on a page after a Google or YouTube search query, so that we would be able to say something like "jump to link 4". We were unable to get the web-to-python code just right, but we plan on implementing it in the near future.
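A minimal sketch of the listen-and-execute loop described above, matching recognized phrases against a small command table. The specific commands and shortcuts are examples, not the project's full ~30-function library.

```python
import speech_recognition as sr
import pyautogui

# Example phrase-to-action table; the real library covers far more of the OS.
COMMANDS = {
    "scroll down": lambda: pyautogui.scroll(-500),
    "scroll up": lambda: pyautogui.scroll(500),
    "click": pyautogui.click,
    "new tab": lambda: pyautogui.hotkey("command", "t"),  # macOS shortcut
}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    while True:
        audio = recognizer.listen(source)
        try:
            phrase = recognizer.recognize_google(audio).lower()
        except sr.UnknownValueError:
            continue  # could not understand the audio; keep listening
        action = COMMANDS.get(phrase)
        if action:
            action()
```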
losing
## Inspiration On the bus ride to another hackathon, one of our teammates was trying to get some sleep, but was having trouble because of how complex and loud the sound of people in the bus was. This led to the idea that in a sufficiently noisy environment, hearing could be just as descriptive and rich as seeing. Therefore, to better enable people with visual impairments to navigate and understand their environment, we created a piece of software that is able to describe and create an auditory map of one's environment. ## What it does In a sentence, it uses machine vision to give individuals a kind of echolocation. More specifically, one simply needs to hold their cell phone up, and the software will work to guide them using a 3D auditory map. The video feed is streamed over to a server where our modified version of the YOLO9000 classification convolutional neural network identifies and localizes the objects of interest within the image. It will then return the position and name of each object back to one's phone. It also uses the IBM Watson API to further augment its readings by validating what objects are actually in the scene, and whether or not they have been misclassified. From here, we make it seem as though each object essentially says its own name, so that the individual can create a spatial map of their environment just through audio cues. The sounds get quieter the further away the objects are, and the ratio of sound between the left and right is also varied as the object moves around the user. The phone also records its orientation, and remembers where past objects were for a few seconds afterwards, even if it is no longer seeing them. However, we also thought about where in everyday life you would want extra detail, and one aspect that stood out to us was faces. Generally, people use specific details on an individual's face to recognize them, so using Microsoft's face recognition API, we added a feature that will allow our system to identify and follow friends and family by name. All one has to do is set up their face as a recognizable face, and they are now their own identifiable feature in one's personal system. ## What's next for SoundSight This system could easily be further augmented with voice recognition and processing software that would allow for feedback and a much more natural experience. It could also be paired with a simple infrared imaging camera to be used to navigate during the night time, making it universally usable. A final idea for future improvement could be to further enhance the machine vision of the system, thereby maximizing its overall effectiveness.
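An illustrative version of the spatial-audio mapping described above: volume falls off with distance and the left/right balance follows the object's horizontal angle relative to the phone. The attenuation curve and constants are assumptions, not the project's tuned values.

```python
import math

def spatialize(distance_m: float, angle_deg: float):
    """Return (left_gain, right_gain) in the range 0..1 for one detected object."""
    loudness = 1.0 / (1.0 + distance_m)          # farther objects are quieter
    pan = math.sin(math.radians(angle_deg))      # -1 = far left, +1 = far right
    left = loudness * (1.0 - pan) / 2.0
    right = loudness * (1.0 + pan) / 2.0
    return left, right

print(spatialize(2.0, -45))  # object to the left: louder in the left ear
print(spatialize(2.0, 45))   # object to the right: louder in the right ear
```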
## Inspiration Our inspiration comes from the 217 million people in the world who have moderate to severe vision impairment, and the 36 million people who are fully blind. In our modern society, with all its comforts, it is easy to forget that there are so many people who do not have the same luxuries as us. It is unthinkably difficult for these visually impaired individuals to navigate everyday life and activities. We believe that the new technology of this era presents a potential solution to this issue. ## What it does InsightAI detects the location and size of common objects in real time. This data feeds our novel 3D audio spatialization algorithm, which, in turn, powers our Augmented Reality audio system. This system communicates the location of said objects to the user and allows for the formulation of a mental heatmap of the world. All of this is done through just a conventional mobile smartphone and headphones. This process can be terminated simply using our intuitive haptic user experience (so that it is accessible for those with vision impairments). It also supports multiple languages in order for the project to be scalable to other countries and cultures. ## How we built it We used TensorFlow.js for the real-time object detection. It uses a Single Shot MultiBox Detection (SSD) model trained on the COCO dataset, with 90 object classes and 330,000 images. We then convert the object(s) into an audio signal via a text-to-speech algorithm with natural language synthesis that supports multiple languages. We then used a custom algorithm to effectively deliver the AR audio to the user’s audio device, in such a manner that the user can understand the location of the indicated object. In order to properly interface with the visually impaired, we focused on minimalistic and intuitive audio-first design principles to facilitate usage by the intended audience. Finally, we hosted the entire web app on Zeit to allow it to be accessible to everyone. ## How does the augmented reality (AR) sound system work? The sound is outputted binaurally through the Web Audio API. This means that we play each headphone or earbud differently, based on the location of the object. The differentiation in the sound is determined by our algorithm. You can think of our algorithm as a program that creates a mental audio heatmap of the world around the user. Because of this immersive system, the user can very intuitively locate objects. ## Challenges we ran into There were a multitude of bugs, which were eventually solved through discussion and collaboration. One such bug was that the audio was quite slow and did not match the rate of object detection, because we were downloading the audio snippet from an external source for every frame. We found a solution to this problem by downloading the files locally and playing the files corresponding to the objects detected. Additionally, we ran into many issues pertaining to getting the TensorFlow.js model to work on mobile instead of desktop. ## Accomplishments that we're proud of and what we learned We are proud that we learned how to use TensorFlow.js to recognize many objects in real time, as this was one of our first projects that used live ML, and we are very proud of how it turned out. We also learned how to use the Web Audio API and created a surround-sound left and right channel system using headphones. Further, this was one of our first projects to integrate AR.
## What's next for InsightAI We will definitely be updating our project in the future to support more functionality. For example, optical character recognition and facial recognition could be used to greatly improve the everyday lives of the visually impaired. Imagine if the blind could immediately recognize people they knew through such a system. An integrated OCR system would open up the possibility for writing unaccompanied by braille to be understood by the impaired, allowing for much easier navigation of everyday life. Our app is also very capable of scaling up to multiple different languages.
## Inspiration Giving visually impaired people the opportunity to experience their surroundings like never before ## What it does Essentially, we attach a camera and sensor onto a hat which, when worn by a visually impaired person, allows them to recognize objects faster and sense obstructions from a certain distance away. ## How we built it This was built using Python's OpenCV, NumPy, speech\_recognition, PyAudio, ALSA mixer, and eSpeak packages, along with an Arduino. ## Challenges we ran into One of the major challenges was setting up OpenCV. Others included setting up the DragonBoard with Debian Linux, setting up the speech-to-text recognition, and setting up Bluetooth from the command line. ## Accomplishments that we're proud of Successfully setting up the hardware and making the speech-to-text recognition work. ## What we learned YOLO and its uses ## What's next for realeyes Hopefully make a more feasible and powerful product to make an impact.
partial
## 🤔 Problem Statement * 55 million people worldwide struggle to engage with their past memories effectively (World Health Organization) and 40% of us will experience some form of memory loss (Alzheimer's Society of Canada). This widespread struggle with nostalgia emphasizes the critical need for user-friendly solutions. Utilizing modern technology to support reminiscence therapy and enhance cognitive stimulation in this population is essential. ## 💡 Inspiration * Alarming statistics from organizations like the Alzheimer's Society of Canada and the World Health Organization motivated us. * Desire to create a solution to assist individuals experiencing memory loss and dementia. * Urge to build a machine learning and computer vision project to test our skillsets. ## 🤖 What it does * DementiaBuddy offers personalized support for individuals with dementia symptoms. * Integrates machine learning, computer vision, and natural language processing technologies. * Facilitates face recognition, memory recording, transcription, summarization, and conversation. * Helps users stay grounded, recall memories, and manage symptoms effectively. ## 🧠 How we built it * Backend developed using Python libraries including OpenCV, TensorFlow, and PyTorch. * Integration with Supabase for data storage. * Utilization of the Cohere Summarize API for text summarization. * Frontend built with Next.js, incorporating Voiceflow for chatbot functionality. ## 🧩 Challenges we ran into * Limited team size with only two initial members. * Late addition of two teammates on Saturday. * Required efficient communication, task prioritization, and adaptability, especially with such unique circumstances for our team. * Lack of experience in combining all these unfamiliar sponsor technologies, as well as limited frontend and full-stack abilities. ## 🏆 Accomplishments that we're proud of * Successful development of a functional prototype within the given timeframe. * Implementation of key features including face recognition and memory recording. * Integration of components into a cohesive system. ## 💻 What we learned * Enhanced skills in machine learning, computer vision, and natural language processing. * Improved project management, teamwork, and problem-solving abilities. * Deepened understanding of dementia care and human-centered design principles. ## 🚀 What's next for DementiaBuddy * Refining the face recognition algorithm for improved accuracy and scalability. * Expanding memory recording capabilities. * Enhancing the chatbot's conversational abilities. * Collaborating with healthcare professionals for validation and tailoring to diverse needs. ## 📈 Why DementiaBuddy? Aside from being considered for the Top 3 prizes, we worked really hard so that DementiaBuddy could be considered to win multiple sponsorship awards at this hackathon, including the Best Build with Co:Here, RBC's Retro-Revolution: Bridging Eras with Innovation Prize, Best Use of Auth0, Best Use of StarkNet, & Best .tech Domain Name. Our project stands out because we've successfully integrated multiple cutting-edge technologies to create a user-friendly and accessible platform for those with memory ailments. Here's how we've met each challenge: * 💫 Best Build with Co:Here: Dementia Buddy should win the Best Build with Cohere award because it uses Cohere's Summarize API to make remembering easier for people with memory issues. By summarizing long memories into shorter versions, it helps users connect with their past experiences better.
This simple and effective use of Cohere's technology shows how well the project is made and how it focuses on helping users. * 💫 RBC's Retro-Revolution - Bridging Eras with Innovation Prize: Dementia Buddy seamlessly combines nostalgia with modern technology, perfectly fitting the criteria of the RBC Bridging Eras prize. By updating the traditional photobook with dynamic video memories, it transforms the reminiscence experience, especially for individuals dealing with dementia and memory issues. Through leveraging advanced digital media tools, Dementia Buddy not only preserves cherished memories but also deepens emotional connections to the past. This innovative approach revitalizes traditional memory preservation methods, offering a valuable resource for stimulating cognitive function and improving overall well-being. * 💫 Best Use of Auth0: We successfully used Auth0's API within our Next.js frontend to help users log in and to ensure that our web app maintains a personalized experience for users. * 💫 Best .tech Domain Name: AMachineLearningProjectToHelpYouTakeATripDownMemoryLane.tech, I can't think of a better domain name. It perfectly describes our project.
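A short sketch of the memory-summarization step using Cohere's Summarize endpoint, roughly as described in the DementiaBuddy write-up above. The API key, parameters, and sample transcript are placeholders.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# Placeholder transcript of a recorded memory (the endpoint expects a
# reasonably long input text).
memory_transcript = (
    "Today my granddaughter Maya came to visit in the afternoon. We sat in the "
    "garden, looked through the photo album from the lake trip we took in 1998, "
    "and she told me all about her new job at the library downtown. Before she "
    "left we made plans to bake her mother's apple pie recipe together next Sunday."
)

response = co.summarize(
    text=memory_transcript,
    length="short",
    format="paragraph",
)
print(response.summary)  # condensed memory stored alongside the recording
```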
## Inspiration A big problem, most prevalent in cancer, is that people develop diseases and conditions but never get checked, because no one knows that they have one of these diseases until it is too late. The whole spectrum of neurodegenerative diseases matches this phenomenon. Diseases related to dementia have symptoms that are present long before the point at which someone typically suspects they have the disease. When shown the power of Alexa, the idea came to us that we could use Alexa, a product that is increasing in popularity especially amongst older generations, to apply predictive analytics and foresee potential diseases or conditions. With neurodegenerative diseases, the symptoms are usually changes in an individual's speaking patterns and habits. Hence, we believed that utilizing this along with machine learning would be an amazing way to predict the likelihood of these devastating conditions. ## What it does This Alexa skill allows for analysis of speech attributes such as pauses, repeated words, and unintelligible words, which are attributes of speech known for being among the first victims of neurodegenerative diseases like Alzheimer's. With this information, we can monitor significant worsening of speech over time, notify users that they have exhibited symptoms of Alzheimer's, and recommend that they see a doctor for a second opinion. ## How we built it Our project had many parts. The analytical part utilized Python and machine learning through scikit-learn to develop an algorithm, given a lot of training data, that predicts whether an individual has a form of dementia from speech-to-text transcripts. That input was received (somewhat) from Amazon Alexa, where she would be able to respond to user prompts. This was done through a combination of Node.js on the Amazon Developer platform and AWS Lambda. The combination of the two allows for an analytical mechanism to interact over time with individuals to predict their health and well-being; the use case for our project was predicting the presence of dementia within a certain confidence interval, along with having some cool health-related features integrated into our Alexa Skill. This is what constituted our project, Echo MyHealth. ## Challenges we ran into Because we aren’t Amazon, we could not create the dream implementation of the idea. Ideally, the machine learning algorithm could run passively upon any request without having to open the MyHealth skill. Instead, we created a proof of concept in the form of an Alexa Skill. ## Accomplishments that we're proud of Using the machine learning algorithm, we are able to predict instances of Alzheimer’s with an accuracy of 70 percent, with nothing but vocal recordings. On top of that, our team had no experience with JavaScript and limited machine learning experience. None of our team knew each other coming into the hackathon, and we created something super awesome that we enjoyed building. ## What we learned We learned so much, and it was really awesome. We all learned aspects of Node.js and JavaScript that none of us knew before. We were also exposed to creating Alexa Skills, which was so exciting because of the power developers have. ## What's next for Echo MyHealth We did a lot in 36 hours, but there was more we wanted to do. We wanted to further improve the interaction between our analytical half and our Alexa half.
Moreover, we want to be able to implement more features that can help us increase the accuracy of our machine learning algorithm. Lastly, we also would love to make Echo MyHealth a more immersive platform with more functionality given more time.
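A sketch of the analytical half described in the Echo MyHealth write-up above: hand-crafted speech features fed to a scikit-learn classifier. The feature set, the tiny training table, and the logistic-regression choice are illustrative assumptions, not the team's actual model or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [pauses_per_minute, repeated_words, unintelligible_words]
X_train = np.array([
    [2, 0, 0], [3, 1, 0], [4, 0, 1],     # transcripts labeled healthy (0)
    [9, 4, 3], [12, 5, 2], [10, 6, 4],   # transcripts labeled at-risk (1)
])
y_train = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X_train, y_train)

# Features extracted from a new speech-to-text transcript.
new_transcript_features = np.array([[8, 3, 2]])
risk = model.predict_proba(new_transcript_features)[0, 1]
print(f"Estimated probability of dementia-like speech patterns: {risk:.2f}")
```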
## Inspiration 2 days before flying to Hack the North, Darryl forgot his keys and spent the better part of an afternoon retracing his steps to find them. But what if there was a personal assistant that remembered everything for you? Memories should be made easier with the technologies we have today. ## What it does A camera records you as you go about your day-to-day life, storing "comic book strip" panels containing images and context of what you're doing as you go about your life. When you want to remember something you can ask out loud, and it'll use OpenAI's API to search through its "memories" to bring up the location, time, and what you were doing when you lost it. This can help with knowing where you placed your keys, whether you locked your door/garage, and other day-to-day matters. ## How we built it The React-based UI records using your webcam, screenshotting every second and stopping at the 9-second mark before creating a 3x3 comic image. This was done because single static images would not give enough context for certain scenarios, and we wanted to reduce the rate of API requests per image. After generating this image, it sends it to OpenAI's turbo vision model, which then gives contextualized info about the image. This info is then sent to our Express.js service hosted on Vercel, which in turn parses the data and sends it to Cloud Firestore (stored in a Firebase database). To re-access this data, the browser's built-in speech recognition is used along with the SpeechSynthesis API in order to communicate back and forth with the user. The user speaks, the dialogue is converted into text and processed by OpenAI, which then classifies it as either a search for an action or a search for an object. It then searches through the database and speaks out loud, giving information with a naturalized response. ## Challenges we ran into We originally planned on using a VR headset, webcam, NEST camera, or anything external with a camera, which we could attach to our bodies somehow. Unfortunately the hardware lottery didn't go our way; to combat this, we decided to make use of macOS's Continuity feature, using our iPhone camera connected to our MacBook as our primary input. ## Accomplishments that we're proud of As a two-person team, we're proud of how well we were able to work together and silo our tasks so they didn't interfere with each other. Also, this was Michelle's first time working with Express.js and Firebase, so we're proud of how fast we were able to learn! ## What we learned We learned about OpenAI's turbo vision API capabilities, how to work together as a team, and how to sleep effectively on a couch with very little sleep. ## What's next for ReCall: Memories done for you! We originally had a vision for people with amnesia and memory loss problems, where there would be a catalogue of the people they've met in the past to help them as they recover. However, we didn't have too much context on these health problems, and our scope was limited, so in the future we would like to implement a face recognition feature to help people remember their friends and family.
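A small sketch of the 3x3 comic-strip assembly step described in the ReCall write-up above, pasting nine screenshots into one grid image with Pillow before it is sent to the vision model. File names and tile size are assumptions.

```python
from PIL import Image

TILE_W, TILE_H = 320, 240  # assumed tile size for each one-second screenshot

# Load the nine captured frames (placeholder file names).
frames = [Image.open(f"frame_{i}.jpg").resize((TILE_W, TILE_H)) for i in range(9)]

grid = Image.new("RGB", (TILE_W * 3, TILE_H * 3))
for idx, frame in enumerate(frames):
    row, col = divmod(idx, 3)
    grid.paste(frame, (col * TILE_W, row * TILE_H))

grid.save("comic_panel.jpg")  # this composite is what the vision model describes
```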
partial
## Inspiration The Raspberry Pi 4 is a great, cheap device for hacking. It's discreet yet powerful enough to run most hacking tools found on Kali Linux. For this hack, I wanted to test out how the Aircrack-ng tools would work on the RPI4. ## What it does The hack captures a 4-Way Handshake and uses it to brute-force its way into a Wi-Fi network that uses WPA/WPA2 authentication, using a password list. ## What we learned A big takeaway from this was the realization that nothing is as secure as we think it is; there'll always be an exploit that lets you in. It was also great using an RPI4 to test out hacking tools and not having to use a VM, which can lead to some unexpected behavior. ## What's next for Wifi Hacking with a RPI4 I would like to buy the accessories that can make this hack 100% portable, such as an LCD screen and a power supply, so that the hack could be moved around and wouldn't be limited by the big monitor I was using.
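A sketch of driving the final dictionary-attack step from Python on the Pi, assuming a handshake capture file, the target BSSID, and a wordlist are already on disk; capturing the handshake itself happens beforehand with the usual airodump-ng/aireplay-ng workflow. All file names and the BSSID are placeholders.

```python
import subprocess

result = subprocess.run(
    [
        "aircrack-ng",
        "-w", "wordlist.txt",        # password list to try against the handshake
        "-b", "AA:BB:CC:DD:EE:FF",   # BSSID of the target network (placeholder)
        "capture.cap",               # file containing the captured 4-way handshake
    ],
    capture_output=True,
    text=True,
)
print(result.stdout)  # aircrack-ng reports "KEY FOUND!" if the password is in the list
```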
## Inspiration While waiting for the Hack the North bus after flying in from California, we ended up talking to ~10 other contestants. But during our bus ride, we realized that we'd forgotten almost everybody's name, and since we hadn't gotten their LinkedIns, we effectively had no way to stay in touch. Realizing that the next three days were going to involve meeting hundreds of new people, we decided to make the InPin to have an easy way to connect with all of them. ## What it does Hang the InPin on your shirt and hit a button before you talk to somebody. The InPin focuses on key details like name, location, and place of work, while allowing you to focus on real conversation. As you walk away, hit the button again to stop listening, and a fully automated and personalized connection request is sent straight to the person you just talked to. Look back at all the cool people you've connected with on the InPin Memories app. (And for those of you concerned with security, we promise we never save audio or video.) ## How we built it Hardware: Raspberry Pi 4, Custom 3D Printed Casing, Breadboard, Button, Jumper Wires, DRW Sponsor Battery Pack (Rechargeable Power Source), some Superglue and Duct Tape Software: OpenAI, Cohere (robust query generation), Convex (InPin Memories app), Python, Flask, NextJS ## Challenges we ran into This was our first time building a hardware hack, and we had no shortage of issues with everything from formatting our MicroSD card to wiring our breadboard. Most notably, from around 8 am on Saturday to 4 am on Sunday we spent almost all of our time trying to SSH into our Raspberry Pi. ## Accomplishments that we're proud of As first-time hardware hackers, holding our physical product in our hands is really a crazy feeling. We're proud that we stuck with the InPin through hours of failures and, most importantly, we're proud of the fact that we definitely made the trip up from Cali worth it. ## What we learned Try new things! InPin wouldn't have been a thing if we didn't decide to take the chance and build some hardware. We had our ups and downs, but this experience definitely taught us to be adventurous and step out of our comfort zone. ## What's next for InPin Make Memories searchable (for example, you'd be able to look up something like "who was that super interesting guy from Cali working on an automated connection wearable?"). Lip reading to make audio processing more robust in noisy environments. Make the InPin smaller, cheaper, and more polished. The prototype we have right now is more of a proof of concept than anything, but we'd love to see this taken to a point where people can, for example, be handed InPins at the start of a conference or networking event.
## Inspiration Ideas for interactions from: * <http://paperprograms.org/> * <http://dynamicland.org/> but I wanted to go from the existing computer down, rather than from the bottom up, and make something that was a twist on the existing desktop: Web browser, Terminal, chat apps, keyboard, windows. ## What it does Maps your Mac desktop windows onto pieces of paper + tracks a keyboard and lets you focus on whichever one is closest to the keyboard. Goal is to make something you might use day-to-day as a full computer. ## How I built it A webcam and pico projector mounted above the desk + OpenCV doing basic computer vision to find all the pieces of paper and the keyboard. ## Challenges I ran into * Reliable tracking under different light conditions. * Feedback effects from projected light. * Tracking the keyboard reliably. * Hooking into macOS to control window focus ## Accomplishments that I'm proud of Learning some CV stuff, simplifying the pipelines I saw online by a lot and getting better performance (binary thresholds are great), getting a surprisingly usable system. Cool emergent things like combining pieces of paper + the side ideas I mention below. ## What I learned Some interesting side ideas here: * Playing with the calibrated camera is fun on its own; you can render it in place and get a cool ghost effect * Would be fun to use a deep learning thing to identify and compute with arbitrary objects ## What's next for Computertop Desk * Pointing tool (laser pointer?) * More robust CV pipeline? Machine learning? * Optimizations: run stuff on GPU, cut latency down, improve throughput * More 'multiplayer' stuff: arbitrary rotations of pages, multiple keyboards at once
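A sketch of the kind of threshold-and-contours paper finder hinted at above ("binary thresholds are great"). The threshold value and area cutoff are assumptions; the real pipeline also handles camera/projector calibration and keyboard tracking.

```python
import cv2

frame = cv2.imread("desk.jpg")  # placeholder camera frame of the desk
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)  # bright paper vs. dark desk

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
papers = []
for c in contours:
    if cv2.contourArea(c) < 5000:   # ignore small bright specks
        continue
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    if len(approx) == 4:            # four corners -> probably a sheet of paper
        papers.append(approx.reshape(4, 2))

print(f"Found {len(papers)} paper-like regions")
```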
losing
## Inspiration It's lunchtime, and you are looking for somewhere to eat, so you open Yelp and look for recommendations. After scrolling through many pages, you are overwhelmed by the number of restaurants around you and can't decide where to eat, so you end up going to the fast food restaurant you always go to. We've all been there. What others like may not be what you like, but you also do not want to waste time entering all your preferences in the app. But wait, you always like photos on social media, so shouldn't your phone know what you like already? ## What it does Doko collects data about the food/restaurant photos the user has liked on social media, and the next time the user passes by that location, we notify them. The user can also see restaurants around them in a convenient map view. Since the user has already shown interest in these restaurants, we are confident in our recommendations. ## How we built it We used the Twitter API to query tweets a certain user has liked every 10 seconds. The backend is written in Python, serving as the API connecting MongoDB and iOS. ## Challenges we ran into Getting trapped by the MongoDB Stitch iOS SDK. It took us nearly 2 hours to find out the issue in our project (the documentation was also unclear) after reading its source code. ## Accomplishments that we're proud of ## What we learned ## What's next for Doko Our first step will be to support social media other than Twitter. Then we can include additional features such as restaurant recommendations using machine learning algorithms, or making a reservation within the app.
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
## Inspiration Nicotine addiction is an epidemic that is sweeping the nation, affecting millions, due to the growing popularity of "E-Cigarettes." This has affected over 50 million people in the U.S. alone, including our friends, family, and loved ones. 90% of these users started smoking before reaching 20 years old. Since the outbreak of trendy E-Cig companies like JUUL and Suorin, the number of high schoolers smoking spiked by 71% in just 2 years. With this problem heavily affecting our generation and generations to come, we want to propose HelpingHand, a solution that could help mitigate the detrimental effects we noticed. ## What it does HelpingHand aims to combat this problem by utilizing FitBits to detect when drug abusers experience withdrawal symptoms throughout their rehabilitation process. HelpingHand emphasizes the importance of positive reinforcement, peer support, and community in helping those struggling to quit, actually quit. Our advanced technology detects when the drug abuser is experiencing stress, and therefore a high probability of nicotine relapse, and notifies their loved ones to provide a helping hand. ## How we built it Fitbit SDK for the Fitbit app and sending REST calls to the iOS app. Flutter and Dart for the iOS app and to receive REST calls. Express.JS for the Twilio API and communication between the Fitbit data on the phone and Firebase. Firebase for an intermediary server through which our platforms can communicate. Java and org.json to create mock JSON containing timestamps, heart rate, and number of steps taken. And lots and lots of coffee! ## Challenges we ran into Due to our unfamiliarity with the Fitbit API, an initial challenge we ran into was figuring out how to collect the intraday time-series heart rate data from the Fitbit. Cross-platform communication between the Fitbit and our iOS Flutter app proved to be rather difficult, but once we overcame that hurdle things became much easier. Our team also spent a significant amount of time on research into smoking and its health effects, particularly the effect it has on heart rate and heart rate variability. Unfortunately, there is no publicly available dataset containing heart rate for individuals attempting to quit smoking. After interviewing people who have experienced nicotine withdrawals in the past, we discovered that high levels of anxiety will greatly increase a smoker's urge to relapse. It was from here that we began to think of the core aspects of HelpingHand and to start building the app. ## Accomplishments that we're proud of We're surprised at ourselves as to how much we all learned this weekend! There were many different APIs, frameworks, and pieces of hardware we weren't entirely comfortable with, and having the opportunity to challenge ourselves was a ton of fun. We're especially proud of the fact that we have been able to finish -- and get working -- the core, defining aspects of HelpingHand. Every one of the team members personally knows someone who has suffered or is currently suffering from nicotine addiction. *By leveraging existing technologies such as the Fitbit, we sincerely hope that HelpingHand will positively impact the substance abuse epidemic in the United States.* **However, HelpingHand's mission doesn't stop at PennApps. We're committed to extending the app to other substances such as opioids, as well as mental health with depression-related anxiety attacks.** ## What we learned We fiddled a ton with the Fitbit SDK, and learned how to make Fitbit apps!
Communication across the different platforms -- between Fitbit, our mobile app, Firebase, and Twilio was challenging but incredibly rewarding. In addition to the technology, a big takeaway from this is the immense knowledge we gained about drug abuse, rehabilitation, and recovery. From the personal stories we shared, to the case studies we discovered, this was truly a great opportunity to learn through real-world application. ## What's next for HelpingHand We would like to scale our app so that people across the world can get help coping with nicotine addiction. It would also enable for an expansive, online support community. We know "machine learning" gets tossed around a lot, but utilizing it to minimize false positives would be super promising. Moreover, only a portion of smart watch wearers use FitBit. An integration of HelpingHand to more health watches such as the Apple Watch would make our app more accessible. We also spent time debating on whether to tackle nicotine addiction or depression-related anxiety attacks during this weekend. While we opted for nicotine addiction, we believe extending HelpingHand to depression and other abused substances will open the door for many more applications.
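Since the HelpingHand team notes there is no public heart-rate dataset for nicotine withdrawal, a plausible first pass at "detect stress, then alert a contact" is a rolling-baseline threshold over the Fitbit heart-rate stream. The sketch below is purely illustrative: the one-minute window and the 20%-above-baseline rule are assumptions rather than the team's tuned logic, and `notify_contact` is a hypothetical stand-in for the Twilio step.

```python
# Illustrative stress-spike detector over a heart-rate series (beats per minute).
from collections import deque

def detect_spikes(samples, window=60, ratio=1.2):
    """Yield indices where heart rate exceeds `ratio` times the rolling baseline."""
    history = deque(maxlen=window)
    for i, bpm in enumerate(samples):
        if len(history) == window and bpm > ratio * (sum(history) / window):
            yield i
        history.append(bpm)

def notify_contact(index, bpm):
    # Hypothetical placeholder for the SMS / loved-one notification step.
    print(f"possible stress event at sample {index}: {bpm} bpm")

if __name__ == "__main__":
    resting = [72 + (i % 3) for i in range(120)]   # synthetic resting baseline
    episode = resting + [95, 101, 108, 112, 110]   # synthetic anxiety spike
    for idx in detect_spikes(episode):
        notify_contact(idx, episode[idx])
```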
winning
## Inspiration One of our team members was in the evacuation warning zone for the raging California fires in the Bay Area just a few weeks ago. Part of their family's preparation for this disaster included the tiresome, tedious, time-sensitive process of listing every item in their house for insurance claims in the event that it burned down. This process took upwards of 15 hours between 3 people working on it, and even then many items were missed and unaccounted for. Claim Cart is here to help! ## What it does Problems Solved (1) Families often have many belongings they don’t account for. It’s time-intensive and inconvenient to coordinate, maintain, and update extensive lists of household items. Listing mundane, forgotten items can potentially add thousands of dollars to their insurance. (2) Insurance companies have private master lists of the most commonly used items and what the cheapest viable replacements are. Families are losing out on thousands of dollars because their claims don’t state the actual brand or price of their items. For example, if a family listed “toaster”, they would get $5 (the cheapest alternative), but if they listed “stainless steel - high end toaster: $35” they might get $30 instead. Claim Cart has two main value propositions: time and money. It is significantly faster to take a picture of your items than to manually enter every object. It’s also more efficient for members to collaborate on making a family master list. ## Challenges I ran into Our team was split between 3 different time zones, so communication and coordination were a challenge! ## Accomplishments that I'm proud of For three of our members, PennApps was their first hackathon. It was a great experience building our first hack! ## What's next for Claim Cart In the future, we will make Claim Cart available to people on all platforms.
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
## Inspiration With the news of Hurricanes Irma and Harvey and the destructive wildfires ravaging western states, we learned that many people in crisis situations have difficulty coordinating supplies and the operating statuses of nearby stores and shelters. We decided that this would be a meaningful and realistic problem to tackle with the advent of crowdsourcing and mobile technologies. ## What it does The mobile app helps users find local resources such as food, gas, and shelters. On top of a map display, we've incorporated a tagging system to help keep track of the inventory, safety, and functionality of stores, gas stations, and shelters. Any user can contribute to the tagging system by adding new tags, and any tags they add are stored in the database and synced across the entire platform. This way, if a shelter is at full capacity, other users will instantly learn about it and not waste their efforts traveling there. ## How we built it We used React Native to build a mobile application for iOS and Android. For the entire development cycle we used Expo to test the app on our own Android and iOS devices. The app connects to an API provided by an Express app that is hosted on an nginx server running on AWS to access relevant data. We gathered this data through a combination of web scraping, the Foursquare API, and the Google Places API. We looked at several alternative APIs and determined that these were the best tools available to us. The data is stored in a Mongo database. ## Challenges we ran into The biggest challenge was finding datasets for grocery stores, supermarkets, gas stations, and shelters. We explored different options for APIs and datasets to use, such as NREL, Walmart's API, and the Red Cross website. However, the best by far were the Google Places API, Foursquare's API, and FEMA's MapService. One issue we ran into was that the FEMA website returned 404 errors for most of Friday night. We ended up writing an email to the Department of Homeland Security detailing the technical issue and they fixed it shortly after we sent it. ## Accomplishments that we're proud of Given our lack of hackathon experience, we are proud that we were able to come up with and execute on a practical yet novel solution to a topical issue. We were able to distribute tasks among the three of us to develop an entire React Native app built on a robust backend that allows users to view important resource data in real time. ## What we learned We learned how to effectively coordinate front-end and back-end work. More specifically, we learned how to develop a React Native app from scratch as well as build an API to expose specific endpoints to handle data delivery between the app and the database. ## What's next for Disaster Source We hope to bring this to the public. To do this, we will streamline the user experience and restructure the backend to scale appropriately. We also hope to find additional data of higher accuracy, and introduce weighted tags to better reflect the inventory of stores.
winning
# meetNYU A mobile app for NYU students to meet each other.
## Inspiration Modern meetup apps are too focused on "dating", so we wanted to create an app that helps students in the city make new friends on campus. The app gets rid of stalling over the time and place to meet: we implemented location tracking for each person so that when two people match, they can try to meet up immediately. ## What it does Users can log in to the app and fill out the criteria for the person they want to meet within a 1 km radius; the app then tries to find a student that matches the criteria. Once found, the students are able to see each other's locations and can easily meet up by looking at the pins on the map. ## How we built it Built using a Node.js backend with MongoDB on an AWS server; our front-end uses JavaScript and sends requests to communicate and get location data. ## Challenges we ran into Working with AWS and the Google Maps JavaScript API was very hard. The Google Maps API was surprisingly difficult to work with, and the functionality it provides is very limited. AWS also had a lot of bugs that we had to work around during setup; once we got past them, working with it was much easier. ## Accomplishments that we're proud of We were able to get the core functionality of the program to work: sharing location data and seeing other users on the map.
## Inspiration Students often have a hard time finding complementary co-founders for their ventures/ideas and have limited interaction with students from other universities. Many universities don't even have entrepreneurship centers to help facilitate the matching of co-founders. Furthermore, it is hard to seek validation on your ideas from a wide range of perspectives when your immediate network is just your university peers. ## What it does VenYard is a gamified platform that keeps users engaged and interested in entrepreneurship while building a community where students can search for co-founders across the world based on complementary skill sets and personas. VenYard’s collaboration features also extend to the ideation process, where students can seek feedback and validation on their ideas from students beyond their university. We want to give the same access to entrepreneurship and venture building to every student across the world so they can have the tools and support to change the world. ## How we built it We built VenYard using JS, HTML, CSS, Node.js, MySQL, and a lack of sleep! ## Challenges we ran into We had several database-related issues with the project submission page and the chat feature on each project dashboard. Furthermore, when clicking on a participant on a project's dashboard, we wanted their profile to be brought up, but we ran into database issues there; that is the first problem we hope to fix. ## Accomplishments that we're proud of For a pair of programmers who have horrible taste in design, we are proud of how this project turned out visually. We are also proud of how we have reached a point in our programming abilities where we are able to turn our ideas into reality! ## What we learned We were able to advance our knowledge of MySQL and JavaScript specifically. Aside from that, we were also able to practice pair programming by using the LiveShare extension in VSCode. ## What's next for VenYard We hope to expand the "Matching" feature by making it so that users can specify more criteria for what they want in the ideal co-founder. Additionally, we probably would have to take a look at the UI and make sure it's user-friendly because there are a few aspects that are still a little clunky. Lastly, the profile search feature needs to be redone because our initial idea of combining search and matching profiles doesn't make sense. ## User Credentials if you do not want to create an account username: [revantkantamneni@gmail.com](mailto:revantkantamneni@gmail.com) password: revant ## Submission Category Education and Social Good ## Discord Name revantk16#6733, nicholas#2124
losing
## Inspiration We've noticed that many educators draw common structures on boards, just to erase them and redraw them in common ways to portray something. Imagine your CS teacher drawing an array to show you how bubble sort works, and erasing elements for every swap. This learning experience can be optimized with AI. ## What It Does Our software recognizes digits drawn and digitizes the information. If you draw a list of numbers, it'll recognize it as an array and let you visualize bubble sort automatically. If you draw a pair of axes, it'll recognize this and let you write an equation that it will automatically graph. The voice assisted list operator allows one to execute the most commonly used list operation, "append" through voice alone. A typical use case would be a professor free to roam around the classroom and incorporate a more intimate learning experience, since edits need no longer be made by hand. ## How We Built It The digits are recognized using a neural network trained on the MNIST hand written digits data set. Our code scans the canvas to find digits written in one continuous stroke, puts bounding boxes on them and cuts them out, shrinks them to run through the neural network, and outputs the digit and location info to the results canvas. For the voice driven list operator, the backend server's written in Node.js/Express.js. It accepts voice commands through Bixby and sends them to Almond, which stores and updates the list in a remote server, and also in the web user interface. ## Challenges We Ran Into * The canvas was difficult to work with using JavaScript * It is unbelievably hard to test voice-driven applications amidst a room full of noisy hackers haha ## Accomplishments that We're Proud Of * Our software can accurately recognize digits and digitize the info! ## What We Learned * Almond's, like, *really* cool * Speech recognition has a long way to go, but is also quite impressive in its current form. ## What's Next for Super Smart Board * Recognizing trees and visualizing search algorithms * Recognizing structures commonly found in humanities classes and implementing operations for them * Leveraging Almond's unique capabilities to facilitate operations like inserting at a specific index and expanding uses to data structures besides lists * More robust error handling, in case the voice command is misinterpreted (as it often is) * Generating code to represent the changes made alongside the visual data structure representation
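The digit pipeline the Super Smart Board write-up describes (scan the canvas for strokes, put bounding boxes on them, shrink them, and run them through the MNIST-trained network) maps onto a handful of OpenCV calls. The sketch below assumes dark ink on a light canvas and leaves the classifier as a hypothetical `classify` call; it illustrates the extract-and-shrink step, not the project's exact code.

```python
# Sketch of the extract-and-shrink step before MNIST classification.
import cv2
import numpy as np

def extract_digit_patches(canvas_bgr, min_area=50):
    """Return (x, y, 28x28 patch) for each connected stroke on the canvas."""
    gray = cv2.cvtColor(canvas_bgr, cv2.COLOR_BGR2GRAY)
    # Assume dark ink on a light background; invert so strokes become white.
    _, ink = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(ink, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    patches = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h < min_area:
            continue
        crop = ink[y:y + h, x:x + w]
        side = max(w, h)
        square = np.zeros((side, side), dtype=np.uint8)  # pad to a square so the
        top, left = (side - h) // 2, (side - w) // 2      # digit keeps its aspect ratio
        square[top:top + h, left:left + w] = crop
        patches.append((x, y, cv2.resize(square, (28, 28)) / 255.0))
    return sorted(patches, key=lambda p: p[0])  # left-to-right reading order

# Usage with a trained model (loading it is out of scope here):
#   for x, y, patch in extract_digit_patches(canvas):
#       digit = classify(patch.reshape(1, 28, 28, 1))  # hypothetical model call
```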
## Inspiration Many of us had class sessions in which the teacher pulled out whiteboards or chalkboards and used them as a tool for teaching. These made classes very interactive and engaging. With the switch to virtual teaching, class interactivity has been harder. Usually a teacher just shares their screen and talks, and students can ask questions in the chat. We wanted to build something to bring back this childhood memory for students now and help classes be more engaging and encourage more students to attend, especially in the younger grades. ## What it does Our application creates an environment where teachers can engage students through the use of virtual whiteboards. There will be two available views, the teacher’s view and the student’s view. Each view will have a canvas that the corresponding user can draw on. The difference between the views is that the teacher’s view contains a list of all the students’ canvases while the students can only view the teacher’s canvas in addition to their own. An example use case for our application would be in a math class where the teacher can put a math problem on their canvas and students could show their work and solution on their own canvas. The teacher can then verify that the students are reaching the solution properly and can help students if they see that they are struggling. Students can follow along and when they want the teacher’s attention, click on the I’m Done button to notify the teacher. Teachers can see their boards and mark up anything they would want to. Teachers can also put students in groups and those students can share a whiteboard together to collaborate. ## How we built it * **Backend:** We used Socket.IO to handle the real-time update of the whiteboard. We also have a Firebase database to store the user accounts and details. * **Frontend:** We used React to create the application and Socket.IO to connect it to the backend. * **DevOps:** The server is hosted on Google App Engine and the frontend website is hosted on Firebase and redirected to Domain.com. ## Challenges we ran into Understanding and planning an architecture for the application. We went back and forth about if we needed a database or could handle all the information through Socket.IO. Displaying multiple canvases while maintaining the functionality was also an issue we faced. ## Accomplishments that we're proud of We successfully were able to display multiple canvases while maintaining the drawing functionality. This was also the first time we used Socket.IO and were successfully able to use it in our project. ## What we learned This was the first time we used Socket.IO to handle realtime database connections. We also learned how to create mouse strokes on a canvas in React. ## What's next for Lecturely This product can be useful even past digital schooling as it can save schools money as they would not have to purchase supplies. Thus it could benefit from building out more features. Currently, Lecturely doesn’t support audio but it would be on our roadmap. Thus, classes would still need to have another software also running to handle the audio communication.
## Inspiration Every time I talk to someone about board games, a few games always slip my mind because there are just so many good games to keep track of! If only there was a convenient place where I could access all of the board games that I own or am interested in. ## What it does Boardhoard allows you to access your library of games from anywhere! It conveniently gives you the ability to search for board games and add them to your library so that you can access them with ease whenever you want! ## How we built it We leveraged the versatility of React to create a beautiful UI and the depth of information from BoardGameGeek's API to provide the necessary information to display the games. We used Charles and Postman to generate queries and used Java and HTTP libraries to fetch sample data to test our implementation. ## Challenges we ran into BoardGameGeek's (BGG) API returns XML responses, which are not ideal. We found an alternative server that converted the responses to JSON, which we then used to populate our app. Another challenge involved fetching a complete catalogue of all games on BGG; it simply could not be done, so we had to come up with workarounds to fetch large amounts of data. We also had trouble implementing individual user databases. ## Accomplishments that we're proud of It worked! It was a great accomplishment that we were able to maintain code quality and styling throughout the project. ## What we learned Learned about the importance of setting proper headers and authorization on POST requests. Learned how to persevere and make something work when faced with a limited set of APIs. ## What's next for Boardhoard Add the ability to share your library with other people. Include more metadata in each game's details.
partial
## Inspiration No API? No problem! LLMs and AI have solidified the importance of text-based interaction -- APIcasso aims to harden this concept by simplifying the process of turning websites into structured APIs. ## What it does APIcasso has two primary functionalities: ✌️ 1. **Schema generation**: users provide a website URL and an empty JSON schema, which is automatically filled and returned by the Cohere AI backend as a well-structured API. It also generates a permanent URL for accessing the completed schema, allowing for easy reference and integration. 2. **Automation**: users provide a website URL and automation prompt, APIcasso returns an endpoint for the automation. For each JSON schema or automation requested, the user is prompted to pay for their token via ETH and Meta Mask. ## How we built it * The backend is Cohere-based and written in Python * The frontend is Next.js-based and paired with Tailwind CSS * Crypto integration (ETH) is done through Meta Mask ## Challenges we ran into We initially struggled with clearly defining our goal -- this idea has a lot of exciting potential projects/functionalities associated with it. It was difficult to pick just two -- (1) schema generation and (2) automation. ## Accomplishments that we're proud of Being able to properly integrate the frontend and backend despite working separately throughout the majority of the weekend. Integrating ETH verification. Working with Cohere (a platform that all of us were new to). Functioning on limited sleep. ## What we learned We learned a lot about the intricacies of working with real-time schema generation, creating dynamic and interactive UIs, and managing async operations for seamless frontend-backend communication. ## What's next for APIcasso If given extra time, we would plan to extend APIcasso’s capabilities by adding support for more complex API structures, expanding language support, and offering deeper integrations with developer tools and cloud platforms to enhance usability.
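APIcasso's schema-generation flow (a URL and an empty JSON schema in, a filled schema out) boils down to fetching the page, stripping it to text, and asking a language model to populate the schema. The sketch below keeps the model behind a caller-supplied `complete()` callable rather than guessing at the Cohere SDK; the prompt wording and helper names are assumptions, not the project's backend.

```python
# Sketch: fill an empty JSON schema from a web page via an LLM behind `complete()`.
import json
import re
import requests

def page_text(url: str, limit: int = 4000) -> str:
    """Fetch a page and crudely strip tags so it fits in a prompt."""
    html = requests.get(url, timeout=10).text
    text = re.sub(r"<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text)[:limit]

def fill_schema(url: str, empty_schema: dict, complete) -> dict:
    """`complete` is any callable mapping a prompt string to the model's reply."""
    prompt = (
        "Fill every field of this JSON schema using only facts from the page text. "
        "Reply with JSON only.\n\n"
        f"Schema: {json.dumps(empty_schema)}\n\nPage text: {page_text(url)}"
    )
    return json.loads(complete(prompt))  # caller handles malformed replies
```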
## Inspiration Data analytics can be **extremely** time-consuming. We strove to create a tool utilizing modern AI technology to generate analyses, such as trend recognition, on user-uploaded datasets. The inspiration behind our product stemmed from the growing complexity and volume of data in today's digital age. As businesses and organizations grapple with increasingly massive datasets, the need for efficient, accurate, and rapid data analysis became evident. We even saw this in the work of one of our sponsors, CapitalOne, which has volumes of financial transaction data that are very difficult to parse manually, or even programmatically. We recognized the frustration many professionals faced when dealing with cumbersome manual data analysis processes. By combining **advanced machine learning algorithms** with **user-friendly design**, we aimed to empower users from various domains to effortlessly extract valuable insights from their data. ## What it does On our website, a user can upload their data, generally in the form of a .csv file, which is then sent to our backend processes. These backend processes utilize Docker and MLBot to train an LLM which performs the proper data analyses. ## How we built it The front-end was very simple. We created the platform using Next.js and React.js and hosted it on Vercel. The back-end was created using Python, in which we employed technologies such as Docker and MLBot to perform data analyses as well as return charts, which were then processed on the front-end using ApexCharts.js. ## Challenges we ran into * It was one of our first times working in real time with multiple people on the same project. This advanced our understanding of how Git's features worked. * There was difficulty getting the Docker server to be publicly available to our front-end, since we had our server locally hosted on the back-end. * Even once it was publicly available, it was difficult to figure out how to actually connect it to the front-end. ## Accomplishments that we're proud of * We were able to create a full-fledged, functional product within the allotted time we were given. * We utilized our knowledge of how APIs work to incorporate multiple of them into our project. * We worked positively as a team even though we had not met each other before. ## What we learned * How to incorporate multiple APIs into one product with Next. * Learned a new tech stack. * Learned how to work simultaneously on the same product with multiple people. ## What's next for DataDaddy ### Short Term * Add more diverse applicability to different types of datasets and statistical analyses. * Add more compatibility with SQL/NoSQL commands from Natural Language. * Attend more hackathons :) ### Long Term * Minimize the amount of work workers need to do for their data analyses, almost creating a pipeline from data to results. * Have the product be able to interpret what type of data it has (e.g. financial, physical, etc.) to perform the most appropriate analyses.
## FLEX [Freelancing Linking Expertise Xchange] ## Inspiration Freelancers deserve a platform where they can fully showcase their skills, without worrying about high fees or delayed payments. Companies need fast, reliable access to talent with specific expertise to complete jobs efficiently. "FLEX" bridges the gap, enabling recruiters to instantly find top candidates through AI-powered conversations, ensuring the right fit, right away. ## What it does Clients talk to our AI, explaining the type of candidate they need and any specific skills they're looking for. As they speak, the AI highlights important keywords and asks any more factors that they would need with the candidate. This data is then analyzed and parsed through our vast database of Freelancers or the best matching candidates. The AI then talks back to the recruiter, showing the top candidates based on the recruiter’s requirements. Once the recruiter picks the right candidate, they can create a smart contract that’s securely stored and managed on the blockchain for transparent payments and agreements. ## How we built it We built starting with the Frontend using **Next.JS**, and deployed the entire application on **Terraform** for seamless scalability. For voice interaction, we integrated **Deepgram** to generate human-like voice and process recruiter inputs, which are then handled by **Fetch.ai**'s agents. These agents work in tandem: one agent interacts with **Flask** to analyze keywords from the recruiter's speech, another queries the **SingleStore** database, and the third handles communication with **Deepgram**. Using SingleStore's real-time data analysis and Full-Text Search, we find the best candidates based on factors provided by the client. For secure transactions, we utilized **SUI** blockchain, creating an agreement object once the recruiter posts a job. When a freelancer is selected and both parties reach an agreement, the object gets updated, and escrowed funds are released upon task completion—all through Smart Contracts developed in **Move**. We also used Flask and **Express.js** to manage backend and routing efficiently. ## Challenges we ran into We faced challenges integrating Fetch.ai agents for the first time, particularly with getting smooth communication between them. Learning Move for SUI and connecting smart contracts with the frontend also proved tricky. Setting up reliable Speech to Text was tough, as we struggled to control when voice input should stop. Despite these hurdles, we persevered and successfully developed this full stack application. ## Accomplishments that we're proud of We’re proud to have built a fully finished application while learning and implementing new technologies here at CalHacks. Successfully integrating blockchain and AI into a cohesive solution was a major achievement, especially given how cutting-edge both are. It’s exciting to create something that leverages the potential of these rapidly emerging technologies. ## What we learned We learned how to work with a range of new technologies, including SUI for blockchain transactions, Fetch.ai for agent communication, and SingleStore for real-time data analysis. We also gained experience with Deepgram for voice AI integration. ## What's next for FLEX Next, we plan to implement DAOs for conflict resolution, allowing decentralized governance to handle disputes between freelancers and clients. We also aim to launch on the SUI mainnet and conduct thorough testing to ensure scalability and performance.
partial
## Inspiration Ever had those long loading screens, you know, the ones where you can't use your screen? Well, now you can play a lite game of Guitar Hero on your keyboard while your game is loading! ## What it does Keys travel and light up; your job is to press them following a sick beat. Get combos and watch the keyboard light up into many colors. ## How we built it Corsair SDK and a lot of C++... an unhealthy amount. Analysis of sound waves and the Fourier transform done in Python. ## Challenges we ran into ## Accomplishments that we're proud of -Developed a small game engine from scratch leveraging all kinds of optimizations in C++. -Fully multi-threaded and asynchronous key state checking for the smoothest game experience. -Audio signal processing to procedurally generate a game using the sound waves of a song so that the patterns are on beat and grooovvyy. Usage of the Fast Fourier Transform for precise analysis. -Multimedia integration: the whole app, including music playing logic, gameplay, time synchronization, and keyboard control, is one desktop app. Works even in the background! ## What we learned All of us learned C++, this being the first time we used it. ## What's next for KeyTar Hero Get acquired by Corsair.
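The "sound waves and Fourier transform" step in the KeyTar Hero write-up, turning a song into on-beat key patterns, can be approximated with a spectral-flux onset detector: frame the audio, take the FFT magnitude per frame, and mark frames where the spectrum jumps. A NumPy sketch for a 16-bit WAV file follows; the frame size and threshold are assumptions, not the project's tuning.

```python
# Sketch: find onset times (seconds) in a 16-bit WAV via spectral flux.
import wave
import numpy as np

def onset_times(path, frame=2048, hop=512, k=1.5):
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        data = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
        if w.getnchannels() == 2:
            data = data.reshape(-1, 2).mean(axis=1)  # mix stereo down to mono
    samples = data.astype(np.float64)
    spectra = [np.abs(np.fft.rfft(samples[s:s + frame]))
               for s in range(0, len(samples) - frame, hop)]
    spectra = np.array(spectra)
    flux = np.maximum(np.diff(spectra, axis=0), 0).sum(axis=1)  # positive change only
    threshold = flux.mean() + k * flux.std()
    return [round(i * hop / rate, 3) for i in np.where(flux > threshold)[0]]

# onset_times("song.wav") -> e.g. [0.51, 1.02, 1.55, ...] to schedule key events
```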
## Inspiration Our initial hackathon idea was to do something with a game. We also realized that we all had a fond passion for music, and that's when we thought to ourselves: why aren't there more games that play around with the background music? That's when we started to brainstorm ideas on how the game could interact with the music, and soon all the details for what we now know as Music Dash started coming together. ## What it does The game uses the Spotify API to get a thorough audio analysis of the song that is playing in the background. For this demo the song chosen was "Mr. Blue Sky" by Electric Light Orchestra. Due to the time constraints, and to keep things simple, the factor used to determine how the game looked was the length of each bar. Every time a new bar starts, a new platform is created. You will notice this in the gameplay, where the new platforms appear synced up to the music. ## How we built it The development was done in Unity, using the GUI to manipulate the sprites and collision boxes, while Microsoft Visual Studio was used to program in C# and add additional functionality to the entities in Unity, such as the movement of the player and control over when the platforms were created. ## Challenges we ran into We encountered issues when trying to incorporate the Spotify API. The first was finding a way for the API to be accessed using C#, and thankfully NuGet was a good solution. The other major problem was syncing the music to the creation of the platforms, mainly the part of delaying the creation of the platforms. The WaitForSeconds function turned out to be quite complicated for those of us on the team who were new to C#, with keywords such as yield and IEnumerator being unfamiliar. However, through perseverance and research, these issues were resolved. ## Accomplishments that we're proud of We came up with a creative idea that hasn't been explored much in the mainstream gaming industry. ## What we learned The basics of Unity and game development, and how flexible a tool it can be with the help of assets and add-ons from the internet. ## What's next for Music Dash Using more complicated metrics from the music, such as the frequency, to determine more elements of the game such as frequency of enemies, the height of platforms, movement speed, etc. In general, we hope to make the game much more in harmony with the music.
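The bar-per-platform idea in Music Dash maps onto Spotify's audio-analysis endpoint, which returns a list of bars with start times and durations. A small Python sketch with the spotipy client is below, assuming client credentials are set in the environment; in the actual game these timings drove Unity coroutines rather than Python code.

```python
# Sketch: turn Spotify audio-analysis bars into platform spawn times (seconds).
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

def bar_schedule(track_id: str):
    sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())  # reads env vars
    analysis = sp.audio_analysis(track_id)
    # Each bar dict carries 'start' and 'duration' in seconds plus a 'confidence'.
    return [(bar["start"], bar["duration"]) for bar in analysis["bars"]]

# Example: spawn a platform whenever a new bar begins.
# for start, duration in bar_schedule("some_track_id"):
#     schedule_platform(at=start, width=duration)  # hypothetical game hook
```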
## Inspiration Pianos are usually heavy and expensive, so a keyboard made from cheap and portable materials like paper can be very useful. That's why we came up with the paper piano idea. With a paper piano, it's natural to connect it to a computer and achieve multiple purposes like composing. While most composers rely on MIDI file export to view and make adjustments to their compositions, we wish to realize a real-time visual representation of the music being played so people can view their music notes in real time. ## What it does Pianeer is a highly innovative and convenient music composing software+hardware system. Multiple modes are available for different musical functions. It includes a paper piano which provides the real experience of playing a keyboard but can be easily rolled up and carried. The composing mode of the software provides a real-time visual representation of the music that's being played and exports a MIDI file. The play mode provides sheet music and keyboard highlights for practicing purposes. The practice mode allows users to practice their compositions casually without being recorded. Pianeer is extremely multi-functional and portable, and can be used by both beginners and mature composers. ## How we built it We put electric paint on a hard paper and connected it to Arduinos via wires. Using the capacitive sensing library, we are able to give input signals by simply putting fingers on the paper. We exported the input signal to our program to generate sound and realize other functions. ## Challenges we ran into * It's very hard to accurately draw so many keys on the paper using electric paint since each key has to be clearly separated. * Adjusting the delay and accuracy of the paper piano is very challenging. * Converting music input signals to a visual representation. * User interface design. ## Accomplishments that we're proud of * Lovely user interface that corresponds to the superhero theme. * Pretty accurate sensing and sound generation of the paper piano. * Real-time visual representation of the music. * The various functions and modes we've accomplished. ## What we learned Pygame, Arduino, multi-threading ## What's next for Pianeer Make a more delicate prototype of the paper piano and realize the player mode we had no time to accomplish this time.
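On the software side of Pianeer, the described flow (capacitive key presses arriving from the Arduino, sounds handled in Pygame) commonly looks like reading key indices off the serial port and mapping them to preloaded samples. The sketch below assumes one key index per serial line and per-note WAV files; neither detail comes from the write-up, so treat it as an illustration of the idea only.

```python
# Sketch: play a note sample whenever the Arduino reports a touched key.
import pygame
import serial  # pyserial

NOTE_FILES = {0: "c4.wav", 1: "d4.wav", 2: "e4.wav"}  # assumed sample files

def run(port="/dev/ttyACM0", baud=9600):
    pygame.mixer.init()
    sounds = {key: pygame.mixer.Sound(path) for key, path in NOTE_FILES.items()}
    with serial.Serial(port, baud, timeout=1) as link:
        while True:
            line = link.readline().decode(errors="ignore").strip()
            if line.isdigit() and int(line) in sounds:
                sounds[int(line)].play()

if __name__ == "__main__":
    run()
```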
losing
## Inspiration I was bored, sleep deprived, and sleeping in the most uncomfortable space on campus. Then it hit me, and I made it. Also the deadline. Yes. ## What it does Basically, the AI scrapes the web for good ways to pursue the hobby (just lingo & NLP), then, according to the location query, looks up areas to pursue said hobby/activity. ## How I\* built it Flask & HTML and co:here's AI. Go Python! ## Challenges we ran into I didn't know how to use Flask or HTML, how to embed the AI, or PyCharm for Windows ## Accomplishments that we're proud of I now know how to use Flask & HTML ## What we learned? Flask & HTML; also, don't do hackathons solo ## What's next for explor? Actually develop it (if coursework allows), integrate it with Google Maps, establish and develop a more interactive & animated GUI for the website, and create a supporting app, on a reasonable timeline.
## Inspiration As a game developer, it's always been apparent that lighting up environments is both the most challenging and important aspect of making traditional immersive experiences. We wonder "is there a faster way"? ## What it does MLuminate attempts to take the complex features of a scene (lights, player, objects) and use a smart ML dimensionality reduction strategy to return a constant-time lightmap representation of the scene. ## How I built it The primary strategy for this app was to use Principal Component analysis to reduce the dimensionality space of a descriptive set that would characterize both the environment objects and the light conditions, before using a CNN to generate subsequent lightmap data. ## Challenges I ran into Big questions to answer were: "How do we represent the scene?", "How do we make this a simpler problem?", "What tradeoffs are we inviting by reducing the runtime to an approximation?", and "Where is the training data?". ## Accomplishments that I'm proud of Of course, effective solutions to the aforementioned problems, as well as finally getting some sleep in a hackathon! ## What I learned Time management, effective problem-solving, and just how hard vector math is. ## What's next for MLuminate Expanding to more objects, more bounces, different specularities/materials, and different kinds of light sources (not just directional).
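The "smart ML dimensionality reduction strategy" in the MLuminate write-up, compressing a scene descriptor (lights, player, objects) before a CNN predicts the lightmap, is essentially a fitted PCA projection. A minimal scikit-learn sketch on random stand-in descriptors is below; the 256-dimensional descriptor and 32 retained components are assumptions, not the project's numbers.

```python
# Sketch: compress per-scene descriptors with PCA before the lightmap network.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(1000, 256))  # stand-in: 1000 scenes x 256 features

pca = PCA(n_components=32)
compressed = pca.fit_transform(descriptors)  # shape (1000, 32), fed to the CNN

print(compressed.shape, round(pca.explained_variance_ratio_.sum(), 3))
# At inference time a new scene descriptor is projected with pca.transform(...).
```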
## Inspiration ### Many people struggle with research. We wanted to fix that. So we built something to provide a starting point for research on any topic, with the goal of being concise and informative. ## What it does ### It uses UiPath to find the definition of almost any topic on Wikipedia to give the user a bit of a general overview of the topic. Afterward, it scrapes the titles of the most relevant news articles to give the user a head-start in their research. ## How we built it 1. As mentioned before, we have a UiPath component that gets the definition of the topic from Wikipedia as a general overview. For that, we have built an algorithm that goes step by step through this process just like a human would. This gets tied into the Python code using the start\_wikifetcher.py file, which uses paperclip and other functions from sentence\_tools.py. 2. sentence\_tools.py has numerous algorithms that are responsible for the computation to scrape the most relevant information, such as extract\_first\_sentence() and get\_key\_points\_google(), which extract the first sentence and get the headlines from the first page of Google News on the topic, respectively. 3. For the front end, there is an index.html file that contains the HTML content of the page. It is fairly basic and easy for the reader to understand. 4. The graphics were made using a PowerPoint presentation. ### Note: Please refer to the README file from the git repository for a more thorough explanation of the installation process. ## Challenges we ran into 1. Google's limit for requests before it labels your IP as spam. 2. Working remotely 3. Managing teammates 4. Having unrealistic goals of making an AI that writes essays like humans. ## Accomplishments that we're proud of ### For the accomplishments, I will address how we overcame the challenges in their respective order, as that is an accomplishment in itself. 1. We found a way to just get the first page from Google search, and we settled on only doing the search once rather than doing a search for all selective combinations of the topic. 2. We used Discord calls to connect better 3. We optimized the team by selectively picking teammates. 4. We have made that a goal for the future now. ## What we learned ### This project taught us both technical and non-technical skills; we had to familiarize ourselves with numerous APIs and new technologies such as UiPath. On the non-technical side of things, the reason this project was a success was the equitable amount of work everyone was given. Everyone worked on their part because they truly wanted to. ## What's next for Essay Generator ### The next step is to use all the data found by web scraping to form an essay just like a human would, and that requires AI technologies like machine learning and neural networks. This was too hard to figure out in the 36-hour period, but in future years we will all try to come up with a solution for this idea.
losing
## Inspiration Ella, a 9-year-old schoolgirl in London, England, recently died from respiratory failure. She lived with her family in an apartment above a heavily polluted road in south London, and suffered from three years of seizures and 27 visits to the hospital to treat her asthma attacks. At the time of her death, local air pollution levels breached legal EU limits. Ella is just one of 600,000 children worldwide who pass away each year from inhaling polluted air, and one of 7 million people in total annually. Through Ella’s story, we realized how many people are affected by air pollutants, be it direct emissions from vehicles, gases from industrial plants, or carcinogens. Especially as urbanization speeds up, the emergence of deadly pollutants and more widespread pollution calls for urgency in this topic. As a result, we decided to use IoT sensors to monitor air quality and predict levels within the next 24 hours through machine learning. This would allow municipalities to identify and prioritize their investments in air filtration for specific locations experiencing spikes or consistently high pollution levels. Residents will also be informed of the severity of pollutants nearby, as well as have the knowledge needed to take preventative care measures. Through this, we hope Clair.ai can help improve air quality one region at a time, and build safe, sustainable cities. ## What it does Through sensors installed on a connected IOT device, components of air quality such as humidity and temperature will be measured within the immediate vicinity. Timing and location factors such as time of day, day of week, month, and geographical location will also be measured to gain a comprehensive analysis of the data set. The raw data is then processed in the cloud to identify patterns and the volatility of pollutant levels. A compute engine on Google Cloud is enabled with various trained machine learning models to predict particulate matter smaller than 2.5 micrometers (PM2.5) within the next 24 hours, and categorize this output based on threat level. The projection results are live streamed via Firebase and displayed for the user through various data visualization charts in an accessible web application. ## How we built it The primary data source serving as input for the predictive model was taken from a connected Qualcomm 401c IOT board with various sensors taking in data for temperature and humidity. A compute engine on Google Cloud was initialized to train the machine learning models utilized for PM2.5 prediction. An air quality dataset with relevant and similar measurable features was utilized to train the machine learning model. The data from the IOT board is then streamed to the compute engine, which predicts the PM2.5 levels for the next 24 hours and streams the output to a Firebase data store. A dynamic web application built with React then pulls the live data from Firebase and visualizes it to deliver the insights from the model to the users. ## Challenges we ran into We ran into many challenges while building Clair.ai. Our greatest challenge was working with hardware sensors and a Qualcomm 401C board that we were unfamiliar with. Our board was running Android as an OS, and we spent a lot of time trying to run Shell scripts and then Python to interface with our sensors. Moreover, the soldering iron would not turn on, preventing us from soldering our air quality sensor onto the board.
In terms of machine learning, due to the limited number of measured features available, feature engineering was required to provide a sufficient amount of predictive power for our models. Moreover, a lot of time was spent on normalizing features, accessing training and cross-validation data sets with similar features, and training and determining the most appropriate model to use. The criteria chosen for selecting the most appropriate model were the interpretability of the output as well as the minimization of the root mean squared error. Another challenge for us was setting up the live data pipeline from the server running the machine learning models to the web application. A dynamic front-end was set up to receive the continuous stream of data, and to visualize the data in an intuitive manner. ## Accomplishments that we're proud of We’re very proud of our fully functional project, and above all of being able to connect the sensors to the Qualcomm board and interface with them. As our team had not had prior experience working with sensors or IOT boards, we were proud of our ability to tackle the ambiguity of approaching the task and working together cohesively to brainstorm and test different strategies. We’re also proud of our teamwork abilities, as we leveraged each team member’s strengths in front-end, back-end, hardware, and business case construction to build a complete and comprehensive solution. ## What we learned We learned a ton while developing Clair.ai, including setting up Shell scripts and Python environments for Qualcomm 401C Android-based boards, feature engineering and model building for various machine learning techniques, and setting up a web application that is able to visualize data through a live data pipeline using Firebase. ## What's next for Clair.ai We hope that Clair.ai represents an opportunity to incite change for both residents and municipal governments. We believe that through Clair.ai, we’ve enabled the opportunity for residents to track surrounding air quality and focus on preventative measures for their health and safety, and for municipal governments to efficiently plan for urban development and large-scale monitoring of air quality in cities to mitigate pollution, and focus on the development of a more sustainable future.
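The modelling loop Clair.ai describes (normalize the features, train candidate regressors, pick by root mean squared error) is compact in scikit-learn. The sketch below trains a random-forest regressor on synthetic temperature/humidity/time features purely as an illustration; the real feature set, model choice, and data are the team's and are not reproduced here.

```python
# Sketch: scale features, fit a regressor, report validation RMSE for PM2.5.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))  # stand-ins: temperature, humidity, hour, weekday
y = 20 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=2, size=2000)  # synthetic PM2.5

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(scaler.transform(X_train), y_train)

rmse = mean_squared_error(y_val, model.predict(scaler.transform(X_val))) ** 0.5
print(f"validation RMSE: {rmse:.2f} ug/m3")
```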
![AirMigo Logo](https://d112y698adiu2z.cloudfront.net/photos/production/software_photos/003/090/300/datas/original.png) ## Inspiration Our inspiration came from the growing concern over air pollution and its detrimental effects on individuals with respiratory conditions like asthma and allergies. Most available AQI data only updates hourly, making it difficult for people to track real-time changes. We wanted to create a solution that provides real-time, personalized, and actionable alerts, helping individuals proactively manage their health and stay safe. ## What it does The application collects real-time data from multiple sources, including traffic, pollen levels, and air quality index (AQI) data. It analyzes the air quality in the user's surrounding area, visualizes it on the map, and provides real-time updates. Based on the user’s specific health conditions, such as asthma or allergies, the app sends personalized notifications with warnings and recommendations, helping users take appropriate action to protect their health. ## How we built it We first collect real-time data from multiple third-party APIs, including the Google Maps API, to gather information such as traffic, air quality, and pollen levels. Then, we use SingleStore DB for low-latency queries, allowing us to query and update data in real time. We also use the Groq API to call the Llama 3 model. With the metadata about the surroundings and the user's health condition, the LLM generates personalized notification messages. We use React for the frontend and Node for the backend. Lastly, we deploy it on Vercel. ## Challenges we ran into * Data Integration: Collecting real-time data from multiple sources and ensuring consistent, accurate updates was quite a challenge. * Familiarizing with SDKs: We spent considerable time getting familiar with the SingleStore SDK and Groq SDK. * API Rate Limits: Handling rate limits from third-party APIs posed a challenge, especially for a real-time application that requires frequent updates. ## Accomplishments that we're proud of * We successfully built a working product with core features, including real-time data updates, personalized LLM-generated messages, and map visualization. * We also talked to a bunch of cool people, from hackers to sponsors. * Implemented and learned a handful of new technologies. ## What we learned * We learned a lot about the architecture behind real-time applications, particularly in handling large volumes of data and optimizing for low latency. In the future, we could explore using a Kafka queue to streamline real-time data even further. * We gained insights into how AI models like Llama 3 can be used to generate personalized and meaningful notifications. * BIGGEST LESSON: SLEEP IS IMPORTANT !!! ## What's Next for AirMigo * We aim to expand the app’s capabilities to account for a wider range of health conditions, such as COPD, bronchitis, or heart conditions, to make the app more versatile. * We plan to integrate additional real-time data sources, such as weather conditions, humidity levels, and wildfire smoke, to provide even more accurate real-time air quality reports. * With more resources (higher API rate limits), we want to increase the geographical radius for data collection and allow for more frequent updates and larger coverage areas. * Let users travel to a destination via the route with the least polluted air. * Cleaner and more intuitive UI
## Problem Statement As the number of the elderly population is constantly growing, there is an increasing demand for home care. In fact, the market for safety and security solutions in the healthcare sector is estimated to reach $40.1 billion by 2025. The elderly, disabled, and vulnerable people face a constant risk of falls and other accidents, especially in environments like hospitals, nursing homes, and home care environments, where they require constant supervision. However, traditional monitoring methods, such as human caregivers or surveillance cameras, are often not enough to provide prompt and effective responses in emergency situations. This potentially has serious consequences, including injury, prolonged recovery, and increased healthcare costs. ## Solution The proposed app aims to address this problem by providing real-time monitoring and alert system, using a camera and cloud-based machine learning algorithms to detect any signs of injury or danger, and immediately notify designated emergency contacts, such as healthcare professionals, with information about the user's condition and collected personal data. We believe that the app has the potential to revolutionize the way vulnerable individuals are monitored and protected, by providing a safer and more secure environment in designated institutions. ## Developing Process Prior to development, our designer used Figma to create a prototype which was used as a reference point when the developers were building the platform in HTML, CSS, and ReactJs. For the cloud-based machine learning algorithms, we used Computer Vision, Open CV, Numpy, and Flask to train the model on a dataset of various poses and movements and to detect any signs of injury or danger in real time. Because of limited resources, we decided to use our phones as an analogue to cameras to do the live streams for the real-time monitoring. ## Impact * **Improved safety:** The real-time monitoring and alert system provided by the app helps to reduce the risk of falls and other accidents, keeping vulnerable individuals safer and reducing the likelihood of serious injury. * **Faster response time:** The app triggers an alert and sends notifications to designated emergency contacts in case of any danger or injury, which allows for a faster response time and more effective response. * **Increased efficiency:** Using cloud-based machine learning algorithms and computer vision techniques allow the app to analyze the user's movements and detect any signs of danger without constant human supervision. * **Better patient care:** In a hospital setting, the app could be used to monitor patients and alert nurses if they are in danger of falling or if their vital signs indicate that they need medical attention. This could lead to improved patient care, reduced medical costs, and faster recovery times. * **Peace of mind for families and caregivers:** The app provides families and caregivers with peace of mind, knowing that their loved ones are being monitored and protected and that they will be immediately notified in case of any danger or emergency. ## Challenges One of the biggest challenges have been integrating all the different technologies, such as live streaming and machine learning algorithms, and making sure they worked together seamlessly. ## Successes The project was a collaborative effort between a designer and developers, which highlights the importance of cross-functional teams in delivering complex technical solutions. 
Overall, the project was a success and resulted in a cutting-edge solution that can help protect vulnerable individuals. ## Things Learnt * **Importance of cross-functional teams:** As there were different specialists working on the project, it helped us understand the value of cross-functional teams in addressing complex challenges and delivering successful results. * **Integrating different technologies:** Our team learned the challenges and importance of integrating different technologies to deliver a seamless and effective solution. * **Machine learning for health applications:** After doing the research and completing the project, our team learned about the potential and challenges of using machine learning in the healthcare industry, and the steps required to build and deploy a successful machine learning model. ## Future Plans for SafeSpot * First of all, the usage of the app could be extended to other settings, such as elderly care facilities, schools, kindergartens, or emergency rooms to provide a safer and more secure environment for vulnerable individuals. * Apart from the web, the platform could also be implemented as a mobile app. In this scenario, the alert would pop up privately on the user’s phone and notify only people who are given access to it. * The app could also be integrated with wearable devices, such as fitness trackers, which could provide additional data and context to help determine if the user is in danger or has been injured.
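To make the monitoring pipeline above concrete, here is a deliberately simplified OpenCV sketch: it is not the trained pose model described in the write-up, just a background-subtraction heuristic that flags a possible fall when a large moving region becomes much wider than it is tall. The stream source and alert function are placeholders.

```python
# Simplified fall heuristic using OpenCV background subtraction (not the project's trained pose model).
import cv2

STREAM_SOURCE = 0  # placeholder: a phone's IP-camera stream URL could go here

def alert_contacts(reason: str) -> None:
    print("ALERT:", reason)  # placeholder for notifying designated emergency contacts

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)
cap = cv2.VideoCapture(STREAM_SOURCE)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 5000:   # ignore small noise regions
            continue
        x, y, w, h = cv2.boundingRect(c)
        if w > 1.5 * h:                 # person wider than tall -> possible fall
            alert_contacts("possible fall detected")

cap.release()
```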
losing
## Inspiration MoodTunes was inspired by a deep understanding of the profound impact music has on emotions and mental well-being. We recognized the need to create a tool that could harness the therapeutic power of music to alleviate anxiety and promote relaxation. We were driven by the idea of making music a key component of mental health and wellness. ## What it does MoodTunes is an app designed to help individuals manage their anxiety levels through personalized music recommendations. Users input their anxiety levels using the Hamilton Anxiety Rating Scale, and the app utilizes advanced algorithms, including LSTM and NLP, to curate custom playlists that aim to alleviate stress and promote relaxation. It provides a unique blend of music and mental well-being, enhancing the user's overall mood and mental health. We used CockroachDB to store all user data. ## How we built it MoodTunes was crafted using Django for the backend, LSTM and NLP for anxiety prediction, Streamlit for the user interface, and Python for seamless integration. To store data, we decided to use CockroachDB. We also utilized web automation and data scraping techniques to curate an extensive music library. This combination of technologies enabled us to offer accurate anxiety assessments and personalized music recommendations via a user-friendly app. ## Challenges we ran into Developing algorithms that accurately assessed anxiety levels and recommended suitable music was an ongoing challenge. Additionally, we faced difficulties related to training the transformer model, which impacted our ability to achieve the desired level of accuracy in assessing anxiety levels and making music recommendations. Overcoming these challenges required innovative solutions and continuous efforts to refine the app. ## Accomplishments that we're proud of Successfully launching an app that harnesses the therapeutic power of music to support mental well-being. Creating a vast music library and a robust algorithm that delivers personalized music recommendations. ## What we learned During the development of MoodTunes, we learned about the complexities of music psychology and intricate algorithm development. We also gained insights into user engagement and the importance of continually refining and improving the user experience. ## What's next for MoodTunes Expand the app's music library to provide an even wider range of music choices for users. Continue refining and enhancing the algorithm to improve the accuracy of anxiety assessments and music recommendations. Explore partnerships with mental health professionals and institutions to integrate MoodTunes into therapy and wellness programs.
## Inspiration Both of us are students who hope to enter the now deflated computer science market, and we shared similar experiences in the mass LeetCode grind. However, we also understood that simply completing LeetCode questions wasn't enough; often, potential candidates are met with an unpleasant surprise when they are asked to walk through their thought process. We believed that not being able to communicate your algorithmic ideas was an overlooked problem, and we hoped to eliminate this with our solution. ## What it does DaVinci Solve is a way for users to practice communicating their thought process in approaching LeetCode problems (and any other competitive programming problems). This website is an AI-simulated interview scenario which prompts users to outline their solutions to various LeetCode problems through speech and provides feedback on the algorithmic approach they provide. ## How we built it We used Gradio as a quick front-end and back-end solution. We implemented both the Groq and OpenAI APIs for speech-to-text, text-to-speech, and LLM generation. We implemented Leetscrape to scrape problems off of LeetCode, then used Beautiful Soup to format the problem for display. We started by mapping out our idea in a flow chart that we could incrementally complete in order to keep our progress on track. Then we organized our APIs and created a console-based version using various calls from our APIs. Then, we used Gradio to wrap up the functionality in an MVP localhost website. ## Challenges we ran into There were a good number of bugs that kept surfacing as we wrote more code, and it was also hard to figure out how to move forward given the lacking documentation and flexibility from Gradio. However, eventually we pulled out the bug spray. With a good amount of perseverance, we finally weeded out every single bug from our code, which provided a good amount of relief. ## Accomplishments that we're proud of We're pretty proud of being able to go all out on our first hackathon project while spending lots of time on other activities and enjoying the hackathon experience. We even managed to avoid consuming any caffeine and steered away from all-nighters in favour of a good amount of sleep. But definitely the most satisfying thing was getting our project to work in the end. ## What we learned This was our first hackathon, and it was an eye-opening experience to see how much could happen within 36 hours. We developed skills in building and testing as fast as possible. Lastly, we learned how to cooperate on an exciting project while squeezing as much fun out of Hack the North as we could. ## What's next for DaVinci Solve We plan on implementing a better front-end, a memory system for the feedback-giving LLM, and an in-built IDE for users to test their actual code after illustrating their approach.
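A rough sketch of the speech-to-feedback loop described above, using the OpenAI client for both Whisper transcription and feedback generation, with Gradio as the interface. The model names, prompt, and Gradio audio options are assumptions and may differ from the actual build (which also used Groq).

```python
# Sketch: record audio -> Whisper transcription -> LLM feedback -> show text in Gradio.
import gradio as gr
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
PROBLEM = "Two Sum: given an array and a target, return indices of two numbers that sum to the target."

def interview_feedback(audio_path: str) -> str:
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f).text
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system", "content": "You are a technical interviewer. Critique the candidate's approach."},
            {"role": "user", "content": f"Problem: {PROBLEM}\nCandidate's spoken approach: {transcript}"},
        ],
    )
    return resp.choices[0].message.content

demo = gr.Interface(
    fn=interview_feedback,
    inputs=gr.Audio(sources=["microphone"], type="filepath"),
    outputs="text",
)
demo.launch()
```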
## Inspiration What inspired us to build this application was spreading mental health awareness in relation to the ongoing COVID-19 pandemic around the world. While it is easy to brush off signs of fatigue and emotional stress as just "being tired", oftentimes there is a deeper problem at the root of it. We designed this application to be as approachable and user-friendly as possible and allowed it to scale and rapidly change based on user trends. ## What it does The project takes a scan of a face using a video stream and interprets that data by using machine learning and specially-trained models for emotion recognition. Receiving the facial data, the model is then able to process it and output the probability of a user's current emotion. After clicking the "Recommend Videos" button, the probability data is exported as an array and is processed internally, in order to determine the right query to send to the YouTube API. Once the query is sent and a response is received, the response is validated and the videos are served to the user. This process is scalable and the videos do change as newer ones get released and the YouTube algorithm serves new content. In short, this project is able to identify your emotions using face detection and suggest you videos based on how you feel. ## How we built it The project was built as a React app leveraging face-api.js to detect the emotions and youtube-music-api for the music recommendations. The UI was designed using Material UI. The project was built using the [REACT](https://reactjs.org/) framework, powered by [NodeJS](https://nodejs.org/en/). While it is possible to simply link the `package.json` file, the core libraries that were used were the following * **[Redux](https://react-redux.js.org/)** * **[Face-API](https://justadudewhohacks.github.io/face-api.js/docs/index.html)** * **[GoogleAPIs](https://www.npmjs.com/package/googleapis)** * **[MUI](https://mui.com/)** * The rest were sub-dependencies that were installed automagically using [npm](https://www.npmjs.com/) ## Challenges we ran into We faced many challenges throughout this Hackathon, including both programming and logistical ones; most of them involved dealing with React and its handling of objects and props. Here are some of the hardest challenges that we encountered with React while working on the project: * Integration of `face-api.js`, as initially figuring out how to map the user's face and adding a canvas on top of the video stream proved to be a challenge, given how none of us had really worked with that library before. * Integration of `googleapis`' YouTube API v3, as the documentation was not very obvious and it was difficult not only to get the API key required to access the API itself, but also to find the correct URL in order to properly formulate our search query. Another challenge with this library is that it does not properly communicate its rate limiting. In this case, we did not know we could only do a maximum of 100 requests per day, and so we quickly reached our API limit and had to get a new key. Beware! * Correctly setting the camera refresh interval so that the canvas can update and be displayed to the user. Finding the correct timing and making sure that the camera would be disabled when the recommendations are displayed as well as when switching pages was a big challenge, as there was no really good documentation or solution for what we were trying to do. We ended up implementing it, but the entire process was filled with hurdles and challenges!
* Finding the right theme. It was very important to us from the very start to make it presentable and easy to use for the user. Because of that, we took a lot of time to carefully select a color palette that the users would (hopefully) be pleased by. However, this required many hours of trial-and-error, and so it took us quite some time to figure out what colors to use, all while working on completing the project we had set out to do at the start of the Hackathon. ## Accomplishments that we're proud of While we did face many challenges and setbacks as we've outlined above, the results were something that we can really be proud of. Going into specifics, here are some of our best and most satisfying moments throughout the challenge: * Building a well-functioning app with a nice design. This was the initial goal. We did it. We're super proud of the work that we put in, the number of hours we've spent debugging and fixing issues, and it filled us with confidence knowing that we were able to plan everything out and implement everything that we wanted, given the amount of time that we had. An unforgettable experience to say the least. * Solving the API integration issues which plagued us since the start. We knew, once we set out to develop this project, that meddling with APIs was never going to be an easy task. We were very unprepared for the amount of pain we were about to go through with the YouTube API. Part of that is mostly because of us: we chose libraries and packages that we were not very familiar with, and so, not only did we have to learn how to use them, but we also had to adapt them to our codebase to integrate them into our product. That was quite a challenge, but finally seeing it work after all the long hours we put in is absolutely worth it, and we're really glad it turned out this way. ## What we learned To keep this section short, here are some of the things we learned throughout the Hackathon: * How to work with new APIs * How to debug UI issues and use components to build our applications * Understand and fully utilize React's suite of packages and libraries, as well as other styling tools such as MaterialUI (MUI) * Rely on each other's strengths * And much, much more, but if we kept talking, the list would go on forever! ## What's next for MoodChanger Well, given how the name **is** *Moodchanger*, there is one thing that we all wish we could change next. The world! PS: Maybe add file support one day? :pensive: PPS: Pst! The project is accessible on [GitHub](https://github.com/mike1572/face)!
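The project's backend is JavaScript, but the query-building step it describes (turn emotion probabilities into a YouTube search) translates directly; here is an equivalent Python sketch using google-api-python-client. The emotion-to-query mapping and the API key handling are illustrative assumptions, not the app's exact logic.

```python
# Sketch: pick the dominant detected emotion and query the YouTube Data API v3 for matching videos.
import os
from googleapiclient.discovery import build

QUERY_BY_EMOTION = {            # illustrative mapping, not the app's exact one
    "happy": "upbeat feel-good music",
    "sad": "calming uplifting videos",
    "angry": "guided breathing relaxation",
    "neutral": "interesting short documentaries",
}

def recommend(emotions: dict[str, float], max_results: int = 5) -> list[str]:
    dominant = max(emotions, key=emotions.get)            # highest-probability emotion
    youtube = build("youtube", "v3", developerKey=os.environ["YOUTUBE_API_KEY"])
    resp = youtube.search().list(
        q=QUERY_BY_EMOTION[dominant], part="snippet", type="video", maxResults=max_results
    ).execute()
    return [f'https://www.youtube.com/watch?v={i["id"]["videoId"]}' for i in resp["items"]]

print(recommend({"happy": 0.1, "sad": 0.7, "angry": 0.1, "neutral": 0.1}))
```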
losing
## Inspiration In 2010, when Haiti was rocked by an earthquake that killed over 150,000 people, aid workers manned SMS help lines where victims could reach out for help. Even with the international humanitarian effort, there was not enough manpower to effectively handle the volume of communication. We set out to fix that. ## What it does EmergAlert takes the place of a humanitarian volunteer at the phone lines, automating basic contact. It allows victims to request help, tell their location, place calls and messages to other people, and inform aid workers about their situation. ## How we built it We used Mix.NLU to create a Natural Language Understanding model that categorizes and interprets text messages, paired with the Smooch API to handle SMS and Slack contact. We also used FHIR to search for an individual's medical history to give more accurate advice. ## Challenges we ran into Mentoring first-time hackers was both a challenge and a joy. ## Accomplishments that we're proud of Coming to Canada. ## What we learned Project management is integral to a good hacking experience, as is realistic goal-setting. ## What's next for EmergAlert Bringing more depth to the NLU responses and available actions would improve the app's helpfulness in disaster situations, and is a good next step for our group.
## Inspiration Every year hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people are in the right places at the right times. We aim towards connecting people by giving them the opportunity to help each other in times of medical need. ## What it does It is a mobile application that is aimed towards connecting members of our society together in times of urgent medical need. Users can sign up as respondents, which will allow them to be notified when people within a 300-meter radius are having a medical emergency. This can help users receive first aid prior to the arrival of an ambulance or healthcare professional, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive. ## How we built it The app is Android native and relies heavily on the Google Cloud Platform. User registration and authentication are done through Firebase Authentication. Additionally, user data, locations, help requests and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was also used to provide text recognition for the app's registration page. Users can take a picture of their ID and their information is extracted. ## Challenges we ran into There were numerous challenges in terms of handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users. ## Accomplishments that we're proud of We were able to build a functioning prototype! Additionally, we were able to track and update user locations in a MapFragment and ended up doing/implementing things that we had never done before.
## Inspiration On average, an EMT can take 10 minutes to arrive at the scene of an emergency, while incidents such as choking or heart attacks can turn fatal within 3 minutes. Those 10 minutes between the start of the emergency and when help arrives are vital to the patient's survival. ## What it does Any good Samaritan nearby may use the app, press SOS, use their voice to explain the situation, and the app will ping nearby CPR-certified individuals, EMTs, or any person with relevant experience who can arrive on the scene before 911 can. HelpSignal is used to make the most of the time between the start of an emergency and when ambulances arrive. ## How we built it We used React Native and Expo Development to build the application, targeting Android for live voice transcription from expo-speech-recognition and sending the transcription after recording to a Cloudflare Worker. The Cloudflare Worker then uses the BAAI general embedding model to vectorize the transcription. The categories of needed certifications or experience are in a vector database, and a vector search is done to get the most relevant person for the situation. The account system is on Amazon RDS, as well as the current emergencies. After an emergency is categorized, it's put into the database, which is queried on every refresh by people with accounts and certifications. A map is shown on the page to show locations of emergencies. ## Challenges we ran into We had difficulty implementing the audio, as none of us had access to an iOS development kit or macOS laptops for running Expo Development on iOS. In order to record and collect audio to transcribe live, an Android system was needed. We spent a considerable amount of time setting up the Android SDK. ## Accomplishments that we're proud of Throughout this project, we encountered many different roadblocks, which required determination and flexibility to get around. As a group, we were able to effectively communicate and pivot roles on the fly. As a result, we all stayed occupied and spent all 36 hours wisely designing and implementing different systems. Our feature of using Cloudflare Workers for vector search was a big accomplishment for us, as well as getting authentication and accounts working with the stored certifications and experience, and an engaging UI/UX. ## What we learned Coming into this project, few of us had experience with React Native, and some of us had no experience coding with TypeScript and React in general. This seeming roadblock forced us to learn syntax and techniques for working with the technologies on the fly. Additionally, getting Expo Development working with Gradle and running on an Android simulator was a big learning experience for how Android development works. ## What's next for HelpSignal Being able to grow HelpSignal through advertising and social media would not only allow HelpSignal to become more popular, but would also improve the app. As more and more users get onboarded, there are more people available to help others, and therefore more of a chance that there are people to help in case of emergency. Using WebSockets instead of database updating for the emergencies would also let updating of emergencies be more instantaneous, and push notifications would allow for people not currently using the app to be notified when someone needs help. Connecting users with 911 while submitting an emergency would also allow for police to still be notified as normal.
![Tech stack](https://github.com/josephHelfenbein/HelpSignal/blob/e2f312bb462b1c0eb7dea3082bbb18cdbfa2022a/techstack.png?raw=true)
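The matching step described above (embed the emergency transcription, then vector-search the responder categories) can be illustrated off-platform; this sketch uses a BAAI embedding model through sentence-transformers and plain cosine similarity rather than Cloudflare's Workers AI and vector index, so treat the model name and the category strings as assumptions.

```python
# Sketch: embed an emergency transcription and rank responder categories by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

CATEGORIES = [  # illustrative responder categories, not the app's actual list
    "CPR certified - cardiac arrest, chest compressions, not breathing",
    "EMT - trauma, severe bleeding, broken bones",
    "Lifeguard - drowning, water rescue",
    "Mental health first aid - panic attack, crisis de-escalation",
]

model = SentenceTransformer("BAAI/bge-small-en-v1.5")  # assumed embedding model
category_vecs = model.encode(CATEGORIES, normalize_embeddings=True)

def best_match(transcription: str) -> str:
    query = model.encode([transcription], normalize_embeddings=True)[0]
    scores = category_vecs @ query  # cosine similarity, since vectors are normalized
    return CATEGORIES[int(np.argmax(scores))]

print(best_match("Someone collapsed at the park and isn't breathing"))
```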
winning
## Inspiration We take our inspiration from our everyday lives. As avid travellers, we often run into places with foreign languages and need help with translations. As avid learners, we're always eager to add more words to our bank of knowledge. As children of immigrant parents, we know how difficult it is to grasp a new language and how comforting it is to hear a voice in your native tongue. LingoVision was born with these inspirations and these inspirations were born from our experiences. ## What it does LingoVision uses AdHawk MindLink's eye-tracking glasses to capture foreign words or sentences as pictures when given a signal (double blink). Those sentences are played back in an audio translation (either using an earpiece, or out loud with a speaker) in your preferred language of choice. Additionally, LingoVision stores all of the old photos and translations for future review and study. ## How we built it We used the AdHawk MindLink eye-tracking glasses to map the user's point of view, and detect where exactly in that space they're focusing. From there, we used Google's Cloud Vision API to perform OCR and construct bounding boxes around text. We developed a custom algorithm to infer what text the user is most likely looking at, based on the vector projected from the glasses, and the available bounding boxes from CV analysis. After that, we pipe the text output into the DeepL translator API to translate it into a language of the user's choice. Finally, the output is sent to Google's text-to-speech service to be delivered to the user. We use Firebase Cloud Firestore to keep track of global settings, such as output language, and also a log of translation events for future reference. ## Challenges we ran into * Getting the eye-tracker to be properly calibrated (it was always a bit off from our actual view) * Using a Mac, when the officially supported platforms are Windows and Linux (yay virtualization!) ## Accomplishments that we're proud of * Hearing the first audio playback of a translation was exciting * Seeing the system work completely hands free while walking around the event venue was super cool! ## What we learned * We learned how to work within the limitations of the eye tracker ## What's next for LingoVision One of the next steps in our plan for LingoVision is to develop a dictionary for individual words. Since we're all about encouraging learning, we want our users to see definitions of individual words and add them to a dictionary. Another goal is to eliminate the need to be tethered to a computer. Computers are currently used due to ease of development and software constraints. If a user were able to simply use the eye-tracking glasses with their cell phone, usability would improve significantly.
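A condensed sketch of the pipeline described above: pick the OCR bounding box that contains the gaze point, then translate it with the DeepL Python client. The gaze coordinates, box format, and example text are illustrative; the real system gets these from the MindLink SDK and Cloud Vision.

```python
# Sketch: choose the OCR text block under the gaze point, then translate it with DeepL.
import os
import deepl

def box_under_gaze(gaze_xy, ocr_blocks):
    """ocr_blocks: list of (text, (x_min, y_min, x_max, y_max)) pairs from OCR."""
    gx, gy = gaze_xy
    for text, (x0, y0, x1, y1) in ocr_blocks:
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return text
    return None

translator = deepl.Translator(os.environ["DEEPL_AUTH_KEY"])

blocks = [("Sortie de secours", (120, 80, 420, 130))]   # example OCR output (assumed)
text = box_under_gaze((250, 100), blocks)
if text:
    result = translator.translate_text(text, target_lang="EN-US")
    print(result.text)  # this string would then be handed to a text-to-speech service
```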
## Inspiration Alex K's girlfriend Allie is a writer and loves to read, but has had trouble with reading for the last few years because of an eye tracking disorder. She now tends towards listening to audiobooks when possible, but misses the experience of reading a physical book. Millions of other people also struggle with reading, whether for medical reasons or because of dyslexia (15-43 million Americans) or not knowing how to read. They face significant limitations in life, both for reading books and things like street signs, but existing phone apps that read text out loud are cumbersome to use, and existing "reading glasses" are thousands of dollars! Thankfully, modern technology makes developing "reading glasses" much cheaper and easier, thanks to advances in AI for the software side and 3D printing for rapid prototyping. We set out to prove through this hackathon that glasses that open the world of written text to those who have trouble entering it themselves can be cheap and accessible. ## What it does Our device attaches magnetically to a pair of glasses to allow users to wear it comfortably while reading, whether that's on a couch, at a desk or elsewhere. The software tracks what they are seeing and when written words appear in front of it, chooses the clearest frame and transcribes the text and then reads it out loud. ## How we built it **Software (Alex K)** - On the software side, we first needed to get image-to-text (OCR or optical character recognition) and text-to-speech (TTS) working. After trying a couple of libraries for each, we found Google's Cloud Vision API to have the best performance for OCR and their Google Cloud Text-to-Speech to also be the top pick for TTS. The TTS performance was perfect for our purposes out of the box, but bizarrely, the OCR API seemed to predict characters with an excellent level of accuracy individually, but poor accuracy overall due to seemingly not including any knowledge of the English language in the process. (E.g. errors like "Intreduction" etc.) So the next step was implementing a simple unigram language model to filter down the Google library's predictions to the most likely words. Stringing everything together was done in Python with a combination of Google API calls and various libraries including OpenCV for camera/image work, pydub for audio and PIL and matplotlib for image manipulation. **Hardware (Alex G)**: We tore apart an unsuspecting Logitech webcam, and had to do some minor surgery to focus the lens at an arms-length reading distance. We CAD-ed a custom housing for the camera with mounts for magnets to easily attach to the legs of glasses. This was 3D printed on a Form 2 printer, and a set of magnets glued in to the slots, with a corresponding set on some NerdNation glasses. ## Challenges we ran into The Google Cloud Vision API was very easy to use for individual images, but making synchronous batched calls proved to be challenging! Finding the best video frame to use for the OCR software was also not easy and writing that code took up a good fraction of the total time. Perhaps most annoyingly, the Logitech webcam did not focus well at any distance! When we cracked it open we were able to carefully remove bits of glue holding the lens to the seller’s configuration, and dial it to the right distance for holding a book at arm’s length. 
We also couldn’t find magnets until the last minute, made a guess on the magnet mount hole sizes, and had an *exciting* Dremel session to fit them, which resulted in the part cracking and being beautifully epoxied back together. ## Acknowledgements The Alexes would like to thank our girlfriends, Allie and Min Joo, for their patience and understanding while we went off to be each other's Valentine's at this hackathon.
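A small sketch of the unigram filtering step described above: take an OCR output word, generate nearby candidate spellings, and keep the candidate with the highest corpus frequency. The word-frequency file and the edit-distance-1 candidate generation are assumptions in the spirit of a classic spell corrector, not the exact code used.

```python
# Sketch: correct OCR words with a unigram (word-frequency) model and edit-distance-1 candidates.
from collections import Counter
import re

with open("corpus.txt") as f:  # assumed plain-text English corpus
    FREQ = Counter(re.findall(r"[a-z]+", f.read().lower()))

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def edits1(word: str) -> set[str]:
    """All strings one deletion, replacement, or insertion away from `word`."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    replaces = [a + c + b[1:] for a, b in splits if b for c in LETTERS]
    inserts = [a + c + b for a, b in splits for c in LETTERS]
    return set(deletes + replaces + inserts)

def correct(ocr_word: str) -> str:
    w = ocr_word.lower()
    if w in FREQ:                                    # already a known word
        return w
    candidates = [c for c in edits1(w) if c in FREQ]
    return max(candidates, key=FREQ.get) if candidates else w

print(correct("Intreduction"))  # -> "introduction", given a reasonable corpus
```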
## Inspiration It seems that I can never get things done. Managing life with ADHD is a constant battle, overwhelmed by scattered thoughts, unfinished tasks, and a mountain of procrastination. Struggling with time management and trying to keep up with daily responsibilities, I need more than my own mind to finish the task I need to do. It is hard for me to start the tasks, and I find it hard to identify which task should be prioritized as I struggle to figure out which one is more important at the moment. I need multiple support tools to help me navigate the study process, as I lose interest easily, and a reading disorder makes studying tasks almost impossible to finish. About 5 percent of adults live with ADHD, which means that millions of people are navigating these challenges every day just like me. For me, the start is always the hardest, so aI present is here to help you get started and everything after! ## What it does aI present is a browser application that’s easy to access through everyday use. Based on user input, aI present will use AI to break down the task into multiple smaller tasks. It will identify which task should be prioritized and generate a to-do list in a progress bar using gradient color. We use AI to decide how long each task should take. When each task is done, the progress will be shown in the progress bar. ## What’s next for “aI present” 1. **Voice Emotional Support**: This feature will be activated when the user is taking longer than expected to finish the tasks. The voice will give back encouragement to support the user when they are frustrated, and AI will be used to identify the user’s emotion through their tone. 2. **Support for Reading Disorder**: This can be turned on by the user. There will be some special fonts that can help the user to better read and understand the text if they have a reading disorder.
winning
## Inspiration Emergency situations can be extremely sudden and can seem paralyzing, especially for young children. In most cases, children from the ages of 4-10 are unaware of how to respond to a situation that requires contact with first responders, and of what information is most important to communicate. In the case of a parent or guardian having a health issue, children are left feeling helpless. We wanted to give children the confidence that is key to their healthy cognitive and social development by empowering them with the knowledge of how to quickly and accurately respond in emergency situations, which is why we created Hero Alert. ## What it does Our product provides a tangible device for kids to interact with, guiding them through the process of making a call to 9-1-1 emergency services. A conversational AI bot uses natural language understanding to listen to the child’s responses and tailor the conversation accordingly, creating a sense that the child is talking to a real emergency operator. Our device has multiple positive impacts: the educational aspect of encouraging children’s cognitive development skills and preparing them for serious, real-life situations; giving parents more peace of mind, knowing that their child can respond to dire situations; and providing a diverting, engaging game for children to feel like their favorite Marvel superhero while taking the necessary steps to save the day! ## How we built it On the software side, our first step was to find images from comic books that closely resemble real-life emergency and crisis scenarios. We implemented our own comic classifier with the help of IBM Watson’s visual recognition service, classifying and re-tagging images made available by Marvel’s Comics API into crisis categories such as fire, violence, water disasters, or unconsciousness. The physical device randomly retrieves and displays these image objects from an mLab database each time a user mimics a 9-1-1 call. We used the Houndify conversational AI by SoundHound to interpret the voice recordings and generate smart responses. Different emergency scenarios were stored as pages in Houndify and different responses from the child were stored as commands. We used Houndify’s smart expressions to build up potential user inputs and ensure the correct output was sent back to the Pi. Running on the Pi was a series of Python scripts, a command engine and an interaction engine, that enabled the flow of data and verified the child’s input. On the hardware end, we used a Raspberry Pi 3 connected to a Sony Eye camera/microphone to record audio and a small HDMI monitor to display a tagged Marvel image. The telephone 9-1-1 digits were inputted as haptic buttons connected to the Pi’s GPIO pins. All of the electronics were encapsulated in a custom laser-cut box that acted as both a prototype for a children’s toy and as protection for the electronics. ## Challenges we ran into The comics from the Marvel API are hand-drawn and don’t come with detailed descriptions, so we had a tough time training a general model to match pictures to each scenario. We ended up creating a custom classifier with IBM Watson’s visual recognition service, using a few pre-selected images from Marvel, then applied that to the entirety of the Marvel image set to diversify our selection. The next challenge was creating conversational logic flow that could be applied to a variety of statements a child might say while on the phone.
We created several scenarios that involved numerous potential emergency situations and used Houndify’s Smart Expressions to evaluate the response from a child. Matching statements to these expressions allowed us to understand the conversation and provide custom feedback and responses throughout the mock phone call. We also wanted to make sure that we provide a sense of empowerment for the child. While they should not make unnecessary calls, children should not be afraid or anxious to talk with emergency services during an emergency. We want them to feel comfortable, capable, and strong enough to make that call and help the situation they are in. Our implementation of Marvel Comics allowed us to provide some sense of super-power to the children during the calls. ## Accomplishments that we're proud of Our end product works smoothly and simulates an actual conversation for a variety of crisis scenarios, while providing words of encouragement and an unconventional approach to emergency response. We used a large variety of APIs and platforms and are proud that we were able to have all of them work with one another in a unified product. ## What we learned We learned that the ideation process and collaboration are key to solving any wicked problem that exists in society. We also learned that having a multidisciplinary team with very diverse backgrounds and skill sets provides the most comprehensive contributions and challenges us both as individuals and as a team. ## What's next for Hero Alert! We'd love to get more user feedback and continue development and prototyping of the device in the future, so that one day it will be available on store shelves.
## Inspiration In times of disaster, the capacity of rigid networks like cell service and internet dramatically decreases at the same time demand increases as people try to get information and contact loved ones. This can lead to crippled telecom services which can significantly impact first responders in disaster struck areas, especially in dense urban environments where traditional radios don't work well. We wanted to test newer radio and AI/ML technologies to see if we could make a better solution to this problem, which led to this project. ## What it does Device nodes in the field network to each other and to the command node through LoRa to send messages, which helps increase the range and resiliency as more device nodes join. The command & control center is provided with summaries of reports coming from the field, which are visualized on the map. ## How we built it We built the local devices using Wio Terminals and LoRa modules provided by Seeed Studio; we also integrated magnetometers into the devices to provide a basic sense of direction. Whisper was used for speech-to-text with Prediction Guard for summarization, keyword extraction, and command extraction, and trained a neural network on Intel Developer Cloud to perform binary image classification to distinguish damaged and undamaged buildings. ## Challenges we ran into The limited RAM and storage of microcontrollers made it more difficult to record audio and run TinyML as we intended. Many modules, especially the LoRa and magnetometer, did not have existing libraries so these needed to be coded as well which added to the complexity of the project. ## Accomplishments that we're proud of: * We wrote a library so that LoRa modules can communicate with each other across long distances * We integrated Intel's optimization of AI models to make efficient, effective AI models * We worked together to create something that works ## What we learned: * How to prompt AI models * How to write drivers and libraries from scratch by reading datasheets * How to use the Wio Terminal and the LoRa module ## What's next for Meshworks - NLP LoRa Mesh Network for Emergency Response * We will improve the audio quality captured by the Wio Terminal and move edge-processing of the speech-to-text to increase the transmission speed and reduce bandwidth use. * We will add a high-speed LoRa network to allow for faster communication between first responders in a localized area * We will integrate the microcontroller and the LoRa modules onto a single board with GPS in order to improve ease of transportation and reliability
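As a rough illustration of the damaged/undamaged building classifier mentioned above, here is a compact Keras sketch; the directory layout, image size, and network shape are assumptions, and the actual model was trained on Intel Developer Cloud rather than with this exact code.

```python
# Sketch: binary image classifier (damaged vs. undamaged buildings) in Keras.
import tensorflow as tf
from tensorflow import keras

# assumes buildings/damaged/*.jpg and buildings/undamaged/*.jpg
train_ds = keras.utils.image_dataset_from_directory(
    "buildings", validation_split=0.2, subset="training", seed=7,
    image_size=(128, 128), batch_size=32, label_mode="binary")
val_ds = keras.utils.image_dataset_from_directory(
    "buildings", validation_split=0.2, subset="validation", seed=7,
    image_size=(128, 128), batch_size=32, label_mode="binary")

model = keras.Sequential([
    keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    keras.layers.Conv2D(16, 3, activation="relu"), keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"), keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # outputs P(damaged)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```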
## Inspiration Learning how to communicate is one of the most critical life skills that a child needs to learn during their earliest stages of life. Additionally, monitoring a child's mental health, and providing the appropriate emotional support for a child in stressful situations, are also crucial needs that parents are sometimes unable to fulfill. Our socially intelligent toy addresses both of these problem spaces by acting as a conversational partner that facilitates and maintains the child's mental health through heart rate detection and auditory cues. ## What it does The toy serves as a chatbot that is able to converse with a child on a daily basis. Its Google Cloud-supported artificial intelligence allows it to not only fulfill basic requests, but also provide context-appropriate comments that can help the child learn basic social and conversational skills. Our toy also includes a heart-rate sensor that checks for abnormalities in the child's heart rate, alerts parents, and adjusts its conversational intentions accordingly. ## How we built it We built this project using the Google AIY Voice Kit. We also used a pulse sensor to measure heart rate. We used the voice kit to process and generate voice commands and data, and the chatbot to generate emotionally supportive and engaging responses. Using TensorFlow and sequence-to-sequence ML, we attempted to improve upon existing chatbots by increasing the amount of training data they have access to. ## Challenges we ran into We ran into various challenges along the way. None of us have a lot of machine learning experience, and we struggled when we had to come up with the best way to train our chatbot. Furthermore, we struggled while figuring out the most efficient and effective way to parse through the databases that we found. ## Accomplishments that we're proud of The Google Voice Kit works extremely well and responds to all of our commands. Also, we have a really cute unicorn stuffed animal now. ## What we learned We all got to work with a variety of unfamiliar technology during this hackathon. Although not all of our attempts were successful, trying to interface and connect each of these technologies was an extremely rewarding experience. ## What's next for Emotional and Social Support Toy Next, we want to build a speech translation system and further help children develop new language skills. We also want to be able to use Google's facial expression recognition system to generate responses based on the person's emotions. Another improvement we could make is to find ways to feed data back into the speech recognition system on a daily basis to further improve the chatbots' ability to communicate with children. We could also expand its application to healthcare contexts (i.e. hospitals could keep a few of these chatbots on hand to give to children when they don't have access to the emotional support of a direct family member or friend. The chatbot could converse with the child using language that she would understand, and act as a friend during that time of distress).
partial
## Inspiration Ordering delivery and eating out is a major aspect of our social lives. But when healthy eating and dieting come into play, they interfere with our ability to eat out and hang out with friends. With a wave of fitness hitting our generation like a storm, we have to preserve our social relationships while allowing health-conscious people to feel at peace with their dieting plans. With NutroPNG, we enable these differences to be settled once and for all by allowing health freaks to keep up with their diet plans while still making restaurant eating possible. ## What it does The user has the option to take a picture or upload their own picture using the front end of our web application. With this input, the backend detects the foods in the photo and labels them through AI image processing using the Google Vision API. Finally, with the CalorieNinja API, these labels are sent to a remote database where we match up the labels to generate the nutritional contents of the food, and we display these contents to our users in an interactive manner. ## How we built it Frontend: Vue.js, tailwindCSS Backend: Python Flask, Google Vision API, CalorieNinja API ## Challenges we ran into As many of us are first-year students, learning while developing a product within 24 hours was a big challenge. ## Accomplishments that we're proud of We are proud to have implemented AI in a capacity that assists people in their daily lives, and to hopefully allow this idea to improve people's relationships and social lives while they still maintain their goals. ## What we learned As most of our team are first-year students with minimal experience, we've leveraged our strengths to collaborate. As well, we learned to use the Google Vision API with cameras, and we are now able to do even more. ## What's next for McHacks * Calculate the total calories, etc. * Use image processing to estimate serving sizes * Implement the technology into prevalent nutrition trackers, e.g. Lifesum, MyPlate, etc. * Collaborate with local restaurant businesses
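A slimmed-down sketch of the backend flow described above: a Flask endpoint labels the uploaded photo with the Google Vision API and looks the labels up in CalorieNinjas. The route name, number of labels, and response shape are assumptions, and the request URL reflects CalorieNinjas' documented endpoint at the time of writing.

```python
# Sketch: Flask endpoint -> Google Vision labels -> CalorieNinjas nutrition lookup.
import os
import requests
from flask import Flask, request, jsonify
from google.cloud import vision

app = Flask(__name__)
vision_client = vision.ImageAnnotatorClient()  # uses GOOGLE_APPLICATION_CREDENTIALS

@app.route("/analyze", methods=["POST"])
def analyze():
    content = request.files["photo"].read()
    labels = vision_client.label_detection(image=vision.Image(content=content)).label_annotations
    query = " and ".join(l.description for l in labels[:3])  # e.g. "pizza and salad"
    nutrition = requests.get(
        "https://api.calorieninjas.com/v1/nutrition",
        params={"query": query},
        headers={"X-Api-Key": os.environ["CALORIE_NINJAS_KEY"]},
        timeout=10,
    ).json()
    return jsonify({"detected": query, "nutrition": nutrition})

if __name__ == "__main__":
    app.run(debug=True)
```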
## Inspiration A couple of weeks ago, 3 of us met up at a new Italian restaurant and we started going over the menu. It became very clear to us that there were a lot of options, but also that a lot of them didn't match our dietary requirements. And so, we thought of Easy Eats, a solution that analyzes the menu for you, to show you what options are available to you without the disappointment. ## What it does You first start by signing up to our service through the web app, set your preferences and link your phone number. Then, any time you're out (or even if you're deciding on a place to go) just pull up the Easy Eats contact and send a picture of the menu via text - no internet required! Easy Eats then does the hard work of going through the menu and comparing the items with your preferences, and highlights options that it thinks you would like, dislike and love! It then returns the menu to you, and saves you time when deciding your next meal. Even if you don't have any dietary restrictions, by sharing your preferences Easy Eats will learn what foods you like and suggest better meals and restaurants. ## How we built it The heart of Easy Eats lies on the Google Cloud Platform (GCP), and the soul is offered by Twilio. The user interacts with Twilio's APIs by sending and receiving messages; Twilio also initiates some of the API calls that are directed to GCP through Twilio's serverless functions. The user can also interact with Easy Eats through Twilio's chat function or REST APIs that connect to the front end. In the background, Easy Eats uses Firestore to store user information, and Cloud Storage buckets to store all images and links sent to the platform. From there the images/PDFs are parsed using either the OCR engine or the Vision AI API (OCR works better with PDFs whereas Vision AI is more accurate when used on images). Then, the data is passed through the NLP engine (customized for food) to find synonyms for popular dietary restrictions (such as pork byproducts: salami, ham, ...). Finally, App Engine glues everything together by hosting the frontend and the backend on its servers. ## Challenges we ran into This was the first hackathon for a couple of us, but also the first time for any of us to use Twilio. That proved a little hard to work with, as we misunderstood the difference between Twilio Serverless Functions and the Twilio SDK for use on an Express server. We ended up getting lost in the wrong documentation, scratching our heads for hours until we were able to fix the API calls. Further, with so many moving parts, a few of the integrations were very difficult to work with, especially when having to re-download and re-upload files, taking valuable time from the end user. ## Accomplishments that we're proud of Overall we built a solid system that connects Twilio, GCP, a backend, a frontend and a database, and provides a seamless experience. There is no dependency on the user either: they just send a text message from any device and the system does the work. It's also special to us as we personally found it hard to find good restaurants that match our dietary restrictions, and it made us realize just how many foods have alternative names that one would not normally think to google. ## What's next for Easy Eats We plan on continuing development by suggesting local restaurants that are well suited for the end user. This would also allow us to monetize the platform by giving paid priority to some restaurants.
There's also a lot to be improved in terms of code efficiency (I think we have O(n^4) in one of the functions ahah...) to make this a smoother experience. Easy Eats will change restaurant dining as we know it. Easy Eats will expand its services and continue to make life easier for people, looking to provide local suggestions based on your preferences.
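To make the texting flow above concrete, here is a hedged Flask sketch of the Twilio webhook that receives a menu photo and replies with the annotated copy; the actual project used Twilio serverless functions and a fuller GCP pipeline, and the `annotate_menu` helper and its return URL are placeholders.

```python
# Sketch: Twilio SMS/MMS webhook that accepts a menu photo and replies with an annotated copy.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def annotate_menu(image_url: str, phone_number: str) -> str:
    """Placeholder for the real pipeline: OCR, preference matching, upload, return a public URL."""
    return "https://storage.googleapis.com/easy-eats/annotated-menu.png"  # placeholder URL

@app.route("/sms", methods=["POST"])
def incoming_sms():
    resp = MessagingResponse()
    if int(request.form.get("NumMedia", 0)) == 0:
        resp.message("Send us a photo of the menu and we'll highlight it for you!")
        return str(resp)
    annotated_url = annotate_menu(request.form["MediaUrl0"], request.form["From"])
    msg = resp.message("Here's your menu, highlighted for your preferences.")
    msg.media(annotated_url)
    return str(resp)

if __name__ == "__main__":
    app.run(port=5000)
```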
## Inspiration At the start of the 2021 school year, we moved away from home, away from all the luxuries of living with our parents. We were thrown into the adult world where we had to manage personal finances and cook for ourselves, all without prior experience. As we budgeted our meals and bought bulk foods, we always had one problem: wasting food. Whether the food spoiled, or we didn’t know what to do with a specific food item, we saw a lot of food wasted and thrown out into the garbage. This, however, was not just a problem in our household; we saw it occur in many of our friends' houses as well. This got us curious, so we looked up statistics for food waste and were shocked when we saw the numbers. The National Zero Waste Council’s research on household food waste in Canada has revealed that almost 2.3 million tonnes of edible food is wasted each year, costing more than $21 billion. This costs the average Canadian family $1300 per year. On top of the economic implications that food waste has, it also has environmental implications. The 2.3 million tonnes of avoidable food waste is equivalent to about 6.9 million tonnes of CO2 and more than 2 million cars on the road. We decided that we needed to combat this and came up with our app, Chef Buddy, an app that will help you reduce food waste. ## What it does Chef Buddy is a Food/Tech company with an emphasis on zero food waste. Chef Buddy works by having the user take an image and upload it to our web app, where it is encoded and passed through a machine learning and image recognition algorithm to detect the types of foods/ingredients in the picture. It then searches through hundreds of thousands of recipes that can be made with those ingredients, and returns the top 6 recommendations to the user. ## How we built it Chef Buddy uses Firebase, Google Vision, the Spoonacular API, OpenCV, NumPy, Pandas, Python, HTML, CSS and JavaScript. In the backend, we utilized the Google Firebase Realtime Database to store our image as a base64-encoded string. We then have Python retrieve the base64 string, decode it, and send the image to Google Vision, where the image recognition machine learning model returns the foods/ingredients that are within the picture. JavaScript then retrieves the data that was sent back from Google Vision and sends it to the Spoonacular API; the API searches through hundreds of thousands of recipes that contain the foods/ingredients that Google Vision gives back. We then show the top 6 recommended recipes on the front end with HTML and CSS. ## Challenges we ran into Throughout the development journey we ran into many problems that we were able to pivot from and create effective solutions to. The first problem that we ran into was with information transferring from JavaScript to Python and vice versa. We realized that the libraries that we wanted to use to transfer data were not secure and were inefficient at run time. We then pivoted and decided to create a database where we could store, retrieve and read each other's data. We decided to do this with Firebase, Google’s cloud-hosted database, which solved our problem and allowed us to efficiently transfer data to each other.
The second challenge that we ran into during our development process was food recognition. Initially, we decided to utilize OpenCV’s object detection and image recognition models; however, we soon realized that many of the datasets offered on Kaggle could not meet our demands, and after many hours of training, our model could not seem to properly return the foods/ingredients from our images. We found that Google Vision was the answer to our problems: using Google Vision’s model, we were able to detect foods/ingredients from our uploaded pictures, and it not only returned extremely accurate predictions of the food/ingredient but also sent them back fast. ## Accomplishments that we're proud of Chef Buddy is a project that we are extremely proud of. This was our first time working with a database, and we were able to learn the fundamentals of Firebase in just a matter of hours by learning to read documentation. Another accomplishment that we are proud of was integrating the APIs that we used and having them work with each other, specifically being able to connect Google Vision with the Spoonacular API, as it was our first time working with both APIs and it was a problem that took us a while to solve. Connecting each of our separate parts to create the final product was an extremely relieving, exciting moment for all of us, and seeing our code work together without any errors was the biggest accomplishment. ## What we learned * User and client interaction through the Firebase Realtime Database. * We learned how to use the backend to create and read entries from the database * Learned to make API calls using the Google Vision API and Spoonacular to find recipes * Learned how to connect the front end with the backend using a JavaScript framework. ## What's next for Chef Buddy * What's next is to further improve this project by adding better features to improve efficiency and convenience. * We can add features such as live scanning, which can scan multiple objects in a short video. * Another feature we can add is filters, which allow the user to change what they want to see. * We want Chef Buddy to identify what is spoiling first in the photo.
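A compact sketch of the chain described above: decode the base64 image pulled from Firebase, label it with Google Vision, and feed the ingredient names to Spoonacular's findByIngredients endpoint. The number of labels and recipes are assumptions, and the Firebase retrieval step is omitted.

```python
# Sketch: base64 image -> Google Vision labels -> Spoonacular recipe suggestions.
import base64
import os
import requests
from google.cloud import vision

def recipes_from_photo(b64_image: str, max_recipes: int = 6) -> list[dict]:
    content = base64.b64decode(b64_image)
    client = vision.ImageAnnotatorClient()
    labels = client.label_detection(image=vision.Image(content=content)).label_annotations
    ingredients = ",".join(l.description.lower() for l in labels[:5])  # e.g. "tomato,basil,cheese"
    resp = requests.get(
        "https://api.spoonacular.com/recipes/findByIngredients",
        params={"ingredients": ingredients, "number": max_recipes,
                "apiKey": os.environ["SPOONACULAR_KEY"]},
        timeout=10,
    )
    return resp.json()

# Example usage, assuming a file containing the base64 string pulled from Firebase:
for recipe in recipes_from_photo(open("fridge_photo.b64").read()):
    print(recipe["title"])
```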
winning
## Inspiration The inspiration for Skill Chain came from the desire to create a more efficient and serious job application process. We wanted to reduce spam and ensure that applicants are genuinely interested in the positions they apply for. The concept is similar to Tinder, but with a twist - it’s for job applications! ## What it does Skill Chain is a blockchain-based platform where users pay to apply for jobs. The application fee is automatically deducted when a user decides to apply for a job. If the applicant is not selected (left-swiped by the recruiter), the money is refunded. This innovative approach reduces spam and ensures that only serious candidates apply. ## How we built it We built Skill Chain using the XRP Ledger for blockchain transactions and smart contracts, ensuring a secure and transparent process. The backend REST API was developed using Java Spring Boot, providing robust and scalable server-side software. For the frontend, we used the Angular framework to create a user-friendly interface. ## Challenges we ran into Implementing the blockchain transactions and smart contracts on the XRP Ledger was a significant challenge due to its complexity. Additionally, integrating the frontend with the backend while ensuring data consistency and security was also a hurdle we had to overcome. ## Accomplishments that we're proud of We’re proud of developing a unique solution that addresses a real-world problem. The successful integration of different technologies (blockchain, backend, and frontend) to create a cohesive platform is a significant achievement for us. ## What we learned Through this project, we learned about the practical implementation of blockchain technology and smart contracts. We also gained experience in backend and frontend development, and how to integrate them effectively. ## What's next for Skill Chain The next step for Skill Chain is to incorporate more advanced features, such as AI-based matching of applicants to jobs. We also plan to expand our user base and collaborate with more companies to offer a wider range of job opportunities.
## Inspiration The counterfeiting industry is anticipated to grow to $2.8 trillion in 2022, costing 5.4 million jobs. These counterfeiting operations push real producers to bankruptcy as cheaper knockoffs with unknown origins flood the market. In order to solve this issue, we developed a blockchain-powered service with tags that uniquely identify products, cannot be faked or duplicated, and also give transparency, as consumers today value not only the product itself but also the story behind it. ## What it does Certi-Chain uses a Python-based blockchain to authenticate any product with a Certi-Chain NFC tag. Each tag will contain a unique ID attached to the blockchain that cannot be faked. Users are able to tap their phones on any product containing a Certi-Chain tag to view the authenticity of the product through the Certi-Chain blockchain. Additionally, if the product is authentic, users are also able to see where the product's materials were sourced and assembled. ## How we built it Certi-Chain uses a simple Python blockchain implementation to store the relevant product data. It uses a proof-of-work algorithm to add blocks to the blockchain and check if a blockchain is valid. Additionally, since this blockchain is decentralized, nodes (computers that host a blockchain) have to be synced using a consensus algorithm to decide which version of the blockchain from any node should be used. In order to render web pages, we used Python Flask, with our web server running the blockchain to fetch relevant information from the blockchain and display it to the user in a style that is easy to understand. A web client to input information into the chain was also created using Flask to communicate with the server. ## Challenges we ran into For all of our group members, this project was one of the toughest we have had. The first challenge we ran into was that, once our idea was decided, we quickly realized only one group member had the appropriate hardware to test our product in real life. Additionally, we deliberately chose an idea that none of us had experience in. This meant we had to spend a portion of our time understanding concepts such as blockchain and frameworks like Flask. Beyond the starting choices, we also hit several roadblocks, as we were unable to get the blockchain running on the cloud for a significant portion of the project, hindering development. However, in the end we were able to rack our brains over these issues and achieve a product that exceeded our expectations going in. We were all extremely proud of our end result, and we all believe that the struggle was definitely worth it. ## Accomplishments that we're proud of Our largest achievement was that we were able to accomplish all our wishes for this project in the short time span we were given. Not only did we learn Flask, some more Python, web hosting, NFC interactions, blockchain and more, but we were also able to combine these ideas into one cohesive project. Being able to see the blockchain run for the first time after hours of troubleshooting was a magical moment for all of us. As for the smaller wins sprinkled throughout the day, we were able to work with physical NFC tags and create labels that we stuck on just about any product we had. We also came out more confident in the skills we already had and developed new skills along the way.
## What we learned In the development of Certi-Chain we learnt so much about blockchains, hashes, encryption, Python web frameworks, product design, and the counterfeiting industry too. We came into the hackathon with only a rudimentary idea of what blockchains even were, and throughout the development process we came to understand the nuances of blockchain technology and security. As for web development and hosting, using the Flask framework to create pages that were populated with Python objects was certainly a learning curve for us, but it was a learning curve that we overcame. Lastly, we were all able to learn more about each other and also the difficulties and joys of pursuing a project that seemed almost impossible at the start. ## What's next for Certi-Chain Our team really believes that what we made in the past 36 hours can make a real, tangible difference in the world market. We would love to continue developing and pursuing this project so that it can be polished for real-world use. This includes us tightening the security on our blockchain, looking into better hosting, and improving the user experience for anyone who would tap on a Certi-Chain tag.
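For readers curious what the "simple Python blockchain" described above might look like, here is a minimal proof-of-work sketch: blocks chain by hash, mining searches for a nonce whose hash has a fixed number of leading zeros, and validation re-checks both the links and the difficulty. The difficulty and field names are illustrative, not Certi-Chain's actual parameters.

```python
# Minimal proof-of-work blockchain sketch: hashing, mining, and chain validation.
import hashlib
import json
import time

DIFFICULTY = 4  # illustrative: valid block hashes must start with four zeros

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(prev_hash: str, product_data: dict) -> dict:
    block = {"timestamp": time.time(), "data": product_data,
             "prev_hash": prev_hash, "nonce": 0}
    while not block_hash(block).startswith("0" * DIFFICULTY):
        block["nonce"] += 1
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    return block

def is_valid(chain: list[dict]) -> bool:
    for prev, cur in zip(chain, chain[1:]):
        body = {k: v for k, v in cur.items() if k != "hash"}
        if cur["prev_hash"] != prev["hash"] or block_hash(body) != cur["hash"]:
            return False
        if not cur["hash"].startswith("0" * DIFFICULTY):
            return False
    return True

genesis = mine("0" * 64, {"tag_id": "demo", "origin": "genesis"})
chain = [genesis, mine(genesis["hash"], {"tag_id": "CC-0001", "origin": "Waterloo, CA"})]
print(is_valid(chain))  # True
```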
## Problem Statement How can we revolutionize waste management and recycling processes with a high-tech, user-friendly solution, reducing environmental impact and promoting sustainability? In the current waste management landscape, sorting and recycling are often inefficient and labor-intensive. Many recyclable materials end up in landfills due to improper segregation, leading to environmental degradation and resource wastage. The challenge lies in developing a technology that not only sorts waste effectively but also educates and engages users in sorting their trash. Every day, ordinary people like us lack the intuition, and the advanced technology, to classify and sort waste accurately. The goal is to create a system that uses cutting-edge technology to revolutionize how we handle waste, making the process more efficient, accurate, and interactive through our friendly robot. ## Our Solution Introducing WALL-E, a waste classifier and manager robot that leverages machine learning algorithms and Arduino electronic components to transform recycling practices. The solution is a smart robotic system that intelligently classifies and sorts waste into the appropriate compartments, all while minimizing operation latency. Our WALL-E robot offers an interactive experience through its fun design and LCD screen outputs. The final setup of WALL-E involves an Arduino with a Wi-Fi module hosting a web server. This server handles client requests and integrates with our machine-learning model. When an item is presented to WALL-E, the model analyzes the image from the robot's camera and determines the item's category. Based on this classification, a request is sent to the Arduino web server, which then activates the servo motors to present the correct compartment for the user to dispose of their item. This system ensures a smooth and automated process, from waste recognition to appropriate disposal. **Intelligent Waste Classification** At the heart of WALL-E is a machine learning model trained to identify and classify various types of waste materials into categories such as "Cardboard", "Metal", "Plastic", "Glass", "Trash", etc. Given the item's classification, we rotate the correct trash compartment in front of the user. The use of powerful servo motors and Arduino technology ensures precise and efficient sorting, enhancing the overall effectiveness of the recycling process, all while maintaining structural integrity. **User-Friendly Interaction** WALL-E features an interactive interface that not only performs the sorting task but also educates users about recycling practices. The robot's design is user-friendly and engaging, encouraging more people to participate actively in recycling. Users can interact with WALL-E, learning about the different types of recyclables. ## Technological Stack Backend: Python, TensorFlow; Hardware: Arduino (C++)
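Based on the setup described above, the classification-to-actuation handoff might look roughly like the sketch below. The model file, class-label order, and the Arduino endpoint are assumptions for illustration, not WALL-E's actual code.

```python
import numpy as np
import requests
import tensorflow as tf

CLASSES = ["Cardboard", "Glass", "Metal", "Plastic", "Trash"]  # assumed label order
ARDUINO_URL = "http://192.168.4.1/rotate"  # assumed address of the Arduino web server

model = tf.keras.models.load_model("walle_classifier.h5")  # hypothetical saved model

def classify_and_dispatch(image_path: str) -> str:
    """Classify a camera frame and ask the Arduino to present the matching bin."""
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[None, ...] / 255.0
    label = CLASSES[int(np.argmax(model.predict(x), axis=-1)[0])]
    # Tell the Arduino web server which compartment to rotate into position.
    requests.get(ARDUINO_URL, params={"bin": label}, timeout=5)
    return label

print(classify_and_dispatch("frame.jpg"))
```

Keeping the heavy inference on the host and sending only a tiny HTTP request to the Arduino is one way to meet the "minimize latency" goal mentioned above.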
partial
## Inspiration I like looking at things. I do not enjoy bad quality videos. I do not enjoy waiting. My CPU is a lazy fool. He just lies there like a drunkard on New Year's Eve. My poor router has a heart attack every other day so I can stream the latest Kylie Jenner video blog post, or as the kids these days call it, a 'vlog' post. The CPU isn't being effectively leveraged to improve video quality. Deep learning methods are in their own world, concerned more with accuracy than applications. We decided to develop a machine learning application to enhance resolution while developing our models in such a way that they can effectively run without 10,000 GPUs. ## What it does We reduce your streaming bill. We let you stream Kylie's vlog in high definition. We connect first-world medical resources to developing nations. We convert an unrecognizable figure in a cop's body cam into a human being. We improve video resolution. ## How I built it Wow. So lots of stuff. Web scraping YouTube videos for datasets at 144, 240, 360, and 480 pixels. Error catching, thread timeouts, yada, yada. Data is the most important part of machine learning, and no one cares in the slightest. So I'll move on. ## ML stuff now. Where the challenges begin We tried research papers. The Super-Resolution Generative Adversarial Network (SRGAN) [link](https://arxiv.org/abs/1609.04802). SRGAN with an attention layer [link](https://arxiv.org/pdf/1812.04821.pdf). These were unusable for our purposes. The models were too large to hold on our laptop, much less run in real time; the weights alone consumed over 16 GB. And yeah, they get pretty good accuracy. That's the result of training a million residual layers (actually *only* 80 layers) for months on GPU clusters. We did not have the time or resources to build anything similar to these papers, so we did not continue down this path. We instead looked to our own experience. Our team had previously analyzed the connection between image recognition and natural language processing and their shared relationship to high-dimensional spaces [see here](https://arxiv.org/abs/1809.05286). We took these learnings and built a model that minimized the root mean squared error as it upscaled from 240 to 480 px. However, we quickly hit a wall, as this pixel-based loss consistently left the upscaled output with blurry edges. In order to address these edges, we used our model as the generator in a generative adversarial network. However, our generator was too powerful, and the discriminator was lost. We then decided to leverage the work of the researchers before us in order to build this application for the people. We loaded a pretrained VGG network and leveraged its image embeddings as preprocessing for our discriminator. Leveraging this pretrained model, we were able to effectively iron out the blurry edges while still minimizing mean squared error. With the model built, we then worked at 4 AM to build an application that converts videos into high resolution. ## Accomplishments that I'm proud of Building it well. ## What I learned Balanced approaches and leveraging past learning. ## What's next for Crystallize A real-time stream-enhancement app.
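The approach described above, a pixel loss combined with features from a pretrained VGG network to sharpen edges, can be sketched roughly as below. This is an illustrative reconstruction assuming a TensorFlow/Keras setup; the layer choice and loss weights are assumptions, not the team's actual values.

```python
import tensorflow as tf

# Frozen VGG19 feature extractor; an intermediate conv block serves as the "perceptual" space.
vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
feature_extractor = tf.keras.Model(vgg.input, vgg.get_layer("block4_conv4").output)
feature_extractor.trainable = False

def perceptual_loss(hr, sr, pixel_weight=1.0, feature_weight=0.1):
    """Combine plain pixel MSE with MSE in VGG feature space to reduce blurry edges."""
    pixel_term = tf.reduce_mean(tf.square(hr - sr))
    hr_feat = feature_extractor(tf.keras.applications.vgg19.preprocess_input(hr * 255.0))
    sr_feat = feature_extractor(tf.keras.applications.vgg19.preprocess_input(sr * 255.0))
    feature_term = tf.reduce_mean(tf.square(hr_feat - sr_feat))
    return pixel_weight * pixel_term + feature_weight * feature_term

# Toy usage with random tensors standing in for 480p ground truth and an upscaled 240p output.
hr = tf.random.uniform((1, 480, 854, 3))
sr = tf.random.uniform((1, 480, 854, 3))
print(float(perceptual_loss(hr, sr)))
```

The pixel term keeps colors and structure roughly right, while the feature term penalizes outputs whose VGG activations differ from the ground truth, which is what tends to remove the blur a plain MSE loss leaves behind.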
## Inspiration Inspired by a team member's desire to study for his courses by listening to textbook readings recited by his favorite anime characters, functionality that does not exist in any app on the market, we realized that there was an opportunity to build a similar app that would bring about even deeper social impact. Dyslexics, the visually impaired, and those who simply enjoy learning by having their favorite characters read to them (e.g. children, fans of TV series, etc.) would benefit from a highly personalized app. ## What it does Our web app, EduVoicer, allows a user to upload a segment of their favorite template voice audio (it only needs to be a few seconds long) and a PDF of a textbook, and uses existing deepfake technology to synthesize the dictation from the textbook in the user's favorite voice. The deepfake tech relies on a multi-network model trained using transfer learning on hours of voice data. The encoder first generates a fixed embedding of a given voice sample of only a few seconds, which characterizes the unique features of the voice. Then, this embedding is used in conjunction with a seq2seq synthesis network that generates a mel spectrogram based on the text (obtained via optical character recognition from the PDF). Finally, this mel spectrogram is converted into the time domain via the WaveRNN vocoder (see [this](https://arxiv.org/pdf/1806.04558.pdf) paper for more technical details). Then, the user automatically downloads the .WAV file of their favorite voice reading the PDF contents! ## How we built it We combined a number of different APIs and technologies to build this app. For scalable machine learning compute, we relied heavily on the Google Cloud APIs, including the Google Cloud PDF-to-text API, Google Cloud Compute Engine VMs, and Google Cloud Storage; for the deep learning techniques, we mainly relied on existing deepfake code written for Python and TensorFlow (see the GitHub repo [here](https://github.com/rodrigo-castellon/Real-Time-Voice-Cloning), which is a fork). For web server functionality, we relied on Python's Flask module, the Python standard library, HTML, and CSS. In the end, we pieced together the web server with Google Cloud Platform (GCP) via the GCP API, utilizing Google Cloud Storage buckets to store and manage the data the app would be manipulating. ## Challenges we ran into Some of the greatest difficulties were encountered in the superficially simplest implementations. For example, the front end initially seemed trivial (what's more to it than a page with two upload buttons?), but many of the intricacies associated with communicating with Google Cloud meant that we had to spend multiple hours creating even a landing page with just drag-and-drop and upload functionality. On the backend, 10 excruciating hours were spent attempting (successfully) to integrate the existing deepfake/voice-cloning code with the Google Cloud Platform. Many mistakes were made, and in the process, there was much learning. ## Accomplishments that we're proud of We're immensely proud of piecing all of these disparate components together quickly and managing to arrive at a functioning build. What started out as merely an idea manifested itself into a usable app within hours. ## What we learned We learned that sometimes the seemingly simplest things (dealing with Python/CUDA versions for hours) can be the greatest barriers to building something that could be socially impactful.
We also realized the value of well-developed, well-documented APIs (e.g. Google Cloud Platform) for programmers who want to create great products. ## What's next for EduVoicer EduVoicer still has a long way to go before it can gain users. Our first next step is to implement functionality, possibly with some image segmentation techniques, to decide which parts of the PDF should be scanned; this way, tables and charts could be intelligently discarded (or, even better, referenced throughout the audio dictation). The app is also not robust enough to handle large multi-page PDF files; the preliminary app was designed as a minimum viable product, including only enough to process a single-page PDF. Thus, we plan to increase efficiency (time-wise) and scale the app by splitting PDFs into fragments, processing them in parallel, and returning the output to the user after collating the individual text-to-speech outputs. In the same vein, the voice cloning algorithm was restricted by the length of the input text, so this is an area we seek to scale and parallelize in the future. Finally, we are thinking of using some server-side caching mechanisms to reduce waiting time for the output audio file.
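A rough sketch of the three-stage pipeline described above (speaker encoder, then spectrogram synthesizer, then vocoder) is shown below. The function names are illustrative stand-ins for the forked repo's encoder, synthesizer, and vocoder modules, not its exact API; only the overall data flow is meant to be accurate.

```python
import numpy as np
from scipy.io import wavfile

# Hypothetical stand-ins for the cloning repo's three pretrained networks.
def embed_voice(reference_wav: np.ndarray) -> np.ndarray:
    """Speaker encoder: a few seconds of audio -> fixed-size voice embedding."""
    raise NotImplementedError  # supplied by the pretrained encoder network

def text_to_mel(text: str, voice_embedding: np.ndarray) -> np.ndarray:
    """Seq2seq synthesizer: text + embedding -> mel spectrogram."""
    raise NotImplementedError  # supplied by the pretrained synthesizer network

def mel_to_wav(mel: np.ndarray) -> np.ndarray:
    """WaveRNN-style vocoder: mel spectrogram -> time-domain waveform."""
    raise NotImplementedError  # supplied by the pretrained vocoder network

def clone_dictation(reference_wav_path: str, ocr_text: str, out_path: str, sr: int = 22050):
    """Glue step: reference clip + OCR'd text -> .WAV of the cloned voice reading the text."""
    _, reference = wavfile.read(reference_wav_path)
    embedding = embed_voice(reference.astype(np.float32))
    mel = text_to_mel(ocr_text, embedding)
    wavfile.write(out_path, sr, mel_to_wav(mel))
```

Splitting the OCR'd text into chunks and calling `text_to_mel` on each chunk in parallel is one way the scaling plan above could be realized.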
## Inspiration In a world where technology's potential is often misused, we couldn't help but feel inspired to change the narrative. While scrolling through TikTok, we were struck by the realization that powerful innovations like deepfakes were being used in less-than-ideal ways. It was then that we decided to embark on a mission to harness the potential of deepfakes for educational purposes. ## What it does Our platform, CelebLearn, enables users to upload a PDF textbook image from their device and then select from a diverse range of celebrities, allowing them to receive personalized lessons directly from their chosen celebrity. Our software facilitates this by having the chosen celebrity explain complex concepts and summarize the content of the uploaded PDF. Following this explanation, users are encouraged to record themselves explaining the concept in their own words. Our software then analyzes the PDF, generating keywords that the user's explanation should cover. This feedback loop empowers users to gauge their comprehension and identify areas for improvement: our software compares the generated keywords to the user's recorded explanation, identifying any missed words or concepts, and provides this analysis to the user so they can see exactly where their understanding can improve. ## How we built it We built CelebLearn on React with TypeScript. For the lip-sync portion of the software we used the Sync Labs Lip Sync API, and to connect the front end with the back end we used FastAPI. Lastly, we used OpenAI to summarize the information the user provides in their uploaded PDF; it is also used to generate the transcript of the video and the keywords used to test the user's knowledge of a given area. ## Challenges we ran into This project involved many separate API calls and piecing together several consecutive parts to make a functional program, which made the connection between the front and back end difficult to manage. One of the most difficult parts was generating the deepfake itself; given the limited number of APIs and resources available online, we needed to compromise. We had to run each step separately (get template audio → change the voice of the audio → take the template video → add lip syncing and combine the two), which was hard. ## Accomplishments that we're proud of We are proud that we managed to create such intricate and complex software in so little time. Specifically, we had never used ML models to create deepfakes before, as the concept was still relatively new to us. Additionally, it was the first time three of our teammates had created a React website. ## What we learned Throughout the hackathon, we participated in many workshops and made many connections. We engaged in many conversations about bugs and issues that others were having and learned from their experience using JavaScript and React. Additionally, through the workshops we learned about the importance of incorporating accessibility features in software, and came to understand how crucial they are. ## What's next for CelebLearn CelebLearn aims to keep helping people learn educational concepts in a fun and enticing way. In the future, we plan to implement a feedback box for our users to tell us about problems with our program so that we can work to fix them.
We hope to add more celebrity options and even allow users to generate their own by specifying which individual they would like to see in the software. ## Built with * Python * React * JavaScript * FastAPI * OpenAI * Sync Labs Lip Sync API **Technologies** * Optical character recognition * Text-to-speech * Speech transcription
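A minimal sketch of the kind of FastAPI endpoint described in "How we built it", where text extracted from the uploaded PDF is summarized with OpenAI before the lip-sync stage narrates it. The endpoint path, model choice, and prompt wording are assumptions, not CelebLearn's actual code.

```python
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

class SummarizeRequest(BaseModel):
    text: str          # text extracted from the uploaded PDF page
    celebrity: str     # persona whose "voice" the summary should be written in

@app.post("/summarize")
def summarize(req: SummarizeRequest):
    """Return a short lesson-style summary that the lip-sync stage can narrate."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system", "content": f"Explain concepts simply, in the style of {req.celebrity}."},
            {"role": "user", "content": f"Summarize this textbook excerpt:\n\n{req.text}"},
        ],
    )
    return {"summary": completion.choices[0].message.content}
```

A second, similar endpoint could ask the model for a comma-separated keyword list, which is then compared against the transcript of the user's recorded explanation.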
winning
## Inspiration The world is going through totally different times today, and there is a greater need than ever to give back and help each other during these uncertain times. Volassis was born during the COVID-19 pandemic, when so much volunteering was happening but no single system was available to enable and motivate individuals to give back to the community. The idea started during my sophomore year, when I was volunteering with a senior-care organization called "Sunrise Senior Living" and all of their records were manual. I started by automating the attendance system, and in a matter of a few months many of the new features required for fully online volunteering were added during the COVID-19 period. ## What it does The system provides youth with a completely automated, one-stop platform for finding volunteer opportunities, volunteering, and getting approved volunteer hours in a matter of a few clicks. The system was developed out of a need to improve the existing archaic, email-based manual systems, which make this process very cumbersome and time-consuming. ## Why I built it Volassis is a centralized system developed by Lakshya Gupta from Tompkins High School in Katy, Texas, who recognized the need for such a system while volunteering himself. Lakshya says, "I have the knack to recognize limitations in existing systems, and I feel an almost irresistible drive to fix and improve them. With my passion for Computer Science and a good hold on several software technologies, I wanted to create an enhanced and easy-to-use volunteering system that not only made finding opportunities easier but also provided a one-stop platform for all volunteer needs." ## How I built it I started by designing the database tables in MySQL: several tables that track users' volunteer logs and support analysis based on them. Then I developed the REST API, since this would be the backend of my project, exposing functions to view users' logs and produce analysis from the database content through REST API calls. After this, I started developing the React Native app and called the REST API functions to keep track of user-entered data and view the database content through the app. Finally, I made a website using mostly HTML, JavaScript, and TypeScript to allow users to see the hours they logged in the app. The link to the website is volassis.com. The GitHub repository links are <https://github.com/LakshyaGupta/Volassis-REST-API/> and <https://github.com/LakshyaGupta/Volassis-Mobile-Application/>. ## Challenges I ran into Some challenges I ran into were initially getting started with the API; building an API is tough in the sense that many errors occur when you first run it. Another challenge was efficiently learning several new languages in a short time frame while still deploying the project in a timely manner. I believe the toughest challenges of the project were finalizing the program, making the website, designing the database tables, and running the REST API. ## Accomplishments that I'm proud of I am proud of recognizing the need for such a system while volunteering myself and turning that into an enhanced, easy-to-use volunteering platform that not only makes finding opportunities easier but also provides a one-stop place for all volunteer needs. ## What I learned I learned several new technologies such as React Native, TypeScript, and JavaScript, which I believe will be truly beneficial to me when pursuing computer science in college and later in a job. Through this hackathon, my passion for computer science has greatly increased. ## What's next for Volassis Currently, 4 organizations are using my system for their volunteering needs, and I am in the process of contacting more organizations to assist them during these difficult times.
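The writeup describes a REST API backed by MySQL tables that track volunteer logs, but it does not name the backend framework, so the Flask-style sketch below, including the table and column names, is purely illustrative of what an "approved hours" lookup might look like.

```python
from flask import Flask, jsonify
import mysql.connector

app = Flask(__name__)

def get_connection():
    # Connection details are placeholders.
    return mysql.connector.connect(host="localhost", user="volassis",
                                   password="secret", database="volassis")

@app.route("/volunteers/<int:volunteer_id>/hours")
def total_hours(volunteer_id: int):
    """Return the total approved hours logged by one volunteer (assumed schema)."""
    conn = get_connection()
    cur = conn.cursor()
    cur.execute(
        "SELECT COALESCE(SUM(hours), 0) FROM volunteer_logs "
        "WHERE volunteer_id = %s AND approved = 1",
        (volunteer_id,),
    )
    (total,) = cur.fetchone()
    conn.close()
    return jsonify({"volunteer_id": volunteer_id, "total_hours": float(total)})

if __name__ == "__main__":
    app.run(debug=True)
```

Both the React Native app and the website could then consume the same JSON endpoint, which matches the one-backend, two-clients structure described above.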
## Inspiration It took us a while to think of an idea for this project. After a long day of Zoom school, we sat down on Friday with very little motivation to do work. As we pushed through this lack of drive, our friends in the other room would offer little encouragements to keep us going, and we started to realize just how powerful those comments are. For everyone working online, and university students in particular, balancing life on and off the screen is difficult. We often find ourselves forgetting daily tasks like drinking enough water or even just taking a small break, and, when we do, there is often negativity towards the idea of rest. This is where You're Doing Great comes in. ## What it does Our web application is focused on helping students and online workers alike stay motivated throughout the day while making the time and space to care for their physical and mental health. Users are able to select different kinds of activities that they want to be reminded about (e.g. drinking water, eating food, movement, etc.) and they can also input messages that they find personally motivational. Then, throughout the day (at their own predetermined intervals) they will receive random positive messages, either through text or call, that will inspire and encourage. There is also an additional feature where users can send messages to friends so that they can share warmth and support, because we are all going through it together. Lastly, we understand that sometimes positivity and understanding aren't enough for what someone is going through, and so we have a list of further resources available on our site. ## How we built it We built it using: * AWS + DynamoDB + Lambda + Cognito + APIGateway + Amplify * React + Redux + React-Dom + MaterialUI * serverless * Twilio * Domain.com * Netlify ## Challenges we ran into * Centring divs should not be so difficult :( * Transferring the name servers from domain.com to Netlify * Serverless deploying with dependencies ## Accomplishments that we're proud of Our logo! It works :) ## What we learned We learned how to host a domain, and we improved our front-end HTML/CSS skills. ## What's next for You're Doing Great We could always implement more reminder features, and we could refine our friends feature so that people can only include selected individuals. Additionally, we could add chatbot functionality so that users can do a little check-in when they get a message.
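Since the stack above includes AWS Lambda and Twilio, a scheduled text reminder might be sent roughly as in the sketch below. The environment variable names and the in-memory message pool are assumptions for illustration; in the real app the messages would come from the user's saved entries in DynamoDB.

```python
import os
import random

from twilio.rest import Client

# Placeholder pool; the real app would load the user's own motivational messages.
MESSAGES = [
    "You're doing great! Time for a glass of water.",
    "Take a five-minute stretch break. You've earned it.",
]

def lambda_handler(event, context):
    """Triggered on the user's chosen schedule; texts them one encouraging message."""
    client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
    message = client.messages.create(
        to=event["user_phone"],                  # assumed to be passed in by the scheduler
        from_=os.environ["TWILIO_FROM_NUMBER"],
        body=random.choice(MESSAGES),
    )
    return {"sid": message.sid}
```

The same handler could switch to a voice call by using Twilio's call-creation API instead of `messages.create`, matching the "text or call" option described above.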
## Inspiration With multiple members of our team having been part of environmental conservation initiatives, and even having run some of our own, an issue we have continually recognized is the difficulty of reaching community members who share the same vision. Outside of a school setting, it's difficult to connect with initiatives and to find others interested in them, so we wanted to solve that issue by centralizing a space for these communities. ## What it does The demographic here is two-fold. Users interested in volunteering can log in, and the site uses their provided location to narrow down nearby events to a radius of their choosing. This makes sorting through hundreds of events quick and easy, and provides a clear pathway to convert the desire to help into tangible change. Users interested in organizing their own events can create accounts and use a simple process to create an event with all its information and post it both to their own page's feed and to the main initiatives list that volunteers are able to browse through. With just a few clicks, an event can be made available to the many volunteers eager to make a difference. ## How we built it As this project is a website, and many of our team are beginners, we worked mostly with HTML, CSS, and JS. We also integrated Bootstrap to help with styling and formatting the pages to improve user experience. ## Challenges we ran into As relative beginners, one challenge we ran into was working with JavaScript files across multiple HTML pages, and finding that parts of our functionality were only accessible using Node.js. To work around this, we focused on restructuring our website pages to ensure easier connections and on finding ways to make our code simpler and more comprehensible. ## Accomplishments that we're proud of We're proud of the community that we built with each other during this hackathon. We truly had so much passion for making this a working product, and loved our logo so much we even made stickers! On a technical level, as first-time users of JavaScript, we're particularly proud of our work with connecting HTML input, using JavaScript for string handling, and then creating new elements on the website. Being able to collect submitted initiatives into our database and display them with live updates was, for us, the most difficult technical work, but also by far the most rewarding. ## What we learned For our team as a whole, the biggest takeaway has been a strongly renewed interest in web development and the intricacies behind connecting so many different aspects of functionality using JavaScript. ## What's next for BranchOut Moving forward, we're looking to integrate Node.js to supplement our implementation, and to increase connectivity between the different inputs available. We truly believe in our mission to promote nature conservation initiatives, and hope to further expand this into an app to increase accessibility and improve user experience.
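The "narrow down nearby events to a radius of their choosing" feature boils down to a great-circle distance check. The site itself is written in JavaScript, but as an illustration (not the team's actual code, and with assumed event fields), the filtering logic looks roughly like this in Python:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby_events(events, user_lat, user_lon, radius_km):
    """Keep only events within the user's chosen radius."""
    return [e for e in events
            if haversine_km(user_lat, user_lon, e["lat"], e["lon"]) <= radius_km]

events = [{"name": "Beach cleanup", "lat": 49.28, "lon": -123.12},
          {"name": "Tree planting", "lat": 43.65, "lon": -79.38}]
print(nearby_events(events, 49.26, -123.25, 25))  # only the nearby event remains
```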
partial
## Inspiration Most students know the feeling of being behind in classes, and Stanford really tries to help us out. We get both lecture slides and videos for many classes, but neither alone is sufficient. Lecture slides are useful for quick information lookup, but are often too dense to interpret without explanation. On the other hand, videos fill in these holes in understanding, but are riddled with superfluous information that can take hours to parse through. What if we could combine these two resources to create a fully integrated visual and auditory learning experience? ## What it does Slip establishes a two-way mapping between class videos and slides to allow for a seamless transition between the two. Watch a few slides until the material gets too dense, and then click the slide to instantly move to the exact point in the video where that same concept is being explained. By fully integrating classroom resources, Slip allows students to navigate between class notes, slides, and videos with a single click. ## How we built it We collected our source data by extracting all the slides from lecture notes with ImageMagick and key frames from the class video using ffmpeg. After extraction, we use SIFT to identify the slide, if present, in every frame, and OCR (optical character recognition) to see how closely the text in each slide/frame pair matches up. By combining these two metrics, we can compute optimal slide and frame mappings for the entire lecture with 90-95% confidence. ## Challenges we ran into Accuracy is extremely important, but often videos don't have great captures of the slides. Neither image processing nor OCR alone was enough to reach an accuracy we liked, but they complement each other very well: OCR is very good for text-heavy slides and image processing is very effective on others. Even using both together, the algorithm still found incorrect mappings much of the time. The big trick for great accuracy was using the knowledge that the slides appear in order. This allows us to not simply look for the best frame for each slide, but for the best set of frames for all the slides at once, such that the slides are in order. Optimizing this in a reasonable amount of time required a clever dynamic programming solution, but it greatly increased accuracy. ## Accomplishments that we're proud of Definitely accuracy. We came in with very little knowledge of image processing and ended up getting some really good accuracy. We also built a seamless front end that makes it super simple for the user to switch between video and lecture, maximizing productivity. ## What we learned If it can go wrong, it will go wrong. We definitely had issues along the way, from libraries with bad documentation, to hard-to-find bugs, to being completely unsure of how to proceed, but together we powered through. ## What's next for Slip Improve the speed of the image processing algorithm.
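The ordered-matching idea described above can be sketched as a small dynamic program: given a score for how well each slide matches each frame (from the combined SIFT and OCR similarity), pick one frame per slide so that the chosen frame indices are non-decreasing and the total score is maximal. This is an illustrative reconstruction, not Slip's actual code, and the score matrix is a stand-in.

```python
import numpy as np

def best_ordered_assignment(score: np.ndarray) -> list[int]:
    """score[i, j] = similarity between slide i and frame j (higher is better).
    Returns one frame index per slide, non-decreasing in time, maximizing total similarity."""
    n_slides, n_frames = score.shape
    dp = np.full((n_slides, n_frames), -np.inf)
    dp[0] = score[0]
    for i in range(1, n_slides):
        best_prev = np.maximum.accumulate(dp[i - 1])  # best dp[i-1][k] over k <= j
        dp[i] = score[i] + best_prev

    # Backtrack: pick the best frame for the last slide, then walk backwards.
    frames = [int(np.argmax(dp[-1]))]
    for i in range(n_slides - 2, -1, -1):
        j = frames[-1]
        frames.append(int(np.argmax(dp[i][: j + 1])))
    frames.reverse()
    return frames

# Toy example: 3 slides, 6 frames, random scores standing in for SIFT + OCR similarity.
toy = np.random.rand(3, 6)
print(best_ordered_assignment(toy))
```

Because each slide only looks at the running maximum of the previous slide's row, the whole table fills in O(slides × frames), which is what makes the "all slides at once" optimization tractable.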
## Inspiration Fully homomorphic computing is a hip new crypto trick that lets you compute on encrypted data. It's pretty wild, so I wanted to try something wild with it. FHE has been getting super fast: boolean operations now take only tens of milliseconds, down from minutes or hours just a few years ago. Most applications of FHE still focus on computing known functions on static data, but it's fast enough now to host a real language all on its own. The function I'm homomorphically evaluating is *eval*, and the data I'm operating on is code. "Brainfreeze" is what happens if you think about this too hard for too long. ## What it does Brainfreeze is a fully homomorphic runtime for the language [Brainfuck](https://en.wikipedia.org/wiki/Brainfuck). ## How I built it I wrote Python bindings for the TFHE C library for fast FHE. TFHE only exposes boolean operations on single bits at a time, so I wrote a framework for assembling and evaluating virtual homomorphic circuits in Python. Then I wrote an ALU for simple 8-bit arithmetic, and a tiny CPU for dispatching on Brainfuck's 8 possible operations. ## Does it work? No! I didn't have time to finish the entire instruction set - only moving the data pointer (< and >) and incrementing and decrementing the data (+ and -) work right now :-/. It turns out that computers are complicated and I don't remember as much of 6.004 as I thought I did. ## Could it work? Definitely at small scales! But there are some severe limiting factors. FHE guarantees - mathematically - to leak **absolutely no** information about the data it's operating on, and that results in a catastrophic branching blowup, because the computer has to execute *every possible instruction on every possible memory address **during every single clock cycle***, since it's never sure which is the "real" data or the "real" instruction and which is just noise.
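To give a flavor of the "virtual homomorphic circuit" idea, here is a plain-Python sketch of an 8-bit incrementer built only from single-bit gate operations; in Brainfreeze each of these gate calls would instead be a TFHE homomorphic gate applied to encrypted bits. The gate names and least-significant-bit-first ordering are illustrative, not the project's actual API.

```python
def XOR(a: int, b: int) -> int:
    # In the homomorphic version this would be a TFHE bootstrapped XOR on ciphertexts.
    return a ^ b

def AND(a: int, b: int) -> int:
    return a & b

def increment(bits: list[int]) -> list[int]:
    """Add 1 to an 8-bit value given as a list of bits, least significant first,
    using only single-bit gates (the shape of operation TFHE exposes)."""
    out, carry = [], 1  # adding one is just an initial carry
    for b in bits:
        out.append(XOR(b, carry))
        carry = AND(b, carry)
    return out  # overflow carry is dropped, wrapping mod 256 like a Brainfuck cell

# 0b00000111 (7) -> 8
print(increment([1, 1, 1, 0, 0, 0, 0, 0]))
```

The decrement for `-` is the same ripple structure with borrow logic, and `<`/`>` amount to the same kind of adder applied to an encrypted data-pointer register.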
## Inspiration The whiteboard or chalkboard is an essential tool in instructional settings - to learn better, students need a way to directly transport code from a non-text medium to a more workable environment. ## What it does Enables someone to take a picture of handwritten or printed code and convert it directly to code or text in their favorite text editor on their computer. ## How we built it On the front end, we built an app using Ionic/Cordova so the user could take a picture of their code. Behind the scenes, using JavaScript, our software harnesses the power of the Google Cloud Vision API to perform intelligent character recognition (ICR) of handwritten words. Following that, we applied our own formatting algorithms to prettify the code. Finally, our server sends the formatted code to the desired computer, which opens it with the appropriate file extension in your favorite IDE. In addition, the client handles all of the scripting for minimization and file operations. ## Challenges we ran into The Vision API is trained on text with correct grammar and punctuation. This makes recognition of code quite difficult, especially indentation and camel case. We were able to overcome this issue with some clever algorithms. Also, despite a general lack of JavaScript knowledge, we were able to make good use of documentation to solve our issues. ## Accomplishments that we're proud of A beautiful spacing algorithm that recursively categorizes lines into indentation levels. Getting the app to talk to the main server, which talks to the target computer. Scripting the client to display the final result in a matter of seconds. ## What we learned How to integrate and use the Google Cloud Vision API. How to build and communicate across servers in JavaScript. How to interact with native functions of a phone. ## What's next for Codify It's feasible to increase accuracy by using the Levenshtein distance between words. In addition, we can improve our algorithms to work better with code. Finally, we can add image preprocessing (heightening image contrast, rotating accordingly) to make photos more readable to the Vision API.
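The "spacing algorithm" that categorizes lines into indentation levels might look something like the simplified sketch below, which clusters each OCR'd line's noisy leading-whitespace count into discrete levels; the snapping tolerance is an assumption, and the project's actual (JavaScript) implementation may differ.

```python
def assign_indent_levels(lines: list[str], tolerance: int = 2) -> list[int]:
    """Map each line's (noisy) leading-space count to a discrete indentation level.
    OCR rarely returns exact multiples of the indent width, so counts within
    `tolerance` spaces of each other are clustered into one level."""
    counts = [len(line) - len(line.lstrip(" ")) for line in lines]

    # Pass 1: cluster the observed space counts into representative levels.
    reps: list[int] = []
    for c in sorted(set(counts)):
        if not reps or c - reps[-1] > tolerance:
            reps.append(c)

    # Pass 2: each line gets the index of the closest representative level.
    return [min(range(len(reps)), key=lambda i: abs(c - reps[i])) for c in counts]

code = ["def f(x):", "   if x > 0:", "        return x", "   return -x"]
print(assign_indent_levels(code))  # -> [0, 1, 2, 1]
```

Once every line has a level, re-emitting the code with a uniform indent width per level is what "prettifies" the OCR output.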
partial
## Inspiration The inspiration for TranslatAR comes from the desire to understand different languages in order to better connect with people around the world, and from wanting to implement AR in our application for a fully interactive experience. Learning a new language can be extremely difficult, and nowadays people have a hard time finding excitement in it. With TranslatAR, we can reinvigorate people's interest and passion for learning new languages, and hence connect people around the world. ## What it does TranslatAR uses the iPhone's camera to detect objects with our own custom-trained model from Microsoft's Cognitive Services Custom Vision API. Once objects are detected, their names can be translated into the user's selected language in real time using augmented reality. What's unique about our AR app is that the words are anchored to the object in space, creating a "label" in the user's own environment to learn from. ## How we built it We built our application primarily using Microsoft services. The Microsoft Cognitive Services we used include the Translator API and the Custom Vision API. Other technologies included Microsoft Azure, ARKit, and Swift. ## Challenges we ran into We had no prior experience building a mobile app in Swift. We wanted to use Swift because of the new ARKit. As you might predict, we ran into many challenges understanding and programming in an unfamiliar language and communicating between new partners. Challenges in Swift included embedding the Microsoft APIs in the code because of the lack of documentation for Swift 4. Other challenges included training our model to make precise and accurate predictions over 95% of the time; we had to train on more than 50 instances of each object. ## Accomplishments that we're proud of We are proud of the progress we made while using a completely unfamiliar language. Implementing AR in our application was very fun to do and something we feel many users will enjoy. We are also very proud of being able to learn, have fun, and meet new people at this hackathon! ## What we learned We learned a completely new language and were able to overcome the obstacles that come with learning an unfamiliar topic in a short period of time. This helped us develop our skills in picking up languages we are unfamiliar with. ## What's next for TranslatAR We want to continue adding more languages to the application so there are fewer barriers to connecting with and understanding different parts of the world. Future endeavors could include expanding recognition to more objects and adding extra features such as practice phrases and speech pronunciation. With response capabilities and better UI/UX functionality, we believe that TranslatAR can truly change the way we learn.
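The app itself is written in Swift, but for illustration, here is roughly how a detected object label can be translated over REST with the Microsoft Translator service (shown in Python). The key, region, and exact request shape are assumptions and should be checked against the current Translator documentation rather than treated as TranslatAR's actual code.

```python
import os
import requests

def translate_label(text: str, target_lang: str = "es") -> str:
    """Translate a detected object label using the Microsoft Translator REST API."""
    response = requests.post(
        "https://api.cognitive.microsofttranslator.com/translate",
        params={"api-version": "3.0", "to": target_lang},
        headers={
            "Ocp-Apim-Subscription-Key": os.environ["TRANSLATOR_KEY"],
            "Ocp-Apim-Subscription-Region": os.environ["TRANSLATOR_REGION"],
            "Content-Type": "application/json",
        },
        json=[{"Text": text}],
        timeout=10,
    )
    response.raise_for_status()
    return response.json()[0]["translations"][0]["text"]

print(translate_label("chair", "fr"))  # expected: "chaise"
```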
## Inspiration Languages are hard to learn and pronunciation is often especially difficult, which all of us had experienced first-hand. We decided to create a real-time augmented reality language learning game called Ceci ("this" in French, pronounced as say-see). ## What it does Ceci quizzes the user on the vocabulary of the language they are studying based on what they see in the world. It highlights the word they are being quizzed on with a box around the corresponding object and recognizes it with machine learning. The user says the word, and Ceci uses voice recognition to detect whether or not they are correct. To incentivize the user, there is also a point system. ## How we built it Using CoreML for machine learning, Ceci is able to detect and label possible objects to quiz the user on. Then, we used the built-in Xcode speech recognition tool to check the user's answers. In general, everything was written in Swift, including the point system that rewards correct answers. ## Challenges we ran into We initially planned to use many ARKit features, but quickly discovered that the quality of the classification in its object detection is lacking. Object detection is central to Ceci, so we were forced to find something else. Instead, we used another machine learning library, and it was a bit of a challenge to go through the non-documented issues and limitations, due to the relative novelty of this technology. ## Accomplishments that we're proud of We are proud that we were able to combine various exciting technologies into Ceci. For example, we used a scalable, mobile machine learning library that none of us have ever used before, and incorporated it along an Apple-developed speech-to-text transcription. ## What we learned Most of the team wasn't familiar with Swift specifically and iOS development in general, and learned them to develop features like the points system. None of us had done iOS augmented reality before so we had to experiment with a lot of platforms and ideas to decide what was feasible. Also, most of the team didn't know most of the others when we started, so we learned how to work together most efficiently and to leverage our strengths. ## What's next for Ceci We intend to and can pretty easily add more languages to Ceci such as German, Spanish, Russian, and Chinese (including Mandarin and Cantonese). We also want to make Ceci more social, adding support for sharing words learned and a leader board. In addition, building on the point system to make points redeemable for custom themes and improving the choice of quiz objects based on spaced repetition learning are major features we hope to implement.
## AI, AI, AI... The number of projects using LLMs has skyrocketed with the wave of artificial intelligence. But what if you *were* the AI, tasked with fulfilling countless orders and managing requests in real time? Welcome to chatgpME, a fast-paced, chaotic game where you step into the role of an AI who has to juggle multiple requests, analyze input, and deliver perfect responses under pressure! ## Inspired by games like Overcooked... chatgpME challenges you to process human queries as quickly and accurately as possible. Each round brings a flood of requests, ranging from simple math questions to complex emotional support queries, and it's your job to fulfill them quickly with high-quality responses! ## How to Play * Take orders: Players receive a constant stream of requests, represented by different "orders" from human users. The orders vary in complexity, from basic facts and math solutions to creative writing and emotional advice. * Process responses: Quickly scan each order, analyze the request, and deliver a response before the timer runs out. * Get analyzed: our built-in AI checks how similar your answer is to what a real AI would say :) ## Key Features * Fast-paced gameplay: Just like Overcooked, players need to juggle multiple tasks at once. Keep those responses flowing and maintain accuracy, or you'll quickly find yourself overwhelmed. * Orders with a twist: The more aware the AI becomes, the more unpredictable it gets. Some responses might start including strange, existential musings, or it might start asking you questions in the middle of a task! ## How We Built It * Concept & design: We started by imagining a game where the player experiences life as ChatGPT, but with all the real-time pressure of a time-management game like Overcooked. Designs were created in Procreate and our handy notebooks. * Tech stack: Using Unity, we integrated a system where mock requests are sent to the player, each with specific requirements and difficulty levels. A template was generated using Defang, and we also used it to sanitize user inputs. Answers are then evaluated using the fantastic Cohere API! * Playtesting: Through multiple playtests, we refined the speed and unpredictability of the game to keep players engaged and on their toes.
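The answer-grading step ("how similar your answer is to what a real AI would say") could be done with Cohere embeddings roughly as in the sketch below. The model name, scoring threshold, and SDK call details (which vary by SDK version) are assumptions, not the game's actual configuration.

```python
import os

import cohere
import numpy as np

co = cohere.Client(os.environ["COHERE_API_KEY"])

def similarity_score(player_answer: str, reference_answer: str) -> float:
    """Cosine similarity between the player's answer and a reference AI answer."""
    resp = co.embed(texts=[player_answer, reference_answer],
                    model="embed-english-v3.0", input_type="search_document")
    a, b = (np.array(v) for v in resp.embeddings)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

score = similarity_score("2 + 2 is 4", "The answer to 2 + 2 is 4.")
print("great!" if score > 0.8 else "try again")  # threshold is illustrative
```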
losing
## Inspiration No one likes waiting in line for things, especially for medically related issues. For many, walk-ins have a tremendously long wait time, and for some, calling can seem daunting. That is why we created Schedule Cloud. With Schedule Cloud, a client is able to see a doctor's available schedule live and book an appointment. The doctor is able to see all their bookings and also update the schedule if prior appointments are delayed or finish faster, thus increasing efficiency for both parties. As well as helping both parties be productive, it also allows a company to save money on administrative support (e.g. a receptionist) and thereby allocate money to products or services that matter more. Lastly, this application works for ANY relationship, not just client-doctor. It can extend to employees-manager, students-professors/TAs, learners-driving instructors, and many more. ## What it does One can sign up as the receiver (doctor) or a client. A unique code is provided to the receiver, and clients use that unique code to be linked to the receiver. Then, the receiver can easily block out the times that they are unavailable. A client instantly sees the update in real time and, if they want, can book an appointment in a vacant slot simply by tapping on it. After that, the receiver will see that the time slot is booked and who booked it. There is also a DELAY option: if an event takes longer than expected, the delay shows up on the connected schedule. ## How we built it The backend of Schedule Cloud was created using NodeJS. The front end was built in Android Studio using Java. ## Challenges we ran into We weren't able to use Firebase, because its user authentication system didn't allow for tags on authorized users. Since we needed to differentiate between patients and doctors, we were forced to give up Firebase and use a different server. ## Accomplishments that we're proud of Managing to complete a skeleton for the program on time. Just being able to deliver a usable interface was a huge accomplishment for us. ## What's next for Schedule\_Cloud A more pleasing and aesthetic user interface, an option for the client/receiver to add or delete appointment notes that are visible to both parties, and support for numerous parties.
## Inspiration It has been rough. We are not allowed to see friends and family, and we are forced to stay home. Compromises had to be made to keep others safe. We believe quality healthcare should not be one of those compromises. So we built an app that allows patients to directly contact doctors to get prescriptions. It reduces contact, and it frees up valuable time for the already overworked medical staff. ## What it does It is an Android app that allows patients to contact doctors directly, with prescription info stored in a database. ## How we built it We used Android Studio for the front end and Firebase for the backend, plus lots of YouTube videos and Google searches. ## Challenges we ran into It is the first hackathon experience for 3 of our 4 members. Naturally, we were very passionate about learning and decided to make a project about things we didn't know (Android programming and database programming). None of us had any Android or database programming skills, so it was definitely a fun challenge to learn both in just 40 hours. ## Accomplishments that we're proud of 1. Learned basic Android Studio and successfully made a simple app. 2. Made 2 user interfaces, one for the patient and another for the doctor. 3. Learned database programming in 40 hours. 4. Learned how to talk to a database inside an Android app. 5. Learned to start an Android activity (e.g. when login is successful, we start the DoctorActivity class). ## What we learned 1. Android Studio 2. Database programming/XML 3. Using Firebase 4. GitHub ## What's next for Health Android App For The New Age of Disconnection We want to further develop the back end of the app so that users can start storing data; we unfortunately ran out of time. We also want to use the Google voice-to-text API to allow users to record voice memos, enabling better communication between patients and doctors.
## Inspiration As University of Waterloo students who are constantly moving in and out of different places and constantly changing roommates, we have often run into friction or difficulty communicating with each other to get stuff done around the house. ## What it does Our platform allows roommates to quickly schedule and assign chores, and provides a message board for common items. ## How we built it Our solution is built on Ruby on Rails, meant to be a quick, simple solution. ## Challenges we ran into The time constraint made it hard to develop all the features we wanted, so we had to reduce scope in many areas and provide a limited feature set. ## Accomplishments that we're proud of We thought that we did a great job on the design, delivering a modern and clean look. ## What we learned Prioritize features beforehand, and stick to features that would be useful to as many people as possible. So, instead of piling on features that may not be that useful, we should focus on delivering the core features and making them as easy to use as possible. ## What's next for LiveTogether Finish the features we set out to accomplish, and finish theming the pages that we did not have time to concentrate on. We will be using LiveTogether with our roommates, and are hoping to get some real use out of it!
losing
## Inspiration We admired the convenience Honey provides for finding coupon codes. We wanted to apply the same concept, except toward making more sustainable purchases online. ## What it does Recommends sustainable and local business alternatives when shopping online. ## How we built it The front end was built with React.js and Bootstrap. The back end was built with Python, Flask, and CockroachDB. ## Challenges we ran into Difficulties setting up the environment across the team, especially with cross-platform development on the back end. Extracting the current URL from a webpage was also challenging. ## Accomplishments that we're proud of Creating a working product! A successful end-to-end data pipeline. ## What we learned We learned how to implement a Chrome extension. We also learned how to deploy to Heroku, and how to set up and use a database in CockroachDB. ## What's next for Conscious Consumer First, we want to make it easier to add local businesses. We want to continue improving the matching algorithm that takes an item on a website and relates it to a similar local business in the user's area. Finally, we want to replace the ESG rating scraping with a corporate account with rating agencies so we can query ESG data more easily.
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
## Inspiration The fashion industry is responsible for 10% of all annual global carbon emissions, roughly 2.7 billion tons a year. This is more than all international flights and maritime shipping combined! Nowadays, companies push people to purchase products, but this extension helps users take a step back from consumerism to rethink the sustainability of their choices. ## What it does Our Chrome extension incentivizes consumers to make eco-friendly purchases by rewarding their eco-friendly purchasing habits. The extension offers an eco-friendly rating for each clothing product online along with eco-news about companies, and gives users points, based on the sustainability of their purchases, that can be converted to gift cards at sustainable companies like Patagonia and Seventh Generation. ## How we built it We built a Chrome extension and a web app and linked the two. On the backend, we used Python, and for the frontend, we used React and Bootstrap; Flask allows us to switch between pages on the web app. To create our feature that aggregates recent eco-news about companies into a paragraph summary, we used OpenAI and Metaphor. ## Challenges we ran into Our biggest challenge was web scraping to read product descriptions from eBay to assist in our product ratings. We attempted to use OCR for image recognition but found an alternative metric to use. We also had difficulty deciding on the weighting of metrics for our eco-friendly rating. ## Accomplishments that we're proud of We are so proud of our team for building this! This is a useful, impactful, and realistic Chrome extension that can help both people and the planet. ## What we learned We learned about the impact of the fashion industry, the factors that go into sustainability, and, technologically, how to create both a Chrome extension and a web app and connect the two. We also learned that every member has a different strength, so we leveraged those while learning from each other throughout the hackathon. ## What's next for Sustain We want to expand Sustain onto sites aside from C2C sites (like big clothing brand sites!). We also want to include additional metrics in the product ratings to ensure that we are rating products and rewarding consumers fairly for their efforts. Ideally, we would also include an educational component so that consumers can continue to stay informed.
winning
## Inspiration No one likes waiting around, especially when we feel we need immediate attention. 95% of people in hospital waiting rooms tend to get frustrated over waiting times and uncertainty. And this problem affects around 60 million people every year, just in the US. We would like to alleviate this problem and offer alternative services to relieve the stress and frustration that people experience. ## What it does We let people upload their medical history and list of symptoms before they reach the waiting rooms of hospitals. They can do this through the voice assistant feature, where, in a conversational style, they describe their symptoms and the related details and circumstances. They also have the option of just writing these in a standard form, if that's easier for them. Based on the symptoms and circumstances, the patient receives a category label of 'mild', 'moderate' or 'critical' and is added to the virtual queue. This way hospitals can take care of their patients more efficiently by having a fair ranking system (which also accounts for time of arrival) that determines the queue, and patients have a higher satisfaction level as well, because they see a transparent process without the usual uncertainty and they feel attended to. They can be told an estimated range of waiting time, which frees them from stress, and they are also shown a progress bar to see whether a doctor has reviewed their case already, whether insurance was contacted, or whether any status changed. Patients are also provided with tips and educational content regarding their symptoms and pains, countering the abundant stream of misinformation and incorrectness that comes from the media and unreliable sources. Hospital experiences shouldn't be all negative; let's try to change that! ## How we built it We are running a Microsoft Azure server and developed the interface in React. We used the Houndify API for the voice assistance and the Azure Text Analytics API for processing. The designs were built in Figma. ## Challenges we ran into Brainstorming took longer than we anticipated and we had to keep our cool and not stress, but in the end we agreed on an idea that has enormous potential, and it was worth chewing on it longer. We have had a little experience with voice assistance in the past but had never used Houndify, so we spent a bit of time figuring out how to piece everything together. We were also thinking of implementing multiple user input languages so that less fluent English speakers could use the app as well. ## Accomplishments that we're proud of TreeHacks had many interesting side events, so we're happy that we were able to piece everything together by the end. We believe that the project tackles a real and large-scale societal problem, and we enjoyed creating something in this domain. ## What we learned We learned a lot during the weekend about text and voice analytics and about the US healthcare system in general. Some of us flew in all the way from Sweden, and for some of us this was the first hackathon we attended, so working together with new people with different experiences definitely proved to be exciting and valuable.
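One way the symptom text could be turned into a 'mild'/'moderate'/'critical' label with the Azure Text Analytics API is sketched below; the keyword lists and the simple set-intersection rule are illustrative assumptions (real triage rules would come from clinicians), not the team's actual processing code.

```python
import os

from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint=os.environ["TEXT_ANALYTICS_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["TEXT_ANALYTICS_KEY"]),
)

CRITICAL = {"chest pain", "shortness of breath", "severe bleeding"}  # illustrative only
MODERATE = {"fever", "vomiting", "fracture"}

def triage(symptom_text: str) -> str:
    """Extract key phrases from the patient's description and map them to a category."""
    phrases = {p.lower() for p in client.extract_key_phrases([symptom_text])[0].key_phrases}
    if phrases & CRITICAL:
        return "critical"
    if phrases & MODERATE:
        return "moderate"
    return "mild"

print(triage("I have had a mild fever and a sore throat since yesterday."))
```

The same label, combined with arrival time, can then be used as the sort key for the virtual queue described above.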
## Inspiration Our inspiration for this project was the technological and communication gap between healthcare professionals and patients, restricted access to both one's own health data and physicians, misdiagnosis due to a lack of historical information, as well as the rising demand for distance healthcare due to the lack of physicians in rural areas and the increase in patient medical home practices. Time is of the essence in the field of medicine, and we hope to save time, energy, and money, and to empower self-care for both healthcare professionals and patients, by automating standard vitals measurement and providing simple data visualization and a communication channel. ## What it does eVital gets up-to-date daily data about our vitals from wearable technology and mobile health sources and sends that data to our family doctors, practitioners, or caregivers so that they can monitor our health. eVital also allows for seamless communication and monitoring by letting doctors assign tasks and prescriptions and monitor these through the app. ## How we built it We built the app on iOS using data from the HealthKit API, which leverages data from the Apple Watch and the Health app. The languages and technologies that we used to create this are MongoDB Atlas, React Native, Node.js, Azure, TensorFlow, and Python (for a bit of machine learning). ## Challenges we ran into The challenges we ran into are the following: 1) We had difficulty narrowing down the scope of our idea due to constraints like data-privacy laws and the vast possibilities of the healthcare field. 2) Deploying using Azure. 3) Having to use a vanilla React Native installation. ## Accomplishments that we're proud of We are very proud of the fact that we were able to bring our vision to life, even though in hindsight the scope of our project is very large. We are really happy with how much work we were able to complete given the scope and the time that we had. We are also proud that our idea is not only cool but actually solves a real-life problem that we can work on in the long term. ## What we learned We learned how to manage time (or how to do it better next time). We learned a lot about the healthcare industry and what the missing gaps are in terms of pain points and possible technological intervention. We learned how to improve our cross-functional teamwork, since we are a team of 1 designer, 1 product manager, 1 back-end developer, 1 front-end developer, and 1 machine learning specialist. ## What's next for eVital Our next steps are the following: 1) We want to be able to implement real-time updates for both doctors and patients. 2) We want to be able to integrate machine learning into the app for automated medical alerts. 3) Add more data visualization and data analytics. 4) Add a functional log-in. 5) Add functionality for different user types aside from doctors and patients (caregivers, parents, etc.). 6) We want to add push notifications for patients' tasks for better monitoring.
## Inspiration
Around 1 in 5 British Columbians do not have a family doctor. That is about 1 million people. Not only is it difficult to find a family doctor, but it can also be difficult to find a doctor who speaks your native language if English isn’t your first language. Especially when medical jargon is introduced, it can be overwhelming trying to understand all the medical terms, and for older immigrant parents this language disconnect can serve as a deterrent to seeking medical aid. We want to create a solution that allows users to have a better relationship with their healthcare professionals and a better understanding of their own health.

## What it does
Our solution, “MediScribe”, is a web application that transcribes and stores what the doctor is saying while highlighting medical terms, providing simpler definitions, and offering translations into different languages. The raw transcriptions can be accessed at a later time and can be sent to others via email.

## How we built it
We used MongoDB, ExpressJS, ReactJS, and NodeJS to handle the frontend and backend. We also incorporated multiple APIs, such as the Microsoft Speech-to-Text API to transcribe the audio, the Merriam-Webster Medical Dictionary API to define medical terms, the Microsoft Translator API to translate medical terms into different languages, and the Twilio SendGrid Email API to send the transcript to the person the user chooses.

## Challenges we ran into
We ran into difficulty configuring the speech-to-text API and having it listen non-stop. The tech stack was relatively new to us all, so we spent a lot of time reading documentation to figure out the syntax.

## Accomplishments that we're proud of
We were able to create all the functionality we planned for, and our web application looks very similar to the Figma draft. We were able to learn on the fly and navigate a tech stack that we were not very familiar with. Furthermore, we are proud of the end product we were able to produce and the experience we gained from it.

## What we learned
We learned the divide-and-conquer method of working on a project, where we each work on our individual components and then figure out how to integrate all the parts. We also learned how to use various APIs and the different ways to configure them.

## What's next for MediScribe
We are planning to extend the translation capabilities to more languages and offer a setting so that all the text on the website can be translated into a language of choice.
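The app itself is built in JavaScript; as a rough, language-agnostic sketch of the term-highlighting step described above, here is the same idea in Python. The glossary entries are made up for illustration — the real app pulls definitions from the Merriam-Webster Medical Dictionary API.

```python
import re

# Hypothetical glossary -- a stand-in for definitions fetched from the dictionary API.
GLOSSARY = {
    "hypertension": "high blood pressure",
    "myocardial infarction": "heart attack",
    "analgesic": "pain reliever",
}

def annotate_transcript(transcript: str):
    """Return the transcript with medical terms highlighted, plus the matched definitions."""
    found = {}
    annotated = transcript
    for term, plain in GLOSSARY.items():
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        if pattern.search(annotated):
            found[term] = plain
            annotated = pattern.sub(lambda m: f"**{m.group(0)}**", annotated)
    return annotated, found

text = "The patient has hypertension and was given an analgesic."
highlighted, definitions = annotate_transcript(text)
print(highlighted)   # The patient has **hypertension** and was given an **analgesic**.
print(definitions)   # {'hypertension': 'high blood pressure', 'analgesic': 'pain reliever'}
```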
partial
## Inspiration
We as a team shared the same interest in learning more about machine learning and its applications. Upon looking at the challenges available, we were immediately drawn to the Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming and went through over a dozen design ideas for how to implement a solution related to smart cities. By looking at the different information received from the camera data, we landed on the idea of using the raw footage itself to look for what we would call a distress signal, in case anyone felt unsafe in their current area.

## What it does
We have set up a signal that, if performed in front of the camera, a machine learning algorithm is able to detect, notifying authorities that they should check out that location — whether to catch a potentially dangerous suspect or simply to be present to keep civilians safe.

## How we built it
First, we collected data from the Innovation Factory API and inspected the code carefully to understand what each part does. After putting the pieces together, we were able to extract video footage from the camera nearest to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning module. Eventually, due to compiling issues, we had to scrap the training algorithm we made and went with a similar pre-trained algorithm to accomplish the basics of our project.

## Challenges we ran into
Using the Innovation Factory API, the fact that the cameras are located very far away, the machine learning algorithms unfortunately being an older version that would not compile with our code, and finally the frame rate on the playback of the footage when running the algorithm through it.

## Accomplishments that we are proud of
Ari: Being able to go above and beyond what I learned in school to create a cool project
Donya: Getting to know the basics of how machine learning works
Alok: Learning how to deal with unexpected challenges and look at them as a positive change
Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away.

## What we learned
Machine learning basics, Postman, working on different ways to maximize playback time on the footage, and many more major and/or minor things we were able to accomplish this hackathon, all with either no information or incomplete information.

## What's next for Smart City SOS
Hopefully working with the Innovation Factory to grow our project, as well as inspiring individuals with similar passion or desire to create a change.
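As an illustration of the pose-based detection described above, here is a minimal Python sketch that uses MediaPipe as a stand-in for the pre-trained model the team ended up using; the "both wrists above the nose" rule is an assumed example of a distress signal, not the team's actual gesture.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def hands_raised(frame_bgr) -> bool:
    """Return True if both wrists are detected above the nose (assumed distress pose)."""
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if not results.pose_landmarks:
            return False
        lm = results.pose_landmarks.landmark
        nose_y = lm[mp_pose.PoseLandmark.NOSE].y
        # Image y grows downward, so "above" means a smaller y value.
        return (lm[mp_pose.PoseLandmark.LEFT_WRIST].y < nose_y and
                lm[mp_pose.PoseLandmark.RIGHT_WRIST].y < nose_y)

cap = cv2.VideoCapture("camera_feed.mp4")  # hypothetical footage pulled from the city camera API
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if hands_raised(frame):
        print("Possible distress signal detected -- flag this location for review")
cap.release()
```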
## Inspiration
We were heavily focused on the machine learning aspect and realized that we lacked any datasets which could be used to train a model. So we tried to figure out what kind of activity might impact insurance rates that we could also collect data for right from the equipment we had.

## What it does
Insurity takes a video feed of a person driving and evaluates it for risky behavior.

## How we built it
We used Node.js, Express, and Amazon's Rekognition API to evaluate facial expressions and personal behaviors.

## Challenges we ran into
This was our third idea. We had to abandon two other major ideas because the data did not seem to exist for the purposes of machine learning.
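The team called Rekognition from Node.js; as a rough sketch of the same idea in Python with boto3, here is how a single video frame could be checked for signs of drowsiness. The "eyes closed with high confidence" rule and the 90% threshold are illustrative assumptions, not the project's actual scoring.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def frame_looks_risky(frame_jpeg_bytes: bytes) -> bool:
    """Flag a frame if the driver's eyes appear closed with high confidence."""
    response = rekognition.detect_faces(
        Image={"Bytes": frame_jpeg_bytes},
        Attributes=["ALL"],  # include eyes-open/closed, emotions, head pose, etc.
    )
    for face in response["FaceDetails"]:
        eyes = face["EyesOpen"]
        if not eyes["Value"] and eyes["Confidence"] > 90:
            return True
    return False

with open("frame.jpg", "rb") as f:  # a frame grabbed from the driving video feed
    if frame_looks_risky(f.read()):
        print("Risky behavior detected in this frame")
```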
## Inspiration
Raymond was a business student and saw a need for small businesses to adapt to the pandemic. With the help of Kelly and Kyle, we sought to fill the online needs of struggling businesses by creating a web app to streamline online sales.

## What it does
We created a web app that demonstrates the potential of a popover that allows a business to showcase a selected bundle of products to sell. This helps their customers make better purchase decisions, as the bundle is handpicked by staff.

## How we built it
To build this project, we used Python’s Flask framework, HTML, CSS, and Bootstrap.

## Challenges we ran into
As beginner programmers in BCIT's CST diploma program, we had to ramp up our skills to get our idea off the ground. Initially, we were not familiar with certain frameworks, so we took this as a learning experience.

## Accomplishments that we're proud of
We were able to merge different languages and found the opportunity to combine the different skills our team has to come up with the final project. We were very nervous about completing this project within the timeframe, but we overcame this with planning and good communication.

## What we learned
1. How to set expectations and create a feasible project scope
2. How to scale the project and find alternative solutions for each challenge
3. Collaborating while programming: Google Drive, Discord, and GitHub

## What's next for Grocery Store Webapp
While we were able to accomplish a prototype of our web app during the timeline of the hackathon, these are things we would implement if we were to grow this project.
* We would connect the web app directly to a payment portal
* We would allow a transaction to ping the business owner on a sale
* We would create a control panel/settings page so the business could easily change product descriptions, images, and prices
* We would increase usability across different web platforms. For example, businesses would be able to use the app on WordPress, JavaScript websites, and so on. We would aim to find a method to add this web app as a plugin that could be used by various platforms.
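A minimal sketch of what the Flask side of such a bundle showcase could look like; the route, template name, and bundle data are assumptions for illustration rather than the team's actual code.

```python
from flask import Flask, render_template

app = Flask(__name__)

# Hypothetical staff-picked bundle; a real deployment would load this from a database.
FEATURED_BUNDLE = {
    "name": "Weeknight Dinner Kit",
    "items": [
        {"name": "Pasta", "price": 2.99},
        {"name": "Tomato sauce", "price": 3.49},
        {"name": "Parmesan", "price": 5.99},
    ],
}

@app.route("/")
def storefront():
    total = sum(item["price"] for item in FEATURED_BUNDLE["items"])
    # bundle.html would render the popover markup with Bootstrap classes.
    return render_template("bundle.html", bundle=FEATURED_BUNDLE, total=round(total, 2))

if __name__ == "__main__":
    app.run(debug=True)
```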
winning
## Inspiration
Inspired by the MIT Reuse mailing list, which is rather chaotic and unorganized.

## What it does
* Allows users to create public listings of items they are giving away
* Allows users to mark items as taken
* Displays all available listings on a map
* Auto-archives old listings
* Looks good on all screens

## How we built it
* Django for the backend
* Bulma CSS framework for the frontend
* Google Maps API
* Added custom JavaScript to allow users to pick a location using Google Maps

## Challenges we ran into
* Limiting the scope of the project
* Learning how to use the Google Maps API
* Falling asleep after drinking too much coffee

## Accomplishments that we're proud of
* Building a working project
* Working as a team
* Sleeping

## What we learned
* Hacking under time pressure

## What's next for MIT Reuse
* Adding pictures to listings
* Logging in using Athena Kerberos
* Integration with the MIT Reuse mailing list
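A small sketch of how the auto-archiving of old listings could look in Django; the model fields and the 30-day cutoff are assumptions, not the project's actual schema.

```python
from datetime import timedelta

from django.db import models
from django.utils import timezone

class Listing(models.Model):
    title = models.CharField(max_length=200)
    latitude = models.FloatField()
    longitude = models.FloatField()
    taken = models.BooleanField(default=False)
    archived = models.BooleanField(default=False)
    created_at = models.DateTimeField(auto_now_add=True)

def archive_stale_listings(max_age_days: int = 30) -> int:
    """Mark listings older than max_age_days as archived; returns how many were updated.

    Could be run periodically, e.g. from a cron job or a management command.
    """
    cutoff = timezone.now() - timedelta(days=max_age_days)
    return Listing.objects.filter(created_at__lt=cutoff, archived=False).update(archived=True)
```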
## Inspiration
As university students, we know that after first year it is very difficult to find all the information you need about off-campus living, and we wanted to solve this problem by helping students find that information. There is no central website or collection of information that can aid students with this problem, and that's where we come in!

## What it does
Using a constantly updated database, we track what listings are available near the university for prospective housing in a given area. The results are then displayed in a meaningful and simple fashion, providing the user with all the information required to make an informed decision, such as: the relative location of housing to points of interest (i.e. the university, restaurants, gyms, etc.), comparisons of houses by price and size, and a price average to give the user a point of reference when looking at house prices.

## How we built it
Using various platforms and different languages, we built our website with many different moving parts. One part collects data about available housing from the major landlords of the area and stores it in the database. The second part takes the data from the database and interprets it in a meaningful manner. That information is then displayed in a sleek and elegant website which is accessible to the end user.

## Challenges we ran into
Collecting data from websites despite varying HTML source code and CSS, using our data and applying it best for the consumer, interacting with various APIs, and gluing it all together.

## Accomplishments that we're proud of
We are proud that we managed to collect 50 property listings, which would effectively provide many students with ample choice in where to live, easing the process for them. We are also proud of how well we worked together, especially since most of our team members come from different universities and had only met at the hackathon.

## What we learned
We learned integral software design skills which we incorporated in our project's design. We also learned about different types of APIs, specifically the Google APIs, and how to interact with them.

## What's next for FindLiving.Space
We are going to scale it to incorporate data about listings in more cities to assist students from other universities facing similar difficulties finding affordable housing. We would also like to offer features which would benefit landlords as well, giving them an estimate of their property's value based on the, hopefully, thousands of property listings on the site. We want to create the ideal solution to the problem we are trying to solve.
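A rough sketch of the listing-collection step, assuming requests and BeautifulSoup; the URL and CSS selectors here are hypothetical placeholders, since every landlord's site uses different markup.

```python
import requests
from bs4 import BeautifulSoup

LISTINGS_URL = "https://example-landlord.example/listings"  # hypothetical source page

def scrape_listings(url: str):
    """Fetch a landlord's listings page and pull out title, price, and address."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    listings = []
    for card in soup.select(".listing-card"):  # selector depends on the site's markup
        listings.append({
            "title": card.select_one(".title").get_text(strip=True),
            "price": card.select_one(".price").get_text(strip=True),
            "address": card.select_one(".address").get_text(strip=True),
        })
    return listings

for listing in scrape_listings(LISTINGS_URL):
    print(listing)  # in the real pipeline these rows would be written to the database
```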
## Inspiration:
The single biggest problem and bottleneck in training large AI models today is compute. Just within the past week, Sam Altman tried to raise an $8 trillion fund to build a large network of computers for training larger multimodal models.
losing
## Overview
AOFS is an automatic sanitization robot that navigates around spaces, detecting doorknobs using a custom-trained machine-learning algorithm and sanitizing them with antibacterial agent.

## Inspiration
It is known that in hospitals and other public areas, infections spread via our hands. Door handles, in particular, are one place where germs accumulate. Cleaning such areas is extremely important, but hospitals are often short-staffed and the sanitization may not be done as often as it should be. We therefore wanted to create a robot that would automate this, which both frees up healthcare staff to do more important tasks and ensures that public spaces remain clean.

## What it does
AOFS travels along walls in public spaces, monitoring them. When a door handle is detected, the robot stops and automatically sprays it with antibacterial agent to sanitize it.

## How we built it
The body of the robot came from a broken Roomba. Using two ultrasonic sensors for movement and a mounted webcam for detection, it navigates along walls and scans for doors. Our doorknob-detecting computer vision algorithm is trained via transfer learning on the [YOLO network](https://pjreddie.com/darknet/yolo/) (one of the state-of-the-art real-time object detection algorithms) using custom collected and labelled data: starting from the pre-trained weights for the network, we froze all 256 layers except the last three, which we re-trained on our data using a Google Cloud server. The trained algorithm runs on a Qualcomm DragonBoard 410c, which then relays information to the Arduino.

## Challenges we ran into
Gathering and especially labelling our data was definitely the most painstaking part of the project, as all doorknobs in our dataset of over 3000 pictures had to be boxed by hand. Training the network then also took a significant amount of time. Some issues also occurred because the serial interface is not native to the Qualcomm DragonBoard.

## Accomplishments that we're proud of
We managed to implement all hardware elements such as the pump, nozzle, and electrical components, as well as an algorithm that navigates using wall-following. Also, we managed to train an artificial neural network with our own custom-made dataset, in less than 24 hours!

## What we learned
Hacking existing hardware for a new purpose, creating a custom dataset, and training a machine learning algorithm.

## What's next for AOFS
Increasing our training dataset to incorporate more varied images of doorknobs and training the network on more data for a longer period of time. Using computer vision to incorporate mapping of spaces as well as simple detection, in order to navigate more intelligently.
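The team did the freezing with the Darknet YOLO tooling; as a generic illustration of the same "freeze everything except the last few layers" pattern, here is what it looks like in PyTorch with a stand-in torchvision backbone (the model and layer names are placeholders, not the actual YOLO configuration).

```python
import torch
from torch import nn, optim
from torchvision import models

# Stand-in pretrained backbone (the project used Darknet YOLO weights instead).
model = models.resnet18(pretrained=True)

# Freeze every layer...
for param in model.parameters():
    param.requires_grad = False

# ...then replace and train only the final layer on the custom doorknob data.
model.fc = nn.Linear(model.fc.in_features, 2)  # doorknob vs. background

optimizer = optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# A standard training loop over the labelled images then updates only model.fc,
# which is what makes transfer learning feasible within a hackathon timeframe.
```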
## Inspiration
Given the current state of events, we were inspired to create a device that could mitigate the transfer of bacteria through commonly touched surfaces, as well as provide a more efficient method of keeping track of the number of people entering or exiting a room for health and safety purposes. Because this process is normally carried out manually, it is not possible to ensure that door handles are sterilized as often as they should be, and it is also very difficult to keep track of the total occupancy of a room, especially if there are multiple entrances. By creating this device we ensure that each and every doorknob is sanitized as needed. It is our hope that the implementation of DHD will drastically reduce the transmission of the virus and other diseases.

## What it does
Our product, DHD, functions by detecting whether a person is approaching a door through the use of an ultrasonic sensor. Once the person opens the door, the Hall effect sensor detects a change in the magnetic field, as opening the door separates the magnet from the sensor, which triggers the disinfection process. This is indicated by a blue LED turning on. The spraying process then begins as a DC motor rotates and pulls the trigger on the spray bottle for a duration of five seconds. Next, a fan turns on in order to dry the doorknob, so that the door handle is prepared for the next user. This concludes the disinfecting process. The LCD display now registers that one person has entered the room and the "People Counter" increases by 1. After spraying about 500 times, a red LED turns on to indicate that the spray bottle needs to be refilled. Once the device has been supplied with a new sanitization spray bottle, the user must press a push-button to reactivate the device. To account for an individual leaving a room, after detecting a door closing, an ultrasonic sensor will check for an increase in distance as the person walks away. This is registered as an exit and the count of people in the room will decrease by 1. Essentially, the prototype is responsible for disinfecting a door handle after each use. The desktop application communicates via a Bluetooth module on the Arduino, which transmits the data the device collects. Each disinfecting device installed transmits data about its current status: whether the device is on and ready, disinfecting, or off. It also transmits the current number of people within a room, in addition to the maximum allowable occupancy (based on the current health restrictions), and detects cases where the room or facility is overcrowded. Finally, the application keeps track of the status of the devices installed in all rooms and tabulates them.

## Who is it intended for?
This device was mainly built for installation in office buildings and for other businesses. As discussed previously, it helps create a safer work environment for employees and helps monitor the occupancy of the entire building autonomously. The enterprise would give an employee or security personnel the job of monitoring the occupancy limit in the various rooms every hour, and that of refilling the spray bottles once a day, or as needed.
## How we built it
DHD was built through a collaborative process in which one member constructed the entire prototype while the other two members worked simultaneously to develop code for the device, as well as to create a desktop application that collects the data emitted by the Bluetooth module and organizes it in a user-friendly manner. The prototype was constructed using an Arduino, which controls all of the electronic components.

## Challenges we ran into
Because we had limited access to resources, it was a challenge to create an entire working prototype with the hardware we had. Specifically, one significant challenge we ran into was that the member responsible for the prototype did not have access to a Bluetooth module, so testing the desktop application in conjunction with the prototype was not possible. Although both parts function as they should and carry out the tasks they were designed for (we simulated the connection between the prototype and the desktop app using a separate Arduino and Bluetooth module), we would need to meet in person to ensure that they function together.

## Accomplishments that we are proud of
Given the online nature of the hackathon, we are very proud to say that we were able to create an entire functional prototype, as well as a desktop application, within a limited time frame. With few resources and limited ability to collaborate in person, putting everything together was an accomplishment on its own.

## What we learned
With all of us coming from the discipline of mechatronics, it was a challenge to push ourselves outside of the scope of what we are taught in class. Since most of what we learn is theoretical, being able to apply our skills practically was an interesting experience. We were able to explore a variety of sensors and other electronic components, as well as work with Java remotely to create a desktop application.

## What's next for DHD Solutions
Moving forward, we hope to equip DHD with a UV light in order to ensure the effective sanitization of each door handle. In addition, we hope to organize the electronic components more efficiently so that the device can be as compact as possible, which will include our own PCB design to minimize any unnecessary extra components. Finally, we hope to develop a mechanism that will adapt the spraying nozzle to accommodate any door handle and sanitize each one thoroughly.
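The desktop application is written in Java; purely as a language-agnostic sketch of the occupancy-tracking side, here is how the Bluetooth serial messages could be parsed and tallied in Python with pyserial. The message format, port name, and occupancy limit shown are assumptions, not the actual protocol.

```python
import serial

MAX_OCCUPANCY = 10  # assumed per-room limit from current health restrictions
occupancy = {}      # room id -> current head count

# An HC-05/HC-06-style Bluetooth module typically shows up as a serial port.
port = serial.Serial("/dev/rfcomm0", 9600, timeout=1)

while True:
    line = port.readline().decode(errors="ignore").strip()
    if not line:
        continue
    # Assumed message format: "<room_id>,<ENTER|EXIT|STATUS>,<payload>"
    room, event, payload = (line.split(",") + ["", ""])[:3]
    if event == "ENTER":
        occupancy[room] = occupancy.get(room, 0) + 1
    elif event == "EXIT":
        occupancy[room] = max(0, occupancy.get(room, 0) - 1)
    elif event == "STATUS":
        print(f"Room {room}: device reports {payload}")
    if occupancy.get(room, 0) > MAX_OCCUPANCY:
        print(f"Warning: room {room} is over capacity ({occupancy[room]}/{MAX_OCCUPANCY})")
```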
## Inspiration
Every year roughly 25% of recyclable material is not able to be recycled due to contamination. We set out to reduce the amount of material that is needlessly sent to the landfill by reducing how often people put the wrong things into recycling bins (i.e. no coffee cups).

## What it does
This project is a lid for a recycling bin that uses sensors, microcontrollers, servos, and ML/AI to determine if something should be recycled or not and physically acts on that decision. To do this it follows the following process:
1. Waits for an object to be placed on the lid
2. Takes a picture of the object using a webcam
3. Does image processing to normalize the image
4. Sends the image to a TensorFlow model
5. The model predicts the material type and confidence ratings
6. If the material isn't recyclable, it sends a *YEET* signal; if it is, it sends a *drop* signal to the Arduino
7. The Arduino performs the motion sent to it (aka slaps it *Happy Gilmore* style or drops it)
8. The system resets and waits to run again

## How we built it
We used an Arduino Uno with an ultrasonic sensor to detect the proximity of an object, and once it meets the threshold, the Arduino signals the pre-trained TensorFlow ML model to detect whether the object is recyclable or not. Once the processing is complete, information is sent from the Python script to the Arduino to determine whether to yeet or drop the object into the recycling bin.

## Challenges we ran into
A main challenge we ran into was integrating the individual hardware and software components, as it was difficult to send information from the Arduino to the Python scripts we wanted to run. Additionally, we did a lot of debugging, both for the servo not working and for many issues when working with the ML model.

## Accomplishments that we're proud of
We are proud of successfully integrating the software and hardware components together to create a whole project. Additionally, it was all of our first times experimenting with new technology such as TensorFlow/machine learning and working with an Arduino.

## What we learned
* TensorFlow
* Arduino development
* Jupyter
* Debugging

## What's next for Happy RecycleMore
Currently the model tries to predict everything in the picture, which leads to inaccuracies: it detects things in the background like people's clothes, which aren't recyclable, causing it to yeet the object when it should drop it. To fix this we'd like to only use the object in the centre of the image in the prediction model, or reorient the camera so it can't see anything else.
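A minimal sketch of the Python side of steps 4–6 above — classify the snapshot and send the corresponding serial command to the Arduino. The model file, class indices, and serial port are assumptions for illustration.

```python
import numpy as np
import serial
import tensorflow as tf

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)      # port name depends on the setup
model = tf.keras.models.load_model("recycle_classifier.h5")    # hypothetical trained model file
RECYCLABLE_CLASSES = {0, 2, 3}  # assumed indices for paper, metal, plastic, etc.

def classify_and_actuate(image_path: str) -> None:
    """Classify the object on the lid and tell the Arduino to drop or yeet it."""
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    batch = np.expand_dims(tf.keras.utils.img_to_array(img) / 255.0, axis=0)
    probs = model.predict(batch)[0]
    predicted_class = int(np.argmax(probs))
    command = b"DROP\n" if predicted_class in RECYCLABLE_CLASSES else b"YEET\n"
    arduino.write(command)
    print(f"class={predicted_class} confidence={probs[predicted_class]:.2f} -> {command.decode().strip()}")

classify_and_actuate("lid_snapshot.jpg")
```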
partial
## Inspiration
The inspiration for GreenCart is to support local farmers by connecting them directly to consumers for fresh, nutritious produce. The goal is to promote community support for farmers and encourage people to eat fresh, locally sourced food.

## What it does
GreenCart is a web app that connects local farmers to consumers for fresh, nutritious produce, allowing consumers to buy directly from farmers in their community. The app provides a platform for consumers to browse and purchase produce from local farmers, and for farmers to promote and sell their products. In doing so, GreenCart aims to build community support for farmers and encourage people to eat fresh, locally sourced food.

## How we built it
The GreenCart app was built using a combination of technologies including React, TypeScript, HTML, CSS, Redux, and various APIs. React is a JavaScript library for building user interfaces, TypeScript is a typed superset of JavaScript that adds optional static types, HTML and CSS are used for creating the layout and styling of the app, Redux manages the state of the app, and the APIs allow the app to connect to different services and resources. This combination of technologies allowed the team to create a robust and efficient app that connects local farmers to consumers while supporting the community.

## Challenges we ran into
The GreenCart development team encountered a number of challenges during the design and development process. The initial setup of the project, which involved setting up the project structure with React, TypeScript, HTML, CSS, and Redux and integrating various APIs, was a challenge. Additionally, utilizing GitHub effectively as a team to ensure proper collaboration and version control was difficult. Another significant challenge was designing the UI/UX of the app to make it visually appealing and user-friendly. The team also had trouble with the search function, making sure it could effectively filter and display results, and spent time debugging and fixing issues with the checkout balance not working properly. Finally, time constraints were a challenge, as the team had to balance the development of various features while meeting deadlines.

## Accomplishments that we're proud of
As this was the first time most of the team members had used React, TypeScript, and the other technologies, the development process presented some challenges. Despite this, the team accomplished many things to be proud of, including:
Successfully setting up the initial project structure and integrating the necessary technologies.
Implementing a user-friendly and visually appealing UI/UX design for the app.
Working collaboratively as a team and utilizing GitHub for version control and collaboration.
Successfully launching the web app and getting positive feedback from users.

## What we learned
During this hackathon, the team learned a variety of things, including:
How to use React, TypeScript, HTML, CSS, and Redux to build a web application.
How to effectively collaborate as a team using GitHub for version control and issue tracking.
How to design and implement a user-friendly and visually appealing UI/UX.
How to troubleshoot and debug issues with the app, such as the blog page not working properly.
How to work under pressure and adapt to new technologies and challenges.
The team also learned how to build a web app that can connect local farmers to consumers for fresh, nutritious produce while supporting the community. Overall, the team gained valuable experience in web development, teamwork, and project management during this hackathon.

## What's next for GreenCart
Marketing and promotion: Develop a comprehensive marketing and promotion strategy to attract customers and build brand awareness. This could include social media advertising, email campaigns, and influencer partnerships.
Improve user experience: Continuously gather feedback from users and use it to improve the app's user experience. This could include adding new features, fixing bugs, and optimizing performance.
Expand the product offerings: Consider expanding the range of products offered on the app to attract a wider customer base. This could include organic and non-organic produce, meat, dairy, and more.
Partnerships with local organizations: Form partnerships with local organizations such as supermarkets, restaurants, and community groups to expand the reach of the app and increase the number of farmers and products available.

## Git Repo
<https://github.com/LaeekAhmed/Green-Cart/tree/master/Downloads/web_dev/Khana-master>
## Inspiration
Let’s face it: getting your groceries is hard. As students, we’re constantly looking for food that is healthy, convenient, and cheap. With so many choices for where to shop and what to get, it’s hard to find the best way to get your groceries. Our product makes it easy. It helps you plan your grocery trip, helping you save time and money.

## What it does
Our product takes your list of grocery items and searches an automatically generated database of deals and prices at numerous stores. We collect this data from both grocery store websites directly and couponing websites. We show you the best way to purchase items from stores near your postal code, choosing the best deals per item and algorithmically determining a fast way to make your grocery run to these stores. We help you shorten your grocery trip by allowing you to filter which stores you want to visit and suggesting ways to balance trip time with savings. This helps you reach a balance that is fast and affordable. For your convenience, we offer an alternative option where you can get your grocery orders delivered from several different stores by ordering online. Finally, as a bonus, we offer AI-generated suggestions for recipes you can cook, because you might not know exactly what you want right away. Also, as students, it is incredibly helpful to have a thorough recipe ready to go right away.

## How we built it
On the frontend, we used **JavaScript** with **React, Vite, and TailwindCSS**. On the backend, we made a server using **Python and FastAPI**. In order to collect grocery information quickly and accurately, we used **Cloudscraper** (Python) and **Puppeteer** (Node.js). We processed data using handcrafted text searching. To find the items that most closely match what the user wants, we experimented with **Cohere's semantic search**, but found that an implementation of the **Levenshtein distance string algorithm** works best for this case, largely because the user only provides one- to two-word grocery item entries. To determine the best travel paths, we combined the **Google Maps API** with our own path-finding code. We determine the path using a **greedy algorithm** (a short sketch of this idea follows below). This algorithm, though heuristic in nature, still gives us a reasonably accurate result without exhausting resources and time simulating many different possibilities. To process user payments, we used the **Paybilt API** to accept Interac e-Transfers. Sometimes it is more convenient to just have the items delivered than to go out and buy them ourselves. To provide automatically generated recipes, we used **OpenAI’s GPT API**.

## Challenges we ran into
Everything. Firstly, as Waterloo students, we are facing midterms next week. Throughout this weekend, it has been essential to balance working on our project with our mental health, rest, and last-minute studying. Collaborating in a team of four was a challenge. We had to decide on a project idea, scope, and expectations, and get working on it immediately. Maximizing our productivity was difficult when some tasks depended on others. We also faced a number of challenges merging our Git commits; we tended to overwrite one another's code, and bugs resulted. We all had to learn new technologies, techniques, and ideas to make it all happen. Of course, we also faced a fair number of technical roadblocks working with code and APIs. However, with reading documentation, speaking with sponsors/mentors, and admittedly a few workarounds, we solved them.
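Here is a small sketch of the greedy store-ordering idea mentioned in "How we built it", assuming a precomputed travel-time lookup; in the real app those times come from the Google Maps API, and the store names are placeholders.

```python
def greedy_route(start, stores, travel_minutes):
    """Visit stores by repeatedly hopping to the nearest unvisited one.

    travel_minutes is a dict keyed by (from, to) pairs, e.g. filled in from
    Google Maps Distance Matrix results.
    """
    route, current = [], start
    remaining = set(stores)
    while remaining:
        nearest = min(remaining, key=lambda s: travel_minutes[(current, s)])
        route.append(nearest)
        remaining.remove(nearest)
        current = nearest
    return route

travel_minutes = {
    ("home", "StoreA"): 7, ("home", "StoreB"): 12, ("home", "StoreC"): 9,
    ("StoreA", "StoreB"): 6, ("StoreA", "StoreC"): 10,
    ("StoreB", "StoreA"): 6, ("StoreB", "StoreC"): 4,
    ("StoreC", "StoreA"): 10, ("StoreC", "StoreB"): 4,
}
print(greedy_route("home", ["StoreA", "StoreB", "StoreC"], travel_minutes))
# ['StoreA', 'StoreB', 'StoreC'] -- not guaranteed optimal, but fast and usually close
```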
## Accomplishments that we’re proud of
We felt that we put forth a very effective prototype given the time and resource constraints. This is an app that we ourselves can start using right away for our own purposes.

## What we learned
Perhaps most of our learning came from the process of going from an idea to a fully working prototype. We learned to work efficiently even when we didn’t know what we were doing, or when we were tired at 2 am. We had to develop a team dynamic in less than two days, understanding how best to communicate and work together quickly, resolving our literal and metaphorical merge conflicts. We persisted towards our goal, and we were successful. Additionally, we were able to learn about technologies in software development. We incorporated location and map data, web scraping, payments, and large language models into our product.

## What’s next for our project
We’re very proud that, although still rough, our product is functional. We don’t have any specific plans, but we’re considering further work on it. Obviously, we will use it to save time in our own daily lives.
## Inspiration
We've all had to fill out paperwork when going to a new doctor: it's a pain, and it's information we've already written down for other doctors a million times before. Our health information ends up all over the place, not only making things difficult for us, but also making it difficult for researchers to find participants for studies.

## What it does
HealthConnect stores your medical history on your phone and enables you to send it to a doctor just by scanning a one-time-use QR code. It's completely end-to-end encrypted, and your information is encrypted when it's stored on your phone. We provide an API for researchers to request a study of people with specific medical traits, such as a family history of cancer. Researchers upload their existing data analysis code written using PyTorch, and we automatically modify it to provide *differential privacy* -- in other words, we guarantee mathematically that our users' privacy will not be violated by any research conducted. It's completely automatic, saving researchers time and money.

## How we built it
### Architecture
We used a scalable microservice architecture to build our application: small connectors interface between the mobile app and doctors and researchers, and a dedicated executor runs machine learning code.
### Doctor Connector
The Doctor Connector enables seamless end-to-end encrypted transmission of data between users and medical providers. It receives a public key from a provider, and then allows the mobile app to upload data that's been encrypted with that key. After the data's been uploaded, the doctor's software can download it, decrypt it, and save it locally.
### ML Connector
The ML Connector is the star of the show: it manages what research studies are currently running, and processes new data as people join research studies. It uses a two-step hashing algorithm to verify that users are legitimate participants in a study (i.e. they have not modified their app to try and join every study), and collects the information of participants who are eligible to participate in the study. And it does this without ever writing their data to disk, adding an extra layer of security.
### ML Executor
The ML Executor augments a researcher's Python analysis program to provide differential privacy guarantees, runs it, and returns the result to the researcher.
### Mobile App
The mobile app interfaces with both connectors to share data, and provides secure, encrypted storage of users' health information.
### Languages Used
Our backend services are written in Python, and we used React Native to build our mobile app.

## Challenges we ran into
It was difficult to get each of our services working together since we were a distributed team.

## Accomplishments that we're proud of
We're proud of getting everything to work in concert, and we're proud of the privacy and security guarantees we were able to provide in such a limited amount of time.

## What we learned
* Flask
* Python

## What's next for HealthConnect
We'd like to expand the HealthConnect platform so those beyond academic researchers, such as for-profit companies, could identify and compensate participants in medical studies.
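As a simplified illustration of the differential-privacy idea described above (not the actual PyTorch code transformation the ML Executor performs), here is the classic Laplace mechanism applied to a count query over study participants; the records and epsilon value are made up for the example.

```python
import numpy as np

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """Differentially private count.

    A counting query has sensitivity 1, so adding Laplace(1/epsilon) noise
    satisfies epsilon-differential privacy for that query.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical participant records -- the real system never persists these to disk.
participants = [
    {"family_history_cancer": True},
    {"family_history_cancer": False},
    {"family_history_cancer": True},
]
print(dp_count(participants, lambda r: r["family_history_cancer"], epsilon=0.5))
```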
partial
## Inspiration
Some of the topics that we first explored were meditation, mindfulness, and gratefulness. We wanted to create a way for people to help themselves. This approach would allow for a completely personal guide based on the user’s past experience. Overall, our project inspiration comes from the hope of making use of our past negative experiences and channeling them into positive guidance for the future.

## What it does
Home Zone is an app created to help users cope with emotions by allowing them to track emotional experiences in the past and offer guidance to their future selves.

## How we built it
We used Xcode and coded in Swift. Since we had little to no experience with coding in Swift, we had to rely heavily on online tutorials for guidance. We brainstormed the idea of Home Zone after thinking about ways to help people with their health. Once we had a basic idea in mind, we needed to create a framework for the user interface/experience. We split our team into two: one group that focused on the overall design and conceptual aspects, and one that focused on implementing the functionality in code. We started with the idea of creating an app where a user can write down their mindfulness and emotions, and we continually improved on the concept throughout the day, adding new features and changing the design of the app.

## Challenges we ran into
The hardest challenge was learning how to use Swift because of all of the intricacies of the language that couldn't be captured in the workshops we went to. A new language comes with the challenge of learning which internal libraries we can use for the app. The hardest part of the application was debugging, since we weren’t used to debugging non-GUI project code with pre-written encapsulating test cases.

## Accomplishments that we're proud of
After ideating, designing, coding, and rounds and rounds of debugging, the first time we exported the app onto our phone and interacted with it was pure magic!! We also are proud that we implemented our design in code to the best of our ability and accomplished the goal we set for our product.

## What we learned
Since this was our first hackathon and our first experience working under tight time constraints on a project, we had to learn how to manage our expectations. We were very ambitious at the start and ideated very complex and elaborate solutions. However, as we started to implement them, we realized that they were unrealistic given the limited time we had. Thus, we chose to condense our solution but still made sure that it addresses the pain points we defined and achieves the goal we set for ourselves.

## What's next for Home Zone
We want to implement more features into our app that will allow our users to add more positivity to their home zone. One planned feature is an embedded music player for Spotify that will allow users to link a specific song and/or playlist into the page, which can be played to comfort the user when they’re experiencing a specific event.
## Inspiration:
*This project was inspired by the idea of a growth mindset. We all live busy lives and face obstacles every day, but in the face of a difficult situation we can either rise to the occasion or let our obstacles win.*

## What it does
*Through the use of Google NLP, our app analyzes the user's text input about how they are feeling to determine whether they are thinking positively or negatively. They are then prompted to change the negative thought into a positive one.*

## How I built it
*We built the app using Swift and UIKit and designed the logo in Adobe Illustrator. We used APIs from the Google Cloud Platform, such as NLP for text/sentiment analysis.*

## Challenges I ran into
*Having a starting point was difficult since neither one of us was experienced with coding Swift in Xcode. But with perseverance we were able to overcome this challenge and keep going. We also ran into difficulties with using APIs because the Swift versions did not match. We then did not have time to build a server, so we had to tweak our idea slightly.*

## Accomplishments that I'm proud of:
*- creating a fully functional iOS application in Swift for the first time*
*- using our creativity to come up with a social impact project*
*- combating each challenge that came our way*

## What I learned
*I learned to expect the unexpected, since not everything goes as planned. But with courage you can accomplish anything. I also learned to prioritize.*

## What's next for Vent
*- improve search optimization*
*- build traction for the app*
*- have an optional Facebook login page in case users want to enter their information*
*- build the website version of the app*
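The app calls Google's NLP from Swift; as a rough sketch of the same sentiment check, here it is using the Google Cloud Natural Language client library for Python. The zero-score cutoff between "positive" and "negative" is an assumption for illustration.

```python
from google.cloud import language_v1

def classify_thought(text: str) -> str:
    """Label a journal entry as positive or negative using document sentiment."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
    # Score ranges from -1.0 (negative) to 1.0 (positive); 0 as the cutoff is an assumption.
    return "positive" if sentiment.score >= 0 else "negative"

entry = "I completely failed my presentation today and I feel useless."
if classify_thought(entry) == "negative":
    print("Try reframing it: what is one thing that went better than expected?")
```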
## Inspiration
We wanted to make financial literacy part of your everyday life while also bringing in futuristic applications such as augmented reality to really motivate people to learn about finance and business every day. We were looking at a fintech solution that didn't restrict financial information to bankers and the investment community, but instead opened it up to the young and curious, who can learn in an interesting way based on the products they use every day.

## What it does
Our mobile app looks at company logos, identifies the company, and grabs its financial information, recent news, and financial statements, displaying the data in an augmented reality dashboard. Furthermore, we include speech recognition to better help those unfamiliar with financial jargon to save and invest.

## How we built it
Built using the Wikitude SDK, which handles augmented reality for mobile applications, together with a mix of financial data APIs and Highcharts and other charting/data-visualization libraries for the dashboard.

## Challenges we ran into
Augmented reality is very hard, especially when combined with image recognition. There were many workarounds and long delays spent debugging near-unknown issues. Moreover, we were building for Android, something that none of us had prior experience with, which made it harder.

## Accomplishments that we're proud of
Circumventing challenges as a professional team and maintaining a strong team atmosphere no matter what, to bring something that we believe is truly cool and fun to use.

## What we learned
Lots of things about augmented reality, graphics, and Android mobile app development.

## What's next for ARnance
Potential to build more charts, financials, and better speech/chatbot abilities into our application. There is also a direction to be more interactive with using hands to play around with our dashboard once we figure that part out.
losing
## Inspiration
One of the greatest challenges facing our society today is food waste. From an environmental perspective, Canadians waste about *183 kilograms of solid food* per person, per year. This amounts to more than six million tonnes of food a year, wasted. From an economic perspective, this amounts to *31 billion dollars worth of food wasted* annually. For our hack, we wanted to tackle this problem and develop an app that would help people across the world do their part in the fight against food waste. We wanted to work with voice recognition and computer vision - so we used these tools to develop a user-friendly app to help track and manage food and expiration dates.

## What it does
greenEats is an all-in-one grocery and food waste management app. With greenEats, logging your groceries is as simple as taking a picture of your receipt or listing out purchases with your voice as you put them away. With this information, greenEats holds an inventory of your current groceries (called My Fridge) and notifies you when your items are about to expire. Furthermore, greenEats can even make recipe recommendations based on items you select from your inventory, inspiring creativity while promoting usage of items closer to expiration.

## How we built it
We built an Android app with Java, using Android Studio for the front end and Firebase for the backend. We worked with Microsoft Azure Speech Services to get our speech-to-text software working, and the Firebase ML Kit Vision API for our optical character recognition of receipts. We also wrote a custom API with stdlib that takes ingredients as inputs and returns recipe recommendations.

## Challenges we ran into
With all of us being completely new to cloud computing, it took us around 4 hours just to get our environments set up and start coding. Once we had our environments set up, we were able to take advantage of the help here and worked our way through. When it came to reading the receipt, it was difficult to isolate only the desired items. For the custom API, the most painstaking task was managing the HTTP requests. Because we were new to Azure, it took us some time to get comfortable with using it. To tackle these tasks, we decided to split up and tackle them one-on-one. Alex worked on scanning the receipt, Sarvan built the custom API, Richard integrated the voice recognition, and Maxwell did most of the app development in Android Studio.

## Accomplishments that we're proud of
We're super stoked that we offer 3 completely different grocery input methods: camera, speech, and manual input. We believe that the UI we created is very engaging and presents the data in a helpful way. Furthermore, we think that the app's ability to provide recipe recommendations really puts us over the edge and shows how we took on a wide variety of tasks in a small amount of time.

## What we learned
For most of us this was the first application we've built - we learned a lot about how to create a UI and how to consider mobile functionality. Furthermore, this was also our first experience with cloud computing and APIs. Creating our Android application introduced us to the impact these technologies can have, and how simple it really is for someone to build a fairly complex application.

## What's next for greenEats
We originally intended this to be an all-purpose grocery-management app, so we wanted to have a feature that could allow the user to easily order groceries online through the app, potentially based on food that would expire soon.
We also wanted to implement a barcode scanner, using the Barcode Scanner API offered by Google Cloud, thus providing another option to allow for a more user-friendly experience. In addition, we wanted to transition to Firebase Realtime Database to refine the user experience. These tasks were considered outside of our scope because of time constraints, so we decided to focus our efforts on the fundamental parts of our app.
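A minimal sketch of the ingredient-to-recipe matching behind the custom recommendation API; the recipe data here is a made-up stand-in for whatever the deployed service actually queries.

```python
# Hypothetical recipe data -- the deployed API would pull this from a real recipe source.
RECIPES = {
    "Vegetable stir fry": {"broccoli", "carrot", "soy sauce", "rice"},
    "Tomato pasta": {"pasta", "tomato", "garlic", "olive oil"},
    "Carrot soup": {"carrot", "onion", "vegetable stock"},
}

def recommend(fridge_items, top_n: int = 2):
    """Rank recipes by how many of their ingredients are already in the fridge."""
    fridge = {item.lower() for item in fridge_items}
    scored = []
    for name, ingredients in RECIPES.items():
        overlap = len(ingredients & fridge) / len(ingredients)
        scored.append((overlap, name))
    scored.sort(reverse=True)
    return [(name, round(score, 2)) for score, name in scored[:top_n]]

print(recommend(["Carrot", "onion", "rice", "broccoli"]))
# [('Vegetable stir fry', 0.75), ('Carrot soup', 0.67)]
```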
## Inspiration
As college students more accustomed to having meals prepared by someone else than doing so ourselves, we are not the best at keeping track of ingredients’ expiration dates. As a consequence, money is wasted and food waste is produced, thereby discounting the financially advantageous aspect of cooking and increasing the amount of food that is wasted. With this problem in mind, we built an iOS app that easily allows anyone to record and track expiration dates for groceries.

## What it does
The app, iPerish, allows users to either take a photo of a receipt or load a pre-saved picture of the receipt from their photo library. The app uses Tesseract OCR to identify and parse through the text scanned from the receipt, generating an estimated expiration date for each food item listed. It then sorts the items by their expiration dates and displays the items with their corresponding expiration dates in a tabular view, such that the user can easily keep track of food that needs to be consumed soon. Once the user has consumed or disposed of the food, they could then remove the corresponding item from the list. Furthermore, as the expiration date for an item approaches, the text is highlighted in red.

## How we built it
We used Swift, Xcode, and the Tesseract OCR API. To generate expiration dates for grocery items, we made a local database with standard expiration dates for common grocery goods.

## Challenges we ran into
We found out that one of our initial ideas had already been implemented by one of CalHacks' sponsors. After discovering this, we had to scrap the idea and restart our ideation stage. Choosing the right API for OCR on an iOS app also required time. We tried many available APIs, including the Microsoft Cognitive Services and Google Computer Vision APIs, but they do not have iOS support (the former has a third-party SDK that unfortunately does not work, at least for OCR). We eventually decided to use Tesseract for our app. Our team met at Cubstart; this hackathon *is* our first hackathon ever! So, while we had some challenges setting things up initially, this made the process all the more rewarding!

## Accomplishments that we're proud of
We successfully managed to learn the Tesseract OCR API and made a final, beautiful product - iPerish. Our app has a very intuitive, user-friendly UI and an elegant app icon and launch screen. We have a functional MVP, and we are proud that our idea has been successfully implemented. On top of that, we have a promising market in no small part due to the ubiquitous functionality of our app.

## What we learned
During the hackathon, we learned both hard and soft skills. We learned how to incorporate the Tesseract API and make an iOS mobile app. We also learned team building skills such as cooperating, communicating, and dividing labor to efficiently use each and every team member's assets and skill sets.

## What's next for iPerish
Machine learning can optimize iPerish greatly. For instance, it can be used to expand our current database of common expiration dates by extrapolating expiration dates for similar products (e.g. milk-based items). Machine learning can also serve to increase the accuracy of the estimates by learning the nuances in shelf life of similarly-worded products. Additionally, ML can help users identify their most frequently bought products using data from scanned receipts. The app could recommend future grocery items to users, streamlining their grocery list planning experience.
Aside from machine learning, another useful update would be a notification feature that alerts users about items that will expire soon, so that they can consume the items in question before the expiration date.
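iPerish does this on-device with the Tesseract iOS SDK in Swift; a rough Python sketch of the same receipt-to-expiration pipeline, using pytesseract and an assumed shelf-life table, looks like this.

```python
from datetime import date, timedelta

import pytesseract
from PIL import Image

# Assumed shelf lives in days -- the app keeps a larger local database of these.
SHELF_LIFE_DAYS = {"milk": 7, "eggs": 21, "spinach": 5, "bread": 4}

def expirations_from_receipt(image_path: str, purchased: date = None):
    """OCR a receipt and estimate an expiration date for each recognized item."""
    purchased = purchased or date.today()
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    results = []
    for item, days in SHELF_LIFE_DAYS.items():
        if item in text:
            results.append((item, purchased + timedelta(days=days)))
    # Soonest-to-expire first, matching the app's table view ordering.
    return sorted(results, key=lambda pair: pair[1])

for item, expires in expirations_from_receipt("receipt.jpg"):
    print(f"{item}: use by {expires.isoformat()}")
```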
## Inspiration
Food waste is one of the biggest contributors to emissions of methane, a greenhouse gas with 21 times more global warming potential than carbon dioxide. Around $31 billion worth of food is wasted every year in Canada, and about half of that waste is produced in households. Sometimes we throw food away simply because we have forgotten its expiry date. If there were an assistant that could remind users to eat their food before its expiry date, we could reduce food waste by a great amount.

## What it does
StillGood is a voice-activated Google Assistant service that keeps track of your grocery purchases and predicts when they are still good or expired. Using this information, StillGood is able to notify users before food has gone bad to prevent it from being wasted. It can also suggest recipes using those ingredients in order to enable users to use their goods to their full extent. StillGood can then offer you more sustainable alternatives to what you already have in your fridge, helping you shop better and buy more sustainable products that are healthier and/or have a smaller carbon footprint.

## How we built it
After we decided on the idea we were going to build upon, we split the team into groups of two, with two working on the web interface and two working on the voice control and data processing of the product. One team member was web-scraping for useful and reliable datasets to serve as a sample set for our product; one team member was learning how to connect voice control on the Google Home Mini to our system; one team member was learning and working on the front end of the web interface while the other member was responsible for the back end of the web interface and the overall structure of the system.

## Challenges we ran into
The main challenge the team ran into was how to design for the best user experience. The service should be easy to use and to understand, and there shouldn't be any additional steps that users have to complete in order to get the answers they want. We had a hard time trying to find reliable data through web scraping. It was also a brand new field for us to incorporate smart home technology into our application. Besides the voice control functionality of StillGood, we also tried to maximize the user experience with the web interface as additional support.

## Accomplishments that I'm proud of
Building something practical that could potentially tackle a real-world problem. At the moment we finished this project, it helped us gain a better understanding of how we could actually use the technical skills we have learned in class for something useful that could perhaps benefit society as a whole. We also had the opportunity to work with technology that we had never worked with before (i.e. Alexa).

## What I learned
All of the team members were willing to take on tasks that they had never done before. Being exposed to new material, we practiced our problem-solving skills and cooperated as a team. Some of us worked on web development while others worked on voice automation; tasks that we had never done before!

## What's next for StillGood
As a product created within 24 hours, StillGood certainly has a lot of aspects that need to be polished and finalized. The next step for StillGood is shaping it towards a smart fridge. The current method of determining the expiry dates of fruits and veggies is still vague and often inaccurate.
Currently we also still require users to check off the food they have eaten by themselves. In the future, we hope to incorporate computer vision to monitor the contents of the fridge, providing accurate predictions of expiry dates using machine learning and reducing the work users need to do to record data. We also hope to add a points system that would offer users promotions based on the points they earn from StillGood.
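The writeup doesn't specify how the Google Assistant action talks to the backend; one common pattern, shown here purely as an assumed sketch, is a Dialogflow-style fulfillment webhook built with Flask that logs a grocery item and replies with its predicted expiry. The shelf-life table, parameter name, and default are all illustrative.

```python
from datetime import date, timedelta

from flask import Flask, jsonify, request

app = Flask(__name__)

SHELF_LIFE_DAYS = {"milk": 7, "strawberries": 3, "bread": 4}  # assumed sample data
pantry = []  # in the real service this would live in a database

@app.route("/webhook", methods=["POST"])
def webhook():
    """Handle an 'add grocery item' intent from the voice assistant."""
    params = request.get_json(force=True)["queryResult"]["parameters"]
    item = params.get("grocery_item", "").lower()
    expiry = date.today() + timedelta(days=SHELF_LIFE_DAYS.get(item, 7))
    pantry.append({"item": item, "expires": expiry.isoformat()})
    return jsonify({"fulfillmentText": f"Got it. Your {item} should still be good until {expiry:%B %d}."})

if __name__ == "__main__":
    app.run(port=8080)
```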
winning
## 💡 Inspiration💡
Our team is saddened by the fact that so many people think that COVID-19 is obsolete when the virus is still very much relevant and impactful to us. We recognize that there are still a lot of people around the world who are quarantining—which can be a very depressing situation to be in. We wanted to create some way for people in quarantine, now or in the future, to help them stay healthy both physically and mentally, and to do so in a fun way!

## ⚙️ What it does ⚙️
We have a full range of features. Users are welcomed by our virtual avatar, Pompy! Pompy is meant to be a virtual friend for users during quarantine. Users can view Pompy in 3D to see it with them in real time and interact with it. Users can also view a live map of recent data that shows the relevance of COVID-19 even at this time. Users can take a photo of their food to see the number of calories they are eating, to stay healthy during quarantine. Users can also escape their reality by entering a different landscape in 3D. Lastly, users can view a roadmap of the next steps in their journey to get through their quarantine, and can speak to Pompy.

## 🏗️ How we built it 🏗️
### 🟣 Echo3D 🟣
We used Echo3D to store the 3D models we render. Each rendering of Pompy in 3D and each landscape is a different animation that our team created in a 3D rendering software, Cinema 4D. We realized that, as the app grows, we could find it difficult to store all the 3D models locally. By using Echo3D, we download only the 3D models that we need, optimizing memory use and keeping runtime smooth. We can see Echo3D becoming much more useful as the number of animations we create increases.

### 🔴 An Augmented Metaverse in Swift 🔴
We used Swift as the main component of our app, and used it to power our augmented reality views (ARViewControllers), our photo views (UIPickerControllers), and our speech recognition (AVFoundation). To bring our 3D models into augmented reality, we used ARKit and RealityKit to create entities in 3D space, as well as listeners that allow us to interact with 3D models, like Pompy.

### ⚫ Data, ML, and Visualizations ⚫
There are two main components of our app that use data in a meaningful way. The first and most important is using data to train ML algorithms that are able to identify a type of food from an image and to predict the number of calories of that food. We used OpenCV and TensorFlow to create the algorithms, which are called from a Python Flask server. We also used data to show a choropleth map of active COVID-19 cases by region, which helps people in quarantine see how relevant COVID-19 still is (which it very much is)!

## 🚩 Challenges we ran into
We wanted a way for users to communicate with Pompy through words and not just tap gestures. We planned to use voice recognition from AssemblyAI to extract the main point of what the user says and create a response, but ran into a challenge when dabbling in audio files with the AssemblyAI API in Swift. Instead, we overcame this challenge by using a Swift-native speech library, namely AVFoundation and AVAudioPlayer, to get responses to the user!

## 🥇 Accomplishments that we're proud of
We have a functioning app with an AR buddy that we have grown heavily attached to. We feel that we have created a virtual avatar that many people really can fall for while interacting with it, virtually traveling to places, talking with it, and getting through quarantine happily and healthily.
## 📚 What we learned Over the last 36 hours, we learned a lot of new things from each other and how to collaborate on a project. ## ⏳ What's next for Pompy? We can use Pompy to help diagnose the user’s conditions in the future; symptoms and inner thoughts that users would otherwise be uncomfortable sharing can be shared more easily with a character like Pompy. While our team has set out for Pompy to be used in a quarantine situation, we envision many other relevant use cases where Pompy will be able to provide companionship in hard times brought on by factors such as anxiety and loneliness. Furthermore, we envisage the Pompy application being a resource hub for users to improve their overall wellness. By providing valuable sleep hygiene tips, exercise tips and even lifestyle advice, Pompy will be the one-stop, holistic companion for users experiencing mental health difficulties to turn to as they take their steps towards recovery. **Note:** we had to use separate GitHub workspaces due to conflicts.
## Inspiration We got together a team passionate about social impact, and all the ideas we had kept going back to loneliness and isolation. We have all been in high-pressure environments where mental health was not prioritized and we wanted to find a supportive and unobtrusive solution. After sharing some personal stories and observing our skillsets, the idea for Remy was born. **How can we create an AR buddy to be there for you?** ## What it does **Remy** is an app that contains an AR buddy who serves as a mental health companion. Through information accessed from "Apple Health" and "Google Calendar," Remy is able to help you stay on top of your schedule. He gives you suggestions on when to eat, when to sleep, and personally recommends articles on mental health hygiene. All this data is aggregated into a report that can then be sent to medical professionals. Personally, our favorite feature is his suggestions on when to go on walks and your ability to meet other Remy owners. ## How we built it We built an iOS application in Swift with ARKit and SceneKit with Apple Health data integration. Our 3D models were created with Mixamo. ## Challenges we ran into We did not want Remy to promote codependency in its users, so we set time aside to think about how we could specifically create a feature that focused on socialization. We've never worked with AR before, so this was an entirely new set of skills to learn. Our biggest challenge was learning how to position AR models in a given scene. ## Accomplishments that we're proud of We have a functioning app of an AR buddy that we have grown heavily attached to. We feel that we have created a virtual avatar that many people really can fall for. ## What we learned Aside from this being the first time many of the team worked with AR, the main learning point was about all the data that we gathered on the suicide epidemic for adolescents. Suicide rates have increased by 56% in the last 10 years, and this will only continue to get worse. We need change. ## What's next for Remy While our team has set out for Remy to be used in a college setting, we envision many other relevant use cases where Remy will be able to better support one's mental health wellness. Remy can be used as a tool by therapists to get better insights on sleep patterns and outdoor activity done by their clients, and this data can be used to further improve the client's recovery process. Clients who use Remy can send their activity logs to their therapists before sessions with a simple click of a button. To top it off, we envisage the Remy application being a resource hub for users to improve their overall wellness. Through providing valuable sleep hygiene tips and even lifestyle advice, Remy will be the one-stop, holistic companion for users experiencing mental health difficulties to turn to as they take their steps towards recovery.
## Inspiration The purchase of goods has changed drastically over the past decade, especially over the period of the pandemic. With these online purchases, however, comes a drawback: the buyer cannot see the product in front of them before buying it. This adds an element of uncertainty and undesirability to online shopping, as it can cost the consumer time and the seller money in processing returns. In fact, a study showed that up to 40% of all online purchases are returned, and out of those returned items just 30% were resold to customers, with the rest going to landfills or other warehouses. With this app, we hope to reduce the number of returns by putting the object the user wants to buy in front of them before they buy it, so that they know exactly what they are getting. ## What it does Say you are looking to buy a TV but are not sure if it will fit or how it will look in your home. You would be able to open the Ecommerce ARena Android app and browse the TVs on Amazon (since that's where you were planning to buy the TV from anyway). You can see all the info that Amazon has on the TV, but then also use AR mode to view the TV in real life. ## How we built it To build the app we used Unity, coding everything within the engine using C#. We used the native AR Foundation functions provided and then built upon them to get the app to work just right. We also incorporated echoAR into the app to manage all 3D models and ensure the app stays lean and small in size. ## Challenges we ran into Augmented reality development was new to all of us, as was the Unity engine; having to learn and harness the power of these tools was difficult, and we ran into a lot of problems building toward the desired outcome. Another problem was how to get the models for each different product, so we decided for this hackathon to limit our scope to two types of products, with the ability to easily keep adding more in the future. ## Accomplishments that we're proud of We are really proud of the final product for being able to detect surfaces and use the augmented reality capabilities super well. We are also really happy that we were able to incorporate web scraping to get live data from Amazon, as well as the echoAR cloud integration. ## What we learned We learned a great deal about how much work it takes, and how truly amazing it is, to get augmented reality applications built, even ones that look simple on the surface. There was a lot that changed quickly, as this is still a new, bleeding-edge technology. ## What's next for Ecommerce ARena We hope to expand its functionality to cover a greater variety of products, as well as to support other vendors aside from Amazon, such as Best Buy and Newegg. We can also start looking into the process for releasing the app on the Google Play Store, and might even look into porting it to Apple products.
winning
## Inspiration To go with this year’s UofTHacks theme “connectivity” and TELUS’s advocacy for using technology to improve mental health, we created a timeless space for participants to care for themselves. ## What it does Fort Awesome: Welcome to the Communitea manages records of tea-drinking experiences. As more tea lovers join, this virtual tea club will get powered up, providing more opportunities for the tea lovers to connect. ## How we built it We used HTML to structure the website, CSS to add visual elements, SQL for the database, and Python and PHP for support and alternatives. ## Challenges we ran into Due to our lack of experience, connecting the project's front and back end was a major challenge for us. ## Accomplishments that we're proud of As first-time hackers, we now have a much stronger grasp of how full-stack project development should look. We kept trying different approaches, programming languages, development tools, and communication methods. We're proud to have completed a viable demo. ## What we learned We learned the basics of SQL and PHP, setting up a website and server, and front-end web design with HTML, CSS, and JavaScript. We also figured out that it might take a long time for our code to meet in the middle if we work separately from the front end and the back end. In the future, we can start together and then develop code in different directions. ## What's next for Fort Awesome: Welcome to the Communitea We recognize and value the importance of caring for oneself and caring for one another. Therefore, we will look into enriching our users’ experience by providing tailored content when more user data becomes available.
## Inspiration Being a university student during the pandemic is very difficult. Not being able to connect with peers, run study sessions with friends and experience university life can be challenging and demotivating. With no existing implementation of a dedicated database that allows students to meet people in their classes and be automatically put into group chats, we were inspired to create our own. ## What it does Our app allows students to easily set up a personalized, school-specific profile to connect with fellow classmates, be automatically put into class group chats via schedule upload and browse clubs and events specific to their school. This app is a great way for students to connect with others and stay on top of activities happening in their school community. ## How we built it We built this app using an open-source mobile application framework called React Native and a real-time, cloud-hosted database called Firebase. We outlined the app's GUI using flow diagrams and implemented an application design that could be used by students via mobile. To target a wide range of users, we made sure to implement an app that could be used on Android and iOS. ## Challenges we ran into Being new to this form of mobile development, we faced many challenges creating this app. The first challenge we faced was using GitHub. Although we were familiar with the platform, we were unsure how to use git commands to work on the project simultaneously. However, we were quick to learn the required commands to collaborate and deliver the app on GitHub. Another challenge we faced was nested navigation within the software. Since our project relied heavily on a real-time database, we also encountered difficulties integrating the database framework into our implementation. ## Accomplishments that we're proud of An accomplishment we are proud of is learning a plethora of different frameworks and how to implement them. We are also proud of being able to learn, design and code a project that can potentially help current and future university students across Ontario enhance their university lifestyles. ## What we learned We learned many things implementing this project. Through this project we learned about version control and collaborative coding through GitHub commands. Using Firebase, we learned how to handle changing data and multiple authentication methods. We were also able to learn how to apply JavaScript fundamentals to build a GUI with the React Native library. Overall, we were able to learn how to create an Android and iOS application from scratch. ## What's next for USL - University Student Life! We hope to further our expertise with the various platforms used in creating this project and be able to create a fully functioning version. We hope to be able to help students across the province through this application.
## Inspiration With apps like Duolingo, most people have no problem spending time to learn a new language, but few people know sign language. We are a team of Western students looking to break down communication barriers by making sign language everyone's next interactive challenge. ## What it does We use scikit-learn and Leap Motion to create an interactive way of learning and testing your sign language skills. The web app has a progression of challenges that range from identifying the alphabet, to having your signing recognized and checked. ## How to get into Signtology 1. [Clone this repository on Github](https://github.com/ivanzvonkov/hackwestern) and install the prerequisites listed in the Requirements.txt file. 2. Open a command line 3. Navigate to the working directory 4. Run `python app.py` 5. Copy the link provided in the command line into your browser with '/api' appended ## Challenges we ran into The API we planned on using didn't end up working, so we had to become familiar with manipulating the frame data from the Leap Motion to define gestures. ## Accomplishments that we're proud of Getting the Leap Motion to recognize the letters we sign in front of it was a big step that made the project a lot more exciting to work on. ## What we learned It's important to stay flexible and pivot your project when your plans don't turn out to be feasible. We had to rethink our original idea on how we would store the Leap data to compare gestures. ## What's next for Signtology We want to develop curriculum bundles that will test more complex skills (e.g. spelling your name, common phrases, etc.). These bundles will be available for different age groups and will lead to proficiency in sign language.
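To illustrate the kind of frame-data classification Signtology relies on, here is a rough sketch of flattening Leap Motion fingertip positions into feature vectors and classifying them with scikit-learn. The feature layout, the synthetic training data, and the choice of a k-nearest-neighbours classifier are our own illustrative assumptions, not the exact pipeline in the repository:

```
# Illustrative sketch: classify sign-language letters from Leap Motion frame data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def frame_to_features(fingertips):
    """Flatten five (x, y, z) fingertip positions, normalised to the hand centre."""
    tips = np.asarray(fingertips, dtype=float)   # shape (5, 3)
    tips -= tips.mean(axis=0)                    # centre the hand
    scale = np.linalg.norm(tips) or 1.0          # make the vector scale-invariant
    return (tips / scale).ravel()                # shape (15,)

# Tiny synthetic training set standing in for recorded Leap Motion frames.
frames = [np.random.rand(5, 3) for _ in range(20)]
labels = ["A" if i % 2 == 0 else "B" for i in range(20)]

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit([frame_to_features(f) for f in frames], labels)

def recognize(fingertips):
    """Return the predicted letter for a new frame's fingertip positions."""
    return clf.predict([frame_to_features(fingertips)])[0]

print(recognize(np.random.rand(5, 3)))
```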
losing
## 💡 Inspiration > #hackathon-help-channel: `<hacker>` Can a mentor help us with flask and Python? We're stuck on how to host our project. How many times have you created an epic web app for a hackathon but couldn't deploy it to show publicly? At my first hackathon, my team worked hard on a Django + React app that only lived at `localhost:5000`. Many new developers don't have the infrastructure experience and knowledge required to deploy the amazing web apps they create for hackathons and side projects to the cloud. We wanted to make a tool that enables developers to share their projects through deployments without any cloud infrastructure/DevOps knowledge. (Also, as 2 interns currently working in DevOps positions, we've been learning about lots of Infrastructure as Code (IaC), Configuration as Code (CaC), and automation tools, and we wanted to create a project to apply our learning.) ## 💭 What it does InfraBundle aims to: 1. ask a user for information about their project 2. generate appropriate IaC and CaC code configurations 3. bundle configurations with a GitHub Actions workflow to simplify deployment Then, developers commit the bundle to their project repository, where deployments become as easy as pushing to your branch (literally, that's the trigger). ## 🚧 How we built it As DevOps interns, we work with Ansible, Terraform, and CI/CD pipelines in an enterprise environment. We thought that these could help simplify the deployment process for hobbyists as well. InfraBundle uses: * Ansible (CaC) * Terraform (IaC) * GitHub Actions (CI/CD) * Python and jinja (generating CaC, IaC from templates) * flask! (website) ## 😭 Challenges we ran into We're relatively new to Terraform and Ansible and stumbled into some trouble with all the nitty-gritty aspects of setting up scripts from scratch. In particular, we had trouble connecting an SSH key to the GitHub Actions workflow for Ansible to use in each run. This led to the creation of temporary credentials that are generated in each run. With Ansible, we had trouble creating and activating a virtual environment (see: not carefully reading the [ansible.builtin.pip](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/pip_module.html) documentation on which parameters are mutually exclusive, and confusing the multiple ways to pip install). In general, hackathons are very time-constrained. Unfortunately, slow pipelines do not care about your time constraints: * they are hard to test locally * they clutter commit history when debugging ## 🏆 Accomplishments that we're proud of InfraBundle is capable of deploying itself! In other news, we're proud of the project being something we're genuinely interested in as a way to apply our learning. Although there's more functionality we wish we had implemented, we learned a lot about the tools used. We also used a GitHub project board to keep track of tasks for each step of the automation. ## 📘 What we learned Although we've deployed many times before, we learned a lot about automating the full deployment process. This involved handling data between tools and environments. We also learned to use GitHub Actions. ## ❓ What's next for InfraBundle InfraBundle currently only works for a subset of Python web apps and the only provider is Google Cloud Platform.
With more time, we hope to: * Add more cloud providers (AWS, Linode) * Support more frameworks and languages (ReactJS, Express, Next.js, Gin) * Improve support for database servers * Improve documentation * Modularize deploy playbook to use roles * Integrate with GitHub and Google Cloud Platform * Support multiple web servers
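For a sense of how InfraBundle's template-driven generation (Python and jinja, as noted under "How we built it") can work, here is a minimal sketch of rendering a Terraform file from a Jinja template with user-supplied project details. The template string, resource block, and variable names are illustrative assumptions, not InfraBundle's actual templates:

```
# Minimal sketch of generating IaC from a Jinja template (variable names are illustrative).
from jinja2 import Template

MAIN_TF_TEMPLATE = """
provider "google" {
  project = "{{ project_id }}"
  region  = "{{ region }}"
}

resource "google_compute_instance" "app" {
  name         = "{{ app_name }}"
  machine_type = "{{ machine_type }}"
  zone         = "{{ region }}-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
    access_config {}
  }
}
"""

def render_main_tf(answers: dict) -> str:
    """Turn the answers collected from the web form into a main.tf file body."""
    return Template(MAIN_TF_TEMPLATE).render(**answers)

if __name__ == "__main__":
    tf_body = render_main_tf({
        "project_id": "my-gcp-project",
        "region": "us-central1",
        "app_name": "hackathon-demo",
        "machine_type": "e2-micro",
    })
    with open("main.tf", "w") as f:
        f.write(tf_body)
```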
## Inspiration An article reported that about 86 per cent of Canada's plastic waste ends up in landfill, in large part due to bad sorting. We thought it shouldn't be impossible to build a prototype for a smart bin. ## What it does The smart bin is able, using object detection, to sort plastic, glass, metal, and paper. All around Canada we see trash bins split into different types of trash. Sorting sometimes becomes frustrating, and this inspired us to build a solution that doesn't require us to think about the kind of trash being thrown out. The Waste Wizard takes any kind of trash you want to throw out, uses machine learning to detect which bin it should be disposed in, and drops it into the proper disposal bin. ## How we built it We built it using recyclable cardboard, used DC motors, and 3D-printed parts. ## Challenges we ran into We had to train our model from the ground up, including collecting all the data. ## Accomplishments that we're proud of We managed to get the whole infrastructure built and all the motors and sensors working. ## What we learned How to create and train a model, 3D print gears, and use sensors. ## What's next for Waste Wizard A smart bin able to sort the 7 types of plastic.
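Since the Waste Wizard's classifier was trained from scratch, here is a rough sketch of how such a model could be trained from a folder of labelled photos. The directory layout, network size, and hyperparameters are placeholder assumptions, not the actual training setup:

```
# Rough sketch of training a small waste classifier from labelled photos.
# Expects a folder layout like data/plastic, data/glass, data/metal, data/paper.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(224, 224), batch_size=32, validation_split=0.2,
    subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data", image_size=(224, 224), batch_size=32, validation_split=0.2,
    subset="validation", seed=42)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4),  # plastic, glass, metal, paper
])

model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
model.save("waste_model.h5")  # the bin controller would load this to pick a bin
```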
## Inspiration Humans have an inherent desire for beauty, seeking to create both beautiful objects and experiences. As our lives become increasingly digital, the importance of crafting seamless experiences online grows. Consequently, graphic and web design are becoming ever more important. Despite the availability of drag-and-drop templates and website builders, creating websites often remains cumbersome and inaccessible for many. We aimed to develop a straightforward solution to enable anyone to build and launch a website in just minutes. ## What it does Through our web app, you can prompt init to create any website. Start with a landing page: init will create the HTML and CSS, and display a preview of your website. You can invite collaborators to work on the website in real time with you and prompt init through live interactions. Once you’re satisfied with your project, init offers one-click hosting. Deploy your website straight from our server and enjoy! ## How we built it Backend: we used two servers to run two different websockets. The first websocket was built with Node and Express, and managed connecting different people to a server; this allowed for live collaboration. Users can work on the same design, see prompts generated live, and deploy. The second websocket uses FastAPI and Python for prompt generation. User-given prompts, along with the current CSS and HTML context, are sent to an object representing the OpenAI GPT-4o model, and the generated changes are patched in. Frontend: We used React, which connects to a Firebase database and GitHub OAuth. The frontend renders HTML and CSS content in a preview with a deploy button that is connected to a hosted link. Users can see collaborator movement in real-time. ## Challenges we ran into This project used two backends, so there were many merge conflicts and unexpected issues. Despite being challenging, we were glad we dove into the subject as deep as we did because it was incredibly rewarding. ## Accomplishments that we're proud of We’re proud that we were able to put together so many features for the app, taking you from creating a project to prompting and collaborating. We introduced a random teammate at the last minute (it was their first hackathon), and it went well! ## What we learned This was the first time some of us used websockets and multiple backends. We learned a lot about coding with best practices. ## What's next for init init is currently hosted, so feel free to try it out here! We hope to connect to Vercel so you can deploy on your own.
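Here is a minimal sketch of what init's prompt-generation websocket could look like with FastAPI and the OpenAI client. The message shape, endpoint path, and system prompt are our own illustrative assumptions, not init's exact protocol:

```
# Illustrative sketch of a FastAPI WebSocket that turns prompts into updated HTML/CSS.
from fastapi import FastAPI, WebSocket
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = "You are a web designer. Return only updated HTML and CSS for the request."

@app.websocket("/generate")
async def generate(ws: WebSocket):
    await ws.accept()
    while True:
        msg = await ws.receive_json()            # assumed shape: {"prompt", "html", "css"}
        completion = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": (
                    f"Current HTML:\n{msg['html']}\n\n"
                    f"Current CSS:\n{msg['css']}\n\n"
                    f"Request: {msg['prompt']}"
                )},
            ],
        )
        # Send the model's output back so the frontend can patch the preview.
        await ws.send_json({"update": completion.choices[0].message.content})
```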
winning
## Inspiration Patients with hand tremors have a difficult time eating with regular utensils, and the current products on the market are very expensive. This device provides the same service to the user at a much more affordable price. ## What it does This device uses a gyroscopic sensor and accelerometer to determine the position of the device after calibration. Using the data received from the sensors, the servo motors move in real time to negate any movement. This allows the spoon to stabilize, helping the user enjoy their meal! ## How I built it Using Arduino microprocessors connected to the different sensors, information was constantly sent back and forth between the sensors and the motors to negate any movement made by the user and create a stable spoon. ## Challenges I ran into The calibration process was the most challenging part, as it was time-consuming as well as very complicated to implement. ## Accomplishments that I'm proud of The final product. ## What I learned The world is a very shaky place, with a lot of hurdles, but on the plus side, at least our design can stabilize food and help people at the same time. ## What's next for GyroSpoon Forbes Top 30 under 30
## Inspiration This generation of technological innovation and human factors design focuses heavily on designing for individuals with disabilities. As such, the inspiration for our project was an application of object detection (Darknet YOLOv3) for visually impaired individuals. This user group in particular has a limited visual modality, which the project aims to supplement. ## What it does Our project aims to provide the visually impaired with sufficient awareness of their surroundings to maneuver. We created a head-mounted prototype that provides the user group real-time awareness of their surroundings through haptic feedback. Our smart wearable technology uses a computer-vision ML algorithm (convolutional neural network) to scan the user’s environment and provide haptic feedback as a warning of a nearby obstacle. These obstacles are further categorized by our algorithm as dynamic (moving, live) objects or static (stationary) objects. For our prototype, we filtered through all the objects detected to focus on the nearest object and provide immediate feedback to the user, providing stronger or weaker haptic feedback depending on whether said object is near or far. ## Process While our idea is relatively simple in nature, we had no idea going in just how difficult the implementation would be. Our main goal was to meet a minimum deliverable product that was capable of vibrating based on the position, type, and distance of an object. From there, we had extra goals like distance calibration, optimization/performance improvements, and a more complex human interface. Originally, the processing chip (the one with the neural network preloaded) was intended to be the Huawei Atlas. With the additional design optimizations unique to neural networks, it was perfect for our application. After 5 or so hours of tinkering with no progress, however, we realized this would be far too difficult for our project. We turned to a Raspberry Pi and uploaded Google’s pre-trained image analysis network. To get the necessary IO for the haptic feedback, we also had this hooked up to an Arduino which was connected to a series of haptic motors. This showed much more promise than the Huawei board and we had a functional object identifier in no time. The rest of the night was spent figuring out data transmission to the Arduino board and the corresponding decoding/output. With only 3 hours to go, we still had to finish debugging and assemble the entire hardware rig. ## Key Takeaways We all learned just how important having a minimum deliverable product (MDP) is. Our solution could be executed with varying levels of complexity and we wasted a significant amount of time on unachievable pipe dreams instead of focusing on the important base implementation. The other key takeaway of this event was to be careful with new technology. Since the Huawei boards were so new and relatively complicated to set up, they were incredibly difficult to use. We did not even use the Huawei Atlas in our final implementation, meaning that all of that work did not contribute to our MDP. ## Possible Improvements If we had more time, there are a few things we would seek to improve. First, the biggest improvement would be to get a better processor. Either a Raspberry Pi 4 or a suitable replacement would significantly improve the processing framerate. This would make it possible to provide more robust real-time tracking instead of tracking with significant delays. Second, we would expand the recognition capabilities of our system.
Our current system only filters for a very specific set of objects, particular to an office/workplace environment. Our ideal implementation would be a system applicable to all aspects of daily life. This means more objects that are recognized with higher confidence. Third, we would add a robust distance measurement tool. The current project uses object width to estimate the distance to an object. This is not always accurate unfortunately and could be improved with minimal effort.
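The width-based distance estimate mentioned above is typically a similar-triangles (pinhole camera) calculation. Here is a hedged sketch of that idea, including the mapping to haptic strength; the focal length, known widths, and distance range are made-up calibration values, not the prototype's actual numbers:

```
# Pinhole-camera distance estimate from the apparent width of a detected object.
# The focal length and known widths below are illustrative calibration values.
KNOWN_WIDTHS_CM = {"person": 45.0, "chair": 50.0, "bottle": 7.0}
FOCAL_LENGTH_PX = 615.0  # found by imaging an object of known width at a known distance

def estimate_distance_cm(label: str, bbox_width_px: float) -> float:
    """distance = (real width * focal length) / width in pixels"""
    return KNOWN_WIDTHS_CM[label] * FOCAL_LENGTH_PX / bbox_width_px

def haptic_strength(distance_cm: float) -> float:
    """Map distance to a 0..1 vibration strength: closer objects buzz harder."""
    nearest, farthest = 30.0, 300.0
    clamped = min(max(distance_cm, nearest), farthest)
    return 1.0 - (clamped - nearest) / (farthest - nearest)

d = estimate_distance_cm("person", 120)   # about 230 cm with these example numbers
print(d, haptic_strength(d))
```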
## Inspiration The Arduino community provides a full ecosystem of developer tools, and I saw the potential in using hardware, IoT and cloud integration to provide a unique solution for streamlining processes for businesses. ## What it does The web app provides the workflow for a one-stop place to manage hundreds of different sensors by adding intelligence to each utility provided by the Arduino REST API. Imagine a health-care company that would need to manage all its heart-rate sensors and derive insights quickly and continuously from patient data. Or picture a way for a business to manage customer device location parameters by inputting customized conditions on the data or parameters. Or a way for a child to control her robot-controlled coffee machine from school. This app provides many different possibilities for use cases. ## How we built it I connected iPhones to the Arduino Cloud, and built a web app with NodeJS that uses the Arduino IoT API to connect to the cloud, and connected MongoDB to make the app more efficient and scalable. I followed the CRM architecture to build the app, and implemented best practices to keep scalability in mind, since it is the main focus of the app. ## Challenges we ran into A lot of the problems faced were naturally in the web application, and they required a lot of time to resolve. ## Accomplishments that we're proud of I am proud of the app and its usefulness in different contexts. This is a creative solution that could have real-world uses if the intelligence is implemented carefully. ## What we learned I learned a LOT about web development, database management and API integration. ## What's next for OrangeBanana Given more time, we would implement more sensors and more use cases for handling each of these.
losing
## Inspiration Walking is a sustainable and effective form of transit whose popularity is negatively impacted by perceived concerns about boredom and safety. People who are choosing between multiple forms of transit might not select walking due to these issues. Our goal was to create a solution that would make walking more enjoyable, encouraging people to follow a more sustainable lifestyle by providing new benefits to the walking experience. ## What it does Our web app, WalkWithMe, helps connect users to other walkers nearby based on times and routes, allowing them to walk together to their intended destinations. It approximately finds the path that maximizes time spent walking together while also minimizing total travel distance for the people involved. People can create accounts that allows them to become verified users in the network, introducing a social aspect to walking that makes it fun and productive. Additionally, this reduces safety concerns as these are often less pronounced in groups of people versus individuals while walking; this is especially true at night. ## How we built it We used react.js for the frontend, Sonr and Golang for the backend. We hosted our website using Firebase. Our map data was generated from the Google Maps API. ## Challenges we ran into Our frontend team had to completely learn react.js for the project. We also did not have prior experience with the Sonr and Google Maps API. We needed to figure out how to integrate Sonr into the backend with Golang and Google Maps API to find the path. ## Accomplishments that we're proud of We are proud of developing and implementing a heuristic algorithm that finds a reasonable path to walk to the destination and for creating an effective backend and frontend setup despite just learning react and Sonr in the hackathon. We also overcame many bugs relating to Google's geocoding API. ## What we learned We learned react.js to display our interactive website efficiently, how to integrate Sonr into our project to store profile and location data, and how to use Google Maps to achieve our goals with our program. ## What's next for WalkWithMe We have many ideas for how we can take the next step with our app. We want to add a tiered verification system that grants you credit for completing walks without issues. The higher you are in the rating system, the more often you will be recommended walks with smaller groups of people (as you are viewed as more trustworthy). We also want to improve the user interface of the app, making it more intuitive to use. We also want to expand on the social aspect of the app, allowing people to form walking groups with others and deepen connections with people they meet. We also want to add geolocation trackers so that users can see where their group members are, in case they don't walk at a similar speed toward the meet-up location.
## Inspiration The vicarious experiences of friends, and some of our own, immediately made clear the potential benefit to public safety the City of London’s dataset provides. We felt inspired to use our skills to make this data more accessible and to improve confidence for those travelling alone at night. ## What it does By factoring in the location of street lights and the greater presence of traffic, safeWalk intuitively presents the safest options for reaching your destination within the City of London. Guiding people along routes where they will avoid unlit areas, and are likely to walk beside other well-meaning citizens, the application can instill confidence for travellers and positively impact public safety. ## How we built it There were three main tasks in our build. 1) Frontend: Chosen for its flexibility and API availability, we used ReactJS to create a mobile-to-desktop scaling UI. Making heavy use of the available customization and data presentation in the Google Maps API, we were able to achieve a cohesive colour theme, and clearly present ideal routes and streetlight density. 2) Backend: We used Flask with Python to create a backend that we used as a proxy for connecting to the Google Maps Directions API and ranking the safety of each route. This was done because we had more experience as a team with Python and we believed the data processing would be easier with Python. 3) Data Processing: After querying the appropriate dataset from London Open Data, we had to create an algorithm to determine the “safest” route based on streetlight density. This was done by partitioning each route into subsections, determining a suitable geofence for each subsection, and then storing the lights in each geofence. Then, we determine the total number of lights per km to calculate an approximate safety rating. ## Challenges we ran into: 1) Frontend/Backend Connection: Connecting the frontend and backend of our project together via a RESTful API was a challenge. It took some time because we had no experience with using CORS with a Flask API. 2) React Framework None of the team members had experience in React, and only limited experience in JavaScript. Every feature implementation took a great deal of trial and error as we learned the framework, and developed the tools to tackle front-end development. Once concepts were learned, however, it was very simple to refine. 3) Data Processing Algorithms It took some time to develop an algorithm that could handle our edge cases appropriately. At first, we thought we could develop a graph with weighted edges to determine the safest path. Edge cases such as handling intersections properly and considering lights on either side of the road led us to dismiss the graph approach. ## Accomplishments that we are proud of Throughout our experience at Hack Western, although we encountered challenges, we achieved several accomplishments through dedication and perseverance. As a whole, the team was proud of the technical skills developed when learning to deal with the React framework, data analysis, and web development. In addition, the levels of teamwork, organization, and enjoyment/team spirit reached in order to complete the project in a timely manner were great achievements. Given the hack we developed and our limited knowledge of the React framework, we were especially proud of the sleek UI design that we created.
In addition, the overall system design lent itself well to algorithm protection and process off-loading by utilizing a separate back end and front end. Overall, although a challenging experience, the hackathon allowed the team to reach new heights. ## What we learned For this project, we learned a lot more about React as a framework and how to leverage it to make a functional UI. Furthermore, we refined our web design skills by building both a frontend and a backend while also using external APIs. ## What's next for safewalk.io In the future, we would like to add more safety factors to safewalk.io, such as crime rate, pedestrian accident rate, traffic density, and road type.
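To make safeWalk's data-processing step concrete, here is a rough sketch of scoring a route by streetlight density. The geofence radius and the example coordinates are illustrative choices, not the exact constants or data we used:

```
# Rough sketch of the safety score: count streetlights near the route,
# then normalize by route length. The 25 m geofence radius is an illustrative choice.
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def safety_score(route_points, lights, radius_km=0.025):
    """Lights per km along the route: higher means better lit."""
    length_km = sum(haversine_km(route_points[i], route_points[i + 1])
                    for i in range(len(route_points) - 1))
    near = {  # each light counted once, even if close to several subsections
        i for i, light in enumerate(lights)
        if any(haversine_km(light, p) <= radius_km for p in route_points)
    }
    return len(near) / max(length_km, 1e-6)

route = [(42.9849, -81.2453), (42.9855, -81.2440), (42.9862, -81.2428)]
lights = [(42.9850, -81.2450), (42.9860, -81.2430), (42.9900, -81.2500)]
print(f"{safety_score(route, lights):.1f} lights per km")
```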
## Inspiration As college students learning to be socially responsible global citizens, we realized that it's important for all community members to feel a sense of ownership, responsibility, and equal access toward shared public spaces. Often, our interactions with public spaces inspire us to take action to help others in the community by initiating improvements and bringing up issues that need fixing. However, these issues don't always get addressed efficiently, in a way that empowers citizens to continue feeling that sense of ownership, or sometimes even at all! So, we devised a way to help FixIt for them! ## What it does Our app provides a way for users to report Issues in their communities with the click of a button. They can also vote on existing Issues that they want Fixed! This crowdsourcing platform leverages the power of collective individuals to raise awareness and improve public spaces by demonstrating a collective effort for change to the individuals responsible for enacting it. For example, city officials who hear in passing that a broken faucet in a public park restroom needs fixing might not perceive a significant sense of urgency to initiate repairs, but they would get a different picture when 50+ individuals want them to FixIt now! ## How we built it We started out by brainstorming use cases for our app and discussing the populations we wanted to target. Next, we discussed the main features we needed to ensure the app could fully serve these populations. We collectively decided to use Android Studio to build an Android app and use the Google Maps API to have an interactive map display. ## Challenges we ran into Our team had little to no exposure to the Android SDK before, so we experienced a steep learning curve while developing a functional prototype in 36 hours. The Google Maps API took a lot of patience to get working, as did figuring out certain UI elements. We are very happy with our end result and all the skills we learned in 36 hours! ## Accomplishments that we're proud of We are most proud of what we learned, how we grew as designers and programmers, and what we built with limited experience! As we were designing this app, we not only learned more about app design and gained technical expertise with the Google Maps API, but we also explored our roles as engineers that are also citizens. Empathizing with our user group showed us a clear way to lay out the key features of the app that we wanted to build and helped us create an efficient design and clear display. ## What we learned As we mentioned above, this project helped us learn more about the design process, Android Studio, the Google Maps API, and also what it means to be a global citizen who wants to actively participate in the community! The technical skills we gained put us in an excellent position to continue growing! ## What's next for FixIt From an Issue’s perspective: a progress bar, a fancier rating system, and crowdfunding. From a Finder’s perspective: filtering Issues and a badges/incentive system. From a Fixer’s perspective: filtering Issues by score, and Trending Issues.
partial
## Inspiration We are tired of being forgotten and not recognized by others for our accomplishments. We built software and a platform that help people get to know each other better and faster, using technology to bring the world together. ## What it does Face Konnex identifies people and helps the user learn who they are, what they do, and how they can help others. ## How we built it We built it using Android Studio, Java, OpenCV and Android Things. ## Challenges we ran into Programming Android Things for the first time; WiFi not working properly; storing the updated location; a slow display; Java compiler problems. ## Accomplishments that we're proud of The facial recognition software successfully working on all devices: 1. Android Things, 2. Android phones. A prototype for the Konnex Glass Holo Phone. Working together as a team. ## What we learned Android Things, IoT, advancing our Android programming skills, and working better as a team. ## What's next for Konnex IOT Improving the facial recognition software, identifying and connecting users on Konnex, and putting the software into the Konnex Holo Phone.
## Inspiration 🍪 We’re fed up with our roommates stealing food from our designated kitchen cupboards. Few things are as soul-crushing as coming home after a long day and finding that someone has eaten the last Oreo cookie you had been saving. Suffice it to say, the university student population is in desperate need of an inexpensive, lightweight security solution to keep intruders out of our snacks... Introducing **Craven**, an innovative end-to-end pipeline to put your roommates in check and keep your snacks in stock. ## What it does 📸 Craven is centered around a small Nest security camera placed at the back of your snack cupboard. Whenever the cupboard is opened by someone, the camera snaps a photo of them and sends it to our server, where a facial recognition algorithm determines if the cupboard has been opened by its rightful owner or by an intruder. In the latter case, the owner will instantly receive an SMS informing them of the situation, and then our 'security guard' LLM will decide on the appropriate punishment for the perpetrator, based on their snack-theft history. First-time burglars may receive a simple SMS warning, but repeat offenders will have a photo of their heist, embellished with an AI-generated caption, posted on [our X account](https://x.com/craven_htn) for all to see. ## How we built it 🛠️ * **Backend:** Node.js * **Facial Recognition:** OpenCV, TensorFlow, DLib * **Pipeline:** Twilio, X, Cohere ## Challenges we ran into 🚩 In order to have unfettered access to the Nest camera's feed, we had to find a way to bypass Google's security protocol. We achieved this by running an HTTP proxy to imitate the credentials of an iOS device, allowing us to fetch snapshots from the camera at any time. Fine-tuning our facial recognition model also turned out to be a bit of a challenge. In order to ensure accuracy, it was important that we had a comprehensive set of training images for each roommate, and that the model was tested thoroughly. After many iterations, we settled on a K-nearest neighbours algorithm for classifying faces, which performed well both during the day and with night vision. Additionally, integrating the X API to automate the public shaming process required specific prompt engineering to create captions that were both humorous and effective in discouraging repeat offenders. ## Accomplishments that we're proud of 💪 * Successfully bypassing Nest’s security measures to access the camera feed. * Achieving high accuracy in facial recognition using a well-tuned K-nearest neighbours algorithm. * Fine-tuning Cohere to generate funny and engaging social media captions. * Creating a seamless, rapid security pipeline that requires no legwork from the cupboard owner. ## What we learned 🧠 Over the course of this hackathon, we gained valuable insights into how to circumvent API protocols to access hardware data streams (for a good cause, of course). We also deepened our understanding of facial recognition technology and learned how to tune computer vision models for improved accuracy. For our X integration, we learned how to engineer prompts for Cohere's API to ensure that the AI-generated captions were both humorous and contextual. Finally, we gained experience integrating multiple APIs (Nest, Twilio, X) into a cohesive, real-time application. ## What's next for Craven 🔮 * **Multi-owner support:** Extend Craven to work with multiple cupboards or fridges in shared spaces, creating a mutual accountability structure between roommates. 
* **Machine learning improvement:** Experiment with more advanced facial recognition models like deep learning for even better accuracy. * **Social features:** Create an online leaderboard for the most frequent offenders, and allow users to vote on the best captions generated for snack thieves. * **Voice activation:** Add voice commands to interact with Craven, allowing roommates to issue verbal warnings when the cupboard is opened.
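As a sketch of Craven's recognition step (the K-nearest-neighbours classification over face encodings described in "Challenges we ran into"), the snippet below classifies a snapshot using dlib-style encodings. The `face_recognition` wrapper, the folder layout of roommate photos, and the owner name are assumptions for illustration, not the exact server code:

```
# Illustrative sketch: KNN over dlib face encodings to decide who opened the cupboard.
# Assumes the `face_recognition` package and a folder of photos per roommate.
import os
import face_recognition
from sklearn.neighbors import KNeighborsClassifier

def load_training_data(root="roommates"):
    """Expect root/<name>/*.jpg; return one encoding per photo with its label."""
    encodings, labels = [], []
    for name in os.listdir(root):
        for fname in os.listdir(os.path.join(root, name)):
            image = face_recognition.load_image_file(os.path.join(root, name, fname))
            faces = face_recognition.face_encodings(image)
            if faces:                      # skip photos where no face was found
                encodings.append(faces[0])
                labels.append(name)
    return encodings, labels

encodings, labels = load_training_data()
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(encodings, labels)

def identify(snapshot_path, owner="alice"):
    """Return (name, is_intruder) for the person in the Nest snapshot."""
    image = face_recognition.load_image_file(snapshot_path)
    faces = face_recognition.face_encodings(image)
    if not faces:
        return None, False
    name = knn.predict([faces[0]])[0]
    return name, name != owner
```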
## Inspiration According to a 2015 study in the American Journal of Infection Control, people touch their faces more than 20 times an hour on average. More concerningly, about 44% of the time involves contact with mucous membranes (e.g. eyes, nose, mouth). With the onset of the COVID-19 pandemic ravaging our population (with more than 300 million current cases according to the WHO), it's vital that we take preventative steps wherever possible to curb the spread of the virus. Health care professionals are urging us to refrain from touching these mucous membranes of ours, as these parts of our face essentially act as pathways to the throat and lungs. ## What it does Our multi-platform application (a Python application and a hardware wearable) makes users aware of how frequently they touch their faces so that they can consciously avoid doing so in the future. The web app and Python script work by detecting whenever the user's hands reach the vicinity of the user's face and tallying the total number of touches over a span of time. It presents the user with their rate of face touches and images of them touching their faces, and compares their rate with a **global average**! ## How we built it The base of the application (the hand tracking) was built using OpenCV and tkinter to create an intuitive interface for users. The database integration used CockroachDB to persist user login records and their face-touching counts. The website was developed in React to showcase our products. The wearable schematic was drawn up using Fritzing and the code developed in the Arduino IDE. By means of a tilt switch, the onboard microcontroller can detect when a user's hand is in an upright position, which typically only occurs when the hand is reaching up to touch the face. The device alerts the wearer via the buzzing of a vibratory motor/buzzer and the flashing of an LED. The emotion detection analysis component was built using the Google Cloud Vision API. ## Challenges we ran into After deciding to use OpenCV and deep vision to determine from live footage if a user was touching their face, we came to the unfortunate conclusion that there aren't a lot of high-quality trained algorithms for detecting hands, given the variability of what a hand looks like (open, closed, pointed, etc.). In addition to this, the CockroachDB documentation was out of date/inconsistent, which caused the actual implementation to differ from the documentation examples and led to a lot of debugging. ## Accomplishments that we're proud of Despite developing on three different OSes, we managed to get our application to work on every platform. We are also proud of the multifaceted nature of our product, which covers a variety of use cases. Despite it being two projects, we still managed to finish on time. To work around the original idea of detecting overlap between detected hands and faces, we opted to detect visible eyes and determine whether an eye was covered due to hand contact. ## What we learned We learned how to use CockroachDB and how it differs from other DBMSes we have used in the past, such as MongoDB and MySQL. We learned about deep vision, how to utilize OpenCV with Python to detect certain elements from a live web camera, and how intricate the process for generating Haar-cascade models is. ## What's next for Hands Off Our next steps would be to increase the accuracy of Hands Off to account for specific edge cases (ex. touching hair/glasses/etc.) to ensure false touches aren't reported.
Additionally, to make the application more accessible, we would want to port it to a web app so that it is easily available to everyone. Our use of CockroachDB will help with scaling in the future. With our newfound familiarity with OpenCV, we would like to train our own models to have a more precise and accurate deep vision algorithm that is much better suited to our project's goals.
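A minimal sketch of the eye-visibility workaround described above, using OpenCV's bundled Haar cascades. The heuristic here (a face present but fewer than two visible eyes counts as a possible touch) is a simplified stand-in for our actual logic:

```
# Sketch of the eye-visibility heuristic: if a face is present but fewer than two
# eyes are detected, a hand is likely covering part of the face.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)
touches = 0
touching = False  # debounce so one long touch counts once

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    covered = False
    for (x, y, w, h) in faces:
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        covered = covered or len(eyes) < 2
    if covered and not touching:
        touches += 1
        print(f"Possible face touch #{touches}")
    touching = covered
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```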
winning
## Inspiration We wanted to take advantage of AR and object detection technologies to help people gain safer walking experiences and to communicate distance information that helps people with vision loss navigate. ## What it does It augments the world with beeping sounds that change depending on your proximity to obstacles, and it identifies surrounding objects and converts them to speech to alert the user. ## How we built it ARKit; RealityKit uses the LiDAR sensor to detect distance; AVFoundation for text-to-speech; CoreML with a YOLOv3 real-time object detection machine learning model; SwiftUI. ## Challenges we ran into Computational efficiency. Going through all pixels from the LiDAR sensor in real time wasn’t feasible, so we had to optimize by cropping the sensor data to the center of the screen. ## Accomplishments that we're proud of It works as intended. ## What we learned We learned how to combine AR, AI, LiDAR, ARKit and SwiftUI to make an iOS app in 15 hours. ## What's next for SeerAR Expand to Apple Watch and Android devices; improve the accuracy of object detection and recognition; connect with Firebase and Google Cloud APIs.
## Inspiration In the work-from-home era, many are missing the social aspect of in-person work. And what time of the workday most provided that social interaction? The lunch break. culina aims to bring the social aspect back to work-from-home lunches. Furthermore, it helps users reduce their food waste by encouraging the use of food that could otherwise be discarded and diversifies their palate by exposing them to international cuisine (that uses food they already have on hand)! ## What it does First, users input the groceries they have on hand. When another user is found with a similar pantry, the two are matched up and shown a list of healthy, quick recipes that make use of their mutual ingredients. Then, they can use our built-in chat feature to choose a recipe and coordinate the means by which they want to remotely enjoy their meal together. ## How we built it The frontend was built using React.js, with all CSS styling, icons, and animation made entirely by us. The backend is a Flask server. Both a RESTful API (for user creation) and WebSockets (for matching and chatting) are used to communicate between the client and server. Users are stored in MongoDB. The full app is hosted on a Google App Engine flex instance and our database is hosted on MongoDB Atlas, also through Google Cloud. We created our own recipe dataset by filtering and cleaning an existing one using Pandas, as well as scraping the image URLs that correspond to each recipe. ## Challenges we ran into We found it challenging to implement the matching system, especially coordinating client state using WebSockets. It was also difficult to scrape a set of images for the dataset. Some of our team members also ran into technical roadblocks on their machines, so they had to think outside the box for solutions. ## Accomplishments that we're proud of We are proud to have a working demo of such a complex application with many moving parts – and one that has impacts across many areas. We are also particularly proud of the design and branding of our project (the landing page is gorgeous 😍 props to David!) Furthermore, we are proud of the novel dataset that we created for our application. ## What we learned Each member of the team was exposed to new things throughout the development of culina. Yu Lu was very unfamiliar with anything web-dev related, so this hack allowed her to learn some basics of frontend, as well as explore image crawling techniques. For Camilla and David, React was a new skill to learn, and this hackathon improved their styling techniques using CSS. David also learned more about how to make beautiful animations. Josh had never implemented a chat feature before, and gained experience teaching web development and managing full-stack application development with multiple collaborators. ## What's next for culina Future plans for the website include adding a video chat component so users don't need to leave our platform. To revolutionize the dating world, we would also like to allow users to decide if they are interested in using culina as a virtual dating app to find love while cooking. We would also be interested in implementing organization-level management to make it easier for companies to provide this as a service to their employees only. Lastly, the ability to decline a match would be a nice quality-of-life addition.
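As an illustration of culina's matching idea, one simple way to pair users is to score pantry overlap (for example with Jaccard similarity) and match the best-scoring pair. This sketch is our own simplification, not the exact server logic:

```
# Illustrative pantry matching: pair the two users whose ingredient sets overlap most.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap score between two pantries: |intersection| / |union|."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def best_match(pantries: dict):
    """Return the pair of users with the highest overlap and their shared ingredients."""
    best = max(combinations(pantries, 2),
               key=lambda pair: jaccard(pantries[pair[0]], pantries[pair[1]]))
    return best, pantries[best[0]] & pantries[best[1]]

pantries = {
    "alice": {"eggs", "rice", "scallions", "soy sauce"},
    "bob":   {"eggs", "rice", "kimchi", "soy sauce"},
    "carol": {"pasta", "tomatoes", "basil"},
}
pair, shared = best_match(pantries)
print(pair, shared)   # ('alice', 'bob') with eggs, rice, and soy sauce shared
```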
## Inspiration * Whenever I have a pain in the chest, leg or arm, I never know what to think or what to look up. It would be quite unhelpful to look up "leg pain"; that is so vague. And I am not able to name each part of my leg. There had to be a better way to assess that. After all, only about 2 in 3,000 people are trained medical professionals. What if we could enable anyone to determine what their problem is? ## What it does * Our iOS app provides an Augmented Reality experience backed by a Computer Vision algorithm to assess your symptoms when you are feeling ill and provides you with the most probable diagnosis. If you are feeling sick or having some sort of pain, you place pinpoints on those areas of pain on your body. The app then processes those pinpoints to provide you with a list of possible issues. AR here allows you to be extremely precise in indicating what area of your body is hurting/uncomfortable. ## How I built it * We built our app using ARKit and Swift. Our API is built in NodeJS and hosted on GCP. Our machine learning algorithms used Caffe and OpenCV for Computer Vision. Our website is written in Vue.js and also hosted on GCP. The website is live as well. ## Challenges I ran into * We had a ton of issues with everything from domain deployment to POST request problems. * Figuring out the best way to translate 3-dimensional nodes in ARKit to usable coordinates for the ML algorithm to figure out the exact body part the node points to. ## Accomplishments that I'm proud of * The iOS app is working, the API is live and the website is almost done. ## What I learned * Learned about SceneKit, which could be used for making iOS games, and about ARKit, which is for Augmented Reality. * We learned a lot regarding API calls and how different technologies integrate and work together. ## What's next for ExaminAR * Better visualization of the AR, using for example an overlay of the anatomy. We did not pursue this idea because of the cost of those anatomy models. * The ability to use a front-facing camera and thus not require assistance to operate.
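One of ExaminAR's challenges was translating ARKit pinpoints into something the ML backend can reason about. A hedged sketch of that idea: normalize each pinpoint against a detected body bounding box and bucket it into a named region. The region boundaries and helper names here are invented for illustration, not the app's actual mapping:

```
# Illustrative mapping from a normalized pinpoint (0..1 within the body bounding box)
# to a coarse body region the backend can use. Region boundaries are made up.
REGIONS = [
    # (name, y_min, y_max) measured from the top of the bounding box
    ("head", 0.00, 0.15),
    ("chest", 0.15, 0.40),
    ("abdomen", 0.40, 0.55),
    ("upper leg", 0.55, 0.75),
    ("lower leg", 0.75, 1.00),
]

def region_for_pinpoint(x: float, y: float) -> str:
    """x, y are the pinpoint's position normalized to the body bounding box."""
    vertical = next(name for name, lo, hi in REGIONS if lo <= y <= hi)
    side = "left" if x < 0.45 else "right" if x > 0.55 else "center"
    return f"{side} {vertical}"

def normalize(point, bbox):
    """Convert screen coordinates into bounding-box-relative coordinates."""
    (px, py), (bx, by, bw, bh) = point, bbox
    return (px - bx) / bw, (py - by) / bh

x, y = normalize((210, 480), (100, 80, 220, 600))   # example screen point and body box
print(region_for_pinpoint(x, y))                    # prints "center upper leg" here
```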
winning
## Inspiration The biggest question asked in the resale of any item of value used to be about condition, functionality, and other tangibles. But now, in a world where counterfeits are so hard to spot and brand value reigns supreme, the single most important question has become “is it real?” As unfortunate as it is, with the explosion of e-commerce, counterfeits have become an increasingly prominent concern in the purchase of any item, from consumer electronics to even fields as critical as prescription medication. However, no sector of business has been hit harder by counterfeiting than that of luxury apparel and accessories. The rise of counterfeiting has spawned a $118 billion anti-counterfeiting industry, with advanced QR codes and printing techniques employed to make it more difficult for bad actors to manufacture fakes and third-party validation companies like StockX protecting the resale market from fakes. Ironically, anti-counterfeiting measures have inspired many counterfeiters to also manufacture fake anti-counterfeiting features, including copied QR codes, forged certificates of authenticity, and even fake StockX tags. As a result it is absolutely impossible to say with 100% certainty that anything sold by anybody but the manufacturer itself is real, a plight that has plagued the secondhand market for years. In a survey of several sneakerheads across the nation with varying stakes in the sneaker resale market, the unanimous single largest concern when buying goods, even with seemingly proper documentation, is that the product is counterfeit, and the process of verifying/proving authenticity takes an average of nearly half the time spent on a transaction (over ten minutes!). We seek to provide a solution to this problem by leveraging the immutability of the blockchain, transparent transaction history of digital assets, and the unforgeability of digital signatures to INSTANTLY certify with 100% certainty that a physical asset is real using an NFT. ## What it does Apto-check is a protocol built on the Aptos blockchain that backs verifiably legitimate physical assets with NFTs. The protocol works as such: When the admin receives the asset’s serial number and substantial evidence that an asset is legitimate (what qualifies as “substantial” is either proof of recent purchase directly from the manufacturer or verification by trusted third parties like StockX or GOAT), the admin mints an NFT for the asset using the admin account’s private key and sends it to the address of the user requesting the NFT. It is important to note that the protocol is structured such that new NFTs can only be minted by ONE ADDRESS, the admin account. This is done deliberately, as it allows for every NFT minted for Apto-check to be traced back to the exact same minting address, making Apto-check NFTs absolutely impossible to counterfeit. Even if a criminal were able to mint an NFT with the exact same picture, serial number, and internal smart contract data, because they don’t have the admin’s private key, the fake NFT would trace back to the criminal’s address, rather than the admin’s, making it painfully obvious that the NFT is fake. Although this is certainly enough to invalidate attempts to counterfeit Apto-check NFTs, Apto-check actually has an additional safeguard in place in that it is impossible to send, receive, or even view NFTs not minted by the admin address, meaning that even if a counterfeit Apto-check NFT were to make it into an Aptos wallet, it would be impossible to use in the Apto-check dApp. 
Apto-check syncs with a user’s Petra Aptos wallet, and shows a detailed overview of the user’s current Apto-check NFTs on the homepage. Because Apto-check is meant to be used as a tool for the sneaker resale industry, let us consider how the dApp performs in the case of a sneaker sale. The idea is that a sale between buyer Alice and seller Bob will go as follows: Alice takes a look at an Apto-check backed pair of Jordans and decides that she is interested in purchasing them. Alice asks the ultimate question in requesting proof that those Jordans are real. Bob shows Alice the corresponding Apto-check NFT on the homepage of the Apto-check dApp, and demonstrates that the NFT’s serial number field matches the physical serial number on the pair of Jordans. Alice is assured without any doubt that the pair of Jordans are real, and decides to buy the shoes. Bob transfers the corresponding NFT to Alice’s Aptos address through the Apto-check dApp. Upon receiving the pending transfer of the shoes’ NFT in her Aptos wallet, Alice pays Bob for the shoes and accepts the transaction. The NFT is transferred to her wallet, and Alice is now able to prove to anybody that her Jordans are undeniably real. The hope is that in the sneaker resale market, Apto-check NFTs will be used as an undeniable standard of authenticity. ## How we built it * Frontend: React, Typescript, Tailwind, Next.js * Backend: Typescript + Aptos API, Firebase * Blockchain: Aptos * Accounts DB: Firestore * Aptos Wallet: Petra * Design: GIMP & MS Paint ## Challenges we ran into Learning how to use the Aptos API to interact with the Aptos blockchain was difficult, as it was the first time either of us had been exposed to the Aptos blockchain. We encountered some difficulty with using the Aptos indexer to access an account’s tokens for the frontend, and accidentally became rate-limited. Many thanks to the Aptos team for their incredible support in helping us through the process of building in the Aptos ecosystem, answering any and all questions we had, and for solving the rate-limiting fiasco :) ## Accomplishments that we're proud of We took a big risk by deciding to take an idea we had planned to implement on another blockchain and build it on Aptos, a blockchain we hadn’t heard of until Treehacks. We are especially proud of how we were still able to leverage some unique aspects of Aptos to implement additional security features (such as preventing non-admin-minted NFTs from appearing in the dApp) to make Apto-check as airtight as possible, and we’ve certainly learned the unique benefits of building on Aptos, especially for a project like ours. In addition, we’re very happy with the user-friendly experience we’ve created, as we believe that, especially with blockchain projects, it is of utmost importance to make it easy enough for a user with no blockchain background at all to use properly. ## Features Coming Soon to Apto-Check Our intention from the start was to subsidize all gas fees so as to avoid overcomplicating the user experience by requiring all users to have APT in their wallets. The means to implement this feature are soon to come in an upcoming Aptos update. We also intend to vastly lower the barriers to entry for blockchain users by utilizing QR codes to communicate public keys, and by removing the need for users to set up their own Petra wallets.
We plan to allow users to sign up and sign in with just their email addresses and passwords by setting up wallets for them when they sign up to remove the overcomplication of needing to deal with multiple platforms. ## Apto-Check’s growth strategy Apto-Check has been segmented for the sneakerhead market first for several reasons, with the most obvious being the importance of proving genuinity quickly and with complete certainty, and with the most important being that the sneakerhead community has a generally bullish outlook on NFTs and has a substantial amount of overlap with the NFT community. We hope to establish Apto-Check as the new golden standard for authenticity in the sneakerhead market, just like how StockX did with their verification tags before they started getting faked, and use this to expand into the very adjacent broader market for luxury apparel. Given Apto-check’s potential to disrupt the $1.7 - 4.5 trillion market for counterfeit goods, the market potential for Apto-check, especially if it becomes integrated in the luxury goods supply chain (e.g. partnering with manufacturers to mint NFTs for all goods produced), is incredible. Furthermore, because it costs fractions of a penny for each transaction, monetization via charging a flat fee for each NFT minted will be infinitely scalable due to the negligible cost of maintaining the Apto-check network.
## Inspiration With the rise of AI-generated content and DeepFakes, it's hard for people to identify what's real and what's fake. This leads to fake news and abuse. After seeing the launch of OpenAI's Sora model this week, we decided to build a solution to verify whether an image is real or AI-generated. ## What it does Aros is an **iOS app that allows you to verify that an image is real and not AI-generated**. It does this by cryptographically proving that you clicked an image on your iPhone, which means that the image is real. This is how it works: 1. When you click a photo using the Aros camera app, Aros uses your iPhone's Secure Enclave to cryptographically sign this image. 2. This signature is posted to the online Aros registry. 3. Anyone can use this signature and your public key to verify that the photo was clicked on your iPhone, and not generated using AI. We also built a **zero-knowledge prover** that verifies the signature on your image within a ZK circuit. This allows any **blockchain to easily verify** that an image is real. ## How we built it This is a system architecture diagram for Aros: ![System architecture diagram](https://hackmd.io/_uploads/SkrLOL126.png) ### Secure Enclave We create a cryptographic key pair in your iPhone's [Secure Enclave](https://developer.apple.com/documentation/security/certificate_key_and_trust_services/keys/protecting_keys_with_the_secure_enclave#2930473) to rely on **hardware security** and ensure that your private keys are never leaked outside your iPhone. Aros uses these keys to sign your photos to prove and verify that you clicked them on your iPhone. ### Zero-Knowledge To easily verify the image signatures on a blockchain, we decided to build a ZK verifier for this. We used state-of-the-art cryptographic systems like the **SP1 RISC-V prover** from Succinct Labs to verify the image signatures within a **Plonky3 circuit**. ### iOS App and Web Registry We built the iOS app using **Swift**. The Aros registry is used to store each image's hash and signature, along with users' public keys. It doesn't store the raw image data so we can protect privacy. We built the Aros registry using Next.js, Typescript, and Tailwind CSS. We **deployed the registry dashboard and registry API using Vercel**. ## Challenges we ran into * The Secure Enclave in the iPhone uses the **P-256 elliptic curve** but we found it hard to find a verifier ZK circuit for this curve within Circom or Halo2. So, we decided to use the SP1 RISC-V prover from Succinct Labs to verify the image signatures and generate a Plonky3 circuit. * We faced challenges with **base64 encoding and decoding** the public key. However, we realized that we could use the `base64EncodedString` function in Swift to help with this. ## Accomplishments that we're proud of * It was **our first time developing on iOS and using Swift**, so there was a pretty steep learning curve on the first day. We're really happy that we were able to learn Swift and iOS development over the weekend and successfully build this project. * It was a stretch goal for us to build a zero-knowledge verifier of the P256 signature verification. We're proud that we were able to build this, and now anyone can efficiently verify that an image is real on any blockchain as well. ## What we learned * In terms of technologies, we learned iOS development, Swift, and SwiftUI, and we also learned how to work with RISC-V ZK proving systems like the SP1 prover. 
* We learned about hardware security, specifically how to protect private keys using the Secure Enclave on iPhones. ## What's next for Aros * We want to extend this technology beyond just images, to **prove that audio and video is real** and not AI-generated. We have some ideas for this and we are excited to try these out soon! * We plan to deploy a **verifier smart contract** for the ZK circuit on Ethereum. * We hope to **work with social media platforms** to try to integrate our system since we think fake news and images are most prevalent on social media, and Aros can help reduce misinformation online.
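As a rough illustration of the registry-side check described above — a verifier recomputes the image hash and checks the Secure Enclave's P-256 signature against the poster's public key — here is a minimal Python sketch using the `cryptography` package; the PEM/DER encodings and field names are assumptions, and the real registry is written in TypeScript:

```python
import hashlib
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

def verify_photo(image_bytes: bytes, signature_der: bytes, public_key_pem: bytes) -> bool:
    """Check that this exact image was signed by the key held in the phone's Secure Enclave."""
    public_key = serialization.load_pem_public_key(public_key_pem)
    try:
        # ECDSA over SHA-256 of the raw image bytes; signature is assumed to be DER-encoded
        public_key.verify(signature_der, image_bytes, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

# the registry would index each photo by its hash so anyone can look up its signature
photo = open("photo.jpg", "rb").read()
print("lookup key:", hashlib.sha256(photo).hexdigest())
```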
## Inspiration As NFTs are exploding onto the scene in 2021, those unfamiliar with the technology behind them are left wondering why. We have been hearing about how Jack Dorsey sold an NFT of his first-ever tweet for over $2.9M or about Dapper Lab’s NBA Top Shot selling NFTs of NBA season moments for incredible amounts. According to DappRadar, which tracks sales across multiple blockchains, NFT sales volume surged to $2.5bn in the first half of 2021 and to $10.67bn in Q3. To understand the reasoning behind this novel concept, it requires a lot of research and technical understanding of blockchain to fully grasp what exactly is going on - and it can get overwhelming to decide where to even begin, with resources scattered and minimal interactive lessons available to learn about these different topics. So we want to bring financial literacy about these relatively new fintech concepts to a broader audience and contribute to a more inclusive economy. To achieve this, we hacked together an interactive experience to help individuals learn about DeFi and NFTs. ## What it does It helps individuals start learning about decentralized finance, blockchain, NFTs, and more through curated questions and interactive exercises designed to get them hands-on experience with varying concepts. ## How we built it We built a web application on Ethereum using Solidity and Javascript. For our interactive exercise (shown in our demo), we walk users through easily creating a freshly minted NFT by deploying a smart contract on Ethereum and storing it on IPFS and Filecoin using nft.storage. We used nft.storage to upload an image file and the NFT’s metadata to IPFS, utilizing its content addressing and producing a content identifier. Then we show the CID and IPFS URI to users, allowing them to easily access their first NFT! ## Challenges we ran into One of the biggest challenges we faced was our inexperience and lack of domain knowledge in DeFi, blockchain, and NFTs. Getting a grasp on the concepts and topics was tough in itself, but then trying to build on top of blockchain protocols was a whole other beast to tackle. ## Accomplishments that we're proud of Honestly, just the amount of knowledge and information we have learned from researching blockchain, NFTs, Filecoin, Ethereum, and then being able to apply that knowledge to build a web app on Ethereum in such a short amount of time is a great accomplishment that we’re proud of. ## What we learned We learned a lot about the intricacies of building on blockchain protocols as we have never done so before. More importantly, we were able to take this time to learn a lot about DeFi, how blockchain works, and where and how NFTs fit into the grand scheme of the evolving economy. What we have learned during the hackathon is by no means comprehensive, but it is a starting point to a better understanding. We hope that our educational project can demystify DeFi for beginners! ## What's next for DeFi Edu Curating more resources and topics so people can learn more about different topics relating to DeFi and understand more about the evolving economy.
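For readers curious what the minting exercise boils down to, here is a small Python sketch of the upload step; the endpoint and response shape are assumptions based on nft.storage's classic HTTP API, and the hackathon project itself used the JavaScript client:

```python
import requests

NFT_STORAGE_TOKEN = "YOUR_API_TOKEN"           # placeholder
UPLOAD_URL = "https://api.nft.storage/upload"  # assumed classic nft.storage endpoint

def pin_to_ipfs(path: str) -> str:
    """Upload a file to IPFS/Filecoin through nft.storage and return its content identifier."""
    with open(path, "rb") as f:
        resp = requests.post(
            UPLOAD_URL,
            headers={"Authorization": f"Bearer {NFT_STORAGE_TOKEN}"},
            data=f,
            timeout=60,
        )
    resp.raise_for_status()
    return resp.json()["value"]["cid"]

cid = pin_to_ipfs("my-first-nft.png")
print("IPFS URI:", f"ipfs://{cid}")
```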
partial
## Inspiration For three out of four of us, cryptocurrency, Web3, and NFTs were uncharted territory. With the continuous growth within the space, our team decided we wanted to learn more about the field this weekend. An ongoing trend we noticed when talking to others at Hack the North was that many others lacked knowledge on the subject, or felt intimidated. Inspired by the randomly generated words and QR codes on our hacker badges, we decided to create a tool around these that would provide an engaging soft introduction to NFTs for those attending Hack the North. Through bringing accessible knowledge and practice directly to hackers, Mint the North includes those who may be on the fence, uncomfortable with approaching subject experts, and existing enthusiasts alike. ## What It Does Mint the North creates AI-generated NFTs based on the words associated with each hacker's QR code. When going through the process, the web app provides participants with the opportunity to learn the basics of NFTs in a non-intimidating way. By the end of the process, hackers will sign off with a solid understanding of the basics of NFTs and their own personalized NFTs. Along with the tumblers, shirts, and stickers hackers get as swag throughout the event, participants can leave Hack the North with personalized NFTs and potentially a newly sparked interest and motivation to learn more. ## How We Built It **Smart Contract** Written in Solidity and deployed on Ethereum (testnet). **Backend** Written in node.js and express.js. Deployed on Google Firebase and assets are stored on Google Cloud Services. **Frontend** Designed in Figma and built using React. ## Challenges We Ran Into The primary challenge was finding an API for the AI-generated images. While many exist, there were different barriers restricting our use. After digging through several sources, we were able to find a solution. The solution also had issues at the start, so we had to make adjustments to the code and eventually ensured it worked well. Hosting has been a consistent challenge throughout the process as well due to a lack of free hub platforms to use for hosting our backend services. We use Google firebase as they have a built-in emulator that allows us to make use of advanced functionality while running locally. ## Accomplishments that We're Proud Of Those of us in the group that were new to the topic are proud of the amount we were able to learn and adapt in a short time. As we continued to build and adjust our project, new challenges occurred. Challenges like finding a functional API or hosting pushed our team to communicate, reorganize, and ultimately consider various solutions with limited resources. # What We Learned Aside from learning a lot about Web3, blockchain, crypto, and NFTs, the challenges that occurred throughout the process taught us a lot about problem-solving with limited resources and time. # What's Next for Mint the North While Mint the North utilized resources specific to Hack the North 2022, we envision the tool expanding to other hackathons and tech education settings. For Web3 company sponsors or supporters, the tool provides a direct connection and offering to communities that may be difficult to reach, or are interested but unsure how to proceed.
## Inspiration We noticed that UW has been providing menstrual products in bathrooms across campus. However, off-campus, emergencies still arise. Along with the healthcare system being overwhelmed and ambulance times being delayed, MedNow weaponizes location data to help deal with emergencies. Whether you need a menstrual product, bandaid, EpiPen, or even CPR, someone nearby may just be able to buy you a little time before the ambulance arrives or the issue resolves itself. ## What it does MedNow is a mobile app that takes voluntary “medics” and sends them notifications when someone needs help. When a patient clicks “Help,” their information such as medical history, emergency, and location data are sent to either basic medics or intermediate medics, basic being those who could deal with something like a bandaid and intermediate being those who are first aid trained and could deal with larger issues. If and when the paramedics arrive, they will have special authentication to scan a QR code to receive the patient’s health card info and more detailed personal data than what was provided to the medic. MedNow also includes an educational page of basic first aid demos and a monthly summary logging how many people you’ve helped. If you’re uncomfortable with sharing your personal information, you can still click “Help” as a guest and your location will be sent to nearby medics. If the situation allows, you can fill out what the emergency is as the medics find their way to you and nearby medics will have the opportunity to accept or decline your emergency. If accepted, they will be directed to Google Maps to find the patient; if declined, other medics will still have the opportunity to accept similar to Uber finding another nearby driver although they may be further away. ## How we built it We built MedNow using the Expo platform to develop a React Native app, taking advantage of its cross-platform development capabilities. Thus, MedNow runs on both iOS and Android devices. Additionally, our UI/UX was designed using Figma, where we created mockups depicting the app’s visual design and well as user flow. Finally, throughout development, we collaborated using Github. ## Challenges we ran into Our mobile app doesn’t have a solid back-end as everything was hard coded into our front-end. Our team started off with four members but ended with two due to busy schedules, and it was difficult to stand out against our competitors when there are many apps that connect you virtually with a doctor. But by making MedNow IRL and encouraging the kindness of good civilians, we can lessen the burden on the healthcare system and connect more people in both our digital and actual world. We also wondered how this app could gain popularity if there isn’t an incentive for medics to earn something of monetary value. Those who are medically trained may shy away from being on call outside working hours and privacy issues could arise. But with how popular Pokemon Go and Uber have been by using location data and personal information, we thought a similar idea could apply to something more practical like a medical issue or emergency. With how isolated everyone has been due to the pandemic, many people are willing to lend a helping hand and get to meet more people. ## Accomplishments that we're proud of Completing it within the time limit! We found the timing difficult as we have little hackathon experience and our team cut down to half, but MedNow managed to push through and we hope it makes a difference in the world. 
We’re also proud of how we came up with our ideas. The QR code was inspired by the HTN badges and MedNow as a whole was inspired by seeing menstrual products in the E7 bathrooms. We adapted to this new environment quickly and certainly learned a lot over this weekend. ## What we learned Speaking of learning, I (Sarah) had no UI/UX experience or even coding experience in general but learned how to use Figma this weekend to complete the design. Andrew quickly learned how to use React Native and Javascript and develop a mobile app. Most of all, we learned how to balance going to activities, networking, sleeping, and hacking all in 36 hours which isn’t something you learn every day (or, I guess, in 36 hours). ## What's next for MedNow We hope to implement a speech-to-text feature in MedNow for accessibility purposes, especially during an emergency. We also hope to make the location API more precise to the exact floor and room number of a building. Because one’s more likely to find themselves needing help in an unfamiliar area, we want to design the “Call 911” button so that it matches the area code of whatever country or region you’re in.
# Nexus, **Empowering Voices, Creating Connections**. ## Inspiration The inspiration for our project, Nexus, comes from our experience as individuals with unique interests and challenges. Often, it isn't easy to meet others with these interests or who can relate to our challenges through traditional social media platforms. With Nexus, people can effortlessly meet and converse with others who share these common interests and challenges, creating a vibrant community of like-minded individuals. Our aim is to foster meaningful connections and empower our users to explore, engage, and grow together in a space that truly understands and values their uniqueness. ## What it Does In Nexus, we empower our users to tailor their conversational experience. You have the flexibility to choose how you want to connect with others. Whether you prefer one-on-one interactions for more intimate conversations or want to participate in group discussions, our application Nexus has got you covered. We allow users to either get matched with a single person, fostering deeper connections, or join one of the many voice chats to speak in a group setting, promoting diverse discussions and the opportunity to engage with a broader community. With Nexus, the power to connect is in your hands, and the choice is yours to make. ## How we built it We built our application using a multitude of services/frameworks/tool: * React.js for the core client frontend * TypeScript for robust typing and abstraction support * Tailwind for a utility-first CSS framework * DaisyUI for animations and UI components * 100ms live for real-time audio communication * Clerk for a seamless and drop-in OAuth provider * React-icons for drop-in pixel perfect icons * Vite for simplified building and fast dev server * Convex for vector search over our database * React-router for client-side navigation * Convex for real-time server and end-to-end type safety * 100ms for real-time audio infrastructure and client SDK * MLH for our free .tech domain ## Challenges We Ran Into * Navigating new services and needing to read **a lot** of documentation -- since this was the first time any of us had used Convex and 100ms, it took a lot of research and heads-down coding to get Nexus working. * Being **awake** to work as a team -- since this hackathon is both **in-person** and **through the weekend**, we had many sleepless nights to ensure we can successfully produce Nexus. * Working with **very** poor internet throughout the duration of the hackathon, we estimate it cost us multiple hours of development time. ## Accomplishments that we're proud of * Finishing our project and getting it working! We were honestly surprised at our progress this weekend and are super proud of our end product Nexus. * Learning a ton of new technologies we would have never come across without Cal Hacks. * Being able to code for at times 12-16 hours straight and still be having fun! * Integrating 100ms well enough to experience bullet-proof audio communication. ## What we learned * Tools are tools for a reason! Embrace them, learn from them, and utilize them to make your applications better. * Sometimes, more sleep is better -- as humans, sleep can sometimes be the basis for our mental ability! * How to work together on a team project with many commits and iterate fast on our moving parts. ## What's next for Nexus * Make Nexus rooms only open at a cadence, ideally twice each day, formalizing the "meeting" aspect for users. 
* Allow users to favorite or persist their favorite matches to possibly re-connect in the future. * Create more options for users within rooms to interact with not just their own audio and voice but other users as well. * Establishing a more sophisticated and bullet-proof matchmaking service and algorithm. ## 🚀 Contributors 🚀 | | | | | | --- | --- | --- | --- | | [Jeff Huang](https://github.com/solderq35) | [Derek Williams](https://github.com/derek-williams00) | [Tom Nyuma](https://github.com/Nyumat) | [Sankalp Patil](https://github.com/Sankalpsp21) |
losing
## Inspiration Our goal from the beginning was to solve a **real world problem**, but we didn't know where to begin. We started by asking our mentors for advice. It was during a conversation with an RBC mentor that we learned about the problem of credit card fraud. It looked like a problem we could solve using an adaptive machine learning algorithm, and that was our motivation to come up with an analytics product like Panthera. ## What it does Panthera is a command line application. It has no front end; because of this, it is incredibly well suited for developers looking to move quickly. This is how it works: You provide Panthera with a dataset of all credit card transactions during a time period in .csv format. Panthera then analyses the data pre-training, and uses its insights from this to select the best algorithm for the job. This makes Panthera highly adaptive to the type of data set. After a few minutes of training, Panthera runs its machine learning logic on a subset of the transactions, and tells the developer how accurate it was in identifying fraudulent transactions (along with a few other stats). As of writing, and using the sample data set of 284,807 transactions from Europe containing 492 fraudulent transactions, Panthera was able to successfully flag the fraudulent transactions 91% of the time. ## How to use it Simply open up Terminal.app, navigate to the directory where `runme.py` and the dataset are stored, and type in `python runme.py` ## How we built it We built Panthera considering the future of machine learning. Python is incredibly popular for machine learning, has a lot of efficient and useful machine learning libraries, and has a promising future. For these reasons, Panthera was written using Python. We spent a lot of time honing in on the right subset of algorithms for the job, through multiple iterations. ## Challenges we ran into Selecting the right algorithms for different data sets was the biggest challenge. We needed to identify ones that could work quickly, efficiently, and accurately with large data sets. We decided to go with a set of 3 algorithms, 2 of which were linear and 1 non-linear, to encompass as many possible transaction databases as possible. ## Accomplishments that we're proud of Being able to learn the crux of machine learning in 12 hours, and then apply it in a single application, is something that we're incredibly proud of. ## What we learned Machine Learning is really a simple process. The challenge with Machine Learning is to draw the right correlations from the data and perform statistical analysis of the outcomes (realizing 91% accuracy is not a great number when dealing with more than 100M transactions). Not to mention, learning new things is always **fun**. ## What's next for Panthera? We really want to go deeper into making Panthera more efficient. Answering questions like: how can we use Deep Learning, multi-layered systems, or Recurrent Neural Networks to make Panthera more accurate?
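As a minimal sketch of the train-and-evaluate loop described above — assuming the public European card-fraud CSV with a `Class` label column, and using plain logistic regression as a stand-in for Panthera's algorithm selection — this is roughly what the core of `runme.py` could look like:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# 284,807 transactions, 492 of which are labelled fraudulent (Class == 1)
df = pd.read_csv("creditcard.csv")
X, y = df.drop(columns=["Class"]), df["Class"]

# hold out a subset of transactions to report accuracy on, as Panthera does
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000, class_weight="balanced")
model.fit(X_train, y_train)

# recall on the fraud class tells us how often fraudulent transactions were flagged
print(classification_report(y_test, model.predict(X_test), digits=3))
```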
## Inspiration Compiled C and C++ binaries compose a large portion of the software vulnerabilities present in today's server and commercial codebase. Oftentimes, these vulnerabilities can be detected and prevented by static analysis algorithms, such as the Clang static analyzer. However, with the ever increasing complexity and frequency of exploits involving various mechanisms of memory corruption and arbitrary program control, static methods are becoming ineffective in identifying all possible attack surfaces in a given program. Machine learning, a powerful data analysis technique that has been used for finding patterns in a wide variety of datasets, is proposed as a solution to more quickly and effectively identify potential weak points in a program so that they may be patched before deployment. ## What it does CodeHeat (short for automatic Code Heat Map analysis) is a machine-learning-based vulnerability detection tool built specifically for C and C++ programs, but whose concepts may be easily expanded to perform similar analysis on programs in other languages. Instead of analyzing compiled binaries - which is what disassemblers such as IDA and Ghidra do - CodeHeat analyzes the source program directly, exactly what is visible to the developer. This offers several advantages: first, source file analysis allows the developer to make changes to his or her program as it is being built, without having to repeatedly wait for compilation. Furthermore, vulnerabilities at the source level are much easier for the developer to identify and fix than having to map the compiled code back to the text to address a vulnerability. ## How we built it The machine learning library used to generate, train, and evaluate the model was Keras, which runs atop Tensorflow. Since Keras is a Python library, all analysis programs we built were written in Python. The data that is passed to the classifier is a series of tokens - C / C++ text source files had to first be tokenized with a lexer. The lexer was implemented from scratch in Python using the PLY (Python Lex-Yacc) library. The machine learning model itself consists of 7 types of internal layers: (1) an embedding layer, (2) a reshaping layer, (3) a 2-dimensional convolutional layer, (4) a maximum pooling layer, (5) a flattening layer, (6) a dropout layer, and (7) dense layers (there are three). Parameters were selected according to a previous [research paper](https://arxiv.org/pdf/1807.04320.pdf) investigating the properties of a similar vulnerability detection network. The convolutional neural network is apposite for this application because the tokens are embedded into a higher-dimensional space, allowing a block of program text to be represented as an intensity image. In programming, neighboring tokens are known to affect each other's meanings, and the convolution reflects this proximity. ## Challenges we ran into Tokenization of the C code became our biggest challenge. The research paper we were following used its own custom tokenizer that reduced the token space to 156 symbols, and we had a hard time matching that while still accounting for the different symbols that could be captured. ## Accomplishments that we're proud of We picked an idea that we thought was interesting, and we stuck with it beginning to end no matter the challenges. We had to overcome many hurdles, and although we didn't get the results that we would have liked, we are very happy with the progress we made.
## What we learned We learned about the process of lexing program text into a set of symbols that makes it easiest for a machine learning model to find patterns among the program data. We also expanded our thinking about machine learning and its applicability to various problems - even though our datasets were text files (at most described by a one-dimensional string of characters), embedding into a higher space and using convolution enables patterns that would otherwise be difficult to observe to become clear. ## What's next for ML-Based Software Vulnerability Detection To improve CodeHeat, the central model must be trained to better identify offending code. This can be accomplished by selecting appropriate token rules for a tokenizer that more effectively represents the program code and its meaning. Additionally, visualization of which parts of the code are most vulnerable would be desirable; this can be obtained by careful use of the output at the beginning of the convolutional network.
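For reference, a minimal Keras sketch of the seven-layer architecture described under "How we built it" above; the vocabulary size (156 tokens), sequence length, filter shapes, and unit counts are assumptions for illustration, not the exact parameters used:

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, SEQ_LEN, EMB_DIM = 156, 500, 13   # assumed sizes

model = tf.keras.Sequential([
    layers.Embedding(VOCAB, EMB_DIM, input_length=SEQ_LEN),          # (1) token embedding
    layers.Reshape((SEQ_LEN, EMB_DIM, 1)),                           # (2) treat text as a 1-channel image
    layers.Conv2D(64, kernel_size=(9, EMB_DIM), activation="relu"),  # (3) convolution over neighbouring tokens
    layers.MaxPooling2D(pool_size=(4, 1)),                           # (4) max pooling
    layers.Flatten(),                                                # (5) flatten
    layers.Dropout(0.5),                                             # (6) dropout
    layers.Dense(64, activation="relu"),                             # (7) three dense layers
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),                           # vulnerable / not vulnerable
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```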
## Inspiration Fraud is a crime that can impact any Canadian, regardless of their education, age or income. From January 2014 to December 2017, Canadians lost more than $405 million to fraudsters. ~ [Statistic Info](https://www.competitionbureau.gc.ca/eic/site/cb-bc.nsf/eng/04334.html) We wanted to develop technology that detects potentially fraudulent activity and give account owners the ability to cancel such transactions. ## How it works Using scikit-learn, we were able to detect patterns in a user's previous banking data provided by TD's davinci API. We examined categories such as the location of the purchase, the cost of the purchase, and the purchase category. Afterwards, we determined certain parameters for the cost of purchase based on the purchase category, and purchase locations to validate transactions that met the requirements. Transactions that were made outside of these parameters were deemed suspicious activity and an alert is sent to the account owner, providing them with the ability to validate/decline the purchase. If the transaction is approved, it is added to the MongoDB database with the rest of the user's previous transactions. [TD's davinci API](https://td-davinci.com/) [Presentation Slide Show](https://slides.com/malharshah/deck#/projectapati) [Github Repository](https://github.com/mshah0722/FraudDetectionDeltaHacks2020) ## Challenges we ran into Initially, we tried to use Tensorflow for our ML model to analyze the user's previous banking history to find patterns and make the parameters. However, we were having difficulty correctly implementing it and there were mistakes being made in the model. This is why we decided to switch to scikit-learn, which our team had success using and our ML model turned out as we had expected. ## Accomplishments that we are proud of Learning to use and implement Machine Learning with such a large data set that we were provided with. Training the model to detect suspicious activity was finally achieved after several attempts. ## What we learned Handling large data files. Pattern detection/Data Analysis. Data Interpretation and Model development. ## What's next for Project Apati Improving the model by looking at other categories in the data to refine the model based on other transactions statistics. Providing more user's data to improve the training and testing data-set for the model.
losing
## Inspiration The project is an **educational learning app** designed to teach English through a **structured roadmap**, particularly targeting **youth and students with *learning disabilities*.** It breaks down English learning into multiple levels, starting from the basics like alphabets and progressing to reading full sentences. Each level contains a variety of **mini-games that engage different senses**, using **visual and auditory cues** to enhance understanding and maintain the attention of students. Successfully completing games *rewards students with coins*, which they can use to purchase **AI-generated books** tailored to their preferences. The app provides continuous **guidance and motivation** through audio support, helping students when they get stuck, and **offering a *clear path* for next steps in their learning journey.** ## What our project does The project is an **innovative educational learning app** designed to address the unique challenges faced by youth, especially those with learning disabilities, in mastering English. It provides a **comprehensive, structured approach to language learning**, starting with the very basics like\*alphabets and gradually progressing to more advanced skills, such as *reading* and comprehending full sentences. The app is divided into multiple levels, each focused on specific topics, ensuring that students build a solid foundation before moving on to more complex concepts. Unlike existing educational games, this app offers a concise and effective ***roadmap*** that guides students *step-by-step* through the learning process, **reducing the overwhelming choice that can hinder progress for students with learning disabilities.** Each level includes a variety of mini-games, designed to be highly engaging and interactive, using a combination of visual and auditory cues to captivate students' attention. These games not only test knowledge but also **promote *multi-sensory learning*,** catering to short attention spans by being visually appealing and concise. A unique feature of the app is its **reward system**: when students successfully complete games, they *earn coins* that can be used to purchase **AI-generated books** within the app. These books are *custom-made* based on the student's preferences in topics, genres, and styles, offering personalized content that further strengthens their reading skills. Additionally, the app provides motivational support through **audio guidance**, helping students when they struggle and encouraging them to continue learning. Through this systematic, engaging, and supportive approach, the project empowers students to improve their literacy skills while making learning *fun and rewarding*. --- ### **Key Features:** * **Structured roadmap:** Guides students from basic to advanced English learning. * **Multi-sensory engagement:** Visual and auditory cues enhance the learning experience. * **Reward system:** *Earn coins* to purchase personalized AI-generated books. * **Inclusivity:** Audio support helps students when they face challenges. * **Motivational design:** Short attention span-friendly and visually appealing games. ## How we built it The project was built using **NextJS** and **React** for both the frontend and backend. We integrated **GPT-4o**, **DALL-E 3** and **Google's Web Speech** APIs for *generating AI images, AI-powered stories*, and *speech recognition* functionalities. 
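As an illustration of the reward-shop idea — turning a student's saved preferences into a short custom book — here is a minimal sketch using the OpenAI Python SDK (the app itself is Next.js/React; the prompt, reading level, and story length shown here are assumptions):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_book(topic: str, reading_level: str = "early reader") -> str:
    """Ask GPT-4o for a short, simply worded story matching the student's preferences."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": f"You write short, encouraging stories for a {reading_level} audience. "
                        "Use simple words and short sentences."},
            {"role": "user", "content": f"Write a five-paragraph story about {topic}."},
        ],
    )
    return response.choices[0].message.content

print(generate_book("a dinosaur who learns the alphabet"))
```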
To manage user data and in-game currency within the application, we utilized the **Prisma** library and **SQLite** for our database system. In addition, we developed an **Adobe Add-on** using **JavaScript**, enabling users to easily upload avatars by leveraging **React's built-in camera** library. This seamless integration enhances user interaction by providing a smooth, intuitive experience for customizing avatars. ## Challenges we ran into One of the hardest was working with Adobe Express. We set out to create our own add-on, but the process was far more complex and time-consuming than we expected. The limited documentation made things even trickier, and connecting the playground with our code led to a lot of trial and error. After hours of hard work, we finally got it working, and that moment felt like a huge win! We were definitely overambitious at the start of the hackathon. We had all these big ideas and plans, but as we got deeper into the project, it became clear that some of them were far more complicated than we anticipated. This forced us to take a step back and re-evaluate what was actually achievable within the time limit. We had to compromise and shift our focus to more realistic goals, scaling back some features while making sure we could still deliver a polished final product. It was a tough decision, but it taught us the importance of balancing ambition with practicality. Even though these challenges pushed us to the limit, solving them was incredibly rewarding. We learned so much along the way, and by the end of it, we were proud of what we achieved! ## What we learned Looking ahead, we have some exciting plans for the future of our project. One of our main goals is to expand the game to support teaching in multiple different languages, making it accessible to a wider audience. We also want to integrate more AI features to make the application even more responsive and efficient. By doing this, we hope to offer users more personalized support and improve accessibility, helping them on their learning journey in an even more interactive and engaging way. The possibilities are endless, and we’re excited to see where we can take it next! We’re incredibly passionate about the impact this project can have. With literacy rates dropping and children with special needs not always having access to the extra resources they need, we believe this tool can play a crucial role in supporting their success. Education is the foundation of opportunity, and by expanding our game to offer multi-language teaching and integrating AI for more personalized support, we hope to bridge some of those gaps. We see this project as more than just a game—it’s a way to give children, especially those who need extra help, the tools they need to thrive in their learning journey.
## Inspiration We found that even though there are so many resources for learning to code, all of them fall into one of two categories: they are either in a generic course and grade structure, or are oversimplified to fit a high-level mould. We thought the ideal learning environment would be an interactive experience where players have to learn to code, not for a grade or score, but to progress an already interactive game. The code the students learn is actual Python script, but it is guided with the help of an interactive tutorial. ## What it does This code models a "dinosaur game" structure where players have to jump over obstacles. However, as the player experiences more and more difficult obstacles through the level progression, they are encouraged to automate the character behavior with the use of Python commands. Players can code the behavior for the given level, telling the player to "jump when the obstacle is 10 pixels away" with workable Python script. The game covers the basic concepts behind integers, loops, and boolean statements. ## How we built it We began with a Pygame template and created a game akin to the "Dinosaur game" of Google Chrome. We then integrated a text editor that allows quick and dirty compilation of Python code into the visually appealing format of the game. Furthermore, we implemented a file structure for all educators to customize their own programming lessons and custom functions to target specific concepts, such as for loops and while loops. ## Challenges we ran into We had the most trouble coming up with an idea that is both educational and fun. Finding that halfway point pushed both our creativity and technical abilities. While there were some ideas that heavily utilized AI and VR, we knew that we could not code that up in 36 hours. The idea we settled on still challenged us, but was something we thought was accomplishable. We also had difficulty with the graphics side of the project, as that is something that we do not actively focus on learning through standard CS courses in school. ## Accomplishments that we're proud of We were most proud of the code incorporation feature. We had so many different approaches for incorporating the user input into the game that finding one that worked proved to be very difficult. We considered making pre-written code snippets that the game would compare to the user input, or creating a pseudocode system that could interpret the user's intentions. The idea we settled upon, the most graceful, was a method through which the user input is directly injected into the character behavior instantiation, meaning that the user code is directly what is running the character--no proxies or comparison strings. We are proud of the cleanliness and truthfulness this holds with our mission statement--giving the user the most hands-on and accurate coding experience. ## What we learned We learned so much about game design and the implementation of computer science skills we learned in the classroom. We also learned a lot about education, through both introspection into ourselves as well as some research articles we found about how best to teach concepts and drill practice. ## What's next for The Code Runner The next steps for Code Runner would be adding more concepts covered through the game functionality. We were hoping to cover while-loops and other Python elements that we thought were crucial building blocks for anyone working with code. We were also hoping to add some gravity features where obstacles can jump with realistic believability.
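A stripped-down sketch of that direct-injection idea is shown below: the user's Python snippet is executed each frame against the live game objects, so their code literally drives the character. The class names, the namespace contents, and the lack of real sandboxing are simplifications for illustration, not the project's exact implementation.

```python
class Player:
    def __init__(self):
        self.jumping = False

    def jump(self):
        self.jumping = True

# what the student typed into the in-game editor
user_code = """
if obstacle_distance < 10:
    player.jump()
"""

def run_user_behavior(player, obstacle_distance):
    # the student's code runs directly against the real game objects each frame
    namespace = {"player": player, "obstacle_distance": obstacle_distance}
    exec(user_code, {"__builtins__": {}}, namespace)

player = Player()
for distance in (40, 25, 9):          # simulated obstacle distances over three frames
    run_user_behavior(player, distance)
    print(distance, "->", "JUMP" if player.jumping else "run")
    player.jumping = False
```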
## Inspiration Our team’s mission is to build a tool that alleviates stress on Canadians during the hefty tax season. With Canadians spending over 7 hours to complete their tax returns and over $5 billion to cover personal income compliance costs, we decided to come up with a solution to help Canadians save time and money. We created TaxEasy as a web application that uses machine learning to generate a tax return file based on your tax slips! With TaxEasy, Canadians don’t need to understand the complications involved with taxes to file their tax returns. All they need to do is upload their tax slips and TaxEasy will do the rest. While filing taxes only occurs once a year, it is a gruelling task that takes up time and money. We built TaxEasy in hopes of making Canadians’ lives easier so that they can use their saved time to explore their interests and spend time with their loved ones. ## What it does TaxEasy is a web application that simplifies the process of completing a tax return as it generates a tax return file for Canadians by taking the information given on tax slips. Using optical character recognition (OCR), TaxEasy recognizes specific categories in the uploaded tax slips and fills out the tax return form accordingly. For instance, when scanning the T4 form, TaxEasy looks for the “Employment Income” box and inserts the given value into the tax return form’s section for Employment Income. This is all done with a simple click of a button. Users only need to upload their tax slips for this process to occur. ## How we built it We used Microsoft Azure’s Optical Character Recognition (OCR) API for our machine learning implementation. This API was used to train 6 models to recognize the distinct categories present in the following tax slips: T4, T4A, T4A(OAS), T4AP, T1032, and T4E. During the training process, we used supervised learning by creating a labelled training set. We assigned labels based on the information needed on a tax return form. For instance, a tax return form requires an individual’s Employment Income on their T4. Thus, we trained our model to identify where that is on a T4 based on our labels. Moreover, we used Pandas, a Python library, to store the tax return data into a csv-file which was then used to fill in a blank tax return form. For our front-end we used HTML, CSS, Bootstrap, and Python Flask to ensure responsiveness and the smooth integration between our front-end and back-end. ## Challenges we ran into The biggest challenge was the learning curve for us. Having never used Python Flask or Microsoft Azure’s APIs, we spent the majority of our first day understanding the basics of each technology. This meant diving deep into YouTube videos and documentation reading. Once we gained an understanding of the technologies, we were ready to start our project! However, we were faced with the challenge of obtaining a dataset of tax slips. To overcome this, we decided to create our own tax slips using the files provided by the CRA. In order to maintain consistency with realistic tax slips, we used our own tax slips as reference. Overall, the challenges we had were overcome with persistence and creativity which were powered by our desire to learn. ## Accomplishments that we're proud of Starting the project, we were not confident that we could complete it within the timeframe since we were both going out of our comfort zones to learn new concepts. Thus, completing the project is an accomplishment in itself because it demonstrates our passion for learning new things.
Moreover, we are proud to have created an application that can have an impact on Canadians. With time being more precious than ever, we’ve enabled Canadians to spend more of that time on their own wellbeing. Overall, we’re extremely proud that we were able to learn new skills and make an impact. ## What we learned With no experience with APIs, we learned how to use Microsoft Azure’s OCR and Storage APIs in order to create a machine learning implementation to recognize the different structures given in tax slips. During this process, we got first-hand experience with supervised learning by having to label our data to increase our model accuracy. Moreover, we learned how to use Python to convert data into a csv-file in order to fill out a blank PDF file. On the front-end, we learned how to use Flask by leveraging its HTTP methods to allow for a smooth integration with our backend. ## What's next for TaxEasy For the future, we plan to implement a questionnaire feature that will allow users to input information that cannot be gathered from tax slips, such as email, birthdate, etc. Moreover, we want to enhance our machine learning model by training it on a larger set of tax slips. We decided to only train our models on 6 types of tax slips due to the limited timeframe and the need to deliver a working product.
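The slip-to-form step reduces to a small amount of glue code; here is a rough Python sketch of how recognized label/value pairs could be collected into a row and written out with pandas before filling the blank return (the field names and dollar amounts are made up for illustration):

```python
import pandas as pd

# label -> value pairs as returned by the trained recognizer for one T4 slip (example values)
t4_fields = {
    "employment_income": 52340.00,     # box 14
    "income_tax_deducted": 8120.55,    # box 22
    "cpp_contributions": 3166.45,      # box 16
    "ei_premiums": 952.74,             # box 18
}

# one row per taxpayer; additional slips would add to or sum into the same columns
return_df = pd.DataFrame([t4_fields])
return_df.to_csv("tax_return_fields.csv", index=False)
print(return_df.T)
```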
losing
## Inspiration We are a team of goofy engineers and we love making people laugh. As Western students (and a stray Waterloo engineer), we believe it's important to have a good time. We wanted to make this game to give people a reason to make funny faces more often. ## What it does We use OpenCV to analyze webcam input and initiate signals using winks and blinks. These signals control a game that we coded using PyGame. See it in action here: <https://youtu.be/3ye2gEP1TIc> ## How to get set up ##### Prerequisites * Python 2.7 * A webcam * OpenCV 1. [Clone this repository on Github](https://github.com/sarwhelan/hack-the-6ix) 2. Open the command line 3. Navigate to the working directory 4. Run `python maybe-a-game.py` ## How to play **SHOW ME WHAT YOU GOT** You are playing as Mr. Poopybutthole, who is trying to tame some wild GMO pineapples. Dodge the island fruit and get the heck out of there! ##### Controls * Wink left to move left * Wink right to move right * Blink to jump **It's time to get SssSSsssSSSssshwinky!!!** ## How we built it We used Haar cascades to detect faces and eyes. When a user's eyes disappear, we can detect a wink or blink and use this to control Mr. Poopybutthole's movements. ## Challenges we ran into * This was the first game any of us have ever built, and it was our first time using Pygame! Inevitably, we ran into some pretty hilarious mistakes which you can see in the gallery. * Merging the different pieces of code was by far the biggest challenge. Perhaps merging shorter segments more frequently could have alleviated this. ## Accomplishments that we're proud of * We had a "pineapple breakthrough" where we realized how much more fun we could make our game by including this fun fruit. ## What we learned * It takes a lot of thought, time and patience to make a game look half decent. We have a lot more respect for game developers now. ## What's next for ShwinkySwhink We want to get better at recognizing movements. It would be cool to expand our game to be a stand-up dance game! We are also looking forward to making more hacky hackeronis to hack some smiles in the future.
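The core of the wink/blink detection can be sketched in a few lines of OpenCV: detect a face, count the eyes inside it, and map "zero eyes" to a blink and "one eye" to a wink. The cascade parameters and the left/right mapping (which flips with a mirrored webcam) are assumptions, not the project's exact values.

```python
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w], scaleFactor=1.1, minNeighbors=10)
        if len(eyes) == 0:
            print("BLINK -> jump")
        elif len(eyes) == 1:
            eye_center_x = eyes[0][0] + eyes[0][2] // 2
            # which half of the face the visible eye sits in tells us which eye is closed
            print("WINK ->", "move left" if eye_center_x > w // 2 else "move right")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```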
## Inspiration After looking at the Hack the 6ix prizes, we were all drawn to the BLAHAJ. On a more serious note, we realized that one thing we all have in common is accidentally killing our house plants. This inspired a sense of environmental awareness and we wanted to create a project that would encourage others to take better care of their plants. ## What it does Poképlants employs a combination of cameras, moisture sensors, and a photoresistor to provide real-time insight into the health of our household plants. Using this information, the web app creates an interactive gaming experience where users can gain insight into their plants while levelling up and battling other players’ plants. Stronger plants have stronger abilities, so our game is meant to encourage environmental awareness while creating an incentive for players to take better care of their plants. ## How we built it ### Back-end: The back end was a LOT of Python. We took on a new challenge and decided to try out Socket.IO as a websocket layer so that we could support multiplayer; this messed us up for hours and hours until we finally got it working. Aside from this, we have an Arduino to read the moisture of the soil and the brightness of the surroundings, as well as to capture a picture of the plant, where we leveraged computer vision to recognize what the plant is. Finally, using LangChain, we developed an agent to relay all of the Arduino info to the front end and manage the states, and for storage we used MongoDB to hold all of the data needed. ### Front-end: The front-end was developed with **React.js**, which we used to create a web-based game. We were inspired by the design of old pokémon games, which we thought might evoke nostalgia for many players. ## Challenges we ran into We had a lot of difficulty setting up Socket.IO and connecting the API with it to the front end and the database. ## Accomplishments that we're proud of We are incredibly proud of integrating our web sockets between the frontend and backend and using Arduino data from the sensors. ## What's next for Poképlants * Since the game was designed with a multiplayer experience in mind, we want to have more social capabilities by creating a friends list and leaderboard * Another area to explore would be a connection to the community; for plants that are seriously injured, we could suggest and contact local botanists for help * Some users might prefer the feeling of a mobile app, so one next step would be to create a mobile solution for our project
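For context, the multiplayer piece described above boils down to a small python-socketio server that rebroadcasts one player's plant state to everyone else; this is a minimal sketch with made-up event names, not the project's actual server:

```python
import socketio
from aiohttp import web

sio = socketio.AsyncServer(cors_allowed_origins="*")
app = web.Application()
sio.attach(app)

@sio.event
async def connect(sid, environ):
    print("trainer connected:", sid)

@sio.event
async def plant_update(sid, data):
    # relay one player's latest sensor readings / battle move to all other players
    await sio.emit("plant_update", data, skip_sid=sid)

if __name__ == "__main__":
    web.run_app(app, port=5000)
```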
## Inspiration Learning never ends. It's the cornerstone of societal progress and personal growth. It helps us make better decisions, fosters further critical thinking, and facilitates our contribution to the collective wisdom of humanity. Learning transcends the purpose of solely acquiring knowledge. ## What it does Understanding the importance of learning, we wanted to build something that can make learning more convenient for anyone and everyone. Being students in college, we often find ourselves meticulously surfing the internet in hopes of relearning lectures/content that was difficult. Although we can do this, spending half an hour to sometimes multiple hours is simply not the most efficient use of time, and we often leave our computers more confused than how we were when we started. ## How we built it A typical scenario goes something like this: you begin a Google search for something you want to learn about or were confused by. As soon as you press search, you are confronted with hundreds of links to different websites, videos, articles, news, images, you name it! But having such a vast quantity of information thrown at you isn’t ideal for learning. What ends up happening is that you spend hours surfing through different articles and watching different videos, all while trying to piece together bits and pieces of what you understood from each source into one cohesive generalization of knowledge. What if learning could be made easier by optimizing search? What if you could get a guided learning experience to help you self-learn? That was the motivation behind Bloom. We wanted to leverage generative AI to optimize search specifically for learning purposes. We asked ourselves and others, what helps them learn? By using feedback and integrating it into our idea, we were able to create a platform that can teach you a new concept in a concise, understandable manner, with a test for knowledge as well as access to the most relevant articles and videos, thus enabling us to cover all types of learners. Bloom is helping make education more accessible to anyone who is looking to learn about anything. ## Challenges we ran into We faced many challenges when it came to merging our frontend and backend code successfully. At first, there were many merge conflicts in the editor but we were able to find a workaround/solution. This was also our first time experimenting with LangChain.js so we had problems with the initial setup and had to learn their wide array of use cases. ## Accomplishments that we're proud of/What's next for Bloom We are proud of Bloom as a service. We see just how valuable it can be in the real world. It is important that society understands that learning transcends the classroom. It is a continuous, evolving process that we must keep up with. With Bloom, our service to humanity is to make the process of learning more streamlined and convenient for our users. After all, learning is what allows humanity to progress. We hope to continue to optimize our search results, maximizing the convenience we bring to our users.
winning
## Inspiration Many universities have offices like [The Office of Career Consulting](https://www.vpul.upenn.edu/careerservices/) here at UPenn, yet these resources are often criminally under-maintained and underused. Meanwhile, students and job-seekers across the country are unsure about the industry standard - what should I do to get a career? What have others done to get the career I want? What is expected of me? ## What it does ProjectMe (theoretically) would operate off the proprietary data set of careers we mined using Selenium and Python during PennApps to return users relevant career data. Specifically, the system could be queried for paths from one career to another, for common paths from a career, and for other interesting data visualizations. This could then be served to the user through interactive visualizations to best help the user understand what others in his/her field are doing to get employment. Furthermore, the system could connect users to others on the site (if they opt in) for light mentoring and networking opportunities. ## How we built it By generating a graph of all possible careers with directed edges from career to career (and sorting individual resume elements into careers using Tensorflow topic modelling), we can generate a dataset of paths between different given careers. Then we can walk this network with an algorithm not unlike Google Maps' distance algorithm, except optimizing for longest distance (the greatest number of people on the edges walked). This produces the most popular path. We do this for a few paths and serve these paths to the user.
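A toy sketch of that walk, using networkx with made-up careers and counts — each edge weight is the number of people observed making that transition, and the "longest" path by total weight is the most-travelled one:

```python
import networkx as nx

G = nx.DiGraph()
# edge weight = number of mined resumes that made this exact transition (toy numbers)
G.add_weighted_edges_from([
    ("Intern", "Analyst", 40),
    ("Analyst", "Associate", 25),
    ("Associate", "Product Manager", 18),
    ("Intern", "Product Manager", 3),
])

# most popular overall path: maximise total people along the walked edges
print(nx.dag_longest_path(G, weight="weight"))

# most popular path between two specific careers
best = max(nx.all_simple_paths(G, "Intern", "Product Manager"),
           key=lambda p: sum(G[u][v]["weight"] for u, v in zip(p, p[1:])))
print(best)
```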
## Inspiration Have you ever had a brain fart in the middle of an interview? The interviewer asks a straightforward question, and you forget everything you've ever learned and all the details about your experiences. We all have. We wanted to solve this issue with Premove.ai, a tool to help you be your best in any future interview. ## What it does Premove.ai uses the OpenAI API to answer your interview questions and recall information from your experiences. Simply upload your resume and Premove will remember all your details. The companion will generate personalized answers based on your resume to help you win that tough interview! ## How we built it Premove.ai is a Chrome extension that opens in your video conference. When you need help with an answer, simply click the 🧠 button and watch as a unique and informed answer is generated for you in real-time. ## Challenges we ran into * Setting up the front end and back end. * Setting up the APIs. * Making the generated responses display dynamically and be easy to read. * Adding functionality to let the API parse PDF uploads. ## Accomplishments that we're proud of We successfully hacked the meeting page of Google Meet to have real-time voice-to-text conversion. ## What we learned * The workflow of building a software project. * How to use the Google Cloud Speech-to-Text API. * How to use the ChatGPT API. ## What's next for Interview Companion Add functionality to allow Companion to learn more contextual information: * allow job description uploads * generate mock interview questions * provide feedback on interview performance
## Inspiration We know the struggles of students. Trying to get to that one class across campus in time. Deciding what to make for dinner. But there was one that stuck out to all of us: finding a study spot on campus. There have been countless times when we wander around Mills or Thode looking for a free space to study, wasting our precious study time before the exam. So, taking inspiration from parking lots, we designed a website that presents a live map of the free study areas of Thode Library. ## What it does A network of small mountable microcontrollers that uses ultrasonic sensors to check if a desk/study spot is occupied. In addition, it uses machine learning to determine peak hours and suggested availability from the aggregated data it collects from the sensors. A webpage then presents a live map, as well as peak hours and suggested availability. ## How we built it We used a Raspberry Pi 3B+ to receive distance data from an ultrasonic sensor and used a Python script to push the data to our database running MongoDB. The data is then pushed to our webpage running Node.js and Express.js as the backend, where it is updated in real time on a map. Using the data stored in our database, a machine learning algorithm was trained to determine peak hours and the best time to go to the library. ## Challenges we ran into We had a **life-changing** experience learning back-end development, delving into new frameworks such as Node.js and Express.js. Although we were comfortable with front end design, linking the front end and the back end together to ensure the web app functioned as intended was challenging. For most of the team, this was the first time dabbling in ML. While we were able to find a Python library to assist us with training the model, connecting the model to our web app with Flask was a surprising challenge. In the end, we persevered through these challenges to arrive at our final hack. ## Accomplishments that we are proud of We think that our greatest accomplishment is the sheer amount of learning and knowledge we gained from doing this hack! Our hack seems simple in theory but putting it together was one of the toughest experiences at any hackathon we've attended. Pulling through and not giving up until the end was also noteworthy. Most importantly, we are all proud of our hack and cannot wait to show it off! ## What we learned Through rigorous debugging and non-stop testing, we gained more experience with JavaScript and its various frameworks such as Node.js and Express.js. We also got hands-on involvement with programming concepts and technologies such as MongoDB, machine learning, HTML, and scripting, where we learned the applications of these tools. ## What's next for desk.lib If we had more time to work on this hack, we would have been able to increase cost effectiveness by branching four sensors off one chip. Also, we would implement more features to make an impact in other areas, such as the ability to create social group beacons where others can join in for study, activities, or general socialization. We were also debating whether to integrate a solar panel so that the installation process can be easier.
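A minimal sketch of the sensor-to-database loop described above — HC-SR04-style ultrasonic timing with RPi.GPIO and a pymongo insert; the pin numbers, desk ID, occupancy threshold, and connection string are assumptions:

```python
import time
import RPi.GPIO as GPIO
from pymongo import MongoClient

TRIG, ECHO = 23, 24            # BCM pin numbers (assumed wiring)
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def distance_cm() -> float:
    GPIO.output(TRIG, True)
    time.sleep(0.00001)        # 10 microsecond trigger pulse
    GPIO.output(TRIG, False)
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        stop = time.time()
    return (stop - start) * 34300 / 2   # speed of sound in cm/s, halved for the round trip

spots = MongoClient("mongodb://localhost:27017")["desklib"]["spots"]
try:
    while True:
        d = distance_cm()
        spots.insert_one({"desk": "thode-2f-014", "occupied": d < 60, "distance_cm": d, "ts": time.time()})
        time.sleep(30)
finally:
    GPIO.cleanup()
```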
losing
## Inspiration With everything being done virtually these days, including this hackathon, we spend a lot of time at our desks and behind screens. It's more important now than ever before to take breaks from time to time, but it's easy to get lost in our activities. Studies show that breaks increase overall energy and productivity, and decrease exhaustion and fatigue. If only we had something to keep us from forgetting... ## What it does The screen connected to the microcontroller tells you when it's time to give your eyes a break, or to move around a bit to get some exercise. Currently, it tells you to take a 20-second break for your eyes for every 20 minutes of sitting, and a few minutes of exercise break for every hour of sitting. ## How we built it The hardware includes an RPi 3B+, aluminum foil contacts underneath the chair cushion, a screen, and wires to connect all these components. The software includes the RPi.GPIO library for reading the signal from the contacts and the tkinter library for the GUI displayed on the screen. ## Challenges we ran into Some Python libraries were written for Python 2 and others for Python 3, so we took some time to resolve these dependency issues. The compliant structure underneath the cushion had to be a specific size and rigidity to allow the contacts to move appropriately when someone gets up from or sits down on the chair. Finally, the contacts were sometimes inconsistent in the signals they sent to the microcontroller. ## Accomplishments that we're proud of We built this system in a few hours and were successful in not spending all night or all day working on the project! ## What we learned Tkinter takes some time to learn to properly utilize its features, and hardware debugging needs to be a very thorough process! ## What's next for iBreak Other kinds of reminders could be implemented later, like a reminder to drink water, or custom exercises that involve repeatedly sitting down and standing up.
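A minimal sketch of the sit-timer logic described above is shown below: poll the foil contacts through RPi.GPIO and update a tkinter label when a break is due. The pin number, polling interval, and message wording are assumptions.

```python
# Sketch of the iBreak sit-timer; the GPIO pin and thresholds are assumptions.
import time
import tkinter as tk
import RPi.GPIO as GPIO

CONTACT_PIN = 17                 # assumed pin wired to the foil contacts
EYE_BREAK_S = 20 * 60            # 20 minutes of sitting -> 20-second eye break
EXERCISE_S = 60 * 60             # 1 hour of sitting -> exercise break

GPIO.setmode(GPIO.BCM)
GPIO.setup(CONTACT_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

root = tk.Tk()
root.title("iBreak")
label = tk.Label(root, text="Welcome!", font=("Helvetica", 24))
label.pack(padx=40, pady=40)

sat_down_at = None

def poll():
    global sat_down_at
    seated = GPIO.input(CONTACT_PIN) == GPIO.HIGH   # contacts close when someone sits
    if not seated:
        sat_down_at = None
        label.config(text="Chair is empty")
    else:
        sat_down_at = sat_down_at or time.time()
        sitting = time.time() - sat_down_at
        if sitting > EXERCISE_S:
            label.config(text="Stand up and move for a few minutes!")
        elif sitting > EYE_BREAK_S:
            label.config(text="Give your eyes a 20-second break")
        else:
            label.config(text="Seated for %d min" % (sitting // 60))
    root.after(1000, poll)      # poll once a second

poll()
root.mainloop()
```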
## Inspiration We realized how visually impaired people find it difficult to perceive objects coming near them, whether they are out on the road or inside a building. They encounter potholes and stairs, and things get really hard for them. We decided to tackle the issue of accessibility to support the Government of Canada's initiative to make work and public places completely accessible! ## What it does This is an IoT device designed to be wearable or attachable to any visual aid being used. It uses depth perception to perform obstacle detection, as well as integrating Google Assistant for outdoor navigation and all the other "smart activities" that the assistant can do. The assistant will provide voice directions (which can be geared towards using Bluetooth devices easily) and the sensors will help in avoiding obstacles, which helps in increasing self-awareness. Another beta feature was to identify moving obstacles and play sounds so the person can recognize those moving objects (e.g. barking sounds for a dog). ## How we built it It's a Raspberry Pi-based device, and we integrated the Google Cloud SDK to be able to use the Vision API, the Assistant, and all the other features offered by GCP. We have sensors for depth perception and buzzers to play alert sounds, as well as a camera and microphone. ## Challenges we ran into It was hard for us to set up the Raspberry Pi, having had no background with it. We had to learn how to integrate cloud platforms with embedded systems and understand how microcontrollers work, especially since we are not from an engineering background and two members are high school students. Also, multi-threading was a challenge for us in the embedded architecture. ## Accomplishments that we're proud of After hours of grinding, we were able to get the Raspberry Pi working, as well as implementing depth perception and location tracking using Google Assistant, along with object recognition. ## What we learned Working with hardware is tough; even though you could see what was happening, it was hard to interface software and hardware. ## What's next for i4Noi We want to explore more ways where i4Noi can help make things more accessible for blind people. Since we already have Google Cloud integration, we could integrate another feature where we play sounds for living obstacles so special care can be taken; for example, when a dog comes in front, we produce barking sounds to alert the person. We would also like to implement multi-threading for our two processes and make this device as wearable as possible, so it can make a difference in the lives of the people.
## Inspiration: The inspiration for RehabiliNation comes from a mixture of our love for gaming and our personal experiences researching and working with those who have physical and mental disabilities. ## What it does: Provides an accessible gaming experience for people with physical disabilities and motivates those fighting through the struggles of physical rehabilitation. It can also be used to track the progress people make while going through their healing process. ## How we built it: The motion control armband collects data using the gyroscope module linked to the Arduino board. It sends the data back to the Arduino serial monitor in the form of angles. We then use a Python script to read the data from the serial monitor and interpret it into keyboard input, which allows us to interface with multiple games. Currently, it is used to play our Pac-Man game, which is written in Java. ## Challenges we ran into: Our main challenge was determining how to utilize the gyroscope with the Arduino board and figuring out how to receive and interpret the data with a Python script. We also came across some issues with calibrating the motion sensors. ## Accomplishments that we're proud of Throughout our creation process, we all managed to learn about new technologies, skills, and programming concepts. We may have been pushed into the pool, but it was quite a fun way to learn, and in the end we came out with a finished product capable of helping people in need. ## What we learned We learned a great amount about the hardware product process, as well as the utilization of hardware in general. In general, it was a difficult but rewarding experience, and we thank U of T for providing us with this opportunity. ## What's next for RehabiliNation RehabiliNation will continue to refine our products in the future, including the use of better materials and more responsive hardware pieces than what was shown in today's proof of concept. Hopefully our products will be implemented by physical rehabilitation centres to help brighten the rehab process.
winning
# FarSight Safety features: over-extension indicator, struggle detector, person detector, fall detector, email notification, and GPS location. Made with an Arduino as a sensor hub. The Arduino sends alerts as it detects them over serial communication to the Raspberry Pi. The Pi acts as our main microcomputer and interprets the information. The Pi also takes images and uses OpenCV to locate the person using the walker. When a potential issue occurs, all family members on the email list receive an email notifying them of the issue, the walker's current GPS location, and the last image taken.
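The Pi-side half of the pipeline described above could look roughly like the following sketch: read alert strings that the Arduino prints over serial and email the family list. The serial port, baud rate, SMTP server, credentials, and message format are assumptions rather than the project's real values.

```python
# Hedged sketch of the Pi-side alert handler; port, baud, SMTP details, and the
# "ALERT" message format are assumptions.
import smtplib
from email.message import EmailMessage
import serial

FAMILY = ["family.member@example.com"]          # assumed notification list

def notify(alert_text):
    msg = EmailMessage()
    msg["Subject"] = "FarSight alert"
    msg["From"] = "farsight.walker@example.com"
    msg["To"] = ", ".join(FAMILY)
    msg.set_content(alert_text)                  # could also include the GPS fix / last image
    with smtplib.SMTP("smtp.example.com", 587) as s:
        s.starttls()
        s.login("farsight.walker@example.com", "app-password")   # placeholder credentials
        s.send_message(msg)

with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as arduino:  # assumed port and baud rate
    while True:
        line = arduino.readline().decode(errors="ignore").strip()
        if line.startswith("ALERT"):             # e.g. "ALERT:FALL" printed by the Arduino
            notify(line)
```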
Check it out at <https://junto.verafy.me> ## Inspiration In recent news, many immigrant families have been torn apart by ICE as the government attempts to crack down on illegal immigration. This often results in long-term or even indefinite separation of parents and children, since the parents, many of whom are still fighting to stay in the US, have little to no information or time to track down their children at one of many federal detention centers. Juntos' mission is to create an avenue by which these separated families can be reunited. Because it is typically hard for young children to reach out to their families beyond being able to recognize them, we have employed methods such as deep learning and computer vision to help create a network of intelligence that will hopefully bring these families together. ## What it does Juntos, in a broad sense, serves as an image-to-individual matching system. Users, including both parents and children at detention facilities, will submit a photo of themselves and a photo of the person they are looking for, if they have one. In the case that the individual lacks such a photo, we use a generative adversarial network (GAN) to help reconstruct an image of an individual from only basic descriptions of facial features such as face shape. Juntos then uses a mobile platform to create a common platform for users to identify one another, with features such as location maps of images in the database, as well as generating suggestions of others' profile pictures that closely resemble the target description. ## How we built it The GAN model behind Juntos comprises two pretrained models, one trained on the CelebA dataset, which has facial biometrics labeled for each image, and the other on the Flickr-Faces-HQ dataset, an extremely diverse set of pictures of faces. Next, we used these models to create a mapping from measurable facial features to the **latent space**, or the input space to the GAN, thus allowing us to recreate faces based off a set of descriptors. In order to make Juntos a more user-friendly experience, we experimented with multiple types of UI to select the features (buttons and sliders). First of all, we tried out buttons, which had one major drawback: selection would not always be precise because some features were interconnected (e.g. increasing the probability of a goatee on the face naturally made the entire face appear more masculine, even when 'female' was selected). In the end, we decided to go with a slider system, which allowed for free-form configuration of features without confusing the user. ## Challenges we ran into One challenge that we ran into was that when we were calculating the mapping from the feature space to the latent space, the amount of data we had to use to have a good map was extremely large, and exceeded the capabilities of the server. As a result, we had to use clever pipelining to be able to do this processing, such as computing the mapping in a series of iterated steps. Another challenge we faced was integrating the project components. Specifically, because our project had a large range of technologies, we ended up each working on our own portion of the project at first, so integrating the project effectively at the end was crucial to our success.
## Accomplishments that we're proud of One accomplishment we're proud of is that we managed to understand the details behind the GAN algorithm through reading many papers and experiments online, even though we did not come in with any prior experience with them. We also put together a fully functional pipeline in a condensed amount of time. One of our group members even managed to connect on LinkedIn with one of the first authors of the GAN! ## What we learned Through building our project, we learned good design and version control practices that helped prevent major setbacks when things went wrong. We also learned about servers and various other technologies that together comprise a full-stack project. ## What's next for Juntos With regard to the technical aspect of our project, we would like to improve the feature extractor we used. As noted above, it is currently trained on celebrity images, and so is biased toward features that celebrities tend to have. Training the feature extractor on a dataset more representative of our target population, with labels more closely corresponding to features we want to tune, would make the generator easier to work with for arriving at a target image. This project can be generalized beyond reuniting migrant families to searching for people in general. Some possibilities are searching for lost children and missing persons, or allowing law enforcement to create more accurate pictures from eyewitness accounts. Swapping the backend out for GANs trained on other datasets would allow for searching for other things as well - in particular, a GAN trained to generate pictures of dogs, or pictures of cats, would allow people to use this app to search for runaway pets.
## Inspiration This project was inspired by the Professional Engineering course taken by all first-year engineering students at McMaster University (1P03). The final project for the course was to design a solution to a problem of your choice that was given by St. Peter's Residence at Chedoke, a long-term residence care home located in Hamilton, Ontario. One of the projects proposed by St. Peter's was to create a falling alarm to notify the nurses in the event of one of the residents having fallen. ## What it does It notifies nurses if a resident falls or stumbles via a push notification to the nurses' phones directly, or ideally to a nurses' station within the residence. It does this using an accelerometer in a shoe/slipper to detect the orientation and motion of the resident's feet, allowing us to accurately tell if the resident has encountered a fall. ## How we built it We used a Particle Photon microcontroller alongside an MPU6050 gyro/accelerometer to collect information about the movement of a resident's foot and determine if the movement mimics the patterns of a typical fall. Once a typical fall has been read by the accelerometer, we used Twilio's RESTful API to transmit a text message to an emergency contact (or possibly a nurse/nurse station) so that they can assist the resident. ## Challenges we ran into Upon developing the algorithm to determine whether a resident has fallen, we discovered that there are many cases where a resident's feet could be in a position that can be interpreted as "fallen". For example, lounge chairs would position the feet as if the resident were laying down, so we needed to account for cases like this so that our system would not send an alert to the emergency contact just because the resident wanted to relax. To account for this, we analyzed the jerk (the rate of change of acceleration) to determine patterns in feet movement that are consistent in a fall. The two main patterns we focused on were: 1. A sudden impact, followed by the shoe changing orientation from a relatively horizontal position to a position perpendicular to the ground. (Critical alert sent to emergency contact). 2. A non-sudden change of shoe orientation to a position perpendicular to the ground, followed by a constant, sharp movement of the feet for at least 3 seconds (think of a slow fall, followed by a struggle on the ground). (Warning alert sent to emergency contact). ## Accomplishments that we're proud of We are proud of developing an algorithm that can consistently communicate to an emergency contact about the safety of a resident. Additionally, fitting the hardware available to us into the sole of a shoe was quite difficult, and we are proud of being able to fit each component in the small area cut out of the sole. ## What we learned We learned how to use RESTful APIs, as well as how to use the Particle Photon to connect to the internet. Lastly, we learned that critical problem breakdowns are crucial in the development process. ## What's next for VATS Next steps would be to optimize our circuits by using equivalent components in a much smaller form. By doing this, we would be able to decrease the footprint (pun intended) of our design within a client's shoe. Additionally, we would explore other areas of the shoe where we could store our system (such as the tongue).
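For illustration only (not the team's firmware), the jerk heuristic described above can be sketched as follows: jerk is the rate of change of acceleration, so a large spike followed by a near-vertical shoe orientation is treated as a hard fall. The thresholds are assumptions.

```python
# Illustrative sketch of the jerk-based fall heuristic; thresholds are assumptions.
IMPACT_JERK = 30.0      # assumed m/s^3 threshold for a "sudden impact"
VERTICAL_DEG = 70       # assumed pitch (degrees from horizontal) meaning the shoe ends up vertical

def jerk(a_prev, a_now, dt):
    """Magnitude of the change in acceleration per second between two samples."""
    return sum((n - p) ** 2 for n, p in zip(a_now, a_prev)) ** 0.5 / dt

def classify(samples, pitches, dt=0.02):
    """samples: list of (ax, ay, az) readings; pitches: shoe pitch per sample, in degrees."""
    for i in range(1, len(samples)):
        hard_impact = jerk(samples[i - 1], samples[i], dt) > IMPACT_JERK
        ends_vertical = pitches[min(i + 10, len(pitches) - 1)] > VERTICAL_DEG
        if hard_impact and ends_vertical:
            return "critical"          # pattern 1: sudden impact, shoe ends up vertical
    # pattern 2 (slow fall followed by a prolonged struggle) would scan for sustained jerk instead
    return "ok"
```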
partial
## Inspiration The inspiration for our auto-scheduling project emerged from the **increased need for automated and personalized recommendations** in travel planning tools. We recognized that multi-agent LLMs hold immense potential to tackle complex tasks such as **personalized scheduling**. This potential drove our team to explore how these models could offer solutions that integrate into daily-life problems. We were also inspired to create this app because of the tediousness of scheduling, and we recognized that our tool could save lots of time. ## Our Goal We aim to simplify the process of planning trips and transportation for important outings. We also want to introduce users to popular and fun locations along the way. DayGenie **automatically generates recommendations** based on user preferences and the user's schedule. ## How we built it (Structure of our model) ### Agent Workflow DayGenie is constructed as a **decentralized multi-agent large language model**. We defined 5 LLM agents (InfoAgent, MapAgent, RedditAgent, SummaryAgent, FeedbackAgent), each with specific prompting for its purpose. **User Input**: user preferences (str), Google Calendar schedule (Google Calendar API call) * **InfoAgent**: Fetches the user input and generates prompts for the MapAgent and RedditAgent. * **MapAgent**: Finds the locations of events in the calendar. * **RedditAgent**: Calls the Reddit API, then fetches and analyzes reviews based on the important keywords. * **SummaryAgent**: Calculates transportation details (time, cost) and recommendations based on locations and preferences. * **FeedbackAgent**: Gets feedback from the user; if negative, the model runs again, otherwise the output is provided to the user. **Model Output**: A list of personalized recommendations for transportation and restaurant/cafe/event places. ### Conversation construction We used Fetch AI to facilitate conversations and communication between our agents. We designed each agent to solve its own specific tasks and then pass results on to the next agent in our workflow. ### Website construction Our website is constructed using Next.js and Tailwind CSS. We built an intuitive and attractive user interface using these tools. While we have not deployed the website yet, we plan on doing so in the future. ## Challenges we ran into * **Number of agents:** How many agents should we define? * **Reducing hallucinations of agents:** How do we handle hallucinations generated by large language models? * **Understanding Fetch AI:** We had some issues understanding the structure of Fetch AI and how to use it most effectively. ## Accomplishments During this hackathon, the major accomplishment was successfully **generating conversations between the agents** we designed. By combining the Google Calendar API, user input, the Google Maps API, and the Reddit API, we could successfully generate output recommendations for users. Towards the end of the project, we were also able to **add a feedback agent**, which plays a crucial role in ensuring that users receive the exact information they are seeking from our application. Finally, our team's cooperation made all of this possible; we learned a lot from each other and had fun working on this project. ## What's next for DayGenie * **Feedback on the website**: We did not have enough time to add the feedback control to the front-end website. We will keep working on collecting feedback from users so that we can call the FeedbackAgent from the website. * **Speech-to-text generation**: So far, DayGenie only accepts text input.
The next step is to recognize speech from the user and use it as the preference input. * **Addition of a ScoringAgent**: To ensure quality recommendations, we plan to add a ScoringAgent that evaluates the generated output. * **Fine-tuning**: Fine-tune the DayGenie model for better performance.
# Things2Do Minimize time spent planning and maximize having fun with Things2Do! ## Inspiration The idea for Things2Do came from the difficulties that we experienced when planning events with friends. Planning events often involves venue selection, which can be a time-consuming, tedious process. Our search for solutions online yielded websites like Google Maps, Yelp, and TripAdvisor, but each fell short of our needs and often had complicated filters or cluttered interfaces. More importantly, we were unable to find an event planner that accounts for the total duration of an outing, much less one that schedules multiple venue visits while accounting for travel time. This inspired us to create Things2Do, which minimizes time spent planning and maximizes time spent at meaningful locations for a variety of preferences on a tight schedule. Now, there's always something to do with Things2Do! ## What it does Share quality experiences with people that you enjoy spending time with. Things2Do provides the top 3 suggested venues to visit given constraints on the time spent at each venue, distance, and the selected category of place to go. Furthermore, the requirements surrounding the duration of a complete event plan across multiple venues can become increasingly complex when trying to account for the tight schedules of attendees, a wide variety of preferences, and travel time between multiple venues throughout the duration of an event. ## How we built it The functionality of Things2Do is powered by various APIs to retrieve the details of venues, plus spatiotemporal analysis, with React for the front end and Express.js/Node.js for the backend functionality. APIs: * openrouteservice to calculate travel time * Geoapify for location search autocomplete and geocoding * Yelp to retrieve names, addresses, distances, and ratings of venues Languages, tools, and frameworks: * JavaScript for compatibility with React, Express.js/Node.js, Verbwire, and other APIs * Express.js/Node.js backend server * TailwindCSS for styling React components Other services: * Verbwire to mint NFTs (for memories!) from event pictures ## Challenges we ran into Initially, we wanted to use the Google Maps API to find locations of venues, but these features were not part of the free tier, and even if we were to implement them ourselves it would still put us at risk of spending more than the free tier would allow. This resulted in us switching to Node.js for the backend to work with JavaScript for better support of the open-source APIs that we used. We also struggled to find a free geocoding service, so we settled on Geoapify, which is open source. JavaScript was also used so that Verbwire could be used to mint NFTs based on images from the event. Researching all of these new APIs and scouring documentation to determine if they fulfilled the desired functionality that we wanted to achieve with Things2Do was an enormous task, since we had never had experience with them before and were forced to do so for compatibility with the other services that we were using. Finally, we underestimated the time it would take to integrate the front end with the back end and add the NFT minting functionality, on top of debugging. Another challenge we faced was coming up with a method of computing an optimal event plan in consideration of all required parameters. This involved looking into algorithms like the Travelling Salesman Problem, Dijkstra's, and A\*.
## Accomplishments that we're proud of Our team is most proud of meeting all of the goals that we set for ourselves coming into this hackathon and tackling this project. Our goals consisted of learning how to integrate front-end and back-end services, creating an MVP, and having fun! The perseverance that was shown while we were debugging into the night and parsing messy documentation was nothing short of impressive, and no matter what comes next for Things2Do, we will be sure to walk away proud of our achievements. ## What we learned We can definitively say that we learned everything that we set out to learn during this project at DeltaHacks IX. * Integrate front-end and back-end * Learn new languages, libraries, frameworks, or services * Include a sponsor challenge and design for a challenge theme * Time management and teamwork * Web3 concepts and application of technology ## Things to Do The working prototype that we created is a small segment of everything that we would want in an app like this, but there are many more features that could be implemented. * Multi-user voting feature using WebSockets * Extending categories of hangouts * Custom restaurant recommendations from attendees * Ability to have a vote of "no confidence" * Send out invites through a variety of social media platforms and calendars * Scheduling features for days and times of day * Incorporate hours of operation of venues
## Inspiration One in every 250 people suffers from cerebral palsy, where the affected person cannot move a limb properly and thus requires constant care throughout their lifetime. To ease their way of living, we have made this project, 'para-pal'. The inspiration for this idea came from a number of research papers and a project called Pupil, which used permutations to make communication possible with eye movements. ## What it does ![Main](https://media.discordapp.net/attachments/828211308305448983/828261879326572544/iris_seg.png?width=819&height=355) **"What if Eyes can Speak? Yesss - you heard it right!"** Para-pal is a novel idea that tracks patterns in the eye movements of the patient and then converts them into actual speech. We use state-of-the-art iris recognition (dlib) to accurately track the eye movements and figure out the pattern. Our solution is sustainable and very cheap to build and set up. It uses QR codes to connect the caretaker's and the patient's apps. We enable paralyzed patients to **navigate across the screen using their eye movements**. They can select an action by holding the cursor in place for more than 3 seconds, or alternatively, they can **blink three times to select the particular action**. A help request is immediately sent to the mobile application of the caretaker as a **push notification**. ## How we built it We've embraced Flutter in our front end to make the UI simple, intuitive, modular, and customizable. The image processing and live-feed detection are done on a separate child Python process. The iris recognition at its core uses dlib and pipes the output to OpenCV (see the blink-detection sketch below). We've developed a desktop app (which is cross-platform and runs on an RPi 3 as well) for the patient and a mobile app for the caretaker. We also tried running our desktop application on a Raspberry Pi using an old laptop screen. In the future, we wish to make dedicated hardware which can be cost-efficient for patients with paralysis. ![hardware](https://media.discordapp.net/attachments/828211308305448983/828263070228676638/20210404_191100.jpg?width=542&height=406) ![hardware2](https://media.discordapp.net/attachments/828211308305448983/828263051420762182/20210404_191120.jpg?width=542&height=406) ## Challenges we ran into Building dlib took a significant amount of time, because there were no binaries/wheels and we had to build from source. Integrating features to enable connectivity and sessions between the caretaker's mobile and the desktop app was hard. Fine-tuning some parameters of the ML model, and preprocessing and cleaning the input, was a real challenge. Since we were from a different time zone, it was challenging to stay awake throughout the 36 hours and make this project! ## Accomplishments that we're proud of * An actual working application in such a short time span. * Integrating additional hardware of a tablet for better camera accuracy. * Decoding the input feed with very good accuracy. * Making a successful submission for HackPrinceton. * Teamwork :) ## What we learned * It is always better to use a pre-trained model than to make one yourself, because of the significant accuracy difference. * QR scanning is complex and is harder to integrate in Flutter than it looks from the outside. * Rather than over-engineering a Flutter component, check whether a library exists that does exactly what is needed. ## What's next for Para Pal - What if your eyes can speak? * Easier prefix-free code patterns for the patient using an algorithm like Huffman coding.
* More advanced controls using ML that tracks and learns the patient's regular inputs to the app. * Better analytics for the caretaker. * More UI color themes.
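A hedged sketch of the blink-detection idea referenced in the writeup above, using dlib's 68-point face landmarks and the standard eye-aspect-ratio trick; the landmark model path and thresholds are assumptions, and this is not the team's actual code.

```python
# Blink detection via eye aspect ratio (EAR); model path and thresholds are assumptions.
import cv2
import dlib
from scipy.spatial import distance as dist

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed local file
LEFT_EYE = range(42, 48)         # landmark indices for the left eye
EAR_CLOSED = 0.21                # assumed EAR threshold for "eye closed"

def ear(pts):
    a = dist.euclidean(pts[1], pts[5])
    b = dist.euclidean(pts[2], pts[4])
    c = dist.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

cap = cv2.VideoCapture(0)
blinks, closed_frames = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        eye = [(shape.part(i).x, shape.part(i).y) for i in LEFT_EYE]
        if ear(eye) < EAR_CLOSED:
            closed_frames += 1
        else:
            if closed_frames >= 2:        # a short closure counts as one blink
                blinks += 1               # three blinks in a row would trigger "select"
            closed_frames = 0
cap.release()
```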
partial
## Inspiration As students of Zoom university, we understand first-hand the pain of visualizing tonnes of information from lectures and textbooks. To put an end to endlessly rewatching lectures, we created Map your mind to convert text into mind-maps that summarize key concepts and graphically show their connections. ## What it does Map your mind takes any form of text input, like your Zoom lecture transcript or a chapter of your book, and then finds the most important keywords and parent relationships to create mind-maps. So, a student simply uploads their Zoom lecture transcript and we extract the most important keywords and topics out of it to return an interactive mind-map for them! ## How we built it Front end: We created a webpage using HTML, CSS & JavaScript which acts as the interface between the user's input and the Python script for getting the keywords. 1. Used a REST API built with Flask to connect the JavaScript to Python. 2. Implemented the algorithm to find the keywords for a given text using natural language processing, using the gensim and spaCy libraries and features like lemmatization. We aim to use an LDA model to get the important topics of the text and a KeyBERT model to extract important keywords for these topics. 3. We use a basic graph algorithm to construct the parent nodes and leaves. 4. We use a GoJS plugin to visualize the mind-map. ## Challenges we ran into Connecting Python with JavaScript, using the Fetch API, creating the most efficient algorithms, and getting a video for the demo. ## Accomplishments that we're proud of We achieved our MVP and actually created a fully fledged working algorithm and webpage. ## What we learned Natural language processing, REST APIs, web development, design. ## What's next for Notes2MindMap! Achieve database management with the option for the user to save their mind-maps for different chapters. Complete all the functionalities.
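As a rough sketch of the keyword-extraction step described above, KeyBERT can pull the top phrases out of a transcript and the most relevant one can seed the root of the mind-map. The file name, n-gram range, and top-n value are assumptions.

```python
# Sketch of keyword extraction with KeyBERT; file name and parameters are assumptions.
from keybert import KeyBERT

transcript = open("lecture_transcript.txt").read()   # assumed input file

kw_model = KeyBERT()                                  # uses a default sentence-transformer model
keywords = kw_model.extract_keywords(
    transcript,
    keyphrase_ngram_range=(1, 2),                     # single words and two-word phrases
    stop_words="english",
    top_n=15,
)

# keywords is a list of (phrase, relevance) pairs; the most relevant phrase can act as
# the root of the mind-map and the remaining phrases become child nodes.
root, *children = [phrase for phrase, _ in keywords]
mindmap = {root: children}
print(mindmap)
```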
## What inspired us: The pandemic has changed the university norm to primarily online courses, increasing our usage of and dependency on textbooks and course notes. Since we are all computer science students, we have many math courses with several definitions and theorems to memorize. When listening to a professor's lecture, we often forget certain theorems that are being referred to. With discussAI, we are easily able to query the PostgreSQL database with a command and receive an image from the textbook explaining what the definition/theorem is. Thus, we decided to use our knowledge of machine learning libraries to filter out these pieces of information. We believe that our program's concept can be applied to other fields outside of education. For instance, business meetings or training sessions can utilize these tools to effectively summarize long manuals and to search for keywords. ## What we learned: We had a lot of fun building this application since we were new to using Microsoft Azure applications. We learned how to integrate machine learning libraries such as OCR and sklearn for processing our information, and we deepened our knowledge in front end (Angular.js) and back end (Django and Postgres). ## How we built it: We built our web application's front end using Angular.js for our components and Agora.io to allow video conferencing. On our backend, we used Django and PostgreSQL for handling API requests from our frontend. We also used several Python libraries to convert the PDF file to PNG images, utilize Azure OCR to analyze these text images, apply the sklearn library to analyze the individual text, and finally crop the images to return specific snippets of definitions/theorems. ## Challenges we faced: The most challenging part was deciding on our ML algorithm to derive specific image snippets from lengthy textbooks. Some other challenges we faced varied from importing images from Azure Storage to positioning CSS components. Nevertheless, the learning experience was amazing with the help of mentors, and we hope to participate again in the future!
## Inspiration It may have been the last day before an important exam, the first day at your job, or the start of your ambitious journey of learning a new language, where you were frustrated at the lack of engaging programming tutorials. It was impossible to get your "basics" down, or to stay focused, due to the struggle of navigating through the different tutorials trying to find the perfect one to solve your problems. Well, that's what led us to create Code Warriors. Code Warriors is a platform focused on encouraging both younger and older audiences to learn how to code. Video games and programming are brought together to offer an engaging and fun way to learn how to code. Not only are you having fun, but you're constantly gaining new and meaningful skills! ## What it does Code Warriors provides a gaming website where you can hone your skills in all the coding languages it offers, all while levelling up your character and following the storyline! As you follow Asmodeus the Python into the jungle of Pythania to find the lost amulet, you get to develop your skills in Python by solving puzzles that incorporate data types, if statements, for loops, operators, and more. Once you finish each mission/storyline, you unlock new items, characters, XP, and coins which can help buy new storylines/coding languages to learn! In conclusion, Code Warriors offers a fun time that will make you forget you were even coding in the first place! ## How we built it We built Code Warriors by splitting our team into two to focus on two specific parts of the project. The first team was the UI/UX team, which was tasked with creating the design of the website in Figma. This was important as we needed a team that could make our thoughts come to life in a short time, and design them nicely to make the website aesthetically pleasing. The second team was the frontend team, which was tasked with using React to create the final product, the website. They take what the UI/UX team has created, and add the logic and function behind it to serve as a real product. The UI/UX team joined them shortly after their initial task was completed, as their task takes less time to complete. ## Challenges we ran into The main challenge we faced was learning how to code with React. All of us had either basic or no experience with it, so applying it to create Code Warriors was difficult. The main difficulties associated with this were organizing everything correctly, setting up the React Router to link pages, as well as setting up the compiler. ## Accomplishments that we're proud of The first accomplishment we are proud of is setting up the login page. It takes only registered usernames and passwords, and will not let you log in without them. We are also proud of the gamified look we gave the website, as it gives the impression that the user is playing a game. Lastly, we are proud of having the compiler embedded in the website as it allows for a lot more user interaction and functionality on the website. ## What we learned We learnt a lot about React, Node, CSS, JavaScript, and Tailwind. A lot of the syntax was new to us, as were the applications of a lot of formatting options, such as padding, margins, and more. We learnt how to integrate Tailwind with React, and how a lot of front-end programming works. We also learnt how to efficiently split tasks as a team.
We were lucky enough to see that our initial split of the group into two teams worked, which is why we know that we can continue to use this strategy for future competitions, projects, and more. ## What's next for Code Warriors What's next for Code Warriors is to add more lessons, integrate a full story behind the game, add more animations to give it more of a game feel, as well as expand into different coding languages! The potential for Code Warriors is unlimited, and we can improve almost every aspect and expand the platform to provide a multitude of learning opportunities, all while having an enjoyable experience. ## Important Info for the Figma Link **When opening the link, go into the simulation and press Z to fit the screen and then go full screen to experience true user interaction**
losing
## Inspiration We wanted to make a natural language processing app that inferred sentiment from vocal performance, while incorporating all of this within a game. ## What it does You can upload a YouTube video that will be parsed into text, and then challenge friends to make a rendition of what the text entails. Live scores are always changing via the realtime database. ## How we built it Anthony used Ionic 3 to make a mobile app that connects with Firebase to send challenge and user data, while Steven and Roy developed a REST API in Node.js that handles the transcription processing and challenge requests. ## Challenges we ran into Heroku is terribly incompatible with FFmpeg, the only reasonable tool for media file conversion. On top of that, Heroku was the only online server option that provides a buildpack for FFmpeg. ## Accomplishments that we're proud of The UI runs smoothly and Firebase loads and transmits the data rapidly and correctly. ## What we learned If you want to get a transcript of natural language, you should do so in-app before processing the data on a REST API. ## What's next for Romeo We are excited to implement invitation features that rely on push notifications. For instance, I can invite my friends via push to compete against my best rendition of a scene from Romeo and Juliet.
## Inspiration We wanted to solve a problem that was real for us, something that we could get value out of. We decided upon Vocally as it solves an issue faced by a lot of people during job interviews, presentations, and other occasions that call for speaking in a clear and concise manner. The problem was that it takes a long time to record yourself and re-listen to it just to spot any sentence fillers like "um" or "like". We would like to make it easier to display statistics about one's speech. ## What it does The user clicks the record button and starts speaking. The application first converts speech to text using React's built-in speech recognition. After analyzing the results with various text processing techniques (e.g. sentiment analysis), it displays feedback. ## How we built it * First, we needed to see how keywords could be extracted from an audio recording in the back end. We settled on React's speech-to-text feature. * Next, we created API endpoints in Flask (a Python web framework) for the React app to make requests from. * Fuzzy string matching, grammatical analysis, and sentiment analysis were used to process the speech and return the stats to the user using data visualization. * The last task was deployment to the pythonanywhere.com domain for demo testing purposes. ## Challenges we ran into Using Flask as an API was easy, but we initially tried to host it on GCP, which proved to be difficult as our firewall rules were not configured properly. We moved on to pythonanywhere.com for hosting. For the front end, we first decided to take a look at the Flutter framework to make the application mobile accessible, but the framework was only introduced in 2018, and there were a lot of configuration issues that needed to be resolved. ## Accomplishments that we are proud of Getting the sound recorder to work on the front end took longer than expected, but the end result was very satisfying. We're proud that we achieved creating an end-to-end solution. ## What we learned Exploring different framework options like Flutter in the beginning was a journey for us. The API that we created required us to delve deeper into the Python programming language. We learned about various syntactical and natural language processing techniques. ## What's next for Vocally We may re-explore the concept of natural language processing, perhaps build our own algorithm from scratch, and do more over a longer time period.
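A simple sketch of the filler-word analysis described above, using the standard library's difflib for fuzzy matching so that near-misses from speech-to-text ("umm", "uhh") still count; the filler list and threshold are assumptions.

```python
# Sketch of filler-word counting with fuzzy matching; filler list and threshold are assumptions.
from difflib import SequenceMatcher
from collections import Counter

FILLERS = ["um", "uh", "like", "basically", "actually"]

def is_filler(word, threshold=0.8):
    word = word.lower().strip(",.!?")
    return any(SequenceMatcher(None, word, f).ratio() >= threshold for f in FILLERS)

def analyze(transcript):
    words = transcript.split()
    hits = Counter(w.lower().strip(",.!?") for w in words if is_filler(w))
    return {
        "word_count": len(words),
        "filler_count": sum(hits.values()),
        "fillers": dict(hits),
        "filler_rate": round(sum(hits.values()) / max(len(words), 1), 3),
    }

print(analyze("So um I basically think that, like, the umm results were good"))
```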
## Inspiration Both chronic pain disorders and opioid misuse are on the rise, and the two are even more related than you might think -- over 60% of people who misused prescription opioids did so for the purpose of pain relief. Despite the adoption of PDMPs (Prescription Drug Monitoring Programs) in 49 states, the US still faces a growing public health crisis -- opioid misuse was responsible for more deaths than cars and guns combined in the last year -- and lacks the high-resolution data needed to implement new solutions. While we were initially motivated to build Medley as an effort to address this problem, we quickly encountered another (and more personal) motivation. As one of our members has a chronic pain condition (albeit not one that requires opioids), we quickly realized that there is also a need for a medication and symptom tracking device on the patient side -- oftentimes giving patients access to their own health data and medication frequency data can enable them to better guide their own care. ## What it does Medley interacts with users on the basis of a personal RFID card, just like your TreeHacks badge. To talk to Medley, the user presses its button and will then be prompted to scan their ID card. Medley is then able to answer a number of requests, such as to dispense the user's medication or contact their care provider. If the user has exceeded their recommended dosage for the current period, Medley will suggest a number of other treatment options added by the care provider or the patient themselves (for instance, using a TENS unit to alleviate migraine pain) and ask the patient to record their pain symptoms and intensity. ## How we built it This project required a combination of mechanical design, manufacturing, electronics, on-board programming, and integration with cloud services/our user website. Medley is built on a Raspberry Pi, with the raspiaudio mic and speaker system, and integrates an RFID card reader and motor drive system which makes use of Hall sensors to accurately actuate the device. On the software side, Medley uses Python to make calls to the Houndify API for audio and text, then makes calls to our Microsoft Azure SQL server. Our website uses the data to generate patient and doctor dashboards. ## Challenges we ran into Medley was an extremely technically challenging project, and one of the biggest challenges our team faced was the lack of documentation associated with entering uncharted territory. Some of our integrations had to be twisted a bit out of shape to fit together, and many tragic hours were spent just trying to figure out the correct audio stream encoding. Of course, it wouldn't be a hackathon project without overscoping and then panicking as the deadline drew nearer, but because our project uses mechanical design, electronics, on-board code, and a cloud database/website, narrowing our scope was a challenge in itself. ## Accomplishments that we're proud of Getting the whole thing into a workable state by the deadline was a major accomplishment -- the first moment we finally integrated everything together was a massive relief. ## What we learned Among many things: the complexity and difficulty of implementing mechanical systems; how to adjust mechatronics design parameters; usage of Azure SQL and WordPress for dynamic user pages; use of the Houndify API and custom commands; and Raspberry Pi audio streams. ## What's next for Medley One feature we would have liked more time to implement is better database reporting and analytics.
We envision Medley's database as a patient- and doctor-usable extension of the existing state PDMPs, and it would be able to leverage patterns in the data to flag abnormal behavior. Currently, a care provider might be overwhelmed by the amount of data potentially available, but adding a model to detect trends and unusual events would assist with this problem.
losing
# Inspiration As a team we decided to develop a service that we thought would be extremely useful not only to us, but to everyone around the world who struggles with storing physical receipts. We were inspired to build an eco-friendly and innovative application that targets the pain points of filing receipts, losing receipts, missing return policy deadlines, not being able to find the receipt containing a particular item, and tracking potentially bad spending habits. # What it does To solve these problems, we are proud to introduce Receipto, a universal receipt tracker whose mission is to empower users with their personal finances, help them track spending habits more easily, and replace physical receipts to reduce global paper usage. With Receipto you can upload or take a picture of a receipt, and it will automatically recognize all of the information found on the receipt. Once validated, it saves the picture and summarizes the data in a useful manner. In addition to storing receipts in an organized manner, you get valuable information on your spending habits, and you can search through receipt expenses based on certain categories, items, and time frames. The most interesting feature is that once a receipt is loaded and validated, it will display a picture of all the items purchased thanks to the use of item codes and an image recognition API. Receipto will also notify you when a receipt may be approaching its potential return policy deadline, which is based on user input during receipt upload. # How we built it We have chosen to build Receipto as a responsive web application, allowing us to develop a better user experience. We first drew up storyboards by hand to visually predict and explore the user experience, then we developed the app using React, ViteJS, ChakraUI and Recharts. For the backend, we decided to use NodeJS deployed on Google Cloud Compute Engine. In order to read and retrieve information from the receipt, we used the Google Cloud Vision API along with our own parsing algorithm (see the sketch below). Overall, we mostly focused on developing the main ideas, which consist of scanning and storing receipts as well as viewing the images of the items on the receipts. # Challenges we ran into Our main challenge was implementing the image recognition API, as it involved a lot of trial and error. Almost all receipts are different depending on the store and province. For example, in Quebec, there are two different taxes displayed on the receipt, and that affected how our app was able to recognize the data. To fix that, we made sure that if two types of taxes are displayed, our app would recognize that it comes from Quebec, and it would scan it as such. Additionally, almost all stores have different receipts, so we have adapted the app to recognize most major stores, but we also allow a user to manually add the data in case a receipt is very different. Either way, a user will know when it's necessary to change or to add data through visual alerts when uploading receipts. Another challenge was displaying the images of the items on the receipts. Not all receipts had item codes, and the stores that did have these codes ended up having different APIs. We overcame this challenge by finding an API called stocktrack.ca that combines the most popular store APIs in one place. # Accomplishments that we're proud of We are all very proud to have turned this idea into a working prototype, as we agreed to pursue this idea knowing the difficulty behind it.
We have many great ideas to implement in the future and have agreed to continue this project beyond McHacks in hopes of one day completing it. We are grateful to have had the opportunity to work together with such talented, patient, and organized team members. # What we learned With all the different skills each team member brought to the table, we were able to pick up new skills from each other. Some of us got introduced to new coding languages, others learned new UI design skills as well as simple organization and planning skills. Overall, McHacks has definitely shown us the value of teamwork; we all kept each other motivated and helped each other overcome each obstacle as a team. # What's next for Receipto? Now that we have a working prototype ready, we plan to further test our application with a selected sample of users to improve the user experience. Our plan is to polish up the main functionality of the application, and to expand the idea by adding exciting new features that we just didn't have time to add. Although we love the idea, we need to make sure to conduct more market research to see if it could be a viable service that could change the way people perceive receipts, and potentially get them to consider adopting Receipto.
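As an illustration of the receipt-reading step referenced above, the sketch below sends a receipt image to the Google Cloud Vision API and pulls out candidate total and tax lines; the regular expressions and the "two taxes means Quebec" rule are simplified assumptions, not Receipto's actual parser.

```python
# Sketch of receipt OCR with the Vision API; the parsing rules are simplified assumptions.
import re
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("receipt.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.text_detection(image=image)
full_text = response.text_annotations[0].description if response.text_annotations else ""

lines = [l.strip() for l in full_text.splitlines() if l.strip()]
totals = [l for l in lines if re.search(r"total", l, re.IGNORECASE)]
taxes = [l for l in lines if re.search(r"\b(TPS|TVQ|GST|HST|PST)\b", l)]

province_hint = "QC" if any("TVQ" in t for t in taxes) else None  # two taxes -> likely Quebec
print("Detected total line(s):", totals)
print("Province hint:", province_hint)
```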
## What it does XEN SPACE is an interactive web-based game that incorporates emotion recognition technology and the Leap Motion controller to create an immersive emotional experience that will pave the way for the future gaming industry. ## How we built it We built it using three.js, the Leap Motion Controller for controls, and the Indico Facial Emotion API. We also used Blender, Cinema4D, Adobe Photoshop, and Sketch for all graphical assets.
## Inspiration A major problem when it comes to finances for students is maintaining their budgets. Saving receipts and budgeting manually can be quite burdensome. Additionally, this process is quite inefficient. That is why we wanted to create a program where people can easily scan their receipts and our program would optimally budget their finances and accurately categorize the items they're buying. ## What it does Our program consists of three main parts. Firstly, it scans a user's uploaded receipt and analyzes the text for the items bought. The items are then categorized by our program into lists, and the program calculates how much money you are spending in each category. Based on the budget that you are trying to maintain, the bot informs the user through Telegram about the details of their purchase and how well they are doing relative to their budget goals. Overall, this program provides detailed information about your budget and expenses by simply scanning your receipt. ## How we built it This program was built using technologies like AWS Rekognition to develop the backend program, which analyzed and categorized the scanned receipt data. A chatbot for Telegram was written in Python to provide users with details about their budget. ## Challenges we ran into The main challenge we ran into was translating the receipt's image into text. As Amazon Rekognition is quite sensitive to image quality, we invested a lot of time into preprocessing the images to guarantee the best possible OCR result. Another issue we faced was displaying all the analyzed information through a bot in Telegram. In order to do this, we needed to get the data containing the cost and name of the items from the array of dictionaries. In the end, we were able to select each component and display it as needed. ## Accomplishments that we're proud of We are proud that in such a short span of time, we were able to meet the goals of our desired program. As most of this technology was new to most of the members, it was an accomplishment to successfully code the program. Additionally, we used several different technologies that we were exposed to during the workshops and challenges. ## What we learned We all were able to delve deep into areas out of our comfort zone and see the workings behind apps we have previously used on our phones (Telegram). We were able to not only create a new bot on Telegram but also program it to respond to the input that the user gave it. Additionally, we were able to use different technologies like AWS Rekognition and RNNs and implement them all together to make one coherent program.
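To make the Telegram side concrete, a hedged sketch of the budget notification could call the Bot API's sendMessage endpoint directly, as below; the token, chat ID, and budget numbers are placeholders.

```python
# Sketch of the Telegram budget notification; token, chat id, and amounts are placeholders.
import requests

BOT_TOKEN = "123456:ABC-placeholder-token"
CHAT_ID = "987654321"

def send_budget_update(category_totals, budget):
    spent = sum(category_totals.values())
    lines = [f"{cat}: ${amt:.2f}" for cat, amt in category_totals.items()]
    lines.append(f"Total: ${spent:.2f} of ${budget:.2f} budget ({spent / budget:.0%})")
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        json={"chat_id": CHAT_ID, "text": "\n".join(lines)},
        timeout=10,
    )

send_budget_update({"groceries": 84.12, "entertainment": 25.00}, budget=300.00)
```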
winning
## Inspiration We wanted to reduce the global carbon footprint and pollution by optimizing waste management. 2019 was an incredible year for environmental activism. We were inspired by the acts of 17-year-old Greta Thunberg and how those acts created huge ripple effects across the world. With this passion for a greener world, combined with our technical knowledge, we created Recycle.space. ## What it does Using modern tech, we provide users with an easy way to identify where to sort and dispose of their waste items simply by holding them up to a camera. This application will be especially useful when permanent fixtures are erected in malls, markets, and large public locations. ## How we built it Using a Flask-based backend to connect to the Google Vision API, we captured images and categorized which waste category each item belongs to. This was visualized using Reactstrap. ## Challenges we ran into * Deployment * Categorization of food items using the Google API * Setting up a dev environment for a brand new laptop * Selecting an appropriate backend framework * Parsing image files using React * UI design using Reactstrap ## Accomplishments that we're proud of * WE MADE IT! We are thrilled to create such an incredible app that would make people's lives easier while helping improve the global environment. ## What we learned * UI is difficult * Picking a good tech stack is important * Good version control practices are crucial ## What's next for Recycle.space Deploying a scalable and finalized version of the product to the cloud and working with local companies to deliver this product to public places such as malls.
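A minimal sketch of the Flask-plus-Vision flow described above might look like the following; the label-to-bin mapping and route name are assumptions, not the team's actual rules.

```python
# Sketch of a Flask endpoint that classifies a photo into a waste bin via the Vision API;
# the mapping and route are assumptions.
from flask import Flask, request, jsonify
from google.cloud import vision

app = Flask(__name__)
client = vision.ImageAnnotatorClient()

BIN_RULES = {                      # assumed keyword -> bin mapping
    "plastic": "recycling", "bottle": "recycling", "paper": "recycling",
    "banana": "compost", "food": "compost", "coffee": "compost",
}

@app.route("/classify", methods=["POST"])
def classify():
    image = vision.Image(content=request.files["photo"].read())
    labels = client.label_detection(image=image).label_annotations
    for label in labels:           # labels come back sorted by confidence
        for keyword, bin_name in BIN_RULES.items():
            if keyword in label.description.lower():
                return jsonify({"label": label.description, "bin": bin_name})
    return jsonify({"label": labels[0].description if labels else None, "bin": "garbage"})

if __name__ == "__main__":
    app.run(port=5000)
```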
## What it does MemoryLane is an app designed to support individuals coping with dementia by aiding in the recall of daily tasks, medication schedules, and essential dates. The app personalizes memories through its reminisce panel, providing a contextualized experience for users. Additionally, MemoryLane ensures timely reminders through WhatsApp, facilitating adherence to daily living routines such as medication administration and appointment attendance. ## How we built it The back end was developed using Flask, Python, and MongoDB. Next.js was employed for the app's front-end development. Additionally, the app integrates the Google Cloud Speech-to-Text API to process audio messages from users, converting them into commands for execution. It also utilizes the Infobip SDK for caregivers to establish timely messaging reminders through a calendar within the application. ## Challenges we ran into An initial hurdle we encountered involved selecting a front-end framework for the app. We transitioned from React to Next.js due to the seamless integration of styling provided by Next.js, a decision that proved to be efficient and time-saving. The subsequent challenge revolved around ensuring the functionality of text messaging. ## Accomplishments that we're proud of The accomplishments we have achieved thus far are truly significant milestones for us. We had the opportunity to explore and learn new technologies that were previously unfamiliar to us. The integration of voice recognition, text messaging, and the development of an easily accessible interface tailored to our audience is what fills us with pride. ## What's next for Memory Lane We aim for MemoryLane to incorporate additional accessibility features and support integration with other systems for implementing activities that offer memory exercises. Additionally, we envision MemoryLane forming partnerships with existing systems dedicated to supporting individuals with dementia. Recognizing the importance of overcoming organizational language barriers in healthcare systems, we advocate for the formal use of interoperability within the reminder aspect of the application. This integration aims to provide caregivers with a seamless means of receiving the latest health updates, eliminating any friction in accessing essential information.
## Lejr **Introduction** A web application that allows you to track how much money your friends owe you; after your friend accepts your request to pay you back, the app will directly deposit the money into your bank account. **How we did it** We built the website using Interac's public API and MongoDB hosted by mLab; the website is hosted on Heroku. Our Node.js/Express backend also acts as a REST API for our Android application. **Inspiration** Our friends keep forgetting to pay us back, and we're uncomfortable with pestering them, so we thought of the idea to make payment requests simple and quick by using the Interac API along with a Node.js backend.
winning
## 💡 Inspiration > > #hackathon-help-channel > `<hacker>` Can a mentor help us with flask and Python? We're stuck on how to host our project. > > > How many times have you created an epic web app for a hackathon but couldn't deploy it to show publicly? At my first hackathon, my team worked hard on a Django + React app that only lived at `localhost:5000`. Many new developers don't have the infrastructure experience and knowledge required to deploy the amazing web apps they create for hackathons and side projects to the cloud. We wanted to make a tool that enables developers to share their projects through deployments without any cloud infrastructure/DevOps knowledge. (Also, as two interns currently working in DevOps positions, we've been learning about lots of Infrastructure as Code (IaC), Configuration as Code (CaC), and automation tools, and we wanted to create a project to apply our learning.) ## 💭 What it does InfraBundle aims to: 1. ask a user for information about their project 2. generate appropriate IaC and CaC configurations 3. bundle the configurations with a GitHub Actions workflow to simplify deployment Then, developers commit the bundle to their project repository, where deployments become as easy as pushing to your branch (literally, that's the trigger). ## 🚧 How we built it As DevOps interns, we work with Ansible, Terraform, and CI/CD pipelines in an enterprise environment. We thought that these could help simplify the deployment process for hobbyists as well. InfraBundle uses: * Ansible (CaC) * Terraform (IaC) * GitHub Actions (CI/CD) * Python and jinja (generating CaC and IaC from templates; see the sketch after this writeup) * flask! (website) ## 😭 Challenges we ran into We're relatively new to Terraform and Ansible and stumbled into some trouble with all the nitty-gritty aspects of setting up scripts from scratch. In particular, we had trouble connecting an SSH key to the GitHub Actions workflow for Ansible to use in each run. This led to the creation of temporary credentials that are generated in each run. With Ansible, we had trouble creating and activating a virtual environment (see: not carefully reading the [ansible.builtin.pip](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/pip_module.html) documentation on which parameters are mutually exclusive, and confusing the multiple ways to pip install). In general, hackathons are very time constrained. Unfortunately, slow pipelines do not care about your time constraints: * hard to test locally * cluttering commit history when debugging pipelines ## 🏆 Accomplishments that we're proud of InfraBundle is capable of deploying itself! In other news, we're proud of the project being something we're genuinely interested in as a way to apply our learning. Although there's more functionality we wished to implement, we learned a lot about the tools used. We also used a GitHub project board to keep track of tasks for each step of the automation. ## 📘 What we learned Although we've deployed many times before, we learned a lot about automating the full deployment process. This involved handling data between tools and environments. We also learned to use GitHub Actions. ## ❓ What's next for InfraBundle InfraBundle currently only works for a subset of Python web apps, and the only supported provider is Google Cloud Platform.
With more time, we hope to: * Add more cloud providers (AWS, Linode) * Support more frameworks and languages (ReactJS, Express, Next.js, Gin) * Improve support for database servers * Improve documentation * Modularize deploy playbook to use roles * Integrate with GitHub and Google Cloud Platform * Support multiple web servers
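For reference, a minimal sketch of the template-rendering step described under "How we built it", using jinja2 to turn user answers into a Terraform snippet. The template text, variable names, and output path are illustrative assumptions rather than InfraBundle's actual templates.

```python
# Minimal sketch: render a Terraform fragment from a jinja2 template using
# values collected from the user. Template text and names are illustrative.
from jinja2 import Template

TF_TEMPLATE = """
resource "google_compute_instance" "{{ app_name }}" {
  name         = "{{ app_name }}-vm"
  machine_type = "{{ machine_type }}"
  zone         = "{{ zone }}"
  # (trimmed: a real config also needs boot_disk and network_interface blocks)
}
"""

def render_terraform(app_name: str, machine_type: str = "e2-micro",
                     zone: str = "us-central1-a") -> str:
    """Fill the IaC template with values gathered from the user."""
    return Template(TF_TEMPLATE).render(
        app_name=app_name, machine_type=machine_type, zone=zone
    )

if __name__ == "__main__":
    # Write the generated configuration into the bundle directory.
    with open("main.tf", "w") as fh:
        fh.write(render_terraform("my-flask-app"))
```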
## Inspiration We saw that lots of people were looking for a team to work with for this hackathon, so we wanted to find a solution. ## What it does It helps developers find projects to work on, and helps project leaders find group members. By using data from GitHub commits, it can determine what kind of projects a person is suitable for. ## How we built it We decided to build an app for the web, then chose a GraphQL, React, Redux tech stack. ## Challenges we ran into The limitations of the GitHub API gave us a lot of trouble. The limit on API calls meant we couldn't get all the data we needed. The authentication was hard to implement, since we had to try a number of approaches to get it to work. The last challenge was determining how to build a relationship between users and the projects they could be paired up with. ## Accomplishments that we're proud of We have all the parts for the foundation of a functional web app. The UI, the algorithms, the database and the authentication are all ready to show. ## What we learned We learned that working with external APIs brings its own unique challenges. ## What's next for Hackr\_matchr Scaling up is next: having it used for more kinds of projects, with more robust matching algorithms and higher user capacity.
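To illustrate the kind of GitHub data such matching can rely on, here is a rough sketch of querying a user's recent repositories and primary languages through GitHub's GraphQL API. The query shape, token handling, and field choices are assumptions for illustration, not the project's actual code.

```python
# Hedged sketch: tally the primary languages of a user's recent repos via
# GitHub's GraphQL API. Requires a GITHUB_TOKEN environment variable.
import os
import requests

QUERY = """
query($login: String!) {
  user(login: $login) {
    repositories(first: 20, orderBy: {field: PUSHED_AT, direction: DESC}) {
      nodes {
        name
        primaryLanguage { name }
      }
    }
  }
}
"""

def fetch_language_counts(login: str) -> dict:
    """Return a count of primary languages across the user's recent repos."""
    resp = requests.post(
        "https://api.github.com/graphql",
        json={"query": QUERY, "variables": {"login": login}},
        headers={"Authorization": f"bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=10,
    )
    resp.raise_for_status()
    counts: dict = {}
    for repo in resp.json()["data"]["user"]["repositories"]["nodes"]:
        lang = (repo["primaryLanguage"] or {}).get("name", "Unknown")
        counts[lang] = counts.get(lang, 0) + 1
    return counts
```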
> > Domain.com domain: IDE-asy.com > > > ## Inspiration Software engineering and development have always been subject to change over the years. With new tools, frameworks, and languages being announced every year, it can be challenging for new developers or students to keep up with the new trends the technological industry has to offer. Creativity and project inspiration should not be limited by syntactic and programming knowledge. Quick Code allows ideas to come to life no matter the developer's experience, breaking the coding barrier to entry and allowing everyone equal access to express their ideas in code. ## What it does Quick Code allows users to code simply with high-level voice commands. The user can speak in pseudo code and our platform will interpret the audio command and generate the corresponding JavaScript code snippet in the web-based IDE. ## How we built it We used React for the frontend, and the recorder.js API for the user voice input. We used RunKit for the in-browser IDE. We used Python and Microsoft Azure for the backend: Microsoft Azure processes user input with the Cognitive Speech Services modules and provides syntactic translation for the frontend’s IDE. ## Challenges we ran into > > "Before this hackathon I would usually deal with the back-end, however, for this project I challenged myself to experience a different role. I worked on the front end using React; as I do not have much experience with either React or JavaScript, I put myself through the learning curve. It didn't help that this hackathon was only 24 hours, however, I did it. I did my part on the front-end and I now have another language to add to my resume. The main challenge that I dealt with was the fact that many of the Voice reg" *-Iyad* > > > "Working with blobs, and voice data in JavaScript was entirely new to me." *-Isaac* > > > "Initial integration of the Speech to Text model was a challenge at first, and further recognition of user audio was an obstacle. However, with the aid of recorder.js and Python Flask, we were able to properly implement the Azure model." *-Amir* > > > "I have never worked with Microsoft Azure before this hackathon, but decided to embrace challenge and change for this project. Utilizing Python to hit API endpoints was unfamiliar to me at first, however with extended effort and exploration my team and I were able to implement the model into our hack. Now with a better understanding of Microsoft Azure, I feel much more confident working with these services and will continue to pursue further education beyond this project." *-Kris* > > > ## Accomplishments that we're proud of > > "We had a few problems working with recorder.js as it used many outdated modules; as a result we had to ask many mentors to help us get the code running. Though they could not figure it out, after hours of research and trying, I was able to successfully implement recorder.js and have the output exactly as we needed. I am very proud of the fact that I was able to finish it and not have to compromise any data." *-Iyad* > > > "Being able to use Node and recorder.js to send user audio files to our back-end and getting the formatted code from Microsoft Azure's speech recognition model was the biggest feat we accomplished." *-Isaac* > > > "Generating and integrating the Microsoft Azure Speech to Text model in our back-end was a great accomplishment for our project. It allowed us to parse users' pseudo code into properly formatted code to provide to our website's IDE."
*-Amir* > > > "Being able to properly integrate and interact with Microsoft Azure's Speech to Text model was a great accomplishment!" *-Kris* > > > ## What we learned > > "I learned how to connect the backend to a React app, and how to work with the voice recognition and recording modules in React. I also worked a bit with Python when trying to debug some problems in sending the voice recordings to Azure’s servers." *-Iyad* > > > "I was introduced to Python and learned how to properly interact with Microsoft's cognitive service models." *-Isaac* > > > "This hackathon introduced me to Microsoft Azure's Speech to Text model and Azure web apps. It was a unique experience integrating a Flask app with Azure cognitive services. The challenging part was making Speaker Recognition work, which unfortunately seems to be in preview/beta mode and not functioning properly. However, I'm quite happy with how the integration worked with the Speech2Text cognitive models and I ended up creating a neat API for our app." *-Amir* > > > "The biggest thing I learned was how to generate, call and integrate with Microsoft Azure's cognitive services. Although it was a challenge at first, learning how to integrate Microsoft's models into our hack was an amazing learning experience." *-Kris* > > > ## What's next for QuickCode We plan on continuing development and making this product available on the market. We first hope to include more functionality within JavaScript, then extend to support other languages. From here, we want to integrate a group development environment, where users can work on files and projects together (version control). During the hackathon we also planned to add voice recognition that identifies and highlights which user is dictating (speaking) which code.
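For context, a minimal sketch of the speech-to-text step on the Python side using Azure's Speech SDK; the subscription key, region, and file name are placeholders, and the pseudo-code-to-JavaScript translation that Quick Code layers on top is omitted.

```python
# Minimal sketch: transcribe an uploaded audio clip with Azure's Speech SDK.
# Subscription key, region, and the file path are placeholder assumptions.
import azure.cognitiveservices.speech as speechsdk

def transcribe(wav_path: str, key: str, region: str) -> str:
    speech_config = speechsdk.SpeechConfig(subscription=key, region=region)
    audio_config = speechsdk.audio.AudioConfig(filename=wav_path)
    recognizer = speechsdk.SpeechRecognizer(
        speech_config=speech_config, audio_config=audio_config
    )
    result = recognizer.recognize_once()
    if result.reason == speechsdk.ResultReason.RecognizedSpeech:
        return result.text  # e.g. "declare a variable called total"
    return ""
```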
winning
## Inspiration As a kid, I loved going on school field trips and learning about the animals around me. With many schools being closed, I wanted to provide an educational experience at home. This way, students can travel and learn about animals from the comfort of their living room! ## What it does In Animal Adventures, you can walk around and explore the environment. In the environment, animals are roaming around. When an animal approaches you, you can pet the animal to learn facts about them, such as their behaviors, habitat, foods they eat, speed, and other fun facts! You can also choose to approach an animal you see yourself. ## How I built it Animal Adventures was built using Unity and can be played on an Oculus headset. Assets are from the Unity Store. ## Challenges I ran into This was my first time working with Oculus, Unity, and C#. My computer is very old, so the Unity setup process took a lot of time and adjusting. It took a long time to get the Oculus to load the apk. When I was able to finally set up Unity to develop for Oculus, I ran into trouble with moving the animals and learning how functions work together. ## Accomplishments that I'm proud of I'm very excited to have built my first app for VR! I'm proud of myself for learning a new programming language too! ## What I learned Developing for VR! ## What's next for Animal Adventures I want Animal Adventures to be even more immersive and educational. In the future, I hope to add science facts to include information about the weather, plants, trees, and more! I also want to develop assets to simulate real life national parks and well-known places around the world. I hope to also include a time travel feature for students to go back in time and learn about extinct animals. Thanks for reading and thank you to TreeHacks for this opportunity! :)
## Inspiration After hours and hours of coding, we just wanted to get our groove on! It's hard having to run on a treadmill for what seems like forever, but once you get your groove on, time just passes by and those calories get a burnin'! ## What it does This is an AR game. The objective of the game is to shoot "good vibes" at angry retro penguins, who turn happy once they are hit with the "good vibes". The goal is to dodge the "bad vibes" that the angry penguins are constantly sending. While you are dodging "bad vibes" and sending "good vibes", you can also burn a decent amount of calories. Since the game is always sending "bad vibes", players are always moving and active while engaging in a fun, augmented reality experience. ## How we built it We used the Unity game engine, C#, and the ARKit API to create the app. ## Challenges we ran into One of the challenges we ran into was learning how to use the C# language to achieve the effects that we envisioned for our game. In addition, we were new to the process of making animations, so we had to take time to learn through tutorials and from fellow hackers about tools within Unity that we could use to make what we wanted. Another set of challenges arose while we tried to navigate and utilize the ARKit API. ## Accomplishments that we're proud of We are proud of the skills we gained in animation, C#, and Unity to create an AR game. ## What we learned We were able to learn firsthand the degree of intricacy, time, and dedication needed to make a game, even at the simplest level. Also, not all of us were knowledgeable of C# before coming to PennApps, so we were able to gain some skills in this language while working on a cool, exciting project. ## What's next for Funky Fitness Adding music to make the experience more fun and adding more levels for players to interact in. ## Other Info/Crediting We obtained the graphic of the penguin (without the hair) from poly.google.com. The creator of the penguin was 14islands Lab. All other graphics were our own (clothing accents and hair).
## Inspiration Social media today is focused on connecting users who share similar friend circles and interests. The recommendations that show up on users' feeds are geared towards their current beliefs, leading to increasingly polarized iterations of the information in the hopes of keeping them online for extended periods of time. This way of connecting users results in "echo chambers" of experiences, ideologies, and culture. Common Grounds is fundamentally built with the purpose of connecting users who might not share the traits that traditional social media looks for in potential connections, but who could still be good friends given the opportunity. ## Additional Background Info Traditional social media is designed around the idea of a centralized network. Users accrue an increasing number of followers and likes. In recent years this has led to the rise of influencers, people who amass large followings on social media and thus have a disproportionate influence on others despite their knowledge and credibility on any given topic. This means that a retweet or post by a prominent figure on a supposed rumor could lead to its rapid spread into a common misperception. Common Grounds is aimed at building an egalitarian network of connections where one accrues "friends" not because they already have a large following or because they're a model, but because of the merit of their ideas. Further, because there are no followers or likes, users can focus on building meaningful connections in a stress-free environment. ## What it does A social platform that uses OpenAI’s GPT-3 language prediction model to generate prompts designed to **spark conversation**, and to form connections between people with seemingly **differing** opinions. * ML-generated questions and follow-up prompts * Smart matching to pair users with differing opinions * Video calling + option to mute/unmute * Option to add & remove friends * Dashboard to view weekly stats Because there's no search feature for friends, no publicly-viewable number of followers, and therefore an absence of influencers, users build authentic relationships in an environment where there isn't pressure to increase their numbers of followers or likes. ## How we built it Common Grounds is composed of two main components: a React frontend and a Python backend server. On the frontend, we use Firebase Auth for login, Twilio Video for video calling, and WebSockets for live, bidirectional client-server communication. Our frontend uses the NextJS React framework and is deployed to Vercel. On the backend, we used the AIOHTTP Python library to serve HTTP and Websocket requests, Firestore for data persistence, Twilio Video for video calling, and OpenAI GPT-3 for intelligent discussion prompt generation. Our backend is deployed to Azure web apps. ## Challenges we ran into * Designing a cohesive user experience * Deploying the backend server and setting up SSL * Complex state management and WebSocket connection issues on the frontend ## What's next for Common Grounds * Closed Captioning: for increased accessibility for those who may be deaf or hearing-impaired could be extended to live language translation to increase diversity of users * Direct Messaging: allow users to message their connections, to plan times to continue their conversations * More Sophisticated Matching & Prompts: over time, learn what type of matches yield the most meaningful discussion based on statistics such as duration of call and friend rate
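As a rough sketch of the AIOHTTP WebSocket piece described above (route name, payload shape, and the placeholder prompt logic are illustrative assumptions):

```python
# Minimal sketch of an AIOHTTP server with a WebSocket endpoint for live,
# bidirectional client-server messages. Route and payload shape are illustrative;
# in the real app this is where matching state and GPT-3 prompts would be handled.
from aiohttp import WSMsgType, web

async def ws_handler(request: web.Request) -> web.WebSocketResponse:
    ws = web.WebSocketResponse()
    await ws.prepare(request)
    async for msg in ws:
        if msg.type == WSMsgType.TEXT:
            # Echo back a placeholder follow-up prompt for the discussion.
            await ws.send_json({"prompt": f"Follow-up to: {msg.data}"})
    return ws

app = web.Application()
app.add_routes([web.get("/ws", ws_handler)])

if __name__ == "__main__":
    web.run_app(app, port=8080)
```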
losing
## Inspiration We wanted to create a game during quarantine to ease the boredom, so we developed a text-based endless dungeon crawler. ## What it does Dungeon Dwellers is a dungeon-crawling game wherein you fight your way up an endless dungeon against monsters to find better gear and level up. ## How we built it We built it using Java and Java Swing for our GUI. Java was used to create all the objects that provide the game's functionality, and Swing was used to create a GUI for ease of use. ## Challenges we ran into We had challenges integrating the front and back end. ## Accomplishments that we're proud of We are proud that the game functions as intended and has a degree of randomness, which makes every floor of the game feel a little different. ## What we learned We learned how to work in a team while completing a common goal. Each person had their own tasks to complete, and when they came together everything worked well. ## What's next for Dungeon Dwellers We will continue to work on the game in order to refine it and add more features to make it an even better game.
## Inspiration My partner and I were feeling nostalgic and wanted to take a shot at recreating an old-style top-down RPG, similar to games like The Legend of Zelda. We started planning it out, and after not too long, we were off to the races! ## What it does Bringing together our programming and graphical experience, we managed to produce a game that provides entertainment, as well as the framework for something potentially bigger in the future. ## How I built it All programming was done in the Eclipse IDE, written in Java. No libraries were used, and everything was made from scratch. Graphics were done in MS Paint / Linux Pinta, using reference images from games such as The Legend of Zelda for inspiration. ## Challenges I ran into Time management, mostly. The project was quite an ambitious one for 36 hours, and required some crunching near the end to squeeze in the features we wanted; even then not all features were included, such as a skill tree or multiple NPCs. ## Accomplishments that I'm proud of We managed to pull off quite a bit in the 36 hours, including creating engaging combat mechanics and quirky visual effects to make the experience feel more true to its nostalgic roots. I am especially proud of managing to create a world generator that would create swathes of rock, albeit a bit excessive at times. ## What I learned Things certainly will not all go to plan, and that is not necessarily a bad thing; as you go along you will think of better ways of doing things and more efficient ways to code what you're doing. The plan for the game should be considered an outline, and changes can and certainly will be made! ## What's next for Islander's Misfortune I've spoken with my partner and we are debating extending it to include the features we wanted to add over the weekend, including skills, leveling, more NPCs, and a more diverse map and build system. I have no doubt this will not be the last time we work on the project, as we both have things we want to tinker with and adjust.
## Inspiration After seeing the breakout success that was Pokemon Go, my partner and I were motivated to create our own game that was heavily tied to physical locations in the real world. ## What it does Our game is supported on every device that has a modern web browser, with absolutely no installation required. You walk around the real world, fighting your way through procedurally generated dungeons that are tied to physical locations. If you find that a dungeon is too hard, you can pair up with some friends and tackle it together. Unlike Niantic, who monetized Pokemon Go using micro-transactions, we plan to monetize the game by allowing local businesses to bid on enhancements to their location in the game world. For example, a local coffee shop could offer an in-game bonus to players who purchase a coffee at their location. By offloading the cost of the game onto businesses instead of players, we hope to create a less "stressful" game, meaning players will spend more time having fun and less time worrying about when they'll need to cough up more money to keep playing. ## How We built it The stack for our game is built entirely around the Node.js ecosystem: Express, Socket.IO, gulp, webpack, and more. For easy horizontal scaling, we make use of Heroku to manage and run our servers. Computationally intensive one-off tasks (such as image resizing) are offloaded onto AWS Lambda to help keep server costs down. To improve the speed at which our website and game assets load, all static files are routed through MaxCDN, a content delivery network with over 19 datacenters around the world. For security, all requests to any of our servers are routed through CloudFlare, a service which helps to keep websites safe using traffic filtering and other techniques. Finally, our public-facing website makes use of Mithril MVC, an incredibly fast and light single-page-app framework. Using Mithril allows us to keep our website incredibly responsive and performant.
losing
## Inspiration We were aiming for an IM-meets-MS-Paint experience, and we think it turned out that way. ## What it does Users can create conversations with other users by putting a list of comma-separated usernames in the To field. ## How we built it We used Node.js combined with the Express.js web framework, Jade for templating, Sequelize as our ORM, and PostgreSQL as our database. ## Challenges we ran into Server-side challenges with getting Node running, overloading the server with too many requests, and the need for extensive debugging. ## Accomplishments that we're proud of Getting a (mostly) fully functional chat client up and running in 24 hours! ## What we learned We learned a lot about JavaScript, asynchronous operations and how to properly use them, as well as how to deploy a production-environment Node app. ## What's next for SketchWave We would like to improve the performance and security of the application, then launch it for our friends and people in our residence to use. We would also like to include mobile platform support via a responsive web design, and possibly in the future even have a mobile app.
## Inspiration Kevin, one of our team members, is an enthusiastic basketball player, and frequently went to physiotherapy for a knee injury. He realized that a large part of the physiotherapy actually happened away from the doctor's office - he needed to complete certain exercises with perfect form at home, in order to consistently improve his strength and balance. Through his story, we realized that so many people across North America require physiotherapy for far more severe conditions, be it from sports injuries, spinal cord injuries, or recovery from surgeries. Likewise, they will need to do at-home exercises individually, without supervision. For these patients, any repeated error can actually cause a deterioration in health. Therefore, we decided to leverage computer vision technology to provide real-time feedback to patients to help them improve their rehab exercise form. At the same time, reports will be generated for the doctors, so that they may monitor the progress of patients and prioritize their urgency accordingly. We hope that phys.io will strengthen the feedback loop between patient and doctor, and accelerate the physical rehabilitation process for many North Americans. ## What it does Through a mobile app, patients can film and upload a video of themselves completing a certain rehab exercise. The video then gets analyzed using a machine vision neural network, such that the movements of each body segment are measured. This raw data is then further processed to yield measurements and benchmarks for the relative success of the movement. In the app, patients receive a general score for their physical health as measured against their individual milestones, tips to improve their form, and a timeline of progress over the past weeks. At the same time, the same video analysis is sent to the corresponding doctor's dashboard, in which the doctor receives a more thorough medical analysis of how the patient's body is working together and a timeline of progress. The algorithm will also provide suggestions for the doctor's treatment of the patient, such as prioritizing the next appointment or increasing the difficulty of the exercise. ## How we built it At the heart of the application is a Google Cloud Compute instance running together with a blobstore instance. The cloud compute cluster ingests raw video posted to blobstore and performs the machine vision analysis to yield the timescale body data. We used Google App Engine and Firebase to create the rest of the web application and APIs for the 2 types of clients we support: an iOS app, and a doctor's dashboard site. This manages day-to-day operations such as data lookup and account management, but also provides the interface for the mobile application to send video data to the compute cluster. Furthermore, the app engine syncs processed results and feedback from blobstore and populates them into Firebase, which is used as the database and data-sync. Finally, in order to generate reports for the doctors on the platform, we used stdlib's tasks and scalable one-off functions to process results from Firebase over time and aggregate the data into complete chunks, which are then posted back into Firebase. ## Challenges we ran into One of the major challenges we ran into was interfacing the technologies with each other. Overall, the data pipeline involves many steps that, while each is critical in itself, also involve too many diverse platforms and technologies for the time we had to build it.
## What's next for phys.io <https://docs.google.com/presentation/d/1Aq5esOgTQTXBWUPiorwaZxqXRFCekPFsfWqFSQvO3_c/edit?fbclid=IwAR0vqVDMYcX-e0-2MhiFKF400YdL8yelyKrLznvsMJVq_8HoEgjc-ePy8Hs#slide=id.g4838b09a0c_0_0>
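The writeup doesn't spell out the exact metrics, but a typical step after the machine-vision stage is converting keypoint coordinates into joint angles. A minimal sketch under that assumption; the keypoint format is illustrative and any pose model that outputs (x, y) coordinates would fit here.

```python
# Hedged sketch: compute a knee angle from three 2D pose keypoints
# (hip, knee, ankle). Keypoint values below are an illustrative example frame.
import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Example frame: a bent knee partway through a squat.
hip, knee, ankle = (0.50, 0.40), (0.52, 0.60), (0.70, 0.62)
print(joint_angle(hip, knee, ankle))
```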
## Inspiration The idea for journo came from a shower thought to create a journaling app that utilized the latest advancements in AI technology. We wanted to create an app that would not only allow users to journal their thoughts and experiences, but also provide them with personalized suggestions and insights based on, and to inform, their writing. ## What it does Journo is a journaling app that utilizes **GPT-3** and **Whisper** to provide users with a personalized journaling experience. The app allows users to both speak and write their thoughts, using Whisper's advanced transcription technology to generate accurate voice transcripts and GPT-3 to generate personalized suggestions and insights based on the user's writing. ## How we built it Journo was built using a combination of **React Native, TypeScript, Python** and a couple of other libraries. We used TypeScript and React Native for the front-end development and Python for the back-end. We also used Python for the integration of GPT-3 and Whisper. Finally, we used **Figma** and Photoshop to design our app layout and interactions. ## Challenges we ran into One of the biggest challenges we faced was integrating the different technologies we were using. We had to work through a lot of error-searching, cache crashes and some compatibility issues, but we were able to overcome (most of) them through teamwork, collaboration and critical thinking. We also faced some issues translating our Figma design into a fully-fleshed-out front-end, and in the end decided to go for functionality over form. We've instead attached our Figma design as a proof of concept for what's achievable with a little more time. ## Accomplishments that we're proud of We are proud of the fact that two of us were attending our first-ever hackathons, and that we all worked really well together while learning new languages. Additionally, we are proud of the fact that we were able to successfully integrate GPT-3 and Whisper into our app, which was a major accomplishment for our team. ## What we learned We learned a lot about integrating different technologies and working as a team under pressure. We also gained valuable experience in using React Native, TypeScript and Python. ## What's next for journo We are planning to continue to develop and improve journo, with a focus on user experience and adding new features. We are also looking into expanding Journo to other platforms, such as web and iOS. We believe Journo has the potential to be a valuable tool for people looking to journal and improve their mental health. We are excited to see where Journo will go!
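As an illustration of the transcription-plus-suggestion flow, a minimal sketch in Python; the Whisper model size, the prompt wording, and the use of the legacy (pre-1.0) OpenAI completions endpoint are assumptions, not necessarily what journo ships.

```python
# Hedged sketch of the journaling pipeline: transcribe a voice note with
# Whisper, then ask GPT-3 for a gentle reflection prompt. Expects the
# OPENAI_API_KEY environment variable; uses the legacy openai<1.0 SDK.
import openai
import whisper

def journal_entry(audio_path: str) -> tuple[str, str]:
    transcript = whisper.load_model("base").transcribe(audio_path)["text"]
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Suggest one gentle journaling prompt based on:\n{transcript}",
        max_tokens=60,
    )
    return transcript, completion.choices[0].text.strip()
```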
winning
## Inspiration College students are busy, juggling classes, research, extracurriculars and more. On top of that, creating a todo list and schedule can be overwhelming and stressful. Personally, we used Google Keep and Google Calendar to manage our tasks, but these tools require constant maintenance and force the scheduling and planning onto the user. Several tools such as Motion and Reclaim help business executives to optimize their time and maximize productivity. After talking to our peers, we realized college students are not solely concerned with maximizing output. Instead, we value our social lives, mental health, and work-life balance. With so many scheduling applications centered around productivity, we wanted to create a tool that works **with** users to maximize happiness and health. ## What it does Clockwork consists of a scheduling algorithm and full-stack application. The scheduling algorithm takes in a list of tasks and events, as well as individual user preferences, and outputs a balanced and doable schedule. Tasks include a name, description, estimated workload, dependencies (either a start date or previous task), and deadline. The algorithm first traverses the graph to augment nodes with additional information, such as the eventual due date and total hours needed for linked sub-tasks. Then, using a greedy algorithm, Clockwork matches your availability with the closest task sorted by due date. After creating an initial schedule, Clockwork finds how much free time is available, and creates modified schedules that satisfy user preferences such as workload distribution and weekend activity. The website allows users to create an account and log in to their dashboard. On the dashboard, users can quickly create tasks using both a form and a graphical user interface. Due dates and dependencies between tasks can be easily specified. Finally, users can view tasks due on a particular day, abstracting away the scheduling process and reducing stress. ## How we built it The scheduling algorithm uses a greedy algorithm and is implemented with Python, Object Oriented Programming, and MatPlotLib. The backend server is built with Python, FastAPI, SQLModel, and SQLite, and tested using Postman. It can accept asynchronous requests and uses a type system to safely interface with the SQL database. The website is built using functional ReactJS, TailwindCSS, React Redux, and the uber/react-digraph GitHub library. In total, we wrote about 2,000 lines of code, split 2/1 between JavaScript and Python. ## Challenges we ran into The uber/react-digraph library, while popular on GitHub with ~2k stars, has little documentation and some broken examples, making development of the website GUI more difficult. We used an iterative approach to incrementally add features and debug various bugs that arose. We initially struggled setting up CORS between the frontend and backend for the authentication workflow. We also spent several hours formulating the best approach for the scheduling algorithm and pivoted a couple times before reaching the greedy algorithm solution presented here. ## Accomplishments that we're proud of We are proud of finishing several aspects of the project. The algorithm required complex operations to traverse the task graph and augment nodes with downstream due dates. The backend required learning several new frameworks and creating a robust API service. The frontend is highly functional and supports multiple methods of creating new tasks. 
We also feel strongly that this product has real-world usability, and are proud of validating the idea during YHack. ## What we learned We both learned more about Python and Object Oriented Programming while working on the scheduling algorithm. Using the react-digraph package also was a good exercise in reading documentation and source code to leverage an existing product in an unconventional way. Finally, thinking about the applications of Clockwork helped us better understand our own needs within the scheduling space. ## What's next for Clockwork Aside from polishing the several components worked on during the hackathon, we hope to integrate Clockwork with Google Calendar to allow for time blocking and a more seamless user interaction. We also hope to increase personalization and allow all users to create schedules that work best with their own preferences. Finally, we could add a metrics component to the project that helps users improve their time blocking and more effectively manage their time and energy.
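To make the greedy step concrete, here is a stripped-down sketch of matching free time slots to tasks sorted by due date. The task and slot shapes are simplified assumptions, and the dependency-graph augmentation described above is omitted.

```python
# Simplified sketch of the greedy step: walk through available slots in
# chronological order and assign each to the unfinished task with the
# earliest deadline. Task and slot shapes are simplified assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    due_slot: int        # index of the last slot before the deadline
    hours_left: float    # estimated workload remaining

def greedy_schedule(tasks: list[Task], free_slots: list[int]) -> dict[int, str]:
    """Map each free slot index to a task name, earliest deadline first."""
    schedule: dict[int, str] = {}
    pending = sorted(tasks, key=lambda t: t.due_slot)
    for slot in sorted(free_slots):
        for task in pending:
            if task.hours_left > 0 and slot <= task.due_slot:
                schedule[slot] = task.name
                task.hours_left -= 1
                break
    return schedule

# Example: two tasks competing for three free one-hour slots.
print(greedy_schedule(
    [Task("pset 3", due_slot=2, hours_left=2), Task("essay", due_slot=5, hours_left=1)],
    free_slots=[1, 2, 3],
))  # -> {1: 'pset 3', 2: 'pset 3', 3: 'essay'}
```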
## Inspiration Our inspiration for this project was our experience as students. We believe students need a more digestible feed when it comes to due dates. Having to manually plan for homework, projects, and exams can be annoying and time-consuming. StudyHedge is here to lift the scheduling burden off your shoulders! ## What it does StudyHedge uses your Canvas API token to compile a list of upcoming assignments and exams. You can configure a profile detailing personal events, preferred study hours, number of assignments to complete in a day, and more. StudyHedge combines this information to create a manageable study schedule for you. ## How we built it We built the project using React (Front-End), Flask (Back-End), Firebase (Database), and Google Cloud Run. ## Challenges we ran into Our biggest challenge resulted from difficulty connecting Firebase and FullCalendar.io. Due to inexperience, we were unable to resolve this issue in the given time. We also struggled with using the Eisenhower Matrix to come up with the right formula for weighting assignments. We discovered that there are many ways to do this. After exploring various branches of mathematics, we settled on a simple formula (Rank = weight / time^2). ## Accomplishments that we're proud of We are incredibly proud that we have a functional Back-End and that our UI is visually similar to our wireframes. We are also excited that we performed so well together as a newly formed group. ## What we learned Keith used React for the first time. He learned a lot about responsive front-end development and managed to create a remarkable website despite encountering some issues with third-party software along the way. Gabriella designed the UI and helped code the front-end. She learned about input validation and designing features to meet functionality requirements. Eli coded the back-end using Flask and Python. He struggled with using Docker to deploy his script but managed to conquer the steep learning curve. He also learned how to use the Twilio API. ## What's next for StudyHedge We are extremely excited to continue developing StudyHedge. As college students, we hope this idea can be as useful to others as it is to us. We want to scale this project and eventually expand its reach to other universities. We'd also like to add more personal customization and calendar integration features. We are also considering implementing AI suggestions.
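A tiny sketch of the ranking formula mentioned above; the assignment fields, the reading of "time" as hours until the due date, and the example values are illustrative assumptions.

```python
# Tiny sketch of the ranking formula, Rank = weight / time^2.
# Field names and the interpretation of "time" are illustrative assumptions.
def rank(weight: float, hours_until_due: float) -> float:
    """Higher rank = schedule sooner; heavy assignments due soon win."""
    return weight / (hours_until_due ** 2)

assignments = [
    {"title": "Calc problem set", "weight": 10, "hours_until_due": 24},
    {"title": "History essay",    "weight": 25, "hours_until_due": 96},
]
assignments.sort(key=lambda a: rank(a["weight"], a["hours_until_due"]), reverse=True)
print([a["title"] for a in assignments])  # the calc pset outranks the essay
```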
## Inspiration As full-time college students who are also working part-time, it is a daily struggle to keep our schedules organized while managing our academic assignments and personal responsibilities. The demands of our schedules often lead us to prioritize tasks over essential self-care. To help college students like us strike a balance between their academic or work responsibilities and their self-care routine, our team came up with a scheduling program that utilizes AI to help us stay organized. ## What it does The program guides users to input their daily routines, such as wake-up time, mealtimes, and bedtime, forming the foundation for a personalized schedule. Users can add tasks by providing details like title, description, and estimated time, tailoring the schedule to their needs. The AI then generates a balanced schedule, integrating tasks with essential self-care activities like sleep and meals. This approach promotes efficient time management, helping users improve both productivity and well-being while maintaining a sustainable balance between work and self-care. ## How we built it First, we came up with a design in Figma that served as our outline of the website before adding any scripts or functions. We soon began writing code using backend tools such as Node.js and Express.js, as well as frontend tools such as HTML, CSS, and JavaScript. After that we set up a Google Cloud server and created an API key, allowing us to use Gemini-Pro in our code. Using our varied skillsets, we produced a website built on HTML, CSS, and JavaScript, powered by Google's Gemini AI and capable of reading user input and generating schedules. ## Challenges we ran into We had a few problems with implementing Gemini AI in our project, but the one that kept persisting was crafting a prompt that would fulfill our intended purpose of creating an effective and organized schedule for users. ## Accomplishments that we're proud of As we are all beginners, we came to the hackathon without expecting much, so we're proud of the fact that we learned so much in a mere 36 hours and managed to build a program that works the way we wanted it to work. ## What we learned We were all really proud of what we accomplished in these 36 hours. Throughout the experience, we learned how to bond and learn from each other's skills when it comes to coding. This event was really impactful for us as it taught us how to utilize AI in our programs, and how to use APIs, JavaScript and Express. ## What's next for Schedulify We would like to add features that give users more customization over their preferred schedule. We would also like to implement the Google Calendar API so that our website can cross-reference the generated schedule with Google Calendar. We also plan to bring Schedulify to mobile devices, allowing for more accessibility.
winning
## Inspiration Enabling Accessible Transportation for Those with Disabilities. AccessRide is a cutting-edge website created to transform the transportation experience for those with impairments. We want to offer a welcoming, trustworthy, and accommodating ride-hailing service that is suited to the particular requirements of people with mobility disabilities, since we are aware of the special obstacles they encounter. ## What it does Our goal is to close the accessibility gap in the transportation industry and guarantee that everyone has access to safe and practical travel alternatives. We link passengers with disabilities to skilled, sympathetic drivers who have been trained to offer specialized assistance and fulfill their particular needs using the AccessRide app. Accessibility:- The app focuses on ensuring accessibility for passengers with disabilities by offering vehicles equipped with wheelchair ramps or lifts, spacious interiors, and other necessary accessibility features. Specialized Drivers:- The app recruits drivers who are trained to provide assistance and support to passengers with disabilities. These drivers are knowledgeable about accessibility requirements and are committed to delivering a comfortable experience. Customized Preferences:- Passengers can specify their particular needs and preferences within the app, such as requiring a wheelchair-accessible vehicle, additional time for boarding and alighting, or any specific assistance required during the ride. Real-time Tracking:- Passengers can track the location of their assigned vehicle in real time, providing peace of mind and ensuring they are prepared for pick-up. Safety Measures:- The app prioritizes passenger safety by conducting driver background checks, ensuring proper vehicle maintenance, and implementing safety protocols to enhance the overall travel experience. Seamless Payment:- The app offers convenient and secure payment options, allowing passengers to complete their transactions electronically, reducing the need for physical cash handling. ## How we built it We built it using Django, PostgreSQL and Jupyter Notebook for driver selection. ## Challenges we ran into Ultimately, the business impact of AccessRide stems from its ability to provide a valuable and inclusive service to people with disabilities. By prioritizing their needs and ensuring a comfortable and reliable transportation experience, the app can drive customer loyalty, attract new users, and make a positive social impact while growing as a successful business. To maintain quality service, AccessRide includes a feedback and rating system. This allows passengers to provide feedback on their experience and rate drivers based on their level of assistance, vehicle accessibility, and overall service quality. This was a challenging part of the event. ## Accomplishments that we're proud of We are proud that we completed our project. We look forward to developing more projects. ## What we learned We learned about the concepts of Django and PostgreSQL. We also learned several machine learning algorithms and implemented them as well. ## What's next for AccessRide - Comfortable ride for all abilities In conclusion, AccessRide is an innovative and groundbreaking project that aims to transform the transportation experience for people with disabilities. By focusing on accessibility, specialized driver training, and a machine learning algorithm, the app sets itself apart from traditional ride-hailing services.
It creates a unique platform that addresses the specific needs of passengers with disabilities and ensures a comfortable, reliable, and inclusive transportation experience. ## Your Comfort, Our Priority "Ride with Ease, Ride with Comfort"
## Inspiration Our project is inspired by the sister of one of our creators, Joseph Ntaimo. Joseph often needs to help locate wheelchair-accessible entrances to accommodate her, but they can be hard to find when buildings have multiple entrances. Therefore, we created our app as an innovative piece of assistive tech to improve accessibility across the campus. ## What it does The user can find wheelchair-accessible entrances with ease and get directions on where to find them. ## How we built it We started off using MIT's Accessible Routes interactive map to see where the wheelchair-friendly entrances were located at MIT. We then inspected the JavaScript code running behind the map to find the latitude and longitude coordinates for each of the wheelchair locations. We then created a Python script that filtered out the latitude and longitude values, ignoring the other syntax from the coordinate data, and stored the values in separate text files. We tested whether our method would work in Python first, because it is the language we are most familiar with, by using string concatenation to add the proper Java syntax to the latitude and longitude points. Then we printed all of the points to the terminal and imported them into Android Studio. After being certain that the method would work, we uploaded these files into the raw folder in Android Studio and wrote code in Java that would iterate through both of the latitude/longitude lists simultaneously and plot them onto the map. The next step was learning how to change the color and image associated with each marker, which was very time-intensive, but led us to having our custom logo for each of the markers. Separately, we designed elements of the app in Adobe Illustrator and imported logos and button designs into Android Studio. Then, through trial and error (and YouTube videos), we figured out how to make buttons link to different pages, so we could have both a FAQ page and the map. Then we combined both of the apps together on top of the original maps directory and ironed out the errors so that the pages would display properly. ## Challenges we ran into/Accomplishments We had a lot more ideas than we were able to implement. Stripping our app down to basic, reasonable features was something we had to tackle in the beginning, but it kept changing as we discovered the limitations of our project throughout the 24 hours. Therefore, we had to sacrifice features that we would otherwise have loved to add. A big difficulty for our team was combining our different elements into a cohesive project. Since our team split up the usage of Android Studio, Adobe Illustrator, and programming using the Google Maps API, it was most difficult to integrate all our work together. We are proud of how effectively we were able to split up our team's roles based on everyone's unique skills. In this way, we were able to be maximally productive and play to our strengths. We were also able to add Boston University accessible entrances in addition to MIT's, which proved that we could adapt this project for other schools and locations, not just MIT. ## What we learned We used Android Studio for the first time to make apps. We discovered how much the Google API had to offer, allowing us to make our map and include features such as instant directions to a location. This helped us realize that we should use our resources to their full capabilities.
## What's next for HandyMap If given more time, we would have added many features such as accessibility for visually impaired students to help them find entrances, alerts for issues with accessing ramps and power doors, a community rating system of entrances, using machine learning and the community feature to auto-import maps that aren't interactive, and much, much more. Most important of all, we would apply it to all colleges and even anywhere in the world.
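As a rough sketch of the coordinate-filtering script described under "How we built it" (the scraped JavaScript format, file names, and the regex are assumptions about what the map's source looked like):

```python
# Hedged sketch: filter latitude/longitude pairs out of scraped JavaScript
# and write them to separate text files. Input format and regex are
# assumptions about what the interactive map's source looked like.
import re

def extract_coords(js_source: str) -> list[tuple[float, float]]:
    """Pull (lat, lng) pairs such as '42.3601, -71.0942' out of raw JS."""
    pattern = r"(-?\d{1,3}\.\d+)\s*,\s*(-?\d{1,3}\.\d+)"
    return [(float(lat), float(lng)) for lat, lng in re.findall(pattern, js_source)]

if __name__ == "__main__":
    with open("accessible_routes.js") as fh:
        coords = extract_coords(fh.read())
    with open("latitudes.txt", "w") as lat_f, open("longitudes.txt", "w") as lng_f:
        for lat, lng in coords:
            lat_f.write(f"{lat}\n")
            lng_f.write(f"{lng}\n")
```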
## Inspiration Imagine you are attending a friend's event and you don't have a ride to and from the event. The first solution that comes to mind is Uber or Lyft. But what if there was a more economical, eco-friendly solution? That's exactly what we created at RiDir. ## What it does RiDir is a simple solution that allows you to get a ride from another attendee at your event. This eliminates the social awkwardness of asking an acquaintance or your friend's friend for a ride. This app is especially useful for high school and college students who prioritize economy. The application allows hosts to create an event for up to 50 people. The attendees can then register through the app as passengers if they don't have a ride, or as drivers if they are willing to drive someone home. ## How we built it An algorithm is responsible for assigning passengers to a driver in a way that creates an optimal route to minimize every driver's travel time. RiDir utilizes the Google Maps API to plot the addresses and obtain the information necessary to create optimal routes. The application was developed in Java and utilized Android Studio to integrate the Google Maps API. ## Challenges we ran into One of the challenges we ran into was working with each other remotely, as some of the members had technical difficulties and weren't able to be present for some of the meetings. This held us back a few times during the creation of the application. ## Accomplishments that we're proud of The accomplishments that we are proud of are that each one of us utilized a software technology that we had never used before. For example, for the front-end, we learned how to make UI/UX designs when designing the app in Figma, how to create a video using Canva for the presentation, and we also played around with the Google Maps API for the implementation of the app. We split the work according to the team's preferences, skills, and what each member was more interested in learning. ## What we learned As a team, we learned how to work efficiently by setting deadlines for when we want our work to be done. We were also on a Zoom call during the majority of the time, which allowed us to ask questions if we were stuck and help others in need of assistance. This helped us learn the importance of asking questions and answering these questions to avoid confusion and to solve problems before proceeding to the next steps. ## What's next for RiDir In the future, our team hopes to introduce new features to our app. These features include integrating our software with planning platforms such as Eventbrite to allow users to sign up for a ride in only a few clicks, and expanding our audience to include larger events with more than 50 people where users are not necessarily connected through mutual friends and acquaintances. In addition, our team sees growth in creating an on-site signup option for users to find or offer a ride at the event. Finally, we plan to expand the app so that friends can share their carpooling information and location with each other to increase safety and accountability.
winning
## Inspiration As international students in the United States, we've faced the daunting challenge of navigating an unfamiliar and complex healthcare system. Far from home and family, we've experienced firsthand how expensive and difficult it can be to access proper healthcare. This struggle inspired us to create VitalPath - a platform designed to bridge the gap between individuals and healthcare resources, ensuring people can stay informed about their health and seek attention before situations become critical. ## What it does VitalPath is a user-friendly web platform that: * Collects self-reported health data from users * Uses AI to analyze symptoms and predict potential health conditions * Provides recommendations for managing health concerns * Connects users to remote healthcare resources when urgent attention is needed ## How we built it Our development process involved several key components: * Data Source: We utilized datasets from the CDC (Centers for Disease Control and Prevention) to train our machine learning model. * Machine Learning: We developed a custom AI model capable of analyzing health data and predicting potential conditions. * Frontend: We built a responsive and intuitive user interface using React. * Backend: While we haven't fully implemented the API yet, we've laid the groundwork for integrating our ML model with the web application. ## Challenges we faced Throughout the development of VitalPath, we encountered several challenges: * **Data Complexity**: Working with health-related data required careful handling and interpretation. * **ML Integration**: Figuring out how to effectively integrate a machine learning model into a web application proved to be a complex task. * **Time Constraints**: Our ambitious goals were hampered by the hackathon's time limits, preventing us from fully implementing the API as initially planned. ## Accomplishments that we're proud of Despite the challenges, we're proud of several achievements: * Developing a functional website that lays the foundation for our vision * Training a machine learning model using real-world CDC data * Creating a user-friendly interface for inputting health data * Learning and applying new skills in a high-pressure, time-constrained environment ## What we learned This project was a significant learning experience for our team: * Training and working with machine learning models for health predictions * Developing responsive web applications using React * Understanding the complexities of healthcare data and its applications * Collaborating effectively as a team under tight deadlines * Project planning and management in a hackathon setting ## What's next for VitalPath We're excited about the future of VitalPath. Our next steps include: * Completing the integration of our trained model with the web application * Implementing and optimizing the API for efficient data processing * Continually refining our machine learning model to improve prediction accuracy * Expanding our platform with additional features like health education resources * Exploring partnerships with healthcare organizations to enhance our service offerings Our ultimate goal is to evolve VitalPath into a comprehensive platform for health monitoring and assistance, with a particular focus on serving underserved communities and individuals navigating unfamiliar healthcare systems.
## Inspiration We were inspired by several issues users raised within the healthcare industry, as well as our own experiences. By browsing through current news and statistics in the healthcare sector, as well as the push towards remote options after the pandemic, we found that there is a lack of remote connectivity options for patients. Furthermore, doctors' diagnoses often lack transparency and accessibility, as they are usually kept within the clinic and not with the patient. We sought to solve this problem by creating an application that helps users better understand their health. ## What it does Our application strives to enhance and improve patients' relationships with their doctors by connecting with them remotely. We target key pain points that users raised in our secondary research. Users report that they have trouble booking appointments with their doctor. They mention that they have trouble tracking their own prescriptions and medication. Lastly, they say that they have trouble understanding their diagnosis beyond the doctor's office. To remediate these pain points, we created these key features. The first feature is a booking feature that allows the user to book appointments virtually with their doctor according to both schedules. The second feature is a prescription tracker that reminds users when to take their medications and shows the doctor's instructions for them. The last feature is a monitoring feature that shows the user's diagnosis over time. Using the data from monitoring and machine learning methods, we also create an essential risk assessment that suggests areas where users may be at risk and should contact their doctor. ## How we built it We designed the product on Figma and built the backend using Express and Node.js. The database was stored using SQLite3, and the frontend was built using React. ## Challenges we ran into In order to inform our patients ahead of time about how much at risk they are of developing heart disease based on their general health metrics, we implemented a logistic regression model using the scikit-learn machine learning library in Python. To do so, our model was first trained on a dataset freely available on Kaggle, and then the trained model was used on a patient's health records. However, due to lack of time, our model could not be efficiently trained to yield a higher accuracy. Moving forward, we will consider training our model on different datasets as well, so it's well equipped to better predict risks of other health issues. In addition, during the initial brainstorming stage, we brainstormed many features which could not all be implemented due to the time constraints of the hackathon. To address this, we prioritized the most important features of the application and moved lower-priority and more time-intensive features to future development. During brainstorming, we also had trouble laying out a clear flow for the app, but the flow improved as we discussed it more. Structuring the backend was also a challenge, as we had to brainstorm an efficient way to store many types of data in our app, including health metrics, doctor information, prescriptions and appointments. In the end, we developed a database schema to store data effectively and be able to retrieve it easily before coding it in. ## Accomplishments that we're proud of We are very proud that we were able to plan and implement this comprehensive product, which helps people connect with their health and with doctors, and improve their overall well-being.
## Acknowledgements The cleaned dataset used to train our logistic regression model was available to us from Kaggle: <https://www.kaggle.com/datasets/alexteboul/heart-disease-health-indicators-dataset?resource=download>, which is a cleaned version of <https://www.kaggle.com/datasets/cdc/behavioral-risk-factor-surveillance-system>. Both of these datasets are CC0: Public Domain licensed. ## What we learned Teamwork is crucial. We collaborated and strove to create the best product with our respective skillsets. We learned about the entire product creation process, from designing the frontend and backend to researching and implementing the product. We learned how to collaborate well with a tight deadline and to create a cohesive product. ## What's next for OPatience * Further training of the ML algorithm to improve accuracy of prediction * Enhancing security of data storage in the backend * User onboarding process
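A minimal sketch of the scikit-learn step described above, trained on the Kaggle heart-disease indicators dataset; the CSV path is a placeholder and the target column name should be checked against the downloaded file.

```python
# Minimal sketch: train a logistic regression risk model on the Kaggle
# heart-disease indicators dataset. CSV path and target column name are
# assumptions; adjust them to the downloaded file.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("heart_disease_health_indicators.csv")
X = df.drop(columns=["HeartDiseaseorAttack"])   # self-reported health metrics
y = df["HeartDiseaseorAttack"]                  # 1 = prior heart disease/attack

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Risk estimate for a single patient's record (probability of the positive class).
print("risk:", model.predict_proba(X_test.iloc[[0]])[0][1])
```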
## Inspiration As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have many resources to start with. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process! ## What it does Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points! ## How we built it We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and creating a more in-depth report. We used Firebase for authentication and a cloud database to keep track of users. For our user and transaction data, as well as making/managing loyalty points, we used the RBC API. ## Challenges we ran into Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time! Besides API integration, definitely working without any sleep was the hardest part! ## Accomplishments that we're proud of Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon is the new friends we've found in each other :) ## What we learned I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a tight time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth). ## What's next for Savvy Saver Demos! After that, we'll just have to see :)
losing
## Inspiration How much carbon does farming really sequester? This is one question that inspired us to create this solution. With governments around the world increasingly interested in taxing farmers for their emissions, we wanted to find a way to calculate them. ## What it does A drone with a variety of sensors measures the CO2, CH4 and albedo of the land underneath it to estimate the actual carbon offset. The data collected by the drone is sent to our online server, where it is fetched by MATLAB to calculate the carbon offset. The drone also has sensors to measure water quality. In the future the drone will also have soil moisture detection capability using microwaves, similar to remote sensing satellites. With the offset we are able to calculate the carbon credits, which can then be traded over the Pi platform. By using blockchain we enable: 1) No double counting of credits 2) Wider participation from around the world (Pi already has over 35 million users) 3) Only algorithmically calculated credits in circulation. ## How we built it Using Arduino, Pi, MATLAB ## Challenges we ran into Pi was a tough challenge to implement. Loading sensors onto the drone was another big challenge. ## Accomplishments that we're proud of We were able to get all sensors to work, collect data in real time and run MATLAB analysis on it ## What we learned ## What's next for We Are Sus Farms
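As a loose sketch of the server-to-analysis hand-off described above, the snippet below fetches the latest drone readings over HTTP and reduces them to a single offset figure. The endpoint URL, JSON field names, and the weighting constants are all placeholders; the project's actual carbon-offset calculation is done in MATLAB.

```python
# Placeholder sketch of the server-to-analysis step: fetch drone sensor
# readings over HTTP and reduce them to a single offset figure.
# The URL, JSON fields, and weighting constants are hypothetical -- the
# team's real carbon-offset calculation runs in MATLAB.
import requests

readings = requests.get("http://example.com/api/drone/latest", timeout=10).json()

co2_ppm = readings["co2_ppm"]   # assumed field names
ch4_ppm = readings["ch4_ppm"]
albedo = readings["albedo"]     # 0..1 surface reflectivity

# Purely illustrative weighting: penalise greenhouse gases, credit reflectivity.
offset_score = albedo * 100 - (co2_ppm * 0.01 + ch4_ppm * 0.25)
print(f"illustrative carbon-offset score: {offset_score:.2f}")
```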
## Inspiration People are increasingly aware of climate change but lack actionable steps. Everything in life has a carbon cost, but it's difficult to understand, measure, and mitigate. Information about the carbon footprints of products is often inaccessible for the average consumer, and alternatives are time-consuming to research and find. ## What it does With GreenWise, you can link your email or upload receipts to analyze your purchases and suggest products with lower carbon footprints. By tracking your carbon usage, it helps you understand and improve your environmental impact. It provides detailed insights, recommends sustainable alternatives, and facilitates informed choices. ## How we built it We started by building a tool that utilizes computer vision to read information off of a receipt, an API to gather information about the products, and finally the ChatGPT API to categorize each of the products. We also set up an alternative way of gathering information in which the user forwards digital receipts to a unique email. Once we finished the process of getting information into storage, we built a web scraper to gather the carbon footprints of thousands of items for sale in American stores, and built a database that contains these, along with an AI-vectorized form of each product's description. Vectorizing the product titles allowed us to quickly judge the linguistic similarity of two products by doing a quick mathematical operation. We utilized this to make the application compare each product against the database, identifying products that are highly similar with a reduced carbon output. This web application was built with a Python Flask backend and Bootstrap for the frontend, and we utilize ChromaDB, a vector database that allowed us to efficiently query through vectorized data. ## Accomplishments that we're proud of In 24 hours, we built a fully functional web application that uses real data to provide real, actionable insights that allow users to reduce their carbon footprint ## What's next for GreenWise We'll be expanding e-receipt integration to support more payment processors, making the app seamless for everyone, and forging partnerships with companies to promote eco-friendly products and services to our consumers [Join the waitlist for GreenWise!](https://dea15e7b.sibforms.com/serve/MUIFAK0jCI1y3xTZjQJtHyTwScsgr4HDzPffD9ChU5vseLTmKcygfzpBHo9k0w0nmwJUdzVs7lLEamSJw6p1ACs1ShDU0u4BFVHjriKyheBu65k_ruajP85fpkxSqlBW2LqXqlPr24Cr0s3sVzB2yVPzClq3PoTVAhh_V3I28BIZslZRP-piPn0LD8yqMpB6nAsXhuHSOXt8qRQY)
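A minimal sketch of the similarity lookup described above, using ChromaDB's in-memory client: vectorize product descriptions, query with a receipt line item, and keep only hits with a lower footprint. The product names and footprint numbers are made up for illustration; the real app uses a persisted database populated by its scraper.

```python
# Minimal sketch of the ChromaDB similarity lookup. Products and kg CO2e
# values are invented; the real system persists scraped data on disk.
import chromadb

client = chromadb.Client()  # in-memory instance for the example
products = client.create_collection(name="products")

products.add(
    ids=["p1", "p2", "p3"],
    documents=["organic whole milk 1L", "oat milk barista 1L", "almond milk 1L"],
    metadatas=[{"kg_co2e": 1.9}, {"kg_co2e": 0.9}, {"kg_co2e": 0.7}],
)

# A line item parsed from a receipt: find linguistically similar products,
# then keep only those with a lower footprint than the purchased item.
purchased_footprint = 1.9
hits = products.query(query_texts=["whole milk 1 litre"], n_results=3)

for doc, meta in zip(hits["documents"][0], hits["metadatas"][0]):
    if meta["kg_co2e"] < purchased_footprint:
        print(f"greener alternative: {doc} ({meta['kg_co2e']} kg CO2e)")
```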
## Inspiration With the rise of IoT devices and the backbone support of the emerging 5G technology, BVLOS drone flights are becoming more readily available. According to CBInsights, Gartner, and IBISworld, this US$3.34B market has the potential for growth and innovation. ## What it does **Reconnaissance drone software that utilizes custom object recognition and machine learning to track wanted targets.** It performs at close to real-time speed with nearly 100% accuracy and allows a single operator to operate many drones at once. Bundled with a light, sleekly designed web interface, it is inexpensive to maintain and easy to operate. **There is a Snapdragon Dragonboard that runs physically on the drones, capturing real-time data and processing the video feed to identify targets. Identified targets are tagged and sent to an operator that is operating several drones at a time. This information can then be relayed to the appropriate parties.** ## How I built it The Snapdragon Dragonboard on each drone runs a Python script that captures real-time data, processes the video feed, and sends the information to a backend server built using NodeJS (coincidentally also running on the Dragonboard for the demo) to do further processing and to use Microsoft Azure to identify the potential targets. Operators use a frontend to access this information. ## Challenges I ran into Determining a way to reliably demonstrate this project became a challenge, considering that neither the drone nor the GPS is moving during the demonstration. The solution was to feed the program a video feed with simulated moving GPS coordinates so that the system believes it is moving in the air. Training the model also required multiple engineers to spend most of their time on it over the hackathon. ## Accomplishments that I'm proud of The code flow is adaptable to virtually an infinite number of scenarios with **no hardcoding for the demo** except feeding it the video and GPS coordinates rather than the camera feed and actual GPS coordinates ## What I learned We learned a great amount about computer vision and building/training custom classification models. We used Node.js, which is a highly versatile environment and can be configured to relay information very efficiently. Also, we learned a few JavaScript tricks and some pitfalls to avoid. ## What's next for Recognaissance Improving the classification model using more expansive datasets. Enhancing the software to distinguish several objects at once, allowing for more versatility.
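The demo trick described above (a recorded video paired with simulated GPS coordinates) might look roughly like the sketch below. The file name, starting position, and drift step are hypothetical; in the real system each frame-plus-coordinate pair is forwarded to the NodeJS backend and Azure classifier rather than printed.

```python
# Sketch of the demo setup: feed the pipeline a recorded video plus simulated
# GPS coordinates so the system behaves as if the drone were airborne.
# File name, start position, and step size are made up for illustration.
import cv2

cap = cv2.VideoCapture("demo_flight.mp4")
lat, lon = 37.7749, -122.4194   # hypothetical starting coordinates
step = 0.00005                  # fake northward drift per frame

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    lat += step  # pretend the drone is moving
    # The project would hand this frame + coordinate pair to its NodeJS
    # backend / Azure classifier here instead of printing it.
    print(f"frame tagged with GPS ({lat:.6f}, {lon:.6f})")

cap.release()
```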
winning
## Inspiration We built an AI-powered physical trainer/therapist that provides real-time feedback and companionship as you exercise. With the rise of digitization, people are spending more time indoors, leading to increasing trends of obesity and inactivity. We wanted to make it easier for people to get into health and fitness, ultimately improving lives by combating the downsides of these trends. Our team built an AI-powered personal trainer that provides real-time feedback on exercise form using computer vision to analyze body movements. By leveraging state-of-the-art technologies, we aim to bring accessible and personalized fitness coaching to those who might feel isolated or have busy schedules, encouraging a more active lifestyle where it can otherwise be intimidating. ## What it does Our AI personal trainer is a web application compatible with laptops equipped with webcams, designed to lower the barriers to fitness. When a user performs an exercise, the AI analyzes their movements in real-time using a pre-trained deep learning model. It provides immediate feedback in both textual and visual formats, correcting form and offering tips for improvement. The system tracks progress over time, offering personalized workout recommendations and gradually increasing difficulty based on performance. With voice guidance included, users receive tailored fitness coaching from anywhere, empowering them to stay consistent in their journey, helping to combat inactivity and lowering the barriers of entry to the great world of fitness. ## How we built it To create a solution that makes fitness more approachable, we focused on three main components: Computer Vision Model: We utilized MediaPipe and its Pose Landmarks to detect and analyze users' body movements during exercises. MediaPipe's lightweight framework allowed us to efficiently assess posture and angles in real-time, which is crucial for providing immediate form correction and ensuring effective workouts. Audio Interface: We initially planned to integrate OpenAI’s real-time API for seamless text-to-speech and speech-to-text capabilities, enhancing user interaction. However, due to time constraints with the newly released documentation, we implemented a hybrid solution using the Vosk API for speech recognition. While this approach introduced slightly higher latency, it enabled us to provide real-time auditory feedback, making the experience more engaging and accessible. User Interface: The front end was built using React with JavaScript for a responsive and intuitive design. The backend, developed in Flask with Python, manages communication between the AI model, audio interface, and user data. This setup allows the machine learning models to run efficiently, providing smooth real-time feedback without the need for powerful hardware. ## Challenges we ran into One of the major challenges was integrating the real-time audio interface. We initially planned to use OpenAI’s real-time API, but due to the recent release of the documentation, we didn’t have enough time to fully implement it. This led us to use the Vosk API in conjunction with our system, which increased codebase complexity in handling real-time feedback.
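A hedged sketch of the form-checking idea described under "Computer Vision Model": use MediaPipe Pose Landmarks on a single webcam frame and compute the left-elbow angle. The 160-degree "arm extended" threshold is an illustrative value, not the project's tuned logic, and the real app runs this continuously per frame.

```python
# Hedged sketch: measure the left-elbow angle from one webcam frame using
# MediaPipe Pose Landmarks. Thresholds and feedback strings are illustrative.
import cv2
import numpy as np
import mediapipe as mp

mp_pose = mp.solutions.pose

def angle(a, b, c):
    """Angle at point b (degrees) formed by points a-b-c, each as (x, y)."""
    ba, bc = np.array(a) - np.array(b), np.array(c) - np.array(b)
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0)))

cap = cv2.VideoCapture(0)
with mp_pose.Pose() as pose:
    ok, frame = cap.read()
    if ok:
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            lm = results.pose_landmarks.landmark
            shoulder = (lm[mp_pose.PoseLandmark.LEFT_SHOULDER].x,
                        lm[mp_pose.PoseLandmark.LEFT_SHOULDER].y)
            elbow = (lm[mp_pose.PoseLandmark.LEFT_ELBOW].x,
                     lm[mp_pose.PoseLandmark.LEFT_ELBOW].y)
            wrist = (lm[mp_pose.PoseLandmark.LEFT_WRIST].x,
                     lm[mp_pose.PoseLandmark.LEFT_WRIST].y)
            elbow_angle = angle(shoulder, elbow, wrist)
            feedback = "arm extended" if elbow_angle > 160 else "bend detected"
            print(f"elbow angle {elbow_angle:.0f} deg -- {feedback}")
cap.release()
```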
## Accomplishments that we're proud of We're proud to have developed a functional AI personal trainer that combines computer vision and audio feedback to lower the barriers to fitness. Despite technical hurdles, we created a platform that can help people improve their health by making professional fitness guidance more accessible. Our application runs smoothly on various devices, making it easier for people to incorporate exercise into their daily lives and address the challenges of obesity and inactivity. ## What we learned Through this project, we learned that sometimes you need to take a "back door" approach when the original plan doesn’t go as expected. Our experience with OpenAI’s real-time API taught us that even with exciting new technologies, there can be limitations or time constraints that require alternative solutions. In this case, we had to pivot to using the Vosk API alongside our real-time system, which, while not ideal, allowed us to continue forward. This experience reinforced the importance of flexibility and problem-solving when working on complex, innovative projects. ## What's next for AI Personal Trainer Looking ahead, we plan to push the limits of the OpenAI real-time API to enhance performance and reduce latency, further improving the user experience. We aim to expand our exercise library and refine our feedback mechanisms to cater to users of all fitness levels. Developing a mobile app is also on our roadmap, increasing accessibility and convenience. Ultimately, we hope to collaborate with fitness professionals to validate and enhance our AI personal trainer, making it a reliable tool that encourages more people to lead healthier, active lives.
## Inspiration Kevin, one of our team members, is an enthusiastic basketball player and frequently went to physiotherapy for a knee injury. He realized that a large part of the physiotherapy was actually away from the doctors' office - he needed to complete certain exercises with perfect form at home, in order to consistently improve his strength and balance. Through his story, we realized that so many people across North America require physiotherapy for far more severe conditions, be it from sports injuries, spinal cord injuries, or recovery from surgeries. Likewise, they will need to do at-home exercises individually, without supervision. For the patients, any repeated error can actually cause a deterioration in health. Therefore, we decided to leverage computer vision technology to provide real-time feedback to patients to help them improve their rehab exercise form. At the same time, reports will be generated for the doctors, so that they may monitor the progress of patients and prioritize their urgency accordingly. We hope that phys.io will strengthen the feedback loop between patient and doctor, and accelerate the physical rehabilitation process for many North Americans. ## What it does Through a mobile app, the patients will be able to film and upload a video of themselves completing a certain rehab exercise. The video then gets analyzed using a machine vision neural network, such that the movements of each body segment are measured. This raw data is then further processed to yield measurements and benchmarks for the relative success of the movement. In the app, the patients will receive a general score for their physical health as measured against their individual milestones, tips to improve the form, and a timeline of progress over the past weeks. At the same time, the same video analysis will be sent to the corresponding doctor's dashboard, in which the doctor will receive a more thorough medical analysis of how the patient's body is working together and a timeline of progress. The algorithm will also provide suggestions for the doctors' treatment of the patient, such as prioritizing a next appointment or increasing the difficulty of the exercise. ## How we built it At the heart of the application is a Google Cloud Compute instance running together with a blobstore instance. The cloud compute cluster ingests raw video posted to blobstore and performs the machine vision analysis to yield the timescale body data. We used Google App Engine and Firebase to create the rest of the web application and APIs for the 2 types of clients we support: an iOS app, and a doctor's dashboard site. This manages day-to-day operations such as data lookup and account management, but also provides the interface for the mobile application to send video data to the compute cluster. Furthermore, the App Engine syncs processed results and feedback from blobstore and populates them into Firebase, which is used as the database and data-sync. Finally, in order to generate reports for the doctors on the platform, we used stdlib's tasks and scalable one-off functions to process results from Firebase over time and aggregate the data into complete chunks, which are then posted back into Firebase. ## Challenges we ran into One of the major challenges we ran into was interfacing the technologies with each other. Overall, the data pipeline involves many steps that, while each critical in itself, also involve too many diverse platforms and technologies for the time we had to build it.
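Purely as an illustration of the post-processing step described above (raw body-segment data reduced to "measurements and benchmarks"), here is a toy range-of-motion score computed from per-frame knee angles. The input format, target value, and scoring rule are assumptions, not the project's actual algorithm.

```python
# Illustrative post-processing sketch (not the project's exact algorithm):
# given per-frame knee angles extracted by the vision model, compute a
# simple range-of-motion score against a milestone set by the doctor.
import numpy as np

# Assumed input: one knee angle (degrees) per analyzed video frame.
knee_angles = np.array([172, 150, 121, 95, 92, 110, 140, 168])

achieved_rom = knee_angles.max() - knee_angles.min()   # degrees covered
target_rom = 90.0                                      # hypothetical milestone
score = min(achieved_rom / target_rom, 1.0) * 100

print(f"range of motion: {achieved_rom:.0f} deg, score {score:.0f}/100")
# In the described pipeline, a score like this would be written back to
# Firebase for both the patient app and the doctor's dashboard.
```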
## What's next for phys.io <https://docs.google.com/presentation/d/1Aq5esOgTQTXBWUPiorwaZxqXRFCekPFsfWqFSQvO3_c/edit?fbclid=IwAR0vqVDMYcX-e0-2MhiFKF400YdL8yelyKrLznvsMJVq_8HoEgjc-ePy8Hs#slide=id.g4838b09a0c_0_0>
## Inspiration Coach.me was born from a mission to make fitness training inclusive for all, leveraging AI and computer vision. Our aim is to bring personalized exercise feedback to everyone, fostering a friendly and accessible approach to fitness, while also helping users refine their form to prevent injuries. ## What it does Coach.me offers a variety of exercises and provides real-time feedback on form accuracy. Users receive instant notifications when performing exercises correctly and can ask questions for immediate feedback, enhancing their learning and safety during workouts. ## How we built it Coach.me was built using a combination of technologies to provide a seamless user experience. The UI was developed using the PyQt Python library, ensuring an intuitive interface for users. For the critical task of form detection, we employed MediaPipe and OpenCV for computer vision capabilities, enabling accurate real-time analysis of exercise form. Additionally, we integrated Whisper for speech-to-text functionality, ChatGPT for AI prompting, and OpenAI TTS for text-to-speech capabilities, enhancing the interactive experience and accessibility of the app. ## Challenges we ran into During development, we encountered hurdles installing required libraries on our Coach.me tablet running Ubuntu on an ARM architecture. We also worked to seamlessly integrate various technologies, ensuring effective feedback delivery. Through collaboration and innovation, we overcame these obstacles, enhancing the app's functionality. ## Accomplishments that we're proud of We had a blast when our pose detection algorithms nailed recognizing exercise forms, and we couldn't resist trying them out ourselves! Plus, we're stoked about seamlessly blending all those cool tech tools together to make a smooth user experience. These wins really showcase our team's knack for making fitness training fun and accessible. ## What we learned Working on Coach.me, we realized that crafting UIs with Python, especially using PyQt, can be quite tricky. Plus, we got a crash course in exercise anatomy, learning all about those nitty-gritty joint angles. These experiences really leveled up our skills and gave us some great stories to share! ## What's next for Coach.me We're eager to expand Coach.me's functionality by adding more exercises and catering to a broader range of fitness enthusiasts. Imagining Coach.me integrated into actual gyms is an exciting prospect, offering users access to real-time feedback and guidance during their workouts. Additionally, we're aiming to implement full workout detection capabilities, enabling Coach.me to track and log users' performance for valuable insights and progress tracking. These future developments will further enhance Coach.me's utility and effectiveness in helping users achieve their fitness goals.
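As a small sketch of the speech side of Coach.me, the snippet below transcribes a spoken question with the open-source `whisper` package and hands the text onward. The audio file name is a placeholder, the project may have used a different Whisper integration, and the wiring to ChatGPT prompting and OpenAI TTS is omitted here.

```python
# Hedged sketch of the speech-to-text step using the open-source whisper
# package. The file name is a placeholder; LLM prompting and TTS are omitted.
import whisper

model = whisper.load_model("base")          # small model for quick turnaround
result = model.transcribe("user_question.wav")
question = result["text"].strip()
print("user asked:", question)
# The question text would next be folded into the ChatGPT prompt along with
# the current exercise name and the latest joint-angle readings.
```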
winning
## Inspiration We were inspired by the instability and corruption of many developing governments and wanted to provide more transparency for citizens. The immutability and decentralization of IPFS seemed like the perfect tool for this problem. We further developed this idea into a framework for conducting government activities and passing laws in a secure manner through Ethereum smart contracts ## What it does Lex Aeterna provides a service for governments to publish laws and store them on IPFS, increasing security and transparency for citizens. We offer a website for viewing these laws and interfacing with our service, but anyone can view these laws by looking at them directly on IPFS. We also offer increased security through the use of Filecoin nodes to further decentralize the storage of laws and ensure that all laws and documents will **always** stay up. We also offer smart contracts which can be used to vote on proposed laws through Ethereum transactions. Our website offers a UI for this functionality, which includes secure account login through Firebase. ## How we built it We used the ipfs-http-client in Python to upload and download files on IPFS. We set up a Firebase database to store countries and associated laws with CIDs and other parameters. We then used Flask to create a REST API to connect our database, our front end and IPFS. We coded our front end using React. We coded our voting smart contract using Solidity and deployed it to a testnet using web3 in Python. We then expanded our API so that governments could deploy and use voting smart contracts entirely through our API. We use Firebase tokens to authenticate the use of API functionality. ## Challenges we ran into With such an ambitious project, we had to cover a lot of ground. Connecting the front end to our API was especially difficult because we didn't have much experience with React. It was difficult to learn on the fly and develop our front end as we went. ## Accomplishments that we're proud of Although we were very ambitious, we were able to implement pretty much all of the major functionality that we wanted to. We implemented a web application across the entire stack which uses IPFS and blockchain technology. Most of all, we pushed through and continued to work even when we felt stuck. ## What we learned None of us had used Flask or React before; however, we all became proficient enough to implement an API using Flask and a front end using React. We also learned more about what it takes to plan and execute an original idea extremely quickly. ## What's next for Lex Aeterna First, we would move to AWS to increase scalability and security. We would spend some time testing the security of our API and login features. We would also want to expand our smart contracts to provide more options for governments to utilize the Ethereum infrastructure. For example, different types of votes such as supermajority, government terms that expire after a period of time, and even direct citizen votes for government officials or policies.
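A minimal sketch of the publish step using the ipfs-http-client library, assuming a local IPFS daemon on the default API port. The law file name is illustrative, and the note about the Firebase write reflects the description above rather than the project's exact schema.

```python
# Minimal sketch: add and pin a law document to IPFS via a local daemon.
# The file name is a placeholder; the Firebase record is described, not shown.
import ipfshttpclient

client = ipfshttpclient.connect("/ip4/127.0.0.1/tcp/5001/http")
result = client.add("law_2023_001.pdf")
cid = result["Hash"]
client.pin.add(cid)   # keep the document pinned on this node

print("law published at CID:", cid)
# The Flask API then stores the country, law metadata, and CID in Firebase so
# the website (or anyone querying IPFS directly) can retrieve the document.
```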
## Inspiration To introduce a more impartial and verifiable form of vote submission in response to the controversial democratic electoral polling surrounding the 2018 US midterm elections. This event was clouded by doubt, with citizen voters questioning the authenticity of the results. This propelled the idea of bringing much-needed decentralized security to the polling process. ## What it does Allows voters to vote through a web portal on a blockchain. This web portal is written in HTML and JavaScript using the Bootstrap UI framework and jQuery to send Ajax HTTP requests through a Flask server written in Python, which communicates with a blockchain running on the ARK platform. The polling station uses a web portal to generate a unique passphrase for each voter. The voter then uses that passphrase to cast their ballot anonymously and securely. Following this, their vote, alongside the passphrase, goes to a Flask web server where it is parsed and sent to the ARK blockchain, which accounts for it as a transaction. Each transaction is delegated by one ARK coin, representing the count. Finally, a paper trail is generated after the vote is submitted on the web portal, in case public verification is needed. ## How we built it The initial approach was to use Node.JS; however, we opted for Python with Flask as it proved easier to implement. Visual Studio Code was used to build the HTML and CSS front end for the voting interface. Meanwhile, the ARK blockchain ran in a Docker container. These pieces were used together to deliver the web-based application. ## Challenges I ran into * Integrating the front end and back end into a seamless app * Using Flask as an intermediary layer for the back end * Understanding how to incorporate and use blockchain for security in this context ## Accomplishments that I'm proud of * Successful implementation of blockchain technology through an intuitive web-based medium to address a heavily relevant and critical societal concern ## What I learned * Application of the ARK.io blockchain and its security protocols * The multiple transformation stages for encryption, with passphrases being converted to private and public keys * Utilizing jQuery to compile a comprehensive program ## What's next for Block Vote Expand Block Vote’s applicability to other areas requiring decentralized and trusted security, introducing a universal initiative.
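The intermediary Flask layer described above might look roughly like the sketch below: the browser posts the ballot and one-time passphrase via Ajax, and the server forwards a transaction to an ARK node. The relay URL, payload fields, and the use of the vendor field to carry the choice are assumptions; the real project signs a proper transaction from the voter's passphrase.

```python
# Simplified sketch of the Flask intermediary. The ARK relay URL and payload
# shape are placeholders -- the real system builds a signed 1-ARK transaction
# from the voter's one-time passphrase.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
ARK_NODE = "http://localhost:4003/api/transactions"   # assumed local relay

@app.route("/vote", methods=["POST"])
def vote():
    data = request.get_json()
    passphrase = data["passphrase"]   # one-time phrase issued at the station
    choice = data["choice"]
    # Placeholder: in the real system the passphrase signs a 1-ARK
    # transaction whose vendor field carries the ballot choice.
    tx = {"vendorField": choice, "amount": 1}
    resp = requests.post(ARK_NODE, json={"transactions": [tx]}, timeout=10)
    return jsonify({"status": resp.status_code})

if __name__ == "__main__":
    app.run(port=5000)
```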
## Inspiration While the use cases for web3 expand every day, from healthcare to polling systems, we wanted to explore the implementation of web3 in the entertainment sector. As the world of cryptocurrency expands, people will want to play games using crypto and win crypto. ## What it does The user gets to draw a picture and set the answer for the picture. The other players can then try to guess the answer. If they get it right, they are rewarded with crypto. In order to guess, the player needs to put in some crypto. As a result, the prize pool for that particular picture increases. The artist gets a portion of the prize pool as an incentive for drawing. ## How we built it To start off we used Solana's Twitter example and other social-media-on-the-blockchain implementations we found online. Through that, we were able to set up a wallet on our local machines that could be used to test functions. Our next issue was uploading an image to the blockchain so that the data itself was decentralized. We used IPFS for this task but ran into issues while connecting the uploading API to the function for creating a post. For our front end we had to flip-flop between React and Vue, as Vue was already connected to our backend and could be used to fetch data; however, our team felt more comfortable using React for front-end development. ## Challenges we ran into We ran into some challenges in building the blockchain piece and saving the drawn image. Moreover, the time crunch was also a big challenge for us. While we were able to learn many individual technologies, like creating a wallet on our local machine, uploading images with IPFS, and sending posts through the blockchain, combining all those elements with our front end is what posed an issue within the time constraints. Another problem was picking technologies. For our front end, React was a framework most of us were accustomed to; however, Vue was better integrated with our backend calls and with capturing the user's drawing. ## Accomplishments that we're proud of We are proud that we were able to learn and overcome so many challenges in a short period of time. Despite it having been only 24 hours, it feels like we have gained decent experience in Web3, and Solana specifically. ## What we learned None of us had ever worked on web3 before. This was our first time developing a decentralized application (dapp). We also learned about the various use cases of Web3 and its advantages. Furthermore, we explored building smart contracts. ## What's next for Cryptionary In the future, we hope that Cryptionary will become an end-to-end game that anyone on the blockchain can enjoy in a safe way.
winning
## Inspiration Having struggled with depression in the past, we wanted to build a tool that could help people in that situation detect it early and give them the tools they need to get healthy again. ## What it does Our Chrome extension uses Lexalytics' Semantria API to detect when our users have a bad day, and bombard them with cuteness when they do. Additionally, we can detect the early signs of depression and direct our users to our website that features a variety of resources to help them. ## How we built it We used a Chrome extension to track messages and web searches from a user, which would send data to the Semantria API for lexical analysis. The returned sentiment value would be recorded and pooled over the course of the day/week/month to detect a person's negativity. ## Challenges we ran into Having never worked with PyMongo before, we found connecting to MongoDB and figuring out its queries challenging. We had a hard time figuring out the logic behind compressing and filtering the raw data to predict a person's mood. We also had challenges integrating the Semantria API, and in the end we were only able to successfully install it on one of our computers. Luckily, that was enough for us to integrate it with our server and build the project successfully! ## Accomplishments that we're proud of This was the first time for all of us building a Chrome extension and using Python/Flask as a back-end, so we're proud to have built something that actually runs smoothly! ## What we learned We learned just how powerful the Semantria API actually is when it comes to sentiment analysis, giving us a sentiment score precise to the hundredth of a unit. We also learned a lot about building a Python back-end and connecting it to a Mongo database. ## What's next for LemonAid Given the resources, we plan on adding additional metrics to help detect the early symptoms of depression, such as tracking time spent on social media or the number of Facebook conversations our users engage in, both of which are directly correlated with depression. We would also like to use these tools with the Semantria API to help detect other mental illnesses such as bipolar disorder and anxiety disorder.
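The pooling step described above (record each Semantria score, then average over a window to detect a bad day) might look roughly like this PyMongo sketch. The collection name, the -0.3 "bad day" threshold, and the one-day window are assumptions for illustration.

```python
# Sketch of sentiment pooling with PyMongo: store each Semantria score and
# average the last day's scores. Collection name and threshold are assumed.
from datetime import datetime, timedelta
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["lemonaid"]

def record_score(user_id, score):
    db.scores.insert_one(
        {"user": user_id, "score": score, "ts": datetime.utcnow()}
    )

def daily_mood(user_id):
    since = datetime.utcnow() - timedelta(days=1)
    pipeline = [
        {"$match": {"user": user_id, "ts": {"$gte": since}}},
        {"$group": {"_id": "$user", "avg": {"$avg": "$score"}}},
    ]
    result = list(db.scores.aggregate(pipeline))
    return result[0]["avg"] if result else None

record_score("alice", -0.42)
mood = daily_mood("alice")
if mood is not None and mood < -0.3:
    print("bad day detected -- time for cuteness:", mood)
```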
# Mental-Health-Tracker ## Mental & Emotional Health Diary This project was made because we all know what a pressing issue mental health and depression can be, not only for ourselves but for thousands of other students. Our goal was to make something where someone could have the chance to accurately assess and track their own mental health using the tools that Google has made available. We wanted the person to be able to openly express their feelings in the diary for their own personal benefit. Along the way, we learned about using Google's Natural Language processor, developing using Android Studio, as well as deploying an app using Google's App Engine with a `node.js` framework. Those last two parts turned out to be the greatest challenges. Android Studio was a challenge as one of our developers had not used Java for a long time, nor had he ever developed using `.xml`. He was pushed to learn a lot about the program in a limited amount of time. The greatest challenge, however, was deploying the app using Google App Engine. This tool is extremely useful, and was made to seem easy to use, but we struggled to implement it using `node.js`. Issues arose with errors involving `favicon.ico` and `index.js`. It took us hours to resolve this issue and we were very discouraged, but we pushed through. After all, we had everything else - we knew we could push through this. The end product is an app in which the user signs in using their Google account. It opens to the home page, where the user is prompted to answer four questions relating to their mental health for the day, and then rate themselves on a scale of 1-10 in terms of their happiness for the day. After this is finished, the user is given their mental health score, along with an encouraging message tagged with a cute picture. After this, the user has the option to view a graph of their mental health and happiness statistics to see how they progressed over the past week, or a calendar option to see their happiness scores and specific answers for any day of the year. Overall, we are very happy with how this turned out. We even have ideas for how we could do more, as we know there is always room to improve!
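For illustration only, here is a Python equivalent of the diary's sentiment step using Google's Natural Language API (the project itself ran a `node.js` backend on App Engine). The mapping from the sentiment score to a 1-10 happiness value is a made-up example, and Google Cloud credentials are assumed to be configured.

```python
# Illustrative Python version of the sentiment step (the project used node.js).
# Requires Google Cloud credentials; the 1-10 rescaling is a toy example.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
entry = "Today was rough, but talking to a friend helped a lot."

document = language_v1.Document(
    content=entry, type_=language_v1.Document.Type.PLAIN_TEXT
)
sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment

# sentiment.score is roughly -1 (negative) to +1 (positive); rescale to 1-10.
happiness = round((sentiment.score + 1) / 2 * 9) + 1
print(f"sentiment {sentiment.score:+.2f} -> happiness {happiness}/10")
```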
## Inspiration We decided to build this project after noticing the increasing number of phishing and spam calls that the elderly and other individuals experience. Many people are unaware of the tactics used by spam callers, and easily give away information like credit card numbers, social security numbers and personal details that put them at risk of financial fraud and even physical danger. Furthermore, most advice regarding scam calls and dealing with them is preventative: it focuses on learning to avoid callers in large databases of identified numbers, or educating individuals about template tips that change rapidly over time and are far outrun by the pace at which scammers change strategies and implement new scams. As such, our goal was to build a tool that allows individuals to actively control the process of engaging with a scam call, as opposed to regretting it afterwards and being susceptible to the new and innovative measures that many scammers employ to catch their next victims. Harnessing the power of LLM inference and real-time data allowed us to do this quickly enough for users to take action against scam callers. ## What it does Our application allows a user to safeguard themselves from being scammed by introducing an intelligent security layer that determines the likelihood that a call is from a scammer as it is occurring. Our interactions are rooted completely in user consent: individuals have full autonomy over their information, and can choose to give their Twilio phone number out to people they are less close to, marketers, and other individuals they may not trust. All you have to do as a user is install our app and create a Twilio phone number to use as your proxy, so that ShaScam can sit over any potentially unwanted spam calls and warn you, with well-founded analysis and steps for engaging with the other person on the line, before you slip into a scammer's trap. ## How we built it We made use of Twilio's API to create proxy phone numbers for users of our application to give away to individuals not close to them. With user consent, we sat over a call initiated by a potential scammer, rerouting it to the Twilio proxy number owned by the user and transcribing real-time audio data into text using Google Cloud's speech-to-text integration, with audio packets streamed from Twilio's API. Next, we progressively passed cohesive 12-20 word chunks of the transcribed output, as it was streamed, to a 13B-parameter Llama 2 model for inference about the likelihood that the caller is a scammer. Finally, we built an intuitive, minimalistic interface for users to create a Twilio number, receive push notifications alerting them to a spam call should they receive one, and obtain analysis about why the call is suspicious, as well as how they should respond to maximize their safety and ensure the person on the other end of the line is not malicious. ## Challenges we ran into We wanted to ensure that our model's spam-call detection would not bias itself towards finding every interaction to be with a scammer. This is why we set out to build or find a dataset of spam calls to use. We found that several researchers had approached this problem, but there was no definitive source for such datasets, as they were either hidden or not up to par with the density of data we needed to make inferences and fine-tune an existing LLM to inform those inferences.
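The chunking strategy described above (cohesive 12-20 word windows of streamed transcript handed to the LLM) might look like the sketch below. The sentence-boundary heuristic and the scoring stub are assumptions; in the real system the stub is replaced by a prompt to the 13B Llama 2 model.

```python
# Sketch of the chunking step: buffer streamed words and release a cohesive
# 12-20 word chunk once enough text arrives. score_chunk is a stub standing
# in for the Llama 2 inference call.
def chunk_words(word_stream, min_words=12, max_words=20):
    buffer = []
    for word in word_stream:
        buffer.append(word)
        # Prefer to break at sentence-ish boundaries once past the minimum.
        if len(buffer) >= max_words or (
            len(buffer) >= min_words and word.endswith((".", "?", "!"))
        ):
            yield " ".join(buffer)
            buffer = []
    if buffer:
        yield " ".join(buffer)

def score_chunk(chunk):  # stub for the LLM scam-likelihood inference
    return 0.9 if "gift card" in chunk.lower() else 0.1

transcript = ("Hello this is the fraud department of your bank . To secure "
              "your account please read me the numbers on a gift card now .").split()

for chunk in chunk_words(transcript):
    risk = score_chunk(chunk)
    print(f"[risk {risk:.1f}] {chunk}")
```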
Initially, we also set out to make the experience of using our product as user-friendly as possible, and tried call-forwarding through iOS/Android phones, which seemed like an excellent way to sit over, transcribe and analyze a call. We soon realized, however, that this process reduced users' autonomy in controlling when their calls were monitored, and introduced latency and platform challenges on Android and iOS. We quickly pivoted to creating proxy numbers whose use, monitoring and distribution users could control, thus giving them control of their own security while enabling scam-call prevention for those who wish to safeguard themselves more strictly. We even included voice prompts that adhered to regulations regarding user consent to data collection and utilization, using text-to-speech through Twilio to confirm user comfort with being recorded over the call. Furthermore, we wanted to fine-tune the model we were using for LLM inference to reduce the generalization errors we noticed, as the models we were using were particularly selective and overfit to the definition and examples of scam calls we provided. We set out to find datasets of normal phone conversations and of recorded scam calls, and hit a significant obstacle with finding the latter: such data is challenging to find because individuals typically cannot record the scam calls they are on, and phone calls are difficult to record due to user privacy concerns. While we finally sourced and transcribed a dataset of chatbot-recorded scam calls (the Lenny dataset), we hit many issues trying to fine-tune the model with Together API and Monster.API, so we instead pivoted to using a larger-parameter model that we were able to experiment with and prompt-engineer to yield better results than our zero-shot approach with Llama-2. ## Accomplishments that we're proud of We are most proud of the progress we've made with the analysis of scam calls using LLMs: no existing solution, per our research, is able to parse real-time audio data into text and monitor calls as they happen, before the danger and intentions of a scammer have already affected a user. ## What we learned The problem we are solving with this project is one that has existed for a long time, but has had suboptimal solutions that leave people unsafe. A key learning we had while ideating for this hackathon was that problems that are long-standing and as simple to describe as scam-call detection have layers of underlying detail and complexity that can be unpacked to reach higher levels of security. Streaming real-time data into an LLM and communicating its continuously generated results through a clean, frictionless interface for our target demographic (senior citizens, who are the most susceptible to these telephone attacks) was a huge lesson in threading, parallel programming and integrating inference models with the power of real-time, dynamic data, as opposed to the static datasets each of us has typically used in projects outside of this hackathon. In particular, dealing with the platform restrictions on calling on iOS vs Android, due to tight handles kept by carriers, led us to innovate tremendously with our UI/UX flow and the control flow of our core program.
Taking small pivots when we hit roadblocks, as with call forwarding on iOS, and chunking data intelligently to avoid overwhelming our inference model with requests from streamed, noisy data taught us a lot about dealing with volatile in-flowing data and finding technical innovations that move us quickly towards our solution. ## What's next for Shascam Alongside providing reasoning for why a call may be from a scammer, we want to categorize the different scams users experience, based on how recently particular scripts or tricks have been seen on calls. This would help gradually educate users about the types of trends they are experiencing and are most susceptible to, introducing a form of personalization around the risks they may be exposing themselves to during such conversations. Another area we want to continue in is fine-tuning. Specifically, we want to fine-tune an LLM on clean and diverse data, which, due to time and resource constraints, we weren’t able to create or find during the hackathon.
partial