## Inspiration
After manually evaluating stocks with the Nasdaq Dozen (<http://www.nasdaq.com/investing/dozen/>) in Excel, I wanted to build an app that interfaces with the Aladdin API and automatically checks how a stock compares with other companies.

## What it does
While it is not yet complete, the web app will interface with the Aladdin API to gather the twelve metrics of the Nasdaq Dozen and then provide a rating for how the stock should perform in the future. The twelve metrics are: Revenue, Earnings per Share (EPS), Return on Equity (ROE), Recommendations, Earnings Surprises, Forecast, Earnings Growth, PEG Ratio, Industry Earnings, Days to Cover, Insider Trading, and Weighted Alpha. This may require interaction with more than one API.

## How I built it
I used Bootstrap-based HTML pages on my local computer with AJAX calls to the Aladdin APIs.

## Challenges I ran into
I ran into a bug where I could not get the jQuery get() method to navigate to the dashboard page. Once that was fixed, I would have started searching for ways to check the Nasdaq Dozen. In addition, my team changed project scope several times this weekend, so I had minimal time to implement my idea. I initially tried to build an app using Expo and ran into a few issues, but I learned a lot about mobile development in the process.

## Accomplishments that I'm proud of
I am proud of the look and vision of stock stats. I believe it is a solid foundation and could become a great place to look up statistics on publicly traded companies.

## What I learned
I learned how to modify a Bootstrap web page and how important it is to properly define the scope of a project for a hackathon.

## What's next for PennAppsXVI
I will hopefully continue to add to stock stats and complete the implementation I have in mind.
## Inspiration
I've always been interested in learning about the various methods of investing and how to generate multiple passive income streams. When I found out that 43% of millennials don't know where to get started in the stock market, I wanted to create an app that could educate individuals on why certain stocks are beating the market and how they can get started with their investment budget.

## What it does
The homepage focuses on the top gainers of the week and explains why they are doing so well. I used an external library (react-native charts) to display stock charts for the current week. The second screen is a newsroom where users can read about various companies and build their knowledge. The last screen is a calculator where users input their investment budget and it renders information on which areas of the market they should invest in.

## How we built it
This app is built with React Native and uses data from a stock API called Alpha Vantage (a sketch of the API call appears below). I also used react-native charts to display the stock charts of the current week.

## Challenges we ran into
I had trouble conditionally rendering the different information topics based on what the user input. I also had to spend a lot of time researching the different topics, so finishing on time was definitely a big challenge.

## Accomplishments that we're proud of
I am really proud of the design of the overall app. I feel like it could have been a bit better, especially the newsroom, but overall I am happy with the design.

## What we learned
As I was researching topics for this app, I learned many different investing strategies myself that I am excited to try out!

## What's next for StockUp
I hope to link this app to a news API so that it keeps updating automatically every day. I would also like to add user authentication so that users can have their own personal account where they can add stocks to their watchlist.
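The app itself is React Native, but the Alpha Vantage call is a plain REST request, so here is a minimal Python sketch of the shape of that call. The endpoint and the `"Time Series (Daily)"` response keys follow Alpha Vantage's documented format; the surrounding helper function is illustrative, not the app's actual code.

```python
# Minimal sketch: pull recent daily closes from Alpha Vantage.
# Requires a free API key from alphavantage.co ("demo" only works for IBM).
import requests

def recent_closes(symbol: str, api_key: str) -> list[tuple[str, float]]:
    resp = requests.get(
        "https://www.alphavantage.co/query",
        params={"function": "TIME_SERIES_DAILY",
                "symbol": symbol,
                "apikey": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    series = resp.json()["Time Series (Daily)"]
    days = sorted(series)[-5:]          # last five trading days, oldest first
    return [(day, float(series[day]["4. close"])) for day in days]

print(recent_closes("IBM", "demo"))
```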
## ✨ Inspiration
Driven by the goal of more accessible and transformative education, our group set out to find a viable solution. Stocks are rarely taught in school, and in developing countries even less, though used right, the market can help many people rise above the poverty line. We seek to help students and adults learn more about stocks, understand what drives companies' stock values up or down, and use that information to make more informed decisions.

## 🚀 What it does
Users are guided to a search bar where they can search for a company's stock, for example "AAPL", and almost instantly see the stock price over the last two years as a line graph, with green and red dots spread along it. When they hover over a dot, the green dots explain why there is a general increasing trend in the stock, backed by a news article, along with the price change from the previous day and what it is predicted to be. An image of the company also shows up beside the graph.

## 🔧 How we built it
When a user enters a stock name, the app calls the Yahoo Finance API and gets the stock price data from the last three years. It converts the data to a JSON file served on localhost:5000. Using Flask, we turn it into our own API, which populates a Chart.js chart with the stock data. Using a MATLAB server, we then find the areas of most significance, where the absolute value of the slope is over a certain threshold; those data points are marked green if the change is positive or red if it is negative (a sketch of this step appears below). The dates at those points are fed to Gemini, which is asked why it thinks the stock shifted and the price changed that day. Gemini also handles a second request for a phrase that is easy for the image-search API to use to find a photo of the company, which is then shown on screen.

## 🤯 Challenges we ran into
Using the number of APIs we did, and using them properly, was VERY hard, especially making our own API and incorporating Flask. Getting stock data to a MATLAB server also took a lot of time, as it was everyone's first time using it. POST and fetch commands were new to us and took a lot of time to get used to.

## 🏆 Accomplishments that we're proud of
* Connecting a prompt to a well-crafted stocks portfolio
* Learning MATLAB in a time crunch
* Connecting all of our APIs successfully
* Making a website that we believe has serious positive implications for the world

## 🧠 What we learned
* MATLAB integration
* Flask integration
* The Gemini API

## 🚀 What's next for StockSee
* Incorporating it on different mediums such as VR, so users can see in real time how stocks shift in front of them in an interactive way.
* Making a small questionnaire on different parts of a stock to ask whether it is good to buy at the time.
* Using Modern Portfolio Theory (MPT) and other common stock-buying algorithms to see how much money you would have made using them.
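The significance step is simple enough to sketch. Below is an illustrative Python analogue (not the team's actual MATLAB code): it flags dates where the one-day relative slope of the closing price exceeds a threshold, following the green/red rule described above; the 5% threshold is our assumption.

```python
# Illustrative analogue of the MATLAB step: mark dates where the day-over-day
# relative change in closing price crosses a threshold.
def significant_moves(history, threshold=0.05):
    """history: list of (date, close). Returns (date, change, color) marks."""
    marks = []
    for (d0, p0), (d1, p1) in zip(history, history[1:]):
        change = (p1 - p0) / p0                      # one-day relative slope
        if abs(change) > threshold:
            marks.append((d1, change, "green" if change > 0 else "red"))
    return marks

history = [("2024-01-02", 185.6), ("2024-01-03", 172.1), ("2024-01-04", 173.0)]
print(significant_moves(history))   # flags the ~7% drop on 2024-01-03 as red
```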
## Inspiration
With remote learning being used to teach, it gets harder to learn, especially when your teacher isn't there to directly teach and help you. Elementary school students in particular might have a difficult time with online teaching, especially in mathematics. Children are the future, so it is our responsibility to teach them everything they need to learn, even in the most unconventional setting. This website was created to be a kid-friendly site that helps young students understand mathematical concepts.

## What it does
Students can use Cowculator to find descriptions of and examples for math subjects that might be hard for them to understand. The website covers concepts from addition to counting money, and is aimed at students in first to fourth grade.

## How we built it
The mockup of the site was created in Figma. The front end was built with HTML, CSS, JavaScript, and Bootstrap. The back end was built with JavaScript, Express.js, and Google App Engine.

## Challenges we ran into
We had some trouble with the alignment of the text and buttons on the pages. We also had trouble with routing to get the pages connected to each other.

## Accomplishments that we're proud of
We are proud of using Express.js for this project because it was our first time using it, and we were happy with how it turned out. We are also proud of how many pages we were able to make.

## What we learned
One thing we learned from creating this project was how to use Express.js. It was our first time using it, so the majority of our time went to research and trial and error. We also learned how to deploy a Node.js app on Google App Engine.

## What's next for Cowculator
Due to time constraints, we were not able to implement all the pages we hoped to have. We hope to create a profile feature so students can track their progress or see concepts they might need help with. We also hope to create the calculator function, which would assist students as they complete practice problems.
## Inspiration
Coming from South Texas, two of our team members saw ESL (English as a Second Language) students being denied a proper education. Our team created a tool to break down the language barriers that traditionally perpetuate socioeconomic cycles of poverty by providing detailed explanations of word problems using ChatGPT. Traditionally, people from this group do not have access to tutoring or 1-on-1 support, and this website is meant to rectify that glaring issue.

## What it does
The website takes a photo as input and uses optical character recognition to extract the text of the problem. It then uses ChatGPT to generate a step-by-step explanation for the problem, tailored to the grade level and language of the student, enabling students from various backgrounds to get assistance they are often denied.

## How we built it
We coded the backend in Python with two parts: OCR and the ChatGPT API integration (a sketch of both stages appears below). We also considered the parameters, such as grade and language, that we could include in our eventual ChatGPT query to make the result as helpful as possible. On the other side of the stack, we built the frontend in React with TypeScript to be as simple and intuitive as possible. It has two sections that clearly show the extracted problem and what ChatGPT has generated to assist the student.

## Challenges we ran into
During development we often struggled with deciding the optimal way to apply different APIs and learning how to implement them; many of them, such as the IBM API, we ended up not using or repurposing. Through this process, we had to change our high-level plan for the backend functions and consequently rework our frontend user interface to fit the new operations. This compounded into the challenge of re-establishing and discussing new ideas while communicating as a team.

## Accomplishments that we're proud of
We are proud of the website layout. Personally, the team is very fond of the colors and the arrangement of the site's elements. Another thing we are proud of is simply that we have something working, albeit jankily. This was our first hackathon, so we were proud to be able to contribute to it in some form.

## What we learned
One invaluable skill we developed through this project was learning about the plethora of APIs available and how we can integrate and combine them to create new products that can help people in everyday life. We not only developed our technical skills, including git familiarity and web development, but also our ability to communicate our ideas as a team and the confidence and creativity to carry an idea from thought to production.

## What's next for Homework Helper
As part of our mission to increase education accessibility and combat common socioeconomic barriers, we hope to use Homework Helper not only to translate and minimize the language barrier, but also to help those with visual and auditory disabilities. Functions we hope to implement include text-to-speech and speech-to-text features, and producing video solutions along with text answers.
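Here is a hedged sketch of the two backend stages. The writeup confirms Python, OCR, and the ChatGPT API, but not the OCR library: `pytesseract` is our stand-in, and the model name, prompt wording, and `explain_problem` helper are illustrative rather than the team's actual code.

```python
import pytesseract
from PIL import Image
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_problem(image_path: str, grade: int, language: str) -> str:
    # Stage 1: extract the word problem's text from the uploaded photo.
    problem_text = pytesseract.image_to_string(Image.open(image_path))
    # Stage 2: ask ChatGPT for an explanation tailored to the student.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Explain this word problem step by step, in "
                        f"{language}, at a grade {grade} reading level."},
            {"role": "user", "content": problem_text},
        ],
    )
    return response.choices[0].message.content

print(explain_problem("problem.jpg", grade=5, language="Spanish"))
```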
## Inspiration
Many of us had class sessions in which the teacher pulled out whiteboards or chalkboards and used them as a teaching tool. They made classes very interactive and engaging. With the switch to virtual teaching, class interactivity has been harder: usually a teacher just shares their screen and talks, and students can ask questions in the chat. We wanted to build something to bring back this childhood memory for today's students and help classes be more engaging, encouraging more students to attend, especially in the younger grades.

## What it does
Our application creates an environment where teachers can engage students through virtual whiteboards. There are two views, the teacher's view and the student's view. Each view has a canvas that the corresponding user can draw on. The difference is that the teacher's view contains a list of all the students' canvases, while students can only view the teacher's canvas in addition to their own. An example use case would be a math class where the teacher puts a problem on their canvas and students show their work and solution on their own. The teacher can then verify that students are reaching the solution properly and help those who are struggling. Students can follow along, and when they want the teacher's attention, click the "I'm Done" button to notify the teacher. Teachers can see students' boards and mark up anything they want. Teachers can also put students in groups, and those students share a whiteboard to collaborate.

## How we built it
* **Backend:** We used Socket.IO to handle the real-time updates of the whiteboard. We also have a Firebase database to store user accounts and details.
* **Frontend:** We used React to create the application and Socket.IO to connect it to the backend.
* **DevOps:** The server is hosted on Google App Engine, and the frontend website is hosted on Firebase and redirected to Domain.com.

## Challenges we ran into
Understanding and planning an architecture for the application. We went back and forth on whether we needed a database or could handle all the information through Socket.IO. Displaying multiple canvases while maintaining the drawing functionality was also an issue we faced.

## Accomplishments that we're proud of
We were able to display multiple canvases while maintaining the drawing functionality. This was also the first time we used Socket.IO, and we were able to use it successfully in our project.

## What we learned
This was the first time we used Socket.IO to handle real-time connections. We also learned how to create mouse strokes on a canvas in React.

## What's next for Lecturely
This product can be useful even past digital schooling, since it can save schools money on supplies, so it could benefit from more features. Currently, Lecturely doesn't support audio, but that is on our roadmap; until then, classes would still need another piece of software running to handle audio communication.
## Inspiration
GeoGuessr is a fun game which went viral in the middle of the pandemic, but after playing for a long time, it started feeling tedious and boring. Our Discord bot tries to freshen up the stale air by providing playlists of iconic locations, together with exciting trivia about movies and monuments, for that extra hit of dopamine when you get the right answers!

## What it does
The bot provides you with playlist options, currently restricted to Capital Cities of the World, Horror Movie Locations, and Landmarks of the World. After you select a playlist, five random locations are chosen from a curated list. You are then shown a picture from which you have to guess the location and the bit of trivia associated with it, like the name of the movie the location appeared in. You get points based on how close you are to the location and whether you got the trivia right.

## How we built it
We used the *discord.py* library for coding the bot and interfacing it with Discord. We stored our playlist data in external *Excel* sheets, which we parsed as required. We used the *google-streetview* and *googlemaps* Python libraries to access the Google Maps Street View APIs.

## Challenges we ran into
For storing the data, we initially thought to use a playlist class, storing the playlist data as an array of playlist objects, but we used Excel instead for easier storage and updating. We also had some problems with the Google Maps Static Street View API in the beginning, but they were mostly syntax and understanding issues which were soon overcome.

## Accomplishments that we're proud of
Getting the Discord bot working and sending images from the API for the first time gave us an incredible feeling of satisfaction, as did implementing the input/output flows. Our points-calculation system, based on the haversine formula for distances on spheres, was also an accomplishment we're proud of (a sketch appears below).

## What we learned
We learned better syntax and practices for writing Python code. We learned how to use the Google Cloud Platform and the Street View API. Some of the libraries we delved deeper into were Pandas and pandasql. We also learned a thing or two about human-computer interaction, as designing a gameplay interface on Discord was rather interesting.

## What's next for Geodude?
Possibly adding more topics, and refining the loading of Street View images to better reflect the actual location.
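The haversine-based scoring is concrete enough to sketch. The distance formula below is the standard haversine; the exponential decay from distance to points is our illustrative choice, since the writeup doesn't give the bot's actual curve.

```python
from math import asin, cos, exp, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def score(guess, answer, max_points=5000, decay_km=2000.0):
    """Full points for a perfect guess, decaying exponentially with distance."""
    return round(max_points * exp(-haversine_km(*guess, *answer) / decay_km))

print(score((48.8566, 2.3522), (51.5074, -0.1278)))  # guessed Paris; answer London
```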
## Inspiration
Travel planning is a pain. Even after you find the places you want to visit, you still need to find out when they're open, how far away they are from one another, and work within your budget. With Wander, automatically create an itinerary based on your preferences – just pick where you want to go, and we'll handle the rest for you.

## What it does
Wander shows you the top destinations, events, and eats wherever your travels take you, with your preferences, budget, and transportation in mind. For each day of your trip, Wander creates a schedule for you with a selection of places to visit, lunch, and dinner. It plans around your meals, open hours, event times, and destination proximity to make each day run as smoothly as possible.

## How we built it
We built the backend on Node.js and Express, which uses the Foursquare API to find relevant food and travel destinations and schedules the itinerary based on the event type, calculated distances, and open hours (a sketch of the scheduling idea appears below). The native iOS client is built in Swift.

## Challenges we ran into
We had a hard time finding all the event data that we wanted in one place. In addition, we found it challenging to sync the information between the backend and the client.

## Accomplishments that we're proud of
We're really proud of our mascot, Little Bloop, and the overall design of our app – we worked hard to make the user experience as smooth as possible. We're also proud of the way our team worked together (even in the early hours of the morning!), and we really believe that Wander can change the way we travel.

## What we learned
It was surprising to discover that there were so many ways to build off of our original idea for Wander and make it more useful for travelers. After laying the technical foundation for Wander, we kept brainstorming new ways that we could make the itinerary scheduler even more useful, and thinking of more that we could account for – for instance, how open hours of venues could affect the itinerary. We also learned a lot about the importance of design and finding the best user flow in the context of traveling and being mobile.

## What's next for Wander
We would love to continue working on Wander, iterating on the user flow to craft the friendliest end experience while optimizing the algorithms for creating itineraries and generating better destination suggestions.
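The scheduling idea lends itself to a small sketch. The real backend is Node/Express; this Python version only illustrates the greedy strategy the writeup describes (respect open hours, prefer nearby venues, pin meals), and the venue data shape is our assumption rather than the Foursquare response format.

```python
# Greedy day planner: visit the nearest open venue, pin lunch and dinner.
def plan_day(venues, start_hour=9, end_hour=18):
    schedule, here, hour = [], (0.0, 0.0), start_hour  # day starts at the hotel
    remaining, had_lunch = list(venues), False
    while hour < end_hour:
        if hour >= 12 and not had_lunch:
            schedule.append((hour, "lunch"))
            had_lunch, hour = True, hour + 1
            continue
        open_now = [v for v in remaining if v["open"] <= hour < v["close"]]
        if not open_now:
            hour += 1
            continue
        # Nearest open venue by squared distance from the current location.
        nxt = min(open_now, key=lambda v: (v["loc"][0] - here[0]) ** 2
                                        + (v["loc"][1] - here[1]) ** 2)
        schedule.append((hour, nxt["name"]))
        here, hour = nxt["loc"], hour + nxt.get("visit_hours", 2)
        remaining.remove(nxt)
    schedule.append((end_hour, "dinner"))
    return schedule

venues = [{"name": "museum", "loc": (0.1, 0.2), "open": 10, "close": 17},
          {"name": "park", "loc": (0.0, 0.1), "open": 6, "close": 22}]
print(plan_day(venues))
```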
## Inspiration
Memes have become a cultural phenomenon and a huge recreation for many young adults, including ourselves. For this hackathon, we decided to combine the sociability of the popular site Twitter with a way of visualizing the activity of memes in various neighborhoods. We hope that through this application, we can create a multicultural collection of memes and expose memes trending in popular cities to a widespread community of memers.

## What it does
NWMeme is a data visualization of memes that are popular in different parts of the world. Entering the application, you are presented with a rich map with Pepe-the-frog markers on cities that have dank memes. Pepe markers are sized by their popularity score, which is composed of retweets, likes, and replies (a sketch of the scoring appears below). Clicking a Pepe marker brings up an accordion displaying the top five memes in that city, pictures of each meme, and information about it. We also have a chatbot that can reply to simple queries about memes, like "memes in Vancouver."

## How we built it
We wanted to base our tech stack on the tools the sponsors provided. This started from the bottom with CockroachDB as the database storing all the meme data that our Twitter web crawler scrapes. Our web crawler was written in Python, which Google gave an advanced-level talk about. Our backend server is in Node.js, which CockroachDB provides a wrapper for, hosted on Azure. Calling the backend APIs is a vanilla JavaScript application which uses Mapbox for the maps API. Alongside the map visualization, we also have a chatbot application using Microsoft's Bot Framework.

## Challenges we ran into
We had many ideas we wanted to implement, but for the most part we had no idea where to begin. A lot of the challenge came from figuring out how to implement these ideas; for example, finding out how to link a chatbot to our map. At the same time, we had to think of ways to scrape the dankest memes from the internet. We ended up choosing Twitter as our source and tried to come up with the hypest hashtags for the project. A big problem we ran into was that our database completely crashed an hour before the project was due; we had to redeploy our Azure VM and database from scratch.

## Accomplishments that we're proud of
We are proud that we used as many of the sponsor tools as possible instead of the tools we were already comfortable with. We really enjoyed the learning experience, and that is the biggest accomplishment. Bringing all the pieces together into a cohesive working application was another; it required lots of technical skill, communication, and teamwork, and we are proud of what we came up with.

## What we learned
We learned a lot about the different tools and APIs available from the sponsors, and got first-hand mentoring in working with them. It was a great technical learning experience. Aside from the technical side, we also learned a lot about communication and timeboxing. The largest part of our success was that we all worked on parallel tasks that did not block one another, and then came together for integration.

## What's next for NWMemes2017Web
We really want to improve interactivity for our users. For example, we could add chat for users to discuss meme trends. We also want more data visualization to show trends over time and other statistics. It would also be great to grab memes from different websites, to cover as much of the online meme ecosystem as possible.
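The marker sizing described above reduces to a small scoring function. The writeup says the popularity score combines retweets, likes, and replies; the weights and the log scaling below are illustrative choices, not NWMeme's actual numbers.

```python
import math

def popularity(tweet):
    return (2.0 * tweet["retweets"]     # shares spread a meme furthest
            + 1.0 * tweet["likes"]
            + 1.5 * tweet["replies"])   # replies signal active discussion

def marker_radius(city_tweets, base=10, scale=6):
    """Log-scale the city's summed score so huge cities don't swallow the map."""
    total = sum(popularity(t) for t in city_tweets)
    return base + scale * math.log10(1 + total)

vancouver = [{"retweets": 120, "likes": 900, "replies": 40}]
print(popularity(vancouver[0]), marker_radius(vancouver))
```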
## Inspiration
Alex K's girlfriend Allie is a writer and loves to read, but has had trouble reading for the last few years because of an eye-tracking disorder. She now tends toward audiobooks when possible, but misses the experience of reading a physical book.

Millions of other people also struggle with reading, whether for medical reasons, because of dyslexia (15-43 million Americans), or because they never learned how. They face significant limitations in life, both for reading books and for things like street signs, but existing phone apps that read text out loud are cumbersome to use, and existing "reading glasses" cost thousands of dollars! Thankfully, modern technology makes developing reading glasses much cheaper and easier, thanks to advances in AI on the software side and 3D printing for rapid prototyping. We set out to prove through this hackathon that glasses which open the world of written text to those who have trouble entering it themselves can be cheap and accessible.

## What it does
Our device attaches magnetically to a pair of glasses so users can wear it comfortably while reading, whether on a couch, at a desk, or elsewhere. The software tracks what they are seeing, and when written words appear in front of it, it chooses the clearest frame, transcribes the text, and reads it out loud.

## How we built it
**Software (Alex K):** On the software side, we first needed to get image-to-text (OCR, or optical character recognition) and text-to-speech (TTS) working. After trying a couple of libraries for each, we found Google's Cloud Vision API to have the best performance for OCR, and Google Cloud Text-to-Speech was also the top pick for TTS. The TTS performance was perfect for our purposes out of the box, but bizarrely, the OCR API seemed to predict characters with an excellent level of accuracy individually, yet poor accuracy overall, seemingly because it applies no knowledge of the English language in the process (producing errors like "Intreduction"). So the next step was implementing a simple unigram language model to filter the Google library's predictions down to the most likely words (a sketch appears below). Stringing everything together was done in Python with a combination of Google API calls and various libraries, including OpenCV for camera/image work, pydub for audio, and PIL and matplotlib for image manipulation.

**Hardware (Alex G):** We tore apart an unsuspecting Logitech webcam and did some minor surgery to focus the lens at an arm's-length reading distance. We CADed a custom housing for the camera with mounts for magnets to easily attach to the legs of glasses. This was 3D printed on a Form 2 printer, and a set of magnets glued into the slots, with a corresponding set on some NerdNation glasses.

## Challenges we ran into
The Google Cloud Vision API was very easy to use for individual images, but making synchronous batched calls proved challenging! Finding the best video frame to use for the OCR software was also not easy, and writing that code took up a good fraction of the total time. Perhaps most annoyingly, the Logitech webcam did not focus well at any distance. When we cracked it open, we were able to carefully remove the bits of glue holding the lens in the seller's configuration and dial it to the right distance for holding a book at arm's length. We also couldn't find magnets until the last minute, made a guess on the magnet mount hole sizes, and had an *exciting* Dremel session to fit them, which resulted in the part cracking and being beautifully epoxied back together.

## Acknowledgements
The Alexes would like to thank our girlfriends, Allie and Min Joo, for their patience and understanding while we went off to be each other's Valentines at this hackathon.
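The unigram filter is the most distinctive software piece, so here is one way it might look. The writeup confirms a simple unigram language model over the OCR output; the near-match heuristic and the toy counts below are our stand-ins for a real corpus.

```python
from difflib import get_close_matches

# Stand-in for unigram counts from a large English corpus.
UNIGRAM_COUNTS = {"introduction": 52000, "instruction": 48000, "induction": 9000}

def correct(word: str) -> str:
    """Replace an out-of-vocabulary OCR word with a frequent near-match."""
    if word.lower() in UNIGRAM_COUNTS:
        return word
    candidates = get_close_matches(word.lower(), list(UNIGRAM_COUNTS),
                                   n=3, cutoff=0.7)
    if not candidates:
        return word                    # nothing plausible: keep the OCR output
    return max(candidates, key=UNIGRAM_COUNTS.get)   # most frequent near-match

print(correct("Intreduction"))         # -> "introduction"
```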
## Inspiration
The idea was to help people who are blind discreetly gather context during social interactions and general day-to-day activities.

## What it does
The glasses take a picture and analyze it using Microsoft's, Google's, and IBM Watson's vision recognition APIs to understand what is happening. They then form a sentence and read it to the user. There's also a neural network at play that discerns between the two dens and can tell who is in the frame.

## How I built it
We took an RPi camera and increased the length of the cable. We then made a hole in one lens of the glasses and fit the camera in. We also added a touch sensor to discreetly control the camera.

## Challenges I ran into
The biggest challenge we ran into was natural language processing: trying to parse together a human-sounding sentence that describes the scene.

## What I learned
I learned a lot about the different vision APIs out there and about creating/training your own neural network.

## What's next for Let Me See
We want to further improve our analysis and reduce our analyzing time.
## Inspiration
There are thousands of people worldwide who suffer from conditions that make it difficult for them both to understand speech and to speak for themselves. According to the Journal of Deaf Studies and Deaf Education, the loss of a personal form of expression (through speech) can affect these individuals' internal stress and lead to detachment. One of our main goals in this project was to address this problem by developing a tool that would be a step forward in the effort to make it seamless for everyone to communicate. By exclusively utilizing mouth movements to predict speech, we can introduce a unique modality for communication. While developing this tool, we also realized how helpful it would be to ourselves in daily usage. In areas of commotion, or while our hands are busy, the ability to simply use natural lip-reading in front of a camera to transcribe text would make it much easier to communicate.

## What it does
**The Speakinto.space website-based hack has two functions: first and foremost, it is able to "lip-read" a stream from the user (discarding audio) and transcribe it to text; and secondly, it is capable of mimicking one's speech patterns to generate accurate vocal recordings of user-inputted text with very little latency.**

## How we built it
We have a Flask server running on an AWS server (thanks for the free credit, AWS!), connected to a machine learning model running on the server, with a frontend made with HTML and MaterializeCSS. The model was trained to transcribe people mouthing words, using the millions of words in the LRW and LRS datasets (from the BBC and TED). This algorithm's integration is the centerpiece of our hack. We used the HTML MediaRecorder to take 8-second clips of video to initially implement the video-to-mouthed-words function on the website, as a direct application of the machine learning model. We then added an encoder model to translate audio into an embedding containing vocal information, and a decoder to convert the embeddings to speech. To convert the text from the first function to speech output, we use the Google Text-to-Speech API (a sketch appears below); this would be the main point of future development of the technology, toward noiseless calls.

## Challenges we ran into
The machine learning model was quite difficult to create and required a large amount of testing (and caffeine) to finally reach a model that was fairly accurate for visual analysis (72%). Preprocessing the data and formatting such a large amount of it to train the algorithm took the most time, but it was extremely rewarding when we finally saw our model begin to train.

## Accomplishments that we're proud of
Our final product is much more than any of us expected, especially given that it seemed like an impossibility when we first started. We are very proud of the optimizations that were necessary to run the webpage fast enough to be viable in an actual use scenario.

## What we learned
Developing such a wide array of computing concepts, from web development to statistical analysis to the development and optimization of ML models, was an amazing learning experience over the last two days. We all learned so much from each other, as each of us brought special expertise to our team.

## What's next for speaking.space
As a standalone site, it has its use cases, but they are limited by the need to navigate to the page. The next steps are to integrate it with other services, such as Facebook Messenger or Google Keyboard, to make it available when it is needed, just as conveniently as its inspiration.
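The text-to-speech stage follows Google Cloud TTS's standard quickstart shape, sketched below; credentials are assumed to be configured in the environment, and the voice settings are defaults rather than the team's actual configuration.

```python
from google.cloud import texttospeech

def speak(text: str, out_path: str = "out.mp3") -> None:
    """Turn the lip-reading model's transcript into an MP3."""
    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3),
    )
    with open(out_path, "wb") as f:
        f.write(response.audio_content)   # MP3 bytes, ready for playback

speak("hello from the lip reader")
```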
## What it does
Alzheimer's disease and dementia affect many of our loved ones every year; in fact, **76,000 diagnoses** of dementia are made every year in Canada. One of the largest issues caused by Alzheimer's is the loss of the ability to make informed, cognitive decisions about one's finances, which makes patients especially vulnerable to things such as scams and high-pressure sales tactics. Here's an unfortunate real-life example: <https://www.cbc.ca/news/business/senior-alzheimers-upsold-bell-products-source-1.6014904>

We were inspired by this heartbreaking story to build HeimWallet. HeimWallet is a digital banking solution that allows for **supervision** over a savings account owned by an individual incapable of managing their finances, **tailored** specifically to patients with Alzheimer's disease or dementia. It can be thought of as a mobile debit card linked to a savings account that only allows spending if certain conditions set by a designated *guardian* are met. It allows a family member or other trusted guardian to set a **daily allowance** for a patient and **keep track of their purchases**. It also lets guardians keep tabs on the **location of patients via GPS** every time a purchase is attempted, and authorize or refuse attempted purchases that go beyond the daily allowance (a sketch of this flow appears below). This ensures that patients and their guardians can have confidence that the patient's assets are in safe hands. Further, the daily-allowance feature empowers patients to be independent and **shop with confidence**, knowing that their disease will not be able to dominate their finances.

The name "HeimWallet" comes from the "-heim" in "Alzheimer's". It also alludes to Heimdall, the mythical Norse guardian of the bridge leading to Asgard.

## How we built it
The frontend was built using React Native and Expo, while the backend was made using Python (Flask) and MongoDB. SMS functionality was added using Twilio, and location services using the Google Maps API. The backend was deployed to Heroku.

We chose **React Native** because it allowed us to build our app for both iOS and Android from one codebase. **Expo** enabled rapid testing and prototyping. **Flask**'s light weight was key to getting the backend built under tight time constraints, and **MongoDB** was a natural choice for our database since we were building the app in JavaScript. **Twilio** enabled us to create a solution that works even for guardians who do not have the app installed: its text-message-based interactions make the product accessible to those without smartphones or mobile data. We deployed our backend to **Heroku** so that Twilio could reach our backend's webhook for incoming text messages. Finally, the **Google Maps API**'s reverse-geocoding feature lets guardians see the address where a patient is located when a transaction is attempted.

## Challenges we ran into
* Fighting with Heroku for almost *six hours* to get the backend deployed. The core mistake ended up being that we were trying to deploy our Python-based backend as a Node.js app... oops.
* Learning React Native -- all of us were new to it, and although we all had experience building web apps, we didn't have the same foundation with mobile apps.
* Translating Figma designs into React Native in a way that is cross-platform between Android, iOS, and web. A lot of styling works differently across these platforms, so it was tricky to make our app look consistent everywhere.
* Managing a mix of team members hacking in person and online. Constant communication to keep everyone in the loop was key!

## Accomplishments that we're proud of
We're super proud that we managed to come together and make our vision a reality, and especially proud of how much we learned and took away from this hackathon. From learning React Native, to Twilio, to getting better with Figma and sharpening our video-editing skills for our submission, it was thrilling to gain exposure to so much in so little time. We're also proud of the genuine hard work every member of our team put in to make this project happen -- we worked deep into the A.M. hours and constantly sought to improve the usability of our product with continuous suggestions and improvements.

## What's next for HeimWallet
Here are some things we think we can add to bring HeimWallet to the next level:
* Proper integration of SOS (e.g. call 911) and send-location functionality in the patient interface
* The ability to have multiple guardians for one patient, so that many eyes safeguard the same assets
* Better security and authentication features for the app; of course, security is vital in a fintech product
* A feature letting patients send a voice memo to a guardian to clarify a spending request
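Here is a hedged sketch of the authorization flow described in "What it does": a purchase over the daily allowance holds the charge and texts the guardian via Twilio. The amounts, phone numbers, and the `attempt_purchase` helper are stand-ins, not the team's actual code; the real app also records GPS via the Google Maps API.

```python
import os
from twilio.rest import Client

twilio = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

def attempt_purchase(patient: dict, amount: float, merchant: str) -> str:
    if patient["spent_today"] + amount <= patient["daily_allowance"]:
        patient["spent_today"] += amount
        return "approved"
    # Over allowance: hold the charge and ask the guardian over SMS.
    twilio.messages.create(
        from_=os.environ["TWILIO_NUMBER"],
        to=patient["guardian_phone"],
        body=(f"{patient['name']} wants to spend ${amount:.2f} at {merchant}, "
              f"over today's ${patient['daily_allowance']:.2f} allowance. "
              f"Reply YES to approve or NO to refuse."),
    )
    return "pending guardian approval"

patient = {"name": "Ann", "daily_allowance": 40.0, "spent_today": 35.0,
           "guardian_phone": "+15551234567"}
print(attempt_purchase(patient, 25.0, "Bell Store"))
```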
## Inspiration We were inspired by Katie's 3-month hospital stay as a child when she had a difficult-to-diagnose condition. During that time, she remembers being bored and scared -- there was nothing fun to do and no one to talk to. We also looked into the larger problem and realized that 10-15% of kids in hospitals develop PTSD from their experience (not their injury) and 20-25% in ICUs develop PTSD. ## What it does The AR iOS app we created presents educational, gamification aspects to make the hospital experience more bearable for elementary-aged children. These features include: * An **augmented reality game system** with **educational medical questions** that pop up based on image recognition of given hospital objects. For example, if the child points the phone at an MRI machine, a basic quiz question about MRIs will pop-up. * If the child chooses the correct answer in these quizzes, they see a sparkly animation indicating that they earned **gems**. These gems go towards their total gem count. * Each time they earn enough gems, kids **level-up**. On their profile, they can see a progress bar of how many total levels they've conquered. * Upon leveling up, children are presented with an **emotional check-in**. We do sentiment analysis on their response and **parents receive a text message** of their child's input and an analysis of the strongest emotion portrayed in the text. * Kids can also view a **leaderboard of gem rankings** within their hospital. This social aspect helps connect kids in the hospital in a fun way as they compete to see who can earn the most gems. ## How we built it We used **Xcode** to make the UI-heavy screens of the app. We used **Unity** with **Augmented Reality** for the gamification and learning aspect. The **iOS app (with Unity embedded)** calls a **Firebase Realtime Database** to get the user’s progress and score as well as push new data. We also use **IBM Watson** to analyze the child input for sentiment and the **Twilio API** to send updates to the parents. The backend, which communicates with the **Swift** and **C# code** is written in **Python** using the **Flask** microframework. We deployed this Flask app using **Heroku**. ## Accomplishments that we're proud of We are proud of getting all the components to work together in our app, given our use of multiple APIs and development platforms. In particular, we are proud of getting our flask backend to work with each component. ## What's next for HealthHunt AR In the future, we would like to add more game features like more questions, detecting real-life objects in addition to images, and adding safety measures like GPS tracking. We would also like to add an ALERT button for if the child needs assistance. Other cool extensions include a chatbot for learning medical facts, a QR code scavenger hunt, and better social interactions. To enhance the quality of our quizzes, we would interview doctors and teachers to create the best educational content.
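The emotional check-in reduces to one analysis call plus a Twilio message. The sketch below guesses that the team used Watson Natural Language Understanding's emotion feature, since the writeup only says "IBM Watson"; the version string, environment-variable names, and sample text are illustrative.

```python
import os
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import EmotionOptions, Features
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

nlu = NaturalLanguageUnderstandingV1(
    version="2022-04-07",
    authenticator=IAMAuthenticator(os.environ["WATSON_APIKEY"]))
nlu.set_service_url(os.environ["WATSON_URL"])

def strongest_emotion(check_in_text: str) -> tuple[str, float]:
    """Return the dominant emotion (joy, sadness, fear, ...) and its score."""
    result = nlu.analyze(
        text=check_in_text,
        features=Features(emotion=EmotionOptions())).get_result()
    scores = result["emotion"]["document"]["emotion"]
    name = max(scores, key=scores.get)
    return name, scores[name]   # forwarded to the parent via Twilio SMS

print(strongest_emotion("I miss my dog but the nurses are really nice."))
```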
## Inspiration:
We got our inspiration from our own grandparents. Many of them live alone on the other side of the world, and elderly people who feel lonely and bored have a high probability of experiencing health problems such as depression and Alzheimer's. It is also difficult for far-away children to keep caring for their elderly parents. There are lots of senior healthcare systems with IoT these days, but the ones with sensors and cameras can make their users (seniors) feel like they're being watched, which can be a turn-off.

## What it does:
Our app aids doctors in diagnosing and monitoring patients' progress with a disease over time. Evi prompts the patient with a questionnaire tailored to their specific health issue. We record the patient's responses as well as their emotional state throughout the interaction, and send this information to a doctor for analysis. The detailed insights, along with the doctor's analysis, are crucial for tailoring personalized treatment plans, ensuring both the physical and emotional well-being of the patient, and adapting strategies as needed to optimize overall health outcomes over time. We will demo a dementia questionnaire measuring patients' long-term memory, short-term memory, mood, and behavioral changes, with Evi monitoring their emotional state throughout the interaction.

## How we built it:
We built our application using the Hume.AI API, with the application itself written in Python.

## Challenges we ran into:
Integrating Hume.AI's API presented significant challenges, particularly because the documentation was outdated, requiring us to collaborate closely with Hume.AI's team to understand how to integrate the API into our application effectively. We also had to carefully design the user interface to be elderly-friendly, ensuring it was intuitive and accessible for our target audience. Another major task was developing a recommendation system that could accurately and safely suggest supplements and over-the-counter medications suitable for elderly users, considering their specific health needs and conditions. These challenges demanded meticulous planning, testing, and collaboration to ensure the app was both functional and user-friendly.

## Accomplishments that we're proud of:
We're really proud of what we've achieved with this project. Integrating Hume.AI's API was a tough nut to crack, especially with the outdated documentation, but by teaming up with the folks at Hume.AI, we figured it out and got it running smoothly. Designing the app to be user-friendly for the elderly was another big win: the interface is simple and intuitive, so it's easy for them to use. We also built a recommendation system that carefully picks out supplements and over-the-counter medications tailored to our users' needs. Another highlight is our ability to track user data like steps, sleep hours, and exercise routines, which gets reported back to caretakers; this real-time health monitoring feature is essential for proactive care. The app also makes it easy to book appointments with doctors or nurses and to send messages to caretakers, streamlining communication and care coordination. We put a lot of effort into making these functions work seamlessly together, creating a comprehensive healthcare tool. These accomplishments show not just our technical skills but also our dedication to making a meaningful impact with our app.

## What we learned:
We got hands-on experience implementing the Hume.AI API within a mobile application, which was a great learning curve, especially working with real-time emotion detection and data analysis. We also became proficient in using Figma to design a user-friendly UI, ensuring our elderly patients could navigate the app with ease. Additionally, we used Flask to create a robust server that securely transmits crucial patient data to their caretakers and doctors. This not only strengthened our technical skills but also deepened our understanding of creating practical, user-centered healthcare solutions.

## What's next for Harmony Health:
As we prepare to bring our hackathon project to market, our next step is to target hospitals for implementation. We will focus on refining our app to meet the stringent requirements of the healthcare industry and ensure it complies with all relevant regulations and standards. Collaborating with healthcare professionals, we will run pilot programs in selected hospitals to gather feedback and demonstrate the app's effectiveness in real-world settings. We will also develop comprehensive training materials and support resources for medical staff to facilitate smooth integration into their existing workflows. By building strong relationships with hospital administrators and showcasing the app's benefits, we aim to establish a solid foundation for widespread adoption and improved patient care at scale.
## Inspiration
Have you ever met someone but forgotten their name right afterwards? Our inspiration for INFU comes from our own struggles to remember every detail of every conversation. We all deal with moments of embarrassment or disconnection when we fail to remember someone's name or the details of past conversations. We know these challenges are not unique to us, but common across social and professional settings. INFU was born to bridge the gap between our human limitations and the potential for enhanced interpersonal connections, ensuring no details or interactions are lost to memory again.

## What it does
By attaching a camera and microphone to a user, we can record conversations, transcribing the audio and categorizing speakers using facial recognition. We upload these details to a database, have them summarized by an AI, and display them on our website and custom wrist wearable.

## How we built it
There are three main parts to the project. The first is the hardware, which includes all the wearable components. The second is the face recognition and speech-to-text processing, which receives camera and microphone input from the user's iPhone. The third is storing, modifying, and retrieving people's faces, names, and conversations from our database.

The hardware comprises an ESP32, an OLED screen, and two wires that act as touch buttons. These buttons start and stop recording, turning the face recognition and microphone on and off. Data is sent wirelessly via Bluetooth to the laptop, which processes the face-recognition and speech data. Once a person's name and your conversation with them are extracted, either from the current data or from prior data in the database, the laptop sends them to the wearable, which displays them on the OLED screen.

The laptop acts as the control center. It runs a backend Python script that takes in data from the wearable via Bluetooth and from the iPhone via WiFi. The Python face_recognition library detects the speaker's face and takes a picture (a sketch of this step appears below). Speech is then extracted from the microphone using the Google Cloud Speech-to-Text API and parsed with the OpenAI API, giving us the person's name and the discussion the user had with them. This data is sent to the wearable and to the cloud database, along with a picture of the person's face labeled with their name. Therefore, if the user meets the person again, their name and a summary of the last conversation can be retrieved from the database and shown on the wearable.

## Accomplishments that we're proud of
* Creating an end product with a complex tech stack despite various setbacks
* Having a working demo
* Organizing and working efficiently as a team to complete this project over the weekend
* Combining and integrating hardware, software, and AI into one project

## What's next for INFU
* Further optimizing our hardware
* Developing our own ML model to enhance speech-to-text accuracy across different accents, speech mannerisms, and languages
* Integrating more advanced NLP techniques to refine conversation transcripts
* Improving the user experience with personalization and privacy features
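The recognition step uses the Python face_recognition library the writeup names; the sketch below shows its standard encode-and-compare pattern. The 0.6 distance cutoff is the library's conventional default, and the file names and `identify` helper are placeholders rather than the team's actual script.

```python
import face_recognition

def identify(frame_path: str, known: list):
    """known: list of (name, encoding) pairs previously stored in the database."""
    image = face_recognition.load_image_file(frame_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return None                                 # no face in this frame
    distances = face_recognition.face_distance([e for _, e in known],
                                               encodings[0])
    best = distances.argmin()
    return known[best][0] if distances[best] < 0.6 else None

known = [("Alice", face_recognition.face_encodings(
    face_recognition.load_image_file("alice.jpg"))[0])]
print(identify("capture.jpg", known) or "new person: ask the AI for a name")
```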
## Inspiration
Imagine a world where your best friend is standing in front of you, but you can't see them. Or you go to read a menu, but you cannot, because the restaurant does not have specialized braille menus. For millions of visually impaired people around the world, these are not hypotheticals; they are facts of life. Hollywood has largely solved this problem in entertainment: audio descriptions allow the blind or visually impaired to follow the plot of movies easily. With Sight, we are trying to bring the power of audio description to everyday life.

## What it does
Sight is an app that allows the visually impaired to recognize their friends, get an idea of their surroundings, and have written text read aloud. The app also uses voice recognition to listen for speech commands to identify objects or people, or to read text.

## How we built it
The front end is a native iOS app written in Swift and Objective-C with Xcode. We use Apple's native vision and speech APIs to give the user intuitive control over the app.

The back-end service is written in Go and is served with ngrok.

We repurposed the Facebook tagging algorithm to recognize a user's friends. When the Sight app sees a face, it is automatically uploaded to the back-end service. The back end then "posts" the picture privately to the user's Facebook. If any faces show up in the photo, Facebook's tagging algorithm suggests possibilities for who in the user's friend group they might be. We scrape this data from Facebook to match names with faces in the original picture. If and when Sight recognizes a person as one of the user's friends, that friend's name is read aloud.

We make use of the Google Vision API in three ways (a sketch appears below):
* To run sentiment analysis on people's faces, to get an idea of whether they are happy, sad, surprised, etc.
* To run optical character recognition on text in the real world, which is then read aloud to the user.
* For label detection, to identify objects and surroundings in the real world, which the user can then query about.

## Challenges we ran into
There was a plethora of challenges over the course of the hackathon:

1. Each member of the team wrote their portion of the back-end service in a language they were comfortable in. When we came together, we decided that combining services written in different languages would be overly complicated, so we rewrote the entire back end in Go.
2. Rewriting portions of the back end in Go gave us a massive performance boost, which turned out to be both a curse and a blessing: because of the limit on how quickly we are able to upload images to Facebook, we had to add a workaround to ensure that we do not check for tag suggestions before the photo has been uploaded.
3. When the optical character recognition service was prototyped in Python on Google App Engine, it became mysteriously rate-limited by the Google Vision API. Regenerating API keys proved to no avail, and ultimately we overcame this by rewriting the service in Go.

## Accomplishments that we're proud of
Each member of the team came to this hackathon with a very disjoint set of skills and ideas, so we are really glad about how well we were able to build an elegant, put-together app. Facebook does not have an official way of letting apps use their facial recognition service, so we are proud of the workaround we figured out that allowed us to use Facebook's powerful facial recognition software. We are also proud of how fast the Go back end runs, but more than anything, we are proud of building a really awesome app.

## What we learned
Najm taught himself Go over the course of the weekend, which he had no experience with before coming to YHack. Nathaniel and Liang learned about the Google Vision API and how to use it for OCR, facial detection, and facial emotion analysis. Zak learned about building a native iOS app that communicates with data-rich APIs. We also learned about making clever use of Facebook's API to tap into their powerful facial recognition service. Over the course of the weekend, we encountered more problems and bugs than we'd probably like to admit; most of all, we learned a ton of valuable problem-solving skills while working together to overcome them.

## What's next for Sight
If Facebook ever adds an API that allows facial recognition, we think that would enable even more powerful friend-recognition functionality in our app. Ultimately, we plan to host the back end on Google App Engine.
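The three Vision API uses listed above map onto three client calls. The production back end is Go; Python is shown here since that is where the OCR service was first prototyped. The `describe` helper and the five-label cap are our illustrative choices.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def describe(image_bytes: bytes) -> dict:
    image = vision.Image(content=image_bytes)
    faces = client.face_detection(image=image).face_annotations
    texts = client.text_detection(image=image).text_annotations
    labels = client.label_detection(image=image).label_annotations
    return {
        # Likelihood enums (e.g. VERY_LIKELY) for each detected face's joy.
        "joy": [face.joy_likelihood.name for face in faces],
        "text": texts[0].description if texts else "",   # full OCR transcript
        "surroundings": [label.description for label in labels[:5]],
    }

with open("scene.jpg", "rb") as f:
    print(describe(f.read()))
```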
## Inspiration
As AR develops and becomes more discreet, we are going to see it more and more in everyday life. We were excited by the social side of AR, and how integrating social networks into real-life interactions brings the "social" back to social networking. We wanted to find a way to identify humans through facial tracking/recognition and bring up a known set of social characteristics to augment your interaction.

## What it does
Once you've talked to a person for a certain amount of time, SocialEyes recognizes and stores their face in the Android app. It groups those faces by person, and you can then connect a face to its owner's Facebook account. From then on, whenever you encounter that person, SocialEyes tracks and recognizes the face and brings up his or her information in a HUD environment. SocialEyes can even tell you the person's heart rate as you are talking to them.

## How I built it
We tuned Haar cascades trained for finding faces in OpenCV to locate the positions of heads in space from the view of the camera on our Meta augmented reality (AR) headset (a sketch of this step appears below). If we've never seen the face before, we send it to our back end, which can be accessed and edited using our companion Android app. From there, faces are sent to Azure, where they are grouped using Project Oxford. The groups can be tagged using our Android app and then linked to Facebook using the Facebook API. The next time we see the face, we send it to our back end, which sends it to Azure for identification among the tagged face groups we've accumulated. The back end transmits information about the person, originally grabbed from the Facebook API, back to the computer running the headset, which displays it on the AR headset next to the identified person's head.

At the same time, we calculated subjects' heart rates given only the AR headset's video feed, using Eulerian video magnification, a technique originally developed in MIT's Computer Science and Artificial Intelligence Laboratory. We captured groups of pixels on people's foreheads, located using a combination of Haar cascades for the eyes and head, and then processed these pixel groups with a signal-processing library (part of OpenMDAO) to calculate the subject's heart rate (after a small calibration period).

## Challenges I ran into
The scope of this project presented the most issues--we were working on basically three different projects that all had to come together and function smoothly. Latency was easily the largest issue spanning all three. We had to use computer vision to detect and track heads in the frame while simultaneously updating the UI and sending our images off for external processing in Azure. Concurrently, we had to find the heart rate through image processing. We were also working with a Meta kit that is still developer-only, so there are limitations on things like field of view and resolution that we had to work around.

## Accomplishments that I'm proud of
We're most proud of the sheer complexity of this project. There was so much to do on so many different platforms that we couldn't be sure anything would work together smoothly. Because it was so multifaceted, we had to work evenly as a team and strategically divide up tasks, so we're also proud of how well the team worked together.

## What I learned
We learned a lot about Azure and the Facebook Graph API, which were both instrumental in our project's success. We had to learn a lot in a very short amount of time, but both ended up working flawlessly with our product.

## What's next for SocialEyes AR
What's most exciting about this project is the interaction between AR and humans. AR headsets can only augment reality so much without interacting with humans, and we think this is the next step in human-to-tech interaction. Five or ten years down the road, this is the kind of thing that will humanize artificial intelligence--the ability to identify and "know" a human being by face.
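The head-finding stage is classic OpenCV; the sketch below shows the Haar-cascade pattern the writeup describes, with OpenCV's stock frontal-face cascade and common tuning defaults standing in for the team's tuned values, and a webcam standing in for the headset feed.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def find_faces(frame):
    """Return face crops from one headset frame, for upload to the back end."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # scaleFactor/minNeighbors trade recall against false positives.
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(60, 60))
    return [frame[y:y + h, x:x + w] for (x, y, w, h) in boxes]

cap = cv2.VideoCapture(0)            # webcam stand-in for the Meta headset feed
ok, frame = cap.read()
if ok:
    print(f"{len(find_faces(frame))} face(s) queued for Azure identification")
cap.release()
```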
winning
## Inspiration The results of random brainstorming ## What it does Creates a new endpoint that accepts your chosen parameters and executes your code to generate a response ## How we built it Created a front end in HTML/CSS/JS, built a server with NodeJS, and hosted it on Google App Engine ## Challenges I ran into Passing files to the server ## Accomplishments that I'm proud of Executed a function with arbitrary parameters from a PostgreSQL DB ## What I learned How to convert a function string back to a function and call it with parameters in NodeJS ## What's next for Totally Secure Code Execution
## Inspiration Interest in statistical analysis and parsing emotions from text. ## What it does Scores and visualises the emotional word distribution of text on websites at a provided URL. ## How we built it With Git and Grit ## Challenges we ran into DNS caches are annoying. Dynamic JSON was a new learning experience for some of us. ## Accomplishments that we're proud of The completeness of our project. ## What we learned That we still love rubber duckies. Also text parsing is cool. Also Node.js is cool. ## What's next for EmoShown This app can grow in an infinite number of ways. For example: we can make a Chrome extension to give you information about the emotional distribution of words on any page you are currently browsing. We can also run statistical analysis on the emotional distribution of words across many sites.
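The core scoring idea can be sketched as a lexicon lookup over page text. The toy lexicon below is a stand-in for whatever emotion word list the project actually used:

```python
# Minimal emotion-distribution sketch, assuming a toy lexicon; the real
# project scraped page text from a URL before scoring it.
from collections import Counter
import re

EMOTION_LEXICON = {  # hypothetical lexicon entries
    "happy": "joy", "love": "joy", "angry": "anger",
    "hate": "anger", "afraid": "fear", "sad": "sadness",
}

def emotion_distribution(text):
    """Return the share of emotion-tagged words per emotion."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(EMOTION_LEXICON[w] for w in words if w in EMOTION_LEXICON)
    total = sum(counts.values()) or 1
    return {emotion: n / total for emotion, n in counts.items()}

print(emotion_distribution("I love rubber duckies but hate DNS caches"))
# -> {'joy': 0.5, 'anger': 0.5}
```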
## Inspiration <https://www.youtube.com/watch?v=lxuOxQzDN3Y> Robbie's story stuck out to us, showing the endless possibilities of technology. He was diagnosed with muscular dystrophy, which prevented him from having full control of his arms and legs. He was gifted a Google Home that crafted his home into a voice-controlled machine. We wanted to take this a step further and make computers more accessible for people such as Robbie. ## What it does We use a Google Cloud based API that helps us detect words and phrases captured from the microphone input. We then convert those phrases into commands for the computer to execute. Since the Python script is run in the terminal, it can be used across the computer and all its applications. ## How I built it The first (and hardest) step was figuring out how to leverage Google's API to our advantage. We knew it was able to detect words from an audio file, but there was more to this project than that. We started piecing together libraries to get access to the microphone, file system, keyboard and mouse events, cursor x,y coordinates, and so much more. We built a large (~30) function library that could be used to control almost anything in the computer. ## Challenges I ran into Configuring the many libraries took a lot of time, especially with compatibility issues between Mac and Windows, Python 2 and 3, etc. Many of our challenges were solved by either thinking of a better solution or asking people on forums like StackOverflow. For example, we wanted to change the volume of the computer using the fn+arrow key shortcut, but Python is not allowed to access that key. ## Accomplishments that I'm proud of We are proud of the fact that we built an alpha version of an application we intend to keep developing, because we believe in its real-world applications. From a technical perspective, I was also proud of the fact that we were able to successfully use a Google Cloud API. ## What I learned We learned a lot about how the machine interacts with different events in the computer and the time dependencies involved. We also learned about the ease of use of a Google API, which encourages us to use more, and to encourage others to do so, too. We also learned about the different nuances of speech detection, like how to tell the API to pick the word "one" over "won" in certain contexts, how to change a "one" to a "1", and how to reduce ambient noise. ## What's next for Speech Computer Control At the moment we are manually running this script through the command line, but ideally we would want a more user-friendly experience (GUI). Additionally, we had developed a Chrome extension that numbers off each link on a page after a Google or YouTube search query, so that we would be able to say something like "jump to link 4". We were unable to get the web-to-python code just right, but we plan on implementing it in the near future.
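A rough sketch of the recognize-then-dispatch loop is below, assuming a pre-recorded WAV clip and two illustrative commands; the function names are ours, not the project's actual ~30-function library:

```python
# Recognize a spoken command with Google Cloud Speech-to-Text, then dispatch
# it to a keyboard/mouse action via pyautogui. Clip path, sample rate, and
# command phrases are assumptions.
import pyautogui
from google.cloud import speech

def transcribe(wav_path):
    client = speech.SpeechClient()
    with open(wav_path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
        # Bias recognition toward command words ("one" over "won", etc.)
        speech_contexts=[speech.SpeechContext(
            phrases=["click", "scroll down", "jump to link one"])],
    )
    response = client.recognize(config=config, audio=audio)
    return response.results[0].alternatives[0].transcript.lower()

COMMANDS = {
    "click": pyautogui.click,
    "scroll down": lambda: pyautogui.scroll(-200),
}

phrase = transcribe("command.wav")
COMMANDS.get(phrase, lambda: print(f"unrecognized: {phrase}"))()
```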
losing
## Inspiration During extreme events such as natural disasters or virus outbreaks, crisis managers are the decision makers. Their job is difficult since the right decision can save lives while the wrong decision can lead to their loss. Making such decisions in real-time can be daunting when there is insufficient information, which is often the case. Recently, big data has gained a lot of traction in crisis management by addressing this issue; however, it creates a new challenge. How can you act on data when there's just too much of it to keep up with? One example of this is the use of social media during crises. In theory, social media posts can give crisis managers an unprecedented level of real-time situational awareness. In practice, the noise-to-signal ratio and volume of social media are too large to be useful. I built CrisisTweetMap to address this issue by creating a dynamic dashboard for visualizing crisis-related tweets in real-time. The focus of this project was to make it easier for crisis managers to extract useful and actionable information. To showcase the prototype, I used tweets about the current coronavirus outbreak. ## What it does * Scrape live crisis-related tweets from Twitter; * Classify tweets into relevant categories with a deep learning NLP model; * Extract geolocation from tweets with different methods; * Push classified and geolocated tweets to a database in real-time; * Pull tweets from the database in real-time to visualize on the dashboard; * Allow dynamic user interaction with the dashboard ## How I built it * Tweepy + custom wrapper for scraping and cleaning tweets; * AllenNLP + torch + BERT + CrisisNLP dataset for model training/deployment; * spaCy NER + geotext for extracting location names from text (see the sketch below); * geopy + gazetteer Elasticsearch Docker container for extracting geolocation from locations; * shapely for sampling geolocation from bounding boxes; * SQLite3 + pandas for database push/pull; * Dash + plotly + Mapbox for live visualizations; ## Challenges I ran into * Geolocation is hard; * Stream stalling due to a large/slow neural network; * Responsive visualization of large amounts of data interactively; ## Accomplishments that I'm proud of * A working prototype ## What I learned * Different methods for fuzzy geolocation from text; * Live map visualizations with Dash; ## What's next for CrisisTweetMap * Other crises like extreme weather events;
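A condensed sketch of the spaCy + geopy geolocation path, assuming the small English spaCy model is installed; the real pipeline also fell back to a gazetteer in Elasticsearch and sampled points from bounding boxes with shapely:

```python
# Extract the first location entity from a tweet and geocode it. The
# Nominatim user agent string is a placeholder.
import spacy
from geopy.geocoders import Nominatim

nlp = spacy.load("en_core_web_sm")
geocoder = Nominatim(user_agent="crisistweetmap-demo")

def geolocate_tweet(text):
    """Return (place name, (lat, lon)) for the first location entity found."""
    for ent in nlp(text).ents:
        if ent.label_ in ("GPE", "LOC", "FAC"):  # geopolitical/location entities
            hit = geocoder.geocode(ent.text)
            if hit:
                return ent.text, (hit.latitude, hit.longitude)
    return None

print(geolocate_tweet("New coronavirus cases reported in Wuhan today"))
```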
**In times of disaster, there is an outpouring of desire to help from the public. We built a platform which connects people who want to help with people in need.** ## Inspiration Natural disasters are an increasingly pertinent global issue which our team is quite concerned with. So when we encountered the IBM challenge relating to this topic, we took interest and further contemplated how we could create a phone application that would directly help with disaster relief. ## What it does **Stronger Together** connects people in need of disaster relief with local community members willing to volunteer their time and/or resources. Such resources include but are not limited to shelter, water, medicine, clothing, and hygiene products. People in need may input their information and what they need, and volunteers may then use the app to find people in need of what they can provide. For example, someone whose home is affected by flooding due to Hurricane Florence in North Carolina can input their name, email, and phone number in a request to find shelter, so that this need is discoverable by any volunteers able to offer shelter. Such a volunteer may then contact the person in need through call, text, or email to work out the logistics of getting to the volunteer's home to receive shelter. ## How we built it We used Android Studio to build the Android app. We deployed an Azure server to handle our backend (Python). We used the Google Maps API in our app. We are currently working on using Twilio for communication and the IBM Watson API to prioritize help requests in a community. ## Challenges we ran into Integrating the Google Maps API into our app proved to be a great challenge for us. We also realized that our original idea of including blood donation as one of the resources would require some correspondence with an organization such as the Red Cross in order to ensure the donation would be legal. Thus, we decided to move blood donation to our future aspirations for this project due to the time constraint of the hackathon. ## Accomplishments that we're proud of We are happy with our design and with the simplicity of our app. We learned a great deal about writing the server side of an app and designing an Android app using Java and the Google Maps API during the past 24 hours. We had huge aspirations, and eventually we created an app that can potentially save people's lives. ## What we learned We learned how to integrate the Google Maps API into our app. We learned how to deploy a server with Microsoft Azure. We also learned how to use Figma to prototype designs. ## What's next for Stronger Together We have high hopes for the future of this app. The goal is to add an AI-based notification system which alerts people who live in a predicted disaster area. We aim to decrease the impact of the disaster by alerting volunteers and locals in advance. We also may include some more resources, such as blood donations.
## Inspiration Everybody carries a smartphone nowadays. Communication technologies contained within these devices could provide invaluable data for first responders, safety personnel, hosts of events, and even curious individuals. Furthermore, we combine quantitative data with context from social media to provide further insight to the activities of an area. ## What it does reSCue is a web application that empowers first responders to view a live heatmap of a region as well as precise locations of individuals within a building if floor plan data exists. A live Twitter feed is viewable alongside the map to provide context for activity occurring within a given location. ## How we built it We utilized **WRLD API** to create 3D heatmaps of regions as well as the interiors of buildings. **Cisco Meraki API** was used to gather the location data of individuals from installed routers. **Socket.io** was used to update the client with real-time data. **Twitter API** was used to power the activity feed. The front-end was built with **Bootstrap** while the back-end ran on a **node.js** server. The project source is viewable at <https://github.com/DenimMazuki/reSCue>. ## Challenges we ran into and what we learned Coding to properly overlay the heatmap in different contexts on the map forced us to account for different use cases. Pipelining data between APIs required extensive reading of documentation. ## Accomplishments that we're proud of This was the first hackathon for most of our team members. We learned much during this 36 hour journey about new and unfamiliar technologies and project management. We're proud of our product, which provides the users with a clean, beautiful experience where they can visualize and interact with the data they expect to see. ## What's next for reSCue The technology behind reSCue may be ported to other applications, such as tracking events, office traffic, etc. As more interior data becomes available, the application will only become more useful. Features that may prove to be useful in the future include a **timeline to view historical location data**, **analysis** of social media feeds to highlight topics and events being discussed, notifications of spikes in traffic, and data from sources, such as USGS and NOAA.
winning
## Inspiration * The COVID-19 pandemic has bolstered an epidemic of anxiety among students * Frequent **panic attacks** are a symptom of anxiety * In the moment, panic attacks are frightening and crippling * In a time of isolation, Breeve is designed to improve users' mental health by identifying and helping them when they are experiencing panic attacks ## What it does * A heart rate monitor detects a significant increase in heart rate, indicative of a panic attack (see the sketch below) * The Arduino sends a signal to initiate a "breathing routine" * A Google Chrome extension (after getting a message from the Arduino) opens a new tab with our webpage on it * Our webpage has a serene background, comforting words, and a moving cloud to help people focus on breathing and relaxing ## How we built it * The Chrome extension and website are built in HTML, CSS, and JavaScript * The heart rate monitor consists of an Arduino UNO microcontroller, a heart rate sensor (we substituted a potentiometer since we don't own a heart rate sensor), and a breadboard circuit ## Challenges we ran into * As this is our first hardware hack, we struggled with connecting the hardware and software. We were unable to use the "Keyboard()" Arduino library to let the Arduino initialize the Chrome extension, and we struggled with using other technologies like Firebase to connect Arduino sensor input to the Chrome extension's output. This is something we plan to learn about for future improvements to Breeve and future hackathons. ## Accomplishments that we're proud of * This is our first hardware hack! ## What we learned * Kirsten learned a lot about Arduino and breadboarding (e.g., how to hook up a potentiometer) * Lavan learned about CSS animations and how a database could be used in the future to connect various input and output sources ## What's next for Breeve * More personalized → add prompts to phone a friend or take anxiety medication (if applicable) * Better sensor data (e.g. webcam, temperature sensor) to make a more informed diagnosis * Improved webpage (adding calming music in the background to create a safe, happy atmosphere)
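The trigger logic can be approximated in a few lines of Python with pyserial, assuming the Arduino prints one heart-rate (or potentiometer) reading per line over USB serial; the port, thresholds, and the `webbrowser` stand-in for the Chrome extension are all placeholders:

```python
# Build a resting baseline, then open the breathing page on a sustained spike.
import time
import webbrowser
import serial  # pyserial

PORT, BASELINE_SECS, SPIKE_FACTOR = "/dev/ttyUSB0", 30, 1.4

ser = serial.Serial(PORT, 9600, timeout=2)
readings, start = [], time.time()
while True:
    line = ser.readline().decode("ascii", "ignore").strip()
    if not line:
        continue
    try:
        bpm = float(line)
    except ValueError:
        continue  # skip garbled serial lines
    if time.time() - start < BASELINE_SECS:
        readings.append(bpm)            # collect the resting baseline first
        continue
    baseline = sum(readings) / len(readings)
    if bpm > SPIKE_FACTOR * baseline:   # significant increase in heart rate
        webbrowser.open("https://example.com/breeve-breathing")  # placeholder URL
        break
```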
## Inspiration After looking over the different hardware options, the dust sensor stood out for its versatility and struck us as exotic. The dust particulates in the air we breathe are an ever-present threat that is too often overlooked, and the importance of raising awareness of this issue became apparent. But retaining interest in such an elusive topic would require an innovative form of expression, which left us stumped. After much deliberation, we realized that many of us have a subconscious attachment to pets and their demanding needs. Applying this concept, Pollute-A-Pet approaches a difficult topic with care and concern. ## What it does Pollute-A-Pet tracks the particulates in the air a person breathes and reflects them in the behavior of adorable online pets. With a variety of pets, your concern may grow seeing the suffering that polluted air causes them, no matter your taste in companions. ## How we built it Beginning in two groups, some of us focused on connecting the dust sensor using Arduino and using Python to connect the Arduino over Bluetooth to Firebase, then reading and updating Firebase from our website using JavaScript. Our other group first created GIFs of our companions in Blender and Adobe before creating the website with HTML and data-controlled behaviors, using JavaScript, that dictated the pets' actions. ## Challenges we ran into The dust sensor was a novel experience for us, and we researched its specifications before any work began. Firebase communication also became stubborn throughout development, as JavaScript was counterintuitive to the object-oriented languages most of us were used to. Not only was animating more tedious than expected, but transparent GIFs are also incredibly difficult to make in Blender. In the final moments, our team also ran into problems uploading our videos, narrowly avoiding disaster. ## Accomplishments that we're proud of All the animations of the virtual pets were hand-drawn over the course of the competition. This was also our first time working with the Feather ESP32 V2, and we are proud of overcoming the initial difficulties we had with the hardware. ## What we learned While we had previous experience with Arduino, we had not previously known how to use a Feather ESP32 V2. We also used skills we had only learned in beginner courses with detailed instructions, so while we may not have "learned" these things during the hackathon, this was the first time we had to apply them in a practical setting. ## What's next for Dustables When it comes to convincing people to use a product such as this, it must be designed to be both visually appealing and not physically cumbersome. This cannot be said for our hardware prototype, which focused completely on functionality. Making it more user-friendly would be a top priority for team Dustables. We also have improvements to functionality that we could make, such as using Wi-Fi instead of Bluetooth for the sensors, which would allow the user greater freedom in using the device. Finally, more pets and different types of sensors would allow for more comprehensive readings and an enhanced user experience.
## Inspiration: Home security systems are very expensive and sometimes do not function as intended. Sometimes something simple may happen, such as forgetting the lights on at home, or there may be something more drastic, such as a large temperature change or even an intruder. Our solution aims to be a cheap alert system that detects three parameters and alerts the user. ## What it does: Our project detects light, temperature, and sound, and sends the necessary message to the user. Light sensors are used to tell the user if they forgot the lights on and hence send an alert. Temperature detection is used to alert the user to drastic changes in temperature, which may mean extreme cold in winter or extreme heat in summer. Sound detection is used as a security system, as it is configured to send alerts to the user once a certain decibel level is reached. Therefore, very loud sounds such as breaking glass, shouting, or even a gunshot may be detected and an alert sent to the user. These messages are all sent to the user's phone. If anything is wrong, there is a circuit with a red LED that lights up whenever there is a situation. If the LED is off, the user gets no messages and everything is okay at home. Our project also associates user-friendly colors with conditions: for example, heat is red and cold is blue. ## How we built it: We used an Arduino as well as a Grove kit for the sensors. These sensors were connected to the Arduino, and we also attached a breadboard that receives an input from the Arduino. We coded the entire project and uploaded it onto the chip. We then used an adapter to transfer the output from the Arduino to our phones and tested it to ensure it worked. ## Challenges we ran into: Unfortunately, there was a lack of hardware at our disposal. We wanted to implement Bluetooth technology to send data to our phones without wires and even tweet weather alerts. However, there were no Bluetooth hardware components, so we were unable to achieve this. Instead, we just used an adapter to connect the Arduino to our phones and show a test output. Testing was also an issue, since we were not able to generate extreme cold and warm weather, so we had to change our code to test these parameters. ## Accomplishments that we're proud of: We had very little experience in using Grove kits and were able to figure out a way to implement our project. We were also able to change our original idea due to the limitation of Bluetooth and WiFi shield components. ## What we learned: We learned how to use and code the sensors in a Grove kit. We also improved our knowledge of Arduino and building circuits. ## What's next for Home Automation and Security: Future improvements and modifications would include using Bluetooth and WiFi to send Twitter alerts to people on the user's contact list. In the future we may also include more components in the circuit, for example a remote button that can contact the police if there is an intruder. We may also install other types of sensors, such as touch sensors, that may be placed on a welcome mat or door handle during long periods away from home.
Code:

```cpp
// NOTE: the angle-bracket header names were lost when this code was pasted;
// <Wire.h> and <math.h> are assumptions based on the Grove LCD and log() usage.
#include <Wire.h>
#include "rgb_lcd.h"
#include <math.h>

rgb_lcd lcd;

float temperature;   // stores temperature
int lightValue;      // stores light value
int soundValue;      // stores sound value
bool errorTemp = false;
bool errorLight = false;
bool errorSound = false;
bool errorTempCold = false;
bool errorTempHot = false;
int lights = 0;
int cold = 0;
int hot = 0;
int intruder = 0;

const int B = 4275;
const int R0 = 100000;
const int pinTempSensor = A0;
const int pinLightSensor = A1;
const int pinSoundSensor = A2;
const int pinLEDRed = 9;
const int pinLEDGreen = 8;

void setup() {
  lcd.begin(16, 2);
  Serial.begin(9600);
}

void loop() {
  temperature = 0;
  temp();                 // function that detects the temperature
  light();                // function that detects light
  sound();                // function that detects sounds
  lightMessages();        // function that checks conditions
  temperatureMessages();  // function that outputs everything to the user
  ok();                   // function that ensures all parameters are correctly calculated and tested
  serialErrors();         // function that checks logic and sends data to output function
}

void light() {
  lightValue = analogRead(pinLightSensor);
}

void sound() {
  soundValue = analogRead(pinSoundSensor);
  if (soundValue > 500) {
    errorSound = true;
  } else {
    errorSound = false;
  }
}

void temp() {
  int a = analogRead(pinTempSensor);
  float R = 1023.0 / ((float)a) - 1.0;
  R = R0 * R;
  temperature = 1.0 / (log(R / R0) / B + 1 / 298.15) - 303.14;  // convert to temperature via datasheet
  delay(100);
}

void blinkLED() {
  // digitalWrite drives the alert LED fully on/off (the pasted version used
  // analogWrite with HIGH/LOW, which barely lights the LED)
  digitalWrite(pinLEDRed, HIGH);
  delay(500);
  digitalWrite(pinLEDRed, LOW);
  delay(500);
}

void greenLED() {
  digitalWrite(pinLEDGreen, HIGH);
}

void screenRed()    { lcd.setRGB(255, 0, 0); }
void screenBlue()   { lcd.setRGB(0, 0, 255); }
void screenNormal() { lcd.setRGB(0, 50, 50); }

void serialErrors() {
  if (errorSound == false) {
    if (errorLight == true) {
      cold = 0; hot = 0; intruder = 0;
      if (lights == 0) {  // each alert is sent to the phone only once
        Serial.println("Important: Lights are on at home!");
        lights++;
      }
    } else if (errorTempCold == true) {
      lights = 0; hot = 0; intruder = 0;
      if (cold == 0) {
        Serial.println("Important: The temperature at home is low!");
        cold++;
      }
    } else if (errorTempHot == true) {
      lights = 0; cold = 0; intruder = 0;
      if (hot == 0) {
        Serial.println("Important: The temperature at home is high!");
        hot++;
      }
    }
  } else {
    lights = 0; cold = 0; hot = 0;
    if (intruder == 0) {
      Serial.println("IMPORTANT: There was a very loud sound at home! Possible intruder.");
      intruder++;
    }
  }
}

void ok() {
  if (errorSound == false) {
    if (errorTemp == false && errorLight == false) {
      lcd.clear();
      digitalWrite(pinLEDGreen, HIGH);
      lcd.setCursor(0, 0);
      lcd.print("Everything is ok");
      lcd.setCursor(1, 1);
      lcd.print("Temp = ");
      lcd.print(temperature);
      lcd.print("C");
      screenNormal();
    }
  }
}

void lightMessages() {
  if (lightValue > 500) {
    lcd.clear();
    lcd.setCursor(0, 0);
    lcd.print("Lights are on!");
    screenRed();
    blinkLED();
    errorLight = true;
  } else {
    errorLight = false;
  }
}

void temperatureMessages() {
  if (errorSound == false) {
    if (temperature < 20) {
      lcd.clear();
      lcd.setCursor(0, 1);
      lcd.print("Extreme Cold!");
      screenBlue();
      blinkLED();
      errorTemp = true;
      errorTempCold = true;
      errorTempHot = false;
    } else if (temperature > 30) {
      lcd.clear();
      lcd.setCursor(0, 1);
      lcd.print("Extreme Heat!");
      screenRed();
      blinkLED();
      errorTemp = true;
      errorTempHot = true;
      errorTempCold = false;
    } else {
      errorTemp = false;
      errorTempHot = false;
      errorTempCold = false;
    }
  } else {
    lcd.clear();
    lcd.setCursor(0, 0);
    lcd.print("LOUD SOUND");
    lcd.setCursor(0, 1);
    lcd.print("DETECTED!");
    screenRed();
    blinkLED();
    delay(5000);
    if (soundValue < 500) {
      errorSound = false;
    } else {
      errorSound = true;
    }
  }
}
```
partial
## Overview According to the WHO, at least 2.2 billion people worldwide have a vision impairment or blindness. Of these, an estimated 1 billion cases could have been prevented or have yet to be addressed. This underscores the vast number of people who lack access to necessary eye care services. Even as developers, our screens have been both our canvas and our cage. We're intimately familiar with the strain they exert on our eyes, a plight shared by millions globally. We need a **CHANGE**. What if vision care could be democratized, made accessible, and seamlessly integrated with cutting-edge technology? Introducing OPTimism. ## Inspiration The very genesis of OPTimism is rooted in empathy. Many in underserved communities lack access to quality eye care, a necessity that most of us take for granted. Coupled with the increasing screen time in today's digital age, the need for effective and accessible solutions becomes even more pressing. Our team has felt this on a personal level, providing the emotional catalyst for OPTimism. We didn't just want to create another app; we aspired to make a tangible difference. ## Core Highlights **Vision Care Chatbot:** Using advanced AI algorithms, our vision chatbot assists users in answering vital eye care questions, offering guidance and support when professional help might not be immediately accessible. **Analytics & Feedback:** Through innovative hardware integrations like posture warnings via a gyroscope and distance tracking with an ultrasonic sensor, users get real-time feedback on their habits, empowering them to make healthier decisions. **Scientifically-Backed Exercises:** Grounded in research, our platform suggests eye exercises designed to alleviate strain, offering a holistic approach to vision care. **Gamified Redemption & Leaderboard System:** Users are not just passive recipients but active participants. They can earn optimism credits, leading to a gamified experience where they can redeem valuable eye care products. This not only incentivizes regular engagement but also underscores the importance of proactive vision care. The donation system, built on Circle, lets users help make these vision care products possible. ## Technical Process Bridging the gap between the technical and the tangible was our biggest challenge. We leaned on technologies such as React, Google Cloud, Flask, Taipy, and more to build a robust frontend and backend, containerized using Docker and Kubernetes and deployed on Netlify. Arduino integration added a layer of real-world interaction, allowing users to receive physical feedback. The vision care chatbot was a product of countless hours spent refining algorithms to ensure accuracy and reliability. ## Tech Stack React, JavaScript, Vite, Tailwind CSS, Ant Design, Babel, NodeJS, Python, Flask, Taipy, GitHub, Docker, Kubernetes, Firebase, Google Cloud, Netlify, Circle, OpenAI **Hardware List:** Arduino, ultrasonic sensor, smart glasses, gyroscope, LEDs, breadboard ## Challenges we ran into * Connecting the live data retrieved from the Arduino to the backend application for manipulating and converting it into appropriate metrics * Circle API key not authorized * Lack of documentation for different hardware components and APIs ## Summary OPTimism isn't just about employing the latest technologies; it's about leveraging them for a genuine cause. We've seamlessly merged various features, from chatbots to hardware integrations, under one cohesive platform. Our aim?
Clear, healthy vision for all, irrespective of socio-economic background. We believe OPTimism is more than just a project. It's a vision, a mission, and a commitment. We will turn that hope into reality, lighting the path to a brighter, clearer future for everyone.
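As a rough illustration of the analytics loop, here is a minimal Flask sketch of how the live Arduino readings (ultrasonic distance, gyroscope tilt) could be turned into feedback warnings; the endpoint name, field names, and thresholds are assumptions, not OPTimism's actual API:

```python
# Receive sensor readings from the Arduino bridge and return habit warnings.
from flask import Flask, jsonify, request

app = Flask(__name__)
SAFE_SCREEN_DISTANCE_CM = 40  # assumed threshold, not a clinical value

@app.post("/api/readings")
def ingest_reading():
    data = request.get_json(force=True)    # e.g. {"distance_cm": 31, "tilt_deg": 22}
    warnings = []
    if data.get("distance_cm", 999) < SAFE_SCREEN_DISTANCE_CM:
        warnings.append("Move back from the screen")   # ultrasonic distance check
    if abs(data.get("tilt_deg", 0)) > 20:
        warnings.append("Straighten your posture")     # gyroscope posture check
    return jsonify({"warnings": warnings})

if __name__ == "__main__":
    app.run(port=5000)
```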
## Inspiration In a world in which we all have the ability to put on a VR headset and see places we've never seen, search for questions in the back of our minds on Google and see knowledge we have never seen before, and send and receive photos we've never seen before, we wanted to provide a way for the visually impaired to also see as they have never seen before. We take for granted our ability to move around freely in the world. This inspired us to enable others more freedom to do the same. We called it "GuideCam" because, like a guide dog, our application is meant to be a companion and a guide to the visually impaired. ## What it does GuideCam provides an easy-to-use interface for the visually impaired to ask questions, either through a braille keyboard on their iPhone or by speaking out loud into a microphone. They can ask questions like "Is there a bottle in front of me?", "How far away is it?", and "Notify me if there is a bottle in front of me", and our application will talk back to them and answer their questions, or notify them when certain objects appear in front of them. ## How we built it We have Python scripts running that continuously take webcam pictures from a laptop every 2 seconds and put them into a bucket. Upon user input like "Is there a bottle in front of me?", either from braille keyboard input on the iPhone or through speech (which is processed into text using Google's Speech API), we take the last picture uploaded to the bucket and use Google's Vision API to determine if there is a bottle in the picture. Distance calculation is done using the following formula: distance = ( (known width of standard object) x (focal length of camera) ) / (width in pixels of object in picture). ## Challenges we ran into Trying to find a way to get the iPhone and a separate laptop to communicate was difficult, as was getting all the separate parts of this working together. We also had to change our ideas of what this app should do many times based on constraints. ## Accomplishments that we're proud of We are proud that we were able to learn to use Google's ML APIs, that we got both keyboard braille and voice input from the user working, and that we provide both object detection AND object distance (for our demo object). We are also proud that we were able to come up with an idea that can help people, and that we were able to work on a project that is important to us because we know that it will help people. ## What we learned We learned to use Google's ML APIs, how to create iPhone applications, how to get an iPhone and laptop to communicate information, and how to collaborate on a big project and split up the work. ## What's next for GuideCam We intend to improve the braille keyboard to include a backspace, as well as making it possible to press multiple keys simultaneously to record one letter.
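The distance formula above fits in a small helper; the calibration numbers here are made up for illustration:

```python
# Pinhole-camera distance estimate. The focal length is first calibrated from
# one reference photo where the object's true distance is known.
def focal_length_px(known_distance_cm, known_width_cm, width_in_pixels):
    return (width_in_pixels * known_distance_cm) / known_width_cm

def distance_cm(known_width_cm, focal_px, width_in_pixels):
    return (known_width_cm * focal_px) / width_in_pixels

f = focal_length_px(50.0, 7.0, 210.0)   # a 7 cm wide bottle spans 210 px at 50 cm
print(distance_cm(7.0, f, 140.0))       # -> 75.0 cm when it appears 140 px wide
```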
## Inspiration All of us have gone through the lengthy and often inefficient process of going to the optometrist and then painstakingly reading through charts to determine visual acuity - what if there were a much better and faster way to determine it? After talking with Dr. Peter Karth of the Stanford School of Medicine, we realized that there was potential to develop a mobile app that can instantly detect visual acuity. ## What it does Instant Acuity allows patients to quickly measure their visual acuity and providers to check up on their patients and schedule appointments if necessary. The visual acuity detection system solves two major problems: first, it allows the user to speak the letters he or she sees rather than manually enter them (which is especially beneficial for the elderly population), and second, it uses facial recognition and subsequent distance calculations to make sure the user is not holding the device either too far away or too close. ## How we built it Instant Acuity was developed using Xcode with Objective-C as the primary programming language. IBM Bluemix was utilized for speech-to-text conversion, while facial recognition (via an Apple API) was utilized to determine whether the user was holding the phone too close or too far away. Design was done within Xcode and with Microsoft PowerPoint. Also, the LogMAR visual acuity scale was utilized to determine the user's vision (the varying letter sizes are based on a mathematical formula we developed from the minimum angle of resolution of a user looking at a particular letter). ## Challenges we ran into Using IBM Bluemix to implement speech-to-text proved to be difficult - we had to go through several iterations to implement the API effectively. Moreover, facial recognition also proved to be a significant challenge (using the front-facing camera on an iPhone). ## Accomplishments that we're proud of We're proud of being able to synthesize many varied components - IBM Bluemix, Apple's facial recognition API, and Xcode's user interface - to create a promising product, all at our very first hackathon! ## What we learned Never stop hacking - where there is a will, there is a way! While working out bugs and logical errors can be frustrating, ultimately there is a solution, and finding it is immensely rewarding. ## What's next for Instant Acuity: A Mobile App for Vision Testing Instant Acuity is a very promising concept, and the prototype demonstrates that it could have a revolutionary impact on vision testing. We hope to do further testing on the app and optimize it for potential use in industry.
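As a sketch of the letter-sizing math, the snippet below assumes the standard convention that an optotype subtends 5 arcminutes of visual angle at its threshold acuity; the display density and numbers are illustrative, not the app's exact formula:

```python
# Convert a logMAR acuity level and viewing distance into a letter height in
# pixels. px_per_mm is device-specific and assumed here.
import math

def letter_height_px(logmar, viewing_distance_mm, px_per_mm):
    mar_arcmin = 10 ** logmar                        # minimum angle of resolution
    angle_rad = math.radians(5 * mar_arcmin / 60)    # whole letter = 5 x MAR
    height_mm = 2 * viewing_distance_mm * math.tan(angle_rad / 2)
    return height_mm * px_per_mm

# A 0.0 logMAR (20/20) letter viewed at 400 mm on a ~326 ppi display (~12.8 px/mm)
print(letter_height_px(0.0, 400, 12.8))  # ~7.4 px
```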
partial
## Inspiration We want to build an educational app for kids to learn how to program. ## What it does It is an app that gives lessons to kids for them to learn Python 2. ## How we built it We built a website, an iOS app, and an Android app. ## Challenges we ran into ## Accomplishments that we're proud of ## What we learned ## What's next for python4kids
## Inspiration Our inspiration for developing this tool comes from the love and passion we have for Dungeons & Dragons (D&D). We recognized that both new players and veterans often face challenges in character-building and tactical combat. Traditional gameplay requires extensive preparation and understanding of rules, which can be daunting. We wanted to create a tool that simplifies this process and enhances the gameplay experience by leveraging AI to create dynamic and challenging combat scenarios. ## What it does Dungeon Tactics is an innovative, AI-driven battle simulator designed to enhance your Dungeons & Dragons gameplay experience. The tool allows players to pit their characters against AI-controlled monsters in a variety of combat scenarios. The AI makes strategic decisions based on D&D rules, providing a realistic and challenging experience. The simulator currently supports moving and taking both unarmed strikes and weapon strikes while adhering to the D&D 5e combat rules. ## How we built it We built Dungeon Tactics using a combination of React for the interactive user interface and OpenAI to drive the AI decision-making for monster actions. The frontend was developed with React, providing a dynamic and user-friendly interface where players can control their characters on a grid-based map. We used the OpenAI API to implement the AI logic, ensuring that monster actions are realistic and adhere to D&D rules. Our development process also involved integrating the D&D 5e SRD API to access game data, such as monster stats and abilities. ## Challenges we ran into One of the main challenges we faced was implementing the complex rules of D&D in a way that the AI could understand and apply during combat. Balancing the AI's difficulty level to ensure it provides a challenge without being unfair was another significant challenge. Additionally, ensuring seamless integration between the frontend (React) and backend (OpenAI API) required meticulous planning and testing. Handling the wide range of possible actions while also making sure that they rendered correctly on the screen was a great challenge. ## Accomplishments that we're proud of We are proud of creating a functional and engaging tool that enhances the D&D gameplay experience. Successfully integrating AI to drive monster actions in a way that feels natural and adheres to the game's rules was a major accomplishment. We are also proud of the user-friendly interface we developed, which makes it easy for players to control their characters and engage with the simulator. Our tool not only aids in character building but also provides a fun and challenging way to practice combat scenarios. For both of us, this was our first time developing a React app on our own, and it showed us how much planning goes into a project before any code is written. ## What we learned Throughout the development of Dungeon Tactics, we learned a great deal about the complexities of AI in gaming, especially in a rule-heavy environment like D&D. We gained insights into balancing AI difficulty and ensuring fair play. We also learned about the importance of seamless integration between different technologies (React and OpenAI) and how to manage state and actions in a dynamic, interactive application. Furthermore, we deepened our understanding of D&D rules and mechanics, which was crucial for developing an authentic gameplay experience.
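A stripped-down sketch of the monster's decision step described in "How we built it" might look like the following; the prompt wording, JSON state format, and model name are assumptions, and a production version would validate the model's reply against the 5e rules:

```python
# Ask the model for the monster's next move and action as JSON.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

state = {  # hypothetical encoding of the battle grid
    "monster": {"name": "Goblin", "hp": 7, "pos": [3, 4], "speed": 30},
    "player": {"name": "Fighter", "hp": 12, "pos": [5, 4]},
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[
        {"role": "system", "content": (
            "You control a D&D 5e monster. Reply with JSON only: "
            '{"move_to": [x, y], "action": "weapon_attack" | "unarmed_strike" | "none"}'
        )},
        {"role": "user", "content": json.dumps(state)},
    ],
)
action = json.loads(response.choices[0].message.content)
print(action)  # e.g. {"move_to": [4, 4], "action": "weapon_attack"}
```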
## What's next for Dungeon Tactics Moving forward, we plan to expand Dungeon Tactics by adding more monsters and supporting custom monsters created by users. We aim to incorporate additional actions such as reactions, bonus actions, and spell casting to further enhance the realism of the simulator and support more classes. Another key area of development will be improving the AI's strategic capabilities, making it even more challenging and enjoyable. We also plan to implement multiplayer support, allowing players to team up and face AI-controlled challenges together. Finally, we will continuously refine the user interface to ensure it remains intuitive and engaging for players of all experience levels.
## Inspiration We wanted to build a sustainable project, which gave us the idea of planting crops on farmland in a way that gives the farmer the maximum profit. The program also accounts for crop rotation, which means that the land gets time to replenish its nutrients and increase the quality of the soil. ## What it does It does many things. It first checks which crops can be grown on a piece of land depending on the weather of the area, the soil, the nutrients in the soil, the amount of precipitation, and much more information that we obtained from the APIs used in the project. It then forms a plan which accounts for the crop rotation process. This helps the land regain its lost nutrients while increasing the profits that the farmer is getting from his or her land. This means that without stopping the process of harvesting, we are regaining the lost nutrients. It also gives the farmer daily updates on the weather in the area so that he or she can be prepared for severe weather. ## How we built it For most of the backend of the program, we used Python. For the front end of the website, we used HTML. To format the website, we used CSS. We also used JavaScript for forms and to connect Python to HTML. We used the Twilio API to send daily messages to the user to help them be ready for severe weather conditions. ## Challenges we ran into The biggest challenge that we faced during the making of this project was connecting the Python code with the HTML code so that the website can display crop rotation patterns after executing the Python backend script. ## Accomplishments that we're proud of While making this, each of us in the group accomplished a lot. This project as a whole was a great learning experience for all of us. We got to know a lot about the different APIs that we used throughout the project. We also accomplished making predictions on which crops can be grown in an area depending on the weather of the area in past years, and on the best crop rotation patterns. On the whole, it was cool to see how the project went from data collection to processing to, finally, presentation. ## What we learned We learned a lot over the course of this hackathon. We learned team management and time management. Moreover, we got hands-on experience in machine learning: we implemented linear regression, random decision tree, and SVM models. Finally, using APIs became second nature to us because of the number of them we had to use to pull data. ## What's next for ECO-HARVEST For now, the data we have is limited to the United States; in the future we plan to expand it to the whole world and also increase our accuracy in predicting which crops can be grown in an area. Using the crops that can be grown in the area, we want to produce better crop rotation models so that the soil regains its lost nutrients faster. We also plan to send better and more informative daily messages to the user in the future.
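The Twilio piece of the pipeline is small; a sketch along these lines, with credentials in environment variables and a placeholder forecast string and phone number, would send the daily alert:

```python
# Send a daily severe-weather alert via SMS with the Twilio REST client.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

forecast = "Severe frost expected tonight; protect seedlings."  # from a weather API
client.messages.create(
    body=f"ECO-HARVEST alert: {forecast}",
    from_=os.environ["TWILIO_FROM_NUMBER"],  # your Twilio number
    to="+15551234567",                       # placeholder farmer number
)
```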
losing
## Inspiration How many clicks does it take to upload a file to Google Drive? TEN CLICKS. How many clicks does it take for PUT? **TWO** **(that's 1/5th the number of clicks)**. ## What it does Like the name, PUT is just as clean and concise. PUT is a storage universe designed for maximum upload efficiency, reliability, and security. Users can simply open our Chrome sidebar extension and drag files into it, or just click on any image and tap "upload". Our AI algorithm analyzes the file content and organizes files into appropriate folders. Users can easily access, share, and manage their files through our dashboard, Chrome extension, or CLI. ## How we built it We used the TUS protocol for secure and reliable file uploads, Cloudflare Workers for AI content analysis and sorting, React and Next.js for the dashboard and Chrome extension, Python for the back end, and Terraform to allow anyone to deploy the workers and S3 bucket used by the app to their own account. ## Challenges we ran into TUS. Let's preface this by saying that one of us spent the first 18 hours of the hackathon on a Golang backend, then had to throw the code away due to a TUS protocol incompatibility. TUS, Cloudflare's AI suite, and Chrome extension development were completely new to us, and we ran into many difficulties implementing and combining these technologies. ## Accomplishments that we're proud of We managed to take 36 hours and craft them into a product that each and every one of us would genuinely use. We actually received 30 downloads of the CLI from people interested in it. ## What's next for PUT If given more time, we would make our platforms more interactive by utilizing AI and faster client-server communication.
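For the curious, the tus 1.0 flow PUT relies on can be sketched with plain `requests`; this is the bare protocol with a placeholder server URL, not the client PUT actually ships:

```python
# Minimal tus 1.0 resumable upload: create the upload, then PATCH chunks.
import os
import requests

TUS_ENDPOINT = "https://uploads.example.com/files/"  # placeholder server
path, chunk_size = "demo.bin", 5 * 1024 * 1024
size = os.path.getsize(path)

# 1. Create the upload; the server answers with its Location URL.
create = requests.post(TUS_ENDPOINT, headers={
    "Tus-Resumable": "1.0.0", "Upload-Length": str(size)})
upload_url = create.headers["Location"]

# 2. PATCH chunks, declaring the current offset each time; this is what lets
#    an interrupted upload resume instead of restarting.
offset = 0
with open(path, "rb") as f:
    while offset < size:
        chunk = f.read(chunk_size)
        resp = requests.patch(upload_url, data=chunk, headers={
            "Tus-Resumable": "1.0.0",
            "Upload-Offset": str(offset),
            "Content-Type": "application/offset+octet-stream"})
        offset = int(resp.headers["Upload-Offset"])
```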
## Inspiration We aim to bridge the communication gap between hearing-impaired individuals and those who don't understand sign language. ## What it does This web app uses the webcam to capture hand gestures, recognizes the corresponding sign language symbols using machine learning models, and displays the result on the screen. ### Features * Real-time hand gesture recognition * Supports standard ASL * Intuitive user interface * Cross-platform compatibility (iOS and Android via web browsers) ## How we built it We use the [Hand Pose Detection Model](https://github.com/tensorflow/tfjs-models/tree/master/hand-pose-detection) and [Fingerpose](https://github.com/andypotato/fingerpose) to detect a hand and its corresponding gesture from the webcam. For the frontend, we use ReactJS with Vite as the build tool and serve our website on Netlify. There is no backend since we embed the models on the client side. ## Challenges we ran into ### Model Choice We originally used a [Google Teachable Machine model](https://teachablemachine.withgoogle.com/train/image) trained on data that contained a lot of noise, such as the person and the background. This meant the predictions were heavily biased toward objects that are not hands. Next, we thought about labeling the images to emphasize the hand, but that led to another bias: some hand gestures were weighted more, so the model tended to predict certain gestures even when we did not pose them. Later, we discovered we could use hand landmarks to recognize only the hand and joint poses to recognize the gesture, which gives much better predictions. ### Streaming In order to make the webcam work on all devices (mobile and desktop) with different specs and screen ratios, we struggled to find a way to enable full screen on all of them. Our first solution was to hard-code the width and height for one device, but that was hard to adjust and limited. During trial and error, another issue arose: the width and height are applied horizontally on mobile devices, so to work on both mobile and desktop, we dynamically check the user's screen ratio. To solve the full-screen issue, we used a progressive web app to capture the device window. ## Accomplishments that we're proud of * Active and accurate hand tracking from webcam streaming * Finding ways to translate different gestures from ASL to English * Being able to use it across mobile and desktop * Intuitive yet functional design ## What we learned * Learned about American Sign Language * Deploying a progressive web app * How a machine learning model takes inputs and makes predictions * How to stream from the webcam into inputs for our model * Variations of machine learning models and how to fine-tune them ## What's next for Signado Next, we plan to add two-handed and motion-based gesture support, since many words in sign language require these two properties. To improve model accuracy, we can also use the latest [handpose](https://blog.tensorflow.org/2021/11/3D-handpose.html) model, which transforms the hand into a 3D mesh. This will add more possibilities and variation to the gestures that can be performed.
## Inspiration Over the past five years, data has not just been a luxury, it has been a need. Each day, hundreds of millions of files are uploaded and shared over the internet, resulting in cloud storage becoming increasingly essential. As students, the need to store and access data at any time from anywhere has become a necessity, and we embarked on this journey to provide free and unlimited storage for everyone. We also needed a robust solution. Who doesn't want unlimited storage? And who doesn't like free stuff? We combined the two to bring you the best. ## What it does By accessing the Dropbox API, we synchronously chunk-upload files to multiple Dropbox accounts using an automated account creation process. Our backend dynamically allocates free space for the user, all without the need for front end interaction. However, due to time constraints and lack of access to Google's full OCR platform, we manually assisted this process. Filled up an account? No problem, we'll move to the next one. ## How we built it Dropbox offers a free tier option to users, where they can upload a limited amount of data. Keeping this in mind, we designed our backend in such a way that we could connect multiple Dropbox accounts for a single user without the need for them to authenticate or create Dropbox accounts. This essentially allows a single user to have virtually unlimited storage: if one account's storage capacity is exceeded, a new account is automatically initialized for them on our platform. Users therefore access their files through our platform, but we essentially run headless, connecting the two together. ## Technical Talk: Using Node in the backend with MongoDB, we essentially created a virtual cloud, which amalgamates multiple Dropbox accounts for each user. This is then relayed to the front end, where it can be interacted with in a user-friendly manner through Materialize.css. ## Challenges I ran into One of the biggest challenges for this project was bypassing the need for a captcha, as well as automating the creation of email and Dropbox accounts. Since there is no API to accomplish this, we took an elementary approach and conceptualized a macro which would click through the sites for us. Using Google's OCR, which we tested through Google Translate's 'Image to Text' feature, we bypassed the Dropbox captcha in nearly every attempt. Once again, due to time constraints, this was not fully implemented, and for the sake of completeness, we manually assisted in the process. ## Accomplishments that we're proud of A functioning model by the end of the first night, and making it presentable by the next. Creating a simple hack to overcome a seemingly complex task. ## What we learned I'm sorry Dropbox, we've always loved you, and will continue to love you. This was purely a learning experience. ## What's next for Dr0pbox * Implementing automated account creation * Breaking down files through bit-by-bit encoding to increase storage efficiency * Complete CRUD operations on our platform * Switching between grid and list view and an even more user-friendly interface
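On the upload side, the official Dropbox Python SDK exposes upload sessions that map directly onto the chunk-uploading described above. A rough single-account sketch follows; the token, file name, and chunk size are placeholders, and the account-rotation logic would live in the backend and is omitted:

```python
# Chunked upload to one Dropbox account via an upload session.
import dropbox

CHUNK = 4 * 1024 * 1024
dbx = dropbox.Dropbox("ACCESS_TOKEN_FOR_CURRENT_ACCOUNT")  # placeholder token

with open("big_file.zip", "rb") as f:
    session = dbx.files_upload_session_start(f.read(CHUNK))
    cursor = dropbox.files.UploadSessionCursor(
        session_id=session.session_id, offset=f.tell())
    commit = dropbox.files.CommitInfo(path="/big_file.zip")
    while True:
        chunk = f.read(CHUNK)
        if len(chunk) < CHUNK:          # last (possibly empty) chunk
            dbx.files_upload_session_finish(chunk, cursor, commit)
            break
        dbx.files_upload_session_append_v2(chunk, cursor)
        cursor.offset = f.tell()        # keep the server-side offset in sync
```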
winning
## Inspiration At Carb0, we're committed to empowering individuals to take control of their carbon footprint and contribute to a more sustainable future. Our inspiration comes from the fact that 72% of CO2 emissions could be reduced by changes in consumer behavior, yet many companies lack the motivation to conduct ESG reports if not required by investors or the government. We believe that establishing consumer-driven ESG can drive companies to be accountable and take action to provide more sustainable products and services. ## What it does We created **a personal carbon tracker** that **incentivizes** customers to adopt low-carbon lifestyles and **democratizes carbon footprint data**, making it easier for everyone to contribute to a sustainable future. Our platform provides information to influence consumers' purchase decisions and provides alternatives to help them make sustainable decisions. This way, we can encourage companies, investors, and the government to take responsibility and be more sustainable. ## How we built it We began by identifying the problem and then went through an intense ideation process to converge on our consumer-driven ESG idea. We defined the user journey and pain points to create a convenient, incentivizing, and user-centric platform. Our reward system easily links to digital payment details and helps track CO2 emissions with data visualization and cashback based on monthly summaries. We also make product carbon footprint data easily accessible and searchable. ## Challenges we ran into Our biggest challenge was integrating the front end and back end and defining scope. We had to make technical assumptions, since an accurate database was not available within the time constraints. ## Accomplishments that we're proud of Despite these challenges, we are proud of our self-sustaining system to establish consumer-driven ESG, our successful integration of front end and back end with a user-friendly interface, and the intense ideation process we went through. ## What we learned During this project, we learned how to rapidly prototype a digital app with limited time and resources, and gained a deeper understanding of ESG, its current challenges, and potential solutions. ## What's next for Carb0 - Empower your carbon journey Our next steps are to conduct user testing and iterations for a higher-fidelity prototype and to enrich our carbon footprint database's coverage and accuracy. We also plan to potentially add Carb0 as an add-on for digital wallets to reach a broader audience and engage more people in a more sustainable lifestyle. Our vision is that **consumer-driven ESG** will incentivize governments, investors, and companies to take more initiative in creating a more sustainable world. Join us on our journey to a sustainable future with Carb0!
## Inspiration Imagine a world where learning is as easy as having a conversation with a friend. Picture a tool that unlocks the treasure trove of educational content on YouTube, making it accessible to everyone, regardless of their background or expertise. This is exactly what our hackathon project brings to life. * Current massive online courses are great resources to bridge the gap in educational inequality. * Frustration and loss of motivation come with the lengthy and tedious search for that 60-second piece of content. * We provide support to students to unlock their potential. ## What it does Think of our platform as your very own favorite personal tutor. Whenever a question arises during your video journey, don't hesitate to hit pause and ask away. Our chatbot is here to assist you, offering answers in plain, easy-to-understand language. Moreover, it can point you to external resources and suggest specific parts of the video for a quick review, along with relevant sections of the accompanying text. So, explore your curiosity with confidence – we've got your back! * Analyzes the entire video content 🤖 Learn with organized structure and high accuracy * Generates concise, easy-to-follow conversations ⏱️ Say goodbye to wasted hours watching long videos * Generates interactive quizzes and personalized questions 📚 Engaging and thought-provoking * Summarizes key takeaways, explanations, and discussions tailored to you 💡 Provides tailored support * Accessible to anyone with an internet connection 🌐 Accessible and convenient ## How we built it Vite + React.js as the front end and Flask as the back end, using Cohere's command-nightly model and similarity ranking. ## Challenges we ran into * **Increased application efficiency by 98%:** Reduced the number of API calls, lowering load time from 8.5 minutes to under 10 seconds. The challenge we ran into was not accounting for the time taken by every API call. Originally, our backend made over 500 calls to Cohere's API to embed text every time a transcript section was initiated, and repeated them whenever a new prompt was made -- each API call took about one second, adding 8.5 minutes in total. By reducing the number of API calls and using efficient practices, we cut the time to under 10 seconds. * **Handling single prompts of over 5000 words:** Scraping longer YouTube transcripts efficiently was complex. We solved it by integrating YouTube APIs and third-party dependencies, enhancing speed and reliability. Uploading multi-prompt conversations with large initial prompts to MongoDB was also challenging. We optimized data transfer, maintaining a smooth user experience. ## Accomplishments that we're proud of Created a practical full-stack application that we will use in our own time. ## What we learned * **Front end:** State management with React, third-party dependencies, UI design. * **Integration:** Scalable and efficient API calls. * **Back end:** MongoDB, Langchain, Flask server, error handling, optimizing time complexity, and using Cohere AI. ## What's next for ChicSplain We envision ChicSplain to be more than just an AI-powered YouTube chatbot; we envision it as a mentor, teacher, and guardian that is no different in functionality and interaction from real-life educators and guides, but available to anyone, anytime, and anywhere.
## Inspiration People are increasingly aware of climate change but lack actionable steps. Everything in life has a carbon cost, but it's difficult to understand, measure, and mitigate. Information about the carbon footprints of products is often inaccessible to the average consumer, and alternatives are time-consuming to research and find. ## What it does With GreenWise, you can link your email or upload receipts to have your purchases analyzed and receive suggestions for products with lower carbon footprints. By tracking your carbon usage, it helps you understand and improve your environmental impact. It provides detailed insights, recommends sustainable alternatives, and facilitates informed choices. ## How we built it We started by building a tool that utilizes computer vision to read information off of a receipt, an API to gather information about the products, and finally the ChatGPT API to categorize each of the products. We also set up an alternative way of gathering information, in which the user forwards digital receipts to a unique email address. Once we finished the process of getting information into storage, we built a web scraper to gather the carbon footprints of thousands of items for sale in American stores, and built a database that contains these, along with an AI-vectorized form of each product's description. Vectorizing the product titles allowed us to judge the linguistic similarity of two products with a quick mathematical operation. We utilized this to make the application compare each product against the database, identifying products that are highly similar but with a reduced carbon output. This web application was built with a Python Flask backend and Bootstrap for the frontend, and we utilize ChromaDB, a vector database that allowed us to efficiently query through vectorized data. ## Accomplishments that we're proud of In 24 hours, we built a fully functional web application that uses real data to provide real, actionable insights that allow users to reduce their carbon footprint. ## What's next for GreenWise We'll be expanding e-receipt integration to support more payment processors, making the app seamless for everyone, and forging partnerships with companies to promote eco-friendly products and services to our consumers. [Join the waitlist for GreenWise!](https://dea15e7b.sibforms.com/serve/MUIFAK0jCI1y3xTZjQJtHyTwScsgr4HDzPffD9ChU5vseLTmKcygfzpBHo9k0w0nmwJUdzVs7lLEamSJw6p1ACs1ShDU0u4BFVHjriKyheBu65k_ruajP85fpkxSqlBW2LqXqlPr24Cr0s3sVzB2yVPzClq3PoTVAhh_V3I28BIZslZRP-piPn0LD8yqMpB6nAsXhuHSOXt8qRQY)
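The ChromaDB lookup at the heart of the alternative-finding step can be sketched in a few lines; the product titles and footprint numbers below are toy data, not the scraped database:

```python
# Store product titles with CO2e metadata, then find similar, greener items.
import chromadb

client = chromadb.Client()
products = client.create_collection("products")
products.add(
    ids=["p1", "p2", "p3"],
    documents=["beef burger patty", "plant-based burger patty", "chicken burger patty"],
    metadatas=[{"co2e_kg": 3.0}, {"co2e_kg": 0.4}, {"co2e_kg": 0.9}],
)

# Query with a receipt line; the default embedding function vectorizes it,
# and nearby vectors are linguistically similar products.
hit = products.query(query_texts=["frozen beef patties"], n_results=3)
for doc, meta in zip(hit["documents"][0], hit["metadatas"][0]):
    print(doc, meta["co2e_kg"])  # keep the lower-footprint alternatives
```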
partial
In the healthcare industry, collecting electronic clinical quality measure (eCQM) data is essential for monitoring and improving patient outcomes. Hospitals typically gather this data to enhance adherence to treatment plans for chronic conditions like hypertension. While patients can easily take blood pressure measurements at home, the subsequent data analysis to determine hypertension risk and medication adjustments can be labor-intensive for healthcare providers. BP Buddy is our innovative healthcare chatbot designed to streamline this process and mitigate medical burnout by reducing the workload of medical professionals. BP Buddy queries patients to gather comprehensive data on various hypertension-related factors, including gender, age, smoking status, blood pressure medication usage, diabetes status, total cholesterol, systolic and diastolic blood pressure, BMI, heart rate, and glucose levels. Leveraging a fully connected neural network (FNN) model, BP Buddy analyzes these inputs to predict whether a patient is at risk of hypertension. If a patient is flagged as at risk, their profile in the hospital system is marked for further review by a medical professional, enabling timely medication adjustments and necessary interventions to ensure optimal patient care. Our model was trained on the Hypertension-risk-model-main.csv dataset from Kaggle, encompassing diverse patient data to enhance predictive accuracy. The neural network architecture features multiple layers, including SiLU and ReLU activation functions, to capture complex patterns in the data. We set the batch size to 4 and the learning rate to 0.0008, optimizing training efficiency and performance. For loss calculation, we employed PyTorch's binary cross-entropy loss function, and the Adam optimizer was utilized for efficient gradient descent. BP Buddy is poised to revolutionize hypertension management by providing an efficient, accurate, and user-friendly solution for both patients and healthcare providers.
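A sketch of the training setup described above: the activations, loss, optimizer, batch size, and learning rate come from the text, while the layer widths and feature ordering are illustrative assumptions:

```python
import torch
import torch.nn as nn

# 11 intake features: gender, age, smoking, BP meds, diabetes, cholesterol,
# systolic BP, diastolic BP, BMI, heart rate, glucose (ordering assumed).
model = nn.Sequential(
    nn.Linear(11, 64),
    nn.SiLU(),
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 1),  # logit for "at risk of hypertension"
)

# Binary cross-entropy, computed on the raw logit for numerical stability.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0008)

def train_epoch(loader):
    """One pass over a DataLoader built with batch_size=4, as in the text."""
    for features, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(features), labels)
        loss.backward()
        optimizer.step()
```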
## Inspiration We wanted to develop a tool that people could use in developing countries where medical access is heavily restricted, and a tool that could help doctors get a better assessment of the patient before they request an appointment. ## What it does The web app asks the user to complete a few tasks and say a few phrases. From these phrases we use the Houndify API to display speech-to-text and determine the meaning of the phrase that the person is saying. Then it displays a medical dashboard with emotion, the pitch, jitter, and shimmer of your voice, and the probability of having diseases like Parkinson's, depression, etc. The Alexa skill asks the user if they are in pain and gives a general statement as to what is most likely the cause of that pain. ## How we built it We built it using Flask so we could have the user do a few tasks and record their voice. Using the Houndify API we were able to convert this speech to text in order to give a visualization to the user. We also used the Houndify API to determine the meaning of what the user is saying. Then we ran this audio through a Python script using Praat to get the pitch, jitter, shimmer and a few other attributes from each audio clip. On the medical diagnostic dashboard we used a custom Python package to determine the emotion of the user. We also display the pitch, jitter and the shimmer which were calculated by our Praat Python script. We also determine the percentage chance of having diseases such as depression, Parkinson's, Alzheimer's, etc. We hosted the medical diagnostic page on Google Cloud. Using Python and PyTorch we created a feed-forward neural net and trained a model to determine, based on the pitch, shimmer, and jitter, the probability that the user had a certain disease. The neural net had a loss of about 0.54 and the accuracy of this model ranges from 60% to 70%. Here is one data set that we used: <https://archive.ics.uci.edu/ml/datasets/Parkinson+Speech+Dataset+with++Multiple+Types+of+Sound+Recordings#> We used Voiceflow to create an Alexa skill that we used to ask more general medical questions and give users a general reply. This Alexa skill will ask the user their name and will ask them what body part they feel pain in. After getting that information the Alexa will respond with the most common reason for pain in that part of the body. ## Challenges we ran into Determining how to detect disease: in order to detect disease we trained our model on a data set that had the pitch, jitter, and shimmer of people's voices if they had a disease like Parkinson's. Working with Alexa: initially we wanted to use Alexa as the voice assistant that people could talk to, however we discovered that there is currently no way to record audio on the Alexa. So we switched the focus of the Alexa to be a general-purpose medical evaluator. ## Accomplishments that we're proud of We are proud to have built an intuitive app with a great UI that has the potential to help people in developing countries and help doctors aid a larger number of patients with greater efficiency. ## What we learned In the past we have worked with TensorFlow to build neural networks and machine learning models, however as we were working with Python we decided to try using PyTorch as it seemed like a much more versatile tool for Python. Working with audio files and manipulating them to get data from them was an interesting challenge that we loved learning from. 
We learned how to program with Alexa and how to customize commands through Voiceflow. ## What's next for VoiceMD Polish the web app so that it is ready for production. Launch the site and market it as a solution to aid in the decision to go see a doctor. Publish the Alexa skill so that people can use it to self-diagnose the pain that they are having.
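The write-up says the voice features came from "Praat in Python"; a common way to do that is the parselmouth binding, so here is a sketch under that assumption (the analysis parameters are the standard Praat defaults, not necessarily VoiceMD's):

```python
import parselmouth
from parselmouth.praat import call

def voice_features(wav_path):
    """Extract the pitch, jitter, and shimmer fed to the diagnostic model."""
    sound = parselmouth.Sound(wav_path)
    pitch = call(sound, "To Pitch", 0.0, 75, 600)  # 75-600 Hz search range
    mean_pitch = call(pitch, "Get mean", 0, 0, "Hertz")
    points = call(sound, "To PointProcess (periodic, cc)", 75, 600)
    jitter = call(points, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
    shimmer = call([sound, points], "Get shimmer (local)",
                   0, 0, 0.0001, 0.02, 1.3, 1.6)
    return mean_pitch, jitter, shimmer
```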
## Inspiration In 2012 in the U.S., infants and newborns made up 73% of hospital stays and 57.9% of hospital costs. This adds up to $21,654.6 million dollars. As a group of students eager to make a change in the healthcare industry utilizing machine learning software, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years, which further solidified our mission to tackle this problem at its core. ## What it does Our software uses a website with user authentication to collect data about an infant. This data considers factors such as temperature, time of last meal, fluid intake, etc. This data is then pushed onto a MySQL server and is fetched by a remote device using a Python script. After loading the data onto a local machine, it is passed into a linear regression machine learning model which outputs the probability of the infant requiring medical attention. Analysis results from the ML model are passed back into the website where they are displayed through graphs and other means of data visualization. This created dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model result. This iterative process trains the model over time. This process looks to ease the stress on parents and ensure those who seriously need medical attention are the ones receiving it. Alongside optimizing the procedure, the product also decreases hospital costs, thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user. Using a hyper-secure combination of the user's data, we gave each patient a way to receive the status of their infant's evaluation from our AI and doctor verification. ## Challenges we ran into At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it was already done. We were challenged to think of something new with the same amount of complexity. As first-year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered, and through the help of mentors and peers we were able to make a fully functional product. As a team, we were able to pick up on ML concepts and databasing at an accelerated pace. We were challenged as students, upcoming engineers, and as people. Our ability to push through and deliver results was shown over the course of this hackathon. ## Accomplishments that we're proud of We're proud of our functional database that can be accessed from a remote device. The ML algorithm, Python script, and website were all commendable achievements for us. These components on their own are fairly useless; our biggest accomplishment was interfacing all of these with one another and creating an overall user experience that delivers in performance and results. Using SHA-256 we securely passed each user a unique and near-impossible-to-reverse hash to allow them to check the status of their evaluation. ## What we learned We learnt about important concepts in neural networks using TensorFlow and the inner workings of the HTML code in a website. We also learnt how to set up a server and configure it for remote access. We learned a lot about how cyber-security plays a crucial role in the information technology industry. 
This opportunity allowed us to connect on a more personal level with the users around us, being able to create a more reliable and user-friendly interface. ## What's next for InfantXpert We're looking to develop a mobile application on iOS and Android for this app. We'd like to provide this as a free service so everyone can access the application regardless of their financial status.
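A minimal sketch of the SHA-256 status hash described above; the field names and endpoint are illustrative, not InfantXpert's actual schema:

```python
import hashlib

def status_token(parent_email, infant_id, record_timestamp):
    """Derive the near-impossible-to-reverse lookup hash from a
    combination of the user's data (field names are illustrative)."""
    payload = f"{parent_email}|{infant_id}|{record_timestamp}".encode()
    return hashlib.sha256(payload).hexdigest()

# A parent polls their evaluation status with this token instead of raw data:
# GET /status/<token>  ->  {"ml_risk": 0.12, "doctor_verified": true}
```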
losing
## Inspiration The internet is filled with user-generated content, and it has become increasingly difficult to manage and moderate all of the text that people are producing on a platform. Large companies like Facebook, Instagram, and Reddit leverage their massive scale and abundance of resources to aid in their moderation efforts. Unfortunately for small to medium-sized businesses, it is difficult to monitor all the user-generated content being posted on their websites. Every company wants engagement from their customers or audience, but they do not want bad or offensive content to ruin their image or the experience for other visitors. However, hiring someone to moderate or building an in-house program is too difficult for these smaller businesses to manage. Content moderation is a heavily nuanced and complex problem. It’s unreasonable for every company to implement its own solution. A robust plug-and-play solution is necessary that adapts to the needs of each specific application. ## What it does That is where Quarantine comes in. Quarantine acts as an intermediary between an app’s client and server, scanning the bodies of incoming requests and “quarantining” those that are flagged. Flagging is performed automatically, using both pretrained content moderation models (from Azure and Moderation API) as well as an in-house machine learning model that adapts to specifically meet the needs of the application’s particular content. Once a piece of content is flagged, it appears in a web dashboard, where a moderator can either allow or block it. The moderator’s labels are continuously used to fine-tune the in-house model. Together with this in-house model and the pre-trained models, a robust meta-model is formed. ## How we built it Initially, we built an aggregate program that takes in a string and runs it through the Azure moderation and Moderation API programs. After combining the results, we compare them with our machine learning model to make sure no other potentially harmful posts make it through our identification process. Then, that data is stored in our database. We built a clean, easy-to-use dashboard for the grader using React and Material UI. It pulls the flagged items from the database and then displays them on the dashboard. Once a decision is made by the person, that is sent back to the database and the case is resolved. We incorporated this entire pipeline into a REST API where our customers can pass their input through our programs and then access the flagged items on our website. Users of our service don’t have to change their code; they simply append our URL to their own API endpoints. Requests that aren’t flagged are simply instantly forwarded along. ## Challenges we ran into Developing the in-house machine learning model and getting it to run on the cloud proved to be a challenge since the parameters and size of the in-house model are in constant flux. ## Accomplishments that we're proud of We were able to make a service that is super easy to use. A company can add Quarantine with less than one line of code. We're also proud of our adaptive content model that constantly updates based on the latest content blocked by moderators. ## What we learned We learned how to successfully integrate an API with a machine learning model, database, and front-end. We had learned each of these skills individually before, but we had to figure out how to combine them all. ## What's next for Quarantine We have plans to take Quarantine even further by adding customization to how items are flagged and taken care of. 
Spam is commonly routed through certain locations, so we could do some analysis on the regions harmful user-generated content is coming from. We are also keen on monitoring the stream of activity of individual users as well as tracking requests in relation to each other (to detect mass spamming). Furthermore, we are curious about adding the surrounding context of the content since it may be helpful in the grader’s decisions. We're also hoping to leverage the data we accumulate from content moderators to help monitor content across apps using shared labeled data behind the scenes. This would make Quarantine more valuable to companies as it monitors more content.
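A minimal sketch of the intermediary described above, as a Flask proxy; the upstream URL and the two stubs stand in for Quarantine's real meta-model and database:

```python
import requests
from flask import Flask, Response, request

app = Flask(__name__)
UPSTREAM = "https://customer-app.example.com"  # placeholder: the customer's real API

def is_flagged(text: str) -> bool:
    """Stub for the meta-model (Azure + Moderation API + in-house classifier)."""
    return "offensive" in text.lower()

def store_for_review(path: str, text: str) -> None:
    """Stub: persist to the database that feeds the moderator dashboard."""
    print(f"quarantined {path}: {text[:60]!r}")

@app.route("/<path:path>", methods=["POST"])
def moderate_and_forward(path):
    body = request.get_data(as_text=True)
    if is_flagged(body):
        store_for_review(path, body)
        return Response("quarantined pending review", status=202)
    # Requests that aren't flagged are instantly forwarded along, unchanged.
    upstream = requests.post(f"{UPSTREAM}/{path}", data=body,
                             headers={"Content-Type": request.content_type})
    return Response(upstream.content, status=upstream.status_code)
```

Because the proxy only prefixes the customer's existing endpoints, integration really is a one-line URL change on the client.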
## Inspiration We admired the convenience Honey provides for finding coupon codes. We wanted to apply the same concept toward making more sustainable purchases online. ## What it does Recommends sustainable and local business alternatives when shopping online. ## How we built it The front-end was built with React.js and Bootstrap. The back-end was built with Python, Flask and CockroachDB. ## Challenges we ran into Difficulties setting up the environment across the team, especially with cross-platform development in the back-end. Extracting the current URL from a webpage was also challenging. ## Accomplishments that we're proud of Creating a working product! A successful end-to-end data pipeline. ## What we learned We learned how to implement a Chrome extension. We also learned how to deploy to Heroku, and how to set up and use a database in CockroachDB. ## What's next for Conscious Consumer First, we want to make it easier to add local businesses. We want to continue improving the relational algorithm that takes an item on a website and relates it to a similar local business in the user's area. Finally, we want to replace the ESG rating scraping with a corporate account with rating agencies so we can query ESG data more easily.
## Inspiration In a world where a tweet out of context can cost you your career, it is increasingly important to be in the right, but this rigidity alienates a productive and proud group of people in the world--the impulsive. Politically Correct is a solution for those who would risk a slap for a laugh and who would make light of a dark situation. The question of whether we have stepped too far over the line often comes into our minds, sparking the endless internal debate of "Should I?" or "Should I not?" Politically Correct, leveraging both artificial and natural intelligence, gives its users the opportunity to get safe and effective feedback to end these constant internal dialogues. ## What it does Through a carefully integrated messaging backend, this application utilizes Magnet's API to send the text that the user wants to verify to a randomly selected group of users. These users express their opinion of the anonymous user's statement, rating it as acceptable or unacceptable. This application enhances the user's experience with a seamless graphical interface: a "Feed" giving the user messages from others to judge and "My Questions" allowing users to receive feedback. The machine learning component, implemented and ready to be rolled out in "My Questions", will use an Azure-based logistic regression to automatically classify text as politically correct or incorrect. ## How I built it Blood, sweat, tears, and Red Bull were the fuel that ignited Politically Correct. Many thanks to the kind folks from Magnet and Azure (Microsoft) for helping us early in the morning or late at night. For the build we utilized the Magnet SDK to enable easy in-app message sending and receiving between users and a random sample of users. With the messages, we added and triggered a message 'send-event' based on the click of a judgement button or an ask button. When a message was received we sorted the message (either a message to be judged or a message that is a judgement). To ensure that all judgement messages corresponded to the proper question messages we used special hash ids and stored these ids in serialized data. We updated the Feed and the MyQuestions tab on every message receive. For Azure we used logistic regression and a looooooooonnnnnnnnggggggg list of offensive and not-offensive phrases. Then, after training on the set to create a model, we set up a web API that will be called by Politically Correct to get an initial sentiment analysis of the message. ## Challenges We ran into Aside from the multiple attempts of putting foot in mouth, the biggest challenges came from both platforms: **Azure**: *Perfectionism* While developing a workflow for the app the question of "How do I accurately predict the abuse in a statement?" often arose. As this challenge probably provokes similar doubts from Ph.Ds, we would like to point to perfectionism as the biggest challenge with Azure. **Magnet:** *Impatience* Ever the victims, we like to blame companies for putting a lot of words in their tutorials because it makes it hard for us to skim through (we can't be bothered with learning, we want to DO!!). The tutorials and documentation provided the support and gave all of us the ability to learn to sit down and iterate through a puzzle until we understood the problem. It was difficult to figure out the format in which we would communicate the judgement of the random sample of users. 
## Accomplishments that I'm proud of We are very proud of the fact that we have a fully integrated Magnet messaging API, and the perfect implementation of the database backend. ## What I learned Aside from the two virtues of "good enough" and "patience", we learned how to work together, how to not work together, and how to have fun (in a way that sleep deprivation can allow). In the context of technical expertise (which is what everyone is going to be plugging right here), we gained a greater depth of knowledge on the Magnet SDK, and how to create a workflow and API on Azure. ## What's next for Politically Correct The future is always amazing, and the future for Politically Correct is better (believe it or not). The implementation for Politically Correct enjoys the partial integration of two amazing technologies, Azure and Magnet, but the full assimilation (we are talking Borg level) would result in the fulfillment of two goals: 1) Dynamically train on the offensiveness of specific language by accounting for people's responses to the message. 2) Allow integrations with various multimedia sites (i.e. Facebook and Twitter) to include an automatic submission/decline feature when there is a consensus on the statement.
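The real experiment lived in Azure ML Studio; as a local stand-in, the same shape of model (logistic regression over the labeled phrase list) looks roughly like this scikit-learn sketch, with the phrases and labels as placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in for the Azure training set: the long labeled phrase list.
phrases = ["placeholder offensive phrase", "placeholder harmless phrase"]
labels = [1, 0]  # 1 = not politically correct

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(phrases, labels)

def initial_sentiment(message):
    """Probability the message is offensive, before the crowd weighs in."""
    return clf.predict_proba([message])[0][1]
```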
winning
💡 ## Inspiration 49 percent of women reported feeling unsafe walking alone after nightfall, according to the Office for National Statistics (ONS). In light of recent sexual assault and harassment incidents in the London, Ontario and Western community, women now feel unsafe travelling alone more than ever. Light My Way helps women navigate their travel through the safest and most well-lit path. Women should feel safe walking home from school, going out to exercise, or going to new locations, and taking routes with well-lit areas is an important precaution to ensure safe travel. It is essential to always be aware of your surroundings and take safety precautions no matter where and when you walk alone. 🔎 ## What it does Light My Way visualizes data on London, Ontario’s street lighting and recent nearby crimes in order to calculate the safest path for the user to take. Upon opening the app, the user can access “Maps” and search up their destination or drop a pin on a location. The app displays the safest route available and prompts the user to “Send Location”, which sends the path that the user is taking to three contacts via messages. The user can then click on the Google Maps button in the lower corner, which switches over to the Google Maps app to navigate the given path. In the “Alarm” tab, the user has access to emergency alert sounds that the user can use when in danger; upon clicking, the sounds play at a loud volume to alert nearby people that help is needed. 🔨 ## How we built it React, JavaScript, and Android Studio were used to make the app. React Native Maps and Directions were also used to allow user navigation through Google Cloud APIs. GeoJSON files of street-lighting data from the open data website for the City of London were imported to visualize street lights on the map. Figma was used for designing the UX/UI. 🥇 ## Challenges we ran into We ran into a lot of trouble visualizing such a large amount of data from the exported GeoJSON street lights. We overcame that by learning about useful mapping functions in React that made marking the locations easier. ⚠️ ## Accomplishments that we're proud of We are proud of making an app that can potentially help women be safer walking alone. It is our first time using and learning React, as well as using Google Maps, so we are proud of our unique implementation of our app using real data from the City of London. It was also our first time doing UX/UI on Figma, and we are pleased with the results and visuals of our project. 🧠 ## What we learned We learned how to use React, how to implement Google Cloud APIs, and how to import GeoJSON files into our data visualization. Through our research, we also became more aware of the issue that women face daily in feeling unsafe walking alone. 💭 ## What's next for Light My Way We hope to expand the app to include more data on crimes, as well as expand to cities surrounding London. We want to continue developing additional safety features in the app, as well as a chatting feature with the close contacts of the user.
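The write-up does not spell out the routing math, but one illustrative way to formalize "safest, most well-lit path" is to weight each street segment by its lighting and crime data before running a shortest-path search; a hedged sketch with networkx, where all attribute names are assumptions:

```python
import networkx as nx

def safest_route(street_graph, start, destination):
    """Weight each street segment by how poorly lit it is, plus nearby
    crime counts, then take the cheapest path under that weighting."""
    for u, v, data in street_graph.edges(data=True):
        darkness = 1.0 / (1 + data.get("streetlights", 0))
        data["risk"] = data.get("length", 1.0) * (
            darkness + data.get("recent_crimes", 0)
        )
    return nx.shortest_path(street_graph, start, destination, weight="risk")
```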
## Inspiration The only thing worse than no WiFi is slow WiFi. Many of us have experienced the frustrations of terrible internet connections. We have too, so we set out to create a tool to help users find the best place around to connect. ## What it does Our app runs in the background (completely quietly) and maps out the WiFi landscape of the world. That information is sent to a central server and combined with location and WiFi data from all users of the app. The server then processes the data and generates heatmaps of WiFi signal strength to send back to the end user. Because of our architecture these heatmaps are real time, updating dynamically as the WiFi strength changes. ## How we built it We split up the work into three parts: mobile, cloud, and visualization, and had each member of our team work on a part. For the mobile component, we quickly built an MVP iOS app that could collect and push data to the server and iteratively improved our locationing methodology. For the cloud, we set up a Firebase Realtime Database (NoSQL) to allow for large amounts of data throughput. For the visualization, we took the points we received and used gaussian kernel density estimation to generate interpretable heatmaps. ## Challenges we ran into Engineering an algorithm to determine the location of the client was significantly more difficult than expected. Initially, we wanted to use accelerometer data, and use GPS as well to calibrate the data, but excessive noise in the resulting data prevented us from using it effectively and from proceeding with this approach. We ran into even more issues when we used a device with less accurate sensors like an Android phone. ## Accomplishments that we're proud of We are particularly proud of getting accurate paths travelled from the phones. We initially tried to use double integrator dynamics on top of oriented accelerometer readings, correcting for errors with GPS. However, we quickly realized that without prohibitively expensive filtering, the data from the accelerometer was useless and that GPS did not function well indoors due to the walls affecting the time-of-flight measurements. Instead, we used a built-in pedometer framework to estimate distance travelled (this used a lot of advanced on-device signal processing) and combined this with the average heading (calculated using a magnetometer) to get meter-level accurate distances. ## What we learned Locationing is hard! Especially indoors or over short distances. Firebase’s Realtime Database was extremely easy to use and very performant. Distributing the data processing between the server and client is a balance worth playing with. ## What's next for Hotspot Next, we’d like to expand our work on the iOS side and create a sister application for Android (currently in the works). We’d also like to overlay our heatmap on Google Maps. There are also many interesting things you can do with a WiFi heatmap. Given some user settings, we could automatically switch from WiFi to data when the WiFi signal strength is about to get too poor. We could also use this app to find optimal placements for routers. Finally, we could use the application in disaster scenarios to compute on the fly which areas still have internet access, or to produce approximate population heatmaps.
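A sketch of the heatmap step described above, using SciPy's gaussian KDE with samples weighted by signal strength; the (lat, lon, rssi) layout is an assumption:

```python
import numpy as np
from scipy.stats import gaussian_kde

def wifi_heatmap(samples, grid_size=100):
    """samples: rows of (lat, lon, rssi) crowd-sourced from the app.
    Returns a grid of signal-strength density for rendering as a heatmap."""
    pts = np.asarray(samples)
    # Weight each location by its signal strength (shifted to be positive).
    kde = gaussian_kde(pts[:, :2].T, weights=pts[:, 2] - pts[:, 2].min() + 1)
    xs = np.linspace(pts[:, 0].min(), pts[:, 0].max(), grid_size)
    ys = np.linspace(pts[:, 1].min(), pts[:, 1].max(), grid_size)
    xx, yy = np.meshgrid(xs, ys)
    return kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(grid_size, grid_size)
```

Re-running this on each batch of fresh samples is what keeps the heatmaps dynamic as signal strength changes.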
## Inspiration In the “new normal” that COVID-19 has caused us to adapt to, our group found that a common challenge we faced was deciding where it was safe to go to complete trivial daily tasks, such as grocery shopping or eating out on occasion. We were inspired to create a mobile app using a database created completely by its users - a program where anyone could rate how well these “hubs” were following COVID-19 safety protocol. ## What it does Our app allows users to search a “hub” using a Google Maps API and write a review by rating prompted questions on a scale of 1 to 5 regarding how well the location enforces public health guidelines. Future visitors can then see these reviews and add their own, contributing to a collective safety rating. ## How I built it We collaborated using GitHub and Android Studio, and incorporated both a Google Maps API and an integrated Firebase API. ## Challenges I ran into Our group unfortunately faced a number of unprecedented challenges, including losing a team member mid-hack due to an emergency situation, working across 3 different timezones, and additional technical difficulties. However, we pushed through and came up with a final product we are all very passionate about and proud of! ## Accomplishments that I'm proud of We are proud of how well we collaborated through adversity, having never met each other in person before. We were able to tackle a prevalent social issue and come up with a plausible solution that could help bring our communities together, worldwide, similar to how our diverse team was brought together through this opportunity. ## What I learned Our team brought a wide range of different skill sets to the table, and we were able to learn lots from each other because of our various experiences. From a technical perspective, we improved our Java and Android development fluency. From a team perspective, we improved our ability to compromise and adapt to unforeseen situations. For 3 of us, this was our first ever hackathon, and we feel very lucky to have found such kind and patient teammates that we could learn a lot from. The amount of knowledge they shared in the past 24 hours is insane. ## What's next for SafeHubs Our next steps for SafeHubs include personalizing the user experience through creating profiles and integrating it into our community. We want SafeHubs to be a new way of staying connected (virtually) to look out for each other and keep our neighbours safe during the pandemic.
winning
Currently, about 600,000 people in the United States have some form of hearing impairment. Through personal experiences, we understand the guidance necessary to communicate with a person through ASL. Our software eliminates this and promotes a more connected community - one with a lower barrier entry for sign language users. Our web-based project detects signs using the live feed from the camera and features like autocorrect and autocomplete reduce the communication time so that the focus is more on communication rather than the modes. Furthermore, the Learn feature enables users to explore and improve their sign language skills in a fun and engaging way. Because of limited time and computing power, we chose to train an ML model on ASL, one of the most popular sign languages - but the extrapolation to other sign languages is easily achievable. With an extrapolated model, this could be a huge step towards bridging the chasm between the worlds of sign and spoken languages.
## Inspiration Over **15% of American adults**, more than **37 million** people, are either **deaf** or have trouble hearing according to the National Institutes of Health. One in eight people have hearing loss in both ears, and not being able to hear or freely express your thoughts to the rest of the world can put deaf people in isolation. However, only an estimated 250,000 to 500,000 people in America are said to know ASL. We strongly believe that no one's disability should hold them back from expressing themself to the world, and so we decided to build Sign Sync, **an end-to-end, real-time communication app**, to **bridge the language barrier** between a **deaf** and a **non-deaf** person. Using Natural Language Processing to analyze spoken text and Computer Vision models to translate sign language to English, and vice versa, our app brings us closer to a more inclusive and understanding world. ## What it does Our app connects a deaf person, who speaks American Sign Language into their device's camera, to a non-deaf person, who then listens through a text-to-speech output. The non-deaf person can respond by recording their voice and having their sentences translated directly into sign language visuals for the deaf person to see and understand. After seeing the sign language visuals, the deaf person can respond to the camera to continue the conversation. We believe real-time communication is the key to having a fluid conversation, and thus we use automatic speech-to-text and text-to-speech translations. Our app is a web app designed for desktop and mobile devices for instant communication, and we use a clean and easy-to-read interface that ensures a deaf person can follow along without missing out on any parts of the conversation in the chat box. ## How we built it For our project, precision and user-friendliness were at the forefront of our considerations. We were determined to achieve two critical objectives: 1. Precision in Real-Time Object Detection: Our foremost goal was to develop an exceptionally accurate model capable of real-time object detection. We understood the urgency of efficient item recognition and the pivotal role it played in our image detection model. 2. Seamless Website Navigation: Equally essential was ensuring that our website offered a seamless and intuitive user experience. We prioritized designing an interface that anyone could effortlessly navigate, eliminating any potential obstacles for our users. * Frontend Development with Vue.js: To rapidly prototype a user interface that seamlessly adapts to both desktop and mobile devices, we turned to Vue.js. Its flexibility and speed in UI development were instrumental in shaping our user experience. * Backend Powered by Flask: For the robust foundation of our API and backend framework, Flask was our framework of choice. It provided the means to create endpoints that our frontend leverages to retrieve essential data. * Speech-to-Text Transformation: To enable the transformation of spoken language into text, we integrated the browser's webkitSpeechRecognition API. This technology forms the backbone of our speech recognition system, facilitating communication with our app. * NLTK for Language Preprocessing: Recognizing that sign language possesses distinct grammar, punctuation, and syntax compared to spoken English, we turned to the NLTK library. This aided us in preprocessing spoken sentences, ensuring they were converted into a format comprehensible by sign language users. 
* Translating Hand Motions to Text: A pivotal aspect of our project involved translating the intricate hand and arm movements of sign language into written text. To accomplish this, we employed a MobileNetV2 convolutional neural network. Trained meticulously to identify individual characters using the device's camera, our model achieves an impressive accuracy rate of 97%. It proficiently classifies video stream frames into one of the 26 letters of the sign language alphabet or one of the three punctuation marks used in sign language. The result is the coherent output of multiple characters, skillfully pieced together to form complete sentences. ## Challenges we ran into Since we used multiple AI models, it was tough for us to integrate them seamlessly with our Vue frontend. Since we are also using the webcam through the website, it was a massive challenge to seamlessly use video footage, run real-time object detection and classification on it, and show the results on the webpage simultaneously. We also had to find as many open-source datasets for ASL as possible, which was definitely a challenge, since with a short budget and time we could not get all the words in ASL, and thus had to resort to spelling words out letter by letter. We also had trouble figuring out how to do real-time computer vision on a stream of ASL hand gestures. ## Accomplishments that we're proud of We are really proud to be working on a project that can have a profound impact on the lives of deaf individuals and contribute to greater accessibility and inclusivity. Some accomplishments that we are proud of are: * Accessibility and Inclusivity: Our app is a significant step towards improving accessibility for the deaf community. * Innovative Technology: Developing a system that seamlessly translates sign language involves cutting-edge technologies such as computer vision, natural language processing, and speech recognition. Mastering these technologies and making them work harmoniously in our app is a major achievement. * User-Centered Design: Crafting an app that's user-friendly and intuitive for both deaf and hearing users has been a priority. * Speech Recognition: Our success in implementing speech recognition technology is a source of pride. * Multiple AI Models: We also loved merging natural language processing and computer vision in the same application. ## What we learned We learned a lot about how accessibility works for individuals from the deaf community. Our research led us to a lot of new information, and we found ways to include it in our project. We also learned a lot about Natural Language Processing, Computer Vision, and CNNs. We learned new technologies this weekend. As a team of individuals with different skillsets, we were also able to collaborate and learn to focus on our individual strengths while working on a project. ## What's next? We have a ton of ideas planned for Sign Sync next! * Translate between languages other than English * Translate between other sign languages, not just ASL * Native mobile app with no internet access required for more seamless usage * Usage of more sophisticated datasets that can recognize words and not just letters * Use video to demonstrate the sign language component, instead of static images
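A plausible shape for the classifier described above, sketched in PyTorch/torchvision (the write-up doesn't name the framework); only the MobileNetV2 backbone and the 29 classes come from the text:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 29  # 26 letters + 3 sign-language punctuation marks, per the text

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False  # keep the pretrained backbone frozen
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

@torch.no_grad()
def predict_character(frame_batch):
    """Classify preprocessed webcam frames of shape (N, 3, 224, 224)."""
    model.eval()
    return model(frame_batch).argmax(dim=1)
```

Transfer learning from ImageNet is a common way to reach high accuracy on a small hand-sign dataset within hackathon time and compute budgets.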
## Inspiration You see a **TON** of digital billboards at NYC Times Square. The problem is that a lot of these ads are **irrelevant** to many people. Toyota ads here, Dunkin' Donuts ads there; **it doesn't really make sense**. ## What it does I built an interactive billboard that does more refined and targeted advertising and storytelling; it displays different ads **based on who you are** ~~(NSA 2.0?)~~ The billboard is equipped with a **camera**, which periodically samples the audience in front of it. Then, it passes the image to a series of **computer vision** algorithms (thank you, *Microsoft Cognitive Services*), which extract several characteristics of the viewer. In this prototype, the billboard analyzes the viewer's: * **Dominant emotion** (from facial expression) * **Age** * **Gender** * **Eye-sight (detects glasses)** * **Facial hair** (just so that it can remind you that you need a shave) * **Number of people** And considers all of these factors to present targeted ads. **As a bonus, the billboard saves energy by dimming the screen when there's nobody in front of the billboard! (go green!)** ## How I built it Here is what happens step-by-step: 1. Using **OpenCV**, the billboard takes an image of the viewer (**Python** program) 2. The billboard passes the image to two separate services (**Microsoft Face API & Microsoft Emotion API**) and gets the result 3. The billboard analyzes the result and decides which ads to serve (**Python** program) 4. Finalized ads are sent to the billboard front-end via **WebSocket** 5. Front-end contents are served from a local web server (**Node.js** server built with the **Express.js framework** and **Pug** as the front-end template engine) 6. Repeat ## Challenges I ran into * Time constraint (I actually had a huge project due Saturday at midnight (my fault), so I only **had about 9 hours to build** this. Also, I built this by myself without teammates) * Putting many pieces of technology together, and ensuring consistency and robustness. ## Accomplishments that I'm proud of * I didn't think I'd be able to finish! It was my first solo hackathon, and it was much harder to stay motivated without teammates. ## What's next for Interactive Time Square * This prototype was built with an off-the-shelf computer vision service from Microsoft, which limits the number of features I can track. Training a **custom convolutional neural network** would let me track other relevant visual features (dominant color, which could let me infer the viewers' race - then along with the location of the billboard and pre-knowledge of the demographics distribution, **maybe I can infer the language spoken by the audience, then automatically serve ads with translated content**) - ~~I know this sounds a bit controversial though. I hope this doesn't count as racial profiling...~~
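A sketch of the capture-and-analyze loop described above; the endpoint URL is a placeholder, and this is written against the later merged Face API detect endpoint rather than the two separate 2016-era Face and Emotion services:

```python
import cv2
import requests

FACE_URL = "https://<region>.api.cognitive.microsoft.com/face/v1.0/detect"  # placeholder

def sample_audience(api_key):
    """Grab one frame from the billboard camera and ask the Face API about it."""
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None  # camera error; treat as "nobody there" and dim the screen
    ok, jpeg = cv2.imencode(".jpg", frame)
    resp = requests.post(
        FACE_URL,
        params={"returnFaceAttributes": "age,gender,glasses,facialHair,emotion"},
        headers={"Ocp-Apim-Subscription-Key": api_key,
                 "Content-Type": "application/octet-stream"},
        data=jpeg.tobytes(),
    )
    return resp.json()  # one entry per detected face -> pick the ads to serve
```

An empty list back from the service doubles as the "nobody in front of the billboard" signal that triggers the energy-saving dim.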
partial
## Inspiration There are many existing Chrome extensions on the marketplace that override the new tab page, ranging from pages of beautiful backgrounds to productivity tools. Since the new tab page is one of the most frequented web pages you see on a daily basis, we thought it would be a great platform to deliver relevant news in an easily digestible and unobtrusive manner. ## What it does News Cloud is a Chrome extension that aggregates text from a variety of news sites (NPR, CNN, BBC, Ars Technica, Reddit, etc.) and overrides the new tab page to display shared trending phrases based on word frequency. We take these trending phrases and display them in a word cloud comprised of hyperlinks sized according to their occurrence. The hyperlinks redirect to a Google News search of the keyword or phrase. ## How we built it The extension was built concurrently in two parts. The code dealing with news aggregation and text processing was written entirely in Python. The script scrapes raw text from various sources and puts it through a three-step process to determine the final set of data to draw into a word cloud. Step one: upon receiving the raw text, spaCy is used to filter out comparatively meaningless parts of speech such as conjunctions and prepositions, as well as common words such as “man” or “best”. Step two: the word itself and a bigram created from [prev word] + “ “ + [curr word] are added and updated in the frequency chart. Step three: after all sources have been scraped, duplicates & similar keywords are removed, and around the top 30 key phrases & their frequencies are passed on. Meanwhile, the website and word cloud generation scripts were written in HTML/CSS and JavaScript. The word cloud script takes in the result of the Python script and sizes the phrases relative to their word frequency. The input list of phrases is first sorted in descending order based on word frequency and then normalized to text sizes appropriate for the browser window. The word cloud is then generated by placing each phrase link onto the page in a spiral path beginning in the middle of the page. Every time a link attempts to be placed down, a simple object collision function checks for overlap and adjusts the position of the new phrase. To allow these two scripting languages to interact with one another in a single web app, we utilized the Flask framework. ## Challenges we ran into One of the main issues that we faced was figuring out how to utilize Python and JavaScript in one product. There is no way to directly run a Python script on a traditional website. Through our research, we determined that the Flask framework was the optimal solution. Having never even heard of the framework before, we went through many tutorials and debugging sessions, and in the end, we were able to effectively utilize it to complete our project. Another issue that we faced was determining what the word cloud should be comprised of. We originally planned on having only one-word phrases, but we quickly realized that they were simply too short to provide relevant context to the situation. We resolved this issue by utilizing bigrams (2-word phrases) drawn from multiple sources and POS (parts of speech) tagging to filter out less meaningful words. ## Accomplishments that we're proud of One of the main accomplishments that we had was learning and effectively utilizing Flask, a previously unknown technology to us, in order to resolve the core issue of communication between Python and JavaScript scripts. 
Another aspect of the project that we are proud of is the general polish that News Cloud possesses, from the surprisingly effective web scraping and frequency analysis used to generate relevant key phrases to the clean UI design and cloud generation algorithm. ## What we learned Other than the technical knowledge (i.e. Flask, web scraping, PaaS, general web development) we picked up on the fly, we also experienced the collaborative workflow and time constraints more representative of real-world situations. Through our experience, we realized that what contributes to the success of a project is often the degree of collaboration rather than individual contribution. ## What's next for News Cloud We are proud of what we were able to accomplish with News Cloud, especially as our first hackathon project. That being said, there is no such thing as a completed piece of software. There are several aspects of the project that can be improved upon. The web scraping and frequency calculation algorithm creates a significant loading time whenever the site is opened, which limits its effectiveness as a new-tab extension. This was the main reason why we were unable to effectively deploy the Flask project onto a PaaS like AWS, GCP, or Heroku: the HTTP request would always time out. Another feature that we would like to include is a customization menu that allows the user to add more news sources, change the background image, and adjust settings for word cloud generation. We are excited to continue developing News Cloud into the useful and convenient news source that we conceived of. ## Notes The News Cloud logo is original. The background image belongs to Firewatch.
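A sketch of steps one and two of the text-processing pipeline described above, assuming spaCy's small English model (`python -m spacy download en_core_web_sm`); the exact POS skip-list is illustrative:

```python
from collections import Counter

import spacy

nlp = spacy.load("en_core_web_sm")
SKIP = {"ADP", "CCONJ", "SCONJ", "DET", "PRON", "PUNCT", "AUX"}

def trending_phrases(raw_texts, top_n=30):
    """POS-filter tokens, then count unigrams and bigrams across sources."""
    counts = Counter()
    for doc in nlp.pipe(raw_texts):
        kept = [t.text.lower() for t in doc
                if t.pos_ not in SKIP and not t.is_stop and t.is_alpha]
        counts.update(kept)
        counts.update(f"{a} {b}" for a, b in zip(kept, kept[1:]))
    return counts.most_common(top_n)
```

The `most_common(30)` output is what the JavaScript side sorts, normalizes to font sizes, and lays out along the spiral.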
## Inspiration We spend a lot of time trying to stay caught up with the news, but it can be hard to keep track of everything. We live in New York City, where we can walk by any bodega and get an idea of what’s going on in the news just by glancing at the newspapers on the stand. ## What it does A Thousand Words displays the cover photos and headlines of your favorite newspapers and magazines, recreating that bodega news experience in app form. It provides a smooth user experience with customization so you can have your favorite news sources front and center. There is also a companion data visualization web app that displays a word cloud of descriptions from the past year of The New York Times’ front page images. Clicking on a word opens up one of the images that corresponds to that label. ## How we built it The app for Android is written in Java, in Android Studio. We used standard Android libraries to build it. We made the UI as simple as possible; the front page is just a vertical layout with a bunch of images on it. This makes our app extremely lightweight and able to run on any size phone with very low power usage. The backend is written in Python and runs on Google App Engine. We used the standard App Engine `webapp2` framework, and `urllib` to scrape images from the web. For the data visualization aspect we used Google BigQuery, the Google Cloud Vision API, and JavaScript. First we used Google BigQuery to organize the past year of New York Times headline photos. Then, once we had the data and images gathered, we used the Google Cloud Vision API to generate a description of each headline image. Using Python we ranked each label by how common it is and used that to generate the word cloud. The word cloud is written in JavaScript, using the D3.js data visualization library. We chose to create a dynamic word cloud in JavaScript so that users can interact with and explore different themes. We host the data visualization and theme explorer on [Google App Engine](https://thousand-words.appspot.com). ## Challenges we ran into It was a challenge to get high quality images to download efficiently and quickly, because the news can turn on a dime, or it can stay the same for hours. We created a caching system, along with a bunch of enhancements in the cloud, that made the app use much less data and battery power. We also spent a lot of time getting the Google Vision API to do what we needed it to do. ## Accomplishments that we're proud of There have been a lot of data visualization projects that have taken into account newspapers and the things that newspapers say. But what about the cover photos that play such a crucial role in documenting the world around us? We decided we could visualize that data as well. First we collected all the front cover images of the New York Times from the year 2016. Then we used Google’s Cloud Vision API to describe each picture with a couple of labels. Using these labels we were able to build a fascinating word cloud of the most prevalent themes of the cover photos from the New York Times in 2016. We have always felt that this is something that would be awesome to do, and now that we can see it in action it is definitely something that we will use in the future! ## What we learned We learned how to effectively create smooth, clutter-free user interfaces in Android in order to make viewing the news as effortless as possible. We learned how to use Google App Engine and other Google Cloud APIs to build powerful backend web services. 
We learned about data visualization using CSS, JavaScript and the D3 library. ## What's next for A Thousand Words We plan to continue working on the Android app, soon to be followed by an iPhone version. The Google Cloud Vision API was a great start in our attempt to describe front page images; however, we believe we can do an even better job in representing what images actually mean. In addition, we’d like to continue the data collection process so we can make more awesome data visualizations, maybe even connecting it back to the phone app in some way!
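A sketch of the labeling step described above, using the google-cloud-vision client; error handling and the BigQuery plumbing that supplies the image URIs are omitted:

```python
from collections import Counter

from google.cloud import vision

client = vision.ImageAnnotatorClient()

def label_covers(image_uris):
    """Label each front-page image, then rank the themes for the word cloud."""
    counts = Counter()
    for uri in image_uris:
        image = vision.Image(source=vision.ImageSource(image_uri=uri))
        response = client.label_detection(image=image)
        counts.update(label.description.lower()
                      for label in response.label_annotations)
    return counts  # e.g. counts.most_common(50) feeds the D3 word cloud
```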
## Inspiration We wanted to create a new way to interact with the thousands of amazing shops that use Shopify. ![demo](https://res.cloudinary.com/devpost/image/fetch/s--AOJzynCD--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/0G1Pdea.jpg) ## What it does Our technology can be implemented inside existing physical stores to help customers get more information about products they are looking at. What is even more interesting is that our concept can be applied to ad spaces, where customers can literally window shop! Just walk in front of an enhanced Shopify ad and voila, you have the product on the seller's store, ready to be ordered right there from wherever you are. ## How we built it WalkThru is an Android app built with the AltBeacon library. Our localisation algorithm allows the application to pull up the Shopify page of a specific product when the consumer is in front of it. ![Shopify](https://res.cloudinary.com/devpost/image/fetch/s--Yj3u-mUq--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/biArh6r.jpg) ![Estimote](https://res.cloudinary.com/devpost/image/fetch/s--B-mjoWyJ--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.imgur.com/0M85Syt.jpg) ![Altbeacon](https://avatars2.githubusercontent.com/u/8183428?v=3&s=200) ## Challenges we ran into Using the Estimote beacons in a crowded environment has its caveats because of interference problems. ## Accomplishments that we're proud of The localisation of the user is really quick, so we can show a product page as soon as you get in front of it. ![WOW](https://res.cloudinary.com/devpost/image/fetch/s--HVZODc7O--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://i.giphy.com/xT77XWum9yH7zNkFW0.gif) ## What we learned We learned how to use beacons in Android for localisation. ## What's next for WalkThru WalkThru can be installed in current brick-and-mortar shops as well as ad panels all over town. Our next step would be to create a whole app for Shopify customers which lets them see what shops/ads are near them. We would also want to extend our localisation algorithm to 3D space so we can track exactly where a person is in a store. Some analytics could also be integrated in a Shopify app, directly in the store admin page, where a shop owner would be able to see how much time people spend in which parts of their stores. Our technology could help store owners increase their sales and optimise their stores.
losing
## Inspiration As students, we know the struggles of applying to countless jobs and being left wondering where you went wrong. Although there are plenty of resources to improve resume quality, most are general information and cannot provide specific feedback for your target role. Hence why we created ResumeRumble, a fun way for everyone to improve their resume-writing skills. ## What it does ResumeRumble uses OpenAI to generate quality, detailed feedback on your resume. Submit your resume and receive feedback and a score out of 100. Optionally, you can include the description of your target job for specific feedback and scoring based on how well your resume relates to that role. ResumeRumble incentivizes users to keep improving by featuring friendly competition and peer learning. Use your resumes to compete against friends, or other players. Create or join a lobby, with an optional job description, and select your resume of choice. Everyone receives their score and feedback and can view the other players' resumes; but only one will win! ## How we built it * ResumeRumble is a modern full-stack application using Next.js. * Webpages made with React, TypeScript, Tailwind and shadcn-ui. * User auth is handled by Clerk. * Resumes are uploaded and stored on PlanetScale with Prisma * Uses the OpenAI API ## Challenges we ran into The whole thing. But more specifically: * Converting PDFs to text to pass into the OpenAI API * Styling webpages with Tailwind animations * Setting up Prisma schemas to communicate with the database * Hosting the project on Vercel and configuring our .tech domain * User resume uploading and management * Connecting users together for the lobby system ## Accomplishments that we're proud of The whole thing. But more specifically: * A very ambitious project that we were able to get functional within 36 hours. * Fully mobile-responsive front-end with animations built with React components * Robust back-end functionality including database CRUD operations, API calling, and middleware management. * Our domain name * Araf carrying us in the final stretch (goat) ## What we learned * Only one of our group members had experience with the tech stack we used. The other 3 learned the entire stack (Next.js, PlanetScale, Tailwind, TypeScript) during the project, going from knowing nothing to deploying a full-stack application with those tools. * We also became familiar with shadcn/ui, a very helpful component library for making the website look consistent. ## What's next for ResumeRumble We'd love to make ResumeRumble into a site for all things about making a good resume, with additional features like: * Expanding our competitive system by increasing lobby sizes, adding an elo system for matchmaking, and adding a leaderboard. * More practice tools, like sample job postings and lessons on resume building. * A peer-to-peer resume reviewing service, where members are able to give feedback on each other's resumes.
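ResumeRumble's stack is TypeScript end to end, but the PDF-to-text-to-feedback flow described above looks roughly like this Python sketch; the model name and prompt wording are placeholders, not the app's actual ones:

```python
from openai import OpenAI
from pypdf import PdfReader

client = OpenAI()  # assumes OPENAI_API_KEY in the environment

def score_resume(pdf_path, job_description=None):
    """The PDF-to-text step that tripped us up, plus the feedback prompt."""
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    prompt = f"Score this resume out of 100 and give detailed feedback:\n{text}"
    if job_description:
        prompt += f"\nTarget role:\n{job_description}"
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```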
## Inspiration Credit card users are often the victims of fraud. ## What it does Based on user-input data such as merchant ZIP code and identification, it determines whether a transaction is fraudulent or not. ## How we built it Mage.ai, HTML, JS, HTTP requests ## Challenges we ran into Our HTTP requests weren't being authorized ## Accomplishments that we're proud of Our first ML project ## What we learned ## What's next for Fraud Detection
## Inspiration 💥 Our inspiration is to alter the way humans have acquired knowledge and skills over the last hundred years. Instead of reading or writing, we devised of a method fo individuals to teach others through communication and mentoring. A way that not only benefits those who learn but also helps them achieve their goals. ## What it does 🌟 Intellex is a diverse skill swapping platform for those eager to learn more. In this era, information is gold. Your knowledge is valuable, and people want it. For the price of a tutoring session, you can receive back a complete and in depth tutorial on whatever you want. Join one on one video calls with safe and rated teachers, and be rewarded for learning more. We constantly move away from agencies and the government and thus Intellex strives to decentralize education. Slowly, the age old classroom is changing. Intellex presents a potential step towards education decentralization by incentivizing education with NFT rewards which include special badges and a leaderboard. ## How we built it 🛠️ We began with planning out our core features and determining what technologies we would use. Later, we created a Figma design to understand what pages we would need for our project, planning our backend integration to store and fetch data from a database. We used Next.js to structure the project which uses React internally. We used TypeScript for type safety across my project which was major help when it came to debugging. Tailwind CSS was leveraged for its easy to use classes. We also utilized Framer Motion for the landing page animations ## Challenges we ran into 🌀 The obstacles we faced were coming up with a captivating idea, which caused us to lose productivity. We've also faced difficult obstacles in languages we're unfamiliar with, and some of us are also beginners which created much confusion during the event. Time management was really difficult to cope with because of the many changes in plans, but  overall we have improved our knowledge and experience. ## Accomplishments that we're proud of 🎊 We are proud of building a very clean, functional, and modern-looking user interface for Intellex, allowing users to experience an intuitive and interactive educational environment. This aligns seamlessly with our future use of Whisper AI to enhance user interactions. To ensure optimized site performance, we're implementing Next.js with Server-Side Rendering (SSR), providing an extremely fast and responsive feel when using the app. This approach not only boosts efficiency but also improves the overall user experience, crucial for educational applications. In line with the best practices of React, we're focusing on using client-side rendering at the most intricate points of the application, integrating it with mock data initially. This setup is in preparation for later fetching real-time data from the backend, including interactive whiteboard sessions and peer ratings. Our aim is to create a dynamic, adaptive learning platform that is both powerful and easy to use, reflecting our commitment to pioneering in the educational technology space. ## What we learned 🧠 Besides the technologies that were listed above, we as a group learned an exceptional amount of information in regards to full stack web applications. This experience marked the beginning of our full stack journey and we took it approached it with a cautious approach, making sure we understood all aspects of a website, which is something that a lot of people tend to overlook. 
We learned about the planning process, backend integration, REST APIs, etc. Most importantly, we learned the importance of having a cooperative and helpful team that has your back when building out these complex apps on time. ## What's next for Intellex ➡️ We fully plan to build out the backend of Intellex to allow for proper functionality using Whisper AI. This innovative technology will enhance user interactions and streamline the learning process. Regarding the product itself, there are countless educational features that we want to implement, such as an interactive whiteboard for real-time collaboration and a comprehensive rating system to allow peers to see and evaluate each other's contributions. These features aim to foster a more engaging and interactive learning environment. Additionally, we're exploring the integration of adaptive learning algorithms to personalize the educational experience for each user. This is a product we've always wanted to pursue in some form, and we look forward to bringing it to life and seeing its positive impact on the educational community.
losing
## Inspiration Garbage bins around cities are constantly overflowing. Our goal was to create a system that better allocates time and resources to help prevent this problem, while also positively impacting the environment. ## What it does Urbins provides a live monitoring web application that displays the live capacity of both garbage and recycling compartments using ultrasonic sensors. This functionality can be seen inside the prototype garbage bin. The bin uses a cell phone camera to send an image to the custom learning model built with IBM Watson. The results from the Watson model are used to classify each object placed in the bin so that it can be sorted into either garbage or recycling. Based on the classification, the Android application controls the V-shaped platform using a servo motor to tilt the platform and drop the item into its correct bin. Once a garbage/recycling bin nears full capacity, STDlib is used to notify city workers via SMS that bins at a given address are full. Machine learning is applied when an object cannot be classified. When this happens, the image of the object is sent via STDlib to Slack. Along with the image, response buttons are displayed in Slack, which allows a city worker to manually classify the item. Once a selection is made, the new classification is used to further train the Watson model. This updated model is then used by all the connected smart garbage bins, allowing all the bins to learn. ## Challenges we ran into Integrating all the components; learning to use IBM Watson; providing the set of images for IBM Watson (updating the model required a zip file containing at least 10 photos). ## Accomplishments that we're proud of Integrating all the components; getting IBM Watson working; getting STDlib working; training IBM Watson using STDlib. ## What we learned How to use IBM Watson; how to effectively plan a project; designing an effective architecture; how to use STDlib. ## What's next for Urbins Accounts; an algorithm for the optimal route for a shift; a dashboard with map areas, floor plans, housing plans, and event maps; a heat map on Google Maps; a bar chart of stats over the past 6 months (which bin was filled most frequently?); product information and brand data.
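As a rough illustration of Urbins' capacity-monitoring side, here is a sketch of how an ultrasonic distance reading can be turned into a fill percentage that triggers a worker notification; the bin depth, threshold, and alert stub are illustrative assumptions, not the project's actual parameters or STDlib code:

```python
BIN_DEPTH_CM = 80.0    # assumed distance from the sensor down to the bin floor
FULL_THRESHOLD = 0.9   # assumed trigger point: notify at 90% capacity

def fill_level(distance_to_waste_cm: float) -> float:
    """Fill fraction: the closer the waste surface is to the sensor, the fuller the bin."""
    level = 1.0 - (distance_to_waste_cm / BIN_DEPTH_CM)
    return max(0.0, min(1.0, level))

def check_bin(distance_cm: float, address: str) -> None:
    level = fill_level(distance_cm)
    if level >= FULL_THRESHOLD:
        # In the real system, this alert would go out as an SMS via STDlib.
        print(f"ALERT: bins at {address} are {level:.0%} full")

check_bin(6.5, "123 Main St")  # -> ALERT: bins at 123 Main St are 92% full
```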
## Inspiration Waste Management: Despite having bins with specific labels, people often put waste into the wrong bins, which leads to unnecessary plastic/recyclables in landfills. ## What it does Uses a Raspberry Pi, the Google Vision API, and our custom classifier to categorize waste and automatically sort items into the right sections (Garbage, Organic, Recycle). The data collected is stored in Firebase and displayed with the respective category and item label (type of waste) on a web app/console. The web app is capable of providing advanced statistics such as the percentage of recycling/compost/garbage, your carbon emissions, and statistics on which specific items you throw out the most (water bottles, bags of chips, etc.). The classifier can be modified to suit the garbage laws of different places (e.g. separate recycling bins for paper and plastic). ## How we built it The Raspberry Pi is triggered by a distance sensor to take a photo of the inserted waste item, which is identified using the Google Vision API. Once the item is identified, our classifier determines whether the item belongs in the recycling, compost bin, or garbage. The inbuilt hardware drops the waste item into the correct section. ## Challenges we ran into Combining IoT and AI was tough. We had never used Firebase. Separation of concerns was a difficult task. Deciding the mechanics and design of the bin (we are not mechanical engineers :D). ## Accomplishments that we're proud of Combining the entire project. Staying up for 24+ hours. ## What we learned Different technologies: Firebase, IoT, Google Cloud Platform, hardware design, decision making, React, and prototyping. ## What's next for smartBin Improving the efficiency. Building it out of better materials (3D printing, stronger servos). Improving the mechanical movement. Adding touch screen support to modify various parameters of the device.
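A minimal sketch of smartBin's label-to-bin step, using the real google-cloud-vision client; the keyword lists and fallback rule are illustrative stand-ins for the team's custom classifier:

```python
from google.cloud import vision

# Illustrative keyword sets -- the real classifier can be tuned to a region's laws.
RECYCLE = {"bottle", "plastic", "tin can", "paper", "cardboard"}
ORGANIC = {"banana", "apple", "food", "fruit", "vegetable"}

def classify_waste(image_path: str) -> str:
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    labels = {label.description.lower() for label in response.label_annotations}
    if labels & RECYCLE:
        return "recycle"
    if labels & ORGANIC:
        return "organic"
    return "garbage"  # default section when no label matches
```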
## Inspiration We all know that great potential lies within the stock markets, but how many of us have the time and money to put into investments? With Minvest anyone can start investing with no minimum portfolio balance and no prior investment experience needed. Our platform is well integrated with your bank account, and you decide how much money to invest, with the option to withdraw any amount at any time. Sit back and watch your investments grow as we expertly manage a well-diversified portfolio on your behalf. ## What it does Minvest is an application which uses algorithmic trading to manage clients' investment portfolios. The client simply transfers any amount of money from their bank account into the investment platform, and our algorithms take care of the rest. Each user is classified into a certain investment "style" or "strategy," depending on their own personal preferences, and based on these profiles, our algorithms pick out the best investments and perform the appropriate trades on the stock market when the timing is right. Our platform is a form of "crowd investing" in that users pool their money together in order to make investments that they normally would not be able to on their own. There are two current challenges with investments: either one cannot afford to make the minimum investments (usually a minimum of 100 shares, or a minimum dollar investment amount), or one cannot afford to diversify their portfolio in order to minimize their risks and maximize their opportunities. With Minvest, users now have access to investments previously out of their reach, and with trading and portfolios managed through algorithms, the management cost stays low, allowing us to offer no minimum investment for our users. We believe that with this platform, more individuals will be able to benefit from the markets, as well as have the financial peace of mind that their money is expertly managed and will grow in the future. ## How we built it Our front-end client is an Android application, and our back end is created with Django. We used several APIs in our back end to build our services and algorithms. Using the Capital One Nessie API we tightly integrate users' bank accounts, allowing them to easily transfer money from their bank into the platform, or withdraw from the platform and deposit into their bank accounts. In creating the algorithm that determines which securities to build our portfolio with, we used the Yahoo Finance API, which brings us key performance indicators of securities that we analyze to select the best investments. The actual trading of these securities would be done through the Zipline API, which is built with Python. This API is used to create professional trading algorithms, and allows us to build and backtest our algorithms against 10 years of historical data, with full performance and risk indicators. Displaying all this information is our Android client, which gives users the controls to deposit into and withdraw from the investment platform, and to view their book value, portfolio value, change, and percentage change. ## Challenges we ran into The greatest challenge we ran into was developing our algorithms. We had to do in-depth research on investment performance indicators, from P/E and P/B ratios to alpha/beta risk ratios, to how book values are properly calculated when making multiple trades with different securities at different prices.
However, once we got the hang of it, we were able to identify important traits of the securities that would help us make investment decisions. ## Accomplishments that we're proud of We're proud of the fact that we were able to learn a lot about investments, as well as being able to implement a few different APIs together to make a final product! ## What we learned We learned a lot not only about investments, but also applied that learning to create algorithms. ## What's next for Minvest Next steps would be to further develop our algorithms to be able to adapt to more market situations, and to allow more advanced users the option of further narrowing down which industries they would like to invest in, such as financial services, utilities, commodities, etc.
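For a flavor of what a Zipline strategy looks like, here is a minimal sketch of a periodic rebalance toward an equal-weighted basket; the tickers and weighting scheme are illustrative assumptions, not Minvest's actual algorithm:

```python
from zipline.api import order_target_percent, symbol

def initialize(context):
    # Illustrative, diversified universe -- not Minvest's real picks.
    context.universe = [symbol(t) for t in ("AAPL", "JNJ", "XOM", "JPM")]

def handle_data(context, data):
    weight = 1.0 / len(context.universe)
    for asset in context.universe:
        if data.can_trade(asset):
            # Rebalance each holding toward its target portfolio weight.
            order_target_percent(asset, weight)
```

Zipline's backtester runs `initialize` once and `handle_data` on each bar, which is what makes the ten-year backtests mentioned above straightforward.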
winning
## Inspiration We wanted to get better at sports, but we don't have that much time to perfect our moves. ## What it does Compares your athletic abilities to other users by building skeletons of both people and showing you where you can improve. Uses ML to compare your form to a professional's form and tells you what to improve. ## How I built it We used OpenPose to train on a dataset we found online and added footage of our own members to train for certain skills. The backend was made in Python; it takes the skeletons and compares them to our database of trained models to see how you perform. The skeletons from both videos are combined side by side in a video and sent to our React frontend. ## Challenges I ran into Working with multiple out-of-date libraries and figuring out how to compare skeletons. ## Accomplishments that I'm proud of ## What I learned ## What's next for trainYou
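One plausible way to compare two skeletons numerically (an assumption for illustration, not necessarily trainYou's exact method) is to normalize each set of keypoints for position and scale, then measure per-joint error:

```python
import numpy as np

def normalize(skeleton: np.ndarray) -> np.ndarray:
    """skeleton: (num_joints, 2) array of (x, y) keypoints, e.g. from OpenPose."""
    centered = skeleton - skeleton.mean(axis=0)   # remove translation
    scale = np.linalg.norm(centered)              # remove body-size/scale differences
    return centered / scale if scale else centered

def pose_error(user: np.ndarray, pro: np.ndarray) -> np.ndarray:
    """Per-joint distance between the user's pose and the professional's."""
    return np.linalg.norm(normalize(user) - normalize(pro), axis=1)

# The joints with the largest error are the ones to flag as improvements.
```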
## Inspiration The three of us love lifting at the gym. We always see apps that track cardio fitness but haven't found anything that tracks lifting exercises in real-time. Oftentimes when lifting, people tend to employ poor form, leading to gym injuries which could have been avoided by being proactive. ## What it does and how we built it Our product tracks body movements using EMG signals from a Myo armband the athlete wears. During the activity, the application provides real-time tracking of muscles used, the distance specific body parts travel, and information about the athlete's posture and form. Using machine learning, we actively provide haptic feedback through the band to correct the athlete's movements if our algorithm deems the form to be poor. ## How we built it We trained an SVM on deliberately performed proper and improper forms for exercises such as bicep curls. We read properties of the EMG signals from the Myo band and associated these with the good/poor form labels. Then, we dynamically read signals from the band during workouts and chart points in the plane where we classify their forms. If the form is bad, the band provides haptic feedback to the user indicating that they might injure themselves. ## Challenges we ran into Interfacing with the Myo band's API was not the easiest task for us, since we ran into numerous technical difficulties. However, after we spent copious amounts of time debugging, we finally managed to get a clear stream of EMG data. ## Accomplishments that we're proud of We made a working product by the end of the hackathon (including a fully functional machine learning model) and are extremely excited for its future applications. ## What we learned It was our first time making a hardware hack, so it was a really great experience playing around with the Myo and learning how to interface with the hardware. We also learned a lot about signal processing. ## What's next for SpotMe In addition to refining our algorithms and the depth of insights we can provide, we definitely want to expand the breadth of activities we cover (since we're currently focused primarily on weightlifting). The market we want to target is sports enthusiasts who want to play like their idols. By collecting data from professional athletes, we can come up with "profiles" that the user can learn to play like. We can quantitatively and precisely assess how closely the user plays to their chosen professional athlete. For instance, we played tennis in high school and frequently had to watch videos of our favorite professionals. With this tool, you can actually learn to serve like Federer, shoot like Curry, or throw a spiral like Brady.
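A compact sketch of what SpotMe's training step could look like with scikit-learn; the window featurization (per-channel mean absolute value and RMS, a common EMG baseline) is our assumption, not necessarily the team's exact features:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def featurize(window: np.ndarray) -> np.ndarray:
    """window: (samples, 8) raw EMG from the Myo's 8 channels -> 16-dim features."""
    mav = np.abs(window).mean(axis=0)           # mean absolute value per channel
    rms = np.sqrt((window ** 2).mean(axis=0))   # root mean square per channel
    return np.concatenate([mav, rms])

def train_form_classifier(windows, labels):
    """labels: 1 = good form, 0 = poor form (the class that triggers haptics)."""
    X = np.stack([featurize(w) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return clf.fit(X, labels)
```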
## Inspiration We have all gone through physical therapy and noticed that it was hard to make sure our form was correct when practicing physical therapy exercises at home, after sessions at the PT's office. ## What it does Our web app tracks user movement through MediaPipe and calculates the angles made by all of their joints. Depending on the exercise, our code will analyze the movement and give feedback on improving form, as well as insight into the positive gains from the movement. ## How we built it We explored pose detection models and determined that MediaPipe was the best fit for our idea. We then spent time figuring out how to use MediaPipe within a Next.js environment, and used TypeScript to handle the majority of the product's functionality. After the data is collected, it is passed into a Python file which runs calculations on the data and then sends it to the OpenAI API to construct the feedback paragraph and graphs. ## Challenges we ran into One challenge we ran into was differences in operating systems that made working on the project together difficult, as Windows would often not be able to run the code correctly if it was written on a Mac. ## Accomplishments that we're proud of We built an entire app in Next.js even though we had very minimal previous exposure to JavaScript/TypeScript frameworks. ## What we learned We learned a significant amount about developing in Next.js, as well as connecting with the OpenAI API. We also learned about many of the existing problems in the physical therapy space, and how deep tech can work to solve these issues. ## What's next for Physio Assist Polishing the proof of concept and talking to customers.
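The core angle calculation is simple enough to show in full; given three landmark coordinates (say shoulder-elbow-wrist from MediaPipe Pose), this sketch returns the angle at the middle joint:

```python
import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle in degrees at joint b formed by points a-b-c, each an (x, y) pair."""
    ba = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bc = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))

# e.g. an elbow angle partway through a curl (coordinates are illustrative):
print(joint_angle((0.40, 0.30), (0.45, 0.50), (0.42, 0.70)))  # ~157 degrees
```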
winning
## Inspiration The internet and current text-based communication simply do not promote neurodiversity. People, especially children, with developmental disabilities such as autism have a great deal of difficulty recognizing the emotions of others, whether verbal or written. The internet gave us the ability to communicate with each other easily. In the new wave of technology, we believe that all humans should be able to understand each other easily as well. ## What it does AllChat works like any other messaging application. However, on top of sending and receiving messages, when you receive a message it displays the emotion of the given text so that those with developmental disabilities can gain more insight and more easily understand other people's messages. ## How we built it The NLP system uses TensorFlow and BERT to categorize text into 5 different emotions. BERT computes vector-space representations of natural language that are suitable for use in deep learning models. The BERT family of models uses the Transformer encoder architecture to process each token of input text in the full context of all tokens before and after. Out of the box, BERT examples usually just classify text as either negative or positive, so we had to fine-tune it to get our model, which classifies text into multiple categories. Sockets were used to communicate between different IP addresses and ports. Threading was used to stream text in and out at the same time. The frontend uses Kivy, a Python frontend library meant for cross-platform devices and multi-touch displays. ## Challenges we ran into There were a lot of firsts for this group. We are a bunch of first years, after all. Whether it was someone's first time using BERT or first time using Kivy, there was a lot of pain in setting things up to a point where we were comfortable with the results. It was especially difficult to find good training data for BERT. It was also difficult to connect the frontend to the backend given the time differences among our group members. ## Accomplishments that we're proud of For training the NLP system we had to read a lot of research papers about how labs have done similar things. It was extremely cool to apply something out of research papers in our own work. All things considered, the frontend looks very good; none of us are designers and it was that member's first time using Kivy, so a lot of progress was made. ## What we learned A big lesson that continues to be relevant in the space of data science and machine learning is garbage in, garbage out. A model is only as good as the training data you provide it with. On top of that, we learnt to work better as a group despite our time differences by using GitHub better and writing more meaningful commit messages. ## What's next for AllChat Some next steps would be to move to a server instead of having messages analyzed on-device, as with long messages it can become time-intensive for a mobile phone. On top of that, some security features such as end-to-end encryption would also be necessary.
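One common way to set up that multi-class fine-tune in TensorFlow is via Hugging Face's BERT wrappers; the tooling choice and the five label names below are assumptions for illustration (the writeup only specifies TensorFlow + BERT and five emotion categories):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise"]  # illustrative labels

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(EMOTIONS)  # 5-way head instead of pos/neg
)
model.compile(
    optimizer=tf.keras.optimizers.Adam(2e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# model.fit(train_dataset, epochs=3)  # fine-tune on labeled message data

def predict_emotion(text: str) -> str:
    inputs = tokenizer(text, return_tensors="tf", truncation=True)
    logits = model(inputs).logits
    return EMOTIONS[int(tf.argmax(logits, axis=-1)[0])]
```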
## Inspiration It's been really tiring to discern each other's emotions over online platforms, as a lot of the non-verbal modes of communcation are now rendered useless. Because of this, we interpreted that this would be especially difficult for people with autism, alexithymia, or other developmental disorders to communicate nowadays, and thus presume that there is a need for an increased accessibility source for online meetings. This project addresses this issue by creating a means in which these non-verbal communications can be interpreted by Machine Learning! ## High Level Objectives * Help individuals discern emotions over online meetings better, thus increasing the quality of Mental Health and alleviating Zoom Fatigue * Help individuals understand speech better via transcriptions * Provide real-time feedback back to the user continuously By improving the methods of non-verbal communications through online platforms, we hope to enhance the landscape of the online communication lifestyle, thus improving the mental health of all its users. ## What it does Thus, the project team decided to implement technology to help address this opportunity - welcome PELIOS, an ML-based application that communicates back the user emotion and speech back in plain text to the other users! ## How we built it The project is made up of a frontend made in Flutter and a backend in Python which uses Machine Learning libraries and Google Cloud API. Flutter is used to communicate back to the user what emotions are being recognized, and Python is used to discern those facial emotions. ## Challenges we ran into As first year students working online, it was hard to communicate together and distribute tasks to each other. We didn't have much experience in programming, so it was also a challenge trying to learn new languages and integrations to finalize all the outputs of our product. ## Accomplishments that we're proud of We learned a lot of new programming languages and integrations! It's been really cool to learn how to connect various platforms together to create a versatile final product. ## What we learned From working on this project, the author team comprised of all first year students learned: * Flutter * Python Libraries, MI Libraries * Integrations between Flutter and Python * Google Cloud API integrations * Research on UI/UX, accessibility design ## What's next for Pelios * Implementation of a variety of other emotion recognizers to re-verify the facial expressions, such as voice intonation, speech patterns, and sentence structures. All these methods could be added via more ML libraries. * Expansion into other OS/Devices, such as Android, IOS, and Linux * UI/UX improvement, research to better reflect accessibility needs of the user * Move C++ as an improvement (recording audio is faster) * Train custom model using Google’s Auto ML feature
## Inspiration Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto's Entrepreneurship Hatchery's incubator executing on our vision. We've built a sweet platform that solves many of the issues surrounding meetings, but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech. ## What it does While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting, but sentiment can be used anywhere from the boardroom to the classroom to a psychologist's office. ## How I built it We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React, leveraging Material UI, React-Motion, Socket.IO, and Chart.js. The backend was built on Node (with Express) as well as Python for some computational tasks. We used gRPC, Docker, and Kubernetes to launch the software, making it scalable right out of the box. For all relevant processing, we used Google Speech-to-Text, Google diarization, Stanford Empath, scikit-learn, and GloVe (for word-to-vec). ## Challenges I ran into Integrating so many moving parts into one cohesive platform was a challenge to keep organized, but we used Trello to stay on track throughout the 36 hours. Audio encoding was also quite challenging, as we ran up against some limitations of JavaScript while trying to stream audio in the correct and acceptable format. Apart from that, we didn't encounter any major roadblocks, but we were each working for almost the entire 36-hour stretch as there were a lot of features to implement. ## Accomplishments that I'm proud of We are super proud of the fact that we were able to pull it off, as we knew this was a challenging task to start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market, so being first is always awesome. ## What I learned We learned a whole lot about integration, both on the frontend and the backend. We prototyped before coding, introduced animations to improve user experience, learned too much about how computers store numbers (:p), and did a whole lot of stuff all in real time. ## What's next for Knowtworthy Sentiment Knowtworthy Sentiment aligns well with our startup's vision for the future of meetings, so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/>
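Empath itself is a small Python library, so the per-utterance scoring step can be sketched directly; the category list here is an illustrative subset of Empath's built-in categories, not necessarily the ones the product uses:

```python
from empath import Empath

lexicon = Empath()
CATEGORIES = ["positive_emotion", "negative_emotion", "joy", "anger", "nervousness"]

def score_utterance(text: str) -> dict:
    """Normalized Empath category scores for one diarized utterance."""
    return lexicon.analyze(text, categories=CATEGORIES, normalize=True)

print(score_utterance("I think the launch went really well, great job everyone"))
```

In a pipeline like the one described above, each diarized snippet from Speech-to-Text would be scored this way and streamed to the frontend charts over Socket.IO.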
losing
## Inspiration It's so easy to drift from your close friends when caught up in your own world. We wanted to make a platform that allows you to stay connected easily with others and catch up with old friends to see how they're doing. ## What it does Every day, CatchUp sends a notification to all users to open the app for their daily 5-minute video call with a friend. Once the app is opened, it randomly selects someone from the user's friend list to video call, and the pair can have a short conversation and check up on each other. CatchUp makes it easy to have a short conversation and stay connected with your friends. ## How we built it Our group was interested in app development, so we chose to use Android Studio even though not many of us were familiar with the platform. We had to learn the basics of coding in Kotlin, designed the UI of the app using Jetpack Compose, and worked on implementing the Twilio video calling API into our app. ## Challenges we ran into As this was the first hackathon we've participated in, we struggled heavily with setting up and learning how to start our first project. 1. Firstly, we were unfamiliar with which platforms to use for our project. We began experimenting with Flutter and FlutterFlow to work on the UI and tried to implement it with Android Studio along with Twilio. We found that it was too difficult to integrate the code from Flutter with Twilio on Android Studio, so we switched to using Jetpack Compose for the UI. 2. We struggled to set up the Twilio video calling API, as it was our first time using software like this. It took a lot of time to figure out how to download it, set it up, and deploy it in Android Studio. ## Accomplishments that we're proud of We are all proud to say that we have completed our first ever hackathon! We are also proud of the new skills we've learned throughout implementing our project, including using Android Studio, Jetpack Compose for creating our UI, and learning the basics of Kotlin. For most of us, it was the first time we had worked with these different pieces of software, and we made the best of it. ## What we learned How essential it is to figure out, as early as possible, which software, APIs, and SDKs we can use and which are compatible with each other. We also learned how to use Git and GitHub properly and effectively. For next time, setting these kinds of things up at the very beginning of a project/hackathon would definitely be the most beneficial. ## What's next for CatchUp * Looking into integrating phone numbers/Google/Facebook for the login, and integrating Firebase with this to host server information. * Aspiring to fully complete our application with the Twilio API fully integrated and working * Deploying the app in the Google Play store
## Inspiration - I was inspired to make this app when I saw that my friends and family sometimes don't have enough internet bandwidth to spare for an application, and that signal drops make calling someone a cumbersome task. Messaging was not included in this app, since I wanted it to be lightweight. This also achieves another goal: making people have one-on-one conversations, which have declined day by day as people have started texting a lot. ## What it does - This app helps people make calls to their friends/co-workers/acquaintances without using too much internet bandwidth, when signal drops are frequent and STD calls are not possible. The unavailability of a messaging feature helps save more internet data and pushes people to talk instead of texting. This helps people be more socially active among their friends. ## How I built it - This app encompasses multiple technologies and frameworks. It is a combination of Flutter, Android, and Firebase, developed with the help of Dart and Java. It was a fun task to make all the UI elements and then incorporate them into the main frontend of the application. The backend uses Google Firebase for its database and authentication, which is a service from Google for hosting apps with lots of features, and uses Google Cloud Platform for all the work. Connecting the frontend and backend was not an easy task, especially for a single person, hence **the app is still in the development phase and not yet fully functional.** ## Challenges we ran into - This whole idea was a pretty big challenge for me. This is my first project in Flutter, and I have never done something on this large a scale, so I was totally skeptical about the completion of the project and its elements. The majority of the time was dedicated to the frontend of the application, but the backend was a big problem, especially for a beginner like me, hence the incomplete status. ## Accomplishments that we're proud of - Despite many of the challenges I ran into, I'm extremely proud of what I've been able to produce over the course of these 36 hours. ## What I learned - I learned a lot about Flutter and Firebase, and frontend-backend services in general. I learned how to make many new UI widgets and features, worked with a lot of new plugins, and learned how to connect Android SDKs to the app and use them for smooth functioning. I learned how Firebase authenticates users and their emails/passwords with its built-in authentication features, and how it stores data in containerized formats and uses it in projects, which will be very helpful in my future. One more important thing I learned was how to keep my code organized and better formatted for easier changes whenever required. And lastly, I learned a lot about Git and how useful it is for such projects. ## What's next for Berufung - I hope this app will become fully functional, and we will add new features such as two-factor authentication, video calling, and group calling.
## 💫 Inspiration Inspired by our grandparents, who may not always be able to accomplish certain tasks, we wanted to create a platform that would allow them to find help locally. We also recognize that many younger members of the community might be more knowledgeable or capable of helping out. These younger members may be looking to make some extra money, or just want to help out their fellow neighbours. We present to you.... **Locall!** ## 🏘 What it does Locall helps members of a neighbourhood get in contact and share any tasks that they may need help with. Users can browse through these tasks and offer to help their neighbours. Those who post the tasks can also choose to offer payment for these services. It's hard to trust just anyone to help you out with daily tasks, but you can always count on your neighbours! For example, let's say an elderly woman can't shovel her driveway today. Instead of calling a big snow plowing company, she can post a service request on Locall, and someone in her local community can reach out and help! By using Locall, she's saving money on the fees that big companies charge, while also helping someone else in the community make a bit of extra money. Plenty of teenagers are looking to make some money whenever they can, and we provide a platform for them to get in touch with their neighbours. ## 🛠 How we built it We first prototyped our app design using Figma, and then moved on to using Flutter for the actual implementation. Learning Flutter from scratch was a challenge, as we had to read through lots of documentation. We also stored and retrieved data from Firebase. ## 🦒 What we learned Learning a new language can be very tiring, but also very rewarding! This weekend, we learned how to use Flutter to build an iOS app. We're proud that we managed to implement some special features into our app! ## 📱 What's next for Locall * We would want to train a TensorFlow model to better recommend services to users, as well as improve the user experience * Implementing chat and payment directly in the app would help improve requests and offers of services
losing
## Inspiration An online project detailing the "Future Cities Index," a statistic that aims to calculate the viability of building a future city. After watching the Future Cities presentation, we were interested to see *where* future cities would be built if a project like the one we saw were funded in the US. This prompted us to create a tool that may help social scientists answer that question — as many people work to innovate the various components of future cities, we tried to find possible homes for their ideas. ## What it does Allows social scientists and amateur researchers to access aggregated census and economic data through the LightBox API, without writing a single line of code. The program calculates a Future Cities Index based on the resilience of a census tract to natural disasters, housing availability, and the social vulnerability in the area. ## How we built it Interactive UI built with ReactJS; data parsed from the LightBox API with JavaScript. ## Challenges we ran into Loading the census tracts into our interactive map, finding appropriate data to display for each tract, and calculating the Future Cities Index. ## Accomplishments that we're proud of Creating a working interactive map and successfully displaying a real-time Future Cities Index. ## What we learned How to use geodata to make interactive maps that behave as we wish. We are able to overlay different raster images and polygons onto a map. ## What's next for Future Cities Index Using more parameters in the Future Cities Index, displaying data at the county and city level, linking each census tract to available census data, and allowing users to easily compare tracts.
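As a sketch of the index calculation, here is one way to combine the three factors into a single 0-100 score; the weights, normalization, and field names are illustrative assumptions, not the project's exact formula:

```python
# Assumed weights over the three factors named above.
WEIGHTS = {"disaster_resilience": 0.4, "housing_availability": 0.35,
           "social_vulnerability": 0.25}

def future_cities_index(tract: dict) -> float:
    """tract maps each factor to a score in [0, 1]; vulnerability lowers the index."""
    score = (
        WEIGHTS["disaster_resilience"] * tract["disaster_resilience"]
        + WEIGHTS["housing_availability"] * tract["housing_availability"]
        + WEIGHTS["social_vulnerability"] * (1.0 - tract["social_vulnerability"])
    )
    return round(100 * score, 1)

print(future_cities_index({"disaster_resilience": 0.8,
                           "housing_availability": 0.6,
                           "social_vulnerability": 0.3}))  # -> 70.5
```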
## Inspiration It's insane how the majority of the U.S. still looks like **endless freeways and suburban sprawl.** The majority of Americans can't get a cup of coffee or go to the grocery store without a car. What would America look like with cleaner air, walkable cities, green spaces, and effective routing from place to place that builds on infrastructure for active transport and micro mobility? This is the question we answer, and show, to urban planners at an extremely granular street level. Compared to most of Europe and Asia (where there are public transportation options, human-scale streets, and dense neighborhoods), the United States is light-years away ... but urban planners don't have the tools right now to easily assess a specific area's walkability. **Here's why this is an urgent problem:** Current tools for urban planners don't provide *location-specific information* — they only provide arbitrary, high-level data overviews over a 50-mile or so radius about population density. Even though consumers see new tools, like Google Maps' area busyness bars, these are only bits and pieces of all the data an urban planner needs, like data on bike paths, bus stops, and the relationships between traffic and pedestrians. As a result, there are very few actionable improvements that urban planners can make. Moreover, because cities are physical spaces, planners cannot easily visualize what an improvement (e.g. adding a bike/bus lane, parks, open spaces) would look like in the existing context of that specific road. Many urban planners don't have the resources or capability to fully immerse themselves in a new city, live like a resident, and understand the problems that residents face on a daily basis that prevent them from accessing public transport, active commutes, or even safe outdoor spaces. There's also been a significant rise in micro-mobility — usage of e-bikes (e.g. CityBike rental services) and scooters (especially on college campuses) is growing. Research studies have shown that access to public transport, safe walking areas, and micro mobility all contribute to greater access to opportunity and successful mixed-income neighborhoods, which can raise entire generations out of poverty. To continue this movement and translate it into economic mobility, we have to ensure urban developers are **making space for car-alternatives** in their city planning. This means bike lanes, bus stops, plazas, well-lit sidewalks, and green space in the city. These reasons are why our team created CityGO — a tool that helps urban planners understand their region's walkability scores down to the **granular street intersection level** and **instantly visualize what a street would look like if it was actually walkable** using OpenAI's CLIP and DALL-E image generation tools (e.g. "What would the street in front of the Painted Ladies look like if there were 2 bike lanes installed?"). We are extremely intentional about the unseen effects of walkability on social structures, the environment, and public health, and we are ecstatic to see the results: 1. Car alternatives provide economic mobility, as they give Americans alternatives to purchasing and maintaining cars that are cumbersome, unreliable, and extremely costly to maintain in dense urban areas. Having lower upfront costs also enables handicapped people, people who can't drive, and extremely young/old people to have the same access to opportunity and continue living high-quality lives.
This disproportionately benefits people in poverty, as children with access to public transport or farther walking routes also gain access to better education and food sources, and can meet friends and share the resources of other neighborhoods, which can have the **huge** impact of pulling communities out of poverty. Placing bicycle lanes and barriers that protect cyclists from side traffic will encourage people to utilize micro mobility and active transport options. This is not possible if urban planners don't know where existing transport is or even recognize the outsized impact of increased bike lanes. Finally, it's no surprise that transportation as a sector alone accounts for 27% of carbon emissions (US EPA) and is a massive safety issue that all citizens face every day. Our country's dependence on cars has been leading to deeper issues that affect basic safety, climate change, and economic mobility. The faster we take steps to mitigate this dependence, the more sustainable our lifestyles and Earth can be. ## What it does TLDR: 1) A map that pulls together data on car traffic and congestion, pedestrian foot traffic, and bike parking opportunities, with heat maps that represent the density of foot traffic and location-specific interactive markers. 2) The Google Maps Street View API enables urban planners to see and move through live imagery of their site. 3) OpenAI CLIP and DALL-E are used to incorporate an uploaded image (taken from Street View) and descriptor text embeddings to accurately provide a **hyper location-specific augmented image**. The exact street venues that are unwalkable in a city are extremely difficult to pinpoint. There's an insane amount of data that you have to consolidate to get a cohesive image of a city's walkability state at every point — from car traffic congestion, pedestrian foot traffic, bike parking, and more. Because cohesive data collection is extremely important to produce a well-nuanced understanding of a place's walkability, our team incorporated a mix of geoJSON data formats and vector tiles (specific to the Mapbox API). There was a significant amount of unexpected "data wrangling" that came from this project, since multiple formats from various sources had to be integrated with existing mapping software — however, it was a great exposure to real issues data analysts and urban planners have when trying to work with data. There are three primary layers to our mapping software: traffic congestion, pedestrian traffic, and bicycle parking. In order to get the exact traffic congestion per street, avenue, and boulevard in San Francisco, we utilized a data layer in the Mapbox API. We specified all possible locations within SF and made requests for geoJSON data that is represented through each marker. Green stands for low congestion, yellow stands for average congestion, and red stands for high congestion. This data layer was classified as a vector tile in the Mapbox API. Consolidating pedestrian foot traffic data was an interesting task to handle, since this data is heavily locked in enterprise software tools. There are existing open-source datasets posted by regional governments, but none of them are specific enough to produce 20+ or so heat maps of high foot traffic areas in a 15-mile radius. Thus, we utilized the Best Time API to index a diverse range of locations (e.g. restaurants, bars, activities, tourist spots, etc.) so our heat maps would not be biased towards a certain style of venue, capturing information relevant to all audiences.
We then cross-validated that data with Walk Score (the most trusted site for gaining walkability scores on specific addresses). We then ranked these areas and rendered heat maps on Mapbox to showcase density. San Francisco's government open-sources extremely useful data on all of the locations where bike parking has been installed in the past few years. We ensured that the data had been well maintained and had preserved its quality over the past few years, so we don't over- or underrepresent certain areas. This was enforced by recent updates in the past 2 months that deemed the data accurate, so we added the geographic data as a new layer in our app. Each bike parking spot installed by the SF government is represented by a little bike icon on the map! **The most valuable feature** is that the user can navigate to any location and prompt CityGo to produce a hyper-realistic augmented image resembling that location with added infrastructure improvements to make the area more walkable. Seeing the Street View of that location, which you can move around in and see real-time information, and being able to envision the end product is the final bridge in an urban developer's planning process, ensuring that walkability is within our near future. ## How we built it We utilized the React framework to organize our project's state variables, components, and state transfer. We also used it to build custom components, like the one that conditionally renders a live panoramic street view of the given location or renders information retrieved from various data entry points. To create the map on the left, our team used Mapbox's API to style the map and integrate the heat map visualizations with existing data sources. In order to create the markers that corresponded to specific geometric coordinates, we utilized Mapbox GL JS (their JavaScript library) and third-party React libraries. To create the Google Maps panoramic Street View, we integrated our backend geometric coordinates with Google Maps' API so there could be an individual rendering of each location. We supplemented this with third-party React libraries for better error handling, feasibility, and visual appeal. The panoramic street view was extremely important to include because urban planners need context on spatial configurations to develop designs that integrate well into the existing communities. We created a custom function and used the required HTTP route (in PHP) to grab data from the Walk Score API with our JSON server so it could provide specific Walkability Scores for every marker on our map. Text generation from OpenAI's text completion API was used to produce location-specific suggestions on walkability. For whatever marker a user clicked, the address was plugged in as a variable to a prompt that lists out 5 suggestions specific to that place within a 500-foot radius. This process opened us to the difficulties and rewarding aspects of prompt engineering, enabling us to get more actionable and location-specific results than the generic alternative. Additionally, we give the user the option to generate a potential view of the area with optimal walkability conditions using a variety of OpenAI models. We have created our own API using the Flask API development framework for Google Street View analysis and optimal scene generation.
**Here's how we were able to get true image generation to work:** When the user prompts the website for a more walkable version of the current location, we grab an image of the Google Street View and implement our own architecture using OpenAI's contrastive language-image pre-training (CLIP) image and text encoders to encode both the image and a variety of potential descriptions describing the traffic, use of public transport, and pedestrian walkways present within the image. The outputted embeddings for both the image and the bodies of text were then compared with each other using scaled cosine similarity to output similarity scores. We then tag the image with the necessary descriptors, like classifiers; this is our way of making the system understand the semantic meaning behind the image and prompt potential changes based on very specific street views (e.g. the Painted Ladies in San Francisco might have a high walkability score via the Walk Score API, but could potentially need larger sidewalks to further improve transport and the ability to travel in that region of SF). This is significantly more accurate than simply using DALL-E's set image generation parameters with an unspecific prompt based purely on the walkability score, because we are incorporating both the uploaded image for context and descriptor text embeddings to accurately provide a hyper location-specific augmented image. A descriptive prompt is constructed from this semantic image analysis and fed into DALL-E, a diffusion-based image generation model conditioned on textual descriptors. The resulting images are higher quality, as they preserve structural integrity to resemble the real world, and effectively implement the necessary changes to make specific locations optimal for travel. We used Tailwind CSS to style our components. ## Challenges we ran into There were existing data bottlenecks, especially with getting accurate, granular pedestrian foot traffic data. The main challenge we ran into was integrating the necessary OpenAI models + API routes. Creating a fast, seamless pipeline that provided the user with as much mobility and autonomy as possible required that we make use of not just the Walk Score API, but also map and geographical information from maps + Google Street View. Processing both image and textual information pushed us to explore using the CLIP pre-trained text and image encoders to create semantically rich embeddings which can be used to relate ideas and objects present within the image to textual descriptions. ## Accomplishments that we're proud of We could have done just normal image generation, but we were able to detect the car, people, and public transit concentrations present in an image, assign them a numerical score, and then match that with a hyper-specific prompt for generating an image based on that information. This enabled us to make our own metrics for a given scene; we wonder how this model could be used in the real world to speed up or completely automate the data collection pipeline for local governments. ## What we learned and what's next for CityGO Utilizing multiple data formats and sources to cohesively show up on the map + provide accurate suggestions for walkability improvement was important to us because data is the backbone of this idea. Properly processing the right pieces of data at the right step in the system and presenting the proper results to the user was of utmost importance.
We definitely learned a lot about keeping data lightweight, easily transferring between third-party software, and finding relationships between different types of data to synthesize a proper output. We also learned quite a bit by implementing OpenAI's CLIP image and text encoders for semantic tagging of images with specific textual descriptions describing car, public transit, and people/crosswalk concentrations. It was important for us to plan out a system architecture that effectively utilized advanced technologies for a seamless end-to-end pipeline. We learned how information abstraction (i.e. converting between images and text and finding relationships between them via embeddings) can play to our advantage, and how to utilize different AI models for intermediate processing. In the future, we plan on integrating a better visualization tool to produce more realistic renders and introducing an inpainting feature so that users have the freedom to select a specific view on Street View, be given recommendations, and implement very specific changes incrementally. We hope that this will allow urban planners to more effectively implement design changes to urban spaces by receiving an immediate visual + seeing how a specific change seamlessly integrates with the rest of the environment. Additionally, we hope to do a neural radiance field (NERF) integration with the produced "optimal" scenes to give the user the freedom to navigate through the environment within the NERF to visualize the change (e.g. adding a bike lane, expanding a sidewalk, or shifting the build site for a building). A potential virtual reality platform would provide an immersive experience for urban planners to effectively receive AI-powered layout recommendations and instantly visualize them. Our ultimate goal is to integrate an asset library and use NERF-based 3D asset generation to allow planners to generate realistic and interactive 3D renders of locations with AI-assisted changes to improve walkability. The result: one end-to-end pipeline for visualizing an area, identifying potential changes, visualizing said changes using image generation + 3D scene editing/construction, and quickly iterating through different design cycles to create an optimal solution for a specific location's walkability as efficiently as possible!
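For reference, the CLIP scoring step described above can be sketched with the openly available CLIP checkpoint; the descriptor strings are illustrative examples, not CityGO's actual list:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

descriptions = [
    "a street crowded with car traffic",
    "a street with a protected bike lane",
    "a wide sidewalk full of pedestrians",
]

def tag_scene(image_path: str):
    image = Image.open(image_path)
    inputs = processor(text=descriptions, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image holds CLIP's scaled cosine similarities per description.
    scores = outputs.logits_per_image.softmax(dim=-1)[0]
    return sorted(zip(descriptions, scores.tolist()), key=lambda p: -p[1])
```

The top-scoring descriptors become the tags that shape the image-generation prompt for that specific street view.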
## Inspiration Coming from North Carolina, which was recently hit by Hurricane Helene, I always wondered whether we were safe living in Raleigh. This was the perfect opportunity to find out. ## What it does By inputting an address (even just the street), you can find the chance of a flood in your area and which zone your area falls under. ## How we built it We used JavaScript for the frontend and Flask for the backend. The inputted address is passed as a parameter to the LightBox API, which then returns a response. We extract the necessary details from the response and display them in a neat manner. ## Challenges we ran into Incorporating fetch.ai with LightBox to power a map that would advise users on how to get to the nearest safe location if they were in a flood-prone area. (We were also working on an autonomous recycling marketplace before, but had issues with getting agents to match buy/sell bids.) ## Accomplishments that we're proud of A new lookup tool! ## What we learned How the frontend and backend communicate! ## What's next for Save my area Expanding to more endpoints, including maps, AI agents, etc.
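A minimal sketch of the backend route, assuming a hypothetical LightBox endpoint, header, and response fields (the real route and payload shape should come from the LightBox docs):

```python
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

LIGHTBOX_URL = "https://api.lightbox.example/flood-risk"  # placeholder endpoint
LIGHTBOX_KEY = "YOUR_API_KEY"                             # placeholder key

@app.route("/flood-risk")
def flood_risk():
    address = request.args.get("address", "")
    resp = requests.get(LIGHTBOX_URL, params={"text": address},
                        headers={"x-api-key": LIGHTBOX_KEY})
    data = resp.json()
    # Field names below are assumptions about the response shape.
    return jsonify({"address": address,
                    "flood_zone": data.get("floodZone"),
                    "flood_chance": data.get("floodChance")})
```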
winning
## Our Inspiration We were inspired by apps like Duolingo and Quizlet for language learning, and wanted to extend those experiences to a VR environment. The goal was to gamify the entire learning experience and make it immersive, all while providing users with the resources to dig deeper into concepts. ## What it does EduSphere is an interactive AR/VR language learning VisionOS application designed for the new Apple Vision Pro. It contains three fully developed features: a 3D popup game, a multilingual chatbot, and an immersive learning environment. It leverages the visually compelling and intuitive nature of the VisionOS system to target three of the most crucial language learning styles: visual, kinesthetic, and literacy - allowing users to truly learn at their own comfort. We believe the immersive environment will make language learning even more memorable and enjoyable. ## How we built it We built the VisionOS app using the beta development kit for the Apple Vision Pro. The front-end and AR/VR components were made using Swift, SwiftUI, Alamofire, RealityKit, and a concurrent MVVM design architecture. 3D models were converted through Reality Converter into .usdz files for AR modelling. We stored these files in a Google Cloud Storage bucket, with their corresponding metadata in CockroachDB. We used a microservice architecture for the backend, creating various scripts involving Python, Flask, SQL, and Cohere. To control the Apple Vision Pro simulator, we linked a Nintendo Switch controller for interaction in 3D space. ## Challenges we ran into Learning to build for the VisionOS was challenging, mainly due to the lack of documentation and libraries available. We faced various problems with 3D modelling, colour rendering, and databases, as it was difficult to navigate this new space without references or sources to fall back on. We had to build many things from scratch while discovering the limitations of the beta development environment. Debugging certain issues also proved to be a challenge. We also really wanted to try using eye tracking or hand gesturing technologies, but unfortunately, Apple hasn't released these yet without a physical Vision Pro. We would be happy to try out these cool features in the future, and we're definitely excited about what's to come in AR/VR! ## Accomplishments that we're proud of We're really proud that we were able to get a functional app working on the VisionOS, especially since this was our first time working with the platform. The use of multiple APIs and 3D modelling tools was also the amalgamation of all our interests and skillsets, which was really rewarding to see come to life.
## Inspiration Aravind doesn't speak Chinese. When Nick and Jon speak in Chinese, Aravind is sad. We want to solve this problem for all the Aravinds in the world -- and not just for Chinese, but for any language! ## What it does TranslatAR allows you to see English (or any other language of your choice) subtitles when you speak to people speaking a foreign language. This is an augmented reality app, which means the subtitles will appear floating in front of you! ## How we built it We used Microsoft Cognitive Services' Translation APIs to transcribe speech and then translate it. To handle the augmented reality aspect, we created our own AR device by combining an iPhone, a webcam, and a Google Cardboard. In order to support video capturing along with multiple microphones, we multithread all our processes. ## Challenges we ran into One of the biggest challenges we faced was trying to add the functionality to handle multiple input sources in different languages simultaneously. We eventually solved it with multithreading, spawning a new thread to listen, translate, and caption for each input source. ## Accomplishments that we're proud of Our biggest achievement is definitely multithreading the app to be able to translate many different languages at the same time using different endpoints. This makes real-time multilingual conversations possible! ## What we learned We familiarized ourselves with the Cognitive Services API and were also able to create our own AR system, which works very well, from scratch using OpenCV and the Python Imaging Library. ## What's next for TranslatAR We want to launch this app in the App Store so people can replicate VR/AR on their own phones with nothing more than an app and an internet connection. It also helps a lot of people whose relatives/friends speak other languages.
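The thread-per-source pattern is easy to sketch; here transcribe() and translate() are stubs standing in for the Cognitive Services calls, not the real client code:

```python
import threading
import time

def transcribe(source_id: int, language: str) -> str:
    time.sleep(1)  # stand-in for blocking on the speech-to-text stream
    return f"[speech from mic {source_id} in {language}]"

def translate(text: str, target: str) -> str:
    return f"[{target}] {text}"  # stand-in for the Translation API call

def caption_loop(source_id: int, language: str) -> None:
    while True:  # each source gets its own listen -> translate -> caption loop
        print(translate(transcribe(source_id, language), "en"))

for sid, lang in [(0, "zh-CN"), (1, "es-ES")]:
    threading.Thread(target=caption_loop, args=(sid, lang), daemon=True).start()
time.sleep(3)  # let the demo threads run briefly
```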
## Inspiration As first-year students, we have experienced the difficulties of navigating our way around our new home. We wanted to facilitate the transition to university by helping students learn more about their campus. ## What it does A social media app for students to share daily images of their campus and earn points by accurately guessing the locations of their friends' images. After guessing, students can explore the location in full with detailed maps, including the interiors of university buildings. ## How we built it The Mapped-in SDK was used to display user locations in relation to surrounding buildings and help identify different campus areas. React was used to build a mobile website, as the SDK was unavailable for mobile. Express and Node power the backend, with MongoDB Atlas as the database for its flexible datatypes. ## Challenges we ran into * Developing in an unfamiliar framework (React) after learning that Android Studio was not feasible * Bypassing CORS permissions when accessing the user's camera ## Accomplishments that we're proud of * Using a new SDK purposefully to address an issue that was relevant to our team * Going through the development process and gaining a range of experiences over a short period of time ## What we learned * Planning time effectively and redirecting our goals accordingly * How to learn by collaborating with everyone from team members to SDK experts, as well as reading documentation * Our tech stack ## What's next for LooGuessr * Creating more social elements, such as a global leaderboard/tournaments, to increase engagement beyond first years * Considering freemium components, such as extra guesses, 360-view, and interpersonal wagers * Showcasing a 360-picture view by stitching together a video recording from the user * Addressing privacy concerns with image face blur and an option to delay showing the image
winning
## Problem In these times of isolation, many of us developers are stuck inside, which makes it hard for us to work with our fellow peers. We also miss the times when we could just sit with our friends and collaborate to learn new programming concepts. But finding the motivation to do the same alone can be difficult. ## Solution To solve this issue we have created an easy-to-connect, all-in-one platform where you and your developer friends can come together to learn, code, and brainstorm. ## About Our platform provides a simple yet efficient user experience with a straightforward, easy-to-use one-page interface. We made it one page to give access to all the tools on one screen and make transitions between them easier. We identify this page as a study room where users can collaborate and join with a simple URL. Everything is synced between users in real-time. ## Features Our platform allows multiple users to enter one room and access tools like watching YouTube tutorials, brainstorming on a drawable whiteboard, and coding in our built-in browser IDE, all in real time. This platform makes collaboration between users seamless and also pushes them to become better developers. ## Technologies you used for both the front and back end We use Node.js and Express for the backend. On the front end, we use React. We use Socket.IO to establish bi-directional communication between them. We deployed the app using Docker and Google Kubernetes Engine to automatically scale and balance loads. ## Challenges we ran into A major challenge was collaborating effectively throughout the hackathon. A lot of the bugs we faced were solved through discussions. We realized communication was key for us to succeed in building our project under a time constraint. We ran into performance issues while syncing data between two clients, where we were sending too much data or too many broadcast messages at the same time. We optimized the process significantly for smooth real-time interactions. ## What's next for Study Buddy While we were working on this project, we came across several ideas that this could be a part of. Our next step is to have each page categorized as an individual room that users can visit. We plan to add more relevant tools and widgets, expand to other fields of work to broaden our user demographic, and include interface customization options so users can personalize their rooms. Try it live here: <http://35.203.169.42/> Our hopeful product in the future: <https://www.figma.com/proto/zmYk6ah0dJK7yJmYZ5SZpm/nwHacks_2021?node-id=92%3A132&scaling=scale-down> Thanks for checking us out!
## Inspiration Our inspiration for this project was our experience as students. We believe students need a more digestible feed when it comes to due dates. Having to manually plan for homework, projects, and exams can be annoying and time-consuming. StudyHedge is here to lift the scheduling burden off your shoulders! ## What it does StudyHedge uses your Canvas API token to compile a list of upcoming assignments and exams. You can configure a profile detailing personal events, preferred study hours, the number of assignments to complete in a day, and more. StudyHedge combines this information to create a manageable study schedule for you. ## How we built it We built the project using React (front end), Flask (back end), Firebase (database), and Google Cloud Run. ## Challenges we ran into Our biggest challenge resulted from difficulty connecting Firebase and FullCalendar.io. Due to inexperience, we were unable to resolve this issue in the given time. We also struggled with using the Eisenhower Matrix to come up with the right formula for weighting assignments. We discovered that there are many ways to do this. After exploring various branches of mathematics, we settled on a simple formula (Rank = weight / time²). ## Accomplishments that we're proud of We are incredibly proud that we have a functional back end and that our UI is visually similar to our wireframes. We are also excited that we performed so well together as a newly formed group. ## What we learned Keith used React for the first time. He learned a lot about responsive front-end development and managed to create a remarkable website despite encountering some issues with third-party software along the way. Gabriella designed the UI and helped code the front end. She learned about input validation and designing features to meet functionality requirements. Eli coded the back end using Flask and Python. He struggled with using Docker to deploy his script but managed to conquer the steep learning curve. He also learned how to use the Twilio API. ## What's next for StudyHedge We are extremely excited to continue developing StudyHedge. As college students, we hope this idea can be as useful to others as it is to us. We want to scale this project and eventually expand its reach to other universities. We'd also like to add more personal customization and calendar integration features. We are also considering implementing AI suggestions.
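Since the write-up names its formula explicitly, the ranking step is easy to sketch. This is an illustration of Rank = weight / time², not StudyHedge's actual code; the field names are assumptions:

```python
# Sketch of the stated ranking rule: heavier assignments due sooner
# float to the top of the generated study schedule.
from dataclasses import dataclass

@dataclass
class Assignment:
    name: str
    weight: float          # percent of final grade
    hours_until_due: float

def rank(a: Assignment) -> float:
    # Guard against a due date that is effectively "now".
    return a.weight / max(a.hours_until_due, 0.5) ** 2

tasks = [Assignment("essay", 20, 72), Assignment("quiz", 5, 12)]
schedule = sorted(tasks, key=rank, reverse=True)  # quiz first here
```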
## Inspiration The application was developed for those who do not know exactly which healthcare institution they should visit for a health concern. The healthcare industry loses millions of dollars every year when people are hospitalized in the ER simply because they did not know where to admit themselves. Having all resources available to you, including clinics and mobile services, allows you to make an educated choice. ## What it does The application collects all the data that your healthcare applications contain, and if any of the areas considered "standard" for your lifestyle experience a spike, you get notified and categorized. You then get a recommendation for certain facilities near you. ## How I built it ## Challenges I ran into ## Accomplishments that I'm proud of ## What I learned ## What's next for Carrier
winning
## Inspiration More creators are coming online to create entertaining content for fans across the globe. On platforms like Twitch and YouTube, creators have amassed billions of dollars in revenue thanks to loyal fans who return to be part of the experiences they create. Most of these experiences feel transactional, however: Twitch creators mostly generate revenue from donations, subscriptions, and currency like "bits," and Twitch often takes a hefty 50% of the revenue from the transaction. Creators need something new in their toolkit. Fans want to feel like they're part of something. ## Purpose Moments enables creators to instantly turn on livestreams that can be captured as NFTs by live fans at any moment, powered by Livepeer's decentralized video infrastructure network. > "That's a moment." During a stream, there often comes a time when fans want to save a "clip" and share it on social media for others to see. When such a moment happens, the creator can press a button and all fans will receive a non-fungible token in their wallet as proof that they were there for it, stamped with their viewer number during the stream. Fans can rewatch video clips of their saved moments on their Inventory page. ## Description Moments is a decentralized streaming service that allows streamers to save and share their greatest moments with their fans as NFTs. Using Livepeer's decentralized streaming platform, anyone can become a creator. After fans connect their wallet to watch streams, creators can mass-send their viewers tokens of appreciation in the form of NFTs (a short highlight clip from the stream, a unique badge, etc.). Viewers can then build their collection of NFTs through their inventory. Many streamers and content creators have short viral moments that get shared amongst their fanbase. With Moments, a bond is formed through the issuance of exclusive NFTs to the viewers that supported creators at their milestones. An integrated chat offers many emotes for viewers to interact with as well.
## Inspiration One of the greatest challenges facing our society today is food waste. From an environmental perspective, Canadians waste about *183 kilograms of solid food* per person, per year. This amounts to more than six million tonnes of food a year, wasted. From an economic perspective, this amounts to *$31 billion worth of food wasted* annually. For our hack, we wanted to tackle this problem and develop an app that would help people across the world do their part in the fight against food waste. We wanted to work with voice recognition and computer vision, so we used these tools to develop a user-friendly app to help track and manage food and expiration dates. ## What it does greenEats is an all-in-one grocery and food waste management app. With greenEats, logging your groceries is as simple as taking a picture of your receipt or listing out purchases with your voice as you put them away. With this information, greenEats holds an inventory of your current groceries (called My Fridge) and notifies you when your items are about to expire. Furthermore, greenEats can even make recipe recommendations based on items you select from your inventory, inspiring creativity while promoting the usage of items closer to expiration. ## How we built it We built an Android app with Java, using Android Studio for the front end and Firebase for the backend. We worked with Microsoft Azure Speech Services to get our speech-to-text software working, and the Firebase ML Kit Vision API for optical character recognition of receipts. We also wrote a custom API with stdlib that takes ingredients as inputs and returns recipe recommendations (the ranking idea is sketched after this write-up). ## Challenges we ran into With all of us being completely new to cloud computing, it took us around four hours just to get our environments set up and start coding. Once we had our environments set up, we were able to take advantage of the help here and work our way through. When it came to reading the receipt, it was difficult to isolate only the desired items. For the custom API, the most painstaking task was managing the HTTP requests. Because we were new to Azure, it took us some time to get comfortable with it. To tackle these tasks, we decided to split up and handle them one-on-one. Alex worked on scanning the receipt, Sarvan built the custom API, Richard integrated the voice recognition, and Maxwell did most of the app development in Android Studio. ## Accomplishments that we're proud of We're super stoked that we offer three completely different grocery input methods: camera, speech, and manual input. We believe that the UI we created is very engaging and presents the data in a helpful way. Furthermore, we think that the app's ability to provide recipe recommendations really puts us over the edge and shows how we took on a wide variety of tasks in a small amount of time. ## What we learned For most of us, this is the first application we have built. We learned a lot about how to create a UI and how to consider mobile functionality. Furthermore, this was also our first experience with cloud computing and APIs. Creating our Android application introduced us to the impact these technologies can have, and how simple it really is for someone to build a fairly complex application. ## What's next for greenEats We originally intended this to be an all-purpose grocery-management app, so we wanted to have a feature that could allow the user to easily order groceries online through the app, potentially based on food that would expire soon.
We also wanted to implement a barcode scanner, using the Barcode Scanner API offered by Google Cloud, providing yet another input option for a more user-friendly experience. In addition, we wanted to transition to the Firebase Realtime Database to refine the user experience. These tasks fell outside of our scope because of time constraints, so we decided to focus our efforts on the fundamental parts of the app.
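The custom recipe API described above takes ingredients in and returns recommendations; here is a hypothetical sketch of one way such a ranking could work, favoring items closer to expiry. The data and scoring rule are illustrative, not greenEats' implementation:

```python
# Score recipes by how many of the user's ingredients they use, weighting
# ingredients that expire soonest. All data here is made up.
def score(recipe_ingredients: list[str], fridge: dict[str, int]) -> float:
    # fridge maps ingredient -> days until expiry
    total = 0.0
    for item in recipe_ingredients:
        if item in fridge:
            total += 1.0 / max(fridge[item], 1)  # sooner expiry => bigger boost
    return total

fridge = {"spinach": 2, "eggs": 10, "milk": 4}
recipes = {
    "frittata": ["eggs", "spinach", "milk"],
    "pancakes": ["eggs", "milk", "flour"],
}
best = max(recipes, key=lambda name: score(recipes[name], fridge))
print(best)  # "frittata", since it uses the spinach expiring in 2 days
```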
## Inspiration Each year, art forgery causes over **$6 billion in losses**. Museums, for example, cannot afford such detrimental costs. In an industry that has spanned centuries, it is crucial that transactions of art pieces can be completed securely and confidently. Introducing **Artful, a virtual marketplace for physical art which connects real-life artworks to secure NFTs on our own private blockchain**. With the power of the blockchain, the legitimacy of high-value real-world art pieces can be verified through an elegant and efficient web application, which, with its scalability and decentralized framework, also serves as an efficient and secure medium of art dealing for artists worldwide. ## What it does To join our system, art owners can upload a picture of their art through our portal. Once authenticated in person by our team of art verification experts, the art piece is automatically turned into an NFT and uploaded to the Eluv.io Ethereum blockchain. In essence, ownership of the NFT represents ownership of the real-life artwork. From this point on, prospective buyers no longer need to hire expensive consultants, who charge hundreds of thousands to millions of dollars: they can simply visit the piece on our web app and purchase it with full confidence in its legitimacy. Artful serves a second purpose, namely for museums. According to the Museum Association, museum grant funding has dropped by over 20% over the last few years. As a result, museums have been forced to drop collections entirely, preventing public citizens from appreciating their beauty. Artful enables museums to create NFT and experiential bundles, which can be sold to the public as a method of fundraising. Through the Eluv.io fabric, experiences ranging from AR trips to games can be easily deployed on the blockchain, allowing museums to sustain their offerings for years to come. ## How we built it We built a stylish and sleek frontend with Next.js, React, and Material UI. For our main backend, we utilized Node.js and CockroachDB. At the core of our project is Eluv.io, powering our underlying Ethereum blockchain and personal marketplace. ## Challenges we ran into Initially, our largest challenge was developing a private blockchain that we could use for development. We tested out various services, and ensuring the packages worked as expected was a common obstacle. Additionally, we were attempting to develop a custom NFT transfer smart contract with Solana, which was quite difficult. However, we soon found Eluv.io, which eliminated these challenges and allowed us to focus development on our own platform. Overall, our largest challenge was automation. Specifically, the integration of automatically processing an uploaded image, utilizing the Eluv.io content fabric to create marketplaces and content objects in a manner that worked well with our existing frontend modules, generating the NFTs using an automated workflow, and publishing the NFT to the blockchain proved to be quite difficult due to the number of moving parts. ## Accomplishments that we're proud of We're incredibly proud of the scope and end-to-end completion of our website and application. Specifically, we've made a functional, working system which users can (today!) use to upload and purchase art through NFTs on our marketplace on an automated, scalable basis, including functionality for transactions, proof of ownership, and public listings.
While it may have been possible to quit in the face of relentless issues in the back-end coding and instead pursue a more theoretical approach (in which we suggest functionality rather than implement it), we chose to persist, and it paid off. The whole chain of commands, which previously required manual input through the command line and terminal, has been condensed into an automated process and contained in a single file of new code. ## What we learned By initially starting with a from-scratch private Ethereum blockchain using geth and smart contracts in Solidity, we developed a grounding in how blockchains actually work and the extensive infrastructure that enables decentralization. Moreover, we recognized the power of APIs in using Eluv.io's architecture after learning it from the ground up. The theme of our project was fundamentally integration: finding ways to integrate our frontend user authentication with the backend Eluv.io Ethereum blockchain, and seeing how to integrate the Eluv.io interface with our own custom web app. There were many technical challenges along the way in learning a whole new API, but through this journey, we feel much more comfortable with both our ability as programmers and our understanding of blockchain, a topic with which, before this hackathon, none of us had really developed familiarity. By talking a lot with the Eluv.io CEO and founder, who helped us tremendously with our project, we learned a lot about their goals and aspirations, and we can safely say that we've emerged from this hackathon with a much deeper appreciation and curiosity for blockchain and the use cases of dapps. ## What's next for Artful Artful plans to incorporate direct communication with museums by building a more robust fundraising network, where donors can contribute to the restoration of art or the renovation of an exhibit by purchasing one of the many available copies of a specific NFT. We have also begun implementing a database and blockchain tracking system, which museums can purchase to streamline their global collaboration as they showcase especially famous pieces on a rotating basis. Fundamentally, we hope that our virtual center can act as a centralized hub for high-end art transfer worldwide, which, through blockchain's security, ensures the solidity (haha) of transactions and will redefine the art buying and selling industry. Our website also acts as a proof of credibility: by connecting transactions with corresponding NFTs on the blockchain, we can ensure that every transaction occurring on our website is credible, so that as it scales up, dealing art underhandedly outside our website represents a loss of credibility. And most importantly, by integrating decentralization into the way high-end art NFTs are stored, we hope to bring the beautiful yet esoteric world of art to more people, further creating a platform for up-and-coming artists to establish their mark on a new age of digital creators.
winning
## Inspiration Emergency situations can be extremely sudden and can seem paralyzing, especially for young children. In most cases, children from the ages of 4-10 are unaware of how to respond to a situation that requires contact with first responders, and of what the most important information to communicate is. In the case of a parent or guardian having a health issue, children are left feeling helpless. We wanted to give children the confidence that is key to their healthy cognitive and social development by empowering them with the knowledge of how to quickly and accurately respond in emergency situations, which is why we created Hero Alert. ## What it does Our product provides a tangible device for kids to interact with, guiding them through the process of making a call to 9-1-1 emergency services. A conversational AI bot uses natural language understanding to listen to the child's responses and tailor the conversation accordingly, creating a sense that the child is talking to a real emergency operator. Our device has multiple positive impacts: the educational aspect of encouraging children's cognitive development and preparing them for serious, real-life situations; giving parents more peace of mind, knowing that their child can respond to dire situations; and providing a diverting, engaging game for children to feel like their favorite Marvel superhero while taking the necessary steps to save the day! ## How we built it On the software side, our first step was to find images from comic books that closely resemble real-life emergency and crisis scenarios. We implemented our own comic classifier with the help of IBM Watson's visual recognition service, classifying and re-tagging images made available by Marvel's Comics API into crisis categories such as fire, violence, water disasters, or unconsciousness. The physical device randomly retrieves and displays these image objects from an mLab database each time a user mimics a 9-1-1 call. We used the Houndify conversational AI by SoundHound to interpret the voice recordings and generate smart responses. Different emergency scenarios were stored as pages in Houndify, and different responses from the child were stored as commands. We used Houndify's Smart Expressions to build up potential user inputs and ensure the correct output was sent back to the Pi. Running on the Pi was a series of Python scripts, a command engine and an interaction engine, that enabled the flow of data and verified the child's input. On the hardware end, we used a Raspberry Pi 3 connected to a Sony Eye camera/microphone to record audio and a small HDMI monitor to display a tagged Marvel image. The telephone's 9-1-1 digits were input via tactile buttons connected to the Pi's GPIO pins (a sketch of this follows the write-up). All of the electronics were encapsulated in a custom laser-cut box that acted as both a prototype for a children's toy and as protection for the electronics. ## Challenges we ran into The comics from the Marvel API are hand-drawn and don't come with detailed descriptions, so we had a tough time training a general model to match pictures to each scenario. We ended up creating a custom classifier with IBM Watson's visual recognition service, using a few pre-selected images from Marvel, then applied that to the entirety of the Marvel image set to diversify our selection. The next challenge was creating a conversational logic flow that could be applied to the variety of statements a child might say while on the phone.
We created several scenarios that involved numerous potential emergency situations and used Houndify's Smart Expressions to evaluate the response from a child. Matching statements to these expressions allowed us to understand the conversation and provide custom feedback and responses throughout the mock phone call. We also wanted to make sure that we provide a sense of empowerment for the child. While they should not make unnecessary calls, children should not be afraid or anxious to talk with emergency services during an emergency. We want them to feel comfortable, capable, and strong enough to make that call and help the situation they are in. Our use of Marvel Comics allowed us to provide some sense of superpower to the children during the calls. ## Accomplishments that we're proud of Our end product works smoothly and simulates an actual conversation for a variety of crisis scenarios, while providing words of encouragement and an unconventional approach to emergency response. We used a large variety of APIs and platforms and are proud that we were able to have all of them work with one another in a unified product. ## What we learned We learned that the ideation process and collaboration are key in solving any wicked problem that exists in society. We also learned that having a multidisciplinary team with very diverse backgrounds and skill sets provides the most comprehensive contributions and challenges us both as individuals and as a team. ## What's next for Hero Alert! We'd love to get more user feedback and continue development and prototyping of the device in the future, so that one day it will be available on store shelves.
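The write-up mentions that the 9-1-1 digits are buttons wired to the Pi's GPIO pins; a minimal sketch of that check with the RPi.GPIO library might look like this. The pin numbers and dial-sequence handling are assumptions, not the team's wiring:

```python
# Sketch of the dial-pad check on the Raspberry Pi; hypothetical BCM pins.
import RPi.GPIO as GPIO

PIN_9, PIN_1 = 17, 27           # one push button per distinct digit
SEQUENCE = [PIN_9, PIN_1, PIN_1]  # press 9, then 1, then 1

GPIO.setmode(GPIO.BCM)
for pin in (PIN_9, PIN_1):
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def wait_for_911() -> None:
    # Block until the child presses the digits in order, then hand off
    # to the interaction engine that runs the mock call.
    for pin in SEQUENCE:
        GPIO.wait_for_edge(pin, GPIO.FALLING, bouncetime=200)  # press pulls low
```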
## Inspiration The increasing frequency and severity of natural disasters such as wildfires, floods, and hurricanes have created a pressing need for reliable, real-time information. Families, NGOs, emergency first responders, and government agencies often struggle to access trustworthy updates quickly, leading to delays in response and aid. Inspired by the need to streamline and verify information during crises, we developed DisasterAid.ai to provide concise, accurate, and timely updates. ## What it does DisasterAid.ai is an AI-powered platform that consolidates trustworthy live updates about ongoing crises and packages them into summarized info-bites. Users can ask specific questions about crises like the New Mexico wildfires and floods to gain detailed insights. The platform also features an interactive map with pin drops indicating the precise coordinates of events, enhancing situational awareness for families, NGOs, emergency first responders, and government agencies. ## How we built it 1. Data Collection: We queried You.com to gather URLs and data on the latest developments concerning specific crises. 2. Information Extraction: We extracted critical information from these sources and combined it with data gathered through Retrieval-Augmented Generation (RAG). 3. AI Processing: The compiled information was fed into Anthropic's Claude 3.5 model (a sketch of this call follows the write-up). 4. Output Generation: The model produced concise summaries and answers to user queries, alongside generating pin drops on the map to indicate event locations. ## Challenges we ran into 1. Data Verification: Ensuring the accuracy and trustworthiness of the data collected from multiple sources was a significant challenge. 2. Real-Time Processing: Developing a system capable of processing and summarizing information in real time required sophisticated algorithms and infrastructure. 3. User Interface: Creating an intuitive and user-friendly interface that allows users to easily access and interpret the information presented by the platform. ## Accomplishments that we're proud of 1. Accurate Summarization: Successfully integrating AI to produce reliable and concise summaries of complex crisis situations. 2. Interactive Mapping: Developing a dynamic map feature that provides real-time location data, enhancing the usability and utility of the platform. 3. Broad Utility: Creating a versatile tool that serves diverse user groups, from families seeking safety information to emergency responders coordinating relief efforts. ## What we learned 1. Importance of Reliable Data: The critical need for accurate, real-time data in disaster management, and the complexities involved in verifying information from various sources. 2. AI Capabilities: The potential and limitations of AI in processing and summarizing vast amounts of information quickly and accurately. 3. User Needs: Insights into the specific needs of different user groups during a crisis, allowing us to tailor our platform to better serve those needs. ## What's next for DisasterAid.ai 1. Enhanced Data Sources: Expanding our data sources to include more real-time feeds and integrating social media analytics for even faster updates. 2. Advanced AI Models: Continuously improving our AI models to enhance the accuracy and depth of our summaries and responses. 3. User Feedback Integration: Implementing feedback loops to gather user input and refine the platform's functionality and user interface. 4.
Partnerships: Building partnerships with more emergency services and NGOs to broaden the reach and impact of Disasteraid.ai. 5. Scalability: Scaling our infrastructure to handle larger volumes of data and more simultaneous users during large-scale crises.
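The AI-processing step above, feeding compiled source text into Claude 3.5, can be sketched with the Anthropic Python SDK. The prompt wording, model snapshot name, and token limit here are assumptions, not the team's actual code:

```python
# Minimal summarization call with the Anthropic SDK.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarize(crisis: str, sources: list[str]) -> str:
    prompt = (
        f"Summarize the latest verified developments in the {crisis} "
        "as short info-bites, citing only the sources below.\n\n"
        + "\n".join(sources)
    )
    msg = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # assumed snapshot
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text
```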
## Inspiration Every year hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people are in the right places at the right times. We aim to connect people by giving them the opportunity to help each other in times of medical need. ## What it does It is a mobile application aimed at connecting members of our society in times of urgent medical need. Users can sign up as respondents, which allows them to be notified when people within a 300-meter radius are having a medical emergency. This can help users receive first aid prior to the arrival of an ambulance or healthcare professional, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive. ## How we built it The app is Android native and relies heavily on the Google Cloud Platform. User registration and authentication are done through Firebase Auth. Additionally, user data, locations, help requests, and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was also used to provide text recognition for the app's registration page: users can take a picture of their ID and have their information extracted. ## Challenges we ran into There were numerous challenges in handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users. ## Accomplishments that we're proud of We were able to build a functioning prototype! Additionally, we were able to track and update user locations in a MapFragment, and we ended up implementing things that we had never done before.
partial
## Overview People today are as connected as they've ever been, but there are still obstacles in communication, particularly for people who are deaf or mute and cannot communicate by speaking. Our app allows bi-directional communication between people who use sign language and those who speak. You can use your device's camera to talk using ASL, and our app will convert it to text for the other person to view. Conversely, you can also use your microphone to record your audio, which is converted into text for the other person to read. ## How we built it We used **OpenCV** and **Tensorflow** to build the Sign-to-Text functionality, using over 2,500 frames to train our model. For the Text-to-Sign functionality, we used **AssemblyAI** to convert audio files to transcripts. Both of these functions are written in **Python**, and our backend server uses **Flask** to make them accessible to the frontend. For the frontend, we used **React** (JS) and Material UI to create a visual and accessible way for users to communicate. ## Challenges we ran into * We had to re-train our models multiple times to get them to work well enough. * We switched from running our applications entirely on Jupyter (using Anvil) to a React app at the last minute ## Accomplishments that we're proud of * Using so many tools, languages and frameworks at once, and making them work together :D * Submitting on time (I hope? 😬) ## What's next for SignTube * Add more signs! * Use AssemblyAI's real-time API for more streamlined communication * Incorporate account functionality + storage of videos
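A minimal sketch of SignTube's sign-to-text loop, assuming a Keras-format model and a simple whole-frame classifier: grab a webcam frame with OpenCV, normalize it, and run it through the trained network. The model path, input size, and label list are placeholders, not the project's actual values:

```python
# Grab one frame and classify the sign it shows.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("asl_model.h5")  # hypothetical path
LABELS = ["A", "B", "C"]  # ... one entry per trained sign

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    x = cv2.resize(frame, (64, 64)).astype("float32") / 255.0
    probs = model.predict(x[np.newaxis, ...])  # shape (1, num_classes)
    print(LABELS[int(np.argmax(probs))])
cap.release()
```

In practice this would run on a timer over the video stream, with the predicted letters accumulated into words before being shown to the hearing participant.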
## Inspiration A week or so ago, Nyle DiMarco, the model/actor/deaf activist, visited my school and enlightened our students about his experience as a deaf person on shows like Dancing with the Stars (where he danced without hearing the music) and America's Next Top Model. Many people on those sets and in the cast could not use American Sign Language (ASL) to communicate with him, and so he had no idea what was going on at times. ## What it does SpeakAR listens to speech around the user and translates it into a language the user can understand. Especially for deaf users, the visual ASL translations are helpful because that is often their native language. ![Image of ASL](https://res.cloudinary.com/devpost/image/fetch/s--wWJOXt4_--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://az616578.vo.msecnd.net/files/2016/04/17/6359646757437353841666149658_asl.png) ## How we built it We utilized Unity's built-in dictation recognizer API to convert spoken word into text. Then, using a combination of GIFs and 3D-modeled hands, we translated that text into ASL. The scripts were written in C#. ## Challenges we ran into The Microsoft HoloLens requires very specific development tools, and because we had never used the laptops we brought to develop for it before, we had to set everything up from scratch. One of our laptops could not "Build" properly at all, so all four of us were stuck using only one laptop. Our original idea of implementing facial recognition could not work due to technical challenges, so we actually had to start over with a **completely new** idea at 5:30 PM on Saturday night. The time constraint, as well as the technical restrictions of Unity3D, reduced the number and quality of features we could include in the app. ## Accomplishments that we're proud of This was our first time developing with the HoloLens at a hackathon. We were completely new to using GIFs in Unity and displaying them on the HoloLens. We are proud of overcoming our technical challenges and adapting the idea to suit the technology. ## What we learned Make sure you have a laptop (with the proper software installed) that's compatible with the device you want to build for. ## What's next for SpeakAR In the future, we hope to implement reverse communication to enable those fluent in ASL to sign in front of the HoloLens and automatically translate their movements into speech. We also want to include foreign language translation so the app provides more value to **deaf and hearing** users. The overall quality and speed of translations can also be improved greatly.
## Inspiration We wanted to simplify communication between any user and a person who speaks mainly sign language. ## What it does In one direction, it converts sign language from camera input into text to be displayed to the user. In the other, it takes speech from the user and converts it into text to be displayed to the person who mainly speaks sign language. ## How we built it The entire front end was built using Vue.js, with speech recognition done using Chrome's Web Speech API. We built two machine learning models. The first is a frozen pre-trained model based on a convolutional neural network. The second was built using Microsoft's Custom Vision, with images taken and fed in manually. ## Challenges we ran into Getting a model that works well for detecting sign language. ## Accomplishments that we're proud of Getting a semi-working model for detecting sign language. ## What we learned Loads of machine learning knowledge. ## What's next for sli.ai Supporting display to multiple screens at once. Refining the machine learning model to be more accurate. Implementing text-to-speech for the sign language that's converted into text via the model.
winning
## Inspiration The inspiration for this project came from the group's passion for building health-related apps. While blindness is not necessarily something we can heal, it is something that we can combat with technology. ## What it does This app gives blind individuals the ability to live life with the same ease as any other person. Using beacon software, we are able to provide users with navigational information in heavily populated areas such as subways or museums. The app uses a simple UI that relies on different numeric swipes or taps to launch certain features. When the app opens, the interface is explained verbally in its entirety. One of the most useful portions of the app is a camera feature that allows users to snap a picture and instantly receive verbal cues describing what is in their environment. The navigation side of the app is what we primarily focused on, but as a fail-safe the Lyft API was implemented so users can order a car ride in a worst-case scenario. ## How we built it ## Challenges we ran into We ran into several challenges during development. One was attempting to use the Alexa Voice Services API for Android. We wanted to create a skill to be used within the app; however, there was a lack of documentation at our disposal and minimal time to bring it to fruition. Rather than eliminating this feature altogether, we collaborated to develop a fully functional voice command system that lets users command the application to call a Lyft to their location through the phone rather than through Alexa. Another issue we encountered was in dealing with the beacons. In a realistic public space, such as a subway station, the beacons would be placed at distances far enough apart to be individually recognized; in our confined demo space, however, the beacon detections overlapped, causing the user to receive multiple different directions simultaneously. Rather than using physical beacons, we leveraged a second mobile application that allows us to create beacons around us with an Android device. ## Accomplishments that we're proud of As always, we are a team of students who strive to learn something new at every hackathon we attend. We chose to build an ambitious series of applications within a short and concentrated time frame, and the fact that we were successful in making our idea come to life is what we are most proud of. Within our application, we worked around as many obstacles that came our way as possible. When we found out that Amazon Alexa wouldn't be compatible with Android, it served as a minor setback to our plan, but we quickly brainstormed a new idea. Additionally, we were able to develop a fully functional beacon navigation system with built-in voice prompts. We managed to develop a UI that is almost entirely nonvisual, using audio as our only interface. Given that our target user is blind, we had a lot of difficulty in developing this kind of UI because while we are adapted to visual cues and the luxury of knowing where to tap buttons on our phone screens, the visually impaired aren't. We had to keep this in mind throughout our entire development process, and so voice recognition and tap sequences became a primary focus. Reaching out of our own comfort zones to develop an app for a unique user was another challenge we successfully overcame.
## What's next for Lantern With a passion for improving health and creating easier accessibility for those with disabilities, we plan to continue working on this project and building off of it. The first thing we want to highlight is how easily adaptable the beacon system is. In this project we focused on the navigation of subway systems: knowing how many steps down to the platform, when the user has reached a safe distance from the train, and when the train is approaching. This idea could easily be brought to malls, museums, dorm rooms, etc. Anywhere that could pose a concern for the blind could benefit from adapting our beacon system to its location. The second future project we plan to work on is a smart walking stick that uses sensors and visual recognition to detect and announce what lies ahead, what could potentially be in the user's way, and what their surroundings look like, providing better feedback to the user to ensure they don't get misguided or lose their way.
## Inspiration During last year's Worldwide Developers Conference, Apple introduced a host of innovative frameworks (including but not limited to CoreML and ARKit) which placed traditionally expensive and complex operations such as machine learning and augmented reality in the hands of developers like myself. This incredible opportunity was one that I wanted to take advantage of at PennApps this year, and Lyft's powerful yet approachable API (and SDK!) struck me as the perfect match for ARKit. ## What it does Utilizing these powerful technologies, Wher integrates with Lyft to further enhance the process of finding and requesting a ride by improving ease of use, safety, and even entertainment. One issue that presents itself when using overhead navigation methods is, quite simply, the lack of a third dimension. A traditional overhead view tends to complicate on-foot navigation more than it may help, and even more importantly, requires the user to bury their face in their phone. This pulls attention from the user's surroundings and poses a threat to their safety, especially in busy cities. Wher resolves all of these concerns by bringing the Lyft experience into augmented reality, which allows users to truly see the location of their driver and destination, pay more attention to where they are going, and have a more enjoyable and modern experience in the process. ## How I built it I built Wher using several of Apple's frameworks, including ARKit, MapKit, CoreLocation, and UIKit, which allowed me to build the foundation for the app and the "scene" necessary to create and display an augmented reality plane. Using the Lyft API, I was able to gather information regarding available drivers in the area, including their exact position (in real time), cost, ETA, and the service they offered. This information was used to populate the scene and deep-link into the Lyft app itself to request a ride and complete the transaction. ## Challenges I ran into While both Apple's well-documented frameworks and Lyft's API reduced the learning required to take on the project, there were still several technical hurdles to overcome. The first was the Lyft API itself: while great in many respects, Lyft has yet to create a branch fit for use with Swift 4 and iOS 11 (required to use ARKit), which meant I had to rewrite portions of their LyftURLEncodingScheme and LyftButton classes in order to continue with the project. Another challenge was finding a way to represent variance in coordinates and 'simulate distance', so as to make the AR experience authentic. This, like the first challenge, became manageable with enough thought and math. One of the last significant challenges I encountered and overcame was drawing driver "bubbles" in the AR plane without graphics glitches. ## Accomplishments that I'm proud of Despite the many challenges this project presented, I am very happy that I persisted and worked to complete it. Most importantly, I'm proud of just how cool it is to see something so simple represented in AR, and how different it is from a traditional 2D view. I am also very proud to say that this is something I can see myself using any time I need to catch a Lyft. ## What I learned With PennApps being my first hackathon, I was unsure what to expect and what exactly I wanted to accomplish. As a result, I greatly overestimated how many features I could fit into Wher and was forced to cut back on what I could add.
In the end, I learned a lesson in managing expectations. ## What's next for Wher (with Lyft) In the short term, adding a social aspect that allows "friends" to organize and mark designated meet-up spots for a Lyft, to greatly simplify the process of a night out on the town. In the long term, I hope to be speaking with Lyft!
## Inspiration The idea for VenTalk originated from an everyday stressor that everyone on our team could relate to: commuting alone to and from class during the school year. After a stressful work or school day, we want to let out all our feelings and thoughts, but do not want to alarm or disturb our loved ones. Releasing built-up emotional tension is a highly effective form of self-care, but many people stay quiet so as not to become a burden on those around them. Over time, this takes a toll on one's well-being, so we decided to tackle this issue in a creative yet simple way. ## What it does VenTalk allows users to either chat with another user or request urgent mental health assistance. Based on their choice, they input how they are feeling on a mental health scale, or some topics they want to discuss with their paired user. The app searches for keywords and similarities to match two users who are looking to have a similar conversation (the matching idea is sketched below). VenTalk is completely anonymous and thus guilt-free, and chats are permanently deleted once both users have left the conversation. This allows users to get any stressors from their day off their chest and rejuvenate their bodies and minds, while still connecting with others. ## How we built it We began by building a framework in React Native and using Figma to design a clean, user-friendly app layout. After this, we wrote an algorithm to detect common words in the user inputs and pair up two users in the queue to start messaging. Then we integrated, tested, and refined the app. ## Challenges we ran into One of the biggest challenges we faced was learning how to interact with APIs and cloud programs. We had a lot of issues getting a reliable response from the web API we wanted to use, and a lot of requests just returned CORS errors. After some determination and a lot of hard work, we finally got the API working with Axios. ## Accomplishments that we're proud of In addition to the original plan for just messaging, we added a Helpful Hotline page with emergency mental health resources, in case a user is seeking professional help. We believe that since this app will be used when people are not in their best state of mind, it's a good idea to have some resources available to them. ## What we learned Something we got to learn more about was the impact of user interface on the mood of the user, and how different shades and colours are associated with emotions. We also discovered that having team members from different schools and programs creates a unique, dynamic atmosphere and a great final result! ## What's next for VenTalk There are many potential next steps for VenTalk. We are going to continue developing the app, making it compatible with iOS, and maybe even building a web app version. We also want to add more personal features, such as a personal locker of things that make you happy (such as a playlist, a subreddit, or a Netflix series).
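A hypothetical sketch of the keyword-matching step described above: score every pair of queued users by how many intake keywords they share, then pair the best match. The queue shape and tie-breaking are illustrative, not VenTalk's actual algorithm:

```python
# Pair the two queued users whose topic keywords overlap the most.
from itertools import combinations

def overlap(a: set, b: set) -> int:
    return len(a & b)

def pair_users(queue: dict) -> tuple | None:
    # queue maps user_id -> set of topic keywords from their intake form
    if len(queue) < 2:
        return None
    return max(combinations(queue, 2),
               key=lambda pair: overlap(queue[pair[0]], queue[pair[1]]))

queue = {"u1": {"exams", "stress"}, "u2": {"work"}, "u3": {"stress", "sleep"}}
print(pair_users(queue))  # ('u1', 'u3'), the pair sharing "stress"
```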
partial
## Inspiration Eve, the robot from WALL-E, for her intelligence. ## What it does It serves as a virtual assistant that helps EV drivers plan their trips ahead of time, understands their habits, and knows how to improve their driving experience. In case they need to stop somewhere on the road for charging, it also provides a tentative plan to spend good quality time, like drinking coffee, seeing the scenery, or having lunch, depending on the time of day, season, and region. ## How we built it We used Node.js and Express for our backend framework, making sure the structure is maintainable according to the MVC design. For searching and requesting routes and nearby locations of various types, we chose the Google Maps APIs, and to interpret human speech input and further automate the assistant, we used Cohere APIs for the speech-to-text and intent-detection process. ## Challenges we ran into Harnessing the complexity of the APIs, and figuring out how to simplify the client-server communication. ## Accomplishments that we're proud of We successfully integrated natural language processing into our solution to efficiently identify user intentions and direct the following steps, driven by AI. ## What we learned * Product design that solves problems for a certain group of people. Make good use of the resources you have that align with the given scenario. * Using R3E to integrate a 3D model with Next.js. ## What's next for Hello Eve Continue to integrate AI and improve the advice so that it is more ad hoc and considerate.
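Hello Eve calls the Google Maps APIs from Node; the same route-plus-nearby-stops idea, sketched with the googlemaps Python client, might look like this. The coordinates, radii, and place types are illustrative:

```python
# Plan a route, then look for a charging stop and a coffee break near it.
import googlemaps

gmaps = googlemaps.Client(key="YOUR_API_KEY")

route = gmaps.directions("San Jose, CA", "Los Angeles, CA", mode="driving")

# A hypothetical midpoint along the route where the battery runs low.
stop = (35.28, -120.66)
chargers = gmaps.places_nearby(location=stop, radius=2000,
                               keyword="EV charging station")
cafes = gmaps.places_nearby(location=stop, radius=500, type="cafe")
```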
## Inspiration Inspired by the vision of Eve from WALL-E, EVE is our answer to a world demanding sustainable spaces. We imagined a robot that could meticulously assess building health, just as Eve analyzed Earth's viability. ## What it does EVE is a cutting-edge robot that assesses your building's environmental performance based on [LEED](https://www.usgbc.org/leed), [BREEAM](https://breeam.com/), and [ISO 14001](https://www.iso.org/standard/60857.html) standards. The robot, enabled with IR motion detection, navigates through your building and captures image data, which is processed using generative AI to provide a comprehensive report of your building's sustainability. ## How we built it We started with a LEGO Mindstorms EV3 robot, interfacing it with an NVIDIA Jetson for advanced computational capabilities. To enable autonomous navigation, we integrated a Luxonis AI camera that captures high-definition images, which are then sent to a hub using async sockets for fast real-time data transmission. These images are analyzed using Google Gemini to generate a comprehensive environmental report. Throughout the process, we focused on developing robust concurrency, efficient socket communication, and precise localization and mapping techniques. ## Challenges we ran into Building EVE came with its own set of challenges. Integrating different hardware components and ensuring seamless communication between them required extensive debugging. Implementing real-time data processing while maintaining accuracy was another hurdle. Additionally, perfecting the robot's navigation and localization within various building environments demanded meticulous calibration and algorithm refinement. We also planned to integrate audio and speech into the robot, but the speaker/microphone turned out to be incompatible with our microcontroller. ## Accomplishments that we're proud of We successfully created a fully autonomous robot capable of navigating complex indoor environments while collecting and analyzing environmental data in real time. Our integration of high-definition imaging and advanced AI for environmental assessment sets EVE apart. The project's ability to generate a comprehensive report aligned with LEED, BREEAM, and ISO 14001 is a significant achievement that we're extremely proud of. ## What we learned Throughout this project, we gained valuable insights into the intricacies of hardware-software integration, particularly in the realms of concurrency and real-time data processing. We enhanced our understanding of socket programming, which was crucial for efficient data transmission. Our experience with localization and mapping gave us a deeper appreciation for the complexities of autonomous navigation in dynamic environments. ## What's next for EVE The future for EVE is bright. We plan to enhance its capabilities by incorporating additional sensors for more comprehensive environmental assessments. Improving the AI algorithms for faster and more accurate data analysis is another priority. We also envision expanding EVE's application scope beyond buildings to include outdoor environments and larger infrastructure projects, and integrating speech and audio to enable actually talking to the robot. Ultimately, we aim to make EVE an indispensable tool in the quest for sustainable and healthy living spaces.
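The "async sockets" used to ship camera frames from the robot to the hub suggest a length-prefixed framing scheme; here is a minimal Python sketch of that idea with asyncio streams. The 4-byte header and JPEG payload are assumptions, not EVE's actual protocol:

```python
# Length-prefixed frame transfer over asyncio streams.
import asyncio

async def send_frame(writer: asyncio.StreamWriter, jpeg: bytes) -> None:
    # 4-byte big-endian length header, then the raw JPEG bytes.
    writer.write(len(jpeg).to_bytes(4, "big") + jpeg)
    await writer.drain()

async def recv_frame(reader: asyncio.StreamReader) -> bytes:
    size = int.from_bytes(await reader.readexactly(4), "big")
    return await reader.readexactly(size)
```

The length prefix is what keeps frames intact over a stream socket: TCP has no message boundaries, so the receiver needs to know exactly how many bytes belong to each image before handing it to the analysis step.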
## Inspiration We realized how visually impaired people find it difficult to perceive objects coming near them, whether they are out on the road or inside a building. They encounter potholes and stairs, and things get really hard for them. We decided to tackle the issue of accessibility to support the Government of Canada's initiative to make workplaces and public places completely accessible! ## What it does This is an IoT device designed to be wearable or attachable to any visual aid already in use. It uses depth perception to perform obstacle detection, and integrates Google Assistant for outdoor navigation and all the other "smart activities" the assistant can do. The assistant provides voice directions (which pair easily with Bluetooth earpieces) and the sensors help in avoiding obstacles, increasing self-awareness. Another beta feature identifies moving obstacles and plays sounds so the person can recognize those moving objects (e.g., barking sounds for a dog). ## How we built it It's a Raspberry Pi-based device, and we integrated the Google Cloud SDK to be able to use the Vision API, the Assistant, and all the other features offered by GCP. We have sensors for depth perception, buzzers to play alert sounds, and a camera and microphone. ## Challenges we ran into It was hard for us to set up the Raspberry Pi, having had no background with it. We had to learn how to integrate cloud platforms with embedded systems and understand how microcontrollers work, especially not being from an engineering background, with two members being high school students. Multi-threading in an embedded architecture was also a challenge for us. ## Accomplishments that we're proud of After hours of grinding, we were able to get the Raspberry Pi working, as well as implement depth perception, location tracking using Google Assistant, and object recognition. ## What we learned Working with hardware is tough: even though you can see what is happening, it is hard to interface software and hardware. ## What's next for i4Noi We want to explore more ways i4Noi can make things more accessible for blind people. Since we already have Google Cloud integration, we could integrate another feature that plays sounds for living obstacles so the user can take special care; for example, when a dog comes in front, we produce barking sounds to alert the person. We would also like to implement multi-threading for our two processes and make this device as wearable as possible, so it can make a difference in people's lives.
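The object-recognition piece (snap a picture, get verbal cues) maps naturally onto the Cloud Vision label-detection call mentioned above. This is a sketch with the google-cloud-vision client; how the labels get spoken back through the device's audio stack is left out:

```python
# Return a list of label strings for one captured image.
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # uses GOOGLE_APPLICATION_CREDENTIALS

def describe(jpeg_bytes: bytes) -> list[str]:
    image = vision.Image(content=jpeg_bytes)
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]
```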
partial
# DeeR: AI-Powered Study Companion 🦌 ## Inspiration Jumping from study habit to study habit is an endless loop, and I had been stuck in it for a long time! I wanted to build something at the intersection of studying, neuroscience, and computer science, so I built DeeR. I can finally scrap all the useless apps that I have and be proud to use my own :D ## What it does DeeR is an AI-powered study companion that: * Uses AI to detect emotional states that I have a hard time recognizing in myself * Implements the Feynman technique, one of the best learning approaches, which has served me well for the past few weeks and pushes me toward active understanding rather than brute-force memorization * Adds study cycles * Creates workflows * Generates nice summaries from PDFs, PPTs, and other document formats * Performs emotional analysis to see when you are distressed and recommends that you take a break * Uses a Hume model to talk through what you learned, improving your recall and retention ## Challenges we ran into * Cartesia API credits reaching 662% of the free usage limit 🤭 * The Hume API requiring manual screenshots for each inference rather than connecting through WebSockets * Managing time and presenting solo ## Accomplishments that we're proud of * Integrating multiple technologies and participating in various tracks, especially as a solo developer * Overcoming anxiety by participating in competitions like these * Successfully building a functional prototype of DeeR ## What we learned * Solo hackathons are long but fun * No merge conflicts when working alone! * Discovered interesting aspects of Hume and Deepgram's model training approaches * Improved skills in working with WebSockets and fetch requests * Enhanced ability to read documentation and manage time effectively ## What's next for DeeR * Adding authentication for user accounts * Adding advice on what to do when the person is distressed * Implementing multi-user support with individual LLM preferences * Adding a feature to skip voice lines by Cartesia * Fine-tuning models for better outcomes ## Made with ❤️
## Inspiration A study recently done in the UK found that 69% of people above the age of 65 lack the IT skills needed to use the internet. Our world's largest resource for information, communication, and so much more is shut off to such a large population. We realized that we can leverage artificial intelligence to simplify completing online tasks for senior citizens or people with disabilities. Thus, we decided to build a voice-powered web agent that can execute user requests (such as booking a flight or ordering an iPad). ## What it does The first part of Companion is a conversation between the user and a voice AI agent in which the agent understands the user's request and asks follow-up questions for specific details. After this call, the web agent generates a plan of attack and executes the task by navigating to the appropriate website, typing in relevant search details, and clicking buttons. While the agent is navigating the web, we stream the agent's actions to the user in real time, allowing the user to monitor how it is browsing and using the web. In addition, each user request is stored in a Pinecone database, so the agent has context about similar past user requests and preferences. The user can also see the live state of the web agent's navigation in the app. ## How we built it We developed Companion using a combination of modern web technologies and tools to create an accessible and user-friendly experience. For the frontend, we used React, providing a responsive and interactive user interface. We utilized components for input fields, buttons, and real-time feedback to enhance usability, and integrated VAPI, a voice recognition API, to enable voice commands, making the app easier to use for people with accessibility needs. For the backend, we used Flask to handle API requests and manage the server-side logic. For web automation tasks we leveraged Selenium, allowing the agent to navigate websites and perform actions like filling forms and clicking buttons. We stored user interactions in a Pinecone database to maintain context and improve future interactions by learning user preferences over time; the user can also view past flows. We hosted the application on a local server during development, with plans for cloud deployment to ensure scalability and accessibility. Thus, Companion can effectively assist users in navigating the web, particularly benefiting seniors and individuals with disabilities. ## Challenges we ran into We ran into difficulties getting the agent to accurately complete each task. Getting it to take the right steps and always execute the task efficiently was a hard but fun problem. It was also challenging to prompt the voice agent so that it communicates effectively with the user and understands their request. ## Accomplishments that we're proud of Building a complete, end-to-end agentic flow that is able to navigate the web in real time. We think that this project is socially impactful and can make a difference for those with accessibility needs. ## What we learned The small things that can make or break an AI agent, such as the way we display memory, how we ask it to reflect, and what supplemental info we give it (images, annotations, etc.). ## What's next for Companion Making it work without CSS selectors; training a model to highlight all the places the computer can click, because certain buttons can be unreachable for Companion.
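One Selenium step in the agent's plan might look like the sketch below. The target site and CSS selectors are hypothetical (the write-up itself flags CSS selectors as a known limitation), so treat this as the shape of a step, not Companion's actual navigation code:

```python
# One "navigate and search" step driven by the agent's plan.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get("https://www.example-airline.com")  # hypothetical target

# Fill the destination field and submit the search form.
search = driver.find_element(By.CSS_SELECTOR, "input#destination")
search.send_keys("San Francisco", Keys.ENTER)
driver.find_element(By.CSS_SELECTOR, "button.search-submit").click()
```

Because each action runs in a real browser, the same driver session can be screen-captured and streamed back to the user, which is how the real-time monitoring described above fits around the automation.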
## Inspiration In a world where education has become increasingly remote and reliant on online platforms, we need human connection **more than ever**. Many students often find it difficult to express their feelings without unmuting themselves and drawing unwanted attention. As a result, teachers are unaware of how their students are feeling and whether the material is engaging. This situation is especially challenging for students who struggle with communicating their feelings, such as individuals with autism, selective mutism, social anxiety, and more. We want to help **bridge this gap** by creating a tool that enables students to express themselves with less effort and enables teachers to understand and respond to their overall needs. We strongly believe in the importance of **accessibility in education** and supplementing human connection, because at the end of the day, humans are social beings. ## What it does Our application helps measure the general emotions of participants in a video meeting, displaying a stream of emojis representing up to **80 different emotions**. We periodically sample video frames from all participants with their cameras on at 10-second intervals, feeding this data into **Hume's Expression Measurement API** to identify the most prominent expressions. From this, we generate a composite view of the general sentiment using a custom weighted algorithm. Using this aggregated sentiment data, our frontend displays the most frequent emotions with their corresponding emojis on the screen. This way, hosts can adapt their teaching to the general sentiment of the classroom, while students can share how they're feeling without the social anxiety that comes with typing a message in the chat or sharing a thought out loud. ## How we built it We leveraged **LiveKit** to create our video conference infrastructure and **Vercel** to deploy our application. We also utilized **Supabase Realtime** as our communication protocol, forwarding livestream data from clients per room and saving that data to Supabase Storage. Our backend, implemented with **FastAPI**, interfaces with the frontend to pull this data from Supabase and feed the captured facial data into Hume AI to detect human emotions. The results are then aggregated and stored back into our Supabase table. Our frontend, built with **Next.js** and styled with **Tailwind CSS**, listens to real-time event triggers from Supabase to detect changes in the table. From this, we're able to display the stream of emotions in **near real time**, finally delivering aggregated emotion data as a light-hearted, fun animation to keep everyone engaged! ## Challenges we ran into * LiveKit Egress has limited documentation * Coordinating different parts using Supabase Realtime * The Hume AI API * First-time frontenders * Hosting our backend through Vercel (lots of config) ## Accomplishments that we're proud of * A LiveKit real-time streaming video conference * Streaming video data to Hume via Supabase Realtime * An emoji animation using Framer Motion * An efficient scoring algorithm using heaps (sketched after this write-up) ## What we learned We learned how to use a lot of new tools and frameworks such as Next.js and Supabase, as it was some of our members' first time doing full-stack software engineering. With members coming all the way from SoCal and the East Coast, we learned how to ride the BART, and we all learned LiveKit for live streaming and video conferencing.
## What's next for Moji We see the potential of this tool in a **wide variety of industries** and have other features in mind that we want to implement. For example, we can focus on enhancing this tool to help streamers with any kind of virtual audience by: * Implementing a dynamic **checklist** that generates to-dos based on questions or requests from viewers. This can benefit teachers in providing efficient learning to their students or large entertainment streamers in managing a fast-moving chat. This can also be extended to eCommerce, as livestream shopping requires sellers to efficiently navigate their chat interactions. * Using Whisper for **real-time audio speech recognition** to automatically check off answered questions. This provides a hands-free way for streamers to meet their viewers' requests without having to look extensively through chat. This is especially beneficial for the livestream shopping industry, as sellers are typically displaying items while reading messages. * Using **RAG** to store answers to previously asked questions and using this data to answer any future questions. This can be a great way to save streamers time on answering repeated questions. * Enhancing **video recognition** capabilities to identify more complex interactions and objects in real time. With video recognition, we can lean even further into the eCommerce industry, identifying what type of products sellers are displaying and providing a hands-free, AI-enhanced way of managing their checklist of requests. * Adding **integrations** with other streaming platforms to broaden its applicability and improve the user experience. The possibilities are endless and we're excited to see where Moji can go! We hope that Moji can bring a touch of humanity and help us all stay connected and engaged in the digital world.
partial
A "Tinder-fyed" Option for Adoption! ## Inspiration All too often, I seem to hear of friends or family flocking to pet stores or specialized breeders in the hope of finding the exact new pet that they want. When an animal reaches towards the forefront of the pop culture scene, this is especially true. Many new pet owners can adopt for the wrong reasons, and many later become overwhelmed, and we see these pets enter the shelter systems, with a hard chance of getting out. ## What it does PetSwipe is designed to bring the ease and variety of pet stores and pet breeders to local shelter adoption. Users can sign up with a profile and based on the contents of their profile, will be presented with different available pets from local shelters, one at a time. Each animal is presented with their name, sex, age, and their breed if applicable, along with a primary image of the pet. The individual can choose to accept or reject each potential pet. From here, the activity is loaded into a database, where the shelter could pull out the information for the Users' clicking accept on the pet, and email those who's profiles best suit the pet for an in-person meet and greet. The shelters can effectively accept or reject these Users' from their end. ## How I built it A browsable API was built on a remote server with Python, Django, and the Django REST Framework. We built a website, using PHP, javascript, and HTML/CSS with Bootstrap, where users could create an account, configure their preferences, browse and swipe through current pets in their location. These interactions are saved to the databases accessible through our API. ## Challenges I ran into The PetFinder API came with a whole host of unnecessary challenges in its use. From unnecessary MD5 hashing, to location restrictions, this API is actually in Beta and it requires quite a bit of patience to use. ## What I learned Attractive, quick front end website design, paired with a remote server was new for most of the team. We feel PawSwipe could become a fantastic way to promote the "Adopt, Don't Shop" motto of local rescue agencies. As long as the responsibilities of pet owner ship are conveyed to users, paired with the adoptee vetting performed by the shelters, a lot of lovable pets could find their way back into caring homes!
## Inspiration Having experienced a language barrier firsthand, witnessed its effects on family, and reflected on equity in services, our team was inspired to create a resource to help Canadian newcomers navigate their new home. Newt aims to reduce one of the most stressful aspects of the immigrant experience by promoting more equitable access to services. ## What it does We believe that everyone deserves equal access to health, financial, legal, and other services. Newt displays ratings on how well businesses can accommodate a user's first language, allowing newcomers to make more informed choices based on their needs. When searching for a particular service, we use a map to display several options and their ratings for the user's first language. Users can then contact businesses by writing a message in their language of choice. Newt automatically translates the message and sends a text to the business provider containing the original and translated messages, as well as the user's contact information and preferred language of correspondence (a sketch of this flow follows below). ## How we built it Frontend: React, TypeScript Backend: Python, Flask, PostgreSQL, Infobip API, Yelp API, Google Translate, Docker ## Challenges we ran into Representing location data within our relational database was challenging. It would not be feasible to store every possible location that users might search for within the database. We needed to find a balance between sourcing data from the Yelp API and updating the database with the results without creating unnecessary duplicates. ## What we learned We learned to display location data through an interactive map. To do so, we learned about react-leaflet to embed maps in React webpages. In the backend, we learned to use Infobip by reviewing related documentation, experimenting with test data, and with the help of Hack Western's sponsors. Lastly, we challenged ourselves to write unit tests for our backend functions and integrate testing into GitHub Actions to ensure every code contribution was safe. ## What's next for Newt * Further support for translating the frontend display into each user's first language * Expanding backend data sources beyond the Yelp API and including other data sources more specific to user queries
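A minimal Flask sketch of the translate-then-text flow described above. The `translate()` helper is a stand-in for the Google Translate call, and the Infobip URL and payload shape are assumptions based on their SMS API, not verified project code.

```python
# Sketch of the contact flow: translate the user's message, then text the
# business. Endpoint path and payload are assumptions, not verified code.
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
INFOBIP_BASE = "https://YOUR_SUBDOMAIN.api.infobip.com"  # placeholder
INFOBIP_KEY = "API_KEY"                                   # placeholder

def translate(text, target="en"):
    # Stand-in for the Google Translate call used by the real backend.
    return text

@app.post("/contact")
def contact_business():
    data = request.get_json()
    translated = translate(data["message"], target="en")
    body = (f"{data['message']}\n---\n{translated}\n"
            f"Reply to: {data['phone']} ({data['language']})")
    resp = requests.post(
        f"{INFOBIP_BASE}/sms/2/text/advanced",
        headers={"Authorization": f"App {INFOBIP_KEY}"},
        json={"messages": [{"destinations": [{"to": data["business_phone"]}],
                            "text": body}]},
    )
    return jsonify(status=resp.status_code)
```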
## Inspiration Our initial inspiration was to create a dating app tailored for McGill University students. However, as we brainstormed and delved deeper into the project, a spark of inspiration led us to pivot towards a more heartwarming endeavour—developing a pet-finding app. The idea of connecting individuals with their future furry companions resonated deeply with our team, and Spawk was born. ## What it does Spawk is more than just an app; it's a bridge between potential pet parents and pets in need of loving homes. The app boasts a user-friendly interface that simplifies the process of discovering and adopting pets. For adopters, it provides a seamless experience to explore a vast database of pets, filtering through various criteria to find the perfect match. Simultaneously, it offers pet providers, be they individuals or organizations, an enjoyable platform to showcase pets available for adoption. ## How we built it The development of Spawk was a thoughtful integration of creativity and technology. We utilized various tools to craft an application that not only meets user expectations but surpasses them. Technologies Used: * React Native: The core framework for Spawk, allowing us to create a cross-platform mobile application efficiently. * Firebase: Chosen for its real-time data management, user authentication, and secure backend foundation. * JavaScript: Played a crucial role in connecting the frontend and backend components, ensuring seamless functionality. * CSS: Used to style Spawk's user interface, creating a visually appealing design that adapts well to different devices. * API Development: Crafting APIs from scratch using JavaScript, enabling effective communication between frontend and backend. * Figma: Used for UI/UX design, translating creative visions into tangible elements for a user-friendly experience. ## Challenges we ran into As with any ambitious project, our path was not without its hurdles. Integrating real-time communication features posed a significant challenge, and optimizing the app's performance to handle a large database of pets required careful consideration. Additionally, aligning our vision for a sleek and user-friendly design with technical constraints demanded creative problem-solving. ## Accomplishments that we're proud of Throughout the development of Spawk, we achieved several milestones that fill us with pride. Successfully implementing a comprehensive filtering system for pet searches, creating a secure and efficient backend, and designing an aesthetically pleasing user interface are among the accomplishments that showcase our dedication and hard work. ## What we learned The Spawk project provided a rich learning experience, offering insights and skills that extend beyond the hackathon. * React Native Mastery: Deepening our capabilities in cross-platform mobile app development. * Firebase Finesse: Enhancing our understanding of real-time data management and user authentication. * CSS Proficiency: Sharpening our styling skills to create visually appealing and responsive designs. * JavaScript Skills: Reinforcing our programming skills, especially in both frontend and backend development. * API Crafting with JavaScript: Gaining valuable insights into creating seamless communication channels between different components. * Figma for UI Design: Using Figma as an indispensable tool for translating creative visions into tangible UI/UX elements. ## What's next for Spawk Our vision for Spawk extends beyond its current capabilities. 
With more time, we aspire to enhance the app's functionality by implementing exciting features, including a chat platform for communication between adopters and providers, geolocation filters to refine pet searches, and continuous improvements to the UI design to ensure an unparalleled user experience. Spawk is not just an app; it's a commitment to evolving and growing to better serve the community of pet lovers.
partial
## Inspiration It's lunchtime, and you are looking for somewhere to eat, so you open Yelp and look for recommendations. After scrolling through many pages, you are overwhelmed by the number of restaurants around you and can't decide where to eat, so you end up going to the fast food restaurant you always go to. We've all been there. What others like may not be what you like, but you also do not want to waste time entering all your preferences in the app. But wait, you already like photos on social media, so shouldn't your phone know what you like already? ## What it does Doko collects data about the food and restaurant photos the user has liked on social media, and the next time the user passes by that location, it notifies them. The user can also see restaurants around them in a convenient map view. Since the user has already shown interest in these restaurants, we are confident in our recommendations. ## How we built it We used the Twitter API to query the tweets a certain user has liked, every 10 seconds. The backend is written in Python, serving as the API connecting MongoDB and iOS. ## Challenges we ran into Getting trapped by the MongoDB Stitch iOS SDK. It took us nearly 2 hours to find the issue in our project (the documentation was also unclear) after reading the SDK's source code. ## Accomplishments that we're proud of ## What we learned ## What's next for Doko Our first step will be to support social media platforms other than Twitter. Then we can include additional features such as restaurant recommendations using machine learning algorithms, or making a reservation within the app.
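A sketch of the 10-second polling loop feeding MongoDB. The `fetch_liked_tweets()` helper is a hypothetical stand-in for the Twitter API request, and the document fields are invented; only the pymongo upsert pattern is the point here.

```python
# Sketch: poll liked tweets every 10 seconds and upsert any with location
# data into MongoDB, so repeated polls don't create duplicates.
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
likes = client.doko.likes

def fetch_liked_tweets(user_id):
    # Hypothetical stand-in for the Twitter API call the backend makes.
    return []

def poll(user_id):
    while True:
        for tweet in fetch_liked_tweets(user_id):
            if tweet.get("place"):  # only keep likes tied to a restaurant location
                likes.update_one(
                    {"tweet_id": tweet["id"]},
                    {"$set": {"user": user_id, "place": tweet["place"]}},
                    upsert=True,  # insert if new, update if already seen
                )
        time.sleep(10)
```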
## **opiCall** ## *the line between O.D. and O.K. is one opiCall away* --- ## What it does Private AMBER alerts to either 911 or a naloxone-carrying network ## How we built it We used Twilio & Dasha AI to send texts and calls, and Firebase & Swift for the iOS app's database and UI itself. ## Challenges we ran into We had lots of difficulties finding research on the topic, and conducting our own research, due to the taboos and Reddit post removals we faced. ## What's next for opiCall In-depth research on First Nations communities and opioids to guide our product further.
## Inspiration We were inspired by the large time delays between when citizens report an issue and when first responders are able to process the information and take action. We realized that these delays are especially pronounced in urban areas, where there may be high volumes of calls due to the condensed population. We recognized that the speed of first responder action is crucial in building any community and set out to build a solution. ## What it does UrbanEye allows residents to call a phone number and explain any and all details of their emergency to a system that presents this information to first responders as a summary, organized in our user dashboard. UrbanEye is intended to be used when call traffic limits the amount of human attention each problem gets. ## How we built it This project was primarily built using Node.js, with Twilio integrated to enable call answering and voice-based interactions. To generate responses, ChatGPT and its APIs were utilized, employing prompt engineering to guide ChatGPT's role and conversational position. Ngrok facilitated communication between the website, private backend, and public Twilio accounts, ensuring smooth integration between local and internet-based functions. Additionally, the Hugging Face API was employed for supplementary tasks, like using all-MiniLM-L6-v2 for basic sorting. ## Challenges we ran into The primary obstacle we encountered while developing UrbanEye was effectively integrating the diverse API services. Given the backend's critical role in supplying data for significant portions of the frontend through numerous API calls, this challenge was particularly acute. To complete the project, we had to rapidly adapt and familiarize ourselves with several new APIs. The most significant hurdle was integrating Ngrok, which was not initially part of our design but became essential as the backend matured. Learning Ngrok's applications quickly was a defining experience of the hackathon, highlighting the importance of adaptability and problem-solving in such a fast-paced environment. ## Accomplishments that we're proud of Using Ngrok could slow down UrbanEye, especially when handling lots of data. We also had to work hard to make sure the app didn't crash if something went wrong and to give users helpful error messages. ## What we learned Through building the project, we learned the invaluable importance of a development schedule, particularly when tackling complex projects. By meticulously estimating the time and effort required for each feature, we were able to effectively prioritize tasks, allocate resources, and anticipate potential challenges. This strategic approach enabled us to overcome obstacles more efficiently and ultimately deliver the project within the desired timeframe. The experience reinforced the critical role of planning and organization in successful project management. ## What's next for UrbanEye Next up for UrbanEye, we will be finalizing our offerings for all kinds of calls. This will expand the project's utility and provide immediate development opportunities. Additionally, we will further explore the ChatGPT Whisper and Hugging Face APIs; the former was introduced by a mentor, and the latter offers features that can enhance the application. To ensure the highest level of security, we will also prioritize implementing robust data protection measures throughout the development process.
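The MiniLM-based "basic sorting" can be illustrated with sentence-transformers: rank incoming call summaries by similarity to a dispatcher's query. The example texts and query are invented; only the embedding-and-rank pattern reflects the described approach.

```python
# Sketch: rank call summaries by semantic similarity to a query using
# all-MiniLM-L6-v2. Texts are invented examples.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

summaries = [
    "Water main break flooding the basement on Elm St",
    "Cat stuck in a tree near the park",
    "Smoke coming from an apartment window",
]
query = "possible fire"

query_vec = model.encode(query, convert_to_tensor=True)
summary_vecs = model.encode(summaries, convert_to_tensor=True)
scores = util.cos_sim(query_vec, summary_vecs)[0]  # cosine similarity per summary

for score, text in sorted(zip(scores.tolist(), summaries), reverse=True):
    print(f"{score:.2f}  {text}")  # highest-scoring summaries surface first
```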
winning
## Inspiration Are you out in public but scared about people standing too close? Do you want to catch up on social interactions at your cozy place but do not want to endanger your guests? Or do you just want to be notified as soon as you have come into close contact with an infected individual? With this app, we hope to provide users with the tools to navigate social distancing more easily amidst this worldwide pandemic. ## What it does The Covid Resource App aims to bring a one-size-fits-all solution to the multifaceted issues that COVID-19 has introduced into our everyday lives. Our app has 4 features, namely: - A social distancing feature which allows you to track where the infamous "6ft" distance lies - A visual planner feature which allows you to verify how many people you can safely fit in an enclosed area - A contact tracing feature that keeps a log of your close contacts for the past 14 days - A self-reporting feature which enables you to notify your close contacts by email in case of a positive test result ## How we built it We made use primarily of Android Studio, Java, Firebase technologies, and XML. Each collaborator focused on a task and bounced ideas off the others when needed. The social distancing feature works on a simple trigonometry concept, using the height from the ground and the tilt angle of the device to calculate exactly how far away 6ft is (a worked example follows below). The visual planner adopts a tactile, object-oriented approach, whereby a room can be created with desired dimensions and touch input drops 6ft radii into the room. The contact tracing works over a Bluetooth connection and consists of phones broadcasting unique IDs, in this case email addresses, to each other. Each user has their own sign-in and stores their keys in a Firebase database. Finally, the self-reporting feature retrieves the close contacts from the past 14 days and launches a mass email to them consisting of quarantining and testing recommendations. ## Challenges we ran into Only two of us had experience in Java, and only one of us had used Android Studio previously. It was a steep learning curve, but it was worth every frantic Google search. ## What we learned * Android programming and front-end app development * Java programming * Firebase technologies ## Challenges we faced * No unlimited food
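The 6ft trigonometry can be written out in a few lines. This is a reconstruction of the idea from the description, under the assumption that the tilt angle is measured down from the horizontal; the heights and angles are example numbers.

```python
# Reconstruction of the 6ft trigonometry: if the phone sits at height h and
# is tilted down by angle theta from the horizontal, the spot it points at
# lies h / tan(theta) along the ground. 6 ft is roughly 1.83 m.
import math

def ground_distance(height_m: float, tilt_deg: float) -> float:
    return height_m / math.tan(math.radians(tilt_deg))

SIX_FEET_M = 1.83
h = 1.5  # example: phone held 1.5 m above the ground

# Tilt at which the phone points exactly 6 ft away:
tilt_at_six_feet = math.degrees(math.atan(h / SIX_FEET_M))
print(f"Aim the phone {tilt_at_six_feet:.1f} degrees below horizontal to mark 6 ft")
print(f"At a 30 degree tilt the marked spot is {ground_distance(h, 30):.2f} m away")
```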
## Inspiration Every year, hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people were in the right places at the right times. We aim to connect people by giving them the opportunity to help each other in times of medical need. ## What it does It is a mobile application aimed at connecting members of our society in times of urgent medical need. Users can sign up as respondents, which allows them to be notified when people within a 300-meter radius are having a medical emergency. This can help users receive first aid prior to the arrival of an ambulance or healthcare professional, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive. ## How we built it The app is Android native and relies heavily on the Google Cloud Platform. User registration and authentication are done through Firebase Auth. Additionally, user data, locations, help requests, and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was also used to provide text recognition for the app's registration page: users can take a picture of their ID and their information can be extracted. ## Challenges we ran into There were numerous challenges in terms of handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users. ## Accomplishments that we're proud of We were able to build a functioning prototype! Additionally, we were able to track and update user locations in a MapFragment, and we ended up implementing things that we had never done before.
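The 300-meter proximity check at the heart of the notification logic is a great-circle distance computation. A sketch, with example coordinates (the real app would read locations from the Firebase Realtime Database):

```python
# Sketch of the 300 m radius check used to decide which respondents to notify.
# Coordinates are invented examples.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6_371_000  # Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

emergency = (43.4723, -80.5449)
respondents = {"alice": (43.4731, -80.5440), "bob": (43.4900, -80.5200)}

to_notify = [
    name for name, (lat, lon) in respondents.items()
    if haversine_m(*emergency, lat, lon) <= 300
]
print(to_notify)  # alice is ~115 m away; bob is well outside the radius
```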
## Inspiration One of our team members saw two foxes playing outside a small forest. Eager, he went closer to record them, but by the time he was there, the foxes were gone. Wishing he could have recorded them, or at least gotten a recording from one of the locals, he imagined a digital system embedded in nature. With the help of his teammates, this project grew into a real application and service which could change the landscape of the digital playground. ## What it does It is a social media and educational application which stores recorded data in a digital geographic tag that is available for users of the app to access and play back. Unlike other social platforms, this application works only if you are at the geographic location where the picture was taken and the footprint was imparted. On the educational side, the application offers overlays of monuments, buildings, or historical landscapes, where users can scroll through historical pictures of the exact location they are standing at. The images have captions that can serve instruction and education, and the overlay function lets the user get a realistic experience of the location at a different time. ## How we built it Lots of hours of no sleep and thousands of GitHub pushes and pulls. We saw more red lines this weekend than in years put together. We used APIs and tons of trial and error, experimentation, and absurd humour and jokes to keep us alert. ## Challenges we ran into The app did not want to behave, and the APIs would give us false results, as in the case of Google Vision, which would be inaccurate. Merging Firebase with Android Studio would rarely go down without a fight. The pictures we recorded would load horizontally, even if taken vertically. The GPS location and AR would cause issues with the server, and many more we just don't want to recall... ## Accomplishments that we're proud of The application is fully functional and has all the basic features we planned for it to have since the beginning. We got over a lot of bumps in the road and never gave up. We are proud to see this app demoed at PennApps XX. ## What we learned Firebase, from very little experience; working with GPS services; recording the longitude and latitude from the pictures we took to the server; placing digital tags on a spatial digital map using Mapbox; and working with the painful Google Vision to analyze our images before making them available on the map. ## What's next for Timelens Multiple features which we would have loved to finish at PennApps XX, but it was unrealistic due to time constraints. New ideas for using the application in wider areas of daily life, not only in education and social networks. Creating an interaction mode between AR and the user to add functionality to the augmentation.
winning
## Inspiration Everybody struggles with their personal finances. Financial inequality in the workplace is particularly prevalent among young females. On average, women make 88 cents for every dollar a man makes in Ontario. This is why it is important to encourage women to become more conscious of their spending habits. Even though budgeting apps such as Honeydue or Mint exist, they depend heavily on self-motivation from users. ## What it does Our app is a budgeting tool that targets young females with useful incentives to boost self-motivation for their financial well-being. The app features simple scale graphics visualizing the financial balancing act of the user. By balancing the scale and achieving their monthly financial goals, users are provided with various rewards, such as discount coupons or small cash vouchers, based on their interests. Users are free to set their goals on their own terms and follow through with them. The app reinforces good financial behaviour by providing gamified experiences with small incentives. The app will be provided to users free of charge. As with any free service, the anonymized user data will be shared with marketing and retail partners for analytics. Discount offers and other incentives could lead to better brand awareness and more spending from our users for participating partners. The customized reward is an opportunity for targeted advertising. ## Persona Twenty-year-old Ellie Smith works two jobs to make ends meet. The rising cost of living makes it difficult for her to maintain her budget. She heard about this new app called Re:skale that provides personalized rewards simply for achieving budget goals. She signed up after answering a few questions and linking her financial accounts to the app. The app provided a simple balancing-scale animation for immediate visual feedback on her financial well-being. The app frequently provided words of encouragement and useful tips to maximize her chance of success. She especially loves how she can set goals and follow through on her own terms. The personalized rewards were sweet, and she managed to save on a number of essentials such as groceries. She is now on a 3-month streak with a chance to get better rewards. ## How we built it We used: React, NodeJs, Firebase, HTML & Figma ## Challenges we ran into * We had a number of ideas but struggled to define the scope and topic for the project * Different design philosophies made it difficult to maintain a consistent and cohesive design * Sharing resources was another difficulty due to the digital nature of this hackathon * On the development side, there were technologies that were unfamiliar to over half of the team, such as Firebase and React Hooks. It took a lot of time to understand the documentation and implement it in our app * Additionally, resolving merge conflicts proved to be more difficult. The time constraint was also a challenge. ## Accomplishments that we're proud of * The use of harder technologies including Firebase and React Hooks * On the design side, it was great to create a complete prototype of the vision of the app.
* For some members, this was their first hackathon; the time constraint was a stressor, but with the support of the team they were able to feel more comfortable despite the lack of time ## What we learned * We learned how to meet each other's needs in a virtual space * The designers learned how to merge design philosophies * How to manage time and work with others who are on different schedules ## What's next for Re:skale Re:skale can be rescaled to include people of all genders and ages. * Closer integration with other financial institutions and credit card providers for better automation and prediction * A physical receipt scanner feature for payments made outside debit and credit cards ## Try our product This is the link to a prototype app <https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=312%3A3&node-id=375%3A1838&viewport=241%2C48%2C0.39&scaling=min-zoom&starting-point-node-id=375%3A1838&show-proto-sidebar=1> This is a link to a prototype website <https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=0%3A1&node-id=360%3A1855&viewport=241%2C48%2C0.18&scaling=min-zoom&starting-point-node-id=360%3A1855&show-proto-sidebar=1>
## Inspiration We wanted to bring financial literacy into everyday life while also bringing in futuristic applications such as augmented reality to really motivate people to learn about finance and business every day. We were looking for a fintech solution that didn't aim to make financial information accessible only to bankers and the investment community, but also to the young and curious, who can learn in an interesting way based on the products they use every day. ## What it does Our mobile app looks at company logos, identifies the company, and grabs the company's financial information, recent news, and financial statements, displaying the data in an augmented reality dashboard. Furthermore, we allow speech recognition to better help those unfamiliar with financial jargon to save and invest. ## How we built it Built using the Wikitude SDK, which handles augmented reality for mobile applications, with a mix of financial data APIs, Highcharts, and other charting/data visualization libraries for the dashboard. ## Challenges we ran into Augmented reality is very hard, especially when combined with image recognition. There were many workarounds and long delays spent debugging near-unknown issues. Moreover, we were building for Android, something that none of us had prior experience with, which made it harder. ## Accomplishments that we're proud of Circumventing challenges as a professional team and maintaining a strong team atmosphere no matter what, to bring something that we believe is truly cool and fun to use. ## What we learned Lots of things about augmented reality, graphics, and Android mobile app development. ## What's next for ARnance Potential to build more charts, financials, and better speech/chatbot abilities into our application. There is also a direction to be more interactive, using hands to play around with our dashboard once we figure that part out.
## Inspiration Many individuals lack financial freedom, and this often stems from poor spending skills. As a result, our group wanted to create something to help prevent that. We realized how difficult it can be to track the expenses of each individual person in a family. As humans, we tend to lose track of what we purchase and spend money on. Inspired, we wanted to create an app that stops all that by allowing individuals to strengthen their organization and budgeting skills. ## What It Does Track is an expense tracker website targeting households and individuals, with the aim of easing people's lives while also allowing them to gain essential skills. Imagine not having to worry about tracking your expenses, all while learning how to budget and stay well organized. The website has two key components: * Family Expense Tracker: The family expense tracker is the `main dashboard` for all users. It showcases each individual family member's total expenses while also displaying the expenses by category. Both members and owners of the family can access this screen. Members can be added to the owner's family via a household key, which is only given to the owner of the family. Permissions vary between members and owners: owners gain access to each individual's personal expense tracker, while members only have access to their own. * Personal Expense Tracker: The personal expense tracker is assigned to each user, displaying their own expenses. Users can look at past expenses from the creation of the account to the present time. They can also add expenses with the click of a button. ## How We Built It * Utilized the MERN (MongoDB, Express, React, Node) stack * RESTful APIs were built using Node and Express and integrated with a MongoDB database * The frontend was built with vanilla React and Tailwind CSS ## Challenges We Ran Into * Frontend: connecting EmailJS to the help form; retrieving specific data from the backend and displaying pop-ups accordingly; keeping the theme consistent while ensuring that the layout and dimensions didn't overlap or wrap; creating hover animations for buttons and messages * Backend: embedded objects were not being correctly updated - we needed to learn about storing references to objects and populating those references; designing the backend based on frontend requirements and the overall goal of the website ## Accomplishments We're Proud Of As this was the first or second hackathon for all of us, we are proud to have created a functioning website with a fully integrated frontend and backend. We are glad to have successfully implemented pop-ups for each individual expense category that display past expenses. Overall, we are proud of ourselves for creating a product that can be used in our day-to-day lives in a short period of time. ## What We Learned * How to properly use embedded objects so that any changes to an object are reflected wherever it is embedded * Using the state hook in ReactJS * Successfully and effectively using React Router * How to work together virtually; it allowed us to not only gain hard skills but also enhance soft skills such as teamwork and communication. ## What's Next For Track * Implement an income tracker section, allowing the user to get a bigger picture of their overall net income * Be able to edit and delete both expenses and users * Store historical data to allow the use of data analysis graphs to provide predictions and recommendations.
* Allow users to create their own categories rather than the assigned ones * Set up different levels of permission to allow people to view other family members' usage
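The references-over-embedding lesson from the backend can be illustrated outside Mongoose too. Here is a minimal pymongo sketch of the referenced approach; the collection and field names are assumptions for illustration, not Track's actual schema.

```python
# Sketch: store expenses as references from the user document instead of
# embedding copies, then "populate" them with a second query. This way,
# later edits to an expense are reflected everywhere it is used.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").track

expense_id = db.expenses.insert_one(
    {"category": "Groceries", "amount": 54.20}
).inserted_id

# Push the reference (an ObjectId), not the expense document itself.
db.users.update_one(
    {"name": "Alice"},
    {"$push": {"expense_ids": expense_id}},
    upsert=True,
)

# "Populate" the references: fetch the referenced expenses in one query.
user = db.users.find_one({"name": "Alice"})
expenses = list(db.expenses.find({"_id": {"$in": user["expense_ids"]}}))
print(expenses)
```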
winning
## Inspiration Sign language is already difficult to learn; adding on the difficulty of learning movements from static online pictures makes it next to impossible to do without help. We came up with an elegant robotic solution to remedy this problem. ## What it does Handy Signbot is a tool that translates voice to sign language, displayed using a set of prosthetic arms. It is a multipurpose sign language device with uses such as: a teaching model for new students, a voice-to-sign translator for live events, or simply a communication device between voice and sign. ## How we built it **Physical**: The hand is built from 3D-printed parts and is controlled by several servos and pulleys. Those are in turn controlled by Arduinos, which house all the calculations that allow for finger control and semi-spherical XYZ movement in the arm. The entire setup is enclosed in and protected by a wooden frame. **Software**: The bulk of the movement control is written in NodeJS, using the Johnny-Five library for servo control. Voice-to-text is processed using the Nuance API, and text-to-sign is created with our own database of sign movements. ## Challenges we ran into The Nuance library was not something we had worked with before, and it took plenty of trial and error before we could eventually implement it. Other difficulties included successfully developing a database and learning to recycle movements to create more with higher efficiency. ## Accomplishments that we're proud of From calculating inverse trigonometry to processing audio, several areas had to work together for anything to work at all. We are proud that we were able to successfully combine so many different parts together for one big project. ## What we learned We learned about the importance of teamwork and friendship :) ## What's next for Handy Signbot * Creating a smaller-scale model that is more realistic for a home environment, significantly reducing cost at the same time. * Reimplementing the LeapMotion to train the model for an increased vocabulary and different accents (did you know you can have an accent in sign language too?).
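The "inverse trigonometry" that positions the arm resembles standard two-link inverse kinematics. A planar sketch under assumed link lengths (not the robot's real geometry; the actual arm adds a third axis for its semi-spherical movement):

```python
# Two-link planar inverse kinematics: given a target (x, y) for the wrist,
# solve the shoulder and elbow angles via the law of cosines.
# Link lengths are assumed values, not the actual arm's dimensions.
import math

L1, L2 = 20.0, 15.0  # upper-arm and forearm lengths (cm), assumed

def ik(x, y):
    d2 = x * x + y * y
    # Elbow angle from the law of cosines
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1 <= cos_elbow <= 1:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    # Shoulder angle: direction to target minus the offset from the elbow bend
    shoulder = math.atan2(y, x) - math.atan2(
        L2 * math.sin(elbow), L1 + L2 * math.cos(elbow)
    )
    return math.degrees(shoulder), math.degrees(elbow)

print(ik(25, 10))  # angles to feed (after calibration) to the servos
```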
## Overview People today are as connected as they've ever been, but there are still obstacles in communication, particularly for people who are deaf or mute and cannot communicate by speaking. Our app allows bi-directional communication between people who use sign language and those who speak. You can use your device's camera to talk using ASL, and our app will convert it to text for the other person to view. Conversely, you can also use your microphone to record your audio, which is converted into text for the other person to read. ## How we built it We used **OpenCV** and **Tensorflow** to build the Sign to Text functionality, using over 2500 frames to train our model. For the Text to Sign functionality, we used **AssemblyAI** to convert audio files to transcripts. Both of these functions are written in **Python**, and our backend server uses **Flask** to make them accessible to the frontend. For the frontend, we used **React** (JS) and MaterialUI to create a visual and accessible way for users to communicate. ## Challenges we ran into * We had to re-train our models multiple times to get them to work well enough. * We switched from running our applications entirely on Jupyter (using Anvil) to a React app last-minute ## Accomplishments that we're proud of * Using so many tools, languages, and frameworks at once, and making them work together :D * Submitting on time (I hope? 😬) ## What's next for SignTube * Add more signs! * Use AssemblyAI's real-time API for more streamlined communication * Incorporate account functionality + storage of videos
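A sketch of the Sign to Text capture loop: grab webcam frames with OpenCV and run them through a trained classifier. The model file, input size, and label list here are placeholders, not SignTube's actual artifacts.

```python
# Sketch: webcam frames -> trained classifier -> predicted sign overlaid on
# the feed. Model path, input size, and labels are hypothetical.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("sign_model.h5")   # hypothetical trained model
LABELS = ["hello", "yes", "no"]       # placeholder label set

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and normalize the frame to the model's assumed input shape
    x = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
    probs = model.predict(x[np.newaxis], verbose=0)[0]
    cv2.putText(frame, LABELS[int(np.argmax(probs))], (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("SignTube", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```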
## Inspiration In school, we were given the offer to take a dual enrollment class called Sign Language. A whole class for the subject can be quite time-consuming for most children, including adults. If people are interested in learning ASL, they either watch YouTube videos, which are not interactive, or spend HUNDREDS of dollars on classes (<https://deafchildren.org> requiring $70-100). Our product provides a cost-effective, time-efficient, and fun experience when learning this unique language. ## What it does Of course, you first have to learn the ASL alphabet: A, B, C, D ... Z. Each letter has a unique hand gesture. You also have the option to learn phrases like "Yes", "No", "Bored", etc. The app makes sure you have formed the letter correctly by displaying a circular progress view showing how long you have to hold the gesture. We provide many images to make the learning experience accessible. After learning all the letters and practicing a few words, it's time for GAME time :). Test your ability to show a gesture and see how long you can go until you give up. The gamified experience leads to more learning and engagement for children. ## How we built it The product was built using the language Swift. The hand-tracking was done using CoreML components. We used hand landmarks and found the distances between all points of the hand. Comparing the distances a gesture SHOULD have with what they are at a specific time frame helps us figure out whether the hand pose is occurring. For the UI, we planned it out using Figma and later wrote the code in Swift. We used SwiftUI components to save time. For data storage we used UIData, which syncs across devices with the same iCloud account. ## Challenges we ran into There are 26 letters. That's a lot of arrays, comparison statements, and repetitive work. Testing would sometimes become difficult because the iPhone would eventually become hot and show temperature notifications. We only had one phone to test with, so on-device testing was mostly reserved for hand landmarks. The project was extremely lengthy, and putting so much content into one 36 hours is difficult, so we had to sacrifice sleep. A cockroach in the room. ## Accomplishments that we're proud of The hand landmark detection for a letter actually works much better than expected. Moving your hand super fast does not glitch the system. A fully functional vision app with clean UI makes the experience fun and open to all people. ## What we learned Quantity < Quality. We created more than 6 functioning pages with different levels of UI quality. It's very noticeable which views were created quickly because of the time crunch. Instead of having so many pages, decreasing the number of pages and adding more content to each view would make the app appear flawless. Comparing the goal array against the current time-frame array is TEDIOUS. So much time is wasted on testing. We could not figure out the action classifier in Swift, as there was no basic open-source code. Explaining problems to ChatGPT becomes difficult because the LLM never seems to understand basic tasks, yet performs perfectly on complex tasks. Stack Overflow will still be around (for now) if we face problems. ## What's next for Hands-On The app fits well on my iPhone 11, but on an iPad? I do not think so. The next step to take the project further is to scale the UI so it works for iPads and iPhones of any size. Once we fix that problem, we could release the app to the App Store. Since we do not use any API, we would have no expenses related to hosting an API.
Making the app public could help people of all ages learn a new language in an interactive manner.
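The pose-matching idea, sketched in Python for clarity (the app itself does this in Swift with hand landmarks): normalize the pairwise distances between landmarks so hand size doesn't matter, then compare against a stored target within a tolerance. The toy landmarks and tolerance are invented.

```python
# Sketch of pose matching via scale-normalized pairwise landmark distances.
from itertools import combinations
import math

def pairwise(landmarks):
    """landmarks: list of (x, y) points. Returns scale-normalized distances."""
    d = [math.dist(a, b) for a, b in combinations(landmarks, 2)]
    scale = max(d) or 1.0
    return [v / scale for v in d]  # normalize so overall hand size cancels out

def matches(current, target, tol=0.15):
    cur, tgt = pairwise(current), pairwise(target)
    return all(abs(c - t) <= tol for c, t in zip(cur, tgt))

# Toy 3-landmark example; a real hand pose has 21 landmarks.
target = [(0, 0), (1, 0), (1, 1)]
sample = [(0, 0), (2.1, 0), (2.0, 2.0)]  # same shape, roughly 2x larger
print(matches(sample, target))  # True: normalized shapes agree within tolerance
```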
winning
## Inspiration We've all been in the situation where we've run back and forth in the store, looking for a single small thing on our grocery list. We've all been on a time crunch and found ourselves running back and forth from dairy to snacks to veggies, frustrated that we can't find what we need efficiently. This isn't a problem that just affects us as college students; it is also a problem that people of all ages face, including parents and elderly grandparents, and it can make the shopping experience very unpleasant. InstaShop is a platform that solves this problem once and for all. ## What it does Input any grocery list with a series of items to search for at the Target retail store in Boston. If an item is available, our application will search the Target store to see where it is located and add a marker at the item's location on the store map. You can add as many items as you wish. Then, based on the store map of that Target, we provide the exact route you should take from the entrance to the exit to retrieve all of the items. ## How we built it Based on the grocery list, we call the Target retail developer API to search for each item and retrieve the aisle number of its location within the given store. Alongside this, we wrote classes and functions to create a graph with different nodes modeling the exact layout of the store. Then, we plot the exact location of each item on the map. Once the user is done inputting items, we use our custom dynamic programming algorithm, which we developed using a variant of the Traveling Salesman algorithm along with breadth-first search (a sketch follows after the project description below). This algorithm returns the shortest path from the entrance through all of your items to the exit. We display the shortest path on the frontend. ## Challenges we ran into One of the major problems we ran into was developing the intricacies of the algorithm. This is a rather convoluted algorithm (as mentioned above). Additionally, setting up the data structures with the nodes and edges, and creating the graph as a combination of the two, required a lot of thinking. We made sure to think through our data structures carefully and ensure that we were approaching the problem correctly. ## Accomplishments that we're proud of According to our approximations for acquiring all of the items within the retail store, we are extremely proud that we improved our runtime from 1932! \* 7 / 100! minutes down to a few seconds. Initially, we were performing a recursive depth-first search on each of the nodes to calculate the shortest path. At first, it worked flawlessly on a smaller scale, but when we started to process results on a larger scale (a 10\*10 grid), it took around 7 minutes to find the path for just one operation. Assuming that we scale this to the size of the store, one operation would take 7 divided by 100! minutes and the entire store would take 1932! \* 7 / 100! minutes. To improve this, we ran a breadth-first search combined with an application of the Traveling Salesman problem, developed in our custom dynamic-programming-based algorithm. We were able to bring it down to just a few seconds. Yay! ## What we learned We learned about optimizing algorithms, overall graph usage, and building an application from the ground up with respect to the structure of the data. ## What's next for InstaShop Our next step is to go to Target and pitch our idea.
We would like to establish a partnership with many Target stores and build a profitable business model that we can develop together with Target. We strongly believe that this will be a huge help to the public.
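A compact sketch of the BFS + Held-Karp combination described above: BFS computes pairwise shortest distances on the store graph, then the dynamic program orders the stops. The toy four-node graph is invented, and for brevity the sketch omits the final leg to the exit.

```python
# Sketch: BFS for aisle-to-aisle distances + Held-Karp DP for stop ordering.
from collections import deque
from itertools import combinations

def bfs_dist(graph, src):
    """Shortest hop-count from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def shortest_route(graph, start, stops):
    d = {n: bfs_dist(graph, n) for n in [start] + stops}
    # best[(S, last)] = cheapest walk from start visiting set S, ending at last
    best = {(frozenset([s]), s): d[start][s] for s in stops}
    for size in range(2, len(stops) + 1):
        for subset in map(frozenset, combinations(stops, size)):
            for last in subset:
                best[(subset, last)] = min(
                    best[(subset - {last}, prev)] + d[prev][last]
                    for prev in subset if prev != last
                )
    full = frozenset(stops)
    return min(best[(full, last)] for last in stops)

aisles = {"door": ["dairy", "snacks"], "dairy": ["door", "veggies"],
          "snacks": ["door", "veggies"], "veggies": ["dairy", "snacks"]}
print(shortest_route(aisles, "door", ["dairy", "veggies", "snacks"]))  # 3
```

Held-Karp runs in O(2^n · n^2) over the stops rather than the factorial time of brute force, which matches the reported jump from minutes to seconds.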
## Inspiration We were looking for ways to take some of the most wild, inaccurate, out-there claims about the COVID-19 vaccine and instead produce good outcomes from them. ## What it does Aggregates misinformation regarding the COVID-19 vaccine in one location to empower public health officials and leaders to quickly address these false claims. ## How we built it We built it with a React front-end, a Node.js back-end, and various Python libraries (pandas, matplotlib, snscrape, fuzzywuzzy, and wordcloud) to fuel our data visualizations. ## Challenges we ran into * Fuzzy matching: figuring out how to apply a method to each column without having to declare a new method each time * Figuring out how to use Python with Node.js; when child\_process.spawn didn't work, we worked around it by using the PythonShell module in Node, which just ran the Python scripts based on the local Python environment instead of in-project * Figuring out how to get Python code to run both in VS Code and with Node.js * We found a workaround that allowed the Python script to be executed by the Node.js server, but it requires the local machine to have all the Python dependencies installed D: * THIS IS EVERYBODY'S FIRST HACKATHON!!! ## Accomplishments that we're proud of We're proud to have worked with a tech stack we are not very familiar with and completed our first hackathon with a working(ish) product! ## What we learned * How to web scrape from Twitter using Python and snscrape, how to perform fuzzy matching on pandas data frames, and how to become more comfortable and proficient in data analysis and visualization * How to work with a web stack structure and communicate between front-end and back-end * How to integrate multiple languages despite incompatibilities ## What's next for COVAX (mis)Info Addressing other types of false claims (for instance, false claims about election interference or fraud) and expanding to other social media platforms. Hopefully finding a way to more smoothly integrate the Python scripts and libraries too!
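The fuzzy-matching step can be done with a single DataFrame `apply` rather than a new function per column, which is the pattern the challenge above alludes to. The claims and tweets here are invented examples.

```python
# Sketch: score scraped tweets against known false claims with fuzzywuzzy,
# applied once over the whole text column. Example data is invented.
import pandas as pd
from fuzzywuzzy import fuzz

claims = ["the vaccine contains a microchip", "the vaccine alters your DNA"]
tweets = pd.DataFrame({"text": [
    "heard they put microchips in the covid vaccine??",
    "got my second dose today, feeling fine",
]})

def best_claim_score(text):
    # Highest similarity of this tweet to any known false claim
    return max(fuzz.token_set_ratio(text, claim) for claim in claims)

tweets["match_score"] = tweets["text"].apply(best_claim_score)
print(tweets[tweets["match_score"] > 70])  # likely-misinformation candidates
```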
## Inspiration Apple Wallet's a magical tool for anybody. Enjoy the benefits of your credit card, airline or concert ticket, and more without the hassle of keeping a physical object. But one crucial limitation is that these cards generally can't be put in Apple Wallet unless the company itself develops a supporting application. Thus, we're stuck in an awkward middle ground with half our cards on our phone and the other half in our wallet. ## What it does Append scans your cards and collects enough data to create an Apple Wallet pass. This means that anything with a standardized code (most barcodes, QR codes, Aztec codes, etc.) can be scanned in and redisplayed in your Apple Wallet for your use. This includes things like student IDs, venue tickets, and more. ## How we built it We used Apple's Vision framework to analyze the live video feed from the iPhone and collect the code data. Our app parses the barcode to analyze its data, and then generates its own code to put in the wallet. It uses an external API to assist with generating the necessary wallet files, and then the pass is presented to the user. From the user opening the app to downloading their pass, the process takes about 30 seconds total. ## Challenges we ran into We ran into some trouble with nonstandardized barcodes and ambiguity concerning different standards. Fortunately, we developed methods around them and reached a point where we can recognize the major standards that exist. ## Accomplishments that we're proud of A large portion of the documentation on adding custom Apple Wallet passes is outdated. What's worse is that a subset of this outdated documentation is wrong in subtle ways, i.e., the code compiles but has different behavior. Navigating with limited vision was difficult, but we succeeded in the end. ## What we learned We learned a lot about Apple's PassKit API. The protocols behind it are well implemented, and seeing them in action gave us even more confidence in using Apple Wallet for our wallet needs in the future. ## What's next for Append We want to implement our own custom API for producing Apple Wallet files, to make sure that all user data is completely secure. Additionally, we want to take advantage of iPhone hardware to read and reproduce NFC data so that every aspect of the physical card can be replaced.
losing
## Inspiration Inspired by the MIT Reuse mailing list, which is rather chaotic and unorganized. ## What it does * Allows users to create public listings of items they are giving away * Allows users to mark items as taken * Displays all available listings on a map * Auto-archives old listings * Looks good on all screens ## How we built it * Django for the backend * Bulma CSS framework for the frontend * Google Maps API * Added custom JavaScript to allow users to pick a location using GMaps ## Challenges we ran into * Limiting the scope of the project * Learning how to use the GMaps API * Falling asleep after drinking too much coffee ## Accomplishments that we're proud of * Building a working project * Working as a team * Sleeping ## What we learned * Hacking under time pressure ## What's next for MIT Reuse * Adding pictures to listings * Logging in using Athena Kerberos * Integration with the MIT Reuse mailing list
## Inspiration The inspiration for this project comes from my town back in New York. There was an environmental committee in my town that would create projects and clean-ups, where the community could work together to increase sustainability. For example, every month EcoPel (the environmental committee) would hold a town clean-up where people gathered and went to different places around town to pick up garbage. Our inspiration for the project came from this committee and the kinds of environmentally sustainable events it held. ## What it does Our site increases transparency across a community and allows for increased collaboration to help the environment at all times. Our site uses Google Maps and allows users to drop a pin on the map at a spot in the area where there is a lot of garbage that needs cleaning up. Dropping a pin then prompts the user to input their name and the date when they plan to help clean it up. This information, as well as the location of the marker, then shows up in a data table on the website, so that other users can see when others in the community are going to clean up areas of their town/city. ## How I built it We used Google Firebase to create a webpage online. We then used the Google Maps API to get Google Maps on our webpage. We used HTML, JavaScript, and CSS to extend the API to allow popups and contact forms directly on the map. We also changed the formatting of the webpage and allowed the data inputted by the user to be shown in the website's data table for other users to see. ## Challenges I ran into The biggest challenge we ran into was settling on a good idea for our project. We originally had a different idea, which we worked on for 5-6 hours and then had to scrap because of technical difficulties, so we had less time to implement this project and get it to the level we wanted. Other difficulties we faced as a group were changing features of the Google Maps API, as none of us had used it before. ## Accomplishments that I'm proud of As a group, I'm proud that we were able to quickly change paths and come up with something new, with time not being on our side. We all stayed up late working really hard, and I'm proud of the amount of work everyone put in and the amount of passion everyone showed towards the project. I'm also proud of our group's idea, as we think the environment is a huge concern in today's world, and we all want to make a difference. ## What I learned I learned how to use the Google Maps API as well as Google Firebase. We all got more comfortable coding in HTML, CSS, and JavaScript, as those were not our strongest languages coming into the project. We also learned how to divvy up the work more efficiently so that we can achieve goals more quickly. ## What's next for WeClean WeClean still has much potential to grow as an idea, and also to grow in terms of its development. Some ideas we have for the future would be to make WeClean into an app, so that people on their mobiles can access it more quickly. We could also create an account/points system in the future to incentivize people to clean more, as well as have accounts for increased security.
## Inspiration For many college students, finding time to socialize and make new friends is hard. Everyone's schedule seems perpetually busy, and arranging a dinner chat with someone you know can be a hard and unrewarding task. At the same time, however, having dinner alone is definitely not a rare thing. We've probably all had the experience of having social energy on a particular day, but it's too late to put anything on the calendar. Our SMS dinner matching project aims precisely to **address the missed socializing opportunities in impromptu same-day dinner arrangements**. Starting from a basic dining-hall dinner matching tool for Penn students only, we **envision an event-centered, multi-channel social platform** that would make organizing events among friend groups, hobby groups, and nearby strangers effortless and sustainable in the long term for its users. ## What it does Our current MVP, built entirely within the timeframe of this hackathon, allows users to interact with our web server via **SMS text messages** and get **matched to other users for dinner** on the same day based on dining preferences and time availabilities. ### The user journey: 1. User texts anything to our SMS number 2. Server responds with a welcome message and lists out Penn's 5 dining halls for the user to choose from 3. The user texts a list of numbers corresponding to the dining halls the user wants to have dinner at 4. The server sends the user input parsing result to the user and then asks the user to choose between 7 time slots (every 30 minutes between 5:00 pm and 8:00 pm) to meet with their dinner buddy 5. The user texts a list of numbers corresponding to the available time slots 6. The server attempts to match the user with another user. If no match is currently found, the server sends a text to the user confirming that matching is ongoing. If a match is found, the server sends the matched dinner time and location, as well as the phone number of the matched user, to each of the two users matched 7. The user can either choose to confirm or decline the match 8. If the user confirms the match, the server sends the user a confirmation message; and if the other user hasn't confirmed, notifies the other user that their buddy has already confirmed the match 9. If both users in the match confirm, the server sends a final dinner arrangement confirmed message to both users 10. If a user decides to decline, a message will be sent to the other user that the server is working on making a different match 11. 30 minutes before the arranged time, the server sends each user a reminder ### Other notable backend features 12. The server conducts user input validation for each user text to the server; if the user input is invalid, it sends an error message to the user asking the user to enter again 13. The database maintains all requests and dinner matches made on that day; at 12:00 am each day, the server moves all requests and matches to a separate archive database ## How we built it We used the Node.js framework and built an Express.js web server connected to a hosted MongoDB instance via Mongoose. We used the Twilio Node.js SDK to send and receive SMS text messages. We used Cron for time-based tasks. Our notable abstracted functionality modules include routes and the main web app to handle the SMS webhook, a session manager that contains our main business logic, a send module that constructs text messages to send to users, time-based task modules, and MongoDB schema modules.
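The core matching rule from the user journey, sketched in Python for brevity (the project itself is Node.js): two users match if their dining-hall sets and time-slot sets both intersect. The data shapes and field names are assumptions.

```python
# Sketch of the matching logic: intersect dining halls and time slots.
def find_match(new_request, pending):
    halls = set(new_request["halls"])
    slots = set(new_request["slots"])
    for other in pending:
        shared_halls = halls & set(other["halls"])
        shared_slots = slots & set(other["slots"])
        if shared_halls and shared_slots:
            # Pick any shared option; the real server then texts both users
            # the dinner time, location, and each other's phone numbers.
            return other, min(shared_halls), min(shared_slots)
    return None  # no match yet; matching stays "ongoing"

pending = [{"phone": "+1555", "halls": [1, 3], "slots": [2, 4]}]
req = {"phone": "+1666", "halls": [3, 5], "slots": [4, 6]}
print(find_match(req, pending))  # matched at hall 3, slot 4
```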
## Challenges we ran into Writing and debugging async functions posed an additional challenge. Keeping track of potentially concurrent interactions with multiple users also required additional design work. ## Accomplishments that we're proud of Our main design principle for this project was to keep the application **simple and accessible**. Compared with other common approaches that require users to download an app and register before they can start using the service, using our tool requires **minimal effort**. The user can easily start using our tool even on a busy day. In terms of architecture, we built a **well-abstracted modern web application** that can be easily modified for new features and can become **highly scalable** without significant additional effort (by converting the current web server to the AWS Lambda framework). ## What we learned 1. How to use asynchronous functions to build a multi-client web application 2. How to use posts and webhooks to send and receive information 3. How to build a MongoDB-backed web application via Mongoose 4. How to use Cron to automate time-sensitive workflows ## What's next for SMS dinner matching ### Short-term feature expansion plan 1. Expand location options to all UCity restaurants by enabling users to search locations by name 2. Build a lightweight mobile app that operates in parallel with the SMS service as the basis for expanding with more features 3. Implement friend-group features to allow making dinner arrangements with friends ### Architecture optimization 4. Convert to the AWS Lambda serverless framework to ensure application scalability and reduce hosting costs 5. Use MongoDB indexes and additional data structures to optimize the Cron workflow and reduce the number of times we need to run time-based queries ### Long-term vision 6. Expand to general event-making beyond just dinner arrangements 7. Create explore (event list) functionality and an event feed based on the user profile 8. Expand to the general population beyond Penn students; event matching will be based on the users' preferences, location, and friend groups
losing
## Inspiration This project was inspired by one of the group member's grandmother and her friends. Each month, the grandmother and her friends each contribute $100 to a group donation, then discuss and decide where the money should be donated. We found this to be a really interesting concept for those who aren't set on always donating to the same charity. As well, it is a unique way to spread awareness and promote charity in communities. We wanted to take this concept and make it possible to join globally. ## What it does Each user is prompted to sign up for a monthly Stripe donation. The user can then either create a new "Collective" with a specific purpose, or join an existing one. Once in a collective, the user is able to add new charities to the poll, vote for a charity, or post comments to convince others why their chosen charity needs the money the most. ## How we built it We used MongoDB as the database with Node.js + Express for the back-end, hosted on an Azure Linux Virtual Machine. The front-end is a web app created with Vue. Finally, we used Pusher to implement real-time updates to the poll as people vote. ## Challenges we ran into Setting up real-time polling proved to be a challenge. We wanted to allow the user to see updates to the poll without having to refresh their page. We needed to subscribe to only certain channels of notifications, depending on which collective the user is a member of. This real-time aspect required a fair bit of thought on race conditions for when to subscribe, as well as how to display the data in real time. In the end, we implemented the real-time poll as a pie graph, which resizes as people vote for charities. ## Accomplishments that we're proud of Our team has competed in several hackathons now. Since this isn't our first time putting a project together in 24 hours, we wanted to try to create a polished product that could be used in the real world. In the end, we think we met this goal. ## What we learned Two of our team of three had never used Vue before, so it was an interesting framework to learn. As well, we learned how to manage our time and plan early, which saved us from having to scramble at the end. ## What's next for Collective We plan to continue developing Collective to support multiple subscriptions from the same person, and a single person entering multiple collectives.
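An illustrative sketch of the per-collective real-time update described above, using Pusher's Python library (the project's actual server is Node/Express, so this is a transposition; the channel naming and vote storage are assumptions):

```python
import pusher

pusher_client = pusher.Pusher(
    app_id="APP_ID", key="KEY", secret="SECRET", cluster="us2",  # placeholders
)

def record_vote(collective_id: str, charity: str, tally: dict) -> None:
    """Persist a vote, then push the new totals to every subscribed client."""
    tally[charity] = tally.get(charity, 0) + 1  # stand-in for the MongoDB update
    pusher_client.trigger(
        f"collective-{collective_id}",  # clients subscribe only to their own collective
        "poll-updated",
        {"tally": tally},
    )
```

Scoping each collective to its own channel is what lets the pie graph resize live without a page refresh while avoiding notifications from collectives the user isn't in.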
## Inspiration As students, we have found that there are very few high-quality materials on investing for those who are interested but don't have enough resources. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process! ## What it does Our app first gives a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points! ## How we built it We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and for creating a more in-depth report. We used Firebase for authentication + a cloud database to keep track of users. For user and transaction data, as well as making and managing loyalty points, we used the RBC API. ## Challenges we ran into Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time! Besides API integration, definitely working without any sleep though was the hardest part! ## Accomplishments that we're proud of Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon is the new friends we've found in each other :) ## What we learned I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a small time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth). ## What's next for Savvy Saver Demos! After that, we'll just have to see :)
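A minimal sketch of how a "demon" and its weekly goal might be generated from transaction history with Cohere's chat endpoint. The prompt wording, model name, and transaction shape are assumptions rather than the team's actual code:

```python
import cohere

co = cohere.Client("COHERE_API_KEY")  # placeholder key

def weekly_goal(transactions: list[dict]) -> str:
    """transactions: dicts with 'date', 'merchant', and 'amount' keys (assumed shape)."""
    lines = "\n".join(
        f"{t['date']} {t['merchant']} ${t['amount']:.2f}" for t in transactions
    )
    response = co.chat(
        model="command-r",  # assumed model choice
        message=(
            "Given these transactions, name one bad spending habit and one "
            "concrete, measurable goal for next week:\n" + lines
        ),
    )
    return response.text
```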
## Inspiration Our inspiration was to provide a robust and user-friendly financial platform. After hours of brainstorming, we decided to create a decentralized mutual fund on mobile devices. We also wanted to explore new technologies whilst creating a product with several use cases of social impact. Our team used React Native and smart contracts along with Celo's SDK to explore blockchain and the wide range of use cases associated with these technologies. This includes group insurance, financial literacy, and personal investment. ## What it does Allows users in shared communities to pool their funds and use our platform to easily invest in different stocks and companies they are passionate about, with decreased, shared risk. ## How we built it * Smart contract for the transfer of funds on the blockchain, made using Solidity * A robust backend and authentication system made using node.js, express.js, and MongoDB. * Elegant front end made with react-native and Celo's SDK. ## Challenges we ran into We were unfamiliar with the tech stack used to create this project and with blockchain technology. ## What we learned We learned many new languages and frameworks. This includes building cross-platform mobile apps with react-native, and the underlying principles of blockchain technology, such as smart contracts and decentralized apps. ## What's next for *PoolNVest* Expanding our API to select low-risk stocks and allowing the community to vote on where to invest the funds. Refining and improving the proof of concept into a marketable MVP and tailoring the UI towards the specific use cases mentioned above.
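A hypothetical web3.py sketch of a member contributing their share to the pool. The team's actual contract is written in Solidity and called from the react-native front end via Celo's SDK, so the contract address, ABI, and `contribute()` function below are all invented for illustration:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://forno.celo.org"))  # Celo's public RPC endpoint
POOL_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder address
POOL_ABI = [{  # minimal ABI for a hypothetical payable contribute() function
    "name": "contribute", "type": "function", "inputs": [],
    "outputs": [], "stateMutability": "payable",
}]

pool = w3.eth.contract(address=POOL_ADDRESS, abi=POOL_ABI)

def build_contribution(from_addr: str, celo_amount: float) -> dict:
    """Build the unsigned transaction that deposits a member's share into the pool."""
    return pool.functions.contribute().build_transaction({
        "from": from_addr,
        "value": w3.to_wei(celo_amount, "ether"),
    })
```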
partial
## Inspiration We were inspired to develop ClinicAnalytic after learning that the healthcare system employs methodologies often based on race, gender, and other features, sometimes resulting in varying success rates. We then decided to develop a machine learning algorithm that takes in hospital records as its data set and determines whether these methodologies work, while detecting potential systemic racism in the healthcare sector. Furthermore, we noticed that there was no centralized database for healthcare providers to securely store their data; noticing this, we developed ClinicAnalytic as a way for hospitals to keep records while also being able to analyze their own data. ## What it does Our application allows hospitals to keep track of the basic information of both doctors and patients, as well as any procedures that doctors conduct with patients. The application allows users to upload valid health cards, which are read and have their text parsed. It can then display scatter plots of the information given so that users may analyze them to discover notable trends or patterns. We have set up the system to target 3 of the themes presented in the hackathon, namely Diversity, Health, and Discovery. We incorporated concepts such as machine learning and optical character recognition in our application to give it more practical usage as well as display our skillset. ## How we built it We built the front-end of the project using React, Tailwind, Redux, and Material UI, with JSON Web Tokens for authentication. The back-end, on the other hand, was built using Flask and MongoDB to record and dispatch the information required for our application to function. Our backend also employed the pandas and matplotlib libraries to aid us in visualizing and representing the data that the user collects, as well as implementing machine learning with the help of the OpenCV and EasyOCR libraries. ## Challenges we ran into For our analysis page, we ran into implementation problems as well as semantic questions about how to gauge performance. We were uncertain what thresholds would need to be crossed for a doctor to be considered "underperforming", as we do not have any medical background or exposure to how often medical procedures tend to fail or succeed, so we settled on measuring each doctor's performance relative to that of the other doctors at the hospital. Our group agreed this was a reasonable compromise: if a doctor's success rate (the number of successes in a procedure divided by the total number of times that doctor performed the procedure) falls below the hospital's average success rate minus one standard deviation, it is safe to declare that they are underperforming. ## Accomplishments that we're proud of We are proud that within the span of Hack The Valley, we were able to develop an application that leverages machine learning in order to detect anomalies in the healthcare system. ## What we learned We conducted research on and learned about several machine learning algorithms. We also learned about optical character recognition. ## What's next for ClinicAnalytic Caching the data required for the machine learning algorithms using Redis in order to improve the application's scalability.
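A sketch of the underperformance rule just described: a doctor is flagged when their per-procedure success rate falls below the hospital's mean success rate minus one standard deviation. The column names are assumptions:

```python
import pandas as pd

def flag_underperformers(records: pd.DataFrame) -> pd.DataFrame:
    """records: one row per procedure performed, with columns
    'doctor', 'procedure', and boolean 'success'."""
    rates = (records.groupby(["procedure", "doctor"])["success"]
                    .mean()                      # per-doctor success rate
                    .rename("success_rate")
                    .reset_index())
    stats = rates.groupby("procedure")["success_rate"].agg(["mean", "std"])
    rates = rates.join(stats, on="procedure")
    rates["underperforming"] = rates["success_rate"] < (rates["mean"] - rates["std"])
    return rates
```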
## Inspiration Data analytics can be **extremely** time-consuming. We strove to create a tool utilizing modern AI technology to generate analyses such as trend recognition on user-uploaded datasets. The inspiration behind our product stemmed from the growing complexity and volume of data in today's digital age. As businesses and organizations grapple with increasingly massive datasets, the need for efficient, accurate, and rapid data analysis became evident. We even saw this in the work of one of our sponsors, CapitalOne, which has volumes of financial transaction data that are very difficult to parse manually, or even programmatically. We recognized the frustration many professionals faced when dealing with cumbersome manual data analysis processes. By combining **advanced machine learning algorithms** with **user-friendly design**, we aimed to empower users from various domains to effortlessly extract valuable insights from their data. ## What it does On our website, a user can upload their data, generally in the form of a .csv file, which is then sent to our backend processes. These backend processes utilize Docker and MLBot to train an LLM which performs the proper data analyses. ## How we built it The front-end was simple: we created the platform using Next.js and React.js and hosted it on Vercel. The back-end was created using Python, employing technologies such as Docker and MLBot to perform the data analyses and return charts, which were then rendered on the front-end using ApexCharts.js. ## Challenges we ran into * It was one of our first times working in real time with multiple people on the same project. This advanced our understanding of how Git's features worked. * There was difficulty getting the Docker server to be publicly available to our front-end, since we had our server locally hosted on the back-end. * Even once it was publicly available, it was difficult to figure out how to actually connect it to the front-end. ## Accomplishments that we're proud of * We were able to create a full-fledged, functional product within the allotted time we were given. * We utilized our knowledge of how APIs work to incorporate several of them into our project. * We worked positively as a team even though we had not met each other before. ## What we learned * Learned how to incorporate multiple APIs into one product with Next. * Learned a new tech stack. * Learned how to work simultaneously on the same product with multiple people. ## What's next for DataDaddy ### Short Term * Broaden applicability to more types of datasets and statistical analyses. * Add more compatibility with SQL/NoSQL commands from natural language. * Attend more hackathons :) ### Long Term * Minimize the amount of work workers need to do for their data analyses, almost creating a pipeline from data to results. * Have the product be able to interpret what type of data it has (e.g. financial, physical, etc.) to perform the most appropriate analyses.
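A stripped-down sketch of the upload-and-analyze endpoint. The real backend hands the dataset to the Docker-hosted, MLBot-trained model; here pandas summary statistics stand in so the example is self-contained, and the route name is an assumption:

```python
import io

import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/analyze", methods=["POST"])
def analyze():
    csv_file = request.files["dataset"]              # the user's uploaded .csv
    df = pd.read_csv(io.BytesIO(csv_file.read()))
    summary = {
        "rows": len(df),
        "columns": list(df.columns),
        "numeric_summary": df.describe().to_dict(),  # stand-in for the LLM analysis
    }
    return jsonify(summary)                          # front-end charts consume this
```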
## Inspiration A major challenge that hospitals and doctors face daily around the world is collecting the necessary medical history of the patient to take the appropriate measures for treating them accurately. The lack of information might be due to patient negligence, the patient being in a major accident, important details hidden amidst a lot of documents, and so on. This poses a serious health threat to the patient. From speaking with a healthcare mentor at the hackathon, we learned how inefficient the current process of collecting patients' medical documents is. From learning about this, we knew that there was a need for a platform that could aid in the collection of medical history. Another problem we discovered was how people search for their healthcare-related doubts on the internet, which leads to a lot of misinformation. So there is a need for a platform that provides verified, correct healthcare information as well. ## What it does The main features of our web application are: * Instant access to critical patient information is provided to the medical staff. * One stop for all the past medical documents and records of the patient. * A face recognition feature to search for the patient in case of an accident. * Online prescriptions can be sent to the patient by verified medical staff only. * Users can ask relevant medical questions and get them answered by verified medical professionals. * Sends important information about the patient to the patient's emergency contact number. ## How we built it Our project was built using the Python web framework Django, as well as Bootstrap for formatting (and the standard web stack of HTML, CSS, etc.). The website has been built to be as generic and as scalable as possible, given the time constraints. The first step was to formalise the UI in Canva, to ensure we had a cohesive understanding of the functionality of each page as well as the access requirements. This process helped shape the further development. We chose to create a limited set of pages to demonstrate the minimum viable product and showcase our idea, but the website can be easily expanded. In addition, security was an important concern. The Doctors area can only be accessed by users who are a part of the Doctors group - a role which we would aim to have verified in the future. These are the only users that can view patient data. Patients can only see their own data, and files sent to them by a doctor. They cannot access anything else. This access is controlled by assigning the users to group types and using Django decorators to verify a user's group and whether they have been authenticated (see the sketch after this write-up). This protects all pages with confidential information and can be easily expanded. We also made a third user group - administrators - which we envision to include the group of people who would need access to individual files, but should not have access to the patient's full medical information. For example, pharmacists. ## Challenges we ran into The most challenging aspect was understanding which of the features we were trying to implement are actually clinically relevant across countries. We brainstormed a lot to finalise our ideas, took help from mentors, and researched to decide what we would finally work on to make the product as useful as possible to everyone around the world. ## Accomplishments that we're proud of We have created a product that will help medical professionals immensely around the globe.
It provides a patient's whole medical history at a glance; gathering it is otherwise a frustrating task. This will not only help the doctors give the necessary care quickly but also improve the treatment provided. The application provides a sustainable and secure way of maintaining medical records and prescriptions. This feature helps the doctors instantly transfer prescriptions and other documents to the patient. The patient can show these documents directly to the pharmacist or the relevant authority. We also help people across countries get correct medical advice only from verified medical professionals. This will lead to a significant decrease in the healthcare misinformation that spreads through social media. ## What we learned We were able to gain more knowledge about the current state of healthcare thanks to our mentor for this hackathon. We were also able to learn a lot more about Django from working on the project. ## What's next for HealthRepo The next components for the project would be creating a summary of key medical documents using OCR and Natural Language Processing to quickly provide information about a document. Another addition to the project would be a QR-code based way of sending information about documents to emphasize security for the patient and doctor and also provide instant verification of the document.
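A sketch of the group-based access control described in the write-up, using Django's standard auth decorators. The group name mirrors the text; the view itself is an assumption:

```python
from django.contrib.auth.decorators import login_required, user_passes_test
from django.shortcuts import render

def in_group(name):
    """Build a decorator that only admits members of the named auth group."""
    return user_passes_test(lambda u: u.groups.filter(name=name).exists())

@login_required
@in_group("Doctors")
def patient_records(request, patient_id):
    # Only authenticated members of the Doctors group reach this point;
    # everyone else is redirected by the decorators above.
    return render(request, "records.html", {"patient_id": patient_id})
```

The same `in_group` helper extends naturally to the patients and administrators groups mentioned above.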
partial
# Itinera - Manahil and Avani ## Inspiration As avid travelers and planners, we’re all too familiar with the struggles of juggling several windows with hundreds of tabs, all while going back and forth between spreadsheets and websites collecting destinations of interest. The monotonous copy-pasting process and grouping of locations were tedious and took hours, even days, of research, leading to the ideation behind Itinera. ## What it does Itinera is a Chrome extension that scrapes the websites the User opens once they click ‘START’, collecting content from the web page. Through the extension’s plugin code, the text data is sent to the OpenAI API, where the plugin prompts ChatGPT to extract specific locations and attractions mentioned. These results are returned to the User. The User selects their locations of interest through the extension and may select ‘SAVE’, which stores all locations in a database, allowing them to apply the extension on multiple tabs at once. When they are satisfied with their selected locations across all tabs, they may click “GENERATE” and enter the duration of their stay. They are then redirected to view their itinerary on ChatGPT, where they can work directly with the GenAI to edit their itinerary. No further web page data is collected by the extension. ## How we built it We began by breaking down the overall project into manageable pieces we could start hacking away at - Phase 1, the scraping of the webpage by the Chrome Extension and connecting GenAI platforms to extract locations, and Phase 2, the importing of the locations and preferences into the GenAI platform to generate an itinerary. For Phase 1, we began by coding the component of the Chrome Extension that reads from the current webpage and saves the text. This required exploring various methods of creating an efficient, well-packaged Chrome Extension in JavaScript and HTML and understanding the ways the extension interacts with a web page. With the extension successfully extracting all the text, we explored how Chrome Extensions could connect to ChatGPT for Phase 1. This involved fine-tuning our prompts to ChatGPT to efficiently list locations in a streamlined manner by working with the Chrome extension code to submit the prompts (a sketch of this extraction step follows this write-up). In the meantime, we worked on Phase 2 of the project, taking the collected locations and building a personalized itinerary based on the duration of the stay. This required further prompt fine-tuning as well. We continued to research how to weave all elements of the project together to form a single streamlined process, from opening a web page and clicking “START” to building the final itineraries with the power of GenAI. ## Challenges we ran into As early developers ourselves, we attended our first hackathon with hopes of creating technology we would be passionate about - both to develop and to use. With our idea in mind, we set out exploring and researching all the necessary technologies and connections, while also focusing on making the user experience as smooth and as intuitive as possible. Understanding and experimenting with these technologies was a clear learning curve, where every step of the way required new documentation to read, more tutorials to watch, and new technology to play with. Further, navigating the intricate world of OpenAI and various APIs was a daunting task, alongside the pressure of building such a complex project from the ground up.
## Accomplishments that we're proud of Yet, beyond these challenges, this experience has provided us with the opportunity to move beyond our comfort zone and put all our effort into making a solution that we are highly passionate about. We are proud that, even without a fully deployed solution now, we were able to make strides in increasing our technical knowledge and have a deeply fleshed-out plan to bring this solution successfully to reality soon. We are also deeply satisfied with our work together as a team, and how we could combine our skills and put them to the test when building our project. We are excited to bring this project to reality and are highly proud of the progress we have made so far. ## What we learned As first-time hackers, we learned the effort it takes to create a proof of concept, taking the user experience into account. We learned with every challenge we ran into, using our passion for the concept to carry us into late-night progress checks and demos. ## What's next for Itinera - Travel AI We have several potential features we’d like to implement in Itinera, all for greater convenience, applications, and improvement. For example, we aim to incorporate Google’s MyMaps feature into the extension for quick, organized visualization of itineraries. Further, we will develop this technology to easily collect destinations of interest from liked/favorited posts on social media such as Instagram. On the existing extension, we will add the ability to track the sentiment of an article’s opinions on each location and assign a score to each destination accordingly. Further refinements include the ability to save/reload and revisit previously saved locations and to allow various levels of planning, from simply grouping locations through planning to the minute.
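An illustrative Python sketch of Itinera's Phase 1 step: page text in, a clean list of locations out. The extension itself does this from JavaScript against the OpenAI API; the prompt wording and model choice here are assumptions:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_locations(page_text: str) -> list[str]:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{
            "role": "user",
            "content": ("List every specific travel destination or attraction "
                        "mentioned in the text below, one per line, nothing else:\n\n"
                        + page_text[:8000]),  # truncate to stay within context limits
        }],
    )
    lines = completion.choices[0].message.content.splitlines()
    return [line.strip("-* ").strip() for line in lines if line.strip()]
```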
## Inspiration Often, bucket lists don’t come to fruition because people don’t know how to execute their plans. We wanted to create an app to inspire young adults to achieve the goals they had imagined were too far out of reach. Let BuckIt guide your journey. ## What it does Introducing the **ultimate** bucket list planner app - designed to turn your wildest dreams into reality. With our user-friendly interface, you can easily input and save all of your bucket list ideas in one convenient place. From there, our advanced algorithm generates personalized itineraries for you to follow, taking into account factors such as budget, time, and location. Whether you're planning a once-in-a-lifetime trip or trying to learn a new skill, our app has got you covered. With features such as mapping, budget tracking, and even suggestions for local events and activities, you'll have all the tools you need to make your bucket list dreams a reality. ## How we built it * The front end was built with React Native to support mobile iOS usage as the primary medium. More specifically, we built it with Expo. This project has a Flask/Python backend with the intention of deploying to Heroku. The backend gathers user input and uses NLP and GPT-3 to analyze the query, assign it to a specific category, and then uses the Google Custom Search API to find local itineraries for said activity. The UI/UX was completed in Figma. ## Challenges we ran into * Some challenges we ran into include React Native compatibility issues and difficulty implementing the front-end UI in a timely manner. In addition, we were unable to complete the majority of the front-end screens in the working demo. A majority of the development process was fine-tuning and enhancing our algorithms to provide the best responses for a wide variety of bucket list items. The user flow was also difficult to pinpoint as we were attempting to cover a wide variety of AI queries. ## Accomplishments that we're proud of * A proud accomplishment we had was the successful implementation of backend services, such as piping GPT-3 output into the Google Custom Search API in order to create a step-by-step personalized itinerary for the user. * Our UX/UI design team greatly succeeded in making a clean, functional, user-friendly design. * Completion of a working iOS Expo demo. ## What we learned Our team felt inexperienced in the realm of mobile/React Native development. Towards the end of the project, each contributor grew in their own skill sets, such as designing styled UI, mobile implementation, backend API integration, etc. ## What's next for BuckIt * Implementation of social platform features that allow other people to view and share bucket list progress and remaining activities. * Fine-tuning the GPT-3 model to better suit itinerary planning. * Integration of travel service APIs to gather stronger pricing models and live planning and adjustments. * Finalizing all screens into a working MVP.
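A sketch of the itinerary-lookup step: once GPT-3 has assigned the bucket-list item a category, the category is queried against the Google Custom Search JSON API. The endpoint is Google's documented one; the key, engine ID, and query shape are placeholders:

```python
import requests

def find_itineraries(category: str, location: str) -> list[str]:
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={
            "key": "GOOGLE_API_KEY",   # placeholder API key
            "cx": "SEARCH_ENGINE_ID",  # placeholder custom engine id
            "q": f"{category} itinerary near {location}",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return [item["link"] for item in resp.json().get("items", [])]
```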
## Inspiration One of our team members was stunned by the number of colleagues who became self-described "shopaholics" during the pandemic. Understanding their wishes to return to normal spending habits, we thought of a helper extension to keep them on the right track. ## What it does Stop impulse shopping at its core by incentivizing saving rather than spending with our Chrome extension, IDNI aka I Don't Need It! IDNI helps monitor your spending habits and gives recommendations on whether or not you should buy a product. It also suggests local small-business alternatives when they exist, so you can help support your community! ## How we built it React front-end, MongoDB, Express REST server ## Challenges we ran into Most popular extensions have company deals that give them more access to product info; we researched and found the Rainforest API instead, which gives us the essential product info that we needed for our decision algorithm. However, this proved costly, as each API call took upwards of 5 seconds to return a response. As such, we opted to process each product page manually to gather our metrics. ## Completion In its current state, IDNI is able to perform CRUD operations on our user information (allowing users to modify their spending limits and blacklisted items on the settings page) with our custom API, recognize Amazon product pages and pull the required information for our pop-up display, and dynamically provide recommendations based on these metrics. ## What we learned Nobody on the team had any experience creating Chrome extensions, so it was a lot of fun to learn how to do that. Along with creating our extension's UI using React.js, this was a new experience for everyone. A few members of the team were also able to spend the weekend learning how to create an express.js API with a MongoDB database, all from scratch! ## What's next for IDNI - I Don't Need It! We plan to look into banking integration, compatibility with a wider array of online stores, cleaner integration with small businesses, and a machine learning model to properly analyze each metric individually, with one final pass over these various decision metrics to output our final verdict. Then finally, publish to the Chrome Web Store!
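A toy version of the recommendation rule: compare the product's price against the user's spending limit and blacklist. The real extension also weighs the metrics scraped from the product page; the messages and thresholds here are invented for illustration:

```python
def should_buy(price: float, category: str,
               monthly_limit: float, spent_this_month: float,
               blacklist: set[str]) -> str:
    """blacklist is assumed to hold lowercase category names."""
    if category.lower() in blacklist:
        return "You blacklisted this category - I Don't Need It!"
    over = spent_this_month + price - monthly_limit
    if over > 0:
        return f"This would put you ${over:.2f} over budget this month."
    return "Within budget, but check the local small-business alternatives first!"
```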
losing
## Inspiration As college students, we didn't know anything, so we thought about how we could change that. One way was by being smarter about the way we take care of our unused items. We all felt that our unused items could be used in better ways through sharing with other students on campus. All of us shared our items on campus with our friends, but we felt that there could be better ways to do this. However, we were truly inspired after one of our team members and close friend, Harish, an Ecological Biology major, informed us about the sheer magnitude of trash and pollution in the oceans and the surrounding environments. Also, as the National Ocean Science Bowl Champion, Harish was truly able to educate the rest of the team on how areas such as the Great Pacific Garbage Patch affect wildlife and oceanic ecosystems, and the effects we face from this on a daily basis. With our passions for technology, we wanted to work on an impactful project that caters to a true need for sharing that many of us have, while focusing on maintaining sustainability. ## What it does The application allows users to list various products that they want to share with the community and to request items. If one user sees a request they want to provide a tool for or an offer they find appealing, they’ll start a chat with the other user through the app to request the tool. Furthermore, the app sorts and filters by location to make it convenient for users. Also, by allowing for community building through the chat messaging, we want to use the app to foster tight-knit campus communities. ## How we built it We first focused on wireframing and coming up with ideas. We utilized brainstorming sessions to come up with unique ideas and then split our team based on our different skill sets. Our front-end team worked on coming up with wireframes and creating designs using Figma. Our backend team worked on a whiteboard, coming up with the system design of our application server, and together the front-end and back-end teams worked on coming up with the schemas for the database. We utilized the MERN technical stack in order to build this. Our front-end uses ReactJS for the web app, our back-end utilizes ExpressJS and NodeJS, and our database utilizes MongoDB. We also took plenty of advice and notes, not only from mentors throughout the competition, but also from our fellow hackers. We really went around asking for others’ advice on our web app and our final product to truly flesh out the best product that we could. We had a customer-centric mindset and approach throughout the full creation process, and we wanted to make sure that what we were building met a true need and was truly wanted by the people. Taking advice from these various sources helped us frame our product and come up with features. ## Challenges we ran into Integration challenges were some of the toughest for us. Making sure that the backend and frontend could communicate well was really tough, so to minimize the difficulties, we designed the schemas for our databases together and made sure we were all on the same page. Working together this way helped us stay truly efficient. ## Accomplishments that we're proud of We’re really proud of the product's user interface. We spent quite a lot of time working on the design (through Figma) before creating it in React, so we really wanted to make sure that the product we are showing is visually appealing.
Furthermore, our backend is also something we are extremely proud of. Our backend system has many unconventional design choices (for example, passing common IDs throughout the system) in order to avoid more costly backend operations. Overall, latency, cost, and ease of use for our frontend team were big considerations when designing the backend system. ## What we learned We learned new technical skills and new soft skills. On the technical side, our team became much stronger with the MERN stack. Our front-end team learned many new skills and components through React, and our back-end team learned a lot about Express. Overall, we also learned quite a lot about working as a team and integrating the front end with the back end, improving our software engineering skills. The soft skills we learned concern how to present a product idea and its implementation. We worked quite a lot on our video and our final presentation to the judges, and after speaking with hackers and mentors alike, we were able to use the collective wisdom we gained to create a video that truly shows our interest in designing products with real social impact. Overall, we felt that we were able to convey our passion for building social impact and sustainability products. ## What's next for SustainaSwap We’re looking to deploy the app in local communities, as we’re at the point of deployment currently. We know there exists a clear demand for this in college towns, so we’ll first be starting off on our local campus in Philadelphia. Also, after speaking with many Harvard and MIT students on campus, we feel that Cambridge will also benefit, so we will shortly launch in the Boston/Cambridge area. We will be looking to expand to other college towns and use this to work on the scalability of the product. Ideally, we also want to push the ideas of sustainability, so we would potentially use the platform (if it grows large enough) to host fundraisers and fundraising activities that give back and fight climate change. We essentially want to expand city by city, community by community, because this app also focuses quite a lot on community and we want to build a community-centric platform. We want this platform to build tight-knit communities within cities that connect people with their neighbors while also promoting sustainability.
## Inspiration We thought it would be nice if, for example, while working in the Computer Science building, you could send out a little post asking for help from people around you. It would also enable greater interconnectivity between people at an event without needing to subscribe to anything. ## What it does Users create posts that are then attached to their location, complete with a picture and a description. Other people can then view them in two ways. **1)** On a map, with markers indicating the location of each post that can be tapped on for more detail. **2)** As a live feed, with the details of all the posts that are in your current location. The posts don't last long, however, and only posts within a certain radius are visible to you. ## How we built it Individual pages were built using HTML, CSS, and JS, which would then interact with a server built using Node.js and Express.js. The database, which used CockroachDB, was hosted with Amazon Web Services. We used PHP to upload images onto a separate private server. Finally, the app was packaged using Apache Cordova to be runnable on the phone. Heroku allowed us to host the Node.js portion in the cloud. ## Challenges we ran into Setting up and using CockroachDB was difficult because we were unfamiliar with it. We were also not used to using so many technologies at once.
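A sketch of the "only posts within a certain radius" filter using the haversine formula. The actual server does this in Node.js against CockroachDB, so this Python version is purely illustrative:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_KM = 6371.0

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def nearby_posts(posts, here, radius_km=1.0):
    """posts: iterable of dicts with 'lat' and 'lon' keys; here: (lat, lon)."""
    return [p for p in posts
            if distance_km(here[0], here[1], p["lat"], p["lon"]) <= radius_km]
```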
## Inspiration As a team, we had a collective interest in sustainability and knew that if we could focus on online consumerism, we would have much greater potential to influence sustainable purchases. We also were inspired by Honey -- we wanted to create something that is easily accessible across many websites, with readily available information for people to compare. Lots of people we know don’t take the time to look for sustainable items. People typically say that if they had a choice between sustainable and non-sustainable products around the same price point, they would choose the sustainable option. But consumers often don't make the deliberate effort themselves. We’re making it easier for people to buy sustainably -- placing the products right in front of consumers. ## What it does greenbeans is a Chrome extension that pops up when users are shopping for items online, offering similar alternative products that are more eco-friendly. The extension also displays a message if the product meets our sustainability criteria. ## How we built it Designs in Figma, Bubble for backend, React for frontend. ## Challenges we ran into Three beginner hackers! It was the first time at a hackathon for three of us, and for two of those three it was our first time formally coding on a product. Ideation was also challenging: deciding which broad issue to focus on (nutrition, mental health, environment, education, etc.) and determining the specifics of the project (how to implement it, what audience/products we wanted to focus on, etc.) ## Accomplishments that we're proud of Navigating Bubble for the first time, multiple members coding in a product setting for the first time... Pretty much creating a solid MVP with a team of beginners! ## What we learned In order to ensure that a project is feasible, at times it’s necessary to scale back features and implementation to account for constraints. Especially when working on a team with 3 first-time hackathon-goers, we had to ensure we were working in spaces where we could balance learning with making progress on the project. ## What's next for greenbeans Lots to add on in the future: Systems to reward sustainable product purchases. Storing data over time and tracking sustainable purchases. Incorporating a community aspect, where small businesses can link their products or websites to certain searches. Including information on best prices for the various sustainable alternatives, or indicators that a product is being sold by a small business. More tailored or specific product recommendations that recognize style, scent, or other niche qualities.
partial
## Inspiration Inspired by the challenges posed by complex and expensive tools like Cvent, we developed EventDash: a comprehensive event platform that handles everything from start to finish. Our intuitive AI simplifies the planning process, ensuring it's both effortless and user-friendly. With EventDash, you can easily book venues and services, track your budget from beginning to end, and rely on our agents to negotiate pricing with venues and services via email or phone. ## What it does EventDash is an AI-powered, end-to-end event management platform. It simplifies planning by booking venues, managing budgets, and coordinating services like catering and AV. A dashboard shows costs and progress in real time. With EventDash, event planning becomes seamless and efficient, transforming complex tasks into a user-friendly experience. ## How we built it We designed a modular AI platform using Langchain to orchestrate services. AWS Bedrock powered our AI/ML capabilities, while You.com enhanced our search and data retrieval. We integrated Claude, Streamlit, and Vocode for NLP, UI, and voice features, creating a comprehensive event planning solution. ## Challenges we ran into We faced several challenges during the integration process. We encountered difficulties integrating multiple tools, particularly with some open-source solutions not aligning with our specific use cases. We are actively working to address these issues and improve the integration. ## Accomplishments that we're proud of We're thrilled about the strides we've made with EventDash. It's more than just an event platform; it's a game-changer. Our AI-driven system redefines event planning, making it a breeze from start to finish. From booking venues to managing services, tracking budgets, and negotiating pricing, EventDash handles it all seamlessly. It's the culmination of our dedication to simplifying event management, and we're proud to offer it to you. **EventDash could potentially achieve a market cap in the range of $2 billion to $5 billion in the B2B sector alone**; with broader reach and a larger number of potential users, the market cap could be higher still. ## What we learned Our project deepened our understanding of AWS Bedrock's AI/ML capabilities and Vocode's voice interaction features. We mastered the art of seamlessly integrating 6-7 diverse tools, including Langchain, You.com, Claude, and Streamlit. This experience enhanced our skills in creating cohesive AI-driven platforms for complex business processes. ## What's next for EventDash We aim to become the DoorDash of event planning, revolutionizing the B2B world. Unlike Cvent, which offers a more traditional approach, our AI-driven platform provides personalized, efficient, and cost-effective event solutions. We'll expand our capabilities, enhancing AI-powered venue matching, automated negotiations, and real-time budget optimization. Our goal is to streamline the entire event lifecycle, making complex planning as simple as ordering food delivery.
## Inspiration As programmers, we collectively realized how much we disliked the process of creating a pitch site/landing page to explain our project - it took away precious time from working on the actual product! We recognized our shared need for a quick landing page solution, which would sum up the basics of our project, idea, and solution for any viewer to understand. ## What it does MyLandingPage creates a landing page site for an emerging project within seconds. Based on an elevator pitch of a project, MyLandingPage uses Cohere's large language model to generate informative copy for the website (the headline, product description, benefits/solutions, and a call to action). ## How we built it We used MongoDB, Express, React, Node, TypeScript, Cohere, Google Cloud Platform, CI/CD, and App Engine in order to create our final product. ## Challenges we ran into We struggled to think of an idea early on in our coding process. We initially wanted to create a texting device with vision-tracking glasses, or use NLP to summarize complex textbooks into simpler text, and didn't come up with our final idea until Saturday morning. We also struggled to delegate all aspects of the project among our team members and manage our time efficiently in order to get everything done before the deadline. However, we got better at settling on our main idea/problem statement and figured out how to allocate roles efficiently among the four team members. ## Accomplishments that we're proud of * Finishing a prototype for demo day. * Starting off our coding by defining the problem and empathizing with our users, rather than starting off with the software product. * Being able to whip together a successful NLP model in such a short amount of time. * Successfully creating a clean user interface for the prototype. ## What we learned It's important to have a plan early on in the development process: not just for our project itself, but for who is ultimately responsible for which aspect of the project, and how long we want to allocate to each aspect of the project. It's also a good idea to refer back to our team members' areas of expertise when we're trying to create a project in such a short period of time (e.g. one of our members has past experience working with NLP text generation, so we should have recognized this as a competitive advantage within our project earlier)! ## What's next for MyLandingPage 1. The option to customize the landing page by making edits/additions to the text and images, altering the placement of elements on the website, etc. 2. Giving users the option to personalize their domain, allowing for the shareability of the site. 3. A slideshow/pitch deck generation feature as an add-on to our landing site generation, to allow hackers + entrepreneurs to easily pitch to investors.
## Inspiration With multiple members of our team having been a part of environmental conservation initiatives and even running some of our own, an issue we have continually recognized is the difficulty in reaching out to community members that share the same vision. Outside of a school setting, it's difficult to easily connect with initiatives and to find others interested in them, and so we wanted to solve that issue by centralizing a space for these communities. ## What it does The demographic here is two-fold. Users that are interested in volunteering have the capability of logging in, and can use their provided location to narrow down nearby events to a radius of their choosing. This makes sorting through hundreds of events quick and easy, and provides a clear pathway to convert the desire to help into tangible change. Users interested in organizing their own events can create accounts and use a simple process to create an event with all its information and post it both to their own page's feed and to the main initiatives list that volunteers are able to browse through. With just a few clicks, an event can be made available to the many volunteers eager to make a difference. ## How we built it As this project is a website, and many of our team are beginners, we worked mostly with HTML, CSS, and JS. We also integrated Bootstrap to help with styling and formatting for the pages to improve user experience. ## Challenges we ran into As relative beginners, one challenge we ran into was working with JavaScript files across multiple HTML pages, and finding that parts of our functionality were only accessible using Node.js. To work around this, we focused on restructuring our website pages to ensure easier connections and finding ways to make our code simpler and more comprehensive. ## Accomplishments that we're proud of We're proud of the community that we built with each other during this hackathon. We truly had so much passion for making this a working product, and loved our logo so much we even made stickers! On a technical level, as first-time users of JavaScript, we're particularly proud of our work with connecting HTML input, using JavaScript for string handling, and then creating new elements on the website. Being able to collect input initiatives into our database and display them with live updates was, for us, the most difficult technical work, but also by far the most rewarding. ## What we learned For our team as a whole, the biggest takeaway has been a strongly renewed interest in web development and the intricacies behind connecting so many different aspects of functionality using JavaScript. ## What's next for BranchOut Moving forward, we're looking to integrate Node.js to supplement our implementation, and to increase connectivity between the different inputs available. We truly believe in our mission to promote nature conservation initiatives, and hope to further expand this into an app to increase accessibility and improve user experience.
partial
## Inspiration We wanted to make a computer vision app that detects whether a fruit is good to eat based on its discolouration/irregularities, after picking up a few discoloured/bruised oranges at lunch on the first day of McHacks. ## What it does It uses scikit-image to detect edges using the Canny algorithm, which it then filters with a Gaussian distribution to subtract noise. It uses the edges to create a mask to filter out the background, which it feeds into a blob detection (difference of Gaussian, or DoG) method with specific parameters to extract the moldy blobs/irregularities. The final result plots the original image, the edges detected on it, the mask applied to the edge detection, and the blobs found using the DoG method. The backend is done in Python and the frontend is a basic UI for uploading JPEG/PNG images. ## Challenges we ran into Edge inconsistencies are harder to detect than we thought. We originally wanted to determine how far a bruised orange deviates from a 'perfect' orange shape. ## What's next for produce sort Making a better user interface (an actual landing page) and maybe using TensorFlow to get a better idea of food that is safe to consume based on its appearance.
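A condensed sketch of the pipeline just described: Canny edges with Gaussian smoothing, a filled mask to drop the background, then difference-of-Gaussian blob detection. The parameter values are guesses, not the team's tuned ones:

```python
from scipy import ndimage as ndi
from skimage import color, feature, io

def find_blemishes(path: str):
    image = color.rgb2gray(io.imread(path))
    edges = feature.canny(image, sigma=2.0)  # sigma controls the Gaussian smoothing
    mask = ndi.binary_fill_holes(edges)      # fill the fruit outline to mask the background
    masked = image * mask                    # zero out everything outside the fruit
    blobs = feature.blob_dog(1.0 - masked,   # invert so dark blemishes become bright blobs
                             min_sigma=3, max_sigma=15, threshold=0.1)
    return blobs                             # one (y, x, sigma) row per detected blob
```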
## Inspiration We're college students who are constantly hungry after a long day of coding, but on our college budget, we often have to make do with the ingredients that we already have in our pantry. It can be easy to get stuck in a cooking routine. We're here to break you out of that with OpenCFood! ## What it does First, take a picture of all the ingredients you have and are willing to use. Send that picture to our app, and our program will find which ingredients you have using object recognition, then look up recipes that use the ingredients you have. Then, Amazon Alexa will help step you through the recipe you'd like to try. It's that simple! ## How we built it The image processing is implemented through the OpenCV Python SDK. Our front-end is built with Google Polymer. We also use Amazon Alexa and the Spoonacular API to look up recipes. ## Challenges we ran into We had a much bigger vision for this app, but we soon realized that streaming live video and AR were not realistic for our time frame. However, we pivoted many times and found that this iteration of our idea was the most interesting, so we ran with it. ## Accomplishments that we're proud of Implementing the OpenCV object recognition. That was hard. ## What we learned We learned to be ambitious in the beginning but know when to take a step back to prioritize quality over quantity. Also, we learned many new technologies, including Google Polymer.
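A sketch of the recipe lookup once the ingredients have been recognized. `findByIngredients` is Spoonacular's documented endpoint; the key is a placeholder:

```python
import requests

def recipes_for(ingredients: list[str]) -> list[str]:
    resp = requests.get(
        "https://api.spoonacular.com/recipes/findByIngredients",
        params={
            "ingredients": ",".join(ingredients),  # e.g. "apples,flour,sugar"
            "number": 5,                           # how many recipes to return
            "apiKey": "SPOONACULAR_API_KEY",       # placeholder
        },
        timeout=10,
    )
    resp.raise_for_status()
    return [recipe["title"] for recipe in resp.json()]
```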
## Inspiration My father put me in charge of his finances and in contact with his advisor, a young, enterprising financial consultant eager to make large returns. That might sound pretty good, but someone financially conservative like my father doesn't really want that kind of risk at this stage of his life. The opposite happened to my brother, who has time to spare and money to lose, but had a conservative advisor that didn't have the same fire. Both stopped their advisory services, but that came with its own problems. The issue is that most advisors have a preferred field but knowledge of everything, which makes the unknowing client susceptible to settling with someone who doesn't share their goals. ## What it does Resonance analyses personal and investment traits to make the best matches between an individual and an advisor. We use basic information any financial institution has about their clients and financial assets, as well as past interactions, to create a deep and objective measure of interaction quality and maximize it through optimal matches. ## How we built it The whole program is built in Python, using several libraries for gathering financial data, processing it, and building scalable models on AWS. The main differentiator of our model is its full utilization of past data during training to make analyses more holistic and accurate. Instead of going with a classification solution or neural network, we combine several models to analyze specific user features and classify broad features before the main model, where we build a regression model for each category. ## Challenges we ran into The group member crucial to building a front-end could not make it, so our designs are not fully interactive. We also had much to code but not enough time to debug, which left the software unable to fully work. We spent a significant amount of time figuring out a logical way to measure the quality of interaction between clients and financial consultants. We came up with our own algorithm to quantify non-numerical data, as well as rating clients' investment habits on a numerical scale. We assigned a numerical bonus to clients who consistently invest at a certain rate. The mathematics behind Resonance was one of the biggest challenges we encountered, but it ended up being the foundation of the whole idea. ## Accomplishments that we're proud of Learning a whole new machine learning framework using SageMaker, and crafting custom, objective algorithms for measuring interaction quality while fully utilizing past interaction data during training through an innovative approach to categorical model building. ## What we learned Coding might not take that long, but making it fully work takes just as much time. ## What's next for Resonance Finish building the model and possibly try to incubate it.
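A minimal sketch of the "one regression model per category" idea: client/advisor pairs are first bucketed into broad categories, then each bucket gets its own regressor predicting interaction quality. The features, buckets, and choice of Ridge regression are assumptions, not the team's actual models:

```python
import numpy as np
from sklearn.linear_model import Ridge

def train_per_category(X: np.ndarray, y: np.ndarray, categories: np.ndarray) -> dict:
    """X: features of client/advisor pairs; y: interaction-quality scores;
    categories: one broad-category label per row."""
    models = {}
    for cat in np.unique(categories):
        idx = categories == cat
        models[cat] = Ridge(alpha=1.0).fit(X[idx], y[idx])  # one model per category
    return models

def predict_quality(models: dict, x: np.ndarray, category) -> float:
    return float(models[category].predict(x.reshape(1, -1))[0])
```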
losing
## Inspiration Most of us have had to visit loved ones in the hospital before, and we know that it is often not a great experience. The process can be overwhelming between knowing where they are, which doctor is treating them, what medications they need to take, and worrying about their status overall. We decided that there had to be a better way and that we would create that better way, which brings us to EZ-Med! ## What it does This web app changes the logistics involved when visiting people in the hospital. Some of our primary features include a home page with a variety of patient updates that give the patient's current status based on recent logs from the doctors and nurses. The next page is the resources page, which is meant to connect the family with the medical team assigned to your loved one and provide resources for any grief or hardships associated with having a loved one in the hospital. The map page shows the user how to navigate to the patient they are trying to visit and then back to the parking lot after their visit, since we know that these small details can be frustrating during a hospital visit. Lastly, we have a patient-update screen and a login screen; both run on a database we set up, which populates the information on the patient-updates screen and validates the login data. ## How we built it We built this web app using a variety of technologies. We used React to create the web app and MongoDB for the backend/database component of the app. We then decided to utilize the MappedIn SDK to integrate our map service seamlessly into the project. ## Challenges we ran into We ran into many challenges during this hackathon, but we learned a lot through the struggle. Our first challenge was trying to use React Native. We tried this for hours, but after much confusion over redirect challenges, we had to completely change course around halfway in :( In the end, we learned a lot about the React development process, came out of this event much more experienced, and built the best product possible in the time we had left. ## Accomplishments that we're proud of We're proud that we could pivot after much struggle with React. We are also proud that we decided to explore the area of healthcare, which none of us had ever interacted with in a programming setting. ## What we learned We learned a lot about React and MongoDB, two frameworks we had minimal experience with before this hackathon. We also learned about the value of playing around with different frameworks before committing to using them for an entire hackathon, haha! ## What's next for EZ-Med The next step for EZ-Med is to iron out all the bugs and have it fully functioning.
## Inspiration We take our inspiration from our everyday lives. As avid travellers, we often run into places with foreign languages and need help with translations. As avid learners, we're always eager to add more words to our bank of knowledge. As children of immigrant parents, we know how difficult it is to grasp a new language and how comforting it is to hear a voice in your native tongue. LingoVision was born from these inspirations, and these inspirations were born from our experiences. ## What it does LingoVision uses AdHawk MindLink's eye-tracking glasses to capture foreign words or sentences as pictures when given a signal (a double blink). Those sentences are played back as an audio translation (either through an earpiece, or out loud with a speaker) in your language of choice. Additionally, LingoVision stores all of the old photos and translations for future review and study. ## How we built it We used the AdHawk MindLink eye-tracking glasses to map the user's point of view and detect exactly where in that space they're focusing. From there, we used Google's Cloud Vision API to perform OCR and construct bounding boxes around text. We developed a custom algorithm to infer what text the user is most likely looking at, based on the vector projected from the glasses and the available bounding boxes from CV analysis. After that, we pipe the text output into the DeepL translator API to translate it into a language of the user's choice. Finally, the output is sent to Google's text-to-speech service to be delivered to the user. We use Firebase Cloud Firestore to keep track of global settings, such as output language, and also a log of translation events for future reference. ## Challenges we ran into * Getting the eye tracker to be properly calibrated (it was always a bit off from our view) * Using a Mac, when the officially supported platforms are Windows and Linux (yay virtualization!) ## Accomplishments that we're proud of * Hearing the first audio playback of a translation was exciting * Seeing the system work completely hands-free while walking around the event venue was super cool! ## What we learned * We learned how to work within the limitations of the eye tracker ## What's next for LingoVision One of the next steps in our plan for LingoVision is to develop a dictionary for individual words. Since we're all about encouraging learning, we want our users to see definitions of individual words and add them to a dictionary. Another goal is to eliminate the need to be tethered to a computer. Computers are currently used due to ease of development and software constraints. If a user could simply use the eye-tracking glasses with their cell phone, usability would improve significantly.
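A sketch of the custom matching step: given the gaze point projected into image coordinates and the OCR bounding boxes, pick the box whose center is nearest the gaze, strongly preferring boxes that actually contain the point. The box format mirrors Cloud Vision's four-vertex polygons; the weighting is an invented stand-in for the team's actual algorithm:

```python
def pick_focused_text(gaze_xy, ocr_results):
    """ocr_results: list of (text, vertices) pairs, where vertices is a list
    of four (x, y) corners as returned by OCR."""
    gx, gy = gaze_xy
    best, best_dist = None, float("inf")
    for text, verts in ocr_results:
        xs, ys = [v[0] for v in verts], [v[1] for v in verts]
        cx, cy = sum(xs) / 4, sum(ys) / 4          # box center
        dist = (gx - cx) ** 2 + (gy - cy) ** 2     # squared distance to gaze
        if min(xs) <= gx <= max(xs) and min(ys) <= gy <= max(ys):
            dist *= 0.25  # strongly prefer boxes that contain the gaze point
        if dist < best_dist:
            best, best_dist = text, dist
    return best
```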
## Inspiration In the “new normal” that COVID-19 has caused us to adapt to, our group found that a common challenge we faced was deciding where it was safe to go to complete trivial daily tasks, such as grocery shopping or eating out on occasion. We were inspired to create a mobile app using a database created completely by its users - a program where anyone could rate how well these “hubs” were following COVID-19 safety protocol. ## What it does Our app allows users to search for a “hub” using the Google Maps API and write a review by rating prompted questions on a scale of 1 to 5 regarding how well the location enforces public health guidelines. Future visitors can then see these reviews and add their own, contributing to a collective safety rating. ## How I built it We collaborated using GitHub and Android Studio, and incorporated both the Google Maps API and an integrated Firebase API. ## Challenges I ran into Our group unfortunately faced a number of unprecedented challenges, including losing a team member mid-hack due to an emergency situation, working across 3 different timezones, and additional technical difficulties. However, we pushed through and came up with a final product we are all very passionate about and proud of! ## Accomplishments that I'm proud of We are proud of how well we collaborated through adversity, having never met each other in person before. We were able to tackle a prevalent social issue and come up with a plausible solution that could help bring our communities together, worldwide, similar to how our diverse team was brought together through this opportunity. ## What I learned Our team brought a wide range of different skill sets to the table, and we were able to learn lots from each other because of our various experiences. From a technical perspective, we improved our Java and Android development fluency. From a team perspective, we improved our ability to compromise and adapt to unforeseen situations. For 3 of us, this was our first ever hackathon, and we feel very lucky to have found such kind and patient teammates that we could learn a lot from. The amount of knowledge they shared in the past 24 hours is insane. ## What's next for SafeHubs Our next steps for SafeHubs include personalizing the user experience by creating profiles, and integrating it into our community. We want SafeHubs to be a new way of staying connected (virtually) to look out for each other and keep our neighbours safe during the pandemic.
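For clarity, the collective safety rating described above can be as simple as averaging every 1-5 answer across all reviews for a hub. A hedged Python sketch; the storage schema is our assumption, not SafeHubs' actual code:

```python
def hub_safety_rating(reviews):
    """Average all 1-5 answers across all reviews into one hub-wide score."""
    scores = [answer for review in reviews for answer in review["answers"]]
    return round(sum(scores) / len(scores), 1) if scores else None

# Two reviews, three prompted questions each -> a single collective rating.
print(hub_safety_rating([{"answers": [5, 4, 4]}, {"answers": [3, 5, 4]}]))  # 4.2
```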
winning
## Inspiration Around 40% of the lakes in America are too polluted for aquatic life, swimming, or fishing. Although children make up 10% of the world’s population, over 40% of the global burden of disease falls on them. Environmental factors contribute to more than 3 million children under age five dying every year. Pollution kills over 1 million seabirds and 100 million mammals annually. Recycling and composting alone avoided 85 million tons of waste being dumped in 2010. Currently there are over 500 million cars in the world; by 2030 the number will rise to 1 billion, thereby doubling pollution levels. High-traffic roads have more concentrated levels of air pollution, so people living close to these areas have an increased risk of heart disease, cancer, asthma, and bronchitis. Inhaling air pollution takes away at least 1-2 years of a typical human life. 25% of deaths in India and 65% of deaths in Asia are a result of air pollution. Over 80 billion aluminium cans are used every year around the world. If you throw away aluminium cans, they can stay in that form for up to 500 years or more. People aren’t recycling as much as they should; as a result, the rainforests are being cut down at approximately 100 acres per minute. On top of this, with me living near the Great Lakes and Neeral in the Bay Area, we have both seen not only tremendous amounts of air pollution, but marine pollution as well as pollution in the great freshwater lakes around us. As a result, this inspired us to create this project. ## What it does The React Native app connects with the website Neeral made in order to create a comprehensive solution to this problem. There are five main sections in the React Native app: The first section is an area where users can collaborate by creating posts in order to reach out to others to meet up and organize events to reduce pollution. One example of this could be a passionate environmentalist who is organizing a beach trash pick-up and wishes to bring along more people. With the help of this feature, more people would be able to learn about the event and participate. The second section is a petitions section where users have the ability to support local groups or sign a petition in order to push for change. These petitions include placing pressure on large corporations to reduce carbon emissions and so forth. This allows users to take action effectively. The third section is the forecasts tab, where users are able to retrieve data on various pollution metrics. This includes the ability for the user to obtain heat maps of air quality, pollution, and pollen levels, and to retrieve recommended procedures, not only for the general public but for special-case scenarios, using APIs. The fourth section is a tips and procedures tab for users to be able to respond to certain situations. They are able to consult this guide and find the situation that matches theirs in order to find the appropriate action to take. This helps the end user stay calm during situations such as those happening in California, with dangerously high levels of carbon. The fifth section is an area where users are able to use machine learning to figure out whether they are in a place of trouble. In many instances, people do not know exactly where they are, especially when travelling or going somewhere unknown.
With the help of machine learning, the user is able to enter certain information regarding their surroundings, and the algorithm is able to decide whether they are in trouble. The algorithm has 90% accuracy and is quite efficient. ## How I built it For the React Native part of the application, I will break it down section by section. For the first section, I simply used Firebase as a backend, which allowed a simple, easy, and fast way of retrieving and pushing data to cloud storage. This allowed me to spend time on other features, and due to my ever-growing experience with Firebase, this did not take too much time. I simply added a form which pushed data to Firebase, and when you go to the home page, it refreshes and you can see that the cloud was updated in real time. For the second section, I used NativeBase to create my UI and found an assortment of petitions, which I then linked, adding images from their websites, in order to create the petitions tab. I then used expo-web-browser to deep-link each petition, opening the link in Safari from within the app. For the third section, I used breezometer.com's pollution API, air quality API, pollen API, and heat map APIs in order to create an assortment of data points, health recommendations, and visual graphics representing pollution in several ways. The APIs also provided information such as the most common pollutant and the protocols that different age groups and people with certain conditions should follow. With this extensive API, there were many endpoints I wanted to add, but not all were added due to lack of time. The fourth section is very much like the second section, as it is an assortment of links, proofread and verified to be truthful sources, so that the end user has a procedure to follow in extreme emergencies. As we see horrible things happen, such as the wildfires in California, air quality becomes a serious concern for many, and as a result these procedures help the user stay calm and knowledgeable. ## Challenges I ran into API query bugs were a big issue, both in formatting the query and in mapping the data back into the UI. It took some time and made us run until the end, but we were still able to complete our project and goals. ## What's next for PRE-LUTE We hope to use this in areas where there is commonly much suffering due to extravagantly high levels of pollution, such as Delhi, where seeing is practically difficult due to the amount of pollution. We hope to create a finished product and release it to the App Store and Play Store respectively.
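As an illustration of the forecasts tab's data fetch, here is a minimal Python sketch of a current-conditions request. The endpoint path, query parameters, and response fields are assumptions for illustration; consult BreezoMeter's documentation for the real schema.

```python
import requests

def fetch_air_quality(lat, lon, api_key):
    # Assumed endpoint and parameters; verify against the BreezoMeter docs.
    resp = requests.get(
        "https://api.breezometer.com/air-quality/v2/current-conditions",
        params={"lat": lat, "lon": lon, "key": api_key,
                "features": "health_recommendations"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # pollutant data plus recommended procedures
```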
## Inspiration 🌱 Climate change is affecting every region on earth. The changes are widespread, rapid, and intensifying. The UN states that we are at a pivotal moment, and the urgency to protect our Earth is at an all-time high. We wanted to harness the power of social media for a greater purpose: promoting sustainability and environmental consciousness. ## What it does 🌎 Inspired by BeReal, the most popular app of 2022, BeGreen is your go-to platform for celebrating and sharing acts of sustainability. Every time you make a sustainable choice, snap a photo, upload it, and you’ll be rewarded with Green points based on how impactful your act was! Compete with your friends to see who can rack up the most Green points by performing more acts of sustainability, and even claim prizes once you have enough points 😍. ## How we built it 🧑‍💻 We used React with JavaScript to create the app, coupled with Firebase for the backend. We also used Microsoft Azure for computer vision and OpenAI for assessing the environmental impact of the sustainable act in a photo. ## Challenges we ran into 🥊 One of our biggest obstacles was settling on an idea, as there were so many great challenges for us to be inspired by. ## Accomplishments that we're proud of 🏆 We are really happy to have worked so well as a team. Despite encountering various technological challenges, each team member embraced unfamiliar technologies with enthusiasm and determination. We were able to overcome obstacles by adapting and collaborating as a team, and we’re all leaving uOttaHack with new capabilities. ## What we learned 💚 Everyone was able to work with new technologies that they’d never touched before while watching our idea come to life. For all of us, it was our first time developing a progressive web app. For some of us, it was our first time working with OpenAI, Firebase, and routers in React. ## What's next for BeGreen ✨ It would be amazing to collaborate with brands to give more rewards as an incentive to make more sustainable choices. We’d also love to implement a streak feature, where you can get bonus points for posting multiple days in a row!
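The photo-to-points step can be pictured as a small scoring pass over the labels a vision service returns. Here is one way it might look in Python; the label names and point values are invented for illustration, not BeGreen's actual impact model.

```python
# Hypothetical impact table: sustainable acts recognized in a photo -> points.
IMPACT_POINTS = {"bicycle": 30, "recycling bin": 20, "reusable bottle": 15, "compost": 25}

def score_act(labels):
    """labels: strings returned by an image-recognition service for one photo."""
    return sum(IMPACT_POINTS.get(label.lower(), 0) for label in labels)

print(score_act(["Bicycle", "Helmet"]))  # 30 Green points
```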
## Inspiration Climate change is one of the greatest challenges threatening the future of humankind. If we, as humans, don't drastically change the activities that negatively impact the environment, then by 2030, according to an IPCC report to the United Nations, the levels of CO₂ in the atmosphere will cause irreversible damage to Planet Earth. Habitats and lives are in danger with the increase of natural disasters related to human-induced global warming. While we see a problem here that we feel passionate about solving, we also, as individuals, empathize with the people who feel largely overwhelmed and powerless in addressing such a seemingly impenetrable problem with such terrible consequences. These are individuals who may not want to think about the horrible impact of their uninformed actions. As such, we knew that trying to move the needle on attitudes towards climate change would be difficult. But we were also inspired by the power of community, and so we wanted to tap into that for our project addressing climate change. ## What it does I remember the city covered in smoke. A haze had covered the city that didn't look like fog. Fog was normal. This... wasn't. It wasn't until later that day that I learned about the forest fires that had overtaken Northern California. The next two weeks were ones of chaos. No one knew who was safe; no one knew how fast the fire was spreading. It was hard to know when it was safe to go outside, or when it was safe to breathe the air. Updates were slow, if they came at all. This could have been different. What we seek to do is empower average citizens to contribute the data needed regarding fires and air quality. A person who sees the fire spreading can immediately update everyone by simply dropping a pin on the location. More dedicated users can even host their own air quality sensors, which will automatically be added to our database. Our project, “Safer”, is a suite of tools that simultaneously encourages citizen science addressing climate change and acts as a community resource finder for individuals in the event of a natural disaster. “Safer” is a web app that showcases maps populated with data from multiple APIs, including datasets collected by NASA. Our hope is to build maps that citizens can use and that will be helpful in the event of a disaster. ## How we built it We created a backend API in Node to collect and handle incoming data. React was used on the frontend to update users with a visual interface, giving them an interactive map to inform them about disasters. Then came the hardware. We used a Raspberry Pi equipped with sensors to pick up air quality. We also integrated various datasets, including those from NASA, into our app, to give users as much live information as possible during a threat, such as wildfires. ## Challenges we ran into Elliot: Setting up the backend with the proper database was the most challenging part for me. I chose to use a NASA data set that, while relevant and informative, lacked user-friendly documentation. I had to reverse engineer the data to make it fit into a database and was torn between using MongoDB Atlas and SQL. After hours of hacking, I finally made MongoDB Atlas work and was able to get our web app to query data for our map. Jaeson: I had never worked with getting datasets using APIs before, including the Breezometer air quality API, and had been on a 6-month hiatus from Node.js. This was my return to JavaScript, Node.js, and data science.
Beyond refamiliarizing myself with the various conventions and tools, I also had to try integrating all new ones, such as frontend virtual reality and cloud-based database systems. Colleen: Wow! Where to begin! I spent a good amount of time trying to set up passport.js authentication, but to no avail. It would've been nice to have a sleek login/signup page for the web app with the user's choice of OAuth from Google, Facebook, Twitter, or a local account, but our mentor refocused our team's attention on hacking the air quality measuring hardware prototype. It was emotionally difficult to make the pivot, but I wanted to do what was best for the team. Two of my teammates eventually took over and worked together to solve the user provisioning problem with their own hand-written authentication methods. Justin: Our project had many moving parts, and I didn't know if we would be able to prototype the air quality sensor because, honestly, we didn't even have all the parts to make a Raspberry Pi work, to begin with. After a few hours and a lot of searching on the web and in nearby stores, we tracked down a seller on FB Marketplace who got back to us. And then! In a surprising turn of events, our mentor helped connect us with another team who actually had just about everything we needed. That was when we realized how amazing CalHacks was -- people who are strangers can be so friendly and supportive. ## Accomplishments that we're proud of Despite all the crazy and often laborious challenges we ran into, we accomplished a great deal! We really worked hard to implement an idea that we think can make a difference. The web app itself is quite complex, using 3+ sources of data to populate a map using the Mapbox API, which we had to integrate into the frontend and backend. We also had to learn to work as a team to get our features pushed to production. ## What we learned Well, we spent 50+ hours with each other in one weekend, so we learned a lot about how to push each other's buttons. But in all seriousness, and despite our snipes and squabbles, we learned that CalHacks was actually a really wonderful way to form a shared experience. ## What's next for Safer Moving forward, we will work out all the kinks in the live, deployed version of our app, finish the air quality sensor prototype, and take it to the next level by launching a Kickstarter campaign to spread awareness and carry the social responsibility that is placed on our shoulders to protect communities and our beautiful blue planet.
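To make the citizen-sensor idea concrete, here is a hedged Python sketch of the loop a Raspberry Pi host could run: read a PM2.5 value and push it to the backend. The read_pm25() stand-in and the /api/readings route are placeholders for illustration, not the team's actual driver or API.

```python
import random
import time

import requests

def read_pm25():
    # Stand-in for a real particulate-sensor driver; returns micrograms/m^3.
    return round(random.uniform(5, 35), 1)

while True:
    reading = {"sensor_id": "pi-01", "pm25": read_pm25(), "ts": time.time()}
    try:
        requests.post("https://example.com/api/readings", json=reading, timeout=5)
    except requests.RequestException as err:
        print("upload failed, will retry next cycle:", err)
    time.sleep(60)  # one reading per minute
```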
winning
## Inspiration We were inspired by the numerous Facebook posts, Slack messages, WeChat messages, emails, and even Google Sheets that students at Stanford create in order to coordinate Ubers/Lyfts to the airport as holiday breaks approach. This was mainly for two reasons: one being the safety of sharing a ride with other trusted Stanford students (often at late/early hours), and the other being cost reduction. We quickly realized that this idea of coordinating rides could be used not just for ride sharing to the airport, but for transportation to anywhere! ## What it does Students can access our website with their .edu accounts and add "trips" that they would like to be matched with other users for. Our site creates these pairings using a matching algorithm (sketched below) and automatically connects students with their matches through email and a live chatroom on the site. ## How we built it We utilized Wix Code to build the site and took advantage of many features including Wix Users, Members, Forms, Databases, etc. We also integrated the SendGrid API for automatic email notifications for matches. ## Challenges we ran into ## Accomplishments that we're proud of Most of us are new to Wix Code, JavaScript, and web development, and we are proud of ourselves for being able to build this project from scratch in a short amount of time. ## What we learned ## What's next for Runway
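The pairing logic could be as simple as grouping trips to the same destination whose departure times fall within a shared window. A Python sketch under those assumptions (ours, not Runway's actual matching algorithm):

```python
from datetime import timedelta

def match_trips(trips, window_minutes=30):
    """trips: list of {"user": str, "dest": str, "depart": datetime}."""
    trips = sorted(trips, key=lambda t: (t["dest"], t["depart"]))
    groups, current = [], []
    for trip in trips:
        same_dest = current and trip["dest"] == current[0]["dest"]
        in_window = same_dest and (
            trip["depart"] - current[0]["depart"] <= timedelta(minutes=window_minutes)
        )
        if in_window:
            current.append(trip)
        else:
            if len(current) > 1:
                groups.append(current)  # only groups of 2+ are matches
            current = [trip]
    if len(current) > 1:
        groups.append(current)
    return groups  # each group would get a match email and a chatroom
```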
### Saturday 11AM: Starting Out > > *A journey of a thousand miles begins with a single step* > > > BusBuddy is pulling the curtain back on school buses. Students and parents should have equal access to information to know when and where their buses are arriving, how long it will take to get to school, and be up-to-date on any changes in routes. When we came onboard the project, our highest priorities were efficiency, access, and sustainability. With our modern version of a solution to the traveling salesman problem, we hope to give students and parents some peace of mind when it comes to school transportation. Not only will BusBuddy make the experience more comfortable, but having reliable information means more parents will opt to save on gas and send their kids by bus. ### Saturday 3PM: Roadblocks, Missteps, Obstacles > > *I would walk a thousand miles just to fall down at your door* > > > No road is without its potholes; our road was no exception to this. Alongside learning curves and getting to know each other, we faced issues with finicky APIs that disagreed with our input data, temperamental CSS margins that refused to anchor where we wanted them, and missing lines of code that we swear we put in. With enough time and bubble tea, we found our critical errors and began to build our vision. ### Saturday 9PM: Finding Our Way > > *Just keep swimming, just keep swimming, just keep swimming, swimming, swimming…* > > > We conceptualized in Figma with asset libraries; we built our front-end in VS Code with HTML, CSS, and Jinja2; we used Flask, Python, SQL databases, and a Google Maps API, alongside the Affinity Propagation Clustering algorithm, to cluster home addresses; and finally, we ran a recursive DFS on a directed weighted graph to optimize a route for bus pickup of all students. ### Sunday 7AM: Summiting the Peak > > *Planting a flag at the top* > > > We achieved our minimum viable product! Given that our expectations were not low, it was no easy feat to climb this mountain. ### Sunday 11AM: Journey’s End > > *The journey matters more than the destination* > > > With a team composed of an 11th grader, a 12th grader, a UWaterloo first year, and a Mac second year, we certainly did not lack in range of experiences to bring to the table. Our biggest asset was having each other as sounding boards to bounce ideas off of. Getting to collaborate with each other certainly broadened our worldviews, especially with each others’ anecdotes about school pre-, during, and post-COVID. ### Sunday Onward > > *New Horizons* > > > So what’s next for us? And what’s next for BusBuddy? Well, we’ll be doing some sleeping. As for BusBuddy, we hope to scale up and turn our application into something that BusBuddy’s students can use for years to come.
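The clustering step mentioned above maps directly onto scikit-learn. A minimal sketch of Affinity Propagation over geocoded home addresses; the coordinates below are invented for illustration, and a real deployment would feed the resulting cluster centers into the route search described above.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

# Hypothetical (lat, lon) pairs for student homes.
homes = np.array([
    [43.2609, -79.9192],
    [43.2612, -79.9201],
    [43.2711, -79.8850],
    [43.2705, -79.8861],
])

model = AffinityPropagation(random_state=0).fit(homes)
print(model.labels_)           # cluster index for each student
print(model.cluster_centers_)  # candidate bus-stop locations per cluster
```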
## Inspiration The RBC challenge pushed us to target the future of the helpdesk. We realized that we could reverse the traditional user-to-helpdesk-to-solution pipeline, automating solutions directly for the user with integrations across various services. ## What it does Cura is an all-in-one solution for companies, automating tasks on their end using insights from their customers, generated across the customer's digital life. As it stands now, Cura is one part back-end automation of processes that would have traditionally required lengthy communication between users and companies, and one part browser extension which intelligently offers access to said automations. ## How we built it We created a Chrome extension using Bootstrap, jQuery, and JavaScript, as well as a backend in Flask which manages the application's data. This backend, hosted on Google Cloud, serves as an API for the front end and derives useful data from a user's activities, pushing it to the user through our Chrome extension. ## Challenges we ran into Every member of our team had to learn various skills from scratch, including how to use Google Cloud, Chrome extensions, Flask, and Flutter. This turned out to be one of the greatest challenges, not only because each team member was building up their knowledge from nothing, but because we then had to understand how to seamlessly combine these technologies together. We pursued various paths that never came to fruition, such as learning how to use Flutter while developing an Android app, before realizing that an Android app would not cohesively work towards our end goal. ## Accomplishments that we are proud of We are proud of our ambition and determination when faced with these daunting and complex technologies, as well as the technical work necessary to pull off the seamless connection of said technologies. ## What's next for Cura In the future, we imagine Cura being integrated into the very fabric and infrastructure of websites themselves, creating a space online that completely eliminates the existence of help desks, saving consumers time and companies money. This would create an internet of preemptive services. Yesterday's future was an internet of connected things. Tomorrow's future is an internet of preemptive services that cater to your needs before issues can materialize. Cura is that future. ## Ivey Challenge Please consult the Google Drive attachment.
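As a rough shape of the Flask API side, here is a bare-bones sketch; the routes, fields, and in-memory store are illustrative assumptions, not Cura's real backend.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
INSIGHTS = {}  # user_id -> list of suggested automations (in-memory for the sketch)

@app.route("/insights/<user_id>")
def get_insights(user_id):
    # The extension polls this to know which automations to offer.
    return jsonify(INSIGHTS.get(user_id, []))

@app.route("/events", methods=["POST"])
def record_event():
    # The extension reports user activity; the backend derives a suggestion.
    event = request.get_json()
    INSIGHTS.setdefault(event["user_id"], []).append(
        {"suggestion": f"Automate: {event['action']}"}
    )
    return jsonify({"ok": True})

if __name__ == "__main__":
    app.run()
```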
partial
## Inspiration We wanted to do something fun and exciting, nothing too serious. Slang is a vital component to thrive in today's society. Ever heard Travis Scott go, "My dawg would prolly do it for a Louis belt"? Even most millennials are not familiar with this slang. Therefore, we are leveraging the power of today's modern platform called "Urban Dictionary" to educate people about today's ways, showing how today's music is changing with the slang thrown in. ## What it does You choose your desired song; it will print out the lyrics for you and then even sing them for you in a robotic voice. It will then look up the Urban Dictionary meaning of the slang, substitute it for the original, and attempt to sing the result. ## How I built it We utilized Python's Flask framework as well as numerous Python natural language processing libraries. We created the front end with the Bootstrap framework, utilizing Kaggle datasets and the Zdict API. ## Challenges I ran into Redirect challenges with Flask were frequent, and the excessive API calls made the program super slow. ## Accomplishments that I'm proud of The excellent UI design, along with the amazing outcomes that can be produced from the translation of slang. ## What I learned We learned a lot of things. ## What's next for SlangSlack We are going to transform the way today's millennials keep up with growing trends in slang.
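The core substitution step is easy to picture. Here is a Python sketch of one slang-to-definition pass; the definitions dict stands in for whatever an Urban Dictionary lookup would return, and the regex tokenization is our simplification.

```python
import re

def translate_line(line, definitions):
    """Replace each known slang token in a lyric line with its definition."""
    def swap(match):
        word = match.group(0)
        return definitions.get(word.lower(), word)
    return re.sub(r"[A-Za-z']+", swap, line)

defs = {"dawg": "close friend", "prolly": "probably"}
print(translate_line("My dawg would prolly do it", defs))
# -> "My close friend would probably do it"
```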
## Inspiration Being a student of the University of Waterloo, every other semester I have to attend interviews for co-op positions. Although it gets easier to talk to people the more often you do it, I still feel slightly nervous during such face-to-face interactions. During this nervousness, the fluency of my conversation isn't always the best. I tend to use unnecessary filler words ("um", "umm", etc.) and repeat the same adjectives over and over again. In order to improve my speech through practice against a program, I decided to create this application. ## What it does InterPrep uses the IBM Watson "Speech-To-Text" API to convert spoken word into text. After doing this, it analyzes the words used by the user and highlights certain words that can be avoided, and maybe even improved, to create a stronger presentation of ideas. By practicing speaking with InterPrep, one can keep track of their mistakes and improve themselves in time for "speaking events" such as interviews, speeches, and/or presentations. ## How I built it In order to build InterPrep, I used the Stdlib platform to host the site and create the backend service. The IBM Watson API was used to convert spoken word into text. The MediaRecorder API was used to receive and parse spoken text into an audio file which later gets transcribed by the Watson API. The languages and tools used to build InterPrep are HTML5, CSS3, JavaScript and Node.JS. ## Challenges I ran into "Speech-To-Text" APIs, like the one offered by IBM, tend to remove words of profanity and words that don't exist in the English language. Therefore the word "um" wasn't detected by the API at first. However, for my application, I needed to detect frequently used filler words such as "um", so that the user can be notified and can improve their overall speech delivery. Therefore, in order to recognize this word, I had to create a custom language library within the Watson API platform and then connect it via Node.js on top of the Stdlib platform. This proved to be a very challenging task as I faced many errors and had to seek help from mentors before I could figure it out. However, once fixed, the project went by smoothly. ## Accomplishments that I'm proud of I am very proud of the entire application itself. Before coming to QHacks, I only knew how to do front-end web development. I didn't have any knowledge of back-end development or of using APIs. Therefore, by creating an application that contains all of the things stated above, I am really proud of the project as a whole. In terms of smaller individual accomplishments, I am very proud of creating my own custom language library and also of using multiple APIs in one application successfully. ## What I learned I learned a lot of things during this hackathon. I learned back-end programming, how to use APIs, and also how to develop a coherent web application from scratch. ## What's next for InterPrep I would like to add more features to InterPrep as well as improve the UI/UX in the coming weeks after returning home. There is a lot that can be done with additional technologies such as machine learning and artificial intelligence that I wish to further incorporate into my project!
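After the Watson transcription comes back, the analysis pass boils down to counting. A minimal Python sketch of flagging filler words and over-used words; the word lists and thresholds are examples, not InterPrep's exact rules.

```python
from collections import Counter

FILLERS = {"um", "umm", "uh", "like", "basically"}

def analyze(transcript):
    """Return (filler counts, non-filler words repeated 3+ times)."""
    counts = Counter(transcript.lower().split())
    fillers = {w: c for w, c in counts.items() if w in FILLERS}
    repeats = {w: c for w, c in counts.items() if c >= 3 and w not in FILLERS}
    return fillers, repeats

print(analyze("um so I worked on um a great great great project"))
# -> ({'um': 2}, {'great': 3})
```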
## Inspiration We went through ten or fifteen different ideas, but couldn't quite shake this one - taking photos and converting them to music. All of us had a certain draw to it, whether it was an interest in computational photography and looking at different features we could draw from images, or music and how we could select songs based off of pitch variation, energy, and valence (musical positiveness). Perhaps the most unique part of our project, our user experience simply consists of sending images from one's phone via text, where photos usually live in the first place. No additional app installation/user entry needed! ## What it does Our app uses Twilio to help users send photos to our server, which parses images and derives features to correlate against accumulated Spotify song data. Using features like saturation and lightness derived from the color profile of the image, along with sentiment analysis on keywords from object detection, we determined rules that mapped these to song features like danceability, variance, energy, and mode. These songs form playlists that map to the original image - for instance, more saturated images will map to more "danceable" songs, and images with higher sentiment magnitude from their keywords will map to higher "energy" songs. After texting the photo, the user will get back a Spotify playlist that contains these songs. ## How we built it Our app uses Twilio to handle SMS messaging (both sending images to the server and sending links back to the user). To handle vision and NLP parsing, we used Google Cloud APIs within our Flask app. Specifically, we used the Google Cloud Vision API to extract object names and color profiles from images, while using the Google Cloud Natural Language API to run sentiment analysis on extracted labels to determine the overall mood of an image. For music data, we used the Spotify API to run scripts for accumulating data and creating playlists. ## Challenges we ran into One challenge we ran into was determining how to map color profiles to musical features - to overcome this, it was incredibly useful to have a variety of skills on our team. Some of us had more computational photography experience, some had more of a musical background, and some had more ideas on how to store and retrieve data. ## Accomplishments that we're proud of We're proud of being able to use a number of APIs successfully and handling authentication across all of them. This was also our first time using Twilio and using SMS texts to interface with the user. Overall, we're super proud of coming up with an MVP pretty early on, and then being able to each independently build upon it, making our product better and better. ## What we learned We learned a lot about how to derive information from photos and how deep the Spotify API goes. We also learned about how to divide up our strengths and interests so we could finish our project efficiently. ## What's next for ColorDJ Next for ColorDJ: WhatsApp integration, a more efficient song database, and Google Photos integration (stretch) for auto-generated movies. Machine learning could make an appearance here, with training models in parallel to better match color profiles or derived keywords with songs. Putting music to photos and adding that extra dimension helps further connect people with their creations. ColorDJ makes it easier to generate a playlist to commemorate any memory, literally with two taps on a screen!
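One of the rule mappings described above might look like the following Python sketch: scale saturation into a danceability target, then keep candidate tracks near it. The linear mapping and tolerance are illustrative choices, not ColorDJ's tuned rules.

```python
def target_danceability(saturation):
    """Map image saturation in [0, 1] to a danceability target in [0.3, 0.9]."""
    return 0.3 + 0.6 * saturation

def pick_tracks(tracks, saturation, tol=0.1):
    """tracks: list of {"name": str, "danceability": float} from Spotify data."""
    target = target_danceability(saturation)
    return [t for t in tracks if abs(t["danceability"] - target) <= tol]

songs = [{"name": "A", "danceability": 0.82}, {"name": "B", "danceability": 0.45}]
print(pick_tracks(songs, saturation=0.9))  # a vivid photo -> danceable songs
```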
winning
Everyday social expansion From the ages of 13-40 you meet hundreds, if not thousands, of people: in classrooms, on sports teams and in extracurriculars, at social and professional events - essentially, any room with a person you don't know gives you an opportunity to use Plug. In recent years we have used technology, and particularly social media, to build connections with people and learn more about them. The level of engagement and the forms in which we interact have continuously evolved, therefore generating a multitude of useful applications. The only problem is that we use more than one app, usually for different things. When we meet someone, we want to efficiently connect on social media, which is a time-consuming process of several minutes that often, if not always, consists of misspelling usernames, dropping phones, and feeling creepy asking for 5 different social medias at once - you get the point. Plug makes it easy and fun to selectively toggle and share your handles at a speed that has never been done before; we think it's close to lightning.
## Inspiration Given the prompt of "Restoration" we were quick to start thinking of what we would like to see restored. As a joke, one group member said that he would like his sleep schedule to be restored; after laughing about it, we quickly realized that going back to in-person learning would mean fixing our sleep, study, and productivity habits. We strove to build an easy-to-use web app that could house everything a student needs to seamlessly transition back to in-person learning. We hope this can help others achieve what we hope to achieve going back in person. ## What it does "On Track" was built with productivity and ease of use in mind. It allows you to log in and authenticate with a Google account; once you're in, you have access to a personalized calendar, daily schedule, and to-do list based on your Google events. You also get access to a programmable alarm meant to remind you when to sleep and when to take breaks from studying. Finally, you get access to a wonderfully relaxing 24/7 lo-fi music station, available to you whenever you'd like. ## How we built it We wanted this project to focus on React.js, building upon this framework. All of our components are created and rendered in React, with Bootstrap used to properly format elements and support browser resizing. Hosting and authentication/login are handled with Google Firebase, which also saves users so that custom properties can be applied. Libraries such as Moment.js are used whenever data such as dates/times are needed. ## Challenges we ran into Getting the Google authentication flow working as we would like was a bit difficult and required a bit of trial and error. We were not getting the behavior we were expecting at first and had trouble debugging the issue. We were actually able to reference one of our past hackathon submissions (namely Snippy from RUHacks) where we ran into a similar problem. This enabled us to solve the auth flow issue and get our user login working as intended. ## Accomplishments that we're proud of Going into this hackathon, our goal was to gain a working knowledge and understanding of React.js, a framework that is widely used in the software development industry. We can proudly say that we gained that and more. We were able to follow guides and tutorials to learn enough to build the React components we desired. We hope to continue on our React.js adventure in the coming future. ## What we learned Aside from learning about the technologies themselves, we also learned about the problem solving that goes into software development. There are many resources available to you; it is just a matter of knowing where and how to look for them. We learned a lot about the research element that goes into software development and are definitely better coders as a result. ## What's next for On Track We hope to add more Google event support, including adding events through On Track as well as an auto-compiling to-do list with priorities sorted out. We were very excited to work on this project and hope to follow similar endeavors in the future.
## Inspiration We got together a team passionate about social impact, and all the ideas we had kept going back to loneliness and isolation. We have all been in high-pressure environments where mental health was not prioritized, and we wanted to find a supportive and unobtrusive solution. After sharing some personal stories and observing our skillsets, the idea for Remy was born. **How can we create an AR buddy to be there for you?** ## What it does **Remy** is an app that contains an AR buddy who serves as a mental health companion. Through information accessed from "Apple Health" and "Google Calendar," Remy is able to help you stay on top of your schedule. He gives you suggestions on when to eat, when to sleep, and personally recommends articles on mental health hygiene. All this data is aggregated into a report that can then be sent to medical professionals. Personally, our favorite feature is his suggestions on when to go on walks and your ability to meet other Remy owners. ## How we built it We built an iOS application in Swift with ARKit and SceneKit, with Apple Health data integration. Our 3D models were created with Mixamo. ## Challenges we ran into We did not want Remy to promote codependency in its users, so we specifically set time aside to think about how we could create a feature that focused on socialization. We had never worked with AR before, so this was an entirely new set of skills to learn. Our biggest challenge was learning how to position AR models in a given scene. ## Accomplishments that we're proud of We have a functioning app with an AR buddy that we have grown heavily attached to. We feel that we have created a virtual avatar that many people really can fall for. ## What we learned Aside from this being many of the team's first times working with AR, the main learning point was all the data that we gathered on the suicide epidemic among adolescents. Suicide rates have increased by 56% in the last 10 years, and this will only continue to get worse. We need change. ## What's next for Remy While our team has set out for Remy to be used in a college setting, we envision many other relevant use cases where Remy will be able to better support one's mental health wellness. Remy can be used as a tool by therapists to get better insights on sleep patterns and outdoor activity done by their clients, and this data can be used to further improve the client's recovery process. Clients who use Remy can send their activity logs to their therapists before sessions with a simple click of a button. To top it off, we envisage the Remy application being a resource hub for users to improve their overall wellness. Through providing valuable sleep hygiene tips and even lifestyle advice, Remy will be the one-stop, holistic companion for users experiencing mental health difficulties to turn to as they take their steps towards recovery.
losing
## TLDR Duolingo is one of our favorite apps of all time for learning. For DeerHacks, we wanted to make the amazing Duolingo learning experience even more interactive by bringing it to life in VR, more accessible by offering it for free to all, and more personalized by offering courses beyond languages so everyone can find a topic they enjoy. Welcome to the future of learning with Boolingo; let's make learning a thrill again! ## Inspiration 🌟 We were inspired by the monotonous grind of traditional learning methods that often leave students disengaged and uninterested. We wanted to transform learning into an exhilarating adventure, making it as thrilling as gaming. Imagine diving into the depths of mathematics, exploring the vast universe of science, or embarking on quests through historical times, all while having the time of your life. That's the spark that ignited BooLingo! 🚀 ## What it does 🎮 BooLingo redefines the learning experience by merging education with the immersive world of virtual reality (VR). It's not just a game; it's a journey through knowledge. Players can explore different subjects like Math, Science, Programming, and even Deer Facts, all while facing challenges, solving puzzles, and unlocking levels in a VR landscape. BooLingo makes learning not just interactive, but utterly captivating! 🌈 ## How we built it 🛠️ We leveraged the power of Unity and C# to craft an enchanting VR world, filled with rich, interactive elements that engage learners like never before. By integrating XR Plug-in Management for Oculus support, we ensured that BooLingo delivers a seamless and accessible experience on the Meta Quest 2, making educational adventures available to everyone, everywhere. The journey from concept to reality has been nothing short of a magical hackathon ride! ✨ ## Challenges we ran into 🚧 Embarking on this adventure wasn't without its trials. From debugging intricate VR mechanics to ensuring educational content was both accurate and engaging, every step presented a new learning curve. Balancing educational value with entertainment, especially in a VR environment, pushed us to our creative limits. Yet each challenge only fueled our passion further, driving us to innovate and iterate relentlessly. 💪 ## Accomplishments that we're proud of 🏆 Seeing BooLingo come to life has been our greatest achievement. We're incredibly proud of creating an educational platform that's not only effective but also enormously fun. Watching players get genuinely excited to learn, laughing and learning simultaneously, has been profoundly rewarding. We've turned the daunting into the delightful, and that's a victory we'll cherish forever. 🌟 ## What we learned 📚 This journey taught us the incredible power of merging education with technology. We learned that when you make learning fun, the potential for engagement and retention skyrockets. The challenges of VR development also taught us a great deal about patience, perseverance, and the importance of a user-centric design approach. BooLingo has been a profound learning experience in itself, teaching us that the sky's the limit when passion meets innovation. 🛸 ## What's next for BooLingo 🚀 The adventure is just beginning! We envision BooLingo expanding its universe to include more subjects, languages, and historical epochs, creating a limitless educational playground. We're also exploring social features, allowing learners to team up or compete in knowledge quests.
Our dream is to see BooLingo in classrooms and homes worldwide, making learning an adventure that everyone looks forward to. Join us on this exhilarating journey to make education thrillingly unforgettable! Let's change the world, one quest at a time. 🌍💫
## Inspiration Conventional language learning apps like Duolingo don’t offer the ability to have freeform, dynamic conversations. Additionally, finding a language partner can be difficult and costly. Lingua Franca tackles this head-on by offering intermediate to advanced language learners an immersive, interactive experience. Although other apps exist that try to do the same thing, their interaction topics are hard-coded, meaning that you find yourself in the same dialogue over and over again. By leveraging LLMs, we’re able to ensure that no two experiences are the same! ## What it does You stumble into a foreign land and must communicate with the townsfolk in order to get by. As you talk with them, you must reply by recording yourself speaking in their language. Aided by LLMs, their responses dynamically change depending on what you say. Additionally, at some points in the conversation, they will give you checkpoints that you must accomplish, which encourages you to talk to other villagers. After each of your responses, you can also see alternative phrases you could’ve said in response to the villager. Seeing these alternative responses can aid in learning vocabulary and grammar, and can help the user branch outside of their usual go-to phrases in the language they are learning. Not only can you guide the conversation to whatever topic you’d like to practice, but, to keep the user engaged, we’ve also added backstories to the characters in the village. Each time you talk with them, you can learn something more about their relationships with others in the village! ## How we built it Development was done in Unity3D. We used Wit.ai to capture and transcribe the user’s recorded responses. Those transcribed responses were then fed into an LLM from Together.ai, along with extra information to give context and guide the LLM to prompt the user to complete checkpoints. The response from the LLM becomes the villager’s response to the player. We created the world using assets from the Unity Asset Store, and the character models are from Mixamo. ## What we learned Developing in VR was new to all team members, so developing for the Oculus Quest and using Unity3D was a great learning experience. LLMs aren’t perfect, and working to mitigate poor, harmful, or unproductive responses is difficult. However, we took this challenge seriously while working on this app and carefully tuned our prompts to give the model the context it needed to avoid these situations. ## What's next for Lingua Franca The next steps for this app include: adding more languages; adding audio feedback from the villagers in addition to text responses; and adding new locations, characters, and worlds for more variation in the experience.
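Keeping the villager on-topic and checkpoint-aware comes down to how the prompt is assembled before each LLM call. A hedged Python sketch of that assembly; the template and fields are our assumptions rather than the team's exact prompt.

```python
def build_prompt(villager, history, checkpoint, learner_utterance):
    """Compose the context handed to the LLM for one conversational turn."""
    return (
        f"You are {villager['name']}, a villager. Backstory: {villager['backstory']}\n"
        f"Speak only in {villager['language']} and keep replies short.\n"
        f"Gently steer the learner toward this checkpoint: {checkpoint}\n"
        f"Conversation so far:\n{history}\n"
        f"The learner just said: {learner_utterance}\n"
        f"Your reply:"
    )
```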
## Inspiration Inspired by the learning incentives offered by Duolingo, and an idea from a real customer (Shray's 9-year-old cousin), we wanted to **elevate the learning experience by integrating modern technologies**, incentivizing students to learn better while teaching them about different school subjects, AI, and NFTs simultaneously. ## What it does It is an educational app offering two views, Student and Teacher. In the Student view, compete with others in your class through a leaderboard by solving questions correctly and earning points. If you get questions wrong, you have the chance to get feedback from Together.ai's Mistral model. Use your points to redeem cool NFT characters and show them off to your peers/classmates in your profile collection! For Teachers, manage students and classes and see how each student is doing. ## How we built it Built using TypeScript, React Native, and Expo, it is a quickly deployable mobile app. We also used Together.ai for our AI-generated hints and feedback, and CrossMint for verifiable credentials and managing transactions with Stable Diffusion-generated NFTs. ## Challenges we ran into We had some trouble deciding which AI models to use, but settled on Together.ai's API for its ease of use and flexibility. Initially, we wanted to do AI-generated questions but, understandably, these had some errors, so we decided to use AI to provide hints and feedback when a student gets a question wrong. Using CrossMint and creating our Stable Diffusion NFT marketplace was also challenging, but we are proud of how we successfully incorporated it and allowed each student to manage their wallet and collection in a fun and engaging way. ## Accomplishments that we're proud of Using Together.ai and CrossMint for the first time, and implementing numerous features, such as a robust AI helper for any missed questions and the ability for users to buy and collect NFTs directly in the app. ## What we learned We learned a lot about NFTs, Stable Diffusion, how to efficiently prompt AIs, and how to incorporate all of this into an Expo React Native app. We also met a lot of cool people and sponsors at this event and loved our time at TreeHacks! ## What's next for MindMint: Empowering Education with AI & NFTs Our priority is to incorporate a spaced-repetition-style learning algorithm, similar to what Anki does, to tailor the learning curves of various students and help them understand difficult and challenging concepts efficiently. In the future, we want to add more subjects and grade levels, and allow teachers to input questions for students to solve. Another interesting idea we had was to create a mini real-time interactive game for students to play among themselves, so they can encourage each other as they learn.
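For the hint-on-wrong-answer flow, one plausible shape is a single chat-completion call. The sketch below targets Together's OpenAI-compatible chat endpoint; the exact path, model name, and response fields should be verified against Together's current documentation before relying on them.

```python
import requests

def get_hint(question, wrong_answer, api_key):
    resp = requests.post(
        "https://api.together.xyz/v1/chat/completions",  # verify against current docs
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "mistralai/Mistral-7B-Instruct-v0.2",  # assumed model id
            "messages": [{
                "role": "user",
                "content": (f"A student answered '{wrong_answer}' to: {question}. "
                            "Give a short, encouraging hint without revealing the answer."),
            }],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```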
winning
## Inspiration Every time we go out with friends, it's always a pain to figure out payments for each person. Charging people through Venmo is often tedious and requires lots of time. What we wanted to do was make the whole process easier: just scan a receipt and then charge your friends immediately. ## What it does Our app takes a picture of a receipt and sends it to a Python server (that we made) which filters and manipulates the image before performing OCR. Afterwards, the OCR output is parsed and the items and associated prices are sent to the main app, where the user can then easily charge their friends using the service. ## How we built it We built the front end of the app using Meteor to allow easy reactivity and fast browsing times. Meanwhile, we optimized the graphics so that the website works great on mobile screens. Afterwards, we send the photo data to a Flask server where we run a combination of Python, C, and Bash code to pre-process and then analyze the images of receipts we receive. Specifically, the following operations are performed for image processing:
1. RGB to Binary Thresholding
2. Canny Edge Detection
3. Probabilistic Hough Lines on Canny Image
4. Calculation of rotation disparity to warp image
5. Erosion to act as a flood-fill on letters
## Challenges we ran into We ran into a lot of challenges actively getting the OCR from the receipts. Established libraries, such as Microsoft's, showed poor performance. As a result, we ended up testing and creating our own methods for preprocessing and then analyzing the images of receipts we received. We tried many different methods for different steps:
* Different thresholding methods (some of which are documented above)
* Different deskewing algorithms, including Hough lines and bounding boxes to calculate skew angle
* Different morphological operators to increase clarity/recognition of text.
Another difficulty we ran into was implementing the UI such that it would run smoothly on mobile devices. ## Accomplishments that we're proud of We're very proud of the robust parsing algorithm that we ended up creating to classify text from receipts. ## What we learned In building SplitPay, we learned many different techniques in machine vision. We also learned about implementing communication between two web frameworks and about the reactivity used to build Meteor. ## What's next for SplitPay In the future, we hope to continue the development of SplitPay and make it easier to use, with easier browsing of friends and more integration with external APIs, such as ones from Facebook, Microsoft, Uber, etc.
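The five preprocessing operations listed above translate almost one-to-one into OpenCV calls. A condensed Python sketch; the thresholds and kernel sizes are illustrative defaults, not the team's tuned values.

```python
import cv2
import numpy as np

img = cv2.imread("receipt.jpg", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)  # 1. binary thresholding
edges = cv2.Canny(binary, 50, 150)                                           # 2. Canny edge detection
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 100,                          # 3. probabilistic Hough lines
                        minLineLength=100, maxLineGap=10)
skew = 0.0
if lines is not None:                                                        # 4. rotation disparity -> warp
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1))
              for x1, y1, x2, y2 in lines[:, 0]]
    skew = float(np.median(angles))
h, w = binary.shape
M = cv2.getRotationMatrix2D((w / 2, h / 2), skew, 1.0)
deskewed = cv2.warpAffine(binary, M, (w, h))
result = cv2.erode(deskewed, np.ones((2, 2), np.uint8))                      # 5. erosion on letters
cv2.imwrite("receipt_clean.jpg", result)
```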
## Inspiration As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have many resources to start with. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process! ## What it does Our app first asks a new client to complete a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points! ## How we built it We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and creating a more in-depth report. We used Firebase for authentication + a cloud database to keep track of users. For our data on users and transactions, as well as making/managing loyalty points, we used the RBC API. ## Challenges we ran into Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time! Besides API integration, definitely working without any sleep was the hardest part! ## Accomplishments that we're proud of Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!). The biggest reward from this hackathon is the new friends we've found in each other :) ## What we learned I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a small time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth). ## What's next for Savvy Saver Demos! After that, we'll just have to see :)
## Inspiration A major problem when it comes to finances for students is maintaining their budgets. Saving receipts and budgeting manually can be quite burdensome. Additionally, this process is quite inefficient. That is why we wanted to create a program where people can easily scan their receipts, and our program optimally budgets their finances and accurately categorizes the items they're buying. ## What it does Our program functions in three main parts. Firstly, it scans a user's uploaded receipt and analyzes the text for the items bought. The items are then categorized by our program into lists, from which the program calculates how much money you are spending in each category. Based on the budget that you are trying to maintain, the bot informs the user through Telegram about the details of their purchase and how well they are doing relative to their budget goals. Overall, this program provides detailed information about your budget and expenses by simply scanning your receipt. ## How we built it This program was built using technologies like AWS and Rekognition to develop the backend, which analyzed and categorized the scanned receipt data. A chatbot for Telegram was written in Python to provide users details about their budget. ## Challenges we ran into The main challenge we ran into was translating the receipt's image into text. As Amazon Rekognition is quite sensitive to image quality, we invested a lot of time into preprocessing the images to guarantee the best possible OCR result. Another issue we faced was displaying all the analyzed information through a bot in Telegram. In order to do this, we needed to get the data containing the cost and name of the items from the array of dictionaries. In the end, we were able to select each component and display it as needed. ## Accomplishments that we're proud of We are proud that in such a short span of time, we were able to meet the goals of our desired program. As most of this technology was new to most of the members, it was an accomplishment to successfully code the program. Additionally, we used several different technologies that we were exposed to during the workshops and challenges. ## What we learned We were all able to delve deep into areas out of our comfort zone and see the workings behind apps we have previously used on our phones (Telegram). We were able to not only create a new bot on Telegram but also program it to respond to the input that the user gave it. Additionally, we were able to use different technologies like AWS Rekognition and RNNs and implement them all together to make one coherent program.
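After OCR, the categorization step can be a simple keyword lookup over item names. A hedged Python sketch; the keyword table is illustrative, not the program's real category model.

```python
# Hypothetical keyword table mapping OCR'd item names to budget categories.
CATEGORIES = {
    "groceries": ["milk", "bread", "eggs", "apple"],
    "dining": ["burger", "pizza", "coffee", "latte"],
    "household": ["detergent", "soap", "paper towel"],
}

def categorize(item_name):
    name = item_name.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in name for k in keywords):
            return category
    return "other"

totals = {}
for item, price in [("2% Milk", 4.99), ("Iced Latte", 5.25)]:
    category = categorize(item)
    totals[category] = totals.get(category, 0) + price
print(totals)  # {'groceries': 4.99, 'dining': 5.25}
```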
partial
## Inspiration With a vision to develop an innovative solution for portable videography, Team Scope worked over this past weekend to create a device that allows for low-cost, high-quality, and stable motion and panoramic photography for any user. Currently, such equipment exists only for high-end DSLR cameras, is expensive, and is extremely difficult to transport. As photographers ourselves, such equipment has always felt out of reach, and both amateurs and veterans would substantially benefit from a better solution, which provides us with a market ripe for innovation. ## What it does In contrast to current expensive, unwieldy designs, our solution is compact and modular, giving us the capability to quickly set up over 20 ft of track - while still fitting all the components into a single backpack. There are two main assemblies to SCOPE: first, our modular track, whose length can be quickly extended, and second, our carriage, which houses all electronics and controls the motion of the mounted camera. ## Design and performance The hardware was designed in SolidWorks and OnShape (a cloud-based CAD program), and rapidly prototyped using both laser cutters and 3D printers. All materials we used are readily available, such as MDF fiberboard and acrylic plastic, which would drive down the cost of our product. On the software side, we used an Arduino Uno to power three full-rotation continuous servos, which provide us with a wide range of possible movements. With simple keyboard inputs, the user can interact with the system and control the lateral and rotational motion of the mounted camera, all the while maintaining a consistent quality of footage. We are incredibly proud of the performance of this design, which is able to capture extended time-lapse footage easily and at a professional level. After extensive testing, we are pleased to say that SCOPE has beaten our expectations for ease of use, modularity, and quality of footage. ## Challenges and lessons Given that this was our first hackathon, and that all team members are freshmen with limited experience, we faced numerous challenges in implementing our vision. Foremost among these was learning to code in the Arduino language, which none of us had ever used previously - something that was made especially difficult by our inexperience with software in general. But with the support of the PennApps community, we are happy to have learned a great deal over the past 36 hours, and are now fully confident in our ability to develop similar Arduino-controlled products in the future. In addition, as we go forward, we are excited to apply our newly acquired skills to new passions, and to continue to hack. The people we've met at PennApps have helped us with everything from small tasks, such as operating a specific laser cutter, to intangible advice about navigating the college world and life in general. The four of us are better engineers as a result. ## What's next? We believe that there are many possibilities for the future of SCOPE, which we will continue to explore. Among these are the introduction of a curved track for the camera to follow, the addition of a gimbal for finer motion control, and the development of preset sequences of varying speeds and directions for the user to access. Additionally, we believe there is significant room for weight reduction to enhance the portability of our product.
If produced on a larger scale, our product will be cheap to manufacture, require very few components to assemble, and still be just as effective as more expensive solutions. ## Questions? Contact us at [teamscopecamera@gmail.com](mailto:teamscopecamera@gmail.com)
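The Arduino firmware itself isn't published; as a rough illustration of the keyboard-driven control described above, here is a host-side Python sketch that sends single-character commands over USB serial. The port name and command letters are assumptions, not SCOPE's real protocol.

```python
# Host-side sketch of SCOPE's keyboard control, assuming the Arduino listens
# for single-character commands over USB serial. The port name and the
# command letters (l/r/c/x/s) are illustrative assumptions, not the real protocol.
import serial  # pip install pyserial

COMMANDS = {
    "l": b"L",  # slide carriage left
    "r": b"R",  # slide carriage right
    "c": b"C",  # rotate camera clockwise
    "x": b"X",  # rotate camera counter-clockwise
    "s": b"S",  # stop all servos
}

def control_loop(port: str = "/dev/ttyACM0", baud: int = 9600) -> None:
    with serial.Serial(port, baud, timeout=1) as arduino:
        while True:
            key = input("command (l/r/c/x/s, q to quit): ").strip().lower()
            if key == "q":
                break
            if key in COMMANDS:
                arduino.write(COMMANDS[key])

if __name__ == "__main__":
    control_loop()
```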
## Inspiration We had a wine and cheese last week. ## Challenges I ran into AWS + Python 3: connecting to the domain ## What's next for Whine and Cheese A team wine and cheese
## About We have found that as busy college students it is difficult to find time to stay up to date with the immense amount of news. Even so, we can sometimes get stuck in filter bubbles by only reading one news source. We wanted to create a tool that summarizes news from various sources for popular news topics and is simple enough to be used by people of all ages. Introducing your new favorite website... **Gist** Select the news categories that you are interested in, press generate, and immerse yourself in your favorite news topics. Yes, it is that easy! Scroll through the articles and use the selection at the top to jump to a specific category. If you would like to read the full article, simply click on the title. Gist adapts to both mobile and desktop browsers, and it is also accessible with a screen reader. We know that you are probably wondering: what could possibly be added to Gist? Well, we could not include everything we wanted in a 36-hour hackathon, so here are some of our ideas: * Pull the news from more sources * Determine a ranking for the articles and have the best ones display first * Use generative AI to summarize the articles rather than relying on the news source providing a summary * Use generative AI to summarize all the articles for a given category * Let the user search instead of having pre-defined categories
partial
## Inspiration 1. We all hate reading textbooks 2. Students are busy 3. Students spend a lot of time reading textbooks Furthermore, we all agree with the following iconic image from the hit show *The Office*: ![Kevin from The Office "Why Waste Time When Few Word Do Trick" image](https://i.imgur.com/3XTx9ws.jpeg) ## What it does FastFacts is a web app that reads your textbook for you and summarizes it to save you time! ## How we built it Using HTML/CSS, JS and some jQuery, the Google Vision API and Cohere. Everything runs client-side or in one of the APIs we call. No need to spin up a dedicated server to run FastFacts! The web pages are all hosted on GitHub Pages (both for the actual app and its demo) while still linking to our Domain.com name ## Challenges we ran into We had some weird issues with linking our domain name to GitHub, plus it took some fiddling with Cohere's parameters before we were able to generate good summaries. ## Accomplishments that we're proud of * Not requiring a server! ~~Last year our project needed 11 AWS EC2 instances and one of our team members was charged $200 because he left the instances running 😬 (he got a refund fortunately)~~ * Good time management! * The animated CSS background 👀 ## What we learned * How to use ML libraries (Google Cloud Vision) * Practiced callbacks and other JavaScript complexities ## What's next for FastFacts * Add the ability to crop photos * Preventing popups from being blocked and other Chrome oddities * Improving our summarizing algorithm's breadth * Refining our GUI * Improved mobile support -- maybe even an app!
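FastFacts calls the Vision API from client-side JS; purely as an illustration of the same OCR step, here is a server-side Python sketch using the `google-cloud-vision` client (assumes credentials are configured).

```python
# Server-side sketch of the OCR step FastFacts performs, using the
# Google Cloud Vision API (the app itself calls Vision from client-side JS).
# Assumes `pip install google-cloud-vision` and GOOGLE_APPLICATION_CREDENTIALS.
from google.cloud import vision

def extract_textbook_text(image_path: str) -> str:
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)
    # The first annotation holds the full detected text block.
    annotations = response.text_annotations
    return annotations[0].description if annotations else ""
```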
## Inspiration Other apps exist that estimate how long it takes to read a book, but they make the calculation based on the assumption that the average reading speed is 300 WPM. However, since reading speed can vary a lot depending on what you're reading, it makes more sense to calculate it from an average of individual responses rather than assuming a base speed of 300 WPM. ## What it does This application prototype stores a database of books along with the time it took each reader to finish them. A user can look up a book's average reading time (which is literally the average of all responses for a given book), add their own time to the database, or add a book if it is not currently in the database. Although we were not able to fully implement a lot of these features, the basic prototype is still there. ## How we built it Using JavaScript, React.js, and a few additional modules installed from npm. ## Challenges we ran into One of the biggest challenges was finding a way to link all of our pages together, given that we all had limited experience using React.js. Another challenge was that we were all learning React for the first time, so it was tricky to decide what we wanted to create while also figuring out how to accomplish it in React. Trying to use different environments for this project, such as VSCode and WebStorm, was also a challenge. We ran into issues with getting initially set up, with trying to push and pull correctly, and with merging branches without losing all our work. ## Accomplishments that we're proud of It was a lot of fun (with some struggles) learning to use React.js, which offers so many possibilities. We are very proud of how we worked as a team: we were always understanding of each other, listened to each other's ideas, and everyone was willing to help if anyone ran into technical issues or was not sure how to do something. We were all very patient with each other and were always on the same page, which made the experience very fun. ## What we learned This was our first time using React and, for some of us, our first hackathon/first time using Git! We learned to use React components, Bootstrap, and routers (well, mainly Arjun, god bless his soul). With all of these concepts combined, we were able to create a web application with multiple pages, each offering different functionality. We also learned how to manage a single project with multiple authors via GitHub by pushing onto our own branches and merging everyone's work in the end. ## What's next for Average Reader After nwHacks, there are ways we can improve our Average Reader app to include more functionality on the backend.
## Inspiration The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators, utilizing advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient. ## What it does Wise Up is a website that takes many different file formats, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages in a text and figure out which ones are most likely to contain the answer to a question that the user sends. It can also recursively summarize the file at different levels of compression. ## How we built it With blood, sweat and tears! We used many tools offered to us throughout the challenge to simplify our lives. We used JavaScript, HTML and CSS for the website, and used it to communicate with a Flask backend that runs our Python scripts involving API calls and such. We have API calls to OpenAI text embeddings, to Cohere's xlarge model, to GPT-3's API, to OpenAI's Whisper speech-to-text model, and several modules for getting an MP4 from a YouTube link, text from a PDF, and so on. ## Challenges we ran into We had problems getting the backend on Flask to run on an Ubuntu server, and later had to run it on a Windows machine instead. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links ended up taking more time than expected, and finally, since the latency of sending information back and forth between the frontend and the backend would lead to a worse user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in porting code from Python to JavaScript. ## Accomplishments that we're proud of Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 on the extracted information to formulate the answer, we believe we likely equal the state of the art in the task of quickly analyzing text and answering complex questions on it, and the ease of use for many different file formats makes us proud that this project and website can be useful for so many people so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and costs pennies to run. Moreover, we have added an identification system (users signing up with a username and password) to ensure that a specific account is capped at a certain usage of the API, which is at our own cost (pennies, but we wish to avoid it becoming many dollars without our awareness of it). ## What we learned As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and for greater results. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that has saved us a few different times is using JSON for data transfer, and AWS services to store MBs of data very cheaply. Another thing we learned is that, unfortunately, as time goes on, LLMs get bigger and so sometimes much, much slower; API calls to GPT-3 and to Whisper are often slow, taking minutes for 1000+ page textbooks.
## What's next for Wise Up What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is summarizing text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or to try to make the website go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, AWS, and the CPU time running Whisper.
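A minimal sketch of the page-ranking step Wise Up describes: embed each page, embed the question, and rank pages by cosine similarity. The `embed()` function is a placeholder for the OpenAI embeddings call; in the real app, page embeddings would be computed once and cached.

```python
# Sketch of the semantic-search core: rank "pages" by cosine similarity
# between their embeddings and the question embedding. `embed()` stands in
# for the OpenAI text-embeddings API call the team actually uses.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: in the real app this calls the OpenAI embeddings API."""
    raise NotImplementedError

def rank_pages(question: str, pages: list[str], top_k: int = 3) -> list[str]:
    q = embed(question)
    q = q / np.linalg.norm(q)
    scores = []
    for page in pages:  # in practice, cache these embeddings per document
        v = embed(page)
        scores.append(float(np.dot(q, v / np.linalg.norm(v))))
    # Highest cosine similarity = most likely to contain the answer.
    best = np.argsort(scores)[::-1][:top_k]
    return [pages[i] for i in best]
```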
losing
## Inspiration AI voices are stale and impersonal. Chrome extensions like "Free Text To Speech Online" use default voices to read text messages on the web out loud. While these default voices excel in cadence and clarity, they miss the nuance and emotion inherent in human speech. This emotional connection is important for a user, as it helps them feel engaged in online communication. Using personalized speech also helps users with special needs who rely on text-to-speech, as this feature assists them in identifying who is talking when messages are vocalized. ## What it does TeleSpeech is a Chrome extension that converts Telegram messages into custom AI-generated speech, mimicking the distinct voice of each sender. Using the Chrome extension and the web app, you can upload anyone's voice and use it to read messages out loud in a Telegram group chat. ## How we built it We used a Chrome extension (HTML/CSS, vanilla JS) to read message data and run the text-to-speech, and a Next.js web app to manage the voices used for text-to-speech. To use TeleSpeech, a user will first upload their voice on our Next.js web app (<https://telespeakto.us>), which will then use the ElevenLabs text-to-speech API to send the AI-generated voice back to the Chrome extension. All user credentials and voice data are securely stored in a Firebase database. On the Chrome extension, when a user has the Telegram web app open, the extension's service worker collects all the messages in a chat. When the Chrome extension is open and a user logs in, a "Play Sound" button appears. When pressed, the Chrome extension sends the web app all the message text, and the web app returns an audio file with an AI-generated voice saying the text. ## Challenges we ran into We struggled the most with communicating between the Chrome extension and the web app. Using vanilla JS with the extension's strict CSP policies made it hard to transfer data between the two. We also struggled with learning how to use the ElevenLabs API because we'd never used it before. Finally, two of the members of our team didn't know TypeScript well, so they faced a decently steep learning curve heading into the project. ## Accomplishments that we're proud of The first time we got a teammate's voice to come out of the speakers reading a message was incredible. We all thought we could do this project before that happened, but afterward it felt so much more real and attainable. Another accomplishment is that we built a fully functioning project despite this being our first time at a hackathon. ## What we learned Two of the members in the group did not know a lot of JavaScript or TypeScript going in, and the short time frame was not enough to completely prepare them. But over 36 hours, they were able to figure it out to a higher degree than expected. The other two members learned a lot about how to build Chrome extensions, such as how to use service workers and how to have them communicate with a web app. Besides coding, the four of us also learned a lot about accessibility on screens. ## What's next for TeleSpeech The next big thing for TeleSpeech is for it to work on multiple platforms, not just Telegram. We want to expand it to WhatsApp, Instagram, and Facebook. It would also be nice if we could use it for news articles, where it would read news articles in the author's voice, or have the articles' quotes read in each quoted person's voice.
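A rough Python sketch of the text-to-speech call the web app makes; it assumes ElevenLabs' v1 REST endpoint and `xi-api-key` header at the time of writing, and the voice ID and key are placeholders.

```python
# Sketch of the text-to-speech call TeleSpeech's backend makes. This assumes
# ElevenLabs' v1 REST endpoint and header names; VOICE_ID and the API key
# are placeholders (a voice ID is returned when a voice is uploaded/cloned).
import requests

API_KEY = "your-elevenlabs-key"   # placeholder
VOICE_ID = "voice-id-from-upload"  # placeholder

def speak(text: str, out_path: str = "message.mp3") -> None:
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
    resp = requests.post(
        url,
        headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
        json={"text": text},
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)  # raw audio bytes returned by the API
```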
## Inspiration 1 in 2 Canadians will personally experience a mental health issue by age 40, with minority communities at greater risk. As the mental health epidemic surges and support reaches its capacity, we sought to build something that connects trained volunteer companions with people in distress in several convenient ways. ## What it does Vulnerable individuals are able to call or text any available trained volunteer during a crisis. If needed, they are also able to schedule an in-person meet-up for additional assistance. A 24/7 chatbot is also available to assist through appropriate conversation. You can do all of this anonymously, anywhere, on any device, which increases accessibility and comfort. ## How I built it Using Figma, we designed the frontend and exported the frame into React, using Acovode for backend development. ## Challenges I ran into Setting up Firebase to connect to the frontend React app. ## Accomplishments that I'm proud of Proud of the final look of the app/site with its clean, minimalistic design. ## What I learned The need for mental health accessibility is essential but still unmet despite all the recent efforts. Using Figma, Firebase, and trying out many open-source platforms to build apps. ## What's next for HearMeOut We hope to increase the chatbot's support and teach it to diagnose mental disorders using publicly accessible data. We also hope to develop a modeled approach with specific guidelines and rules in a variety of languages.
## Inspiration In the current media landscape, control over distribution has become almost as important as the actual creation of content, and that has given Facebook a huge amount of power. The impact that the Facebook newsfeed has on the formation of opinions in the real world is so huge that it potentially affected the 2016 election decisions, yet these newsfeeds were not completely accurate. Our solution? FiB, because with 1.5 billion users, every single tweak in an algorithm can make a change, and we don't stop at just one. ## What it does Our algorithm is twofold, as follows: **Content consumption**: Our Chrome extension goes through your Facebook feed in real time as you browse it and verifies the authenticity of posts. These posts can be status updates, images or links. Our backend AI checks the facts within these posts and verifies them using image recognition, keyword extraction, source verification, and a Twitter search to verify whether a screenshot of a tweet posted is authentic. The posts are then visually tagged in the top right corner in accordance with their trust score. If a post is found to be false, the AI tries to find the truth and shows it to you. **Content creation**: Each time a user posts/shares content, our chatbot uses a webhook to get a call. This chatbot then uses the same backend AI as content consumption to determine whether the new post by the user contains any unverified information. If so, the user is notified and can choose to either take it down or let it stand. ## How we built it Our Chrome extension is built using JavaScript and uses advanced web-scraping techniques to extract links, posts, and images. This is then sent to an AI. The AI is a collection of API calls that we collectively process to produce a single "trust" factor. The APIs include Microsoft Cognitive Services such as image analysis, text analysis, and Bing web search, Twitter's search API, and Google's Safe Browsing API. The backend is written in Python and hosted on Heroku. The chatbot was built using Facebook's wit.ai. ## Challenges we ran into Web scraping Facebook was one of the earliest challenges we faced. Most DOM elements on Facebook have div ids that constantly change, making them difficult to keep track of. Another challenge was building an AI that knows the difference between a fact and an opinion so that we do not flag opinions as false, since only facts can be false. Lastly, integrating all these different services, in different languages, under a single web server was a huge challenge. ## Accomplishments that we're proud of All of us were new to JavaScript, so we all picked up a new language this weekend. We are proud that we could successfully web scrape Facebook, which uses a lot of techniques to prevent people from doing so. Finally, the flawless integration we were able to create between these different services really made us feel accomplished. ## What we learned All concepts used here were new to us. Two people on our team are first-time hackathoners and learned completely new technologies in the span of 36 hours. We learned JavaScript, Python, Flask servers and AI services. ## What's next for FiB Hopefully this can be better integrated with Facebook and then be adopted by other social media platforms to make sure we stop believing in lies.
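The writeup does not publish the exact weighting, so the following is a purely hypothetical sketch of how per-service signals might be combined into the single "trust" factor.

```python
# Hypothetical sketch of combining per-service signals into FiB's single
# "trust" factor. The weights and the 0-1 signal values are illustrative;
# the writeup does not specify the team's actual formula.
WEIGHTS = {
    "image_analysis": 0.2,   # Microsoft Cognitive Services image check
    "text_analysis": 0.25,   # fact vs. opinion / claim extraction
    "web_search": 0.3,       # Bing web search corroboration
    "twitter_match": 0.15,   # does a screenshotted tweet really exist?
    "safe_browsing": 0.1,    # Google Safe Browsing link check
}

def trust_score(signals: dict[str, float]) -> float:
    """Weighted average of 0-1 signals; missing signals are skipped."""
    total, weight_sum = 0.0, 0.0
    for name, weight in WEIGHTS.items():
        if name in signals:
            total += weight * signals[name]
            weight_sum += weight
    return total / weight_sum if weight_sum else 0.5  # 0.5 = unknown

print(trust_score({"web_search": 0.9, "twitter_match": 1.0}))  # ~0.93
```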
winning
## Inspiration Simplify was born out of wanting to understand complex topics and stories on a more quantifiable, informatic level. It started with wanting to develop a tool for authors and writers to craft more cohesive, well-flowing essays and articles by analyzing sentiment across multiple groups of text, such as chapters and paragraphs. ## How we built it AWS, JS, HTML, CSS, Python, Co:here, Estuary, GitHub ## Challenges we ran into Literally everything broke, then we fixed it
## Inspiration One of the major inspirations for this project came when we learned about the case of one of our team members, who was previously in a car accident. According to him, he didn't get proper guidance because different doctors gave him different solutions. There are countless people in our society who need help on their cases from the right minds. That's how we figured out what to build for patients who need help: a platform where a panel of doctors can work together on a patient's case to provide a better solution. ## What it does RareCare is a web application designed to facilitate the management and research of rare diseases. It provides separate portals for patients, doctors, and researchers, each with specific functionalities: patients can view their health data, search for doctors, and book appointments; doctors can manage appointments, review patient cases, and conduct video consultations; researchers can access publications, collaborate with other researchers, analyze data, and provide insights on case studies. ## How we built it Next.js as the React framework, TypeScript for type-safe JavaScript, Tailwind CSS for styling, custom components (like GlassCard) for consistent UI elements, React hooks for state management, a Tabs component for organizing the different sections within each portal, and some AI tools. ## Challenges we ran into Creating a unified yet role-specific interface for different user types; implementing secure authentication and role-based access control; designing an intuitive UI for complex medical and research data. ## Accomplishments that we're proud of Developing a comprehensive platform that serves patients, doctors, and researchers; creating a visually appealing and modern UI with a consistent design language; implementing features like video consultations and data analysis tools. ## What we learned How to structure a complex application with multiple user roles, techniques for creating reusable components in React, and best practices for handling medical data and research information. ## What's next for RareCare Implementing real authentication and database integration, adding more advanced features like AI-assisted diagnosis, expanding the data analysis capabilities for researchers, enhancing the collaboration tools for the research community, and implementing a mobile app version for better accessibility.
## Inspiration Throughout the COVID-19 pandemic, there has been an astounding 86% increase in technology usage amongst senior citizens. However, with feature-rich platforms found everywhere on the internet, big walls of text and countless buttons can overwhelm those who are not familiar with technology. Senior folks who are used to face-to-face interactions are now having a difficult time adjusting to doing their everyday tasks online. ## What it does Using genuine user activity data, the Walkthru Chrome extension develops highlighted walk-throughs of the most common use cases. By either selecting an existing use case or using the voice command feature, Walkthru immediately begins the guidance process from start to end in order to accomplish any task you may have. ## How we built it Walkthru is built as a Chrome extension that automatically loads the appropriate common actions taken on a website. These common actions are based on user data taken from website heatmap data sources such as Hotjar. For the voice command feature, we use Google Speech-to-Text to match the spoken command to a predefined use case, if one exists. Walkthru even accepts commands in foreign languages, thanks to Speech-to-Text's multilingual functionality. ## Challenges we ran into Assuming the perspective of senior citizens without being ones; we often had to overcome our own biases by speaking with senior citizens firsthand. ## Accomplishments that we're proud of A unique and unparalleled solution for simple yet intelligent guidance for senior citizens that can be scaled as an accessibility tool for other demographics and differently-abled people. ## What's next for Walkthru Combine machine learning with the Hotjar heatmap data to automate the use-case guidance creation process, since use cases are currently created manually.
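A sketch of the voice-command path, transcribing audio with the `google-cloud-speech` client and matching the transcript to a predefined use case. The audio format, language code, and use-case table are illustrative assumptions.

```python
# Sketch of Walkthru's voice-command path: transcribe speech with Google
# Cloud Speech-to-Text, then match the transcript to a predefined use case.
# Assumes `pip install google-cloud-speech`, LINEAR16 audio at 16 kHz, and
# configured credentials; the use-case table is an illustrative assumption.
from google.cloud import speech

USE_CASES = {
    "send an email": "gmail_compose_walkthrough",
    "pay a bill": "banking_bill_pay_walkthrough",
    "book an appointment": "clinic_booking_walkthrough",
}

def transcribe(audio_bytes: bytes, language: str = "en-US") -> str:
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code=language,  # multilingual support via language_code
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

def match_use_case(transcript: str):
    lowered = transcript.lower()
    for phrase, walkthrough in USE_CASES.items():
        if phrase in lowered:
            return walkthrough
    return None  # no predefined use case matched
```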
losing
## Inspiration Since this was the first hackathon for most of our group, we wanted to work on a project where we could learn something new while sticking to familiar territory. Thus we settled on programming a Discord bot, something all of us have extensive experience using, that works with UiPath, a tool equally as intriguing as it is foreign to us. We wanted to create an application that would allow us to track the prices and other related information of tech products in order to streamline the buying process and enable the user to get the best deals. We decided to program a bot that utilizes user input, web automation, and web scraping to generate information on various items, focusing on computer components. ## What it does Once online, our PriceTracker bot runs on two main commands: !add and !prices. Using these two commands, a few external CSV files, and UiPath, the bot stores items input by the user and returns related information found via UiPath's web-scraping features. A concise display of the product's price, stock, and sale discount is shown to the user through the Discord bot. ## How we built it We programmed the Discord bot using the comprehensive discord.py API. Using its thorough documentation and a handful of tutorials online, we quickly learned how to initialize a bot using Discord's Developer Portal and create commands that would work with specified text channels. To scrape web pages, in our case the Canada Computers website, we used a UiPath sequence along with the aforementioned CSV file, which contained input retrieved from the bot's "!add" command. In the UiPath process, each product is searched on the Canada Computers website, and then, through data scraping, the most relevant results from the search and all related information are processed into a CSV file. This CSV file is then parsed to create a concise description, which is returned in Discord whenever the bot's "!prices" command is called. ## Challenges we ran into The most challenging aspect of our project was figuring out how to use UiPath. Since Python was such a large part of programming the Discord bot, our experience with the language helped enormously. The same could be said about working with text and CSV files. However, because automation was a topic none of us had much knowledge of, our first encounter with it was naturally rough. Another big problem with UiPath was learning how to use variables, as we wanted to generalize the process so that it would work for any product inputted. Eventually, with enough perseverance, we were able to incorporate UiPath into our project exactly the way we wanted to. ## Accomplishments that we're proud of Learning the ins and outs of automation alone was a strenuous task. Being able to incorporate it into a functional program is even more difficult, but incredibly satisfying as well. Albeit small in scale, this introduction to automation serves as a good stepping stone for further research on the topic of automation and its capabilities. ## What we learned Although we stuck close to our roots by relying on Python for programming the Discord bot, we learned a ton of new things about how these bots are initialized, the various attributes and roles they can have, and how we can use IDEs like PyCharm in combination with larger platforms like Discord. Additionally, we learned a great deal about automation and how it functions through UiPath, which absolutely fascinated us the first time we saw it in action.
As this was the first hackathon for most of us, we also got a glimpse into what we have been missing out on and how beneficial these competitions can be. Getting the extra push to start working on side projects and indulging in solo research was greatly appreciated. ## What's next for Tech4U We went into this project with a plethora of different ideas, and although we were not able to incorporate all of them, we did finish with something we were proud of. Some other ideas we wanted to integrate include: scraping multiple different websites, formatting output differently on Discord, automating the act of purchasing an item, taking input and giving output under the same command, and more.
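A condensed sketch of the two bot commands described above, assuming discord.py 2.x with the message-content intent enabled; the CSV file names and column layout are assumptions standing in for the files the UiPath job reads and writes.

```python
# Condensed sketch of PriceTracker's two commands, assuming discord.py 2.x.
# The CSV layout mirrors the flow described above: !add queues items for the
# UiPath job; !prices reads the UiPath job's output.
import csv
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True
bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command(name="add")
async def add_item(ctx: commands.Context, *, item: str):
    """Append a product name to the watchlist CSV that UiPath consumes."""
    with open("watchlist.csv", "a", newline="") as f:
        csv.writer(f).writerow([item])
    await ctx.send(f"Added **{item}** to the watchlist.")

@bot.command(name="prices")
async def prices(ctx: commands.Context):
    """Read the CSV produced by the UiPath scrape and post a summary."""
    with open("results.csv", newline="") as f:
        rows = list(csv.DictReader(f))  # columns: name, price, stock, discount
    lines = [f"{r['name']}: {r['price']} ({r['stock']}, {r['discount']} off)"
             for r in rows]
    await ctx.send("\n".join(lines) or "No results yet - run the UiPath job.")

bot.run("YOUR_BOT_TOKEN")  # placeholder token
```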
## Inspiration The issue of waste management is something that many people view as trivial, yet it is one of the fundamental factors that will decide the liveability of the world. Even in Canada, a developed country, only 9% of plastics are recycled, meaning that the equivalent of 24 CN Towers of recyclable plastic enters our landfills each year. In developing nations, this is an even more serious issue that can have profound impacts on quality of life. ## What it does DetritusAI is a smart garbage can that is able to detect and categorize waste into the respective containers and transmit essential information that allows for the optimization of garbage routes. DetritusAI tracks the quantity of waste in each container and communicates with the client-side application used by garbage truck drivers to report how full each container is. Based on the capacity of each garbage can and its location, DetritusAI calculates the optimal route for garbage trucks to collect garbage while minimizing distance and time, even taking traffic into account. ## How we built it When a user places an object near the garbage can, a time-of-flight sensor detects the object and triggers an image classification algorithm that identifies the category of the waste. A message is sent via Solace, which instructs the garbage can to open the appropriate lid. Within the garbage cans, the time-of-flight sensors continuously determine the capacity of the bin and communicate that information via Solace to the client-side application. Using the Google Directions API, the optimal route for garbage collection is determined by factoring in traffic, distance, and the capacity of each bin. The optimal route is displayed on the dashboard, along with turn-by-turn directions. ## Challenges I ran into We wanted to display a visual representation of the optimal route; however, we did not have enough time to figure out how to render the calculated route's directions on a map. ## Accomplishments that I'm proud of We're proud of how we were able to integrate the hardware, the classification algorithm, and the dashboard into a seamless solution for waste management -- especially given the tight time constraint. ## What I learned Communication between different components often takes more time than one might imagine. Thankfully, Solace is a very powerful tool that resolved this issue. ## What's next for DetritusAI 1. Visually display the optimized route on the dashboard for the user 2. Add a compost category because the environment is cool. 3. Incorporate a social aspect that encourages people to recycle, such as incentives or leaderboards
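A sketch of the routing step, asking the Google Directions API to optimize waypoint order for the fullest bins; the API key, coordinates, and 75% fill threshold are illustrative.

```python
# Sketch of DetritusAI's route step: ask the Google Directions API to order
# the fullest bins as optimized waypoints. Assumes an API key with the
# Directions API enabled; coordinates and the fill threshold are illustrative.
import requests

API_KEY = "your-google-maps-key"  # placeholder

def optimal_route(depot: str, bins: list[dict]) -> list[int]:
    """bins: [{'location': 'lat,lng', 'fill': 0.0-1.0}, ...].
    Returns the visit order for bins above the fill threshold."""
    stops = [b["location"] for b in bins if b["fill"] >= 0.75]
    params = {
        "origin": depot,
        "destination": depot,  # round trip back to the depot
        "waypoints": "optimize:true|" + "|".join(stops),
        "departure_time": "now",  # factor in live traffic
        "key": API_KEY,
    }
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/directions/json", params=params
    ).json()
    # waypoint_order gives the optimized visit sequence of the stops.
    return resp["routes"][0]["waypoint_order"]
```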
## Inspiration COVID-19 has rapidly affected our day-to-day life and businesses, and has also disrupted world trade and movement. With such a drastic pause in our lives, we wanted to provide easy-to-access COVID-19 data according to the user's needs: COVID-19 screening, news/headlines, facts, etc., for our Discord users. Conducting searches in a web browser can be inconvenient and takes time. Thus, we wanted to create something that could provide important information fast, without the need to open a web browser or visit another hard-to-use statistics website. BOT-19 was designed to provide exactly that - fast and accurate information on the fly with simple commands. This Discord bot uses APIs for COVID-19 statistics as well as news sources to extract data without requiring the user to visit these sites. ## What it does BOT-19 provides a range of data: COVID-19 statistics for the entire world, for specific countries, or even specific Canadian provinces. In addition, it has a news/headline command that provides headline news on any topic of the user's choice, with or without relation to COVID-19. On top of that, the bot can also visualize COVID-19 data with a daily self-updating graph for easier communication. Furthermore, the bot can perform COVID-19 screening (with the help of the Ontario Ministry of Health guidelines) through interactive questions and answers. ## How we built it To perform such operations, a sophisticated language was our priority, so we predominantly used Python and its libraries to create BOT-19. This involved obtaining public API data sets and converting them to JSON objects, which provide readable data structures such as dictionaries and lists. Specifically, we used Postman's COVID-19 APIs and "newsapi.io" for fetching news/headlines relevant to the user's needs. We then implemented numerous commands to provide users with their desired information. Furthermore, the Pandas and Matplotlib libraries in Python helped plot the COVID-19 cases of selected countries in a visual format, with an API and data set that refresh daily. ## Challenges we ran into Initially, we worked together to understand the process of making JSON requests in Python and how to obtain data from APIs. With determination, dedication, and external research, we were able to implement APIs into our code and retrieve our desired data as per the user's request. Another challenge was incorporating the graph into an Embed structure for output to the user. Mutual collaboration and experimentation helped solve that problem. ## Accomplishments that we're proud of We are proud of creating a project that allows individuals to educate themselves and keep up with current events. Since COVID-19 has changed our whole lives, we believe it is extremely important for people to stay updated with information regarding COVID-19 so that they are able to observe possible trends, which will also allow them to take extra precautions. We built this bot with a multitude of features so that everyone is able to benefit from it and learn something new. We are also proud of implementing technologies we had limited knowledge of, including APIs and different libraries.
## What we learned We learned about the use of APIs and how to effectively incorporate them with different libraries, such as Matplotlib for creating a live graph, while also practicing data handling in Python. We also investigated different features of Discord applications and how to create a bot application in a coding language we are familiar with. ## What's next for BOT-19 Firstly, we would like to expand our news database so we cover a wider range of topics. We want to create a friendlier user interface, transition screening into a DM chat instead of in-server, and develop more functions - for example, providing comparisons of different statistics. Moreover, we could add a live graph for each country showing its trends, which will help the user visualize the data better. Finally, we would like to host our Discord bot, to allow users to use the bot in their own servers.
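A sketch of the daily graph step: fetch a country's case history, load it into pandas, and save a Matplotlib PNG the bot can attach. The API URL and JSON field names are illustrative assumptions.

```python
# Sketch of BOT-19's daily graph: pull a country's case history as JSON,
# load it into pandas, and render a PNG the bot can attach to an embed.
# The API URL and JSON field names are illustrative assumptions.
import matplotlib
matplotlib.use("Agg")  # render without a display, as on a bot host
import matplotlib.pyplot as plt
import pandas as pd
import requests

def plot_cases(country: str, api_url: str) -> str:
    data = requests.get(api_url).json()  # e.g. [{"date": ..., "cases": ...}]
    df = pd.DataFrame(data)
    df["date"] = pd.to_datetime(df["date"])
    fig, ax = plt.subplots(figsize=(8, 4))
    ax.plot(df["date"], df["cases"])
    ax.set_title(f"COVID-19 cases - {country}")
    ax.set_xlabel("Date")
    ax.set_ylabel("Confirmed cases")
    out = f"{country.lower()}_cases.png"
    fig.savefig(out, bbox_inches="tight")
    plt.close(fig)
    return out  # path handed to discord.File(...) for the embed
```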
partial
# 🍅 NutriSnap ### NutriSnap is an intuitive nutrition tracker that seamlessly integrates into your daily life. ## Inspiration Every time you go to a restaurant, it's highly likely that you'll see someone taking a picture of their food before they eat it. We wanted to create a seamless way for people to keep track of their nutritional intake, minimizing the obstacles between you and awareness of the food you consume. Building on the fact that people already often photograph the food they eat, we decided to utilize something as simple as one's camera app to keep track of their daily nutritional intake. ## What it does NutriSnap analyzes pictures of food to detect their nutritional value. After simply scanning a picture of food, it summarizes all the nutritional information and displays it to the user, while also adding it to a log of all consumed food so people have more insight into everything they consume. NutriSnap has two fundamental features: * scan UPC codes on purchased items and fetch their nutritional information * detect food from an image using a public ML food-classification API and estimate its nutritional information This information is summarized and displayed to the user in a clean and concise manner, taking their recommended daily intake values into account. Furthermore, it is added to a log of all consumed food items so the user can always access a history of their nutritional intake. ## How we built it The app uses React Native for its frontend and a Python Django API for its backend. If the app detects a UPC code in the photo, it retrieves nutritional information from a [UPC food nutrition API](https://world.openfoodfacts.org) and summarizes its data in a clean and concise manner. If the app fails to detect a UPC code in the photo, it forwards the photo to its Django backend, which proceeds to classify all the food in the image using another [open API](https://www.logmeal.es). All collected nutritional data is forwarded to the [OpenAI API](https://platform.openai.com/docs/guides/text-generation/json-mode) to summarize the nutritional information of the food item and to give the item a nutrition rating between 1 and 10. This data is displayed to the user and also added to their log of consumed food. ## What's next for NutriSnap As a standalone app, NutriSnap is still pretty inconvenient to integrate into your daily life. One amazing update would be to make the API more independent of the frontend, allowing people to sync their Google Photos library so NutriSnap automatically detects and summarizes all consumed food without the need for any manual user input.
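A sketch of the UPC path against the Open Food Facts endpoint the team links to; the field names follow that API's `nutriments` object, and the example barcode is only for illustration.

```python
# Sketch of NutriSnap's UPC path, hitting the Open Food Facts v0 product
# endpoint referenced above. Field names follow that API's nutriments object;
# the example barcode is just for illustration.
import requests

def lookup_upc(barcode: str) -> dict:
    url = f"https://world.openfoodfacts.org/api/v0/product/{barcode}.json"
    data = requests.get(url, timeout=10).json()
    if data.get("status") != 1:
        raise ValueError(f"No product found for UPC {barcode}")
    product = data["product"]
    n = product.get("nutriments", {})
    return {
        "name": product.get("product_name", "unknown"),
        "calories_per_100g": n.get("energy-kcal_100g"),
        "sugars_g": n.get("sugars_100g"),
        "protein_g": n.get("proteins_100g"),
    }

print(lookup_upc("737628064502"))  # illustrative barcode
```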
## Inspiration Travelling can be expensive, but data plans abroad can push expenses to a whole new level. With NavText, we were inspired to create an app fuelled by SMS messaging that provides all the services that might be useful while travelling. With this app, travelling is made easy without the stress of finding Wi-Fi or a data plan. ## What it does NavText has multiple functionalities and is designed as an all-around app useful for travelling locally or abroad. NavText will guide you in navigating with multiple modes of transportation, including requesting a nearby Uber driver, taking local transit, driving, or walking. Information such as the estimated travel time and step-by-step directions will be texted to your phone after making a request. You can also explore local attractions and food places; NavText will text you the address, directions, opening hours, and price level. ## How we built it Swift, the Uber API, the Google Maps API, MessageBird ## Challenges we ran into Integrating MessageBird with the Google Maps API. Working around iOS SMS limitations, such as reading and composing text messages. ## Accomplishments that we're proud of A polished iOS app that makes good use of text message formatting. ## What we learned ## What's next for NavText
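A sketch of the outbound SMS leg with the MessageBird Python SDK; the API key, originator, and message-chunking policy are illustrative assumptions.

```python
# Sketch of NavText's outbound leg: send turn-by-turn steps back over SMS
# with the MessageBird Python SDK (`pip install messagebird`). The key,
# originator name, and chunk size are illustrative assumptions.
import messagebird

client = messagebird.Client("your-messagebird-key")  # placeholder

def text_directions(recipient: str, steps: list[str]) -> None:
    body = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(steps))
    # SMS segments are limited, so split long direction lists into chunks.
    for start in range(0, len(body), 1600):
        client.message_create(
            "NavText",             # originator shown to the traveller
            recipient,             # e.g. "+15551234567"
            body[start:start + 1600],
        )
```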
## Inspiration Over the past 30 years, the percentage of American adults who read literature has dropped about 14%. There we found our inspiration. The issue we identified is that, due to the rise of modern technologies, movies and other films are more captivating than reading a boring book. We wanted to change that. ## What it does By combining Google's Mobile Vision API, Firebase, IBM Watson, and Spotify's API, Immersify first scans text through our Android application utilizing Google's Mobile Vision API. After the text is stored in Firebase, IBM Watson's Tone Analyzer deduces its emotion. A dominant emotional score is then sent to Spotify's API, and the appropriate music is played to the user. With Immersify, text can finally be brought to life and readers can feel more engaged with their novels. ## How we built it On the mobile side, the app was developed using Android Studio. The app uses Google's Mobile Vision API to recognize and detect text captured through the phone's camera. The text is then uploaded to our Firebase database. On the web side, the application pulls the text sent by the Android app from Firebase. The text is then passed into IBM Watson's Tone Analyzer API to determine the tone of each individual sentence within the paragraph. We then ran our own algorithm to determine the overall mood of the paragraph based on the different tones of each sentence. A final mood score is generated, and based on this score, specific Spotify playlists play to match the mood of the text. ## Challenges we ran into Getting Firebase to cooperate with both our mobile app and our web app was difficult for the whole team. Querying the API took multiple attempts, as our POST request to IBM Watson was out of sync. In addition, the text recognition function in our mobile application did not perform as accurately as we anticipated. ## Accomplishments that we're proud of Some accomplishments we're proud of are successfully using Google's Mobile Vision API and IBM Watson's API. ## What we learned We learned how to push information from our mobile application to Firebase and pull it through our web application. We also learned how to use new APIs we had never worked with in the past. Aside from the technical aspects, as a team, we learned to collaborate to tackle all the tough challenges we encountered. ## What's next for Immersify The next step for Immersify is to bring this software to Google Glass. This would eliminate the two-step process of taking a picture in an Android app and going to the web app to generate a playlist.
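The writeup doesn't specify the exact scoring rule, so here is a hypothetical sketch of aggregating per-sentence tones (in the shape Watson's Tone Analyzer returns, a `tone_id` plus a 0-1 `score`) into a dominant mood, with an assumed mood-to-playlist mapping.

```python
# Hypothetical sketch of Immersify's mood-aggregation step: sum per-sentence
# tone scores and pick the dominant tone for the paragraph. The playlist
# mapping and the summation rule are assumptions, not the team's exact algorithm.
from collections import defaultdict

PLAYLISTS = {"joy": "upbeat-mix", "sadness": "mellow-mix",
             "anger": "intense-mix", "fear": "suspense-mix"}

def dominant_mood(sentence_tones: list[list[dict]]) -> str:
    """sentence_tones: one list of {'tone_id', 'score'} dicts per sentence."""
    totals: dict[str, float] = defaultdict(float)
    for tones in sentence_tones:
        for t in tones:
            totals[t["tone_id"]] += t["score"]
    return max(totals, key=totals.get)

tones = [[{"tone_id": "joy", "score": 0.8}],
         [{"tone_id": "joy", "score": 0.6}, {"tone_id": "fear", "score": 0.7}]]
print(PLAYLISTS[dominant_mood(tones)])  # upbeat-mix (joy: 1.4 vs fear: 0.7)
```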
winning
# see our presentation [here](https://docs.google.com/presentation/d/1AWFR0UEZ3NBi8W04uCgkNGMovDwHm_xRZ-3Zk3TC8-E/edit?usp=sharing) ## Inspiration Without purchasing hardware, there are few ways to have contact-free interactions with your computer. To make such technologies accessible to everyone, we created one of the first touch-less, hardware-less means of computer control by employing machine learning and gesture analysis algorithms. Additionally, we wanted to make it as accessible as possible in order to reach a wide demographic of users and developers. ## What it does Puppet uses machine learning technology, such as k-means clustering, to distinguish between different hand signs. Then, it interprets the hand signs as computer inputs such as keys or mouse movements to allow the user to have full control without a physical keyboard or mouse. ## How we built it Using OpenCV to capture the user's camera input and MediaPipe to parse hand data, we could capture the relevant features of a user's hand. Once these features are extracted, they are fed into the k-means clustering algorithm (built with scikit-learn) to distinguish between different types of hand gestures. The hand gestures are then translated into specific computer commands, pairing AppleScript with PyAutoGUI to provide the user with the Puppet experience. ## Challenges we ran into One major issue was that in the first iteration of our k-means clustering algorithm, the clusters were colliding. We fed the model the distance of each fingertip from the wrist and designed it to return the relevant gesture. Though we considered changing this to a coordinate-based system, we settled on making the hand gestures more distinct under our current distance system. This was ultimately the best solution because it allowed us to keep a small model while increasing accuracy. Mapping a finger position on camera to a point for the cursor on the screen was not as easy as expected. Because of inaccuracies in the hand detection, among other things, the mouse was at first very shaky. Additionally, it was nearly impossible to reach the edges of the screen because a finger would not be detected near the edge of the camera's frame. In our Puppet implementation, we constantly *pursue* the desired cursor position instead of directly *tracking* it with the camera. Also, we scaled our coordinate system so it requires less hand movement to reach the screen's edge. ## Accomplishments that we're proud of We are proud of the gesture recognition model and motion algorithms we designed. We also take pride in the organization and execution of this project in such a short time. ## What we learned A lot was discovered about the difficulties of utilizing hand gestures. From a data perspective, many of the gestures look very similar, and it took us time to develop the specific transformations, models and algorithms to parse our data into individual hand motions/signs. Also, our team members possess diverse and separate skill sets in machine learning, mathematics and computer science. We can proudly say it required nearly all three of us to overcome any major issue presented. Because of this, we all leave here with a more advanced skill set in each of these areas and better continuity as a team. ## What's next for Puppet Right now, Puppet can control presentations, the web, and your keyboard. In the future, Puppet could control much more.
* Opportunities in education: Puppet provides a more interactive experience for controlling computers. This could be used in elementary school classrooms to give kids hands-on learning with maps, science labs, and language. * Opportunities in video games: As Puppet advances, it could give game developers a way to create games where the user interacts without a controller. Unlike technologies such as the Xbox Kinect, it would require no additional hardware. * Opportunities in virtual reality: Cheaper VR alternatives such as Google Cardboard could be paired with Puppet to create a premium VR experience with at-home technology. This could be used in both examples described above. * Opportunities in hospitals / public areas: People have been especially careful about avoiding germs lately. With Puppet, you won't need to touch keyboards and mice shared by many doctors, providing a more sanitary way to use computers.
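A sketch of Puppet's two core ideas: clustering fingertip-to-wrist distances with scikit-learn's k-means, and pursuing the target cursor position rather than jumping to it. Landmark indices follow MediaPipe Hands (0 = wrist, 4/8/12/16/20 = fingertips); the pursuit fraction is an assumption.

```python
# Sketch of Puppet's gesture features and cursor smoothing. (1) Cluster
# fingertip-to-wrist distances with k-means to name a gesture; (2) pursue
# the target cursor position each frame, which smooths shaky tracking.
import numpy as np
from sklearn.cluster import KMeans

FINGERTIPS = [4, 8, 12, 16, 20]  # MediaPipe Hands fingertip landmark indices

def features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (21, 2) array of hand points; returns 5 wrist distances."""
    wrist = landmarks[0]
    return np.array([np.linalg.norm(landmarks[i] - wrist) for i in FINGERTIPS])

def fit_gestures(frames: list[np.ndarray], n_gestures: int = 4) -> KMeans:
    """Fit once on recorded hand frames, one feature row per frame."""
    X = np.stack([features(f) for f in frames])
    return KMeans(n_clusters=n_gestures, n_init=10).fit(X)

def smooth_cursor(cursor: np.ndarray, target: np.ndarray,
                  pursuit: float = 0.25) -> np.ndarray:
    """Move a fixed fraction of the way toward the target each frame."""
    return cursor + pursuit * (target - cursor)
```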
## Inspiration Ideas for interactions from: * <http://paperprograms.org/> * <http://dynamicland.org/> but I wanted to go from the existing computer down, rather than from the bottom up, and make something that was a twist on the existing desktop: web browser, terminal, chat apps, keyboard, windows. ## What it does Maps your Mac desktop windows onto pieces of paper + tracks a keyboard and lets you focus on whichever window is closest to the keyboard. The goal is to make something you might use day-to-day as a full computer. ## How I built it A webcam and pico projector mounted above the desk + OpenCV doing basic computer vision to find all the pieces of paper and the keyboard. ## Challenges I ran into * Reliable tracking under different light conditions. * Feedback effects from projected light. * Tracking the keyboard reliably. * Hooking into macOS to control window focus ## Accomplishments that I'm proud of Learning some CV stuff, simplifying the pipelines I saw online by a lot and getting better performance (binary thresholds are great), getting a surprisingly usable system. Cool emergent things like combining pieces of paper + the side ideas I mention below. ## What I learned Some interesting side ideas here: * Playing with the calibrated camera is fun on its own; you can render it in place and get a cool ghost effect * Would be fun to use a deep learning model to identify and compute with arbitrary objects ## What's next for Computertop Desk * Pointing tool (laser pointer?) * More robust CV pipeline? Machine learning? * Optimizations: run stuff on GPU, cut latency down, improve throughput * More 'multiplayer' stuff: arbitrary rotations of pages, multiple keyboards at once
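A sketch of the paper-finding core hinted at above ("binary thresholds are great"): threshold the frame and keep large four-sided contours as candidate sheets. The threshold and area values are illustrative and depend on lighting.

```python
# Sketch of the paper-tracking core: binary-threshold the camera frame and
# keep large 4-sided contours as candidate sheets of paper. Threshold and
# area values are illustrative and depend on lighting.
import cv2
import numpy as np

def find_papers(frame: np.ndarray, min_area: float = 5000) -> list[np.ndarray]:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    papers = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:  # quadrilateral: likely a sheet of paper
            papers.append(approx.reshape(4, 2))
    return papers

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(f"found {len(find_papers(frame))} paper-like regions")
cap.release()
```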
## Inspiration The mission of OpenDoAR is to empower universities and small businesses in a safe return to physical spaces for their people. We aim to improve outcomes for our users in affordable **health compliance** and **overall monitoring**. This year and next, the return to physical spaces for organisations and educational institutions is in progress. Big companies employ advanced and expensive methods to ensure healthy employees are entering the office. These methods are not always affordable or convenient for smaller organisations. Looking at educational institutions, there is ambiguity in compliance with the return-to-campus system. Currently, UC Berkeley uses a badge system for COVID health monitoring and safety. Every day, a student on campus is meant to fill out a questionnaire via mobile, which grants the student a badge status, e.g. a green badge (assuming the student doesn't have symptoms, is fully vaccinated, etc.). There are yellow and red badges too, for less ideal answers. Before entering a lab or classroom, a Teaching Assistant (TA) is meant to check whether the student has a green badge, and if not, entry is not granted. The effort of the student needing to pull out their phone, display the green badge, and show it to the TA at the door is more than students are willing to make. Hence, over 90% of the time, this practice is ignored. This becomes more common as the reality and local impact of COVID fades from people's minds (as infection rates drop). We decided to use face identification for our implementation, since it is very commonplace with recent iPhones and simplifies ease of use. Within the last few years, it has become normal for people to look at their phone to unlock private information. Due to its simplicity of access, we hope to extend this to our AR application to allow for quick identification. ## Core Features * AR-based text overlay and image query for mobile. Low-effort scanning of students entering the classroom. * Mobile dashboard to analyze core statistics about the number of green/red badges * User accounts and authentication for multiple events and for authorized access * Face detection of students ONLY in the TA's class * Desktop Power BI dashboard for higher-management oversight of compliance and monitoring. #### Mobile App Implementation On the admin side, the camera aims towards the door of a classroom or space, and as students walk in, the app processes students' faces against the pictures they provided to their organization (or uploaded via the app on the user side) and tags them with their respective colored badge (green for compliant, yellow for not having filled out a daily screening survey, and red for not compliant). The data is recorded for future reference on the compliance of the selected person. This data is kept anonymous to prevent HIPAA issues. #### Web App Implementation Primarily for use by admins, who can view visualizations of their attendees'/students' badge statuses and vaccination statuses, and the proportion of badge statuses relative to the entire class/event, to better plan future events with regard to health regulations and safety. To support the backend, we created a Flask server hosted on Azure. The face detection model uses the dlib landmark dataset, which labels important features of the face and tries to detect similarity between faces with 99% accuracy. The backend also supports users and groups and dynamically indexes sets of face databases into memory as needed.
We also encrypt in transit and don't store the images that are processed, to avoid storing possible personally identifiable information and breaking HIPAA policies. For the website, we used React to build the UI and Next.js for server-side rendering, and for the mobile app, we used Flutter and its ARKit framework to develop the frontend and the primary AR features of our solution. Power BI was used to create a visualisation dashboard for higher management to understand their people's compliance and health in their return to physical spaces. An Azure virtual machine with a data science configuration was used to build this. ## Challenges we ran into * We initially implemented a trained Keras model for face detection, but raw dlib-based face landmark detection worked better. * When implementing the AR functionality, the images we sent had a different byte encoding, causing issues when sending them over to the Flask server for processing. * We initially struggled to figure out the scope, problem statement, and use cases for an idea that addressed a real problem while also targeting the primary categories of the hackathon. * We used an Azure VM for the first time to build on Power BI. * Publishing our Power BI dashboard to the PBI service and then embedding it into our website was blocked, as none of our members' school/work accounts permitted Power BI sign-up. This would honestly be a 5-minute step if we had permissions on our organisation accounts. The file is attached in GitHub to allow running it in PBI Desktop. ## What's next for OpenDoAR For events, the world is returning to physical gatherings. During this transition period, the need for easy health and safety tools is critical to keep the chances of a pandemic re-emergence low. **We want to put an app into the hands of event organisers that easily allows them to check their attendees without making entry take ages.** As a side pivot, in the context of networking events, our infrastructure could help people figure out who to talk to. At many networking events, people try to find others with similar interests, but might end up talking to the wrong people. To simplify this process, a quick scan of a person's face could show whether interests align (an AR overlay of profession, interests, company). This also reduces the social awkwardness and time spent talking with someone you realise you're not interested in. The faces of the people at the event can be integrated into the database, as attendees are there to be public and meet people. In terms of actual work, there isn't too much involved in extending it to the networking space. Additionally, pronouns are something we can overlay (like LinkedIn) to clear any ambiguity in a socially conscious society. For small and medium-sized companies re-integrating people into the workplace, an automatic system to detect compliance of people entering the property could be **difficult and expensive**. Even security guards, who are themselves expensive, are limited to very manual checking methods. With our system, we can help employees re-integrate into their work **simply by downloading a new app onto the phones security guards already have.** Additionally, we are taking UC Berkeley as a proving ground for a simple, effective tool. We want to roll this out to other universities in the US and beyond, making it the solution that enables smoother exchanges and visits from people outside the university.
## Try it yourself at <https://github.com/vikranth22446/greenhelth> ## See our prototype <https://www.figma.com/proto/6k9jPCZPsMdHJeWmx7Jwlt/CalHacks-Wireframes?node-id=9%3A102&scaling=scale-down&page-id=0%3A1&starting-point-node-id=30%3A91> # Website demo <https://drive.google.com/file/d/1Fctu4RkF0ecVfJsftB7z6G8HFwNnv0DI/view?usp=sharing> <http://opendoar.tech> Username: [test@gmail.com](mailto:test@gmail.com) Password: test
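A sketch of the badge-lookup step using the dlib-based `face_recognition` library, which wraps the same landmark model the backend describes; the roster structure and tolerance value are assumptions.

```python
# Sketch of OpenDoAR's badge-lookup step using the dlib-based
# face_recognition library (`pip install face_recognition`). The roster
# layout and tolerance are illustrative; the real system indexes only the
# students in the TA's class, as described above.
import face_recognition

def identify(frame_path: str, roster: dict[str, list]) -> list[str]:
    """roster maps student id -> precomputed 128-d face encoding."""
    image = face_recognition.load_image_file(frame_path)
    known_ids = list(roster)
    known_encodings = [roster[i] for i in known_ids]
    matches = []
    for encoding in face_recognition.face_encodings(image):
        hits = face_recognition.compare_faces(known_encodings, encoding,
                                              tolerance=0.6)
        matches.extend(sid for sid, hit in zip(known_ids, hits) if hit)
    return matches  # badge colour is then looked up per student id
```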
winning
## Inspiration We wanted to help college students keep up with current news related to their major. ## What it does The application provides a way for students to keep up to date with the current events of their major, enabling them to stay informed and knowledgeable. ## How we built it We used the Next.js framework, which enabled us to use React. In addition, we implemented Express.js for server routing and API calls, as well as MongoDB Atlas to hold our users' information in a database. We intended to use the Metaphor and Twilio APIs. ## Challenges we ran into The Metaphor API did not work, as we hit errors from the get-go (just importing the package into our JavaScript file), and the Twilio API gave us trouble when implementing it in a module-type project rather than CommonJS. ## Accomplishments that we're proud of Having user authentication implemented, as well as the frontend design. In addition, as beginners, we are proud of our commitment to trying new technologies and things we are unfamiliar with. ## What we learned How to go about a project in a more efficient manner, rather than stubbornly building a project from the bottom up. ## What's next for NewsApp We would like to improve the connection between the frontend and backend and integrate them more. We would also like to add more specific topic-of-interest options and even have a way to choose multiple topics instead of just one.
## Inspiration I was inspired by a bug infestation in my house. ## What it does You play as a farmer and water a plant to make it grow while being attacked by insects. You win when the plant grows large enough. ## How we built it We built this game in Unity and coded it in C#. ## Challenges we ran into Our largest issue was the health system. Connecting the player with the Unity UI was tricky, as the UI was coded separately from all the game mechanics. We learned to use static variables to carry important info, such as health, over to the UI for updates. ## Accomplishments that we're proud of We are proud of the experience our teammates earned. Three of our members were new to Unity, and not only were they able to learn the material quickly, they were able to contribute new ideas, such as designing and implementing sprites for the game. Our greatest challenge was managing the work: which components and lines of code could we divide among teammates to help them understand Unity within the timeframe of 3 days, and how could the work they completed be integrated into the main program the game would run in? We ended up dividing the work based on prefabs, which are objects in Unity such as pests and growing plants. Each of us took a prefab to work on, and we combined them all into a scene for our game. By doing so, we did not need extra programs to collaborate, which simplified the work process. ## What we learned We learned how to connect various small systems, such as movement controls and UI, how to design and implement art, and how to work as a team. ## What's next for Plant Defense Plant Defense will have an increased map size, as well as several platforms and different enemies, allowing the player to experiment and theorize unique ways to win.
## Inspiration
We had multiple inspirations for creating Discotheque. Several members of our team have fond memories of virtual music festivals, silent discos, and other ways of enjoying music or other audio over the internet in a more immersive experience. In addition, Sumedh is a DJ with experience performing for thousands back at Georgia Tech, so this space seemed like a fun project to explore.

## What it does
Currently, it allows a user to log in using Google OAuth and either stream music from their computer to create a channel or listen to ongoing streams.

## How we built it
We used React with Tailwind CSS, React Bootstrap, and Twilio's Paste component library for the frontend; Firebase for user data and authentication; Twilio's Live API for the streaming; and Twilio's Serverless Functions for hosting and the backend. We also attempted to include the Spotify API in our application, but due to time constraints, we ended up not including it in the final application.

## Challenges we ran into
This was the first-ever hackathon for half of our team, so there was a very rapid learning curve for most of us, but we believe we were all able to learn new skills and use our abilities to the fullest to develop a successful MVP! We also struggled immensely with the Twilio Live API, since it's newer and we had no experience with it before this hackathon, but we are proud of how we overcame our struggles to deliver an audio application!

## What we learned
We learned how to use Twilio's Live API, Serverless hosting, and Paste component library, as well as the Spotify API, and brushed up on our React and Firebase Auth abilities. We also learned how to persevere through seemingly insurmountable blockers.

## What's next for Discotheque
If we had more time, we wanted to work on some gamification (using Firebase and potentially some blockchain) and interactive/social features such as song requests and DJ scores. If we were to continue this, we would also try to replace the microphone input with computer audio input for a cleaner audio mix. We would try to ensure the legality of our service by enabling plagiarism/copyright checking and encouraging DJs to only stream music they have the rights to (or copyright-free music), similar to Twitch's recent approach. We would also like to enable DJs to play music directly from their Spotify accounts to ensure good availability of good-quality music.
losing
## Inspiration
Music labels and platforms have been under fire of late, as they've continued to take a chunk of profits from artists while failing to comply with copyright standards. This (and our interest in blockchain technology) inspired us to create a decentralized platform for buying and sharing music.

## What it does
On Tune-Chainz, you can upload your musical creations without the need for a music label. Powered by Ethereum, a smart contract is generated upon each song's upload, allowing users to buy, sell, and listen to music with one another without a third party. Users can also listen to music that they have bought or produced on their profile page.

## How we built it
Using the Truffle Suite, we built a React app that interacts with the Ethereum blockchain. Songs are uploaded with cover art to Cloudinary, from which they are streamed as needed. Meanwhile, MongoDB keeps track of users and allows quick querying of songs in the marketplace. The smart contract handles all financial components of our platform: each time a song is uploaded, the address of a newly generated Song smart contract is stored, and users can pay artists directly to listen to their work (sketched below).

## Challenges we ran into
Developing decentralized applications is a challenge: there are not many resources out there detailing best practices and syntax, and such applications are never discussed in the classroom. A big issue was using MetaMask to interact with the smart contract, as we were relying on a callback function after transaction confirmation, which is slightly delayed by the Chrome extension. It was also a challenge integrating so many databases into one seamless platform.

## Accomplishments that we're proud of
We spent a great deal of time thinking about the user experience on Tune-Chainz, and are quite happy with the way it ended up looking, particularly the media player. We additionally feel that using the blockchain to enable financial transactions was a challenge we handled pretty successfully.

## What we learned
We learned a great deal, from using Truffle in dApp development to integrating React components and maintaining state.

## What's next for Tune-Chainz
We need to iron out the use of MetaMask, add Redux for better control of state, and consider a better financial model (perhaps users should be able to dictate their song's price to an extent, to create more of a marketplace).
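To make the pay-to-listen flow above concrete, here is a minimal web3.py sketch. This is not the project's actual contract code: the node URL, contract address, ABI, and `payToListen` function name are hypothetical stand-ins for a generated Song contract.

```python
from web3 import Web3

# Placeholder node URL; in the app this would be the provider
# injected by MetaMask rather than a local node.
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

# Hypothetical address/ABI of one generated Song contract.
SONG_ADDRESS = "0x0000000000000000000000000000000000000000"
SONG_ABI = [{
    "name": "payToListen",
    "type": "function",
    "inputs": [],
    "outputs": [],
    "stateMutability": "payable",
}]

song = w3.eth.contract(address=SONG_ADDRESS, abi=SONG_ABI)

# The listener pays the artist directly; no label takes a cut.
tx_hash = song.functions.payToListen().transact({
    "from": w3.eth.accounts[0],
    "value": w3.to_wei(0.01, "ether"),
})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("Paid to listen in block", receipt.blockNumber)
```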
## Inspiration
Being second years, we all related to finding it difficult to connect with people and form support/study groups, especially due to the transition to online learning. StudyBuddy is a way to connect you with people taking the same courses as you, in the hope of forming these friend/study groups.

## What it does
StudyBuddy is a clever online application that enables students to learn collectively in an effective and efficient manner. StudyBuddy keeps records of the courses a user takes in their current study period, searches its database to find students taking the same courses, and groups nearby matches into a group study session. Collaborative education has been shown to improve student comprehension and retention of content, because students can turn lecture notes into their own words.

## Challenges we ran into
We didn't start working until Saturday night and ran into a time crunch. It was especially difficult getting the routes for various pages to synchronize with one another and ensuring that all the pages ran smoothly. However, after persevering quite a bit, we were able to make a breakthrough and figure out what we needed to do. Another issue we ran into near the submission period was trying to deploy the app on an online platform.

## What we learned
Time management is key. We definitely learned a lot about effective communication skills and that taking a proactive approach to discussions can not only boost morale for a team but also help build stronger bonds between team members. Several of us were exposed to new technologies and learned how to use them in a short period of time, which is one of our key takeaways from this event.

## What's next for StudyBuddy
We may refine the app more and possibly launch it on the app/play store. Furthermore, we plan to implement more immersive features to reduce the number of third parties we rely on, by building an in-house chat system and connect feature. StudyBuddy has great potential to become an MVP in a student's resource arsenal. We would like to conduct more research on the target market we aim to reach and how much of that market we can capture. Finally, we would love to implement more management software to ensure that our databases are updated and managed regularly.
## Inspiration
I was compelled to undertake a project on my own for the first time in my hackathoning career: one that covers my interests in web applications and image processing, and that would be "do-able" within the competition.

## What it does
Umoji is a web app that takes an image as input and, using facial recognition, maps emoji onto the faces in the image to match their emotions and facial expressions.

## How I built it
Using the Google Cloud Vision API as the backbone for all the ML and visual recognition, with Flask serving up the simple Bootstrap-based HTML front end.

## Challenges I ran into
Creating an extensive list of emoji to map to the different levels of emotion predicted by the ML model. Web deployment and networking problems.

## Accomplishments that I'm proud of
The fact that I was able to hit all the checkboxes for what I set out to do, without overshooting with stretch features or getting too caught up extending the main features beyond the original scope.

## What I learned
How to work with Google's cloud API, image processing, and rapid live deployment.

## What's next for Umoji
More emoji, better UI/UX, and social media integration for sharing.
losing
In their paper "A Maze Solver for Android", Rohan Paranjpe and Armon Saied devised a method to automate solving mazes with nothing but a picture of said maze. My partner and I decided that this would be cool to implement ourselves, and did so accordingly. Many of the things we found in the paper were absolutely crucial to the success of the project, such as using a median filter and Otsu's thresholding to preprocess the image, and using the Zhang-Suen algorithm to generate a 1-pixel-wide path. However, we innovated on some of their methods, like generating a graph out of the thinned path so that future iterations of the project can handle a more robust problem space (like multiple entrances or exits). Our entire algorithm runs in a little less than a minute on average for a 720x1080 image. The run time of this algorithm is comparable to the time it would take the intended audience of these mazes, elementary school children, to solve one. Improvements to the run time can be made by improving our implementation of the Zhang-Suen algorithm, as this is currently the most computationally expensive step.
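As an illustration of that preprocessing pipeline, here is a minimal OpenCV sketch in Python. The file path and kernel size are placeholders, and the thinning call requires the opencv-contrib package; whether to invert the threshold depends on whether the maze corridors are light or dark.

```python
import cv2

# Load the maze photo in grayscale (path is a placeholder).
img = cv2.imread("maze.jpg", cv2.IMREAD_GRAYSCALE)

# Median filter suppresses sensor noise before binarization.
denoised = cv2.medianBlur(img, 5)

# Otsu's method picks the threshold automatically; plain BINARY
# keeps light corridors as foreground (use THRESH_BINARY_INV if
# the walkable paths are dark instead).
_, binary = cv2.threshold(denoised, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Zhang-Suen thinning reduces the walkable region to a
# 1-pixel-wide skeleton (cv2.ximgproc is in opencv-contrib-python).
skeleton = cv2.ximgproc.thinning(
    binary, thinningType=cv2.ximgproc.THINNING_ZHANGSUEN)

cv2.imwrite("skeleton.png", skeleton)
```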
## Inspiration
I like playing games, so I thought it's about time I actually made one.

## What it does
Allows you to walk around a 3D maze environment in a first-person view. You can interact with an object on the way out as well.

## How I built it
I used the OpenGL library along with GLUT. The code is written in C++.

## Challenges I ran into
Many things, such as ray tracing and texture mapping, were new concepts to me, so that took a while to figure out.

## Accomplishments that I'm proud of
The fact that I actually finished a project at a hackathon.

## What I learned
OpenGL is awesome! I'd prefer it over any other graphics library any day.

## What's next for Dungeon Maze
Possibly add enemies, with AI implemented somewhere in there; have more interactable objects; or pretty much whatever comes to mind.
## Inspiration
Nobody remembers the past exactly as it happened.

## Features
* Immersive AR experience
* Share "memories" to certain circles of friends so they can experience them as well
* Annotate your surroundings with details that you may want to look back on, i.e. the spot where you won a prize at Calhacks
losing
## Inspiration 💡
**Due to rising real estate prices, many students are failing to find proper housing, and many landlords are failing to find good tenants.** Students looking for houses often have to hire an agent to get a nice place with a decent landlord. The same goes for house owners, who need to hire agents to get good tenants. *The irony is that the agent is motivated by sheer commission, not by the wellbeing of either party.*

Lack of communication is another issue, as most things are conveyed by a middle person. It often leads to miscommunication between the house owner and the tenant, as they interpret the same rent agreement differently. Expensive and time-consuming background checks of potential tenants are also prevalent, as landowners try to use every tool at their disposal to find out whether a person is really capable of paying rent on time. Considering that current rent laws give tenants considerable power, it's very reasonable for landlords to perform background checks!

Existing online platforms can tell us which apartments are vacant in a locality, but they don't help either party know whether the other person is really good! Tenants don't trust their ranking algorithms, and landlords are reluctant to use these services, as they need to manually review applications from thousands of unverified individuals, or even bots!

We observed that we are still using these old, non-scalable methods to match home seekers and homeowners willing to rent their place in this digital world. And we wish to change that with **RentEasy!**

![Tech-Stack](https://ipfs.infura.io/ipfs/QmRco7zU8Vd9YFv5r9PYKmuvsxxL497AeHSnLiu8acAgCk)

## What it does 🤔
In this hackathon, we built a cross-platform mobile app that both potential tenants and house owners can trust. The app implements a *rating system* where students/tenants can rate a house/landlord (e.g., did not return the security deposit for no reason), and landlords can rate tenants (e.g., the house was not kept clean). In this way, clean tenants and honest landlords can find each other. The platform also helps the two stakeholders build an easily understandable contract that establishes better trust and mutual harmony. The contract is stored on the InterPlanetary File System (IPFS) and cannot be tampered with by anyone.

![Tech-Stack](https://ipfs.infura.io/ipfs/QmezGvDFVXWHP413JFke1eWoxBnpTk9bK82Dbu7enQHLsc)

Our application also has an end-to-end encrypted chatting module powered by the @ Company. Landlords can filter through all the requests and send requests to tenants. This chatting module powers our contract generator module, where the two parties can discuss a particular agreement clause and decide whether to include it in the final contract.

## How we built it ️⚙️
Our elegant mobile application was built using Flutter, a cross-platform framework. We integrated the Google Maps SDK to build a map where users can explore all the listings, and used the Geocoding API to encode addresses into geopoints. We wanted our clients to have a sleek experience with minimal overhead, so we offloaded all network-heavy and resource-intensive tasks to Firebase Cloud Functions. Our application also has a dedicated **end-to-end encrypted** chatting module powered by the **@ Company** SDK. The contract generator module is built with best practices in mind, and users can use it to draw up a contract after thorough private discussions.

Once both parties are satisfied, we create the contract in PDF format and use the Infura API to upload it to IPFS via the official [Filecoin gateway](https://www.ipfs.io/ipfs) (a minimal sketch of this step appears below).

![Tech-Stack](https://ipfs.infura.io/ipfs/QmaGa8Um7xgFJ8aa9wcEgSqAJZjggmVyUW6Jm5QxtcMX1B)

## Challenges we ran into 🧱
1. It was our first time integrating the **@ Company SDK** into a project. Although the SDK simplifies end-to-end encryption, we still had to explore a lot of resources and ask representatives for assistance to get a final working build. It was very gruelling at first, but in the end we are all really proud of having a dedicated end-to-end messaging module on our platform.
2. We used Firebase Functions to build scalable serverless functions, with Express.js as a framework for convenience. Things worked fine locally, but middleware like multer, urlencoder, and jsonencoder wasn't working on the server. It took us more than 4 hours to learn that Firebase performs a lot of implicit parsing, so by the time these middleware functions run, Firebase has already consumed the data. As a result, we had to write the low-level encoding logic ourselves! After deploying it, the sense of satisfaction we got was immense, and now we appreciate the millions of open source packages more than ever.

## Accomplishments that we're proud of ✨
We are proud of finishing the project on time, which seemed like a tough task, as we started working on it quite late due to other commitments; we were still able to add most of the features we envisioned for the app during ideation. Moreover, we learned a lot about new technologies and libraries that we could incorporate into our project to meet our unique needs. We also learned how to maintain great communication among teammates; each of us felt like a great addition to the team. Across the backend, frontend, research, and design, we are proud of the great number of things we built within 36 hours. And as always, working overnight was pretty fun! :)

---

## Design 🎨
We were heavily inspired by the revised version of the **iterative** design process, which includes not only visual design, but a full-fledged research cycle in which you must discover and define your problem before tackling your solution and finally deploying it.

![Double-Diamond](https://ipfs.infura.io/ipfs/QmPDLVVpsJ9NvJZU2SdaKoidUZNSDJPhC2SQAB8Hh66ZDf)

This time we went for a minimalist **Material UI** design. We used design tools like Figma, Photoshop, and Illustrator to prototype our designs before doing any coding. Through this, we were able to get iterative feedback and spend less time rewriting code.

![Brand-identity](https://ipfs.infura.io/ipfs/QmUriwycp6S98HtsA2KpVexLz2CP3yUBmkbwtwkCszpq5P)

---

## Research 📚
Research is the key to empathizing with users: we found our specific user group early, and that paved the way for our whole project. Here are a few of the resources that were helpful to us:

* Legal Validity Of A Rent Agreement: <https://bit.ly/3vCcZfO>
* 2020-21 Top Ten Issues Affecting Real Estate: <https://bit.ly/2XF7YXc>
* Landlord and Tenant Causes of Action: "When Things go Wrong": <https://bit.ly/3BemMtA>
* Landlord-Tenant Law: <https://bit.ly/3ptwmGR>
* Landlord-tenant disputes arbitrable when not covered by rent control: <https://bit.ly/2Zrpf7d>
* What Happens If One Party Fails To Honour Sale Agreement?: <https://bit.ly/3nr86ST>
* When Can a Buyer Terminate a Contract in Real Estate?: <https://bit.ly/3vDexWO>

**CREDITS**

* Design Resources: Freepik, Behance
* Icons: Icons8
* Font: Semibold / Montserrat / Roboto / Recoleta

---

## Takeaways

### What we learned 🙌
**Sleep is very important!** 🤐 Jokes apart, this was an introduction to **Web3** and **blockchain** technologies for some of us, and an introduction to mobile app development for others. We improved our teamwork by actively discussing how we planned to build everything and how to make the best use of our time. We learned a lot about the @ sign API and end-to-end encryption and how it works in the backend. We also practiced using cloud functions to automate and ease the development process.

### What's next for RentEasy 🚀
**We would like to make it a default standard of the housing market**, taking all the legal aspects into account too! It would be great to see the rental application system become more organized in the future. We are planning to implement additional features, such as a landlord's view where they can go through the applicants and filter them, giving the landlord more options. Furthermore, we plan to launch near university campuses, since that is where the people with the least housing experience live. Since the framework we used runs on any operating system, it gives us the flexibility to test and learn.

**Note** — **API credentials have been revoked. If you want to run this on your local machine, use your own credentials.**
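To make the IPFS step concrete: the app itself is in Flutter, but the upload boils down to one call against Infura's hosted IPFS endpoint, which mirrors the standard go-ipfs HTTP API. A minimal Python sketch, with placeholder credentials and file name:

```python
import requests

# Infura's hosted IPFS API; project credentials are placeholders.
ADD_URL = "https://ipfs.infura.io:5001/api/v0/add"
PROJECT_ID, PROJECT_SECRET = "your-project-id", "your-project-secret"

with open("rent_agreement.pdf", "rb") as f:
    resp = requests.post(ADD_URL, files={"file": f},
                         auth=(PROJECT_ID, PROJECT_SECRET))
resp.raise_for_status()

# The returned CID permanently identifies the contract; any edit
# would change the hash, which is what makes it tamper-evident.
cid = resp.json()["Hash"]
print(f"Contract pinned at https://ipfs.io/ipfs/{cid}")
```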
## FLEX [Freelancing Linking Expertise Xchange]

## Inspiration
Freelancers deserve a platform where they can fully showcase their skills without worrying about high fees or delayed payments. Companies need fast, reliable access to talent with specific expertise to complete jobs efficiently. FLEX bridges the gap, enabling recruiters to instantly find top candidates through AI-powered conversations, ensuring the right fit, right away.

## What it does
Clients talk to our AI, explaining the type of candidate they need and any specific skills they're looking for. As they speak, the AI highlights important keywords and asks about any other factors they need in a candidate. This data is then analyzed and run against our database of freelancers to find the best-matching candidates. The AI then talks back to the recruiter, presenting the top candidates based on the recruiter's requirements. Once the recruiter picks the right candidate, they can create a smart contract that is securely stored and managed on the blockchain for transparent payments and agreements.

## How we built it
We built the frontend with **Next.js** and deployed the entire application with **Terraform** for seamless scalability. For voice interaction, we integrated **Deepgram** to generate human-like voice and process recruiter inputs, which are then handled by **Fetch.ai**'s agents. These agents work in tandem: one agent interacts with **Flask** to analyze keywords from the recruiter's speech, another queries the **SingleStore** database, and a third handles communication with Deepgram. Using SingleStore's real-time data analysis and Full-Text Search, we find the best candidates based on the factors provided by the client (a rough query sketch appears below). For secure transactions, we used the **SUI** blockchain, creating an agreement object once the recruiter posts a job. When a freelancer is selected and both parties reach an agreement, the object gets updated, and escrowed funds are released upon task completion, all through smart contracts developed in **Move**. We also used Flask and **Express.js** to manage the backend and routing efficiently.

## Challenges we ran into
We faced challenges integrating Fetch.ai agents for the first time, particularly getting smooth communication between them. Learning Move for SUI and connecting smart contracts with the frontend also proved tricky. Setting up reliable speech-to-text was tough, as we struggled to control when voice input should stop. Despite these hurdles, we persevered and successfully developed this full-stack application.

## Accomplishments that we're proud of
We're proud to have built a fully finished application while learning and implementing new technologies here at CalHacks. Successfully integrating blockchain and AI into a cohesive solution was a major achievement, especially given how cutting-edge both are. It's exciting to create something that leverages the potential of these rapidly emerging technologies.

## What we learned
We learned how to work with a range of new technologies, including SUI for blockchain transactions, Fetch.ai for agent communication, and SingleStore for real-time data analysis. We also gained experience with Deepgram for voice AI integration.

## What's next for FLEX
Next, we plan to implement DAOs for conflict resolution, allowing decentralized governance to handle disputes between freelancers and clients. We also aim to launch on the SUI mainnet and conduct thorough testing to ensure scalability and performance.
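As a rough sketch of the matching query: SingleStore speaks the MySQL wire protocol, so a plain MySQL client works. The host, table, and column names below are hypothetical; the keywords would come out of the Deepgram/Fetch.ai pipeline described above.

```python
import pymysql  # SingleStore is MySQL-wire-compatible

conn = pymysql.connect(host="svc-example.singlestore.com",  # placeholder
                       user="admin", password="...", database="flex")

# Hypothetical freelancers table with a FULLTEXT index on `skills`.
keywords = "rust smart contracts blockchain"
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT name, skills, MATCH(skills) AGAINST (%s) AS score
        FROM freelancers
        WHERE MATCH(skills) AGAINST (%s)
        ORDER BY score DESC
        LIMIT 5
        """,
        (keywords, keywords),
    )
    for name, skills, score in cur.fetchall():
        print(f"{name}: {score:.3f}")
```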
## Inspiration
As students transition back into in-person classes, the housing search piles onto the numerous challenges students already face. Sadly, the vaccine does not provide us immunity to the housing crisis students face, so we had to generate our own vaccine, called Spot. The frustration of messaging multiple landlords on a variety of platforms while keeping track of all the location addresses was the main drive for our project idea.

* **Struggling** to find housing due to a tedious search process
* Multi-platform search is a necessity, which leads to high time consumption
* Poor response rates from landlords due to mass application messages

## What it does
Spotᵀᴹ is a housing market platform where tenants and landlords can match based on preference and compatibility. By incorporating the simple match system from Tinder, tenants can breeze through housing applications curated to their housing preferences, while landlords get the luxury of a clear tenant application with no text fluff. The instantaneous matching feature and profile customization condense and simplify the rental process for tenants and landlords alike.

* Allows tenants and landlords to match based on their preferences
* **Simplifies** the application process with profiles and a simple match system
* **Reduces redundancy** in the application process; only one application is required from both sides

## How we built it
We built Spot with **React**, **AWS**, **GraphQL**, **TypeScript**, **Amplify**, and **DynamoDB**. We used Amplify throughout, as it helped with user authentication, creating GraphQL APIs, and setting up our database.

## Challenges we ran into
During this hackathon we ran into various issues; the one prominent throughout the event was merge conflicts between shared tasks. Since we all knew how to code the front end, we made edits that conflicted with each other. Additionally, we ran into a lot of issues using AWS and Amplify, as we struggled to get user authentication to work and there were deployment issues.

* **Merge conflicts** between shared tasks
* Image on deploy not showing
* User login and authentication from Amplify not working
* Backend functions not outputting as desired

## Accomplishments that we're proud of
We are proud that in such a short time we could create a slick front end that is simple and pleasing to the eye. Furthermore, we're proud that we used new technologies for the first time and were able to complete the project before the demo. Lastly, we were able to develop a solution to a common issue that we can closely relate to, as we are students ourselves.

* Created a slick front end, simple and pleasing to the eye
* Developed a solution to a common issue among students and tenants
* Having a complete project before the demo : )

## What we learned
During this hackathon we learned that communication is key, as it helped us prevent issues, develop our tasks efficiently, and piece the entire project together. We also learned that we couldn't build all the features we talked about in the planning phase and that we need to allocate time for the main features. Lastly, we all learned how to use new technologies, as for some of us it was our first time using AWS, GraphQL, DynamoDB, and TypeScript.

* Time allocation for priority tasks
* How to resolve merge conflicts
* Creating an eye-catching UI
* Using new technologies

## What's next for Spot
We definitely want to continue working on Spot, as we could not deliver all the features we planned during the hackathon. One of the features we wanted to develop is a premium mode that grants access to greater features for both tenants and landlords. These include a premium function that automatically suggests tenants and landlords to match based on their compatibility score. Additionally, we wanted to limit swipes while allowing premium members to swipe more. Lastly, we want lease agreement generation that produces a contract based on the tenant's and landlord's needs.

* Premium function that automatically suggests tenants and landlords to match based on their compatibility score (calculated from their preference settings); one possible scoring scheme is sketched below
* Limited swipes/possible matches per day for monetization; *premium allows for more swipes*
* **Lease agreement generation** based on the address of the location and user credentials (lease period, agreed rent price, etc.)
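Here is one possible shape for the compatibility score mentioned above: a weighted fraction of the tenant's preferences that a listing satisfies. The preference keys and weights are entirely illustrative, not Spot's actual scheme.

```python
def compatibility(tenant_prefs: dict, listing: dict, weights: dict) -> float:
    """Weighted fraction (0..1) of preferences the listing satisfies."""
    total = sum(weights.values())
    matched = sum(
        weights[key]
        for key, wanted in tenant_prefs.items()
        if key in weights and listing.get(key) == wanted
    )
    return matched / total if total else 0.0

# Illustrative preference keys and weights.
prefs = {"pets_allowed": True, "rent_band": "1000-1500", "furnished": True}
listing = {"pets_allowed": True, "rent_band": "1000-1500", "furnished": False}
weights = {"pets_allowed": 2, "rent_band": 3, "furnished": 1}

print(f"{compatibility(prefs, listing, weights):.0%}")  # 83%
```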
winning
## Inspiration
During quarantine, it's much harder to stay motivated and get your needed daily exercise done. As gyms are closed and everyone is encouraged to stay inside, it is sometimes difficult to get up and work out. This was our primary inspiration for FitBot by Timmy.

## What it does
FitBot by Timmy is a website that tracks your daily exercises, gives you information like calories burned, and, most importantly, provides you with a friend to help you get through it all: a virtual robot. With the chatbot that we implemented (Timmy), all you have to do is tell him your name and you can go ahead and get started. You can ask Timmy for all kinds of tips and different exercises for all types of muscles; we've also enabled Timmy to inform you on how to stay safe during COVID-19 while staying fit.

## How we built it
Aside from using HTML and JavaScript for the website itself, we took on a new challenge by using Firebase's Realtime Database to store our clients' data. We also learned how to implement a chatbot with machine learning using smartloop.ai and incorporated it into our website.

## Challenges we ran into
The biggest challenge we faced was fully implementing the Firebase Realtime Database: we were able to store one client's data, but not every client's (one possible fix is sketched below). We also ran into some problems choosing a good chatbot framework, as some were incompatible with our website.
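The site itself is plain JavaScript, but the Realtime Database is also reachable over a simple REST API, which suggests one way around the single-client limitation: key every record under its own user ID. A hedged Python sketch, with a placeholder database URL:

```python
import requests

DB_URL = "https://fitbot-demo-default-rtdb.firebaseio.com"  # placeholder

def save_workout(user_id: str, workout: dict) -> None:
    # POST appends under a generated push key, so each client's
    # records live in their own /users/<id>/workouts subtree and
    # one user's data never overwrites another's.
    resp = requests.post(f"{DB_URL}/users/{user_id}/workouts.json",
                         json=workout)
    resp.raise_for_status()

save_workout("alice", {"exercise": "pushups", "reps": 20, "calories": 7})
```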
## 💡 Inspiration
* The pandemic has restricted us to our homes and has taken a huge toll on our physical well-being.
* Exercising within our house boundaries is a real challenge.
* We've developed a novel application to accurately track the rep count of certain curated indoor exercises and report the number of calories burnt.
* This is a cheap, free-to-use alternative for measuring the effectiveness of your workout session.

## 💻 What it does
* The website uses AI to recognise the number of *pushups, squats, and bicep curls*.
* It then calculates the calories burnt and notifies users on their mobile phones.
* The user can select any of the exercises and do them.

## ⚙️ How we built it
* The site runs on MediaPipe, PoseNet, and JavaScript.
* We used MediaPipe to detect user motion and then calculate the number of reps (sketched below).
* A report is generated and sent as a message using the Twilio API.
* The user can end the session at any time, just by clicking "Stop".

![Architecture](https://cdn.discordapp.com/attachments/821039436137103410/933269090393542676/unknown.png)

## 🧠 Challenges we ran into
* Application hangs and screen freezes because TensorFlow was blocking the camera.
* Organising the structure of the project.
* Tweaking the MediaPipe AI model to accurately detect the type of motion.

## 📖 What we learned
* MediaPipe using JavaScript.
* Running AI models for posture detection.
* Using Twilio for sending messages.
* Using the AssemblyAI API for posting user data and results to CockroachDB.

## 📧 Use of Google Cloud
* Google Cloud offers text-to-speech conversion.
* We used Google Cloud speech conversion to voice-control the exercise web application.

## 📧 Use of the Assembly.AI API
* We used the Assembly.AI API to store user info in CockroachDB.
* It is used for safe and secure transfer of data.
* We will be adding user authentication for user login in the future.

## 📖 Use of Deso
* Deso is a decentralized social application with open-source, on-chain open data.
* We used Deso for login/logout and for transactions occurring on our website.

## 📧 Use of Twilio
* We used Twilio to send reports to our users.
* Twilio is a safe and secure API for sending text messages.

![Twilio Message](https://discord.com/channels/@me/821039436137103410/927475056794292275)

## 🚀 What's next for FitnessZone
* Parsing voice commands using NLP.
* A smart exercise recommendation system.
* More accurate detection using deep learning models.
* Recognition of more exercises.

## 🏅 Accomplishments that we're proud of
* We're glad to have successfully completed this project!
* The end goal was achieved to a satisfactory level, and the outcome will help us exercise at home as well.

## 🔨 How to run
* Fork the repo.
* Open the index.html file in the html folder.
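The site runs MediaPipe in the browser, but the rep-counting idea translates directly to MediaPipe's Python bindings. A minimal squat-counter sketch; the hip/knee heuristic and the calorie constant are illustrative, not the production logic:

```python
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose()
cap = cv2.VideoCapture(0)

reps, below = 0, False
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        lm = results.pose_landmarks.landmark
        # Squat heuristic: a rep is one dip of the hip below the knee
        # and back up (image y grows downward). 23/25 = left hip/knee.
        if lm[23].y > lm[25].y:
            below = True
        elif below:
            reps += 1
            below = False

cap.release()
print(f"{reps} squats, roughly {reps * 0.32:.1f} kcal")  # rough constant
```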
## Inspiration
A chatbot is often described as one of the most advanced and promising expressions of interaction between humans and machines. For this reason, we wanted to create one in order to become familiar with natural language processing and deep learning through neural networks. Due to the current pandemic, we are truly living in an unprecedented time. As the virus' spread continues, it is important for all citizens to stay educated and informed about the pandemic. So, we decided to give back to communities by designing a chatbot named Rona, who a user can talk to and get the latest information regarding COVID-19. (This bot is designed to function similarly to the ones used on the websites of companies such as Amazon or Microsoft, where users can interact with the bot to ask questions they would normally ask a customer service member; through the power of AI and deep learning, the bot can answer these questions for the customer on its own.)

## What it does
Rona answers questions the user has regarding COVID-19. More specifically, the training data we fed into our feed-forward neural network to train Rona falls under 5 categories:

* Deaths from COVID-19
* Symptoms of COVID-19
* Current cases of COVID-19
* Medicines/vaccines
* New technology/start-up companies working to fight coronavirus

We also added three more categories of data for Rona to learn (greetings, thanks, and goodbyes), so the user can have a conversation with Rona that is more human-like.

## How we built it
First, we had to create our training data. Commonly referred to as 'intentions', the data we used to train Rona consisted of different phrases that a user could potentially ask. We split up all of the intentions into the categories listed above, called 'tags'. Under each tag, we gave Rona several phrases the user could ask about it, along with responses to choose from when answering related questions. Once the intentions were made, we put this data in a JSON file for easy access in the rest of the project.

Second, we used three natural language processing techniques to process the data before it was fed into our training model: 'bag-of-words', 'tokenization', and 'stemming'. Bag-of-words takes a phrase listed under a tag and creates an array of all the words in that phrase, making sure there are no repeats. This array was assigned to an x-variable, while a second y-variable recorded which tag the bag-of-words belonged to. After the bags-of-words were created, tokenization split each one up further into individual words, special characters (like @, #, $, etc.), and punctuation. Finally, stemming applied a crude heuristic, chopping off word suffixes ('organize' and 'organizes' both become 'organ'), and the array was replaced with these new elements. These three steps were necessary because the training model is much more effective when the data is pre-processed into this most fundamental form.

Next, we made the actual training model: a feed-forward neural network with 2 hidden layers. The first step was to create hyper-parameters, which is standard procedure for all neural networks; these are variables that can be adjusted to control how accurate you want the model to be. The network begins with 3 linear layers, which take in the data pre-processed earlier; their outputs are passed through activation functions. An activation function outputs a small value for small inputs, and a larger value if its inputs exceed a threshold. If the inputs are large enough, the activation function "fires"; otherwise it does nothing. In other words, an activation function is like a gate that checks whether an incoming value is greater than a critical number. Once training was completed, the final model was saved into a 'data.pth' file using PyTorch's save method (a minimal sketch of the network appears below).

## Challenges we ran into
The most obvious challenge was simply time constraints. We spent most of our time trying to make the training model efficient, and had to search through several articles and tutorials for the correct methodology and APIs to use. NumPy and PyTorch were the best ones.

## Accomplishments that we're proud of
This was our first deep-learning project, so we are very proud of completing at least the basic prototype. Although we were aware of NLP techniques such as stemming and tokenization, this was our first time actually implementing them. We have created basic neural nets in the past, but never a feed-forward one that produces an entire model as its output.

## What we learned
We learned a lot about deep learning, neural nets, and how AI is trained for communication in general. This was a big step up for us in machine learning.

## What's next for Rona: Deep Learning Chatbot for COVID-19
We will definitely improve on this in the future by updating the model, providing a lot more types of questions/data related to COVID-19 for Rona to be trained on, and potentially creating a complete service or platform for users to interact with Rona easily.
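A minimal sketch of the model described above: a feed-forward network with two hidden layers, taking a bag-of-words vector in and producing tag scores. The layer sizes and hyper-parameters are placeholders, and the random tensors stand in for the real pre-processed intentions.

```python
import torch
import torch.nn as nn

class ChatNet(nn.Module):
    """Bag-of-words vector in, one score per tag out."""
    def __init__(self, input_size: int, hidden_size: int, num_tags: int):
        super().__init__()
        self.l1 = nn.Linear(input_size, hidden_size)
        self.l2 = nn.Linear(hidden_size, hidden_size)
        self.l3 = nn.Linear(hidden_size, num_tags)
        self.relu = nn.ReLU()  # the activation "gate" described above

    def forward(self, x):
        x = self.relu(self.l1(x))
        x = self.relu(self.l2(x))
        return self.l3(x)  # raw scores; CrossEntropyLoss adds softmax

# Illustrative hyper-parameters and a stand-in training batch.
model = ChatNet(input_size=120, hidden_size=16, num_tags=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.rand(4, 120)         # fake bag-of-words batch
y = torch.randint(0, 8, (4,))  # fake tag labels

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()

torch.save(model.state_dict(), "data.pth")  # as in the write-up
```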
losing
## The Problem
Gift giving plays a significant role in society. From busy seasons like the winter holidays to birthdays, individuals are always asked: what can I get you? Communicating these wants clearly and simply is not easy. There are lists across multiple online retailers, but beyond that there was no simple, secure, and well-designed system for sharing wish lists with friends and family.

## What it does
Our platform fills this gap, affording users a clean, well-designed system on which they can collect their desired items and securely invite friends to view them.

## How We built it
The platform is built upon a PHP and MySQL backend and driven by a responsive jQuery and AJAX frontend, which we designed and coded ourselves from scratch. We took great care in crafting the design of the site to maximize its usability for our users.

## Challenges We ran into
We had some difficulties getting started with an AJAX-driven frontend, but by the end of the project we had significantly improved our abilities.

## Accomplishments that We're proud of
When creating this project we aimed to deploy an easy-to-use account system that was also very secure. We decided on a passwordless system that uses uniquely generated, expiring login links to securely sign users into our system without the need for a password (the scheme is sketched below). The session is then maintained using browser cookies. We were also able to use live AJAX calls to smartly populate our item entry form as the user entered an item's name, as well as pull a photo for each item.

## What We learned
This hackathon our team learned a lot about web security; we had the opportunity to work with a few industry professionals who gave us some great information and support.

## What's next for WishFor (wishfor.xyz)
The platform will be expanded to allow further collaboration between friends in claiming and splitting the costs of items.

## Functionality Demo
(Devpost GIF images not displaying properly)

<https://gyazo.com/18ed9bd881265342853d59692fa00e4d>
<https://gyazo.com/75f904f287b6780dd90c6976e4ede9e8>
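The platform itself is PHP, but the passwordless scheme translates directly; here is a minimal Python sketch, with an in-memory dict standing in for the MySQL table and an illustrative 15-minute expiry:

```python
import secrets
import time

SESSIONS = {}        # stand-in for the MySQL table
TOKEN_TTL = 15 * 60  # illustrative expiry window, in seconds

def issue_login_link(email: str) -> str:
    token = secrets.token_urlsafe(32)  # unguessable one-time token
    SESSIONS[token] = (email, time.time() + TOKEN_TTL)
    return f"https://wishfor.xyz/login?token={token}"

def redeem(token: str):
    """Return the email for a valid token, consuming it either way."""
    email, expires = SESSIONS.pop(token, (None, 0.0))
    return email if email and time.time() < expires else None

link = issue_login_link("friend@example.com")
print(link)
```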
## Inspiration
The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand.

## What it does
Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked.

## How we built it
To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process.

## Challenges we ran into
One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process.

## Accomplishments that we're proud of
Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives.

## What we learned
We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application.

## What's next for Winnur
Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
## Inspiration
We wanted to make investing more community-based, accessible, and beneficial to society.

## What it does
Allows people to trade stocks by democratic voting and donate their capital gains to charity.

## How we built it
We used React Native to build our app, and we used Expo to manage how we viewed, tested, and ran our app on our mobile devices. Low-fidelity prototypes were built on scratch paper. The backend was built with Python and Flask and hosted on Heroku; we also went serverless with Google Cloud Functions and CockroachDB.

## Challenges we ran into
We weren't able to implement all the extra functionality we envisioned, such as donations and profile tabs. Our development team took on the challenge of working with new tools and languages such as JavaScript, React Native, Expo, and API technologies. Other challenges included managing version control history, adding navigation in React Native, and installation and initial setup errors. The team had debugging issues that took up a substantial amount of our development time. Another struggle was working remotely from home and the technology limitations this presented.

## Accomplishments that we're proud of
Our development team had limited prior knowledge of mobile development before the hackathon. Given this, we're proud of what we were able to accomplish in a short period of time, as well as the teamwork we exhibited throughout the process. We were able to develop a minimum viable product and create app functionality like forms, cards, flat lists, touchable components, and styling, along with using the Alpaca API, creating mobile navigation and a splash screen, learning how to use gradient colours in apps, and sending debugging requests to the service. We're also proud of our idea which we believe, if implemented, has the potential to make a positive impact.

## What we learned
Our development team learned new technologies by using tools like React Native, Expo, and APIs. In particular, we saw how Expo has made development easier and more intuitive (e.g., publishing apps, automatic multi-platform compatibility, and React making app building a little more intuitive). During the opening ceremonies, we learned about different companies' APIs, and we learned how to incorporate CockroachDB's API into our project. We also learned a little about investing and how stocks work!

## What's next for RobinGood
Donations feature: allow lending money to those with less buying power and allow donations to charitable causes.
Profile feature: complete the profile page (users and management already created on the backend, but not everything is linked).
Overall better styling: styling wasn't the focus of our minimum viable product, which is why we concentrated much less on it and much more on the actual functionality of our app. However, it would be beneficial for us to invest time in styling our app for better engagement with
winning
## Inspiration
We take our inspiration from our everyday lives. As avid travellers, we often run into places with foreign languages and need help with translations. As avid learners, we're always eager to add more words to our bank of knowledge. As children of immigrant parents, we know how difficult it is to grasp a new language and how comforting it is to hear a voice in your native tongue. LingoVision was born from these inspirations, and these inspirations were born from our experiences.

## What it does
LingoVision uses AdHawk MindLink's eye-tracking glasses to capture foreign words or sentences as pictures when given a signal (a double blink). Those sentences are played back as an audio translation (either through an earpiece, or out loud with a speaker) in your preferred language. Additionally, LingoVision stores all of the old photos and translations for future review and study.

## How we built it
We used the AdHawk MindLink eye-tracking glasses to map the user's point of view and detect where exactly in that space they're focusing. From there, we used Google's Cloud Vision API to perform OCR and construct bounding boxes around text. We developed a custom algorithm to infer which text the user is most likely looking at, based on the vector projected from the glasses and the available bounding boxes from the CV analysis (a sketch of this step follows below). After that, we pipe the text into the DeepL translator API for a language of the user's choice. Finally, the output is sent to Google's text-to-speech service to be delivered to the user. We use Firebase Cloud Firestore to keep track of global settings, such as the output language, and a log of translation events for future reference.

## Challenges we ran into
* Getting the eye tracker properly calibrated (it was always a bit off from our view)
* Using a Mac, when the officially supported platforms are Windows and Linux (yay virtualization!)

## Accomplishments that we're proud of
* Hearing the first audio playback of a translation was exciting
* Seeing the system work completely hands-free while walking around the event venue was super cool!

## What we learned
* We learned how to work within the limitations of the eye tracker

## What's next for LingoVision
One of our next steps is to develop a dictionary for individual words. Since we're all about encouraging learning, we want our users to see definitions of individual words and add them to a dictionary. Another goal is to eliminate the need to be tethered to a computer. A computer is currently used for ease of development and because of software constraints. If a user could simply pair the eye-tracking glasses with their cell phone, usability would improve significantly.
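The gaze-inference step looks roughly like the sketch below (illustrative, with boxes assumed to be (x, y, w, h) tuples in the camera frame): prefer an OCR box that contains the projected gaze point, and otherwise fall back to the nearest box center.

```python
def pick_focused_text(gaze, boxes):
    """Return the OCR text the user is most likely looking at.

    gaze  -- (x, y) point projected from the glasses into the frame
    boxes -- list of (text, (x, y, w, h)) from the OCR pass
    """
    gx, gy = gaze

    def center_dist_sq(item):
        _, (x, y, w, h) = item
        cx, cy = x + w / 2, y + h / 2
        return (cx - gx) ** 2 + (cy - gy) ** 2

    containing = [
        item for item in boxes
        if item[1][0] <= gx <= item[1][0] + item[1][2]
        and item[1][1] <= gy <= item[1][1] + item[1][3]
    ]
    candidates = containing or boxes
    return min(candidates, key=center_dist_sq)[0] if candidates else None

print(pick_focused_text((120, 80), [("Sortie", (100, 60, 80, 40))]))
```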
## Inspiration
Over 70 million people around the world use sign language as their native form of communication: 70 million voices unable to be fully recognized in today's society. This disparity inspired our team to develop a program that allows real-time translation of sign language into text on a display monitor, allowing for more inclusivity and enabling those who do not know sign language to communicate with a new community by breaking down language barriers.

## What it does
It translates sign language into text in real time.

## How we built it
We set up the environment by installing different packages (OpenCV, MediaPipe, scikit-learn) and set up a webcam.

* Data preparation: We collected data for our ML model by capturing a few sign language letters through the webcam, taking the whole image frame and sorting it into different categories to classify the letters.
* Data processing: We used MediaPipe's computer vision inference to capture hand gestures and localize the landmarks of the fingers.
* Train/test model: We trained our model to detect matches between the trained images and hand landmarks captured in real time (see the sketch below).

## Challenges we ran into
The first challenge was that our team struggled to come up with a topic to develop. We then ran into the issue of integrating our sign language detection code with the hardware, as our laptop lacked the ability to effectively process the magnitude of our code.

## Accomplishments that we're proud of
The accomplishment we are most proud of is that we implemented hardware in our project as well as machine learning with a focus on computer vision.

## What we learned
At the beginning of our project, our team was inexperienced in developing machine learning code. However, through extensive research on machine learning, we were able to expand our knowledge in under 36 hours to develop a fully working program.
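A condensed sketch of that pipeline in Python; the random arrays stand in for the landmark rows actually collected from the webcam, and the classifier choice is illustrative:

```python
import cv2
import numpy as np
import mediapipe as mp
from sklearn.ensemble import RandomForestClassifier

hands = mp.solutions.hands.Hands(max_num_hands=1)

def landmark_row(frame):
    """Flatten one hand's 21 (x, y) landmarks into a 42-value feature row."""
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    pts = results.multi_hand_landmarks[0].landmark
    return np.array([v for p in pts for v in (p.x, p.y)])

# Stand-ins for the rows collected during the webcam capture step.
X = np.random.rand(100, 42)
y = np.random.choice(list("ABC"), size=100)

clf = RandomForestClassifier().fit(X, y)
# At inference time, per frame:
#   row = landmark_row(frame)
#   letter = clf.predict([row])[0] if row is not None else None
```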
## Inspiration

### Background
Growing up, multiple group members struggled with communication as second-generation immigrants. Torn between learning English and maintaining their native tongue, they faced a constant theme of linguistic barriers and miscommunication in life.

### Mission
We are looking to vastly improve the language learning process. We aim to eliminate the tedious dialect learning process by:

* Reducing user input
* Improving user experience
* Integrating language learning into everyday life

### Technology
We began with a machine learning model using multiple Python libraries, including:

* **TensorFlow**
* **OpenCV**
* **MediaPipe**
* **NumPy**

Our main feature is live-video glasses, which allow users to point at any object and receive translations in a language of their choice. The translations can be output via text-to-speech or through our front-end mobile app.
winning
## Check it out on GitHub!
The machine learning and web app segments are split into 2 different branches. Make sure to switch to these branches to see the source code! You can view the repository [here](https://github.com/SuddenlyBananas/be-right-back/).

## Inspiration
Inspired in part by the Black Mirror episode of the same title (though we had similar thoughts before we made the connection).

## What it does
The goal of the project is to be able to talk to a neural-net simulation of the Facebook friends you've had conversations with. It uses a standard base model and customizes it based on uploaded message history. However, we ran into some struggles that prevented the full achievement of this goal.

The user downloads their message history data and uploads it to the site. Then, they can theoretically ask the bot to emulate one of their friends, and the bot customizes the neural-net model to fit the friend in question.

## How we built it
TensorFlow for the machine learning aspect, Node.js and HTML5 for the data-managing website, and Python for data scraping. Users can interact with the data through a Facebook Messenger chat bot.

## Challenges we ran into
AWS wouldn't let us rent a GPU-based EC2 instance, and Azure didn't show anything for us either. Thus, training took much longer than expected. In fact, we had to run back to an apartment at 5 AM to try to run it on a desktop with a GPU... which didn't end up working (as we found out when we got back half an hour after starting the training run). The Facebook API proved to be more complex than expected, especially negotiating the 2 different user IDs assigned to Facebook and Messenger user accounts.

## Accomplishments that we're proud of
Getting a mostly functional machine learning model that can be interacted with live via a Facebook Messenger chat bot.

## What we learned
Communication between many different components of the app; specifically the machine learning server, data parsing script, web server, and Facebook app.

## What's next for Be Right Back
We would like to fully realize the goals of this project by training the model on a bigger data set and allowing more customization for specific users.
# Relive and Relearn

*Step foot into a **living photo album** – a window into your memories of your time spent in Paris.*

## Inspiration
Did you know that 70% of people worldwide are interested in learning a foreign language? However, the most effective learning method, immersion and practice, is often challenging for those hesitant to speak with locals or unable to find the right environment. We set out to solve this problem by letting you relive memories, even ones you yourself may not have lived: while practicing your language skills and getting personalized feedback, you can interact with and immerse yourself in a new world!

## What it does
Vitre allows you to interact with a photo album containing someone else's memories of their life! You can communicate and interact with the characters around you in those memories as if they were your own. At the end, we provide tailored feedback and an AI-backed DELF (Diplôme d'Études en Langue Française) assessment to quantify your French capabilities. In short, Vitre makes learning languages fun and effective by encouraging users to learn through nostalgia.

## How we built it
We built all of it in Unity, using C#. We leveraged external APIs to make the project happen. When the user starts speaking, we use OpenAI's Whisper API to transform speech into text. We then feed that text into Cohere with custom prompts so that it can role-play and respond in character. Meanwhile, we check the responses using Cohere's Rerank to track the progress of the conversation, so we know when to move on from the memory (sketched below). We store the whole conversation so that we can later use Cohere's Classify to give the player feedback on their grammar and assess their French level. Then, using ElevenLabs, we convert Cohere's text to speech and play it for the player to simulate a real conversation.

## Challenges we ran into
VR IS TOUGH, but incredibly rewarding! None of our team knew how to use Unity VR, and the learning curve sure was steep. C# was also a tricky language to get our heads around, but we pulled through! Given that our game is multilingual, we ran into challenges using LLMs, but we were able to use prompt engineering to generate suitable responses in our target language.

## Accomplishments that we're proud of
* Figuring out how to build and deploy on Oculus Quest 2 from Unity
* Getting over that steep VR learning curve, in our first time ever developing in three dimensions
* Designing a pipeline between several APIs to achieve the desired functionality
* Developing functional environments and UI for VR

## What we learned
* 👾 An unfathomable amount of **Unity & C#** game development fundamentals, starting from nothing!
* 🧠 Implementing and working with **Cohere** models: Rerank, Chat & Classify
* ☎️ C# HTTP requests in a **Unity VR** environment
* 🗣️ **OpenAI Whisper** for multilingual speech-to-text, and **ElevenLabs** for text-to-speech
* 🇫🇷🇨🇦 A lot of **French**. Our accents got noticeably better over the hours of testing.

## What's next for Vitre
* More language support
* More scenes for each existing language
* Real-time grammar correction
* Pronunciation ranking and rating
* Different voices for different memories

## Credits
We took inspiration from the indie game "Before Your Eyes", we are big fans!
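The rerank check that decides when a memory is complete looks roughly like this in Cohere's Python SDK (the Unity client makes the equivalent HTTP call from C#). The model name, goal sentence, and threshold are illustrative:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# Goal sentence for the current memory; if the player's latest line
# ranks as relevant enough, the scene can advance.
GOAL = "The player has ordered a croissant from the baker."

def memory_complete(player_line: str, threshold: float = 0.7) -> bool:
    response = co.rerank(
        model="rerank-multilingual-v2.0",  # illustrative model name
        query=GOAL,
        documents=[player_line],
        top_n=1,
    )
    return response.results[0].relevance_score >= threshold

print(memory_complete("Bonjour, je voudrais un croissant s'il vous plaît."))
```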
## Inspiration
Having completed college applications during COVID-19, we found that getting in-depth information about the atmosphere, daily life, and monetary costs of universities was a struggle. In addition, the inability to physically visit colleges made it nearly impossible to gauge not only the environment but also the people on campus. The internet can only do so much to account for such voids; it offers high-level statistics and generalized surface-level information but nothing more. High school students still struggle to understand what it is like to be a part of a university and to envision themselves in a current college student's shoes. They lack vital information that would be of immense use in their application processes, such as course load, social atmosphere, financial aid processes, and connections as a whole. One existing solution would be to seek out a college counselor, but many counselors have outdated information and are far past the age at which they applied to universities – which consequently creates an age-gap-induced social barrier. On top of this, many students from low-income families and neighborhoods lack the financial freedom to hire a college counselor in the first place. Because of this, we felt that a platform was greatly needed to connect current college students and high school students in a casual environment, enabling high school students not only to have their imperative questions answered, but also to gain a holistic and confident outlook on their futures at and beyond college.

## What it does
Unify is a platform that connects high school students with current college students to give them an opportunity to obtain more in-depth knowledge about specific colleges and to start building their connections. Upon registering an account, high school students can enter their major/career interests and are given access to a dashboard in which they can enter and manage their college list, along with indicators that display the progress, due date, and application category (reach/target/safety) of their respective applications/essays. High school students also have trouble when it comes to planning their college finances, especially those who will later go on to graduate school. Unify comes with a built-in finance tool that performs advanced calculations to provide students with accurate pricing models for their prospective tuitions, offering valuable insight into how they should manage their money and what they should expect over the course of their academic career. Financial literacy is also a big deal for families in general as they work to sponsor their children's education. Students can then access an explore page where they are intelligently recommended college students who attend the institutions they are applying to or who major in their fields of interest. High school students can reach out to college students by viewing their profiles and using the built-in messaging feature to ask questions about college and career fields. College students can also register accounts to keep track of their high school connections, and they can access a counseling dashboard in which they are given the ability to become an honorary college counselor upon request. This is extremely useful to high school applicants because college students charge low prices for their services, are all too familiar with the process themselves, and are easier to get along with than older, more prestigious college counselors. College students can also provide updated information on their schools and give specific insight into what makes their campus experience special. They can keep track of their ratings, counseling sessions, transactions, and messaging threads in an intuitive interface.

## How we built it
Unify was built using React.js with the addition of the Chakra UI component library. In addition, we utilized Google Cloud's Firestore to store, access, and modify all user information. Tuition data inside the financial information modal for each college on the student dashboard was derived from Kaggle.

## Challenges we ran into
Throughout the project, learning and adapting to new APIs was a challenge. Specifically, creating and connecting Firebase to our frontend and obtaining keys took some effort; once these issues were resolved, however, the result was very rewarding. Financial calculations and scraping for college tuition statistics were also a challenge, but we overcame them through a bit of outside reading and research. Some of the tools that we used were new to many of us, so going through the process of learning, applying, and then debugging was quite a struggle.

## Accomplishments that we're proud of
We are very proud of how functional our web app turned out. All team members were purposefully assigned to roles in which they had to work with frameworks and languages that they hadn't touched before, and yet we were able to figure out the functionalities and deliver a nearly end-to-end web app for streamlining the college application process and leveling the financial literacy playing field. We're also proud of the organization of our code. We organize everything into components and have a utils file where all the calls to the database are managed. All of the data that we display is stored on a Firestore database, a tool that none of us were really familiar with before. Finally, we were very satisfied by how we were able to incorporate information about a college's financial costs into Unify. Tuition and other associated expenditures are a defining element of every student's college decision-making, and we are proud that Unify can help them make core budgeting choices.

## What we learned
Through this challenge, we learned the importance and value of teamwork, specifically dividing and conquering. Abstraction came to be very important, as individuals were able to work on their portion knowing that the other portions would work. For example, our frontend developers were confident that the backend would work without directly interfacing with it, whereas backend developers relied on the efficient components that frontend developers created. Technically, we learned how to build a React app with Chakra UI and how to host a Firestore database with Google Cloud. We also learned how to integrate Firebase authentication into our app with Google Sign-In.

## What's next for Unify
In the future, we hope to finish our monetization system to truly allow college students and high school students to interact through our platform more naturally. In addition, we hope to finalize our messaging system and our database storage method.
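As a concrete illustration of the kind of math behind the finance tool described in "What it does", here is a minimal Python sketch; the growth rate and fee structure are illustrative assumptions, not Unify's actual model.

```python
def project_total_cost(annual_tuition, years=4, tuition_growth=0.03,
                       room_and_board=12000, aid_per_year=0.0):
    """Estimate total out-of-pocket cost, compounding tuition inflation yearly."""
    total = 0.0
    for year in range(years):
        tuition = annual_tuition * (1 + tuition_growth) ** year  # inflated tuition
        total += tuition + room_and_board - aid_per_year
    return round(total, 2)

# e.g. a private school with a hypothetical $15k/year aid package
print(project_total_cost(45000, years=4, aid_per_year=15000))
```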
winning
# Hide and Squeak

## Inspiration
The robotic mouse feeder is a device designed to help promote physical and mental health in indoor cats by giving them prey to hunt – by pouncing on or attacking the robotic mouse, the cat triggers the device to release a "random" amount of kibble (from a pre-determined set) to eat after successfully "catching" its prey.

## What it does
Right now it is a proof of concept; the pounce sensor (a button) triggers a stop in the motion and prints to the Serial monitor when the kibble would be released using a servo motor. The commands to a stepper motor (which acts as one of the two wheels on the mouse) are printed to the Serial monitor. After a "random" time, the device chooses a new "random" speed and direction (CW or CCW) for the wheel. A "random" direction for the mouse itself would be achieved in that both wheels would be turning independently, each at a different speed. When the bumper button is hit (indicating the mouse has run into an object in the environment), the direction sent to the motors is reversed at the same speed as before. This will need testing with an actual motor to see if it is sufficient to free the robot from an obstacle.

## Challenges I ran into
The hardware lab had no wires that would connect to the stepper motor shield, so there was no way to debug or test the code with the motor itself. The code outputs to the Serial monitor in lieu of controlling the actual motor, which is mediocre at best for testing the parameters chosen as the ranges for both speed and time.

## What's next
1. Attaching the stepper motor to see if the code functions properly with an actual motor, and testing the speed min and max (too slow? too fast?)
2. Programming with a second stepper motor to see how the code handles running two steppers at once
3. Structural and mechanical design – designing the gears, wheels, casing, kibble pinwheel, etc., needed to bring the code to life

## Future possibilities
1. Adding a remote feature so you can interact with your cat
2. Adding a camera and connecting to the home's WiFi so you can watch your cat play remotely
3. Adding an internal timer to turn on/off at specified times during the day
4. The ability to change the speeds or the time until switching directions, or to pre-program a route
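Since the stepper could not be wired up this weekend, the control logic is easiest to show as a small Python simulation of the loop described above. The real code targets an Arduino; the ranges, probabilities, and names here are illustrative guesses.

```python
import random
import time

SPEED_RANGE = (5, 30)       # steps/sec; real min/max still need motor testing
SWITCH_RANGE = (1.0, 4.0)   # seconds before a new "random" speed/direction

def new_command():
    return random.randint(*SPEED_RANGE), random.choice(["CW", "CCW"])

left, right = new_command(), new_command()
next_switch = time.time() + random.uniform(*SWITCH_RANGE)

def step(pounced=False, bumped=False):
    """One pass of the control loop: pounce stops and feeds, bumper reverses."""
    global left, right, next_switch
    if pounced:
        print("caught! releasing kibble via servo")
        return False                                  # stop motion
    if bumped:                                        # reverse both wheels, same speeds
        left = (left[0], "CW" if left[1] == "CCW" else "CCW")
        right = (right[0], "CW" if right[1] == "CCW" else "CCW")
    if time.time() >= next_switch:                    # pick a fresh command per wheel
        left, right = new_command(), new_command()
        next_switch = time.time() + random.uniform(*SWITCH_RANGE)
    print(f"left wheel: {left}, right wheel: {right}")
    return True

# fake pounce/bumper events so the simulation eventually terminates
while step(pounced=random.random() < 0.05, bumped=random.random() < 0.1):
    time.sleep(0.5)
```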
## Inspiration
I was inspired to make this device while sitting in physics class. I felt compelled to take something I learned inside the classroom and apply my education to something practical. Growing up, I always remember playing with magnetic kits and loved the feeling of repulsion between magnets.

## What it does
There is a base layer of small magnets all taped together so the north pole is facing up. Hall effect devices measure the variances in the magnetic field created by the user's magnet attached to their finger. This allows the device to track the user's finger and determine how they are interacting with the magnetic field pointing up.

## How I built it
It is built using the Intel Edison. Each hall effect device is either on or off depending on whether there is a magnetic field pointing down through the face of the black plate. This determines where the user's finger is. From there, the data is sent via serial port to the Processing program on the computer that demonstrates that it works. That program takes the data and maps the motion of the object.

## Challenges I ran into
There were many challenges I faced. Two of them dealt with just the hardware. I bought the wrong type of sensors: these are threshold sensors, which means they are either on or off, instead of linear sensors that give a voltage proportional to the strength of the magnetic field around them. Linear sensors would have allowed the device to be more accurate. The other challenge was having a lot of very small, worn-out magnets. I had to find a way to tape and hold them all together, because they are in an unstable configuration when creating an almost uniform magnetic field on the base. Another problem I ran into was with the Edison: I was planning on just controlling the mouse to show that the device works, but the mouse library only works with the Arduino Leonardo. I had to come up with a way to transfer the data to another program, which is how I ended up working with serial ports; I initially tried mapping the motion into a Unity game.

## Accomplishments that I'm proud of
I am proud of creating a hardware hack that I believe is practical. I used this device as a proof of concept for creating a more interactive environment for the user with a sense of touch, rather than devices like the Kinect and Leap Motion that track your motion in thin air without any real interaction. Some areas where this concept can be useful are learning environments, or helping people in physical therapy learn to do things again after a tragedy, since it is always better to learn with a sense of touch.

## What I learned
I had a grand vision of this project from thinking about it beforehand, and I thought it was going to work out great in theory! I learned how to adapt to many changes and overcome them with limited time and resources. I also learned a lot about dealing with serial data and how the Intel Edison works at a machine level.

## What's next for Tactile Leap Motion
Creating a better prototype with better hardware (stronger magnets and more accurate sensors).
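The tracking idea reduces to a small calculation: treat the threshold sensors as a binary grid and take the centroid of the triggered cells. A Python sketch of that step (the grid size and serial format are assumptions for illustration):

```python
import numpy as np

def finger_position(grid):
    """grid: 2D array of 0/1 readings parsed from the serial stream."""
    on = np.argwhere(np.asarray(grid) == 1)
    if len(on) == 0:
        return None                    # no magnet over the plate
    return tuple(on.mean(axis=0))      # (row, col) centroid of triggered sensors

print(finger_position([[0, 0, 0],
                       [0, 1, 1],
                       [0, 1, 0]]))    # -> roughly (1.33, 1.33)
```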
## Inspiration
We wanted to make something that linked the virtual and real worlds, but in a quirky way. On our team we had people who wanted to build robots and people who wanted to make games, so we decided to combine the two.

## What it does
Our game portrays a robot (Todd) finding its way through obstacles that are only visible in one dimension of the game. It is a multiplayer endeavor where the first player is given the task of guiding Todd remotely to his target. However, only the second player is aware of the various dangerous lava pits and moving hindrances that block Todd's path to his goal.

## How we built it
Todd was built with style, grace, but most of all an Arduino on top of a breadboard. On the underside of the breadboard, two continuous-rotation servo motors and a USB battery allow Todd to travel in all directions. Todd receives communications from a custom-built Todd-Controller™ that provides four-way directional control via a pair of HC-05 Bluetooth modules. Our Todd-Controller™ (built with another Arduino, four pull-down buttons, and a Bluetooth module) then interfaces with Unity3D to move the virtual Todd around the game world.

## Challenges we ran into
The first of the many challenges we ran into on this "arduinous" journey was having two Arduinos send messages to each other over the Bluetooth wireless network. We had to manually configure the settings of the HC-05 modules by putting each into AT mode, setting one as the master and one as the slave, making sure the passwords and the default baud rates were the same, and then syncing the two with different code to echo messages back and forth. The second challenge was building Todd, whose clean wiring proved rather difficult as we tried to prevent loose wires from hindering Todd's motion. The third challenge was building the Unity app itself. Collision detection was an issue at times, because if movements were imprecise or we collided at a weird corner, our object would fly up in the air and behave very strangely. So, we resorted to restraining the movement of the player to certain axes. Additionally, we had to make sure the scene looked nice, with good lighting and a pleasant camera view; we tried out many different combinations until we decided that a top-down view of the scene was the optimal choice. Because of the limited time, and because we wanted the game to look good, we used free assets (models and textures only) to our advantage. The fourth challenge was establishing clear communication between Unity and the Arduino. We resorted to an interface that used the computer's serial port to connect the controller Arduino with the Unity engine. The challenge was that Unity and the controller had to communicate strings through the same serial port. It was as if two people were using the same phone line for different calls: we had to make sure that when one was talking, the other was listening, and vice versa.

## Accomplishments that we're proud of
The biggest accomplishment of this project, in our eyes, is that when virtual Todd encounters an object (such as a wall) in the virtual game world, real Todd stops. Additionally, the margin of error between the real and virtual Todd's movements came in under 3%, which significantly surpassed our original expectations of this project's accuracy and goes to show that our vision of a real game with virtual obstacles is achievable.

## What we learned
We learned how complex integration is. It's easy to build self-sufficient parts, but their interactions introduce exponentially more problems. Communicating via Bluetooth between Arduinos and having Unity talk to a microcontroller via serial was a very educational experience.

## What's next for Todd: The Inter-dimensional Bot
When Todd escapes from this limiting world, he will enter a hackathon and program his own Unity/Arduino-based masterpiece.
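The "same phone line" serial coordination can be sketched in a few lines of Python with pyserial – strict write-then-read turns so the two sides never talk over each other. The port name, baud rate, and one-letter protocol are assumptions for illustration (our real interface lives in Unity/C#):

```python
import serial  # pyserial

port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # port/baud are placeholders

def exchange(command):
    port.write((command + "\n").encode())      # our turn to talk: send a direction
    return port.readline().decode().strip()    # then listen until the Arduino acks

for cmd in ["F", "L", "F", "R"]:               # forward / left / forward / right
    print(cmd, "->", exchange(cmd))
```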
partial
## Inspiration
One of our team members was in the evacuation warning zone for the raging California fires in the Bay Area just a few weeks ago. Part of their family's preparation for this disaster included the tiresome, tedious, time-sensitive process of listing every item in their house for insurance claims in the event that it burned down. This process took upwards of 15 hours split between 3 people, and even then many items were missed and unaccounted for. Claim Cart is here to help!

## What it does
Problems solved:
1. Families often have many belongings they don't account for. It's time-intensive and inconvenient to coordinate, maintain, and update extensive lists of household items, and listing mundane, forgotten items can potentially add thousands of dollars to their insurance claims.
2. Insurance companies have private master lists of the most commonly claimed items and the cheapest viable replacements. Families are losing out on thousands of dollars because their claims don't state the actual brand or price of their items. For example, if a family listed "toaster", they would get $5 (the cheapest alternative), but if they listed "stainless steel - high end toaster: $35" they might get $30 instead.

Claim Cart has two main value propositions: time and money. It is significantly faster to take a picture of your items than to manually enter every object, and it's more efficient for family members to collaborate on a shared master list.

## Challenges I ran into
Our team was split between 3 different time zones, so communication and coordination was a challenge!

## Accomplishments that I'm proud of
For three of our members, PennApps was their first hackathon. It was a great experience building our first hack!

## What's next for Claim Cart
In the future, we will make Claim Cart available to people on all platforms.
## **opiCall**
## *the line between O.D. and O.K. is one opiCall away*
---
## What it does
Private AMBER-style alerts sent to either 911 or a naloxone-carrying network.

## How we built it
We used Twilio & Dasha AI to send texts and calls, and Firebase & Swift for the iOS app's database and UI.

## Challenges we ran into
We had lots of difficulty finding existing research on the topic, and conducting our own research was hard due to the taboos and Reddit post removals we faced.

## What's next for opiCall
In-depth research on First Nations communities and opioids to guide our product further.
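For a sense of how the alert flow looks in code, here is a hedged Python sketch of the Twilio side; the numbers, credentials, and contact list are placeholders, and the Dasha AI voice flow is omitted:

```python
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")   # placeholder credentials

def alert_network(location, contacts):
    body = f"opiCall alert: possible overdose near {location}. Naloxone needed ASAP."
    for number in contacts:
        # one SMS per member of the naloxone-carrying network
        client.messages.create(to=number, from_="+15550001111", body=body)

alert_network("123 Main St", ["+15550002222", "+15550003333"])
```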
# 🚗 InsuclaimAI: Simplifying Insurance Claims 📝

## 🌟 Inspiration 💡
After a frustrating experience with a minor fender-bender, I was faced with the overwhelming process of filing an insurance claim. Filling out endless forms, speaking to multiple customer service representatives, and waiting for assessments felt like a second job. That's when I knew there needed to be a more streamlined process. Thus, InsuclaimAI was conceived as a solution to simplify the insurance claim maze.

## 🎓 What I Learned
### 🛠 Technologies
#### 📖 OCR (Optical Character Recognition)
* OCR tooling such as Tesseract, together with OpenCV, helped in scanning and reading textual information from physical insurance documents, automating the data extraction phase.

#### 🧠 Machine Learning Algorithms (CNN)
* Utilized Convolutional Neural Networks to analyze and assess damage in photographs, providing an immediate preliminary estimate for claims.

#### 🌐 API Integrations
* Integrated APIs from various insurance providers to automate the claims process. This helped in creating a centralized database for multiple types of insurance.

### 🌈 Other Skills
#### 🎨 Importance of User Experience
* Focused on intuitive design and simple navigation to make the application user-friendly.

#### 🛡️ Data Privacy Laws
* Learned about GDPR, CCPA, and other regional data privacy laws to make sure the application is compliant.

#### 📑 How Insurance Claims Work
* Acquired a deep understanding of the insurance sector, including how claims are filed and processed, and what factors influence the approval or denial of claims.

## 🏗️ How It Was Built
### Step 1️⃣: Research & Planning
* Conducted market research and user interviews to identify pain points.
* Designed a comprehensive flowchart to map out user journeys and backend processes.

### Step 2️⃣: Tech Stack Selection
* After evaluating various programming languages and frameworks, Python, TensorFlow, and Flet (a Python UI framework) were selected as they provided the most robust and scalable solutions.

### Step 3️⃣: Development
#### 📖 OCR
* Integrated Tesseract for OCR capabilities, enabling the app to automatically fill out forms using details from uploaded insurance documents.

#### 📸 Image Analysis
* Used a CNN model trained on thousands of car accident photos to detect the damage on automobiles.

#### 🏗️ Backend
##### 📞 Twilio
* Integrated Twilio to facilitate voice calling with insurance agencies. This allows users to reach out to their insurance agency directly, making the process even more seamless.

##### ⛓️ Aleo
* Used Aleo to tokenize PDFs containing sensitive insurance information on the blockchain. This ensures the highest levels of data integrity and security. Every PDF is turned into a unique token that can be securely and transparently tracked.

##### 👁️ Verbwire
* Integrated Verbwire for advanced user authentication using FaceID. This adds an extra layer of security by authenticating users through facial recognition before they can access or modify sensitive insurance information.

#### 🖼️ Frontend
* Used Flet to create a simple yet effective user interface. Incorporated feedback mechanisms for real-time user experience improvements.

## ⛔ Challenges Faced
#### 🔒 Data Privacy
* Researching and implementing data encryption and secure authentication took longer than anticipated, given the sensitive nature of the data.

#### 🌐 API Integration
* Where available, we integrated with insurers' REST APIs, providing a standard way to exchange data between our application and the insurance providers. This enhanced our application's ability to offer a seamless and centralized service for multiple types of insurance.

#### 🎯 Quality Assurance
* Iteratively improved the OCR and image analysis components to reach a satisfactory level of accuracy. Constantly validated results against actual data.

#### 📜 Legal Concerns
* Spent time consulting with legal advisors to ensure compliance with various insurance regulations and data protection laws.

## 🚀 The Future 👁️
InsuclaimAI aims to be a comprehensive insurance claim solution. Beyond just automating the claims process, we plan on collaborating with auto repair shops, towing services, and even medical facilities in the case of personal injuries, to provide a one-stop solution for all post-accident needs.
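A minimal sketch of the document-scanning step, assuming OpenCV plus Tesseract via pytesseract; the preprocessing choices shown are illustrative, not the production pipeline:

```python
import cv2
import pytesseract

def extract_policy_text(path):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (3, 3), 0)     # light denoise before binarizing
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary)   # raw text for the form autofill

print(extract_policy_text("insurance_card.jpg"))
```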
winning
## What it does
It's an agar.io clone written by three sleep-deprived students who learned as they worked. If you're not familiar with agar.io, it's a bunch of colourful circles brutally murdering and eating each other in an endless quest for dominance (or until they get bored and go play golf or something).

## How we built it
We coded it in assembly on a computer we made from popsicle sticks and duct tape. Kidding of course – for this kind of thing you need a computer built from high-quality maple skewers, not cheap popsicle sticks! We actually wrote the client and server pieces in JavaScript, making use of jQuery, Node.js, Express.js, and Socket.IO to make our lives easier and to facilitate communication between the two. The server is hosted on Amazon Web Services and, based on its performance, is being powered by two agitated ferrets with graphing calculators.

## Challenges we ran into
Most of us had little to no experience developing web applications, never mind real-time server-client communication, so it has been a bit of a learning experience to say the least. Learning was definitely the biggest challenge, but we still learned many things. Let's be honest, it's a miracle this thing works at all.

## Accomplishments that we're proud of
We're super happy we managed to train two ferrets to use graphing calculators to run the server. Oh, and we're also pretty proud that we got it working – that too.

## What we learned
If nothing else, this project certainly fits the theme of learning, since we had to learn almost everything about this project. We came in knowing almost nothing, and in the end we now have experience writing server-client applications using JavaScript and HTML, as well as ferret training.

## What's next for Hangry Time
Being rewritten by people with more experience would be good, but failing that, some shinier graphics would be nice. Maybe replacing all the circles with pictures of Donald Trump and Hillary Clinton's faces would liven things up.
## Inspiration
Looking around you in your day-to-day life, you see so many people eating so much food. Trust me, this is going somewhere. All that stuff we put in our bodies – what is it? What are all those ingredients that seem more like chemicals belonging in nuclear missiles than in your 3-year-old cousin's Coke? Answering those questions is what we set out to accomplish with this project. But answering a question doesn't mean anything if you don't answer it well, meaning your answer raises as many or more questions than it answers. We wanted everyone, from pre-teens to senior citizens, to be able to understand it. So in summary, we wanted to give all these lazy couch potatoes (us included) an easy, efficient, and most importantly, comprehensible way of knowing what it is exactly that we're consuming by the metric ton on a daily basis.

## What it does
Our code takes input in the form of either text or an image and uses it as input to an API, from which we extract our final output using specific prompts. Some of our outputs are the nutritional values, a nutritional summary, the amount of exercise required to burn off the calories gained from the meal, its recipe, and how healthy it is in comparison to other foods.

## How we built it
We used Flask, HTML, and CSS, with Python for the backend.

## Challenges we ran into
We are all first-timers, so none of us had any idea how the whole thing worked. Individually, we all faced our fair share of struggles with our food, our sleep schedules, and our timidness, which led to miscommunication.

## Accomplishments that we're proud of
Making it through the week and keeping our love of tech intact. Other than that, we really did meet some amazing people and got to know so many cool folks. As a collective group, we are proud of our teamwork and our ability to compromise, work with each other, and build on each other's ideas. For example, we all started off with different ideas and different goals for the hackathon, but we ended up finding a project we all liked and brought it to life.

## What we learned
How hackathons work and what they are. We also learned so much more about building projects within a small team, and what should be done when our scope of what to build was so wide.

## What's next for NutriScan
* Working ML
* Use of the camera as an input to the program
* Better UI
* Responsiveness
* Release
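A stripped-down sketch of the Flask entry point described above – accept text or an image, build prompts, and hand off to the external API. `query_llm` and `caption_image` are hypothetical stand-ins for the real calls:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def query_llm(prompt):              # hypothetical stand-in for the API we prompt
    return "model output here"

def caption_image(image_bytes):     # hypothetical stand-in for image understanding
    return "a bowl of instant ramen"

@app.route("/analyze", methods=["POST"])
def analyze():
    if "image" in request.files:
        meal = caption_image(request.files["image"].read())
    else:
        meal = request.form.get("meal", "")
    return jsonify({
        "facts": query_llm(f"List nutritional values and a short summary for: {meal}"),
        "exercise": query_llm(f"How much exercise burns off the calories in: {meal}?"),
    })
```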
## Inspiration
Lectures around the world last 100.68 minutes on average. That number goes all the way up to 216.86 minutes for art students. As engineering students, we spend roughly 480 minutes a day listening to lectures. Add an additional 480 minutes for homework (we're told to study an hour for every hour in lecture), 120 minutes for personal breaks, 45 minutes for hygiene – not to mention tutorials, office hours, et cetera. Thinking about this reminded us of the triangle of sleep, grades, and a social life – and how you can only pick two. We felt that this was unfair and that there had to be a way around it. Most people approach this by attending lectures from home. But often, they just play lectures at 2x speed, or skip sections altogether. This isn't an efficient approach to studying in the slightest.

## What it does
Our web-based application takes audio files – whether from lectures, interviews, or your favourite podcast – and takes out all the silent bits, the parts you don't care about. That is, the intermediate walking, writing, thinking, pausing, or any waiting that happens. By analyzing the waveforms, we can algorithmically select and remove parts of the audio that are quieter than the rest. This is done by our Python script running behind the UI.

## How I built it
We used PHP, HTML, and CSS with Bootstrap to build the frontend, hosted on a DigitalOcean LAMP droplet with a Namecheap domain. On the droplet, we host an Ubuntu web server, which runs our Python script on the shell.

## Challenges I ran into
For all members of the team, it was our first time approaching all of our tasks. Going head-on into something we didn't know, in a timed and stressful situation such as a hackathon, was really challenging – and something we were very glad we persevered through.

## Accomplishments that I'm proud of
Creating a final product from scratch, without the use of templates or too much guidance from tutorials, is pretty rewarding. Often in the web development process, templates and guides are used to help someone learn. However, we developed all of the scripting and the UI ourselves as a team. We even went so far as to design the icons and artwork ourselves.

## What I learned
We learnt a lot about the importance of working collaboratively to create a full-stack project. Each individual in the team was assigned a different compartment of the project – from web deployment, to scripting, to graphic design and user interface. Each role was vastly different from the next, and it took a whole team to pull this together. We all gained a greater understanding of the work that goes on in large tech companies.

## What's next for lectr.me
Ideally, we'd like to develop the idea to have many more features – perhaps introducing video and other options. This idea was really a starting point and there's so much potential for it.

## Examples
<https://drive.google.com/drive/folders/1eUm0j95Im7Uh5GG4HwLQXreF0Lzu1TNi?usp=sharing>
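The core idea can be sketched with pydub (our real script analyzes the raw waveforms directly, and these thresholds are illustrative):

```python
from pydub import AudioSegment
from pydub.silence import split_on_silence

lecture = AudioSegment.from_file("lecture.mp3")
chunks = split_on_silence(
    lecture,
    min_silence_len=700,                # ms of quiet before we call it "silence"
    silence_thresh=lecture.dBFS - 14,   # quieter than average = probably not speech
    keep_silence=150,                   # keep a little padding so cuts sound natural
)
trimmed = sum(chunks, AudioSegment.empty())   # stitch the speech segments back together
trimmed.export("lecture_trimmed.mp3", format="mp3")
```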
losing
## Inspiration
The Materials Engineering Department at McMaster University proposed a challenge to DeltaHacks attendees: analyze microscopic grain structures in metals. The program should be able to distinguish the grain boundaries and display information about the 3 types of grains in the image (light, dark, and lam).

## What it does
The user interfaces with our program through our GUI. They drag and drop a file into the GUI and run the program. It outputs a mask for each of the following: grain boundaries, precipitates, light grains, dark grains, and lam grains. The user can scroll through and save the masks. Information about each type of grain is included in a table – average grain area, average grain length, and number of grains of that type.

## How we built it
We built the GUI using PyQt. The majority of the image processing algorithms were programmed in Python using OpenCV. We performed processing steps such as thresholding, condensing, flattening, and expanding borders. The curtaining effect was removed by applying a Fourier transform and then removing the curtaining frequency from the image. The light flares were reduced by sampling the image to obtain a background level and subtracting that from the original image.

## Challenges we ran into
The noise caused by the precipitates was the largest challenge we faced, as the noise they introduced at each processing step impacted our ability to extract information from the images. We had to determine how to remove the precipitates from the images early in the processing procedure.

## Accomplishments that we're proud of
We are proud that we accomplished something for each task.

## What we learned
Theoretical image processing techniques do not work well on real images due to noise and other artifacts. We also found an actual application for Fourier transforms.

## What's next for Materialistic
The next steps would be to fine-tune our algorithms so that they work a bit more efficiently. Additionally, we would like to further automate our procedure and improve the functionality by implementing a neural network. Given the data sets that were provided, the results of our image segmentation would have been much more sophisticated if we had had the time to train a neural network to recognize the periodic frequency of the curtaining artifacts in the image. ML would be able to optimize the pattern recognition of the curtaining, resulting in a more accurate Fourier transform model to remove the specified frequency content from the image.
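The curtaining-removal step can be sketched in a few lines of NumPy: notch out the stripe frequencies in the 2D Fourier domain and invert. The stripe orientation and band widths here are illustrative – in practice they are found by inspecting each dataset's spectrum:

```python
import numpy as np

def remove_curtaining(gray, half_width=2, keep_dc=5):
    """Notch out periodic 'curtaining' stripes in the 2D Fourier domain.

    Assumes vertical stripes (which appear as a horizontal line of energy in
    the centered spectrum); half_width and keep_dc would be tuned per dataset.
    """
    f = np.fft.fftshift(np.fft.fft2(gray))
    cy, cx = (s // 2 for s in f.shape)
    f[cy - half_width:cy + half_width + 1, :cx - keep_dc] = 0   # left of DC
    f[cy - half_width:cy + half_width + 1, cx + keep_dc:] = 0   # right of DC
    return np.abs(np.fft.ifft2(np.fft.ifftshift(f)))
```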
## Inspiration
Canadians produce more garbage per capita than any other country on earth, with the United States ranking third in the world. In fact, Canadians generate approximately 31 million tonnes of garbage a year. According to the Environmental Protection Agency, 75% of this waste is recyclable, yet only 30% of it is recycled. In order to increase this recycling rate and reduce our environmental impact, we were inspired to propose a solution that automates waste sorting.

## What it does
Our vision takes control away from the user and lets the machine do the thinking when it comes to waste disposal! By showing our app a piece of waste through the webcam, we detect and classify the category of waste as recyclable, compost, or landfill. From there, the appropriate compartment is opened to ensure that the right waste gets to the right place!

## How we built it
Using TensorFlow and object detection, a Python program analyzes the webcam image input and classifies the objects shown. The TensorFlow data is then collected and pushed to our MongoDB Atlas database via Google Cloud. For this project, we used a single-shot detector (SSD) model to maintain a balance between accuracy and speed. For the hardware, an Arduino 101 and a stepper motor were responsible for manipulating the position of the lid and opening the appropriate compartment.

## Challenges we ran into
We had many issues with training our ML models on Google Cloud, due to the meager resources provided. Another issue we encountered was finding the right datasets, due to the novelty of our product. Because of these setbacks, we resorted to modifying a model provided by TensorFlow.

## Accomplishments that I'm proud of
We managed to work through difficulties and learned a lot in the process! We learned to connect TensorFlow, Arduino, MongoDB, and Express.js to create a synergistic project.

## What's next for Trash Code
In the future, we aim to create a mobile app for improved accessibility and to train a fully customized ML model. We also hope to design a fully functional, full-sized prototype with the Arduino.
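The sorting decision itself is tiny once detection has run; here is a hedged Python sketch mapping a detected label to a compartment and signalling the Arduino. The label set, mapping, and one-byte serial protocol are illustrative assumptions:

```python
import serial  # pyserial

BIN_FOR = {"bottle": "recycle", "can": "recycle", "apple": "compost",
           "banana": "compost", "wrapper": "landfill"}
COMMAND = {"recycle": b"R", "compost": b"C", "landfill": b"L"}

arduino = serial.Serial("/dev/ttyACM0", 9600)  # placeholder port

def sort(detected_label):
    bin_name = BIN_FOR.get(detected_label, "landfill")   # unknown -> landfill
    arduino.write(COMMAND[bin_name])                     # stepper opens that lid
    return bin_name

print(sort("bottle"))  # -> "recycle"
```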
## Inspiration
Frustrated by the steep learning curve of LaTeX—a formatting language used by over 90% of students in scientific fields like math or physics—we, as engineering students, found ourselves spending more time converting handwritten notes into LaTeX than solving the actual problems.

## What it does
Upload notes or problem sets as images, and Elevate converts them into LaTeX documents while annotating errors with explanations. This process not only simplifies formatting but also enhances learning by helping you identify and correct mistakes for improved academic performance.

## How we built it
Convex helped us link our React frontend to the backend smoothly, making it quicker to change things and catch mistakes thanks to its type-safe coding. We organized our data using Convex's NoSQL database, adding structure and making searches faster with indexing. For security, we hooked up authentication easily with Convex's Clerk Adaptor. Since we needed to store images and LaTeX files, Convex was handy there too. Plus, we used Convex's vector database to smarten up our chatbot, making everything work together nicely and more efficiently.

**OCR Preprocessing (Image2Latex)**: We converted images to grayscale via the following formula:

![Grayscale Conversion Formula](https://i.ibb.co/KWnLMwQ/Screenshot-2024-02-18-at-8-19-38-AM.png)

We applied a Gaussian blur, which smooths the image by averaging pixels weighted by their spatial closeness, helping to reduce high-frequency noise for the next steps. We utilized the OpenCV library to apply a (5, 5) kernel with standard deviation 0 (which OpenCV derives from the kernel size), via the following formula for a two-dimensional Gaussian:

![](https://i.ibb.co/Nj50g0L/Screenshot-2024-02-18-at-8-29-08-AM.png)

Finally, we employed a custom sharpening kernel,

![](https://i.ibb.co/7vdwTBp/Screenshot-2024-02-18-at-8-24-13-AM.png)

which enhances edges by increasing the contrast between adjacent pixels. The convolution operation of the image with the kernel, I \* K, is given by

![](https://i.ibb.co/88qvbSq/Screenshot-2024-02-18-at-8-24-36-AM.png)

After the preprocessing stage, we then feed the resulting image into OpenAI's computer vision API to generate text describing what is in the image. We then feed this text into together.ai's Mixtral-8x7B LLM, which generates appropriate LaTeX code for the text. Finally, once again using together.ai's LLM along with extensive prompt engineering and fine-tuning, we were able to highlight errors and generate corrections.

![](https://i.ibb.co/b30xYJK/DEMO.png)
Original Image

![](https://i.ibb.co/rGdvkPP/BBB.jpg)
Image After Gaussian Blur (Reduced Noise)

![](https://i.ibb.co/RySLp00/CCC.jpg)
Image After Sharpening Kernel Convolution

![](https://i.ibb.co/1G1m7YJ/Screenshot-2024-02-18-at-8-46-05-AM.png)
LaTeX Code Generated by Together.AI

![](https://i.ibb.co/311Z2Lm/Screenshot-2024-02-18-at-8-46-20-AM.png)
LaTeX Code Corrected by Together.AI

**Database Vector Search**: Using InterSystems' Langchain-IRIS vector search technology, we were able to:

* Generate embedded vectors for LaTeX notes
* Use cosine similarity to query, for a given problem, the most relevant notes containing information to answer that problem
* Create a connection graph and mind map of all notes in the database by representing each note as a node and drawing an edge between notes with high similarity (low vector distance)
* For example, querying the question "How can I find a stable matching?" against a database of one of our team member's discrete math course notes results in the following responses from InterSystems' API:

![https://ibb.co/Vg0Fq0F](https://i.ibb.co/ZBRQMRQ/Screenshot-2024-02-18-at-8-58-17-AM.png)

Leveraging Together.AI's Mixtral LLM, we also created an interactive chatbot that can summarize ideas in the notes and answer users' questions about them.

![https://ibb.co/xXwVj7B](https://i.ibb.co/rFqn6wz/Screenshot-2024-02-18-at-9-14-22-AM.png)

## Challenges we ran into
It took a while to figure out how to render LaTeX text in a browser, as no existing packages met our needs for clarity and precision in displaying complex formulas. Creating LLM prompts for annotating student solutions with feedback proved difficult: we had to ensure not only that the feedback was accurate but also that it was formatted and presented properly. The technical complexity of implementing the vector graph for note comparison was another hard challenge. Perfecting an algorithm that seamlessly integrates InterSystems' Langchain vector embeddings into a graph structure and generates appropriate adjacency lists required deep dives into ML, optimization, and data structures. Rendering the graph on our live website posed a further challenge.

## Accomplishments that we're proud of
We successfully built a tool that we will use for the rest of our lives, and we are optimistic enough about its quality and market potential to go to market after the hackathon.

## What we learned
We learned the crucial importance of having a diverse team with varied skill sets, enabling us to effectively divide the workload across design, backend development, and machine learning tasks.

## What's next for Elevate
Get 1k users by posting on social media. Use feedback from users to iterate on the product and add features they would find helpful.
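The preprocessing stage above translates almost directly into OpenCV. The (5, 5), sigma-0 Gaussian matches the write-up; the 3×3 sharpening kernel shown is the standard form and is our assumption about the screenshot:

```python
import cv2
import numpy as np

def preprocess(path):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # weighted grayscale conversion
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # (5, 5) kernel, sigma from size
    sharpen = np.array([[ 0, -1,  0],
                        [-1,  5, -1],
                        [ 0, -1,  0]])               # assumed standard sharpening kernel
    return cv2.filter2D(blurred, -1, sharpen)        # the convolution I * K

cv2.imwrite("preprocessed.png", preprocess("notes.jpg"))
```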
partial
## Realm Inspiration
Our inspiration stemmed from our fascination with the growing fields of AR and virtual worlds, from full-body tracking to 3D visualization. We were interested in realizing ideas in this space, specifically around sensors detecting movements and seamlessly integrating 3D gestures. We felt that the prime way to display our interest in this technology and its potential was to communicate using it. This is what led us to create Realm, a technology that allows users to create dynamic, collaborative presentations with voice commands, image searches, and complete body tracking for the most customizable and interactive presentations. We envision an increased ease in dynamic presentations and limitless collaborative workspaces with improvements to technology like Realm.

## Realm Tech Stack
Web View (AWS SageMaker, S3, Lex, DynamoDB and ReactJS): Realm's stack relies heavily on AWS. We begin by receiving images from the frontend and passing them into SageMaker, where the images are tagged according to their content. These tags, and the images themselves, are put into an S3 bucket. Amazon Lex is used for dialog flow, where text is parsed and tools, animations, or simple images are chosen. The Amazon Lex commands are completed by parsing through the S3 bucket, selecting the desired image, and storing the image URL with all other on-screen images in DynamoDB. The list of URLs is posted to an endpoint that Swift calls to render.

AR View (ARKit, Swift): The Realm app renders text, images, slides, and SCN animations as pixel-perfect AR models that are interactive and backed by a physics engine. Among the models we have included in our demo are presentation functionality and rain interacting with an umbrella. Swift 3 allows full-body tracking, and we configured the tools to provide optimal tracking and placement gestures. Users can move objects in their hands, place objects, and interact with 3D items to enhance the presentation.

## Applications of Realm:
We hope to see our idea implemented in real workplaces in the future. We see classrooms using Realm to provide students interactive spaces to learn, professional employees a way to create interactive presentations, teams a way to come together and collaborate as easily as possible, and so much more. Extensions include creating more animated/interactive AR features and real-time collaboration methods. We hope to further polish our features for usage in industries such as AR/VR gaming & marketing.
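A hedged Python (boto3) sketch of the web-view hop described above – tag via the SageMaker endpoint, store in S3, record the URL list in DynamoDB. Endpoint, bucket, and table names are placeholders, and the response payload format depends on the deployed model:

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")
s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("realm-scene")  # placeholder table

def ingest(image_bytes, key, urls):
    resp = runtime.invoke_endpoint(EndpointName="realm-tagger",       # placeholder
                                   ContentType="application/x-image",
                                   Body=image_bytes)
    tags = json.loads(resp["Body"].read())                            # model-dependent
    s3.put_object(Bucket="realm-images", Key=key, Body=image_bytes,
                  Metadata={"tags": ",".join(tags)})
    urls.append(f"https://realm-images.s3.amazonaws.com/{key}")
    # Swift polls an endpoint backed by this item to know what to render
    table.put_item(Item={"scene": "current", "urls": urls})
```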
## Inspiration
We were inspired by the Instagram app, which set out to connect people using photo media. We believe that the next evolution of connectivity is augmented reality, which allows people to share and bring creations into the world around them. This revolutionary technology has immense potential to help restore the financial security of small businesses, which can no longer offer the same in-person shopping experiences they once did before the pandemic.

## What It Does
Metagram is a social network that aims to restore the connection between people and small businesses. Metagram allows users to scan creative works (food, models, furniture), which are then converted to models that can be experienced by others using AR technology.

## How we built it
We built our front-end UI using React.js with Express/Node.js, and used MongoDB to store user data. We used Echo3D to host our models and provide AR capabilities on mobile phones. In order to create personalized AR models, we hosted COLMAP and OpenCV scripts on Google Cloud to process images and then turn them into 3D models ready for AR.

## Challenges we ran into
One of the challenges we ran into was hosting the software on Google Cloud, as it needed CUDA to run COLMAP. Since this was our first time using AR technology, we faced some hurdles getting to know Echo3D. However, the documentation was very well written, and the API integrated very nicely with our custom models and web app!

## Accomplishments that we're proud of
We are proud of finding a way to host COLMAP on Google Cloud and connect it to the rest of our application. The application is fully functional and can be accessed by [clicking here](https://meta-match.herokuapp.com/).

## What We Learned
We learned a great deal about hosting COLMAP on Google Cloud. We also learned how to create AR experiences and how to use Echo3D, which we had never used before, and how to integrate it all into a functional social networking web app!

## Next Steps for Metagram
* [ ] Improving the web interface and overall user experience
* [ ] Scanning and uploading 3D models in a more efficient manner

## Research
Small businesses are the backbone of our economy. They create jobs, improve our communities, fuel innovation, and ultimately help grow our economy! For context, small businesses made up 98% of all Canadian businesses in 2020 and provided nearly 70% of all jobs in Canada [[1]](https://www150.statcan.gc.ca/n1/pub/45-28-0001/2021001/article/00034-eng.htm). However, the COVID-19 pandemic has devastated small businesses across the country. The Canadian Federation of Independent Business estimates that one in six businesses in Canada will close their doors permanently before the pandemic is over. This would be an economic catastrophe for employers, workers, and Canadians everywhere.

Why is the pandemic affecting these businesses so severely? We live in the age of the internet, after all, right? Many retailers believe customers shop similarly online as they do in-store, but the research says otherwise. The data is clear. According to a 2019 survey of over 1000 respondents, consumers spend significantly more per visit in-store than online [[2]](https://www.forbes.com/sites/gregpetro/2019/03/29/consumers-are-spending-more-per-visit-in-store-than-online-what-does-this-man-for-retailers/?sh=624bafe27543).
Furthermore, a 2020 survey of over 16,000 shoppers found that 82% of consumers are more inclined to purchase after seeing, holding, or demoing products in-store [[3]](https://www.businesswire.com/news/home/20200102005030/en/2020-Shopping-Outlook-82-Percent-of-Consumers-More-Inclined-to-Purchase-After-Seeing-Holding-or-Demoing-Products-In-Store). It seems that our senses and emotions play an integral role in the shopping experience. This fact is what inspired us to create Metagram, an AR app to help restore small businesses. ## References * [1] <https://www150.statcan.gc.ca/n1/pub/45-28-0001/2021001/article/00034-eng.htm> * [2] <https://www.forbes.com/sites/gregpetro/2019/03/29/consumers-are-spending-more-per-visit-in-store-than-online-what-does-this-man-for-retailers/?sh=624bafe27543> * [3] <https://www.businesswire.com/news/home/20200102005030/en/2020-Shopping-Outlook-82-Percent-of-Consumers-More-Inclined-to-Purchase-After-Seeing-Holding-or-Demoing-Products-In-Store>
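For context on the reconstruction step from "How we built it", COLMAP's one-shot CLI can be driven from Python roughly like this (paths are placeholders; our cloud wrapper adds more around it):

```python
import subprocess

def reconstruct(image_dir, workspace):
    # One-shot pipeline: feature extraction, matching, and reconstruction
    subprocess.run(
        ["colmap", "automatic_reconstructor",
         "--image_path", image_dir,
         "--workspace_path", workspace],
        check=True,
    )

reconstruct("uploads/scan_123/", "workspaces/scan_123/")
```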
## Inspiration
Not all products are designed in a user-friendly and intuitive way. We often come across devices that are annoying and unclear to use. This is especially true for people with less exposure to tech, such as seniors. Whether it's setting up a new tech gadget or controlling the AC in a new rental car, reading long user manuals or finding a random YouTube tutorial is currently the best course of action. But what if an AI could generate the tutorial specifically for you, directly on your phone, and visually explain the product using interactive AR?

## What it does
We leave AI chatbots in the dust by combining them with 3D stable diffusion and augmented reality to create a user experience as if an expert is physically next to you, visually answering your question with a helpful virtual demonstration.

## Workflow
1. User wants to know how to interact with an object.
2. They open the app and point their camera at the object.
3. The user asks their question, e.g. "How do I do X?"
4. The object detection model detects the item in front of the user.
5. Speech-to-text understands the user's question and sends the label and prompt to the backend LLM instruction agent.
6. The instruction agent takes the user's prompt and generates a list of clear instructions to resolve the user's problem.
7. The detected object and contextualised instructions are fed into a 3D stable diffusion model, which generates a digital twin that is displayed alongside the real object in AR.
8. The 3D models are positioned in AR space as visual guidance for the written instructions, which are also shown to the user.

## How we built it
**FrontEnd:** The core frontend was developed using SwiftUI, using ARKit for rendering the tutorials in space and Core ML as the on-device model to detect the object in front of the camera. We also used AVFoundation to enable speech-to-text capabilities to simplify the user experience. For more complex and involved tutorials, we aim to make the frontend compatible with the Apple Vision Pro in the near future.

**Instruction Agent:** The instruction agent simplifies user guidance by generating concise instructions in three clear steps. It receives prompts via a REST API from the frontend and incorporates them into the output JSON format. These instructions are then contextualised for the text-to-3D model, which facilitates the generation and positioning of AR objects. This process involves passing the question and label through an LLM to produce the finalised JSON.

**Text-to-3D Stable Diffusion:** The text-to-3D stable diffusion model was developed using a pre-trained 2D text-to-image diffusion model to perform text-to-3D synthesis. We used a probability density distillation loss to optimise a NeRF model using gradient descent. The resulting model can be viewed from any angle and requires no 3D training data or modifications to the image diffusion model. Because querying each ray in a NeRF requires a lot of computation, we used a Sparse Neural Radiance Grid (SNeRG) that enables real-time rendering. This involved reformulating the architecture around a sparse voxel grid representation with learned feature vectors. We used USDPython with ARConvert for USDZ compatibility on iOS. The following papers were used as technical support and inspiration:

* <https://instruct-nerf2nerf.github.io/>
* <https://phog.github.io/snerg/>
* <https://dreamfusion3d.github.io/>

## Challenges we ran into
Rendering 3D models at high speed and quality turned out to be very tough.
Our model started out producing low-quality AR objects in about a minute; after precomputing and storing the NeRF in a SNeRG, we were able to cut that time down to several seconds. Producing the highest-quality models takes longer and is a challenge that we want to address in the future. For now, the lower-quality version suffices, and on a smartphone's small screen it is not much of an issue.

## Accomplishments that we're proud of
We made a fully functional demo and MVP! Despite facing many technical challenges along the way, we managed to overcome them all and are proud of the functionality and complexity of our product. We were able to integrate many packages and models into a complex pipeline that seamlessly converts the user's question into a visual tutorial. The technical complexity of our solution was both challenging and rewarding, and we are excited to work on this further and see how far we can push the performance and quality of the model, especially considering how close it is to the edge of research.

## What we learned
We used many new packages and techniques in this project, significantly expanding our skillset. Our biggest breakthrough was getting the 3D stable diffusion algorithm to work, as this was something we had never done before. We also expanded our AR capabilities by learning about ARKit, RealityKit, and AVFoundation, as well as using the Combine and Speech packages to transcribe the user's spoken prompt and ensure a smooth experience.

## What's next for Aira
Our next goal is to improve the model to animate the AR objects generated using 3D stable diffusion. This involves identifying each moving component as a separate object, generating them separately, and then using the contextualisation ability of the instruction agent to understand the desired movement of the components relative to each other, outputting the motion in polar coordinates. Following this, we will further fine-tune and optimise our model to cut down the time it takes to generate the 3D AR models. To improve the UX, we also plan to add arrows visualising the actions the user needs to take.

Deck: <https://www.canva.com/design/DAF9EZRlAW8/lDw9k8mMUDGqLUeVQBfBbw/edit?utm_content=DAF9EZRlAW8&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton>
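The instruction agent's contract is easiest to show as a sketch: given the detected label and the user's question, return exactly three contextualised steps as JSON for the AR layer. `query_llm` is a hypothetical stand-in for the real model call:

```python
import json

def query_llm(prompt):  # hypothetical stand-in for the backend model call
    return ('{"steps": ["Locate the descale button", '
            '"Hold it for 3 seconds", "Run a water cycle"]}')

def instruct(label, question):
    prompt = (
        f"Object: {label}. Question: {question}. "
        "Reply as JSON with key 'steps': exactly three short, ordered instructions, "
        "each phrased so it can anchor one generated AR object beside the real item."
    )
    return json.loads(query_llm(prompt))["steps"]

print(instruct("coffee machine", "How do I descale it?"))
```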
winning
## Inspiration
Two of our teammates have personal experiences with wildfires: one has lived all her life in California, and one was exposed to a fire in his uncle's backyard in the same state. We found the recent wildfires especially troubling and decided to focus our efforts on doing what we could with technology.

## What it does
CacheTheHeat uses computer vision algorithms to detect fires in camera/video feeds, in particular those mounted on households for surveillance purposes. It calculates the relative size and rate of growth of the fire in order to alert nearby residents if the wildfire may pose a threat. It hosts a database with multiple video sources so that warnings can be far-reaching and effective.

## How we built it
This software detects the sizes of possible wildfires, and the rate at which those fires are growing, using computer vision (OpenCV). The web application gives a pre-emptive warning (phone alerts) to nearby individuals using Twilio. It has a MongoDB Stitch database of both surveillance-type videos (as from campgrounds, drones, etc.) and neighborhood cameras that can be continually added to, depending on which neighbors/individuals sign the agreement form using DocuSign. We hope this will help creatively deal with wildfires in the future.

## Challenges we ran into
Among the difficulties we faced, we had the most trouble understanding how the various relevant DocuSign solutions applied to our project's specifications. For example, our team wasn't sure how we could use something like the text tab to enhance the features of our client agreement. One other thing we were not fond of was that DocuSign logged us out of the sandbox every few minutes, which was sometimes a pain. Moreover, the development environment sometimes seemed a bit cluttered at a glance, which discouraged us from using their API. There was a bug in Google Chrome where Authorize.Net (DocuSign's affiliate) could not process payments due to browser-specific misbehavior; this was brought to the attention of DocuSign staff. One more unfortunate thing was that DocuSign's GitHub examples included certain required fields for initialization, but the descriptions of these fields would differ between code examples and documentation. For example, "ACCOUNT\_ID" might be a synonym for "USERNAME" (not exactly, but same idea).

## Why we love DocuSign
Apart from the fact that the mentorship team was amazing and super helpful, our team noted a few things about their API. Helpful documentation exists on GitHub, with up-to-date code examples clearly outlining the dependencies required as well as offering helpful comments. Most importantly, DocuSign contains everything from A to Z for all enterprise signature/contractual document processing needs. We hope to continue hacking with DocuSign in the future.

## Accomplishments that we're proud of
We are very happy to have experimented with the power of enterprise solutions while hacking for resilience. Wildfires, among the most devastating natural disasters in the US, have had a huge impact on residents of states such as California. Our team has been working hard to leverage existing residential video footage systems for high-risk wildfire neighborhoods.

## What we learned
Our team members learned concepts of various technical and fundamental utility. To list a few: MongoDB, Flask, Django, OpenCV, DocuSign, and fire safety.

## What's next for CacheTheHeat.com
CacheTheHeat is excited to commercialize this solution with the support of the Wharton Risk Center if possible.
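A simplified OpenCV sketch of the detection loop: threshold fire-coloured pixels in HSV, track the flame area frame to frame, and flag rapid growth. The colour bounds and thresholds are illustrative, not our tuned values:

```python
import cv2
import numpy as np

LOWER = np.array([0, 120, 180])     # reddish-orange, bright (HSV) - illustrative
UPPER = np.array([35, 255, 255])

def fire_area(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    return int(mask.sum() // 255)   # count of fire-coloured pixels

cap = cv2.VideoCapture("surveillance.mp4")
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    area = fire_area(frame)
    if prev and area > 500 and area > 1.2 * prev:   # big enough and growing fast
        print("ALERT: fire growing - trigger Twilio phone alerts")
    prev = area
```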
## Inspiration
The California wildfires have proven how deadly fires can be; the mere smoke from fireworks can set ablaze hundreds of acres. What starts as a few sparks can easily become the ignition for a fire capable of destroying homes and habitats. California is just one example; fires can be just as dangerous in other parts of the world, even if less frequent. Approximately 300,000 people were affected by fires, and 14 million people were affected by floods, last year in the US alone. These numbers will continue to rise due to issues such as climate change. Preventative equipment and forecasting are only half of the solution; the other half is education. People should be able to navigate any situation they may encounter. However, there are inherent shortcomings in the traditional teaching approach, and our game – S.O.S. – looks to bridge that gap by mixing fun and education.

## What it does
S.O.S. is a first-person story-mode game that allows the player to choose between two scenarios: a home fire or a flooded car. Players are presented with multiple options designed either to help the player get out of the situation unscathed or to impede their escape. For example, players may choose between breaking open the car windows in a flood or waiting inside for help, based on their experience and knowledge. Through trial and error, and "bulletin boards" of information gathered from national institutions, players learn about fire and flood safety. We hope to make learning safety rules fun and engaging, straying from conventional teaching methods to create an overall pleasant experience and, ultimately, save lives.

## How we built it
The game was built using C#, Unity, and Blender. Some open-resource models were downloaded and, if needed, textured in Blender. These models were then imported into Unity, which was laid out using ProBuilder and ProGrids. Afterward, C# code was written using the Visual Studio IDE integrated with Unity.

## Challenges we ran into
Some challenges we ran into include learning how to use Unity and code in C#, as well as texturing models in Blender and Unity itself. We ran into problems such as models not having the right textures or having the wrong UV maps, so one of our biggest challenges was troubleshooting all of these issues. Furthermore, the C# code proved to be a challenge, especially with buttons and the physics component of Unity. Time was the biggest challenge of all, forcing us to cut down on our initial idea.

## Accomplishments that we're proud of
There are many accomplishments we as a team are proud of from this hackathon. Overall, our group has become much more adept with 3D software and coding.

## What we learned
We expanded our knowledge of making games in Unity, coding in C#, and modeling in Blender.

## What's next for SOS: Saving Our Souls
Next, we plan to improve the appearance of our game. The maps, lighting, and animation could use some work. Furthermore, more scenarios can be added, such as the Covid-19 scenario we had initially planned.
## Inspiration
Global disasters have always posed serious threats and challenges to all living creatures on our planet. We feel the urge to put our capabilities to work helping everyone in the world against potential dangers.

## What it does
Our AI system informs people of ways to better prepare and recover before, during, and after natural disasters. The admin system allows the government to quickly communicate with local organizations to provide resources to people in need immediately. Our AI chatbot provides seamless, accurate responses to people's questions and needs, mitigating organizations' staffing shortages.

## How we built it
We brainstormed all potential issues and possible solutions to those issues, and collaborated to create a design that would endure. We assigned tasks to each team member, efficiently creating a Facebook chatbot and hosting it on a dynamic web app with a user-friendly interface.

## Challenges we ran into
Understanding the IBM Cloud compute service was a challenge. We had issues with deploying the Node.js application on IBM. Another challenge was understanding IBM Watson and how it might be integrated into the bot. In DocuSign, we ran into multiple challenges, mainly focused on understanding the API architecture and the process by which DocuSign works. The concept is interesting and seems very straightforward to use, but we first needed to understand the technical flow behind the scenes. Since we had never worked on digitally signing documents, but were familiar with the concept of signing Git commits, our initial impression was that we would need cryptographic knowledge. But with the help of DocuSign mentors, video demos, and their open-source sample applications, we finally understood the flow and implemented it.

## Accomplishments that we're proud of
We were able to understand the concepts behind IBM Cloud and its tooling, and how to deploy our Node.js application to IBM Cloud. The experience of using DocuSign was also very interesting for us, as we were able to smooth out and simplify the contract process.

## What we learned
We learned to use IBM compute services, the DocuSign API, and a lot of new technologies. We achieved our goal of connecting people who are stranded due to disasters. We learned how to use technology to bridge the gap and improve what we need.

## What's next for Rescue-Bot
We are going to implement emergency alert notifications, broadcast alert messages, and hardware support to build better connections between people and communities.
winning
## Inspiration
Algorithm interviews... suck. They're more a test of sanity (and your willingness to "grind") than a true performance indicator. That being said, large language models (LLMs) like Cohere and ChatGPT are rather *good* at doing LeetCode, so why not make them do the hard work...?

Introducing: CheetCode. Our hack takes the problem you're currently screensharing, feeds it to an LLM target of your choosing, and gets the solution. But obviously, we can't just *paste* in the generated code. Instead, we wrote a non-malicious (we promise!) keylogger to override your key presses with the next character of the LLM's given solution. Mash your keyboard and solve hards with ease.

The interview doesn't end there, though. An email notification will appear on your computer afterwards with the subject "Urgent... call asap." Who is it? It's not mom! It's CheetCode, with a detailed explanation including both the time and space complexity of your code. Ask your interviewer to 'take this quick' and then breeze through the follow-ups.

## How we built it
The hack is the combination of three major components: a Chrome extension, a Node (actually... Bun) service, and a Python script.

* The **extension** scrapes LeetCode for the question and function header, and forwards the context to the Node (Bun) service
* Then, the **Node service** prompts an LLM (e.g., Cohere, gpt-3.5-turbo, gpt-4) and forwards the response to a keylogger written in Python
* Finally, the **Python keylogger** enables the user to toggle cheats on (or off...), and replaces the user's input with the LLM output, seamlessly (see the sketch below)

(Why the complex stack? Well... the extension makes it easy to interface with the DOM, the LLM prompting is best written in TypeScript to leverage the [TypeChat](https://microsoft.github.io/TypeChat/) library from Microsoft, and Python had the best tooling for creating a fast keylogger.)

(P.S. hey Cohere... I added support for your LLM to Microsoft's project [here](https://github.com/michaelfromyeg/typechat). gimme job plz.)

## Challenges we ran into
* HTML `Collection` data types are not fun to work with
* There were no actively maintained cross-platform keyloggers for Node, so we needed another service
* LLM prompting is surprisingly hard... the models were not as smart as we were hoping (especially at producing 'reliable' and consistent outputs)

## Accomplishments that we're proud of
* We can now solve any LeetCode hard in 10 seconds
* What else could you possibly want in life?!
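For the curious, here is a minimal sketch of the key-override idea using `pynput`. It is an illustration, not our exact implementation: the file name is a placeholder, the `pending` counter is a simplification that ignores modifier keys (an uppercase character would inject an extra Shift press), and real cross-platform key suppression is considerably messier.

```python
from pynput import keyboard

solution = open("solution.py").read()  # code returned by the Node (Bun) service
cursor = 0    # next character of the solution to emit
pending = 0   # injected key presses the listener should skip
kb = keyboard.Controller()

def on_press(key):
    global cursor, pending
    if pending:               # ignore events we injected ourselves
        pending -= 1
        return
    if cursor >= len(solution):
        return
    if getattr(key, "char", None) is None:
        return                # let modifiers and shortcuts through untouched
    pending += 2              # backspace press + solution-character press
    kb.press(keyboard.Key.backspace)   # erase what the user actually typed
    kb.release(keyboard.Key.backspace)
    kb.type(solution[cursor])          # ...and emit the next solution character
    cursor += 1

with keyboard.Listener(on_press=on_press) as listener:
    listener.join()
```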
## Inspiration
We are part of a generation that has lost the art of handwriting. Because of the clarity and ease of typing, many people struggle with clear shorthand. Whether you're a second-language learner or public education failed you, we wanted to come up with an intelligent system for efficiently improving your writing.

## What it does
We use an LLM to generate sample phrases, sentences, or character strings that target letters you're struggling with. You can input the writing as a photo or directly on the webpage. We then use OCR to parse, score, and give you feedback towards the ultimate goal of character mastery!

## How we built it
We used a simple front end utilizing flexbox layouts, the p5.js library for canvas writing, and plain JavaScript for logic and UI updates. On the backend, we hosted and developed an API with Flask, allowing us to receive responses from API calls to state-of-the-art OCR and the newest ChatGPT model. We also manage user scores with sequence-alignment algorithms written in Python (sketched below).

## Challenges we ran into
We really struggled with our concept, tweaking it and revising it until the last minute! However, we believe this hard work really paid off in the elegance and clarity of our web app, UI, and overall concept.

..also sleep 🥲

## Accomplishments that we're proud of
We're really proud of our text recognition and matching. These intelligent systems were not easy to use or customize! We also think we found a creative use for the latest ChatGPT model, flexing its utility in phrase generation for targeted learning. Most importantly, though, we are immensely proud of our teamwork, and how everyone contributed pieces to the idea and to the final project.

## What we learned
3 of us have never been to a hackathon before! 3 of us have never used Flask before! All of us have never worked together before! From working with an entirely new team to utilizing specific frameworks, we learned A TON... and also, just how much caffeine is too much (hint: NEVER).

## What's next for Handwriting Teacher
Handwriting Teacher was originally meant to teach Russian cursive, a much more difficult writing system. (If you don't believe us, look at some pictures online.) Taking a smart and simple pipeline like this and updating the backend intelligence allows our app to incorporate cursive, other languages, and stylistic aesthetics. Further, we would be thrilled to implement a user authentication system and a database, allowing people to save their work, gamify their learning a little more, and feel a more extended sense of progress.
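Our scoring algorithm is custom, but the core idea - align the OCR output against the target phrase and find which characters were missed - can be sketched with Python's standard library; the names below are illustrative.

```python
from difflib import SequenceMatcher

def score_attempt(target: str, ocr_text: str):
    """Align OCR'd handwriting against the target phrase; return an overall
    score plus the target characters that went unmatched, which feed into
    the next round of targeted phrase generation."""
    matcher = SequenceMatcher(None, target, ocr_text)
    matched = [False] * len(target)
    for block in matcher.get_matching_blocks():
        for i in range(block.a, block.a + block.size):
            matched[i] = True
    missed = sorted({target[i] for i, ok in enumerate(matched) if not ok} - {" "})
    return matcher.ratio(), missed

score, weak_letters = score_attempt("the quick brown fox", "the qvick brwn fox")
# score ~0.92, weak_letters == ['o', 'u'] -> practice those letters next
```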
## Inspiration
Imagine a world where learning is as easy as having a conversation with a friend. Picture a tool that unlocks the treasure trove of educational content on YouTube, making it accessible to everyone, regardless of their background or expertise. This is exactly what our hackathon project brings to life.

* Massive open online courses are great resources for bridging the gap in educational inequality.
* Searching a lengthy video for the 60 seconds of content you actually need is frustrating and demotivating.
* We want to support students and help unlock their potential.

## What it does
Think of our platform as your very own favorite personal tutor. Whenever a question arises during your video journey, don't hesitate to hit pause and ask away. Our chatbot is here to assist you, offering answers in plain, easy-to-understand language. Moreover, it can point you to external resources and suggest specific parts of the video for a quick review, along with relevant sections of the accompanying text. So, explore your curiosity with confidence - we've got your back!

* Analyzes the entire video content 🤖 Learn with organized structure and high accuracy
* Generates concise, easy-to-follow conversations ⏱️ Say goodbye to wasted hours watching long videos
* Generates interactive quizzes and personalized questions 📚 Engaging and thought-provoking
* Summarizes key takeaways, explanations, and discussions tailored to you 💡 Provides tailored support
* Accessible to anyone with an internet connection 🌐 Accessible and convenient

## How we built it
Vite and React.js on the front end and Flask on the back end, using Cohere's command-nightly model and similarity ranking (a sketch of the batched-embedding ranking follows below).

## Challenges we ran into
* **Increased application efficiency by 98%:** We reduced the number of API calls, lowering load time from 8.5 minutes to under 10 seconds. The challenge we ran into was not accounting for the time taken by every API call. Originally, our backend made over 500 calls to Cohere's API to embed text every time a transcript section was initiated, and repeated them whenever a new prompt was made - each API call took about one second, adding 8.5 minutes in total. By reducing the number of API calls and using efficient practices, we brought the time to under 10 seconds.
* **Handling single prompts of over 5,000 words:** Scraping longer YouTube transcripts efficiently was complex. We solved it by integrating YouTube APIs and third-party dependencies, enhancing speed and reliability. Uploading multi-prompt conversations with large initial prompts to MongoDB was also challenging; we optimized data transfer to maintain a smooth user experience.

## Accomplishments that we're proud of
We created a practical full-stack application that we will use in our own time.

## What we learned
* **Front end:** State management with React, third-party dependencies, UI design.
* **Integration:** Scalable and efficient API calls.
* **Back end:** MongoDB, Langchain, the Flask server, error handling, optimizing time complexity, and using Cohere AI.

## What's next for ChicSplain
We envision ChicSplain as more than just an AI-powered YouTube chatbot; we envision it as a mentor, teacher, and guardian that will be no different in functionality and interaction from real-life educators and guidance - but for anyone, anytime, and anywhere.
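A minimal sketch of the efficiency fix: embed every transcript chunk in one batched call up front, cache the vectors, and rank them per question with cosine similarity. The API key and chunk texts are placeholders, and the embed model parameter is omitted for brevity.

```python
import numpy as np
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder
chunks = ["transcript section 1 ...", "transcript section 2 ..."]  # illustrative

# One batched embed call at load time, instead of one call per chunk per prompt.
chunk_vecs = np.array(co.embed(texts=chunks).embeddings)
chunk_norms = np.linalg.norm(chunk_vecs, axis=1)

def top_chunks(question: str, k: int = 3):
    """Rank cached chunk embeddings against the question by cosine similarity."""
    q = np.array(co.embed(texts=[question]).embeddings[0])
    sims = chunk_vecs @ q / (chunk_norms * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]
```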
winning
## Inspiration
In North America alone, there are over 58 million people of all ages experiencing limited hand mobility. Our team strives to better facilitate daily life for people who have the daunting task of overcoming their disabilities. We identified a disconcerting trend: rehabilitation options for people with limited motor function are too expensive. We challenged ourselves to develop an inexpensive therapeutic device to train fine motor skills while preserving normal daily routines.

## What it does
Xeri is a custom input device that translates simple gestures into commands for a computer. Xeri requires the user to touch a specific digit to their thumb in order to perform actions like right click, left click, scroll up, and scroll down. This touching action stimulates the muscle-mind connection, enhancing its ability to adapt to changes. With prolonged and consistent use of Xeri, patients will see an improvement in their motor control. Xeri transforms the simple routine action of browsing the internet into a therapeutic medium that feels normal to the patient.

## How we built it
Xeri is composed of an ESP32 Core v2, an MPU6050 gyroscope, four analog sensors, and four custom analog contacts. The ESP32 provides the Bluetooth connection, the MPU allows for tracking of the hand, and the sensors and contacts allow for touch control. Xeri was developed in three prototype stages: P0, P1, and P2. In P0, we developed our custom analog sensors for our hands, created a rudimentary cardboard brace, and determined how to evaluate input. In P1, we replaced our cardboard brace with a simple glove, modified to fit the needs of the hardware. In P2, we incorporated all elements of our hardware onto the glove and fully calibrated our gyroscope. Ultimately, we created a device that processes therapeutic motion into computer input.

## Challenges we ran into
Xeri's development was not a trouble-free experience. We first encountered issues developing our custom analog contacts: we had trouble figuring out the necessary capacitance for the circuit, but through trial and error, we eventually succeeded. The biggest setback we had to mitigate was integrating our gyroscope into our code. The gyroscope we were using was not only cheap, but the chip was also defective. Our only solution was to work around this damage by reverse-engineering the supporting libraries once again.

## Accomplishments that we're proud of
Our greatest achievement by far was creating a fully operable glove that can handle custom inputs. Although Xeri is focused on hand mobility, the device could be adapted for a variety of uses. Xeri's custom analog contacts were another major achievement of ours, given their ability to measure an analog signal using a Red Bull can. One of our developers was very inspired by what we had built and spent some time researching and designing an Apple Watch app that enables the watch to function as a similar device. This implementation can be found in our GitHub for others to reference.

## What we learned
During the development of Xeri, it was imperative to be innovative with the little hardware we were able to obtain. We learned how to read and reverse-engineer hardware libraries. We also discovered how to create our own analog sensors and contacts. Overall, this project was incredibly rewarding, as we developed a firm grasp of hardware devices.

## What's next for Xeri?
Xeri has a lot of room for improvement. Our first future development will be to make the device fully wireless, as we did not have a remote power source to utilize. Later updates would include replacing the gyroscope, a slimmed-down and fully uniform version, adjustable resistance on each finger to promote muscle growth, and more custom inputs for greater accessibility.
## Inspiration
I was inspired to make this device while sitting in physics class. I felt compelled to take something I learned inside the classroom and apply my education to something practical. Growing up, I always remembered playing with magnetic kits and loved the feeling of repulsion between magnets.

## What it does
There is a base layer of small magnets all taped together so the north pole faces up. Hall effect sensors measure the variances in the magnetic field created by a magnet attached to the user's finger. This allows the device to track the user's finger and determine how they are interacting with the upward-pointing magnetic field.

## How I built it
It is built using the Intel Edison. Each Hall effect sensor is either on or off depending on whether there is a magnetic field pointing down through the face of the black plate. This determines where the user's finger is. From there, the analog data is sent via serial port to a program on the computer that demonstrates that it works, taking the data and mapping the motion of the object (see the sketch below).

## Challenges I ran into
There were many challenges I faced. Two of them dealt with just the hardware. I bought the wrong type of sensors: these are threshold sensors, meaning they are either on or off, instead of linear sensors that give a voltage proportional to the strength of the magnetic field around them, which would have made the device more accurate. The other challenge was dealing with a lot of very small, worn-out magnets. I had to find a way to tape and hold them all together, because they sit in an unstable configuration when forming an almost uniform magnetic field on the base. Another problem I ran into was dealing with the Edison: I was planning on just controlling the mouse to show that it works, but the mouse library only works with the Arduino Leonardo. I had to come up with a way to transfer the data to another program, which is how I ended up working with serial ports; I initially tried mapping the data into a Unity game.

## Accomplishments that I'm proud of
I am proud of creating a hardware hack that I believe is practical. I used this device as a way to prove the concept of creating a more interactive environment for the user with a sense of touch, rather than tools like the Kinect and Leap Motion that track your motion in thin air without any real interaction. Some areas where this concept could be useful are learning environments and physical therapy, helping people learn to do things again after a tragedy, since it is always better to learn with a sense of touch.

## What I learned
I had a grand vision of this project from thinking about it beforehand, and I thought it was going to work out great in theory! I learned how to adapt to many changes and overcome them with limited time and resources. I also learned a lot about dealing with serial data and how the Intel Edison works at a machine level.

## What's next for Tactile Leap Motion
Creating a better prototype with better hardware (stronger magnets and more accurate sensors).
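A sketch of the computer-side serial reader, assuming the Edison streams one comma-separated sample per line (one bit per Hall effect sensor); the port, baud rate, and grid size are assumptions.

```python
import serial  # pyserial

GRID_W, GRID_H = 4, 4  # assumed sensor grid dimensions
ser = serial.Serial("/dev/ttyUSB0", 9600)  # port and baud are assumptions

def read_finger_position():
    """Return the (x, y) centroid of all triggered sensors, or None."""
    bits = [int(b) for b in ser.readline().decode().strip().split(",")]
    active = [(i % GRID_W, i // GRID_W) for i, b in enumerate(bits) if b]
    if not active:
        return None  # the finger's magnet is not over the plate
    x = sum(c for c, _ in active) / len(active)
    y = sum(r for _, r in active) / len(active)
    return x, y
```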
## Inspiration
This generation of technological innovation and human factors design focuses heavily on designing for individuals with disabilities. As such, the inspiration for our project was an application of object detection (Darknet YOLOv3) for visually impaired individuals. This user group in particular has a limited visual modality, which the project aims to supplement.

## What it does
Our project aims to provide the visually impaired with sufficient awareness of their surroundings to maneuver. We created a head-mounted prototype that gives the user real-time awareness of their surroundings through haptic feedback. Our smart wearable uses a computer-vision ML algorithm (a convolutional neural network) to scan the user's environment and provide haptic feedback as a warning of a nearby obstacle. These obstacles are further categorized by our algorithm as dynamic (moving, live) or static (stationary) objects. For our prototype, we filtered all detected objects down to the nearest one to provide immediate feedback to the user, with stronger or weaker haptic feedback depending on whether the object is near or far (a sketch of this logic follows after this entry).

## Process
While our idea is relatively simple in nature, we had no idea going in just how difficult the implementation would be. Our main goal was to build a minimum deliverable product (MDP) capable of vibrating based on the position, type, and distance of an object. From there, we had stretch goals like distance calibration, optimization/performance improvements, and a more complex human interface.

Originally, the processing chip (the one with the neural network preloaded) was intended to be the Huawei Atlas. With the additional design optimizations unique to neural networks, it was perfect for our application. After 5 or so hours of tinkering with no progress, however, we realized this would be far too difficult for our project. We turned to a Raspberry Pi and uploaded Google's pre-trained image analysis network. To get the necessary IO for the haptic feedback, we also hooked it up to an Arduino connected to a series of haptic motors. This showed much more promise than the Huawei board, and we had a functional object identifier in no time. The rest of the night was spent figuring out data transmission to the Arduino board and the corresponding decoding/output. With only 3 hours to go, we still had to finish debugging and assemble the entire hardware rig.

## Key Takeaways
We learned just how important having a minimum deliverable product is. Our solution could be executed with varying levels of complexity, and we wasted a significant amount of time on unachievable pipe dreams instead of focusing on the important base implementation. The other key takeaway of this event was to be careful with new technology. Since the Huawei boards were so new and relatively complicated to set up, they were incredibly difficult to use. We did not even use the Huawei Atlas in our final implementation, meaning none of that work contributed to our MDP.

## Possible Improvements
If we had more time, there are a few things we would seek to improve. First, the biggest improvement would be a better processor. Either a Raspberry Pi 4 or a suitable replacement would significantly improve the processing framerate. This would make it possible to provide robust real-time tracking instead of tracking with significant delays. Second, we would expand the recognition capabilities of our system.
Our current system only filters for a very specific set of objects, particular to an office/workplace environment. Our ideal implementation would be a system applicable to all aspects of daily life, meaning more objects recognized with higher confidence. Third, we would add a robust distance measurement tool. The current project uses apparent object width to estimate the distance to an object. Unfortunately, this is not always accurate, and it could be improved with minimal effort.
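A sketch of the nearest-object haptic logic described above: estimate distance from a detection's pixel width with the pinhole-camera relation, keep the closest object, and map its distance to a motor strength. The focal length, object widths, and 5 m cutoff are all assumptions.

```python
FOCAL_PX = 600.0  # assumed camera focal length in pixels; needs calibration
REAL_WIDTH_M = {"person": 0.5, "chair": 0.45, "door": 0.9}  # rough widths

def nearest_detection(detections):
    """detections: (label, pixel_width) pairs from the object detector.
    Returns (label, estimated distance in meters) for the closest object."""
    best = None
    for label, px_width in detections:
        if label not in REAL_WIDTH_M or px_width <= 0:
            continue
        dist = REAL_WIDTH_M[label] * FOCAL_PX / px_width  # pinhole estimate
        if best is None or dist < best[1]:
            best = (label, dist)
    return best

def haptic_pwm(distance_m, cutoff_m=5.0):
    """Map distance to a 0-255 motor value sent to the Arduino: near = strong."""
    return int(255 * max(0.0, 1.0 - distance_m / cutoff_m))
```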
partial
## Inspiration
When we were leaving our Airbnb in Boston to head towards Harvard, we met a jolly guy named Calvin who was standing just in front of the backyard. We learned that he had dyslexia. He was staring at a watermelon and giving all of us a beautiful smile, but what made me give it a second thought was the way he was speaking, which was difficult to interpret. When we understood that what he was trying to say was "Good luck, have a great time," I couldn't stop thinking about it.

## What it does
Easylexia is a mobile application that makes learning easy for children with dyslexia. We have included games that help them learn the alphabet, numbers, and stories. The children learn by playing games in our app, so they enjoy themselves while learning. We also have a website with similar functionality.

## How we built it
All the coding was done in Java, since we built an Android app. For the website, we used React. We hosted the website on Heroku; although we had been given a domain name by Domain.com, we faced certain issues while hosting it there, so we had to use Heroku. We also built a user experience design for it using Adobe XD.

## Challenges we ran into
There were many challenges. First of all, our team lacked someone with experience in ML/deep learning, so we had to think of an idea that did not require those skills. Also, working on a project with new people was a bit challenging.

## Accomplishments that we're proud of
We gave our best and were able to complete our project. We also met new people during the event, which was a great experience.

## What we learned
We learned to work as a team, and how to manage a lot of work within a limited time period.

## What's next for Easylexia
We plan to add a location-based feature for interactive sessions, in which the people using this app in a specific locality will gather at a place and use our app for collective learning.

## Website Link
<http://dyslexian.herokuapp.com/game>
## Inspiration
Our inspiration comes from a true story. One of us was recently stuck in a convenience store during a robbery. While thankfully no one was hurt, it took a huge amount of time for the shopkeeper to report the crime to the police, and the robbers escaped. Several studies have established that if the police take less than 5 minutes to respond to a call involving crime, the probability of making an arrest is **60 percent**. When the time exceeds 5 minutes, the arrest probability drops to approximately **20 percent**. Research has also shown that the median delay for citizen reporting is 10 minutes and that almost three-quarters of crime-related calls are delayed beyond the 5-minute mark. Our goal is to reduce the response time for reporting crimes.

## What it does
**1. Enter Location:** The product owner enters their current location through Google Maps.
**2. Nearest Police Stations:** Our product detects the top 5 nearest police stations and stores their contact numbers.
**3. AI Model Detects Weapon:** In the case of a crime, the object detection model detects a gun and takes a screenshot of the scene along with the location.
**4. Police Are Informed:** An MMS containing the location and an image of the crime scene is sent instantly to the nearest police station.

## How we built it
We used **React.js** to build the landing page and the front end of the application. The **Google Maps API** was used to render the map along with the current location of the user. We used Google Maps' nearby search feature to find the top 5 closest police stations and get their contact information. For the gun detection model, we trained a **YOLOv5 object detection** model with pre-annotated images of different guns. We ran Python scripts to preprocess the data and fine-tune the model, achieving an accuracy of around **84%**. The backend is built in Python Flask and uses the **Twilio API** to send multimedia messages, including the screenshot of the detected scene, to the nearest police station (sketched below).

## Challenges we ran into
This was our first time working with computer vision and training a YOLOv5 model with annotated images. It was daunting at first, but we made it work by reading the documentation, and it worked out pretty well in the end!

## Accomplishments that we're proud of
We are happy to have turned our idea into a tangible application. It is something that might be useful to a lot of businesses and stores, and could potentially reduce crime and save lives!

## What we learned
We learned that when your product has the potential to make a positive impact on society, you don't get tired and everyone on your team works twice as hard. When the people you work with believe in the product's vision and mission, they stay awake for 24+ hours (shoutout to Red Bull)!

## What's next for VigiLENS
We have thought of many more ideas for VigiLENS, such as:

* a custom hand signal to call the police
* detecting a wider variety of weapons
* building a better, more scalable computer vision model with higher accuracy
* video-to-text transformation to describe suspects and the scene to the police
* detecting other forms of crime, such as theft
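A condensed sketch of the detection-to-MMS path. The weights file, credentials, and phone numbers are placeholders, and the screenshot must be hosted at a public URL for Twilio to attach it.

```python
import torch
from twilio.rest import Client

model = torch.hub.load("ultralytics/yolov5", "custom", path="gun_weights.pt")
twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholders

def check_frame(image_path, station_number, lat, lng, screenshot_url):
    detections = model(image_path).pandas().xyxy[0]  # one row per detection
    if (detections["name"] == "gun").any():
        twilio.messages.create(
            to=station_number,
            from_="+15550000000",            # your Twilio number
            body=f"Possible weapon detected at {lat},{lng}",
            media_url=[screenshot_url],      # publicly hosted screenshot
        )
```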
## Inspiration
Cliff is dyslexic, so reading is slow and laborious for him, which makes school really difficult. But he loves books and listens to 100+ audiobooks a year. However, most books don't have an audiobook - especially not school textbooks or the articles passed out in class. This is an issue not only for the 160M people in the developed world with dyslexia but also for the 250M people with low vision acuity. After moving to the U.S. at age 13, Cliff also needed something to help him translate assignments he didn't understand in school. Most people become farsighted as they get older, but often don't have their glasses with them, which makes it hard to read forms when needed. Being able to listen instead of reading is a really effective solution here.

## What it does
Audiobook Maker allows a user to scan a physical book with their phone to produce a digital copy that can be played as an audiobook instantaneously in whatever language they choose. It also lets you read the book with text at whatever size you like, to help people who have low vision acuity or are missing their glasses.

## How we built it
In Swift and iOS using Google ML and a few clever algorithms we developed to produce high-quality scanning and high-quality reading with low processing time.

## Challenges we ran into
We had to redesign a lot of the features to make the app's user experience flow well and to allow the processing to happen fast enough.

## Accomplishments that we're proud of
We reduced the time it took to scan a book by 15x after one design iteration, and reduced the time it took to OCR (Optical Character Recognition) the book from over an hour to effectively instantaneous using an algorithm we built. We allow the user to have audiobooks on their phone, in multiple languages, that take up virtually no space.

## What we learned
How to work with Google ML, and how to work around OCR processing time. How to suffer through git Xcode Storyboard merge conflicts, and how to use Amazon's AWS/Alexa machine learning platform.

## What's next for Audiobook Maker
Deployment and use across the world by people who have dyslexia or low vision acuity, who are learning a new language, or who just don't have their reading glasses but still want to function. We envision our app being used primarily for education in schools - specifically schools with low-income populations who can't afford to buy multiple books or audiobooks in multiple languages and formats.

## TreeHacks themes
TreeHacks education vertical > personalization > learning styles (build a learning platform tailored to the learning styles of auditory learners) - I'm an auditory learner; I've dreamed of a tool like this since I was 8 years old and struggling to learn to read. I'm so excited that now it exists and every student with dyslexia or a learning difference will have access to it.

TreeHacks education vertical > personalization > multilingual education (English-as-a-second-language students often get overlooked. Are there ways to leverage technology to create more open, multilingual classrooms?) - Our software allows any book to become polylingual.

TreeHacks education vertical > accessibility > refugee education (What are ways technology can be used to bring content and make education accessible to refugees? How can we make the transition to education in a new country smoother?) - Make it so they can listen to material in their mother tongue if needed, or have a voice read along with them in English.
Make it so that they can carry their books wherever they go by scanning a book once and then having it for life.

TreeHacks education vertical > language & literacy > mobile apps for English literacy (How can you build mobile apps to increase English fluency and literacy amongst students and adults?) - One of the best ways to learn how to read is to listen to someone else doing it and to follow along yourself. Audiobook Maker lets you do that. From a practical perspective, learning how to read is hard, and it is difficult for an adult learning a new language to achieve proficiency and a high reading speed. To bridge that gap, Audiobook Maker makes sure that every person can understand and learn from any text they encounter.

TreeHacks education vertical > language & literacy > in-person learning (many people want to learn second languages) - Audiobook Maker allows users to live in a foreign country and understand more of what is going on. It allows users to challenge themselves to read or listen to more of their daily work in the language they are trying to learn, and it can help users studying a foreign language whenever the meaning of a text in a book or elsewhere is not clear.

We worked a lot with Google ML and Amazon AWS.
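Our pipeline is native Swift with Google ML, but the scan-to-speech idea can be sketched in a few lines of Python: pytesseract and gTTS stand in for the on-device OCR and synthesis, and the translation step is left as a placeholder.

```python
import pytesseract
from PIL import Image
from gtts import gTTS

def page_to_audio(image_path: str, lang: str = "en") -> str:
    """OCR one scanned page and save it as spoken audio."""
    text = pytesseract.image_to_string(Image.open(image_path))
    # A translate(text, target_lang) step would go here for cross-language
    # listening; omitted in this sketch.
    out_path = image_path + ".mp3"
    gTTS(text=text, lang=lang).save(out_path)
    return out_path
```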
losing
## Inspiration
Kingston's website is the place to go when you have questions about life in Kingston or are searching for events going on in the city, but navigating through hundreds of the city's webpages for an answer can be gruelling. We were all interested in AI and wanted to challenge ourselves to build a chatbot website.

## What it does
Kingsley is a chatbot built to help residents of Kingston with their inquiries. It takes user input and responds with a helpful answer, along with a link to where more information can be found on the City of Kingston website, if applicable. It has an option for voice input and output for greater accessibility.

## How we built it
* Kingsley uses a GPT-3 model fine-tuned on data from the City of Kingston website.
* The data was scraped using Beautiful Soup.
* A GloVe model was used to find website links relevant to the user's question.
* Jaccard similarity was used to find relevant text that specifically mentioned key words in the user's question (sketched below).
* Relevant texts were narrowed down and passed as part of the prompt to GPT-3 for an answer completion.
* The website, along with the voice functionality, was created using React.

## Challenges we ran into
The City of Kingston website has a huge number of pages, many of which are archived, calendars, or not very useful. OpenAI's API, on the other hand, only allows a limited context, so to let the bot read relevant pages as its context, we had to go through multiple rounds of data filtering to find the relevant pages.

We spent a great amount of time implementing speech-to-text and text-to-speech for our web app. Many of the solutions on the internet were of little help, and we tried several npm packages before succeeding in the end.

## Accomplishments that we're proud of
We successfully made a working chatbot! And it references real facts! (sometimes)

## What we learned
Throughout the project, we gained experience working with various APIs. We learned how to use and combine different natural language processing techniques to optimize accuracy and computation time. We learned the React hooks useState and useEffect, JavaScript functions, and how to use React developer tools to debug components in Chrome. We figured out how to link a Flask backend with the frontend app, set up a domain, and use text-to-speech and speech-to-text libraries.

## What's next for Kingsley
Due to free-trial limits, we chose to use the Ada GPT model for our chatbot. In the future, with more credits, we could use a better version of GPT-3 to produce more relevant and helpful results. We are also interested in expanding Kingsley to reference data from other websites. It could also be adapted as an extension or floating popup used directly on top of Kingston's website.
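The Jaccard step is simple enough to show directly - rank pages by word-set overlap with the question and keep only what fits in the model's context window; the names here are illustrative.

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def most_relevant(pages, question, k=3):
    """Rank scraped page texts by Jaccard similarity of word sets with the
    user's question, keeping the top k to fit GPT-3's context limit."""
    q_words = set(question.lower().split())
    return sorted(pages,
                  key=lambda p: jaccard(set(p.lower().split()), q_words),
                  reverse=True)[:k]
```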
## Inspiration
<https://www.youtube.com/watch?v=lxuOxQzDN3Y>

Robbie's story stuck with me as a reminder of the endless possibilities of technology. He was diagnosed with muscular dystrophy, which prevented him from having full control of his arms and legs. He was gifted a Google Home that turned his house into a voice-controlled machine. We wanted to take this a step further and make computers more accessible for people such as Robbie.

## What it does
We use a Google Cloud-based API that detects words and phrases captured from the microphone input. We then convert those phrases into commands for the computer to execute (see the sketch below). Since the Python script runs in the terminal, it can be used across the computer and all its applications.

## How I built it
The first (and hardest) step was figuring out how to leverage Google's API to our advantage. We knew it was able to detect words from an audio file, but there was more to this project than that. We started piecing together libraries to get access to the microphone, file system, keyboard and mouse events, cursor x,y coordinates, and so much more. We built a large (~30 function) library that could be used to control almost anything on the computer.

## Challenges I ran into
Configuring the many libraries took a lot of time, especially with compatibility issues between macOS and Windows, Python 2 and 3, etc. Many of our challenges were solved by either thinking of a better solution or asking people on forums like Stack Overflow. For example, we wanted to change the volume of the computer using the fn+arrow key shortcut, but Python is not allowed to access that key.

## Accomplishments that I'm proud of
We are proud that we built an alpha version of an application we intend to keep developing, because we believe in its real-world applications. From a technical perspective, I was also proud that we were able to successfully use a Google Cloud API.

## What I learned
We learned a lot about how the machine interacts with different events in the computer and the time dependencies involved. We also learned about the ease of use of Google APIs, which encourages us to use more of them and to encourage others to do so, too. We also learned about the different nuances of speech detection, like how to tell the API to pick the word "one" over "won" in certain contexts, how to change a "one" to a "1", and how to reduce ambient noise.

## What's next for Speech Computer Control
At the moment we manually run this script through the command line, but ideally we would want a more user-friendly experience (a GUI). Additionally, we had developed a Chrome extension that numbers each link on a page after a Google or YouTube search query, so that we would be able to say something like "jump to link 4". We were unable to get the web-to-Python code just right, but we plan on implementing it in the near future.
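The phrase-to-action mapping can be sketched with pyautogui; the transcript string comes from the Google Cloud speech call described above, and the phrase list here is a small illustrative subset of the ~30-function library.

```python
import pyautogui

COMMANDS = {
    "right click": lambda: pyautogui.click(button="right"),
    "left click":  lambda: pyautogui.click(),
    "scroll up":   lambda: pyautogui.scroll(300),
    "scroll down": lambda: pyautogui.scroll(-300),
}

def execute(transcript: str) -> bool:
    """Run the first command whose trigger phrase appears in the transcript."""
    for phrase, action in COMMANDS.items():
        if phrase in transcript.lower():
            action()
            return True
    return False
```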
## Inspiration
Coming from South Texas, two of the team members saw ESL (English as a Second Language) students being denied a proper education. Our team created a tool to break down the language barriers that traditionally perpetuate socioeconomic cycles of poverty by providing detailed explanations of word problems using ChatGPT. Traditionally, people from this group would not have access to tutoring or 1-on-1 support, and this website is meant to rectify that glaring issue.

## What it does
The website takes a photo as input and uses optical character recognition to extract the text of the problem. It then uses ChatGPT to generate a step-by-step explanation for each problem, and this output is tailored to the grade level and language of the student, enabling students from various backgrounds to get assistance they are often denied.

## How we built it
We coded the backend in Python with two parts: OCR and the ChatGPT API integration. We considered the parameters, such as grade and language, that we could implement in our code and eventually include in our queries to ChatGPT to make the result as helpful as possible (a sketch follows below). On the other side of the stack, we built the frontend in React with TypeScript to be as simple and intuitive as possible. It has two sections that clearly show the extracted text and what ChatGPT has generated to assist the student.

## Challenges we ran into
During development, we often struggled with deciding the optimal way to apply different APIs and learning how to implement them; many of these, such as the IBM API, we ended up not using or changing our application around. Through this process, we had to change our high-level plan for the backend functions and consequently reimplement our frontend user interface to fit the new operations. This created the compounding challenge of having to re-establish and discuss new ideas while communicating as a team.

## Accomplishments that we're proud of
We are proud of the website layout. Personally, the team is very fond of the colors and the arrangement of the site's elements. Another thing we are proud of is simply that we have something working, albeit jankily. This was our first hackathon, so we were proud to be able to contribute to it in some form.

## What we learned
One invaluable skill we developed through this project was learning about the unique plethora of available APIs and how we can integrate and combine them to create new, revolutionary products that can help people in everyday life. We not only developed our technical skills, including git familiarity and web development, but also our ability to communicate ideas as a team, and the confidence and creativity to carry an idea from thought to production.

## What's next for Homework Helper
As part of our mission to increase education accessibility and combat common socioeconomic barriers, we hope to use Homework Helper not only to translate and minimize the language barrier, but also to help those with visual and auditory disabilities. Functions we hope to implement include text-to-speech and speech-to-text features, and producing video solutions along with text answers.
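A sketch of how the grade and language parameters might be folded into the query, using the older `openai` Python SDK; the prompt wording and function name are illustrative, not our exact implementation.

```python
import openai

openai.api_key = "YOUR_KEY"  # placeholder

def explain(problem_text: str, grade: str, language: str) -> str:
    """Ask the model for a step-by-step explanation tailored to the student."""
    prompt = (
        f"Explain this word problem step by step for a grade {grade} student. "
        f"Respond in {language}.\n\nProblem: {problem_text}"
    )
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```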
winning
# 🤖🖌️ [VizArt Computer Vision Drawing Platform](https://vizart.tech)

Create and share your artwork with the world using VizArt - a simple yet powerful air drawing platform.

![image](https://user-images.githubusercontent.com/65676392/215330789-e38f2b41-1d7b-45b9-bb4f-09be3ffb9bf8.png)

## 💫 Inspiration

> "Art is the signature of civilizations." - Beverly Sills

Art is a gateway to creative expression. With [VizArt](https://vizart.tech/create), we are pushing the boundaries of what's possible with computer vision and enabling a new level of artistic expression. ***We envision a world where people can interact with both the physical and digital realms in creative ways.***

We started by pushing the limits of what's possible with customizable deep learning, streaming media, and AR technologies. With VizArt, you can draw in the air, interact with the real world digitally, and share your creations with your friends!

> "Art is the reflection of life, and life is the reflection of art." - Unknown

Air writing is made possible with hand gestures, such as a pen gesture to draw and an eraser gesture to erase lines. With VizArt, you can turn your ideas into reality by sketching in the air.

![image](https://user-images.githubusercontent.com/65676392/215330736-0e670fe9-4b35-47f5-a948-a8cc107e78e1.png)

![4](https://user-images.githubusercontent.com/65676392/215330565-568a319a-6175-434e-b2de-5017ea4853c5.png)
![5](https://user-images.githubusercontent.com/65676392/215330572-36799049-dc33-430d-b59b-59ad50eb9e7a.png)

Our computer vision algorithm enables you to interact with the world using a color picker gesture and a snipping tool to manipulate real-world objects.

![](https://user-images.githubusercontent.com/65676392/215331038-055999cb-85ad-4383-8373-f47d3534457d.png)

> "Art is not what you see, but what you make others see." - Claude Monet

The features listed above are great, but what's the point of creating something if you can't share it with the world? That's why we've built a platform for you to showcase your art. You'll be able to record and share your drawings with friends.

![image](https://user-images.githubusercontent.com/65676392/215331079-f676ea67-5e5c-4164-9c92-969919ef285b.png)
![image](https://user-images.githubusercontent.com/65676392/215331103-10c5a04c-f4f8-48a1-b40c-a1ff06202ffa.png)

I hope you will enjoy using VizArt and share it with your friends. Remember: Make good gifts, make good art.

# ❤️ Use Cases

### Drawing Competition/Game
VizArt can be used to host a fun and interactive drawing competition or game. Players can challenge each other to create the best masterpiece using the computer vision features, such as the color picker and eraser.

### Whiteboard Replacement
VizArt is a great alternative to traditional whiteboards. It can be used in classrooms and offices to present ideas, collaborate with others, and make annotations. Its computer vision features make drawing and erasing easier.

### People with Disabilities
VizArt enables people with disabilities to express their creativity. Its computer vision capabilities facilitate drawing, erasing, and annotating without the need for physical tools or contact.

### Strategy Games
VizArt can be used to create and play strategy games with friends. Players can draw their own boards and pieces, and then use the computer vision features to move them around the board. This allows for a more interactive and engaging experience than traditional board games.
### Remote Collaboration
With VizArt, teams can collaborate remotely and in real time. The platform is equipped with features such as the color picker, eraser, and snipping tool, making it easy to interact with the environment. It also has a sharing platform where users can record and share their drawings with anyone. This makes VizArt a great tool for remote collaboration and creativity.

# 👋 Gestures Tutorial

![image](https://user-images.githubusercontent.com/65676392/215335093-d911eaa1-0cc6-4e78-adc7-b63b323b2f74.png)
![image](https://user-images.githubusercontent.com/65676392/215335107-09c394a4-4811-4199-b692-74ef7377b23c.png)
![image](https://user-images.githubusercontent.com/65676392/215335122-8a517c4a-1374-42f0-ac71-6372a63a7075.png)
![image](https://user-images.githubusercontent.com/65676392/215335137-61a1bd8a-a95c-4e0d-806c-53c443dcdd9d.png)
![image](https://user-images.githubusercontent.com/65676392/215335143-93bc8edb-c2b2-4a8f-b562-d67b8524ac66.png)

# ⚒️ Engineering

Ah, this is where even more fun begins!

## Stack

### Frontend
We designed the frontend with Figma, and after a few iterations, we had an initial design to begin working with. The frontend was made with React and TypeScript and styled with Sass.

### Backend
We wrote the backend in Flask. To implement uploading videos along with their thumbnails, we simply use a filesystem database.

## Computer Vision AI
We use MediaPipe to grab the coordinates of the hand joints and upload images. With the coordinates, we plot with CanvasRenderingContext2D on the canvas, using algorithms and vector calculations to determine the gesture (a sketch of the idea follows below). Then, for image generation, we use the DeepAI open-source library.

# Experimentation
We experimented with generative AI to generate images; however, we ran out of time.

![image](https://user-images.githubusercontent.com/65676392/215340713-9b4064a0-37ac-4760-bd35-e6a30c2f4613.png)
![image](https://user-images.githubusercontent.com/65676392/215340723-ee993e2b-70bb-4aa3-a009-ac4459f23f72.png)

# 👨‍💻 Team ("The Sprint Team")
@Sheheryar Pavaz @Anton Otaner @Jingxiang Mo @Tommy He
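Our gesture logic lives in the TypeScript frontend, but the idea translates to a few lines of Python with MediaPipe's hand solution: a pinch between the thumb tip and index fingertip triggers drawing at the fingertip. The pinch threshold here is illustrative, not the value we actually use.

```python
import math
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)

def pen_point(frame_bgr):
    """Return the normalized (x, y) of the index fingertip while the thumb
    and index tips are pinched together (the 'pen down' pose), else None."""
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    lm = results.multi_hand_landmarks[0].landmark
    index_tip, thumb_tip = lm[8], lm[4]
    pinch = math.dist((index_tip.x, index_tip.y), (thumb_tip.x, thumb_tip.y))
    return (index_tip.x, index_tip.y) if pinch < 0.06 else None
```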
## Inspiration for Creating sketch-it
Art is fundamentally about the process of creation, and seeing as many of us have forgotten this, we were inspired to bring this reminder to everyone. In this world of incredibly sophisticated artificial intelligence models (many of which can already generate an endless supply of art), now more than ever we must remind ourselves that our place in this world is not only to create but also to experience our uniquely human lives.

## What it does
sketch-it accepts any image and breaks down how to sketch that image into 15 easy-to-follow steps, so that you can follow along one line at a time.

## How we built it
On the front end, we used Flask as a web development framework and an HTML form that allows users to upload images to the server. On the back end, we used the Python libraries scikit-image and Matplotlib to create visualizations of the lines that make up the image. We broke the process into frames and progressively adjusted the level of detail to build toward the finished image (a sketch of this idea follows below).

## Challenges we ran into
We initially had some issues with scikit-image, as it was our first time using it, but we soon found our way around the import errors and were able to use it effectively.

## Accomplishments that we're proud of
Challenging ourselves to use frameworks and libraries we hadn't used before and grinding through the project until the end! 😎

## What we learned
We learned a lot about personal working styles and the integration of different components on the front and back end, as well as some new projects we want to try out in the future!

## What's next for sketch-it
Adding a feature that converts the step-by-step guideline into a video for an even more seamless user experience!
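One plausible way to produce the 15 progressive frames: run Canny edge detection with a decreasing smoothing sigma, so early steps contain only coarse outlines and later steps add fine detail. The sigma range here is an assumption, not the values used in the app.

```python
import numpy as np
import matplotlib.pyplot as plt
from skimage import color, feature, io

image = color.rgb2gray(io.imread("input.jpg"))
sigmas = np.linspace(4.0, 1.0, 15)  # coarse outlines first, fine detail last

for step, sigma in enumerate(sigmas, start=1):
    edges = feature.canny(image, sigma=sigma)
    plt.imsave(f"step_{step:02d}.png", edges, cmap="gray")
```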
## Inspiration
We set out to create a fascinating visual art experience for the user. Our work was inspired by "A Neural Algorithm of Artistic Style" by Gatys, Ecker, and Bethge. We hope the project will inspire users to seek out artwork in real life, but if they are unable to, we have created a way for anyone to enrich their life with visual art through virtual reality. We want everyone to be able to experience an EverydaY MasterpiecE.

## What it does
The user enters a virtual reality environment where they can switch between original images and a manipulated version. Using the algorithm created by Gatys, Ecker, and Bethge, the user experiences the same image translated into the style of a famous painting.

## How we built it
We used the algorithm created by Gatys, Ecker, and Bethge, which allowed us to transform pictures into the styles of different masterpieces. We then developed a program to display these pictures as a personal experience. Specifically, we captured images using fisheye lenses and filters, ran the images through the algorithm to render them in the different art styles, and finally created a program to display the results in virtual reality with the Oculus Rift.

## Challenges we ran into
At first, we could not even figure out how to hook up the Oculus Rift to the computer. We also had lots of difficulty adding our images to Unity and switching between them. For the non-photorealistic rendering, we based our method on a recent advancement in the deep neural network literature, and there is demo code online that we used to render our images. However, making all the dependencies - including Caffe, Torch, cutorch, and cuDNN - function correctly is not a trivial task given the limited amount of time we had. As deep neural networks require a huge amount of computation, we tried to use Amazon Web Services (AWS) to facilitate our computation. We were able to complete our rendering on the CPU, but we were unable to successfully use the GPU to render at a faster pace.

## Accomplishments that we're proud of
We are proud to be using some of the latest technologies, especially a very recent advancement in non-photorealistic rendering using deep neural networks.

## What we learned
We learned the importance of search engine optimization while creating our webpage.

## What's next for EyMe
We would try to move towards real-time rendering. We could attach a camera to the front of the Oculus Rift so the world would be translated into art in real time. This would require huge improvements to both the algorithm and the hardware we use for rendering. This goal is very lofty, but there is one feasible step that could get us started: using GPU computing through AWS instead of the CPU, which would greatly improve our rendering time. Another step would be to automate the entire process. Currently, it is tedious to manually submit each photo for rendering without a queue. By creating a queue and auto-retrieving results, lots of time could be saved.

**Paintings used**
*The Starry Night* by Vincent van Gogh
*Woman with a Hat* by Henri Matisse
*A Wheatfield with Cypresses* by Vincent van Gogh

*Please note:* As attributed above, the algorithm for the rendering came from "A Neural Algorithm of Artistic Style" by Gatys, Ecker, and Bethge. We did not write our own code for the non-photorealistic rendering.
We used the GitHub project <https://github.com/jcjohnson/neural-style>, which depends on a few key projects:
<https://github.com/soumith/cudnn.torch>
<https://github.com/szagoruyko/loadcaffe>
as well as the following Caffe install instructions: <https://github.com/BVLC/caffe/wiki/Install-Caffe-on-EC2-from-scratch-(Ubuntu,-CUDA-7,-cuDNN)>
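The heart of the Gatys et al. style representation is the Gram matrix of a convolutional layer's feature maps; a small NumPy sketch follows (normalization conventions vary between implementations).

```python
import numpy as np

def gram_matrix(features):
    """features: (channels, height, width) activations from one conv layer."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)  # channel-to-channel correlations

def style_loss(generated, style):
    """Squared Frobenius distance between Gram matrices at one layer."""
    return np.sum((gram_matrix(generated) - gram_matrix(style)) ** 2)
```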
winning
## Inspiration
We are tired of being forgotten and not recognized by others for our accomplishments. We built software and a platform that help people get to know each other better and faster, using technology to bring the world together.

## What it does
Face Konnex identifies people and helps the user learn who they are, what they do, and how they can help others.

## How we built it
We built it using Android Studio, Java, OpenCV, and Android Things.

## Challenges we ran into
Programming Android Things for the first time. Wi-Fi not working properly and storing the updated location. The display was slow. Java compiler problems.

## Accomplishments that we're proud of
Facial recognition software successfully working on all devices: 1. Android Things, 2. Android phones. A prototype for the Konnex Glass Holo Phone. Working together as a team.

## What we learned
Android Things, IoT, and advancing our Android programming skills. Working better as a team.

## What's next for Konnex IOT
Improving the facial recognition software, identifying and connecting users on Konnex, and putting the software into the Konnex Holo Phone.
## Inspiration
We as a team shared the same interest in learning more about machine learning and its applications. Upon looking at the available challenges, we were immediately drawn to the Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming and went through over a dozen design ideas for implementing a solution related to smart cities. By looking at the different information received from the camera data, we landed on the idea of using the raw footage itself to look for what we would call a distress signal, in case anyone felt unsafe in their current area.

## What it does
We have defined a signal that, if performed in front of a camera, a machine learning algorithm can detect, notifying the authorities that they should check out that location - whether to catch a potentially suspicious suspect or simply to be present and keep civilians safe.

## How we built it
First, we collected data off the Innovation Factory API and inspected the code carefully to understand what each part does. After putting the pieces together, we were able to extract video footage from the camera nearest to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning module. Eventually, due to compilation issues, we had to scrap the training algorithm we made and went with a similar pre-trained algorithm to accomplish the basics of our project.

## Challenges we ran into
Using the Innovation Factory API; the fact that the cameras are located very far away; the machine learning algorithms being an older version that would not compile with our code; and, finally, the frame rate on the playback of the footage when running the algorithm through it.

## Accomplishments that we are proud of
Ari: Being able to go above and beyond what I learned in school to create a cool project.
Donya: Getting to know the basics of how machine learning works.
Alok: Learning how to deal with unexpected challenges and look at them as a positive change.
Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away.

## What we learned
Machine learning basics, Postman, different ways to maximize playback time on the footage, and many more major and minor things we were able to accomplish this hackathon, all with either no information or incomplete information.

## What's next for Smart City SOS
Hopefully working with the Innovation Factory to grow our project, as well as inspiring individuals with a similar passion or desire to create change.
## Inspiration
We were inspired by clowns pieing each other, yet remaining happy during the process.

## What it does
Our product detects a smile on your face, and will slap you.

## How I built it
The facial detection uses an OpenCV classifier on the webcam feed, which signals an Arduino controlling a motor (sketched below).

## Challenges I ran into
We were new to OpenCV, and it was difficult to implement.

## Accomplishments that I'm proud of
The idea.

## What I learned
OpenCV, and using Python libraries to connect to an Arduino.

## What's next for Smile Away
Tasers.
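The whole loop fits in a short sketch: OpenCV's bundled Haar cascades detect a face, then a smile within it, and a byte over serial tells the Arduino to swing. The serial port, command byte, and cascade parameters are assumptions.

```python
import cv2
import serial  # pyserial

arduino = serial.Serial("/dev/ttyUSB0", 9600)  # port is an assumption
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]
        # A high minNeighbors value keeps false smile detections down.
        if len(smile_cascade.detectMultiScale(roi, 1.7, 22)) > 0:
            arduino.write(b"S")  # tell the Arduino to swing the motor
```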
winning
## Inspiration
A lot of DSLR cameras don't have any sort of autofocus feature in video mode, making it impossible to shoot high-quality footage without things looking blurry.

## What it does
The Arduino rig uses ultrasonic distance sensors to measure the distance to the subject and calculate the correct focus position to set the camera's lens to (a sketch of the mapping follows below).

## How I built it
The rig is pretty simple: just an Arduino hooked into a brushless motor.

## Challenges I ran into
Working with hardware for the first time was interesting - lots of unexpected issues with getting the motor to accurately rotate the lens.

## Accomplishments that I'm proud of
The fact that I got a relatively stable autofocus system out of essentially one motor and some zip ties.

## What I learned
Get better hardware next time!

## What's next for Ultrasonic Autofocus
I'd like to make the rig cleaner and less janky-looking.
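The distance-to-focus mapping boils down to interpolating over a per-lens calibration table; it is sketched here in Python for clarity (the rig itself does the equivalent in Arduino C++), with made-up calibration numbers.

```python
import numpy as np

# Hypothetical calibration: measured subject distances (cm) vs. the motor
# step count that produced sharp focus at each distance for this lens.
DIST_CM = [30, 50, 100, 200, 400, 800]
STEPS   = [950, 700, 430, 250, 120, 40]

def focus_steps(distance_cm: float) -> int:
    """Interpolate the focus motor position for a measured distance."""
    return int(np.interp(distance_cm, DIST_CM, STEPS))
```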
## Inspiration
Think about how convenient our lives have become since the dawn of smartphones. However, according to the World Health Organization, about 4.1% of the world's population - roughly 400 million people - consider themselves deaf or have serious difficulty hearing. All of these people face difficulties communicating in situations we might take for granted. Our theme today is connecting dots, and we strongly believe that we'd be connecting a lot of those dots by helping the deaf community: we provide an entirely new approach for deaf people to communicate, and the promise of our tech is to let them experience greater connectivity in life.

## What it does
* EASI is an Electronic Assist for Sign Interpretation. It helps bridge the communication gap between the deaf population and mainstream society. Upon research, we found that interpreting and understanding English text obtained using prevalent speech-to-text isn't as easy for a deaf person as it is for most of us. Essentially, deaf people do not have native access to English, since they don't hear the vocabulary and linguistic patterns of the language from birth and are instead more exposed to sign language. This puts them at a certain disadvantage when interpreting English text.
* EASI is oriented towards delivering sign-language interpretations of speech in real time.
* EASI provides users with the opportunity to create, store, and share their customized signs for certain phrases that permeate multiple social strata.
* We also provide a website so users have a seamless, coherent experience across different platforms. The website increases accessibility for users who cannot access the app, while enabling users to manage data anytime, anywhere.

## Challenges I ran into
Implementing Google Cloud services with Flutter.

## Accomplishments that I'm proud of
* We run a custom algorithm to find and suggest the most suitable phrase for a given social interaction, based on previous usage of the app and a few keywords (a sketch follows below).
* We are proud that EASI will be very helpful to a large group of people, so they can find their way, hear, and stay connected.

## What I learned
We did a large amount of research into the difficulties deaf people face right now, as well as the shortcomings of existing apps, and we found that many deaf people still cannot get used to relying on technology to communicate with people.

## What's next for EASI
In the future, we want to add more features to EASI, like gesture recognition. We will also provide solutions to help with communication between deaf people.
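A toy sketch of the suggestion idea - rank stored phrases by keyword overlap, breaking ties by how often each phrase has been used before. The real algorithm uses more signals than this.

```python
from collections import Counter

usage = Counter()  # how often each stored phrase has been used

def suggest(phrases, keywords, top_k=3):
    """Rank stored phrases by keyword overlap, then by past usage."""
    kw = {k.lower() for k in keywords}

    def score(phrase):
        overlap = len(set(phrase.lower().split()) & kw)
        return (overlap, usage[phrase])

    return sorted(phrases, key=score, reverse=True)[:top_k]
```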
## Inspiration
There were two primary sources of inspiration. The first was a paper published by University of Oxford researchers, who proposed a state-of-the-art deep learning pipeline to extract spoken language from video. The paper can be found [here](http://www.robots.ox.ac.uk/%7Evgg/publications/2018/Afouras18b/afouras18b.pdf). The repo for the model used as a base template can be found [here](https://github.com/afourast/deep_lip_reading). The second source of inspiration is an existing product on the market, [Focals by North](https://www.bynorth.com/). Focals are smart glasses that aim to put the important parts of your life right in front of you through a projected heads-up display. We thought it would be a great idea to build onto a platform like this by adding a camera and using artificial intelligence to gain valuable insights about what you see, which in our case means deciphering speech from visual input. This has applications in aiding individuals who are deaf or hard-of-hearing, in noisy environments where automatic speech recognition is difficult, and in conjunction with speech recognition for ultra-accurate, real-time transcripts.
## What it does
The user presses a button on the side of the glasses, which begins recording; pressing the button again ends the recording. The camera is connected to a Raspberry Pi, which is a web-enabled device. The Raspberry Pi uploads the recording to Google Cloud and submits a POST request to a web server along with the name of the uploaded file. The web server downloads the video from Google Cloud, runs facial detection through a Haar cascade classifier, and feeds the result into a transformer network which transcribes the video. When it finishes, a front-end web application is notified through socket communication, which causes the front end to stream the video from Google Cloud and display the transcription output from the back-end server.
## How we built it
The hardware platform is a Raspberry Pi Zero interfaced with a Pi camera. A Python script runs on the Raspberry Pi to listen for GPIO input, record video, upload to Google Cloud, and POST to the back-end server. The back-end server is implemented using Flask, a web framework in Python. The back-end server runs the processing pipeline, which utilizes TensorFlow and OpenCV; a condensed sketch of this handoff appears at the end of this writeup. The front end is implemented using React in JavaScript.
## Challenges we ran into
* TensorFlow proved difficult to integrate with the back-end server due to dependency and driver compatibility issues, forcing us to run it on CPU only, which does not yield maximum performance
* It was difficult to establish a network connection on the Raspberry Pi, which we worked around through USB tethering with a mobile device
## Accomplishments that we're proud of
* Establishing a multi-step pipeline that features hardware, cloud storage, a back-end server, and a front-end web application
* Design of the glasses prototype
## What we learned
* How to set up a back-end web server using Flask
* How to facilitate socket communication between Flask and React
* How to set up a web server through localhost tunneling using ngrok
* How to convert a video into a text prediction through 3D spatio-temporal convolutions and transformer networks
* How to interface with Google Cloud for data storage between various components such as hardware, back-end, and front-end
## What's next for Synviz
* With a stronger on-board battery, a 5G network connection, and a computationally stronger compute server, we believe it will be possible to achieve near real-time transcription from a video feed that can be implemented on an existing platform like North's Focals to deliver a promising business appeal
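A condensed sketch of the back-end handoff described above: the Pi POSTs a filename, the Flask server pulls the clip from Google Cloud Storage, runs the lip-reading pipeline, and pushes the transcript to the front end over a socket. The bucket name is a placeholder and the pipeline function is stubbed out:

```python
# Illustrative Flask back end; not Synviz's actual server code.
from flask import Flask, request
from flask_socketio import SocketIO
from google.cloud import storage

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

def run_lip_reading_pipeline(path: str) -> str:
    # Stub for the real Haar-cascade face crop + transformer transcription.
    return "example transcript"

@app.route("/process", methods=["POST"])
def process():
    filename = request.get_json()["filename"]
    bucket = storage.Client().bucket("synviz-recordings")  # hypothetical bucket
    bucket.blob(filename).download_to_filename(filename)
    transcript = run_lip_reading_pipeline(filename)
    # Notify the React front end over the socket connection.
    socketio.emit("transcript", {"file": filename, "text": transcript})
    return {"status": "ok"}

if __name__ == "__main__":
    socketio.run(app, port=5000)
```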
losing
## Inspiration
As McGillians, we juggle one final after another, drowning in projects that seem to have no end, all while tackling mind-bending assignments that leave you scratching your head. You're constantly swamped with stress, wishing for an epic way to blow off some steam. That's the whole vibe of this game - it's a cool, offbeat escape to channel all that university tension. It's not about getting physical; it's about flipping all that intense study stress into a good chuckle and a light-hearted jab.
## What is it about
"Punch My Professors" is a game that utilizes OpenCV hand recognition to detect a player's hand movements, translating real-life actions into the digital world. Players can form a fist and simulate a boxing punch towards a 3D model of their professor in the game. The game translates the player's physical punches into virtual ones, allowing them to interact with the professor's 3D model in a fun and harmless boxing match setup.
## How we built it
We built "Punch My Professors" by integrating several technologies:
**MediaPipe Hand Recognition**: We used MediaPipe's advanced hand recognition algorithms to detect and track the player's hand movements in real time.
**2D to 3D Model Conversion AI**: We leveraged cutting-edge AI to convert 2D images of professors into detailed 3D models, ensuring a realistic and engaging virtual representation.
**Unity Game Development**: The core of the game was developed in Unity, allowing us to craft an immersive 3D environment and ensure smooth gameplay and physics interactions.
**Custom Physics Engine**: We developed a custom physics engine tailored to simulate the impact of the punches on the 3D models, providing a satisfying feedback loop for the player's actions.
## Challenges we ran into
**Hand Recognition Accuracy**: Achieving high accuracy and low latency in hand recognition was challenging, especially in varying lighting conditions and backgrounds.
**3D Model Fidelity**: Ensuring the 3D models accurately resembled the professors while maintaining performance was a fine balance to strike.
**Physics Simulation**: Creating a realistic and responsive physics simulation that accurately represented the impact of punches on the 3D models required meticulous tweaking and testing.
## Accomplishments that we're proud of
**Seamless Integration**: Successfully integrating OpenCV with Unity and ensuring smooth real-time hand recognition in the game environment.
**High-Fidelity 3D Models**: The AI-powered 2D-to-3D conversion resulted in highly realistic models of the professors, exceeding our initial expectations.
**Engaging Gameplay**: Crafting an engaging and fun game that has been well received by players, providing them with a unique way to de-stress and have fun.
**Expanding the Roster**: Including more characters and scenarios, potentially allowing custom uploads of images to be converted into 3D models.
## What we learned
**Advanced Computer Vision**: Delving deep into OpenCV's capabilities taught us a lot about hand recognition and real-time image processing.
**AI-Powered Modeling**: We gained insights into the latest advancements in AI for 3D modeling, exploring the potential of neural networks in creating lifelike 3D representations from 2D images.
**Game Physics**: We enhanced our understanding of physics in game development, learning how to simulate realistic interactions between virtual objects.
## What's next for Punch My Professors
**Multiplayer Functionality**: Introduce a multiplayer mode where players can box against each other's professors in a friendly competitive setup.
**Enhanced Interactivity**: Implement more complex hand gestures and interactions, allowing for a more varied and rich gameplay experience.
**Cross-Platform Support**: Develop the game for multiple platforms, making it accessible to a wider audience.
"Punch My Professors" started as a fun concept but has grown into a project that pushes the boundaries of AI and game development, and we're excited to see how far it can go!
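The writeup mentions both OpenCV and MediaPipe for the hand tracking; as a rough illustration, fist detection with MediaPipe's Python API could look like the sketch below, where the all-fingertips-curled heuristic is invented (a real punch detector would also track wrist velocity across frames):

```python
# Illustrative fist detection with MediaPipe Hands; OpenCV supplies frames.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
TIPS, PIPS = [8, 12, 16, 20], [6, 10, 14, 18]  # fingertip and middle-joint landmarks

def is_fist(landmarks):
    # Image y grows downward, so a curled fingertip sits below its PIP joint.
    return all(landmarks[t].y > landmarks[p].y for t, p in zip(TIPS, PIPS))

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        if is_fist(lm):
            print("punch!", lm[0].x, lm[0].y)  # wrist position, fed to the game
```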
## Check out our site -> [Saga](http://sagaverse.app)
## Inspiration
There are few better feelings in the world than reading together with a child that you care about. "Just one more story!" - "I promise I'll go to bed after the next one" - or even simply "Zzzzzzz" - these moments forge lasting memories and provide important educational development during bedtime routines. We wanted to make sure that our loved ones never run out of good stories. Even more, we wanted to create a unique, dynamic reading experience for kids that makes reading even more fun. After helping to build the components of the story, kids are able to help the character make decisions along the way. "Should Balthazar the bear search near the park for his lost friend? Or should he look in the desert?" These decisions help children learn and develop key skills like decisiveness and action. The story updates in real time, ensuring an engaging experience for kids and parents. Through copious amounts of delirious research, we learned that children can actually learn better and retain more when reading with parents on a tablet. After talking to 8 users (parents and kiddos) over the course of the weekend, we defined our problem space and set out to create a truly "Neverending Story."
## What it does
Each day, *Saga* creates a new, illustrated bedtime story for children aged 0-7. Using OpenAI technology, the app generates and then illustrates an age- and interest-appropriate story based on what they want to hear and what will help them learn. Along the way, our application keeps kids engaged by prompting decisions, like a real-time choose-your-own-adventure story. We're helping parents broaden the stories available to their children - imprinting values of diversity, inclusion, community, and a strong moral compass. With *Saga*, parents and children can create a universe of stories, with their specific interests at the center.
## How we built it
We took an intentional approach to developing a working MVP:
* **Needs finding:** We began with a desire to uncover a need and build a solution based on user input. We interviewed 8 users over the weekend (parents and kids) and used their insights to develop our application.
* **Defined MVP:** A deployable application that generates a unique story and illustrations while allowing for dynamic reader inputs using OpenAI. We indexed on story, picture, and educational quality over reproducibility.
* **Tech Stack:** We used the latest LLM models (GPT-3 and DALL-E 2), Flutter for the client, a Node/Express backend, and MongoDB for data management.
* **Prompt Engineering:** We found the limitations of the underlying LLM technology and used guess-and-check until we narrowed down prompts that produce more consistent results. We explored borderline use cases to learn where the model breaks.
* **Final Touches:** Quality control and lots of tweaking of the image prompting functionality.
## Challenges we ran into
Our biggest challenges revolved around fully understanding the power of, and the difficulties stemming from, prompt generation for OpenAI. This struggle hit us on several different fronts:
1. **Text generation** - Early on, we asked for specific stories with prompts resembling "write me a 500-word story." Unsurprisingly, the API completely disregarded the constraints, and the outputs were similar regardless of how we bounded the word count. We eventually became more familiar with the structure of quality prompts, but we hit our heads against this particular problem for a long time.
2. **Illustration generation** - We weren't able to predictably write OpenAI illustration prompts that provided consistently high-quality images. This was a particularly difficult problem for us, since we had planned on having a consistent character illustration throughout the story. Eventually, we found style modifiers to help bound the problem.
3. **Child-safe content** - We wanted to be completely certain that we only presented safe and age-appropriate information back to the users. With this in mind, we built several layers of passive and active protection to ensure all content is family friendly.
## What we learned
So many things about OpenAI!
1. Creating consistent images using OpenAI generation is super hard, especially when focusing on one primary protagonist. We addressed this by specifying art styles to decrease the variability between images.
2. GPT-3's input/output length limitations are much more stringent than ChatGPT's - this meant we had to be pretty innovative with how we maintained context over the course of 10+ page stories.
3. How to reduce overall response time while using OpenAI's API, which was really important when generating so many images and using GPT-3 to describe and summarize so many things.
4. Simply instructing GPT not to do something doesn't seem to work as well as carefully crafting a prompt of the behavior you would like it to model. You need to trick it into thinking it is someone or something - from there, it will behave.
## Accomplishments that we're proud of
We're super excited about what we were able to create given that this is the first hackathon for 3 of our team members! Specifically, we're proud of:
* Developing a fun solution to help make learning engaging for future generations
* Solving a real need for people in our lives
* Delivering a well-scoped and functional MVP based on multiple user interviews
* Integrating varied team member skill sets, from barely technical to full-stack
## What's next for Saga
### **Test and Iterate**
We're excited to get our prototype in the hands of users and see what real-world feedback looks like. Using this customer feedback, we'll quickly iterate and make sure that our application is really solving a user need. We hope to get this on the App Store ASAP!!
### **Add functionality**
Based on the feedback that we'll receive from our initial MVP, we will prioritize additional functionality:
**Reading level that grows with the child** - adding more complex vocabulary and situations for a story and character that the child knows and loves.
**Allow for ongoing universe creation** - saving favorite characters, settings, and situations to create a rich, ongoing world.
**Unbounded story attributes** - rather than prompting parents with fixed attributes, give an open-ended prompt for more control of the story, increasing child engagement.
**Real-time user feedback on a story to refine the prompts** - at the end of each story, capture user feedback to help personalize future prompts and stories.
### **Monetize**
Evaluate unit economics and determine the best path to market. Current possible ideas:
* SaaS subscription based on one book per day or unlimited access
* Audible-style token model to access a fixed number of stories per month
* Identify and partner with mid-market publishers to license IP and leverage existing fan bases
* White-label the solution at a services level to publishers who don't have a robust engineering team
## References
<https://www.frontiersin.org/articles/10.3389/fpsyg.2017.00677/full>
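As an aside on the style-modifier trick discussed under "What we learned", here is a sketch using the pre-1.0 OpenAI Python client that was current in the GPT-3/DALL-E 2 era; the style string, character sheet, and prompts are illustrative, not Saga's actual prompts:

```python
# Illustrative only: pin a style and character description into every
# illustration prompt to reduce image-to-image variability.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

STYLE = "soft watercolor children's book illustration, pastel palette"
CHARACTER = "Balthazar, a small brown bear wearing a red scarf"

def illustrate(page_text: str) -> str:
    prompt = f"{CHARACTER}. {page_text}. {STYLE}"
    result = openai.Image.create(prompt=prompt, n=1, size="512x512")
    return result["data"][0]["url"]

def next_page(story_so_far: str, child_choice: str) -> str:
    result = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"{story_so_far}\nThe child chose: {child_choice}\n"
               "Continue the bedtime story for one page:",
        max_tokens=300)
    return result["choices"][0]["text"]
```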
## Inspiration
We found that even though there are so many resources for learning to code, all of them fall into one of two categories: they are either in a generic course-and-grade structure, or are oversimplified to fit a high-level mould. We thought the ideal learning environment would be an interactive experience where players have to learn to code, not for a grade or score, but to progress an already interactive game. The code the students learn is actual Python script, guided with the help of an interactive tutorial.
## What it does
The game follows a "dinosaur game" structure where players have to jump over obstacles. However, as the player experiences more and more difficult obstacles through the level progression, they are encouraged to automate the character's behavior with the use of Python commands. Players can code the behavior for a given level, telling the player character to "jump when the obstacle is 10 pixels away" with workable Python script. The game covers the basic concepts behind integers, loops, and boolean statements.
## How we built it
We began with a Pygame template and created a game akin to the "dinosaur game" in Google Chrome. We then integrated a text editor that allows quick-and-dirty compilation of Python code into the visually appealing format of the game. Furthermore, we implemented a file structure that lets any educator customize their own programming lessons and custom functions to target specific concepts, such as for loops and while loops.
## Challenges we ran into
We had the most trouble settling on an idea that is both educational and fun. Finding that halfway point pushed both our creativity and technical abilities. While there were some ideas that heavily utilized AI and VR, we knew that we could not code those up in 36 hours. The idea we settled on still challenged us, but was something we thought was accomplishable. We also had difficulty with the graphics side of the project, as that is something we do not actively focus on learning through standard CS courses in school.
## Accomplishments that we're proud of
We are most proud of the code incorporation feature. We had many different approaches for incorporating the user input into the game, and finding one that worked proved to be very difficult. We considered making pre-written code snippets that the game would compare to the user input, or creating a pseudocode system that could interpret the user's intentions. The idea we settled upon, the most graceful, was a method through which the user input is directly injected into the character behavior instantiation, meaning that the user's code is what actually runs the character, with no proxies or comparison strings. We are proud of the cleanliness and truthfulness this holds with our mission statement: giving the user the most hands-on and accurate coding experience. A sketch of this approach follows at the end of this writeup.
## What we learned
We learned so much about game design and the implementation of computer science skills we learned in the classroom. We also learned a lot about education, through both introspection into ourselves and some research articles we found about how best to teach concepts and drill practice.
## What's next for The Code Runner
The next steps for Code Runner would be adding more concepts covered through the game's functionality. We are hoping to cover while loops and other Python elements that we think are crucial building blocks for anyone working with code. We are also hoping to add some gravity features where obstacles can jump with realistic believability.
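Here is the promised sketch of the code-incorporation idea: the learner's Python is exec'd into a namespace, and the function it defines becomes the runner's per-frame behavior. The function name and signature are invented for illustration, not the game's actual API:

```python
# Illustrative only: user-submitted Python directly drives the character.
user_code = """
def on_tick(obstacle_distance, is_jumping):
    # Learner's solution: jump when the obstacle is 10 pixels away.
    if obstacle_distance <= 10 and not is_jumping:
        return "jump"
    return "run"
"""

namespace = {}
exec(user_code, namespace)        # compile the learner's code directly
behavior = namespace["on_tick"]   # no proxies or string comparison

# Inside the game loop, each frame asks the learner's function what to do:
print(behavior(obstacle_distance=9, is_jumping=False))   # -> "jump"
print(behavior(obstacle_distance=40, is_jumping=False))  # -> "run"
```

A real deployment would of course sandbox the exec'd code; for a local teaching game, running it directly keeps the experience honest to real Python.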
partial
## Inspiration
Our inspiration came from wanting to help manage and easily visualize social events with friends. One of our teammates took an interest in films, and with the Toronto International Film Festival approaching soon, we figured this could be a great opportunity to fill that need. Tools already exist that show film times, but we wanted to add a social aspect where users could directly coordinate with their friends what they're seeing and when - in the high-intensity (and relatively expensive) week that many a Toronto film fan anticipates for September, having more clarity and control would be no small service!
## What it does
TIFFTok is a React-based web app that connects to a public list of TIFF 2023 films (and would easily plug into 2024 data as soon as it's fully available), allowing users to find a list of what's playing, and where.
## How we built it
We brainstormed various ideas for a generic event-planner web app on the first day; as there was no theme for this hackathon, we used our own interest in films as the basis for our project. Later on, UIs were drawn up using Figma, and a basic React boilerplate app was created. The front end was mainly done by Derek and Will, and the back end by Will and Sana, while Niloy worked on integrating Auth0 into our project. We used MongoDB and Python to traverse the data given by the TIFF website, as well as JavaScript, React, and Tailwind CSS for the front end. We used AI software like ChatGPT to supplement debugging, alongside our wonderful mentors.
## Challenges we ran into
We ran into multiple challenges with Auth0, with integrating the back end with the front end, and with bugs encountered during the event, all compounded by the short timeframe we had. We worked as a team, alongside getting help from our mentors, to resolve these issues to the best of our ability. Ultimately, given this intense learning experience, we ran out of time within the weekend to actually implement the social aspect, given how reliant it is on the technical conventions of wherever user data is stored.
## Accomplishments that we're proud of
We are proud that we achieved a good portion of a functional, clean full-stack web app from scratch, especially given our limited hackathon experience. Our team had fantastic chemistry, with tasks delegated toward our strengths. We constantly tried to create opportunities for each other to learn, and to help each other out when appropriate. Also, learning Auth0 and the general concepts and practices surrounding OAuth has been very interesting. We already found most authentication systems a little daunting, and Auth0 was far removed from any notion we had of how an auth system might integrate with our front end, back end, and databases. On the other hand, our experience with MongoDB was virtually painless.
## What we learned
We expanded our knowledge of React, Tailwind CSS, MongoDB, Auth0, and integrating the back end with the front end for a full web app. We applied classroom knowledge like object-oriented programming, the user design process, and list traversal to a real-world project. We worked together with Git in a setting akin to workplace software development.
## What's next for TIFFTok
TIFF's 2024 edition arrives in about a month, making it ideal timing to further pursue this project and expand its functionality in time to be really useful to a small but fervent niche.
The aforementioned social element, of directly sharing, comparing, and aligning your schedule with a friend's, is very achievable within that amount of time. We also want to improve traversal efficiency, fix bugs, and polish the layout, overall just trying to improve the user experience.
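As a small illustration of the data step above ("MongoDB and Python to traverse the data"), loading scraped film listings with pymongo might look like this; the connection string and schema are placeholders:

```python
# Illustrative only: load scraped TIFF listings into MongoDB for the
# React front end to query.
from pymongo import MongoClient

films = [
    {"title": "Example Film", "venue": "TIFF Bell Lightbox",
     "showtime": "2023-09-08T19:00"},
]

client = MongoClient("mongodb+srv://user:pass@cluster.example.net")  # hypothetical URI
collection = client["tifftok"]["films"]
collection.insert_many(films)

# The "what's playing, and where" view is then a simple query:
for film in collection.find({"venue": "TIFF Bell Lightbox"}):
    print(film["title"], film["showtime"])
```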
## Inspiration
Whenever I go on vacation, what I always fondly look back on are the sights and surroundings of specific moments. What if there was a way to remember these associations by putting them on a map to look back on? We strived to locate a problem, and then find a solution to build up from. What if, instead of sorting pictures chronologically in an album, we did it on a map that is easy and accessible?
## What it does
This app allows users to collaborate in real time on making maps of shared moments. The moments that we treasure were all made in specific places, and being able to connect those moments to the settings of those physical locations makes them that much more valuable. Users from across the world can upload pictures to be placed onto a map, physically mapping their favorite moments.
## How we built it
The project is built off a simple React template. We added functionality a bit at a time, focusing on creating multiple iterations of designs that were improved upon. We included several APIs, including Google Gemini and Firebase. With the intention of making the application accessible to a wide audience, we spent a lot of time refining the UI and the overall simplicity, yet useful functionality, of the app.
## Challenges we ran into
We had a difficult time deciding the precise focus of our app, and which features we wanted to have and which to leave out. When it came to actually creating the app, it was also difficult to deal with niche errors not addressed by the APIs we used. For example, Google Photos was severely lacking in its documentation and error reporting, and even after we asked several experienced industry developers, they could not find a way to work around it. This wasted a decent chunk of our time, and we had to move in a completely different direction to get around it.
## Accomplishments that we're proud of
We're proud of being able to make a working app within the given time frame. We're also happy that this event gave us the chance to better understand the technologies that we work with, including how to manage merge conflicts on Git (those dreaded merge conflicts). This is the first hackathon for all but one of us, and it was beyond our expectations. Being able to realize such a bold and ambitious idea, albeit with a few shortcuts, tells us just how capable we are.
## What we learned
We learned a lot about how to do merges on Git, as well as how to use a new API, the Google Maps API. We also gained a lot more experience using web development technologies like JavaScript, React, and Tailwind CSS. Away from the screen, we also learned to work together in coming up with ideas and making decisions that were agreed upon by the majority of the team. Even as friends, we struggled to get along perfectly smoothly while working through our issues. We believe that this experience gave us ample pressure to better learn when to make concessions and how to be better team players.
## What's next for Glimpses
Glimpses isn't as simple as just a map with pictures; it's an album, a timeline, a glimpse into the past, but also the future. We want to explore how we can encourage more interconnectedness between users on this app, so we want to add functionality for tagging other users, similar to social media, as well as provide ways to export these maps into friendly formats for sharing that don't necessarily require using the app.
We also seek to better integrate AI into our platform, using generative AI to summarize maps and experiences, and also to help plan events and new memories for the future.
## Inspiration
We love spending time playing role-based games as well as chatting with AI, so we figured a great app idea would be to combine the two.
## What it does
It creates a fun and interactive AI-powered story game where you control the story and the AI continues it for as long as you want to play. If you ever don't like where the story is going, simply double-click the last point you want to travel back to and restart from there! (Just like in Groundhog Day.)
## How we built it
We used Reflex as the full-stack Python framework to develop an aesthetic frontend as well as a robust backend. We implemented two of Together AI's models to add the main functionality of our web application.
## Challenges we ran into
From the beginning, we were unsure of the best tech stack to use, since it was most members' first hackathon. After settling on Reflex, there were various bugs that we were able to resolve by collaborating with the Reflex co-founder and an employee on site.
## Accomplishments that we're proud of
All our members are inexperienced in UI/UX and frontend design, especially when using an unfamiliar framework. However, we were able to figure it out by reading the documentation and pair programming. We are also proud of optimizing all our background processes using Reflex's asynchronous background tasks, which sped up our website's API calls and overall created a much better user experience.
## What we learned
We learned an entirely new but very interesting tech stack, since we had never even heard of using Python as a frontend language. We also learned about the value and struggles that go into creating a user-friendly web app we were happy with in such a short amount of time.
## What's next for Groundhog
More features are in planning, such as allowing multiple users to connect across the internet and role-play in a single story as different characters. We hope to continue optimizing the speed of our background processes in order to make the user experience seamless.
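A minimal model of the Groundhog Day mechanic, independent of Reflex or Together AI: the story is a list of turns, and travelling back simply truncates everything after the chosen point before the next AI continuation. Purely illustrative:

```python
# Toy version of the rewind mechanic; names are invented for illustration.
class Story:
    def __init__(self):
        self.turns: list[str] = []

    def add_turn(self, text: str) -> None:
        self.turns.append(text)

    def rewind_to(self, index: int) -> None:
        """Double-clicking turn `index` discards everything after it."""
        self.turns = self.turns[: index + 1]

    def prompt(self) -> str:
        # The surviving turns become the context for the next AI call.
        return "\n".join(self.turns)

story = Story()
story.add_turn("You wake up in a snowy town.")
story.add_turn("You meet a suspicious groundhog.")
story.rewind_to(0)      # didn't like the groundhog; restart from turn 0
print(story.prompt())   # -> "You wake up in a snowy town."
```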
losing
## Inspiration
Food can be tricky, especially for college students. There are so many options for your favorite foods: restaurants, frozen foods, grocery stores, and even cooking yourself. We want to be able to get the foods we love, with the best experience.
## What it does
It shows you and compares all of the options for a specific dish you want to eat. It takes data from restaurants, grocery stores, and more, and aggregates everything into an easy-to-understand list, sorted by a relevance algorithm to give you the best possible experience.
## How we built it
The frontend is HTML, CSS, and JS. For the backend, we used Python, Node.js, Beautiful Soup for web scraping, and several APIs for getting data. We also used Google Firebase to store the data in a database.
## Challenges we ran into
It was difficult trying to decide the scope and what we could realistically make in such a short time frame.
## Accomplishments that we're proud of
We're proud of the overall smooth visuals, and the functionality of showing you many different options for foods.
## What we learned
We learned about web design, along with data collection through APIs and web scraping.
## What's next for Food Master
We are hoping to implement the ChatGPT API to interpret restaurant menus and automate data collection.
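The relevance algorithm isn't detailed above; a toy version might fold price, rating, and effort into one score, with weights invented purely for illustration:

```python
# Toy relevance sort over aggregated options for a single dish.
def relevance(option: dict) -> float:
    # Cheaper, better-rated, lower-effort options float to the top.
    return (option["rating"] * 2.0
            - option["price"] * 0.5
            - option["effort_minutes"] * 0.05)

options = [
    {"name": "Thai restaurant pad thai", "price": 16, "rating": 4.5, "effort_minutes": 0},
    {"name": "Frozen pad thai",          "price": 6,  "rating": 3.2, "effort_minutes": 5},
    {"name": "Cook it yourself",         "price": 9,  "rating": 4.0, "effort_minutes": 45},
]

for option in sorted(options, key=relevance, reverse=True):
    print(f"{relevance(option):5.2f}  {option['name']}")
```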
## Inspiration
Our team wanted to address the growing concern of young adults and college students becoming increasingly dependent on food-delivery services like DoorDash and UberEats. This creates unhealthy eating habits, which scale into an unfavorable lifestyle in the future. For convenience's sake, we wanted to build a custom, user-centric platform that is simple enough that even the least culinarily inclined could embrace the process of preparing a meal they find delicious.
## What it does
MealPrep gathers dishes from the keywords of your choice, steps you through the process of cooking with a voice-dictated AI chatbot, and bookmarks your favorite foods for next time.
## How we built it
Built with React.js on the frontend, supported by Vite.js and Tailwind CSS, packaged with Node.js, and deployed to Vercel. On the backend, we're using JavaScript with Firebase user authentication and Firestore to store user bookmarks. Our user experience is driven by Groq's lightning-fast LLM responses, as well as Cartesia AI's voice-to-text dictation.
## Challenges we ran into
* Attempting to deploy on industry-standard technologies like AWS/Docker
* Trying to integrate a linter in a CI/CD pipeline using GitHub Actions
* Adding/removing data using Firestore (see the sketch after this writeup)
## Accomplishments that we're proud of
Creating a unified user interface that reflects our values and our simple, modern product vision. Adding dynamic animations that create a smooth navigational experience. Integrating many technologies that all overlap with each other, consistently researching the docs to see what works best for our code. In the end, our technologies all fit together like pieces of a beautiful puzzle.
## What we learned
APIs are difficult to implement and wrangle, and sometimes diving too deep into one solution isn't the most optimal workflow. Always remember to think of different solutions and take yourself outside of the box sometimes. Having a cycle of communication, delegating tasks, and single-focus time worked well for our team's productive workflow. It felt like mini-sprints, which organized us and allowed us to adapt to our own roles on the team. Taking a step back from coding to brainstorm, draw out our ideas, and have fun with the process is so important to the team's cohesive vision. It keeps us on the same page.
## What's next for MealPrep
Features like ingredient-overlap detection would give users a way to see which of their favorite recipes can be built using the same ingredients. Engagement is an important factor for any platform. To add engagement, we want to focus on shifting our platform toward social media. Features like a daily popular meal, or community postings where people can share their experiences and advice on meal prepping, would be a fantastic addition. On top of our default guides, we want there to be more interactivity, with guides created by users and brands. Speaking of brands, having companies promote their meal-prep-friendly products on our page would be a great monetization strategy.
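Here is the promised sketch for the Firestore sticking point, written with the Python firebase-admin SDK (the app itself does this from JavaScript, but the calls map one-to-one); collection names and the credential path are placeholders:

```python
# Illustrative server-side bookmark CRUD with the firebase-admin SDK.
import firebase_admin
from firebase_admin import credentials, firestore

firebase_admin.initialize_app(credentials.Certificate("service-account.json"))  # placeholder path
db = firestore.client()

def add_bookmark(uid: str, recipe: dict) -> str:
    # document() with no argument auto-generates a bookmark ID.
    ref = db.collection("users").document(uid).collection("bookmarks").document()
    ref.set(recipe)
    return ref.id

def remove_bookmark(uid: str, bookmark_id: str) -> None:
    db.collection("users").document(uid).collection("bookmarks") \
      .document(bookmark_id).delete()

bid = add_bookmark("demo-user", {"title": "Sheet-pan chicken", "minutes": 35})
remove_bookmark("demo-user", bid)
```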
## Inspiration
Food is capable of uniting all of us, no matter which demographic we belong to or which cultures we identify with. Our team recognized how challenging it can be for groups to choose a restaurant that accommodates everyone's preferences. Furthermore, food apps like Yelp and Zomato can often cause 'analysis paralysis', as there are too many options to choose from. Because of this, we wanted to build a platform to facilitate the process of coming together for food, and make that process as simple and convenient as possible.
## What it does
Bonfire is an intelligent food app that takes into account the food preferences of multiple users and provides a fast, reliable, and convenient recommendation based on the aggregate inputs of the group. To remove any friction from decision-making, Bonfire is even able to make a reservation on behalf of the group using Google's Dialogflow.
## How we built it
We used Android Studio to build the mobile application and connected it to a Python back end. We used Zomato's API for locating restaurants and data collection, and the Google Sheets API and Google Apps Script to decide the optimal restaurant recommendation given the users' preferences. We then used Adobe XD to create detailed wireframes to visualize the app's UI/UX.
## Challenges we ran into
We found that integrating all the APIs into our app was quite challenging, as some required partner access privileges and restricted the amount of information we could request. In addition, choosing a framework to connect the back end was difficult.
## Accomplishments that we're proud of
As our team comprises students studying bioinformatics, statistics, and kinesiology, we are extremely proud to have been able to bring an idea to fruition, and we are excited to continue working on this project, as we think it has promising applications.
## What we learned
We learned that trying to build a full-stack application in 24 hours is no easy task. We managed to build a functional prototype and a wireframe to visualize what the UI/UX experience should be like.
## What's next for Bonfire: the Intelligent Food App
For the future of Bonfire, we are aiming to include options for dietary restrictions and to incorporate Google Duplex into our app for a more natural-sounding linguistic profile. Furthermore, we want to further polish the UI to enhance the user experience. To improve the quality of the recommendations, we plan to implement machine learning in the decision-making process, which will also take into account the user's past food preferences and restaurant reviews.
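One plausible way to aggregate the group's inputs, shown as a toy sketch rather than Bonfire's actual algorithm: maximize the group's minimum happiness first, so no one's hard dislike gets steamrolled:

```python
# Toy group aggregation: fairness (worst member's score) first, total second.
def pick_restaurant(preferences, restaurants):
    """preferences: {member: {cuisine: score 0-5}}; restaurants: [(name, cuisine)]."""
    def group_score(cuisine):
        scores = [prefs.get(cuisine, 2.5) for prefs in preferences.values()]
        return (min(scores), sum(scores))  # tuple comparison: min wins ties by sum
    return max(restaurants, key=lambda r: group_score(r[1]))

prefs = {"ana": {"thai": 5, "pizza": 3}, "bo": {"thai": 2, "pizza": 4}}
print(pick_restaurant(prefs, [("Thai Palace", "thai"), ("Slice House", "pizza")]))
# -> ("Slice House", "pizza"): pizza's worst score (3) beats thai's worst (2)
```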
losing
## Inspiration
As a team, we had a collective interest in sustainability and knew that if we focused on online consumerism, we would have much greater potential to influence sustainable purchases. We were also inspired by Honey -- we wanted to create something that is easily accessible across many websites, with readily available information for people to compare. Lots of people we know don't take the time to look for sustainable items. People typically say that if they had a choice between sustainable and non-sustainable products around the same price point, they would choose the sustainable option. But consumers often don't make the deliberate effort themselves. We're making it easier for people to buy sustainably -- placing the products right in front of consumers.
## What it does
greenbeans is a Chrome extension that pops up when users are shopping for items online, offering similar alternative products that are more eco-friendly. The extension also displays a message if the product meets our sustainability criteria.
## How we built it
Designs in Figma, Bubble for the backend, React for the frontend.
## Challenges we ran into
Three beginner hackers! It was the first time at a hackathon for three of us, and for two of those three, it was our first time formally coding in a product experience. Ideation was also challenging, both in deciding which broad issue to focus on (nutrition, mental health, environment, education, etc.) and in determining the specifics of the project (how to implement it, what audience/products we wanted to focus on, etc.).
## Accomplishments that we're proud of
Navigating Bubble for the first time, multiple members coding in a product setting for the first time... pretty much creating a solid MVP with a team of beginners!
## What we learned
In order to ensure that a project is feasible, it's sometimes necessary to scale back features and implementation to work within constraints. Especially on a team with three first-time hackathon-goers, we had to make sure we were working in spaces where we could balance learning with making progress on the project.
## What's next for greenbeans
Lots to add in the future: systems to reward sustainable product purchases. Storing data over time and tracking sustainable purchases. Incorporating a community aspect, where small businesses can link their products or websites to certain searches. Including information on the best prices for the various sustainable alternatives, or indicators that a product is being sold by a small business. More tailored or specific product recommendations that recognize style, scent, or other niche qualities.
## Inspiration 🌱
Climate change is affecting every region on Earth. The changes are widespread, rapid, and intensifying. The UN states that we are at a pivotal moment, and the urgency to protect our Earth is at an all-time high. We wanted to harness the power of social media for a greater purpose: promoting sustainability and environmental consciousness.
## What it does 🌎
Inspired by BeReal, the most popular app of 2022, BeGreen is your go-to platform for celebrating and sharing acts of sustainability. Every time you make a sustainable choice, snap a photo, upload it, and you'll be rewarded with Green points based on how impactful your act was! Compete with your friends to see who can rack up the most Green points by performing more acts of sustainability, and even claim prizes once you have enough points 😍.
## How we built it 🧑‍💻
We used React with JavaScript to create the app, coupled with Firebase for the backend. We also used Microsoft Azure for computer vision and OpenAI for assessing the environmental impact of the sustainable act in a photo.
## Challenges we ran into 🥊
One of our biggest obstacles was settling on an idea, as there were so many great challenges to draw inspiration from.
## Accomplishments that we're proud of 🏆
We are really happy to have worked so well as a team. Despite encountering various technological challenges, each team member embraced unfamiliar technologies with enthusiasm and determination. We were able to overcome obstacles by adapting and collaborating as a team, and we're all leaving uOttaHack with new capabilities.
## What we learned 💚
Everyone was able to work with new technologies that they'd never touched before while watching our idea come to life. For all of us, it was our first time developing a progressive web app. For some of us, it was our first time working with OpenAI, Firebase, and routers in React.
## What's next for BeGreen ✨
It would be amazing to collaborate with brands to offer more rewards as an incentive to make more sustainable choices. We'd also love to implement a streak feature, where you can get bonus points for posting multiple days in a row!
## Inspiration
We were inspired by the culture of overconsumption of goods that stems from social media trends. We wanted to design a way to reduce unnecessary purchases that are both financially and environmentally unsustainable.
## What it does
The WI$E Chrome extension encourages the user to rethink their choices about where they spend their money on various shopping websites. It does so by asking the user a series of questions to ensure their purchase is reasonable, sustainable, and truly beneficial to their life. If the user determines that their purchase is not wise, they are redirected to the Google home page. If the user is unsure about their purchase, they're given time to think with a timer. If the user believes their purchase is wise, they are prompted with the choice of a more sustainable option that supports local businesses in Montreal.
## How we built it
As WI$E is a Chrome extension, we were required to write a manifest.json file to make it compatible with Chrome. From there, we worked with JavaScript and HTML to implement the visual components and make it user-interactive.
## Challenges we ran into
Because we are students in the first and second year of computer science, we did not have any experience using HTML or JavaScript. Although it was challenging for us to learn these languages, we were able to find the resources online to successfully implement our code.
## Accomplishments that we're proud of
Considering this is our first time using these languages and interfaces, we are glad that this first prototype is functional and successfully user-interactive.
## What we learned
Working on this project, we learned how to build a Chrome extension, and along the way were introduced to new scripting languages such as JavaScript and HTML. We learned a different style of testing our code, which involved displaying the output in the Chrome browser. We also created our first repository on GitHub and utilized various features of the platform. Throughout this experience, we realized that programming can have positive societal impacts, and we are motivated to use our skills to contribute to a sustainable future.
## What's next for WI$E
We wish to make WI$E more open-ended by making it functional on more websites. However, this would require AI and databases. In fact, we want WI$E to be able to detect items that are trending on social media and raise awareness about the effects of unnecessary consumption caused by these online trends.
winning
## Inspiration
As developers, we have all had those moments where we wish we had a personal coding tutor for our projects. Our friends might not have the answers. Sometimes, Stack Overflow can be intimidating and the community can be toxic. Therefore, we made this app to instantly find an available and experienced developer to help you out. The community is a judgement-free zone, where any question can be asked. We hope we can make the hacking community more wholesome and collaborative.
## What it does
We built a collaborative online code editor and executor with built-in video calling and instant messaging functionality. Our app also includes user profiles, where users can mark themselves as tutors and accept requests for help. We think that by providing beginner programmers with specialized tools for collaborating and communicating with industry professionals, we can have a positive impact on their success later in life.
## How I built it
We built our app using the Nuxt frontend framework, and powered it with Firebase. We use Firebase to store all our user profiles, perform authentication, handle instant messaging, and initialize our peer-to-peer video calling powered by WebRTC.
## Challenges I ran into
We were challenged by issues involving authentication security and Firebase integration, and some of these improvements remain possible for future iterations of our app.
## Accomplishments that I'm proud of
We're proud that we managed to build an app that feels intuitive for new users and can provide them with opportunities to improve their craft, which will hopefully have lasting impacts on their future careers.
## What I learned
We learned to be more experimental with our approach to web app development. We learned how to integrate Firebase into our application. We also learned how to implement voice/video chat as well as code sharing.
## What's next for devTutor
We are exploring the possibility of adding chatrooms where developers can discuss solutions together. We are also discussing a 5-star rating system where tutees can rate their tutors, and the top tutors of the week can show up on the homepage. GitHub: <https://github.com/EthanHaid/DevTutor>
## Inspiration
A great mentor can be instrumental in helping us grow as people. A mentor can be anyone: someone giving us career advice, or someone helping us learn to cook the most delicious meal we have ever eaten. We often interact with people and talk about our skills, or how they could help us learn new skills. Although we are living in a digitized world, a certain knack can only be discovered via personal interaction. Most of us learn more from real mentors than from online videos. Especially for students, stay-at-home parents, and elderly people, we believe that a real-life mentor would be far more impactful. Many times it is difficult to register for online or group courses because they are extremely costly and require long-term commitment, while we might just need a few hours of mentoring in order to acquire a new skill. With this in mind, we have created an application to help people connect and learn from each other and thus grow as a society, without the financial and logistic barriers. Google democratized information; we wish to democratize skills.
## What it does
A centralized platform where users connect and network with talented people and skill teachers in the local community. The application enables a user to indicate interests, connect with the right people in their locality, and receive personalized training from talented people, as well as share their own skills with others. It is different from other platforms, as most of them lack specificity, are mostly career-focused or oriented towards learning technology, and fail to establish personal connections. Our platform also rewards mentors in credits, which can then be used to schedule a session as a mentee. Thus, user retention is maintained via a continuous sharing of knowledge and skills. The user is always informed of their progress and activity on the platform with the help of lucid data visualizations.
## How we built it
We built our application using React and Firebase. We used Nivo for data visualization. Firebase was also used for hosting the application and user authentication.
## Challenges we ran into
Being newbies in full-stack development, designing and stitching things together was the initial challenge. As we progressed, we faced gradual impediments and questions that led to more iterations than expected.
## Accomplishments that we're proud of
We are proud of the impact the application can have on society. With SkillEd, we present a platform which brings a personal touch to knowledge sharing in this world where everyone is desperately looking into a black mirror. We believe that easier access to education and knowledge sharing is important for any society to thrive. With our application, we strive to bring technology to education and mentorship for a better future.
## What we learned
We learned a lot of things while building this app:
1. Web app development
2. Hosting web apps
3. Front- and back-end development
4. And definitely Karate!
## What's next for SkillEd
Our goal is to break down obstacles to non-traditional skills-based education, reinvigorate traditional educational platforms by promoting skill diversification, and support mentorship and networking between community members. The journey doesn't end here. We aim to take SkillEd to another level and get it running in production. There are a bunch of things that we would like to focus on:
* Develop a system in which users can transact with SkillEd credits * Integration with GeoSpatial API to discover local mentors/mentees easily * Intelligent mentor discovery with machine learning * Recommendation system for skills, venues, and people * Android and iOS applications * Integration with Google calendar * Recommend resources (Amazon marketplace)
## Inspiration 🌈
Our team has all experienced the struggle of jumping into a pre-existing codebase and having to work out how everything functions before starting to add our own changes. This can be a daunting task, especially when commit messages lack detail or context. We also know that when it comes time to push our changes, we often gloss over the commit message to get the change out as soon as possible, helping neither future collaborators nor even our future selves. We wanted to create a web app that allows users to better understand the journey of the product, allowing them to comprehend previous design decisions and see how a codebase has evolved over time. GitInsights aims to bridge the gap between hastily written commit messages and clear, comprehensive documentation, making collaboration and onboarding smoother and more efficient.
## What it does 💻
* Summarizes commits, tracks individual files in each commit, and suggests more accurate commit messages
* Automatically suggests tags for commits, with the option for users to add their own custom tags for further sorting of data
* Provides a visual timeline of user activity through commits, across all branches of a repository
* Allows filtering commit data by user, highlighting the contributions of individuals
## How we built it ⚒️
The frontend is developed with Next.js, using TypeScript and various libraries for UI/UX enhancement. The backend uses Express.js, which handles our API calls to GitHub and OpenAI. We used Prisma as our ORM to connect to a PostgreSQL database for CRUD operations. For authentication, we utilized GitHub OAuth to generate JWT access tokens, securely accessing and managing users' GitHub information. The JWT is stored in cookie storage and sent to the backend API for authentication. We created a GitHub application that users must add to their accounts when signing up. This allowed us to authenticate on the backend not only as our application, but also as the end user who grants access to the app.
## Challenges we ran into ☣️☢️⚠️
Originally, we wanted to use an open-source LLM, like LLaMA, since we were parsing through a lot of data, but we quickly realized it was too inefficient, taking over 10 seconds to analyze each commit message. We also learned to use new technologies like D3.js, the GitHub API, and Prisma; honestly, nearly everything was new to at least one of us.
## Accomplishments that we're proud of 😁
The user interface is so slay, especially the timeline page. The features work!
## What we learned 🧠
Running LLMs locally saves you money, but LLMs require lots of computation (wow) and are thus very slow when running locally.
## What's next for GitInsights
* Filtering by tags, plus more advanced filtering and visualizations
* Adding webhooks to the GitHub repository to enable automatic analysis and real-time changes
* Implementing CRON background jobs, especially for the analysis the application needs to do when it first signs on a user, possibly with RabbitMQ
* Creating native .gitignore files to refine the summarization process by ignoring files unrelated to development (e.g., package.json, package-lock.json, `__pycache__`)
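A compressed sketch of the analysis loop described above: pull a repo's commits from the GitHub REST API, then ask an LLM for a clearer message. The model call uses the pre-1.0 OpenAI Python client; the token, repo, and prompt wording are placeholders (the app itself does this from its Express backend):

```python
# Illustrative only: fetch commits and suggest a better commit message.
import requests
import openai

openai.api_key = "sk-..."  # placeholder

def fetch_commits(owner: str, repo: str, token: str) -> list[dict]:
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits",
        headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json()

def suggest_message(original: str, diff_summary: str) -> str:
    result = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Original commit message: {original}\n"
                              f"Changes: {diff_summary}\n"
                              "Write a clearer one-line commit message:"}])
    return result["choices"][0]["message"]["content"]
```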
losing
## Inspiration
Growing up with consoles and controllers, members of the Sculptic team have always felt more immersed in a world with a little bit of shakiness. We wanted to integrate haptic feedback (more specifically, vibration) into the controllers of the future - like Leap Motion's virtual hands. Sculptic has the ability to make a VR experience even more immersive.
## What it does
Sculptic is a glove-like device worn over the hand. It has vibration motors at key points over the fingers and palm, along with an LED for additional effect. It provides a unique and varied set of haptic feedback to the user, which is fully programmable, customizable, and more versatile than any controller.
## How we built it
We used a Particle Photon board so that our game, programmed in Unity, could send POST requests using the REST API and change the state of the Sculptic. This is what allows us to create an integrated gaming/interactive VR experience straight from Unity.
## Challenges we ran into
The Particle Photon board occasionally has connection issues, and seems to relish its failures.
## Accomplishments that we're proud of
We have managed to integrate a wide variety of different technologies into one working, complete project, and we've learned a lot throughout the process!
## What we learned
We've learned about interesting and useful connection APIs, as well as tidbits of game design and firmware algorithms. We've also learned how to work together over the course of a long weekend with minimal rest!
## What's next for Sculptic
We don't know if Sculptic will ever become a commercial success, but with the Rabbit Hole VR club and their hardware, we here at the Sculptic team intend to spend a good number of nights in Lab 64 making and playing truly interactive games with the Sculptic!
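The Unity game talks to the Photon through the Particle cloud's REST endpoint for calling device functions; the same request expressed in Python looks like the sketch below, where the device ID, token, `vibrate` function name, and argument format are placeholders for whatever the firmware actually exposes:

```python
# Illustrative only: call a cloud-exposed Particle function over REST.
import requests

def set_vibration(device_id: str, token: str, pattern: str) -> None:
    resp = requests.post(
        f"https://api.particle.io/v1/devices/{device_id}/vibrate",
        data={"access_token": token, "arg": pattern})
    resp.raise_for_status()

# e.g. buzz hypothetical fingertip motors 1 and 3 at full strength:
set_vibration("0123456789abcdef", "my-token", "1:255,3:255")
```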
## Inspiration
The inspiration for HANDAR came from observing the challenges people face when providing remote assistance, especially in scenarios requiring precise instructions and guidance. We envisioned a tool that could make remote help as effective and intuitive as being physically present.
## What it does
HANDAR is an interactive AR video call app that detects and visualizes 3D hands and objects in real time. It allows users to provide remote assistance with added annotations, making instructions clear and actionable.
## How we built it
We built HANDAR using Unity and integrated WebRTC for streaming capabilities. We utilized MediaPipe for hand and object detection, and Unity's AR Toolkit for overlaying 3D models in the video feed. Our team collaborated on designing an intuitive user interface and seamless interaction flow.
## Challenges we ran into
One of the main challenges was achieving real-time detection and rendering of 3D hands without significant latency. Integrating WebRTC with Unity and ensuring stable streaming quality was another hurdle. Additionally, creating an intuitive annotation system that works smoothly in an AR environment required careful design and testing.
## Accomplishments that we're proud of
We are proud of successfully implementing real-time 3D hand detection and AR annotations, which significantly enhance the remote assistance experience. Our seamless integration of WebRTC for video streaming in Unity was a major technical achievement.
## What we learned
Throughout the development of HANDAR, we gained deeper insights into AR technology and real-time video processing. We learned about the complexities of integrating different technologies and optimizing performance for a smooth user experience. Collaborative problem-solving and iterative testing were crucial to our success.
## What's next for HANDAR
Next, we aim to refine HANDAR's user interface and expand its capabilities by adding more interactive tools and features. We plan to conduct user testing to gather feedback and improve the app's usability. Exploring potential integrations with other platforms and expanding the app's reach to different industries are also on our roadmap.
## What it does
ColoVR is a virtual experience allowing users to modify their environment with colors and models. It's a relaxing, low-poly experience that is the first of its kind to bring an accessible 3D experience to the Google Cardboard platform. It's a great way to become a creative in 3D without dropping a ton of money on VR hardware.
## How we built it
We used Google Daydream and extended the demo scene to our liking. We used Unity and C# to create the scene and handle all the user interactions. We implemented painting and dragging objects around the scene using raycasting. We colored the scene by changing vertex colors. We also created a palette that lets you change tools, change colors, and insert meshes. We hand-modeled the low-poly meshes in Maya.
## Challenges we ran into
The main challenge was finding a proper mechanism for coloring the geometry in the scene. We first tried to paint the mesh face by face, but we had no way of accessing the entire face from the way the Unity mesh was set up. We then looked into an asset that would help us paint directly on top of the texture, but it ended up being too slow. Textures tend to render too slowly, so we ended up using a vertex shader that interpolates between vertex colors, allowing real-time painting of meshes. We implemented it so that we change all the vertices belonging to the triangle of a hit face; the vertex shader was the fastest way we could render real-time painting and emulate the painting of a triangulated face. Our second-biggest challenge was using raycasting to find the object we were interacting with. We had to navigate the Unity API and get acquainted with its physics raycaster and mesh colliders to properly implement all interaction with the cursor. Our third challenge was making use of the controller that Google Daydream supplied us with: it was great, but very sandboxed, and we were limited in terms of functionality. We had to find a way to change all colors, insert different meshes, and interact with the objects with only two available buttons.
## Accomplishments that we're proud of
We're proud of the fact that we were able to get a clean, working project that was exactly what we pictured. People seemed to really enjoy the concept and interaction.
## What we learned
How to optimize for a certain platform - in terms of UI, geometry, textures, and interaction.
## What's next for ColoVR
Hats, and an interactive collaborative space for multiple users. We want to be able to host the scene and make it accessible to multiple users at once, and store the state of the scene (maybe even turn it into an infinite world) where users can explore what past users have done and see the changes other users make in real time.
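The painting trick, restated outside Unity as a hedged illustration: a raycast yields the index of the hit triangle, and painting writes the brush color into that triangle's three vertex-color slots, so the shader's per-fragment interpolation produces a solid-looking face. NumPy stands in here for Unity's Mesh arrays:

```python
# Illustrative model of per-triangle vertex-color painting.
import numpy as np

vertex_colors = np.ones((8, 3))                # one RGB color per vertex, all white
triangles = np.array([[0, 1, 2], [2, 3, 0]])   # each row indexes three vertices

def paint_triangle(hit_triangle: int, brush_rgb) -> None:
    # Set all three vertices of the hit triangle to the brush color, so the
    # interpolated fragment colors across that face come out uniform.
    for vertex_index in triangles[hit_triangle]:
        vertex_colors[vertex_index] = brush_rgb

paint_triangle(0, (0.9, 0.2, 0.2))  # paint the first triangle red
print(vertex_colors[:4])
```

Note the trade-off this implies: triangles that share vertices bleed color into their neighbors, which is part of why per-face painting on a shared-vertex mesh was hard in the first place.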
losing
## Inspiration We were very inspired by the chaotic innovations happening all around the world and wanted to create something that's just a little stupid, silly, and fun using components that are way too overspecced for the use case. ## What it does It is intended to use electromagnetic properties and an intense amount of power to fuel a magnetic accelerator that accelerates objects towards some destination. We then switched to a coil gun model and, using a targeting system made up of a turret and a TFLite model, we aim for stupid objects with the power of AI. ## How we built it We attempted to create an electromagnet and fiddled around for a while with the Qualcomm HDK. ## Challenges we ran into We severely underestimated how short 24 hours is, and many of the components we intended to add do not yet function as intended. ## Accomplishments that we're proud of We mixed a huge aspect of electrophysics with modern day cutting edge engineering to create something unique and fun! ## What we learned Time is short, and this is definitely a project we want to continue into the future. ## What's next for Magnetic Accelerator To develop it properly and aim properly in the future (summer 2023)
## Inspiration Mood tracking apps are known to help many individuals dealing with mental health struggles understand and learn about their mood changes, improve their mood, and manage their mental illnesses (AMIA Annu Symp Proc. 2017; 2017: 495–504). However, many people have trouble fitting the task of charting mood into their daily routine -- recording daily mood seems unimportant, many individuals struggling with their mental health have low motivation to accomplish this kind of mundane task, it's a small task that's easy to push off indefinitely, and it serves as a daily reminder of their difficulties or disabilities. ## What it does Pilot encourages individuals to fill out their mood tracker, and it helps them do so in an efficient, natural, and engaging way. And while many suffering from low mood find themselves stuck in self-propagating cycles of negative thinking and isolation, Pilot strives to boost mood more effectively than current trackers by actively listening and responding, and by introducing people to unique places and cultures around the world! Pilot serves as a browser landing page, and each day a unique picture of a mystery location encourages users to engage with the app in order to identify where the image was taken. Users are prompted to discuss the events of the day out loud, though they can enter text if they wish -- this style of input is more efficient and natural than the process required by most mood trackers, and encourages comprehensiveness and candor. Pilot then responds to the individual's sentiment and prompts them to reflect on their current mood further with a probing question. Once the user has responded to these questions, Pilot suggests how the user may be feeling on three different scales: mood, anxiety, and cynicism, allowing the user to adjust as they see fit. Once this step is complete, the destination is revealed and the user can explore the location they were matched to for that day. Pilot also provides users with a clean environment with which to view their mood and mental health history. ## How I built it We used Watson for sentiment analysis, Rev for speech-to-text, Amadeus to identify sights, and Mapbox and Wikipedia to display information about locations. ## Accomplishments that I'm proud of We're extremely proud of all the features we were able to pack into Pilot in 24 hours.
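The Watson sentiment step described above might look roughly like the following in Python. This is a hedged sketch using IBM's `ibm-watson` SDK for Natural Language Understanding; the credentials and version date are placeholders, and Pilot's exact service configuration isn't specified in the writeup:

```python
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    Features, EmotionOptions, SentimentOptions)
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

nlu = NaturalLanguageUnderstandingV1(
    version="2022-04-07",                       # placeholder API version
    authenticator=IAMAuthenticator("API_KEY"))  # placeholder credentials
nlu.set_service_url("SERVICE_URL")

transcript = "Today was rough, but the afternoon walk helped a little."
result = nlu.analyze(
    text=transcript,
    features=Features(sentiment=SentimentOptions(),
                      emotion=EmotionOptions()),
).get_result()

print(result["sentiment"]["document"]["score"])   # -1.0 (negative) to 1.0
print(result["emotion"]["document"]["emotion"])   # sadness, joy, fear, ...
```

Scores like these could then drive the suggested positions on the mood, anxiety, and cynicism scales.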
## Inspiration Sweet Tweetment lets you check up on your friends who need it the most. We were inspired to do this to solve two major problems: 1) In the digital age, we're following so many people on social media sites such as Twitter that we're unable to manually keep up with everyone we care about 2) Mental health issues are on the rise (especially on college campuses and among youth), and bullying has become very prevalent on social media We wanted to solve these problems by automating the process of checking up on your friends. ## What it does Sweet Tweetment finds all of the people you follow, and searches for whether you should reach out to them because they've been suffering in some way or are at risk in the future. Our application uses sentiment analysis to read in your friends' tweets, classify them, and suggest a means to reach out to those in need. We use machine learning to make our recommendations predictive and reactive, so you can find friends at risk based on current trends, or friends who have already experienced some traumatic events. Our app can also identify whether your friends are being bullied, and can direct you to tools to help them to #HackHarrassment. You can even create an account, and tweet your friends to check up on them right from the website! ## How we built it The frontend is built with AngularJS, HTML, and CSS. The user can log in to their Twitter account through our site, which uses the Twitter API and OAuth to return a list of all the people they are following. They can select all friends who they would like to check up on (for harassment and well-being). The backend is built in Node Express, and uses the Twitter API to get lists of followers and recent tweets. This data is sent to IBM Watson's (Bluemix) emotional analysis platform (Alchemy API) to determine how sad or aggressive the messages were. Based on the emotional analysis, each tweet is scored, and this information is fed through a machine learning linear regression framework to predict how sad their subsequent posts will be (i.e. the future emotional state of the user), and gather how sad posts have been on average previously. A similar process is used to determine online harassment. The tweets that are sent to their friends are emotionally analyzed for the anger sentiment, and if an overwhelming amount of their tweets are classified as being “angry or aggressive” (based on the emotional analysis results), the friend is classified as being a target of online bullying. The app is hosted using Linode. ## Challenges we ran into You can run the same code and have it stop working. ## Accomplishments that we're proud of -Empowering people to help their friends in need -Helping create new avenues to find and stop bullying ## What we learned You can run the same code and have it stop working! ## What's next for tweethacks We're hoping to have users log into the website once, and send them emails whenever any of their friends appears to be struggling in any way to make Sweet Tweetment even more user friendly!
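The "score each tweet, then regress to predict future sadness" idea is simple to sketch. A hedged illustration in Python with scikit-learn (the data and thresholds are hypothetical; the actual app does this in Node):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-day average sadness scores (0..1) from the emotional
# analysis of one friend's tweets, oldest to newest.
sadness = np.array([0.21, 0.25, 0.31, 0.30, 0.42, 0.47, 0.55])
days = np.arange(len(sadness)).reshape(-1, 1)

# Fit a trend line and extrapolate one day ahead.
model = LinearRegression().fit(days, sadness)
predicted_next = model.predict([[len(sadness)]])[0]

# Flag the friend if posts have been sad on average, or are trending sadder.
if sadness.mean() > 0.4 or predicted_next > 0.5:   # hypothetical thresholds
    print("Consider checking in on this friend.")
```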
partial
## Inspiration We were inspired by Xcode's Swift playground, which evaluates code live in a sidebar. ## What it does This is an extension for Atom that evaluates Python code and presents it live next to the corresponding line in the code editor. Thus, the user can see how every variable is changed as the user is typing the program. Users can keep typing and catch whatever mistakes they have committed without having to run the code. This benefits students of computer science as well as more advanced users who would like to prototype quickly. ## How we built it There is a Python backend with our custom Python interpreter that evaluates expressions and displays user-friendly output. On the front-end, we created an Atom package. We used JavaScript ES6 and Atom APIs. ## Challenges we ran into The main challenge was building the interpreter. It was difficult to do this since no interpreter out there already did what we needed. We had to parse the code and prevent all sorts of errors involved in running code within code. It was also challenging to figure out the Atom APIs to get a user-friendly and very responsive interface. ## Accomplishments that we're proud of We got some logic in the interpreter, and it is somewhat useful. It is also extensible and can be built upon by others. ## What we learned We learned about interacting between a Python program and an Atom extension. We learned Atom APIs and also some basic Python parsing and interpreting. ## What's next for atom-playground We would like to add graphics to understand loops and support for recursion analysis.
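The extension's interpreter is custom, but the core idea of re-running code and capturing per-line variable state can be sketched with Python's built-in `sys.settrace`. A minimal illustration (an assumption for clarity, not the extension's actual backend):

```python
import sys

def trace_locals(source: str):
    """Run `source` and record the visible variables at each executed line."""
    snapshots = []

    def tracer(frame, event, arg):
        # Only record lines from the user's code, not library internals.
        if event == "line" and frame.f_code.co_filename == "<playground>":
            state = {k: v for k, v in frame.f_locals.items()
                     if not k.startswith("__")}
            snapshots.append((frame.f_lineno, state))
        return tracer

    code = compile(source, "<playground>", "exec")
    sys.settrace(tracer)
    try:
        exec(code, {})
    finally:
        sys.settrace(None)
    return snapshots

# Each tuple is (line number, variables visible as that line starts).
for lineno, state in trace_locals("x = 1\nfor i in range(3):\n    x += i"):
    print(lineno, state)
```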
## Inspiration As students and developers, we often struggle with finding the information we need online. We created nSWER as an ‘answer’ to that problem in the form of a Chrome extension that will get us what we need from a website in the form of a user-friendly “chatbot” style Q-and-A UI. ## What it does nSWER parses a website's data and allows the user to ask any question about it, which is then processed and passed through OpenAI's API to generate a unique and accurate answer. ## How we built it We built nSWER's (very beautiful) front end with HTML and CSS. For its backend, we used Express.js and JavaScript to handle information passed from the front end, as well as tokenize it and pass it to OpenAI's API. nSWER's digital assets were created using Canva or sourced from public domain or copyright-free assets online. ## Challenges Encountered: One significant challenge we faced was ensuring effective communication between each component within our Chrome extension, particularly when utilizing Chrome's API for scraping web content. This was especially tricky for Single Page Applications (SPAs). Additionally, our team's unfamiliarity with the backend technology Flask presented hurdles. Midway through the hackathon, we decided to switch to Express, which required rapid adaptation and learning. ## Achievements: Despite these obstacles, we are incredibly proud of the final outcome. Our extension not only functions as intended, but we also take great pride in the originality and practicality of our idea. It's a solution that we see ourselves using daily. To address the challenge of scraping SPAs, we innovated by leveraging hashcodes. This approach allowed us to dynamically interact with SPA content, making our scraping process much more effective and versatile. ## What we learned We became familiarized with setting up communication between the frontend, the backend, and OpenAI's API. We also learned to build a Chrome extension using only native technologies such as HTML, JavaScript, and CSS. We also learned the importance of delegating tasks in an efficient and effective manner to meet project deadlines. ## What's next for nSWER We would love to implement further accessible resources into nSWER such as "make this page visually accessible for me" to change the way the website is formatted. We would also love to be able to implement OpenAI's Sora to explain concepts visually through the form of a tutorial video to accommodate visual learners. An optimization we would love to work on is improved parsing of the webpage to prioritize sections to pass through OpenAI's API.
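nSWER's backend is Express.js, but the parse-then-ask flow translates directly. A hedged Python sketch of the same idea (the model name and the naive character-based truncation are illustrative assumptions; the real backend tokenizes properly):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_from_page(page_text: str, question: str) -> str:
    # Crude stand-in for real tokenization: keep the prompt within limits.
    context = page_text[:12000]
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Answer using only the provided page content."},
            {"role": "user",
             "content": f"Page content:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```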
## Inspiration While tackling our university courses, we were faced with the task of implementing (and debugging) various data structures. We found that using traditional debuggers or print statements obscures the structure of the data, making testing and debugging code tedious and unintuitive. We were determined to find a way to take advantage of the inherent properties of data structures. With Visually Study Yo Code, you can see the data structure come together as you step through your code. ## What it does Visually Study Yo Code gives you the choice of using a graphical depiction of your variables to debug your JavaScript code. By right clicking a variable in the editor, you can open a tab which displays nodes to depict trees, linked lists and custom data structures. Nodes are added, deleted or modified as you step through your code in the debugger, making it easy to find and trace down errors in your algorithms. ## How we built it We created and deployed a Visual Studio Code Extension using the Extension API to let us integrate our new functionality into the editor. The graphical representation is constructed using canvas in HTML and displayed to the user by using a webview. To access the data, we interfaced with Visual Studio Code's Debug Adapter and parsed the information about the variables into a JSON object. ## Challenges we ran into The functionality of our extension was limited by what the Visual Studio Code Extension API provides. While we originally planned to add a command directly to the debugging menu, the ability to add new commands was constrained to the editor. Furthermore, accessing the editor's colour themes directly was difficult, so we decided to limit ourselves to supporting a dark theme. ## Accomplishments that we're proud of We are proud to have completed a project that we would like to use in the future. In contrast to most hackathon projects, we feel like Visually Study Yo Code benefits us directly by letting us create more robust code more quickly. ## What we learned Since we had not made an extension before, we learned how to use the Visual Studio Code Extension API. Since Visually Study Yo Code is focused on debugging, we learned about the debugging architecture used by Visual Studio Code. ## What's next for Visually Study Yo Code Visually Study Yo Code currently only supports debugging in JavaScript. In the future, we will increase the scope of our extension to include other programming languages. Furthermore, we plan to provide support for custom themes for our graphs. By using SVG, we can make our webviews interactive to make debugging even more effective.
losing
## Inspiration Our inspiration for this project came from recent research showing that models can perform the work of data engineers and provide accurate tools for analysis. We realized that such work is impactful in various sectors, including finance, climate change, medical devices, and much more. We decided to test our solution on various datasets to see the potential in its impact. ## What it does Stratify lets users query a SQL database in plain English through a chatbot, handles multi-step questions with sequential reasoning, and visualizes the results on a dynamic dashboard. ## How we built it For our project, we developed a sophisticated query pipeline that integrates a chatbot interface with a SQL database. This setup enables users to make database queries effortlessly through natural language inputs. We utilized SQLAlchemy to handle the database connection and ORM functionalities, ensuring smooth interaction with the SQL database. To bridge the gap between user queries and database commands, we employed LangChain, which translates the natural language inputs from the chatbot into SQL queries. To further enhance the query pipeline, we integrated Llama Index, which facilitates sequential reasoning, allowing the chatbot to handle more complex queries that require step-by-step logic. Additionally, we added a dynamic dashboard feature using Plotly. This dashboard allows users to visualize query results in an interactive and visually appealing manner, providing insightful data representations. This seamless integration of chatbot querying, sequential reasoning, and data visualization makes our system robust, user-friendly, and highly efficient for data access and analysis. ## Challenges we ran into Participating in the hackathon was a highly rewarding yet challenging experience. One primary obstacle was integrating a large language model (LLM) and chatbot functionality into our project. We faced compatibility issues with our back-end server and third-party APIs, and encountered unexpected bugs when training the AI model with specific datasets. Quick troubleshooting was necessary under tight deadlines. Another challenge was maintaining effective communication within our remote team. Coordinating efforts and ensuring everyone was aligned led to occasional misunderstandings and delays. Despite these hurdles, the hackathon taught us invaluable lessons in problem-solving, collaboration, and time management, preparing us better for future AI-driven projects. ## Accomplishments that we're proud of We successfully employed sequential reasoning within the LLM, enabling it to not only infer the next steps but also to accurately follow the appropriate chain of actions that a data analyst would take. This advanced capability ensures that complex queries are handled with precision, mirroring the logical progression a professional analyst would utilize. Additionally, our integration of SQLAlchemy streamlined the connection and ORM functionalities with our SQL database, while LangChain effectively translated natural language inputs from the chatbot into accurate SQL queries. We further enhanced the user experience by implementing a dynamic dashboard with Plotly, allowing for interactive and visually appealing data visualizations. These accomplishments culminated in a robust, user-friendly system that excels in both data access and analysis. ## What we learned We learned skills in integrating various APIs, along with the sequential process of actually being a data engineer and analyst, through the implementation of our agent pipeline. 
## What's next for Stratify For our next steps, we plan to add full UI integration to enhance the user experience, making our system even more intuitive and accessible. We aim to expand our data capabilities by incorporating datasets from various other industries, broadening the scope and applicability of our project. Additionally, we will focus on further testing to ensure the robustness and reliability of our system. This will involve rigorous validation and optimization to fine-tune the performance and accuracy of our query pipeline, chatbot interface, and visualization dashboard. By pursuing these enhancements, we strive to make our platform a comprehensive, versatile, and highly reliable tool for data analysis and visualization across different domains.
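A minimal sketch of the LangChain-over-SQLAlchemy pipeline described in "How we built it" above. The connection string, model choice, and package layout are assumptions (imports have moved around across LangChain versions), so treat this as an illustration rather than Stratify's exact code:

```python
from langchain_openai import ChatOpenAI
from langchain_community.utilities import SQLDatabase
from langchain.chains import create_sql_query_chain

# SQLAlchemy handles the actual connection under the hood.
db = SQLDatabase.from_uri("sqlite:///sales.db")        # hypothetical database
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Translate a natural-language question into SQL, then execute it.
chain = create_sql_query_chain(llm, db)
query = chain.invoke({"question": "What were the top 5 products by revenue?"})
print(query)          # the generated SQL
print(db.run(query))  # the result rows, ready to feed a Plotly figure
```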
## 💡Inspiration * In Canada, every year there are 5.3 million people who feel they need some form of help for their mental health! But ordinary therapy is unfortunately boring and might be ineffective :( Having to deal with numerous patients every day, it might also be difficult for a mental health professional to build a deeper connection with their patient that allows the patient to heal and improve mentally, and in turn physically. * Therefore, we built TheraVibe.VR! A portable professional that is tailored to understand you! TheraVibe significantly improves patients' mental health by gamifying therapy sessions so that patients can heal wherever and with whomever they can imagine! ## 🤖 What it does * TheraVibe provides professional psychological advisory powered by Cohere's API with the assistance of a RAG! * It is powered by Starknet for its private and decentralized approach to storing patient information! * To aid the effort of helping more people, TheraVibe also uses Starknet to reward patients with cryptocurrency in a decentralized network, incentivizing consistency in attending our unique "therapy sessions"! ## 🧠 How we built it * With the base of C# and the Unity Engine, we used blockchain technology via the beautiful Starknet API to create and deploy smart contracts, ensuring safe storage of a "doctor's" evaluation of a patient's health condition as well as blockchain transactions made to the patient in a gamified manner to incentivize future participation and maximize healing! * For the memory import NextJS web app, we incorporated Auth0 for the security of our users and hosted it with a GoDaddy domain! * The verbal interaction of the therapist and the user is powered by ElevenLabs and AssemblyAI! The cognitive process of the therapist is powered by Cohere's API and a RAG! * To implement the VR project, we developed animation in Unity with C#, and used the platform to build and run our VR project! ## 🧩 Challenges we ran into * Auth0 helped reveal a cache problem in our program, and so we had to deal with server-side rendering issues in Next.js last-minute! * We managed to deal with a support issue when hosting the domain name on GoDaddy and linking it to our Vercel deployment! * Deploying the C# Unity app on the Meta Quest 2 took 24 hours of our development! ## 🏆 Accomplishments that we're proud of * First time deploying on the Meta Quest 2 and building a project with Unity! * Integrating multiple APIs together, like AssemblyAI for speech transcription, ElevenLabs for speech generation, and Cohere, and implementing a RAG through a complex pipeline with minimal latency! ## 🌐What we learned * How Auth0 requires the HTTPS protocol (always fascinating how we don't have to reinvent the wheel for authenticating users!) * Our first time hosting on GoDaddy (especially a cool project domain!) * Building and running production-fast pipelines that have minimal latency to maximize user experience! ## 🚀What's next for TheraVibe.VR * In the exciting journey ahead, TheraVibe.VR aspires to revolutionize personalized therapy by reducing latency, expanding our immersive scenes, and introducing features like virtual running. Our future goals include crafting an even more seamless and enriching experience and pushing the boundaries of therapeutic possibilities for all our users.
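The "Cohere plus RAG" step might look something like this in Python. A hedged sketch with the `cohere` SDK: the retrieved snippets, key, and prompt are placeholders, and the exact retrieval pipeline TheraVibe uses isn't described:

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

# Snippets retrieved for the current topic; grounding the reply in these
# documents is the RAG step.
retrieved = [
    {"title": "Grounding exercises",
     "snippet": "Box breathing: inhale 4s, hold 4s, exhale 4s, hold 4s."},
]

response = co.chat(
    message="I've been feeling anxious before every workday.",
    documents=retrieved,
)
print(response.text)  # in TheraVibe, this text is voiced via ElevenLabs
```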
## Inspiration The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators. Utilizing advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient. ## What it does Wise Up is a website that takes many different file formats, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages in a text and figure out which ones are most likely to contain the answer to a question that the user sends. It can also recursively summarize the file at different levels of compression. ## How we built it With blood, sweat and tears! We used many tools offered to us throughout the challenge to simplify our life. We used JavaScript, HTML and CSS for the website, and used it to communicate with a Flask backend that can run our Python scripts involving API calls and such. We have API calls to OpenAI text embeddings, to Cohere's xlarge model, to GPT-3's API, to OpenAI's Whisper speech-to-text model, and several modules for getting an mp4 from a YouTube link, text from a PDF, and so on. ## Challenges we ran into We had problems getting the backend on Flask to run on an Ubuntu server, and later had to instead run it on a Windows machine. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links ended up taking more time than expected, and finally, since the latency of sending information back and forth from the front end to the backend would lead to a worse user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in transferring code from Python to JavaScript. ## Accomplishments that we're proud of Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 based on extracted information to formulate the answer, we believe we likely equal the state of the art in the task of quickly analyzing text and answering complex questions about it, and the ease of use for many different file formats makes us proud that this project and website can be useful for so many people so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and costs pennies to run. Moreover, we have added an identification system (users signing up with a username and password) to ensure that a specific account is capped at a certain usage of the API, which is at our own cost (pennies, but we wish to avoid it becoming many dollars without our awareness). ## What we learned As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and for greater results. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that has saved us a few different times is using JSON for data transfer, and AWS services to store MBs of data very cheaply. Another thing we learned is that, unfortunately, as time goes on, LLMs get bigger and so sometimes much, much slower; API calls to GPT-3 and to Whisper are often slow, taking minutes for 1000+ page textbooks. 
## What's next for Wise Up What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is to summarize text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or attempt for the website to go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, AWS, and the CPU running Whisper.
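The page-ranking core of Wise Up reduces to embeddings plus cosine similarity. A hedged sketch with OpenAI's Python SDK (model name and sample pages are placeholders):

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002",
                                    input=texts)
    return np.array([d.embedding for d in resp.data])

pages = ["Page 1 text ...", "Page 2 text ...", "Page 3 text ..."]
page_vecs = embed(pages)
q_vec = embed(["What does chapter 2 say about osmosis?"])[0]

# Cosine similarity ranks the pages most likely to contain the answer;
# the top pages are then handed to GPT-3.5 to formulate the reply.
scores = page_vecs @ q_vec / (
    np.linalg.norm(page_vecs, axis=1) * np.linalg.norm(q_vec))
print(np.argsort(scores)[::-1][:3])
```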
losing
## Inspiration🧠 Even with today’s cutting edge technology and leading scientific research that helps us develop, advance and improve in everyday life, those with rare genetic diseases are still left behind. Living with a life-threatening condition with little to no cure, considering that “less than 5% of more than 7,000 rare diseases believed to affect humans currently have an effective treatment”, is already frustrating, but when doctors aren’t knowledgeable/experienced enough to treat such cases, or when patients have only themselves to rely on to search for any experimental drugs, the everyday struggle becomes a nightmare to deal with. But what’s even more tragic is that despite there being “300 million people worldwide [suffering a rare disease], [where] approximately 4% of the total world population is affected by [one] at any given time”, people still have to go through the exhausting trial and error process of finding a cure/treatment, EVEN when in several cases they share exactly the same disease! Shockingly enough, there isn’t ANY collection of data or analysis being shared on what medications/treatments work for different people, and which ones help or harm them! **Citation** Kaufmann, P., Pariser, A.R. & Austin, C. From scientific discovery to treatments for rare diseases – the view from the National Center for Advancing Translational Sciences – Office of Rare Diseases Research. Orphanet J Rare Dis 13, 196 (2018). <https://doi.org/10.1186/s13023-018-0936-x> Wakap SN, Lambert DM, Alry A, et al. Estimating cumulative point prevalence of rare diseases: analysis of the Orphanet database [published online September 16, 2019]. Eur J Hum Genet. doi: 10.1038/s41431-019-0508-0. <https://ojrd.biomedcentral.com/articles/10.1186/s13023-018-0936-x> ## What it does 💻 For our project, we have tried our best to match Varient’s goal of helping develop a diagnosis assistance tool for the rare disease population (with genetic mutations), so that it becomes a crucial tool for finding appropriate drug treatments, providing accurate and up to date information, while also facilitating support in decision making. Our My Heroes gene assistant web app’s specific features include: the ability to select images that indicate a relevant gene in the report; generating and displaying relevant keywords, such as names of related diseases and mutated gene names; providing insights on how the related disease can be treated; and helping patients understand key information from the reports. The user interface includes: user registration/login (for authorization and account information purposes), a Dropbox/file attachment (for images), a catalog of uploads (for the usability of modifying/deleting items), a display of the labeled/annotated report, and a summary page ## How we built it 🔧 1. Used Python for the backend and machine learning component of the app. 2. Implemented pytesseract OCR to extract texts/keywords (mutation names on the report) from the images supplied with the report (image labeling), and labeled them with OpenCV. 3. Used spacy’s en\_ner\_bionlp13cg\_md (pretrained NLP model for medical report text processing) to extract relevant keywords from the texts. 4. Used the streamlit library to deploy the machine learning web app. 5. Worked with React.js for the frontend (login, signup, the navbar, settings), Firebase for user authentication and Google authentication integration, and Firestore (NoSQL database) implementation as well as storage. 6. 
We utilized Google Docs/Discord for brainstorming, and Trello for distributing and keeping track of time and tasks assigned. 7. Utilized Figma for designing and prototyping. ## Challenges we ran into 🔥 1. Familiarizing ourselves with Figma to build a complex but easy to use medical record health app. 2. We had trouble integrating the NLP model part with the frontend and ended up using Streamlit to make the backend functional. 3. Even though we were aware the machine learning part would take a significant chunk of our time, we didn't realize just how much it actually did. It also required all hands on deck, which kept us from other tasks. 4. With one of our members being a novice programmer and involved with another large scale event commitment taking place at the same time as this event, we were short one team member. 5. Another team member lacking significant experience in machine learning and related technologies resulted in a lack of cohesiveness throughout the process. 6. None of us was familiar with how to use Flask, and only one of us was familiar with REST APIs. We also had several issues integrating with the frontend (connecting APIs, sending POST requests, getting data back), and had to figure out an alternative solution by using Streamlit to display images, modify them using functions in Python, and display the new image and extracted keywords. We also had issues deploying the Streamlit app, as we kept getting errors. ## Accomplishments that we're proud of 💪 We are proud of being able to collaborate and work together despite our overall lack of experience in machine learning and the differences in previous experience among the teammates. We are also proud to have built a functional ML app and made it usable to the user, because we spent most of our time getting the NLP to work. **How to run the app** Pytesseract: for Windows, install via <https://github.com/UB-Mannheim/tesseract/wiki>; for Mac, install the Tesseract binary first. Download and install the spacy model: en\_ner\_bionlp13cg\_md via <https://allenai.github.io/scispacy/>, then pip install spacy. ## What we learned ✍️ 1. Restoring the health of patients by streamlining the process and helping doctors provide the best treatment for such specific and rare diseases. (Our app could be used as an assistant (AI assistant?) or personal record tracker or personal assistant.) 2. It facilitates universal information sharing, and keeps all the data in one place (some people might get private treatments which don't require the use of a health card, so they can input their info in this central platform for an easier, quicker, and more efficient process). ## What's next for My Heroes ✨ Integrating the machine learning app with the frontend so that the app can have actual users and a smooth, simple UI design. Improving the accessibility features of the app. We would love to see our app in the hands of our patiently waiting users as soon as possible! We hope that with its improvements, it helps provide some peace of mind, and hopefully makes life easy for them.
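A minimal sketch of the OCR-plus-NER pipeline from "How we built it" above, using pytesseract and the scispaCy model named in the writeup (the input filename is a placeholder, and the Tesseract binary must be installed separately):

```python
import cv2
import pytesseract
import spacy

# en_ner_bionlp13cg_md is distributed via the scispacy release page.
nlp = spacy.load("en_ner_bionlp13cg_md")

image = cv2.imread("report_page.png")      # placeholder report scan
text = pytesseract.image_to_string(image)  # OCR step

doc = nlp(text)
for ent in doc.ents:
    # e.g. "BRCA1 GENE_OR_GENE_PRODUCT", "carcinoma CANCER"
    print(ent.text, ent.label_)
```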
MediBot: Help us help you get the healthcare you deserve ## Inspiration: Our team went into the ideation phase of Treehacks 2023 with the rising relevance and prominence of conversational AI as a “fresh” topic occupying our minds. We wondered if and how we could apply conversational AI technology such as chatbots to benefit people, especially those who may be underprivileged or underserviced in several areas that are within the potential influence of this technology. We were brooding over the six tracks and various sponsor rewards when inspiration struck. We wanted to make a chatbot within healthcare, specifically patient safety. Being international students, we recognize some of the difficulties that arise when living in a foreign country in terms of language and the ability to communicate with others. Through this empathetic process, we arrived at a group that we defined as the target audience of MediBot: children and non-native English speakers who face language barriers and interpretive difficulties in their communication with healthcare professionals. We realized very early on that we did not want to replace the doctor in diagnosis but rather equip our target audience with the ability to express their symptoms clearly and accurately. After some deliberation, we decided that the optimal way to accomplish that using conversational AI was to implement a chatbot that asks clarifying questions to help label the symptoms for the users. ## What it does: Medibot initially prompts users to describe their symptoms as best as they can. The description is then evaluated and compared to a list of proper medical terms (symptoms) in terms of similarity. If those symptom descriptions are rather vague (they do not match the list of official symptoms very well, or they are blanket terms), Medibot asks the patients clarifying questions to identify the symptoms with the user's added input. For example, when told, “My head hurts,” Medibot will ask them to distinguish between headaches, migraines, or potentially blunt force trauma. But if the descriptions of a symptom are specific and relatable to official medical terms, Medibot asks them questions regarding associated symptoms. This means Medibot presents questions inquiring about symptoms that are known probabilistically to appear with the ones the user has already listed. To control for potential confirmation biases, the bot uses a double-blind inquiry process and avoids making an initial diagnosis. This means the bot will not tell the doctor its predictions regarding what the user has, and it will not nudge the users into confessing or agreeing to a symptom they do not experience. Instead, the doctor will be given a list of what the user was likely describing at the end of the conversation between the bot and the user. The predictions from the inquiring process are a product of the consideration of associative relationships among symptoms. Medibot keeps track of these associative relationships through cosine similarity and weight distribution after the vectorization encoding process. Over time, Medibot zones in on a specific condition (determined by the highest similarity score). The process also helps in maintaining context throughout the chat conversations. Finally, the conversation between the patient and Medibot ends in the following cases: the user needs to leave, the associative symptoms process suspects one condition much more than the others, or the user finishes discussing all the symptoms they experienced. 
## How we built it We constructed the MediBot web application in two distinct and interconnected stages, the frontend and the backend. The frontend is a mix of ReactJS and HTML. There is only one page accessible to the user, which is a chat page between the user and the bot. The page was made reactive through several styling options and the usage of states in the messages. The backend was constructed using Python, Flask, and machine learning models from OpenAI and Hugging Face. Flask was used to communicate between the various Python scripts holding the MediBot response model and the chat page in the frontend. Python was the language used to process the data, encode the NLP models and their calls, and store and export responses. We used prompt engineering through OpenAI to train a model to ask clarifying questions and perform sentiment analysis on user responses. Hugging Face was used to create an NLP model that runs a similarity check between the user's input of symptoms and the official list of symptoms. ## Challenges we ran into Our first challenge was familiarizing ourselves with virtual environments and solving dependency errors when pushing and pulling from GitHub. Each of us initially had different versions of Python and different operating systems. We quickly realized that this would hinder our progress greatly after fixing the first series of dependency issues, and started coding in virtual environments as a solution. The second great challenge was integrating the three separate NLP models into one application. This is because they are all resource intensive in terms of RAM, and we only had computers with around 12GB free for coding. To circumvent this, we had to employ intermediate steps when feeding the result from one model into the next. Finally, the third major challenge was resting and sleeping well. ## Accomplishments we are proud of First and foremost, we are proud of the fact that we have a functioning chatbot that accomplishes what we originally set out to do. Three of us in this group had never coded an NLP model, and the fourth had only coded smaller-scale ones. Thus the integration of three of them into one chatbot with a frontend and backend is something that we are proud to have accomplished in the timespan of the hackathon. Second, we are happy to have a relatively small error rate in our model. We informally tested it with varied prompts, and it performed within expectations every time. ## What we learned: This was the first hackathon for half of the team, and for 3/4 of us, it was the first time working with virtual environments and collaborating using Git. We learned quickly how to push and pull and how to commit changes. Before the hackathon, only one of us had worked on an ML model, but together we learned to create NLP models and use OpenAI and prompt engineering (credits to the OpenAI Mem workshop). This project's scale helped us understand these ML models' intrinsic moldability. Working on Medibot also helped us become much more familiar with the idiosyncrasies of ReactJS and its application in tandem with Flask for dynamically changing webpages. As mostly beginners, we experienced our first true taste of product ideation, project management, and collaborative coding environments. ## What’s next for MediBot The next immediate steps for MediBot involve making the application more robust and capable. In more detail, first we will encode the ability for MediBot to detect and define more complex language in simpler terms. 
Second, we will improve upon the initial response to allow for more substantial multi-symptom functionality. Third, we will expand upon the processing of qualitative answers from users to include information like the length of pain, the intensity of pain, and so on. Finally, after this more robust system is implemented, we will begin the training phase by speaking to healthcare providers and testing it out on volunteers. ## Ethics: Our design aims to improve patients’ healthcare experience for the better and bridge the gap between a condition and getting the desired treatment. We believe expression barriers and technical knowledge should not be missing stones in that bridge. The ethics of our design therefore hinge on providing quality healthcare for all. We intentionally stopped short of providing a diagnosis with Medibot because of the following ethical considerations: * **Bias Mitigation:** Whatever diagnosis we provide might induce unconscious biases like confirmation or availability bias, affecting the medical provider’s ability to give a proper diagnosis. It must be noted, however, that Medibot is capable of producing a diagnosis. Perhaps Medibot can be used in further research to ensure the credibility of AI diagnoses by checking its prediction against the doctor’s after a diagnosis has been made. * **Patient trust and safety:** We’re not yet at the point in our civilization’s history where patients are comfortable getting a diagnosis from an AI. Medibot’s intent is to help nudge us a step down that path by seamlessly, safely, and without negative consequence integrating AI within the more physical, intimate environments of healthcare. We envision Medibot in these hospital spaces, helping users articulate their symptoms better without fear of getting a wrong diagnosis. We’re humans; we like when someone gets us, even if that someone is artificial. However, the implementation of AI for pre-diagnosis still raises many ethical questions and considerations: * **Fairness:** Use of Medibot requires a working knowledge of the English language. This automatically skews its accessibility. There are still many immigrants for whom the questions, as simple as we have tried to make them, might still be too much. This is a severe limitation on our ethic of assisting these people. A next step might include introducing further explanation of troublesome terms in their language (note: the process of pre-diagnosis will remain in English; only troublesome terms that the user cannot understand in English may be explained in a more familiar language. This way we further build patients’ vocabulary and help their familiarity with English). There are also accessibility concerns, as hospitals in certain regions or economic strata may not have the resources to incorporate this technology. * **Bias:** We put severe thought into bias mitigation, both on the side of the doctor and the patient. It is important to ensure that Medibot does not lead the patient into reporting symptoms they don’t necessarily have or induce availability bias. We aimed to circumvent this by asking questions seemingly randomly from a list of symptoms generated based on our sentence similarity model. This avoids leading the user in just one direction. However, this does not eradicate all biases, as associative symptoms are hard to mask from the patient (i.e., a patient may think of chills if you ask about cold), so this remains a consideration. 
* **Accountability:** Errors in symptom identification can be tricky to detect, making it very hard for the medical practitioner to know when the symptoms are a true reflection of the actual patient’s state. Who is responsible for the consequences of wrong pre-diagnoses? It is important to establish clear systems of accountability and checks for detecting and improving errors in MediBot. * **Privacy:** MediBot will be trained on patient data and patient-doctor diagnoses in future operations. There remain concerns about privacy and data protection. This information, especially identifying information, must be kept confidential and secure. One method of handling this is asking users at the very beginning whether they want their data to be used for diagnostics and training or not.
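The symptom-matching step from "How we built it" can be sketched with the `sentence-transformers` library. The specific model is an assumption (the writeup only says a Hugging Face similarity model was used), and the symptom list is illustrative:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice

official_symptoms = ["headache", "migraine", "nausea",
                     "blurred vision", "fever"]
user_input = "my head hurts and the light makes it worse"

sym_vecs = model.encode(official_symptoms, convert_to_tensor=True)
usr_vec = model.encode(user_input, convert_to_tensor=True)

# Cosine similarity against every official symptom; a low best score
# would trigger MediBot's clarifying questions instead.
scores = util.cos_sim(usr_vec, sym_vecs)[0]
for symptom, score in sorted(zip(official_symptoms, scores.tolist()),
                             key=lambda pair: -pair[1]):
    print(f"{symptom}: {score:.2f}")
```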
# MediTrack: Smarter Medication, Safer Lives ## What Inspired Us Prescription costs are skyrocketing, and for many patients with chronic illnesses, affording medication has become a struggle. The rising costs often lead to missed or delayed doses, which can have serious effects on patient outcomes. The alarming statistic that 75% of Americans struggle to take their medication as directed—and the fact that approximately 135,000 deaths per year in the United States are linked to medication nonadherence—served as a powerful motivator for us to find a solution. We envisioned a system that not only tracks medication intake but also provides real-time insights into patient adherence. Our goal was to reduce the number of medication-related complications and hospital admissions, and that's why we built **MediTrack**—an automated RFID-based patient tracking and reporting system. We wanted to create a seamless, reliable way to help patients stay on top of their medication schedules while reducing the burden on healthcare providers. The app also uses a custom-coded regression model to predict whether or not a given patient will take their medication on time. Based on its training, we can predict whether our patients are at high, medium, or low risk. This makes it easier for healthcare providers or caretakers to respond to urgent needs immediately. ## What We Learned Throughout our journey, we learned the importance of clear data structuring and agile development practices. Here are some of our key takeaways: * **Data Management is Crucial:** We realized that having a well-defined data structure is essential for any system that relies on complex integrations, like linking RFID data with patient medication schedules, or our full-stack software application (which could be used by both patients and pharma companies)! * **User Experience Matters:** Building an intuitive, user-friendly interface significantly impacts how patients and healthcare providers interact with our system. Focusing on three core features for simplicity made our platform more accessible. * **Collaboration is Key:** Teamwork played a crucial role in overcoming challenges. Leveraging each member's unique skills helped us deliver a comprehensive solution. ## How We Built It We built **MediTrack** using a combination of cutting-edge technologies to ensure reliability, scalability, and seamless integration with existing systems: * **RFID Technology:** We utilized RFID tags for patient identification, enabling real-time tracking of medication intake. * **Backend Development:** Our backend was powered by MongoDB for secure and efficient data storage. This allowed us to handle patient records, medication schedules, and timestamps accurately. * **Notification System:** We integrated push notifications using Twilio and email alerts via SMTP, providing real-time alerts to patients and caregivers whenever medication schedules were missed or delayed. * **AI-Powered Analytics:** Our custom-coded regression classifier model, trained on previous patient data, predicts the risk levels (high, medium, low) of patients based on their adherence patterns. * **Seamless Integration:** Based on our research, we designed MediTrack to work with existing Electronic Health Records (EHR) systems to automate data logging and report generation. ## Challenges We Faced Building a solution like MediTrack came with its fair share of challenges, including: * **Data Management for RFID Cards:** Organizing and integrating pill data with RFID cards was complex. 
We had to ensure that each tag accurately corresponded to the correct medication and dosage information. * **Setting Up the Push Notification Service:** Establishing a reliable push notification system that worked seamlessly across various devices was a significant hurdle. * **Frontend Design and Integration:** Aligning the frontend user experience with our robust backend logic required extensive testing and refinement to ensure smooth data flow and user interactions. * **Achieving Scalability:** Making sure the solution was scalable to accommodate different healthcare environments, from small clinics to large hospitals, was a key focus. ## Accomplishments We're Proud Of Despite the challenges, we’re proud of several key achievements: * **Successful Integration of RFID Technology:** We successfully implemented RFID technology to track medication adherence, providing real-time data to healthcare providers. * **Operational Push Notification System:** Our notification service is fully functional, sending real-time alerts to patients and caregivers about missed or incorrect doses. * **User-Centric Design:** We built an easy-to-use interface that simplifies medication management, leading to positive feedback from initial user testing. * **Expanding RFID Capabilities:** Beyond medication tracking, we explored innovative ways to use RFID for inventory control, patient safety, and hospital efficiency. ## What's Next for MediTrack We're not stopping with medication tracking. Our vision is to expand MediTrack's RFID system to enhance other areas of healthcare, including: * **Inventory Control:** Real-time monitoring of medication stock levels to ensure availability and manage expired medications. * **Patient Safety:** Implementing RFID wristbands for newborns to prevent accidental swaps and secure patient locations in emergencies. * **Enhanced Infection Control:** Using RFID to strengthen protocols and improve response times during infection outbreaks. With MediTrack, we aim to transform healthcare through smarter technology, improving patient outcomes, reducing healthcare costs, and making hospitals more efficient. **MediTrack: Smarter Medication, Safer Lives.**
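The missed-dose alert path is the simplest piece to illustrate. A hedged sketch with Twilio's Python SDK (credentials, numbers, and the trigger logic are placeholders; MediTrack's actual checks run against the MongoDB schedule records):

```python
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

def alert_missed_dose(caregiver_phone: str, patient: str, med: str):
    client.messages.create(
        to=caregiver_phone,
        from_="+15550001234",  # placeholder Twilio number
        body=f"{patient}'s {med} dose was due 30 minutes ago "
             f"and has not been scanned.",
    )

alert_missed_dose("+15557654321", "J. Doe", "Metformin")
```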
partial
## Inspiration Feeling major self-doubt when you first start hitting the gym, or injuring yourself accidentally while working out, are not uncommon experiences. This inspired us to create Core, a platform to empower our users to take control of their well-being by removing the financial barriers around fitness. ## What it does Core analyses the movements performed by the user and provides live auditory feedback on their form, allowing them to stay fully present and engaged during their workout. Our users can also take advantage of the visual indications on the screen, where they can view a graph of the keypoints, which can be used to reduce the risk of potential injury. ## How we built it Prior to development, a prototype was created in Figma, which was used as a reference point when the app was developed in ReactJS. In order to recognize the joints of the user and perform analysis, TensorFlow's MoveNet model was integrated into Core. ## Challenges we ran into Initially, it was planned that Core would serve as a mobile application built using React Native, but as we developed a better understanding of the structure, we saw more potential in a cross-platform website. Our team was relatively inexperienced with the technologies that were used, which meant learning had to be done in parallel with the development. ## Accomplishments that we're proud of This hackathon allowed us to develop code in ReactJS, and we hope that our learnings can be applied to our future endeavours. Most of us were also new to hackathons, and it was really rewarding to see how much we accomplished throughout the weekend. ## What we learned We gained a better understanding of the technologies used and learned how to develop for the fast-paced nature of hackathons. ## What's next for Core Currently, Core uses TensorFlow to track several key points and analyzes the information with mathematical models to determine the statistical probability of the correctness of the user's form. However, there's scope for improvement by implementing a machine learning model that is trained on big data to yield higher performance and accuracy. We'd also love to expand our collection of exercises to include a wider variety of possible workouts.
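Core runs MoveNet through TensorFlow.js in the browser; the same model is available from TF Hub in Python, which makes the keypoint output easy to illustrate (the input frame is a placeholder):

```python
import tensorflow as tf
import tensorflow_hub as hub

# MoveNet Lightning: fast single-pose model with 17 keypoints.
model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

image = tf.io.read_file("squat_frame.jpg")  # placeholder workout frame
image = tf.image.decode_jpeg(image)
image = tf.image.resize_with_pad(tf.expand_dims(image, 0), 192, 192)
image = tf.cast(image, tf.int32)

outputs = movenet(image)
keypoints = outputs["output_0"]  # shape [1, 1, 17, 3]: (y, x, confidence)
print(keypoints[0, 0])           # nose, eyes, shoulders, ... ankles
```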
During the COVID-19 pandemic, time spent at home, time spent not exercising, and time spent alone have been at an all-time high. This is why we decided to introduce FITNER to the other fitness nerds like ourselves who struggle to find others to exercise with, as we all know that it is easier to stay healthy and happy with friends. We created Fitner as a way to help you find friends to go hiking with, play tennis with, or even go bowling with! It can be difficult to practice the sport that you love when none of your existing friends are interested, and you do not have the time commitment to join a club. Fitner solves this issue by bridging the gap between fitness nerds who want to reach their potential but don't have the community to do so. Fitner is a mobile application built with React Native for an iOS and Android front-end, and Google Cloud / Firebase as the backend. We were inspired by the opportunity to use Google Cloud platforms in our application, so we decided to do something we had never done before, which was real-time communication. Although it was our first time working with real-time communication, we found ourselves, in real time, overcoming the challenges that came along with it. We are very proud of our work ethic, our resulting application, and our dedication to our first ever hackathon. Future implementations of our application could include public chat rooms that users may join and plan public sporting events with, and a more sophisticated algorithm which would suggest members of the community that are at a similar skill level and have similar fitness goals to yours. With FITNER, your fitness goals will be met easily and smoothly, and you will meet lifelong friends on the way!
## Inspiration: The need for an accessible workout tool that helps improve form and keep users engaged. ## What it does: Gives real-time feedback on the user's form during workouts. ## How we built it Tracking and Evaluation: This was coded in JavaScript and built using HTML. It incorporates a trained TensorFlow model called PoseNet to receive live data on "keypoints" in a video stream. The keypoints correspond to joints in the user's body. The keypoints' motion and relative positions are then used to evaluate the user's form. ## Challenges we ran into Integrating machine learning with computer vision isn't simple, even when trying to use a pre-trained model. Some similar technologies are even more complex or have massive hardware requirements (the equivalent of a ~$2,400 video card), so finding the correct model and platform for our application was critical and challenging. ## Accomplishments that we're proud of This being the first hackathon for every member of the team, we are very proud of the learning we all achieved and the final product we were able to create. We learned so much about coding languages we were unfamiliar with (some members learned new languages from scratch), computer vision, machine learning, data models, and mobile UI/UX design. With our limited coding experience, we were able to research and persist through learning barriers and finish with something to show for it. ## What we learned We learned how to track body movements using PoseNet and TensorFlow with JavaScript, how to effectively use virtual environments to run the program locally, and how to communicate with the user through UI/UX on mobile devices. ## What's next for Trackout There is a lot of potential for TrackOut to become a huge platform hosting an amazing community of users wanting to improve their workout routines. By using recurrent neural networks, TrackOut will be able to provide specific and meaningful feedback to help the user achieve high levels of form and consistency in their workouts. TrackOut will also have an extensive social aspect, connecting users by allowing them to share their own workouts and help each other by providing feedback in comment sections. Finally, TrackOut seeks to collaborate with major YouTube and Instagram influencers within the existing online workout space, to bring in a high volume of users and to keep them actively involved in the community.
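Form evaluation from keypoints usually comes down to joint angles. A small sketch of the geometry in Python with NumPy (the keypoint values are hypothetical PoseNet outputs):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by keypoints a-b-c
    (e.g. shoulder-elbow-wrist for a bicep curl)."""
    a, b, c = np.asarray(a), np.asarray(b), np.asarray(c)
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical normalized (x, y) keypoints from PoseNet.
shoulder, elbow, wrist = (0.42, 0.30), (0.45, 0.50), (0.44, 0.68)
angle = joint_angle(shoulder, elbow, wrist)
if angle > 160:  # hypothetical threshold
    print("Arm fully extended: start of the curl.")
```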
winning
## 💡Inspiration Gaming is often associated with sitting for long periods of time in front of a computer screen, which can have negative physical effects. In recent years, consoles such as the Kinect and Wii have been created to encourage physical fitness through games such as "Just Dance". However, these consoles are simply incompatible with many of the computer and arcade games that we love and cherish. ## ❓What it does We came up with Motional at HackTheValley wanting to create a technological solution that pushes the boundaries of what we’re used to and what we can expect. Our product, Motional, delivers on that by introducing a new, cost-efficient, and platform-agnostic solution to universally interact with video games through motion capture, reimagining the gaming experience. Using state-of-the-art machine learning models, Motional can detect over 500 features on the human body (468 facial features, 21 hand features, and 33 body features) and use these features as control inputs to any video game. Motional operates in 3 modes: using hand gestures, face gestures, or full-body gestures. We ship certain games out-of-the-box, such as Flappy Bird and Snake, with predefined gesture-to-key mappings, so you can play the game directly with the click of a button. For many of these games, jumping in real life (body gesture) or opening the mouth (face gesture) will be mapped to pressing the "space-bar"/"up" button. However, the true power of Motional comes with customization. Every possible pose can be trained and clustered to provide a custom command. Motional will also play a role in creating a more inclusive gaming space for people with accessibility needs, who might not physically be able to operate a keyboard dexterously. ## 🤔 How we built it First, a camera feed is taken through Python OpenCV. We then use Google's MediaPipe models to estimate the positions of the features of our subject. To learn a new gesture, we first take a capture of the gesture and store its feature coordinates generated by MediaPipe. Then, for future poses, we compute a similarity score using euclidean distances. If this score is below a certain threshold, we conclude that this gesture is the one we trained on. An annotated image is generated as an output through OpenCV. The actual keyboard presses are done using PyAutoGUI. We used Tkinter to create a graphical user interface (GUI) where users can switch between different gesture modes, as well as select from our current offering of games. We used MongoDB as our database to keep track of scores and make a universal leaderboard. ## 👨‍🏫 Challenges we ran into Our team didn't have much experience with any of the stack before, so it was a big learning curve. Two of us didn't have a lot of experience in Python. We ran into many dependency issues and package import errors, which took a lot of time to resolve. When we were initially trying to set up MongoDB, we also kept timing out for weird reasons. But the biggest challenge was probably trying to write code while running on 2 hours of sleep... ## 🏆 Accomplishments that we're proud of We are very proud to have been able to execute our original idea from start to finish. We managed to actually play games through motion capture, with our faces, our bodies, and our hands. We were able to store new gestures, and these gestures were detected with very high precision and low recall after careful hyperparameter tuning. ## 📝 What we learned We learned a lot, both from a technical and non-technical perspective. 
From a technical perspective, we learned a lot about the tech stack (Python + MongoDB + working with machine learning models). From a non-technical perspective, we learned a lot about working together as a team and dividing up tasks! ## ⏩ What's next for Motional We would like to implement a better GUI for our application and release it for a small subscription fee; we believe there is a market of people willing to invest money in an application that helps them automate and speed up everyday tasks while letting them play any game they want, the way they would like. Furthermore, this could be an interesting niche market to help gamify muscle rehabilitation, especially for children.
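A minimal sketch of the gesture-matching loop Motional describes, assuming MediaPipe hand landmarks; the threshold value, the one-shot "training", and the space-bar mapping are illustrative, not the project's actual code:

```python
# Match a live MediaPipe hand pose against a stored gesture by summed
# Euclidean distance, then fire a key press when they are similar enough.
import math

import cv2
import mediapipe as mp
import pyautogui

THRESHOLD = 0.35        # illustrative similarity cutoff
stored_gesture = None   # landmark coordinates captured during "training"

def landmarks_to_points(hand_landmarks):
    return [(lm.x, lm.y) for lm in hand_landmarks.landmark]

def distance(points_a, points_b):
    # Sum of per-landmark Euclidean distances between two poses.
    return sum(math.dist(a, b) for a, b in zip(points_a, points_b))

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        points = landmarks_to_points(results.multi_hand_landmarks[0])
        if stored_gesture is None:
            stored_gesture = points        # "train" on the first pose seen
        elif distance(points, stored_gesture) < THRESHOLD:
            pyautogui.press("space")       # gesture-to-key mapping
cap.release()
```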
## Inspiration We wanted to promote an easy learning system to introduce verbal individuals to the basics of American Sign Language. Often, people in the non-verbal community are restricted by the lack of understanding outside of the community. Our team wants to break down these barriers and create a fun, interactive, and visual environment for users. In addition, our team wanted to replicate a 3D model of how to position the hand, as videos often do not convey sufficient information. ## What it does **Step 1** Create a Machine Learning Model to Interpret the Hand Gestures This step provides the foundation for the project. Using OpenCV, our team was able to create datasets for each of the ASL alphabet hand positions. Based on the model trained using TensorFlow and Google Cloud Storage, a video data stream is started and interpreted, and the letter is identified. **Step 2** 3D Model of the Hand The Arduino UNO drives a series of servo motors to actuate the 3D hand model. The user can input the desired letter, and the 3D-printed robotic hand can then interpret this (using the model from step 1) to display the desired hand position. Data is transferred through the SPI bus, and the system is powered by a 9V battery for ease of transportation. ## How I built it Languages: Python, C++ Platforms: TensorFlow, Fusion 360, OpenCV, UiPath Hardware: 4 servo motors, Arduino UNO Parts: 3D-printed hand ## Challenges I ran into 1. The Raspberry Pi Camera would overheat and fail to connect, leading us to remove the Telus IoT connectivity from our final project 2. Compatibility issues between Mac, OpenCV, and UiPath 3. Issues with lighting and a lack of variety in the training data, leading to less accurate results. ## Accomplishments that I'm proud of * Able to design and integrate the hardware with software and apply it to a mechanical application. * Created data, then trained and deployed a working machine learning model ## What I learned How to integrate simple low-resource hardware systems with complex machine learning algorithms. ## What's next for ASL Hand Bot * expand beyond letters into words * create a more dynamic user interface * expand the dataset and models to incorporate more
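A minimal sketch of the Step 1 dataset-creation idea, assuming a plain webcam and OpenCV; the folder layout and keybindings are illustrative, not the team's actual capture script:

```python
# Press a letter key while the webcam shows that ASL handshape, and a
# labeled frame is saved under dataset/<letter>/ for later training.
import os

import cv2

cap = cv2.VideoCapture(0)
counts = {}
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("capture", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == 27:                      # Esc quits
        break
    if ord("a") <= key <= ord("z"):    # the letter key labels the frame
        letter = chr(key)
        os.makedirs(f"dataset/{letter}", exist_ok=True)
        counts[letter] = counts.get(letter, 0) + 1
        cv2.imwrite(f"dataset/{letter}/{counts[letter]:04d}.jpg", frame)
cap.release()
cv2.destroyAllWindows()
```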
## Inspiration With the sudden move to online videoconferencing, presenters and audiences have been faced with a number of challenges. Foremost among these is a lack of engagement between presenters and the audience, which is exacerbated by a lack of gestures and body language. As first-year students, we have seen this negatively impact our learning throughout both high school and our first year of university. In fact, many studies, such as [this one](https://dl.acm.org/doi/abs/10.1145/2647868.2654909), emphasize the direct link between gestures and audience engagement. As such, we wanted to find a way to give presenters the opportunity to increase audience engagement by bringing natural presentation techniques to videoconferencing. ## What it does PGTCV is a Python program that allows users to move back from their camera and incorporate body language into their presentations without losing fundamental control. In its current state, the Python script uses camera information to determine whether a user needs their slides to be moved forwards or backwards. To trigger these actions, users raise their left fist to enable the program to listen for instructions. They can then swipe with their palm out to the left or to the right to trigger a forwards or backwards slide change. This process allows users to use common body language and hand gestures without accidentally triggering the controls. ## How we built it After fetching webcam data through OpenCV, we use Google's MediaPipe library to receive a coordinate representation of any hands on-screen. This is then fed through a pre-trained model to listen for any left-hand controlling gestures. Once a control gesture is found, we track right-hand motion gestures and simulate the relevant keyboard input using pynput in whatever application the user is focused on. The application also creates a new virtual camera on a host Windows machine using pyvirtualcam and Unity Capture, since Windows only allows one application to use any single camera device. The virtual camera can be used by any videoconferencing application. ## Challenges we ran into Inability to get IDEs working. The Mac M1 chip not supporting TensorFlow. Inability to use the webcam in multiple applications at once. Setting up right-hand gesture recognition with realistic thresholds. ## Accomplishments that we're proud of Successfully implementing our idea in our first hackathon. Getting a functional and relatively bug-free version of the program running with time to spare. Learning to successfully work with a number of technologies that we previously had no experience with (everything other than Python). ## What we learned A number of relevant technologies. Implementing simple computer vision algorithms. Taking code from idea to functional prototype in a limited amount of time. ## What's next for Presentation Gestures Through Computer Vision (PGTCV) A better name. Implementation of a wider range of gestures. Optimization of algorithms. Increased accuracy in detecting gestures. Implementation into existing videoconferencing applications.
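A minimal sketch of the swipe-to-slide idea, assuming MediaPipe wrist x-coordinates (normalized 0..1) are fed in once per frame; the displacement thresholds, window size, and the `on_right_hand_x` hook are illustrative, not PGTCV's actual code:

```python
# Once the left-fist control gesture is armed, watch the right hand's
# x-position over recent frames and fire an arrow key on a large swipe.
from collections import deque

from pynput.keyboard import Controller, Key

keyboard = Controller()
xs = deque(maxlen=10)   # recent right-hand x positions

def on_right_hand_x(x: float, armed: bool) -> None:
    """Call once per frame with the wrist landmark's x and the armed flag."""
    xs.append(x)
    if not armed or len(xs) < xs.maxlen:
        return
    displacement = xs[-1] - xs[0]
    if displacement > 0.25:            # palm swept right -> next slide
        keyboard.press(Key.right)
        keyboard.release(Key.right)
        xs.clear()
    elif displacement < -0.25:         # palm swept left -> previous slide
        keyboard.press(Key.left)
        keyboard.release(Key.left)
        xs.clear()
```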
partial
# Hackathon Project: SpeakEasy ## Inspiration We have friends with special needs who struggle with communication, especially in understanding how to respond in specific scenarios. Witnessing these challenges firsthand inspired us to create a tool that provides real-time, actionable feedback to help improve their communication skills. We wanted to make a difference by leveraging technology to support effective communication and build confidence. ## What it does SpeakEasy is an innovative platform designed to enhance communication abilities through real-time feedback on tone, grammar, pronunciation, and facial expressions. Users can record and upload videos, which are then analyzed to provide detailed, sentence-by-sentence feedback, helping them refine their speaking skills and build confidence. ## How we built it * **Frontend**: Built with Next.js, allowing users to easily record and upload videos directly from the web interface. * **Backend**: Implemented with FastAPI, handling video processing, feedback generation, and communication with AI models. * **Whisper**: Used for transcribing audio with word-level timestamps for precise feedback. * **Pydub**: Utilized to split audio into manageable segments for detailed analysis. * **Hume AI**: Analyzes prosody (tone) and facial expressions in video clips to provide comprehensive feedback. * **OpenAI GPT-4**: Generates detailed feedback on tone, grammar, pronunciation, and facial expressions based on the analyzed data. * **MoviePy**: Processes video files to extract and split clips corresponding to individual sentences. * **FFmpeg**: Used for real-time audio and video processing, ensuring accurate synchronization and seamless analysis. * **Postman**: Utilized for API testing to ensure seamless communication between frontend and backend components. ## Challenges we ran into 1. **Rendering and Splitting Clips**: Ensuring clean and accurate rendering of video clips without overlaps or errors was challenging. Training our AI models with reliable code samples helped ensure precise analysis. 2. **Syncing Audio and Video**: Initially, syncing text-to-speech audio with video clips was problematic. FFmpeg was instrumental in stitching the audio and video accurately. 3. **Fine-Tuning AI Models**: Prompt engineering and tuning our generative AI models to provide precise and useful feedback required significant effort. ## Accomplishments that we're proud of * Successfully integrating multiple AI models to achieve smooth and accurate feedback. * Implementing an automated process where AI models correct their own errors, enhancing feedback reliability. * Creating an interactive and user-friendly frontend that allows users to easily navigate and engage with the feedback. ## What we learned * Advanced prompt engineering techniques for effective AI communication. * Video and audio editing using command-line tools like FFmpeg. * How to integrate and orchestrate multiple AI models to work seamlessly together. * Developing a user-centric interface that enhances the learning experience. ## What's next for SpeakEasy * **Scenario-Based Training**: Implementing features that ask users to respond to various scenarios, providing context-specific feedback to improve their communication in different situations. * **Multilingual Support**: Allowing users to choose their preferred language for feedback. * **Subtitles and Accessibility**: Implementing subtitles for feedback to increase accessibility. 
* **Expanded Capabilities**: Extending the platform to cover more disciplines and providing additional educational resources. * **Optimized Backend**: Continuously optimizing backend requests and improving error handling for a smoother user experience. Join us in making communication accessible and effective for everyone!
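A minimal sketch of the sentence-segmentation step SpeakEasy describes, with Whisper word timestamps feeding pydub slicing; the model size and the punctuation-based sentence boundary are illustrative simplifications of the team's pipeline:

```python
# Transcribe with word-level timestamps, then cut the audio into
# per-sentence clips for sentence-by-sentence feedback.
import whisper
from pydub import AudioSegment

model = whisper.load_model("base")
result = model.transcribe("speech.wav", word_timestamps=True)

audio = AudioSegment.from_file("speech.wav")
start = None
idx = 0
for segment in result["segments"]:
    for word in segment["words"]:
        if start is None:
            start = word["start"]
        if word["word"].strip().endswith((".", "?", "!")):  # sentence boundary
            clip = audio[int(start * 1000):int(word["end"] * 1000)]  # pydub slices in ms
            clip.export(f"sentence_{idx:02d}.wav", format="wav")
            idx += 1
            start = None
```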
## SpeakEasy [![forthebadge made-with-python](http://ForTheBadge.com/images/badges/made-with-python.svg)](https://www.python.org/) [![No Maintenance Intended](http://unmaintained.tech/badge.svg)](http://unmaintained.tech/) [![made-with-Markdown](https://img.shields.io/badge/Made%20with-Markdown-1f425f.svg)](http://commonmark.org) **SpeakEasy** (Speech-processing easily-accessible kit, SPEAK) is an open-source website designed to encourage users to be confident about their public speaking skills. It supports integration with Quizlet, letting users upload their cue cards to read at their own pace. When you record a snippet of your speech, it automatically analyzes your face, emotions, and tone, and provides relevant feedback at the click of a button. # Running `git clone` into a local repository, then follow the guides below ## Server (requires `python3` and `unix/linux`) 1. Install all Python dependencies from `requirements.txt` into your current Python environment 2. Install Flask through `pip3` 3. Install `google-cloud-vision`, `google-cloud-language`, and `google-cloud-speech` through `pip3` 4. Set up Google API credentials: * Register for a Google developers account * Enable the `Google Cloud Vision`, `Google Natural Language`, and `Google Speech-to-text` APIs * Create service account credentials, then download and put the `json` file in the `server` folder (named `credentials.json`) 5. Run the server using `make run` ## Client (requires `python3`) 1. Just run `python3 -m http.server` in the `client` directory. 2. Navigate to `localhost:8000` 3. ??? 4. Profit.
## Inspiration Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order. ## What it does You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero setup or management necessary: the program completely ignores background noise and conversation, yet still takes your order with staggering precision. ## How we built it The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu was also done in Canva with a dash of Harvard colors. ## Challenges we ran into One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle. ## Accomplishments that we're proud of We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with. ## What we learned We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment. ## What's next for Harvard Burger Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we could sell the product by designing marketing strategies for fast-food chains.
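A minimal sketch of the order-extraction step the write-up describes; the model name, prompt, and schema are assumptions for illustration (the team's actual AI provider and prompt are not stated):

```python
# Turn a raw speech transcript into structured order items via an LLM,
# asking for JSON so the result can be added to the current order.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_order(transcript: str) -> list[dict]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # stand-in model choice
        response_format={"type": "json_object"},  # request parseable JSON
        messages=[
            {"role": "system", "content": (
                'Return JSON like {"items": [{"item": ..., "size": ..., '
                '"modifications": [...]}]}. Ignore chatter that is not an order.'
            )},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(response.choices[0].message.content)["items"]

print(extract_order("uh can I get a large burger, no onions, and a coke"))
```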
losing
## Inspiration Peer-review is critical to modern science, engineering, and healthcare endeavors. However, the system for implementing this process has lagged behind and results in expensive costs for publishing and accessing material, long turnaround times reminiscent of snail mail, and shockingly opaque editorial practices. Astronomy, Physics, Mathematics, and Engineering use a "pre-print server" ([arXiv](https://arxiv.org)), which was the early internet's improvement upon snail-mailing articles to researchers around the world. This pre-print server is maintained by a single university and is constantly requesting donations to keep up the servers and maintenance. While researchers widely acknowledge the importance of the pre-print server, there is no peer-review incorporated, and none planned due to technical reasons. Thus, researchers are stuck with spending >$1000 per paper to be published in journals, all the while individual article access can cost as much as $32 per paper! ([source](https://www.nature.com/subscriptions/purchasing.html)). For reference, a single PhD thesis can contain >150 references, or essentially cost $4800 if purchased individually. The recent advance of blockchain and smart contract technology ([Ethereum](https://www.ethereum.org/)) coupled with decentralized file sharing networks ([InterPlanetaryFileSystem](https://ipfs.io)) naturally led us to believe that archaic journals and editors could be bypassed. We created our manuscript distribution and reviewing platform based on the arXiv, but in a completely decentralized manner. Users utilize, maintain, and grow the network of scholarship by simply running a lightweight program and web interface. ## What it does arXain is a Dapp that deals with all the aspects of a peer-reviewed journal service. An author (wallet address) will come with a bomb-ass paper they wrote. In order to "upload" their paper to the blockchain, they will first need to add their file/directory to the IPFS distributed file system. This will produce a unique reference number (analogous to the DOI currently used in journals) and a hash corresponding to the current paper file/directory. The author can then use their address on the Ethereum network to create a new contract to submit the paper using this reference number and paperID. In this way, there will be one paper per contract. The only other action the author can take on that paper is submitting another draft. Others can review and comment on papers, but an address cannot comment on or review its own paper. The reviews are rated on a "work needed"/"acceptable" basis, and the reviewer can also upload an IPFS hash of their comments file/directory. Protection is also built in such that others cannot submit revisions of the original author's paper. The blockchain will have a record of the initial paper submitted, revisions made by the author, and comments/reviews made by peers. The beauty of all of this is that one can see the full transaction histories and reconstruct the full evolution of the document. One can see the initial draft, all suggestions from reviewers, how many reviewers there were, and how many of them think the final draft is reasonable. ## How we built it There are 2 main back-end components: the IPFS file hosting service and the Ethereum blockchain smart contracts. They are bridged together with [MetaMask](https://metamask.io/), a tool for connecting the distributed blockchain world, and by extension the distributed papers, to a web browser. We designed smart contracts in Solidity. 
The IPFS interface was built using a combination of Bash, HTML, and a lot of regex! Then we connected the IPFS distributed net with the Ethereum blockchain using MetaMask and JavaScript. ## Challenges we ran into On the Ethereum side, setting up the Truffle Ethereum framework and test networks was challenging. Learning the limits of Solidity and constantly reminding ourselves that we had to remain decentralized was hard! The IPFS side required a lot of clever regex-ing. Ensuring public access to researchers' manuscripts and review histories required proper identification and distribution on the network. The hardest part was using MetaMask and JavaScript to call our contracts and connect the blockchain to the browser. We struggled for hours trying to get JavaScript to deploy a contract on the blockchain. We were all new to functional programming. ## Accomplishments that we're proud of Closing all the curly bois and parentheses in JavaScript. Learning a whole lot about the blockchain and IPFS. We went into this weekend wanting to learn how the blockchain worked, and came out learning about Solidity, IPFS, JavaScript, and a whole lot more. You can see our "genesis-paper" on an IPFS gateway (a bridge between HTTP and IPFS) [here](https://gateway.ipfs.io/ipfs/QmdN2Hqp5z1kmG1gVd78DR7vZmHsXAiSbugCpXRKxen6kD/0x627306090abaB3A6e1400e9345bC60c78a8BEf57_1.pdf) ## What we learned We went into this knowing only that there was a way to write smart contracts, that IPFS existed, and minimal JavaScript. We came away with intimate knowledge of setting up Ethereum Truffle frameworks, Ganache, and test networks, along with the development side of Ethereum Dapps: the Solidity language and JavaScript tests with the Mocha framework. We learned how to navigate the filespace of IPFS, hash and organize directories, and how file distribution works on a P2P swarm. ## What's next for arXain With some more extensive testing, arXain is ready for the Ropsten test network *at the least*. If we had a little more ETH to spare, we would consider launching our Dapp on the Main Network. arXain PDFs are already on the IPFS swarm and can be accessed by any IPFS node.
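A minimal sketch of the manuscript-submission step: adding a paper directory to IPFS and keeping the resulting content hash to pass into the paper's smart contract. The `ipfshttpclient` library is an assumption for illustration (the team's actual interface used Bash and regex against a local daemon):

```python
# Add a manuscript directory to a locally running IPFS daemon and print
# the wrapping directory's hash, which identifies this draft on the swarm.
import ipfshttpclient

client = ipfshttpclient.connect()                 # default local daemon address
entries = client.add("paper_v1/", recursive=True)
directory_hash = entries[-1]["Hash"]              # last entry is the directory itself
print("Reference for the contract:", directory_hash)
```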
## Inspiration It’s Friday afternoon, and as you return from your final class of the day, cutting through the trailing winds of the Bay, you suddenly remember the Saturday trek you had planned with your friends. Equipment-less and desperate, you race down to a nearby sports store and fish out $$$, not realising that the kid living two floors above you has the same equipment collecting dust. While this hypothetical may be based on real-life events, we see thousands of students and people alike impulsively spending money on goods that eventually end up in their storage lockers. This cycle of buy-store-collect-dust inspired us to develop LendIt, a product that aims to stem the growing waste economy and generate passive income for the users on the platform. ## What it does A peer-to-peer lending and borrowing platform that allows users to generate passive income from the goods and garments collecting dust in the garage. ## How we built it Our Smart Lockers are built with Raspberry Pi 3 (64-bit, 1GB RAM, ARM-64) single-board computers and are connected to our app through interfacing with Google's Firebase. The locker also uses facial recognition powered by OpenCV and object detection with Google's Cloud Vision API. For our app, we've used Flutter/Dart and interfaced with Firebase. To ensure *trust* - which is core to borrowing and lending - we've experimented with Ripple's API to create an escrow system. ## Challenges we ran into We learned that building a hardware hack can be quite challenging and can leave you with a few bald patches on your head. With no hardware equipment, half our team spent the first few hours running around the hotel and even the streets to arrange stepper motors and Micro-HDMI wires. In fact, we even borrowed another team's 3D print to build the latch for our locker! On the Flutter/Dart side, we were sceptical about how the interfacing with Firebase and the Raspberry Pi would work. Our app developer had previously worked only with web apps backed by SQL databases. However, NoSQL works a little differently and doesn't have a robust referential system, so writing queries for our read operations was tricky. With the core tech of the project relying heavily on the Google Cloud Platform, we had to resort to unconventional methods to utilize its capabilities with an internet that played Russian roulette. ## Accomplishments that we're proud of The project has various hardware and software components - Raspberry Pi, Flutter, XRP Ledger escrow, and Firebase - which all have their own independent frameworks. Integrating all of them together and making an end-to-end automated system for the users is the biggest accomplishment we are proud of. ## What's next for LendIt We believe that LendIt can be more than just a hackathon project. Over the course of the hackathon, we discussed the idea with friends and fellow participants and gained a pretty good proof of concept, giving us the confidence that we can do a city-wide launch of the project in the near future. In order to see these ambitions come to life, we would have to improve our object detection and facial recognition models. From cardboard, we would like to see our lockers carved in metal at every corner of this city. As we continue to grow our skills as programmers, we believe our product LendIt will grow with it. We would be honoured if we can contribute in any way to reduce the growing waste economy.
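A minimal sketch of the Pi-to-app link described above: when the locker's face match succeeds, the Pi writes an "opened" event to Firebase so the Flutter app can update in real time. The database URL, paths, and credential file are illustrative placeholders:

```python
# Push a locker-opened event to the Firebase Realtime Database from the Pi.
import time

import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("serviceAccount.json")          # placeholder key file
firebase_admin.initialize_app(
    cred, {"databaseURL": "https://lendit-demo.firebaseio.com"}  # placeholder URL
)

def report_unlock(locker_id: str, user_id: str) -> None:
    db.reference(f"lockers/{locker_id}/events").push({
        "user": user_id,
        "action": "opened",
        "timestamp": time.time(),
    })

report_unlock("locker-42", "user-abc")
```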
## Inspiration Civic authorities are constantly working to improve transport for their cities. Their efforts, however, often lack an important piece: access to accurate data about how current transport systems are used. A better understanding of how people commute, share rides, and take public transport today will inform how these systems can be better tomorrow. ## What it does In essence, Argonaut is a marketplace: it helps users part with any personal data that they're willing to share, in return for monetary incentives in the form of Algorand coins (Algos). Authorities can buy data packs - anonymised and aggregated clusters of such data submitted by multiple users - for a price, which is then distributed amongst each of the contributors equally. Here's the key: the blockchain keeps a track record of which organisation is requesting and accessing what data, all while maintaining complete anonymity on the part of the sellers. ## How I built it We used React.js to build the frontend and a Node.js/Express-powered backend; Python to scrape through personal data dumps accessed via Google Takeout; and the Algorand JavaScript SDK to implement the blockchain-based transaction management system. ## Challenges I ran into The principal challenge we faced was how to implement the blockchain for managing access to personal data. Another important challenge was making sure that the anonymity of users sharing their data is maintained, whilst also keeping any shared data private. ## Accomplishments that I'm proud of This was our first deep dive into blockchain technology, and we were glad to use Algorand's APIs and dev tools to build a platform aimed at enhancing urban decision-making, while simultaneously helping people like you and us take back control of their personal data. ## What I learned We learned about the power of the blockchain, and how decentralised ledgers have applications far beyond the traditional financial markets we have seen them in so far. ## What's next for Argonaut Argonaut can be scaled up to include the ability for users to connect Google Maps/Uber/Lyft and a lot of other apps directly to the platform, so that their periodic data dumps can be seamlessly and automatically streamed in, versus the current requirement of having to upload personal data dumps manually.
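A minimal sketch of the equal-split payout idea, shown with the Algorand Python SDK (the project used the JavaScript SDK); the addresses, node endpoint, and amounts are illustrative placeholders:

```python
# Split a data-pack purchase into equal Algorand payment transactions,
# one per contributor.
from algosdk import transaction
from algosdk.v2client import algod

client = algod.AlgodClient("your-api-token", "https://testnet-api.algonode.cloud")
params = client.suggested_params()

contributors = ["ADDR_A...", "ADDR_B...", "ADDR_C..."]   # placeholder addresses
total_microalgos = 3_000_000                              # price paid for the data pack
share = total_microalgos // len(contributors)             # equal split

unsigned_txns = [
    transaction.PaymentTxn(
        sender="MARKETPLACE_ADDR...", sp=params, receiver=addr, amt=share
    )
    for addr in contributors
]
# Each transaction would then be signed with the marketplace key and submitted:
#   client.send_transaction(txn.sign(private_key))
```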
winning
As a team with one expert GeoGuessr player and a complete newbie, sometimes the game feels impossible; rigged, even. We wanted to present the user with a genuinely rigged game as a silly joke. At our first hackathon, the most intimidating challenge was getting some kind of non-terminal UI working. For this we got a lot of practice with Pygame.
## Inspiration A team member saw their mom working and repeating the same steps over and over on her computer: open an Excel sheet, copy 10 lines, tab into another sheet, paste, scroll, repeat. These monotonous tasks are a time waster, which is especially frustrating since the technology to automate them already exists. The problem is that most people aren't aware of it, have forgotten about it, or find it too complicated, which is why we aim to reintroduce this technology in a simple manner that anyone can use. An application that automates such repetitive tasks would not only save time but could also have other uses. For example, individuals who have arthritis may find it challenging to complete everyday computer tasks that require repetitive movement. We offer an application that can take these repetitive movements and simplify them to one click. ## What it does Our program, dubbed "Ekko", serves to: 1. Reduce menial/repetitive tasks 2. Increase accessibility to mouse and keyboard actions for those with diseases/disabilities 3. Aid those who aren’t technologically literate Users are able to record any series of clicks, cursor movements, and key presses they would like, and save this sequence. They can then play it back at will whenever they desire. ## How we built it To create this application we used various Python libraries. In the backend we implemented the *Pynput* library, which allows tracking of clicks, cursor movement, and key presses. We have individual threads set up for monitoring both keyboard and mouse actions at the same time. In the frontend, we used QtDesigner and *PyQt5* to design our GUI. Python libraries/software used: *Pynput* (to track and record cursor movement, clicks, and key presses), *PyQt5* and QtDesigner (to design the GUI), and *multithreading*. ## Challenges we ran into Initially, without multithreading, our program would crash after one use. It took a lot of time to figure out what the exact problem was and how to allow recording of cursor and keyboard inputs while the Python program was running. Getting the GUI to interface with the recording/playing scripts was also annoying. A bit of a funny obstacle we encountered was that while one team member was following a tutorial, he didn't realize it was alternating between Spanish and English, which led to a few confusing errors. ## Accomplishments that we're proud of We had created a timeline for our project with deadlines for specific tasks. We were able to stick to this schedule and complete our project in a timely manner, even though we were busy. We also successfully implemented a program with a working GUI. Finally, it was the first hackathon experience for two of our members, so it was nice to have a finished product, which often doesn't happen at an individual's first hackathon. ## What we learned We learned many new tech skills, such as how to create GUIs, use threads, and track keyboard and mouse input. We also learned important soft skills such as time management. These skills will be useful in whatever future projects we decide to pursue. ## What's next for Ekko: Automate Your Computer Implementing voice recognition to make automating tasks more accessible. Cursor movements could also be made more accurate during playback.
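A minimal sketch of Ekko's record-and-replay core, assuming pynput for both capture and playback; keyboard handling, persistence, and the GUI are omitted, and the 5-second recording window is illustrative:

```python
# Record mouse clicks with timestamps on a listener thread, then replay
# them with the original delays preserved.
import time

from pynput import mouse
from pynput.mouse import Controller

events = []   # (delay_since_start, x, y, button)
start = time.time()

def on_click(x, y, button, pressed):
    if pressed:
        events.append((time.time() - start, x, y, button))

with mouse.Listener(on_click=on_click) as listener:
    time.sleep(5)        # record for five seconds
    listener.stop()

replayer = Controller()
last = 0.0
for delay, x, y, button in events:
    time.sleep(delay - last)   # wait the same gap as during recording
    replayer.position = (x, y)
    replayer.click(button)
    last = delay
```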
💭 **Inspiration**💭 We know what it's like to be indecisive about where to eat (trust us, we're both extremely indecisive), so we created a solution that'll solve just that! 🍴 **What it does** 🍴 For all the indecisive people, our program will randomly select a restaurant as well as recommend the highest-rated restaurant in the given radius! If neither of these satisfies your cravings, Find Dine will then provide you with a full list of restaurants in the area. 🔨 **How we built it** 🔨 We wanted to work with a coding language that was familiar to both of us, hence we used Python. The IDE we chose is Visual Studio Code, which has a live-sharing tool that made collaboration a breeze. Lastly, to make our code come together, we used Google's Geocoding API to turn the inputted location into coordinates. Along with that, we gathered filters based on additional user inputs to then generate the restaurants using Google's Places API. 😭 **Challenges we ran into** 😭 We are both fairly new to coding and this is our first ever hackathon! That being said, we had never worked with APIs, so it took a good chunk of our time to learn their uses and functions. As a result, we never had the time to create an interface. ✨ **Accomplishments that we're proud of** ✨ 1. WE FINISHED OUR FIRST HACKATHON *(kinda)*! 2. We incorporated something we've never used or learnt about before into our code. 3. We pushed ourselves to use something outside of our comfort zones. 4. We persevered till the end. 📚 **What we learned** 📚 1. We learned about APIs and how to work with them. 2. How to use Git and GitHub properly and effectively. 🔮 **What's next for Find Dine** 🔮 We would like to finish an interface to get our code out to the public one day (hopefully an app so we can take this to go)! We would also like to incorporate a way for the program to retrieve a user's location by itself, rather than them typing it in.
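A minimal sketch of the two-API flow Find Dine describes: geocode the typed location, then fetch nearby restaurants and make both picks. The API key, example address, and radius are placeholders:

```python
# Geocode a location, list nearby restaurants, then pick one at random
# and also report the highest-rated option.
import random

import requests

API_KEY = "YOUR_GOOGLE_API_KEY"   # placeholder

geo = requests.get(
    "https://maps.googleapis.com/maps/api/geocode/json",
    params={"address": "Waterloo, ON", "key": API_KEY},
).json()
loc = geo["results"][0]["geometry"]["location"]

places = requests.get(
    "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
    params={
        "location": f"{loc['lat']},{loc['lng']}",
        "radius": 1500,             # metres
        "type": "restaurant",
        "key": API_KEY,
    },
).json()["results"]

print("Random pick:", random.choice(places)["name"])
print("Top rated:", max(places, key=lambda p: p.get("rating", 0))["name"])
```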
losing
## Inspiration **According to the Alzheimer’s Association, 6.7 million Americans age 65 and older are living with Alzheimer's in 2023.** Alzheimer’s is a gradually progressive brain disorder involving memory loss. It is the most common form of dementia; people forget who their loved ones are and cannot carry out daily tasks anymore. Alzheimer’s and dementia are not only a personal health crisis but also impact family, friends, and caregivers. It is important to focus on preventing and slowing the progression of symptoms related to memory loss. **How can we use the Memory Palace – a psychology-based technique where people associate mnemonic images in their mind with places they know – to help prevent and ease the lives of those with Alzheimer’s and dementia?** There is a lot of psychology research focused on memory and learning that can be used to guide technical applications and enhance memory performance. Our team is very interested in memory and learning mechanisms, which inspired our idea to use this scientific background to improve memory. ## What it does **Memory Playground is a web application that helps boost memory recall and retention through the Memory Palace technique, especially for senior citizens and those with Alzheimer’s and dementia.** The application allows users to pick a setting/environment and list out words that are related. Then, we create broad yet distinct categories for the words. From here, we have users practice classifying objects, allowing them to create visual mappings of images physically and mentally. This allows them to strengthen memory connections and enhance memory performance. We also give them other words that fall into those categories to expand on the established mental connections. Memory Playground also integrates zero-knowledge proofs: it stores uploaded data securely on servers and allows users to interact with the application anonymously, without revealing personally identifiable information, which is ideal for those concerned about privacy. ## How we built it We used the OpenAI API to prompt their GPT-4 model for category groupings and new objects. Then we used Together AI's Stable Diffusion model for image generation. From there we connected the Python components to the web side using the Fetch API. We built the web application with React, HTML, CSS, and JS. We used Flask to integrate the Python backend with the frontend. ## Challenges we ran into 1) Working with multiple servers 2) Integrating Flask with React 3) Learning to use multiple APIs to integrate various services 4) Implementing drag-and-drop functionality using use states 5) Limited credit and GPU usage 6) Utilizing multimodal machine learning models ## Accomplishments that we're proud of We are proud of how efficiently and quickly we were able to understand and implement new technologies and concepts. We were new to Flask and a lot of the recent AI technology. We are also proud of how we worked as a team, from the ideation stage to product creation. Additionally, we are proud of how we were able to integrate multiple varying technologies. ## What we learned We developed a lot of technical skills involving using APIs, prompt engineering, model optimization, Flask, managing multiple server applications, and web development. We also learned a lot about the practical applications of AI in healthcare towards a potential treatment of Alzheimer's and dementia. It is critical for us as a society to consider how we can *prevent*, not just treat, such diseases. 
We also learned a lot about the engineering design process. Throughout the hackathon we went through research, ideation, designing, prototyping, and building phases. We gained a lot of skills through this process that helped us as we adapted to new technologies and grew our knowledge base. ## What's next for Memory Playground 1) Scaling: We are looking to make our application public and available to the community. This would involve cloud data storage (e.g., Firestore) and increased efficiency to manage larger requests. 2) Speech-to-text recognition: We would like to implement a speech-to-text model so that we can utilize verbal connections and improve accessibility.
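A minimal sketch of the categorization endpoint implied by the write-up: the React front end POSTs a word list and Flask asks the GPT-4 model for broad, distinct categories. The route name and prompt are illustrative, not Memory Playground's actual code:

```python
# Flask endpoint that turns a user's word list into category names via OpenAI.
from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.route("/categories", methods=["POST"])
def categories():
    words = request.get_json()["words"]
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Group these words into a few broad yet distinct "
                       f"categories, one category name per line: {', '.join(words)}",
        }],
    )
    return jsonify({"categories": response.choices[0].message.content.splitlines()})
```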
## Inspiration This project was inspired by the rising issue of dementia. Symptoms of dementia can be temporarily improved by regularly taking medication, but one of the core symptoms of dementia is forgetfulness. Moreover, patients with dementia often need a caregiver, usually a family member, to manage their daily tasks. This takes a great toll on the caregiver, who is at higher risk for depression, high stress levels, and burnout. To alleviate some of these problems, we wanted to create an easy way for patients to take their medication, while providing ease and reassurance for family members, even from afar. ## Purpose The project we have created connects a smart pillbox to a progressive web app. Using the app, caregivers are able to create profiles for multiple patients, set and edit alarms for different medications, and view whether patients have taken their medication as necessary. On the patient's side, the pillbox is used not only as an organizer, but also as an alarm to remind the patient exactly when and which pills to take. This is made possible with a blinking light indicator in each compartment of the box. ## How It's Built Design: UX Research: We looked into the core problem of Alzheimer's disease and its prevalence. It is estimated that half of the older population does not take their medication as intended. It is a common misconception that Alzheimer's and other forms of dementia are synonymous with memory loss, but the condition is much more complex. Patients experience behavioural changes and slower cognitive processes that often require them to have a caretaker. This is where we saw a pain point that could be tackled. Front-end: Node.js, Firebase Back-end: We used Azure to host a Node.js server and a Postgres database that dealt with the core scheduling functionality. The server would read, write, and edit all the schedules and pillboxes. It would also decide when the next reminder was and ask the Raspberry Pi to check it. The Pi also hosted its own Node.js server that would respond to the Azure server's requests to check whether the pill had been taken, by executing a Python script that directly interfaced with the general-purpose input/output pins. Hardware: Raspberry Pi: Wired a microswitch to control an LED that was engineered into the pillbox and programmed with Python to blink at a specified date and time, and to stop blinking either after approximately 5 seconds (recorded as a pill not taken) or when the pillbox is opened and the microswitch opens (recorded as a pill taken). ## Challenges * Most of us are new to hackathons, and we have varying abilities in different coding languages, which made collaboration difficult at times. * Like many others, we had time constraints regarding our ideas, our design, and what was feasible within the 24 hours. * Figuring out how to work with the Raspberry Pi and how to connect it with Node.js and the React app. * Automatically scheduling notifications from the database. * Setting up API endpoints. * Coming up with unique designs for the usage of the app. ## Accomplishments * We got through our first hackathon, woohoo! * Improving skills that we are strong at, as well as learning our areas of improvement. * Despite the obstacles we faced, we still managed to carry out thorough research, come up with ideas, and build a concrete product. * Actually managed to connect Raspberry Pi hardware to back-end and front-end servers. 
* Pushed beyond our comfort zones, mentally and physically ## What's Next For nudge: * Improve on the physical design of the pillbox itself – such as customizing our own pillbox so that the electrical pieces would not come in contact with the pills. * Add other sensory cues for the user, such as a buzzer, so that even when the user is a room away from the pillbox, they would still be alerted to take their medicine at the scheduled time. * Review the code and features of our mobile app, and conduct user tests to ensure that it meets the needs of our users. * Rest and Reflect
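A minimal sketch of the reminder loop described in the hardware section: blink the compartment LED until the microswitch opens (lid lifted) or the window expires. The pin numbers assume BCM numbering with the switch wired closed-to-ground, and are illustrative:

```python
# Blink an LED for ~5 seconds; report whether the pillbox lid was opened.
import time

import RPi.GPIO as GPIO

LED, SWITCH = 18, 23           # placeholder BCM pin numbers
GPIO.setmode(GPIO.BCM)
GPIO.setup(LED, GPIO.OUT)
GPIO.setup(SWITCH, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def remind() -> bool:
    """Return True if the pill was taken (lid opened) within the window."""
    deadline = time.time() + 5
    while time.time() < deadline:
        GPIO.output(LED, GPIO.HIGH)
        time.sleep(0.25)
        GPIO.output(LED, GPIO.LOW)
        time.sleep(0.25)
        if GPIO.input(SWITCH):   # switch circuit opened -> lid lifted
            return True
    return False

taken = remind()                 # would be reported back to the Node.js server
GPIO.cleanup()
```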
## Inspiration: People living with dementia often lose access to core memories about their loved ones. Having to ask repeatedly for reminders about who their family members are and key stories they shared can be dehumanizing and painful for their family. As students with grandparents who suffer from memory loss, we wanted to create an accessible tool that lets elderly family members remain connected to their most treasured memories and access key details about their loved ones, all without worrying about becoming a burden to their caretakers or family. We aim to preserve cherished memories and human dignity through old age. ## What it does: To address these challenges, we built Memory. This cutting-edge tech with a human-centered approach allows seniors living with dementia to converse verbally with our Amazon Alexa add-on. Our app consists of multiple features as outlined below: Memory: Prompt our solution with simple questions like “Remind me about my son” or “Where does my daughter live.” Going beyond simple facts, seniors can ask for reminders such as “Tell me a story about my son” or even something as nostalgic as “I want to hear my childhood song.” These facts, stories, and pieces of music come from user input, which can be custom-added by family members. Neuro-intervention: Decreases a patient’s agitation and improves communication and caregiver relationships - for instance, with music prompts delaying cognitive decline and promoting brain plasticity in the elderly brain. ## How we built it: Our application consists of a front-end (conversation) built with Amazon Alexa Developer Tools and a back-end built with Python and JavaScript, with AWS as our cloud server. We built our conversational recall assistant by designing and coding our own custom Amazon Alexa add-on and DynamoDB database within the Alexa developer console and AWS, enabling speech-to-text with accessible transcription of patients’ prompts, and comprehensive queryable insights allowing near-human-like dialogues between patients and Memory. ## Challenges we ran into: Creating the database from which memories and stories can be pulled was a major challenge. Learning to build our back-end on the Alexa Developer Tool, AWS, and DynamoDB was more difficult than expected, with limited documentation and experience building on the platform. Yet, considering the additional benefits of this platform and device - being easily accessible for dementia patients with its always-on capability - we considered building on this platform essential to enhance accessibility, thus tying our previous knowledge of Python and JavaScript to Alexa Developer Tool extensions. Another main challenge for us was our desire to learn deeply about the medical industry with a specific focus on dementia even with our 36-hour time crunch, meaning that we spent hours researching pain points in the industry. Despite the time pressures this introduced, it ultimately allowed us to refine our idea into a viable product. ## Accomplishments that we're proud of: As novice coders and first-time hackers, we’re proud of successfully learning and navigating an entirely new interface, creating a product focused on accessibility, and making a significant advancement for dementia patients. We are also proud of tackling a problem that is meaningful to us both on a deeply human level and that has broad-reaching socioeconomic implications. 
According to a study by Mayo Clinic, dementia impacts over 55 million elders across the world, costing caregivers and society over 1.3 trillion USD per year. Alzheimer's disease accounts for an estimated 60-70% of these cases, yet only about 30% of those affected have their condition under control. In a time of aging populations and shortages of young workers, it will become increasingly critical for elders to be able to recall memories independently and without reliance on caretakers. Harnessing the increasingly widespread technology of home conversational assistants like Alexa is, we believe, an important first step towards tackling this problem. ## What we learned: Technical: Building this app, we learned about new programming tools (Amazon Alexa Developer Tools and AWS), collaborating as a team, and creatively solving problems. Industrial: Throughout this hackathon, we also had the opportunity to learn about a significant industry and to learn from other developers and mentors who shared valuable industry and individual pain points. ## What's next for Alexa Memory Beyond these 36 hours, we hope to explore several paths: We hope to allow Alexa Memory to record memories when elderly users do remember them and decide to tell their stories orally, so that when the user loses access to these memories, Alexa Memory can play them back. This feature would benefit not only the elderly but also their younger family members. The recording function can help to preserve oral histories that are often lost, so that they can be passed down through generations. We need to conduct deeper research on effective behavioral treatments of dementia and the specific needs of dementia patients. We’ll do this through more interviews with professors and researchers, physicians, and patients and their families. These conversations will help us build product features that truly solve our users’ pain points and help us better empathize with their experience. Lastly, we aim to finish building out our user input system, in order to allow family members to input memories and facts that the elderly user can then ask Memory to recall.
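A minimal sketch of the recall lookup the skill's handler would perform against DynamoDB; the table name, key schema, and attribute names are illustrative, not the team's actual database design:

```python
# Fetch a stored fact about a family member from DynamoDB so the Alexa
# skill can speak it back to the user.
import boto3

table = boto3.resource("dynamodb").Table("MemoryFacts")   # placeholder table

def recall(patient_id: str, relation: str) -> str:
    item = table.get_item(
        Key={"patient_id": patient_id, "relation": relation}
    ).get("Item")
    if not item:
        return "I don't have a memory saved for that yet."
    return item["story"]   # e.g. "Your son David lives in Boston..."

print(recall("patient-001", "son"))
```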
partial
## Inspiration One of our members recently came back from Japan, and he noted that while he was there, he experienced difficulties tracking his spending. Turns out, our intuition for knowing when something is too expensive and when we've spent too much weakens when we use unfamiliar currencies. So we decided that it would be great to make an app that would aid others who travel and want to keep their money-management skills while abroad. ## What it does BudgetCation is a web app that allows users to manage spending in both their home currency and the local currency. It is flexible, allowing users to keep track of different trips in different locations and different currencies. ## How we built it The app is hosted as a React client, which uses Google's Firebase Authentication to manage logins, signups, and emailing. The client interacts with multiple APIs to receive hourly updates to global exchange rates, and then uses Firebase to store and manage our users' trips, budgets, etc. This is all held together using Flask, which provides RESTful APIs for our databases. ## Challenges we ran into One of the bigger challenges we ran into was the limited time we had to do the project, as our team was very ambitious and it took a while to agree on an idea. On the more technical side, we had trouble combining the front end and the back end, and this took a lot of time as we only had a limited number of team members who could work on that. ## Accomplishments that we're proud of This was the first hackathon for two of our team members, so they are very proud of everything they learned during this event. ## What we learned Some of us learned a lot at this hackathon, such as Flask, Ajax, RESTful APIs, and coding in teams. We have also learned that settling on an idea early, or even before the hackathon, would be very beneficial for our future hackathon performances.
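A minimal sketch of the dual-currency view: converting a logged expense into the home currency using a freshly fetched rate. The write-up does not name the exchange-rate provider, so the endpoint below is a placeholder for whichever API is polled hourly:

```python
# Convert a trip expense into the home currency using a live rate.
import requests

def latest_rate(base: str, target: str) -> float:
    data = requests.get(
        "https://api.exchangerate.host/latest",      # illustrative provider
        params={"base": base, "symbols": target},
    ).json()
    return data["rates"][target]

def in_home_currency(amount: float, trip_currency: str, home_currency: str) -> float:
    return round(amount * latest_rate(trip_currency, home_currency), 2)

print(in_home_currency(1500, "JPY", "CAD"))   # 1500 yen shown in CAD
```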
## Inspiration Our inspiration for this project was our desire to travel after COVID, paired with our uncertainty about how much we would have to save per month to afford the destination of our choice. ## What it does Our app allows users to budget their expenses and shows how much they are spending. It also allows users to add goal items, and the app calculates how much per month they will have to save in order to reach their goal by the desired timeline. The user can view these goals in the view page. It also features a user login and tracks data specific to the user's login. ## How I built it We built the whole app using React and used Firebase Auth/Firebase DB to store and authenticate our users' data. ## Challenges I ran into One of the biggest challenges we ran into was reading data from the Firebase DB and displaying it. ## Accomplishments that I'm proud of We are proud of the fact that we can present a product and that we managed to sleep at a decent time. ## What I learned We learned many things about the inner workings of React and how React Context works. In addition, we also learned how to use the Firebase DB and Firebase Auth. ## What's next for planSmart We plan to refine the UI of planSmart and continue to work on the inner functionality of the view page. As well, we want to add an investment API that automatically saves the user's money per time period according to their savings goals and calculated annuity.
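A minimal sketch of the goal math behind the monthly-saving calculation: with no interest it is simply price divided by months, and with an assumed return it becomes the standard annuity payment. The formula choice is ours for illustration, not necessarily planSmart's exact code:

```python
# Monthly amount needed to reach a savings goal by a deadline.
def monthly_saving(goal: float, months: int, annual_rate: float = 0.0) -> float:
    if annual_rate == 0:
        return goal / months
    r = annual_rate / 12
    # Future value of an ordinary annuity: goal = pmt * ((1 + r)**months - 1) / r
    return goal * r / ((1 + r) ** months - 1)

print(round(monthly_saving(3000, 12), 2))        # 250.0 without interest
print(round(monthly_saving(3000, 12, 0.05), 2))  # slightly less with 5% APY
```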
## Inspiration ## What it does The Leap Motion controller tracks hand gestures and movements like those an actual DJ would make (raise/lower volume, cross-fade, increase/decrease BPM, etc.), which translate into the equivalent actions in the VirtualDJ software, allowing the user to mix and DJ without touching a mouse or keyboard. Added to this is a synth pad for the DJ to use. ## How we built it We used Python to interpret gestures from the Leap Motion and translate them into the keyboard and mouse actions a VirtualDJ user would perform. We made the synth pad using an Arduino wired to 6 aluminum "pads" that make sounds when touched. ## Challenges we ran into Creating all of the motions and making sure they do not overlap was a big challenge. The synth pad was also challenging to create because of lag problems, which we fixed by optimizing the C program. ## Accomplishments that we're proud of Actually changing the volume in VirtualDJ using the Leap Motion - that was the first gesture we made work. ## What we learned How to use the Leap Motion, and how to wire an Arduino to create a MIDI synthesizer. ## What's next for Tracktive Sell to DJ Khaled! Another one.
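A minimal sketch of one gesture mapping: nudging VirtualDJ's volume as the palm rises or falls. The `on_palm_height` hook stands in for the Leap Motion frame callback (the actual Leap SDK wiring is omitted), and both the movement threshold and the key bindings are illustrative:

```python
# Translate vertical palm movement into volume-up/volume-down key presses.
import pyautogui

_last_height = None

def on_palm_height(height_mm: float) -> None:
    """Feed this from the Leap Motion frame callback (hypothetical wiring)."""
    global _last_height
    if _last_height is not None:
        if height_mm - _last_height > 20:     # hand raised -> volume up
            pyautogui.press("up")
        elif _last_height - height_mm > 20:   # hand lowered -> volume down
            pyautogui.press("down")
    _last_height = height_mm
```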
losing
## What it does MemoryLane is an app designed to support individuals coping with dementia by aiding in the recall of daily tasks, medication schedules, and essential dates. The app personalizes memories through its reminisce panel, providing a contextualized experience for users. Additionally, MemoryLane ensures timely reminders through WhatsApp, facilitating adherence to daily living routines such as medication administration and appointment attendance. ## How we built it The back end was developed using Flask, Python, and MongoDB. Next.js was employed for the app's front-end development. Additionally, the app integrates the Google Cloud Speech-to-Text API to process audio messages from users, converting them into commands for execution. It also utilizes the Infobip SDK so that caregivers can establish timely messaging reminders through a calendar within the application. ## Challenges we ran into An initial hurdle we encountered involved selecting a front-end framework for the app. We transitioned from React to Next.js due to the seamless integration of styling provided by Next.js, a decision that proved to be efficient and time-saving. The subsequent challenge revolved around ensuring the functionality of text messaging. ## Accomplishments that we're proud of The accomplishments we have achieved thus far are truly significant milestones for us. We had the opportunity to explore and learn new technologies that were previously unfamiliar to us. The integration of voice recognition and text messaging, and the development of an easily accessible interface tailored to our audience, is what fills us with pride. ## What's next for Memory Lane We aim for MemoryLane to incorporate additional accessibility features and support integration with other systems for implementing activities that offer memory exercises. Additionally, we envision MemoryLane forming partnerships with existing systems dedicated to supporting individuals with dementia. Recognizing the importance of overcoming organizational language barriers in healthcare systems, we advocate for the formal use of interoperability within the reminder aspect of the application. This integration aims to provide caregivers with a seamless means of receiving the latest health updates, eliminating any friction in accessing essential information.
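A minimal sketch of the audio-command step: transcribing a voice note with Google Cloud Speech-to-Text before routing the text as a command. The encoding, sample rate, and file name are illustrative assumptions:

```python
# Transcribe a recorded voice note so it can be parsed into a command.
from google.cloud import speech

client = speech.SpeechClient()  # uses GOOGLE_APPLICATION_CREDENTIALS

with open("voice_note.ogg", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.OGG_OPUS,
    sample_rate_hertz=16000,
    language_code="en-US",
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    print("Command text:", result.alternatives[0].transcript)
```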
## Inspiration We wanted to tackle a problem that impacts a large demographic of people. After research, we learned that 1 in 10 people suffer from dyslexia and 5-20% of people suffer from dysgraphia. These neurological disorders often go undiagnosed or misdiagnosed, leaving these individuals constantly struggling to read and write, which are integral parts of one's education. With such learning disabilities, learning a new language would be quite frustrating and filled with struggles. Thus, we decided to create an application like Duolingo that helps make the learning process easier and more catered to these individuals. ## What it does ReadRight offers interactive language lessons, but with a unique twist. It reads the prompt out to the user instead of displaying it on screen for the user to read and process. Then, once the user repeats the word or phrase, the application processes their pronunciation with the use of AI and gives them a score for their accuracy. This way, individuals with reading and writing disabilities can still hone their skills in a new language. ## How we built it We built the frontend UI using React, JavaScript, HTML, and CSS. For the backend, we used Node.js and Express.js. We made use of Google Cloud's speech-to-text API. We also utilized Cohere's API to generate text using their LLM. Finally, for user authentication, we made use of Firebase. ## Challenges we faced + What we learned When you first open our web app, our homepage presents a lot of information about the app and our target audience. From there the user needs to log in to their account. User authentication is where we faced our first major challenge: third-party integration took us significant time to test and debug. Secondly, we struggled with generating prompts for the user to repeat and with using AI to implement that. ## Accomplishments that we're proud of This was the first time many of our members integrated AI into an application we were developing, which was a very rewarding experience, especially since AI is the new big thing in the world of technology and it is here to stay. We are also proud of the fact that we are developing an application for individuals with learning disabilities, as we strongly believe that everyone has the right to education and their abilities should not discourage them from trying to learn new things. ## What's next for ReadRight As of now, ReadRight has the basics of the English language for users to study and get prompts from, but we hope to integrate more languages and expand into a more widely used application. Additionally, we hope to integrate more features such as voice-activated commands so that it is easier for the user to navigate the application itself. Also, for better voice recognition, we should
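A minimal sketch of the scoring idea: compare the recognized transcript against the target phrase and return a percentage. Real pronunciation scoring is richer; this word-level string similarity is an illustrative stand-in for whatever ReadRight actually computes:

```python
# Score how closely the transcribed speech matches the target prompt.
from difflib import SequenceMatcher

def accuracy_score(target: str, transcript: str) -> int:
    ratio = SequenceMatcher(
        None, target.lower().split(), transcript.lower().split()
    ).ratio()
    return round(ratio * 100)

print(accuracy_score("the quick brown fox", "the quick brown box"))  # 75
```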
## Inspiration The inspiration for this project came from recognizing that this simple-at-first-look app could serve a great social cause, and there is nothing like it on the market. We wanted to create a tool for Alzheimer’s patients that would be a single platform that aggregates tools for their individual needs. We also wanted to get experience and play with different technologies. ## What it does Our app helps Alzheimer’s patients recognize people, recall associated memories, and manage reminders for daily activities using augmented reality and machine learning. Our reminder system helps users keep track of their daily routines and medications. ## How we built it Our iOS application uses Microsoft Azure’s Custom Vision to recognize family and friends. The app uses Core ML, ARKit, and the Vision framework to label the recognized people in real time. Using custom Houndify API commands and the Oracle Cloud Database, the user can verbally request and receive information about people saved to their account. We also have a voice assistant, utilizing the Almond voice API offered by Stanford, to greet the user. ## Challenges we ran into During the project, we ran into a few challenges -- starting with a lack of documentation for some APIs and finishing with software failures (the iPhone simulator). We used several powerful APIs, yet it was a challenge to integrate them into our application due to the lack of good documentation. In addition, this was the first time any of us had used RESTful APIs. ## Accomplishments that we're proud of We are proud that our application has a great social cause. We hope it may help patients ease their daily lives. We are also proud that we were able to overcome implementation difficulties. ## What we learned We learned how to implement different APIs, such as voice assistants and computer vision. ## What's next for memory lane We need to provide users with the option of uploading images, and to improve the accuracy of recognition.
## Inspiration
One of our team members was stunned by the number of colleagues who became self-described "shopaholics" during the pandemic. Understanding their wish to return to normal spending habits, we thought of a helper extension to keep them on the right track.

## What it does
Stop impulse shopping at its core by incentivizing saving rather than spending with our Chrome extension, IDNI, aka I Don't Need It! IDNI helps monitor your spending habits and gives recommendations on whether or not you should buy a product. It also suggests local small business alternatives so you can help support your community!

## How we built it
React front-end, MongoDB, Express REST server.

## Challenges we ran into
Most popular extensions have company deals that give them greater access to product info; we researched and found the Rainforest API instead, which gives us the essential product info we needed for our decision algorithm. However, this proved costly, as each API call took upwards of 5 seconds to return a response. As such, we opted to process each product page manually to gather our metrics.

## Completion
In its current state, IDNI is able to perform CRUD operations on our user information through our custom API (allowing users to modify their spending limits and blacklisted items on the settings page), recognize Amazon product pages and pull the required information for our pop-up display, and dynamically provide recommendations based on these metrics.

## What we learned
Nobody on the team had any experience creating Chrome extensions, so it was a lot of fun to learn how to do that. Creating our extension's UI with React.js was likewise a new experience for everyone. A few members of the team also spent the weekend learning how to build an Express.js API with a MongoDB database, all from scratch!

## What's next for IDNI - I Don't Need It!
We plan to look into banking integration, compatibility with a wider array of online stores, cleaner integration with small businesses, and a machine learning model that analyzes each metric individually before a final pass over these decision metrics outputs our verdict. Then, finally, publish to the Chrome Web Store!
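As a minimal sketch of the settings CRUD described above, assuming Express and the official MongoDB Node.js driver; the route paths and field names are illustrative placeholders, not IDNI's actual API.

```typescript
// Minimal sketch of the settings endpoints: read and update a user's
// spending limit and blacklisted items. Paths and fields are placeholders.
import express from "express";
import { MongoClient, ObjectId } from "mongodb";

const app = express();
app.use(express.json());

const client = new MongoClient(
  process.env.MONGO_URI ?? "mongodb://localhost:27017"
);
const users = client.db("idni").collection("users");

// Read a user's spending settings.
app.get("/api/users/:id", async (req, res) => {
  const user = await users.findOne({ _id: new ObjectId(req.params.id) });
  user ? res.json(user) : res.status(404).end();
});

// Update spending limit and blacklisted items.
app.put("/api/users/:id", async (req, res) => {
  const { spendingLimit, blacklist } = req.body;
  await users.updateOne(
    { _id: new ObjectId(req.params.id) },
    { $set: { spendingLimit, blacklist } }
  );
  res.status(204).end();
});

client.connect().then(() => app.listen(3000));
```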
## Inspiration
Our journey with PathSense began with a deeply personal connection. Several of us have visually impaired family members, and we've witnessed firsthand the challenges they face navigating indoor spaces. We realized that while outdoor navigation has seen remarkable advancements, indoor environments remained a complex puzzle for the visually impaired. This gap in assistive technology sparked our imagination. We saw an opportunity to harness the power of AI, computer vision, and indoor mapping to create a solution that could profoundly impact lives. We envisioned a tool that would act as a constant companion, providing real-time guidance and environmental awareness in complex indoor settings, ultimately enhancing independence and mobility for visually impaired individuals.

## What it does
PathSense, our voice-centric indoor navigation assistant, is designed to be a game-changer for visually impaired individuals. At its heart, our system aims to enhance mobility and independence by providing accessible, spoken navigation guidance in indoor spaces. Our solution offers the following key features:

1. Voice-Controlled Interaction: Hands-free operation through intuitive voice commands.
2. Real-Time Object Detection: Continuous scanning and identification of objects and obstacles.
3. Scene Description: Verbal descriptions of the surrounding environment to build mental maps.
4. Precise Indoor Routing: Turn-by-turn navigation within buildings using indoor mapping technology.
5. Contextual Information: Relevant details about nearby points of interest.
6. Adaptive Guidance: Real-time updates based on user movement and environmental changes.

What sets PathSense apart is its adaptive nature. Our system continuously updates its guidance based on the user's movement and any changes in the environment, ensuring real-time accuracy. This dynamic approach allows for a more natural and responsive navigation experience, adapting to the user's pace and preferences as they move through complex indoor spaces.

## How we built it
In building PathSense, we embraced the challenge of integrating multiple cutting-edge technologies. Our solution is built on the following technological framework:

1. Voice Interaction: Voiceflow
   * Manages conversation flow
   * Interprets user intents
   * Generates appropriate responses
2. Computer Vision Pipeline:
   * Object Detection: Detectron
   * Depth Estimation: DPT (Dense Prediction Transformer)
   * Scene Analysis: GPT-4 Vision (mini)
3. Data Management: Convex database
   * Stores CV data and mapping information in JSON format
4. Semantic Search: Cohere's Rerank API
   * Performs semantic search on CV tags and mapping data
5. Indoor Mapping: MappedIn SDK
   * Provides floor plan information
   * Generates routes
6. Speech Processing:
   * Speech-to-Text: Groq model (based on OpenAI's Whisper)
   * Text-to-Speech: Unreal Engine
7. Video Input: Multiple TAPO cameras
   * Stream 1080p video of the environment over Wi-Fi

To tie it all together, we leveraged Cohere's Rerank API for semantic search, allowing us to find the most relevant information based on user queries. For speech processing, we chose a Groq model based on OpenAI's Whisper for transcription, and Unreal Engine for speech synthesis, prioritizing low latency for real-time interaction. The result is a seamless, responsive system that processes visual information, understands user requests, and provides spoken guidance in real time.
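To make the semantic-search step concrete, here is a hedged sketch using Cohere's Node SDK to rerank candidate strings (assembled from CV detections and MappedIn points of interest) against a spoken query; the documents and model name are illustrative, not PathSense's actual data.

```typescript
// Sketch of the semantic-search step: rerank CV tags and mapping
// entries against the user's spoken query with Cohere's Rerank API.
// Assumes the cohere-ai Node SDK; documents and model are illustrative.
import { CohereClient } from "cohere-ai";

const cohere = new CohereClient({ token: process.env.COHERE_API_KEY ?? "" });

// Candidate strings assembled from CV detections and MappedIn POIs.
const documents = [
  "elevator bank, 12 meters ahead on the left",
  "cafe entrance, ground floor near the atrium",
  "restroom, second corridor on the right",
];

export async function findRelevant(query: string) {
  const { results } = await cohere.rerank({
    model: "rerank-english-v3.0",
    query, // e.g. "where is the nearest elevator?"
    documents,
    topN: 2,
  });
  // Each result carries the index of the matching document and a score.
  return results.map((r) => ({
    text: documents[r.index],
    score: r.relevanceScore,
  }));
}
```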
## Challenges we ran into
Our journey in developing PathSense was not without its hurdles. One of our biggest challenges was integrating the various complex components of our system. Combining the computer vision pipeline, Voiceflow agent, and MappedIn SDK into a cohesive, real-time system required careful planning and countless hours of debugging. We often found ourselves navigating uncharted territory, pushing the boundaries of what these technologies could do when working in concert.

Another significant challenge was balancing the diverse skills and experience levels within our team. While our diversity brought valuable perspectives, it also required us to be intentional about task allocation and communication. We had to step out of our comfort zones, often learning new technologies on the fly. This steep learning curve, coupled with the pressure of working on parallel streams while ensuring all components meshed seamlessly, tested our problem-solving skills and teamwork to the limit.

## Accomplishments that we're proud of
Looking back at our journey, we're filled with a sense of pride and accomplishment. Perhaps our greatest achievement is creating an application with genuine, life-changing potential. Knowing that PathSense could significantly improve the lives of visually impaired individuals, including our own family members, gives our work profound meaning.

We're also incredibly proud of the technical feat we've accomplished. Successfully integrating numerous complex technologies, from AI and computer vision to voice processing, into a functional system within a short timeframe was no small task. Our ability to move from concept to a working prototype that demonstrates the real-world potential of AI-driven indoor navigation assistance is a testament to our team's creativity, technical skill, and determination.

## What we learned
Our work on PathSense has been an incredible learning experience. We've gained invaluable insights into the power of interdisciplinary collaboration, seeing firsthand how diverse skills and perspectives can come together to tackle complex problems. The process taught us the importance of rapid prototyping and iterative development, especially in a high-pressure environment like a hackathon.

Perhaps most importantly, we've learned the critical importance of user-centric design in developing assistive technology. Keeping the needs and experiences of visually impaired individuals at the forefront of our design and development process not only guided our technical decisions but also gave us a deeper appreciation for the impact technology can have on people's lives.

## What's next for PathSense
As we look to the future of PathSense, we're brimming with ideas for enhancements and expansions. We're eager to partner with more venues to increase our coverage of mapped indoor spaces, making PathSense useful in a wider range of locations. We also plan to refine our object recognition capabilities, implement personalized user profiles, and explore integration with wearable devices for an even more seamless experience.

In the long term, we envision PathSense evolving into a comprehensive indoor navigation ecosystem. This includes developing community features for crowd-sourced updates, integrating augmented reality capabilities to assist sighted companions, and collaborating with smart building systems for ultra-precise indoor positioning. With each step forward, our goal remains constant: to continually improve PathSense's ability to provide independence and confidence to visually impaired individuals navigating indoor spaces.
## 💫 Inspiration
Love online shopping, but tired of deliveries arriving when you're not home? Or perhaps you keep checking when your package will arrive, only to delay your plans out of worry? Lovers of online shopping and e-commerce: we present to you, **pacs**.

## 😮 What **pacs** does
**pacs** is a unique storage solution, with plans to open multiple facilities that have lockers of all shapes and sizes, kind of like a community mailbox. As a shopper, you won't have to put in your address. With our app, you'll be able to see all your purchases using **pacs** in one convenient place, where you can find your locker number and key code.

## 🔨 How we built it
We're always trying to learn something new! For this app, we implemented an MVC (Model, View, Controller) application structure. For the view, we used React, which communicates with a controller built on Node.js and Express.js, which in turn interacts with a model layer based on Protocol Buffers (protobuf). With this model, everything is modularized, and we found it very easy to create endpoints and implement functionality.

## 😰 Challenges we ran into
* A major issue was properly setting up protobuf, particularly on an M1 Mac
* Some minor design challenges
* As the prototype scaled, developing the front-end React view became increasingly heavy

## 😤 Accomplishments that we're proud of
For each team member this was somewhat different! Rithik and Saqif researched and learned about protobuf and its syntax, so it was a wonderful learning experience. Shaiza learned how to properly use ES6 and successfully parse data from JSON files! Despite the insane time crunch, we thoroughly enjoyed this experience.

## 🧠 What we learned
We learned…
* How to scale our idea for the prototype
* New tools and techniques
* How full-stack applications work
* An approach to modularizing solutions

## 💙 What's next for **pacs**
Because there are so many moving parts in our application, there are several microservices that require intercommunication. We would work on these microservices and ensure that protobuf passes data efficiently between them. For example, we may have an internal server that needs to fetch and store the utilization status of the many locker facilities: a location with 30 lockers may have 24 occupied, and that information has to be relayed to the user. There may be other server applications dedicated to businesses, such as API testing and communication, which will also require microservices to generate unique IDs, access certain user information, and more. We would build these microservices and rigorously test their interactions, and we would also migrate the services to the cloud. In addition, we could integrate Twilio to add SMS notifications, which we began learning during the event.
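To make the model layer concrete, here is a hedged sketch of defining and encoding a locker assignment with protobufjs; the message shape is our own invention for illustration, not pacs' actual schema.

```typescript
// Sketch of the model layer: defining and encoding a locker assignment
// with protobufjs. The message shape is illustrative only.
import protobuf from "protobufjs";

const schema = `
  syntax = "proto3";
  package pacs;
  message LockerAssignment {
    string order_id = 1;
    uint32 locker_number = 2;
    string key_code = 3;
    string facility = 4;
  }
`;

const root = protobuf.parse(schema).root;
const LockerAssignment = root.lookupType("pacs.LockerAssignment");

const payload = {
  orderId: "ORD-1042",
  lockerNumber: 17,
  keyCode: "4831",
  facility: "Downtown Hub",
};

// Validate, encode to bytes for the wire, then decode on the other side.
const err = LockerAssignment.verify(payload);
if (err) throw new Error(err);
const buffer = LockerAssignment.encode(LockerAssignment.create(payload)).finish();
const decoded = LockerAssignment.decode(buffer);
console.log(decoded.toJSON()); // { orderId: "ORD-1042", lockerNumber: 17, ... }
```

Compared with raw JSON, the schema gives each microservice a shared, versionable contract, which is exactly the intercommunication concern raised above.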
## Inspiration
ClassMate was inspired by the boom of OpenAI and ChatGPT. We sought a way to integrate the powerful tool of ChatGPT into classrooms to help students further explore academic concepts while also mitigating cheating and abuse of the AI tool.

## What it does
ClassMate compiles a student's classes and the topics covered in each class. On each topic page, students are given concept-related prompts written by their teachers to help guide their exploration. They are also given fill-in-the-blank sentence frames, which they can complete with the specific topics or skills they would like to improve. These pre-generated prompts aim to guide students' use of AI rather than allowing free roaming, in order to discourage cheating. Prompts selected by the student are then sent to the OpenAI API, and an answer is returned to the student.

## How we built it
We started by building the core functionality of our website with HTML, CSS, and Flask. We used SQLite to build a database storing key website information, such as user logins and each user's class topics and questions. Once the website was up, we integrated the OpenAI API into the class topics section to handle student questions.

## Challenges we ran into
Building a database from scratch using SQLite was definitely not easy and took a lot of trial and error. It was also challenging to get the HTML and CSS formatting right, and to wire up correct inputs and outputs for features like user logins and passing user questions to the OpenAI API.

## Accomplishments that we're proud of
We're proud of integrating the OpenAI API into an aesthetically pleasing and user-friendly website. We're also proud that we turned a tool associated with potential academic abuse into a tool for better student learning and teaching.
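The guided-prompt idea, where the student only fills blanks in a teacher-authored template before anything reaches the model, is independent of backend language. Here is a hedged sketch using OpenAI's Node SDK; ClassMate itself uses Flask, and the template, system message, and model name below are illustrative placeholders.

```typescript
// Sketch of the guided-prompt pattern: students can only fill blanks
// in a teacher-authored template, which is then sent to OpenAI.
// Template shape, system message, and model name are placeholders.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Teacher-authored sentence frame; students fill the blanks only.
const frame = "Explain {topic} at a high-school level, focusing on {skill}.";

export async function askGuided(topic: string, skill: string): Promise<string> {
  const prompt = frame.replace("{topic}", topic).replace("{skill}", skill);
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "You are a tutor. Never complete graded work." },
      { role: "user", content: prompt },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```

Because the free-text surface is limited to the blanks, the teacher retains control over what the model is asked, which is the anti-cheating mechanism described above.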
## Inspiration 🌱
The inspiration behind ClassQAI was deeply rooted in addressing a communal struggle that often goes unspoken in classrooms: the hesitancy of students to ask questions for fear of feeling "stupid" or behind on class content. Recognizing that every student learns at their own pace and may encounter moments of confusion, we wanted to create a platform that empowers them to overcome this hurdle. ClassQAI stems from a deep desire to enhance the classroom experience by fostering an encouraging, judgement-free environment where students feel empowered to ask questions without hesitation.

## What it does 🚀
ClassQAI augments the traditional classroom with a more interactive and dynamic learning experience. Teachers can easily set up a digital classroom through a user-friendly dashboard, enabling seamless class management. Students join using a code or QR code, and the platform lets them ask questions anonymously; they can also flag questions so the teacher can follow up personally when information is missing. The AI answers instantly, providing a quick and efficient way for students to gain simple yet effective insights into their questions.

## How we built it 🛠️
Our stack uses Auth0 for authentication, which allowed us to authenticate users seamlessly and ensure that both teachers and students access the platform securely. We used MongoDB as our database to store information securely, including user profiles (student and teacher), classroom data, and the questions students ask. Our backend was built with Next.js (JAMstack) and is responsible for calling the OpenAI API for our question triage.

## Challenges we ran into 🤯
Developing ClassQAI brought challenges, including adapting to MongoDB, which demanded a learning curve for optimal data storage. Implementing Next.js was also a learning curve, as we had little prior experience. Setting ClassQAI apart from a standard chatbot was a unique challenge, pushing us to integrate the OpenAI API for intelligent responses while offering anonymity, real-time engagement, classroom management, and user-friendly features. These challenges led to a platform that goes beyond conventional Q&A systems, addressing the specific needs of students and educators.

## Accomplishments that we're proud of 💪
We are immensely proud of achieving a seamless integration of AI technology into the classroom environment. The successful implementation of instant AI answers, combined with an efficient teacher dashboard, marks a significant accomplishment. We are also extremely proud of fine-tuning the API usage to meet our project's demands.

## What we learned 📚
We mainly learned how to implement Next.js. We also worked through challenges such as adapting OpenAI's API for real-time responses and making Auth0 work for our specific use case.

## What's next for ClassQAI 🔮
Looking ahead, we envision incorporating analytics into the teacher dashboard to give teachers better insight into how to support their students. We are also looking to tailor responses to each student's learning style. Overall, we hope to test ClassQAI soon in a real-world classroom to see its impact on student learning.
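As a hedged sketch of how anonymous questions and flags might be stored, here is one possible shape using the official MongoDB Node.js driver; the collection and field names are our own placeholders, not ClassQAI's actual schema.

```typescript
// Hedged sketch: storing anonymous questions and flags in MongoDB.
// Collection and field names are illustrative only.
import { MongoClient, ObjectId } from "mongodb";

const client = new MongoClient(
  process.env.MONGODB_URI ?? "mongodb://localhost:27017"
);
const questions = client.db("classqai").collection("questions");

// A student submits a question: stored by classroom code, never by name.
export async function submitQuestion(classCode: string, text: string) {
  const { insertedId } = await questions.insertOne({
    classCode,
    text,
    flagged: false, // students can flag answers for teacher follow-up
    askedAt: new Date(),
  });
  return insertedId;
}

// Flag a question so the teacher's dashboard surfaces it.
export async function flagQuestion(id: string) {
  await questions.updateOne(
    { _id: new ObjectId(id) },
    { $set: { flagged: true } }
  );
}
```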
## Inspiration
When first introduced to financial strategies, many people are skeptical simply because they can't imagine a significant reward for smarter spending.

## What it does
* Gives you financial advice based on your financial standing (how many credit cards you have, what their limits are, whether you're married or single, etc.)
* Shows you a rundown of your spending, separated by category (gas, cigarettes, lottery, food, etc.)
* Identifies transactions as reasonable or unnecessary

## How I built it
I used React for the most part, in combination with Material UI. The charting library is Carbon Charts, which I also develop: <https://github.com/carbon-design-system/carbon-charts>

## Challenges I ran into
* AI
* Identifying reasonable vs. unnecessary transactions
* Automated advising

## Accomplishments that I'm proud of
* A vibrant UI

## What I learned
* A lot about React Router transitions
* Aggregating data

## What's next for SpendWise
To find a home right inside your banking application.
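As an illustration of the reasonable-vs-unnecessary labeling, here is a hedged sketch of a simple rule-based pass; the categories and budget thresholds are invented for the example, and SpendWise's actual logic may differ.

```typescript
// Sketch of a rule-based pass for labeling transactions as reasonable
// or unnecessary. Categories and thresholds are invented for illustration.
interface Transaction {
  category: "gas" | "food" | "cigarettes" | "lottery" | "other";
  amount: number;
}

// Per-category monthly budgets (illustrative numbers).
const monthlyBudget: Record<Transaction["category"], number> = {
  gas: 200, food: 450, cigarettes: 0, lottery: 0, other: 300,
};

export function label(
  tx: Transaction,
  spentSoFar: Record<Transaction["category"], number>
): "reasonable" | "unnecessary" {
  const budget = monthlyBudget[tx.category];
  // Zero-budget categories are always flagged; otherwise flag overspend.
  return budget > 0 && spentSoFar[tx.category] + tx.amount <= budget
    ? "reasonable"
    : "unnecessary";
}
```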